Schema — column name and type, in the order values appear in each sample row below:

hexsha string | size int64 | ext string | lang string |
max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string |
max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string |
max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string |
content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 |
qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 |
qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 |
qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
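A minimal sketch of how rows with this schema might be loaded and filtered programmatically — the Parquet file name and the choice of pandas are illustrative assumptions, not part of the dataset:

```python
import pandas as pd

# Hypothetical shard name; any columnar reader exposing the fields above would work.
df = pd.read_parquet("code_rows.parquet")

# Keep Python files whose duplicated-5-gram share is low and whose
# alphabetic-character share is high, per the quality signals above.
mask = (
    (df["lang"] == "Python")
    & (df["qsc_code_frac_chars_dupe_5grams_quality_signal"] < 0.5)
    & (df["qsc_code_frac_chars_alphabet_quality_signal"] > 0.5)
)
for _, row in df[mask].iterrows():
    print(row["max_stars_repo_name"], row["max_stars_repo_path"], row["max_stars_count"])
```

Sample rows follow. In each, the repository metadata comes first, then the file content, then the numeric quality signals in the schema order above.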
ff8aeee94a7db582304106815b0e7201db725d20 | 35 | py | Python | opencoverage/clients/__init__.py | pavelito/opencoverage | ee2820dc1c5261263e8be1f041ce915e54248905 | ["MIT"] | 25 | 2021-01-20T17:38:03.000Z | 2021-12-13T22:23:22.000Z | opencoverage/clients/__init__.py | pavelito/opencoverage | ee2820dc1c5261263e8be1f041ce915e54248905 | ["MIT"] | 16 | 2021-01-23T17:51:19.000Z | 2021-03-21T11:25:05.000Z | opencoverage/clients/__init__.py | pavelito/opencoverage | ee2820dc1c5261263e8be1f041ce915e54248905 | ["MIT"] | 6 | 2021-01-22T12:47:05.000Z | 2022-01-27T09:49:53.000Z |
from .scm import SCMClient # noqa
| 17.5 | 34 | 0.742857 | 5 | 35 | 5.2 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 35 | 1 | 35 | 35 | 0.928571 | 0.114286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ff8c62840d9799cb3c83bf48eac0fc480a66246d | 13,862 | py | Python | tests/terraform_compliance/steps/test_main_steps.py | supergarotinho/terraform-compliance | 7b4a84d58d55e52cae34a606165bc820593b1240 | ["MIT"] | null | null | null | tests/terraform_compliance/steps/test_main_steps.py | supergarotinho/terraform-compliance | 7b4a84d58d55e52cae34a606165bc820593b1240 | ["MIT"] | null | null | null | tests/terraform_compliance/steps/test_main_steps.py | supergarotinho/terraform-compliance | 7b4a84d58d55e52cae34a606165bc820593b1240 | ["MIT"] | null | null | null |
from unittest import TestCase
from terraform_compliance.steps.steps import (
i_action_them,
i_expect_the_result_is_operator_than_number,
it_condition_contain_something,
encryption_is_enabled,
its_value_condition_match_the_search_regex_regex,
its_value_must_be_set_by_a_variable,
it_must_not_have_proto_protocol_and_port_port_for_cidr,
its_property_contains_key
)
from tests.mocks import MockedStep, MockedWorld, MockedTerraformPropertyList, MockedTerraformResourceList
from mock import patch
class Test_Step_Cases(TestCase):
def setUp(self):
self.step = MockedStep()
def test_i_action_them_count(self):
step = MockedStep()
step.context.stash.resource_list = [1,2,3]
i_action_them(step, 'count')
self.assertEqual(step.context.stash, 3)
def test_i_action_them_sum(self):
step = MockedStep()
step.context.stash.resource_list = [1,2,3]
i_action_them(step, 'sum')
self.assertEqual(step.context.stash, 6)
def test_i_action_them_undefined(self):
# with self.assertRaises():
self.assertIsNone(i_action_them(self.step, 'undefined action'))
def test_i_action_them_resource_list_as_dict(self):
step = MockedStep()
step.context.stash.resource_list = None
self.assertIsNone(i_action_them(step, 'something that is not important'))
def test_i_expect_the_result_is_operator_than_number_resource_list_as_dict(self):
step = MockedStep()
step.context.stash = 42
self.assertIsNone(i_expect_the_result_is_operator_than_number(step, 'operator', 'not_important'))
def test_i_expect_the_result_is_more_than_number_success(self):
step = MockedStep()
step.context.stash = 1
self.assertIsNone(i_expect_the_result_is_operator_than_number(step, 'more', 0))
def test_i_expect_the_result_is_more_than_number_failure(self):
step = MockedStep()
step.context.stash = 1
with self.assertRaises(AssertionError) as err:
self.assertIsNone(i_expect_the_result_is_operator_than_number(step, 'more', 1))
self.assertEqual(str(err.exception), '1 is not more than 1')
def test_i_expect_the_result_is_more_and_equal_than_number_success(self):
step = MockedStep()
step.context.stash = 1
self.assertIsNone(i_expect_the_result_is_operator_than_number(step, 'more and equal', 0))
self.assertIsNone(i_expect_the_result_is_operator_than_number(step, 'more and equal', 1))
def test_i_expect_the_result_is_more_and_equal_than_number_failure(self):
step = MockedStep()
step.context.stash = 1
with self.assertRaises(AssertionError) as err:
i_expect_the_result_is_operator_than_number(step, 'more and equal', 2)
self.assertEqual(str(err.exception), '1 is not more and equal than 2')
def test_i_expect_the_result_is_less_than_number_success(self):
step = MockedStep()
step.context.stash = 1
self.assertIsNone(i_expect_the_result_is_operator_than_number(step, 'less', 2))
def test_i_expect_the_result_is_less_than_number_failure(self):
step = MockedStep()
step.context.stash = 1
with self.assertRaises(AssertionError) as err:
i_expect_the_result_is_operator_than_number(step, 'less', 1)
self.assertEqual(str(err.exception), '1 is not less than 1')
def test_i_expect_the_result_is_less_and_equal_than_number_success(self):
step = MockedStep()
step.context.stash = 1
self.assertIsNone(i_expect_the_result_is_operator_than_number(step, 'less and equal', 1))
self.assertIsNone(i_expect_the_result_is_operator_than_number(step, 'less and equal', 2))
def test_i_expect_the_result_is_less_and_equal_than_number_failure(self):
step = MockedStep()
step.context.stash = 1
with self.assertRaises(AssertionError) as err:
i_expect_the_result_is_operator_than_number(step, 'less and equal', 0)
self.assertEqual(str(err.exception), '1 is not less and equal than 0')
def test_i_expect_the_result_is_invalid_operator_than_number_failure(self):
step = MockedStep()
step.context.stash = 1
self.assertIsNone(i_expect_the_result_is_operator_than_number(step, 'invalid_operator', 0))
def test_it_condition_contain_something_resource_list(self):
step = MockedStep()
step.context.stash.resource_list = None
self.assertIsNone(it_condition_contain_something(step, 'should', 'not_important'))
@patch('terraform_compliance.steps.steps.world', side_effect=MockedWorld())
def test_it_must_contain_something_property_can_not_be_found(self, *args):
step = MockedStep()
step.context.stash = MockedTerraformPropertyList()
step.sentence = 'Then it must contain'
with self.assertRaises(AssertionError) as err:
it_condition_contain_something(step, 'non_existent_property_value', MockedTerraformPropertyList)
self.assertEqual(str(err.exception), 'non_existent_property_value property in test_name can not be found in '
'test_resource_name (test_resource_type). It is set to test_value instead')
def test_it_condition_must_something_property_can_not_be_found(self):
step = MockedStep()
step.context.stash = MockedTerraformResourceList()
step.sentence = 'Then it must ..'
with self.assertRaises(Exception) as err:
it_condition_contain_something(step=step, something=None, resourcelist=MockedTerraformResourceList)
self.assertEqual(str(err.exception), 'should_have_properties hit')
step.sentence = 'When it contains'
it_condition_contain_something(step=step, something=None, resourcelist=MockedTerraformResourceList)
self.assertEqual(step.state, 'skipped')
def test_it_condition_must_something_property_is_found(self):
step = MockedStep()
step.context.stash = MockedTerraformResourceList()
step.sentence = 'Then it must ..'
it_condition_contain_something(step=step, something='something', resourcelist=MockedTerraformResourceList)
self.assertEqual(step.context.stash[0].__class__, MockedTerraformPropertyList)
def test_it_condition_must_something_property_stash_is_dict_found(self):
step = MockedStep()
step.context.stash = {'something': 'something else'}
self.assertIsNone(it_condition_contain_something(step=step, something='something', resourcelist=MockedTerraformResourceList))
def test_it_condition_should_something_property_stash_is_dict_found(self):
step = MockedStep()
step.context.stash = {}
step.sentence = 'Then it must contain'
with self.assertRaises(AssertionError) as err:
it_condition_contain_something(step=step, something='something', resourcelist=MockedTerraformResourceList)
self.assertEqual(str(err.exception), 'something does not exist.')
step.sentence = 'When it contains'
step.context.stash = {}
it_condition_contain_something(step=step, something='something', resourcelist=MockedTerraformResourceList)
self.assertEqual(step.state, 'skipped')
def test_encryption_is_enabled_resource_list(self):
step = MockedStep()
step.context.stash = MockedTerraformResourceList()
self.assertIsNone(encryption_is_enabled(step))
def test_its_value_condition_match_the_search_regex_regex_resource_list(self):
step = MockedStep()
step.context.stash = MockedTerraformPropertyList()
self.assertIsNone(its_value_condition_match_the_search_regex_regex(step, 'condition', 'some_regex'))
def test_its_value_must_match_the_search_regex_regex_string_unicode_success(self):
step = MockedStep()
step.context.stash = 'some string'
        self.assertIsNone(its_value_condition_match_the_search_regex_regex(step, 'must', r'^[sometring\s]+$'))
def test_its_value_must_match_the_search_regex_regex_string_unicode_failure(self):
step = MockedStep()
step.context.stash = 'some string'
step.context.name = 'test name'
step.context.type = 'test type'
with self.assertRaises(AssertionError) as err:
its_value_condition_match_the_search_regex_regex(step, 'must', 'non_match_regex')
self.assertEqual(str(err.exception), '{} {} tests failed on {} regex: {}'.format(step.context.name,
step.context.type,
'non_match_regex',
step.context.stash))
def test_its_value_must_match_not_the_search_regex_regex_string_unicode_success(self):
step = MockedStep()
step.context.stash = 'some string'
self.assertIsNone(its_value_condition_match_the_search_regex_regex(step, 'must not', 'non_match_regex'))
def test_its_value_must_not_match_the_search_regex_regex_string_unicode_failure(self):
step = MockedStep()
step.context.stash = 'some string'
step.context.name = 'test name'
step.context.type = 'test type'
with self.assertRaises(AssertionError) as err:
            its_value_condition_match_the_search_regex_regex(step, 'must not', r'^[sometring\s]+$')
self.assertEqual(str(err.exception), '{} {} tests failed on {} regex: {}'.format(step.context.name,
step.context.type,
                                                                                         r'^[sometring\s]+$',
step.context.stash))
def test_its_value_must_match_the_search_regex_regex_success(self):
step = MockedStep()
step.context.stash = MockedTerraformPropertyList()
        self.assertIsNone(its_value_condition_match_the_search_regex_regex(step, 'must', r'^[tesvalu_\s]+$'))
def test_its_value_must_match_the_search_regex_regex_failure(self):
step = MockedStep()
step.context.stash = MockedTerraformPropertyList()
with self.assertRaises(AssertionError):
its_value_condition_match_the_search_regex_regex(step, 'must', 'non_match_regex')
def test_its_value_must_not_match_the_search_regex_regex_success(self):
step = MockedStep()
step.context.stash = MockedTerraformPropertyList()
        self.assertIsNone(its_value_condition_match_the_search_regex_regex(step, 'must not', r'^[tesvalu_\s]+$'))
def test_its_value_must_not_match_the_search_regex_regex_failure(self):
step = MockedStep()
step.context.stash = MockedTerraformPropertyList()
with self.assertRaises(AssertionError):
its_value_condition_match_the_search_regex_regex(step, 'must not', 'non_match_regex')
def test_its_value_must_be_set_by_a_variable_resource_list(self):
step = MockedStep()
step.context.stash = MockedTerraformResourceList()
step.context.search_value = 'something'
self.assertIsNone(its_value_must_be_set_by_a_variable(step))
@patch.object(MockedTerraformResourceList, 'property', return_value=MockedTerraformResourceList())
def test_its_value_must_be_set_by_a_variable(self, *args):
step = MockedStep()
step.context.stash = MockedTerraformResourceList()
step.context.search_value = MockedTerraformResourceList()
self.assertIsNone(its_value_must_be_set_by_a_variable(step))
def test_its_property_contains_key_resource_list(self):
step = MockedStep()
step.context.stash.resource_list = None
self.assertIsNone(its_property_contains_key(step, 'something', 'not_important'))
def test_its_property_contain_something_property_can_not_be_found(self):
step = MockedStep()
step.context.stash = MockedTerraformResourceList()
step.sentence = 'When its .. contains Name'
its_property_contains_key(step=step, property="???", key="Name", resourcelist=MockedTerraformResourceList)
self.assertEqual(step.state, 'skipped')
def test_its_property_contains_key_property_key_that_can_not_be_found(self):
step = MockedStep()
step.context.stash = MockedTerraformResourceList()
step.sentence = 'When its .. contains Name'
its_property_contains_key(step=step, property="tags", key=None, resourcelist=MockedTerraformResourceList)
self.assertEqual(step.state, 'skipped')
def test_its_property_contains_key_property_key_is_found(self):
step = MockedStep()
step.context.stash = MockedTerraformResourceList()
step.sentence = 'When its something contains key'
its_property_contains_key(step=step, property='tags', key="key", resourcelist=MockedTerraformResourceList)
self.assertEqual(step.context.stash.__class__, MockedTerraformPropertyList)
def test_its_property_contains_key_property_is_dict_found(self):
step = MockedStep()
step.context.stash = {'something': 'something else'}
its_property_contains_key(step=step, property='something', key="key", resourcelist=MockedTerraformResourceList)
self.assertEqual(step.state, 'skipped')
def test_its_property_contains_key_property_is_property_list(self):
step = MockedStep()
step.context.stash = MockedTerraformPropertyList()
its_property_contains_key(step=step, property='something', key="key", resourcelist=MockedTerraformResourceList)
self.assertEqual(step.state, 'skipped')
| 51.340741 | 133 | 0.705382 | 1,639 | 13,862 | 5.57352 | 0.078707 | 0.065025 | 0.077066 | 0.101259 | 0.856814 | 0.826163 | 0.819486 | 0.787192 | 0.748331 | 0.699945 | 0 | 0.003555 | 0.208556 | 13,862 | 269 | 134 | 51.531599 | 0.829095 | 0.001803 | 0 | 0.528889 | 0 | 0 | 0.095772 | 0.009758 | 0 | 0 | 0 | 0 | 0.231111 | 1 | 0.173333 | false | 0 | 0.035556 | 0 | 0.213333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4414a9d3c51d7f42144ca2ac8129dc5c28b0f761 | 2,253 | py | Python | examples/example_pos.py | MeganTj/pyastar2d | 62372a82540e4abdba1fdd0746566a4cfe154be5 | ["MIT"] | null | null | null | examples/example_pos.py | MeganTj/pyastar2d | 62372a82540e4abdba1fdd0746566a4cfe154be5 | ["MIT"] | null | null | null | examples/example_pos.py | MeganTj/pyastar2d | 62372a82540e4abdba1fdd0746566a4cfe154be5 | ["MIT"] | null | null | null |
import numpy as np
] | null | null | null | import numpy as np
import pyastar2d
# The start and goal coordinates are in matrix coordinates (i, j).
start = (1, 0)
goal = (3, 3)
# The minimum cost must be 1 for the heuristic to be valid.
weights = np.ones((4, 4), dtype=np.float32) - np.array([[1, 0, 0, 0],
[1, 0, 0, 1],
[0, 0, 0, 1],
[0, 1, 1, 1],], dtype=np.float32)
print("Cost matrix:")
print(weights)
pos = pyastar2d.astar_pos(weights, start, goal, allow_diagonal=False)
# The path is returned as a numpy array of (i, j) coordinates.
print(f"Cell with max value along path from {start} to {goal}:")
print(pos)
start = (1, 3)
goal = (3, 3)
# The minimum cost must be 1 for the heuristic to be valid.
weights = np.ones((4, 4), dtype=np.float32) - np.array([[0, 0, 0, 0],
[0, 0, 0, 1],
[0, 0, 0, 1],
[0, 1, 1, 1],], dtype=np.float32)
print("Cost matrix:")
print(weights)
pos = pyastar2d.astar_pos(weights, start, goal, allow_diagonal=False)
# The path is returned as a numpy array of (i, j) coordinates.
print(f"Cell with max value along path from {start} to {goal}:")
print(pos)
start = (2, 1)
goal = (3, 3)
# The minimum cost must be 1 for the heuristic to be valid.
weights = np.ones((4, 4), dtype=np.float32) - np.array([[0, 0, 0, 0],
[0, 1, 1, 1],
[0, 1, 0, 1],
[0, 1, 1, 1],], dtype=np.float32)
print("Cost matrix:")
print(weights)
pos = pyastar2d.astar_pos(weights, start, goal, allow_diagonal=False)
# The path is returned as a numpy array of (i, j) coordinates.
print(f"Cell with max value along path from {start} to {goal}:")
print(pos)
start = (1, 2)
goal = (3, 2)
# The minimum cost must be 1 for the heuristic to be valid.
weights = np.ones((4, 4), dtype=np.float32) - np.array([[0, 0, 0, 0],
[0, 1, 0.6, 1],
[0, 1, 0.55, 1],
[0, 1, 1, 1],], dtype=np.float32)
print("Cost matrix:")
print(weights)
pos = pyastar2d.astar_pos(weights, start, goal, dist_weights=[0.05, 0.1], allow_diagonal=False)
# The path is returned as a numpy array of (i, j) coordinates.
print(f"Cell with max value along path from {start} to {goal}:")
print(pos)
| 33.626866 | 95 | 0.587661 | 377 | 2,253 | 3.488064 | 0.145889 | 0.031939 | 0.031939 | 0.024335 | 0.907985 | 0.905703 | 0.901901 | 0.901141 | 0.901141 | 0.901141 | 0 | 0.072282 | 0.256991 | 2,253 | 67 | 96 | 33.626866 | 0.713262 | 0.23968 | 0 | 0.695652 | 0 | 0 | 0.15493 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.043478 | 0 | 0.043478 | 0.347826 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4445b92ff5bd82ecd83df7699be70e28915d171b | 49 | py | Python | kbucket/__init__.py | alexmorley/kbucket | c7d478525dca6745278a46a21963c3255441e22f | ["Apache-2.0"] | 8 | 2018-06-29T12:43:27.000Z | 2022-03-16T04:24:35.000Z | kbucket/__init__.py | alexmorley/kbucket | c7d478525dca6745278a46a21963c3255441e22f | ["Apache-2.0"] | 1 | 2018-10-19T17:33:07.000Z | 2018-10-19T17:33:07.000Z | kbucket/__init__.py | alexmorley/kbucket | c7d478525dca6745278a46a21963c3255441e22f | ["Apache-2.0"] | 2 | 2018-10-20T15:06:31.000Z | 2021-09-23T01:04:40.000Z |
from .kbucketclient import KBucketClient, client
| 24.5 | 48 | 0.857143 | 5 | 49 | 8.4 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.102041 | 49 | 1 | 49 | 49 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
924e93b71121380a7309e159f768d5963373ddcf | 3,365 | py | Python | tests/barriers/test_edit_wto_status.py | felix781/market-access-python-frontend | 3b0e49feb4fdf0224816326938a46002aa4a2b1c | ["MIT"] | 1 | 2021-12-15T04:14:03.000Z | 2021-12-15T04:14:03.000Z | tests/barriers/test_edit_wto_status.py | felix781/market-access-python-frontend | 3b0e49feb4fdf0224816326938a46002aa4a2b1c | ["MIT"] | 19 | 2019-12-11T11:32:47.000Z | 2022-03-29T15:40:57.000Z | tests/barriers/test_edit_wto_status.py | felix781/market-access-python-frontend | 3b0e49feb4fdf0224816326938a46002aa4a2b1c | ["MIT"] | 2 | 2021-02-09T09:38:45.000Z | 2021-03-29T19:07:09.000Z |
from http import HTTPStatus
from django.urls import reverse
from mock import patch
from core.tests import MarketAccessTestCase
class EditWTOStatusTestCase(MarketAccessTestCase):
@patch("utils.api.resources.APIResource.patch")
def test_empty_wto_has_been_notified_error(self, mock_patch):
response = self.client.post(
reverse(
"barriers:edit_wto_status", kwargs={"barrier_id": self.barrier["id"]}
),
)
assert response.status_code == HTTPStatus.OK
assert "form" in response.context
form = response.context["form"]
assert form.is_valid() is False
assert "wto_has_been_notified" in form.errors
assert "wto_should_be_notified" not in form.errors
assert mock_patch.called is False
@patch("utils.api.resources.APIResource.patch")
def test_empty_wto_should_be_notified_error(self, mock_patch):
response = self.client.post(
reverse(
"barriers:edit_wto_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={"wto_has_been_notified": "no"},
)
assert response.status_code == HTTPStatus.OK
assert "form" in response.context
form = response.context["form"]
assert form.is_valid() is False
assert "wto_has_been_notified" not in form.errors
assert "wto_should_be_notified" in form.errors
assert mock_patch.called is False
@patch("utils.api.resources.APIResource.patch")
def test_success_wto_has_been_notified(self, mock_patch):
response = self.client.post(
reverse(
"barriers:edit_wto_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={"wto_has_been_notified": "yes"},
)
assert response.status_code == HTTPStatus.FOUND
mock_patch.assert_called_with(
id=self.barrier["id"],
wto_profile={
"wto_has_been_notified": True,
"wto_should_be_notified": None,
},
)
@patch("utils.api.resources.APIResource.patch")
def test_success_should_be_notified(self, mock_patch):
response = self.client.post(
reverse(
"barriers:edit_wto_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={"wto_has_been_notified": "no", "wto_should_be_notified": "yes"},
)
assert response.status_code == HTTPStatus.FOUND
mock_patch.assert_called_with(
id=self.barrier["id"],
wto_profile={
"wto_has_been_notified": False,
"wto_should_be_notified": True,
},
)
@patch("utils.api.resources.APIResource.patch")
def test_success_should_not_be_notified(self, mock_patch):
response = self.client.post(
reverse(
"barriers:edit_wto_status", kwargs={"barrier_id": self.barrier["id"]}
),
data={"wto_has_been_notified": "no", "wto_should_be_notified": "no"},
)
assert response.status_code == HTTPStatus.FOUND
mock_patch.assert_called_with(
id=self.barrier["id"],
wto_profile={
"wto_has_been_notified": False,
"wto_should_be_notified": False,
},
)
| 36.978022 | 85 | 0.610401 | 380 | 3,365 | 5.102632 | 0.155263 | 0.06034 | 0.05673 | 0.102114 | 0.887055 | 0.887055 | 0.883961 | 0.882929 | 0.844765 | 0.841155 | 0 | 0 | 0.283507 | 3,365 | 90 | 86 | 37.388889 | 0.804231 | 0 | 0 | 0.597561 | 0 | 0 | 0.2211 | 0.192571 | 0 | 0 | 0 | 0 | 0.219512 | 1 | 0.060976 | false | 0 | 0.04878 | 0 | 0.121951 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
9256a3abb85771e62751b3864812a1b5b759611e | 185 | py | Python | test.py | AtMostafa/SuperData | b418e486f067e01ed9aa921e8cc3545d3c6f1395 | ["MIT"] | null | null | null | test.py | AtMostafa/SuperData | b418e486f067e01ed9aa921e8cc3545d3c6f1395 | ["MIT"] | null | null | null | test.py | AtMostafa/SuperData | b418e486f067e01ed9aa921e8cc3545d3c6f1395 | ["MIT"] | null | null | null |
from superdata import *
if __name__ == '__main__':
print(f'using the _np_ key: {np.array}')
print(f'using the _plt_ key: {plt.plot}')
    print(f'using the _pd_ key: {pd}')
| 30.833333 | 45 | 0.632432 | 29 | 185 | 3.551724 | 0.551724 | 0.174757 | 0.320388 | 0.407767 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.210811 | 185 | 6 | 46 | 30.833333 | 0.705479 | 0 | 0 | 0 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0.6 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
92856f9f0c717767e4318cf0ab50bd7ed2664022 | 106 | py | Python | pdf2image/__init__.py | fglz/pdf2image | 7572f436b1b386a2256f65cc7f47134e1f5f98ee | ["MIT"] | null | null | null | pdf2image/__init__.py | fglz/pdf2image | 7572f436b1b386a2256f65cc7f47134e1f5f98ee | ["MIT"] | null | null | null | pdf2image/__init__.py | fglz/pdf2image | 7572f436b1b386a2256f65cc7f47134e1f5f98ee | ["MIT"] | null | null | null |
"""
__init__ of the pdf2image module
"""
from .pdf2image import convert_from_bytes, convert_from_path
| 21.2 | 60 | 0.773585 | 14 | 106 | 5.285714 | 0.714286 | 0.297297 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022222 | 0.150943 | 106 | 5 | 60 | 21.2 | 0.8 | 0.301887 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
92962e9d0cbec4d209b2e5799d85a7b179aaacf3 | 96 | py | Python | venv/lib/python3.8/site-packages/numpy/lib/tests/test_shape_base.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | ["MIT"] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/numpy/lib/tests/test_shape_base.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | ["MIT"] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/numpy/lib/tests/test_shape_base.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | ["MIT"] | null | null | null |
/home/runner/.cache/pip/pool/ed/8b/7d/81196e64c725f725a6dbd289fa65a5956120011861f68b8638ac76d4c9
| 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.458333 | 0 | 96 | 1 | 96 | 96 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2ba01b43660b3b0051613ba9496221d24c704eb4 | 38 | py | Python | fast_docker_web/__main__.py | danilocgsilva/fast-docker-web | 0ff607a77ab227463d0b97005692570a61ff5380 | ["MIT"] | null | null | null | fast_docker_web/__main__.py | danilocgsilva/fast-docker-web | 0ff607a77ab227463d0b97005692570a61ff5380 | ["MIT"] | null | null | null | fast_docker_web/__main__.py | danilocgsilva/fast-docker-web | 0ff607a77ab227463d0b97005692570a61ff5380 | ["MIT"] | null | null | null |
def main():
print("Sorry! WIP!")
| 9.5 | 24 | 0.526316 | 5 | 38 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.236842 | 38 | 3 | 25 | 12.666667 | 0.689655 | 0 | 0 | 0 | 0 | 0 | 0.297297 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
2bb3b4641f45866e2528e62d3a544b4070730ffe | 4,047 | py | Python | tests/copying.py | tdegeus/GooseHDF5 | 89992156942fd28b5c82bacbd4308a75b3a62f2f | ["MIT"] | null | null | null | tests/copying.py | tdegeus/GooseHDF5 | 89992156942fd28b5c82bacbd4308a75b3a62f2f | ["MIT"] | 16 | 2019-04-11T12:03:50.000Z | 2022-03-13T15:03:34.000Z | tests/copying.py | tdegeus/GooseHDF5 | 89992156942fd28b5c82bacbd4308a75b3a62f2f | ["MIT"] | 1 | 2021-09-28T16:28:16.000Z | 2021-09-28T16:28:16.000Z |
import os
import shutil
import unittest
import h5py
import numpy as np
import GooseHDF5 as g5
class Test_itereator(unittest.TestCase):
def test_copy_plain(self):
dirname = "mytest"
sourcepath = os.path.join(dirname, "foo_1.h5")
destpath = os.path.join(dirname, "bar_1.h5")
if not os.path.isdir(dirname):
os.makedirs(dirname)
datasets = ["/a", "/b/foo", "/c/d/foo"]
with h5py.File(sourcepath, "w") as source:
with h5py.File(destpath, "w") as dest:
for d in datasets:
source[d] = np.random.rand(10)
g5.copy(source, dest, datasets)
for path in datasets:
self.assertTrue(g5.equal(source, dest, path))
shutil.rmtree(dirname)
def test_copy_nonrecursive(self):
dirname = "mytest"
sourcepath = os.path.join(dirname, "foo_1.h5")
destpath = os.path.join(dirname, "bar_1.h5")
if not os.path.isdir(dirname):
os.makedirs(dirname)
datasets = ["/a", "/b/foo", "/b/bar", "/c/d/foo"]
with h5py.File(sourcepath, "w") as source:
with h5py.File(destpath, "w") as dest:
for d in datasets:
source[d] = np.random.rand(10)
g5.copy(source, dest, ["/a", "/b", "/c/d/foo"], recursive=False)
for path in ["/a", "/c/d/foo"]:
self.assertTrue(g5.equal(source, dest, path))
self.assertTrue(g5.exists(dest, "/b"))
self.assertFalse(g5.exists(dest, "/b/foo"))
self.assertFalse(g5.exists(dest, "/b/bar"))
shutil.rmtree(dirname)
def test_copy_recursive(self):
dirname = "mytest"
sourcepath = os.path.join(dirname, "foo_1.h5")
destpath = os.path.join(dirname, "bar_1.h5")
if not os.path.isdir(dirname):
os.makedirs(dirname)
datasets = ["/a", "/b/foo", "/b/bar", "/c/d/foo"]
with h5py.File(sourcepath, "w") as source:
with h5py.File(destpath, "w") as dest:
for d in datasets:
source[d] = np.random.rand(10)
g5.copy(source, dest, ["/a", "/b", "/c/d/foo"])
for path in datasets:
self.assertTrue(g5.equal(source, dest, path))
shutil.rmtree(dirname)
def test_copy_attrs(self):
dirname = "mytest"
sourcepath = os.path.join(dirname, "foo_2.h5")
destpath = os.path.join(dirname, "bar_2.h5")
if not os.path.isdir(dirname):
os.makedirs(dirname)
datasets = ["/a", "/b/foo", "/c/d/foo"]
with h5py.File(sourcepath, "w") as source:
with h5py.File(destpath, "w") as dest:
for d in datasets:
source[d] = np.random.rand(10)
meta = source.create_group("/meta")
meta.attrs["version"] = np.random.rand(10)
datasets += ["/meta"]
g5.copy(source, dest, datasets)
for path in datasets:
self.assertTrue(g5.equal(source, dest, path))
shutil.rmtree(dirname)
def test_copy_groupattrs(self):
dirname = "mytest"
sourcepath = os.path.join(dirname, "foo_3.h5")
destpath = os.path.join(dirname, "bar_3.h5")
if not os.path.isdir(dirname):
os.makedirs(dirname)
datasets = ["/a", "/b/foo", "/c/d/foo"]
with h5py.File(sourcepath, "w") as source:
with h5py.File(destpath, "w") as dest:
for d in datasets:
source[d] = np.random.rand(10)
source["/b"].attrs["version"] = np.random.rand(10)
datasets += ["/b"]
g5.copy(source, dest, datasets)
for path in datasets:
self.assertTrue(g5.equal(source, dest, path))
shutil.rmtree(dirname)
if __name__ == "__main__":
unittest.main()
| 26.625 | 80 | 0.521621 | 497 | 4,047 | 4.187123 | 0.130785 | 0.043248 | 0.048054 | 0.081691 | 0.859202 | 0.859202 | 0.817876 | 0.739548 | 0.739548 | 0.694378 | 0 | 0.022272 | 0.334322 | 4,047 | 151 | 81 | 26.801325 | 0.750186 | 0 | 0 | 0.684783 | 0 | 0 | 0.073141 | 0 | 0 | 0 | 0 | 0 | 0.086957 | 1 | 0.054348 | false | 0 | 0.065217 | 0 | 0.130435 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2bb6f38fa55a344b1e46cd8d2e99d76bbcd938ca | 108 | py | Python | iterators/utils.py | 4thel00z/subdomain-takeover-scraper | ca971695267e5dd0aead1b1fe8b381917f92189f | ["MIT"] | null | null | null | iterators/utils.py | 4thel00z/subdomain-takeover-scraper | ca971695267e5dd0aead1b1fe8b381917f92189f | ["MIT"] | null | null | null | iterators/utils.py | 4thel00z/subdomain-takeover-scraper | ca971695267e5dd0aead1b1fe8b381917f92189f | ["MIT"] | null | null | null |
import re
def split_iter(string):
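    # Lazily yield word-like tokens: maximal runs of letters, digits, hyphens and apostrophes.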
return (x.group(0) for x in re.finditer(r"[A-Za-z\-0-9']+", string))
| 21.6 | 72 | 0.638889 | 22 | 108 | 3.090909 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.032609 | 0.148148 | 108 | 4 | 73 | 27 | 0.706522 | 0 | 0 | 0 | 0 | 0 | 0.138889 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
2bbc36bdbfc3c6e1b0f0e3018cb9af88c1f28925 | 46 | py | Python | witpy_test/__init__.py | NITIN-ME/witpy_test | e7eae2e771138462a475195ccb9ee8a849389869 | ["MIT"] | null | null | null | witpy_test/__init__.py | NITIN-ME/witpy_test | e7eae2e771138462a475195ccb9ee8a849389869 | ["MIT"] | null | null | null | witpy_test/__init__.py | NITIN-ME/witpy_test | e7eae2e771138462a475195ccb9ee8a849389869 | ["MIT"] | null | null | null |
from witpy_test.witpy_analyzer import Analyzer
| 46 | 46 | 0.913043 | 7 | 46 | 5.714286 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.065217 | 46 | 1 | 46 | 46 | 0.930233 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2bd54bb90845f74e946464355b847283fcc5df1d | 28 | py | Python | cvstudio/view/widgets/top_bar/__init__.py | haruiz/PytorchCvStudio | ccf79dd0cc0d61f3fd01b1b5d96f7cda7b681eef | ["MIT"] | 32 | 2019-10-31T03:10:52.000Z | 2020-12-23T11:50:53.000Z | cvstudio/view/widgets/top_bar/__init__.py | haruiz/CvStudio | ccf79dd0cc0d61f3fd01b1b5d96f7cda7b681eef | ["MIT"] | 19 | 2019-10-31T15:06:05.000Z | 2020-06-15T02:21:55.000Z | cvstudio/view/widgets/top_bar/__init__.py | haruiz/CvStudio | ccf79dd0cc0d61f3fd01b1b5d96f7cda7b681eef | ["MIT"] | 8 | 2019-10-31T03:32:50.000Z | 2020-07-17T20:47:37.000Z |
from .top_bar import TopBar
| 14 | 27 | 0.821429 | 5 | 28 | 4.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2be0b793e5522320f3ad164e438586a45e5fe67e | 14,163 | py | Python | OPTIMAS/interactive_plotting.py | jeremyforest/whole_optic_analysis_pipeline | fdacc7965a05489bba8fcb3fc6c2a27c9fae9996 | ["MIT"] | null | null | null | OPTIMAS/interactive_plotting.py | jeremyforest/whole_optic_analysis_pipeline | fdacc7965a05489bba8fcb3fc6c2a27c9fae9996 | ["MIT"] | null | null | null | OPTIMAS/interactive_plotting.py | jeremyforest/whole_optic_analysis_pipeline | fdacc7965a05489bba8fcb3fc6c2a27c9fae9996 | ["MIT"] | null | null | null |
import numpy as np
import plotly.graph_objects as go
import argparse
import pdb
import copy
def interactive_plotting(input_data_folder, experiment,
timing = True):
#import pdb; pdb.set_trace()
#classical_ephy = False
#if classical_ephy:
# from classical_ephy import import_ephy_data
#### ONLY ENTER EXPERIMENTS WITH NO TIMINGS PROBLEMS
experiments = [experiment]
# experiments = [[experiment[i]] for i in range(len(experiment))]
# experiments = [89]
# experiments = [131,132,133,141,142,148]
#if multiple experiments
if len(experiments) > 1:
normalize = True
else:
normalize = False
# input_data_folder = args.main_folder_path
# experiments = range(args.experiments[0], args.experiments[1]+1)
## index for numpy array
dlp_on = 0
dlp_off = 1
laser_on = 2
laser_off = 3
rois_signal = []
#if classical_ephy:
# ephy_signal = []
for experiment in experiments:
# experiment=89
# experiment = experiment[0]
print(f'working on: {input_data_folder}/{experiment}')
rois_signal.append(np.load(f'{input_data_folder}/{experiment}/dF_over_F0_backcorrect.npy', allow_pickle=True))
#if classical_ephy:
# ephy_signal.append(import_ephy_data(input_data_folder, experiment))
signal_length = []
all_dlp_on_value_on_x = []
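    # takeClosest(num, collection) returns the element of collection closest in value to num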
takeClosest = lambda num,collection:min(collection,key=lambda x:abs(x-num))
#####################################################
##### TO CHANGE IN UPDATED PIPELINE VERSION #########
for experiment, exp_nb in zip(experiments, range(len(rois_signal))):
for trace in range(len(rois_signal[0])-2): ## the last one is the dlp and laser timings and before that is the x_axis data
signal_length.append(len(rois_signal[exp_nb][trace]))
all_dlp_on_value_on_x.append(rois_signal[exp_nb][-1][0][dlp_on])
min_signal_length = np.min(signal_length)
max_dlp_on_value_on_x = np.max(all_dlp_on_value_on_x)
x_axis = rois_signal[0][-2]
value_to_center_on = takeClosest(max_dlp_on_value_on_x, x_axis)
x_axis_index_for_centering = x_axis.index(value_to_center_on)
shift = []
new_length = []
for experiment, exp_nb in zip(experiments, range(len(rois_signal))):
# pdb.set_trace()
dlp_on_value_on_x = rois_signal[exp_nb][-1][0][dlp_on]
shift.append(x_axis_index_for_centering - x_axis.index(takeClosest(dlp_on_value_on_x, x_axis)))
new_length.append(len(rois_signal[exp_nb][trace]) + shift[exp_nb])
for experiment, exp_nb in zip(experiments, range(len(rois_signal))):
for trace in range(len(rois_signal[0])-2): ## the last one is the dlp and laser timings and before that is the x_axis data
start_padding_array = np.zeros((shift[exp_nb]))
rois_signal[exp_nb][trace] = np.insert(rois_signal[exp_nb][trace], 0, start_padding_array)
# end_padding_array = np.zeros(([np.max(shift) - shift[i] for i in range(len(shift))][exp_nb]))
end_padding_array = np.zeros((max(new_length) - len(rois_signal[exp_nb][trace])))
rois_signal[exp_nb][trace] = np.insert(rois_signal[exp_nb][trace], -1, end_padding_array)
# print([len(rois_signal[exp_nb][trace])for trace in range(len(rois_signal[0])-2)])
# print(f'rois signal length: {len(rois_signal[exp_nb][trace])}')
## replace dlp times and laser times with new times taking padding into consideration
dlp_on_index = x_axis.index(takeClosest(rois_signal[exp_nb][-1][0][dlp_on], x_axis)) + shift[exp_nb] ## dlp on index after padding
dlp_off_index = x_axis.index(takeClosest(rois_signal[exp_nb][-1][0][dlp_off], x_axis)) + shift[exp_nb]
laser_on_index = x_axis.index(takeClosest(rois_signal[exp_nb][-1][0][laser_on], x_axis)) + shift[exp_nb]
laser_off_index = x_axis.index(takeClosest(rois_signal[exp_nb][-1][0][laser_off], x_axis)) + shift[exp_nb]
#adjusting x_axis
# padding_length = len(start_padding_array) + len(end_padding_array)
# frame_time_difference = x_axis[-2] - x_axis[-3]
#
# len(x_axis)
# [x_axis.append(x_axis[-2] + frame_time_difference) for _ in range(padding_length)]
# len(x_axis)
_lst = list(rois_signal[exp_nb][-1][0])
_lst[0] = x_axis[dlp_on_index]
_lst[1] = x_axis[dlp_off_index]
_lst[2] = x_axis[laser_on_index]
_lst[3] = x_axis[laser_off_index]
rois_signal[exp_nb][-1][0] = tuple(_lst)
# normalize traces
if normalize:
new_rois_signal = rois_signal
for experiment, exp_nb in zip(experiments, range(len(rois_signal))):
for trace in range(len(rois_signal[0])-2):
mu = np.mean(rois_signal[exp_nb][trace])
sigma = np.std(rois_signal[exp_nb][trace])
new_rois_signal[exp_nb][trace] = [((x - mu)/sigma) for x in rois_signal[exp_nb][trace]]
rois_signal = new_rois_signal
################################################################################################################################
averaged_rois_signal = np.zeros(((len(rois_signal[0])-2, len(rois_signal[0][0]))))
for trace in range(len(rois_signal[0])-2):
_traces_for_average = np.zeros((len(rois_signal), len(rois_signal[0][0])))
for experiment, exp_nb in zip(experiments, range(len(rois_signal))):
_traces_for_average[exp_nb] = rois_signal[exp_nb][trace]
averaged_rois_signal[trace] = np.mean(_traces_for_average, axis=0)
    def moving_average(a, n=2):
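        # n-point moving average via cumulative sums; the output is n - 1 samples shorter than a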
ret = np.cumsum(a, dtype=float)
ret[n:] = ret[n:] - ret[:-n]
return ret[n - 1:] / n
rois_signal_moving_average = []
moving_average_points = 5
for trace in range(len(averaged_rois_signal)):
moving_average_data = np.zeros((1, len(averaged_rois_signal[0])))
moving_average_data = moving_average(averaged_rois_signal[trace],
n = moving_average_points) ## calculate moving average
padding = np.zeros((len(rois_signal[exp_nb][trace]) - len(moving_average_data))) ##padding the start for plot
moving_average_data = np.insert(moving_average_data, 0, padding)
rois_signal_moving_average.append(moving_average_data)
rois_signal_per_experiment_moving_average = copy.deepcopy(rois_signal)
for experiment, exp_nb in zip(experiments, range(len(rois_signal))):
for trace in range(len(rois_signal[0])-2):
moving_average_data = np.zeros((1, len(rois_signal[0][0])))
moving_average_data = moving_average(rois_signal[exp_nb][trace],
n = moving_average_points)
padding = np.zeros((len(rois_signal[exp_nb][trace]) - len(moving_average_data)))
moving_average_data = np.insert(moving_average_data, 0, padding)
rois_signal_per_experiment_moving_average[exp_nb][trace] = moving_average_data
fig = go.Figure()
for experiment, exp_nb in zip(experiments, range(len(rois_signal))):
print(f'experiment_{experiment}')
for trace in range(len(rois_signal[0])-2): ## the last one is the dlp and laser timings and before that is the x_axis data
print(f'trace_{trace}')
fig.add_trace(go.Scatter(
x = rois_signal[0][-2],
y = rois_signal[exp_nb][trace],
name = f'experiment {experiment}-neuron {trace}',
#line_color = color[experiment],
opacity=0.5))
fig.add_trace(go.Scatter(
x = rois_signal[0][-2],
y = rois_signal_per_experiment_moving_average[exp_nb][trace],
name = f'experiment {experiment}-neuron {trace} - moving average',
#line_color = color[experiment],
opacity=0.5))
#####################################################
##### TO CHANGE IN UPDATED PIPELINE VERSION #########
fig.add_shape(type = "rect", xref = 'x', yref = 'paper', ## dlp activation
x0 = rois_signal[exp_nb][-1][0][dlp_on], y0 = 0, x1 = rois_signal[exp_nb][-1][0][dlp_off], y1 = 1, ## dlp
fillcolor="LightSkyBlue", opacity = 0.1, layer = 'below', line_width = 1)
fig.add_shape(type = "rect", xref = 'x', yref = 'y', ## dlp activation
x0 = rois_signal[exp_nb][-1][0][laser_on], y0 = 30, x1 = rois_signal[exp_nb][-1][0][laser_off], y1 = 31, ## laser
fillcolor="darksalmon", opacity = 0.05, layer = 'below', line_width = 1)
####################################################################################################################
# for trace in range(len(rois_signal[0])-2): ## the last one is the dlp and laser timings and before that is the x_axis data
# fig.add_trace(go.Scatter(
# x = rois_signal[0][-2],
# y = averaged_rois_signal[trace],
# name = f'trace {trace}- average',
# #line_color = color[experiment],
# opacity=1))
# fig.add_trace(go.Scatter(
# x = rois_signal[0][-2][0:len(rois_signal_moving_average[0])],
# y = rois_signal_moving_average[trace],
# name = f'trace {trace}- average using moving average',
# #line_color = color[experiment],
# opacity=0.5))
fig.write_html(f"{input_data_folder}/{experiment}/interactive_figure.html")
# fig.show()
def interactive_plotting_no_timing(input_data_folder, experiment,
restrict=False):
# experiments = [experiment for experiment in experiment]
experiments = [experiment]
if len(experiments) > 1:
normalize = True
else:
normalize = False
## index for numpy array
dlp_on = 0
dlp_off = 1
laser_on = 2
laser_off = 3
rois_signal = []
for experiment in experiments:
print(f'working on: {input_data_folder}/{experiment}')
rois_signal.append(np.load(f'{input_data_folder}/{experiment}/dF_over_F0_backcorrect.npy', allow_pickle=True))
x_axis = rois_signal[0][-2]
if normalize:
new_rois_signal = rois_signal
for experiment, exp_nb in zip(experiments, range(len(rois_signal))):
for trace in range(len(rois_signal[0])-2):
mu = np.mean(rois_signal[exp_nb][trace])
sigma = np.std(rois_signal[exp_nb][trace])
new_rois_signal[exp_nb][trace] = [((x - mu)/sigma) for x in rois_signal[exp_nb][trace]]
rois_signal = new_rois_signal
averaged_rois_signal = np.zeros(((len(rois_signal[0])-2, len(rois_signal[0][0]))))
for trace in range(len(rois_signal[0])-2):
_traces_for_average = np.zeros((len(rois_signal), len(rois_signal[0][0])))
for experiment, exp_nb in zip(experiments, range(len(rois_signal))):
_traces_for_average[exp_nb] = rois_signal[exp_nb][trace]
averaged_rois_signal[trace] = np.mean(_traces_for_average, axis=0)
    def moving_average(a, n=2):
ret = np.cumsum(a, dtype=float)
ret[n:] = ret[n:] - ret[:-n]
return ret[n - 1:] / n
rois_signal_moving_average = []
moving_average_points = 5
for trace in range(len(averaged_rois_signal)):
moving_average_data = np.zeros((1, len(averaged_rois_signal[0])))
moving_average_data = moving_average(averaged_rois_signal[trace],
n = moving_average_points) ## calculate moving average
padding = np.zeros((len(rois_signal[exp_nb][trace]) - len(moving_average_data))) ##padding the start for plot
moving_average_data = np.insert(moving_average_data, 0, padding)
rois_signal_moving_average.append(moving_average_data)
rois_signal_per_experiment_moving_average = copy.deepcopy(rois_signal)
for experiment, exp_nb in zip(experiments, range(len(rois_signal))):
for trace in range(len(rois_signal[0])-2):
moving_average_data = np.zeros((1, len(rois_signal[0][0])))
moving_average_data = moving_average(rois_signal[exp_nb][trace],
n = moving_average_points)
padding = np.zeros((len(rois_signal[exp_nb][trace]) - len(moving_average_data)))
moving_average_data = np.insert(moving_average_data, 0, padding)
rois_signal_per_experiment_moving_average[exp_nb][trace] = moving_average_data
fig = go.Figure()
for experiment, exp_nb in zip(experiments, range(len(rois_signal))):
print(f'experiment_{experiment}')
for trace in range(len(rois_signal[0])-2): ## the last one is the dlp and laser timings and before that is the x_axis data
print(f'trace_{trace}')
fig.add_trace(go.Scatter(
x = rois_signal[0][-2],
y = rois_signal[exp_nb][trace],
name = f'experiment {experiment}-neuron {trace}',
opacity=0.5))
fig.add_trace(go.Scatter(
x = rois_signal[0][-2],
y = rois_signal_per_experiment_moving_average[exp_nb][trace],
name = f'experiment {experiment}-neuron {trace} - moving average',
opacity=0.5))
fig.write_html(f"{input_data_folder}/{experiment}/interactive_figure.html")
if __name__ == "__main__":
experiment = 'experiment_41'
input_data_folder = f'/home/jeremy/Desktop/2020_11_23'
interactive_plotting(input_data_folder, experiment)
# interactive_plotting_no_timing(input_data_folder, experiment)
| 49.00692 | 138 | 0.603333 | 1,878 | 14,163 | 4.255059 | 0.100639 | 0.147666 | 0.069954 | 0.073207 | 0.807408 | 0.782505 | 0.722813 | 0.707671 | 0.65361 | 0.621074 | 0 | 0.017952 | 0.256655 | 14,163 | 288 | 139 | 49.177083 | 0.741071 | 0.188166 | 0 | 0.744565 | 0 | 0 | 0.061265 | 0.033623 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021739 | false | 0 | 0.027174 | 0 | 0.059783 | 0.032609 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a60ec7f1c2098fa91cf146334594a5bfcf6ef6d0 | 20 | py | Python | mysite/jxl/services/__init__.py | alex-gagnon/jxl_django | 2060a686551f1ed22b96b3ae72572999557bf812 | ["MIT"] | null | null | null | mysite/jxl/services/__init__.py | alex-gagnon/jxl_django | 2060a686551f1ed22b96b3ae72572999557bf812 | ["MIT"] | 5 | 2021-04-08T19:42:19.000Z | 2022-02-10T12:10:02.000Z | mysite/jxl/services/__init__.py | alex-gagnon/jxl_django | 2060a686551f1ed22b96b3ae72572999557bf812 | ["MIT"] | null | null | null |
from .jxl import JXL
| 20 | 20 | 0.8 | 4 | 20 | 4 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 20 | 1 | 20 | 20 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a61d98d66b8e02ba96a18ca58e0c0804f5a57ce1 | 260 | py | Python | tests/test_context.py | furious-luke/polecat | 7be5110f76dc42b15c922c1bb7d49220e916246d | ["MIT"] | 4 | 2019-08-10T12:56:12.000Z | 2020-01-21T09:51:20.000Z | tests/test_context.py | furious-luke/polecat | 7be5110f76dc42b15c922c1bb7d49220e916246d | ["MIT"] | 71 | 2019-04-09T05:39:21.000Z | 2020-05-16T23:09:24.000Z | tests/test_context.py | furious-luke/polecat | 7be5110f76dc42b15c922c1bb7d49220e916246d | ["MIT"] | null | null | null |
from polecat.core.context import OptionDict, active_context
def test_active_context():
@active_context
def wrapped(context=None):
return context
assert isinstance(active_context(), OptionDict)
assert isinstance(wrapped(), OptionDict)
| 26 | 59 | 0.75 | 29 | 260 | 6.551724 | 0.482759 | 0.273684 | 0.168421 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169231 | 260 | 9 | 60 | 28.888889 | 0.87963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 1 | 0.285714 | false | 0 | 0.142857 | 0.142857 | 0.571429 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
a637c2fb96dfecc1aed9e37b1913728b2ed228cb | 7,848 | py | Python | Py.Py/Plotting/Velocity_plot.py | cheshirepezz/PiC1d | 088454aa89617f142abe3f629052958b6b622be9 | ["MIT"] | 3 | 2020-11-22T12:52:49.000Z | 2021-04-27T12:22:46.000Z | Py.Py/Plotting/Velocity_plot.py | cheshirepezz/PiC1d | 088454aa89617f142abe3f629052958b6b622be9 | ["MIT"] | null | null | null | Py.Py/Plotting/Velocity_plot.py | cheshirepezz/PiC1d | 088454aa89617f142abe3f629052958b6b622be9 | ["MIT"] | 1 | 2020-09-12T16:54:34.000Z | 2020-09-12T16:54:34.000Z |
import os
import sys
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
label_size = 20
mpl.rcParams['xtick.labelsize'] = label_size
mpl.rcParams['ytick.labelsize'] = label_size
#"""
###############################################################################
# #
# 2-Stream: Variations of v_0: N_grid=512, nppc=100 and delta_t=delta_x/4 #
# #
###############################################################################
v = [0.01, 0.05, 0.1]
Output_folder = os.getcwd() + '/Electro_Static/'
fig = plt.figure(figsize=(10,7))
plt.xlabel(r'Time [$\omega_{p}^{-1}$]', fontsize = 20)
plt.ylabel(r'v$_{max}$', fontsize = 20)
plt.xlim([0, 40])
#plt.ylim([10**-8.5, 10**-1.5])
for i in range(len(v)):
Input_folder=os.getcwd() + '/Electro_Static/Velocity Variation/v_zero_' + str(v[i])
Time = np.loadtxt(os.path.normpath(Input_folder + '/Parameters.txt'), skiprows=1, unpack=True)[0]
Velocity = np.loadtxt(os.path.normpath(Input_folder + '/Parameters.txt'), skiprows=1, unpack=True)[3]
plt.semilogy(Time, Velocity, linewidth=2.0, label=r'$v_b$: ' +str(v[i]))
plt.title(r"Velocity Study of $v_b$", fontsize=25)
plt.legend(loc='lower right')
plt.grid(linestyle='--', linewidth='1.', color='black')
fig.tight_layout()
plt.savefig(Output_folder + 'V_zero_velocity_study.png')
plt.close(fig)
#"""
#"""
###############################################################################
# #
# 2-Stream: Variations of gamma_0: N_grid=64, nppc=50 and delta_t=delta_x/4 #
# #
###############################################################################
gamma = [2, 4, 6] #, 8, 10]
Output_folder = os.getcwd() + '/Electro_Static/'
fig = plt.figure(figsize=(10,7))
plt.xlabel(r'Time [$\omega_{p}^{-1}$]', fontsize = 20)
plt.ylabel(r'$\gamma_{max}$', fontsize = 20)
plt.xlim([0, 440])
#plt.ylim([10**-5.5, 10**2.5])
for i in range(len(gamma)):
Input_folder=os.getcwd() + '/Electro_Static/Gamma Variation/gamma_' + str(gamma[i])
Time = np.loadtxt(os.path.normpath(Input_folder + '/Parameters.txt'), skiprows=1, unpack=True)[0]
Velocity = np.loadtxt(os.path.normpath(Input_folder + '/Parameters.txt'), skiprows=1, unpack=True)[3]
plt.semilogy(Time, Velocity, linewidth=2.0, label=r'$\gamma_0$: ' +str(gamma[i]))
plt.title(r"Velocity Study of $\gamma_0$", fontsize=25)
plt.legend(loc='lower right')
plt.grid(linestyle='--', linewidth='1.', color='black')
fig.tight_layout()
plt.savefig(Output_folder + 'gamma_zero_velocity_study.png')
plt.close(fig)
#"""
#"""
###############################################################################
# #
# 2-Stream: Variations of v_0: N_grid=128, nppc=500 and delta_t=delta_x/4 #
# #
###############################################################################
v = [0.01, 0.05, 0.1]
Output_folder = os.getcwd() + '/Electro_Magnetic/'
Time_begin = [150, 30, 10]
Time_end = [370, 90, 40]
fig = plt.figure(figsize=(10,7))
plt.xlabel(r'Time [$\omega_{p}^{-1}$]', fontsize = 20)
plt.ylabel(r'$v_{max}$', fontsize = 20)
#plt.xlim([0, 400])
#plt.ylim([10**-8.5, 10**-1.5])
for i in range(len(v)):
Input_folder=os.getcwd() + '/Electro_Magnetic/Velocity Variation/v_zero_' + str(v[i])
Time = np.loadtxt(os.path.normpath(Input_folder + '/Parameters.txt'), skiprows=1, unpack=True)[0]
Velocity_x = np.loadtxt(os.path.normpath(Input_folder + '/Parameters.txt'), skiprows=1, unpack=True)[3]
Velocity_y = np.loadtxt(os.path.normpath(Input_folder + '/Parameters.txt'), skiprows=1, unpack=True)[4]
Velocity_z = np.loadtxt(os.path.normpath(Input_folder + '/Parameters.txt'), skiprows=1, unpack=True)[5]
plt.semilogy(Time, Velocity_x, linewidth=2.0, label=r'$v_b$: ' + str(v[i]))
plt.title(r"Velocity $v_x$ Study of $v_b$", fontsize=25)
plt.legend(loc='lower right')
plt.grid(linestyle='--', linewidth='1.', color='black')
fig.tight_layout()
plt.savefig(Output_folder + 'V_zero_velocityx_study.png')
plt.close(fig)
#"""
#"""
###############################################################################
# #
# 2-Stream: Variations of v_0: N_grid=128, nppc=500 and delta_t=delta_x/4 #
# #
###############################################################################
v = [0.01, 0.05, 0.1]
Output_folder = os.getcwd() + '/Electro_Magnetic/'
Time_begin = [150, 30, 10]
Time_end = [370, 90, 40]
fig = plt.figure(figsize=(10, 7))
plt.xlabel(r'Time [$\omega_{p}^{-1}$]', fontsize=20)
plt.ylabel(r'$v_{max}$', fontsize=20)
#plt.xlim([0, 40])
#plt.ylim([10**-8.5, 10**-1.5])
for i in range(len(v)):
    Input_folder = os.getcwd() + '/Electro_Magnetic/Velocity Variation/v_zero_' + str(v[i])
    data = np.loadtxt(os.path.normpath(Input_folder + '/Parameters.txt'), skiprows=1, unpack=True)
    Time = data[0]
    Velocity_x, Velocity_y, Velocity_z = data[3], data[4], data[5]
    plt.semilogy(Time, Velocity_y, linewidth=2.0, label=r'$v_b$: ' + str(v[i]))
plt.title(r"Velocity $v_y$ Study of $v_b$", fontsize=25)
plt.legend(loc='lower right')
plt.grid(linestyle='--', linewidth=1.0, color='black')
fig.tight_layout()
plt.savefig(Output_folder + 'V_zero_velocityy_study.png')
plt.close(fig)
#"""
#"""
###############################################################################
# #
# 2-Stream: Variations of v_0: N_grid=128, nppc=500 and delta_t=delta_x/4 #
# #
###############################################################################
v = [0.01, 0.05, 0.1]
Output_folder = os.getcwd() + '/Electro_Magnetic/'
Time_begin = [150, 30, 10]
Time_end = [370, 90, 40]
fig = plt.figure(figsize=(10, 7))
plt.xlabel(r'Time [$\omega_{p}^{-1}$]', fontsize=20)
plt.ylabel(r'$v_{max}$', fontsize=20)
#plt.xlim([0, 40])
#plt.ylim([10**-8.5, 10**-1.5])
for i in range(len(v)):
    Input_folder = os.getcwd() + '/Electro_Magnetic/Velocity Variation/v_zero_' + str(v[i])
    data = np.loadtxt(os.path.normpath(Input_folder + '/Parameters.txt'), skiprows=1, unpack=True)
    Time = data[0]
    Velocity_x, Velocity_y, Velocity_z = data[3], data[4], data[5]
    plt.semilogy(Time, Velocity_z, linewidth=2.0, label=r'$v_b$: ' + str(v[i]))
plt.title(r"Velocity $v_z$ Study of $v_b$", fontsize=25)
plt.legend(loc='lower right')
plt.grid(linestyle='--', linewidth=1.0, color='black')
fig.tight_layout()
plt.savefig(Output_folder + 'V_zero_velocityz_study.png')
plt.close(fig)
#"""
| 40.246154 | 107 | 0.516565 | 994 | 7,848 | 3.942656 | 0.12173 | 0.058944 | 0.044909 | 0.06124 | 0.911967 | 0.907119 | 0.884154 | 0.877775 | 0.871141 | 0.871141 | 0 | 0.045975 | 0.221203 | 7,848 | 194 | 108 | 40.453608 | 0.595223 | 0.177752 | 0 | 0.68932 | 0 | 0 | 0.204489 | 0.045422 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.058252 | 0 | 0.058252 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a66133546a743baffc3bae0d5925f765234be11b | 2,745 | gyp | Python | binding.gyp | raadad/node-lwan | 6bef658eb80337a5e0d3a2d519b83694d055bb61 | [
"MIT"
] | 44 | 2015-05-27T03:42:00.000Z | 2020-07-18T11:49:42.000Z | binding.gyp | raadad/node-lwan | 6bef658eb80337a5e0d3a2d519b83694d055bb61 | [
"MIT"
] | 2 | 2015-07-12T05:17:17.000Z | 2015-11-25T08:31:47.000Z | binding.gyp | raadad/node-lwan | 6bef658eb80337a5e0d3a2d519b83694d055bb61 | [
"MIT"
] | 5 | 2015-12-20T18:48:22.000Z | 2020-06-27T19:45:55.000Z | {
  'variables': {
    # NOTE: the original defaults had these two values swapped
    # (includes pointed at /usr/lib and libpath at /usr/include).
    'shared_libzip%': 'true',
    'shared_libzip_includes%': '/usr/include',
    'shared_libzip_libpath%': '/usr/lib'
  },
  "targets": [
    {
      'conditions': [
        ['shared_libzip == "false"', {
          'dependencies': [
            'deps/libzip.gyp:libzip'
          ]
        },
        {
          'libraries': [
            '-L<@(shared_libzip_libpath)',
            '-lz'
          ],
          'include_dirs': [
            '<@(shared_libzip_includes)',
            '<@(shared_libzip_libpath)/libzip/include',
          ]
        }
        ],
      ],
      "cflags_cc": ['-std=c++11'],
      "cflags_c": ['-Wall', '-Wextra', '-Wshadow', '-Wconversion', '-std=gnu11', '-Wunused-variable'],
      "target_name": "tread",
      "sources": [
        "tread.cc",
        "./lwan/common/mime-types.h",
        "./lwan/common/base64.c",
        "./lwan/common/base64.h",
        "./lwan/common/hash.c",
        "./lwan/common/hash.h",
        "./lwan/common/int-to-str.c",
        "./lwan/common/int-to-str.h",
        "./lwan/common/list.c",
        "./lwan/common/list.h",
        "./lwan/common/lwan.c",
        "./lwan/common/lwan-cache.c",
        "./lwan/common/lwan-cache.h",
        "./lwan/common/lwan-config.c",
        "./lwan/common/lwan-config.h",
        "./lwan/common/lwan-coro.c",
        "./lwan/common/lwan-coro.h",
        "./lwan/common/lwan.h",
        "./lwan/common/lwan-http-authorize.c",
        "./lwan/common/lwan-http-authorize.h",
        "./lwan/common/lwan-io-wrappers.c",
        "./lwan/common/lwan-io-wrappers.h",
        "./lwan/common/lwan-job.c",
        "./lwan/common/lwan-private.h",
        "./lwan/common/lwan-redirect.c",
        "./lwan/common/lwan-redirect.h",
        "./lwan/common/lwan-request.c",
        "./lwan/common/lwan-response.c",
        "./lwan/common/lwan-serve-files.c",
        "./lwan/common/lwan-serve-files.h",
        "./lwan/common/lwan-socket.c",
        "./lwan/common/lwan-status.c",
        "./lwan/common/lwan-status.h",
        "./lwan/common/lwan-template.c",
        "./lwan/common/lwan-template.h",
        "./lwan/common/lwan-tables.c",
        "./lwan/common/lwan-thread.c",
        "./lwan/common/lwan-trie.c",
        "./lwan/common/lwan-trie.h",
        "./lwan/common/murmur3.c",
        "./lwan/common/murmur3.h",
        "./lwan/common/reallocarray.c",
        "./lwan/common/reallocarray.h",
        "./lwan/common/realpathat.c",
        "./lwan/common/realpathat.h",
        "./lwan/common/sd-daemon.c",
        "./lwan/common/sd-daemon.h",
        "./lwan/common/strbuf.c",
        "./lwan/common/strbuf.h",
      ]
    }
  ]
}
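# Editor's note (not part of the original file): `shared_libzip` defaults to
# 'true', so the system libzip headers/libraries above are used. Building the
# bundled deps/libzip.gyp:libzip target instead requires defining
# shared_libzip=false, e.g. via `node-gyp rebuild --shared_libzip=false`
# (node-gyp typically forwards such flags as gyp variable definitions).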
| 32.294118 | 105 | 0.484517 | 293 | 2,745 | 4.484642 | 0.245734 | 0.365297 | 0.30898 | 0.194064 | 0.234399 | 0.038052 | 0 | 0 | 0 | 0 | 0 | 0.005208 | 0.300546 | 2,745 | 84 | 106 | 32.678571 | 0.679167 | 0 | 0 | 0.036145 | 0 | 0 | 0.602914 | 0.471403 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a6a042ccca6395bebd885571d4e21b096f272054 | 159 | py | Python | scripts/visualize_execute.py | suneric/visualizer3d | ebaa5cb05f8e0ebb7eaa753709d09bde3a77b6c1 | [
"MIT"
] | null | null | null | scripts/visualize_execute.py | suneric/visualizer3d | ebaa5cb05f8e0ebb7eaa753709d09bde3a77b6c1 | [
"MIT"
] | null | null | null | scripts/visualize_execute.py | suneric/visualizer3d | ebaa5cb05f8e0ebb7eaa753709d09bde3a77b6c1 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
import subprocess
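# Launch the external asv3d viewer on pre-recorded aircraft-scanning data.
# Note: both command-line arguments below are machine-specific absolute paths.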
subprocess.call(["/home/yufeng/catkin_ws/devel/lib/aircraft_scanning_visualize/asv3d", "/home/yufeng/Temp/Scanning/"])
| 39.75 | 118 | 0.792453 | 22 | 159 | 5.590909 | 0.818182 | 0.162602 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006536 | 0.037736 | 159 | 3 | 119 | 53 | 0.797386 | 0.125786 | 0 | 0 | 0 | 0 | 0.673913 | 0.673913 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
a6a70f295b2a91a4e958ffd00e3aacfb29136635 | 36 | py | Python | reward/tfm/img/__init__.py | lgvaz/torchrl | cfff8acaf70d1fec72169162b95ab5ad3547d17a | [
"MIT"
] | 5 | 2018-06-21T14:33:40.000Z | 2018-08-18T02:26:03.000Z | reward/tfm/img/__init__.py | lgvaz/reward | cfff8acaf70d1fec72169162b95ab5ad3547d17a | [
"MIT"
] | null | null | null | reward/tfm/img/__init__.py | lgvaz/reward | cfff8acaf70d1fec72169162b95ab5ad3547d17a | [
"MIT"
] | 2 | 2018-05-08T03:34:49.000Z | 2018-06-22T15:04:17.000Z | from .img import Gray, Resize, Stack | 36 | 36 | 0.777778 | 6 | 36 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.138889 | 36 | 1 | 36 | 36 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a6ae555dd7aef7d67207803294aec6f50f031d62 | 63 | py | Python | quickfix_doc/datadictionary/__init__.py | connamara/QuickFIX-doc | fa75e27dfada2da12148e9ea67d0ceb6a31f1d46 | [
"DOC"
] | 3 | 2018-12-25T19:49:56.000Z | 2021-07-17T01:41:08.000Z | quickfix_doc/datadictionary/__init__.py | connamara/QuickFIX-doc | fa75e27dfada2da12148e9ea67d0ceb6a31f1d46 | [
"DOC"
] | 1 | 2018-12-07T20:53:31.000Z | 2018-12-07T20:53:31.000Z | quickfix_doc/datadictionary/__init__.py | connamara/QuickFIX-doc | fa75e27dfada2da12148e9ea67d0ceb6a31f1d46 | [
"DOC"
] | 3 | 2020-05-21T03:07:19.000Z | 2021-07-18T03:07:06.000Z | from . import util
from . import fields
from . import messages
| 15.75 | 22 | 0.761905 | 9 | 63 | 5.333333 | 0.555556 | 0.625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.190476 | 63 | 3 | 23 | 21 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a6c6049930b84debf8c60bb2c8d315f5ef3db67d | 277 | py | Python | dashboards/admin.py | EddyAnalytics/eddy-backend | bc465996e51b9ebc3e498ad0d6434bac80b173a6 | [
"Apache-2.0"
] | 1 | 2021-09-24T07:52:08.000Z | 2021-09-24T07:52:08.000Z | dashboards/admin.py | EddyAnalytics/eddy-backend | bc465996e51b9ebc3e498ad0d6434bac80b173a6 | [
"Apache-2.0"
] | 2 | 2021-05-25T22:16:18.000Z | 2021-06-09T19:16:24.000Z | dashboards/admin.py | EddyAnalytics/eddy-backend | bc465996e51b9ebc3e498ad0d6434bac80b173a6 | [
"Apache-2.0"
] | null | null | null | from django.contrib import admin
from dashboards.models import Dashboard, Widget, WidgetType
from utils.utils import ReadOnlyIdAdmin
admin.site.register(Dashboard, ReadOnlyIdAdmin)
admin.site.register(Widget, ReadOnlyIdAdmin)
admin.site.register(WidgetType, ReadOnlyIdAdmin)
| 30.777778 | 59 | 0.848375 | 32 | 277 | 7.34375 | 0.4375 | 0.255319 | 0.306383 | 0.408511 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079422 | 277 | 8 | 60 | 34.625 | 0.921569 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
471a1a826661e0a610ac5191732620ced86a3b20 | 166 | py | Python | graphbrain/commands/generate_synonyms.py | renkexinmay/graphbrain | 65d34db1c92a56af492ce62a9c114956f4b14b8b | [
"MIT"
] | null | null | null | graphbrain/commands/generate_synonyms.py | renkexinmay/graphbrain | 65d34db1c92a56af492ce62a9c114956f4b14b8b | [
"MIT"
] | null | null | null | graphbrain/commands/generate_synonyms.py | renkexinmay/graphbrain | 65d34db1c92a56af492ce62a9c114956f4b14b8b | [
"MIT"
] | null | null | null | from graphbrain.hypergraph import HyperGraph
import graphbrain.synonyms.synonyms as synonyms
def run(params):
    hg = HyperGraph(params)
    synonyms.generate(hg)
| 20.75 | 47 | 0.783133 | 20 | 166 | 6.5 | 0.55 | 0.246154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144578 | 166 | 7 | 48 | 23.714286 | 0.915493 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | false | 0 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5b36eb6fc6951c1b11dbd9d6bb10aca53538ee1e | 179 | py | Python | app/ffmpeg/services/__init__.py | ihor-pyvovarnyk/oae-sound-processing-tool | 602420cd9705997002b6cb9eb86bd09be899bd5d | [
"BSD-2-Clause"
] | null | null | null | app/ffmpeg/services/__init__.py | ihor-pyvovarnyk/oae-sound-processing-tool | 602420cd9705997002b6cb9eb86bd09be899bd5d | [
"BSD-2-Clause"
] | null | null | null | app/ffmpeg/services/__init__.py | ihor-pyvovarnyk/oae-sound-processing-tool | 602420cd9705997002b6cb9eb86bd09be899bd5d | [
"BSD-2-Clause"
] | null | null | null | from .command_builder_service import CommandBuilderService
from .schemas_provider_service import SchemasProviderService
from .schema_compiler_service import SchemaCompilerService
| 44.75 | 60 | 0.916201 | 18 | 179 | 8.777778 | 0.666667 | 0.246835 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067039 | 179 | 3 | 61 | 59.666667 | 0.946108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5b6a27013df3e5f60979d4072be3cbba61360c3e | 53 | py | Python | codql/helper/__init__.py | Heersin/codeql_packer | 5d1258ce2419a67161ac3b844219ebdbe5310e59 | [
"MIT"
] | null | null | null | codql/helper/__init__.py | Heersin/codeql_packer | 5d1258ce2419a67161ac3b844219ebdbe5310e59 | [
"MIT"
] | null | null | null | codql/helper/__init__.py | Heersin/codeql_packer | 5d1258ce2419a67161ac3b844219ebdbe5310e59 | [
"MIT"
] | null | null | null | from . import cmd_helper
from . import system_helper
| 17.666667 | 27 | 0.811321 | 8 | 53 | 5.125 | 0.625 | 0.487805 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150943 | 53 | 2 | 28 | 26.5 | 0.911111 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5b7752dc449fea712f03ae93509e31936468379d | 91 | py | Python | python/hamming/hamming.py | StevenACoffman/exercism | 2e585832cbd75e83bbb05fd71e1692f5ec99827b | [
"MIT"
] | 1 | 2020-07-24T20:13:05.000Z | 2020-07-24T20:13:05.000Z | python/hamming/hamming.py | StevenACoffman/exercism | 2e585832cbd75e83bbb05fd71e1692f5ec99827b | [
"MIT"
] | null | null | null | python/hamming/hamming.py | StevenACoffman/exercism | 2e585832cbd75e83bbb05fd71e1692f5ec99827b | [
"MIT"
] | null | null | null | def distance(first, second):
    return sum([1 for x, y in zip(first, second) if x != y])
| 22.75 | 60 | 0.626374 | 17 | 91 | 3.352941 | 0.764706 | 0.385965 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.014085 | 0.21978 | 91 | 3 | 61 | 30.333333 | 0.788732 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
5b7d9e981a9198b6d3db4ceda0a608dc3b67afff | 9,939 | py | Python | backwardcompatibilityml/tensorflow/loss/new_error.py | microsoft/BackwardCompatibilityML | 5910e485453f07fd5c85114d15c423c5db521122 | [
"MIT"
] | 54 | 2020-09-11T18:36:59.000Z | 2022-03-29T00:47:55.000Z | backwardcompatibilityml/tensorflow/loss/new_error.py | microsoft/BackwardCompatibilityML | 5910e485453f07fd5c85114d15c423c5db521122 | [
"MIT"
] | 115 | 2020-10-08T16:55:34.000Z | 2022-03-12T00:50:21.000Z | backwardcompatibilityml/tensorflow/loss/new_error.py | microsoft/BackwardCompatibilityML | 5910e485453f07fd5c85114d15c423c5db521122 | [
"MIT"
] | 11 | 2020-10-04T09:40:11.000Z | 2021-12-21T21:03:33.000Z | import tensorflow.compat.v2 as tf
class BCNLLLoss(object):
    """
    Backward Compatibility New Error Negative Log Likelihood Loss

    This class implements the backward compatibility loss function
    with the underlying loss function being the Negative Log Likelihood
    loss.

    Note that the final layer of each model is assumed to have a
    softmax output.

    Example usage:
        h1 = MyModel()
        ... train h1 ...
        h1.trainable = False
        lambda_c = 0.5 (regularization parameter)
        h2 = MyNewModel() (this may be the same model type as MyModel)
        bc_loss = BCNLLLoss(h1, h2, lambda_c)
        optimizer = tf.keras.optimizers.SGD(0.01)
        tf_helpers.bc_fit(
            h2,
            training_set=ds_train,
            testing_set=ds_test,
            epochs=6,
            bc_loss=bc_loss,
            optimizer=optimizer)

    Args:
        h1: Our reference model which we would like to be compatible with.
        h2: Our new model which will be the updated model.
        lambda_c: A float between 0.0 and 1.0, which is a regularization
            parameter that determines how much we want to penalize model h2
            for being incompatible with h1. Lower values penalize less and
            higher values penalize more.
    """

    def __init__(self, h1, h2, lambda_c, clip_value_min=1e-10, clip_value_max=4.0):
        self.h1 = h1
        self.h2 = h2
        self.lambda_c = lambda_c
        self.clip_value_min = clip_value_min
        self.clip_value_max = clip_value_max
        self.__name__ = "BCNLLLoss"

    def nll_loss(self, target_labels, model_output):
        # Pick the model output probabilities corresponding to the ground truth labels
        model_outputs_for_targets = tf.gather(
            model_output, tf.dtypes.cast(target_labels, tf.int32), axis=1)
        # We make sure to clip the probability values so that they do not
        # result in NaNs once we take the logarithm
        model_outputs_for_targets = tf.clip_by_value(
            model_outputs_for_targets,
            clip_value_min=self.clip_value_min,
            clip_value_max=self.clip_value_max)
        loss = -1 * tf.reduce_mean(tf.math.log(model_outputs_for_targets))
        return loss

    def dissonance(self, h2_output, target_labels):
        nll_loss = self.nll_loss(target_labels, h2_output)
        return nll_loss

    def __call__(self, x, y):
        h1_output = tf.argmax(self.h1(x), axis=1)
        h2_output = self.h2(x)
        h1_diff = h1_output - y
        h1_correct = (h1_diff == 0)
        _, x_support = tf.dynamic_partition(x, tf.dtypes.cast(h1_correct, tf.int32), 2)
        _, y_support = tf.dynamic_partition(y, tf.dtypes.cast(h1_correct, tf.int32), 2)
        h2_support_output = self.h2(x_support)
        dissonance = self.dissonance(h2_support_output, y_support)
        new_error_loss = self.nll_loss(y, h2_output) + self.lambda_c * dissonance
        return new_error_loss
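# Editor's note on the support-set selection used in __call__ above:
# tf.dynamic_partition splits a batch according to a 0/1 partition vector,
# so the "support set" is exactly the examples that h1 classifies correctly.
# A minimal standalone illustration:
#
#     x = tf.constant([10., 20., 30.])
#     h1_correct = tf.constant([0, 1, 0])  # 1 where h1 got the label right
#     missed, support = tf.dynamic_partition(x, h1_correct, 2)
#     # missed  -> [10., 30.]
#     # support -> [20.]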
class BCCrossEntropyLoss(object):
    """
    Backward Compatibility New Error Cross Entropy Loss

    This class implements the backward compatibility loss function
    with the underlying loss function being the cross entropy loss.

    Note that the final layer of each model is assumed to have a
    softmax output.

    Example usage:
        h1 = MyModel()
        ... train h1 ...
        h1.trainable = False
        lambda_c = 0.5 (regularization parameter)
        h2 = MyNewModel() (this may be the same model type as MyModel)
        bc_loss = BCCrossEntropyLoss(h1, h2, lambda_c)
        optimizer = tf.keras.optimizers.SGD(0.01)
        tf_helpers.bc_fit(
            h2,
            training_set=ds_train,
            testing_set=ds_test,
            epochs=6,
            bc_loss=bc_loss,
            optimizer=optimizer)

    Args:
        h1: Our reference model which we would like to be compatible with.
        h2: Our new model which will be the updated model.
        lambda_c: A float between 0.0 and 1.0, which is a regularization
            parameter that determines how much we want to penalize model h2
            for being incompatible with h1. Lower values penalize less and
            higher values penalize more.
    """

    def __init__(self, h1, h2, lambda_c):
        self.h1 = h1
        self.h2 = h2
        self.lambda_c = lambda_c
        self.__name__ = "BCCrossEntropyLoss"
        self.cce_loss = tf.keras.losses.SparseCategoricalCrossentropy(
            reduction=tf.keras.losses.Reduction.SUM)

    def dissonance(self, h2_output, target_labels):
        cross_entropy_loss = self.cce_loss(target_labels, h2_output)
        return cross_entropy_loss

    def __call__(self, x, y):
        h1_output = tf.argmax(self.h1(x), axis=1)
        h2_output = self.h2(x)
        h1_diff = h1_output - y
        h1_correct = (h1_diff == 0)
        _, x_support = tf.dynamic_partition(x, tf.dtypes.cast(h1_correct, tf.int32), 2)
        _, y_support = tf.dynamic_partition(y, tf.dtypes.cast(h1_correct, tf.int32), 2)
        h2_support_output = self.h2(x_support)
        dissonance = self.dissonance(h2_support_output, y_support)
        new_error_loss = self.cce_loss(y, h2_output) + self.lambda_c * dissonance
        return tf.reduce_sum(new_error_loss)
class BCBinaryCrossEntropyLoss(object):
    """
    Backward Compatibility New Error Binary Cross Entropy Loss

    This class implements the backward compatibility loss function
    with the underlying loss function being the binary cross entropy
    loss.

    Note that the final layer of each model is assumed to have a
    softmax output.

    Example usage:
        h1 = MyModel()
        ... train h1 ...
        h1.trainable = False
        lambda_c = 0.5 (regularization parameter)
        h2 = MyNewModel() (this may be the same model type as MyModel)
        bc_loss = BCBinaryCrossEntropyLoss(h1, h2, lambda_c)
        optimizer = tf.keras.optimizers.SGD(0.01)
        tf_helpers.bc_fit(
            h2,
            training_set=ds_train,
            testing_set=ds_test,
            epochs=6,
            bc_loss=bc_loss,
            optimizer=optimizer)

    Args:
        h1: Our reference model which we would like to be compatible with.
        h2: Our new model which will be the updated model.
        lambda_c: A float between 0.0 and 1.0, which is a regularization
            parameter that determines how much we want to penalize model h2
            for being incompatible with h1. Lower values penalize less and
            higher values penalize more.
    """

    def __init__(self, h1, h2, lambda_c):
        self.h1 = h1
        self.h2 = h2
        self.lambda_c = lambda_c
        self.__name__ = "BCBinaryCrossEntropyLoss"
        self.bce_loss = tf.keras.losses.BinaryCrossentropy(
            reduction=tf.keras.losses.Reduction.SUM)

    def dissonance(self, h2_output, target_labels):
        cross_entropy_loss = self.bce_loss(target_labels, h2_output)
        return cross_entropy_loss

    def __call__(self, x, y):
        h1_output = tf.argmax(self.h1(x), axis=1)
        h2_output = self.h2(x)
        h1_diff = h1_output - tf.argmax(y, axis=1)
        h1_correct = (h1_diff == 0)
        _, x_support = tf.dynamic_partition(x, tf.dtypes.cast(h1_correct, tf.int32), 2)
        _, y_support = tf.dynamic_partition(y, tf.dtypes.cast(h1_correct, tf.int32), 2)
        h2_support_output = self.h2(x_support)
        dissonance = self.dissonance(h2_support_output, y_support)
        new_error_loss = self.bce_loss(y, h2_output) + self.lambda_c * dissonance
        return tf.reduce_sum(new_error_loss)
class BCKLDivLoss(object):
    """
    Backward Compatibility New Error Kullback-Leibler Divergence Loss

    This class implements the backward compatibility loss function
    with the underlying loss function being the Kullback-Leibler
    divergence loss.

    Note that the final layer of each model is assumed to have a
    softmax output.

    Example usage:
        h1 = MyModel()
        ... train h1 ...
        h1.trainable = False
        lambda_c = 0.5 (regularization parameter)
        h2 = MyNewModel() (this may be the same model type as MyModel)
        bc_loss = BCKLDivLoss(h1, h2, lambda_c)
        optimizer = tf.keras.optimizers.SGD(0.01)
        tf_helpers.bc_fit(
            h2,
            training_set=ds_train,
            testing_set=ds_test,
            epochs=6,
            bc_loss=bc_loss,
            optimizer=optimizer)

    Args:
        h1: Our reference model which we would like to be compatible with.
        h2: Our new model which will be the updated model.
        lambda_c: A float between 0.0 and 1.0, which is a regularization
            parameter that determines how much we want to penalize model h2
            for being incompatible with h1. Lower values penalize less and
            higher values penalize more.
    """

    def __init__(self, h1, h2, lambda_c):
        self.h1 = h1
        self.h2 = h2
        self.lambda_c = lambda_c
        self.__name__ = "BCKLDivLoss"
        self.kldiv_loss = tf.keras.losses.KLDivergence(
            reduction=tf.keras.losses.Reduction.SUM)

    def dissonance(self, h2_output, target_labels):
        kldiv_loss = self.kldiv_loss(target_labels, h2_output)
        return kldiv_loss

    def __call__(self, x, y):
        h1_output = tf.argmax(self.h1(x), axis=1)
        h2_output = self.h2(x)
        h1_diff = h1_output - tf.argmax(y, axis=1)
        h1_correct = (h1_diff == 0)
        _, x_support = tf.dynamic_partition(x, tf.dtypes.cast(h1_correct, tf.int32), 2)
        _, y_support = tf.dynamic_partition(y, tf.dtypes.cast(h1_correct, tf.int32), 2)
        h2_support_output = self.h2(x_support)
        dissonance = self.dissonance(h2_support_output, y_support)
        new_error_loss = self.kldiv_loss(y, h2_output) + self.lambda_c * dissonance
        return tf.reduce_sum(new_error_loss)
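# Editor's note: a minimal sketch (not part of the original module) of driving
# one of the losses above with a custom tf.GradientTape loop, for readers not
# using tf_helpers.bc_fit; `h1`, `h2` and `ds_train` are assumed to exist as
# in the docstring examples:
#
#     bc_loss = BCCrossEntropyLoss(h1, h2, lambda_c=0.5)
#     optimizer = tf.keras.optimizers.SGD(0.01)
#     for x_batch, y_batch in ds_train:
#         with tf.GradientTape() as tape:
#             loss = bc_loss(x_batch, y_batch)
#         grads = tape.gradient(loss, h2.trainable_variables)
#         optimizer.apply_gradients(zip(grads, h2.trainable_variables))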
| 36.540441 | 87 | 0.648757 | 1,371 | 9,939 | 4.491612 | 0.126915 | 0.031829 | 0.017538 | 0.01429 | 0.856285 | 0.825755 | 0.803995 | 0.797986 | 0.797986 | 0.791491 | 0 | 0.029979 | 0.275078 | 9,939 | 271 | 88 | 36.675277 | 0.824705 | 0.466948 | 0 | 0.642857 | 0 | 0 | 0.012909 | 0.004997 | 0 | 0 | 0 | 0 | 0 | 1 | 0.132653 | false | 0 | 0.010204 | 0 | 0.27551 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5b91cf330cb207b13489c868549d2da719db2121 | 13,558 | py | Python | todo/test_views.py | staeff/todo-backend-flask | 52e51cf72f8cc8115614646a4dd9601c9316a7e8 | [
"MIT"
] | 17 | 2016-04-25T19:36:39.000Z | 2020-04-25T00:46:39.000Z | todo/test_views.py | spacecode-live/todo-backend-flask | 2cc5a97928643c04fe95e1d076d463986d964394 | [
"MIT"
] | null | null | null | todo/test_views.py | spacecode-live/todo-backend-flask | 2cc5a97928643c04fe95e1d076d463986d964394 | [
"MIT"
] | 14 | 2016-10-03T20:01:55.000Z | 2022-03-14T21:45:52.000Z | from todo.test_base import BaseTestCase
import unittest
from flask import json, url_for
class IndexTestCase(BaseTestCase):

    def test_index(self):
        response = self.app.get(url_for("index"))
        self.assertEqual(response.status_code, 200)

    def test_cors_headers(self):
        response = self.app.get(url_for("index"), headers={"Origin": "www.example.com"})
        self.assertEqual(response.headers["Access-Control-Allow-Origin"], "www.example.com")

    def test_index_allows_posts(self):
        data = dict(title="some text")
        response = self.app.post(url_for("index"),
            data=json.dumps(data), content_type="application/json")
        self.assertEqual(response.status_code, 200)

    def test_index_returns_lists(self):
        response = self.app.get(url_for("index"))
        self.assertIsInstance(json.loads(response.data.decode("utf-8")), list)

    def test_index_returns_entry(self):
        data = dict(title="some other text")
        response = self.app.post(url_for("index"),
            data=json.dumps(data), content_type="application/json")
        self.assertEqual(data["title"], json.loads(response.data)["title"])

    def test_index_allows_delete(self):
        response = self.app.delete(url_for("index"))
        self.assertEqual(response.status_code, 200)

    def test_index_responds_with_empty_array_after_delete(self):
        response = self.app.delete(url_for("index"))
        self.assertEqual(response.data.decode("utf-8"), "[]")

    def test_index_saves_posted_data(self):
        data = dict(title="different text")
        self.app.post(url_for("index"), data=json.dumps(data), content_type="application/json")
        response = self.app.get(url_for("index"))
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(response_data[0]["title"], data["title"])

    def test_index_deletes_all_entries_after_delete(self):
        data1 = dict(title="different text")
        self.app.post(url_for("index"), data=json.dumps(data1), content_type="application/json")
        data2 = dict(title="some different text")
        self.app.post(url_for("index"), data=json.dumps(data2), content_type="application/json")
        data3 = dict(title="more different text")
        self.app.post(url_for("index"), data=json.dumps(data3), content_type="application/json")
        self.app.delete(url_for("index"))
        response = self.app.get(url_for("index"))
        self.assertEqual(response.data.decode("utf-8"), "[]")

    def test_index_returns_multiple_entries_properly_formatted(self):
        data1 = dict(title="different text")
        self.app.post(url_for("index"), data=json.dumps(data1), content_type="application/json")
        data2 = dict(title="some different text")
        self.app.post(url_for("index"), data=json.dumps(data2), content_type="application/json")
        data3 = dict(title="more different text")
        self.app.post(url_for("index"), data=json.dumps(data3), content_type="application/json")
        response = self.app.get(url_for("index"))
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(response_data[0]["title"], data1["title"])
        self.assertEqual(response_data[1]["title"], data2["title"])
        self.assertEqual(response_data[2]["title"], data3["title"])

    def test_index_returns_no_comma_at_the_end_of_the_list(self):
        data = dict(title="different text")
        self.app.post(url_for("index"), data=json.dumps(data), content_type="application/json")
        response = self.app.get(url_for("index"))
        self.assertEqual(response.data.decode("utf-8")[-2:], "}]")

    def test_entries_contain_completed_property(self):
        data = dict(title="different text")
        self.app.post(url_for("index"), data=json.dumps(data), content_type="application/json")
        response = self.app.get(url_for("index"))
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertIn("completed", response_data[0])

    def test_new_entries_have_completed_property(self):
        data = dict(title="different text")
        response = self.app.post(url_for("index"),
            data=json.dumps(data), content_type="application/json")
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertIn("completed", response_data)

    def test_new_entries_are_not_completed_post(self):
        data = dict(title="different text")
        response = self.app.post(url_for("index"),
            data=json.dumps(data), content_type="application/json")
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(response_data["completed"], False)

    def test_new_entries_are_not_completed_get(self):
        data = dict(title="different text")
        self.app.post(url_for("index"), data=json.dumps(data), content_type="application/json")
        response = self.app.get(url_for("index"))
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(response_data[0]["completed"], False)

    def test_new_entries_have_url_property(self):
        data = dict(title="different text")
        response = self.app.post(url_for("index"),
            data=json.dumps(data), content_type="application/json")
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertIn("url", response_data)

    def test_entries_have_url_property(self):
        data = dict(title="different text")
        self.app.post(url_for("index"), data=json.dumps(data), content_type="application/json")
        response = self.app.get(url_for("index"))
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertIn("url", response_data[0])

    def test_entries_have_proper_url(self):
        data = dict(title="different text")
        self.app.post(url_for("index"), data=json.dumps(data), content_type="application/json")
        response = self.app.get(url_for("index"))
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(url_for("entry", entry_id=1, _external=True), response_data[0]["url"])

    def test_new_entries_have_proper_url(self):
        data = dict(title="different text")
        response = self.app.post(url_for("index"),
            data=json.dumps(data), content_type="application/json")
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(url_for("entry", entry_id=1, _external=True), response_data["url"])

    def test_can_create_new_entry_with_order(self):
        data = dict(title="different text", order=10)
        response = self.app.post(url_for("index"),
            data=json.dumps(data), content_type="application/json")
        self.assertEqual(response.status_code, 200)

    def test_new_entries_with_order_have_correct_order_property(self):
        data = dict(title="different text", order=10)
        self.app.post(url_for("index"),
            data=json.dumps(data), content_type="application/json")
        response = self.app.get(url_for("entry", entry_id=1))
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(data["order"], response_data["order"])

    def test_new_entries_order_input_validation_string(self):
        data = dict(title="different text", order="not a number")
        response = self.app.post(url_for("index"),
            data=json.dumps(data), content_type="application/json")
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(response.status_code, 400)
        self.assertEqual(data["order"] + " is not an integer.", response_data["message"])

    def test_new_entries_order_input_validation_float(self):
        data = dict(title="different text", order=23.3)
        response = self.app.post(url_for("index"),
            data=json.dumps(data), content_type="application/json")
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(response.status_code, 400)
        self.assertEqual(str(data["order"]) + " is not an integer.", response_data["message"])
class EntryTestCase(BaseTestCase):

    def setUp(self):
        BaseTestCase.setUp(self)
        self.data = dict(title="text", order=10)
        self.app.post(url_for("index"),
            data=json.dumps(self.data), content_type="application/json")

    def test_entry_returns_entry(self):
        response = self.app.get(url_for("entry", entry_id=1))
        self.assertEqual(response.status_code, 200)

    def test_entry_returns_correct_entry(self):
        response = self.app.get(url_for("entry", entry_id=1))
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(self.data["title"], response_data["title"])

    def test_entry_allows_patching_title(self):
        data = dict(title="different text")
        response = self.app.patch(url_for("entry", entry_id=1),
            data=json.dumps(data), content_type="application/json")
        self.assertEqual(response.status_code, 200)

    def test_patching_entry_changes_title(self):
        data = dict(title="different text")
        self.app.patch(url_for("entry", entry_id=1),
            data=json.dumps(data), content_type="application/json")
        response = self.app.get(url_for("entry", entry_id=1))
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(data["title"], response_data["title"])

    def test_patching_entrys_completedness(self):
        data = dict(completed=True)
        self.app.patch(url_for("entry", entry_id=1),
            data=json.dumps(data), content_type="application/json")
        response = self.app.get(url_for("entry", entry_id=1))
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(data["completed"], response_data["completed"])

    def test_entry_allows_delete(self):
        response = self.app.delete(url_for("entry", entry_id=1))
        self.assertEqual(response.status_code, 200)

    def test_entry_delete_returns_empty_json(self):
        response = self.app.delete(url_for("entry", entry_id=1))
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(response_data, dict())

    def test_entry_delete_deletes_entry(self):
        self.app.delete(url_for("entry", entry_id=1))
        response = self.app.get(url_for("entry", entry_id=1))
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(response_data, dict())

    def test_entry_delete_only_deletes_referenced_entry(self):
        data = dict(title="other")
        self.app.post(url_for("index"),
            data=json.dumps(data), content_type="application/json")
        self.app.delete(url_for("entry", entry_id=1))
        response = self.app.get(url_for("entry", entry_id=2))
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(response_data["title"], data["title"])

    def test_can_patch_order(self):
        data = dict(order=3)
        response = self.app.patch(url_for("entry", entry_id=1),
            data=json.dumps(data), content_type="application/json")
        self.assertEqual(response.status_code, 200)

    def test_patching_order_changes_order(self):
        data = dict(order=3)
        self.app.patch(url_for("entry", entry_id=1),
            data=json.dumps(data), content_type="application/json")
        response = self.app.get(url_for("entry", entry_id=1))
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(data["order"], response_data["order"])

    def test_patching_completed_input_validation_string(self):
        data = dict(completed="not a bool")
        response = self.app.patch(url_for("entry", entry_id=1),
            data=json.dumps(data), content_type="application/json")
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(response.status_code, 400)
        self.assertEqual(data["completed"] + " is not a boolean.", response_data["message"])

    def test_patching_completed_input_validation_float(self):
        data = dict(completed=23.5)
        response = self.app.patch(url_for("entry", entry_id=1),
            data=json.dumps(data), content_type="application/json")
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(response.status_code, 400)
        self.assertEqual(str(data["completed"]) + " is not a boolean.", response_data["message"])

    def test_patching_order_input_validation_string(self):
        data = dict(order="not a number")
        response = self.app.patch(url_for("entry", entry_id=1),
            data=json.dumps(data), content_type="application/json")
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(response.status_code, 400)
        self.assertEqual(data["order"] + " is not an integer.", response_data["message"])

    def test_patching_order_input_validation_float(self):
        data = dict(order=23.5)
        response = self.app.patch(url_for("entry", entry_id=1),
            data=json.dumps(data), content_type="application/json")
        response_data = json.loads(response.data.decode("utf-8"))
        self.assertEqual(response.status_code, 400)
        self.assertEqual(str(data["order"]) + " is not an integer.", response_data["message"])
if __name__ == "__main__":
    unittest.main()
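# Editor's note: assuming a standard checkout, the suite runs either through
# the __main__ guard above (`python -m todo.test_views`) or, if pytest is
# available, with `python -m pytest todo/test_views.py`.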
| 49.301818 | 97 | 0.668314 | 1,795 | 13,558 | 4.84234 | 0.071866 | 0.109066 | 0.065578 | 0.098711 | 0.885182 | 0.853659 | 0.83295 | 0.789692 | 0.773125 | 0.754027 | 0 | 0.01213 | 0.185204 | 13,558 | 274 | 98 | 49.481752 | 0.77469 | 0 | 0 | 0.634783 | 0 | 0 | 0.132984 | 0.001991 | 0 | 0 | 0 | 0 | 0.2 | 1 | 0.169565 | false | 0 | 0.013043 | 0 | 0.191304 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5b9ded576231db1f899cc175c3e9d1323c4af1af | 70,746 | py | Python | cardano-node-tests/cardano_node_tests/tests/test_pools.py | MitchellTesla/Cardano-SCK | f394506eb0875622093805c009951f6905261778 | [
"Apache-2.0"
] | 6 | 2021-08-30T00:49:12.000Z | 2022-01-27T07:07:53.000Z | cardano-node-tests/cardano_node_tests/tests/test_pools.py | c-spider/Cardano-SCK | 1accb0426289489e371eb67422ccb19ffaab5f3c | [
"Apache-2.0"
] | 17 | 2021-08-31T23:27:44.000Z | 2022-03-25T20:35:16.000Z | cardano-node-tests/cardano_node_tests/tests/test_pools.py | c-spider/Cardano-SCK | 1accb0426289489e371eb67422ccb19ffaab5f3c | [
"Apache-2.0"
] | 3 | 2021-05-20T08:26:00.000Z | 2022-03-27T22:31:36.000Z | """Tests for operations with stake pools.
* pool registration
* pool deregistration
* pool update
* pool metadata
* pool reregistration
"""
import json
import logging
from pathlib import Path
from typing import List
from typing import Tuple
import allure
import hypothesis
import hypothesis.strategies as st
import pytest
from _pytest.tmpdir import TempdirFactory
from cardano_clusterlib import clusterlib
from cardano_node_tests.utils import cluster_management
from cardano_node_tests.utils import cluster_nodes
from cardano_node_tests.utils import clusterlib_utils
from cardano_node_tests.utils import helpers
LOGGER = logging.getLogger(__name__)
DEREG_BUFFER_SEC = 30
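# Safety margin (in seconds) before an epoch boundary: when less time than
# this remains in the current epoch, the tests below target one extra epoch
# (or first wait for a safe window) so a pool-deregistration certificate
# cannot land in a different epoch than intended.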
@pytest.fixture(scope="module")
def create_temp_dir(tmp_path_factory: TempdirFactory):
    """Create a temporary dir."""
    p = Path(tmp_path_factory.getbasetemp()).joinpath(helpers.get_id_for_mktemp(__file__)).resolve()
    p.mkdir(exist_ok=True, parents=True)
    return p


@pytest.fixture
def temp_dir(create_temp_dir: Path):
    """Change to a temporary dir."""
    with helpers.change_cwd(create_temp_dir):
        yield create_temp_dir
# use the "temp_dir" fixture for all tests automatically
pytestmark = pytest.mark.usefixtures("temp_dir")
@pytest.fixture(scope="module")
def pool_cost_start_cluster(tmp_path_factory: TempdirFactory) -> Path:
    """Update *minPoolCost* to 500."""
    pytest_globaltemp = helpers.get_pytest_globaltemp(tmp_path_factory)

    # need to lock because this same fixture can run on several workers in parallel
    with helpers.FileLockIfXdist(f"{pytest_globaltemp}/startup_files_pool_500.lock"):
        destdir = pytest_globaltemp / "startup_files_pool_500"
        destdir.mkdir(exist_ok=True)

        # return existing script if it is already generated by other worker
        destdir_ls = list(destdir.glob("start-cluster*"))
        if destdir_ls:
            return destdir_ls[0]

        startup_files = cluster_nodes.get_cluster_type().cluster_scripts.copy_scripts_files(
            destdir=destdir
        )
        with open(startup_files.genesis_spec) as fp_in:
            genesis_spec = json.load(fp_in)

        genesis_spec["protocolParams"]["minPoolCost"] = 500

        with open(startup_files.genesis_spec, "w") as fp_out:
            json.dump(genesis_spec, fp_out)

        return startup_files.start_script


@pytest.fixture
def cluster_mincost(
    cluster_manager: cluster_management.ClusterManager, pool_cost_start_cluster: Path
) -> clusterlib.ClusterLib:
    return cluster_manager.get(
        mark="minPoolCost", cleanup=True, start_cmd=str(pool_cost_start_cluster)
    )
def _check_pool(
    cluster_obj: clusterlib.ClusterLib,
    stake_pool_id: str,
    pool_data: clusterlib.PoolData,
):
    """Check and return ledger state of the pool."""
    pool_params: dict = cluster_obj.get_pool_params(stake_pool_id).pool_params
    assert pool_params, (
        "The newly created stake pool id is not shown inside the available stake pools;\n"
        f"Pool ID: {stake_pool_id} vs Existing IDs: "
        f"{list(cluster_obj.get_registered_stake_pools_ledger_state())}"
    )
    assert not clusterlib_utils.check_pool_data(
        pool_params=pool_params, pool_creation_data=pool_data
    )
def _check_staking(
    pool_owners: List[clusterlib.PoolUser],
    cluster_obj: clusterlib.ClusterLib,
    stake_pool_id: str,
):
    """Check that staking was correctly set up."""
    pool_params: dict = cluster_obj.get_pool_params(stake_pool_id).pool_params

    LOGGER.info("Waiting up to 3 epochs for stake pool to be registered.")
    helpers.wait_for(
        lambda: stake_pool_id in cluster_obj.get_stake_distribution(),
        delay=10,
        num_sec=3 * cluster_obj.epoch_length_sec,
        message="register stake pool",
    )

    for owner in pool_owners:
        stake_addr_info = cluster_obj.get_stake_addr_info(owner.stake.address)

        # check that the stake address was delegated
        assert stake_addr_info.delegation, f"Stake address was not delegated yet: {stake_addr_info}"
        assert stake_pool_id == stake_addr_info.delegation, "Stake address delegated to wrong pool"

        assert (
            # strip 'e0' from the beginning of the address hash
            helpers.decode_bech32(stake_addr_info.address)[2:]
            in pool_params["owners"]
        ), "'owner' value is different than expected"
def _create_register_pool(
    cluster_obj: clusterlib.ClusterLib,
    temp_template: str,
    pool_owners: List[clusterlib.PoolUser],
    pool_data: clusterlib.PoolData,
) -> clusterlib.PoolCreationOutput:
    """Create and register a stake pool.

    Common functionality for tests.
    """
    src_address = pool_owners[0].payment.address
    src_init_balance = cluster_obj.get_address_balance(src_address)

    # create and register pool
    pool_creation_out = cluster_obj.create_stake_pool(
        pool_data=pool_data, pool_owners=pool_owners, tx_name=temp_template
    )

    # check that the balance for source address was correctly updated
    assert (
        cluster_obj.get_address_balance(src_address)
        == src_init_balance - pool_creation_out.tx_raw_output.fee - cluster_obj.get_pool_deposit()
    ), f"Incorrect balance for source address `{src_address}`"

    # check that pool was correctly set up
    _check_pool(
        cluster_obj=cluster_obj,
        stake_pool_id=pool_creation_out.stake_pool_id,
        pool_data=pool_data,
    )

    return pool_creation_out
def _create_register_pool_delegate_stake_tx(
    cluster_obj: clusterlib.ClusterLib,
    pool_owners: List[clusterlib.PoolUser],
    temp_template: str,
    pool_data: clusterlib.PoolData,
):
    """Create and register a stake pool, delegate stake address - all in single TX.

    Common functionality for tests.
    """
    # create node VRF key pair
    node_vrf = cluster_obj.gen_vrf_key_pair(node_name=pool_data.pool_name)
    # create node cold key pair and counter
    node_cold = cluster_obj.gen_cold_key_pair_and_counter(node_name=pool_data.pool_name)

    # create stake address registration certs
    stake_addr_reg_cert_files = [
        cluster_obj.gen_stake_addr_registration_cert(
            addr_name=f"{temp_template}_addr{i}", stake_vkey_file=p.stake.vkey_file
        )
        for i, p in enumerate(pool_owners)
    ]

    # create stake address delegation cert
    stake_addr_deleg_cert_files = [
        cluster_obj.gen_stake_addr_delegation_cert(
            addr_name=f"{temp_template}_addr{i}",
            stake_vkey_file=p.stake.vkey_file,
            cold_vkey_file=node_cold.vkey_file,
        )
        for i, p in enumerate(pool_owners)
    ]

    # create stake pool registration cert
    pool_reg_cert_file = cluster_obj.gen_pool_registration_cert(
        pool_data=pool_data,
        vrf_vkey_file=node_vrf.vkey_file,
        cold_vkey_file=node_cold.vkey_file,
        owner_stake_vkey_files=[p.stake.vkey_file for p in pool_owners],
    )

    src_address = pool_owners[0].payment.address
    src_init_balance = cluster_obj.get_address_balance(src_address)

    # register and delegate stake address, create and register pool
    tx_files = clusterlib.TxFiles(
        certificate_files=[
            pool_reg_cert_file,
            *stake_addr_reg_cert_files,
            *stake_addr_deleg_cert_files,
        ],
        signing_key_files=[
            *[p.payment.skey_file for p in pool_owners],
            *[p.stake.skey_file for p in pool_owners],
            node_cold.skey_file,
        ],
    )
    tx_raw_output = cluster_obj.send_tx(
        src_address=src_address, tx_name=temp_template, tx_files=tx_files
    )

    # check that the balance for source address was correctly updated
    assert (
        cluster_obj.get_address_balance(src_address)
        == src_init_balance
        - tx_raw_output.fee
        - len(pool_owners) * cluster_obj.get_address_deposit()
        - cluster_obj.get_pool_deposit()
    ), f"Incorrect balance for source address `{src_address}`"

    # check that pool and staking were correctly set up
    stake_pool_id = cluster_obj.get_stake_pool_id(node_cold.vkey_file)
    _check_pool(cluster_obj=cluster_obj, stake_pool_id=stake_pool_id, pool_data=pool_data)
    _check_staking(
        pool_owners,
        cluster_obj=cluster_obj,
        stake_pool_id=stake_pool_id,
    )

    return clusterlib.PoolCreationOutput(
        stake_pool_id=stake_pool_id,
        vrf_key_pair=node_vrf,
        cold_key_pair=node_cold,
        pool_reg_cert_file=pool_reg_cert_file,
        pool_data=pool_data,
        pool_owners=pool_owners,
        tx_raw_output=tx_raw_output,
    )
def _create_register_pool_tx_delegate_stake_tx(
    cluster_obj: clusterlib.ClusterLib,
    pool_owners: List[clusterlib.PoolUser],
    temp_template: str,
    pool_data: clusterlib.PoolData,
) -> clusterlib.PoolCreationOutput:
    """Create and register a stake pool - first TX; delegate stake address - second TX.

    Common functionality for tests.
    """
    # create and register pool
    pool_creation_out = _create_register_pool(
        cluster_obj=cluster_obj,
        temp_template=temp_template,
        pool_owners=pool_owners,
        pool_data=pool_data,
    )

    # create stake address registration certs
    stake_addr_reg_cert_files = [
        cluster_obj.gen_stake_addr_registration_cert(
            addr_name=f"{temp_template}_addr{i}", stake_vkey_file=p.stake.vkey_file
        )
        for i, p in enumerate(pool_owners)
    ]

    # create stake address delegation cert
    stake_addr_deleg_cert_files = [
        cluster_obj.gen_stake_addr_delegation_cert(
            addr_name=f"{temp_template}_addr{i}",
            stake_vkey_file=p.stake.vkey_file,
            cold_vkey_file=pool_creation_out.cold_key_pair.vkey_file,
        )
        for i, p in enumerate(pool_owners)
    ]

    src_address = pool_owners[0].payment.address
    src_init_balance = cluster_obj.get_address_balance(src_address)

    # register and delegate stake address
    tx_files = clusterlib.TxFiles(
        certificate_files=[*stake_addr_reg_cert_files, *stake_addr_deleg_cert_files],
        signing_key_files=[
            *[p.payment.skey_file for p in pool_owners],
            *[p.stake.skey_file for p in pool_owners],
            pool_creation_out.cold_key_pair.skey_file,
        ],
    )
    tx_raw_output = cluster_obj.send_tx(
        src_address=src_address, tx_name=temp_template, tx_files=tx_files
    )

    # check that the balance for source address was correctly updated
    assert (
        cluster_obj.get_address_balance(src_address)
        == src_init_balance
        - tx_raw_output.fee
        - len(pool_owners) * cluster_obj.get_address_deposit()
    ), f"Incorrect balance for source address `{src_address}`"

    # check that staking was correctly set up
    _check_staking(
        pool_owners,
        cluster_obj=cluster_obj,
        stake_pool_id=pool_creation_out.stake_pool_id,
    )

    return pool_creation_out
@pytest.mark.testnets
class TestStakePool:
    """General tests for stake pools."""

    @allure.link(helpers.get_vcs_link())
    def test_stake_pool_metadata(
        self,
        cluster_manager: cluster_management.ClusterManager,
        cluster: clusterlib.ClusterLib,
        temp_dir: Path,
    ):
        """Create and register a stake pool with metadata.

        Check that pool was registered and stake address delegated.
        """
        temp_template = helpers.get_func_name()

        pool_name = "cardano-node-tests"
        pool_metadata = {
            "name": pool_name,
            "description": "cardano-node-tests E2E tests",
            "ticker": "IOG1",
            "homepage": "https://github.com/input-output-hk/cardano-node-tests",
        }
        pool_metadata_file = helpers.write_json(
            temp_dir / f"{pool_name}_registration_metadata.json", pool_metadata
        )

        pool_data = clusterlib.PoolData(
            pool_name=pool_name,
            pool_pledge=1000,
            pool_cost=cluster.get_protocol_params().get("minPoolCost", 500),
            pool_margin=0.2,
            pool_metadata_url="https://bit.ly/3bDUg9z",
            pool_metadata_hash=cluster.gen_pool_metadata_hash(pool_metadata_file),
        )

        # create pool owners
        pool_owners = clusterlib_utils.create_pool_users(
            cluster_obj=cluster,
            name_template=temp_template,
            no_of_addr=3,
        )

        # fund source address
        clusterlib_utils.fund_from_faucet(
            pool_owners[0].payment,
            cluster_obj=cluster,
            faucet_data=cluster_manager.cache.addrs_data["user1"],
            amount=900_000_000,
        )

        # register pool and delegate stake address
        pool_creation_out = _create_register_pool_delegate_stake_tx(
            cluster_obj=cluster,
            pool_owners=pool_owners,
            temp_template=temp_template,
            pool_data=pool_data,
        )

        # deregister stake pool
        depoch = 1 if cluster.time_to_epoch_end() >= DEREG_BUFFER_SEC else 2
        cluster.deregister_stake_pool(
            pool_owners=pool_owners,
            cold_key_pair=pool_creation_out.cold_key_pair,
            epoch=cluster.get_epoch() + depoch,
            pool_name=pool_data.pool_name,
            tx_name=temp_template,
        )
    @allure.link(helpers.get_vcs_link())
    def test_stake_pool_metadata_not_avail(
        self,
        cluster_manager: cluster_management.ClusterManager,
        cluster: clusterlib.ClusterLib,
        temp_dir: Path,
    ):
        """Create and register a stake pool with metadata file not available.

        Check that pool was registered and stake address delegated.
        """
        rand_str = clusterlib.get_rand_str(4)
        temp_template = f"{helpers.get_func_name()}_{rand_str}"

        pool_name = f"pool_{rand_str}"
        pool_metadata = {
            "name": pool_name,
            "description": "Shelley QA E2E test Test",
            "ticker": "QA1",
            "homepage": "www.test1.com",
        }
        pool_metadata_file = helpers.write_json(
            temp_dir / f"{pool_name}_registration_metadata.json", pool_metadata
        )

        pool_data = clusterlib.PoolData(
            pool_name=pool_name,
            pool_pledge=1000,
            pool_cost=cluster.get_protocol_params().get("minPoolCost", 500),
            pool_margin=0.2,
            pool_metadata_url="https://www.where_metadata_file_is_located.com",
            pool_metadata_hash=cluster.gen_pool_metadata_hash(pool_metadata_file),
        )

        # create pool owners
        pool_owners = clusterlib_utils.create_pool_users(
            cluster_obj=cluster,
            name_template=temp_template,
            no_of_addr=1,
        )

        # fund source address
        clusterlib_utils.fund_from_faucet(
            pool_owners[0].payment,
            cluster_obj=cluster,
            faucet_data=cluster_manager.cache.addrs_data["user1"],
            amount=900_000_000,
        )

        # register pool and delegate stake address
        pool_creation_out = _create_register_pool_tx_delegate_stake_tx(
            cluster_obj=cluster,
            pool_owners=pool_owners,
            temp_template=temp_template,
            pool_data=pool_data,
        )

        # deregister stake pool
        depoch = 1 if cluster.time_to_epoch_end() >= DEREG_BUFFER_SEC else 2
        cluster.deregister_stake_pool(
            pool_owners=pool_owners,
            cold_key_pair=pool_creation_out.cold_key_pair,
            epoch=cluster.get_epoch() + depoch,
            pool_name=pool_data.pool_name,
            tx_name=temp_template,
        )
    @allure.link(helpers.get_vcs_link())
    @pytest.mark.parametrize("no_of_addr", [1, 3])
    def test_create_stake_pool(
        self,
        cluster_manager: cluster_management.ClusterManager,
        cluster: clusterlib.ClusterLib,
        no_of_addr: int,
    ):
        """Create and register a stake pool (without metadata).

        Check that pool was registered.
        """
        rand_str = clusterlib.get_rand_str(4)
        temp_template = f"{helpers.get_func_name()}_{rand_str}_{no_of_addr}"

        pool_data = clusterlib.PoolData(
            pool_name=f"pool_{rand_str}",
            pool_pledge=12345,
            pool_cost=cluster.get_protocol_params().get("minPoolCost", 500),
            pool_margin=0.123,
        )

        # create pool owners
        pool_owners = clusterlib_utils.create_pool_users(
            cluster_obj=cluster,
            name_template=temp_template,
            no_of_addr=no_of_addr,
        )

        # fund source address
        clusterlib_utils.fund_from_faucet(
            pool_owners[0].payment,
            cluster_obj=cluster,
            faucet_data=cluster_manager.cache.addrs_data["user1"],
            amount=900_000_000,
        )

        # register pool
        pool_creation_out = _create_register_pool(
            cluster_obj=cluster,
            temp_template=temp_template,
            pool_owners=pool_owners,
            pool_data=pool_data,
        )

        # deregister stake pool
        depoch = 1 if cluster.time_to_epoch_end() >= DEREG_BUFFER_SEC else 2
        cluster.deregister_stake_pool(
            pool_owners=pool_owners,
            cold_key_pair=pool_creation_out.cold_key_pair,
            epoch=cluster.get_epoch() + depoch,
            pool_name=pool_data.pool_name,
            tx_name=temp_template,
        )
    @allure.link(helpers.get_vcs_link())
    @pytest.mark.parametrize("no_of_addr", [1, 3])
    def test_deregister_stake_pool(
        self,
        cluster_manager: cluster_management.ClusterManager,
        cluster: clusterlib.ClusterLib,
        temp_dir: Path,
        no_of_addr: int,
    ):
        """Deregister stake pool.

        * deregister stake pool
        * check that the stake addresses are no longer delegated
        * check that the pool deposit was returned to reward account
        """
        rand_str = clusterlib.get_rand_str(4)
        temp_template = f"{helpers.get_func_name()}_{rand_str}_{no_of_addr}"

        pool_name = f"pool_{rand_str}"
        pool_metadata = {
            "name": pool_name,
            "description": "Shelley QA E2E test Test",
            "ticker": "QA1",
            "homepage": "www.test1.com",
        }
        pool_metadata_file = helpers.write_json(
            temp_dir / f"{pool_name}_registration_metadata.json", pool_metadata
        )

        pool_data = clusterlib.PoolData(
            pool_name=pool_name,
            pool_pledge=222,
            pool_cost=cluster.get_protocol_params().get("minPoolCost", 500),
            pool_margin=0.512,
            pool_metadata_url="https://www.where_metadata_file_is_located.com",
            pool_metadata_hash=cluster.gen_pool_metadata_hash(pool_metadata_file),
        )

        # create pool owners
        pool_owners = clusterlib_utils.create_pool_users(
            cluster_obj=cluster,
            name_template=temp_template,
            no_of_addr=no_of_addr,
        )

        # fund source address
        clusterlib_utils.fund_from_faucet(
            pool_owners[0].payment,
            cluster_obj=cluster,
            faucet_data=cluster_manager.cache.addrs_data["user1"],
            amount=900_000_000,
        )

        # register pool and delegate stake address
        pool_creation_out = _create_register_pool_tx_delegate_stake_tx(
            cluster_obj=cluster,
            pool_owners=pool_owners,
            temp_template=temp_template,
            pool_data=pool_data,
        )

        pool_owner = pool_owners[0]
        src_register_balance = cluster.get_address_balance(pool_owner.payment.address)
        src_register_reward = cluster.get_stake_addr_info(
            pool_owner.stake.address
        ).reward_account_balance

        # deregister stake pool
        clusterlib_utils.wait_for_epoch_interval(
            cluster_obj=cluster, start=1, stop=-DEREG_BUFFER_SEC, force_epoch=False
        )
        depoch = cluster.get_epoch() + 1
        __, tx_raw_output = cluster.deregister_stake_pool(
            pool_owners=pool_owners,
            cold_key_pair=pool_creation_out.cold_key_pair,
            epoch=depoch,
            pool_name=pool_data.pool_name,
            tx_name=temp_template,
        )
        assert cluster.get_pool_params(pool_creation_out.stake_pool_id).retiring == depoch

        # check that the pool was deregistered
        cluster.wait_for_new_epoch()
        assert not (
            cluster.get_pool_params(pool_creation_out.stake_pool_id).pool_params
        ), f"The pool {pool_creation_out.stake_pool_id} was not deregistered"

        # check that the balance for source address was correctly updated
        assert src_register_balance - tx_raw_output.fee == cluster.get_address_balance(
            pool_owner.payment.address
        )

        # check that the stake addresses are no longer delegated
        for owner_rec in pool_owners:
            stake_addr_info = cluster.get_stake_addr_info(owner_rec.stake.address)
            assert (
                not stake_addr_info.delegation
            ), f"Stake address is still delegated: {stake_addr_info}"

        # check that the pool deposit was returned to reward account
        assert (
            cluster.get_stake_addr_info(pool_owner.stake.address).reward_account_balance
            == src_register_reward + cluster.get_pool_deposit()
        )
@allure.link(helpers.get_vcs_link())
def test_reregister_stake_pool(
self,
cluster_manager: cluster_management.ClusterManager,
cluster: clusterlib.ClusterLib,
temp_dir: Path,
):
"""Reregister stake pool.
* deregister stake pool
* check that the stake addresses are no longer delegated
* reregister the pool by resubmitting the pool registration certificate
* delegate stake address to pool again (the address is already registered)
* check that pool was correctly setup
* check that the stake addresses were delegated
"""
rand_str = clusterlib.get_rand_str(4)
temp_template = f"{helpers.get_func_name()}_{rand_str}"
pool_name = f"pool_{rand_str}"
pool_metadata = {
"name": pool_name,
"description": "Shelley QA E2E test Test",
"ticker": "QA1",
"homepage": "www.test1.com",
}
pool_metadata_file = helpers.write_json(
temp_dir / f"{pool_name}_registration_metadata.json", pool_metadata
)
pool_data = clusterlib.PoolData(
pool_name=pool_name,
pool_pledge=222,
pool_cost=cluster.get_protocol_params().get("minPoolCost", 500),
pool_margin=0.512,
pool_metadata_url="https://www.where_metadata_file_is_located.com",
pool_metadata_hash=cluster.gen_pool_metadata_hash(pool_metadata_file),
)
# create pool owners
pool_owners = clusterlib_utils.create_pool_users(
cluster_obj=cluster, name_template=temp_template
)
# fund source address
clusterlib_utils.fund_from_faucet(
pool_owners[0].payment,
cluster_obj=cluster,
faucet_data=cluster_manager.cache.addrs_data["user1"],
amount=1_500_000_000,
)
# register pool and delegate stake address
pool_creation_out = _create_register_pool_delegate_stake_tx(
cluster_obj=cluster,
pool_owners=pool_owners,
temp_template=temp_template,
pool_data=pool_data,
)
# deregister stake pool
clusterlib_utils.wait_for_epoch_interval(
cluster_obj=cluster, start=1, stop=-DEREG_BUFFER_SEC, force_epoch=False
)
depoch = cluster.get_epoch() + 1
cluster.deregister_stake_pool(
pool_owners=pool_owners,
cold_key_pair=pool_creation_out.cold_key_pair,
epoch=depoch,
pool_name=pool_data.pool_name,
tx_name=temp_template,
)
assert cluster.get_pool_params(pool_creation_out.stake_pool_id).retiring == depoch
# check that the pool was deregistered
cluster.wait_for_new_epoch()
assert not (
cluster.get_pool_params(pool_creation_out.stake_pool_id).pool_params
), f"The pool {pool_creation_out.stake_pool_id} was not deregistered"
# check that the stake addresses are no longer delegated
for owner_rec in pool_owners:
stake_addr_info = cluster.get_stake_addr_info(owner_rec.stake.address)
assert (
not stake_addr_info.delegation
), f"Stake address is still delegated: {stake_addr_info}"
src_address = pool_owners[0].payment.address
src_init_balance = cluster.get_address_balance(src_address)
# reregister the pool by resubmitting the pool registration certificate,
# delegate stake address to pool again (the address is already registered)
tx_files = clusterlib.TxFiles(
certificate_files=[
pool_creation_out.pool_reg_cert_file,
*list(temp_dir.glob(f"{temp_template}*_stake_deleg.cert")),
],
signing_key_files=pool_creation_out.tx_raw_output.tx_files.signing_key_files,
)
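        # Re-registration reuses the artifacts from the original setup: the same
        # pool registration certificate plus the stake delegation certificates that
        # the glob above picks up from the temporary directory.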
tx_raw_output = cluster.send_tx(
src_address=src_address, tx_name=temp_template, tx_files=tx_files
)
# check that the balance for source address was correctly updated
assert (
cluster.get_address_balance(src_address)
== src_init_balance - tx_raw_output.fee - cluster.get_pool_deposit()
), (
f"Incorrect balance for source address `{src_address}` "
f"({src_init_balance}, {tx_raw_output.fee}, {cluster.get_pool_deposit()})"
)
LOGGER.info("Waiting up to 5 epochs for stake pool to be reregistered.")
helpers.wait_for(
lambda: pool_creation_out.stake_pool_id in cluster.get_stake_distribution(),
delay=10,
num_sec=5 * cluster.epoch_length_sec,
message="reregister stake pool",
)
        # check that the pool was correctly set up
_check_pool(
cluster_obj=cluster, stake_pool_id=pool_creation_out.stake_pool_id, pool_data=pool_data
)
# check that the stake addresses were delegated
_check_staking(
pool_owners=pool_owners,
cluster_obj=cluster,
stake_pool_id=pool_creation_out.stake_pool_id,
)
# deregister stake pool
depoch = 1 if cluster.time_to_epoch_end() >= DEREG_BUFFER_SEC else 2
cluster.deregister_stake_pool(
pool_owners=pool_owners,
cold_key_pair=pool_creation_out.cold_key_pair,
epoch=cluster.get_epoch() + depoch,
pool_name=pool_data.pool_name,
tx_name=temp_template,
)
@allure.link(helpers.get_vcs_link())
def test_cancel_stake_pool_deregistration(
self,
cluster_manager: cluster_management.ClusterManager,
cluster: clusterlib.ClusterLib,
temp_dir: Path,
):
"""Reregister a stake pool that is in course of being retired.
* deregister stake pool in epoch + 2
* reregister the pool by resubmitting the pool registration certificate
* delegate stake address to pool again (the address is already registered)
* check that no additional pool deposit was used
        * check that the pool is still correctly set up
        * check that the stake addresses are still delegated
"""
rand_str = clusterlib.get_rand_str(4)
temp_template = f"{helpers.get_func_name()}_{rand_str}"
pool_name = f"pool_{rand_str}"
pool_metadata = {
"name": pool_name,
"description": "Shelley QA E2E test Test",
"ticker": "QA1",
"homepage": "www.test1.com",
}
pool_metadata_file = helpers.write_json(
temp_dir / f"{pool_name}_registration_metadata.json", pool_metadata
)
pool_data = clusterlib.PoolData(
pool_name=pool_name,
pool_pledge=222,
pool_cost=cluster.get_protocol_params().get("minPoolCost", 500),
pool_margin=0.512,
pool_metadata_url="https://www.where_metadata_file_is_located.com",
pool_metadata_hash=cluster.gen_pool_metadata_hash(pool_metadata_file),
)
# create pool owners
pool_owners = clusterlib_utils.create_pool_users(
cluster_obj=cluster, name_template=temp_template
)
# fund source address
clusterlib_utils.fund_from_faucet(
pool_owners[0].payment,
cluster_obj=cluster,
faucet_data=cluster_manager.cache.addrs_data["user1"],
amount=1_500_000_000,
)
# register pool and delegate stake address
pool_creation_out = _create_register_pool_delegate_stake_tx(
cluster_obj=cluster,
pool_owners=pool_owners,
temp_template=temp_template,
pool_data=pool_data,
)
# deregister stake pool in epoch + 2
clusterlib_utils.wait_for_epoch_interval(
cluster_obj=cluster, start=1, stop=-DEREG_BUFFER_SEC, force_epoch=False
)
depoch = cluster.get_epoch() + 2
cluster.deregister_stake_pool(
pool_owners=pool_owners,
cold_key_pair=pool_creation_out.cold_key_pair,
epoch=depoch,
pool_name=pool_data.pool_name,
tx_name=temp_template,
)
assert cluster.get_pool_params(pool_creation_out.stake_pool_id).retiring == depoch
cluster.wait_for_new_epoch()
src_address = pool_owners[0].payment.address
src_init_balance = cluster.get_address_balance(src_address)
# reregister the pool by resubmitting the pool registration certificate,
# delegate stake address to pool again (the address is already registered)
tx_files = clusterlib.TxFiles(
certificate_files=[
pool_creation_out.pool_reg_cert_file,
*list(temp_dir.glob(f"{temp_template}*_stake_deleg.cert")),
],
signing_key_files=pool_creation_out.tx_raw_output.tx_files.signing_key_files,
)
tx_raw_output = cluster.send_tx(
src_address=src_address,
tx_name=temp_template,
tx_files=tx_files,
deposit=0, # no additional deposit, the pool is already registered
)
# check that the balance for source address was correctly updated
# and no additional pool deposit was used
assert (
cluster.get_address_balance(src_address) == src_init_balance - tx_raw_output.fee
), f"Incorrect balance for source address `{src_address}`"
LOGGER.info("Checking for 3 epochs that the stake pool will NOT get deregistered.")
pool_deregistered = helpers.wait_for(
lambda: not cluster.get_pool_params(pool_creation_out.stake_pool_id).pool_params,
delay=10,
num_sec=3 * cluster.epoch_length_sec,
silent=True,
)
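        # Inverted wait: the timeout is the expected outcome here. `silent=True`
        # presumably makes the helper return a falsy value instead of raising when
        # the condition never becomes true within `num_sec`.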
assert not pool_deregistered, "Pool got deregistered"
        # check that the pool is still correctly set up
_check_pool(
cluster_obj=cluster, stake_pool_id=pool_creation_out.stake_pool_id, pool_data=pool_data
)
        # check that the stake addresses are still delegated
_check_staking(
pool_owners=pool_owners,
cluster_obj=cluster,
stake_pool_id=pool_creation_out.stake_pool_id,
)
# deregister stake pool
depoch = 1 if cluster.time_to_epoch_end() >= DEREG_BUFFER_SEC else 2
cluster.deregister_stake_pool(
pool_owners=pool_owners,
cold_key_pair=pool_creation_out.cold_key_pair,
epoch=cluster.get_epoch() + depoch,
pool_name=pool_data.pool_name,
tx_name=temp_template,
)
@allure.link(helpers.get_vcs_link())
@pytest.mark.parametrize("no_of_addr", [1, 2])
def test_update_stake_pool_metadata(
self,
cluster_manager: cluster_management.ClusterManager,
cluster: clusterlib.ClusterLib,
temp_dir: Path,
no_of_addr: int,
):
"""Update stake pool metadata.
* register pool
* update the pool metadata by resubmitting the pool registration certificate
* check that the pool metadata hash was correctly updated on chain
"""
rand_str = clusterlib.get_rand_str(4)
temp_template = f"{helpers.get_func_name()}_{rand_str}_{no_of_addr}"
pool_name = f"pool_{rand_str}"
pool_metadata = {
"name": pool_name,
"description": "Shelley QA E2E test Test",
"ticker": "QA1",
"homepage": "www.test1.com",
}
pool_metadata_file = helpers.write_json(
temp_dir / f"{pool_name}_registration_metadata.json", pool_metadata
)
pool_metadata_updated = {
"name": f"{pool_name}_U",
"description": "pool description update",
"ticker": "QA22",
"homepage": "www.qa22.com",
}
pool_metadata_updated_file = helpers.write_json(
temp_dir / f"{pool_name}_registration_metadata_updated.json",
pool_metadata_updated,
)
pool_data = clusterlib.PoolData(
pool_name=pool_name,
pool_pledge=4567,
pool_cost=cluster.get_protocol_params().get("minPoolCost", 500),
pool_margin=0.01,
pool_metadata_url="https://init_location.com",
pool_metadata_hash=cluster.gen_pool_metadata_hash(pool_metadata_file),
)
pool_data_updated = pool_data._replace(
pool_metadata_url="https://www.updated_location.com",
pool_metadata_hash=cluster.gen_pool_metadata_hash(pool_metadata_updated_file),
)
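        # `PoolData` is presumably a NamedTuple, so `_replace()` yields a copy with
        # only the metadata URL and hash changed; all other pool parameters are
        # kept from the original registration.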
# create pool owners
pool_owners = clusterlib_utils.create_pool_users(
cluster_obj=cluster,
name_template=temp_template,
no_of_addr=no_of_addr,
)
# fund source address
clusterlib_utils.fund_from_faucet(
pool_owners[0].payment,
cluster_obj=cluster,
faucet_data=cluster_manager.cache.addrs_data["user1"],
amount=900_000_000,
)
# register pool
pool_creation_out = _create_register_pool(
cluster_obj=cluster,
temp_template=temp_template,
pool_owners=pool_owners,
pool_data=pool_data,
)
# update the pool metadata by resubmitting the pool registration certificate
cluster.register_stake_pool(
pool_data=pool_data_updated,
pool_owners=pool_owners,
vrf_vkey_file=pool_creation_out.vrf_key_pair.vkey_file,
cold_key_pair=pool_creation_out.cold_key_pair,
tx_name=temp_template,
deposit=0, # no additional deposit, the pool is already registered
)
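        # Resubmitting a registration certificate for an already-registered pool
        # acts as an update: the new values are staged as `future_pool_params` and
        # become the active `pool_params` only at the next epoch boundary, which is
        # what the checks below rely on.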
# check that pool is going to be updated with correct data
future_params = cluster.get_pool_params(pool_creation_out.stake_pool_id).future_pool_params
assert not clusterlib_utils.check_pool_data(
pool_params=future_params, pool_creation_data=pool_data_updated
)
cluster.wait_for_new_epoch()
# check that the pool metadata hash was correctly updated on chain
_check_pool(
cluster_obj=cluster,
stake_pool_id=pool_creation_out.stake_pool_id,
pool_data=pool_data_updated,
)
# deregister stake pool
depoch = 1 if cluster.time_to_epoch_end() >= DEREG_BUFFER_SEC else 2
cluster.deregister_stake_pool(
pool_owners=pool_owners,
cold_key_pair=pool_creation_out.cold_key_pair,
epoch=cluster.get_epoch() + depoch,
pool_name=pool_data.pool_name,
tx_name=temp_template,
)
@allure.link(helpers.get_vcs_link())
@pytest.mark.parametrize("no_of_addr", [1, 2])
def test_update_stake_pool_parameters(
self,
cluster_manager: cluster_management.ClusterManager,
cluster: clusterlib.ClusterLib,
temp_dir: Path,
no_of_addr: int,
):
"""Update stake pool parameters.
* register pool
* update the pool parameters by resubmitting the pool registration certificate
* check that the pool parameters were correctly updated on chain
"""
rand_str = clusterlib.get_rand_str(4)
temp_template = f"{helpers.get_func_name()}_{rand_str}_{no_of_addr}"
pool_name = f"pool_{rand_str}"
pool_metadata = {
"name": pool_name,
"description": "Shelley QA E2E test Test",
"ticker": "QA1",
"homepage": "www.test1.com",
}
pool_metadata_file = helpers.write_json(
temp_dir / f"{pool_name}_registration_metadata.json", pool_metadata
)
min_pool_cost = cluster.get_protocol_params().get("minPoolCost", 500)
pool_data = clusterlib.PoolData(
pool_name=pool_name,
pool_pledge=4567,
pool_cost=min_pool_cost,
pool_margin=0.01,
pool_metadata_url="https://www.where_metadata_file_is_located.com",
pool_metadata_hash=cluster.gen_pool_metadata_hash(pool_metadata_file),
)
pool_data_updated = pool_data._replace(
pool_pledge=1, pool_cost=min_pool_cost + 1_000_000, pool_margin=0.9
)
# create pool owners
pool_owners = clusterlib_utils.create_pool_users(
cluster_obj=cluster,
name_template=temp_template,
no_of_addr=no_of_addr,
)
# fund source address
clusterlib_utils.fund_from_faucet(
pool_owners[0].payment,
cluster_obj=cluster,
faucet_data=cluster_manager.cache.addrs_data["user1"],
amount=900_000_000,
)
# register pool
pool_creation_out = _create_register_pool(
cluster_obj=cluster,
temp_template=temp_template,
pool_owners=pool_owners,
pool_data=pool_data,
)
# update the pool parameters by resubmitting the pool registration certificate
cluster.register_stake_pool(
pool_data=pool_data_updated,
pool_owners=pool_owners,
vrf_vkey_file=pool_creation_out.vrf_key_pair.vkey_file,
cold_key_pair=pool_creation_out.cold_key_pair,
tx_name=temp_template,
deposit=0, # no additional deposit, the pool is already registered
)
# check that pool is going to be updated with correct data
future_params = cluster.get_pool_params(pool_creation_out.stake_pool_id).future_pool_params
assert not clusterlib_utils.check_pool_data(
pool_params=future_params, pool_creation_data=pool_data_updated
)
cluster.wait_for_new_epoch()
# check that the pool parameters were correctly updated on chain
_check_pool(
cluster_obj=cluster,
stake_pool_id=pool_creation_out.stake_pool_id,
pool_data=pool_data_updated,
)
# deregister stake pool
depoch = 1 if cluster.time_to_epoch_end() >= DEREG_BUFFER_SEC else 2
cluster.deregister_stake_pool(
pool_owners=pool_owners,
cold_key_pair=pool_creation_out.cold_key_pair,
epoch=cluster.get_epoch() + depoch,
pool_name=pool_data.pool_name,
tx_name=temp_template,
)
@allure.link(helpers.get_vcs_link())
def test_sign_in_multiple_stages(
self,
cluster_manager: cluster_management.ClusterManager,
cluster: clusterlib.ClusterLib,
):
"""Create and register a stake pool with TX signed in multiple stages.
* create stake pool registration cert
* create witness file for each signing key
* sign TX using witness files
* create and register pool
* check that the pool was correctly registered on chain
"""
rand_str = clusterlib.get_rand_str(4)
temp_template = f"{helpers.get_func_name()}_{rand_str}"
pool_data = clusterlib.PoolData(
pool_name=f"pool_{rand_str}",
pool_pledge=5,
pool_cost=cluster.get_protocol_params().get("minPoolCost", 500),
pool_margin=0.01,
)
# create pool owners
pool_owners = clusterlib_utils.create_pool_users(
cluster_obj=cluster,
name_template=temp_template,
no_of_addr=2,
)
# fund source address
clusterlib_utils.fund_from_faucet(
pool_owners[0].payment,
cluster_obj=cluster,
faucet_data=cluster_manager.cache.addrs_data["user1"],
amount=900_000_000,
)
# create node VRF key pair
node_vrf = cluster.gen_vrf_key_pair(node_name=pool_data.pool_name)
# create node cold key pair and counter
node_cold = cluster.gen_cold_key_pair_and_counter(node_name=pool_data.pool_name)
# create stake pool registration cert
pool_reg_cert_file = cluster.gen_pool_registration_cert(
pool_data=pool_data,
vrf_vkey_file=node_vrf.vkey_file,
cold_vkey_file=node_cold.vkey_file,
owner_stake_vkey_files=[p.stake.vkey_file for p in pool_owners],
)
src_address = pool_owners[0].payment.address
src_init_balance = cluster.get_address_balance(src_address)
# keys to sign the TX with
witness_skeys = (
pool_owners[0].payment.skey_file,
pool_owners[1].payment.skey_file,
pool_owners[0].stake.skey_file,
pool_owners[1].stake.skey_file,
node_cold.skey_file,
)
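        # Multi-stage signing flow exercised below:
        #   1. estimate the fee for the expected number of witnesses,
        #   2. build the raw (unsigned) transaction body,
        #   3. create one witness file per signing key,
        #   4. assemble body and witnesses into a signed TX and submit it.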
tx_files = clusterlib.TxFiles(
certificate_files=[
pool_reg_cert_file,
],
)
fee = cluster.calculate_tx_fee(
src_address=src_address,
tx_name=temp_template,
tx_files=tx_files,
witness_count_add=len(witness_skeys),
)
tx_raw_output = cluster.build_raw_tx(
src_address=src_address,
tx_name=temp_template,
tx_files=tx_files,
fee=fee,
)
# create witness file for each signing key
witness_files = [
cluster.witness_tx(
tx_body_file=tx_raw_output.out_file,
witness_name=f"{temp_template}_skey{idx}",
signing_key_files=[skey],
)
for idx, skey in enumerate(witness_skeys)
]
# sign TX using witness files
tx_witnessed_file = cluster.assemble_tx(
tx_body_file=tx_raw_output.out_file, witness_files=witness_files, tx_name=temp_template
)
# create and register pool
cluster.submit_tx(tx_file=tx_witnessed_file, txins=tx_raw_output.txins)
# check that the balance for source address was correctly updated
assert (
cluster.get_address_balance(src_address)
== src_init_balance - tx_raw_output.fee - cluster.get_pool_deposit()
), f"Incorrect balance for source address `{src_address}`"
cluster.wait_for_new_epoch()
# check that the pool was correctly registered on chain
stake_pool_id = cluster.get_stake_pool_id(node_cold.vkey_file)
_check_pool(
cluster_obj=cluster,
stake_pool_id=stake_pool_id,
pool_data=pool_data,
)
# deregister stake pool
depoch = 1 if cluster.time_to_epoch_end() >= DEREG_BUFFER_SEC else 2
cluster.deregister_stake_pool(
pool_owners=pool_owners,
cold_key_pair=node_cold,
epoch=cluster.get_epoch() + depoch,
pool_name=pool_data.pool_name,
tx_name=temp_template,
)
@allure.link(helpers.get_vcs_link())
def test_pool_registration_deregistration(
self,
cluster_manager: cluster_management.ClusterManager,
cluster: clusterlib.ClusterLib,
):
"""Send both pool registration and deregistration certificates in single TX.
* create pool registration cert
* create pool deregistration cert
* register and deregister stake pool in single TX
* check that the pool deposit was NOT returned to reward account as the reward address
is not registered (deposit is lost)
"""
rand_str = clusterlib.get_rand_str(4)
temp_template = f"{helpers.get_func_name()}_{rand_str}"
pool_data = clusterlib.PoolData(
pool_name=f"pool_{rand_str}",
pool_pledge=5,
pool_cost=cluster.get_protocol_params().get("minPoolCost", 500),
pool_margin=0.01,
)
# create pool owners
pool_owner = clusterlib_utils.create_pool_users(
cluster_obj=cluster,
name_template=temp_template,
no_of_addr=1,
)[0]
# fund source address
clusterlib_utils.fund_from_faucet(
pool_owner.payment,
cluster_obj=cluster,
faucet_data=cluster_manager.cache.addrs_data["user1"],
amount=900_000_000,
)
src_init_balance = cluster.get_address_balance(pool_owner.payment.address)
src_init_reward = cluster.get_stake_addr_info(
pool_owner.stake.address
).reward_account_balance
node_vrf = cluster.gen_vrf_key_pair(node_name=pool_data.pool_name)
node_cold = cluster.gen_cold_key_pair_and_counter(node_name=pool_data.pool_name)
# create pool registration cert
pool_reg_cert_file = cluster.gen_pool_registration_cert(
pool_data=pool_data,
vrf_vkey_file=node_vrf.vkey_file,
cold_vkey_file=node_cold.vkey_file,
owner_stake_vkey_files=[pool_owner.stake.vkey_file],
)
# create pool deregistration cert
pool_dereg_cert_file = cluster.gen_pool_deregistration_cert(
pool_name=pool_data.pool_name,
cold_vkey_file=node_cold.vkey_file,
epoch=cluster.get_epoch() + 1,
)
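        # Both certificates deliberately go into a single transaction: the pool is
        # created and immediately scheduled for retirement. Since the owner's
        # reward address was never registered, there is nowhere to refund the pool
        # deposit to, which the final balance check verifies.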
        # register and deregister the stake pool in a single TX
tx_files = clusterlib.TxFiles(
certificate_files=[pool_reg_cert_file, pool_dereg_cert_file],
signing_key_files=[
pool_owner.payment.skey_file,
pool_owner.stake.skey_file,
node_cold.skey_file,
],
)
tx_raw_output = cluster.send_tx(
src_address=pool_owner.payment.address,
tx_name="conflicting_certs",
tx_files=tx_files,
)
# check that the balance for source address was correctly updated
assert (
cluster.get_address_balance(pool_owner.payment.address)
== src_init_balance - tx_raw_output.fee - cluster.get_pool_deposit()
), f"Incorrect balance for source address `{pool_owner.payment.address}`"
        # check that the pool deposit was NOT returned to the reward account as the reward address
# is not registered (deposit is lost)
cluster.wait_for_new_epoch(3, padding_seconds=30)
assert (
cluster.get_stake_addr_info(pool_owner.stake.address).reward_account_balance
== src_init_reward
)
@pytest.mark.run(order=2)
class TestPoolCost:
"""Tests for stake pool cost."""
@pytest.fixture
def pool_owners(
self,
cluster_manager: cluster_management.ClusterManager,
cluster_mincost: clusterlib.ClusterLib,
):
"""Create class scoped pool owners."""
cluster = cluster_mincost
with cluster_manager.cache_fixture() as fixture_cache:
if fixture_cache.value:
return fixture_cache.value # type: ignore
rand_str = clusterlib.get_rand_str()
temp_template = (
f"{helpers.get_func_name()}_{rand_str}_ci{cluster_manager.cluster_instance_num}"
)
pool_owners = clusterlib_utils.create_pool_users(
cluster_obj=cluster,
name_template=temp_template,
no_of_addr=1,
)
fixture_cache.value = pool_owners
# fund source address
clusterlib_utils.fund_from_faucet(
pool_owners[0].payment,
cluster_obj=cluster,
faucet_data=cluster_manager.cache.addrs_data["user1"],
amount=900_000_000,
)
return pool_owners
@allure.link(helpers.get_vcs_link())
@hypothesis.given(pool_cost=st.integers(max_value=499)) # minPoolCost is now 500
@helpers.hypothesis_settings()
def test_stake_pool_low_cost(
self,
cluster_mincost: clusterlib.ClusterLib,
pool_owners: List[clusterlib.PoolUser],
pool_cost: int,
):
"""Try to create and register a stake pool with pool cost lower than *minPoolCost*.
Expect failure. Property-based test.
"""
cluster = cluster_mincost
rand_str = clusterlib.get_rand_str(4)
temp_template = f"test_stake_pool_low_cost_{rand_str}"
pool_data = clusterlib.PoolData(
pool_name=f"pool_{rand_str}",
pool_pledge=12345,
pool_cost=pool_cost,
pool_margin=0.123,
)
# register pool, expect failure
with pytest.raises(clusterlib.CLIError) as excinfo:
_create_register_pool(
cluster_obj=cluster,
temp_template=temp_template,
pool_owners=pool_owners,
pool_data=pool_data,
)
# check that it failed in an expected way
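        # hypothesis may also generate negative costs; those are presumably
        # rejected while parsing the CLI option, whereas values in 0..499 reach the
        # ledger and fail its minimum-pool-cost check.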
expected_msg = "--pool-cost: Failed reading" if pool_cost < 0 else "StakePoolCostTooLowPOOL"
assert expected_msg in str(excinfo.value)
@allure.link(helpers.get_vcs_link())
@pytest.mark.parametrize("pool_cost", [500, 9999999])
def test_stake_pool_cost(
self,
cluster_manager: cluster_management.ClusterManager,
cluster_mincost: clusterlib.ClusterLib,
pool_owners: List[clusterlib.PoolUser],
pool_cost: int,
):
"""Create and register a stake pool with *pool cost* >= *minPoolCost*."""
cluster = cluster_mincost
rand_str = clusterlib.get_rand_str(4)
temp_template = f"{helpers.get_func_name()}_{rand_str}_{pool_cost}"
pool_data = clusterlib.PoolData(
pool_name=f"pool_{rand_str}",
pool_pledge=12345,
pool_cost=pool_cost,
pool_margin=0.123,
)
# create pool owners
pool_owners = clusterlib_utils.create_pool_users(
cluster_obj=cluster,
name_template=temp_template,
no_of_addr=1,
)
# fund source address
clusterlib_utils.fund_from_faucet(
pool_owners[0].payment,
cluster_obj=cluster,
faucet_data=cluster_manager.cache.addrs_data["user1"],
amount=900_000_000,
)
# register pool
_create_register_pool(
cluster_obj=cluster,
temp_template=temp_template,
pool_owners=pool_owners,
pool_data=pool_data,
)
@pytest.mark.testnets
class TestNegative:
"""Stake pool tests that are expected to fail."""
@pytest.fixture
def pool_users(
self,
cluster_manager: cluster_management.ClusterManager,
cluster: clusterlib.ClusterLib,
) -> List[clusterlib.PoolUser]:
"""Create pool users."""
with cluster_manager.cache_fixture() as fixture_cache:
if fixture_cache.value:
return fixture_cache.value # type: ignore
created_users = clusterlib_utils.create_pool_users(
cluster_obj=cluster,
name_template=f"test_negative_ci{cluster_manager.cluster_instance_num}",
no_of_addr=2,
)
fixture_cache.value = created_users
# fund source addresses
clusterlib_utils.fund_from_faucet(
created_users[0],
cluster_obj=cluster,
faucet_data=cluster_manager.cache.addrs_data["user1"],
amount=600_000_000,
)
return created_users
@pytest.fixture
def pool_data(self) -> clusterlib.PoolData:
pool_data = clusterlib.PoolData(
pool_name=f"pool_{clusterlib.get_rand_str(4)}",
pool_pledge=5,
pool_cost=500_000_000,
pool_margin=0.01,
)
return pool_data
@pytest.fixture
def gen_pool_registration_cert_data(
self,
cluster: clusterlib.ClusterLib,
temp_dir: Path,
) -> Tuple[str, str, clusterlib.KeyPair, clusterlib.ColdKeyPair]:
pool_name = f"pool_{clusterlib.get_rand_str(4)}"
pool_metadata = {
"name": pool_name,
"description": "cardano-node-tests E2E tests",
"ticker": "IOG2",
"homepage": "https://github.com/input-output-hk/cardano-node-tests",
}
pool_metadata_file = helpers.write_json(
temp_dir / "hypothesis_metadata_registration_metadata.json", pool_metadata
)
pool_metadata_hash = cluster.gen_pool_metadata_hash(pool_metadata_file)
# create node VRF key pair
node_vrf = cluster.gen_vrf_key_pair(node_name=pool_name)
# create node cold key pair and counter
node_cold = cluster.gen_cold_key_pair_and_counter(node_name=pool_name)
return pool_name, pool_metadata_hash, node_vrf, node_cold
@allure.link(helpers.get_vcs_link())
def test_pool_registration_cert_wrong_vrf(
self,
cluster: clusterlib.ClusterLib,
pool_users: List[clusterlib.PoolUser],
pool_data: clusterlib.PoolData,
):
"""Try to generate pool registration certificate using wrong VRF key.
Expect failure.
"""
node_vrf = cluster.gen_vrf_key_pair(node_name=pool_data.pool_name)
node_cold = cluster.gen_cold_key_pair_and_counter(node_name=pool_data.pool_name)
with pytest.raises(clusterlib.CLIError) as excinfo:
cluster.gen_pool_registration_cert(
pool_data=pool_data,
vrf_vkey_file=node_vrf.skey_file, # skey instead of vkey
cold_vkey_file=node_cold.vkey_file,
owner_stake_vkey_files=[pool_users[0].stake.vkey_file],
)
assert "Expected: VrfVerificationKey_PraosVRF" in str(excinfo.value)
@allure.link(helpers.get_vcs_link())
def test_pool_registration_cert_wrong_cold(
self,
cluster: clusterlib.ClusterLib,
pool_users: List[clusterlib.PoolUser],
pool_data: clusterlib.PoolData,
):
"""Try to generate pool registration certificate using wrong Cold vkey.
Expect failure.
"""
node_vrf = cluster.gen_vrf_key_pair(node_name=pool_data.pool_name)
node_cold = cluster.gen_cold_key_pair_and_counter(node_name=pool_data.pool_name)
with pytest.raises(clusterlib.CLIError) as excinfo:
cluster.gen_pool_registration_cert(
pool_data=pool_data,
vrf_vkey_file=node_vrf.vkey_file,
cold_vkey_file=node_cold.skey_file, # skey instead of vkey
owner_stake_vkey_files=[pool_users[0].stake.vkey_file],
)
assert "Expected: StakePoolVerificationKey" in str(excinfo.value)
@allure.link(helpers.get_vcs_link())
def test_pool_registration_cert_wrong_stake(
self,
cluster: clusterlib.ClusterLib,
pool_users: List[clusterlib.PoolUser],
pool_data: clusterlib.PoolData,
):
"""Try to generate pool registration certificate using wrong stake vkey.
Expect failure.
"""
node_vrf = cluster.gen_vrf_key_pair(node_name=pool_data.pool_name)
node_cold = cluster.gen_cold_key_pair_and_counter(node_name=pool_data.pool_name)
with pytest.raises(clusterlib.CLIError) as excinfo:
cluster.gen_pool_registration_cert(
pool_data=pool_data,
vrf_vkey_file=node_vrf.vkey_file,
cold_vkey_file=node_cold.vkey_file,
owner_stake_vkey_files=[pool_users[0].stake.skey_file], # skey instead of vkey
)
assert "Expected: StakeVerificationKeyShelley" in str(excinfo.value)
@allure.link(helpers.get_vcs_link())
def test_pool_registration_missing_cold_skey(
self,
cluster: clusterlib.ClusterLib,
pool_users: List[clusterlib.PoolUser],
pool_data: clusterlib.PoolData,
):
"""Try to register pool using transaction with missing Cold skey.
Expect failure.
"""
node_vrf = cluster.gen_vrf_key_pair(node_name=pool_data.pool_name)
node_cold = cluster.gen_cold_key_pair_and_counter(node_name=pool_data.pool_name)
pool_reg_cert_file = cluster.gen_pool_registration_cert(
pool_data=pool_data,
vrf_vkey_file=node_vrf.vkey_file,
cold_vkey_file=node_cold.vkey_file,
owner_stake_vkey_files=[pool_users[0].stake.vkey_file],
)
tx_files = clusterlib.TxFiles(
certificate_files=[pool_reg_cert_file],
signing_key_files=[
pool_users[0].payment.skey_file,
# missing node_cold.skey_file
],
)
with pytest.raises(clusterlib.CLIError) as excinfo:
cluster.send_tx(
src_address=pool_users[0].payment.address,
tx_name="missing_cold_key",
tx_files=tx_files,
)
assert "MissingVKeyWitnessesUTXOW" in str(excinfo.value)
@allure.link(helpers.get_vcs_link())
def test_pool_registration_missing_payment_skey(
self,
cluster: clusterlib.ClusterLib,
pool_users: List[clusterlib.PoolUser],
pool_data: clusterlib.PoolData,
):
"""Try to register pool using transaction with missing payment skey.
Expect failure.
"""
node_vrf = cluster.gen_vrf_key_pair(node_name=pool_data.pool_name)
node_cold = cluster.gen_cold_key_pair_and_counter(node_name=pool_data.pool_name)
pool_reg_cert_file = cluster.gen_pool_registration_cert(
pool_data=pool_data,
vrf_vkey_file=node_vrf.vkey_file,
cold_vkey_file=node_cold.vkey_file,
owner_stake_vkey_files=[pool_users[0].stake.vkey_file],
)
tx_files = clusterlib.TxFiles(
certificate_files=[pool_reg_cert_file],
signing_key_files=[
# missing payment skey file
node_cold.skey_file,
],
)
with pytest.raises(clusterlib.CLIError) as excinfo:
cluster.send_tx(
src_address=pool_users[0].payment.address, tx_name="missing_skey", tx_files=tx_files
)
assert "MissingVKeyWitnessesUTXOW" in str(excinfo.value)
@allure.link(helpers.get_vcs_link())
def test_pool_deregistration_not_registered(
self,
cluster: clusterlib.ClusterLib,
pool_users: List[clusterlib.PoolUser],
pool_data: clusterlib.PoolData,
):
"""Try to deregister pool that is not registered.
Expect failure.
"""
node_cold = cluster.gen_cold_key_pair_and_counter(node_name=pool_data.pool_name)
pool_dereg_cert_file = cluster.gen_pool_deregistration_cert(
pool_name=pool_data.pool_name,
cold_vkey_file=node_cold.vkey_file,
epoch=cluster.get_epoch() + 2,
)
tx_files = clusterlib.TxFiles(
certificate_files=[pool_dereg_cert_file],
signing_key_files=[pool_users[0].payment.skey_file, node_cold.skey_file],
)
with pytest.raises(clusterlib.CLIError) as excinfo:
cluster.send_tx(
src_address=pool_users[0].payment.address,
tx_name="deregister_unregistered",
tx_files=tx_files,
)
assert "StakePoolNotRegisteredOnKeyPOOL" in str(excinfo.value)
@allure.link(helpers.get_vcs_link())
def test_stake_pool_metadata_no_name(
self,
cluster: clusterlib.ClusterLib,
temp_dir: Path,
):
"""Try to create pool metadata hash when missing the *name* key.
Expect failure.
"""
temp_template = helpers.get_func_name()
pool_metadata = {
"description": "cardano-node-tests E2E tests",
"ticker": "IOG1",
"homepage": "https://github.com/input-output-hk/cardano-node-tests",
}
pool_metadata_file = helpers.write_json(
temp_dir / f"{temp_template}_registration_metadata.json", pool_metadata
)
with pytest.raises(clusterlib.CLIError) as excinfo:
cluster.gen_pool_metadata_hash(pool_metadata_file)
assert 'key "name" not found' in str(excinfo.value)
@allure.link(helpers.get_vcs_link())
def test_stake_pool_metadata_no_description(
self,
cluster: clusterlib.ClusterLib,
temp_dir: Path,
):
"""Try to create pool metadata hash when missing the *description* key.
Expect failure.
"""
temp_template = helpers.get_func_name()
pool_metadata = {
"name": "cardano-node-tests",
"ticker": "IOG1",
"homepage": "https://github.com/input-output-hk/cardano-node-tests",
}
pool_metadata_file = helpers.write_json(
temp_dir / f"{temp_template}_registration_metadata.json", pool_metadata
)
with pytest.raises(clusterlib.CLIError) as excinfo:
cluster.gen_pool_metadata_hash(pool_metadata_file)
assert 'key "description" not found' in str(excinfo.value)
@allure.link(helpers.get_vcs_link())
def test_stake_pool_metadata_no_ticker(
self,
cluster: clusterlib.ClusterLib,
temp_dir: Path,
):
"""Try to create pool metadata hash when missing the *ticker* key.
Expect failure.
"""
temp_template = helpers.get_func_name()
pool_metadata = {
"name": "cardano-node-tests",
"description": "cardano-node-tests E2E tests",
"homepage": "https://github.com/input-output-hk/cardano-node-tests",
}
pool_metadata_file = helpers.write_json(
temp_dir / f"{temp_template}_registration_metadata.json", pool_metadata
)
with pytest.raises(clusterlib.CLIError) as excinfo:
cluster.gen_pool_metadata_hash(pool_metadata_file)
assert 'key "ticker" not found' in str(excinfo.value)
@allure.link(helpers.get_vcs_link())
def test_stake_pool_metadata_no_homepage(
self,
cluster: clusterlib.ClusterLib,
temp_dir: Path,
):
"""Try to create pool metadata hash when missing the *homepage* key.
Expect failure.
"""
temp_template = helpers.get_func_name()
pool_metadata = {
"name": "cardano-node-tests",
"description": "cardano-node-tests E2E tests",
"ticker": "IOG1",
}
pool_metadata_file = helpers.write_json(
temp_dir / f"{temp_template}_registration_metadata.json", pool_metadata
)
with pytest.raises(clusterlib.CLIError) as excinfo:
cluster.gen_pool_metadata_hash(pool_metadata_file)
assert 'key "homepage" not found' in str(excinfo.value)
@allure.link(helpers.get_vcs_link())
@hypothesis.given(pool_name=st.text(min_size=51))
@helpers.hypothesis_settings()
def test_stake_pool_metadata_long_name(
self,
cluster: clusterlib.ClusterLib,
temp_dir: Path,
pool_name: str,
):
"""Try to create pool metadata hash when the *name* value is longer than allowed.
Expect failure. Property-based test.
"""
temp_template = "test_stake_pool_metadata_long_name"
pool_metadata = {
"name": pool_name,
"description": "cardano-node-tests E2E tests",
"ticker": "IOG1",
"homepage": "https://github.com/input-output-hk/cardano-node-tests",
}
pool_metadata_file = helpers.write_json(
temp_dir / f"{temp_template}_registration_metadata.json", pool_metadata
)
with pytest.raises(clusterlib.CLIError) as excinfo:
cluster.gen_pool_metadata_hash(pool_metadata_file)
err_value = str(excinfo.value)
assert (
"Stake pool metadata must consist of at most 512 bytes" in err_value
or '"name" must have at most 50 characters' in err_value
)
@allure.link(helpers.get_vcs_link())
@hypothesis.given(pool_description=st.text(min_size=256))
@helpers.hypothesis_settings()
def test_stake_pool_metadata_long_description(
self,
cluster: clusterlib.ClusterLib,
temp_dir: Path,
pool_description: str,
):
"""Try to create pool metadata hash when the *description* value is longer than allowed.
Expect failure. Property-based test.
"""
temp_template = "test_stake_pool_metadata_long_description"
pool_metadata = {
"name": "cardano-node-tests",
"description": pool_description,
"ticker": "IOG1",
"homepage": "https://github.com/input-output-hk/cardano-node-tests",
}
pool_metadata_file = helpers.write_json(
temp_dir / f"{temp_template}_registration_metadata.json", pool_metadata
)
with pytest.raises(clusterlib.CLIError) as excinfo:
cluster.gen_pool_metadata_hash(pool_metadata_file)
err_value = str(excinfo.value)
assert (
"Stake pool metadata must consist of at most 512 bytes" in err_value
or '"description" must have at most 255 characters' in err_value
)
@allure.link(helpers.get_vcs_link())
@hypothesis.given(pool_ticker=st.text())
@helpers.hypothesis_settings()
def test_stake_pool_metadata_long_ticker(
self,
cluster: clusterlib.ClusterLib,
temp_dir: Path,
pool_ticker: str,
):
"""Try to create pool metadata hash when the *ticker* value is longer than allowed.
Expect failure. Property-based test.
"""
hypothesis.assume(not (3 <= len(pool_ticker) <= 5))
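        # `assume` discards generated tickers that happen to have a valid length
        # (3-5 characters), so only invalid lengths reach the assertion below.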
temp_template = "test_stake_pool_metadata_long_ticker"
pool_metadata = {
"name": "cardano-node-tests",
"description": "cardano-node-tests E2E tests",
"ticker": pool_ticker,
"homepage": "https://github.com/input-output-hk/cardano-node-tests",
}
pool_metadata_file = helpers.write_json(
temp_dir / f"{temp_template}_registration_metadata.json", pool_metadata
)
with pytest.raises(clusterlib.CLIError) as excinfo:
cluster.gen_pool_metadata_hash(pool_metadata_file)
assert '"ticker" must have at least 3 and at most 5 characters' in str(excinfo.value)
@allure.link(helpers.get_vcs_link())
@hypothesis.given(pool_homepage=st.text(min_size=425))
@helpers.hypothesis_settings()
def test_stake_pool_metadata_long_homepage(
self,
cluster: clusterlib.ClusterLib,
temp_dir: Path,
pool_homepage: str,
):
"""Try to create pool metadata hash when the *homepage* value is longer than allowed.
Expect failure. Property-based test.
"""
temp_template = "test_stake_pool_metadata_long_homepage"
pool_metadata = {
"name": "CND",
"description": "CND",
"ticker": "CND",
"homepage": pool_homepage,
}
pool_metadata_file = helpers.write_json(
temp_dir / f"{temp_template}_registration_metadata.json", pool_metadata
)
with pytest.raises(clusterlib.CLIError) as excinfo:
cluster.gen_pool_metadata_hash(pool_metadata_file)
assert "Stake pool metadata must consist of at most 512 bytes" in str(excinfo.value)
@allure.link(helpers.get_vcs_link())
@hypothesis.given(
metadata_url=st.text(alphabet=st.characters(blacklist_categories=["C"]), min_size=25)
)
@helpers.hypothesis_settings()
def test_stake_pool_long_metadata_url(
self,
cluster: clusterlib.ClusterLib,
pool_users: List[clusterlib.PoolUser],
gen_pool_registration_cert_data: Tuple[
str, str, clusterlib.KeyPair, clusterlib.ColdKeyPair
],
metadata_url: str,
):
"""Try to create pool registration cert when the *metadata-url* is longer than allowed.
Expect failure. Property-based test.
"""
pool_name, pool_metadata_hash, node_vrf, node_cold = gen_pool_registration_cert_data
pool_data = clusterlib.PoolData(
pool_name=pool_name,
pool_pledge=1000,
pool_cost=500_000_000,
pool_margin=0.2,
pool_metadata_url=(f"https://gist.githubusercontent.com/{metadata_url}.json"),
pool_metadata_hash=pool_metadata_hash,
)
# create stake pool registration cert
with pytest.raises(clusterlib.CLIError) as excinfo:
cluster.gen_pool_registration_cert(
pool_data=pool_data,
vrf_vkey_file=node_vrf.vkey_file,
cold_vkey_file=node_cold.vkey_file,
owner_stake_vkey_files=[p.stake.vkey_file for p in pool_users],
)
assert "option --metadata-url: The provided string must have at most 64 characters" in str(
excinfo.value
)
| 36.058104 | 100 | 0.64692 | 8,556 | 70,746 | 4.992637 | 0.052127 | 0.039048 | 0.018822 | 0.017792 | 0.870731 | 0.84491 | 0.822319 | 0.79912 | 0.777302 | 0.746752 | 0 | 0.009613 | 0.275069 | 70,746 | 1,961 | 101 | 36.076492 | 0.823305 | 0.126862 | 0 | 0.689965 | 0 | 0 | 0.100471 | 0.03524 | 0 | 0 | 0 | 0 | 0.028374 | 1 | 0.027682 | false | 0 | 0.010381 | 0.000692 | 0.049135 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5bb238cf831a7295536099205a97a6a07b326a05 | 272 | bzl | Python | rules/scala/init.bzl | mktitov/rules_scala3 | 510a89102755b0d49122da45dda86c114a8a91a4 | [
"Apache-2.0"
] | 1 | 2021-12-15T23:40:26.000Z | 2021-12-15T23:40:26.000Z | rules/scala/init.bzl | mktitov/rules_scala3 | 510a89102755b0d49122da45dda86c114a8a91a4 | [
"Apache-2.0"
] | 1 | 2021-12-28T10:10:18.000Z | 2021-12-28T10:10:18.000Z | rules/scala/init.bzl | timothyklim/rules_scala3 | 34d6173f288a7478bddd7f57a9e18e9f1324b4b6 | [
"Apache-2.0"
] | null | null | null | load("@com_google_protobuf//:protobuf_deps.bzl", "protobuf_deps")
load("@rules_proto//proto:repositories.bzl", "rules_proto_dependencies", "rules_proto_toolchains")
def rules_scala3_init():
protobuf_deps()
rules_proto_dependencies()
rules_proto_toolchains()
| 30.222222 | 98 | 0.779412 | 33 | 272 | 5.939394 | 0.424242 | 0.255102 | 0.22449 | 0.27551 | 0.428571 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0.004032 | 0.088235 | 272 | 8 | 99 | 34 | 0.78629 | 0 | 0 | 0 | 0 | 0 | 0.496324 | 0.448529 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | true | 0 | 0 | 0 | 0.166667 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5bbacb49da205b75c423e3dde1f0749d2307c970 | 96 | py | Python | venv/lib/python3.8/site-packages/numpy/lib/tests/test_format.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/numpy/lib/tests/test_format.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/numpy/lib/tests/test_format.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/e3/f2/98/0c095d5a5dc1b1d996b69117a2f03e67c54357249c1164b999e0a41c5b | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.458333 | 0 | 96 | 1 | 96 | 96 | 0.4375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5bde800732dff43ffc4358bb4357168eefd43bb3 | 307 | py | Python | battlesnake_builder/__init__.py | Tch1b0/battlesnake-builder | fa2f69147444b2f2d53a7954bfc9487b1ca84f54 | [
"MIT"
] | null | null | null | battlesnake_builder/__init__.py | Tch1b0/battlesnake-builder | fa2f69147444b2f2d53a7954bfc9487b1ca84f54 | [
"MIT"
] | 1 | 2022-03-21T19:32:27.000Z | 2022-03-21T19:32:27.000Z | battlesnake_builder/__init__.py | Tch1b0/battlesnake-builder | fa2f69147444b2f2d53a7954bfc9487b1ca84f54 | [
"MIT"
] | null | null | null | from battlesnake_builder.battlesnake import BattleSnake, Config
from battlesnake_builder.coordinate import Coordinate
from battlesnake_builder.board import Board
from battlesnake_builder.snake import Snake, BodyFragment
from battlesnake_builder.reqdata import Data
from battlesnake_builder.game import Game
| 43.857143 | 63 | 0.889251 | 38 | 307 | 7.026316 | 0.315789 | 0.337079 | 0.494382 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084691 | 307 | 6 | 64 | 51.166667 | 0.950178 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5bf747201e37c1c414d5e8c38f25da7861b246aa | 203 | py | Python | test_package_kthdesa/test_package_kthdesa.py | alekordESA/package-template | c95a64bf125d41f1bcfd50494dbd0daeb0b27fca | [
"MIT"
] | null | null | null | test_package_kthdesa/test_package_kthdesa.py | alekordESA/package-template | c95a64bf125d41f1bcfd50494dbd0daeb0b27fca | [
"MIT"
] | null | null | null | test_package_kthdesa/test_package_kthdesa.py | alekordESA/package-template | c95a64bf125d41f1bcfd50494dbd0daeb0b27fca | [
"MIT"
] | null | null | null | from __future__ import print_function
def hello():
"""Return a intro sample message"""
return ("This is a sample package")
def say_hello():
"""Prints a sample message"""
print (hello()) | 22.555556 | 39 | 0.665025 | 27 | 203 | 4.777778 | 0.62963 | 0.20155 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.206897 | 203 | 9 | 40 | 22.555556 | 0.801242 | 0.261084 | 0 | 0 | 0 | 0 | 0.171429 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | true | 0 | 0.2 | 0 | 0.8 | 0.4 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
754d929ca7a3efcced8b15a997375f4ba9f961b3 | 6,142 | py | Python | molsysmt/tests/structure/get_neighbors/test_get_neighbors_from_molsysmt_MolSys.py | dprada/molsysmt | 83f150bfe3cfa7603566a0ed4aed79d9b0c97f5d | [
"MIT"
] | null | null | null | molsysmt/tests/structure/get_neighbors/test_get_neighbors_from_molsysmt_MolSys.py | dprada/molsysmt | 83f150bfe3cfa7603566a0ed4aed79d9b0c97f5d | [
"MIT"
] | null | null | null | molsysmt/tests/structure/get_neighbors/test_get_neighbors_from_molsysmt_MolSys.py | dprada/molsysmt | 83f150bfe3cfa7603566a0ed4aed79d9b0c97f5d | [
"MIT"
] | null | null | null | """
Unit and regression test for the get_neighbors module of the molsysmt package on molsysmt MolSys molecular
systems.
"""
# Import package, test suite, and other packages as needed
import molsysmt as msm
from molsysmt import puw
import numpy as np
# Neighbors and distances between atoms in space and time
def test_get_neighbors_from_molsysmt_MolSys_1():
molsys = msm.convert(msm.demo['pentalanine']['traj.h5'], to_form='molsysmt.MolSys')
CA_atoms_list = msm.select(molsys, selection='atom_name=="CA"')
neighbors, distances = msm.structure.get_neighbors(molsys, selection=CA_atoms_list, num_neighbors=3)
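    # With a fixed `num_neighbors`, both returned arrays are presumably shaped
    # (n_frames, n_selected_atoms, n_neighbors): 5000 trajectory frames, the 5 CA
    # atoms of pentalanine, and 3 nearest neighbors each, matching the shape
    # checks below.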
check_shape_1 = ((5000, 5, 3)==neighbors.shape)
check_shape_2 = ((5000, 5, 3)==distances.shape)
check_distance = np.isclose(puw.get_value(distances[2000,0,0], to_unit='nm'), 0.38743175)
assert check_shape_1 and check_shape_2 and check_distance
def test_get_neighbors_from_molsysmt_MolSys_2():
molsys = msm.convert(msm.demo['pentalanine']['traj.h5'], to_form='molsysmt.MolSys')
CA_atoms_list = msm.select(molsys, selection='atom_name=="CA"')
neighbors, distances = msm.structure.get_neighbors(molsys, selection=CA_atoms_list, selection_2='all', num_neighbors=4)
check_neighbors = (10==neighbors[2000,0,3])
check_distance = np.isclose(puw.get_value(distances[2000,0,3], to_unit='nm'), 0.1532800)
assert check_neighbors and check_distance
def test_get_neighbors_from_molsysmt_MolSys_3():
molsys = msm.convert(msm.demo['TcTIM']['1tcd.msmpk'], to_form='molsysmt.MolSys')
atoms_in_residues_chain_0 = msm.get(molsys, target='group',
selection="molecule_type=='protein' and chain_index==0",
atom_index=True)
atoms_in_residues_chain_1 = msm.get(molsys, target='group',
selection="molecule_type=='protein' and chain_index==1",
atom_index=True)
neighbors, distances = msm.structure.get_neighbors(molsys, groups_of_atoms=atoms_in_residues_chain_0,
group_behavior= 'geometric_center', num_neighbors=8)
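    # `group_behavior='geometric_center'` presumably collapses each atom group
    # (here, each residue of chain 0) to a single point at its geometric center
    # before the neighbor search, so neighbors are computed between residue
    # centers rather than individual atoms.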
check_shape_1 = ((1, 248, 8)==neighbors.shape)
check_neighbors = (2==neighbors[0,0,7])
check_distance = np.isclose(puw.get_value(distances[0,0,7], to_unit='nm'), 0.86807833)
assert check_shape_1 and check_neighbors and check_distance
def test_get_neighbors_from_molsysmt_MolSys_4():
molsys = msm.convert(msm.demo['TcTIM']['1tcd.msmpk'], to_form='molsysmt.MolSys')
atoms_in_residues_chain_0 = msm.get(molsys, target='group',
selection="molecule_type=='protein' and chain_index==0",
atom_index=True)
atoms_in_residues_chain_1 = msm.get(molsys, target='group',
selection="molecule_type=='protein' and chain_index==1",
atom_index=True)
neighbors, distances = msm.structure.get_neighbors(molsys,
groups_of_atoms=atoms_in_residues_chain_0,
group_behavior= 'geometric_center',
groups_of_atoms_2=atoms_in_residues_chain_1,
group_behavior_2= 'geometric_center',
num_neighbors=8)
check_neighbors = (69==neighbors[0,0,7])
check_distance = np.isclose(puw.get_value(distances[0,0,7], to_unit='nm'), 3.5652103)
assert check_neighbors and check_distance
def test_get_neighbors_from_molsysmt_MolSys_5():
molsys = msm.convert(msm.demo['TcTIM']['1tcd.msmpk'], to_form='molsysmt.MolSys')
atoms_in_residues_chain_1 = msm.get(molsys, target='group',
selection="molecule_type=='protein' and chain_index==1",
atom_index=True)
neighbors, distances = msm.structure.get_neighbors(molsys, selection=100,
groups_of_atoms_2=atoms_in_residues_chain_1,
group_behavior_2= 'geometric_center',
num_neighbors=4)
check_neighbors = (77==neighbors[0,0,3])
check_distance = np.isclose(puw.get_value(distances[0,0,3], to_unit='nm'), 0.8498448)
assert check_neighbors and check_distance
def test_get_neighbors_from_molsysmt_MolSys_6():
molsys = msm.convert(msm.demo['TcTIM']['1tcd.msmpk'], to_form='molsysmt.MolSys')
CA_atoms = msm.select(molsys, selection='atom_name=="CA"')
neighbors, distances = msm.structure.get_neighbors(molsys, selection=CA_atoms, threshold='8 angstroms')
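    # In threshold mode the neighbor count varies per atom, so each entry appears
    # to hold a variable-length list of the atoms found within 8 angstroms rather
    # than a fixed third array dimension, hence the `len()` check below.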
check_shape_1 = ((1, 497)==neighbors.shape)
check_shape_2 = ((1, 497)==distances.shape)
check_neighbors = (14==len(neighbors[0,9]))
check_neighbors_2 = (21==neighbors[0, 20][0])
check_distance = np.isclose(puw.get_value(distances[0, 20][0], to_unit='nm'), 0.3807746)
assert check_shape_1 and check_shape_2 and check_neighbors and check_neighbors_2 and check_distance
def test_get_neighbors_from_molsysmt_MolSys_7():
molsys = msm.convert(msm.demo['TcTIM']['1tcd.msmpk'], to_form='molsysmt.MolSys')
atoms_in_residues_chain_0 = msm.get(molsys, target='group',
selection="molecule_type=='protein' and chain_index==0",
atom_index=True)
atoms_in_residues_chain_1 = msm.get(molsys, target='group',
selection="molecule_type=='protein' and chain_index==1",
atom_index=True)
neighbors, distances = msm.structure.get_neighbors(molsys,
groups_of_atoms= atoms_in_residues_chain_0,
group_behavior='geometric_center',
groups_of_atoms_2= atoms_in_residues_chain_1,
group_behavior_2='geometric_center',
threshold=1.2*puw.unit('nanometers'))
check_n_contacts = (18==len(neighbors[0,11]))
assert check_n_contacts
| 58.495238 | 123 | 0.636926 | 775 | 6,142 | 4.736774 | 0.139355 | 0.049033 | 0.053119 | 0.070825 | 0.828112 | 0.792972 | 0.773631 | 0.763552 | 0.761101 | 0.749387 | 0 | 0.043858 | 0.253826 | 6,142 | 104 | 124 | 59.057692 | 0.757146 | 0.034842 | 0 | 0.517241 | 0 | 0 | 0.123204 | 0.028393 | 0 | 0 | 0 | 0 | 0.08046 | 1 | 0.08046 | false | 0 | 0.034483 | 0 | 0.114943 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
75611db8bee59a8bcf595f99d619d92b7f51d22e | 74,724 | py | Python | vimfiles/bundle/vim-python/submodules/rope/ropetest/refactor/extracttest.py | ciskoinch8/vimrc | 5bf77a7e7bc70fac5173ab2e9ea05d7dda3e52b8 | [
"MIT"
] | 463 | 2015-01-15T08:17:42.000Z | 2022-03-28T15:10:20.000Z | vimfiles/bundle/vim-python/submodules/rope/ropetest/refactor/extracttest.py | ciskoinch8/vimrc | 5bf77a7e7bc70fac5173ab2e9ea05d7dda3e52b8 | [
"MIT"
] | 52 | 2015-01-06T02:43:59.000Z | 2022-03-14T11:15:21.000Z | vimfiles/bundle/vim-python/submodules/rope/ropetest/refactor/extracttest.py | ciskoinch8/vimrc | 5bf77a7e7bc70fac5173ab2e9ea05d7dda3e52b8 | [
"MIT"
] | 249 | 2015-01-07T22:49:49.000Z | 2022-03-18T02:32:06.000Z | from textwrap import dedent
try:
import unittest2 as unittest
except ImportError:
import unittest
import rope.base.codeanalyze
import rope.base.exceptions
from rope.refactor import extract
from ropetest import testutils
class ExtractMethodTest(unittest.TestCase):
def setUp(self):
super(ExtractMethodTest, self).setUp()
self.project = testutils.sample_project()
self.pycore = self.project.pycore
def tearDown(self):
testutils.remove_project(self.project)
super(ExtractMethodTest, self).tearDown()
def do_extract_method(self, source_code, start, end, extracted, **kwds):
testmod = testutils.create_module(self.project, 'testmod')
testmod.write(source_code)
extractor = extract.ExtractMethod(
self.project, testmod, start, end)
self.project.do(extractor.get_changes(extracted, **kwds))
return testmod.read()
def do_extract_variable(self, source_code, start, end, extracted, **kwds):
testmod = testutils.create_module(self.project, 'testmod')
testmod.write(source_code)
extractor = extract.ExtractVariable(self.project, testmod, start, end)
self.project.do(extractor.get_changes(extracted, **kwds))
return testmod.read()
def _convert_line_range_to_offset(self, code, start, end):
lines = rope.base.codeanalyze.SourceLinesAdapter(code)
return lines.get_line_start(start), lines.get_line_end(end)
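    # The helpers above drive rope's public extract API directly. A rough sketch
    # of the calls involved (start/end are character offsets into the source, not
    # line numbers; `_convert_line_range_to_offset` above turns a line range into
    # such offsets):
    #   extractor = extract.ExtractMethod(project, resource, start, end)
    #   project.do(extractor.get_changes('new_name'))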
def test_simple_extract_function(self):
code = dedent("""\
def a_func():
print('one')
print('two')
""")
start, end = self._convert_line_range_to_offset(code, 2, 2)
refactored = self.do_extract_method(code, start, end, 'extracted')
expected = dedent('''\
def a_func():
extracted()
print('two')
def extracted():
print('one')
''')
self.assertEqual(expected, refactored)
def test_simple_extract_function_one_line(self):
code = dedent("""\
def a_func():
resp = 'one'
print(resp)
""")
selected = "'one'"
start, end = code.index(selected), code.index(selected) + len(selected)
refactored = self.do_extract_method(code, start, end, 'extracted')
expected = dedent('''\
def a_func():
resp = extracted()
print(resp)
def extracted():
return 'one'
''')
self.assertEqual(expected, refactored)
def test_extract_function_at_the_end_of_file(self):
code = "def a_func():\n print('one')"
start, end = self._convert_line_range_to_offset(code, 2, 2)
refactored = self.do_extract_method(code, start, end, 'extracted')
expected = "def a_func():\n extracted()\n" \
"def extracted():\n print('one')\n"
self.assertEqual(expected, refactored)
def test_extract_function_after_scope(self):
code = "def a_func():\n print('one')\n print('two')" \
"\n\nprint('hey')\n"
start, end = self._convert_line_range_to_offset(code, 2, 2)
refactored = self.do_extract_method(code, start, end, 'extracted')
expected = "def a_func():\n extracted()\n print('two')\n\n" \
"def extracted():\n print('one')\n\nprint('hey')\n"
self.assertEqual(expected, refactored)
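    # `{a: b, **dict1, **dict2}` relies on PEP 448 generalized unpacking, which is
    # valid syntax only on Python 3.5+, hence the version gate on the next test.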
@testutils.only_for('3.5')
def test_extract_function_containing_dict_generalized_unpacking(self):
code = dedent('''\
def a_func(dict1):
dict2 = {}
a_var = {a: b, **dict1, **dict2}
''')
start = code.index('{a')
end = code.index('2}') + len('2}')
refactored = self.do_extract_method(code, start, end, 'extracted')
expected = dedent('''\
def a_func(dict1):
dict2 = {}
a_var = extracted(dict1, dict2)
def extracted(dict1, dict2):
return {a: b, **dict1, **dict2}
''')
self.assertEqual(expected, refactored)
def test_simple_extract_function_with_parameter(self):
code = "def a_func():\n a_var = 10\n print(a_var)\n"
start, end = self._convert_line_range_to_offset(code, 3, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = "def a_func():\n a_var = 10\n new_func(a_var)\n\n" \
"def new_func(a_var):\n print(a_var)\n"
self.assertEqual(expected, refactored)
def test_not_unread_variables_as_parameter(self):
code = "def a_func():\n a_var = 10\n print('hey')\n"
start, end = self._convert_line_range_to_offset(code, 3, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = "def a_func():\n a_var = 10\n new_func()\n\n" \
"def new_func():\n print('hey')\n"
self.assertEqual(expected, refactored)
def test_simple_extract_function_with_two_parameter(self):
code = 'def a_func():\n a_var = 10\n another_var = 20\n' \
' third_var = a_var + another_var\n'
start, end = self._convert_line_range_to_offset(code, 4, 4)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n a_var = 10\n another_var = 20\n' \
' new_func(a_var, another_var)\n\n' \
'def new_func(a_var, another_var):\n' \
' third_var = a_var + another_var\n'
self.assertEqual(expected, refactored)
def test_simple_extract_function_with_return_value(self):
code = 'def a_func():\n a_var = 10\n print(a_var)\n'
start, end = self._convert_line_range_to_offset(code, 2, 2)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n a_var = new_func()' \
'\n print(a_var)\n\n' \
'def new_func():\n a_var = 10\n return a_var\n'
self.assertEqual(expected, refactored)
def test_extract_function_with_multiple_return_values(self):
code = 'def a_func():\n a_var = 10\n another_var = 20\n' \
' third_var = a_var + another_var\n'
start, end = self._convert_line_range_to_offset(code, 2, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n a_var, another_var = new_func()\n' \
' third_var = a_var + another_var\n\n' \
'def new_func():\n a_var = 10\n another_var = 20\n' \
' return a_var, another_var\n'
self.assertEqual(expected, refactored)
def test_simple_extract_method(self):
code = 'class AClass(object):\n\n' \
' def a_func(self):\n print(1)\n print(2)\n'
start, end = self._convert_line_range_to_offset(code, 4, 4)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'class AClass(object):\n\n' \
' def a_func(self):\n' \
' self.new_func()\n' \
' print(2)\n\n' \
' def new_func(self):\n print(1)\n'
self.assertEqual(expected, refactored)
def test_extract_method_with_args_and_returns(self):
code = 'class AClass(object):\n' \
' def a_func(self):\n' \
' a_var = 10\n' \
' another_var = a_var * 3\n' \
' third_var = a_var + another_var\n'
start, end = self._convert_line_range_to_offset(code, 4, 4)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'class AClass(object):\n' \
' def a_func(self):\n' \
' a_var = 10\n' \
' another_var = self.new_func(a_var)\n' \
' third_var = a_var + another_var\n\n' \
' def new_func(self, a_var):\n' \
' another_var = a_var * 3\n' \
' return another_var\n'
self.assertEqual(expected, refactored)
def test_extract_method_with_self_as_argument(self):
code = 'class AClass(object):\n' \
' def a_func(self):\n' \
' print(self)\n'
start, end = self._convert_line_range_to_offset(code, 3, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'class AClass(object):\n' \
' def a_func(self):\n' \
' self.new_func()\n\n' \
' def new_func(self):\n' \
' print(self)\n'
self.assertEqual(expected, refactored)
def test_extract_method_with_no_self_as_argument(self):
code = 'class AClass(object):\n' \
' def a_func():\n' \
' print(1)\n'
start, end = self._convert_line_range_to_offset(code, 3, 3)
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_extract_method_with_multiple_methods(self):
code = 'class AClass(object):\n' \
' def a_func(self):\n' \
' print(self)\n\n' \
' def another_func(self):\n' \
' pass\n'
start, end = self._convert_line_range_to_offset(code, 3, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'class AClass(object):\n' \
' def a_func(self):\n' \
' self.new_func()\n\n' \
' def new_func(self):\n' \
' print(self)\n\n' \
' def another_func(self):\n' \
' pass\n'
self.assertEqual(expected, refactored)
def test_extract_function_with_function_returns(self):
code = 'def a_func():\n def inner_func():\n pass\n' \
' inner_func()\n'
start, end = self._convert_line_range_to_offset(code, 2, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n' \
' inner_func = new_func()\n inner_func()\n\n' \
'def new_func():\n' \
' def inner_func():\n pass\n' \
' return inner_func\n'
self.assertEqual(expected, refactored)
def test_simple_extract_global_function(self):
code = "print('one')\nprint('two')\nprint('three')\n"
start, end = self._convert_line_range_to_offset(code, 2, 2)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = "print('one')\n\ndef new_func():\n print('two')\n" \
"\nnew_func()\nprint('three')\n"
self.assertEqual(expected, refactored)
def test_extract_global_function_inside_ifs(self):
code = 'if True:\n a = 10\n'
start, end = self._convert_line_range_to_offset(code, 2, 2)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = '\ndef new_func():\n a = 10\n\nif True:\n' \
' new_func()\n'
self.assertEqual(expected, refactored)
def test_extract_function_while_inner_function_reads(self):
code = 'def a_func():\n a_var = 10\n' \
' def inner_func():\n print(a_var)\n' \
' return inner_func\n'
start, end = self._convert_line_range_to_offset(code, 3, 4)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n a_var = 10\n' \
' inner_func = new_func(a_var)' \
'\n return inner_func\n\n' \
'def new_func(a_var):\n' \
' def inner_func():\n print(a_var)\n' \
' return inner_func\n'
self.assertEqual(expected, refactored)
def test_extract_method_bad_range(self):
code = "def a_func():\n pass\na_var = 10\n"
start, end = self._convert_line_range_to_offset(code, 2, 3)
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_extract_method_bad_range2(self):
code = "class AClass(object):\n pass\n"
start, end = self._convert_line_range_to_offset(code, 1, 1)
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_extract_method_containing_return(self):
code = 'def a_func(arg):\n if arg:\n return arg * 2' \
'\n return 1'
start, end = self._convert_line_range_to_offset(code, 2, 4)
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_extract_method_containing_yield(self):
code = "def a_func(arg):\n yield arg * 2\n"
start, end = self._convert_line_range_to_offset(code, 2, 2)
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_extract_method_containing_incomplete_lines(self):
code = 'a_var = 20\nanother_var = 30\n'
start = code.index('20')
end = code.index('30') + 2
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_extract_method_containing_incomplete_lines2(self):
code = 'a_var = 20\nanother_var = 30\n'
start = code.index('20')
end = code.index('another') + 5
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_extract_function_and_argument_as_parameter(self):
code = 'def a_func(arg):\n print(arg)\n'
start, end = self._convert_line_range_to_offset(code, 2, 2)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func(arg):\n new_func(arg)\n\n' \
'def new_func(arg):\n print(arg)\n'
self.assertEqual(expected, refactored)
def test_extract_function_and_end_as_the_start_of_a_line(self):
code = 'print("hey")\nif True:\n pass\n'
start = 0
end = code.index('\n') + 1
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = '\ndef new_func():\n print("hey")\n\n' \
'new_func()\nif True:\n pass\n'
self.assertEqual(expected, refactored)
def test_extract_function_and_indented_blocks(self):
code = 'def a_func(arg):\n if True:\n' \
' if True:\n print(arg)\n'
start, end = self._convert_line_range_to_offset(code, 3, 4)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func(arg):\n ' \
'if True:\n new_func(arg)\n\n' \
'def new_func(arg):\n if True:\n print(arg)\n'
self.assertEqual(expected, refactored)
def test_extract_method_and_multi_line_headers(self):
code = 'def a_func(\n arg):\n print(arg)\n'
start, end = self._convert_line_range_to_offset(code, 3, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func(\n arg):\n new_func(arg)\n\n' \
'def new_func(arg):\n print(arg)\n'
self.assertEqual(expected, refactored)
def test_single_line_extract_function(self):
code = 'a_var = 10 + 20\n'
start = code.index('10')
end = code.index('20') + 2
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = "\ndef new_func():\n " \
"return 10 + 20\n\na_var = new_func()\n"
self.assertEqual(expected, refactored)
def test_single_line_extract_function2(self):
code = 'def a_func():\n a = 10\n b = a * 20\n'
start = code.rindex('a')
end = code.index('20') + 2
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n a = 10\n b = new_func(a)\n' \
'\ndef new_func(a):\n return a * 20\n'
self.assertEqual(expected, refactored)
def test_single_line_extract_method_and_logical_lines(self):
code = 'a_var = 10 +\\\n 20\n'
start = code.index('10')
end = code.index('20') + 2
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = '\ndef new_func():\n ' \
'return 10 + 20\n\na_var = new_func()\n'
self.assertEqual(expected, refactored)
def test_single_line_extract_method_and_logical_lines2(self):
code = 'a_var = (10,\\\n 20)\n'
start = code.index('10') - 1
end = code.index('20') + 3
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = '\ndef new_func():\n' \
' return (10, 20)\n\na_var = new_func()\n'
self.assertEqual(expected, refactored)
def test_single_line_extract_method(self):
code = "class AClass(object):\n\n" \
" def a_func(self):\n a = 10\n b = a * a\n"
start = code.rindex('=') + 2
end = code.rindex('a') + 1
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'class AClass(object):\n\n' \
' def a_func(self):\n' \
' a = 10\n b = self.new_func(a)\n\n' \
' def new_func(self, a):\n return a * a\n'
self.assertEqual(expected, refactored)
def test_single_line_extract_function_if_condition(self):
code = 'if True:\n pass\n'
start = code.index('True')
end = code.index('True') + 4
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = "\ndef new_func():\n return True\n\nif new_func():" \
"\n pass\n"
self.assertEqual(expected, refactored)
def test_unneeded_params(self):
code = 'class A(object):\n ' \
'def a_func(self):\n a_var = 10\n a_var += 2\n'
start = code.rindex('2')
end = code.rindex('2') + 1
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'class A(object):\n' \
' def a_func(self):\n a_var = 10\n' \
' a_var += self.new_func()\n\n' \
' def new_func(self):\n return 2\n'
self.assertEqual(expected, refactored)
def test_breaks_and_continues_inside_loops(self):
code = 'def a_func():\n for i in range(10):\n continue\n'
start = code.index('for')
end = len(code) - 1
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n new_func()\n\n' \
'def new_func():\n' \
' for i in range(10):\n continue\n'
self.assertEqual(expected, refactored)
def test_breaks_and_continues_outside_loops(self):
code = 'def a_func():\n' \
' for i in range(10):\n a = i\n continue\n'
start = code.index('a = i')
end = len(code) - 1
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_for_loop_variable_scope(self):
code = dedent('''\
def my_func():
i = 0
for dummy in range(10):
i += 1
print(i)
''')
start, end = self._convert_line_range_to_offset(code, 4, 5)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
def my_func():
i = 0
for dummy in range(10):
i = new_func(i)
def new_func(i):
i += 1
print(i)
return i
''')
self.assertEqual(expected, refactored)
def test_for_loop_variable_scope_read_then_write(self):
code = dedent('''\
def my_func():
i = 0
for dummy in range(10):
a = i + 1
i = a + 1
''')
start, end = self._convert_line_range_to_offset(code, 4, 5)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
def my_func():
i = 0
for dummy in range(10):
i = new_func(i)
def new_func(i):
a = i + 1
i = a + 1
return i
''')
self.assertEqual(expected, refactored)
def test_for_loop_variable_scope_write_then_read(self):
code = dedent('''\
def my_func():
i = 0
for dummy in range(10):
i = 'hello'
print(i)
''')
start, end = self._convert_line_range_to_offset(code, 4, 5)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
def my_func():
i = 0
for dummy in range(10):
new_func()
def new_func():
i = 'hello'
print(i)
''')
self.assertEqual(expected, refactored)
def test_for_loop_variable_scope_write_only(self):
code = dedent('''\
def my_func():
i = 0
for num in range(10):
i = 'hello' + num
print(i)
''')
start, end = self._convert_line_range_to_offset(code, 4, 4)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
def my_func():
i = 0
for num in range(10):
i = new_func(num)
print(i)
def new_func(num):
i = 'hello' + num
return i
''')
self.assertEqual(expected, refactored)
def test_variable_writes_followed_by_variable_reads_after_extraction(self):
code = 'def a_func():\n a = 1\n a = 2\n b = a\n'
start = code.index('a = 1')
end = code.index('a = 2') - 1
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n new_func()\n a = 2\n b = a\n\n' \
'def new_func():\n a = 1\n'
self.assertEqual(expected, refactored)
def test_var_writes_followed_by_var_reads_inside_extraction(self):
code = 'def a_func():\n a = 1\n a = 2\n b = a\n'
start = code.index('a = 2')
end = len(code) - 1
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n a = 1\n new_func()\n\n' \
'def new_func():\n a = 2\n b = a\n'
self.assertEqual(expected, refactored)
def test_extract_variable(self):
code = 'a_var = 10 + 20\n'
start = code.index('10')
end = code.index('20') + 2
refactored = self.do_extract_variable(code, start, end, 'new_var')
expected = 'new_var = 10 + 20\na_var = new_var\n'
self.assertEqual(expected, refactored)
@testutils.only_for_versions_higher('3.6')
def test_extract_variable_f_string(self):
code = dedent('''\
foo(f"abc {a_var} def", 10)
''')
start = code.index('f"')
end = code.index('def"') + 4
refactored = self.do_extract_variable(code, start, end, 'new_var')
expected = dedent('''\
new_var = f"abc {a_var} def"
foo(new_var, 10)
''')
self.assertEqual(expected, refactored)
def test_extract_variable_multiple_lines(self):
code = 'a = 1\nb = 2\n'
start = code.index('1')
end = code.index('1') + 1
refactored = self.do_extract_variable(code, start, end, 'c')
expected = 'c = 1\na = c\nb = 2\n'
self.assertEqual(expected, refactored)
def test_extract_variable_in_the_middle_of_statements(self):
code = 'a = 1 + 2\n'
start = code.index('1')
end = code.index('1') + 1
refactored = self.do_extract_variable(code, start, end, 'c')
expected = 'c = 1\na = c + 2\n'
self.assertEqual(expected, refactored)
def test_extract_variable_for_a_tuple(self):
code = 'a = 1, 2\n'
start = code.index('1')
end = code.index('2') + 1
refactored = self.do_extract_variable(code, start, end, 'c')
expected = 'c = 1, 2\na = c\n'
self.assertEqual(expected, refactored)
def test_extract_variable_for_a_string(self):
code = 'def a_func():\n a = "hey!"\n'
start = code.index('"')
end = code.rindex('"') + 1
refactored = self.do_extract_variable(code, start, end, 'c')
expected = 'def a_func():\n c = "hey!"\n a = c\n'
self.assertEqual(expected, refactored)
def test_extract_variable_inside_ifs(self):
code = 'if True:\n a = 1 + 2\n'
start = code.index('1')
end = code.rindex('2') + 1
refactored = self.do_extract_variable(code, start, end, 'b')
expected = 'if True:\n b = 1 + 2\n a = b\n'
self.assertEqual(expected, refactored)
def test_extract_variable_inside_ifs_and_logical_lines(self):
code = 'if True:\n a = (3 + \n(1 + 2))\n'
start = code.index('1')
end = code.index('2') + 1
refactored = self.do_extract_variable(code, start, end, 'b')
expected = 'if True:\n b = 1 + 2\n a = (3 + \n(b))\n'
self.assertEqual(expected, refactored)
# TODO: Handle when extracting a subexpression
def xxx_test_extract_variable_for_a_subexpression(self):
code = 'a = 3 + 1 + 2\n'
start = code.index('1')
end = code.index('2') + 1
refactored = self.do_extract_variable(code, start, end, 'b')
expected = 'b = 1 + 2\na = 3 + b\n'
self.assertEqual(expected, refactored)
def test_extract_variable_starting_from_the_start_of_the_line(self):
code = 'a_dict = {1: 1}\na_dict.values().count(1)\n'
start = code.rindex('a_dict')
end = code.index('count') - 1
refactored = self.do_extract_variable(code, start, end, 'values')
expected = 'a_dict = {1: 1}\n' \
'values = a_dict.values()\nvalues.count(1)\n'
self.assertEqual(expected, refactored)
def test_extract_variable_on_the_last_line_of_a_function(self):
code = 'def f():\n a_var = {}\n a_var.keys()\n'
start = code.rindex('a_var')
end = code.index('.keys')
refactored = self.do_extract_variable(code, start, end, 'new_var')
expected = 'def f():\n a_var = {}\n ' \
'new_var = a_var\n new_var.keys()\n'
self.assertEqual(expected, refactored)
def test_extract_variable_on_the_indented_function_statement(self):
code = 'def f():\n if True:\n a_var = 1 + 2\n'
start = code.index('1')
end = code.index('2') + 1
refactored = self.do_extract_variable(code, start, end, 'new_var')
expected = 'def f():\n if True:\n' \
' new_var = 1 + 2\n a_var = new_var\n'
self.assertEqual(expected, refactored)
def test_extract_method_on_the_last_line_of_a_function(self):
code = 'def f():\n a_var = {}\n a_var.keys()\n'
start = code.rindex('a_var')
end = code.index('.keys')
refactored = self.do_extract_method(code, start, end, 'new_f')
expected = 'def f():\n a_var = {}\n new_f(a_var).keys()\n\n' \
'def new_f(a_var):\n return a_var\n'
self.assertEqual(expected, refactored)
def test_raising_exception_when_on_incomplete_variables(self):
code = 'a_var = 10 + 20\n'
start = code.index('10') + 1
end = code.index('20') + 2
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_raising_exception_when_on_incomplete_variables_on_end(self):
code = 'a_var = 10 + 20\n'
start = code.index('10')
end = code.index('20') + 1
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_raising_exception_on_bad_parens(self):
code = 'a_var = (10 + 20) + 30\n'
start = code.index('20')
end = code.index('30') + 2
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_raising_exception_on_bad_operators(self):
code = 'a_var = 10 + 20 + 30\n'
start = code.index('10')
end = code.rindex('+') + 1
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
# FIXME: Extract method should be more intelligent about bad ranges
def xxx_test_raising_exception_on_function_parens(self):
code = 'a = range(10)'
start = code.index('(')
end = code.rindex(')') + 1
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_extract_method_and_extra_blank_lines(self):
code = '\nprint(1)\n'
refactored = self.do_extract_method(code, 0, len(code), 'new_f')
expected = '\n\ndef new_f():\n print(1)\n\nnew_f()\n'
self.assertEqual(expected, refactored)
@testutils.only_for_versions_higher('3.6')
def test_extract_method_f_string_extract_method(self):
code = dedent('''\
def func(a_var):
foo(f"abc {a_var}", 10)
''')
start = code.index('f"')
end = code.index('}"') + 2
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
def func(a_var):
foo(new_func(a_var), 10)
def new_func(a_var):
return f"abc {a_var}"
''')
self.assertEqual(expected, refactored)
@testutils.only_for_versions_higher('3.6')
def test_extract_method_f_string_extract_method_complex_expression(self):
code = dedent('''\
def func(a_var):
b_var = int
c_var = 10
fill = 10
foo(f"abc {a_var + f'{b_var(a_var)}':{fill}16}" f"{c_var}", 10)
''')
start = code.index('f"')
end = code.index('c_var}"') + 7
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
def func(a_var):
b_var = int
c_var = 10
fill = 10
foo(new_func(a_var, b_var, c_var, fill), 10)
def new_func(a_var, b_var, c_var, fill):
return f"abc {a_var + f'{b_var(a_var)}':{fill}16}" f"{c_var}"
''')
self.assertEqual(expected, refactored)
@testutils.only_for_versions_higher('3.6')
def test_extract_method_f_string_false_comment(self):
code = dedent('''\
def func(a_var):
foo(f"abc {a_var} # ", 10)
''')
start = code.index('f"')
end = code.index('# "') + 3
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
def func(a_var):
foo(new_func(a_var), 10)
def new_func(a_var):
return f"abc {a_var} # "
''')
self.assertEqual(expected, refactored)
@unittest.expectedFailure
@testutils.only_for_versions_higher('3.6')
def test_extract_method_f_string_false_format_value_in_regular_string(self):
code = dedent('''\
def func(a_var):
b_var = 1
foo(f"abc {a_var} " "{b_var}" f"{b_var} def", 10)
''')
start = code.index('f"')
end = code.index('def"') + 4
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
def func(a_var):
b_var = 1
foo(new_func(a_var, b_var), 10)
def new_func(a_var, b_var):
return f"abc {a_var} " "{b_var}" f"{b_var} def"
''')
self.assertEqual(expected, refactored)
def test_variable_writes_in_the_same_line_as_variable_read(self):
code = 'a = 1\na = 1 + a\n'
start = code.index('\n') + 1
end = len(code)
refactored = self.do_extract_method(code, start, end, 'new_f',
global_=True)
expected = 'a = 1\n\ndef new_f(a):\n a = 1 + a\n\nnew_f(a)\n'
self.assertEqual(expected, refactored)
def test_variable_writes_in_the_same_line_as_variable_read2(self):
code = dedent('''\
a = 1
a += 1
''')
start = code.index('\n') + 1
end = len(code)
refactored = self.do_extract_method(code, start, end, 'new_f',
global_=True)
expected = dedent('''\
a = 1
def new_f(a):
a += 1
new_f(a)
''')
self.assertEqual(expected, refactored)
def test_variable_writes_in_the_same_line_as_variable_read3(self):
code = dedent('''\
a = 1
a += 1
print(a)
''')
start, end = self._convert_line_range_to_offset(code, 2, 2)
refactored = self.do_extract_method(code, start, end, 'new_f')
expected = dedent('''\
a = 1
def new_f(a):
a += 1
return a
a = new_f(a)
print(a)
''')
self.assertEqual(expected, refactored)
def test_variable_writes_only(self):
code = dedent('''\
i = 1
print(i)
''')
start, end = self._convert_line_range_to_offset(code, 1, 1)
refactored = self.do_extract_method(code, start, end, 'new_f')
expected = dedent('''\
def new_f():
i = 1
return i
i = new_f()
print(i)
''')
self.assertEqual(expected, refactored)
def test_variable_and_similar_expressions(self):
code = 'a = 1\nb = 1\n'
start = code.index('1')
end = start + 1
refactored = self.do_extract_variable(code, start, end,
'one', similar=True)
expected = 'one = 1\na = one\nb = one\n'
self.assertEqual(expected, refactored)
def test_definition_should_appear_before_the_first_use(self):
code = 'a = 1\nb = 1\n'
start = code.rindex('1')
end = start + 1
refactored = self.do_extract_variable(code, start, end,
'one', similar=True)
expected = 'one = 1\na = one\nb = one\n'
self.assertEqual(expected, refactored)
def test_extract_method_and_similar_expressions(self):
code = 'a = 1\nb = 1\n'
start = code.index('1')
end = start + 1
refactored = self.do_extract_method(code, start, end,
'one', similar=True)
expected = '\ndef one():\n return 1\n\na = one()\nb = one()\n'
self.assertEqual(expected, refactored)
def test_simple_extract_method_and_similar_statements(self):
code = 'class AClass(object):\n\n' \
' def func1(self):\n a = 1 + 2\n b = a\n' \
' def func2(self):\n a = 1 + 2\n b = a\n'
start, end = self._convert_line_range_to_offset(code, 4, 4)
refactored = self.do_extract_method(code, start, end,
'new_func', similar=True)
expected = 'class AClass(object):\n\n' \
' def func1(self):\n' \
' a = self.new_func()\n b = a\n\n' \
' def new_func(self):\n' \
' a = 1 + 2\n return a\n' \
' def func2(self):\n' \
' a = self.new_func()\n b = a\n'
self.assertEqual(expected, refactored)
def test_extract_method_and_similar_statements2(self):
code = 'class AClass(object):\n\n' \
' def func1(self, p1):\n a = p1 + 2\n' \
' def func2(self, p2):\n a = p2 + 2\n'
start = code.rindex('p1')
end = code.index('2\n') + 1
refactored = self.do_extract_method(code, start, end,
'new_func', similar=True)
expected = 'class AClass(object):\n\n' \
' def func1(self, p1):\n ' \
'a = self.new_func(p1)\n\n' \
' def new_func(self, p1):\n return p1 + 2\n' \
' def func2(self, p2):\n a = self.new_func(p2)\n'
self.assertEqual(expected, refactored)
def test_extract_method_and_similar_statements_return_is_different(self):
code = 'class AClass(object):\n\n' \
' def func1(self, p1):\n a = p1 + 2\n' \
' def func2(self, p2):\n self.attr = p2 + 2\n'
start = code.rindex('p1')
end = code.index('2\n') + 1
refactored = self.do_extract_method(code, start, end,
'new_func', similar=True)
expected = 'class AClass(object):\n\n' \
' def func1(self, p1):' \
'\n a = self.new_func(p1)\n\n' \
' def new_func(self, p1):\n return p1 + 2\n' \
' def func2(self, p2):\n' \
' self.attr = self.new_func(p2)\n'
self.assertEqual(expected, refactored)
def test_extract_method_and_similar_statements_overlapping_regions(self):
code = 'def func(p):\n' \
' a = p\n' \
' b = a\n' \
' c = b\n' \
' d = c\n' \
' return d'
start = code.index('a')
end = code.rindex('a') + 1
refactored = self.do_extract_method(
code, start, end, 'new_func', similar=True)
expected = 'def func(p):\n' \
' b = new_func(p)\n' \
' d = new_func(b)\n' \
' return d\n' \
'def new_func(p):\n' \
' a = p\n' \
' b = a\n' \
' return b\n'
self.assertEqual(expected, refactored)
def test_definition_should_appear_where_it_is_visible(self):
code = 'if True:\n a = 1\nelse:\n b = 1\n'
start = code.rindex('1')
end = start + 1
refactored = self.do_extract_variable(code, start, end,
'one', similar=True)
expected = 'one = 1\nif True:\n a = one\nelse:\n b = one\n'
self.assertEqual(expected, refactored)
def test_extract_variable_and_similar_statements_in_classes(self):
code = 'class AClass(object):\n\n' \
' def func1(self):\n a = 1\n' \
' def func2(self):\n b = 1\n'
start = code.index(' 1') + 1
refactored = self.do_extract_variable(code, start, start + 1,
'one', similar=True)
expected = 'class AClass(object):\n\n' \
' def func1(self):\n one = 1\n a = one\n' \
' def func2(self):\n b = 1\n'
self.assertEqual(expected, refactored)
def test_extract_method_in_staticmethods(self):
code = 'class AClass(object):\n\n' \
' @staticmethod\n def func2():\n b = 1\n'
start = code.index(' 1') + 1
refactored = self.do_extract_method(code, start, start + 1,
'one', similar=True)
expected = 'class AClass(object):\n\n' \
' @staticmethod\n def func2():\n' \
' b = AClass.one()\n\n' \
' @staticmethod\n def one():\n' \
' return 1\n'
self.assertEqual(expected, refactored)
def test_extract_normal_method_with_staticmethods(self):
code = 'class AClass(object):\n\n' \
' @staticmethod\n def func1():\n b = 1\n' \
' def func2(self):\n b = 1\n'
start = code.rindex(' 1') + 1
refactored = self.do_extract_method(code, start, start + 1,
'one', similar=True)
expected = 'class AClass(object):\n\n' \
' @staticmethod\n def func1():\n b = 1\n' \
' def func2(self):\n b = self.one()\n\n' \
' def one(self):\n return 1\n'
self.assertEqual(expected, refactored)
def test_extract_variable_with_no_new_lines_at_the_end(self):
code = 'a_var = 10'
start = code.index('10')
end = start + 2
refactored = self.do_extract_variable(code, start, end, 'new_var')
expected = 'new_var = 10\na_var = new_var'
self.assertEqual(expected, refactored)
def test_extract_method_containing_return_in_functions(self):
code = 'def f(arg):\n return arg\nprint(f(1))\n'
start, end = self._convert_line_range_to_offset(code, 1, 3)
refactored = self.do_extract_method(code, start, end, 'a_func')
expected = '\ndef a_func():\n def f(arg):\n return arg\n' \
' print(f(1))\n\na_func()\n'
self.assertEqual(expected, refactored)
def test_extract_method_and_varying_first_parameter(self):
code = 'class C(object):\n' \
' def f1(self):\n print(str(self))\n' \
' def f2(self):\n print(str(1))\n'
start = code.index('print(') + 6
end = code.index('))\n') + 1
refactored = self.do_extract_method(code, start, end,
'to_str', similar=True)
expected = 'class C(object):\n' \
' def f1(self):\n print(self.to_str())\n\n' \
' def to_str(self):\n return str(self)\n' \
' def f2(self):\n print(str(1))\n'
self.assertEqual(expected, refactored)
def test_extract_method_when_an_attribute_exists_in_function_scope(self):
code = 'class A(object):\n def func(self):\n pass\n' \
'a = A()\n' \
'def f():\n' \
' func = a.func()\n' \
' print(func)\n'
start, end = self._convert_line_range_to_offset(code, 6, 6)
refactored = self.do_extract_method(code, start, end, 'g')
refactored = refactored[refactored.index('A()') + 4:]
expected = 'def f():\n func = g()\n print(func)\n\n' \
'def g():\n func = a.func()\n return func\n'
self.assertEqual(expected, refactored)
def test_global_option_for_extract_method(self):
code = 'def a_func():\n print(1)\n'
start, end = self._convert_line_range_to_offset(code, 2, 2)
refactored = self.do_extract_method(code, start, end,
'extracted', global_=True)
expected = 'def a_func():\n extracted()\n\n' \
'def extracted():\n print(1)\n'
self.assertEqual(expected, refactored)
def test_global_extract_method(self):
code = 'class AClass(object):\n\n' \
' def a_func(self):\n print(1)\n'
start, end = self._convert_line_range_to_offset(code, 4, 4)
refactored = self.do_extract_method(code, start, end,
'new_func', global_=True)
expected = 'class AClass(object):\n\n' \
' def a_func(self):\n new_func()\n\n' \
'def new_func():\n print(1)\n'
self.assertEqual(expected, refactored)
def test_global_extract_method_with_multiple_methods(self):
code = 'class AClass(object):\n' \
' def a_func(self):\n' \
' print(1)\n\n' \
' def another_func(self):\n' \
' pass\n'
start, end = self._convert_line_range_to_offset(code, 3, 3)
refactored = self.do_extract_method(code, start, end,
'new_func', global_=True)
expected = 'class AClass(object):\n' \
' def a_func(self):\n' \
' new_func()\n\n' \
' def another_func(self):\n' \
' pass\n\n' \
'def new_func():\n' \
' print(1)\n'
self.assertEqual(expected, refactored)
def test_where_to_search_when_extracting_global_names(self):
code = 'def a():\n return 1\ndef b():\n return 1\nb = 1\n'
start = code.index('1')
end = start + 1
refactored = self.do_extract_variable(code, start, end, 'one',
similar=True, global_=True)
expected = 'def a():\n return one\none = 1\n' \
'def b():\n return one\nb = one\n'
self.assertEqual(expected, refactored)
def test_extracting_pieces_with_distinct_temp_names(self):
code = 'a = 1\nprint(a)\nb = 1\nprint(b)\n'
start = code.index('a')
end = code.index('\nb')
refactored = self.do_extract_method(code, start, end, 'f',
similar=True, global_=True)
expected = '\ndef f():\n a = 1\n print(a)\n\nf()\nf()\n'
self.assertEqual(expected, refactored)
def test_extract_methods_in_glob_funcs_should_be_glob(self):
code = 'def f():\n a = 1\ndef g():\n b = 1\n'
start = code.rindex('1')
refactored = self.do_extract_method(code, start, start + 1, 'one',
similar=True, global_=False)
expected = 'def f():\n a = one()\ndef g():\n b = one()\n\n' \
'def one():\n return 1\n'
self.assertEqual(expected, refactored)
def test_extract_methods_in_glob_funcs_should_be_glob_2(self):
code = 'if 1:\n var = 2\n'
start = code.rindex('2')
refactored = self.do_extract_method(code, start, start + 1, 'two',
similar=True, global_=False)
expected = '\ndef two():\n return 2\n\nif 1:\n var = two()\n'
self.assertEqual(expected, refactored)
def test_extract_method_and_try_blocks(self):
code = 'def f():\n try:\n pass\n' \
' except Exception:\n pass\n'
start, end = self._convert_line_range_to_offset(code, 2, 5)
refactored = self.do_extract_method(code, start, end, 'g')
expected = 'def f():\n g()\n\ndef g():\n try:\n pass\n' \
' except Exception:\n pass\n'
self.assertEqual(expected, refactored)
def test_extract_and_not_passing_global_functions(self):
code = 'def next(p):\n return p + 1\nvar = next(1)\n'
start = code.rindex('next')
refactored = self.do_extract_method(code, start, len(code) - 1, 'two')
expected = 'def next(p):\n return p + 1\n' \
'\ndef two():\n return next(1)\n\nvar = two()\n'
self.assertEqual(expected, refactored)
def test_extracting_with_only_one_return(self):
code = 'def f():\n var = 1\n return var\n'
start, end = self._convert_line_range_to_offset(code, 2, 3)
refactored = self.do_extract_method(code, start, end, 'g')
expected = 'def f():\n return g()\n\n' \
'def g():\n var = 1\n return var\n'
self.assertEqual(expected, refactored)
def test_extracting_variable_and_implicit_continuations(self):
code = 's = ("1"\n "2")\n'
start = code.index('"')
end = code.rindex('"') + 1
refactored = self.do_extract_variable(code, start, end, 's2')
expected = 's2 = "1" "2"\ns = (s2)\n'
self.assertEqual(expected, refactored)
def test_extracting_method_and_implicit_continuations(self):
code = 's = ("1"\n "2")\n'
start = code.index('"')
end = code.rindex('"') + 1
refactored = self.do_extract_method(code, start, end, 'f')
expected = '\ndef f():\n return "1" "2"\n\ns = (f())\n'
self.assertEqual(expected, refactored)
def test_passing_conditional_updated_vars_in_extracted(self):
code = 'def f(a):\n' \
' if 0:\n' \
' a = 1\n' \
' print(a)\n'
start, end = self._convert_line_range_to_offset(code, 2, 4)
refactored = self.do_extract_method(code, start, end, 'g')
expected = 'def f(a):\n' \
' g(a)\n\n' \
'def g(a):\n' \
' if 0:\n' \
' a = 1\n' \
' print(a)\n'
self.assertEqual(expected, refactored)
def test_returning_conditional_updated_vars_in_extracted(self):
code = dedent("""\
def f(a):
if 0:
a = 1
print(a)
""")
start, end = self._convert_line_range_to_offset(code, 2, 3)
refactored = self.do_extract_method(code, start, end, 'g')
expected = dedent("""\
def f(a):
a = g(a)
print(a)
def g(a):
if 0:
a = 1
return a
""")
self.assertEqual(expected, refactored)
def test_extract_method_with_variables_possibly_written_to(self):
code = "def a_func(b):\n" \
" if b > 0:\n" \
" a = 2\n" \
" print(a)\n"
start, end = self._convert_line_range_to_offset(code, 2, 3)
refactored = self.do_extract_method(code, start, end, 'extracted')
expected = "def a_func(b):\n" \
" a = extracted(b)\n" \
" print(a)\n\n" \
"def extracted(b):\n" \
" if b > 0:\n" \
" a = 2\n" \
" return a\n"
self.assertEqual(expected, refactored)
def test_extract_method_with_list_comprehension(self):
code = "def foo():\n" \
" x = [e for e in []]\n" \
" f = 23\n" \
"\n" \
" for e, f in []:\n" \
" def bar():\n" \
" e[42] = 1\n"
start, end = self._convert_line_range_to_offset(code, 4, 7)
refactored = self.do_extract_method(code, start, end, 'baz')
expected = "def foo():\n" \
" x = [e for e in []]\n" \
" f = 23\n" \
"\n" \
" baz()\n" \
"\n" \
"def baz():\n" \
" for e, f in []:\n" \
" def bar():\n" \
" e[42] = 1\n"
self.assertEqual(expected, refactored)
def test_extract_method_with_list_comprehension_and_iter(self):
code = dedent("""\
def foo():
x = [e for e in []]
f = 23
for x, f in x:
def bar():
x[42] = 1
""")
start, end = self._convert_line_range_to_offset(code, 4, 7)
refactored = self.do_extract_method(code, start, end, 'baz')
expected = dedent("""\
def foo():
x = [e for e in []]
f = 23
baz(x)
def baz(x):
for x, f in x:
def bar():
x[42] = 1
""")
self.assertEqual(expected, refactored)
def test_extract_method_with_list_comprehension_and_orelse(self):
code = "def foo():\n" \
" x = [e for e in []]\n" \
" f = 23\n" \
"\n" \
" for e, f in []:\n" \
" def bar():\n" \
" e[42] = 1\n"
start, end = self._convert_line_range_to_offset(code, 4, 7)
refactored = self.do_extract_method(code, start, end, 'baz')
expected = "def foo():\n" \
" x = [e for e in []]\n" \
" f = 23\n" \
"\n" \
" baz()\n" \
"\n" \
"def baz():\n" \
" for e, f in []:\n" \
" def bar():\n" \
" e[42] = 1\n"
self.assertEqual(expected, refactored)
def test_extract_function_with_for_else_statement(self):
code = 'def a_func():\n for i in range(10):\n a = i\n ' \
'else:\n a = None\n'
start = code.index('for')
end = len(code) - 1
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n new_func()\n\n' \
'def new_func():\n' \
' for i in range(10):\n a = i\n else:\n' \
' a = None\n'
self.assertEqual(expected, refactored)
def test_extract_function_with_for_else_statement_more(self):
"""TODO: fix code so that this test passes."""
code = 'def a_func():\n'\
' for i in range(10):\n'\
' a = i\n'\
' else:\n'\
' for i in range(5):\n'\
' b = i\n'\
' else:\n'\
' b = None\n'\
' a = None\n'
start = code.index('for')
end = len(code) - 1
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = 'def a_func():\n new_func()\n\n' \
'def new_func():\n' \
' for i in range(10):\n'\
' a = i\n'\
' else:\n'\
' for i in range(5):\n'\
' b = i\n'\
' else:\n'\
' b = None\n'\
' a = None\n'
self.assertEqual(expected, refactored)
def test_extract_function_with_for_else_statement_outside_loops(self):
code = dedent('''\
def a_func():
for i in range(10):
a = i
else:
a=None
''')
start = code.index('a = i')
end = len(code) - 1
with self.assertRaises(rope.base.exceptions.RefactoringError):
self.do_extract_method(code, start, end, 'new_func')
def test_extract_function_with_inline_assignment_in_method(self):
code = dedent('''\
def foo():
i = 1
i += 1
print(i)
''')
start, end = self._convert_line_range_to_offset(code, 3, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
def foo():
i = 1
i = new_func(i)
print(i)
def new_func(i):
i += 1
return i
''')
self.assertEqual(expected, refactored)
@testutils.only_for_versions_higher('3.8')
def test_extract_function_statement_with_inline_assignment_in_condition(self):
code = dedent('''\
def foo(a):
if i := a == 5:
i += 1
print(i)
''')
start, end = self._convert_line_range_to_offset(code, 2, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
def foo(a):
i = new_func(a)
print(i)
def new_func(a):
if i := a == 5:
i += 1
return i
''')
self.assertEqual(expected, refactored)
@testutils.only_for_versions_higher('3.8')
def test_extract_function_expression_with_inline_assignment_in_condition(self):
code = dedent('''\
def foo(a):
if i := a == 5:
i += 1
print(i)
''')
extract_target = 'i := a == 5'
start, end = code.index(extract_target), code.index(extract_target) + len(extract_target)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
def foo(a):
if i := new_func(a):
i += 1
print(i)
def new_func(a):
return (i := a == 5)
''')
self.assertEqual(expected, refactored)
@testutils.only_for_versions_higher('3.8')
def test_extract_function_expression_with_inline_assignment_complex(self):
code = dedent('''\
def foo(a):
if i := a == (c := 5):
i += 1
c += 1
print(i)
''')
extract_target = 'i := a == (c := 5)'
start, end = code.index(extract_target), code.index(extract_target) + len(extract_target)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
def foo(a):
if i, c := new_func(a):
i += 1
c += 1
print(i)
def new_func(a):
return (i := a == (c := 5))
''')
self.assertEqual(expected, refactored)
@testutils.only_for_versions_higher('3.8')
def test_extract_function_expression_with_inline_assignment_in_inner_expression(self):
code = dedent('''\
def foo(a):
if a == (c := 5):
c += 1
print(i)
''')
extract_target = 'a == (c := 5)'
start, end = code.index(extract_target), code.index(extract_target) + len(extract_target)
with self.assertRaisesRegex(rope.base.exceptions.RefactoringError, 'Extracted piece cannot contain named expression \\(:= operator\\).'):
self.do_extract_method(code, start, end, 'new_func')
def test_extract_exec(self):
code = dedent('''\
exec("def f(): pass", {})
''')
start, end = self._convert_line_range_to_offset(code, 1, 1)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
def new_func():
exec("def f(): pass", {})
new_func()
''')
self.assertEqual(expected, refactored)
@testutils.only_for_versions_lower('3')
def test_extract_exec_statement(self):
code = dedent('''\
exec "def f(): pass" in {}
''')
start, end = self._convert_line_range_to_offset(code, 1, 1)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
def new_func():
exec "def f(): pass" in {}
new_func()
''')
self.assertEqual(expected, refactored)
@testutils.only_for_versions_higher('3.5')
def test_extract_async_function(self):
code = dedent('''\
async def my_func(my_list):
for x in my_list:
var = x + 1
return var
''')
start, end = self._convert_line_range_to_offset(code, 3, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
async def my_func(my_list):
for x in my_list:
var = new_func(x)
return var
def new_func(x):
var = x + 1
return var
''')
self.assertEqual(expected, refactored)
@testutils.only_for_versions_higher('3.5')
def test_extract_inner_async_function(self):
code = dedent('''\
def my_func(my_list):
async def inner_func(my_list):
for x in my_list:
var = x + 1
return inner_func
''')
start, end = self._convert_line_range_to_offset(code, 2, 4)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
def my_func(my_list):
inner_func = new_func(my_list)
return inner_func
def new_func(my_list):
async def inner_func(my_list):
for x in my_list:
var = x + 1
return inner_func
''')
self.assertEqual(expected, refactored)
@testutils.only_for_versions_higher('3.5')
def test_extract_around_inner_async_function(self):
code = dedent('''\
def my_func(lst):
async def inner_func(obj):
for x in obj:
var = x + 1
return map(inner_func, lst)
''')
start, end = self._convert_line_range_to_offset(code, 5, 5)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
def my_func(lst):
async def inner_func(obj):
for x in obj:
var = x + 1
return new_func(inner_func, lst)
def new_func(inner_func, lst):
return map(inner_func, lst)
''')
self.assertEqual(expected, refactored)
@testutils.only_for_versions_higher('3.5')
def test_extract_refactor_around_async_for_loop(self):
code = dedent('''\
async def my_func(my_list):
async for x in my_list:
var = x + 1
return var
''')
start, end = self._convert_line_range_to_offset(code, 3, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
async def my_func(my_list):
async for x in my_list:
var = new_func(x)
return var
def new_func(x):
var = x + 1
return var
''')
self.assertEqual(expected, refactored)
@testutils.only_for_versions_higher('3.5')
@testutils.only_for_versions_lower('3.8')
def test_extract_refactor_containing_async_for_loop_should_error_before_py38(self):
"""
Refactoring async/await syntaxes is only supported in Python 3.8 and
higher because support for ast.PyCF_ALLOW_TOP_LEVEL_AWAIT was only
added to the standard library in Python 3.8.
"""
code = dedent('''\
async def my_func(my_list):
async for x in my_list:
var = x + 1
return var
''')
start, end = self._convert_line_range_to_offset(code, 2, 3)
with self.assertRaisesRegex(rope.base.exceptions.RefactoringError, 'Extracted piece can only have async/await statements if Rope is running on Python 3.8 or higher'):
self.do_extract_method(code, start, end, 'new_func')
@testutils.only_for_versions_higher('3.8')
def test_extract_refactor_containing_async_for_loop_is_supported_after_py38(self):
code = dedent('''\
async def my_func(my_list):
async for x in my_list:
var = x + 1
return var
''')
start, end = self._convert_line_range_to_offset(code, 2, 3)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
async def my_func(my_list):
var = new_func(my_list)
return var
def new_func(my_list):
async for x in my_list:
var = x + 1
return var
''')
self.assertEqual(expected, refactored)
@testutils.only_for_versions_higher('3.5')
def test_extract_await_expression(self):
code = dedent('''\
async def my_func(my_list):
for url in my_list:
resp = await request(url)
return resp
''')
selected = 'request(url)'
start, end = code.index(selected), code.index(selected) + len(selected)
refactored = self.do_extract_method(code, start, end, 'new_func')
expected = dedent('''\
async def my_func(my_list):
for url in my_list:
resp = await new_func(url)
return resp
def new_func(url):
return request(url)
''')
self.assertEqual(expected, refactored)
def test_extract_to_staticmethod(self):
code = dedent('''\
class A:
def first_method(self):
a_var = 1
b_var = a_var + 1
''')
extract_target = 'a_var + 1'
start, end = code.index(extract_target), code.index(extract_target) + len(extract_target)
refactored = self.do_extract_method(code, start, end, 'second_method', kind="staticmethod")
expected = dedent('''\
class A:
def first_method(self):
a_var = 1
b_var = A.second_method(a_var)
@staticmethod
def second_method(a_var):
return a_var + 1
''')
self.assertEqual(expected, refactored)
def test_extract_to_staticmethod_when_self_in_body(self):
code = dedent('''\
class A:
def first_method(self):
a_var = 1
b_var = self.a_var + 1
''')
extract_target = 'self.a_var + 1'
start, end = code.index(extract_target), code.index(extract_target) + len(extract_target)
refactored = self.do_extract_method(code, start, end, 'second_method', kind="staticmethod")
expected = dedent('''\
class A:
def first_method(self):
a_var = 1
b_var = A.second_method(self)
@staticmethod
def second_method(self):
return self.a_var + 1
''')
self.assertEqual(expected, refactored)
def test_extract_from_function_to_staticmethod_raises_exception(self):
code = dedent('''\
def first_method():
a_var = 1
b_var = a_var + 1
''')
extract_target = 'a_var + 1'
start, end = code.index(extract_target), code.index(extract_target) + len(extract_target)
with self.assertRaisesRegex(rope.base.exceptions.RefactoringError, "Cannot extract to staticmethod/classmethod outside class"):
self.do_extract_method(code, start, end, 'second_method', kind="staticmethod")
def test_extract_method_in_classmethods(self):
code = dedent('''\
class AClass(object):
@classmethod
def func2(cls):
b = 1
''')
start = code.index(' 1') + 1
refactored = self.do_extract_method(code, start, start + 1,
'one', similar=True)
expected = dedent('''\
class AClass(object):
@classmethod
def func2(cls):
b = AClass.one()
@classmethod
def one(cls):
return 1
''')
self.assertEqual(expected, refactored)
def test_extract_from_function_to_classmethod_raises_exception(self):
code = dedent('''\
def first_method():
a_var = 1
b_var = a_var + 1
''')
extract_target = 'a_var + 1'
start, end = code.index(extract_target), code.index(extract_target) + len(extract_target)
with self.assertRaisesRegex(rope.base.exceptions.RefactoringError, "Cannot extract to staticmethod/classmethod outside class"):
self.do_extract_method(code, start, end, 'second_method', kind="classmethod")
def test_extract_to_classmethod_when_self_in_body(self):
code = dedent('''\
class A:
def first_method(self):
a_var = 1
b_var = self.a_var + 1
''')
extract_target = 'self.a_var + 1'
start, end = code.index(extract_target), code.index(extract_target) + len(extract_target)
refactored = self.do_extract_method(code, start, end, 'second_method', kind="classmethod")
expected = dedent('''\
class A:
def first_method(self):
a_var = 1
b_var = A.second_method(self)
@classmethod
def second_method(cls, self):
return self.a_var + 1
''')
self.assertEqual(expected, refactored)
def test_extract_to_classmethod(self):
code = dedent('''\
class A:
def first_method(self):
a_var = 1
b_var = a_var + 1
''')
extract_target = 'a_var + 1'
start, end = code.index(extract_target), code.index(extract_target) + len(extract_target)
refactored = self.do_extract_method(code, start, end, 'second_method', kind="classmethod")
expected = dedent('''\
class A:
def first_method(self):
a_var = 1
b_var = A.second_method(a_var)
@classmethod
def second_method(cls, a_var):
return a_var + 1
''')
self.assertEqual(expected, refactored)
def test_extract_to_classmethod_when_name_starts_with_at_sign(self):
code = dedent('''\
class A:
def first_method(self):
a_var = 1
b_var = a_var + 1
''')
extract_target = 'a_var + 1'
start, end = code.index(extract_target), code.index(extract_target) + len(extract_target)
refactored = self.do_extract_method(code, start, end, '@second_method')
expected = dedent('''\
class A:
def first_method(self):
a_var = 1
b_var = A.second_method(a_var)
@classmethod
def second_method(cls, a_var):
return a_var + 1
''')
self.assertEqual(expected, refactored)
def test_extract_to_staticmethod_when_name_starts_with_dollar_sign(self):
code = dedent('''\
class A:
def first_method(self):
a_var = 1
b_var = a_var + 1
''')
extract_target = 'a_var + 1'
start, end = code.index(extract_target), code.index(extract_target) + len(extract_target)
refactored = self.do_extract_method(code, start, end, '$second_method')
expected = dedent('''\
class A:
def first_method(self):
a_var = 1
b_var = A.second_method(a_var)
@staticmethod
def second_method(a_var):
return a_var + 1
''')
self.assertEqual(expected, refactored)
def test_raises_exception_when_sign_in_name_and_kind_mismatch(self):
with self.assertRaisesRegex(rope.base.exceptions.RefactoringError, "Kind and shortcut in name mismatch"):
self.do_extract_method("code", 0, 1, '$second_method', kind="classmethod")
def test_extracting_from_static_with_function_arg(self):
code = dedent('''\
class A:
@staticmethod
def first_method(someargs):
b_var = someargs + 1
''')
extract_target = 'someargs + 1'
start, end = code.index(extract_target), code.index(extract_target) + len(extract_target)
refactored = self.do_extract_method(code, start, end, 'second_method')
expected = dedent('''\
class A:
@staticmethod
def first_method(someargs):
b_var = A.second_method(someargs)
@staticmethod
def second_method(someargs):
return someargs + 1
''')
self.assertEqual(expected, refactored)
if __name__ == '__main__':
    unittest.main()
| 41.124931 | 175 | 0.519338 | 9,088 | 74,724 | 4.039173 | 0.039283 | 0.042715 | 0.046747 | 0.070802 | 0.878283 | 0.850142 | 0.831917 | 0.807018 | 0.772774 | 0.732538 | 0 | 0.017541 | 0.356833 | 74,724 | 1,816 | 176 | 41.147577 | 0.746255 | 0.004416 | 0 | 0.684178 | 0 | 0.00309 | 0.360593 | 0.010787 | 0 | 0 | 0 | 0.000551 | 0.081582 | 1 | 0.084672 | false | 0.014215 | 0.004944 | 0 | 0.118665 | 0.051298 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f396392718dced183210c2641758b3812c4dac46 | 152 | py | Python | nflapi/views.py | OnerInce/nfl-rest_api | 8d66d68ae7f04476a1b9f509e69a9d0dc83bfcca | [
"Apache-2.0"
] | 2 | 2021-06-14T18:14:10.000Z | 2022-01-29T18:45:28.000Z | nflapi/views.py | OnerInce/nfl-rest_api | 8d66d68ae7f04476a1b9f509e69a9d0dc83bfcca | [
"Apache-2.0"
] | null | null | null | nflapi/views.py | OnerInce/nfl-rest_api | 8d66d68ae7f04476a1b9f509e69a9d0dc83bfcca | [
"Apache-2.0"
] | 1 | 2022-02-09T14:14:20.000Z | 2022-02-09T14:14:20.000Z | from django.http import JsonResponse


def WelcomeView(request):
    return JsonResponse({'result': 'success', 'message': 'Welcome to the NFL Rest API'})
| 25.333333 | 86 | 0.743421 | 19 | 152 | 5.947368 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.131579 | 152 | 5 | 87 | 30.4 | 0.856061 | 0 | 0 | 0 | 0 | 0 | 0.309211 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
f3c0d696dbd46cb57ca3a86480600e181f1d3703 | 1,832 | py | Python | alembic/versions/5730636f9d23_add_amount_dollars_to_tweet_attempt.py | jonathanzong/dmca | 70157cff983310e5951024aa80e99e7a5404d758 | [
"MIT"
] | 2 | 2022-02-16T22:50:06.000Z | 2022-02-21T19:38:02.000Z | alembic/versions/5730636f9d23_add_amount_dollars_to_tweet_attempt.py | jonathanzong/dmca | 70157cff983310e5951024aa80e99e7a5404d758 | [
"MIT"
] | 2 | 2022-02-01T05:48:07.000Z | 2022-02-01T05:49:29.000Z | alembic/versions/5730636f9d23_add_amount_dollars_to_tweet_attempt.py | jonathanzong/bartleby | 70157cff983310e5951024aa80e99e7a5404d758 | [
"MIT"
] | null | null | null | """add amount_dollars to tweet attempt
Revision ID: 5730636f9d23
Revises: 474e96e30c5c
Create Date: 2018-03-29 17:42:55.141809
"""
# revision identifiers, used by Alembic.
revision = '5730636f9d23'
down_revision = '474e96e30c5c'
branch_labels = None
depends_on = None
from alembic import op
import sqlalchemy as sa
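# Alembic's multi-database template: upgrade()/downgrade() receive an engine
# name ('development', 'test', 'production') and dispatch to the matching
# upgrade_<engine>/downgrade_<engine> function below via a globals() lookup.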
def upgrade(engine_name):
    globals()["upgrade_%s" % engine_name]()


def downgrade(engine_name):
    globals()["downgrade_%s" % engine_name]()


def upgrade_development():
    # ### commands auto generated by Alembic - please adjust! ###
    op.add_column('twitter_user_recruitment_tweet_attempt',
                  sa.Column('amount_dollars', sa.Integer(), nullable=True))
    # ### end Alembic commands ###


def downgrade_development():
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_column('twitter_user_recruitment_tweet_attempt', 'amount_dollars')
    # ### end Alembic commands ###


def upgrade_test():
    # ### commands auto generated by Alembic - please adjust! ###
    op.add_column('twitter_user_recruitment_tweet_attempt',
                  sa.Column('amount_dollars', sa.Integer(), nullable=True))
    # ### end Alembic commands ###


def downgrade_test():
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_column('twitter_user_recruitment_tweet_attempt', 'amount_dollars')
    # ### end Alembic commands ###


def upgrade_production():
    # ### commands auto generated by Alembic - please adjust! ###
    op.add_column('twitter_user_recruitment_tweet_attempt',
                  sa.Column('amount_dollars', sa.Integer(), nullable=True))
    # ### end Alembic commands ###


def downgrade_production():
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_column('twitter_user_recruitment_tweet_attempt', 'amount_dollars')
    # ### end Alembic commands ###
| 28.184615 | 117 | 0.712336 | 218 | 1,832 | 5.747706 | 0.279817 | 0.072626 | 0.100559 | 0.110136 | 0.721468 | 0.721468 | 0.721468 | 0.721468 | 0.721468 | 0.681564 | 0 | 0.036246 | 0.156659 | 1,832 | 64 | 118 | 28.625 | 0.774757 | 0.341157 | 0 | 0.272727 | 0 | 0 | 0.321653 | 0.204852 | 0 | 0 | 0 | 0 | 0 | 1 | 0.363636 | false | 0 | 0.090909 | 0 | 0.454545 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f3c1551e2d9cb4b23cecb9d94059a1a666aea801 | 8 | py | Python | constants.py | Aniq55/pyxel | 9285e5b899ca6ec694112447b073fa7ead630159 | [
"MIT"
] | null | null | null | constants.py | Aniq55/pyxel | 9285e5b899ca6ec694112447b073fa7ead630159 | [
"MIT"
] | null | null | null | constants.py | Aniq55/pyxel | 9285e5b899ca6ec694112447b073fa7ead630159 | [
"MIT"
] | 1 | 2018-08-16T19:42:12.000Z | 2018-08-16T19:42:12.000Z | N = 3.5
| 4 | 7 | 0.375 | 3 | 8 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.4 | 0.375 | 8 | 1 | 8 | 8 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
45f18e0b880cf37403f7ab47ce999eee864048d2 | 38 | py | Python | dst_topology/__init__.py | CiscoDevNet/dst-automation | dffcde76f1bd7dc4dd4350c7a224f8ad9679ad4a | [
"BSD-3-Clause"
] | 4 | 2020-04-28T16:38:18.000Z | 2021-06-09T08:45:24.000Z | dst_topology/__init__.py | CiscoDevNet/dst-automation | dffcde76f1bd7dc4dd4350c7a224f8ad9679ad4a | [
"BSD-3-Clause"
] | 6 | 2020-11-04T16:35:42.000Z | 2021-04-25T13:38:56.000Z | dst_topology/__init__.py | CiscoDevNet/dst-automation | dffcde76f1bd7dc4dd4350c7a224f8ad9679ad4a | [
"BSD-3-Clause"
] | 3 | 2020-05-13T22:43:50.000Z | 2021-05-01T22:30:33.000Z | from .dst_topology import DSTTopology
| 19 | 37 | 0.868421 | 5 | 38 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 38 | 1 | 38 | 38 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
45f3e2020d63f96b1734bfa6a768b952b5c5c598 | 5,468 | py | Python | smdt/regression.py | avanteijlingen/PyMolSAR | b6f7188f1462630b54347ff1de28160e993d31f2 | [
"MIT"
] | null | null | null | smdt/regression.py | avanteijlingen/PyMolSAR | b6f7188f1462630b54347ff1de28160e993d31f2 | [
"MIT"
] | null | null | null | smdt/regression.py | avanteijlingen/PyMolSAR | b6f7188f1462630b54347ff1de28160e993d31f2 | [
"MIT"
] | null | null | null | from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import StandardScaler, LabelEncoder
try:
    from sklearn.preprocessing import Imputer
except ImportError:
    # Imputer was removed in scikit-learn 0.22; alias SimpleImputer so the
    # call sites below keep working on newer versions.
    from sklearn.impute import SimpleImputer as Imputer
from sklearn.feature_selection import SelectKBest, mutual_info_regression
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LassoCV, RidgeCV, ElasticNetCV
from sklearn.decomposition import PCA
from sklearn import metrics
from sklearn.svm import LinearSVR
from sklearn.pipeline import Pipeline
from smdt import data_processing
from smdt import molecular_descriptors
import numpy as np
import pandas as pd
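# Each fit_* helper below builds the same four-stage pipeline -- median
# imputation, standardization, SelectKBest(mutual_info_regression) feature
# selection, then the estimator -- and tunes it with GridSearchCV. The
# returned `metric` list holds, in order: grid score, explained variance,
# MAE, MSE, median absolute error, and R^2 on the held-out test set.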
def fit_Ridge(X_train, X_test, y_train, y_test):
a = Imputer(missing_values='NaN', strategy='median', axis=0)
b = StandardScaler()
c = SelectKBest(score_func=mutual_info_regression)
clf = RidgeCV(cv=10)
model = Pipeline([('impute', a), ('scaling', b), ('anova', c), ('rf', clf)])
# Grid Search CV
parameters = {'anova__k': [5, 10, 20, 40]}
grid = GridSearchCV(model, parameters)
grid.fit(X_train, y_train)
y_pred = grid.predict(X_test)
# Metrics
metric = [grid.score(X_test, y_test),
metrics.explained_variance_score(y_test, y_pred),
metrics.mean_absolute_error(y_test, y_pred),
metrics.mean_squared_error(y_test, y_pred),
metrics.median_absolute_error(y_test, y_pred),
metrics.r2_score(y_test, y_pred)]
return grid, y_pred, metric
def fit_ElasticNet(X_train, X_test, y_train, y_test):
a = Imputer(missing_values='NaN', strategy='median', axis=0)
b = StandardScaler()
c = SelectKBest(score_func=mutual_info_regression)
clf = ElasticNetCV(cv=10)
model = Pipeline([('impute', a), ('scaling', b), ('anova', c), ('rf', clf)])
# Grid Search CV
parameters = {'anova__k': [5, 10, 20, 40]}
grid = GridSearchCV(model, parameters)
grid.fit(X_train, y_train)
y_pred = grid.predict(X_test)
# Metrics
metric = [grid.score(X_test, y_test),
metrics.explained_variance_score(y_test, y_pred),
metrics.mean_absolute_error(y_test, y_pred),
metrics.mean_squared_error(y_test, y_pred),
metrics.median_absolute_error(y_test, y_pred),
metrics.r2_score(y_test, y_pred)]
return grid, y_pred, metric
def fit_LinearSVR(X_train, X_test, y_train, y_test):
a = Imputer(missing_values='NaN', strategy='median', axis=0)
b = StandardScaler()
c = SelectKBest(score_func=mutual_info_regression)
clf = LinearSVR()
model = Pipeline([('impute', a), ('scaling', b), ('anova', c), ('rf', clf)])
# Grid Search CV
parameters = {'anova__k': [5, 10, 20, 40],
'rf__C':[1,5,10],'rf__loss':['epsilon_insensitive','squared_epsilon_insensitive'],'rf__epsilon':[0,0.1]}
grid = GridSearchCV(model, parameters)
grid.fit(X_train, y_train)
y_pred = grid.predict(X_test)
# Metrics
metric = [grid.score(X_test, y_test),
metrics.explained_variance_score(y_test, y_pred),
metrics.mean_absolute_error(y_test, y_pred),
metrics.mean_squared_error(y_test, y_pred),
metrics.median_absolute_error(y_test, y_pred),
metrics.r2_score(y_test, y_pred)]
return grid, y_pred, metric
def fit_Lasso(X_train, X_test, y_train, y_test):
a = Imputer(missing_values='NaN', strategy='median', axis=0)
b = StandardScaler()
c = SelectKBest(score_func=mutual_info_regression)
clf = LassoCV(cv=10)
model = Pipeline([('impute', a), ('scaling', b), ('anova', c), ('rf', clf)])
# Grid Search CV
parameters = {'anova__k': [5, 10, 20, 40]}
grid = GridSearchCV(model, parameters)
grid.fit(X_train, y_train)
y_pred = grid.predict(X_test)
# Metrics
metric = [grid.score(X_test, y_test),
metrics.explained_variance_score(y_test, y_pred),
metrics.mean_absolute_error(y_test, y_pred),
metrics.mean_squared_error(y_test, y_pred),
metrics.median_absolute_error(y_test, y_pred),
metrics.r2_score(y_test, y_pred)]
return grid, y_pred, metric
def fit_RandomForestRegressor(X_train, X_test, y_train, y_test):
a = Imputer(missing_values='NaN', strategy='median', axis=0)
b = StandardScaler()
c = SelectKBest(score_func=mutual_info_regression)
clf = RandomForestRegressor()
model = Pipeline([('impute', a), ('scaling', b), ('anova', c), ('rf', clf)])
# Grid Search CV
parameters = {'anova__k': [5, 10, 20, 40],
'rf__n_estimators': [10, 100],
# 'mse'/'mae' are the pre-1.0 scikit-learn criterion names ('squared_error'/
# 'absolute_error' later); 'auto' for max_features was removed in 1.3.
'rf__criterion': ['mse', 'mae'],
'rf__max_features': ['auto', 'sqrt', 'log2'], 'rf__oob_score': [True, False],
"rf__max_depth": [3, None], "rf__min_samples_split": [2, 3, 10], "rf__min_samples_leaf": [1, 3, 10]}
grid = GridSearchCV(model, parameters)
grid.fit(X_train, y_train)
y_pred = grid.predict(X_test)
# Metrics
metric = [grid.score(X_test, y_test),
metrics.explained_variance_score(y_test, y_pred),
metrics.mean_absolute_error(y_test, y_pred),
metrics.mean_squared_error(y_test, y_pred),
metrics.median_absolute_error(y_test, y_pred),
metrics.r2_score(y_test, y_pred)]
return grid, y_pred, metric
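# --- Illustrative usage (added sketch; not part of the original module) ---
# Synthetic smoke test: in real use, X would be an smdt descriptor table and
# y the measured target values; all names below are placeholders.
from sklearn.model_selection import train_test_split
_rng = np.random.RandomState(0)
_X_demo = pd.DataFrame(_rng.rand(200, 50))
_y_demo = _rng.rand(200)
_Xtr, _Xte, _ytr, _yte = train_test_split(_X_demo, _y_demo, test_size=0.2, random_state=0)
_grid, _pred, _metric = fit_Ridge(_Xtr, _Xte, _ytr, _yte)
print('best params:', _grid.best_params_)
print('[score, explained_var, MAE, MSE, median_AE, R2]:', _metric)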
| 35.738562 | 122 | 0.65801 | 744 | 5,468 | 4.533602 | 0.151882 | 0.051883 | 0.044471 | 0.074118 | 0.735251 | 0.735251 | 0.735251 | 0.735251 | 0.735251 | 0.735251 | 0 | 0.017048 | 0.216898 | 5,468 | 152 | 123 | 35.973684 | 0.770668 | 0.020849 | 0 | 0.707547 | 0 | 0 | 0.072084 | 0.008987 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04717 | false | 0 | 0.150943 | 0 | 0.245283 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
34236dfeb5d60b14b04a99217d8a38201a226d55 | 14,925 | py | Python | tests/cupyx_tests/test_pinned_array.py | Onkar627/cupy | 8eef1ad5393c0a92c5065bc05137bf997f37044a | [
"MIT"
] | 6,180 | 2016-11-01T14:22:30.000Z | 2022-03-31T08:39:20.000Z | tests/cupyx_tests/test_pinned_array.py | Onkar627/cupy | 8eef1ad5393c0a92c5065bc05137bf997f37044a | [
"MIT"
] | 6,281 | 2016-12-22T07:42:31.000Z | 2022-03-31T19:57:02.000Z | tests/cupyx_tests/test_pinned_array.py | Onkar627/cupy | 8eef1ad5393c0a92c5065bc05137bf997f37044a | [
"MIT"
] | 829 | 2017-02-23T05:46:12.000Z | 2022-03-27T17:40:03.000Z | import unittest
import numpy
import pytest
import cupy
from cupy import testing
from cupy.testing._loops import _wraps_partial
import cupyx
def numpy_cupyx_array_equal(target_func, name='func'):
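# Decorator factory: the wrapped test body runs twice -- once with the plain
# NumPy function and once with the cupyx pinned-memory variant of
# `target_func`, injected through the `name` keyword -- then the CuPy result
# is checked to live in pinned host memory and both outputs are compared.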
_mod = (cupy, numpy)
_numpy_funcs = {
'empty': numpy.empty,
'empty_like': numpy.empty_like,
'zeros': numpy.zeros,
'zeros_like': numpy.zeros_like,
}
_cupy_funcs = {
'empty': cupyx.empty_pinned,
'empty_like': cupyx.empty_like_pinned,
'zeros': cupyx.zeros_pinned,
'zeros_like': cupyx.zeros_like_pinned,
}
def _get_test_func(xp, func):
if xp is numpy:
return _numpy_funcs[func]
elif xp is cupy:
return _cupy_funcs[func]
else:
assert False
def _check_pinned_mem_used(a, xp):
if xp is cupy:
assert isinstance(a.base, cupy.cuda.PinnedMemoryPointer)
assert a.base.ptr == a.ctypes.data
def decorator(impl):
@_wraps_partial(impl, name)
def test_func(self, *args, **kw):
out = []
for xp in _mod:
func = _get_test_func(xp, target_func)
kw[name] = func
a = impl(self, *args, **kw)
_check_pinned_mem_used(a, xp)
out.append(a)
numpy.testing.assert_array_equal(*out)
return test_func
return decorator
# test_empty_scalar_none is removed
# test_zeros_scalar_none is removed
class TestBasic(unittest.TestCase):
@testing.for_CF_orders()
@testing.for_all_dtypes()
@numpy_cupyx_array_equal(target_func='empty')
def test_empty(self, dtype, order, func):
a = func((2, 3, 4), dtype=dtype, order=order)
a.fill(0)
return a
@testing.slow
def test_empty_huge_size(self):
a = cupyx.empty_pinned((1024, 2048, 1024), dtype='b')
a.fill(123)
assert (a == 123).all()
# Free huge memory for slow test
del a
cupy.get_default_pinned_memory_pool().free_all_blocks()
@testing.slow
def test_empty_huge_size_fill0(self):
a = cupyx.empty_pinned((1024, 2048, 1024), dtype='b')
a.fill(0)
assert (a == 0).all()
# Free huge memory for slow test
del a
cupy.get_default_pinned_memory_pool().free_all_blocks()
@testing.for_CF_orders()
@testing.for_all_dtypes()
@numpy_cupyx_array_equal(target_func='empty')
def test_empty_scalar(self, dtype, order, func):
a = func((), dtype=dtype, order=order)
a.fill(0)
return a
@testing.for_CF_orders()
@testing.for_all_dtypes()
@numpy_cupyx_array_equal(target_func='empty')
def test_empty_int(self, dtype, order, func):
a = func(3, dtype=dtype, order=order)
a.fill(0)
return a
@testing.slow
def test_empty_int_huge_size(self):
a = cupyx.empty_pinned(2 ** 31, dtype='b')
a.fill(123)
assert (a == 123).all()
# Free huge memory for slow test
del a
cupy.get_default_pinned_memory_pool().free_all_blocks()
@testing.slow
def test_empty_int_huge_size_fill0(self):
a = cupyx.empty_pinned(2 ** 31, dtype='b')
a.fill(0)
assert (a == 0).all()
# Free huge memory for slow test
del a
cupy.get_default_pinned_memory_pool().free_all_blocks()
@testing.for_orders('CFAK')
@testing.for_all_dtypes()
@numpy_cupyx_array_equal(target_func='empty_like')
def test_empty_like(self, dtype, order, func):
a = testing.shaped_arange((2, 3, 4), numpy, dtype)
b = func(a, order=order)
b.fill(0)
return b
@testing.for_orders('CFAK')
@testing.for_all_dtypes()
@numpy_cupyx_array_equal(target_func='empty_like')
def test_empty_like_contiguity(self, dtype, order, func):
a = testing.shaped_arange((2, 3, 4), numpy, dtype)
b = func(a, order=order)
b.fill(0)
if order in ['f', 'F']:
assert b.flags.f_contiguous
else:
assert b.flags.c_contiguous
return b
@testing.for_orders('CFAK')
@testing.for_all_dtypes()
@numpy_cupyx_array_equal(target_func='empty_like')
def test_empty_like_contiguity2(self, dtype, order, func):
a = testing.shaped_arange((2, 3, 4), numpy, dtype)
a = numpy.asfortranarray(a)
b = func(a, order=order)
b.fill(0)
if order in ['c', 'C']:
assert b.flags.c_contiguous
else:
assert b.flags.f_contiguous
return b
@testing.for_orders('CFAK')
@testing.for_all_dtypes()
@numpy_cupyx_array_equal(target_func='empty_like')
def test_empty_like_contiguity3(self, dtype, order, func):
a = testing.shaped_arange((2, 3, 4), numpy, dtype)
# test strides that are both non-contiguous and non-descending
a = a[:, ::2, :].swapaxes(0, 1)
b = func(a, order=order)
b.fill(0)
if order in ['k', 'K', None]:
assert not b.flags.c_contiguous
assert not b.flags.f_contiguous
elif order in ['f', 'F']:
assert not b.flags.c_contiguous
assert b.flags.f_contiguous
else:
assert b.flags.c_contiguous
assert not b.flags.f_contiguous
return b
@testing.for_all_dtypes()
@testing.gpu
def test_empty_like_K_strides(self, dtype):
# test strides that are both non-contiguous and non-descending;
# also test accepting cupy.ndarray
a = testing.shaped_arange((2, 3, 4), numpy, dtype)
a = a[:, ::2, :].swapaxes(0, 1)
b = numpy.empty_like(a, order='K')
b.fill(0)
# GPU case
ag = testing.shaped_arange((2, 3, 4), cupy, dtype)
ag = ag[:, ::2, :].swapaxes(0, 1)
bg = cupyx.empty_like_pinned(ag, order='K')
bg.fill(0)
# make sure NumPy and CuPy strides agree
assert b.strides == bg.strides
@testing.with_requires('numpy>=1.19')
@testing.for_all_dtypes()
def test_empty_like_invalid_order(self, dtype):
a = testing.shaped_arange((2, 3, 4), numpy, dtype)
with pytest.raises(ValueError):
cupyx.empty_like_pinned(a, order='Q')
def test_empty_like_subok(self):
a = testing.shaped_arange((2, 3, 4), numpy)
with pytest.raises(TypeError):
cupyx.empty_like_pinned(a, subok=True)
@testing.for_CF_orders()
def test_empty_zero_sized_array_strides(self, order):
a = numpy.empty((1, 0, 2), dtype='d', order=order)
b = cupyx.empty_pinned((1, 0, 2), dtype='d', order=order)
assert b.strides == a.strides
@testing.for_CF_orders()
@testing.for_all_dtypes()
@numpy_cupyx_array_equal(target_func='zeros')
def test_zeros(self, dtype, order, func):
return func((2, 3, 4), dtype=dtype, order=order)
@testing.for_CF_orders()
@testing.for_all_dtypes()
@numpy_cupyx_array_equal(target_func='zeros')
def test_zeros_scalar(self, dtype, order, func):
return func((), dtype=dtype, order=order)
@testing.for_CF_orders()
@testing.for_all_dtypes()
@numpy_cupyx_array_equal(target_func='zeros')
def test_zeros_int(self, dtype, order, func):
return func(3, dtype=dtype, order=order)
@testing.for_CF_orders()
def test_zeros_strides(self, order):
a = numpy.zeros((2, 3), dtype='d', order=order)
b = cupyx.zeros_pinned((2, 3), dtype='d', order=order)
assert b.strides == a.strides
@testing.for_orders('CFAK')
@testing.for_all_dtypes()
@numpy_cupyx_array_equal(target_func='zeros_like')
def test_zeros_like(self, dtype, order, func):
a = numpy.ndarray((2, 3, 4), dtype=dtype)
return func(a, order=order)
def test_zeros_like_subok(self):
a = numpy.ndarray((2, 3, 4))
with pytest.raises(TypeError):
cupyx.zeros_like_pinned(a, subok=True)
@testing.parameterize(
*testing.product({
'shape': [4, (4, ), (4, 2), (4, 2, 3), (5, 4, 2, 3)],
})
)
class TestBasicReshape(unittest.TestCase):
@testing.with_requires('numpy>=1.17.0')
@testing.for_orders('CFAK')
@testing.for_all_dtypes()
@numpy_cupyx_array_equal(target_func='empty_like')
def test_empty_like_reshape(self, dtype, order, func):
a = testing.shaped_arange((2, 3, 4), numpy, dtype)
b = func(a, order=order, shape=self.shape)
b.fill(0)
return b
@testing.for_CF_orders()
@testing.for_all_dtypes()
@testing.gpu
def test_empty_like_reshape_cupy_only(self, dtype, order):
a = testing.shaped_arange((2, 3, 4), cupy, dtype)
b = cupyx.empty_like_pinned(a, shape=self.shape)
b.fill(0)
c = cupyx.empty_pinned(self.shape, order=order, dtype=dtype)
c.fill(0)
numpy.testing.assert_array_equal(b, c)
@testing.with_requires('numpy>=1.17.0')
@testing.for_orders('CFAK')
@testing.for_all_dtypes()
@numpy_cupyx_array_equal(target_func='empty_like')
def test_empty_like_reshape_contiguity(self, dtype, order, func):
a = testing.shaped_arange((2, 3, 4), numpy, dtype)
b = func(a, order=order, shape=self.shape)
b.fill(0)
if order in ['f', 'F']:
assert b.flags.f_contiguous
else:
assert b.flags.c_contiguous
return b
@testing.for_orders('CFAK')
@testing.for_all_dtypes()
@testing.gpu
def test_empty_like_reshape_contiguity_cupy_only(self, dtype, order):
a = testing.shaped_arange((2, 3, 4), cupy, dtype)
b = cupyx.empty_like_pinned(a, order=order, shape=self.shape)
b.fill(0)
c = cupyx.empty_pinned(self.shape)
c.fill(0)
if order in ['f', 'F']:
assert b.flags.f_contiguous
else:
assert b.flags.c_contiguous
numpy.testing.assert_array_equal(b, c)
@testing.with_requires('numpy>=1.17.0')
@testing.for_orders('CFAK')
@testing.for_all_dtypes()
@numpy_cupyx_array_equal(target_func='empty_like')
def test_empty_like_reshape_contiguity2(self, dtype, order, func):
a = testing.shaped_arange((2, 3, 4), numpy, dtype)
a = numpy.asfortranarray(a)
b = func(a, order=order, shape=self.shape)
b.fill(0)
shape = self.shape if not numpy.isscalar(self.shape) else (self.shape,)
if (order in ['c', 'C'] or
(order in ['k', 'K', None] and len(shape) != a.ndim)):
assert b.flags.c_contiguous
else:
assert b.flags.f_contiguous
return b
@testing.for_orders('CFAK')
@testing.for_all_dtypes()
@testing.gpu
def test_empty_like_reshape_contiguity2_cupy_only(self, dtype, order):
a = testing.shaped_arange((2, 3, 4), cupy, dtype)
a = cupy.asfortranarray(a)
b = cupyx.empty_like_pinned(a, order=order, shape=self.shape)
b.fill(0)
c = cupyx.empty_pinned(self.shape)
c.fill(0)
shape = self.shape if not numpy.isscalar(self.shape) else (self.shape,)
if (order in ['c', 'C'] or
(order in ['k', 'K', None] and len(shape) != a.ndim)):
assert b.flags.c_contiguous
else:
assert b.flags.f_contiguous
numpy.testing.assert_array_equal(b, c)
@testing.with_requires('numpy>=1.17.0')
@testing.for_orders('CFAK')
@testing.for_all_dtypes()
@numpy_cupyx_array_equal(target_func='empty_like')
def test_empty_like_reshape_contiguity3(self, dtype, order, func):
a = testing.shaped_arange((2, 3, 4), numpy, dtype)
# test strides that are both non-contiguous and non-descending
a = a[:, ::2, :].swapaxes(0, 1)
b = func(a, order=order, shape=self.shape)
b.fill(0)
shape = self.shape if not numpy.isscalar(self.shape) else (self.shape,)
if len(shape) == 1:
assert b.flags.c_contiguous
assert b.flags.f_contiguous
elif order in ['k', 'K', None] and len(shape) == a.ndim:
assert not b.flags.c_contiguous
assert not b.flags.f_contiguous
elif order in ['f', 'F']:
assert not b.flags.c_contiguous
assert b.flags.f_contiguous
else:
assert b.flags.c_contiguous
assert not b.flags.f_contiguous
return b
@testing.for_orders('CFAK')
@testing.for_all_dtypes()
@testing.gpu
def test_empty_like_reshape_contiguity3_cupy_only(self, dtype, order):
a = testing.shaped_arange((2, 3, 4), cupy, dtype)
# test strides that are both non-contiguous and non-descending
a = a[:, ::2, :].swapaxes(0, 1)
b = cupyx.empty_like_pinned(a, order=order, shape=self.shape)
b.fill(0)
shape = self.shape if not numpy.isscalar(self.shape) else (self.shape,)
if len(shape) == 1:
assert b.flags.c_contiguous
assert b.flags.f_contiguous
elif order in ['k', 'K', None] and len(shape) == a.ndim:
assert not b.flags.c_contiguous
assert not b.flags.f_contiguous
elif order in ['f', 'F']:
assert not b.flags.c_contiguous
assert b.flags.f_contiguous
else:
assert b.flags.c_contiguous
assert not b.flags.f_contiguous
c = cupyx.zeros_pinned(self.shape)
c.fill(0)
testing.assert_array_equal(b, c)
@testing.with_requires('numpy>=1.17.0')
@testing.for_all_dtypes()
@testing.gpu
def test_empty_like_K_strides_reshape(self, dtype):
# test strides that are both non-contiguous and non-descending
a = testing.shaped_arange((2, 3, 4), numpy, dtype)
a = a[:, ::2, :].swapaxes(0, 1)
b = cupyx.empty_like_pinned(a, order='K', shape=self.shape)
b.fill(0)
# GPU case
ag = testing.shaped_arange((2, 3, 4), cupy, dtype)
ag = ag[:, ::2, :].swapaxes(0, 1)
bg = cupyx.empty_like_pinned(ag, order='K', shape=self.shape)
bg.fill(0)
# make sure NumPy and CuPy strides agree
assert b.strides == bg.strides
return
@testing.with_requires('numpy>=1.17.0')
@testing.for_orders('CFAK')
@testing.for_all_dtypes()
@numpy_cupyx_array_equal(target_func='zeros_like')
def test_zeros_like_reshape(self, dtype, order, func):
a = numpy.ndarray((2, 3, 4), dtype=dtype)
return func(a, order=order, shape=self.shape)
@testing.for_CF_orders()
@testing.for_all_dtypes()
@testing.gpu
def test_zeros_like_reshape_cupy_only(self, dtype, order):
a = testing.shaped_arange((2, 3, 4), cupy, dtype)
b = cupyx.zeros_like_pinned(a, shape=self.shape)
c = cupyx.zeros_pinned(self.shape, order=order, dtype=dtype)
numpy.testing.assert_array_equal(b, c)
| 34.709302 | 79 | 0.613467 | 2,107 | 14,925 | 4.131941 | 0.067869 | 0.053986 | 0.035837 | 0.052378 | 0.870664 | 0.845624 | 0.7898 | 0.773375 | 0.759591 | 0.755801 | 0 | 0.020118 | 0.260637 | 14,925 | 429 | 80 | 34.79021 | 0.768826 | 0.041943 | 0 | 0.681319 | 0 | 0 | 0.026677 | 0 | 0 | 0 | 0 | 0 | 0.14011 | 1 | 0.101648 | false | 0 | 0.019231 | 0.008242 | 0.184066 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3436dfe1c05f337be534b26b848f531535cb9286 | 148 | py | Python | autoreg/inference/__init__.py | zhenwendai/RGP | be679607d3457a1038a2fe39b36b816ea380ea39 | [
"BSD-3-Clause"
] | 17 | 2016-10-24T01:31:30.000Z | 2021-07-31T08:12:02.000Z | autoreg/inference/__init__.py | zhenwendai/RGP | be679607d3457a1038a2fe39b36b816ea380ea39 | [
"BSD-3-Clause"
] | null | null | null | autoreg/inference/__init__.py | zhenwendai/RGP | be679607d3457a1038a2fe39b36b816ea380ea39 | [
"BSD-3-Clause"
] | 11 | 2017-07-11T09:11:48.000Z | 2022-01-25T12:10:48.000Z | from __future__ import absolute_import # for better python 2 to python 3 compatibility
from .vardtc import VarDTC
from .svi_vardtc import SVI_VarDTC | 49.333333 | 86 | 0.844595 | 23 | 148 | 5.130435 | 0.565217 | 0.20339 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.015625 | 0.135135 | 148 | 3 | 87 | 49.333333 | 0.90625 | 0.304054 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
345e3855ec57a1f0ad4678787dada3ae79d77722 | 157 | py | Python | fbd/__init__.py | olety/FBG | 337c81ed661c11ee7283cffff63b1949363a8151 | [
"MIT"
] | null | null | null | fbd/__init__.py | olety/FBG | 337c81ed661c11ee7283cffff63b1949363a8151 | [
"MIT"
] | 11 | 2017-05-26T13:36:09.000Z | 2021-08-17T14:37:32.000Z | fbd/__init__.py | olety/FBD | 337c81ed661c11ee7283cffff63b1949363a8151 | [
"MIT"
] | null | null | null | from fbd import gatherer, storage, tools, visualizer
from fbd.gatherer import Gatherer
from fbd.storage import Storage
from fbd.visualizer import Visualizer
| 31.4 | 52 | 0.840764 | 22 | 157 | 6 | 0.318182 | 0.212121 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121019 | 157 | 4 | 53 | 39.25 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1b01e42e4b213337d89b6b90d55dfdb1d8ea7642 | 30 | py | Python | tests/syntax/future_braces.py | matan-h/friendly | 3ab0fc6541c837271e8865e247750007acdd18fb | [
"MIT"
] | 1,062 | 2015-11-18T01:04:33.000Z | 2022-03-29T07:13:30.000Z | tests/future_braces.py | CoDeRgAnEsh/1line | 507ef35b0006fc2998463dee92c2fdae53fe0694 | [
"MIT"
] | 191 | 2019-04-08T14:39:18.000Z | 2021-03-14T22:14:56.000Z | tests/future_braces.py | CoDeRgAnEsh/1line | 507ef35b0006fc2998463dee92c2fdae53fe0694 | [
"MIT"
] | 100 | 2015-11-17T09:01:22.000Z | 2021-09-12T13:58:28.000Z | from __future__ import braces
| 15 | 29 | 0.866667 | 4 | 30 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1b0be9b179ae3864fab5a39fa01c7a72214fdb6e | 43 | py | Python | data/micro-benchmark/classes/imported_nested_attr_access/main.py | vitsalis/pycg-evaluation | ce37eb5668465b0c17371914e863d699826447ee | [
"Apache-2.0"
] | 121 | 2020-12-16T20:31:37.000Z | 2022-03-21T20:32:43.000Z | data/micro-benchmark/classes/imported_nested_attr_access/main.py | vitsalis/pycg-evaluation | ce37eb5668465b0c17371914e863d699826447ee | [
"Apache-2.0"
] | 24 | 2021-03-13T00:04:00.000Z | 2022-03-21T17:28:11.000Z | data/micro-benchmark/classes/imported_nested_attr_access/main.py | vitsalis/pycg-evaluation | ce37eb5668465b0c17371914e863d699826447ee | [
"Apache-2.0"
] | 19 | 2021-03-23T10:58:47.000Z | 2022-03-24T19:46:50.000Z | from nest import imported
imported.func()
| 10.75 | 25 | 0.790698 | 6 | 43 | 5.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.139535 | 43 | 3 | 26 | 14.333333 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1b4508aafa7995bc9b94d94fdc443e89a2e255e7 | 4,200 | py | Python | sotabench.py | theblackcat102/GPU-Efficient-Networks | 843279471e363ca96af70345008e399f0dea3bbb | [
"Apache-2.0"
] | 182 | 2020-07-09T02:40:11.000Z | 2022-03-28T05:40:58.000Z | sotabench.py | theblackcat102/GPU-Efficient-Networks | 843279471e363ca96af70345008e399f0dea3bbb | [
"Apache-2.0"
] | 14 | 2020-07-31T03:42:39.000Z | 2021-09-06T04:04:02.000Z | sotabench.py | theblackcat102/GPU-Efficient-Networks | 843279471e363ca96af70345008e399f0dea3bbb | [
"Apache-2.0"
] | 37 | 2020-07-13T02:08:22.000Z | 2022-02-28T06:42:19.000Z | import gc
import math
import torch
from torchbench.datasets.utils import download_file_from_google_drive
from torchbench.image_classification import ImageNet
from torchvision.transforms import transforms
import GENet
# GENet-large
file_id = '1xuyW2GB_kUfJNf2G146rk1sdKuYGxWlE'
destination = './GENet_params/'
filename = 'GENet_large.pth'
download_file_from_google_drive(file_id, destination, filename=filename)
input_image_size = 256
model = GENet.genet_large(pretrained=True, root='./GENet_params/')
model = GENet.fuse_bn(model)
input_image_crop = 0.875
resize_image_size = int(math.ceil(input_image_size / input_image_crop))
transforms_normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
transform_list = [transforms.Resize(resize_image_size),
transforms.CenterCrop(input_image_size), transforms.ToTensor(), transforms_normalize]
transformer = transforms.Compose(transform_list)
# load model
model = model.cuda().half()
model.eval()
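# The fused-BN GENet models run in float16 on the GPU, so the benchmark's
# batches are cast to half precision on their way to the device as well.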
def send_data(input, target, device, dtype=torch.float16, non_blocking: bool = True):
# Use the dtype argument rather than hard-coding torch.float16 (the original
# accepted a dtype parameter but ignored it); the default keeps behaviour identical.
input = input.to(device=device, dtype=dtype, non_blocking=non_blocking)
if target is not None:
target = target.to(device=device, dtype=dtype, non_blocking=non_blocking)
return input, target
print('Benchmarking GENet-large-pro')
# Run the benchmark
ImageNet.benchmark(
model=model,
paper_model_name='GENet-large-pro',
paper_arxiv_id='2006.14090',
input_transform=transformer,
send_data_to_device=send_data,
batch_size=256,
num_workers=8,
num_gpu=1,
pin_memory=True,
paper_results={'Top 1 Accuracy': 0.813},
model_description="GENet-large-pro"
)
del model
gc.collect()
torch.cuda.empty_cache()
# GENet-normal
file_id = '1rpL0BKI_l5Xg4vN5fHGXPzTna5kW9hfs'
destination = './GENet_params/'
filename = 'GENet_normal.pth'
download_file_from_google_drive(file_id, destination, filename=filename)
input_image_size = 192
model = GENet.genet_normal(pretrained=True, root='./GENet_params/')
model = GENet.fuse_bn(model)
input_image_crop = 0.875
resize_image_size = int(math.ceil(input_image_size / input_image_crop))
transforms_normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
transform_list = [transforms.Resize(resize_image_size),
transforms.CenterCrop(input_image_size), transforms.ToTensor(), transforms_normalize]
transformer = transforms.Compose(transform_list)
# load model
model = model.cuda().half()
model.eval()
print('Benchmarking GENet-normal')
# Run the benchmark
ImageNet.benchmark(
model=model,
paper_model_name='GENet-normal-pro',
paper_arxiv_id='2006.14090',
input_transform=transformer,
send_data_to_device=send_data,
batch_size=256,
num_workers=8,
num_gpu=1,
pin_memory=True,
paper_results={'Top 1 Accuracy': 0.800},
model_description="GENet-normal-pro"
)
del model
gc.collect()
torch.cuda.empty_cache()
# GENet-light
file_id = '1jAkklQlQFPZi4odKUvbKEsNPYSS76GAv'
destination = './GENet_params/'
filename = 'GENet_small.pth'
download_file_from_google_drive(file_id, destination, filename=filename)
input_image_size = 192
model = GENet.genet_small(pretrained=True, root='./GENet_params/')
model = GENet.fuse_bn(model)
input_image_crop = 0.875
resize_image_size = int(math.ceil(input_image_size / input_image_crop))
transforms_normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
transform_list = [transforms.Resize(resize_image_size),
transforms.CenterCrop(input_image_size), transforms.ToTensor(), transforms_normalize]
transformer = transforms.Compose(transform_list)
# load model
model = model.cuda().half()
model.eval()
print('Benchmarking GENet-light')
# Run the benchmark
ImageNet.benchmark(
model=model,
paper_model_name='GENet-light-pro',
paper_arxiv_id='2006.14090',
input_transform=transformer,
send_data_to_device=send_data,
batch_size=256,
num_workers=8,
num_gpu=1,
pin_memory=True,
paper_results={'Top 1 Accuracy': 0.757},
model_description="GENet-light-pro"
)
del model
gc.collect()
torch.cuda.empty_cache()
| 30.656934 | 103 | 0.759762 | 578 | 4,200 | 5.264706 | 0.195502 | 0.049293 | 0.041407 | 0.028919 | 0.810056 | 0.766678 | 0.755504 | 0.755504 | 0.755504 | 0.742688 | 0 | 0.047257 | 0.123333 | 4,200 | 136 | 104 | 30.882353 | 0.779196 | 0.029286 | 0 | 0.657407 | 0 | 0 | 0.11704 | 0.024342 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009259 | false | 0 | 0.064815 | 0 | 0.083333 | 0.027778 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1b583a7d44be7d30effa57667797082c033107b0 | 132 | py | Python | operation/admin.py | loopyme/CQUHub | fe98b9142b8ed0f6821ef9d364b2eccdb02af927 | [
"MIT"
] | 3 | 2019-07-30T14:06:50.000Z | 2019-09-22T08:10:41.000Z | operation/admin.py | loopyme/CQUHub | fe98b9142b8ed0f6821ef9d364b2eccdb02af927 | [
"MIT"
] | 1 | 2020-06-05T21:48:14.000Z | 2020-06-05T21:48:14.000Z | operation/admin.py | CQU-AI/CQUHub | fe98b9142b8ed0f6821ef9d364b2eccdb02af927 | [
"MIT"
] | 2 | 2019-10-22T05:59:36.000Z | 2019-11-05T08:34:45.000Z | from django.contrib import admin
from .models import Topic_Comment
# Register your models here.
admin.site.register(Topic_Comment)
| 22 | 34 | 0.825758 | 19 | 132 | 5.631579 | 0.631579 | 0.224299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113636 | 132 | 5 | 35 | 26.4 | 0.91453 | 0.19697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1b88ea69cc6204a73b81ac1e48dd0fbb9317e214 | 36 | py | Python | pygdpr/__init__.py | GDPRxiv/crawler | 178ef9ff6c3641ba8b761a49e42c2579e453c1ca | [
"MIT"
] | null | null | null | pygdpr/__init__.py | GDPRxiv/crawler | 178ef9ff6c3641ba8b761a49e42c2579e453c1ca | [
"MIT"
] | 2 | 2022-02-19T06:56:03.000Z | 2022-02-19T07:00:00.000Z | pygdpr/__init__.py | GDPRxiv/crawler | 178ef9ff6c3641ba8b761a49e42c2579e453c1ca | [
"MIT"
] | null | null | null | from pygdpr.models.gdpr import GDPR
| 18 | 35 | 0.833333 | 6 | 36 | 5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1b8900071bfc3e2fbe3d44b8ed6259630d54edfb | 116 | py | Python | apps/cars/factory/__init__.py | agorsk1/car-rating-app | 354c5933f4cbad69c9a57d1839f9086cd5cf9a1d | [
"MIT"
] | 1 | 2022-03-03T11:15:25.000Z | 2022-03-03T11:15:25.000Z | apps/cars/factory/__init__.py | agorsk1/car-rating-app | 354c5933f4cbad69c9a57d1839f9086cd5cf9a1d | [
"MIT"
] | null | null | null | apps/cars/factory/__init__.py | agorsk1/car-rating-app | 354c5933f4cbad69c9a57d1839f9086cd5cf9a1d | [
"MIT"
] | null | null | null | from .car_factory import CarFactory
from .user_factory import UserFactory
from .rating_factory import RatingFactory
| 29 | 41 | 0.87069 | 15 | 116 | 6.533333 | 0.6 | 0.397959 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 116 | 3 | 42 | 38.666667 | 0.942308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1b93a3e1343ed08b970719ce3e04a8d3ad8f1808 | 3,066 | py | Python | iotbot/webhook.py | ITJoker233/python-iotbot | 5ea293d36e7fa5b2cc83acfce3ccb5382bcc5afa | [
"MIT"
] | 31 | 2020-07-24T03:47:56.000Z | 2020-12-18T15:45:09.000Z | iotbot/webhook.py | ITJoker233/python-iotbot | 5ea293d36e7fa5b2cc83acfce3ccb5382bcc5afa | [
"MIT"
] | 20 | 2020-08-06T15:47:40.000Z | 2020-10-09T14:39:23.000Z | iotbot/webhook.py | ITJoker233/python-iotbot | 5ea293d36e7fa5b2cc83acfce3ccb5382bcc5afa | [
"MIT"
] | 18 | 2020-08-02T04:06:35.000Z | 2022-01-06T14:27:11.000Z | # This feature is itself implemented as a built-in plugin
import traceback
import requests
from . import sugar
from .config import config
from .exceptions import InvalidConfigError
from .model import EventMsg, FriendMsg, GroupMsg
def _check_config():
if not config.webhook_post_url:
raise InvalidConfigError('Missing config option: webhook_post_url')
_check_config()
def receive_group_msg(ctx: GroupMsg):
try:
resp = requests.post(
config.webhook_post_url, json=ctx.message, timeout=config.webhook_timeout
)
resp.raise_for_status()
except Exception:
print(traceback.format_exc())
else:
try:
data = resp.json()
assert isinstance(data, dict)
except Exception:
pass
else:
# Extract all supported fields:
# 1. If a picture (pic) is present, send a picture message; if text (msg)
#    is also present, send it as a combined picture-and-text message.
# 2. If no picture (pic) but text (msg) is present, send plain text on its own.
# 3. If voice (voice) is present at all, send it.
msg: str = data.get('msg') or ''
at: bool = bool(data.get('at'))  # any present truthy value counts as True
pic_url: str = data.get('pic_url')
pic_base64: str = data.get('pic_base64')
voice_url: str = data.get('voice_url')
voice_base64: str = data.get('voice_base64')
if any([pic_url, pic_base64]):  # picture (with optional caption) or plain text, one of the two
sugar.Picture(pic_url=pic_url, pic_base64=pic_base64, content=msg)
elif msg:
sugar.Text(msg, at)
if any([voice_url, voice_base64]):
sugar.Voice(voice_url=voice_url, voice_base64=voice_base64)
return None
return None
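# Example webhook response body this module understands (field names match the
# data.get(...) keys above; the values are illustrative only):
# {
#     "msg": "hello",                          # text, or caption when a picture is sent
#     "at": 1,                                 # truthy -> @ the sender
#     "pic_url": "http://example.com/a.png",
#     "pic_base64": null,
#     "voice_url": null,
#     "voice_base64": null
# }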
def receive_friend_msg(ctx: FriendMsg):
try:
resp = requests.post(
config.webhook_post_url, json=ctx.message, timeout=config.webhook_timeout
)
resp.raise_for_status()
except Exception:
print(traceback.format_exc())
else:
try:
data = resp.json()
assert isinstance(data, dict)
except Exception:
pass
else:
# Extract all supported fields:
# 1. If a picture (pic) is present, send a picture message; if text (msg)
#    is also present, send it as a combined picture-and-text message.
# 2. If no picture (pic) but text (msg) is present, send plain text on its own.
# 3. If voice (voice) is present at all, send it.
msg: str = data.get('msg') or ''
at: bool = bool(data.get('at'))  # any present truthy value counts as True
pic_url: str = data.get('pic_url')
pic_base64: str = data.get('pic_base64')
voice_url: str = data.get('voice_url')
voice_base64: str = data.get('voice_base64')
if any([pic_url, pic_base64]):  # picture (with optional caption) or plain text, one of the two
sugar.Picture(pic_url=pic_url, pic_base64=pic_base64, content=msg)
elif msg:
sugar.Text(msg, at)
if any([voice_url, voice_base64]):
sugar.Voice(voice_url=voice_url, voice_base64=voice_base64)
return None
return None
def receive_events(ctx: EventMsg):
# Event messages are only forwarded upstream (lazy implementation)
try:
requests.post(
config.webhook_post_url, json=ctx.message, timeout=config.webhook_timeout
)
except Exception:
pass
| 31.285714 | 85 | 0.578278 | 375 | 3,066 | 4.546667 | 0.216 | 0.049267 | 0.058651 | 0.052786 | 0.79824 | 0.79824 | 0.79824 | 0.79824 | 0.79824 | 0.79824 | 0 | 0.02202 | 0.318656 | 3,066 | 97 | 86 | 31.608247 | 0.79416 | 0.089367 | 0 | 0.773333 | 0 | 0 | 0.039251 | 0 | 0 | 0 | 0 | 0 | 0.026667 | 1 | 0.053333 | false | 0.04 | 0.08 | 0 | 0.186667 | 0.026667 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1bafbd087b69d76bdb9349eba61b63de84b62839 | 26,213 | py | Python | tests/unit/test_packaging__manager.py | RerrerBuub/asciidoxy | 3402f37d59e30975e9919653465839e396f05513 | [
"Apache-2.0"
] | 14 | 2020-04-28T08:51:43.000Z | 2022-02-12T13:40:34.000Z | tests/unit/test_packaging__manager.py | RerrerBuub/asciidoxy | 3402f37d59e30975e9919653465839e396f05513 | [
"Apache-2.0"
] | 47 | 2020-05-18T14:19:31.000Z | 2022-03-04T13:46:46.000Z | tests/unit/test_packaging__manager.py | RerrerBuub/asciidoxy | 3402f37d59e30975e9919653465839e396f05513 | [
"Apache-2.0"
] | 8 | 2020-05-17T20:52:42.000Z | 2022-02-25T16:16:01.000Z | # Copyright (C) 2019-2021, TomTom (http://tomtom.com).
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for managing packages."""
import pytest
import toml
from pathlib import Path
from unittest.mock import MagicMock, call
from asciidoxy.packaging.manager import (FileCollisionError, PackageManager, UnknownFileError,
UnknownPackageError)
@pytest.fixture
def package_manager(build_dir):
return PackageManager(build_dir)
@pytest.fixture(params=[True, False], ids=["warnings-are-errors", "warnings-are-not-errors"])
def warnings_are_and_are_not_errors(request, package_manager):
package_manager.warnings_are_errors = request.param
return request.param
def create_package_dir(parent: Path,
name: str,
xml: bool = True,
adoc: bool = True,
images: bool = True,
contents: bool = True,
root_doc: bool = False) -> Path:
pkg_dir = parent / name
pkg_dir.mkdir(parents=True)
data = {"package": {"name": name}}
if xml:
(pkg_dir / "xml").mkdir()
(pkg_dir / "xml" / f"{name}.xml").touch()
data["reference"] = {"type": "doxygen", "dir": "xml"}
if adoc:
(pkg_dir / "adoc").mkdir()
(pkg_dir / "adoc" / f"{name}.adoc").touch()
data["asciidoc"] = {"src_dir": "adoc"}
if root_doc:
data["asciidoc"]["root_doc"] = f"{name}.adoc"
if images:
(pkg_dir / "images").mkdir()
(pkg_dir / "images" / f"{name}.png").touch()
data["asciidoc"]["image_dir"] = "images"
if contents:
with (pkg_dir / "contents.toml").open("w", encoding="utf-8") as contents_file:
toml.dump(data, contents_file)
return pkg_dir
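# With all defaults, create_package_dir(parent, "a") produces:
#
#   a/
#   +-- contents.toml
#   +-- xml/a.xml       (reference: doxygen)
#   +-- adoc/a.adoc     (asciidoc.src_dir)
#   +-- images/a.png    (asciidoc.image_dir)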
def create_package_spec(parent: Path, *names: str) -> Path:
data = {
"sources": {
"local": {
"type": "local",
"xml_subdir": "xml",
"include_subdir": "adoc"
}
},
}
data["packages"] = {
name: {
"source": "local",
"package_dir": str(parent / name)
}
for name in names
}
spec_file = parent / "spec.toml"
with spec_file.open("w", encoding="utf-8") as spec_file_handle:
toml.dump(data, spec_file_handle)
return spec_file
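# The generated spec.toml looks roughly like this (one [packages.<name>] table
# per requested name; package_dir values are absolute paths under `parent`):
#
#   [sources.local]
#   type = "local"
#   xml_subdir = "xml"
#   include_subdir = "adoc"
#
#   [packages.a]
#   source = "local"
#   package_dir = "<parent>/a"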
def test_collect(package_manager, event_loop, tmp_path, build_dir):
create_package_dir(tmp_path, "a")
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
assert len(package_manager.packages) == 2
packages = package_manager.packages
assert "a" in packages
assert "b" in packages
pkg_a = packages["a"]
assert pkg_a.reference_dir is not None
assert pkg_a.reference_dir.is_dir()
assert pkg_a.adoc_src_dir is not None
assert pkg_a.adoc_src_dir.is_dir()
assert pkg_a.adoc_image_dir is not None
assert pkg_a.adoc_image_dir.is_dir()
pkg_b = packages["b"]
assert pkg_b.reference_dir is not None
assert pkg_b.reference_dir.is_dir()
assert pkg_b.adoc_src_dir is not None
assert pkg_b.adoc_src_dir.is_dir()
assert pkg_b.adoc_image_dir is not None
assert pkg_b.adoc_image_dir.is_dir()
def test_load_reference(package_manager, event_loop, tmp_path, build_dir):
pkg_a_dir = create_package_dir(tmp_path, "a")
pkg_b_dir = create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
parser_mock = MagicMock()
package_manager.load_reference(parser_mock)
parser_mock.parse.assert_has_calls(
[call(pkg_a_dir / "xml" / "a.xml"),
call(pkg_b_dir / "xml" / "b.xml")], any_order=True)
def test_prepare_work_directory(package_manager, event_loop, tmp_path, build_dir):
create_package_dir(tmp_path, "a")
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
(src_dir / "chapter.adoc").touch()
(src_dir / "other").mkdir()
(src_dir / "other" / "another.adoc").touch()
package_manager.set_input_files(in_file, src_dir)
work_file = package_manager.prepare_work_directory(in_file)
assert work_file.is_file()
assert work_file.name == "index.adoc"
work_dir = work_file.parent
assert (work_dir / "index.adoc").is_file()
assert (work_dir / "chapter.adoc").is_file()
assert (work_dir / "other").is_dir()
assert (work_dir / "other" / "another.adoc").is_file()
assert (work_dir / "a.adoc").is_file()
assert (work_dir / "b.adoc").is_file()
assert (work_dir / "images").is_dir()
assert (work_dir / "images" / "a.png").is_file()
assert (work_dir / "images" / "b.png").is_file()
def test_prepare_work_directory__no_include_dir(package_manager, event_loop, tmp_path, build_dir):
create_package_dir(tmp_path, "a")
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
(src_dir / "chapter.adoc").touch()
(src_dir / "other").mkdir()
(src_dir / "other" / "another.adoc").touch()
package_manager.set_input_files(in_file)
work_file = package_manager.prepare_work_directory(in_file)
assert work_file.is_file()
assert work_file.name == "index.adoc"
work_dir = work_file.parent
assert (work_dir / "index.adoc").is_file()
assert not (work_dir / "chapter.adoc").is_file()
assert not (work_dir / "other").is_dir()
assert not (work_dir / "other" / "another.adoc").is_file()
assert (work_dir / "a.adoc").is_file()
assert (work_dir / "b.adoc").is_file()
assert (work_dir / "images").is_dir()
assert (work_dir / "images" / "a.png").is_file()
assert (work_dir / "images" / "b.png").is_file()
def test_prepare_work_directory__explicit_images(package_manager, event_loop, tmp_path, build_dir):
create_package_dir(tmp_path, "a")
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
(src_dir / "chapter.adoc").touch()
(src_dir / "other").mkdir()
(src_dir / "other" / "another.adoc").touch()
image_dir = tmp_path / "images"
image_dir.mkdir()
(image_dir / "image.png").touch()
package_manager.set_input_files(in_file, None, image_dir)
work_file = package_manager.prepare_work_directory(in_file)
assert work_file.is_file()
assert work_file.name == "index.adoc"
work_dir = work_file.parent
assert (work_dir / "index.adoc").is_file()
assert not (work_dir / "chapter.adoc").is_file()
assert not (work_dir / "other").is_dir()
assert not (work_dir / "other" / "another.adoc").is_file()
assert (work_dir / "images").is_dir()
assert (work_dir / "images" / "image.png").is_file()
assert (work_dir / "a.adoc").is_file()
assert (work_dir / "b.adoc").is_file()
assert (work_dir / "images" / "a.png").is_file()
assert (work_dir / "images" / "b.png").is_file()
def test_prepare_work_directory__implicit_images(package_manager, event_loop, tmp_path, build_dir):
create_package_dir(tmp_path, "a")
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
(src_dir / "chapter.adoc").touch()
(src_dir / "other").mkdir()
(src_dir / "other" / "another.adoc").touch()
image_dir = src_dir / "images"
image_dir.mkdir()
(image_dir / "image.png").touch()
package_manager.set_input_files(in_file, None, None)
work_file = package_manager.prepare_work_directory(in_file)
assert work_file.is_file()
assert work_file.name == "index.adoc"
work_dir = work_file.parent
assert (work_dir / "index.adoc").is_file()
assert not (work_dir / "chapter.adoc").is_file()
assert not (work_dir / "other").is_dir()
assert not (work_dir / "other" / "another.adoc").is_file()
assert (work_dir / "images").is_dir()
assert (work_dir / "images" / "image.png").is_file()
assert (work_dir / "a.adoc").is_file()
assert (work_dir / "b.adoc").is_file()
assert (work_dir / "images" / "a.png").is_file()
assert (work_dir / "images" / "b.png").is_file()
def test_prepare_work_directory__file_collision(package_manager, event_loop, tmp_path, build_dir,
warnings_are_and_are_not_errors):
create_package_dir(tmp_path, "a")
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
(src_dir / "a.adoc").touch()
package_manager.set_input_files(in_file, src_dir)
if warnings_are_and_are_not_errors:
with pytest.raises(FileCollisionError) as excinfo:
package_manager.prepare_work_directory(in_file)
assert "File a.adoc from package INPUT already exists in package a." in str(excinfo.value)
else:
package_manager.prepare_work_directory(in_file)
def test_prepare_work_directory__dir_and_file_collision__file_overwrites_dir_from_input(
package_manager, event_loop, tmp_path, build_dir, warnings_are_and_are_not_errors):
create_package_dir(tmp_path, "a")
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
(src_dir / "a.adoc").mkdir()
package_manager.set_input_files(in_file, src_dir)
with pytest.raises(FileCollisionError) as excinfo:
package_manager.prepare_work_directory(in_file)
assert ("Package a contains file a.adoc, which is also a directory in package INPUT."
in str(excinfo.value))
def test_prepare_work_directory__dir_and_file_collision__dir_overwrites_file(
package_manager, event_loop, tmp_path, build_dir, warnings_are_and_are_not_errors):
create_package_dir(tmp_path, "a")
pkg_b_dir = create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
(pkg_b_dir / "adoc" / "a.adoc").mkdir()
package_manager.set_input_files(in_file, src_dir)
with pytest.raises(FileCollisionError) as excinfo:
package_manager.prepare_work_directory(in_file)
assert ("Package a contains file a.adoc, which is also a directory in package b."
in str(excinfo.value))
def test_prepare_work_directory__dir_and_file_collision__file_overwrites_dir(
package_manager, event_loop, tmp_path, build_dir, warnings_are_and_are_not_errors):
pkg_a_dir = create_package_dir(tmp_path, "a")
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
(pkg_a_dir / "adoc" / "b.adoc").mkdir()
package_manager.set_input_files(in_file, src_dir)
with pytest.raises(FileCollisionError) as excinfo:
package_manager.prepare_work_directory(in_file)
assert "File b.adoc from package b is also a directory in package a." in str(excinfo.value)
def test_prepare_work_directory__same_dir_in_multiple_packages(package_manager, event_loop,
tmp_path, build_dir):
pkg_a_dir = create_package_dir(tmp_path, "a")
pkg_b_dir = create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
(src_dir / "other").mkdir()
(src_dir / "other" / "another.adoc").touch()
(pkg_a_dir / "adoc" / "other").mkdir()
(pkg_a_dir / "adoc" / "other" / "a_another.adoc").touch()
(pkg_b_dir / "adoc" / "other").mkdir()
(pkg_b_dir / "adoc" / "other" / "b_another.adoc").touch()
package_manager.set_input_files(in_file, src_dir)
work_file = package_manager.prepare_work_directory(in_file)
assert work_file.is_file()
assert work_file.name == "index.adoc"
work_dir = work_file.parent
assert (work_dir / "other").is_dir()
assert (work_dir / "other" / "another.adoc").is_file()
assert (work_dir / "other" / "a_another.adoc").is_file()
assert (work_dir / "other" / "b_another.adoc").is_file()
def test_make_image_directory(package_manager, event_loop, tmp_path, build_dir):
create_package_dir(tmp_path, "a")
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
output_dir = tmp_path / "output"
package_manager.make_image_directory(output_dir)
assert (output_dir / "images").is_dir()
assert (output_dir / "images" / "a.png").is_file()
assert (output_dir / "images" / "b.png").is_file()
def test_make_image_directory__existing_output_dir(package_manager, event_loop, tmp_path,
build_dir):
create_package_dir(tmp_path, "a")
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
output_dir = tmp_path / "output"
package_manager.make_image_directory(output_dir)
assert (output_dir / "images").is_dir()
assert (output_dir / "images" / "a.png").is_file()
assert (output_dir / "images" / "b.png").is_file()
package_manager2 = PackageManager(build_dir)
package_manager2.collect(spec_file)
package_manager2.make_image_directory(output_dir)
assert (output_dir / "images").is_dir()
assert (output_dir / "images" / "a.png").is_file()
assert (output_dir / "images" / "b.png").is_file()
def test_make_image_directory__from_input_files(package_manager, event_loop, tmp_path, build_dir):
create_package_dir(tmp_path, "a")
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
image_dir = tmp_path / "images"
image_dir.mkdir()
(image_dir / "image.png").touch()
package_manager.set_input_files(in_file, None, image_dir)
package_manager.collect(spec_file)
output_dir = tmp_path / "output"
package_manager.make_image_directory(output_dir)
assert (output_dir / "images").is_dir()
assert (output_dir / "images" / "image.png").is_file()
assert (output_dir / "images" / "a.png").is_file()
assert (output_dir / "images" / "b.png").is_file()
def test_make_image_directory__file_collision__file_overwrites_directory(
package_manager, event_loop, tmp_path, build_dir):
create_package_dir(tmp_path, "a")
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
output_dir = tmp_path / "output"
(output_dir / "images" / "a.png").mkdir(parents=True)
with pytest.raises(FileCollisionError) as excinfo:
package_manager.make_image_directory(output_dir)
assert ("Unexpected directory a.png, blocking creation of a file from package a."
in str(excinfo.value))
def test_make_image_directory__file_collision__directory_overwrites_file(
package_manager, event_loop, tmp_path, build_dir):
pkg_a_dir = create_package_dir(tmp_path, "a")
(pkg_a_dir / "images" / "a_subdir").mkdir(parents=True)
(pkg_a_dir / "images" / "a_subdir" / "a_subdir_file.png").touch()
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
output_dir = tmp_path / "output"
(output_dir / "images").mkdir(parents=True)
(output_dir / "images" / "a_subdir").touch()
with pytest.raises(FileCollisionError) as excinfo:
package_manager.make_image_directory(output_dir)
assert ("Unexpected file a_subdir, blocking creation of a directory from package a."
in str(excinfo.value))
def test_file_in_work_directory__present(package_manager, event_loop, tmp_path, build_dir):
create_package_dir(tmp_path, "a")
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
package_manager.set_input_files(in_file)
work_file = package_manager.prepare_work_directory(in_file)
work_dir = work_file.parent
assert package_manager.file_in_work_directory("INPUT", "index.adoc") == work_dir / "index.adoc"
assert package_manager.file_in_work_directory("a", "a.adoc") == work_dir / "a.adoc"
assert package_manager.file_in_work_directory("b", "b.adoc") == work_dir / "b.adoc"
def test_file_in_work_directory__input_file_no_include_dir(package_manager, event_loop, tmp_path,
build_dir):
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
package_manager.set_input_files(in_file)
work_file = package_manager.prepare_work_directory(in_file)
work_dir = work_file.parent
assert package_manager.file_in_work_directory("INPUT", "index.adoc") == work_dir / "index.adoc"
def test_file_in_work_directory__file_from_input_include_dir(package_manager, event_loop, tmp_path,
build_dir):
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
(src_dir / "chapter.adoc").touch()
(src_dir / "other").mkdir()
(src_dir / "other" / "another.adoc").touch()
package_manager.set_input_files(in_file, in_file.parent)
work_file = package_manager.prepare_work_directory(in_file)
work_dir = work_file.parent
assert package_manager.file_in_work_directory("INPUT", "index.adoc") == work_dir / "index.adoc"
assert package_manager.file_in_work_directory("INPUT",
"chapter.adoc") == work_dir / "chapter.adoc"
assert package_manager.file_in_work_directory(
"INPUT", "other/another.adoc") == work_dir / "other" / "another.adoc"
def test_file_in_work_directory__default_root_doc(package_manager, event_loop, tmp_path, build_dir):
create_package_dir(tmp_path, "a", root_doc=True)
spec_file = create_package_spec(tmp_path, "a")
package_manager.collect(spec_file)
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
work_file = package_manager.prepare_work_directory(in_file)
work_dir = work_file.parent
assert package_manager.file_in_work_directory("a", None) == work_dir / "a.adoc"
assert package_manager.file_in_work_directory("a", "") == work_dir / "a.adoc"
def test_file_in_work_directory__no_root_doc_no_filename(package_manager, event_loop, tmp_path,
build_dir):
create_package_dir(tmp_path, "a", root_doc=False)
spec_file = create_package_spec(tmp_path, "a")
package_manager.collect(spec_file)
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
package_manager.prepare_work_directory(in_file)
with pytest.raises(UnknownFileError):
package_manager.file_in_work_directory("a", None)
with pytest.raises(UnknownFileError):
package_manager.file_in_work_directory("a", "")
def test_file_in_work_directory__unknown_package(package_manager, event_loop, tmp_path, build_dir):
create_package_dir(tmp_path, "a")
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
package_manager.prepare_work_directory(in_file)
with pytest.raises(UnknownPackageError):
package_manager.file_in_work_directory("c", "a.adoc")
def test_file_in_work_directory__unknown_file(package_manager, event_loop, tmp_path, build_dir):
create_package_dir(tmp_path, "a")
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
package_manager.prepare_work_directory(in_file)
with pytest.raises(UnknownFileError):
package_manager.file_in_work_directory("a", "c.adoc")
def test_file_in_work_directory__package_must_match(package_manager, event_loop, tmp_path,
build_dir):
create_package_dir(tmp_path, "a")
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
package_manager.prepare_work_directory(in_file)
with pytest.raises(UnknownFileError):
package_manager.file_in_work_directory("b", "a.adoc")
with pytest.raises(UnknownFileError):
package_manager.file_in_work_directory("a", "b.adoc")
def test_file_in_work_directory__package_without_include_files(package_manager, event_loop,
tmp_path, build_dir):
create_package_dir(tmp_path, "a", adoc=False)
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
package_manager.prepare_work_directory(in_file)
with pytest.raises(UnknownFileError):
package_manager.file_in_work_directory("a", "a.adoc")
@pytest.mark.parametrize("package_hint", [None, "", "a", "b", "INPUT"])
def test_find_original_file__with_include_dir(package_hint, package_manager, event_loop, tmp_path,
build_dir):
create_package_dir(tmp_path, "a")
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
(src_dir / "chapter.adoc").touch()
(src_dir / "other").mkdir()
(src_dir / "other" / "another.adoc").touch()
package_manager.set_input_files(in_file, src_dir)
package_manager.prepare_work_directory(in_file)
assert package_manager.find_original_file(
package_manager.file_in_work_directory("INPUT", "index.adoc"),
package_hint) == ("INPUT", Path("index.adoc"))
assert package_manager.find_original_file(
package_manager.file_in_work_directory("INPUT", "chapter.adoc"),
package_hint) == ("INPUT", Path("chapter.adoc"))
assert package_manager.find_original_file(
package_manager.file_in_work_directory("INPUT", "other/another.adoc"),
package_hint) == ("INPUT", Path("other/another.adoc"))
assert package_manager.find_original_file(package_manager.file_in_work_directory("a", "a.adoc"),
package_hint) == ("a", Path("a.adoc"))
assert package_manager.find_original_file(package_manager.file_in_work_directory("b", "b.adoc"),
package_hint) == ("b", Path("b.adoc"))
@pytest.mark.parametrize("package_hint", [None, "", "a", "b", "INPUT"])
def test_find_original_file__without_include_dir(package_hint, package_manager, event_loop,
tmp_path, build_dir):
create_package_dir(tmp_path, "a")
create_package_dir(tmp_path, "b")
spec_file = create_package_spec(tmp_path, "a", "b")
package_manager.collect(spec_file)
src_dir = tmp_path / "src"
src_dir.mkdir()
in_file = src_dir / "index.adoc"
in_file.touch()
package_manager.set_input_files(in_file)
package_manager.prepare_work_directory(in_file)
assert package_manager.find_original_file(
package_manager.file_in_work_directory("INPUT", "index.adoc"),
package_hint) == ("INPUT", Path("index.adoc"))
assert package_manager.find_original_file(package_manager.file_in_work_directory("a", "a.adoc"),
package_hint) == ("a", Path("a.adoc"))
assert package_manager.find_original_file(package_manager.file_in_work_directory("b", "b.adoc"),
package_hint) == ("b", Path("b.adoc"))
| 37.024011 | 100 | 0.675695 | 3,668 | 26,213 | 4.442203 | 0.05398 | 0.112557 | 0.046643 | 0.055972 | 0.842887 | 0.830551 | 0.816251 | 0.789861 | 0.767706 | 0.756966 | 0 | 0.00086 | 0.201236 | 26,213 | 707 | 101 | 37.076379 | 0.777343 | 0.022966 | 0 | 0.690167 | 0 | 0 | 0.103583 | 0.000899 | 0 | 0 | 0 | 0 | 0.19295 | 1 | 0.057514 | false | 0 | 0.009276 | 0.001855 | 0.074212 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
942466aaf9d1074f6547125fb5e0f74493ab1286 | 36 | py | Python | keyword_spotting_data_generator/extractor/__init__.py | Lipster-develop/honk | c3aae750c428520ba340961bddd526f9c999bb93 | [
"MIT"
] | 477 | 2017-10-07T04:23:56.000Z | 2022-03-29T08:37:44.000Z | keyword_spotting_data_generator/extractor/__init__.py | Lipster-develop/honk | c3aae750c428520ba340961bddd526f9c999bb93 | [
"MIT"
] | 66 | 2017-10-02T16:43:39.000Z | 2021-11-01T09:23:06.000Z | keyword_spotting_data_generator/extractor/__init__.py | Lipster-develop/honk | c3aae750c428520ba340961bddd526f9c999bb93 | [
"MIT"
] | 132 | 2017-10-07T04:23:58.000Z | 2022-03-08T03:09:16.000Z | from .sphinx_stt_extractor import *
| 18 | 35 | 0.833333 | 5 | 36 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
945db18726007587785cfd6163759b1df4fd7c77 | 93 | py | Python | examples/traductor/traductor/translators/domainname.py | connectthefuture/docker-hacks | d7ea13522188233d5e8a97179d2b0a872239f58d | [
"MIT"
] | 5 | 2015-09-19T09:47:45.000Z | 2018-09-24T21:48:51.000Z | traductor/translators/domainname.py | the0rem/traductor | 6b190a1e829379f8f4bf41f86ea50e937c4cf2ed | [
"Apache-2.0"
] | 1 | 2015-09-21T08:39:30.000Z | 2015-09-21T14:39:28.000Z | traductor/translators/domainname.py | the0rem/traductor | 6b190a1e829379f8f4bf41f86ea50e937c4cf2ed | [
"Apache-2.0"
] | null | null | null | from .base import BaseTranslator
class Domainname(BaseTranslator):
"""
"""
pass | 13.285714 | 33 | 0.655914 | 8 | 93 | 7.625 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.236559 | 93 | 7 | 34 | 13.285714 | 0.859155 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
947f22857bbffdf0033c7a2218e55eb28abb9259 | 191 | py | Python | src/lib/trains/train_factory.py | bairw660606/PPDM | d89c1e583a87b1fe5f1c6bb94ed4b09838d5e547 | [
"MIT"
] | 28 | 2021-04-13T15:11:56.000Z | 2022-03-31T08:16:02.000Z | src/lib/trains/train_factory.py | bairw660606/PPDM | d89c1e583a87b1fe5f1c6bb94ed4b09838d5e547 | [
"MIT"
] | 5 | 2021-04-15T12:35:50.000Z | 2021-09-26T12:47:54.000Z | src/lib/trains/train_factory.py | bairw660606/PPDM | d89c1e583a87b1fe5f1c6bb94ed4b09838d5e547 | [
"MIT"
] | 2 | 2021-08-05T01:57:58.000Z | 2022-03-31T08:16:04.000Z | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from .hoidet import HoidetTrainer
train_factory = {
'hoidet': HoidetTrainer
}
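# editor's sketch, not in the original file: minimal lookup against the
# factory above; 'hoidet' is the only task registered here, and trainer
# constructor arguments are omitted because they are unverified assumptions.
def get_trainer_class(task):
    # a KeyError here signals an unsupported task string
    return train_factory[task]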
| 19.1 | 38 | 0.827225 | 22 | 191 | 6.5 | 0.5 | 0.20979 | 0.335664 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.136126 | 191 | 9 | 39 | 21.222222 | 0.866667 | 0 | 0 | 0 | 0 | 0 | 0.031414 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.571429 | 0 | 0.571429 | 0.142857 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8472ee02496e0c9ebcdb5d60cac125d4a641f8be | 20 | py | Python | airtest/utils/apkparser/__init__.py | koyoki/Airtest | ea8391bd4819d9231e7b35f18c14662e6109fad0 | [
"Apache-2.0"
] | 6,140 | 2018-01-24T03:27:48.000Z | 2022-03-31T14:37:54.000Z | airtest/utils/apkparser/__init__.py | koyoki/Airtest | ea8391bd4819d9231e7b35f18c14662e6109fad0 | [
"Apache-2.0"
] | 993 | 2018-02-02T11:21:40.000Z | 2022-03-31T20:41:41.000Z | airtest/utils/apkparser/__init__.py | koyoki/Airtest | ea8391bd4819d9231e7b35f18c14662e6109fad0 | [
"Apache-2.0"
] | 1,022 | 2018-03-05T07:45:22.000Z | 2022-03-31T04:29:57.000Z | from .apk import APK | 20 | 20 | 0.8 | 4 | 20 | 4 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 20 | 1 | 20 | 20 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
84824bad7f671791bab334c86e2767624041a406 | 37 | py | Python | app/backend/api/endpoints/auth.py | sourcery-ai-bot/find-my-pet | 82654e20740c3bd7935374868ec0dd0b4d2c33bd | [
"Apache-2.0"
] | 2 | 2020-03-16T10:33:00.000Z | 2021-04-08T09:15:34.000Z | app/backend/api/endpoints/auth.py | sourcery-ai-bot/find-my-pet | 82654e20740c3bd7935374868ec0dd0b4d2c33bd | [
"Apache-2.0"
] | 21 | 2020-03-16T11:09:34.000Z | 2021-04-08T12:27:55.000Z | app/backend/api/endpoints/auth.py | sourcery-ai-bot/find-my-pet | 82654e20740c3bd7935374868ec0dd0b4d2c33bd | [
"Apache-2.0"
] | 1 | 2021-04-08T09:15:36.000Z | 2021-04-08T09:15:36.000Z | # coding=utf-8
# TODO: Authorization
| 9.25 | 20 | 0.72973 | 5 | 37 | 5.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03125 | 0.135135 | 37 | 3 | 21 | 12.333333 | 0.8125 | 0.837838 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0.333333 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
848a4fac7699ea0c783e3a865beed32480e9f823 | 116 | py | Python | DexLab/constants.py | chinyh/dexlab-api-wrapper-python3 | f8a3616e4930200bb783e50f1757e98fbb69c1bf | [
"MIT"
] | null | null | null | DexLab/constants.py | chinyh/dexlab-api-wrapper-python3 | f8a3616e4930200bb783e50f1757e98fbb69c1bf | [
"MIT"
] | null | null | null | DexLab/constants.py | chinyh/dexlab-api-wrapper-python3 | f8a3616e4930200bb783e50f1757e98fbb69c1bf | [
"MIT"
] | null | null | null | PUBLIC_API_URL = 'https://serum-api.dexlab.space'
PRIVATE_API_URL = 'https://serum-api.dexlab.space'
VERSION = 'v1' | 29 | 50 | 0.741379 | 18 | 116 | 4.555556 | 0.555556 | 0.146341 | 0.268293 | 0.390244 | 0.731707 | 0.731707 | 0.731707 | 0 | 0 | 0 | 0 | 0.009346 | 0.077586 | 116 | 4 | 51 | 29 | 0.757009 | 0 | 0 | 0 | 0 | 0 | 0.529915 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
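# editor's sketch for the DexLab constants above, not part of that file:
# composing a versioned endpoint; 'pairs' is an illustrative resource name,
# not a verified DexLab route.
def build_endpoint(resource, base_url=PUBLIC_API_URL, version=VERSION):
    return "{}/{}/{}".format(base_url, version, resource)  # e.g. .../v1/pairs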
84fc012e36576626f61f223d38a8ff15284d223f | 68 | py | Python | csvObject/__init__.py | sbaker-dev/csvObject | e31668c9b71284c7e7f6516e61c9617ad7abb7b1 | [
"MIT"
] | null | null | null | csvObject/__init__.py | sbaker-dev/csvObject | e31668c9b71284c7e7f6516e61c9617ad7abb7b1 | [
"MIT"
] | null | null | null | csvObject/__init__.py | sbaker-dev/csvObject | e31668c9b71284c7e7f6516e61c9617ad7abb7b1 | [
"MIT"
] | null | null | null | from csvObject.csvObject import *
from csvObject.csvWriter import *
| 22.666667 | 33 | 0.823529 | 8 | 68 | 7 | 0.5 | 0.464286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 68 | 2 | 34 | 34 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1706c2f209be6179f16b0148d5ae2b2d6881625e | 124 | py | Python | doc/core/eval.py | ponyatov/metaLeds | e017f4b14dcdf87275154d8220fbdae06cd9f370 | [
"MIT"
] | null | null | null | doc/core/eval.py | ponyatov/metaLeds | e017f4b14dcdf87275154d8220fbdae06cd9f370 | [
"MIT"
] | null | null | null | doc/core/eval.py | ponyatov/metaLeds | e017f4b14dcdf87275154d8220fbdae06cd9f370 | [
"MIT"
] | null | null | null | class Object:
## Lisp eval() in context
def eval(self, env):
raise NotImplementedError(['eval', self, env])
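## editor's sketch, not in the original file: a hypothetical concrete node
## illustrating the eval(env) contract -- evaluation reduces the object,
## resolving names through the context dictionary `env`
class Symbol(Object):
    def __init__(self, name):
        self.name = name
    def eval(self, env):
        return env[self.name]  # a symbol evaluates to its binding in env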
| 24.8 | 54 | 0.620968 | 15 | 124 | 5.133333 | 0.733333 | 0.207792 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.241935 | 124 | 4 | 55 | 31 | 0.819149 | 0.177419 | 0 | 0 | 0 | 0 | 0.040404 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
ca04b5a00fc845b234f87f415966e155d1cbb95d | 39,268 | py | Python | pytests/rebalance_new/rebalance_out.py | pavithra-mahamani/TAF | ff854adcc6ca3e50d9dc64e7756ca690251128d3 | [
"Apache-2.0"
] | null | null | null | pytests/rebalance_new/rebalance_out.py | pavithra-mahamani/TAF | ff854adcc6ca3e50d9dc64e7756ca690251128d3 | [
"Apache-2.0"
] | null | null | null | pytests/rebalance_new/rebalance_out.py | pavithra-mahamani/TAF | ff854adcc6ca3e50d9dc64e7756ca690251128d3 | [
"Apache-2.0"
] | null | null | null | import Jython_tasks.task as jython_tasks
from membase.api.exception import RebalanceFailedException
from membase.api.rest_client import RestConnection
from rebalance_base import RebalanceBaseTest
from couchbase_helper.documentgenerator import doc_generator
from remote.remote_util import RemoteMachineShellConnection
from membase.helper.rebalance_helper import RebalanceHelper
import time
class RebalanceOutTests(RebalanceBaseTest):
def setUp(self):
super(RebalanceOutTests, self).setUp()
def tearDown(self):
super(RebalanceOutTests, self).tearDown()
def test_rebalance_out_with_ops_durable(self):
self.gen_create = doc_generator(self.key, self.num_items,
self.num_items + self.items)
self.gen_delete = doc_generator(self.key, self.items / 2,
self.items)
servs_out = [self.cluster.servers[len(self.cluster.nodes_in_cluster) - i - 1] for i in range(self.nodes_out)]
rebalance_task = self.task.async_rebalance(
self.cluster.servers[:self.nodes_init], [], servs_out)
time.sleep(10)
tasks_info = self.bucket_util._async_load_all_buckets(
self.cluster, self.gen_create, "create", 0,
batch_size=self.batch_size, process_concurrency=self.process_concurrency,
replicate_to=self.replicate_to, persist_to=self.persist_to,
timeout_secs=self.sdk_timeout, retries=self.sdk_retries,
durability=self.durability_level,
ryow=self.ryow, check_persistence=self.check_persistence)
self.task_manager.get_task_result(rebalance_task)
for task in tasks_info.keys():
self.task_manager.get_task_result(task)
if task.__class__ == jython_tasks.Durability:
self.log.error(task.sdk_acked_curd_failed.keys())
self.log.error(task.sdk_exception_crud_succeed.keys())
self.assertTrue(
len(task.sdk_acked_curd_failed) == 0,
"sdk_acked_curd_failed for docs: %s" % task.sdk_acked_curd_failed.keys())
self.assertTrue(
len(task.sdk_exception_crud_succeed) == 0,
"sdk_exception_crud_succeed for docs: %s" % task.sdk_exception_crud_succeed.keys())
                self.assertTrue(
                    len(task.create_failed) == 0,
                    "create failed for docs: %s" % task.create_failed.keys())
                self.assertTrue(
                    len(task.update_failed) == 0,
                    "update failed for docs: %s" % task.update_failed.keys())
                self.assertTrue(
                    len(task.delete_failed) == 0,
                    "delete failed for docs: %s" % task.delete_failed.keys())
self.assertTrue(rebalance_task.result, "Rebalance Failed")
self.cluster.nodes_in_cluster.extend(servs_out)
self.sleep(60, "Wait for cluster to be ready after rebalance")
tasks = list()
for bucket in self.bucket_util.buckets:
if self.doc_ops is not None:
if "update" in self.doc_ops:
tasks.append(self.task.async_validate_docs(
self.cluster, bucket, self.gen_update, "update", 0,
batch_size=10))
if "create" in self.doc_ops:
tasks.append(self.task.async_validate_docs(
self.cluster, bucket, self.gen_create, "create", 0,
batch_size=10, process_concurrency=8))
if "delete" in self.doc_ops:
tasks.append(self.task.async_validate_docs(
self.cluster, bucket, self.gen_delete, "delete", 0,
batch_size=10))
for task in tasks:
self.task.jython_task_manager.get_task_result(task)
self.bucket_util.verify_stats_all_buckets(self.num_items*2)
def rebalance_out_with_ops(self):
self.gen_create = doc_generator(self.key, self.num_items,
self.num_items + self.items)
self.gen_delete = doc_generator(self.key, self.items / 2,
self.items)
servs_out = [self.cluster.servers[self.nodes_init - i - 1]
for i in range(self.nodes_out)]
tasks = list()
rebalance_task = self.task.async_rebalance(
self.cluster.servers[:self.nodes_init], [], servs_out)
tasks_info = self.loadgen_docs()
self.sleep(15, "Wait for rebalance to start")
self.task.jython_task_manager.get_task_result(rebalance_task)
if not rebalance_task.result:
for task, _ in tasks_info.items():
self.task_manager.get_task_result(task)
self.fail("Rebalance Failed")
if not self.atomicity:
self.bucket_util.verify_doc_op_task_exceptions(tasks_info,
self.cluster)
self.bucket_util.log_doc_ops_task_failures(tasks_info)
for task, task_info in tasks_info.items():
self.assertFalse(
task_info["ops_failed"],
"Doc ops failed for task: {}".format(task.thread_name))
else:
for task, task_info in tasks_info.items():
self.task_manager.get_task_result(task)
self.cluster.nodes_in_cluster = list(set(self.cluster.nodes_in_cluster) - set(servs_out))
self.sleep(20)
if not self.atomicity:
for bucket in self.bucket_util.buckets:
if self.doc_ops is not None:
if "update" in self.doc_ops:
tasks.append(self.task.async_validate_docs(
self.cluster, bucket, self.gen_update, "update", 0,
batch_size=10))
if "create" in self.doc_ops:
tasks.append(
self.task.async_validate_docs(
self.cluster, bucket, self.gen_create, "create", 0,
batch_size=10, process_concurrency=8))
if "delete" in self.doc_ops:
tasks.append(
self.task.async_validate_docs(
self.cluster, bucket, self.gen_delete, "delete", 0,
batch_size=10))
for task in tasks:
self.task.jython_task_manager.get_task_result(task)
if not self.atomicity:
self.bucket_util.verify_stats_all_buckets(self.num_items)
"""Rebalances nodes out of a cluster while doing docs ops:create, delete, update.
This test begins with all servers clustered together and loads a user defined
number of items into the cluster. Before rebalance we perform docs ops(add/remove/update/read)
in the cluster( operate with a half of items that were loaded before).It then remove nodes_out
from the cluster at a time and rebalances. Once the cluster has been rebalanced we wait for the
disk queues to drain, and then verify that there has been no data loss, sum(curr_items) match the
curr_items_total. We also check for data and its meta-data, vbucket sequene numbers"""
def rebalance_out_after_ops(self):
self.gen_delete = self.get_doc_generator(self.items / 2,
self.items)
self.gen_create = self.get_doc_generator(self.num_items,
self.num_items + self.items / 2)
        # define which doc ops will be performed during rebalancing;
        # multiple op types are allowed, but they run one at a time
self.check_temporary_failure_exception = False
self.loadgen_docs(task_verification=True)
servs_out = [self.cluster.servers[self.nodes_init - i - 1] for i in range(self.nodes_out)]
if not self.atomicity:
self.bucket_util._wait_for_stats_all_buckets()
self.bucket_util.verify_stats_all_buckets(self.num_items, timeout=120)
prev_failover_stats = self.bucket_util.get_failovers_logs(self.cluster.servers[:self.nodes_init], self.bucket_util.buckets)
prev_vbucket_stats = self.bucket_util.get_vbucket_seqnos(self.cluster.servers[:self.nodes_init], self.bucket_util.buckets)
# record_data_set = self.bucket_util.get_data_set_all(self.cluster.servers[:self.nodes_init], self.bucket_util.buckets)
self.bucket_util.compare_vbucketseq_failoverlogs(prev_vbucket_stats, prev_failover_stats)
self.add_remove_servers_and_rebalance([], servs_out)
if not self.atomicity:
self.bucket_util.verify_stats_all_buckets(self.num_items, timeout=120)
self.bucket_util.verify_cluster_stats(self.num_items, check_ep_items_remaining=True)
new_failover_stats = self.bucket_util.compare_failovers_logs(prev_failover_stats,
self.cluster.servers[:self.nodes_init - self.nodes_out], self.bucket_util.buckets)
new_vbucket_stats = self.bucket_util.compare_vbucket_seqnos(prev_vbucket_stats,
self.cluster.servers[:self.nodes_init - self.nodes_out], self.bucket_util.buckets,
perNode=False)
self.sleep(60)
# self.bucket_util.data_analysis_all(record_data_set, self.cluster.servers[:self.nodes_init - self.nodes_out], self.bucket_util.buckets)
self.bucket_util.compare_vbucketseq_failoverlogs(new_vbucket_stats, new_failover_stats)
self.bucket_util.verify_unacked_bytes_all_buckets()
nodes = self.cluster_util.get_nodes_in_cluster(self.cluster.master)
self.bucket_util.vb_distribution_analysis(
servers=nodes, buckets=self.bucket_util.buckets, std=1.0,
total_vbuckets=self.vbuckets, num_replicas=self.num_replicas)
"""Rebalances nodes out with failover and full recovery add back of a node
This test begins with all servers clustered together and loads a user defined
number of items into the cluster. Before rebalance we perform docs ops(add/remove/update/read)
in the cluster( operate with a half of items that were loaded before).It then remove nodes_out
from the cluster at a time and rebalances. Once the cluster has been rebalanced we wait for the
disk queues to drain, and then verify that there has been no data loss, sum(curr_items) match the
curr_items_total. We also check for data and its meta-data, vbucket sequene numbers"""
def rebalance_out_with_failover_full_addback_recovery(self):
self.gen_delete = self.get_doc_generator(self.items / 2,
self.items)
self.gen_create = self.get_doc_generator(self.num_items,
self.num_items + self.items / 2)
        # define which doc ops will be performed during rebalancing;
        # multiple op types are allowed, but they run one at a time
tasks_info = self.loadgen_docs()
servs_out = [self.cluster.servers[self.nodes_init - i - 1] for i in range(self.nodes_out)]
self.bucket_util.verify_stats_all_buckets(self.num_items, timeout=120)
self.bucket_util._wait_for_stats_all_buckets()
self.rest = RestConnection(self.cluster.master)
chosen = self.cluster_util.pick_nodes(self.cluster.master, howmany=1)
self.sleep(20)
prev_failover_stats = self.bucket_util.get_failovers_logs(self.cluster.servers[:self.nodes_init], self.bucket_util.buckets)
prev_vbucket_stats = self.bucket_util.get_vbucket_seqnos(self.cluster.servers[:self.nodes_init], self.bucket_util.buckets)
record_data_set = self.bucket_util.get_data_set_all(self.cluster.servers[:self.nodes_init], self.bucket_util.buckets)
self.bucket_util.compare_vbucketseq_failoverlogs(prev_vbucket_stats, prev_failover_stats)
# Mark Node for failover
success_failed_over = self.rest.fail_over(chosen[0].id, graceful=False)
# Mark Node for full recovery
if success_failed_over:
self.rest.set_recovery_type(otpNode=chosen[0].id, recoveryType="full")
self.add_remove_servers_and_rebalance([], servs_out)
if not self.atomicity:
self.bucket_util.verify_doc_op_task_exceptions(tasks_info,
self.cluster)
self.bucket_util.log_doc_ops_task_failures(tasks_info)
for task, task_info in tasks_info.items():
self.assertFalse(
task_info["ops_failed"],
"Doc ops failed for task: {}".format(task.thread_name))
else:
for task, task_info in tasks_info.items():
self.task_manager.get_task_result(task)
self.bucket_util.verify_cluster_stats(self.num_items, check_ep_items_remaining=True)
self.bucket_util.compare_failovers_logs(prev_failover_stats, self.cluster.servers[:self.nodes_init - self.nodes_out], self.bucket_util.buckets)
self.sleep(30)
self.bucket_util.data_analysis_all(record_data_set, self.cluster.servers[:self.nodes_init - self.nodes_out], self.bucket_util.buckets)
self.bucket_util.verify_unacked_bytes_all_buckets()
nodes = self.cluster_util.get_nodes_in_cluster(self.cluster.master)
self.bucket_util.vb_distribution_analysis(
servers=nodes, buckets=self.bucket_util.buckets, std=1.0,
total_vbuckets=self.vbuckets, num_replicas=self.num_replicas)
"""Rebalances nodes out with failover
This test begins with all servers clustered together and loads a user defined
number of items into the cluster. Before rebalance we perform docs ops(add/remove/update/read)
in the cluster( operate with a half of items that were loaded before).It then remove nodes_out
from the cluster at a time and rebalances. Once the cluster has been rebalanced we wait for the
disk queues to drain, and then verify that there has been no data loss, sum(curr_items) match the
curr_items_total. We also check for data and its meta-data, vbucket sequene numbers"""
def rebalance_out_with_failover(self):
fail_over = self.input.param("fail_over", False)
self.rest = RestConnection(self.cluster.master)
self.gen_delete = self.get_doc_generator(self.items / 2,
self.items)
self.gen_create = self.get_doc_generator(self.num_items,
self.num_items + self.items / 2)
        # define which doc ops will be performed during rebalancing;
        # multiple op types are allowed, but they run one at a time
tasks_info = self.loadgen_docs()
ejectedNode = self.cluster_util.find_node_info(self.cluster.master, self.cluster.servers[self.nodes_init - 1])
if not self.atomicity:
self.bucket_util.verify_stats_all_buckets(self.num_items, timeout=120)
self.bucket_util._wait_for_stats_all_buckets()
self.sleep(20)
prev_failover_stats = self.bucket_util.get_failovers_logs(self.cluster.servers[:self.nodes_init], self.bucket_util.buckets)
prev_vbucket_stats = self.bucket_util.get_vbucket_seqnos(self.cluster.servers[:self.nodes_init], self.bucket_util.buckets)
record_data_set = self.bucket_util.get_data_set_all(self.cluster.servers[:self.nodes_init], self.bucket_util.buckets)
self.bucket_util.compare_vbucketseq_failoverlogs(prev_vbucket_stats, prev_failover_stats)
self.rest = RestConnection(self.cluster.master)
chosen = self.cluster_util.pick_nodes(self.cluster.master, howmany=1)
new_server_list = self.cluster_util.add_remove_servers(
self.cluster.servers, self.cluster.servers[:self.nodes_init],
[self.cluster.servers[self.nodes_init - 1], chosen[0]], [])
# Mark Node for failover
success_failed_over = self.rest.fail_over(chosen[0].id, graceful=fail_over)
self.nodes = self.rest.node_statuses()
self.rest.rebalance(otpNodes=[node.id for node in self.nodes], ejectedNodes=[chosen[0].id])
self.assertTrue(self.rest.monitorRebalance(stop_if_loop=True), msg="Rebalance failed")
self.cluster.nodes_in_cluster = new_server_list
if not self.atomicity:
self.bucket_util.verify_doc_op_task_exceptions(tasks_info,
self.cluster)
self.bucket_util.log_doc_ops_task_failures(tasks_info)
for task, task_info in tasks_info.items():
self.assertFalse(
task_info["ops_failed"],
"Doc ops failed for task: {}".format(task.thread_name))
self.bucket_util.verify_cluster_stats(self.num_items, check_ep_items_remaining=True)
else:
for task, task_info in tasks_info.items():
self.task_manager.get_task_result(task)
self.sleep(30)
self.bucket_util.data_analysis_all(record_data_set, new_server_list, self.bucket_util.buckets)
self.bucket_util.verify_unacked_bytes_all_buckets()
nodes = self.cluster_util.get_nodes_in_cluster(self.cluster.master)
self.bucket_util.vb_distribution_analysis(servers=nodes, buckets=self.bucket_util.buckets,num_replicas =self.num_replicas,std=1.0, total_vbuckets=self.vbuckets)
"""Rebalances nodes out of a cluster while doing docs ops:create, delete, update along with compaction.
This test begins with all servers clustered together and loads a user defined
number of items into the cluster. It then remove nodes_out from the cluster at a time
and rebalances. During the rebalance we perform docs ops(add/remove/update/read)
in the cluster( operate with a half of items that were loaded before).
Once the cluster has been rebalanced we wait for the disk queues to drain,
and then verify that there has been no data loss, sum(curr_items) match the curr_items_total.
Once all nodes have been rebalanced the test is finished."""
def rebalance_out_with_compaction_and_ops(self):
self.gen_delete = self.get_doc_generator(self.items / 2,
self.items)
self.gen_create = self.get_doc_generator(self.num_items,
self.num_items + self.items / 2)
servs_out = [self.cluster.servers[self.nodes_init - i - 1] for i in range(self.nodes_out)]
rebalance_task = self.task.async_rebalance(
self.cluster.servers[:1], [], servs_out)
compaction_task = list()
for bucket in self.bucket_util.buckets:
compaction_task.append(self.task.async_compact_bucket(self.cluster.master, bucket))
        # define which doc ops will be performed during rebalancing;
        # multiple op types are allowed, but they run one at a time
tasks_info = self.loadgen_docs()
self.task.jython_task_manager.get_task_result(rebalance_task)
if not rebalance_task.result:
for task, _ in tasks_info.items():
self.task_manager.get_task_result(task)
self.fail("Rebalance Failed")
if not self.atomicity:
self.bucket_util.verify_doc_op_task_exceptions(tasks_info,
self.cluster)
self.bucket_util.log_doc_ops_task_failures(tasks_info)
for task, task_info in tasks_info.items():
self.assertFalse(
task_info["ops_failed"],
"Doc ops failed for task: {}".format(task.thread_name))
else:
for task, task_info in tasks_info.items():
self.task_manager.get_task_result(task)
for task in compaction_task:
self.task_manager.get_task_result(task)
self.cluster.nodes_in_cluster = list(set(self.cluster.nodes_in_cluster) - set(servs_out))
self.bucket_util.verify_cluster_stats(self.num_items)
self.bucket_util.verify_unacked_bytes_all_buckets()
"""Rebalances nodes from a cluster during getting random keys.
This test begins with all servers clustered together and loads a user defined
number of items into the cluster. Then we send requests to all nodes in the cluster
to get random key values. Next step is remove nodes_out from the cluster
and rebalance it. During rebalancing we get random keys from all nodes and verify
that are different every time. Once the cluster has been rebalanced
we again get random keys from all new nodes in the cluster,
than we wait for the disk queues to drain, and then verify that there has been no data loss,
sum(curr_items) match the curr_items_total."""
def rebalance_out_get_random_key(self):
servs_out = [self.cluster.servers[self.nodes_init - i - 1] for i in range(self.nodes_out)]
# get random keys for new added nodes
rest_cons = [RestConnection(self.cluster.servers[i]) for i in xrange(self.nodes_init - self.nodes_out)]
rebalance = self.task.async_rebalance(self.cluster.servers[:self.nodes_init], [], servs_out)
self.sleep(2)
result = []
num_iter = 0
# get random keys for each node during rebalancing
while rest_cons[0]._rebalance_progress_status() == 'running' and num_iter < 100:
list_threads = []
temp_result = []
self.log.info("getting random keys for all nodes in cluster....")
for rest in rest_cons:
result.append(rest.get_random_key('default'))
self.sleep(1)
temp_result.append(rest.get_random_key('default'))
if tuple(temp_result) == tuple(result):
self.log.exception("random keys are not changed")
else:
result = temp_result
num_iter += 1
self.task.jython_task_manager.get_task_result(rebalance)
self.cluster.nodes_in_cluster = list(set(self.cluster.nodes_in_cluster) - set(servs_out))
if not self.atomicity:
self.bucket_util.verify_cluster_stats(self.num_items)
self.bucket_util.verify_unacked_bytes_all_buckets()
"""Rebalances nodes out of a cluster while doing docs' ops.
This test begins with all servers clustered together and loads a user defined
number of items into the cluster. It then removes two nodes at a time from the
cluster and rebalances. During the rebalance we update(all of the items in the cluster)/
delete( num_items/(num_servers -1) in each iteration)/
create(a half of initial items in each iteration). Once the cluster has been rebalanced
the test waits for the disk queues to drain and then verifies that there has been no data loss,
sum(curr_items) match the curr_items_total.
Once all nodes have been rebalanced out of the cluster the test finishes."""
def incremental_rebalance_out_with_ops(self):
items = self.items
delete_from = items/2
create_from = items
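        # majority of the (num_replicas + 1) data copies, i.e. n // 2 + 1 under
        # Python 2 integer division; the loop below never rebalances the cluster
        # below this node count, so a durability quorum stays reachable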
majority = (self.num_replicas+1)/2+1
for i in reversed(range(majority, self.nodes_init, 2)):
self.gen_delete = self.get_doc_generator(delete_from,
delete_from+items/2)
self.gen_create = self.get_doc_generator(create_from,
create_from+items)
delete_from += items
create_from += items
rebalance_task = self.task.async_rebalance(self.cluster.servers[:i], [], self.cluster.servers[i:i + 2])
tasks_info = self.loadgen_docs()
self.task.jython_task_manager.get_task_result(rebalance_task)
if not rebalance_task.result:
for task, _ in tasks_info.items():
self.task_manager.get_task_result(task)
self.fail("Rebalance Failed")
if not self.atomicity:
self.bucket_util.verify_doc_op_task_exceptions(tasks_info,
self.cluster)
self.bucket_util.log_doc_ops_task_failures(tasks_info)
for task, task_info in tasks_info.items():
self.assertFalse(
task_info["ops_failed"],
"Doc ops failed for task: {}".format(task.thread_name))
self.cluster.nodes_in_cluster = list(set(self.cluster.nodes_in_cluster) - set(self.cluster.servers[i:i + 2]))
self.bucket_util.verify_cluster_stats(self.num_items)
self.bucket_util.verify_unacked_bytes_all_buckets()
else:
for task, task_info in tasks_info.items():
self.task_manager.get_task_result(task)
"""Rebalances nodes out of a cluster during view queries.
This test begins with all servers clustered together and loads a user defined
number of items into the cluster. It creates num_views as
development/production view with default map view funcs(is_dev_ddoc = True by default).
It then removes nodes_out nodes at a time and rebalances that node from the cluster.
During the rebalancing we perform view queries for all views and verify the expected number
of docs for them. Perform the same view queries after cluster has been completed. Then we wait for
the disk queues to drain, and then verify that there has been no data loss,sum(curr_items) match
the curr_items_total. Once successful view queries the test is finished."""
def rebalance_out_with_queries(self):
num_views = self.input.param("num_views", 5)
is_dev_ddoc = self.input.param("is_dev_ddoc", False)
ddoc_name = "ddoc1"
prefix = ("", "dev_")[is_dev_ddoc]
query = dict()
query["connectionTimeout"] = 60000
query["full_set"] = "true"
views = list()
tasks = list()
for bucket in self.bucket_util.buckets:
temp = self.bucket_util.make_default_views(
self.default_view,
self.default_view_name,
num_views, is_dev_ddoc)
temp_tasks = self.bucket_util.async_create_views(
self.cluster.master, ddoc_name, temp, bucket)
views += temp
tasks += temp_tasks
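        # increase the query timeout for big data sets (same intent as the
        # "increase timeout for big data" note in the incremental variant
        # below); the / 50000 term looks like a docs-per-second heuristic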
timeout = None
if self.active_resident_threshold == 0:
timeout = max(self.wait_timeout * 4, len(self.bucket_util.buckets) * self.wait_timeout * self.num_items / 50000)
for task in tasks:
self.task.jython_task_manager.get_task_result(task)
for bucket in self.bucket_util.buckets:
for view in views:
# run queries to create indexes
self.bucket_util.query_view(self.cluster.master, prefix + ddoc_name, view.name, query)
active_tasks = self.cluster_util.async_monitor_active_task(
self.cluster.servers, "indexer", "_design/" + prefix + ddoc_name,
wait_task=False)
for active_task in active_tasks:
self.task_manager.get_task_result(active_task)
self.assertTrue(active_task.result)
expected_rows = self.num_items
if self.max_verify:
expected_rows = self.max_verify
query["limit"] = expected_rows
query["stale"] = "false"
for bucket in self.bucket_util.buckets:
self.bucket_util.perform_verify_queries(
num_views, prefix, ddoc_name, self.default_view_name,
query, bucket=bucket, wait_time=timeout,
expected_rows=expected_rows)
servs_out = self.cluster.servers[-self.nodes_out:]
rebalance = self.task.async_rebalance([self.cluster.master], [], servs_out)
self.sleep(self.wait_timeout / 5)
# see that the result of view queries are the same as expected during the test
for bucket in self.bucket_util.buckets:
self.bucket_util.perform_verify_queries(
num_views, prefix, ddoc_name, self.default_view_name,
query, bucket=bucket, wait_time=timeout,
expected_rows=expected_rows)
# verify view queries results after rebalancing
self.task.jython_task_manager.get_task_result(rebalance)
self.cluster.nodes_in_cluster = list(set(self.cluster.nodes_in_cluster) - set(servs_out))
for bucket in self.bucket_util.buckets:
self.bucket_util.perform_verify_queries(
num_views, prefix, ddoc_name, self.default_view_name,
query, bucket=bucket, wait_time=timeout,
expected_rows=expected_rows)
if not self.atomicity:
self.bucket_util.verify_cluster_stats(self.num_items)
self.bucket_util.verify_unacked_bytes_all_buckets()
"""Rebalances nodes out of a cluster during view queries incrementally.
This test begins with all servers clustered together and loading a given number of items
into the cluster. It creates num_views as development/production view with
default map view funcs(is_dev_ddoc = True by default). It then adds two nodes at a time and
rebalances that node into the cluster. During the rebalancing we perform view queries
for all views and verify the expected number of docs for them.
Perform the same view queries after cluster has been completed. Then we wait for
the disk queues to drain, and then verify that there has been no data loss, sum(curr_items) match
the curr_items_total. Once all nodes have been rebalanced in the test is finished."""
def incremental_rebalance_out_with_queries(self):
num_views = self.input.param("num_views", 5)
is_dev_ddoc = self.input.param("is_dev_ddoc", True)
views = self.bucket_util.make_default_views(self.default_view,
self.default_view_name,
num_views, is_dev_ddoc)
ddoc_name = "ddoc1"
prefix = ("", "dev_")[is_dev_ddoc]
# increase timeout for big data
timeout = None
if self.active_resident_threshold == 0:
timeout = max(self.wait_timeout * 5, self.wait_timeout * self.num_items / 25000)
query = {}
query["connectionTimeout"] = 60000
query["full_set"] = "true"
tasks = self.bucket_util.async_create_views(self.cluster.master, ddoc_name, views, 'default')
for task in tasks:
self.task.jython_task_manager.get_task_result(task)
for view in views:
# run queries to create indexes
self.bucket_util.query_view(self.cluster.master, prefix + ddoc_name, view.name, query, timeout=self.wait_timeout * 2)
for i in xrange(3):
active_tasks = self.cluster_util.async_monitor_active_task(self.cluster.servers, "indexer",
"_design/" + prefix + ddoc_name, wait_task=False)
for active_task in active_tasks:
self.task_manager.get_task_result(active_task)
self.assertTrue(active_task.result)
self.sleep(2)
expected_rows = self.num_items
if self.max_verify:
expected_rows = self.max_verify
query["limit"] = expected_rows
query["stale"] = "false"
self.bucket_util.perform_verify_queries(
num_views, prefix, ddoc_name, self.default_view_name,
query, wait_time=timeout,
expected_rows=expected_rows)
query["stale"] = "update_after"
for i in reversed(range(1, self.nodes_init, 2)):
rebalance = self.task.async_rebalance(self.cluster.servers[:i], [], self.cluster.servers[i:i + 2])
self.sleep(self.wait_timeout / 5)
# see that the result of view queries are the same as expected during the test
self.bucket_util.perform_verify_queries(
num_views, prefix, ddoc_name, self.default_view_name,
query, wait_time=timeout,
expected_rows=expected_rows)
# verify view queries results after rebalancing
self.task.jython_task_manager.get_task_result(rebalance)
self.cluster.nodes_in_cluster = list(set(self.cluster.nodes_in_cluster) - set(self.cluster.servers[i:i + 2]))
self.bucket_util.perform_verify_queries(
num_views, prefix, ddoc_name, self.default_view_name,
query, wait_time=timeout,
expected_rows=expected_rows)
self.bucket_util.verify_cluster_stats(self.num_items)
self.bucket_util.verify_unacked_bytes_all_buckets()
"""Rebalances nodes into a cluster when one node is warming up.
This test begins with loads a user defined number of items into the cluster
and all servers clustered together. Next steps are: stop defined
node(master_restart = False by default), wait 20 sec and start the stopped node.
Without waiting for the node to start up completely, rebalance out servs_out servers.
Expect that rebalance is failed. Wait for warmup completed and start
rebalance with the same configuration. Once the cluster has been rebalanced
we wait for the disk queues to drain, and then verify that there has been no data loss,
sum(curr_items) match the curr_items_total."""
def rebalance_out_with_warming_up(self):
master_restart = self.input.param("master_restart", False)
if master_restart:
warmup_node = self.cluster.master
else:
warmup_node = self.cluster.servers[len(self.cluster.servers) - self.nodes_out - 1]
servs_out = self.cluster.servers[len(self.cluster.servers) - self.nodes_out:]
shell = RemoteMachineShellConnection(warmup_node)
shell.stop_couchbase()
self.sleep(20)
shell.start_couchbase()
shell.disconnect()
try:
rebalance = self.task.async_rebalance(
self.cluster.servers, [], servs_out)
self.task.jython_task_manager.get_task_result(rebalance)
self.cluster.nodes_in_cluster = list(set(self.cluster.nodes_in_cluster) - set(servs_out))
except RebalanceFailedException:
self.log.info("rebalance was failed as expected")
self.assertTrue(self.bucket_util._wait_warmup_completed(
self, [warmup_node], 'default',
wait_time=self.wait_timeout * 10))
self.log.info("second attempt to rebalance")
rebalance = self.task.async_rebalance(
self.cluster.servers, [], servs_out)
self.task.jython_task_manager.get_task_result(rebalance)
self.cluster.nodes_in_cluster = list(set(self.cluster.nodes_in_cluster) - set(servs_out))
if not self.atomicity:
self.bucket_util.verify_cluster_stats(self.num_items)
self.bucket_util.verify_unacked_bytes_all_buckets()
"""Rebalances nodes out of a cluster while doing mutations and deletions.
This test begins with all servers clustered together and loads a user defined
number of items into the cluster. It then removes one node at a time from the
cluster and rebalances. During the rebalance we update half of the items in the
cluster and delete the other half. Once the cluster has been rebalanced the test
recreates all of the deleted items, waits for the disk queues to drain, and then
verifies that there has been no data loss, sum(curr_items) match the curr_items_total.
Once all nodes have been rebalanced out of the cluster the test finishes."""
def incremental_rebalance_out_with_mutation_and_deletion(self):
gen_2 = self.get_doc_generator(self.num_items / 2 + 2000,
self.num_items)
for i in reversed(range(self.nodes_init)[1:]):
# don't use batch for rebalance out 2-1 nodes
rebalance_task = self.task.async_rebalance(
self.cluster.servers[:i], [], [self.cluster.servers[i]])
self.sleep(5, "Wait for rebalance to start")
tasks_info = dict()
tem_tasks_info = self.bucket_util._async_load_all_buckets(
self.cluster, self.gen_update, "update", 0)
tasks_info.update(tem_tasks_info.copy())
tem_tasks_info = self.bucket_util._async_load_all_buckets(
self.cluster, gen_2, "delete", 0)
tasks_info.update(tem_tasks_info.copy())
self.task.jython_task_manager.get_task_result(rebalance_task)
self.cluster.nodes_in_cluster.remove(self.cluster.servers[i])
for task in tasks_info.keys():
self.task_manager.get_task_result(task)
self.sleep(5, "Let the cluster relax for some time")
self._load_all_buckets(self.cluster, gen_2, "create", 0)
self.bucket_util.verify_cluster_stats(self.num_items)
self.bucket_util.verify_unacked_bytes_all_buckets()
"""Rebalances nodes out of a cluster while doing mutations and expirations.
This test begins with all servers clustered together and loads a user defined number
of items into the cluster. It then removes one node at a time from the cluster and
rebalances. During the rebalance we update all of the items in the cluster and set
half of the items to expire in 5 seconds. Once the cluster has been rebalanced the
test recreates all of the expired items, waits for the disk queues to drain, and then
verifies that there has been no data loss, sum(curr_items) match the curr_items_total.
Once all nodes have been rebalanced out of the cluster the test finishes."""
def incremental_rebalance_out_with_mutation_and_expiration(self):
gen_2 = self.get_doc_generator(self.num_items / 2 + 2000, self.num_items)
batch_size = 1000
for i in reversed(range(self.nodes_init)[2:]):
# don't use batch for rebalance out 2-1 nodes
rebalance = self.task.async_rebalance(self.cluster.servers[:i], [], [self.cluster.servers[i]])
self.sleep(5, "Wait for rebalance to start")
self._load_all_buckets(self.cluster, self.gen_update, "update", 0, batch_size=batch_size, timeout_secs=60)
self._load_all_buckets(self.cluster, gen_2, "update", 5, batch_size=batch_size, timeout_secs=60)
self.task.jython_task_manager.get_task_result(rebalance)
self.cluster.nodes_in_cluster = list(set(self.cluster.nodes_in_cluster) - {self.cluster.servers[i]})
self.sleep(5, "Let the cluster relax for some time")
self._load_all_buckets(self.cluster, gen_2, "create", 0)
self.bucket_util.verify_cluster_stats(self.num_items)
self.bucket_util.verify_unacked_bytes_all_buckets()
| 57.158661 | 168 | 0.658246 | 5,223 | 39,268 | 4.710511 | 0.071989 | 0.051416 | 0.060318 | 0.026826 | 0.845751 | 0.827785 | 0.817258 | 0.789863 | 0.768727 | 0.760761 | 0 | 0.006611 | 0.264286 | 39,268 | 686 | 169 | 57.241983 | 0.844998 | 0.031756 | 0 | 0.663366 | 0 | 0 | 0.036621 | 0.001545 | 0 | 0 | 0 | 0 | 0.029703 | 1 | 0.029703 | false | 0 | 0.015842 | 0 | 0.047525 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ca0808f3c20bdd654e6e8927e600f24ca9e7b773 | 1,412 | py | Python | tests/test_test_helpers.py | QualiSystems/Shellfoundry-Traffic | 967a6ab0208116506fcf42822bb3f293c3be18c6 | [
"Apache-2.0"
] | null | null | null | tests/test_test_helpers.py | QualiSystems/Shellfoundry-Traffic | 967a6ab0208116506fcf42822bb3f293c3be18c6 | [
"Apache-2.0"
] | 4 | 2020-10-29T13:16:29.000Z | 2020-11-22T09:00:05.000Z | tests/test_test_helpers.py | QualiSystems/Shellfoundry-Traffic | 967a6ab0208116506fcf42822bb3f293c3be18c6 | [
"Apache-2.0"
] | null | null | null |
import pytest
from shellfoundry_traffic.test_helpers import create_session_from_config, TestHelpers
RESERVATION_NAME = 'testing 1 2 3'
@pytest.fixture()
def session():
session = create_session_from_config()
yield session
# todo: delete session.
def test_reservation(session) -> None:
test_helper = TestHelpers(session)
test_helper.create_reservation(RESERVATION_NAME)
reservations = test_helper.session.GetCurrentReservations(reservationOwner=test_helper.session.username)
assert [r for r in reservations.Reservations if r.Name == RESERVATION_NAME]
test_helper.end_reservation()
reservations = test_helper.session.GetCurrentReservations(reservationOwner=test_helper.session.username)
assert not [r for r in reservations.Reservations if r.Name == RESERVATION_NAME]
def test_topology_reservation(session) -> None:
test_helper = TestHelpers(session)
test_helper.create_topology_reservation('Test Topology', reservation_name=RESERVATION_NAME)
reservations = test_helper.session.GetCurrentReservations(reservationOwner=test_helper.session.username)
assert [r for r in reservations.Reservations if r.Name == RESERVATION_NAME]
test_helper.end_reservation()
reservations = test_helper.session.GetCurrentReservations(reservationOwner=test_helper.session.username)
assert not [r for r in reservations.Reservations if r.Name == RESERVATION_NAME]
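# editor's sketch, not in the original file: both tests above repeat the same
# reservation lookup; a hypothetical helper like this would collapse it.
def _reservation_names(session):
    reservations = session.GetCurrentReservations(reservationOwner=session.username)
    return [r.Name for r in reservations.Reservations]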
| 41.529412 | 108 | 0.800283 | 167 | 1,412 | 6.538922 | 0.215569 | 0.128205 | 0.124542 | 0.106227 | 0.750916 | 0.750916 | 0.750916 | 0.750916 | 0.750916 | 0.750916 | 0 | 0.002427 | 0.124646 | 1,412 | 33 | 109 | 42.787879 | 0.881068 | 0.014873 | 0 | 0.521739 | 0 | 0 | 0.018732 | 0 | 0 | 0 | 0 | 0.030303 | 0.173913 | 1 | 0.130435 | false | 0 | 0.086957 | 0 | 0.217391 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ca0d071a442eb24e7ad8e35e632fbcb13d0853c4 | 1,240 | py | Python | spam_bot/spam_bot.py | HirushaPramuditha/Spam-Bot | 099abe0ecd8582ab138057a63dd5ddb344c99b56 | [
"MIT"
] | 1 | 2022-03-22T07:59:57.000Z | 2022-03-22T07:59:57.000Z | spam_bot/spam_bot.py | HirushaPramuditha/Spam-Bot | 099abe0ecd8582ab138057a63dd5ddb344c99b56 | [
"MIT"
] | null | null | null | spam_bot/spam_bot.py | HirushaPramuditha/Spam-Bot | 099abe0ecd8582ab138057a63dd5ddb344c99b56 | [
"MIT"
] | 1 | 2021-08-02T22:14:22.000Z | 2021-08-02T22:14:22.000Z | import pyautogui
import time
countdown = [5, 4, 3, 2, 1]
class ReadFile:
def __init__(self, file):
self.file = file
def spam(self):
f = open(self.file, "r")
print("""
╔═══╗─────────╔══╗───╔╗
║╔═╗║─────────║╔╗║──╔╝╚╗
║╚══╦══╦══╦╗╔╗║╚╝╚╦═╩╗╔╝
╚══╗║╔╗║╔╗║╚╝║║╔═╗║╔╗║║
║╚═╝║╚╝║╔╗║║║║║╚═╝║╚╝║╚╗
╚═══╣╔═╩╝╚╩╩╩╝╚═══╩══╩═╝
────║║
────╚╝
""")
print(
"To stop the program, move the curser to the upper left corner of the screen.")
print("")
for num in countdown:
print(f"Starting in {num}...")
time.sleep(1)
print("Boom!")
for line in f:
pyautogui.typewrite(line)
pyautogui.press("enter")
def spam(msg, count):
print("""
╔═══╗─────────╔══╗───╔╗
║╔═╗║─────────║╔╗║──╔╝╚╗
║╚══╦══╦══╦╗╔╗║╚╝╚╦═╩╗╔╝
╚══╗║╔╗║╔╗║╚╝║║╔═╗║╔╗║║
║╚═╝║╚╝║╔╗║║║║║╚═╝║╚╝║╚╗
╚═══╣╔═╩╝╚╩╩╩╝╚═══╩══╩═╝
────║║
────╚╝
""")
print("To stop the program, move the curser to the upper left corner of the screen.")
print("")
for num in countdown:
print(f"Starting in {num}...")
time.sleep(1)
print("Boom!")
for _ in range(int(count)):
pyautogui.typewrite(msg)
pyautogui.press("enter")
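# editor's sketch, not in the original file: illustrative entry point; the
# message and count are arbitrary examples, and the guard keeps importing this
# module free of side effects.
if __name__ == "__main__":
    spam("hello", 3)  # types "hello" three times after the countdown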
| 18.787879 | 91 | 0.397581 | 132 | 1,240 | 6.030303 | 0.401515 | 0.030151 | 0.130653 | 0.190955 | 0.721106 | 0.721106 | 0.721106 | 0.721106 | 0.721106 | 0.721106 | 0 | 0.007584 | 0.255645 | 1,240 | 65 | 92 | 19.076923 | 0.521127 | 0 | 0 | 0.666667 | 0 | 0 | 0.435484 | 0.229032 | 0 | 0 | 0 | 0 | 0 | 1 | 0.0625 | false | 0 | 0.041667 | 0 | 0.125 | 0.208333 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ca11a2eaf1e52944e20bb0956bd6aa1ddf10dfc4 | 1,471 | py | Python | Username_Passwd.py | arpansarkar190794/Arpan_Sarkar | b36f66f0ed00668b005fae903ce463883a803fd5 | [
"bzip2-1.0.6"
] | null | null | null | Username_Passwd.py | arpansarkar190794/Arpan_Sarkar | b36f66f0ed00668b005fae903ce463883a803fd5 | [
"bzip2-1.0.6"
] | null | null | null | Username_Passwd.py | arpansarkar190794/Arpan_Sarkar | b36f66f0ed00668b005fae903ce463883a803fd5 | [
"bzip2-1.0.6"
] | null | null | null | User={'Mrudula':'Mrudula123','Arpan':'Arpan123','Diganta':'Diganta123','Aliyas':'Aliyas123'}
print ("Enter your Username" )
Username= input()
print ("Enter your Password" )
Password= input()
if Username== 'Mrudula' and Password== User['Mrudula']:
print('Your Login is Succesfull')
elif Username == 'Arpan' and Password == User['Arpan']:
print('Your Login is Succesfull')
elif Username == 'Diganta' and Password == User['Diganta']:
print('Your Login is Succesfull')
elif Username == 'Aliyas' and Password == User['Aliyas']:
print('Your Login is Succesfull')
elif Username == 'Mrudula' and Password != User['Mrudula']:
print('Incorrect Password')
elif Username == 'Arpan' and Password != User['Arpan']:
print('Incorrect Password')
elif Username == 'Diganta' and Password != User['Diganta']:
print('Incorrect Password')
elif Username == 'Aliyas' and Password != User['Aliyas']:
print('Incorrect Password')
elif Username != 'Mrudula' and Password == User['Mrudula']:
print('Incorrect Username')
elif Username != 'Arpan' and Password == User['Arpan']:
print('Incorrect Username')
elif Username != 'Diganta' and Password == User['Diganta']:
print('Incorrect Username')
elif Username != 'Aliyas' and Password == User['Aliyas']:
print('Incorrect Username')
else:
print('Invalid Credentials') | 47.451613 | 93 | 0.622026 | 153 | 1,471 | 5.980392 | 0.163399 | 0.144262 | 0.196721 | 0.069945 | 0.810929 | 0.749727 | 0.749727 | 0.612022 | 0.46776 | 0 | 0 | 0.010526 | 0.225017 | 1,471 | 31 | 94 | 47.451613 | 0.792105 | 0 | 0 | 0.387097 | 0 | 0 | 0.352982 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0.580645 | 0 | 0 | 0 | 0.483871 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 6 |
ca3e12a7e372d55eaadf410d422893daee193562 | 41 | py | Python | lib/build/__init__.py | dayekuaipao/PyTorchTemplate | d34733e96cf2bdd6859be46708e9d6d5dd977841 | [
"MIT"
] | 1 | 2020-08-24T17:09:38.000Z | 2020-08-24T17:09:38.000Z | lib/build/__init__.py | dayekuaipao/PyTorchTemplate | d34733e96cf2bdd6859be46708e9d6d5dd977841 | [
"MIT"
] | null | null | null | lib/build/__init__.py | dayekuaipao/PyTorchTemplate | d34733e96cf2bdd6859be46708e9d6d5dd977841 | [
"MIT"
] | 2 | 2020-08-24T17:09:43.000Z | 2021-05-19T03:04:10.000Z | from lib.build.registry import Registries | 41 | 41 | 0.878049 | 6 | 41 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073171 | 41 | 1 | 41 | 41 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ca4bcc7a7525d7a252e6d264336f9a3edd7e5fff | 37 | py | Python | tests/test_statement_parser.py | Gallaecio/shublang | 3e145b4ae0149c91267be34bdac37a2dd7a7346f | [
"BSD-3-Clause"
] | 10 | 2020-05-11T21:58:39.000Z | 2022-01-28T01:00:57.000Z | tests/test_statement_parser.py | Gallaecio/shublang | 3e145b4ae0149c91267be34bdac37a2dd7a7346f | [
"BSD-3-Clause"
] | 29 | 2020-04-29T06:51:49.000Z | 2021-04-05T10:57:46.000Z | tests/test_statement_parser.py | Gallaecio/shublang | 3e145b4ae0149c91267be34bdac37a2dd7a7346f | [
"BSD-3-Clause"
] | 6 | 2020-03-31T18:21:25.000Z | 2021-04-24T02:17:55.000Z | # TODO add tests for StatementParser
| 18.5 | 36 | 0.810811 | 5 | 37 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162162 | 37 | 1 | 37 | 37 | 0.967742 | 0.918919 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 1 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
04751df2c5673d757c40fa7257db6b46bc4d8d5e | 182 | py | Python | great_expectations/render/renderer/__init__.py | orenovadia/great_expectations | 76ef0c4e066227f8b589a1ee6ac885618f65906e | [
"Apache-2.0"
] | null | null | null | great_expectations/render/renderer/__init__.py | orenovadia/great_expectations | 76ef0c4e066227f8b589a1ee6ac885618f65906e | [
"Apache-2.0"
] | null | null | null | great_expectations/render/renderer/__init__.py | orenovadia/great_expectations | 76ef0c4e066227f8b589a1ee6ac885618f65906e | [
"Apache-2.0"
] | 1 | 2022-02-10T04:20:37.000Z | 2022-02-10T04:20:37.000Z | from .column_section_renderer import DescriptiveColumnSectionRenderer, PrescriptiveColumnSectionRenderer
from .page_renderer import DescriptivePageRenderer, PrescriptivePageRenderer
| 60.666667 | 104 | 0.923077 | 13 | 182 | 12.692308 | 0.769231 | 0.169697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054945 | 182 | 2 | 105 | 91 | 0.959302 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
04838f0946d976c23b5bef8cf9ee08ea7e1eb143 | 162 | py | Python | anchor/packages/templatetags/more_human.py | kam1sh/anchor | 6699a7c3f0d43f358ea399490227cbeaa63df075 | [
"MIT"
] | 1 | 2019-05-04T07:24:40.000Z | 2019-05-04T07:24:40.000Z | anchor/packages/templatetags/more_human.py | kam1sh/ciconia | 6699a7c3f0d43f358ea399490227cbeaa63df075 | [
"MIT"
] | null | null | null | anchor/packages/templatetags/more_human.py | kam1sh/ciconia | 6699a7c3f0d43f358ea399490227cbeaa63df075 | [
"MIT"
] | null | null | null | import humanize
from django import template
register = template.Library()
@register.filter
def naturalsize(value: int):
return humanize.naturalsize(value)
| 16.2 | 38 | 0.783951 | 19 | 162 | 6.684211 | 0.684211 | 0.251969 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135802 | 162 | 9 | 39 | 18 | 0.907143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0.333333 | 0.166667 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
0493cc6d0f9830db3bac51273f59829b8bb7b8be | 35 | py | Python | backend/server/scripts/transcribe/music/__init__.py | vaastav/eTone | a544605c5d23d1d984385bb9c52a65d63f4bdd41 | [
"BSD-3-Clause"
] | 2 | 2018-04-16T08:55:40.000Z | 2018-08-09T09:58:47.000Z | backend/server/scripts/transcribe/music/__init__.py | vaastav/eTone | a544605c5d23d1d984385bb9c52a65d63f4bdd41 | [
"BSD-3-Clause"
] | 3 | 2017-12-25T07:55:03.000Z | 2019-07-10T02:58:44.000Z | backend/server/scripts/transcribe/music/__init__.py | vaastav/eTone | a544605c5d23d1d984385bb9c52a65d63f4bdd41 | [
"BSD-3-Clause"
] | 2 | 2019-07-05T21:21:16.000Z | 2021-12-31T21:13:37.000Z | from .splitter import SongSplitter
| 17.5 | 34 | 0.857143 | 4 | 35 | 7.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 35 | 1 | 35 | 35 | 0.967742 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
04bf80eb35809964bb4d926b51d8078744fd8c10 | 32 | py | Python | config/__init__.py | sajjadmaneshi/dws_dev_007_python_q2 | b95617041f13de43fbdce398adb0cbbcc6276a1e | [
"Apache-2.0"
] | null | null | null | config/__init__.py | sajjadmaneshi/dws_dev_007_python_q2 | b95617041f13de43fbdce398adb0cbbcc6276a1e | [
"Apache-2.0"
] | null | null | null | config/__init__.py | sajjadmaneshi/dws_dev_007_python_q2 | b95617041f13de43fbdce398adb0cbbcc6276a1e | [
"Apache-2.0"
] | null | null | null | from config.config import Config | 32 | 32 | 0.875 | 5 | 32 | 5.6 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 32 | 1 | 32 | 32 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
04d1b8a8efcc1c679b27212ce0d8b00f59d2f71b | 20,418 | py | Python | testproject/tests/test_second_step_authentication.py | CambiaTech/django-trench | 5ee1d0c4c01ce982529583d85ff8eb8377d224d3 | [
"MIT"
] | null | null | null | testproject/tests/test_second_step_authentication.py | CambiaTech/django-trench | 5ee1d0c4c01ce982529583d85ff8eb8377d224d3 | [
"MIT"
] | null | null | null | testproject/tests/test_second_step_authentication.py | CambiaTech/django-trench | 5ee1d0c4c01ce982529583d85ff8eb8377d224d3 | [
"MIT"
] | null | null | null | import pytest
from django.contrib.auth import get_user_model
from rest_framework.status import (
HTTP_200_OK,
HTTP_204_NO_CONTENT,
HTTP_400_BAD_REQUEST,
HTTP_401_UNAUTHORIZED,
)
from rest_framework.test import APIClient
from time import sleep
from twilio.base.exceptions import TwilioException, TwilioRestException
from tests.utils import TrenchAPIClient
from trench.backends.provider import get_mfa_handler
from trench.command.replace_mfa_method_backup_codes import (
regenerate_backup_codes_for_mfa_method_command,
)
from trench.exceptions import MFAMethodDoesNotExistError
from trench.models import MFAMethod
User = get_user_model()
@pytest.mark.django_db
def test_mfa_method_manager(active_user):
with pytest.raises(MFAMethodDoesNotExistError):
MFAMethod.objects.get_primary_active_name(user_id=active_user.id)
@pytest.mark.django_db
def test_mfa_model(active_user_with_email_otp):
mfa_method = active_user_with_email_otp.mfa_methods.first()
assert "email" in str(mfa_method)
mfa_method.backup_codes = ["test1", "test2"]
assert mfa_method.backup_codes == ["test1", "test2"]
mfa_method.backup_codes = ""
@pytest.mark.django_db
def test_custom_validity_period(active_user_with_email_otp, settings):
ORIGINAL_VALIDITY_PERIOD = settings.TRENCH_AUTH["MFA_METHODS"]["email"][
"VALIDITY_PERIOD"
]
settings.TRENCH_AUTH["MFA_METHODS"]["email"]["VALIDITY_PERIOD"] = 3
mfa_method = active_user_with_email_otp.mfa_methods.first()
client = TrenchAPIClient()
response_first_step = client._first_factor_request(user=active_user_with_email_otp)
ephemeral_token = client._extract_ephemeral_token_from_response(
response=response_first_step
)
handler = get_mfa_handler(mfa_method=mfa_method)
code = handler.create_code()
sleep(3)
response_second_step = client._second_factor_request(
code=code, ephemeral_token=ephemeral_token
)
assert response_second_step.status_code == HTTP_401_UNAUTHORIZED
response_second_step = client._second_factor_request(
handler=handler, ephemeral_token=ephemeral_token
)
assert response_second_step.status_code == HTTP_200_OK
settings.TRENCH_AUTH["MFA_METHODS"]["email"][
"VALIDITY_PERIOD"
] = ORIGINAL_VALIDITY_PERIOD
@pytest.mark.django_db
def test_ephemeral_token_verification(active_user_with_email_otp):
mfa_method = active_user_with_email_otp.mfa_methods.first()
client = TrenchAPIClient()
response = client.authenticate_multi_factor(
mfa_method=mfa_method, user=active_user_with_email_otp
)
assert response.status_code == HTTP_200_OK
assert client.get_username_from_jwt(response=response) == getattr(
active_user_with_email_otp,
User.USERNAME_FIELD,
)
@pytest.mark.django_db
def test_wrong_second_step_verification_with_empty_code(active_user_with_email_otp):
client = TrenchAPIClient()
response_first_step = client._first_factor_request(user=active_user_with_email_otp)
ephemeral_token = client._extract_ephemeral_token_from_response(
response=response_first_step
)
response_second_step = client._second_factor_request(
code="", ephemeral_token=ephemeral_token
)
msg_error = "This field may not be blank."
assert response_second_step.status_code == HTTP_400_BAD_REQUEST
assert response_second_step.data.get("code")[0] == msg_error
@pytest.mark.django_db
def test_wrong_second_step_verification_with_wrong_code(active_user_with_email_otp):
client = TrenchAPIClient()
response_first_step = client._first_factor_request(user=active_user_with_email_otp)
ephemeral_token = client._extract_ephemeral_token_from_response(
response=response_first_step
)
response_second_step = client._second_factor_request(
code="invalid", ephemeral_token=ephemeral_token
)
assert response_second_step.status_code == HTTP_401_UNAUTHORIZED
assert response_second_step.data.get("error") == "Invalid or expired code."
@pytest.mark.django_db
def test_wrong_second_step_verification_with_ephemeral_token(
active_user_with_email_otp,
):
client = TrenchAPIClient()
mfa_method = active_user_with_email_otp.mfa_methods.first()
handler = get_mfa_handler(mfa_method=mfa_method)
response = client._second_factor_request(
code=handler.create_code(), ephemeral_token="invalid"
)
assert response.status_code == HTTP_401_UNAUTHORIZED
@pytest.mark.django_db
def test_second_method_activation(active_user_with_email_otp):
mfa_method = active_user_with_email_otp.mfa_methods.first()
client = TrenchAPIClient()
client.authenticate_multi_factor(
mfa_method=mfa_method, user=active_user_with_email_otp
)
assert len(active_user_with_email_otp.mfa_methods.all()) == 1
try:
client.post(
path="/auth/sms_twilio/activate/",
data={"phone_number": "555-555-555"},
format="json",
)
except TwilioException:
        # Twilio will raise an exception because the secret key used is invalid
pass
assert len(active_user_with_email_otp.mfa_methods.all()) == 2
@pytest.mark.django_db
def test_second_method_activation_already_active(active_user_with_email_otp):
mfa_method = active_user_with_email_otp.mfa_methods.first()
client = TrenchAPIClient()
client.authenticate_multi_factor(
mfa_method=mfa_method, user=active_user_with_email_otp
)
assert len(active_user_with_email_otp.mfa_methods.all()) == 1
response = client.post(
path="/auth/email/activate/",
format="json",
)
assert response.status_code == HTTP_400_BAD_REQUEST
assert response.data.get("error") == "MFA method already active."
@pytest.mark.django_db
def test_use_backup_code(active_user_with_encrypted_backup_codes):
client = TrenchAPIClient()
active_user, backup_codes = active_user_with_encrypted_backup_codes
response_first_step = client._first_factor_request(user=active_user)
ephemeral_token = client._extract_ephemeral_token_from_response(
response=response_first_step
)
response_second_step = client._second_factor_request(
code=backup_codes.pop(), ephemeral_token=ephemeral_token
)
assert response_second_step.status_code == HTTP_200_OK
@pytest.mark.django_db
def test_activation_otp(active_user):
client = TrenchAPIClient()
client.authenticate(user=active_user)
response = client.post(
path="/auth/email/activate/",
format="json",
)
assert response.status_code == HTTP_200_OK
@pytest.mark.django_db
def test_activation_otp_confirm_wrong(active_user):
client = TrenchAPIClient()
client.authenticate(user=active_user)
response = client.post(
path="/auth/email/activate/",
format="json",
)
assert response.status_code == HTTP_200_OK
response = client.post(
path="/auth/email/activate/confirm/",
data={"code": "test00"},
format="json",
)
assert response.status_code == HTTP_400_BAD_REQUEST
error_code = "code_invalid_or_expired"
assert response.data.get("code")[0].code == error_code
@pytest.mark.django_db
def test_confirm_activation_otp(active_user):
client = TrenchAPIClient()
client.authenticate(user=active_user)
# create new MFA method
client.post(
path="/auth/email/activate/",
format="json",
)
mfa_method = active_user.mfa_methods.first()
handler = get_mfa_handler(mfa_method=mfa_method)
# activate the newly created MFA method
response = client.post(
path="/auth/email/activate/confirm/",
data={"code": handler.create_code()},
format="json",
)
assert response.status_code == HTTP_200_OK
assert len(response.data.get("backup_codes")) == 8
mfa_method.delete()
assert active_user.mfa_methods.count() == 0
@pytest.mark.django_db
def test_deactivation_of_primary_method(active_user_with_email_otp):
client = TrenchAPIClient()
mfa_method = active_user_with_email_otp.mfa_methods.first()
handler = get_mfa_handler(mfa_method=mfa_method)
client.authenticate_multi_factor(
mfa_method=mfa_method, user=active_user_with_email_otp
)
response = client.post(
path="/auth/email/deactivate/",
data={"code": handler.create_code()},
format="json",
)
assert response.status_code == HTTP_400_BAD_REQUEST
@pytest.mark.django_db
def test_deactivation_of_secondary_method(active_user_with_many_otp_methods):
user, _ = active_user_with_many_otp_methods
client = TrenchAPIClient()
mfa_method = user.mfa_methods.filter(is_primary=False).first()
handler = get_mfa_handler(mfa_method=mfa_method)
client.authenticate_multi_factor(mfa_method=mfa_method, user=user)
response = client.post(
path=f"/auth/{mfa_method.name}/deactivate/",
data={"code": handler.create_code()},
format="json",
)
assert response.status_code == HTTP_204_NO_CONTENT
mfa_method.refresh_from_db()
assert not mfa_method.is_active
# revert changes
mfa_method.is_active = True
mfa_method.save()
@pytest.mark.django_db
def test_deactivation_of_disabled_method(
active_user_with_email_and_inactive_other_methods_otp,
):
user = active_user_with_email_and_inactive_other_methods_otp
client = TrenchAPIClient()
mfa_method = user.mfa_methods.first()
handler = get_mfa_handler(mfa_method=mfa_method)
client.authenticate_multi_factor(mfa_method=mfa_method, user=user)
response = client.post(
path="/auth/sms_twilio/deactivate/",
data={"code": handler.create_code()},
format="json",
)
assert response.status_code == HTTP_400_BAD_REQUEST
assert response.data.get("code")[0].code == "not_enabled"
@pytest.mark.django_db
def test_change_primary_method(active_user_with_many_otp_methods):
active_user, _ = active_user_with_many_otp_methods
client = TrenchAPIClient()
primary_mfa = active_user.mfa_methods.filter(is_primary=True).first()
handler = get_mfa_handler(mfa_method=primary_mfa)
client.authenticate_multi_factor(mfa_method=primary_mfa, user=active_user)
response = client.post(
path="/auth/mfa/change-primary-method/",
data={
"method": "sms_twilio",
"code": handler.create_code(),
},
format="json",
)
new_primary_method = active_user.mfa_methods.filter(
is_primary=True,
).first()
assert response.status_code == HTTP_204_NO_CONTENT
assert primary_mfa != new_primary_method
assert new_primary_method.name == "sms_twilio"
# revert changes
new_primary_method.is_primary = False
new_primary_method.save()
primary_mfa.is_primary = True
primary_mfa.save()
@pytest.mark.django_db
def test_change_primary_method_with_backup_code(
active_user_with_many_otp_methods,
):
active_user, backup_code = active_user_with_many_otp_methods
client = TrenchAPIClient()
primary_mfa_method = active_user.mfa_methods.filter(is_primary=True).first()
client.authenticate_multi_factor(mfa_method=primary_mfa_method, user=active_user)
response = client.post(
path="/auth/mfa/change-primary-method/",
data={
"method": "sms_twilio",
"code": backup_code,
},
format="json",
)
new_primary_method = active_user.mfa_methods.filter(
is_primary=True,
).first()
assert response.status_code == HTTP_204_NO_CONTENT
assert primary_mfa_method != new_primary_method
assert new_primary_method.name == "sms_twilio"
# revert changes
primary_mfa_method.is_primary = True
primary_mfa_method.save()
new_primary_method.is_primary = False
new_primary_method.save()
@pytest.mark.django_db
def test_change_primary_method_with_invalid_code(active_user_with_many_otp_methods):
active_user, _ = active_user_with_many_otp_methods
client = TrenchAPIClient()
primary_mfa_method = active_user.mfa_methods.filter(is_primary=True).first()
client.authenticate_multi_factor(mfa_method=primary_mfa_method, user=active_user)
response = client.post(
path="/auth/mfa/change-primary-method/",
data={
"method": "sms_twilio",
"code": "invalid",
},
format="json",
)
assert response.status_code == HTTP_400_BAD_REQUEST
assert response.data.get("code")[0].code == "code_invalid_or_expired"
@pytest.mark.django_db
def test_change_primary_method_to_inactive(active_user_with_email_otp):
client = TrenchAPIClient()
primary_mfa_method = active_user_with_email_otp.mfa_methods.filter(
is_primary=True
).first()
handler = get_mfa_handler(mfa_method=primary_mfa_method)
client.authenticate_multi_factor(
mfa_method=primary_mfa_method, user=active_user_with_email_otp
)
response = client.post(
path="/auth/mfa/change-primary-method/",
data={
"method": "sms_twilio",
"code": handler.create_code(),
},
format="json",
)
assert response.status_code == HTTP_400_BAD_REQUEST
assert response.data.get("error") == "Requested MFA method does not exist."
@pytest.mark.django_db
def test_confirm_activation_otp_with_backup_code(
active_user_with_encrypted_backup_codes,
):
active_user, backup_codes = active_user_with_encrypted_backup_codes
client = TrenchAPIClient()
response = client._first_factor_request(user=active_user)
ephemeral_token = client._extract_ephemeral_token_from_response(response=response)
response = client._second_factor_request(
ephemeral_token=ephemeral_token, code=backup_codes.pop()
)
assert response.status_code == HTTP_200_OK
client._update_jwt_from_response(response=response)
try:
client.post(
path="/auth/sms_twilio/activate/",
data={"phone_number": "555-555-555"},
format="json",
)
except (TwilioRestException, TwilioException):
        # Twilio raises this exception in tests, but the new mfa_method is
        # created anyway.
pass
backup_codes = regenerate_backup_codes_for_mfa_method_command(
user_id=active_user.id, name="sms_twilio"
)
response = client.post(
path="/auth/sms_twilio/activate/confirm/",
data={"code": backup_codes.pop()},
format="json",
)
assert response.status_code == HTTP_200_OK
assert len(response.data.get("backup_codes")) == 8
# revert changes
active_user.mfa_methods.filter(name="sms_twilio").delete()
@pytest.mark.django_db
def test_request_code_for_active_mfa_method(active_user_with_email_otp):
client = TrenchAPIClient()
mfa_method = active_user_with_email_otp.mfa_methods.first()
client.authenticate_multi_factor(
mfa_method=mfa_method, user=active_user_with_email_otp
)
response = client.post(
path="/auth/code/request/",
data={"method": "email"},
format="json",
)
expected_msg = "Email message with MFA code has been sent."
assert response.status_code == HTTP_200_OK
assert response.data.get("details") == expected_msg
@pytest.mark.django_db
def test_request_code_for_inactive_mfa_method(active_user_with_email_otp):
client = TrenchAPIClient()
mfa_method = active_user_with_email_otp.mfa_methods.first()
client.authenticate_multi_factor(
mfa_method=mfa_method, user=active_user_with_email_otp
)
response = client.post(
path="/auth/code/request/",
data={"method": "sms_twilio"},
format="json",
)
assert response.status_code == HTTP_400_BAD_REQUEST
assert response.data.get("error") == "Requested MFA method does not exist."
@pytest.mark.django_db
def test_request_code_for_invalid_mfa_method(active_user_with_email_otp):
client = TrenchAPIClient()
mfa_method = active_user_with_email_otp.mfa_methods.first()
client.authenticate_multi_factor(
mfa_method=mfa_method, user=active_user_with_email_otp
)
response = client.post(
path="/auth/code/request/",
data={"method": "invalid"},
format="json",
)
assert response.status_code == HTTP_400_BAD_REQUEST
@pytest.mark.django_db
def test_backup_codes_regeneration(active_user_with_encrypted_backup_codes):
active_user, _ = active_user_with_encrypted_backup_codes
client = TrenchAPIClient()
mfa_method = active_user.mfa_methods.first()
handler = get_mfa_handler(mfa_method=mfa_method)
client.authenticate_multi_factor(mfa_method=mfa_method, user=active_user)
old_backup_codes = active_user.mfa_methods.first().backup_codes
response = client.post(
path="/auth/email/codes/regenerate/",
data={
"code": handler.create_code(),
},
format="json",
)
new_backup_codes = active_user.mfa_methods.first().backup_codes
assert response.status_code == HTTP_200_OK
assert old_backup_codes != new_backup_codes
@pytest.mark.django_db
def test_backup_codes_regeneration_without_otp(active_user_with_encrypted_backup_codes):
active_user, _ = active_user_with_encrypted_backup_codes
client = TrenchAPIClient()
mfa_method = active_user.mfa_methods.first()
client.authenticate_multi_factor(mfa_method=mfa_method, user=active_user)
response = client.post(path="/auth/email/codes/regenerate/", format="json")
assert response.data.get("code")[0].code == "required"
assert response.status_code == HTTP_400_BAD_REQUEST
@pytest.mark.django_db
def test_backup_codes_regeneration_disabled_method(
active_user_with_many_otp_methods,
):
active_user, _ = active_user_with_many_otp_methods
client = TrenchAPIClient()
primary_method = active_user.mfa_methods.filter(is_primary=True).first()
handler = get_mfa_handler(mfa_method=primary_method)
client.authenticate_multi_factor(mfa_method=primary_method, user=active_user)
active_user.mfa_methods.filter(name="sms_twilio").update(is_active=False)
response = client.post(
path="/auth/sms_twilio/codes/regenerate/",
data={"code": handler.create_code()},
format="json",
)
assert response.status_code == HTTP_400_BAD_REQUEST
assert response.data.get("code")[0].code == "not_enabled"
# revert changes
active_user.mfa_methods.filter(name="sms_twilio").update(is_active=True)
@pytest.mark.django_db
def test_yubikey(active_user_with_yubi, offline_yubikey):
client = TrenchAPIClient()
yubikey_method = active_user_with_yubi.mfa_methods.first()
response = client.authenticate_multi_factor(
mfa_method=yubikey_method, user=active_user_with_yubi
)
assert response.status_code == HTTP_200_OK
@pytest.mark.django_db
def test_yubikey_exception(active_user_with_yubi, fake_yubikey):
client = TrenchAPIClient()
yubikey_method = active_user_with_yubi.mfa_methods.first()
response = client.authenticate_multi_factor(
mfa_method=yubikey_method, user=active_user_with_yubi
)
assert response.status_code == HTTP_401_UNAUTHORIZED
assert response.data.get("error") is not None
@pytest.mark.django_db
def test_confirm_yubikey_activation_with_backup_code(
active_user_with_encrypted_backup_codes,
):
active_user, backup_codes = active_user_with_encrypted_backup_codes
client = TrenchAPIClient()
response = client._first_factor_request(user=active_user)
ephemeral_token = client._extract_ephemeral_token_from_response(response=response)
response = client._second_factor_request(
ephemeral_token=ephemeral_token, code=backup_codes.pop()
)
client._update_jwt_from_response(response=response)
client.post(
path="/auth/yubi/activate/",
format="json",
)
response = client.post(
path="/auth/yubi/activate/confirm/",
data={
"code": backup_codes.pop(),
},
format="json",
)
assert response.status_code == HTTP_400_BAD_REQUEST
assert response.data.get("code") is not None
@pytest.mark.django_db
def test_get_mfa_config():
client = APIClient()
response = client.get(
path="/auth/mfa/config/",
format="json",
)
assert response.status_code == HTTP_200_OK
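

# Hedged helper sketch (not part of the original suite): several tests above
# repeat the same try/except dance around the fake Twilio credentials; a shared
# helper like this could centralize that pattern:
def _activate_sms_ignoring_twilio_errors(client):
    try:
        client.post(
            path="/auth/sms_twilio/activate/",
            data={"phone_number": "555-555-555"},
            format="json",
        )
    except (TwilioRestException, TwilioException):
        # Twilio raises here because the test credentials are invalid,
        # but the new MFA method is created anyway.
        pass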
| 34.665535 | 88 | 0.735625 | 2,627 | 20,418 | 5.295013 | 0.071565 | 0.078361 | 0.06844 | 0.056003 | 0.865492 | 0.845075 | 0.833285 | 0.775413 | 0.723796 | 0.661754 | 0 | 0.008446 | 0.170732 | 20,418 | 588 | 89 | 34.72449 | 0.813076 | 0.013713 | 0 | 0.6 | 0 | 0 | 0.075272 | 0.030208 | 0 | 0 | 0 | 0 | 0.119192 | 1 | 0.062626 | false | 0.00404 | 0.022222 | 0 | 0.084848 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b6ca3cac8cffbd4bcfda56ba32cbfa0681425794 | 45 | py | Python | lib/__init__.py | DigitalArtsNetworkMelbourne/hueforecast | b8eaefa35149eed68230a4e81771887da527d72f | [
"MIT"
] | null | null | null | lib/__init__.py | DigitalArtsNetworkMelbourne/hueforecast | b8eaefa35149eed68230a4e81771887da527d72f | [
"MIT"
] | null | null | null | lib/__init__.py | DigitalArtsNetworkMelbourne/hueforecast | b8eaefa35149eed68230a4e81771887da527d72f | [
"MIT"
] | null | null | null | from converter import ColorHelper, Converter
| 22.5 | 44 | 0.866667 | 5 | 45 | 7.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 45 | 1 | 45 | 45 | 0.975 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
8e051e0a233fc73ea8edb6f4a03e9f05d214d922 | 83 | py | Python | funtools/__init__.py | ayersb/funtools | 87e1ede044d2ffb95ea8d08f4d2cae0e89d3d3a8 | [
"MIT"
] | 1 | 2021-12-27T22:08:15.000Z | 2021-12-27T22:08:15.000Z | funtools/__init__.py | ayersb/funtools | 87e1ede044d2ffb95ea8d08f4d2cae0e89d3d3a8 | [
"MIT"
] | null | null | null | funtools/__init__.py | ayersb/funtools | 87e1ede044d2ffb95ea8d08f4d2cae0e89d3d3a8 | [
"MIT"
] | null | null | null | from .funwrap import *
from .funwrap import funwrap as fn
from .cache import cache
| 20.75 | 34 | 0.783133 | 13 | 83 | 5 | 0.461538 | 0.338462 | 0.523077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.168675 | 83 | 3 | 35 | 27.666667 | 0.942029 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
8e2196f29382c3aba69172ca7eb88fb66b95bb9c | 23,888 | py | Python | sdk/python/pulumi_digitalocean/custom_image.py | pulumi/pulumi-digitalocean | b924205ec8f66f5240a755c91aa8642162038dfb | [
"ECL-2.0",
"Apache-2.0"
] | 53 | 2019-04-25T14:43:12.000Z | 2022-03-14T15:51:44.000Z | sdk/python/pulumi_digitalocean/custom_image.py | pulumi/pulumi-digitalocean | b924205ec8f66f5240a755c91aa8642162038dfb | [
"ECL-2.0",
"Apache-2.0"
] | 158 | 2019-04-15T21:47:18.000Z | 2022-03-29T21:21:57.000Z | sdk/python/pulumi_digitalocean/custom_image.py | pulumi/pulumi-digitalocean | b924205ec8f66f5240a755c91aa8642162038dfb | [
"ECL-2.0",
"Apache-2.0"
] | 10 | 2019-04-15T20:16:11.000Z | 2021-05-28T19:08:32.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
__all__ = ['CustomImageArgs', 'CustomImage']
@pulumi.input_type
class CustomImageArgs:
def __init__(__self__, *,
regions: pulumi.Input[Sequence[pulumi.Input[str]]],
url: pulumi.Input[str],
description: Optional[pulumi.Input[str]] = None,
distribution: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None):
"""
The set of arguments for constructing a CustomImage resource.
:param pulumi.Input[Sequence[pulumi.Input[str]]] regions: A list of regions. (Currently only one is supported).
:param pulumi.Input[str] url: A URL from which the custom Linux virtual machine image may be retrieved.
:param pulumi.Input[str] description: An optional description for the image.
:param pulumi.Input[str] distribution: An optional distribution name for the image. Valid values are documented [here](https://docs.digitalocean.com/reference/api/api-reference/#operation/create_custom_image)
:param pulumi.Input[str] name: A name for the Custom Image.
:param pulumi.Input[Sequence[pulumi.Input[str]]] tags: A list of optional tags for the image.
"""
pulumi.set(__self__, "regions", regions)
pulumi.set(__self__, "url", url)
if description is not None:
pulumi.set(__self__, "description", description)
if distribution is not None:
pulumi.set(__self__, "distribution", distribution)
if name is not None:
pulumi.set(__self__, "name", name)
if tags is not None:
pulumi.set(__self__, "tags", tags)
@property
@pulumi.getter
def regions(self) -> pulumi.Input[Sequence[pulumi.Input[str]]]:
"""
A list of regions. (Currently only one is supported).
"""
return pulumi.get(self, "regions")
@regions.setter
def regions(self, value: pulumi.Input[Sequence[pulumi.Input[str]]]):
pulumi.set(self, "regions", value)
@property
@pulumi.getter
def url(self) -> pulumi.Input[str]:
"""
A URL from which the custom Linux virtual machine image may be retrieved.
"""
return pulumi.get(self, "url")
@url.setter
def url(self, value: pulumi.Input[str]):
pulumi.set(self, "url", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
An optional description for the image.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def distribution(self) -> Optional[pulumi.Input[str]]:
"""
An optional distribution name for the image. Valid values are documented [here](https://docs.digitalocean.com/reference/api/api-reference/#operation/create_custom_image)
"""
return pulumi.get(self, "distribution")
@distribution.setter
def distribution(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "distribution", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
A name for the Custom Image.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A list of optional tags for the image.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@pulumi.input_type
class _CustomImageState:
def __init__(__self__, *,
created_at: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
distribution: Optional[pulumi.Input[str]] = None,
image_id: Optional[pulumi.Input[int]] = None,
min_disk_size: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
public: Optional[pulumi.Input[bool]] = None,
regions: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
size_gigabytes: Optional[pulumi.Input[float]] = None,
slug: Optional[pulumi.Input[str]] = None,
status: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
type: Optional[pulumi.Input[str]] = None,
url: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering CustomImage resources.
:param pulumi.Input[str] description: An optional description for the image.
:param pulumi.Input[str] distribution: An optional distribution name for the image. Valid values are documented [here](https://docs.digitalocean.com/reference/api/api-reference/#operation/create_custom_image)
:param pulumi.Input[str] name: A name for the Custom Image.
:param pulumi.Input[Sequence[pulumi.Input[str]]] regions: A list of regions. (Currently only one is supported).
:param pulumi.Input[Sequence[pulumi.Input[str]]] tags: A list of optional tags for the image.
:param pulumi.Input[str] url: A URL from which the custom Linux virtual machine image may be retrieved.
"""
if created_at is not None:
pulumi.set(__self__, "created_at", created_at)
if description is not None:
pulumi.set(__self__, "description", description)
if distribution is not None:
pulumi.set(__self__, "distribution", distribution)
if image_id is not None:
pulumi.set(__self__, "image_id", image_id)
if min_disk_size is not None:
pulumi.set(__self__, "min_disk_size", min_disk_size)
if name is not None:
pulumi.set(__self__, "name", name)
if public is not None:
pulumi.set(__self__, "public", public)
if regions is not None:
pulumi.set(__self__, "regions", regions)
if size_gigabytes is not None:
pulumi.set(__self__, "size_gigabytes", size_gigabytes)
if slug is not None:
pulumi.set(__self__, "slug", slug)
if status is not None:
pulumi.set(__self__, "status", status)
if tags is not None:
pulumi.set(__self__, "tags", tags)
if type is not None:
pulumi.set(__self__, "type", type)
if url is not None:
pulumi.set(__self__, "url", url)
@property
@pulumi.getter(name="createdAt")
def created_at(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "created_at")
@created_at.setter
def created_at(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "created_at", value)
@property
@pulumi.getter
def description(self) -> Optional[pulumi.Input[str]]:
"""
An optional description for the image.
"""
return pulumi.get(self, "description")
@description.setter
def description(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "description", value)
@property
@pulumi.getter
def distribution(self) -> Optional[pulumi.Input[str]]:
"""
An optional distribution name for the image. Valid values are documented [here](https://docs.digitalocean.com/reference/api/api-reference/#operation/create_custom_image)
"""
return pulumi.get(self, "distribution")
@distribution.setter
def distribution(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "distribution", value)
@property
@pulumi.getter(name="imageId")
def image_id(self) -> Optional[pulumi.Input[int]]:
return pulumi.get(self, "image_id")
@image_id.setter
def image_id(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "image_id", value)
@property
@pulumi.getter(name="minDiskSize")
def min_disk_size(self) -> Optional[pulumi.Input[int]]:
return pulumi.get(self, "min_disk_size")
@min_disk_size.setter
def min_disk_size(self, value: Optional[pulumi.Input[int]]):
pulumi.set(self, "min_disk_size", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
A name for the Custom Image.
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter
def public(self) -> Optional[pulumi.Input[bool]]:
return pulumi.get(self, "public")
@public.setter
def public(self, value: Optional[pulumi.Input[bool]]):
pulumi.set(self, "public", value)
@property
@pulumi.getter
def regions(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A list of regions. (Currently only one is supported).
"""
return pulumi.get(self, "regions")
@regions.setter
def regions(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "regions", value)
@property
@pulumi.getter(name="sizeGigabytes")
def size_gigabytes(self) -> Optional[pulumi.Input[float]]:
return pulumi.get(self, "size_gigabytes")
@size_gigabytes.setter
def size_gigabytes(self, value: Optional[pulumi.Input[float]]):
pulumi.set(self, "size_gigabytes", value)
@property
@pulumi.getter
def slug(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "slug")
@slug.setter
def slug(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "slug", value)
@property
@pulumi.getter
def status(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "status")
@status.setter
def status(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "status", value)
@property
@pulumi.getter
def tags(self) -> Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]:
"""
A list of optional tags for the image.
"""
return pulumi.get(self, "tags")
@tags.setter
def tags(self, value: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]]):
pulumi.set(self, "tags", value)
@property
@pulumi.getter
def type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "type")
@type.setter
def type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def url(self) -> Optional[pulumi.Input[str]]:
"""
A URL from which the custom Linux virtual machine image may be retrieved.
"""
return pulumi.get(self, "url")
@url.setter
def url(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "url", value)
class CustomImage(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
description: Optional[pulumi.Input[str]] = None,
distribution: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
regions: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
url: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Provides a resource which can be used to create a [custom image](https://www.digitalocean.com/docs/images/custom-images/)
from a URL. The URL must point to an image in one of the following file formats:
- Raw (.img) with an MBR or GPT partition table
- qcow2
- VHDX
- VDI
- VMDK
The image may be compressed using gzip or bzip2. See the DigitalOcean Custom
Image documentation for [additional requirements](https://www.digitalocean.com/docs/images/custom-images/#image-requirements).
## Example Usage
```python
import pulumi
import pulumi_digitalocean as digitalocean
flatcar = digitalocean.CustomImage("flatcar",
url="https://stable.release.flatcar-linux.net/amd64-usr/2605.7.0/flatcar_production_digitalocean_image.bin.bz2",
regions=["nyc3"])
example = digitalocean.Droplet("example",
image=flatcar.id,
region="nyc3",
size="s-1vcpu-1gb",
ssh_keys=["12345"])
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] description: An optional description for the image.
:param pulumi.Input[str] distribution: An optional distribution name for the image. Valid values are documented [here](https://docs.digitalocean.com/reference/api/api-reference/#operation/create_custom_image)
:param pulumi.Input[str] name: A name for the Custom Image.
:param pulumi.Input[Sequence[pulumi.Input[str]]] regions: A list of regions. (Currently only one is supported).
:param pulumi.Input[Sequence[pulumi.Input[str]]] tags: A list of optional tags for the image.
:param pulumi.Input[str] url: A URL from which the custom Linux virtual machine image may be retrieved.
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: CustomImageArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides a resource which can be used to create a [custom image](https://www.digitalocean.com/docs/images/custom-images/)
from a URL. The URL must point to an image in one of the following file formats:
- Raw (.img) with an MBR or GPT partition table
- qcow2
- VHDX
- VDI
- VMDK
The image may be compressed using gzip or bzip2. See the DigitalOcean Custom
Image documentation for [additional requirements](https://www.digitalocean.com/docs/images/custom-images/#image-requirements).
## Example Usage
```python
import pulumi
import pulumi_digitalocean as digitalocean
flatcar = digitalocean.CustomImage("flatcar",
url="https://stable.release.flatcar-linux.net/amd64-usr/2605.7.0/flatcar_production_digitalocean_image.bin.bz2",
regions=["nyc3"])
example = digitalocean.Droplet("example",
image=flatcar.id,
region="nyc3",
size="s-1vcpu-1gb",
ssh_keys=["12345"])
```
:param str resource_name: The name of the resource.
:param CustomImageArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(CustomImageArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
description: Optional[pulumi.Input[str]] = None,
distribution: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
regions: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
url: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = CustomImageArgs.__new__(CustomImageArgs)
__props__.__dict__["description"] = description
__props__.__dict__["distribution"] = distribution
__props__.__dict__["name"] = name
if regions is None and not opts.urn:
raise TypeError("Missing required property 'regions'")
__props__.__dict__["regions"] = regions
__props__.__dict__["tags"] = tags
if url is None and not opts.urn:
raise TypeError("Missing required property 'url'")
__props__.__dict__["url"] = url
__props__.__dict__["created_at"] = None
__props__.__dict__["image_id"] = None
__props__.__dict__["min_disk_size"] = None
__props__.__dict__["public"] = None
__props__.__dict__["size_gigabytes"] = None
__props__.__dict__["slug"] = None
__props__.__dict__["status"] = None
__props__.__dict__["type"] = None
super(CustomImage, __self__).__init__(
'digitalocean:index/customImage:CustomImage',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
created_at: Optional[pulumi.Input[str]] = None,
description: Optional[pulumi.Input[str]] = None,
distribution: Optional[pulumi.Input[str]] = None,
image_id: Optional[pulumi.Input[int]] = None,
min_disk_size: Optional[pulumi.Input[int]] = None,
name: Optional[pulumi.Input[str]] = None,
public: Optional[pulumi.Input[bool]] = None,
regions: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
size_gigabytes: Optional[pulumi.Input[float]] = None,
slug: Optional[pulumi.Input[str]] = None,
status: Optional[pulumi.Input[str]] = None,
tags: Optional[pulumi.Input[Sequence[pulumi.Input[str]]]] = None,
type: Optional[pulumi.Input[str]] = None,
url: Optional[pulumi.Input[str]] = None) -> 'CustomImage':
"""
Get an existing CustomImage resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] description: An optional description for the image.
:param pulumi.Input[str] distribution: An optional distribution name for the image. Valid values are documented [here](https://docs.digitalocean.com/reference/api/api-reference/#operation/create_custom_image)
:param pulumi.Input[str] name: A name for the Custom Image.
:param pulumi.Input[Sequence[pulumi.Input[str]]] regions: A list of regions. (Currently only one is supported).
:param pulumi.Input[Sequence[pulumi.Input[str]]] tags: A list of optional tags for the image.
:param pulumi.Input[str] url: A URL from which the custom Linux virtual machine image may be retrieved.
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _CustomImageState.__new__(_CustomImageState)
__props__.__dict__["created_at"] = created_at
__props__.__dict__["description"] = description
__props__.__dict__["distribution"] = distribution
__props__.__dict__["image_id"] = image_id
__props__.__dict__["min_disk_size"] = min_disk_size
__props__.__dict__["name"] = name
__props__.__dict__["public"] = public
__props__.__dict__["regions"] = regions
__props__.__dict__["size_gigabytes"] = size_gigabytes
__props__.__dict__["slug"] = slug
__props__.__dict__["status"] = status
__props__.__dict__["tags"] = tags
__props__.__dict__["type"] = type
__props__.__dict__["url"] = url
return CustomImage(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter(name="createdAt")
def created_at(self) -> pulumi.Output[str]:
return pulumi.get(self, "created_at")
@property
@pulumi.getter
def description(self) -> pulumi.Output[Optional[str]]:
"""
An optional description for the image.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter
def distribution(self) -> pulumi.Output[Optional[str]]:
"""
An optional distribution name for the image. Valid values are documented [here](https://docs.digitalocean.com/reference/api/api-reference/#operation/create_custom_image)
"""
return pulumi.get(self, "distribution")
@property
@pulumi.getter(name="imageId")
def image_id(self) -> pulumi.Output[int]:
return pulumi.get(self, "image_id")
@property
@pulumi.getter(name="minDiskSize")
def min_disk_size(self) -> pulumi.Output[int]:
return pulumi.get(self, "min_disk_size")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
A name for the Custom Image.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def public(self) -> pulumi.Output[bool]:
return pulumi.get(self, "public")
@property
@pulumi.getter
def regions(self) -> pulumi.Output[Sequence[str]]:
"""
A list of regions. (Currently only one is supported).
"""
return pulumi.get(self, "regions")
@property
@pulumi.getter(name="sizeGigabytes")
def size_gigabytes(self) -> pulumi.Output[float]:
return pulumi.get(self, "size_gigabytes")
@property
@pulumi.getter
def slug(self) -> pulumi.Output[str]:
return pulumi.get(self, "slug")
@property
@pulumi.getter
def status(self) -> pulumi.Output[str]:
return pulumi.get(self, "status")
@property
@pulumi.getter
def tags(self) -> pulumi.Output[Optional[Sequence[str]]]:
"""
A list of optional tags for the image.
"""
return pulumi.get(self, "tags")
@property
@pulumi.getter
def type(self) -> pulumi.Output[str]:
return pulumi.get(self, "type")
@property
@pulumi.getter
def url(self) -> pulumi.Output[str]:
"""
A URL from which the custom Linux virtual machine image may be retrieved.
"""
return pulumi.get(self, "url")
| 40.0134 | 216 | 0.629061 | 2,800 | 23,888 | 5.171071 | 0.077857 | 0.106361 | 0.092824 | 0.074453 | 0.835348 | 0.801713 | 0.743007 | 0.713654 | 0.69107 | 0.634505 | 0 | 0.002301 | 0.254228 | 23,888 | 596 | 217 | 40.080537 | 0.810441 | 0.276122 | 0 | 0.631436 | 1 | 0 | 0.070611 | 0.002599 | 0 | 0 | 0 | 0 | 0 | 1 | 0.165312 | false | 0.00271 | 0.01355 | 0.04336 | 0.281843 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6d4dbbd676192747c7073fa494bcadb9c0ee4c2c | 3,017 | py | Python | tests/test_vertex_array_index.py | asnt/moderngl | b39cedd8cf216c34e43371b4aec822f6084f0f79 | [
"MIT"
] | 916 | 2019-03-11T19:15:20.000Z | 2022-03-31T19:22:16.000Z | tests/test_vertex_array_index.py | asnt/moderngl | b39cedd8cf216c34e43371b4aec822f6084f0f79 | [
"MIT"
] | 218 | 2019-03-11T06:05:52.000Z | 2022-03-30T16:59:22.000Z | tests/test_vertex_array_index.py | asnt/moderngl | b39cedd8cf216c34e43371b4aec822f6084f0f79 | [
"MIT"
] | 110 | 2019-04-06T18:32:24.000Z | 2022-03-21T20:30:47.000Z | import unittest
import moderngl
import numpy as np
from common import get_context
class TestCase(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.ctx = get_context()
def test_1(self):
prog = self.ctx.program(
vertex_shader='''
#version 330
in vec4 in_vert;
out vec4 out_vert;
void main() {
out_vert = in_vert;
}
''',
varyings=['out_vert']
)
vertices = [
4.0, 2.0, 7.5, 1.8,
3.0, 2.8, 6.0, 10.0
]
count = 10
indices = [0, 1] * 10
# UNSIGNED_INT index
vbo1 = self.ctx.buffer(np.array(vertices, dtype='f4').tobytes())
vbo2 = self.ctx.buffer(reserve=vbo1.size * count)
index = self.ctx.buffer(np.array(indices, dtype='u4').tobytes())
vao = self.ctx.simple_vertex_array(prog, vbo1, 'in_vert', index_buffer=index, index_element_size=4)
vao.transform(vbo2, moderngl.POINTS)
res = np.frombuffer(vbo2.read(), dtype='f4')
np.testing.assert_almost_equal(res, vertices * count)
# UNSIGNED_SHORT index
vbo1 = self.ctx.buffer(np.array(vertices, dtype='f4').tobytes())
vbo2 = self.ctx.buffer(reserve=vbo1.size * count)
index = self.ctx.buffer(np.array(indices, dtype='u2').tobytes())
vao = self.ctx.simple_vertex_array(prog, vbo1, 'in_vert', index_buffer=index, index_element_size=2)
vao.transform(vbo2, moderngl.POINTS)
res = np.frombuffer(vbo2.read(), dtype='f4')
np.testing.assert_almost_equal(res, vertices * count)
# UNSIGNED_BYTE index
vbo1 = self.ctx.buffer(np.array(vertices, dtype='f4').tobytes())
vbo2 = self.ctx.buffer(reserve=vbo1.size * count)
index = self.ctx.buffer(np.array(indices, dtype='u1').tobytes())
vao = self.ctx.simple_vertex_array(prog, vbo1, 'in_vert', index_buffer=index, index_element_size=1)
vao.transform(vbo2, moderngl.POINTS)
res = np.frombuffer(vbo2.read(), dtype='f4')
np.testing.assert_almost_equal(res, vertices * count)
def test_2(self):
prog = self.ctx.program(
vertex_shader='''
#version 330
in vec4 in_vert;
out vec4 out_vert;
void main() {
out_vert = in_vert;
}
''',
varyings=['out_vert']
)
vertices = [
4.0, 2.0, 7.5, 1.8,
3.0, 2.8, 6.0, 10.0
]
indices = [0, 1, 0, 1, 0, 1, 0, 1, 0]
vbo1 = self.ctx.buffer(np.array(vertices, dtype='f4').tobytes())
index_u4 = self.ctx.buffer(np.array(indices, dtype='u4').tobytes())
with self.assertRaises(moderngl.Error):
self.ctx.simple_vertex_array(prog, vbo1, 'in_vert', index_buffer=index_u4, index_element_size=0)
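
    def test_index_element_sizes_sketch(self):
        # Hedged consolidation sketch (not part of the original file): loops
        # over the same (dtype, index_element_size) pairs that test_1 checks
        # one block at a time.
        prog = self.ctx.program(
            vertex_shader='''
                #version 330
                in vec4 in_vert;
                out vec4 out_vert;
                void main() {
                    out_vert = in_vert;
                }
            ''',
            varyings=['out_vert']
        )
        vertices = [4.0, 2.0, 7.5, 1.8, 3.0, 2.8, 6.0, 10.0]
        count = 10
        indices = [0, 1] * count
        vbo1 = self.ctx.buffer(np.array(vertices, dtype='f4').tobytes())
        for dtype, size in [('u4', 4), ('u2', 2), ('u1', 1)]:
            vbo2 = self.ctx.buffer(reserve=vbo1.size * count)
            index = self.ctx.buffer(np.array(indices, dtype=dtype).tobytes())
            vao = self.ctx.simple_vertex_array(
                prog, vbo1, 'in_vert', index_buffer=index, index_element_size=size)
            vao.transform(vbo2, moderngl.POINTS)
            res = np.frombuffer(vbo2.read(), dtype='f4')
            np.testing.assert_almost_equal(res, vertices * count)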
if __name__ == '__main__':
unittest.main()
| 32.095745 | 108 | 0.563474 | 385 | 3,017 | 4.262338 | 0.205195 | 0.072517 | 0.087142 | 0.073126 | 0.823278 | 0.823278 | 0.823278 | 0.817794 | 0.817794 | 0.79159 | 0 | 0.046622 | 0.303281 | 3,017 | 93 | 109 | 32.44086 | 0.734063 | 0.019556 | 0 | 0.571429 | 0 | 0 | 0.16046 | 0 | 0 | 0 | 0 | 0 | 0.057143 | 1 | 0.042857 | false | 0 | 0.057143 | 0 | 0.114286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ede8fa7bb7529f536a7477eefda8736a6045d38d | 130 | py | Python | app/models/__init__.py | spark8103/dlop2 | 7f35ccb603af97c2d344a9db86f5fa33a8f73c8f | [
"Apache-2.0"
] | null | null | null | app/models/__init__.py | spark8103/dlop2 | 7f35ccb603af97c2d344a9db86f5fa33a8f73c8f | [
"Apache-2.0"
] | 1 | 2017-07-22T21:22:24.000Z | 2017-07-22T21:22:24.000Z | app/models/__init__.py | spark8103/dlop2 | 7f35ccb603af97c2d344a9db86f5fa33a8f73c8f | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
# flake8: noqa
from ._base import *
from .asset_model import *
from flask_sqlalchemy import models_committed  # the flask.ext namespace was removed in Flask 1.0
| 18.571429 | 49 | 0.769231 | 19 | 130 | 5.105263 | 0.789474 | 0.206186 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018018 | 0.146154 | 130 | 6 | 50 | 21.666667 | 0.855856 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6104b51582993a5473f7f3218eb3f2bd988dbe65 | 30 | py | Python | src/microblog/microblog.py | TR33HGR/python-rest-server | 01c1ba5bad69bb4f7c6a71baf7a067b1e85da78f | [
"MIT"
] | null | null | null | src/microblog/microblog.py | TR33HGR/python-rest-server | 01c1ba5bad69bb4f7c6a71baf7a067b1e85da78f | [
"MIT"
] | null | null | null | src/microblog/microblog.py | TR33HGR/python-rest-server | 01c1ba5bad69bb4f7c6a71baf7a067b1e85da78f | [
"MIT"
] | null | null | null | from microblog.app import app
| 15 | 29 | 0.833333 | 5 | 30 | 5 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b61f2560ecf16cf44910d88b3de161388d512470 | 275 | py | Python | src/ebonite/build/provider/__init__.py | koskotG/ebonite | 9f9ae016b70fb24865d5edc99142afb8ab4ddc59 | [
"Apache-2.0"
] | 1 | 2019-11-27T14:33:45.000Z | 2019-11-27T14:33:45.000Z | src/ebonite/build/provider/__init__.py | geffy/ebonite | 2d85eeca44ac1799e743bafe333887712e325060 | [
"Apache-2.0"
] | null | null | null | src/ebonite/build/provider/__init__.py | geffy/ebonite | 2d85eeca44ac1799e743bafe333887712e325060 | [
"Apache-2.0"
] | null | null | null | from .base import LOADER_ENV, ProviderBase, PythonProvider, SERVER_ENV
from .ml_model import MLModelProvider
from .ml_model_multi import MLModelMultiProvider
__all__ = ['LOADER_ENV', 'ProviderBase', 'PythonProvider', 'SERVER_ENV', 'MLModelProvider', 'MLModelMultiProvider']
| 45.833333 | 115 | 0.818182 | 29 | 275 | 7.37931 | 0.482759 | 0.084112 | 0.196262 | 0.327103 | 0.411215 | 0.411215 | 0 | 0 | 0 | 0 | 0 | 0 | 0.087273 | 275 | 5 | 116 | 55 | 0.85259 | 0 | 0 | 0 | 0 | 0 | 0.294545 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b65247413de3fe5d6a185434dfdf782965828d0c | 741 | py | Python | String/383. Ransom Note.py | beckswu/Leetcode | 480e8dc276b1f65961166d66efa5497d7ff0bdfd | [
"MIT"
] | 138 | 2020-02-08T05:25:26.000Z | 2021-11-04T11:59:28.000Z | String/383. Ransom Note.py | beckswu/Leetcode | 480e8dc276b1f65961166d66efa5497d7ff0bdfd | [
"MIT"
] | null | null | null | String/383. Ransom Note.py | beckswu/Leetcode | 480e8dc276b1f65961166d66efa5497d7ff0bdfd | [
"MIT"
] | 24 | 2021-01-02T07:18:43.000Z | 2022-03-20T08:17:54.000Z | """
383. Ransom Note
canConstruct("a", "b") -> false
canConstruct("aa", "ab") -> false
canConstruct("aa", "aab") -> true
"""
class Solution:
def canConstruct(self, ransomNote, magazine):
"""
:type ransomNote: str
:type magazine: str
:rtype: bool
"""
return collections.Counter(ransomNote) & collections.Counter(magazine) == collections.Counter(ransomNote)
class Solution:
def canConstruct(self, ransomNote, magazine):
return collections.Counter(ransomNote) - collections.Counter(magazine) == {}
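
# Hedged sanity harness for the variants in this file, mirroring the examples
# in the module docstring (illustrative; not part of the original solution):
def _check(solution_cls):
    s = solution_cls()
    assert not s.canConstruct("a", "b")
    assert not s.canConstruct("aa", "ab")
    assert s.canConstruct("aa", "aab")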
class Solution:
def canConstruct(self, ransomNote, magazine):
return not collections.Counter(ransomNote) - collections.Counter(magazine) | 28.5 | 116 | 0.642375 | 69 | 741 | 6.898551 | 0.362319 | 0.264706 | 0.235294 | 0.176471 | 0.693277 | 0.693277 | 0.579832 | 0.235294 | 0 | 0 | 0 | 0.005263 | 0.230769 | 741 | 26 | 117 | 28.5 | 0.829825 | 0.232119 | 0 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.222222 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
b675db24d8cd3362186c46194e3ae8a666f57085 | 228 | py | Python | capslayer/decoders/__init__.py | moejoe95/CapsLayer | be191f1adc9c0906a3f4e4b6bd78ccac29329dc0 | [
"Apache-2.0"
] | null | null | null | capslayer/decoders/__init__.py | moejoe95/CapsLayer | be191f1adc9c0906a3f4e4b6bd78ccac29329dc0 | [
"Apache-2.0"
] | null | null | null | capslayer/decoders/__init__.py | moejoe95/CapsLayer | be191f1adc9c0906a3f4e4b6bd78ccac29329dc0 | [
"Apache-2.0"
] | null | null | null | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from capslayer.decoders.deconv_decoder import DeconvDecoderNet
from capslayer.decoders.fc_decoder import FCDecoderNet
| 32.571429 | 62 | 0.890351 | 28 | 228 | 6.678571 | 0.5 | 0.160428 | 0.256684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092105 | 228 | 6 | 63 | 38 | 0.903382 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.2 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b69141b398b1eb3de443d1fc6ce2cd6a3bdd1117 | 117 | py | Python | Pycharm_Project/0415/modle/cal.py | duanjiefei/Python-Study | 88e17a3eab9112a2515f09b2bcf4e032059cc28b | [
"Apache-2.0"
] | null | null | null | Pycharm_Project/0415/modle/cal.py | duanjiefei/Python-Study | 88e17a3eab9112a2515f09b2bcf4e032059cc28b | [
"Apache-2.0"
] | null | null | null | Pycharm_Project/0415/modle/cal.py | duanjiefei/Python-Study | 88e17a3eab9112a2515f09b2bcf4e032059cc28b | [
"Apache-2.0"
] | null | null | null | def add(x,y):
return x+y
def sub(x,y):
return x - y
if __name__ == "__main__":# 方便测试
print("方便测试") | 14.625 | 33 | 0.538462 | 20 | 117 | 2.75 | 0.55 | 0.145455 | 0.290909 | 0.327273 | 0.363636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.290598 | 117 | 8 | 34 | 14.625 | 0.662651 | 0.034188 | 0 | 0 | 0 | 0 | 0.108108 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 0.666667 | 0.166667 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
1e2db4d11eb62d070a9ca0e975d3cc98b9597990 | 49 | py | Python | Lib/test/test_compiler/testcorpus/54_list_comp_recur_func.py | diogommartins/cinder | 79103e9119cbecef3b085ccf2878f00c26e1d175 | [
"CNRI-Python-GPL-Compatible"
] | 1,886 | 2021-05-03T23:58:43.000Z | 2022-03-31T19:15:58.000Z | Lib/test/test_compiler/testcorpus/54_list_comp_recur_func.py | diogommartins/cinder | 79103e9119cbecef3b085ccf2878f00c26e1d175 | [
"CNRI-Python-GPL-Compatible"
] | 70 | 2021-05-04T23:25:35.000Z | 2022-03-31T18:42:08.000Z | Lib/test/test_compiler/testcorpus/54_list_comp_recur_func.py | diogommartins/cinder | 79103e9119cbecef3b085ccf2878f00c26e1d175 | [
"CNRI-Python-GPL-Compatible"
] | 52 | 2021-05-04T21:26:03.000Z | 2022-03-08T18:02:56.000Z | def recur1(a):
return [recur1(b) for b in a]
| 16.333333 | 33 | 0.612245 | 10 | 49 | 3 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.054054 | 0.244898 | 49 | 2 | 34 | 24.5 | 0.756757 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
1e3f57429109af83552dc6cd36075551cd825cea | 11,102 | py | Python | test/test_user_message_api.py | ReutersMedia/sqs-browser-events | 6be8de94fa65efb973a5bce87fee6243dea8d0b9 | [
"MIT"
] | 63 | 2017-03-31T01:30:04.000Z | 2021-05-05T11:46:14.000Z | test/test_user_message_api.py | ReutersMedia/sqs-browser-events | 6be8de94fa65efb973a5bce87fee6243dea8d0b9 | [
"MIT"
] | 3 | 2017-06-02T18:40:43.000Z | 2017-09-05T00:50:24.000Z | test/test_user_message_api.py | ReutersMedia/sqs-browser-events | 6be8de94fa65efb973a5bce87fee6243dea8d0b9 | [
"MIT"
] | 3 | 2017-04-14T15:47:26.000Z | 2020-07-13T08:34:36.000Z | from __future__ import print_function
import sys
import os
import pprint
import urllib
import random
import uuid
import time
import json
import boto3
import unittest
import testenv
import requests
from requests_aws4auth import AWS4Auth
from common import get_msgs
class TestUserMessageApi(unittest.TestCase):
def setUp(self):
self.props = testenv.setup()
auth = AWS4Auth(os.getenv('AWS_ACCESS_KEY_ID'),
os.getenv('AWS_SECRET_ACCESS_KEY'),
self.props['REGION'],
'execute-api')
def call_gateway(path,params=None,data=None):
url = self.props['GATEWAY_URL'] + path
if data is None:
print("calling {0} : {1!r}".format(url,params))
resp = requests.get(url,params=params,auth=auth)
else:
print("calling POST {0}".format(url))
resp = requests.post(url,json=data)
d = json.loads(resp.text)
return d
self.call_gw = call_gateway
def test_create_and_query(self):
ac_id1 = random.randint(10000000,50000000)
ac_id2 = random.randint(50000001,80000000)
session1 = str(uuid.uuid1())
session2 = str(uuid.uuid1())
user_id1 = random.randint(80000001,90000000)
user_id2 = user_id1 + 1
r = self.call_gw('/create/{0}/{1}/{2}'.format(ac_id1,user_id1,session1))
r = self.call_gw('/create/{0}/{1}/{2}'.format(ac_id2,user_id2,session2))
time.sleep(1)
self.call_gw('/notify/user/{0}'.format(user_id1),{'msg':'test1','_type':'type1'})
self.call_gw('/notify/user/{0}'.format(user_id1),{'msg':'test1','_type':'type2'})
self.call_gw('/notify/user/{0}'.format(user_id1),{'msg':'test1','_type':'type3'})
self.call_gw('/notify/user/{0}'.format(user_id1),{'msg':'test2'})
self.call_gw('/notify/user/{0}'.format(user_id2),{'msg':'test3'})
self.call_gw('/notify/user/{0}'.format(user_id2),{'msg':'test4','_type':'type1'})
time.sleep(1)
self.call_gw('/notify/user/{0}'.format(user_id1),{'msg':'test5'})
self.call_gw('/notify/user/{0}'.format(user_id1),{'msg':'test6'})
time.sleep(10)
r = self.call_gw('/messages/user/{0}'.format(user_id1))
self.assertEqual(len(r['messages']),6)
r = self.call_gw('/messages/user/{0}'.format(user_id2))
self.assertEqual(len(r['messages']),2)
r = self.call_gw('/messages/user/{0}?_type=type1,type2'.format(user_id1))
self.assertEqual(len(r['messages']),2)
def test_set_read_status_post(self):
ac_id = random.randint(10000000,50000000)
session = str(uuid.uuid1())
user_id = random.randint(80000001,90000000)
r = self.call_gw('/create/{0}/{1}/{2}'.format(ac_id,user_id,session))
time.sleep(1)
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test1'})
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test2'})
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test3'})
time.sleep(5)
r = self.call_gw('/messages/user/{0}'.format(user_id))
for m in r['messages']:
self.assertEqual(m['is_read'],0)
# set read receipt
r = self.call_gw('/messages/set-read/user/{0}'.format(user_id),data=[m['messageId'] for m in r['messages']])
time.sleep(0.5)
r = self.call_gw('/messages/user/{0}'.format(user_id))
for m in r['messages']:
self.assertEqual(m['is_read'],1)
def test_set_read_status_post_async(self):
ac_id = random.randint(10000000,50000000)
session = str(uuid.uuid1())
user_id = random.randint(80000001,90000000)
r = self.call_gw('/create/{0}/{1}/{2}'.format(ac_id,user_id,session))
time.sleep(1)
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test1'})
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test2'})
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test3'})
time.sleep(5)
r = self.call_gw('/messages/user/{0}'.format(user_id))
for m in r['messages']:
self.assertEqual(m['is_read'],0)
# set read receipt
r = self.call_gw('/messages/set-read/user/{0}?async=1'.format(user_id),data=[m['messageId'] for m in r['messages']])
time.sleep(5)
r = self.call_gw('/messages/user/{0}'.format(user_id))
for m in r['messages']:
self.assertEqual(m['is_read'],1)
def test_sqs_only_flag(self):
ac_id = random.randint(10000000,50000000)
session = str(uuid.uuid1())
user_id = random.randint(80000001,90000000)
r = self.call_gw('/create/{0}/{1}/{2}'.format(ac_id,user_id,session))
time.sleep(1)
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test1','_sqs_only':1})
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test2','_sqs_only':1})
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test3'})
time.sleep(5)
r = self.call_gw('/messages/user/{0}'.format(user_id))
msgs = [x['msg'] for x in r['messages']]
self.assertEqual(msgs, ['test3'])
def test_set_read_status_asof(self):
ac_id = random.randint(10000000,50000000)
session = str(uuid.uuid1())
user_id = random.randint(80000001,90000000)
r = self.call_gw('/create/{0}/{1}/{2}'.format(ac_id,user_id,session))
time.sleep(1)
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test1'})
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test2'})
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test3'})
time.sleep(5)
r = self.call_gw('/messages/user/{0}'.format(user_id))
for m in r['messages']:
self.assertEqual(m['is_read'],0)
# set read receipt
r = self.call_gw('/messages/set-read/user/{0}/asof/{1}'.format(user_id,time.time()))
time.sleep(0.5)
r = self.call_gw('/messages/user/{0}'.format(user_id))
for m in r['messages']:
self.assertEqual(m['is_read'],1)
def test_read_receipt_msgs_async(self):
ac_id = random.randint(10000000,50000000)
session = str(uuid.uuid1())
user_id = random.randint(80000001,90000000)
r = self.call_gw('/create/{0}/{1}/{2}'.format(ac_id,user_id,session))
s = r['session']
time.sleep(1)
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test1'})
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test2'})
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test3'})
time.sleep(5)
r = self.call_gw('/messages/user/{0}'.format(user_id))
for m in r['messages']:
self.assertEqual(m['is_read'],0)
# set read receipt
r = self.call_gw('/messages/set-read/user/{0}/asof/{1}?async=1'.format(user_id,time.time()))
time.sleep(5)
r = self.call_gw('/messages/user/{0}'.format(user_id))
for m in r['messages']:
self.assertEqual(m['is_read'],1)
        # get SQS messages; a read-receipt message should be present with all of the message IDs
time.sleep(10)
msgs = get_msgs(s,raw=True)
# filter for message-read-receipt type
msgs = [x for x in msgs if x.get('_type')=='message-read-receipt']
msg_ids = sorted([x['messageId'] for x in r['messages']])
self.assertEqual(sorted(msgs[0]['messages-receipted']),msg_ids)
def test_read_receipt_msgs(self):
ac_id = random.randint(10000000,50000000)
session = str(uuid.uuid1())
user_id = random.randint(80000001,90000000)
r = self.call_gw('/create/{0}/{1}/{2}'.format(ac_id,user_id,session))
s = r['session']
time.sleep(1)
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test1'})
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test2'})
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test3'})
time.sleep(5)
r = self.call_gw('/messages/user/{0}'.format(user_id))
for m in r['messages']:
self.assertEqual(m['is_read'],0)
# set read receipt
r = self.call_gw('/messages/set-read/user/{0}/asof/{1}'.format(user_id,time.time()))
time.sleep(0.5)
r = self.call_gw('/messages/user/{0}'.format(user_id))
for m in r['messages']:
self.assertEqual(m['is_read'],1)
        # get SQS messages; a read-receipt message should be present with all of the message IDs
time.sleep(10)
msgs = get_msgs(s,raw=True)
# filter for message-read-receipt type
msgs = [x for x in msgs if x.get('_type')=='message-read-receipt']
msg_ids = sorted([x['messageId'] for x in r['messages']])
self.assertEqual(sorted(msgs[0]['messages-receipted']),msg_ids)
def test_set_read_status(self):
ac_id = random.randint(10000000,50000000)
session = str(uuid.uuid1())
user_id = random.randint(80000001,90000000)
r = self.call_gw('/create/{0}/{1}/{2}'.format(ac_id,user_id,session))
time.sleep(1)
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test1'})
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test2'})
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test3'})
time.sleep(5)
r = self.call_gw('/messages/user/{0}'.format(user_id))
for m in r['messages']:
self.assertEqual(m['is_read'],0)
# set read receipt
msg_list = ','.join([m['messageId'] for m in r['messages']])
r = self.call_gw('/messages/set-read/user/{0}/message/{1}'.format(user_id,msg_list))
time.sleep(0.5)
r = self.call_gw('/messages/user/{0}'.format(user_id))
for m in r['messages']:
self.assertEqual(m['is_read'],1)
def test_set_read_status_async(self):
ac_id = random.randint(10000000,50000000)
session = str(uuid.uuid1())
user_id = random.randint(80000001,90000000)
r = self.call_gw('/create/{0}/{1}/{2}'.format(ac_id,user_id,session))
time.sleep(1)
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test1'})
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test2'})
self.call_gw('/notify/user/{0}'.format(user_id),{'msg':'test3'})
time.sleep(5)
r = self.call_gw('/messages/user/{0}'.format(user_id))
for m in r['messages']:
self.assertEqual(m['is_read'],0)
# set read receipt
msg_list = ','.join([m['messageId'] for m in r['messages']])
r = self.call_gw('/messages/set-read/user/{0}/message/{1}?async=1'.format(user_id,msg_list))
time.sleep(5)
r = self.call_gw('/messages/user/{0}'.format(user_id))
for m in r['messages']:
self.assertEqual(m['is_read'],1)
if __name__=="__main__":
unittest.main()
| 45.130081 | 124 | 0.590614 | 1,606 | 11,102 | 3.922167 | 0.083437 | 0.086363 | 0.107954 | 0.119067 | 0.846484 | 0.835053 | 0.824417 | 0.815844 | 0.795206 | 0.78441 | 0 | 0.061879 | 0.21978 | 11,102 | 245 | 125 | 45.314286 | 0.66532 | 0.032787 | 0 | 0.658879 | 0 | 0 | 0.203711 | 0.029927 | 0 | 0 | 0 | 0 | 0.093458 | 1 | 0.051402 | false | 0 | 0.070093 | 0 | 0.130841 | 0.018692 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
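The read-receipt tests above pace themselves with fixed time.sleep calls, which makes them slow and timing-sensitive. A minimal poll-with-timeout helper is sketched below; it assumes the same call_gw gateway interface the tests use, and the helper name wait_until_all_read is hypothetical, not part of the original suite.

import time

def wait_until_all_read(test_case, user_id, timeout=10.0, interval=0.5):
    # Poll the gateway until every message for user_id is marked read,
    # instead of sleeping for a fixed interval.
    deadline = time.time() + timeout
    while time.time() < deadline:
        r = test_case.call_gw('/messages/user/{0}'.format(user_id))
        if all(m['is_read'] == 1 for m in r['messages']):
            return r
        time.sleep(interval)
    raise AssertionError('messages not marked read within {0}s'.format(timeout))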
1ea795730096143e518dab80200b23fdbcf8b813 | 1,769 | py | Python | MillerArrays/cns2mtz.py | MooersLab/jupyterlabcctbxsnipsplus | 80a380046adcc9b16581ed1681884017514edbb7 | [
"MIT"
] | null | null | null | MillerArrays/cns2mtz.py | MooersLab/jupyterlabcctbxsnipsplus | 80a380046adcc9b16581ed1681884017514edbb7 | [
"MIT"
] | null | null | null | MillerArrays/cns2mtz.py | MooersLab/jupyterlabcctbxsnipsplus | 80a380046adcc9b16581ed1681884017514edbb7 | [
"MIT"
] | null | null | null | # Description: Miller arrays to convert CNS reflection file into an mtz file
# Source: NA
"""
from iotbx import reflection_file_reader
import os
reflection_file = reflection_file_reader.any_reflection_file(file_name=os.path.expandvars("${1:\$CNS_SOLVE/doc/html/tutorial/data/pen/scale.hkl}"))
from cctbx import crystal
crystal_symmetry = crystal.symmetry( unit_cell=(${2:97.37, 46.64, 65.47, 90, 115.4, 90}), space_group_symbol="${3:C2}")
miller_arrays = reflection_file.as_miller_arrays( crystal_symmetry=crystal_symmetry)
mtz_dataset = None
for miller_array in miller_arrays:
if (mtz_dataset is None):
mtz_dataset = miller_array.as_mtz_dataset(
column_root_label=miller_array.info().labels[0])
else:
mtz_dataset.add_miller_array(
miller_array=miller_array,
column_root_label=miller_array.info().labels[0])
mtz_object = mtz_dataset.mtz_object()
mtz_object.show_summary()
"""
from iotbx import reflection_file_reader
import os
reflection_file = reflection_file_reader.any_reflection_file(file_name=os.path.expandvars("\$CNS_SOLVE/doc/html/tutorial/data/pen/scale.hkl"))
from cctbx import crystal
crystal_symmetry = crystal.symmetry( unit_cell=(97.37, 46.64, 65.47, 90, 115.4, 90), space_group_symbol="C2")
miller_arrays = reflection_file.as_miller_arrays( crystal_symmetry=crystal_symmetry)
mtz_dataset = None
for miller_array in miller_arrays:
if (mtz_dataset is None):
mtz_dataset = miller_array.as_mtz_dataset(
column_root_label=miller_array.info().labels[0])
else:
mtz_dataset.add_miller_array(
miller_array=miller_array,
column_root_label=miller_array.info().labels[0])
mtz_object = mtz_dataset.mtz_object()
mtz_object.show_summary()
| 43.146341 | 147 | 0.755794 | 261 | 1,769 | 4.793103 | 0.268199 | 0.123102 | 0.063949 | 0.095923 | 0.941647 | 0.941647 | 0.941647 | 0.941647 | 0.941647 | 0.941647 | 0 | 0.032216 | 0.140192 | 1,769 | 40 | 148 | 44.225 | 0.79027 | 0.525155 | 0 | 0.117647 | 0 | 0 | 0.060168 | 0.057762 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.176471 | 0 | 0.176471 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
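The cns2mtz snippet above builds the MTZ dataset in memory and only prints a summary. If an actual .mtz file on disk is wanted, the object can be written out; the call below follows the iotbx.mtz object API as commonly documented, and the output filename is an arbitrary choice.

# Persist the converted reflections to disk; "scale.mtz" is a placeholder name.
mtz_object.write(file_name="scale.mtz")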
1ea9bf8f906e922e6289c35335ed47dec32dd6fd | 5,073 | py | Python | taiga/projects/migrations/0060_auto_20180614_1338.py | threefoldtech/Threefold-Circles | cbc433796b25cf7af9a295af65d665a4a279e2d6 | [
"Apache-2.0"
] | null | null | null | taiga/projects/migrations/0060_auto_20180614_1338.py | threefoldtech/Threefold-Circles | cbc433796b25cf7af9a295af65d665a4a279e2d6 | [
"Apache-2.0"
] | 12 | 2019-11-25T14:08:32.000Z | 2021-06-24T10:35:51.000Z | taiga/projects/migrations/0060_auto_20180614_1338.py | threefoldtech/Threefold-Circles | cbc433796b25cf7af9a295af65d665a4a279e2d6 | [
"Apache-2.0"
] | null | null | null | # -*- coding: utf-8 -*-
# Generated by Django 1.11.2 on 2018-06-14 13:38
from __future__ import unicode_literals
import django.core.serializers.json
from django.db import migrations, models
import django.db.models.deletion
import taiga.base.db.models.fields.json
class Migration(migrations.Migration):
dependencies = [
('projects', '0059_auto_20170116_1633'),
]
operations = [
migrations.CreateModel(
name='IssueDueDate',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255, verbose_name='name')),
('order', models.IntegerField(default=10, verbose_name='order')),
('by_default', models.BooleanField(default=False, verbose_name='by default')),
('color', models.CharField(default='#999999', max_length=20, verbose_name='color')),
('days_to_due', models.IntegerField(blank=True, default=None, null=True, verbose_name='days to due')),
('project', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='issue_duedates', to='projects.Project', verbose_name='project')),
],
options={
'verbose_name': 'issue due date',
'verbose_name_plural': 'issue due dates',
'ordering': ['project', 'order', 'name'],
},
),
migrations.CreateModel(
name='TaskDueDate',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255, verbose_name='name')),
('order', models.IntegerField(default=10, verbose_name='order')),
('by_default', models.BooleanField(default=False, verbose_name='by default')),
('color', models.CharField(default='#999999', max_length=20, verbose_name='color')),
('days_to_due', models.IntegerField(blank=True, default=None, null=True, verbose_name='days to due')),
('project', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='task_duedates', to='projects.Project', verbose_name='project')),
],
options={
'verbose_name': 'task due date',
'verbose_name_plural': 'task due dates',
'ordering': ['project', 'order', 'name'],
},
),
migrations.CreateModel(
name='UserStoryDueDate',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('name', models.CharField(max_length=255, verbose_name='name')),
('order', models.IntegerField(default=10, verbose_name='order')),
('by_default', models.BooleanField(default=False, verbose_name='by default')),
('color', models.CharField(default='#999999', max_length=20, verbose_name='color')),
('days_to_due', models.IntegerField(blank=True, default=None, null=True, verbose_name='days to due')),
('project', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, related_name='us_duedates', to='projects.Project', verbose_name='project')),
],
options={
'verbose_name': 'user story due date',
'verbose_name_plural': 'user story due dates',
'ordering': ['project', 'order', 'name'],
},
),
migrations.AddField(
model_name='projecttemplate',
name='issue_duedates',
field=taiga.base.db.models.fields.json.JSONField(blank=True, encoder=django.core.serializers.json.DjangoJSONEncoder, null=True, verbose_name='issue duedates'),
),
migrations.AddField(
model_name='projecttemplate',
name='task_duedates',
field=taiga.base.db.models.fields.json.JSONField(blank=True, encoder=django.core.serializers.json.DjangoJSONEncoder, null=True, verbose_name='task duedates'),
),
migrations.AddField(
model_name='projecttemplate',
name='us_duedates',
field=taiga.base.db.models.fields.json.JSONField(blank=True, encoder=django.core.serializers.json.DjangoJSONEncoder, null=True, verbose_name='us duedates'),
),
migrations.AlterUniqueTogether(
name='issuestatus',
unique_together=set([('project', 'name')]),
),
migrations.AlterUniqueTogether(
name='userstoryduedate',
unique_together=set([('project', 'name')]),
),
migrations.AlterUniqueTogether(
name='taskduedate',
unique_together=set([('project', 'name')]),
),
migrations.AlterUniqueTogether(
name='issueduedate',
unique_together=set([('project', 'name')]),
),
]
| 50.227723 | 171 | 0.600828 | 509 | 5,073 | 5.829077 | 0.194499 | 0.111223 | 0.032356 | 0.038423 | 0.837546 | 0.803842 | 0.782609 | 0.732053 | 0.670374 | 0.631951 | 0 | 0.019043 | 0.254682 | 5,073 | 100 | 172 | 50.73 | 0.76567 | 0.013404 | 0 | 0.612903 | 1 | 0 | 0.183127 | 0.004598 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.053763 | 0 | 0.086022 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
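The migration above is auto-generated. For orientation, a minimal sketch of the kind of model declaration that would produce the IssueDueDate table (the field set is taken from the CreateModel operation above; the app's real module layout is an assumption):

from django.db import models

class IssueDueDate(models.Model):
    # Mirrors the fields and options in the CreateModel operation above.
    name = models.CharField(max_length=255, verbose_name='name')
    order = models.IntegerField(default=10, verbose_name='order')
    by_default = models.BooleanField(default=False, verbose_name='by default')
    color = models.CharField(default='#999999', max_length=20, verbose_name='color')
    days_to_due = models.IntegerField(blank=True, default=None, null=True, verbose_name='days to due')
    project = models.ForeignKey('projects.Project', on_delete=models.CASCADE,
                                related_name='issue_duedates', verbose_name='project')

    class Meta:
        verbose_name = 'issue due date'
        verbose_name_plural = 'issue due dates'
        ordering = ['project', 'order', 'name']
        unique_together = [('project', 'name')]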
1ecfeb879a684228606eb4120ff218d3fea00209 | 473 | py | Python | python/gym_RLrecon/envs/__init__.py | syedsaifhasan/rl_reconstruct | 1462d3650c3334083a7b4cc34c88e6f5d1095ce3 | [
"BSD-3-Clause"
] | 3 | 2019-08-19T12:51:41.000Z | 2021-03-29T11:28:06.000Z | python/gym_RLrecon/envs/__init__.py | syedsaifhasan/rl_reconstruct | 1462d3650c3334083a7b4cc34c88e6f5d1095ce3 | [
"BSD-3-Clause"
] | null | null | null | python/gym_RLrecon/envs/__init__.py | syedsaifhasan/rl_reconstruct | 1462d3650c3334083a7b4cc34c88e6f5d1095ce3 | [
"BSD-3-Clause"
] | 2 | 2019-01-14T07:55:40.000Z | 2021-12-11T13:34:35.000Z | from gym_RLrecon.envs.RLrecon_env import RLreconEnv
from gym_RLrecon.envs.RLrecon_simple_v0_env import RLreconSimpleV0Env
from gym_RLrecon.envs.RLrecon_simple_v1_env import RLreconSimpleV1Env
from gym_RLrecon.envs.RLrecon_simple_v2_env import RLreconSimpleV2Env
from gym_RLrecon.envs.RLrecon_simple_v3_env import RLreconSimpleV3Env
from gym_RLrecon.envs.RLrecon_very_simple_env import RLreconVerySimpleEnv
from gym_RLrecon.envs.RLrecon_env_wrapper import RLreconEnvWrapper
| 59.125 | 73 | 0.911205 | 67 | 473 | 6.059701 | 0.298507 | 0.12069 | 0.241379 | 0.310345 | 0.504926 | 0.44335 | 0 | 0 | 0 | 0 | 0 | 0.017978 | 0.059197 | 473 | 7 | 74 | 67.571429 | 0.894382 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
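This __init__.py only re-exports the environment classes; to instantiate them through gym.make they must also be registered. A sketch of that step, assuming a placeholder environment ID (the package defines its real IDs elsewhere):

from gym.envs.registration import register

# 'RLrecon-v0' is a hypothetical ID chosen for illustration.
register(id='RLrecon-v0',
         entry_point='gym_RLrecon.envs:RLreconEnv')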
1ed5014808f57a2a658dfa58a31bbc3ff8e59e23 | 157 | py | Python | util/string/benchmark/join/metrics/main.py | jochenater/catboost | de2786fbc633b0d6ea6a23b3862496c6151b95c2 | [
"Apache-2.0"
] | 6,989 | 2017-07-18T06:23:18.000Z | 2022-03-31T15:58:36.000Z | util/string/benchmark/join/metrics/main.py | jochenater/catboost | de2786fbc633b0d6ea6a23b3862496c6151b95c2 | [
"Apache-2.0"
] | 1,978 | 2017-07-18T09:17:58.000Z | 2022-03-31T14:28:43.000Z | util/string/benchmark/join/metrics/main.py | jochenater/catboost | de2786fbc633b0d6ea6a23b3862496c6151b95c2 | [
"Apache-2.0"
] | 1,228 | 2017-07-18T09:03:13.000Z | 2022-03-29T05:57:40.000Z | import yatest.common as yc
def test_export_metrics(metrics):
metrics.set_benchmark(yc.execute_benchmark('util/string/benchmark/join/join', threads=8))
| 26.166667 | 93 | 0.796178 | 23 | 157 | 5.26087 | 0.73913 | 0.231405 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006993 | 0.089172 | 157 | 5 | 94 | 31.4 | 0.839161 | 0 | 0 | 0 | 0 | 0 | 0.197452 | 0.197452 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
94af3ec540a61d4027bb36f424744c3f91c08814 | 14,987 | py | Python | tests/test_mixes.py | lycantropos/hypothesis_geometry | 23e1638144ffba089eee21eb623b0499713e0b1c | [
"MIT"
] | 9 | 2020-01-16T13:52:16.000Z | 2022-03-16T00:01:26.000Z | tests/test_mixes.py | lycantropos/hypothesis_geometry | 23e1638144ffba089eee21eb623b0499713e0b1c | [
"MIT"
] | 38 | 2020-01-16T12:08:51.000Z | 2021-01-11T11:06:32.000Z | tests/test_mixes.py | lycantropos/hypothesis_geometry | 23e1638144ffba089eee21eb623b0499713e0b1c | [
"MIT"
] | 1 | 2020-03-12T10:29:44.000Z | 2020-03-12T10:29:44.000Z | from typing import Tuple
import pytest
from ground.hints import Scalar
from hypothesis import given
from hypothesis.errors import HypothesisWarning
from hypothesis.strategies import DataObject
from hypothesis_geometry.hints import Strategy
from hypothesis_geometry.planar import mixes
from tests import strategies
from tests.utils import (ScalarsLimitsType,
SizesPair,
is_mix,
mix_has_coordinates_in_range,
mix_has_coordinates_types,
mix_has_valid_sizes,
mix_segments_do_not_cross_or_overlap)
@given(strategies.scalars_strategies,
strategies.mix_components_sizes_pairs_triplets,
strategies.concave_contours_sizes_pairs,
strategies.polygon_holes_sizes_pairs,
strategies.convex_contours_sizes_pairs)
def test_basic(coordinates: Strategy[Scalar],
components_sizes_pair: Tuple[SizesPair, SizesPair, SizesPair],
polygons_border_sizes_pair: SizesPair,
polygons_holes_list_sizes_pair: SizesPair,
polygons_holes_sizes_pair: SizesPair) -> None:
points_sizes_pair, segments_sizes_pair, polygons_sizes_pair = (
components_sizes_pair)
min_points_size, max_points_size = points_sizes_pair
min_segments_size, max_segments_size = segments_sizes_pair
min_polygons_size, max_polygons_size = polygons_sizes_pair
(min_mix_polygon_border_size,
max_mix_polygon_border_size) = polygons_border_sizes_pair
(min_mix_polygon_holes_size,
max_mix_polygon_holes_size) = polygons_holes_list_sizes_pair
(min_mix_polygon_hole_size,
max_mix_polygon_hole_size) = polygons_holes_sizes_pair
result = mixes(coordinates,
min_points_size=min_points_size,
max_points_size=max_points_size,
min_segments_size=min_segments_size,
max_segments_size=max_segments_size,
min_polygons_size=min_polygons_size,
max_polygons_size=max_polygons_size,
min_polygon_border_size=min_mix_polygon_border_size,
max_polygon_border_size=max_mix_polygon_border_size,
min_polygon_holes_size=min_mix_polygon_holes_size,
max_polygon_holes_size=max_mix_polygon_holes_size,
min_polygon_hole_size=min_mix_polygon_hole_size,
max_polygon_hole_size=max_mix_polygon_hole_size)
assert isinstance(result, Strategy)
@given(strategies.data,
strategies.scalars_strategy_with_limit_and_type_pairs,
strategies.mix_components_sizes_pairs_triplets,
strategies.concave_contours_sizes_pairs,
strategies.polygon_holes_sizes_pairs,
strategies.convex_contours_sizes_pairs)
def test_properties(data: DataObject,
coordinates_limits_type_pair: Tuple[ScalarsLimitsType,
ScalarsLimitsType],
components_sizes_pair: Tuple[SizesPair, SizesPair,
SizesPair],
polygon_border_sizes_pair: SizesPair,
polygon_holes_sizes_pair: SizesPair,
polygon_hole_sizes_pair: SizesPair) -> None:
(x_coordinates_limits_type,
y_coordinates_limits_type) = coordinates_limits_type_pair
((x_coordinates, (min_x_value, max_x_value)),
x_type) = x_coordinates_limits_type
((y_coordinates, (min_y_value, max_y_value)),
y_type) = y_coordinates_limits_type
points_sizes_pair, segments_sizes_pair, polygons_sizes_pair = (
components_sizes_pair)
min_points_size, max_points_size = points_sizes_pair
min_segments_size, max_segments_size = segments_sizes_pair
min_polygons_size, max_polygons_size = polygons_sizes_pair
(min_polygon_border_size,
max_polygon_border_size) = polygon_border_sizes_pair
min_polygon_holes_size, max_polygon_holes_size = polygon_holes_sizes_pair
min_polygon_hole_size, max_polygon_hole_size = polygon_hole_sizes_pair
strategy = mixes(x_coordinates, y_coordinates,
min_points_size=min_points_size,
max_points_size=max_points_size,
min_segments_size=min_segments_size,
max_segments_size=max_segments_size,
min_polygons_size=min_polygons_size,
max_polygons_size=max_polygons_size,
min_polygon_border_size=min_polygon_border_size,
max_polygon_border_size=max_polygon_border_size,
min_polygon_holes_size=min_polygon_holes_size,
max_polygon_holes_size=max_polygon_holes_size,
min_polygon_hole_size=min_polygon_hole_size,
max_polygon_hole_size=max_polygon_hole_size)
result = data.draw(strategy)
assert is_mix(result)
assert mix_has_valid_sizes(result,
min_points_size=min_points_size,
max_points_size=max_points_size,
min_segments_size=min_segments_size,
max_segments_size=max_segments_size,
min_polygons_size=min_polygons_size,
max_polygons_size=max_polygons_size,
min_polygon_border_size=min_polygon_border_size,
max_polygon_border_size=max_polygon_border_size,
min_polygon_holes_size=min_polygon_holes_size,
max_polygon_holes_size=max_polygon_holes_size,
min_polygon_hole_size=min_polygon_hole_size,
max_polygon_hole_size=max_polygon_hole_size)
assert mix_has_coordinates_types(result,
x_type=x_type,
y_type=y_type)
assert mix_has_coordinates_in_range(result,
min_x_value=min_x_value,
max_x_value=max_x_value,
min_y_value=min_y_value,
max_y_value=max_y_value)
assert mix_segments_do_not_cross_or_overlap(result)
@given(strategies.data,
strategies.scalars_strategies_with_limits_and_types,
strategies.mix_components_sizes_pairs_triplets,
strategies.concave_contours_sizes_pairs,
strategies.polygon_holes_sizes_pairs,
strategies.convex_contours_sizes_pairs)
def test_same_coordinates(data: DataObject,
coordinates_limits_type: ScalarsLimitsType,
components_sizes_pair: Tuple[SizesPair, SizesPair,
SizesPair],
polygons_border_sizes_pair: SizesPair,
polygon_holes_sizes_pair: SizesPair,
polygon_hole_sizes_pair: SizesPair) -> None:
((coordinates, (min_mix_polygon_value, max_mix_polygon_value)),
type_) = coordinates_limits_type
points_sizes_pair, segments_sizes_pair, polygons_sizes_pair = (
components_sizes_pair)
min_points_size, max_points_size = points_sizes_pair
min_segments_size, max_segments_size = segments_sizes_pair
min_polygons_size, max_polygons_size = polygons_sizes_pair
(min_polygon_border_size,
max_polygon_border_size) = polygons_border_sizes_pair
min_polygon_holes_size, max_polygon_holes_size = polygon_holes_sizes_pair
min_polygon_hole_size, max_polygon_hole_size = polygon_hole_sizes_pair
strategy = mixes(coordinates,
min_points_size=min_points_size,
max_points_size=max_points_size,
min_segments_size=min_segments_size,
max_segments_size=max_segments_size,
min_polygons_size=min_polygons_size,
max_polygons_size=max_polygons_size,
min_polygon_border_size=min_polygon_border_size,
max_polygon_border_size=max_polygon_border_size,
min_polygon_holes_size=min_polygon_holes_size,
max_polygon_holes_size=max_polygon_holes_size,
min_polygon_hole_size=min_polygon_hole_size,
max_polygon_hole_size=max_polygon_hole_size)
result = data.draw(strategy)
assert is_mix(result)
assert mix_has_valid_sizes(
result,
min_points_size=min_points_size,
max_points_size=max_points_size,
min_segments_size=min_segments_size,
max_segments_size=max_segments_size,
min_polygons_size=min_polygons_size,
max_polygons_size=max_polygons_size,
min_polygon_border_size=min_polygon_border_size,
max_polygon_border_size=max_polygon_border_size,
min_polygon_holes_size=min_polygon_holes_size,
max_polygon_holes_size=max_polygon_holes_size,
min_polygon_hole_size=min_polygon_hole_size,
max_polygon_hole_size=max_polygon_hole_size)
assert mix_has_coordinates_types(result,
x_type=type_,
y_type=type_)
assert mix_has_coordinates_in_range(result,
min_x_value=min_mix_polygon_value,
max_x_value=max_mix_polygon_value,
min_y_value=min_mix_polygon_value,
max_y_value=max_mix_polygon_value)
assert mix_segments_do_not_cross_or_overlap(result)
@given(strategies.scalars_strategies,
strategies.invalid_mix_components_sizes_pairs_triplets)
def test_invalid_components_sizes(coordinates: Strategy[Scalar],
invalid_components_sizes_pairs
: Tuple[SizesPair, SizesPair, SizesPair]
) -> None:
points_sizes_pair, segments_sizes_pair, polygons_sizes_pair = (
invalid_components_sizes_pairs)
min_points_size, max_points_size = points_sizes_pair
min_segments_size, max_segments_size = segments_sizes_pair
min_polygons_size, max_polygons_size = polygons_sizes_pair
with pytest.raises(ValueError):
mixes(coordinates,
min_points_size=min_points_size,
max_points_size=max_points_size,
min_segments_size=min_segments_size,
max_segments_size=max_segments_size,
min_polygons_size=min_polygons_size,
max_polygons_size=max_polygons_size)
@given(strategies.scalars_strategies,
strategies.invalid_mix_points_sizes_pairs)
def test_invalid_points_sizes(coordinates: Strategy[Scalar],
invalid_sizes_pair: SizesPair) -> None:
min_points_size, max_points_size = invalid_sizes_pair
with pytest.raises(ValueError):
mixes(coordinates,
min_points_size=min_points_size,
max_points_size=max_points_size)
@given(strategies.scalars_strategies,
strategies.invalid_mix_polygons_sizes_pairs)
def test_invalid_polygons_sizes(coordinates: Strategy[Scalar],
invalid_sizes_pair: SizesPair) -> None:
min_polygons_size, max_polygons_size = invalid_sizes_pair
with pytest.raises(ValueError):
mixes(coordinates,
min_polygons_size=min_polygons_size,
max_polygons_size=max_polygons_size)
@given(strategies.scalars_strategies,
strategies.invalid_mix_segments_sizes_pairs)
def test_invalid_segments_sizes(coordinates: Strategy[Scalar],
invalid_sizes_pair: SizesPair) -> None:
min_segments_size, max_segments_size = invalid_sizes_pair
with pytest.raises(ValueError):
mixes(coordinates,
min_segments_size=min_segments_size,
max_segments_size=max_segments_size)
@given(strategies.scalars_strategies,
strategies.invalid_convex_contours_sizes_pairs)
def test_invalid_polygon_border_sizes(coordinates: Strategy[Scalar],
invalid_sizes_pair: SizesPair) -> None:
min_polygon_border_size, max_polygon_border_size = invalid_sizes_pair
with pytest.raises(ValueError):
mixes(coordinates,
min_polygon_border_size=min_polygon_border_size,
max_polygon_border_size=max_polygon_border_size)
@given(strategies.scalars_strategies,
strategies.invalid_polygon_holes_sizes_pairs)
def test_invalid_polygon_holes_list_sizes(coordinates: Strategy[Scalar],
invalid_sizes_pair: SizesPair
) -> None:
min_polygon_holes_size, max_polygon_holes_size = invalid_sizes_pair
with pytest.raises(ValueError):
mixes(coordinates,
min_polygon_holes_size=min_polygon_holes_size,
max_polygon_holes_size=max_polygon_holes_size)
@given(strategies.scalars_strategies,
strategies.invalid_convex_contours_sizes_pairs)
def test_invalid_polygon_holes_sizes(coordinates: Strategy[Scalar],
invalid_sizes_pair: SizesPair) -> None:
min_polygon_hole_size, max_polygon_hole_size = invalid_sizes_pair
with pytest.raises(ValueError):
mixes(coordinates,
min_polygon_hole_size=min_polygon_hole_size,
max_polygon_hole_size=max_polygon_hole_size)
@given(strategies.scalars_strategies,
strategies.non_valid_convex_contours_sizes_pairs)
def test_non_valid_polygon_border_sizes(coordinates: Strategy[Scalar],
non_valid_sizes_pair: SizesPair
) -> None:
min_polygon_border_size, max_polygon_border_size = non_valid_sizes_pair
with pytest.warns(HypothesisWarning) as warnings:
mixes(coordinates,
min_polygon_border_size=min_polygon_border_size,
max_polygon_border_size=max_polygon_border_size)
assert len(warnings) == 1
@given(strategies.scalars_strategies,
strategies.non_valid_convex_contours_sizes_pairs)
def test_non_valid_polygon_holes_sizes(coordinates: Strategy[Scalar],
non_valid_sizes_pair: SizesPair
) -> None:
min_polygon_hole_size, max_polygon_hole_size = non_valid_sizes_pair
with pytest.warns(HypothesisWarning) as warnings:
mixes(coordinates,
min_polygon_hole_size=min_polygon_hole_size,
max_polygon_hole_size=max_polygon_hole_size)
assert len(warnings) == 1
| 46.688474 | 79 | 0.668246 | 1,686 | 14,987 | 5.346975 | 0.049229 | 0.08619 | 0.074542 | 0.040044 | 0.908375 | 0.848031 | 0.816528 | 0.799113 | 0.741098 | 0.741098 | 0 | 0.000187 | 0.286515 | 14,987 | 320 | 80 | 46.834375 | 0.842888 | 0 | 0 | 0.643636 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.047273 | 1 | 0.043636 | false | 0 | 0.036364 | 0 | 0.08 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
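Outside of these self-tests, consuming the mixes strategy looks like the sketch below; the integer coordinate strategy and the size bounds are arbitrary choices, mirroring how the tests above call mixes.

from hypothesis import given
from hypothesis import strategies as st
from hypothesis_geometry.planar import mixes

@given(mixes(st.integers(-1000, 1000),
             min_points_size=1, max_points_size=5,
             min_segments_size=1, max_segments_size=5))
def test_consumes_mix(mix):
    # Each drawn value bundles points, segments and polygons.
    assert mix is not None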
94eac7b419f8bc0cdf879096e73f0a12163e0395 | 41 | py | Python | bolt4ds/recsys/data/__init__.py | leepand/bolt4ds | 0b0e71deb8fc421d32e54d38a4c38a914e3aa732 | [
"BSD-3-Clause"
] | null | null | null | bolt4ds/recsys/data/__init__.py | leepand/bolt4ds | 0b0e71deb8fc421d32e54d38a4c38a914e3aa732 | [
"BSD-3-Clause"
] | null | null | null | bolt4ds/recsys/data/__init__.py | leepand/bolt4ds | 0b0e71deb8fc421d32e54d38a4c38a914e3aa732 | [
"BSD-3-Clause"
] | null | null | null | from .lightfm_data_process import Dataset | 41 | 41 | 0.902439 | 6 | 41 | 5.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.073171 | 41 | 1 | 41 | 41 | 0.921053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a208e64d54fc88acaa5f911cc7f0b7ff23e3ea3e | 3,918 | py | Python | regym/tests/rl_algorithms/rps_test.py | KnwSondess/Regym | 825c7dacf955a3e2f6c658c0ecb879a0ca036c1a | [
"MIT"
] | 2 | 2020-09-13T15:53:20.000Z | 2020-12-08T15:57:05.000Z | regym/tests/rl_algorithms/rps_test.py | KnwSondess/Regym | 825c7dacf955a3e2f6c658c0ecb879a0ca036c1a | [
"MIT"
] | null | null | null | regym/tests/rl_algorithms/rps_test.py | KnwSondess/Regym | 825c7dacf955a3e2f6c658c0ecb879a0ca036c1a | [
"MIT"
] | 1 | 2021-09-20T13:48:30.000Z | 2021-09-20T13:48:30.000Z | from tqdm import tqdm
import gym
from regym.rl_loops.multiagent_loops import simultaneous_action_rl_loop
from environments import ParallelEnv
def learns_against_fixed_opponent_RPS(agent, fixed_opponent, total_episodes, training_percentage, reward_threshold):
'''
Test used to make sure that agent is 'learning' by learning a best response
against an agent that only plays rock in rock paper scissors.
i.e from random, learns to play only (or mostly) paper
'''
env = gym.make('RockPaperScissors-v0')
maximum_average_reward = 1.0
training_episodes = int(total_episodes * training_percentage)
inference_episodes = total_episodes - training_episodes
training_trajectories = simulate(env, agent, fixed_opponent, episodes=training_episodes, training=True)
agent.training = False
inference_trajectories = simulate(env, agent, fixed_opponent, episodes=inference_episodes, training=False)
average_inference_rewards = [sum(map(lambda experience: experience[2][0], t)) / len(t) for t in inference_trajectories]
average_inference_reward = sum(average_inference_rewards) / len(average_inference_rewards)
assert average_inference_reward >= maximum_average_reward - reward_threshold
def simulate(env, agent, fixed_opponent, episodes, training):
agent_vector = [agent, fixed_opponent]
trajectories = list()
mode = 'Training' if training else 'Inference'
progress_bar = tqdm(range(episodes))
for e in progress_bar:
trajectory = simultaneous_action_rl_loop.run_episode(env, agent_vector, training=training)
trajectories.append(trajectory)
avg_trajectory_reward = sum(map(lambda experience: experience[2][0], trajectory)) / len(trajectory)
progress_bar.set_description(f'{mode} {agent.name} against {fixed_opponent.name}. Last avg reward: {avg_trajectory_reward}')
return trajectories
def learns_against_fixed_opponent_RPS_parallel(agent, fixed_opponent, total_episodes, training_percentage, reward_threshold_percentage, envname='RockPaperScissors-v0', nbr_parallel_env=2):
'''
Test used to make sure that agent is 'learning' by learning a best response
against an agent that only plays randomly.
i.e from random, learns to play only (or mostly) paper
'''
env = ParallelEnv(envname, nbr_parallel_env)
maximum_average_reward = 10.0
training_episodes = int(total_episodes * training_percentage)
inference_episodes = total_episodes - training_episodes
training_trajectories = simulate_parallel(env, agent, fixed_opponent, episodes=training_episodes, training=True)
agent.training = False
env = gym.make(envname)
inference_trajectories = simulate(env, agent, fixed_opponent, episodes=inference_episodes, training=False)
average_inference_rewards = [sum(map(lambda experience: experience[2][0], t)) for t in inference_trajectories]
average_inference_reward = sum(average_inference_rewards) / len(average_inference_rewards)
assert average_inference_reward >= maximum_average_reward*reward_threshold_percentage
def simulate_parallel(env, agent, fixed_opponent, episodes, training):
agent_vector = [agent, fixed_opponent]
trajectories = list()
mode = 'Training' if training else 'Inference'
progress_bar = tqdm(range(episodes))
for e in progress_bar:
per_actor_trajectories = simultaneous_action_rl_loop.run_episode_parallel(env, agent_vector, training=training, self_play=False)
trajectory = []
for t in per_actor_trajectories.values():
trajectories.append(t)
for exp in t:
trajectory.append( exp)
avg_trajectory_reward = sum(map(lambda experience: experience[2][0], trajectory)) / len(trajectory)
progress_bar.set_description(f'{mode} {agent.name} against {fixed_opponent.name}. Last avg reward: {avg_trajectory_reward}')
return trajectories
| 49.594937 | 188 | 0.760337 | 489 | 3,918 | 5.834356 | 0.210634 | 0.08973 | 0.063091 | 0.044164 | 0.819138 | 0.798107 | 0.75184 | 0.749036 | 0.740624 | 0.695759 | 0 | 0.004862 | 0.160031 | 3,918 | 78 | 189 | 50.230769 | 0.862048 | 0.093415 | 0 | 0.490566 | 0 | 0.037736 | 0.073039 | 0.025678 | 0 | 0 | 0 | 0 | 0.037736 | 1 | 0.075472 | false | 0 | 0.075472 | 0 | 0.188679 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
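Both loops take a fixed_opponent supplied by the caller; the first docstring mentions an agent that only plays rock. A hypothetical sketch of such an agent, assuming a minimal interface (a name attribute, a training flag, an action method, and a no-op learning hook); the exact regym agent API is not shown in this file.

class RockAgent:
    # Static opponent that always plays action 0 (rock).
    def __init__(self):
        self.name = 'RockAgent'
        self.training = False

    def take_action(self, state):
        return 0  # always rock

    def handle_experience(self, *args, **kwargs):
        pass  # fixed policy, nothing to learn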
a21f5b8813aa4f88733797f06eb91b8dd515d3a9 | 22 | py | Python | picpac/__init__.py | stavka0619/picpac | 638f56312400fbb6204b7001c0c137386a675e83 | [
"BSD-2-Clause"
] | null | null | null | picpac/__init__.py | stavka0619/picpac | 638f56312400fbb6204b7001c0c137386a675e83 | [
"BSD-2-Clause"
] | null | null | null | picpac/__init__.py | stavka0619/picpac | 638f56312400fbb6204b7001c0c137386a675e83 | [
"BSD-2-Clause"
] | null | null | null | from _picpac import *
| 11 | 21 | 0.772727 | 3 | 22 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 22 | 1 | 22 | 22 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bfc6d1b785c53fc58e325caff8b801e4161b18c2 | 4,693 | py | Python | test.py | Amitlamichhane/NLPP | 4976d62f5229f7fc7f97460a5990fb9f38a6ef93 | [
"Unlicense"
] | null | null | null | test.py | Amitlamichhane/NLPP | 4976d62f5229f7fc7f97460a5990fb9f38a6ef93 | [
"Unlicense"
] | null | null | null | test.py | Amitlamichhane/NLPP | 4976d62f5229f7fc7f97460a5990fb9f38a6ef93 | [
"Unlicense"
] | null | null | null | import unittest
import todo
class testTask(unittest.TestCase):
    def test_evaluation(self):
        golden_list = [['B-TAR', 'I-TAR', 'O', 'B-HYP'], ['B-TAR', 'O', 'O', 'B-HYP']]
        predict_list = [['B-TAR', 'O', 'O', 'O'], ['B-TAR', 'O', 'B-HYP', 'I-HYP']]
        result = todo.evaluate(golden_list, predict_list)
        print("answers should be this " + str(result))
        self.assertEqual(result, 0.286)
        # auto-generated at the end
        golden_list = [['B-TAR', 'I-TAR', 'O', 'B-HYP'], ['B-TAR', 'I-TAR', 'O', 'B-HYP']]
        predict_list = [['B-TAR', 'I-TAR', 'O', 'B-HYP'], ['B-TAR', 'I-TAR', 'O', 'B-HYP']]
        result = todo.evaluate(golden_list, predict_list)
        print("answers should be this " + str(result))
        golden_list = [['B-TAR', 'I-TAR', 'I-TAR', 'B-HYP', 'I-HYP', 'I-HYP', 'O']]
        predict_list = [['B-TAR', 'I-TAR', 'O', 'B-HYP', 'I-HYP', 'I-HYP', 'O']]
        result = todo.evaluate(golden_list, predict_list)
        print("answers should be this " + str(result))
        golden_list = [['O', 'O']]
        predict_list = [['O', 'O']]
        result = todo.evaluate(golden_list, predict_list)
        print("answers should be this " + str(result))
        # B-HYP with I-HYP
        # SIMPLE CASES
        # 2 true positives
        # 2 false negatives
        # 2 false positives
        golden_list = [['B-TAR', 'O', 'B-HYP', 'I-HYP'], ['B-TAR', 'O', 'O', 'B-HYP']]
        predict_list = [['B-TAR', 'O', 'O', 'B-HYP'], ['B-TAR', 'O', 'B-HYP', 'I-HYP']]
        result = todo.evaluate(golden_list, predict_list)
        print("answers should be this " + str(result))
        self.assertEqual(result, 0.5)
        # two different ways for a simple B-HYP prediction mistake
        # 2 true positives
        # 2 false negatives
        golden_list = [['B-TAR', 'O', 'O', 'B-HYP'], ['B-TAR', 'O', 'O', 'B-HYP', 'O']]
        predict_list = [['B-TAR', 'O', 'O', 'B-HYP'], ['B-TAR', 'O', 'O', 'O', 'O']]
        result = todo.evaluate(golden_list, predict_list)
        print("answers should be this " + str(result))
        self.assertEqual(result, 0.857)
        # B-TAR with I-TAR in golden and B-TAR with I-TAR in predict
        golden_list = [['B-TAR', 'I-TAR', 'O', 'B-HYP'], ['B-TAR', 'O', 'O', 'B-HYP'], ['B-TAR', 'I-TAR', 'O', 'B-HYP']]
        predict_list = [['B-TAR', 'O', 'O', 'B-HYP'], ['B-TAR', 'I-TAR', 'O', 'B-HYP'], ['B-TAR', 'I-TAR', 'O', 'B-HYP']]
        result = todo.evaluate(golden_list, predict_list)
        print("answers should be this " + str(result))
        self.assertEqual(result, 0.667)
        # when B-TAR is equal in one sentence and not equal in another
        # simple B-TAR and B-HYP
        golden_list = [['B-TAR', 'O', 'O', 'B-HYP'], ['B-TAR', 'O', 'O', 'B-HYP']]
        predict_list = [['B-TAR', 'O', 'O', 'B-HYP'], ['B-TAR', 'I-TAR', 'O', 'B-HYP']]
        result = todo.evaluate(golden_list, predict_list)
        print("answers should be this " + str(result))
        self.assertEqual(result, 0.75)
        # more exhaustive tests needed
        golden_list = [['B-TAR', 'O', 'O', 'B-HYP'], ['B-TAR', 'O', 'O', 'B-HYP']]
        predict_list = [['B-TAR', 'O', 'O', 'O'], ['B-TAR', 'O', 'B-HYP', 'O']]
        result = todo.evaluate(golden_list, predict_list)
        print("answers should be this " + str(result))
        self.assertEqual(result, 0.571)
        golden_list = [['B-TAR', 'O', 'O', 'B-HYP'], ['B-TAR', 'O', 'O', 'B-HYP']]
        predict_list = [['B-TAR', 'O', 'O', 'O'], ['B-TAR', 'O', 'B-HYP', 'O']]
        result = todo.evaluate(golden_list, predict_list)
        print("answers should be this " + str(result))
        self.assertEqual(result, 0.571)
        golden_list = [['B-TAR', 'O', 'O', 'B-HYP'], ['B-TAR', 'O', 'O', 'B-HYP']]
        predict_list = [['B-TAR', 'O', 'O', 'O'], ['B-TAR', 'O', 'B-HYP', 'O']]
        result = todo.evaluate(golden_list, predict_list)
        print("answers should be this " + str(result))
        self.assertEqual(result, 0.571)
        golden_list = [['B-TAR', 'O', 'O', 'B-HYP'], ['B-TAR', 'O', 'O', 'B-HYP']]
        predict_list = [['B-TAR', 'O', 'O', 'O'], ['B-TAR', 'O', 'B-HYP', 'O']]
        result = todo.evaluate(golden_list, predict_list)
        print("answers should be this " + str(result))
        self.assertEqual(result, 0.571)
        golden_list = [['B-TAR', 'I-TAR', 'I-TAR', 'B-HYP'], ['B-TAR', 'O', 'O', 'B-HYP']]
        predict_list = [['B-TAR', 'O', 'B-HYP', 'O'], ['B-TAR', 'O', 'B-HYP', 'O']]
        result = todo.evaluate(golden_list, predict_list)
        print("answers should be this " + str(result))
        self.assertEqual(result, 0.571)

if __name__=="__main__":
    unittest.main()
| 36.1 | 120 | 0.515662 | 684 | 4,693 | 3.44883 | 0.102339 | 0.083086 | 0.084782 | 0.06613 | 0.86986 | 0.85036 | 0.823654 | 0.81348 | 0.81348 | 0.803306 | 0 | 0.011864 | 0.245685 | 4,693 | 129 | 121 | 36.379845 | 0.65452 | 0.056041 | 0 | 0.588235 | 0 | 0 | 0.219388 | 0 | 0 | 0 | 0 | 0 | 0.147059 | 1 | 0.014706 | false | 0 | 0.029412 | 0 | 0.058824 | 0.191176 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
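todo.evaluate itself is not part of this file. Consistent with most of the expected values above (0.286, 0.5, 0.857, 0.667, 0.75), it behaves like a chunk-level F1 score with exact-span matching, rounded to three decimals; the final assertion's 0.571 looks inherited from the copied block above it. A sketch under that assumption, not the repository's implementation:

def _chunks(labels):
    # Collect (type, start, end) spans from one BIO-tagged sentence.
    spans, start, tag = set(), None, None
    for i, lab in enumerate(labels + ['O']):
        if lab == 'O' or lab.startswith('B-'):
            if tag is not None:
                spans.add((tag, start, i - 1))
                tag = None
            if lab.startswith('B-'):
                tag, start = lab[2:], i
    return spans

def evaluate(golden_list, predict_list):
    gold = pred = correct = 0
    for g, p in zip(golden_list, predict_list):
        gs, ps = _chunks(g), _chunks(p)
        gold += len(gs)
        pred += len(ps)
        correct += len(gs & ps)
    if correct == 0:
        return 0.0
    precision, recall = correct / pred, correct / gold
    return round(2 * precision * recall / (precision + recall), 3)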
44a575c270e0e724123fa8218c2ebf13c59dab2b | 2,219 | py | Python | src/models.py | prakashchhipa/Depth-Contrast-Self-Supervised-Method | c68f2ea85063be3a63216985fbe806621174889b | [
"Apache-2.0"
] | null | null | null | src/models.py | prakashchhipa/Depth-Contrast-Self-Supervised-Method | c68f2ea85063be3a63216985fbe806621174889b | [
"Apache-2.0"
] | null | null | null | src/models.py | prakashchhipa/Depth-Contrast-Self-Supervised-Method | c68f2ea85063be3a63216985fbe806621174889b | [
"Apache-2.0"
] | null | null | null | import torchvision.models as models
import torch.nn as nn
import torch
import torch.nn.functional as F
from efficientnet_pytorch import EfficientNet
class EfficientNet_Model(torch.nn.Module):
def __init__(self, pretrained=True):
super(EfficientNet_Model, self).__init__()
num_classes = 7
self.model = EfficientNet.from_pretrained("efficientnet-b2")
num_ftrs=self.model._fc.in_features
self.model._fc = nn.Sequential(nn.Dropout(0.2), nn.Linear(num_ftrs, num_classes))
#self.model.fc=nn.Linear(512,num_classes)
def forward(self, x):
output = self.model(x)
return output
class Resnext_Model(torch.nn.Module):
def __init__(self, pretrained=True):
super(Resnext_Model, self).__init__()
#num_classes = 10
#new setting
num_classes = 7
self.model = models.resnext50_32x4d(pretrained=True)
#self.model.conv1=nn.Conv2d(2,64,kernel_size=(3,3),stride=(2,2),padding=(3,3),bias=False)
#self.model.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,bias=False)
num_ftrs=self.model.fc.in_features
self.model.fc = nn.Sequential(nn.Dropout(0.5), nn.Linear(num_ftrs, num_classes))
#self.model.fc=nn.Linear(num_ftrs,512)
#self.model.fc=nn.Linear(512,num_classes)
def forward(self, x):
output = self.model(x)
return output
class Densenet_Model(torch.nn.Module):
def __init__(self, pretrained=True):
super(Densenet_Model, self).__init__()
#num_classes = 10
#new setting
num_classes = 7
self.model = models.densenet121(pretrained=True)
#self.model.conv1=nn.Conv2d(2,64,kernel_size=(3,3),stride=(2,2),padding=(3,3),bias=False)
#self.model.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,bias=False)
#num_ftrs=self.model.fc.in_features
#self.model.fc = nn.Sequential(nn.Dropout(0.3), nn.Linear(num_ftrs, num_classes))
self.model.classifier = nn.Linear(1024, num_classes)
#self.model.fc=nn.Linear(num_ftrs,512)
#self.model.fc=nn.Linear(512,num_classes)
def forward(self, x):
output = self.model(x)
return output | 38.258621 | 97 | 0.666967 | 324 | 2,219 | 4.376543 | 0.182099 | 0.139633 | 0.085331 | 0.073343 | 0.806065 | 0.782793 | 0.782793 | 0.782793 | 0.758815 | 0.758815 | 0 | 0.044557 | 0.200991 | 2,219 | 58 | 98 | 38.258621 | 0.755217 | 0.316359 | 0 | 0.441176 | 0 | 0 | 0.00998 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.176471 | false | 0 | 0.147059 | 0 | 0.5 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
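A quick smoke test for the wrappers above; the 260x260 input resolution is an arbitrary choice, and pretrained weights are downloaded on first use.

import torch

model = Resnext_Model(pretrained=True)
model.eval()
with torch.no_grad():
    dummy = torch.randn(1, 3, 260, 260)  # batch of one RGB image
    logits = model(dummy)
print(logits.shape)  # expected: torch.Size([1, 7])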
44b4a2832e0889d31fcc4cc36f18895d4b3a86ba | 200 | py | Python | venv/Lib/site-packages/IPython/utils/daemonize.py | ajayiagbebaku/NFL-Model | afcc67a85ca7138c58c3334d45988ada2da158ed | [
"MIT"
] | 6,989 | 2017-07-18T06:23:18.000Z | 2022-03-31T15:58:36.000Z | venv/Lib/site-packages/IPython/utils/daemonize.py | ajayiagbebaku/NFL-Model | afcc67a85ca7138c58c3334d45988ada2da158ed | [
"MIT"
] | 1,978 | 2017-07-18T09:17:58.000Z | 2022-03-31T14:28:43.000Z | venv/Lib/site-packages/IPython/utils/daemonize.py | ajayiagbebaku/NFL-Model | afcc67a85ca7138c58c3334d45988ada2da158ed | [
"MIT"
] | 1,228 | 2017-07-18T09:03:13.000Z | 2022-03-29T05:57:40.000Z | from warnings import warn
warn("IPython.utils.daemonize has moved to ipyparallel.apps.daemonize since IPython 4.0", DeprecationWarning, stacklevel=2)
from ipyparallel.apps.daemonize import daemonize
| 40 | 123 | 0.835 | 27 | 200 | 6.185185 | 0.666667 | 0.179641 | 0.287425 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016575 | 0.095 | 200 | 4 | 124 | 50 | 0.906077 | 0 | 0 | 0 | 0 | 0 | 0.405 | 0.245 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
78377c6d5c4fc06c7d1811473141049acfd1bb17 | 6,179 | py | Python | sc-projects/my_photoshop/blur.py | wangyuhsin/sc-projects | c136e48521893aaf1fefa5bec82fc874ca547b72 | [
"MIT"
] | null | null | null | sc-projects/my_photoshop/blur.py | wangyuhsin/sc-projects | c136e48521893aaf1fefa5bec82fc874ca547b72 | [
"MIT"
] | null | null | null | sc-projects/my_photoshop/blur.py | wangyuhsin/sc-projects | c136e48521893aaf1fefa5bec82fc874ca547b72 | [
"MIT"
] | null | null | null | """
File: blur.py
-------------------------------
This file shows the original image (smiley-face.png)
first, and then its blurred image. The blur algorithm
uses the average RGB values of a pixel's nearest neighbors.
"""
from simpleimage import SimpleImage
def blur(old_img):
"""
    :param old_img: the SimpleImage to blur
    :return: a new SimpleImage where each pixel is the average of its
             in-bounds 3x3 neighborhood in old_img
"""
new_img = SimpleImage.blank(old_img.width, old_img.height)
for x in range(old_img.width):
for y in range(old_img.height):
if (x > 0) and (x < (old_img.width-1)) and (y > 0) and (y < (old_img.height-1)):
new_pixel = new_img.get_pixel(x, y)
red = 0
green = 0
blue = 0
for i in range(x - 1, x + 2):
for j in range(y - 1, y + 2):
red += old_img.get_pixel(i, j).red
green += old_img.get_pixel(i, j).green
blue += old_img.get_pixel(i, j).blue
new_pixel.red = (red // 9)
new_pixel.green = (green // 9)
new_pixel.blue = (blue // 9)
elif (x == 0) and (y > 0) and (y < (old_img.height - 1)):
new_pixel = new_img.get_pixel(x, y)
red = 0
green = 0
blue = 0
for i in range(2):
for j in range(y - 1, y + 2):
red += old_img.get_pixel(i, j).red
green += old_img.get_pixel(i, j).green
blue += old_img.get_pixel(i, j).blue
new_pixel.red = (red // 6)
new_pixel.green = (green // 6)
new_pixel.blue = (blue // 6)
elif (x == (old_img.width - 1)) and (y > 0) and (y < (old_img.height - 1)):
new_pixel = new_img.get_pixel(x, y)
red = 0
green = 0
blue = 0
for i in range(x - 1, x + 1):
for j in range(y - 1, y + 2):
red += old_img.get_pixel(i, j).red
green += old_img.get_pixel(i, j).green
blue += old_img.get_pixel(i, j).blue
new_pixel.red = (red // 6)
new_pixel.green = (green // 6)
new_pixel.blue = (blue // 6)
elif (x > 0) and (x < (old_img.width - 1)) and y == 0:
new_pixel = new_img.get_pixel(x, y)
red = 0
green = 0
blue = 0
for i in range(x - 1, x + 2):
for j in range(2):
red += old_img.get_pixel(i, j).red
green += old_img.get_pixel(i, j).green
blue += old_img.get_pixel(i, j).blue
new_pixel.red = (red // 6)
new_pixel.green = (green // 6)
new_pixel.blue = (blue // 6)
elif (x > 0) and (x < old_img.width - 1) and (y == old_img.height - 1):
new_pixel = new_img.get_pixel(x, y)
red = 0
green = 0
blue = 0
for i in range(x - 1, x + 2):
for j in range(y - 1, y + 1):
red += old_img.get_pixel(i, j).red
green += old_img.get_pixel(i, j).green
blue += old_img.get_pixel(i, j).blue
new_pixel.red = (red // 6)
new_pixel.green = (green // 6)
new_pixel.blue = (blue // 6)
elif (x == 0) and (y == 0):
new_pixel = new_img.get_pixel(x, y)
red = 0
green = 0
blue = 0
for i in range(2):
for j in range(2):
red += old_img.get_pixel(i, j).red
green += old_img.get_pixel(i, j).green
blue += old_img.get_pixel(i, j).blue
new_pixel.red = (red // 4)
new_pixel.green = (green // 4)
new_pixel.blue = (blue // 4)
elif (x == 0) and (y == old_img.height - 1):
new_pixel = new_img.get_pixel(x, y)
red = 0
green = 0
blue = 0
for i in range(2):
for j in range(y - 1, y + 1):
red += old_img.get_pixel(i, j).red
green += old_img.get_pixel(i, j).green
blue += old_img.get_pixel(i, j).blue
new_pixel.red = (red // 4)
new_pixel.green = (green // 4)
new_pixel.blue = (blue // 4)
elif (x == old_img.width - 1) and (y == 0):
new_pixel = new_img.get_pixel(x, y)
red = 0
green = 0
blue = 0
for i in range(x-1, x+1):
for j in range(2):
red += old_img.get_pixel(i, j).red
green += old_img.get_pixel(i, j).green
blue += old_img.get_pixel(i, j).blue
new_pixel.red = (red // 4)
new_pixel.green = (green // 4)
new_pixel.blue = (blue // 4)
elif (x == old_img.width - 1) and (y == old_img.height - 1):
new_pixel = new_img.get_pixel(x, y)
red = 0
green = 0
blue = 0
for i in range(x - 1, x + 1):
for j in range(y-1, y+1):
red += old_img.get_pixel(i, j).red
green += old_img.get_pixel(i, j).green
blue += old_img.get_pixel(i, j).blue
new_pixel.red = (red // 4)
new_pixel.green = (green // 4)
new_pixel.blue = (blue // 4)
return new_img
def main():
"""
    Shows the original smiley-face image, then applies the blur ten
    times and shows the blurred result.
"""
old_img = SimpleImage("images/smiley-face.png")
old_img.show()
blurred_img = blur(old_img)
for i in range(9):
blurred_img = blur(blurred_img)
blurred_img.show()
if __name__ == '__main__':
main()
| 39.356688 | 92 | 0.415278 | 817 | 6,179 | 2.965728 | 0.077111 | 0.118861 | 0.163434 | 0.156005 | 0.799009 | 0.799009 | 0.796533 | 0.796533 | 0.796533 | 0.796533 | 0 | 0.032821 | 0.462534 | 6,179 | 156 | 93 | 39.608974 | 0.696778 | 0.039165 | 0 | 0.795455 | 0 | 0 | 0.005089 | 0.003732 | 0 | 0 | 0 | 0.00641 | 0 | 1 | 0.015152 | false | 0 | 0.007576 | 0 | 0.030303 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
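The nine edge and corner branches in blur all compute the same clamped-window average. A behaviorally equivalent sketch that collapses them into a single clamped loop, using the same SimpleImage API as the file:

def blur_clamped(old_img):
    # Average each pixel with its in-bounds neighbors; clamping the window
    # to the image bounds replaces the nine explicit edge/corner cases.
    new_img = SimpleImage.blank(old_img.width, old_img.height)
    for x in range(old_img.width):
        for y in range(old_img.height):
            red = green = blue = count = 0
            for i in range(max(0, x - 1), min(old_img.width, x + 2)):
                for j in range(max(0, y - 1), min(old_img.height, y + 2)):
                    neighbor = old_img.get_pixel(i, j)
                    red += neighbor.red
                    green += neighbor.green
                    blue += neighbor.blue
                    count += 1
            pixel = new_img.get_pixel(x, y)
            pixel.red = red // count
            pixel.green = green // count
            pixel.blue = blue // count
    return new_img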
78981dc2d42e71735ea5d98446eb3de888220988 | 428 | py | Python | lib/node_types/esp/pre_extra_script.py | WhereIsTheExit/iotempower | 9079ff9bc42b6a456bc016d638de0713feb49c62 | [
"MIT"
] | 7 | 2019-05-22T19:05:27.000Z | 2022-01-19T09:34:24.000Z | lib/node_types/esp/pre_extra_script.py | WhereIsTheExit/iotempower | 9079ff9bc42b6a456bc016d638de0713feb49c62 | [
"MIT"
] | 20 | 2019-06-13T09:41:02.000Z | 2022-01-21T10:13:51.000Z | lib/node_types/esp/pre_extra_script.py | WhereIsTheExit/iotempower | 9079ff9bc42b6a456bc016d638de0713feb49c62 | [
"MIT"
] | 12 | 2019-06-04T09:18:13.000Z | 2022-01-13T10:09:31.000Z | Import("env")
# access to global construction environment
#print env
# Dump construction environment (for debug purpose)
#print env.Dump()
import os
# now in environment variable PLATFORMIO_BUILD_CACHE_DIR
# env.CacheDir(os.environ['IOTEMPOWER_COMPILE_CACHE']+'/scons')
# # Dump construction environment (for debug purpose)
# print "=========== env dump ========"
# print env.Dump()
# print "=========== env dump end ===="
| 23.777778 | 63 | 0.691589 | 52 | 428 | 5.596154 | 0.5 | 0.137457 | 0.206186 | 0.206186 | 0.453608 | 0.453608 | 0.371134 | 0.371134 | 0.371134 | 0 | 0 | 0 | 0.133178 | 428 | 17 | 64 | 25.176471 | 0.784367 | 0.880841 | 0 | 0 | 0 | 0 | 0.076923 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
78d3bb47c9dafdcf39dfeb8f3a09cecb7ee8f516 | 96 | py | Python | venv/lib/python3.8/site-packages/cachy/contracts/__init__.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/pytzdata/commands/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/pytzdata/commands/__init__.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/3b/d0/93/d41d85f9c541f1e953d04a0225b82471f7e3be59aab5ea9abece208838 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.40625 | 0 | 96 | 1 | 96 | 96 | 0.489583 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
154c6157d3c6d621f806bcfd006ce4d3364026e3 | 44 | py | Python | cumulocitypython/__init__.py | jaks6/cumulocitypython | e3871058b000bbbc0dfa6e264d7c976f2ccb93b3 | [
"MIT"
] | null | null | null | cumulocitypython/__init__.py | jaks6/cumulocitypython | e3871058b000bbbc0dfa6e264d7c976f2ccb93b3 | [
"MIT"
] | null | null | null | cumulocitypython/__init__.py | jaks6/cumulocitypython | e3871058b000bbbc0dfa6e264d7c976f2ccb93b3 | [
"MIT"
] | 2 | 2020-11-05T20:30:07.000Z | 2020-12-01T21:19:06.000Z | from .connection import CumulocityConnection | 44 | 44 | 0.909091 | 4 | 44 | 10 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 44 | 1 | 44 | 44 | 0.97561 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
155709723fff20e5d8e165ef0e3c9f881349d703 | 113 | py | Python | src/wai/spectralio/mixins/__init__.py | waikato-datamining/wai-spectral-io | a0edba2208b0b646ed54782cb0832ce10eed0d5e | [
"MIT"
] | null | null | null | src/wai/spectralio/mixins/__init__.py | waikato-datamining/wai-spectral-io | a0edba2208b0b646ed54782cb0832ce10eed0d5e | [
"MIT"
] | 3 | 2020-07-01T01:54:03.000Z | 2020-12-02T07:47:30.000Z | src/wai/spectralio/mixins/__init__.py | waikato-datamining/wai-spectral-io | a0edba2208b0b646ed54782cb0832ce10eed0d5e | [
"MIT"
] | null | null | null | from ._LocaleOptionsMixin import LocaleOptionsMixin
from ._ProductCodeOptionsMixin import ProductCodeOptionsMixin | 56.5 | 61 | 0.920354 | 8 | 113 | 12.75 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.061947 | 113 | 2 | 61 | 56.5 | 0.962264 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
156af6938e35bfb409103f010481273ef4eb6fa9 | 233 | py | Python | pyKey/__init__.py | andohuman/pyWinKey | 6e25d5ef355b4c4568147df04714c04c12393658 | [
"MIT"
] | 40 | 2019-08-18T07:02:06.000Z | 2021-09-19T18:04:35.000Z | pyKey/__init__.py | andohuman/pyWinKey | 6e25d5ef355b4c4568147df04714c04c12393658 | [
"MIT"
] | 2 | 2019-08-21T21:58:45.000Z | 2021-06-06T16:54:33.000Z | pyKey/__init__.py | andohuman/pyWinKey | 6e25d5ef355b4c4568147df04714c04c12393658 | [
"MIT"
] | 3 | 2020-12-31T08:33:19.000Z | 2021-06-14T19:13:07.000Z | import sys
if sys.platform == 'linux':
from pyKey.linux import pressKey, releaseKey, press, showKeys, sendSequence
elif sys.platform == 'win32':
from pyKey.windows import pressKey, releaseKey, press, showKeys, sendSequence
| 29.125 | 81 | 0.751073 | 28 | 233 | 6.25 | 0.535714 | 0.125714 | 0.274286 | 0.331429 | 0.56 | 0.56 | 0 | 0 | 0 | 0 | 0 | 0.010152 | 0.154506 | 233 | 7 | 82 | 33.285714 | 0.878173 | 0 | 0 | 0 | 0 | 0 | 0.043103 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
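A usage sketch for the platform-dispatching exports above; the key name and the press/hold/release pattern are assumptions based on the exported function names rather than documented pyKey behavior.

import time
from pyKey import pressKey, releaseKey

pressKey('w')    # hold the key down ('w' is a placeholder key name)
time.sleep(1)
releaseKey('w')  # release it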
15bc9be9e8caf74939ef085aba54822cafcbc7cf | 183 | py | Python | tests/context.py | ulysseslizarraga/instagram_smart_nine | 9bd48b37e2e6998547a6e33deb032ea5c03a6e70 | [
"MIT"
] | 1 | 2020-12-31T00:37:41.000Z | 2020-12-31T00:37:41.000Z | tests/context.py | ulysseslizarraga/instagram_smart_nine | 9bd48b37e2e6998547a6e33deb032ea5c03a6e70 | [
"MIT"
] | null | null | null | tests/context.py | ulysseslizarraga/instagram_smart_nine | 9bd48b37e2e6998547a6e33deb032ea5c03a6e70 | [
"MIT"
] | null | null | null | import sys
from os import path
sys.path.append( path.dirname( path.dirname( path.abspath(__file__))))
currDir = path.dirname( path.dirname( path.abspath(__file__)))
import smart_nine | 30.5 | 70 | 0.781421 | 27 | 183 | 4.962963 | 0.444444 | 0.328358 | 0.447761 | 0.328358 | 0.552239 | 0.552239 | 0.552239 | 0 | 0 | 0 | 0 | 0 | 0.092896 | 183 | 6 | 71 | 30.5 | 0.807229 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.6 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |