hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4df5ee8a79d0ffc0a08b2bf77a127060d827eb92 | 64 | py | Python | vscvs/models/convolutional/__init__.py | fcoclavero/vscvs | 27fab0bc62fb68da044cf6f2516e3c1853f77533 | [
"MIT"
] | 1 | 2019-07-02T19:07:15.000Z | 2019-07-02T19:07:15.000Z | vscvs/models/convolutional/__init__.py | fcoclavero/vscvs | 27fab0bc62fb68da044cf6f2516e3c1853f77533 | [
"MIT"
] | 2 | 2019-10-23T18:05:37.000Z | 2020-09-25T14:16:25.000Z | vscvs/models/convolutional/__init__.py | fcoclavero/vscvs | 27fab0bc62fb68da044cf6f2516e3c1853f77533 | [
"MIT"
] | null | null | null | from .cnn import *
from .resnet import *
from .resnext import *
| 16 | 22 | 0.71875 | 9 | 64 | 5.111111 | 0.555556 | 0.434783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1875 | 64 | 3 | 23 | 21.333333 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
150e88305afde54d026780b7add24d9936727f86 | 43 | py | Python | cana/datasets/__init__.py | cana-asteroids/cana | afb7b0155cda8578c487d43a1a665f8823486633 | [
"MIT"
] | 6 | 2019-11-24T16:30:40.000Z | 2021-08-23T19:29:43.000Z | cana/datasets/__init__.py | cana-asteroids/cana | afb7b0155cda8578c487d43a1a665f8823486633 | [
"MIT"
] | 1 | 2020-06-18T17:23:24.000Z | 2021-09-08T16:51:50.000Z | cana/datasets/__init__.py | depra/cana | 95a3af2f72195befcb6bf7a65eb4fecbfdf52f5b | [
"MIT"
] | null | null | null | from .specdata import getspectrum, listdata | 43 | 43 | 0.860465 | 5 | 43 | 7.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.948718 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1292699e82ba9b4f8cffb2e016293fea148be84f | 13,136 | py | Python | tests/test_product.py | ywalakamar/store-manager-api-v2-revisted | ab326735f702719ee884263f5eb395b9e7a50011 | [
"Apache-2.0"
] | null | null | null | tests/test_product.py | ywalakamar/store-manager-api-v2-revisted | ab326735f702719ee884263f5eb395b9e7a50011 | [
"Apache-2.0"
] | 91 | 2019-01-02T13:12:50.000Z | 2019-09-04T22:45:17.000Z | tests/test_product.py | ywalakamar/store-manager-api-v2-revisted | ab326735f702719ee884263f5eb395b9e7a50011 | [
"Apache-2.0"
] | null | null | null | from .base import *
class ProductTestCase(BaseTestCase):
""" This class represents the product test case """
def setUp(self):
super(ProductTestCase, self).setUp()
def test_create_a_product(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = create_product(self, product, token)
self.assertEqual(resp.status_code, 201)
def test_create_an_existing_product(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = create_product(self, product, token)
self.assertEqual(resp.status_code, 400)
response = json.loads(resp.data.decode())
self.assertTrue(
response['message'] == "Sorry, such a product already exists, please confirm its category")
def test_create_product_with_missing_product_name_key(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = create_product(self, product_without_product_name_key, token)
response = json.loads(resp.data.decode())
self.assertTrue(response['message'] == "'product_name' key missing")
def test_create_product_with_missing_product_category_key(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = create_product(self, product_without_category_key, token)
response = json.loads(resp.data.decode())
self.assertTrue(response['message'] == "'category' key missing")
def test_create_product_with_missing_quantity_key(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = create_product(self, product_without_quantity_key, token)
response = json.loads(resp.data.decode())
self.assertTrue(response['message'] == "'quantity' key missing")
def test_create_product_with_missing_price_key(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = create_product(self, product_without_price_key, token)
response = json.loads(resp.data.decode())
self.assertTrue(response['message'] == "'unit_price' key missing")
def test_create_product_with_an_empty_value(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = resp = create_product(self, product_with_an_empty_value, token)
self.assertEqual(resp.status_code, 400)
response = json.loads(resp.data.decode())
self.assertTrue(
response['message'] == "Sorry, there's an empty value, please check your input values")
def test_create_product_with_non_string_product_name(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = create_product(
self, product_with_non_string_product_name, token)
self.assertEqual(resp.status_code, 400)
response = json.loads(resp.data.decode())
self.assertTrue(response['message'] == "A product name's value must be a string")
def test_create_product_with_non_string_category(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = create_product(self, product_with_non_string_category, token)
self.assertEqual(resp.status_code, 400)
response = json.loads(resp.data.decode())
self.assertTrue(response['message'] ==
"A category's value must be a string")
def test_create_product_with_non_integer_quantity(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = create_product(self, product_with_non_integer_quantity, token)
self.assertEqual(resp.status_code, 400)
response = json.loads(resp.data.decode())
self.assertTrue(response['message'] ==
"A quantity's value must be an integer")
def test_create_product_with_non_positive_integer_quantity(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = create_product(self, product_with_non_positive_integer_quantity, token)
self.assertEqual(resp.status_code, 400)
response = json.loads(resp.data.decode())
self.assertTrue(response['message'] ==
"A quantity's value must be a positive integer")
def test_create_product_with_non_float_price(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = create_product(self, product_with_non_float_price, token)
self.assertEqual(resp.status_code, 400)
response = json.loads(resp.data.decode())
self.assertTrue(
response['message'] == "A price's value must be of float data type")
def test_create_product_with_non_positive_float_price(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = create_product(
self, product_with_non_positive_float_price, token)
self.assertEqual(resp.status_code, 400)
response = json.loads(resp.data.decode())
self.assertTrue(response['message'] ==
"A price's value must be a positive float")
def test_get_all_products(self):
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
create_product(self, product, token)
response = get_all_products(self, token)
self.assertEqual(response.status_code, 200)
response = json.loads(response.data.decode())
self.assertTrue(response['message'] == "Success")
def test_xget_non_existing_products(self):
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
response = get_all_products(self, token)
self.assertEqual(response.status_code, 404)
response = json.loads(response.data.decode())
self.assertTrue(response['message'] == "No product record(s) available")
def test_get_specific_product(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = create_product(self, product, token)
response = get_specific_product(self, token)
self.assertEqual(response.status_code, 200)
def test_xget_non_existing_specific_product(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = get_non_existing_product(self, token)
self.assertEqual(resp.status_code, 404)
response = json.loads(resp.data.decode())
self.assertTrue(response['message'] == "Sorry, such a product does not exist")
def test_update_product(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = product_update(self, update_product, token)
response_data = json.loads(resp.data.decode())
self.assertEqual(response_data['message'], "Update successful")
def test_update_non_existent_product(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = product_update(self, update_non_existent_product, token)
response_data = json.loads(resp.data.decode())
self.assertEqual(response_data['message'], "Sorry, such a product does not exist")
def test_update_product_with_missing_product_name_key(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = product_update(
self, update_product_without_product_name_key, token)
response = json.loads(resp.data.decode())
self.assertTrue(response['message'] == "'product_name' key missing")
def test_update_product_with_missing_category_key(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = product_update(self, update_product_without_category_key, token)
response = json.loads(resp.data.decode())
self.assertTrue(response['message'] == "'category' key missing")
def test_update_product_with_missing_quantity_key(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = product_update(self, update_product_without_quantity_key, token)
response = json.loads(resp.data.decode())
self.assertTrue(response['message'] == "'quantity' key missing")
def test_update_product_with_missing_price_key(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = product_update(self, update_product_without_price_key, token)
response = json.loads(resp.data.decode())
self.assertTrue(response['message'] == "'unit_price' key missing")
def test_xdelete_product(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = delete_specific_product(self, token)
self.assertEqual(resp.status_code, 200)
response_data = json.loads(resp.data.decode())
self.assertEqual(response_data['message'], "delete operation successful!")
def test_xxdelete_product_not_existing(self):
user_registration(self, test_admin_user)
admin_login = user_login(self, admin_user_login)
response_content = json.loads(admin_login.data.decode('utf-8'))
token = response_content["access_token"]
resp = delete_specific_product(self, token)
self.assertEqual(resp.status_code, 404)
response_data = json.loads(resp.data.decode())
self.assertEqual(response_data['message'], "Sorry, such a product does not exist!")
def teardown(self):
super(ProductTestCase, self).teardown()
# Make the tests conveniently executable
if __name__ == "__main__":
unittest.main()
| 49.946768 | 103 | 0.6993 | 1,645 | 13,136 | 5.260182 | 0.063222 | 0.057783 | 0.040448 | 0.054894 | 0.933665 | 0.923957 | 0.913556 | 0.8879 | 0.874032 | 0.874032 | 0 | 0.006642 | 0.197701 | 13,136 | 262 | 104 | 50.137405 | 0.814404 | 0.006395 | 0 | 0.68559 | 0 | 0 | 0.102491 | 0 | 0 | 0 | 0 | 0 | 0.165939 | 1 | 0.117904 | false | 0 | 0.004367 | 0 | 0.126638 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4222d05a98a983b4e6280544a03fb0a0dd57dfaf | 172 | py | Python | oabutton/apps/web/templatetags/oafilters.py | OAButton/OAButton_old | c67802226bccdf9da941b17ae22225393731af29 | [
"MIT"
] | 1 | 2015-01-29T16:20:11.000Z | 2015-01-29T16:20:11.000Z | oabutton/apps/web/templatetags/oafilters.py | OAButton/OAButton_old | c67802226bccdf9da941b17ae22225393731af29 | [
"MIT"
] | 1 | 2020-09-01T17:12:51.000Z | 2020-09-01T17:12:51.000Z | oabutton/apps/web/templatetags/oafilters.py | OAButton/OAButton_old | c67802226bccdf9da941b17ae22225393731af29 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
import pyjade
@pyjade.register_filter('label_with_classes')
def label_with_classes(value, arg):
return value.label_tag(attrs={'class': arg})
| 19.111111 | 48 | 0.715116 | 24 | 172 | 4.875 | 0.708333 | 0.153846 | 0.273504 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006623 | 0.122093 | 172 | 8 | 49 | 21.5 | 0.768212 | 0.122093 | 0 | 0 | 0 | 0 | 0.154362 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
4224c75f83edd0227eb01baf9e6e954d5256eab8 | 104 | py | Python | bitmovin_api_sdk/encoding/configurations/video/h265/customdata/__init__.py | jaythecaesarean/bitmovin-api-sdk-python | 48166511fcb9082041c552ace55a9b66cc59b794 | [
"MIT"
] | 11 | 2019-07-03T10:41:16.000Z | 2022-02-25T21:48:06.000Z | bitmovin_api_sdk/encoding/configurations/video/h265/customdata/__init__.py | jaythecaesarean/bitmovin-api-sdk-python | 48166511fcb9082041c552ace55a9b66cc59b794 | [
"MIT"
] | 8 | 2019-11-23T00:01:25.000Z | 2021-04-29T12:30:31.000Z | bitmovin_api_sdk/encoding/configurations/video/h265/customdata/__init__.py | jaythecaesarean/bitmovin-api-sdk-python | 48166511fcb9082041c552ace55a9b66cc59b794 | [
"MIT"
] | 13 | 2020-01-02T14:58:18.000Z | 2022-03-26T12:10:30.000Z | from bitmovin_api_sdk.encoding.configurations.video.h265.customdata.customdata_api import CustomdataApi
| 52 | 103 | 0.903846 | 13 | 104 | 7 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.03 | 0.038462 | 104 | 1 | 104 | 104 | 0.88 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4241e0dd844bd0193ef920a9b10288b0cc43e52d | 11 | py | Python | python/testData/joinLines/Tuple-after.py | jnthn/intellij-community | 8fa7c8a3ace62400c838e0d5926a7be106aa8557 | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/joinLines/Tuple-after.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 173 | 2018-07-05T13:59:39.000Z | 2018-08-09T01:12:03.000Z | python/testData/joinLines/Tuple-after.py | Cyril-lamirand/intellij-community | 60ab6c61b82fc761dd68363eca7d9d69663cfa39 | [
"Apache-2.0"
] | 2 | 2020-03-15T08:57:37.000Z | 2020-04-07T04:48:14.000Z | a = (1, 2)
| 5.5 | 10 | 0.272727 | 3 | 11 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.285714 | 0.363636 | 11 | 1 | 11 | 11 | 0.142857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
424b7092d8392c37ed0773671eb85f7c2dd3ba7e | 94 | py | Python | utils/lit/tests/Inputs/shtest-timeout/short.py | kpdev/llvm-tnt | d81ccf6fad01597f0143a1598d94d7d29002cf41 | [
"MIT"
] | 1,073 | 2017-06-28T05:11:54.000Z | 2022-03-31T12:52:07.000Z | utils/lit/tests/Inputs/shtest-timeout/short.py | rjordans/avr-llvm | 1802f85a455fbbe5fd1b55183bf588e2e4731dc5 | [
"FSFAP"
] | 23 | 2017-07-01T02:22:04.000Z | 2020-10-16T09:42:03.000Z | utils/lit/tests/Inputs/shtest-timeout/short.py | rjordans/avr-llvm | 1802f85a455fbbe5fd1b55183bf588e2e4731dc5 | [
"FSFAP"
] | 244 | 2017-06-28T05:08:57.000Z | 2022-03-13T05:03:12.000Z | # RUN: %{python} %s
from __future__ import print_function
import sys
print("short program")
| 13.428571 | 37 | 0.744681 | 13 | 94 | 5 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148936 | 94 | 6 | 38 | 15.666667 | 0.8125 | 0.180851 | 0 | 0 | 0 | 0 | 0.173333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
424f948139af05bd55fb17705b4728fdf6a8b6a1 | 7,456 | py | Python | src/projects/views.py | MEEM-MLHD/territoire_conseil | a1213575bc4fa12574859aab0dfa90f4eff7c6eb | [
"BSD-3-Clause"
] | null | null | null | src/projects/views.py | MEEM-MLHD/territoire_conseil | a1213575bc4fa12574859aab0dfa90f4eff7c6eb | [
"BSD-3-Clause"
] | null | null | null | src/projects/views.py | MEEM-MLHD/territoire_conseil | a1213575bc4fa12574859aab0dfa90f4eff7c6eb | [
"BSD-3-Clause"
] | null | null | null | import json
import requests
from djgeojson.serializers import Serializer as GeoJSONSerializer
from django.shortcuts import render, redirect
from django.forms import formset_factory, inlineformset_factory, modelformset_factory
from django.views.decorators.cache import never_cache
from django.contrib.gis.geos import GEOSGeometry, GeometryCollection
from .filters import ProjectFilter
from .forms import ProjectForm, ReferentForm, StakeHolderTypeForm, LeaderForm
from .models import Project, Referent, StakeHolderType, Leader, Department, Region
def home(request):
queryset = Project.objects.all().order_by('-update')
f = ProjectFilter(request.GET, queryset=queryset)
geojson = GeoJSONSerializer().serialize(f.qs,
geometry_field='geom',
properties=('name', 'detail_url', 'feature_image', ))
return render(request, 'home.html', {'filter': f, 'geojson': geojson})
def profile(request):
user = request.user
# projects_owner = Project.objects.filter(owner=request.user)
# projects_editor = Project.objects.filter(editors=request.user)
return render(request, 'profile.html', {
'user': user,
# 'projects_owner': projects_owner,
# 'projects_editor': projects_editor
})
def detail(request, pk):
project = Project.objects.get(id=pk)
geojson = GeoJSONSerializer().serialize([project, ],
geometry_field='geom',
properties=('name', 'detail_url', 'feature_image', ))
return render(request, 'detail.html', {
'project': project,
'geojson': geojson,
})
@never_cache
def add(request):
if request.method == 'POST':
form = ProjectForm(request.POST, request.FILES)
if form.is_valid():
project = form.save()
if project.town_insee:
# town_insee
r = requests.get('https://geo.api.gouv.fr/communes?code=%s&fields=contour,departement,region' % (project.town_insee))
data = r.json()[0]
coord = data['contour']
mpoly = GEOSGeometry(json.dumps(coord))
project.geom = GeometryCollection(mpoly)
department_name = data['departement']['nom']
department_insee = data['departement']['code']
departement, created = Department.objects.get_or_create(name=department_name, insee=department_insee)
project.department = departement
region_name = data['region']['nom']
region_insee = data['region']['code']
region, created = Region.objects.get_or_create(name=region_name, insee=region_insee)
project.region = region
elif project.department:
project.geom = project.department.geom
elif project.region:
project.geom = project.region.geom
project.save()
ReferentFormSet = inlineformset_factory(Project, Referent, form=ReferentForm, extra=0, can_delete=True)
referent_formset = ReferentFormSet(request.POST, request.FILES, instance=project)
StakeHolderTypeFormset = inlineformset_factory(Project, StakeHolderType, form=StakeHolderTypeForm, extra=0, can_delete=True)
stakeholdertype_formset = StakeHolderTypeFormset(request.POST, request.FILES, instance=project)
LeaderFormset = inlineformset_factory(Project, Leader, form=LeaderForm, extra=0, can_delete=True)
leader_formset = LeaderFormset(request.POST, request.FILES, instance=project)
if referent_formset.is_valid():
referent_formset.save()
if stakeholdertype_formset.is_valid():
stakeholdertype_formset.save()
if leader_formset.is_valid():
leader_formset.save()
return redirect('detail', pk=project.id)
form = ProjectForm()
referent_formset = inlineformset_factory(Project, Referent, form=ReferentForm, extra=0, can_delete=True)
stakeholdertype_formset = inlineformset_factory(Project, StakeHolderType, form=StakeHolderTypeForm, extra=0, can_delete=True)
leader_formset = inlineformset_factory(Project, Leader, form=LeaderForm, extra=0, can_delete=True)
return render(request, 'add.html', {
'form': form,
'referent_formset': referent_formset,
'stakeholdertype_formset': stakeholdertype_formset,
'leader_formset': leader_formset,
})
@never_cache
def update(request, pk):
project = Project.objects.get(id=pk)
if request.method == 'POST':
form = ProjectForm(request.POST, request.FILES, instance=project)
if form.is_valid():
project = form.save()
if project.town_insee:
# town_insee
r = requests.get('https://geo.api.gouv.fr/communes?code=%s&fields=contour,departement,region' % (project.town_insee))
data = r.json()[0]
coord = data['contour']
mpoly = GEOSGeometry(json.dumps(coord))
project.geom = GeometryCollection(mpoly)
department_name = data['departement']['nom']
department_insee = data['departement']['code']
departement, created = Department.objects.get_or_create(name=department_name, insee=department_insee)
project.department = departement
region_name = data['region']['nom']
region_insee = data['region']['code']
region, created = Region.objects.get_or_create(name=region_name, insee=region_insee)
project.region = region
project.save()
ReferentFormSet = inlineformset_factory(Project, Referent, form=ReferentForm, extra=0, can_delete=True)
referent_formset = ReferentFormSet(request.POST, request.FILES, instance=project)
StakeHolderTypeFormset = inlineformset_factory(Project, StakeHolderType, form=StakeHolderTypeForm, extra=0, can_delete=True)
stakeholdertype_formset = StakeHolderTypeFormset(request.POST, request.FILES, instance=project)
LeaderFormset = inlineformset_factory(Project, Leader, form=LeaderForm, extra=0, can_delete=True)
leader_formset = LeaderFormset(request.POST, request.FILES, instance=project)
if referent_formset.is_valid():
referent_formset.save()
if stakeholdertype_formset.is_valid():
stakeholdertype_formset.save()
if leader_formset.is_valid():
leader_formset.save()
return redirect('detail', pk=project.id)
form = ProjectForm(instance=project)
referent_formset = inlineformset_factory(Project, Referent, form=ReferentForm, extra=0, can_delete=True)
referent_formset = referent_formset(instance=project)
stakeholdertype_formset = inlineformset_factory(Project, StakeHolderType, form=StakeHolderTypeForm, extra=0, can_delete=True)
stakeholdertype_formset = stakeholdertype_formset(instance=project)
leader_formset = inlineformset_factory(Project, Leader, form=LeaderForm, extra=0, can_delete=True)
leader_formset = leader_formset(instance=project)
return render(request, 'update.html', {
'project': project,
'form': form,
'referent_formset': referent_formset,
'stakeholdertype_formset': stakeholdertype_formset,
'leader_formset': leader_formset,
})
| 42.850575 | 136 | 0.668053 | 756 | 7,456 | 6.425926 | 0.152116 | 0.043228 | 0.066694 | 0.037052 | 0.741663 | 0.741663 | 0.741663 | 0.738576 | 0.723343 | 0.723343 | 0 | 0.002437 | 0.22948 | 7,456 | 173 | 137 | 43.098266 | 0.843168 | 0.028568 | 0 | 0.703125 | 0 | 0.015625 | 0.076009 | 0.006357 | 0 | 0 | 0 | 0 | 0 | 1 | 0.039063 | false | 0 | 0.078125 | 0 | 0.171875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4256b59cc947d755d2e9dc35fd3b79b0fc7743bc | 30 | py | Python | Assembly Files/edd project modules/printed Reciept.py | Pranesh6767/E_tax_with_MySQL | 104c7f57c569938439c35f42706edbd575181d34 | [
"MIT"
] | 1 | 2020-10-15T09:09:24.000Z | 2020-10-15T09:09:24.000Z | Assembly Files/edd project modules/printed Reciept.py | Pranesh6767/E_tax_with_MySQL | 104c7f57c569938439c35f42706edbd575181d34 | [
"MIT"
] | null | null | null | Assembly Files/edd project modules/printed Reciept.py | Pranesh6767/E_tax_with_MySQL | 104c7f57c569938439c35f42706edbd575181d34 | [
"MIT"
] | 2 | 2020-10-01T20:56:05.000Z | 2021-09-22T06:38:36.000Z |
import mysql.connector
| 6 | 23 | 0.666667 | 3 | 30 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.3 | 30 | 4 | 24 | 7.5 | 0.952381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c409a393ab672f3f6235e034ab6b24a4ece21de7 | 39 | py | Python | pydemic/__init__.py | Catastropha/pydemic | b5d984d8b500586f6ea13e5435c8904b9d1b57a4 | [
"MIT"
] | 3 | 2019-10-12T12:04:09.000Z | 2021-01-24T06:21:54.000Z | pydemic/__init__.py | Catastropha/pydemic | b5d984d8b500586f6ea13e5435c8904b9d1b57a4 | [
"MIT"
] | null | null | null | pydemic/__init__.py | Catastropha/pydemic | b5d984d8b500586f6ea13e5435c8904b9d1b57a4 | [
"MIT"
] | null | null | null | from .agents import PSOAgent, GWOAgent
| 19.5 | 38 | 0.820513 | 5 | 39 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128205 | 39 | 1 | 39 | 39 | 0.941176 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c425e121af18cde2de103316851c70e185a4d8fb | 96,094 | py | Python | src/frr/tests/topotests/lib/ospf.py | zhouhaifeng/vpe | 9c644ffd561988e5740021ed26e0f7739844353d | [
"Apache-2.0"
] | null | null | null | src/frr/tests/topotests/lib/ospf.py | zhouhaifeng/vpe | 9c644ffd561988e5740021ed26e0f7739844353d | [
"Apache-2.0"
] | null | null | null | src/frr/tests/topotests/lib/ospf.py | zhouhaifeng/vpe | 9c644ffd561988e5740021ed26e0f7739844353d | [
"Apache-2.0"
] | null | null | null | #
# Copyright (c) 2020 by VMware, Inc. ("VMware")
# Used Copyright (c) 2018 by Network Device Education Foundation, Inc.
# ("NetDEF") in this file.
#
# Permission to use, copy, modify, and/or distribute this software
# for any purpose with or without fee is hereby granted, provided
# that the above copyright notice and this permission notice appear
# in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND VMWARE DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL VMWARE BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THIS SOFTWARE.
#
import ipaddress
import sys
from copy import deepcopy
# Import common_config to use commomnly used APIs
from lib.common_config import (
create_common_configurations,
InvalidCLIError,
generate_ips,
retry,
run_frr_cmd,
validate_ip_address,
)
from lib.topolog import logger
from lib.topotest import frr_unicode
################################
# Configure procs
################################
def create_router_ospf(tgen, topo=None, input_dict=None, build=False, load_config=True):
"""
API to configure ospf on router.
Parameters
----------
* `tgen` : Topogen object
* `topo` : json file data
* `input_dict` : Input dict data, required when configuring from testcase
* `build` : Only for initial setup phase this is set as True.
* `load_config` : Loading the config to router this is set as True.
Usage
-----
input_dict = {
"r1": {
"ospf": {
"router_id": "22.22.22.22",
"area": [{ "id": "0.0.0.0", "type": "nssa"}]
}
}
result = create_router_ospf(tgen, topo, input_dict)
Returns
-------
True or False
"""
logger.debug("Entering lib API: create_router_ospf()")
result = False
if topo is None:
topo = tgen.json_topo
if not input_dict:
input_dict = deepcopy(topo)
else:
topo = topo["routers"]
input_dict = deepcopy(input_dict)
for ospf in ["ospf", "ospf6"]:
config_data_dict = {}
for router in input_dict.keys():
if ospf not in input_dict[router]:
logger.debug("Router %s: %s not present in input_dict", router, ospf)
continue
config_data = __create_ospf_global(
tgen, input_dict, router, build, load_config, ospf
)
if config_data:
if router not in config_data_dict:
config_data_dict[router] = config_data
else:
config_data_dict[router].extend(config_data)
try:
result = create_common_configurations(
tgen, config_data_dict, ospf, build, load_config
)
except InvalidCLIError:
logger.error("create_router_ospf (ipv4)", exc_info=True)
result = False
logger.debug("Exiting lib API: create_router_ospf()")
return result
def __create_ospf_global(tgen, input_dict, router, build, load_config, ospf):
"""
Helper API to create ospf global configuration.
Parameters
----------
* `tgen` : Topogen object
* `input_dict` : Input dict data, required when configuring from testcase
* `router` : router to be configured.
* `build` : Only for initial setup phase this is set as True.
* `load_config` : Loading the config to router this is set as True.
* `ospf` : either 'ospf' or 'ospf6'
Usage
-----
input_dict = {
"routers": {
"r1": {
"links": {
"r3": {
"ipv6": "2013:13::1/64",
"ospf6": {
"hello_interval": 1,
"dead_interval": 4,
"network": "point-to-point"
}
}
},
"ospf6": {
"router_id": "1.1.1.1",
"neighbors": {
"r3": {
"area": "1.1.1.1"
}
}
}
}
}
Returns
-------
list of configuration commands
"""
config_data = []
if ospf not in input_dict[router]:
return config_data
logger.debug("Entering lib API: __create_ospf_global()")
ospf_data = input_dict[router][ospf]
del_ospf_action = ospf_data.setdefault("delete", False)
if del_ospf_action:
config_data = ["no router {}".format(ospf)]
return config_data
cmd = "router {}".format(ospf)
config_data.append(cmd)
# router id
router_id = ospf_data.setdefault("router_id", None)
del_router_id = ospf_data.setdefault("del_router_id", False)
if del_router_id:
config_data.append("no {} router-id".format(ospf))
if router_id:
config_data.append("{} router-id {}".format(ospf, router_id))
# log-adjacency-changes
log_adj_changes = ospf_data.setdefault("log_adj_changes", None)
del_log_adj_changes = ospf_data.setdefault("del_log_adj_changes", False)
if del_log_adj_changes:
config_data.append("no log-adjacency-changes detail")
if log_adj_changes:
config_data.append("log-adjacency-changes {}".format(log_adj_changes))
# aggregation timer
aggr_timer = ospf_data.setdefault("aggr_timer", None)
del_aggr_timer = ospf_data.setdefault("del_aggr_timer", False)
if del_aggr_timer:
config_data.append("no aggregation timer")
if aggr_timer:
config_data.append("aggregation timer {}".format(aggr_timer))
# maximum path information
ecmp_data = ospf_data.setdefault("maximum-paths", {})
if ecmp_data:
cmd = "maximum-paths {}".format(ecmp_data)
del_action = ospf_data.setdefault("del_max_path", False)
if del_action:
cmd = "no maximum-paths"
config_data.append(cmd)
# redistribute command
redistribute_data = ospf_data.setdefault("redistribute", {})
if redistribute_data:
for redistribute in redistribute_data:
if "redist_type" not in redistribute:
logger.debug(
"Router %s: 'redist_type' not present in " "input_dict", router
)
else:
cmd = "redistribute {}".format(redistribute["redist_type"])
for red_type in redistribute_data:
if "route_map" in red_type:
cmd = cmd + " route-map {}".format(red_type["route_map"])
del_action = redistribute.setdefault("delete", False)
if del_action:
cmd = "no {}".format(cmd)
config_data.append(cmd)
# area information
area_data = ospf_data.setdefault("area", {})
if area_data:
for area in area_data:
if "id" not in area:
logger.debug(
"Router %s: 'area id' not present in " "input_dict", router
)
else:
cmd = "area {}".format(area["id"])
if "type" in area:
cmd = cmd + " {}".format(area["type"])
del_action = area.setdefault("delete", False)
if del_action:
cmd = "no {}".format(cmd)
config_data.append(cmd)
# def route information
def_rte_data = ospf_data.setdefault("default-information", {})
if def_rte_data:
if "originate" not in def_rte_data:
logger.debug(
"Router %s: 'originate key' not present in " "input_dict", router
)
else:
cmd = "default-information originate"
if "always" in def_rte_data:
cmd = cmd + " always"
if "metric" in def_rte_data:
cmd = cmd + " metric {}".format(def_rte_data["metric"])
if "metric-type" in def_rte_data:
cmd = cmd + " metric-type {}".format(def_rte_data["metric-type"])
if "route-map" in def_rte_data:
cmd = cmd + " route-map {}".format(def_rte_data["route-map"])
del_action = def_rte_data.setdefault("delete", False)
if del_action:
cmd = "no {}".format(cmd)
config_data.append(cmd)
# area interface information for ospf6d only
if ospf == "ospf6":
area_iface = ospf_data.setdefault("neighbors", {})
if area_iface:
for neighbor in area_iface:
if "area" in area_iface[neighbor]:
iface = input_dict[router]["links"][neighbor]["interface"]
cmd = "interface {} area {}".format(
iface, area_iface[neighbor]["area"]
)
if area_iface[neighbor].setdefault("delete", False):
cmd = "no {}".format(cmd)
config_data.append(cmd)
try:
if "area" in input_dict[router]["links"][neighbor]["ospf6"]:
iface = input_dict[router]["links"][neighbor]["interface"]
cmd = "interface {} area {}".format(
iface,
input_dict[router]["links"][neighbor]["ospf6"]["area"],
)
if input_dict[router]["links"][neighbor].setdefault(
"delete", False
):
cmd = "no {}".format(cmd)
config_data.append(cmd)
except KeyError:
pass
# summary information
summary_data = ospf_data.setdefault("summary-address", {})
if summary_data:
for summary in summary_data:
if "prefix" not in summary:
logger.debug(
"Router %s: 'summary-address' not present in " "input_dict",
router,
)
else:
cmd = "summary {}/{}".format(summary["prefix"], summary["mask"])
_tag = summary.setdefault("tag", None)
if _tag:
cmd = "{} tag {}".format(cmd, _tag)
_advertise = summary.setdefault("advertise", True)
if not _advertise:
cmd = "{} no-advertise".format(cmd)
del_action = summary.setdefault("delete", False)
if del_action:
cmd = "no {}".format(cmd)
config_data.append(cmd)
# ospf gr information
gr_data = ospf_data.setdefault("graceful-restart", {})
if gr_data:
if "opaque" in gr_data and gr_data["opaque"]:
cmd = "capability opaque"
if gr_data.setdefault("delete", False):
cmd = "no {}".format(cmd)
config_data.append(cmd)
if "helper enable" in gr_data and not gr_data["helper enable"]:
cmd = "graceful-restart helper enable"
if gr_data.setdefault("delete", False):
cmd = "no {}".format(cmd)
config_data.append(cmd)
elif "helper enable" in gr_data and type(gr_data["helper enable"]) is list:
for rtrs in gr_data["helper enable"]:
cmd = "graceful-restart helper enable {}".format(rtrs)
if gr_data.setdefault("delete", False):
cmd = "no {}".format(cmd)
config_data.append(cmd)
if "helper" in gr_data:
if type(gr_data["helper"]) is not list:
gr_data["helper"] = list(gr_data["helper"])
for helper_role in gr_data["helper"]:
cmd = "graceful-restart helper {}".format(helper_role)
if gr_data.setdefault("delete", False):
cmd = "no {}".format(cmd)
config_data.append(cmd)
if "supported-grace-time" in gr_data:
cmd = "graceful-restart helper supported-grace-time {}".format(
gr_data["supported-grace-time"]
)
if gr_data.setdefault("delete", False):
cmd = "no {}".format(cmd)
config_data.append(cmd)
logger.debug("Exiting lib API: create_ospf_global()")
return config_data
def create_router_ospf6(
tgen, topo=None, input_dict=None, build=False, load_config=True
):
"""
API to configure ospf on router
Parameters
----------
* `tgen` : Topogen object
* `topo` : json file data
* `input_dict` : Input dict data, required when configuring from testcase
* `build` : Only for initial setup phase this is set as True.
Usage
-----
input_dict = {
"r1": {
"ospf6": {
"router_id": "22.22.22.22",
}
}
Returns
-------
True or False
"""
logger.debug("Entering lib API: create_router_ospf6()")
result = False
if topo is None:
topo = tgen.json_topo
if not input_dict:
input_dict = deepcopy(topo)
else:
topo = topo["routers"]
input_dict = deepcopy(input_dict)
config_data_dict = {}
for router in input_dict.keys():
if "ospf6" not in input_dict[router]:
logger.debug("Router %s: 'ospf6' not present in input_dict", router)
continue
config_data = __create_ospf_global(
tgen, input_dict, router, build, load_config, "ospf6"
)
if config_data:
config_data_dict[router] = config_data
try:
result = create_common_configurations(
tgen, config_data_dict, "ospf6", build, load_config
)
except InvalidCLIError:
logger.error("create_router_ospf6", exc_info=True)
result = False
logger.debug("Exiting lib API: create_router_ospf6()")
return result
def config_ospf_interface(
tgen, topo=None, input_dict=None, build=False, load_config=True
):
"""
API to configure ospf on router.
Parameters
----------
* `tgen` : Topogen object
* `topo` : json file data
* `input_dict` : Input dict data, required when configuring from testcase
* `build` : Only for initial setup phase this is set as True.
* `load_config` : Loading the config to router this is set as True.
Usage
-----
r1_ospf_auth = {
"r1": {
"links": {
"r2": {
"ospf": {
"authentication": "message-digest",
"authentication-key": "ospf",
"message-digest-key": "10"
}
}
}
}
}
result = config_ospf_interface(tgen, topo, r1_ospf_auth)
Returns
-------
True or False
"""
logger.debug("Enter lib config_ospf_interface")
result = False
if topo is None:
topo = tgen.json_topo
if not input_dict:
input_dict = deepcopy(topo)
else:
input_dict = deepcopy(input_dict)
config_data_dict = {}
for router in input_dict.keys():
config_data = []
for lnk in input_dict[router]["links"].keys():
if "ospf" not in input_dict[router]["links"][lnk]:
logger.debug(
"Router %s: ospf config is not present in" "input_dict", router
)
continue
ospf_data = input_dict[router]["links"][lnk]["ospf"]
data_ospf_area = ospf_data.setdefault("area", None)
data_ospf_auth = ospf_data.setdefault("authentication", None)
data_ospf_dr_priority = ospf_data.setdefault("priority", None)
data_ospf_cost = ospf_data.setdefault("cost", None)
data_ospf_mtu = ospf_data.setdefault("mtu_ignore", None)
try:
intf = topo["routers"][router]["links"][lnk]["interface"]
except KeyError:
intf = topo["switches"][router]["links"][lnk]["interface"]
# interface
cmd = "interface {}".format(intf)
config_data.append(cmd)
# interface area config
if data_ospf_area:
cmd = "ip ospf area {}".format(data_ospf_area)
config_data.append(cmd)
# interface ospf auth
if data_ospf_auth:
if data_ospf_auth == "null":
cmd = "ip ospf authentication null"
elif data_ospf_auth == "message-digest":
cmd = "ip ospf authentication message-digest"
else:
cmd = "ip ospf authentication"
if "del_action" in ospf_data:
cmd = "no {}".format(cmd)
config_data.append(cmd)
if "message-digest-key" in ospf_data:
cmd = "ip ospf message-digest-key {} md5 {}".format(
ospf_data["message-digest-key"], ospf_data["authentication-key"]
)
if "del_action" in ospf_data:
cmd = "no {}".format(cmd)
config_data.append(cmd)
if (
"authentication-key" in ospf_data
and "message-digest-key" not in ospf_data
):
cmd = "ip ospf authentication-key {}".format(
ospf_data["authentication-key"]
)
if "del_action" in ospf_data:
cmd = "no {}".format(cmd)
config_data.append(cmd)
# interface ospf dr priority
if data_ospf_dr_priority:
cmd = "ip ospf priority {}".format(ospf_data["priority"])
if "del_action" in ospf_data:
cmd = "no {}".format(cmd)
config_data.append(cmd)
# interface ospf cost
if data_ospf_cost:
cmd = "ip ospf cost {}".format(ospf_data["cost"])
if "del_action" in ospf_data:
cmd = "no {}".format(cmd)
config_data.append(cmd)
# interface ospf mtu
if data_ospf_mtu:
cmd = "ip ospf mtu-ignore"
if "del_action" in ospf_data:
cmd = "no {}".format(cmd)
config_data.append(cmd)
if build:
return config_data
if config_data:
config_data_dict[router] = config_data
result = create_common_configurations(
tgen, config_data_dict, "interface_config", build=build
)
logger.debug("Exiting lib API: config_ospf_interface()")
return result
def clear_ospf(tgen, router, ospf=None):
"""
This API is to clear ospf neighborship by running
clear ip ospf interface * command,
Parameters
----------
* `tgen`: topogen object
* `router`: device under test
Usage
-----
clear_ospf(tgen, "r1")
"""
logger.debug("Entering lib API: clear_ospf()")
if router not in tgen.routers():
return False
rnode = tgen.routers()[router]
# Clearing OSPF
if ospf:
version = "ipv6"
else:
version = "ip"
cmd = "clear {} ospf interface".format(version)
logger.info("Clearing ospf process on router %s.. using command '%s'", router, cmd)
run_frr_cmd(rnode, cmd)
logger.debug("Exiting lib API: clear_ospf()")
def redistribute_ospf(tgen, topo, dut, route_type, **kwargs):
"""
Redstribution of routes inside ospf.
Parameters
----------
* `tgen`: Topogen object
* `topo` : json file data
* `dut`: device under test
* `route_type`: "static" or "connected" or ....
* `kwargs`: pass extra information (see below)
Usage
-----
redistribute_ospf(tgen, topo, "r0", "static", delete=True)
redistribute_ospf(tgen, topo, "r0", "static", route_map="rmap_ipv4")
"""
ospf_red = {dut: {"ospf": {"redistribute": [{"redist_type": route_type}]}}}
for k, v in kwargs.items():
ospf_red[dut]["ospf"]["redistribute"][0][k] = v
result = create_router_ospf(tgen, topo, ospf_red)
assert result is True, "Testcase : Failed \n Error: {}".format(result)
################################
# Verification procs
################################
@retry(retry_timeout=80)
def verify_ospf_neighbor(
tgen, topo=None, dut=None, input_dict=None, lan=False, expected=True
):
"""
This API is to verify ospf neighborship by running
show ip ospf neighbour command,
Parameters
----------
* `tgen` : Topogen object
* `topo` : json file data
* `dut`: device under test
* `input_dict` : Input dict data, required when configuring from testcase
* `lan` : verify neighbors in lan topology
* `expected` : expected results from API, by-default True
Usage
-----
1. To check FULL neighbors.
verify_ospf_neighbor(tgen, topo, dut=dut)
2. To check neighbors with their roles.
input_dict = {
"r0": {
"ospf": {
"neighbors": {
"r1": {
"state": "Full",
"role": "DR"
},
"r2": {
"state": "Full",
"role": "DROther"
},
"r3": {
"state": "Full",
"role": "DROther"
}
}
}
}
}
result = verify_ospf_neighbor(tgen, topo, dut, input_dict, lan=True)
Returns
-------
True or False (Error Message)
"""
logger.debug("Entering lib API: verify_ospf_neighbor()")
result = False
if topo is None:
topo = tgen.json_topo
if input_dict:
for router, rnode in tgen.routers().items():
if "ospf" not in topo["routers"][router]:
continue
if dut is not None and dut != router:
continue
logger.info("Verifying OSPF neighborship on router %s:", router)
show_ospf_json = run_frr_cmd(
rnode, "show ip ospf neighbor all json", isjson=True
)
# Verifying output dictionary show_ospf_json is empty or not
if not bool(show_ospf_json):
errormsg = "OSPF is not running"
return errormsg
ospf_data_list = input_dict[router]["ospf"]
ospf_nbr_list = ospf_data_list["neighbors"]
for ospf_nbr, nbr_data in ospf_nbr_list.items():
data_ip = topo["routers"][ospf_nbr]["links"]
data_rid = topo["routers"][ospf_nbr]["ospf"]["router_id"]
if ospf_nbr in data_ip:
nbr_details = nbr_data[ospf_nbr]
elif lan:
for switch in topo["switches"]:
if "ospf" in topo["switches"][switch]["links"][router]:
neighbor_ip = data_ip[switch]["ipv4"].split("/")[0]
else:
continue
else:
neighbor_ip = data_ip[router]["ipv4"].split("/")[0]
nh_state = None
neighbor_ip = neighbor_ip.lower()
nbr_rid = data_rid
try:
nh_state = show_ospf_json[nbr_rid][0]["state"].split("/")[0]
intf_state = show_ospf_json[nbr_rid][0]["state"].split("/")[1]
except KeyError:
errormsg = "[DUT: {}] OSPF peer {} missing".format(router, nbr_rid)
return errormsg
nbr_state = nbr_data.setdefault("state", None)
nbr_role = nbr_data.setdefault("role", None)
if nbr_state:
if nbr_state == nh_state:
logger.info(
"[DUT: {}] OSPF Nbr is {}:{} State {}".format(
router, ospf_nbr, nbr_rid, nh_state
)
)
result = True
else:
errormsg = (
"[DUT: {}] OSPF is not Converged, neighbor"
" state is {}".format(router, nh_state)
)
return errormsg
if nbr_role:
if nbr_role == intf_state:
logger.info(
"[DUT: {}] OSPF Nbr is {}: {} Role {}".format(
router, ospf_nbr, nbr_rid, nbr_role
)
)
else:
errormsg = (
"[DUT: {}] OSPF is not Converged with rid"
"{}, role is {}".format(router, nbr_rid, intf_state)
)
return errormsg
continue
else:
for router, rnode in tgen.routers().items():
if "ospf" not in topo["routers"][router]:
continue
if dut is not None and dut != router:
continue
logger.info("Verifying OSPF neighborship on router %s:", router)
show_ospf_json = run_frr_cmd(
rnode, "show ip ospf neighbor all json", isjson=True
)
# Verifying output dictionary show_ospf_json is empty or not
if not bool(show_ospf_json):
errormsg = "OSPF is not running"
return errormsg
ospf_data_list = topo["routers"][router]["ospf"]
ospf_neighbors = ospf_data_list["neighbors"]
total_peer = 0
total_peer = len(ospf_neighbors.keys())
no_of_ospf_nbr = 0
ospf_nbr_list = ospf_data_list["neighbors"]
no_of_peer = 0
for ospf_nbr, nbr_data in ospf_nbr_list.items():
if nbr_data:
data_ip = topo["routers"][nbr_data["nbr"]]["links"]
data_rid = topo["routers"][nbr_data["nbr"]]["ospf"]["router_id"]
else:
data_ip = topo["routers"][ospf_nbr]["links"]
data_rid = topo["routers"][ospf_nbr]["ospf"]["router_id"]
if ospf_nbr in data_ip:
nbr_details = nbr_data[ospf_nbr]
elif lan:
for switch in topo["switches"]:
if "ospf" in topo["switches"][switch]["links"][router]:
neighbor_ip = data_ip[switch]["ipv4"].split("/")[0]
else:
continue
else:
neighbor_ip = data_ip[router]["ipv4"].split("/")[0]
nh_state = None
neighbor_ip = neighbor_ip.lower()
nbr_rid = data_rid
try:
nh_state = show_ospf_json[nbr_rid][0]["state"].split("/")[0]
except KeyError:
errormsg = "[DUT: {}] OSPF peer {} missing,from " "{} ".format(
router, nbr_rid, ospf_nbr
)
return errormsg
if nh_state == "Full":
no_of_peer += 1
if no_of_peer == total_peer:
logger.info("[DUT: {}] OSPF is Converged".format(router))
result = True
else:
errormsg = "[DUT: {}] OSPF is not Converged".format(router)
return errormsg
logger.debug("Exiting API: verify_ospf_neighbor()")
return result
################################
# Verification procs
################################
@retry(retry_timeout=50)
def verify_ospf6_neighbor(tgen, topo=None, dut=None, input_dict=None, lan=False):
"""
This API is to verify ospf neighborship by running
show ipv6 ospf neighbour command,
Parameters
----------
* `tgen` : Topogen object
* `topo` : json file data
* `dut`: device under test
* `input_dict` : Input dict data, required when configuring from testcase
* `lan` : verify neighbors in lan topology
Usage
-----
1. To check FULL neighbors.
verify_ospf_neighbor(tgen, topo, dut=dut)
2. To check neighbors with their roles.
input_dict = {
"r0": {
"ospf6": {
"neighbors": {
"r1": {
"state": "Full",
"role": "DR"
},
"r2": {
"state": "Full",
"role": "DROther"
},
"r3": {
"state": "Full",
"role": "DROther"
}
}
}
}
}
result = verify_ospf6_neighbor(tgen, topo, dut, input_dict, lan=True)
Returns
-------
True or False (Error Message)
"""
logger.debug("Entering lib API: {}".format(sys._getframe().f_code.co_name))
result = False
if topo is None:
topo = tgen.json_topo
if input_dict:
for router, rnode in tgen.routers().items():
if "ospf6" not in topo["routers"][router]:
continue
if dut is not None and dut != router:
continue
logger.info("Verifying OSPF neighborship on router %s:", router)
show_ospf_json = run_frr_cmd(
rnode, "show ipv6 ospf neighbor json", isjson=True
)
# Verifying output dictionary show_ospf_json is empty or not
if not bool(show_ospf_json):
errormsg = "OSPF6 is not running"
return errormsg
ospf_data_list = input_dict[router]["ospf6"]
ospf_nbr_list = ospf_data_list["neighbors"]
for ospf_nbr, nbr_data in ospf_nbr_list.items():
try:
data_ip = data_rid = topo["routers"][ospf_nbr]["ospf6"]["router_id"]
except KeyError:
data_ip = data_rid = topo["routers"][nbr_data["nbr"]]["ospf6"][
"router_id"
]
if ospf_nbr in data_ip:
nbr_details = nbr_data[ospf_nbr]
elif lan:
for switch in topo["switches"]:
if "ospf6" in topo["switches"][switch]["links"][router]:
neighbor_ip = data_ip
else:
continue
else:
neighbor_ip = data_ip[router]["ipv6"].split("/")[0]
nh_state = None
neighbor_ip = neighbor_ip.lower()
nbr_rid = data_rid
get_index_val = dict(
(d["neighborId"], dict(d, index=index))
for (index, d) in enumerate(show_ospf_json["neighbors"])
)
try:
nh_state = get_index_val.get(neighbor_ip)["state"]
intf_state = get_index_val.get(neighbor_ip)["ifState"]
except TypeError:
errormsg = "[DUT: {}] OSPF peer {} missing,from " "{} ".format(
router, nbr_rid, ospf_nbr
)
return errormsg
nbr_state = nbr_data.setdefault("state", None)
nbr_role = nbr_data.setdefault("role", None)
if nbr_state:
if nbr_state == nh_state:
logger.info(
"[DUT: {}] OSPF6 Nbr is {}:{} State {}".format(
router, ospf_nbr, nbr_rid, nh_state
)
)
result = True
else:
errormsg = (
"[DUT: {}] OSPF6 is not Converged, neighbor"
" state is {} , Expected state is {}".format(
router, nh_state, nbr_state
)
)
return errormsg
if nbr_role:
if nbr_role == intf_state:
logger.info(
"[DUT: {}] OSPF6 Nbr is {}: {} Role {}".format(
router, ospf_nbr, nbr_rid, nbr_role
)
)
else:
errormsg = (
"[DUT: {}] OSPF6 is not Converged with rid"
"{}, role is {}, Expected role is {}".format(
router, nbr_rid, intf_state, nbr_role
)
)
return errormsg
continue
else:
for router, rnode in tgen.routers().items():
if "ospf6" not in topo["routers"][router]:
continue
if dut is not None and dut != router:
continue
logger.info("Verifying OSPF6 neighborship on router %s:", router)
show_ospf_json = run_frr_cmd(
rnode, "show ipv6 ospf neighbor json", isjson=True
)
# Verifying output dictionary show_ospf_json is empty or not
if not bool(show_ospf_json):
errormsg = "OSPF6 is not running"
return errormsg
ospf_data_list = topo["routers"][router]["ospf6"]
ospf_neighbors = ospf_data_list["neighbors"]
total_peer = 0
total_peer = len(ospf_neighbors.keys())
no_of_ospf_nbr = 0
ospf_nbr_list = ospf_data_list["neighbors"]
no_of_peer = 0
for ospf_nbr, nbr_data in ospf_nbr_list.items():
try:
data_ip = data_rid = topo["routers"][ospf_nbr]["ospf6"]["router_id"]
except KeyError:
data_ip = data_rid = topo["routers"][nbr_data["nbr"]]["ospf6"][
"router_id"
]
if ospf_nbr in data_ip:
nbr_details = nbr_data[ospf_nbr]
elif lan:
for switch in topo["switches"]:
if "ospf6" in topo["switches"][switch]["links"][router]:
neighbor_ip = data_ip
else:
continue
else:
neighbor_ip = data_ip
nh_state = None
neighbor_ip = neighbor_ip.lower()
nbr_rid = data_rid
get_index_val = dict(
(d["neighborId"], dict(d, index=index))
for (index, d) in enumerate(show_ospf_json["neighbors"])
)
try:
nh_state = get_index_val.get(neighbor_ip)["state"]
intf_state = get_index_val.get(neighbor_ip)["ifState"]
except TypeError:
errormsg = "[DUT: {}] OSPF peer {} missing,from " "{} ".format(
router, nbr_rid, ospf_nbr
)
return errormsg
if nh_state == "Full":
no_of_peer += 1
if no_of_peer == total_peer:
logger.info("[DUT: {}] OSPF6 is Converged".format(router))
result = True
else:
errormsg = "[DUT: {}] OSPF6 is not Converged".format(router)
return errormsg
logger.debug("Exiting lib API: {}".format(sys._getframe().f_code.co_name))
return result
@retry(retry_timeout=40)
def verify_ospf_rib(
tgen, dut, input_dict, next_hop=None, tag=None, metric=None, fib=None, expected=True
):
"""
This API is to verify ospf routes by running
show ip ospf route command.
Parameters
----------
* `tgen` : Topogen object
* `dut`: device under test
* `input_dict` : Input dict data, required when configuring from testcase
* `next_hop` : next to be verified
* `tag` : tag to be verified
* `metric` : metric to be verified
* `fib` : True if the route is installed in FIB.
* `expected` : expected results from API, by-default True
Usage
-----
input_dict = {
"r1": {
"static_routes": [
{
"network": ip_net,
"no_of_ip": 1,
"routeType": "N"
}
]
}
}
result = verify_ospf_rib(tgen, dut, input_dict,next_hop=nh)
Returns
-------
True or False (Error Message)
"""
logger.info("Entering lib API: verify_ospf_rib()")
result = False
router_list = tgen.routers()
additional_nexthops_in_required_nhs = []
found_hops = []
for routerInput in input_dict.keys():
for router, rnode in router_list.items():
if router != dut:
continue
logger.info("Checking router %s RIB:", router)
# Verifying RIB routes
command = "show ip ospf route"
found_routes = []
missing_routes = []
if (
"static_routes" in input_dict[routerInput]
or "prefix" in input_dict[routerInput]
):
if "prefix" in input_dict[routerInput]:
static_routes = input_dict[routerInput]["prefix"]
else:
static_routes = input_dict[routerInput]["static_routes"]
for static_route in static_routes:
cmd = "{}".format(command)
cmd = "{} json".format(cmd)
ospf_rib_json = run_frr_cmd(rnode, cmd, isjson=True)
# Verifying output dictionary ospf_rib_json is not empty
if bool(ospf_rib_json) is False:
errormsg = (
"[DUT: {}] No routes found in OSPF route "
"table".format(router)
)
return errormsg
network = static_route["network"]
no_of_ip = static_route.setdefault("no_of_ip", 1)
_tag = static_route.setdefault("tag", None)
_rtype = static_route.setdefault("routeType", None)
# Generating IPs for verification
ip_list = generate_ips(network, no_of_ip)
st_found = False
nh_found = False
for st_rt in ip_list:
st_rt = str(ipaddress.ip_network(frr_unicode(st_rt)))
_addr_type = validate_ip_address(st_rt)
if _addr_type != "ipv4":
continue
if st_rt in ospf_rib_json:
st_found = True
found_routes.append(st_rt)
if fib and next_hop:
if type(next_hop) is not list:
next_hop = [next_hop]
for mnh in range(0, len(ospf_rib_json[st_rt])):
if (
"fib"
in ospf_rib_json[st_rt][mnh]["nexthops"][0]
):
found_hops.append(
[
rib_r["ip"]
for rib_r in ospf_rib_json[st_rt][mnh][
"nexthops"
]
]
)
if found_hops[0]:
missing_list_of_nexthops = set(
found_hops[0]
).difference(next_hop)
additional_nexthops_in_required_nhs = set(
next_hop
).difference(found_hops[0])
if additional_nexthops_in_required_nhs:
logger.info(
"Nexthop "
"%s is not active for route %s in "
"RIB of router %s\n",
additional_nexthops_in_required_nhs,
st_rt,
dut,
)
errormsg = (
"Nexthop {} is not active"
" for route {} in RIB of router"
" {}\n".format(
additional_nexthops_in_required_nhs,
st_rt,
dut,
)
)
return errormsg
else:
nh_found = True
elif next_hop and fib is None:
if type(next_hop) is not list:
next_hop = [next_hop]
found_hops = [
rib_r["ip"]
for rib_r in ospf_rib_json[st_rt]["nexthops"]
]
if found_hops:
missing_list_of_nexthops = set(
found_hops
).difference(next_hop)
additional_nexthops_in_required_nhs = set(
next_hop
).difference(found_hops)
if additional_nexthops_in_required_nhs:
logger.info(
"Missing nexthop %s for route"
" %s in RIB of router %s\n",
additional_nexthops_in_required_nhs,
st_rt,
dut,
)
errormsg = (
"Nexthop {} is Missing for "
"route {} in RIB of router {}\n".format(
additional_nexthops_in_required_nhs,
st_rt,
dut,
)
)
return errormsg
else:
nh_found = True
if _rtype:
if "routeType" not in ospf_rib_json[st_rt]:
errormsg = (
"[DUT: {}]: routeType missing"
" for route {} in OSPF RIB \n".format(
dut, st_rt
)
)
return errormsg
elif _rtype != ospf_rib_json[st_rt]["routeType"]:
errormsg = (
"[DUT: {}]: routeType mismatch"
" for route {} in OSPF RIB \n".format(
dut, st_rt
)
)
return errormsg
else:
logger.info(
"[DUT: {}]: Found routeType {}"
" for route {}".format(dut, _rtype, st_rt)
)
if tag:
if "tag" not in ospf_rib_json[st_rt]:
errormsg = (
"[DUT: {}]: tag is not"
" present for"
" route {} in RIB \n".format(dut, st_rt)
)
return errormsg
if _tag != ospf_rib_json[st_rt]["tag"]:
errormsg = (
"[DUT: {}]: tag value {}"
" is not matched for"
" route {} in RIB \n".format(
dut,
_tag,
st_rt,
)
)
return errormsg
if metric is not None:
if "type2cost" not in ospf_rib_json[st_rt]:
errormsg = (
"[DUT: {}]: metric is"
" not present for"
" route {} in RIB \n".format(dut, st_rt)
)
return errormsg
if metric != ospf_rib_json[st_rt]["type2cost"]:
errormsg = (
"[DUT: {}]: metric value "
"{} is not matched for "
"route {} in RIB \n".format(
dut,
metric,
st_rt,
)
)
return errormsg
else:
missing_routes.append(st_rt)
if nh_found:
logger.info(
"[DUT: {}]: Found next_hop {} for all OSPF"
" routes in RIB".format(router, next_hop)
)
if len(missing_routes) > 0:
errormsg = "[DUT: {}]: Missing route in RIB, " "routes: {}".format(
dut, missing_routes
)
return errormsg
if found_routes:
logger.info(
"[DUT: %s]: Verified routes in RIB, found" " routes are: %s\n",
dut,
found_routes,
)
result = True
logger.info("Exiting lib API: verify_ospf_rib()")
return result
@retry(retry_timeout=20)
def verify_ospf_interface(
tgen, topo=None, dut=None, lan=False, input_dict=None, expected=True
):
"""
This API is to verify ospf routes by running
show ip ospf interface command.
Parameters
----------
* `tgen` : Topogen object
* `topo` : topology descriptions
* `dut`: device under test
* `lan`: if set to true this interface belongs to LAN.
* `input_dict` : Input dict data, required when configuring from testcase
* `expected` : expected results from API, by-default True
Usage
-----
input_dict= {
'r0': {
'links':{
's1': {
'ospf':{
'priority':98,
'timerDeadSecs': 4,
'area': '0.0.0.3',
'mcastMemberOspfDesignatedRouters': True,
'mcastMemberOspfAllRouters': True,
'ospfEnabled': True,
}
}
}
}
}
result = verify_ospf_interface(tgen, topo, dut=dut, input_dict=input_dict)
Returns
-------
True or False (Error Message)
"""
logger.debug("Entering lib API: verify_ospf_interface()")
result = False
if topo is None:
topo = tgen.json_topo
for router, rnode in tgen.routers().items():
if "ospf" not in topo["routers"][router]:
continue
if dut is not None and dut != router:
continue
logger.info("Verifying OSPF interface on router %s:", router)
show_ospf_json = run_frr_cmd(rnode, "show ip ospf interface json", isjson=True)
# Verifying output dictionary show_ospf_json is empty or not
if not bool(show_ospf_json):
errormsg = "OSPF is not running"
return errormsg
# To find neighbor ip type
ospf_intf_data = input_dict[router]["links"]
for ospf_intf, intf_data in ospf_intf_data.items():
intf = topo["routers"][router]["links"][ospf_intf]["interface"]
if intf in show_ospf_json["interfaces"]:
for intf_attribute in intf_data["ospf"]:
if (
intf_data["ospf"][intf_attribute]
== show_ospf_json["interfaces"][intf][intf_attribute]
):
logger.info(
"[DUT: %s] OSPF interface %s: %s is %s",
router,
intf,
intf_attribute,
intf_data["ospf"][intf_attribute],
)
else:
errormsg = "[DUT: {}] OSPF interface {}: {} is {}, \
Expected is {}".format(
router,
intf,
intf_attribute,
intf_data["ospf"][intf_attribute],
show_ospf_json["interfaces"][intf][intf_attribute],
)
return errormsg
result = True
logger.debug("Exiting API: verify_ospf_interface()")
return result
@retry(retry_timeout=20)
def verify_ospf_database(tgen, topo, dut, input_dict, expected=True):
"""
This API is to verify ospf lsa's by running
show ip ospf database command.
Parameters
----------
* `tgen` : Topogen object
* `dut`: device under test
* `input_dict` : Input dict data, required when configuring from testcase
* `topo` : next to be verified
* `expected` : expected results from API, by-default True
Usage
-----
input_dict = {
"areas": {
"0.0.0.0": {
"Router Link States": {
"100.1.1.0-100.1.1.0": {
"LSID": "100.1.1.0",
"Advertised router": "100.1.1.0",
"LSA Age": 130,
"Sequence Number": "80000006",
"Checksum": "a703",
"Router links": 3
}
},
"Net Link States": {
"10.0.0.2-100.1.1.1": {
"LSID": "10.0.0.2",
"Advertised router": "100.1.1.1",
"LSA Age": 137,
"Sequence Number": "80000001",
"Checksum": "9583"
}
},
},
}
}
result = verify_ospf_database(tgen, topo, dut, input_dict)
Returns
-------
True or False (Error Message)
"""
result = False
router = dut
logger.debug("Entering lib API: verify_ospf_database()")
if "ospf" not in topo["routers"][dut]:
errormsg = "[DUT: {}] OSPF is not configured on the router.".format(dut)
return errormsg
rnode = tgen.routers()[dut]
logger.info("Verifying OSPF interface on router %s:", dut)
show_ospf_json = run_frr_cmd(rnode, "show ip ospf database json", isjson=True)
# Verifying output dictionary show_ospf_json is empty or not
if not bool(show_ospf_json):
errormsg = "OSPF is not running"
return errormsg
# for inter and inter lsa's
ospf_db_data = input_dict.setdefault("areas", None)
ospf_external_lsa = input_dict.setdefault("AS External Link States", None)
if ospf_db_data:
for ospf_area, area_lsa in ospf_db_data.items():
if ospf_area in show_ospf_json["areas"]:
if "Router Link States" in area_lsa:
for lsa in area_lsa["Router Link States"]:
if (
lsa
in show_ospf_json["areas"][ospf_area]["Router Link States"]
):
logger.info(
"[DUT: %s] OSPF LSDB area %s:Router " "LSA %s",
router,
ospf_area,
lsa,
)
result = True
else:
errormsg = (
"[DUT: {}] OSPF LSDB area {}: expected"
" Router LSA is {}".format(router, ospf_area, lsa)
)
return errormsg
if "Net Link States" in area_lsa:
for lsa in area_lsa["Net Link States"]:
if lsa in show_ospf_json["areas"][ospf_area]["Net Link States"]:
logger.info(
"[DUT: %s] OSPF LSDB area %s:Network " "LSA %s",
router,
ospf_area,
lsa,
)
result = True
else:
errormsg = (
"[DUT: {}] OSPF LSDB area {}: expected"
" Network LSA is {}".format(router, ospf_area, lsa)
)
return errormsg
if "Summary Link States" in area_lsa:
for lsa in area_lsa["Summary Link States"]:
if (
lsa
in show_ospf_json["areas"][ospf_area]["Summary Link States"]
):
logger.info(
"[DUT: %s] OSPF LSDB area %s:Summary " "LSA %s",
router,
ospf_area,
lsa,
)
result = True
else:
errormsg = (
"[DUT: {}] OSPF LSDB area {}: expected"
" Summary LSA is {}".format(router, ospf_area, lsa)
)
return errormsg
if "ASBR-Summary Link States" in area_lsa:
for lsa in area_lsa["ASBR-Summary Link States"]:
if (
lsa
in show_ospf_json["areas"][ospf_area][
"ASBR-Summary Link States"
]
):
logger.info(
"[DUT: %s] OSPF LSDB area %s:ASBR Summary " "LSA %s",
router,
ospf_area,
lsa,
)
result = True
else:
errormsg = (
"[DUT: {}] OSPF LSDB area {}: expected"
" ASBR Summary LSA is {}".format(router, ospf_area, lsa)
)
return errormsg
if ospf_external_lsa:
for ospf_ext_lsa, ext_lsa_data in ospf_external_lsa.items():
if ospf_ext_lsa in show_ospf_json["AS External Link States"]:
logger.info(
"[DUT: %s] OSPF LSDB:External LSA %s", router, ospf_ext_lsa
)
result = True
else:
errormsg = (
"[DUT: {}] OSPF LSDB : expected"
" External LSA is {}".format(router, ospf_ext_lsa)
)
return errormsg
logger.debug("Exiting API: verify_ospf_database()")
return result
@retry(retry_timeout=20)
def verify_ospf_summary(tgen, topo, dut, input_dict, ospf=None, expected=True):
"""
This API is to verify ospf routes by running
show ip ospf interface command.
Parameters
----------
* `tgen` : Topogen object
* `topo` : topology descriptions
* `dut`: device under test
* `input_dict` : Input dict data, required when configuring from testcase
Usage
-----
input_dict = {
"11.0.0.0/8": {
"Summary address": "11.0.0.0/8",
"Metric-type": "E2",
"Metric": 20,
"Tag": 0,
"External route count": 5
}
}
result = verify_ospf_summary(tgen, topo, dut, input_dict)
Returns
-------
True or False (Error Message)
"""
logger.debug("Entering lib API: {}".format(sys._getframe().f_code.co_name))
result = False
router = dut
logger.info("Verifying OSPF summary on router %s:", router)
rnode = tgen.routers()[dut]
if ospf:
if "ospf6" not in topo["routers"][dut]:
errormsg = "[DUT: {}] OSPF6 is not configured on the router.".format(router)
return errormsg
show_ospf_json = run_frr_cmd(
rnode, "show ipv6 ospf summary detail json", isjson=True
)
else:
if "ospf" not in topo["routers"][dut]:
errormsg = "[DUT: {}] OSPF is not configured on the router.".format(router)
return errormsg
show_ospf_json = run_frr_cmd(
rnode, "show ip ospf summary detail json", isjson=True
)
# Verifying output dictionary show_ospf_json is empty or not
if not bool(show_ospf_json):
errormsg = "OSPF is not running"
return errormsg
# To find neighbor ip type
ospf_summary_data = input_dict
if ospf:
show_ospf_json = show_ospf_json["default"]
for ospf_summ, summ_data in ospf_summary_data.items():
if ospf_summ not in show_ospf_json:
continue
summary = ospf_summary_data[ospf_summ]["Summary address"]
if summary in show_ospf_json:
for summ in summ_data:
if summ_data[summ] == show_ospf_json[summary][summ]:
logger.info(
"[DUT: %s] OSPF summary %s:%s is %s",
router,
summary,
summ,
summ_data[summ],
)
result = True
else:
errormsg = (
"[DUT: {}] OSPF summary {} : {} is {}, "
"Expected is {}".format(
router,
summary,
summ,
show_ospf_json[summary][summ],
summ_data[summ],
)
)
return errormsg
logger.debug("Exiting lib API: {}".format(sys._getframe().f_code.co_name))
return result
@retry(retry_timeout=30)
def verify_ospf6_rib(
tgen, dut, input_dict, next_hop=None, tag=None, metric=None, fib=None
):
"""
This API is to verify ospf routes by running
show ip ospf route command.
Parameters
----------
* `tgen` : Topogen object
* `dut`: device under test
* `input_dict` : Input dict data, required when configuring from testcase
* `next_hop` : next to be verified
* `tag` : tag to be verified
* `metric` : metric to be verified
* `fib` : True if the route is installed in FIB.
Usage
-----
input_dict = {
"r1": {
"static_routes": [
{
"network": ip_net,
"no_of_ip": 1,
"routeType": "N"
}
]
}
}
result = verify_ospf6_rib(tgen, dut, input_dict,next_hop=nh)
Returns
-------
True or False (Error Message)
"""
logger.debug("Entering lib API: {}".format(sys._getframe().f_code.co_name))
result = False
router_list = tgen.routers()
additional_nexthops_in_required_nhs = []
found_hops = []
for routerInput in input_dict.keys():
for router, rnode in router_list.items():
if router != dut:
continue
logger.info("Checking router %s RIB:", router)
# Verifying RIB routes
command = "show ipv6 ospf route"
found_routes = []
missing_routes = []
if (
"static_routes" in input_dict[routerInput]
or "prefix" in input_dict[routerInput]
):
if "prefix" in input_dict[routerInput]:
static_routes = input_dict[routerInput]["prefix"]
else:
static_routes = input_dict[routerInput]["static_routes"]
for static_route in static_routes:
cmd = "{}".format(command)
cmd = "{} json".format(cmd)
ospf_rib_json = run_frr_cmd(rnode, cmd, isjson=True)
# Fix for PR 2644182
try:
ospf_rib_json = ospf_rib_json["routes"]
except KeyError:
pass
# Verifying output dictionary ospf_rib_json is not empty
if bool(ospf_rib_json) is False:
errormsg = (
"[DUT: {}] No routes found in OSPF6 route "
"table".format(router)
)
return errormsg
network = static_route["network"]
no_of_ip = static_route.setdefault("no_of_ip", 1)
_tag = static_route.setdefault("tag", None)
_rtype = static_route.setdefault("routeType", None)
# Generating IPs for verification
ip_list = generate_ips(network, no_of_ip)
st_found = False
nh_found = False
for st_rt in ip_list:
st_rt = str(ipaddress.ip_network(frr_unicode(st_rt)))
_addr_type = validate_ip_address(st_rt)
if _addr_type != "ipv6":
continue
if st_rt in ospf_rib_json:
st_found = True
found_routes.append(st_rt)
if fib and next_hop:
if type(next_hop) is not list:
next_hop = [next_hop]
for mnh in range(0, len(ospf_rib_json[st_rt])):
if (
"fib"
in ospf_rib_json[st_rt][mnh]["nextHops"][0]
):
found_hops.append(
[
rib_r["ip"]
for rib_r in ospf_rib_json[st_rt][mnh][
"nextHops"
]
]
)
if found_hops[0]:
missing_list_of_nexthops = set(
found_hops[0]
).difference(next_hop)
additional_nexthops_in_required_nhs = set(
next_hop
).difference(found_hops[0])
if additional_nexthops_in_required_nhs:
logger.info(
"Nexthop "
"%s is not active for route %s in "
"RIB of router %s\n",
additional_nexthops_in_required_nhs,
st_rt,
dut,
)
errormsg = (
"Nexthop {} is not active"
" for route {} in RIB of router"
" {}\n".format(
additional_nexthops_in_required_nhs,
st_rt,
dut,
)
)
return errormsg
else:
nh_found = True
elif next_hop and fib is None:
if type(next_hop) is not list:
next_hop = [next_hop]
found_hops = [
rib_r["nextHop"]
for rib_r in ospf_rib_json[st_rt]["nextHops"]
]
if found_hops:
missing_list_of_nexthops = set(
found_hops
).difference(next_hop)
additional_nexthops_in_required_nhs = set(
next_hop
).difference(found_hops)
if additional_nexthops_in_required_nhs:
logger.info(
"Missing nexthop %s for route"
" %s in RIB of router %s\n",
additional_nexthops_in_required_nhs,
st_rt,
dut,
)
errormsg = (
"Nexthop {} is Missing for "
"route {} in RIB of router {}\n".format(
additional_nexthops_in_required_nhs,
st_rt,
dut,
)
)
return errormsg
else:
nh_found = True
if _rtype:
if "destinationType" not in ospf_rib_json[st_rt]:
errormsg = (
"[DUT: {}]: destinationType missing"
"for route {} in OSPF RIB \n".format(dut, st_rt)
)
return errormsg
elif _rtype != ospf_rib_json[st_rt]["destinationType"]:
errormsg = (
"[DUT: {}]: destinationType mismatch"
"for route {} in OSPF RIB \n".format(dut, st_rt)
)
return errormsg
else:
logger.info(
"DUT: {}]: Found destinationType {}"
"for route {}".format(dut, _rtype, st_rt)
)
if tag:
if "tag" not in ospf_rib_json[st_rt]:
errormsg = (
"[DUT: {}]: tag is not"
" present for"
" route {} in RIB \n".format(dut, st_rt)
)
return errormsg
if _tag != ospf_rib_json[st_rt]["tag"]:
errormsg = (
"[DUT: {}]: tag value {}"
" is not matched for"
" route {} in RIB \n".format(
dut,
_tag,
st_rt,
)
)
return errormsg
if metric is not None:
if "type2cost" not in ospf_rib_json[st_rt]:
errormsg = (
"[DUT: {}]: metric is"
" not present for"
" route {} in RIB \n".format(dut, st_rt)
)
return errormsg
if metric != ospf_rib_json[st_rt]["type2cost"]:
errormsg = (
"[DUT: {}]: metric value "
"{} is not matched for "
"route {} in RIB \n".format(
dut,
metric,
st_rt,
)
)
return errormsg
else:
missing_routes.append(st_rt)
if nh_found:
logger.info(
"[DUT: {}]: Found next_hop {} for all OSPF"
" routes in RIB".format(router, next_hop)
)
if len(missing_routes) > 0:
errormsg = "[DUT: {}]: Missing route in RIB, " "routes: {}".format(
dut, missing_routes
)
return errormsg
if found_routes:
logger.info(
"[DUT: %s]: Verified routes in RIB, found" " routes are: %s\n",
dut,
found_routes,
)
result = True
logger.debug("Exiting lib API: {}".format(sys._getframe().f_code.co_name))
return result
@retry(retry_timeout=6)
def verify_ospf6_interface(tgen, topo=None, dut=None, lan=False, input_dict=None):
"""
This API is to verify ospf routes by running
show ip ospf interface command.
Parameters
----------
* `tgen` : Topogen object
* `topo` : topology descriptions
* `dut`: device under test
* `lan`: if set to true this interface belongs to LAN.
* `input_dict` : Input dict data, required when configuring from testcase
Usage
-----
input_dict= {
'r0': {
'links':{
's1': {
'ospf6':{
'priority':98,
'timerDeadSecs': 4,
'area': '0.0.0.3',
'mcastMemberOspfDesignatedRouters': True,
'mcastMemberOspfAllRouters': True,
'ospfEnabled': True,
}
}
}
}
}
result = verify_ospf_interface(tgen, topo, dut=dut, input_dict=input_dict)
Returns
-------
True or False (Error Message)
"""
logger.debug("Entering lib API: {}".format(sys._getframe().f_code.co_name))
result = False
if topo is None:
topo = tgen.json_topo
for router, rnode in tgen.routers().items():
if "ospf6" not in topo["routers"][router]:
continue
if dut is not None and dut != router:
continue
logger.info("Verifying OSPF interface on router %s:", router)
show_ospf_json = run_frr_cmd(
rnode, "show ipv6 ospf interface json", isjson=True
)
# Verifying output dictionary show_ospf_json is empty or not
if not bool(show_ospf_json):
errormsg = "OSPF6 is not running"
return errormsg
# To find neighbor ip type
ospf_intf_data = input_dict[router]["links"]
for ospf_intf, intf_data in ospf_intf_data.items():
intf = topo["routers"][router]["links"][ospf_intf]["interface"]
if intf in show_ospf_json:
for intf_attribute in intf_data["ospf6"]:
if intf_data["ospf6"][intf_attribute] is not list:
if (
intf_data["ospf6"][intf_attribute]
== show_ospf_json[intf][intf_attribute]
):
logger.info(
"[DUT: %s] OSPF6 interface %s: %s is %s",
router,
intf,
intf_attribute,
intf_data["ospf6"][intf_attribute],
)
elif intf_data["ospf6"][intf_attribute] is list:
for addr_list in len(show_ospf_json[intf][intf_attribute]):
if (
show_ospf_json[intf][intf_attribute][addr_list][
"address"
].split("/")[0]
== intf_data["ospf6"]["internetAddress"][0]["address"]
):
break
else:
errormsg = "[DUT: {}] OSPF6 interface {}: {} is {}, \
Expected is {}".format(
router,
intf,
intf_attribute,
intf_data["ospf6"][intf_attribute],
intf_data["ospf6"][intf_attribute],
)
return errormsg
else:
errormsg = "[DUT: {}] OSPF6 interface {}: {} is {}, \
Expected is {}".format(
router,
intf,
intf_attribute,
intf_data["ospf6"][intf_attribute],
intf_data["ospf6"][intf_attribute],
)
return errormsg
result = True
logger.debug("Exiting lib API: {}".format(sys._getframe().f_code.co_name))
return result
@retry(retry_timeout=20)
def verify_ospf6_database(tgen, topo, dut, input_dict):
"""
This API is to verify ospf lsa's by running
show ip ospf database command.
Parameters
----------
* `tgen` : Topogen object
* `dut`: device under test
* `input_dict` : Input dict data, required when configuring from testcase
* `topo` : next to be verified
Usage
-----
input_dict = {
"areas": {
"0.0.0.0": {
"routerLinkStates": {
"100.1.1.0-100.1.1.0": {
"LSID": "100.1.1.0",
"Advertised router": "100.1.1.0",
"LSA Age": 130,
"Sequence Number": "80000006",
"Checksum": "a703",
"Router links": 3
}
},
"networkLinkStates": {
"10.0.0.2-100.1.1.1": {
"LSID": "10.0.0.2",
"Advertised router": "100.1.1.1",
"LSA Age": 137,
"Sequence Number": "80000001",
"Checksum": "9583"
}
},
},
}
}
result = verify_ospf_database(tgen, topo, dut, input_dict)
Returns
-------
True or False (Error Message)
"""
result = False
router = dut
logger.debug("Entering lib API: {}".format(sys._getframe().f_code.co_name))
if "ospf" not in topo["routers"][dut]:
errormsg = "[DUT: {}] OSPF is not configured on the router.".format(dut)
return errormsg
rnode = tgen.routers()[dut]
logger.info("Verifying OSPF interface on router %s:", dut)
show_ospf_json = run_frr_cmd(rnode, "show ip ospf database json", isjson=True)
# Verifying output dictionary show_ospf_json is empty or not
if not bool(show_ospf_json):
errormsg = "OSPF is not running"
return errormsg
# for inter and inter lsa's
ospf_db_data = input_dict.setdefault("areas", None)
ospf_external_lsa = input_dict.setdefault("asExternalLinkStates", None)
if ospf_db_data:
for ospf_area, area_lsa in ospf_db_data.items():
if ospf_area in show_ospf_json["areas"]:
if "routerLinkStates" in area_lsa:
for lsa in area_lsa["routerLinkStates"]:
for rtrlsa in show_ospf_json["areas"][ospf_area][
"routerLinkStates"
]:
if (
lsa["lsaId"] == rtrlsa["lsaId"]
and lsa["advertisedRouter"]
== rtrlsa["advertisedRouter"]
):
result = True
break
if result:
logger.info(
"[DUT: %s] OSPF LSDB area %s:Router " "LSA %s",
router,
ospf_area,
lsa,
)
break
else:
errormsg = (
"[DUT: {}] OSPF LSDB area {}: expected"
" Router LSA is {}".format(router, ospf_area, lsa)
)
return errormsg
if "networkLinkStates" in area_lsa:
for lsa in area_lsa["networkLinkStates"]:
for netlsa in show_ospf_json["areas"][ospf_area][
"networkLinkStates"
]:
if (
lsa
in show_ospf_json["areas"][ospf_area][
"networkLinkStates"
]
):
if (
lsa["lsaId"] == netlsa["lsaId"]
and lsa["advertisedRouter"]
== netlsa["advertisedRouter"]
):
result = True
break
if result:
logger.info(
"[DUT: %s] OSPF LSDB area %s:Network " "LSA %s",
router,
ospf_area,
lsa,
)
break
else:
errormsg = (
"[DUT: {}] OSPF LSDB area {}: expected"
" Network LSA is {}".format(router, ospf_area, lsa)
)
return errormsg
if "summaryLinkStates" in area_lsa:
for lsa in area_lsa["summaryLinkStates"]:
for t3lsa in show_ospf_json["areas"][ospf_area][
"summaryLinkStates"
]:
if (
lsa["lsaId"] == t3lsa["lsaId"]
and lsa["advertisedRouter"] == t3lsa["advertisedRouter"]
):
result = True
break
if result:
logger.info(
"[DUT: %s] OSPF LSDB area %s:Summary " "LSA %s",
router,
ospf_area,
lsa,
)
break
else:
errormsg = (
"[DUT: {}] OSPF LSDB area {}: expected"
" Summary LSA is {}".format(router, ospf_area, lsa)
)
return errormsg
if "nssaExternalLinkStates" in area_lsa:
for lsa in area_lsa["nssaExternalLinkStates"]:
for t7lsa in show_ospf_json["areas"][ospf_area][
"nssaExternalLinkStates"
]:
if (
lsa["lsaId"] == t7lsa["lsaId"]
and lsa["advertisedRouter"] == t7lsa["advertisedRouter"]
):
result = True
break
if result:
logger.info(
"[DUT: %s] OSPF LSDB area %s:Type7 " "LSA %s",
router,
ospf_area,
lsa,
)
break
else:
errormsg = (
"[DUT: {}] OSPF LSDB area {}: expected"
" Type7 LSA is {}".format(router, ospf_area, lsa)
)
return errormsg
if "asbrSummaryLinkStates" in area_lsa:
for lsa in area_lsa["asbrSummaryLinkStates"]:
for t4lsa in show_ospf_json["areas"][ospf_area][
"asbrSummaryLinkStates"
]:
if (
lsa["lsaId"] == t4lsa["lsaId"]
and lsa["advertisedRouter"] == t4lsa["advertisedRouter"]
):
result = True
break
if result:
logger.info(
"[DUT: %s] OSPF LSDB area %s:ASBR Summary " "LSA %s",
router,
ospf_area,
lsa,
)
result = True
else:
errormsg = (
"[DUT: {}] OSPF LSDB area {}: expected"
" ASBR Summary LSA is {}".format(router, ospf_area, lsa)
)
return errormsg
if "linkLocalOpaqueLsa" in area_lsa:
for lsa in area_lsa["linkLocalOpaqueLsa"]:
try:
for lnklsa in show_ospf_json["areas"][ospf_area][
"linkLocalOpaqueLsa"
]:
if (
lsa["lsaId"] in lnklsa["lsaId"]
and "linkLocalOpaqueLsa"
in show_ospf_json["areas"][ospf_area]
):
logger.info(
(
"[DUT: FRR] OSPF LSDB area %s:Opaque-LSA"
"%s",
ospf_area,
lsa,
)
)
result = True
else:
errormsg = (
"[DUT: FRR] OSPF LSDB area: {} "
"expected Opaque-LSA is {}, Found is {}".format(
ospf_area, lsa, show_ospf_json
)
)
raise ValueError(errormsg)
return errormsg
except KeyError:
errormsg = "[DUT: FRR] linkLocalOpaqueLsa Not " "present"
return errormsg
if ospf_external_lsa:
for lsa in ospf_external_lsa:
try:
for t5lsa in show_ospf_json["asExternalLinkStates"]:
if (
lsa["lsaId"] == t5lsa["lsaId"]
and lsa["advertisedRouter"] == t5lsa["advertisedRouter"]
):
result = True
break
except KeyError:
result = False
if result:
logger.info("[DUT: %s] OSPF LSDB:External LSA %s", router, lsa)
result = True
else:
errormsg = (
"[DUT: {}] OSPF LSDB : expected"
" External LSA is {}".format(router, lsa)
)
return errormsg
logger.debug("Exiting lib API: {}".format(sys._getframe().f_code.co_name))
return result
def config_ospf6_interface(
tgen, topo=None, input_dict=None, build=False, load_config=True
):
"""
API to configure ospf on router.
Parameters
----------
* `tgen` : Topogen object
* `topo` : json file data
* `input_dict` : Input dict data, required when configuring from testcase
* `build` : Only for initial setup phase this is set as True.
* `load_config` : Loading the config to router this is set as True.
Usage
-----
r1_ospf_auth = {
"r1": {
"links": {
"r2": {
"ospf": {
"authentication": 'message-digest',
"authentication-key": "ospf",
"message-digest-key": "10"
}
}
}
}
}
result = config_ospf6_interface(tgen, topo, r1_ospf_auth)
Returns
-------
True or False
"""
logger.debug("Entering lib API: {}".format(sys._getframe().f_code.co_name))
result = False
if topo is None:
topo = tgen.json_topo
if not input_dict:
input_dict = deepcopy(topo)
else:
input_dict = deepcopy(input_dict)
config_data_dict = {}
for router in input_dict.keys():
config_data = []
for lnk in input_dict[router]["links"].keys():
if "ospf6" not in input_dict[router]["links"][lnk]:
logger.debug(
"Router %s: ospf6 config is not present in"
"input_dict, passed input_dict %s",
router,
str(input_dict),
)
continue
ospf_data = input_dict[router]["links"][lnk]["ospf6"]
data_ospf_area = ospf_data.setdefault("area", None)
data_ospf_auth = ospf_data.setdefault("hash-algo", None)
data_ospf_dr_priority = ospf_data.setdefault("priority", None)
data_ospf_cost = ospf_data.setdefault("cost", None)
data_ospf_mtu = ospf_data.setdefault("mtu_ignore", None)
try:
intf = topo["routers"][router]["links"][lnk]["interface"]
except KeyError:
intf = topo["switches"][router]["links"][lnk]["interface"]
# interface
cmd = "interface {}".format(intf)
config_data.append(cmd)
# interface area config
if data_ospf_area:
cmd = "ipv6 ospf area {}".format(data_ospf_area)
config_data.append(cmd)
# interface ospf auth
if data_ospf_auth:
cmd = "ipv6 ospf6 authentication"
if "del_action" in ospf_data:
cmd = "no {}".format(cmd)
if "hash-algo" in ospf_data:
cmd = "{} key-id {} hash-algo {} key {}".format(
cmd,
ospf_data["key-id"],
ospf_data["hash-algo"],
ospf_data["key"],
)
if "del_action" in ospf_data:
cmd = "no {}".format(cmd)
config_data.append(cmd)
# interface ospf dr priority
if data_ospf_dr_priority:
cmd = "ipv6 ospf priority {}".format(ospf_data["priority"])
if "del_action" in ospf_data:
cmd = "no {}".format(cmd)
config_data.append(cmd)
# interface ospf cost
if data_ospf_cost:
cmd = "ipv6 ospf cost {}".format(ospf_data["cost"])
if "del_action" in ospf_data:
cmd = "no {}".format(cmd)
config_data.append(cmd)
# interface ospf mtu
if data_ospf_mtu:
cmd = "ipv6 ospf mtu-ignore"
if "del_action" in ospf_data:
cmd = "no {}".format(cmd)
config_data.append(cmd)
if build:
return config_data
if config_data:
config_data_dict[router] = config_data
result = create_common_configurations(
tgen, config_data_dict, "interface_config", build=build
)
logger.debug("Exiting lib API: {}".format(sys._getframe().f_code.co_name))
return result
@retry(retry_timeout=20)
def verify_ospf_gr_helper(tgen, topo, dut, input_dict=None):
"""
This API is used to vreify gr helper using command
show ip ospf graceful-restart helper
Parameters
----------
* `tgen` : Topogen object
* `topo` : topology descriptions
* 'dut' : router
* 'input_dict' - values to be verified
Usage:
-------
input_dict = {
"helperSupport":"Disabled",
"strictLsaCheck":"Enabled",
"restartSupoort":"Planned and Unplanned Restarts",
"supportedGracePeriod":1800
}
result = verify_ospf_gr_helper(tgen, topo, dut, input_dict)
"""
logger.debug("Entering lib API: {}".format(sys._getframe().f_code.co_name))
result = False
if "ospf" not in topo["routers"][dut]:
errormsg = "[DUT: {}] OSPF is not configured on the router.".format(dut)
return errormsg
rnode = tgen.routers()[dut]
logger.info("Verifying OSPF GR details on router %s:", dut)
show_ospf_json = run_frr_cmd(
rnode, "show ip ospf graceful-restart helper json", isjson=True
)
# Verifying output dictionary show_ospf_json is empty or not
if not bool(show_ospf_json):
errormsg = "OSPF is not running"
raise ValueError(errormsg)
return errormsg
for ospf_gr, gr_data in input_dict.items():
try:
if input_dict[ospf_gr] == show_ospf_json[ospf_gr]:
logger.info(
"[DUT: FRR] OSPF GR Helper: %s is %s",
ospf_gr,
show_ospf_json[ospf_gr],
)
result = True
else:
errormsg = (
"[DUT: FRR] OSPF GR Helper: {} expected is {}, Found "
"is {}".format(
ospf_gr, input_dict[ospf_gr], show_ospf_json[ospf_gr]
)
)
raise ValueError(errormsg)
return errormsg
except KeyError:
errormsg = "[DUT: FRR] OSPF GR Helper: {}".format(ospf_gr)
return errormsg
logger.debug("Exiting lib API: {}".format(sys._getframe().f_code.co_name))
return result
| 37.713501 | 88 | 0.425594 | 8,653 | 96,094 | 4.537733 | 0.052121 | 0.032777 | 0.021087 | 0.013065 | 0.820247 | 0.796409 | 0.7619 | 0.74448 | 0.724335 | 0.698739 | 0 | 0.009117 | 0.482944 | 96,094 | 2,547 | 89 | 37.728308 | 0.781146 | 0.156742 | 0 | 0.6792 | 0 | 0 | 0.139527 | 0.005247 | 0 | 0 | 0 | 0 | 0.000606 | 1 | 0.010309 | false | 0.001819 | 0.003639 | 0 | 0.064888 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c44d4a02a7df8a073d9d5532327b3ab170e38993 | 200 | py | Python | emloop/tests/utils/dummy_module.py | iterait/cxflow | 4dfccf2b49186261ab3fe151bc4f7d0de454ac03 | [
"MIT"
] | 3 | 2018-10-08T09:01:31.000Z | 2018-12-17T08:36:20.000Z | emloop/tests/utils/dummy_module.py | iterait/emloop | 4dfccf2b49186261ab3fe151bc4f7d0de454ac03 | [
"MIT"
] | 65 | 2018-10-03T14:06:03.000Z | 2020-06-11T20:56:53.000Z | emloop/tests/utils/dummy_module.py | iterait/emloop | 4dfccf2b49186261ab3fe151bc4f7d0de454ac03 | [
"MIT"
] | null | null | null | """This is a dummy module for the reflection test."""
from .reflection_test import ImportedClass # pylint: disable=unused-import
class DuplicateClass: # pylint: disable=missing-docstring
pass
| 28.571429 | 75 | 0.765 | 25 | 200 | 6.08 | 0.8 | 0.184211 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15 | 200 | 6 | 76 | 33.333333 | 0.894118 | 0.56 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
c45a1e0b37ef65340309710af275dfb392224368 | 17 | py | Python | __init__.py | Hoeiriis/SameShitDifferentHyperParameter | 68d29042202746df60b7cee6f85d4baca1b9aa8b | [
"MIT"
] | 2 | 2019-07-03T03:00:12.000Z | 2022-03-16T17:09:35.000Z | __init__.py | Hoeiriis/SameShitDifferentHyperParameter | 68d29042202746df60b7cee6f85d4baca1b9aa8b | [
"MIT"
] | null | null | null | __init__.py | Hoeiriis/SameShitDifferentHyperParameter | 68d29042202746df60b7cee6f85d4baca1b9aa8b | [
"MIT"
] | 1 | 2022-03-07T13:48:05.000Z | 2022-03-07T13:48:05.000Z | from src import * | 17 | 17 | 0.764706 | 3 | 17 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.176471 | 17 | 1 | 17 | 17 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c45e36e5c31aeb60cb7a9cb6770a667b5ed51688 | 20 | py | Python | ragabot/__init__.py | vikipedia/ragabot | 8b0c71881b61c72b216a107acf07003a51efa66b | [
"MIT"
] | 14 | 2019-06-11T12:59:15.000Z | 2020-09-08T21:38:27.000Z | ragabot/__init__.py | vikipedia/ragabot | 8b0c71881b61c72b216a107acf07003a51efa66b | [
"MIT"
] | null | null | null | ragabot/__init__.py | vikipedia/ragabot | 8b0c71881b61c72b216a107acf07003a51efa66b | [
"MIT"
] | 5 | 2019-10-08T06:51:17.000Z | 2021-02-20T22:01:59.000Z | from . import stats
| 10 | 19 | 0.75 | 3 | 20 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 20 | 1 | 20 | 20 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c469b72dde79efce7a9d5a257447b915d7e59776 | 991 | py | Python | tests/gamestonk_terminal/cryptocurrency/onchain/test_blockchain_view.py | GarnixJu2015/GamestonkTerminal | ec400e46ddce4ac934af836b863528f14a13d865 | [
"MIT"
] | null | null | null | tests/gamestonk_terminal/cryptocurrency/onchain/test_blockchain_view.py | GarnixJu2015/GamestonkTerminal | ec400e46ddce4ac934af836b863528f14a13d865 | [
"MIT"
] | null | null | null | tests/gamestonk_terminal/cryptocurrency/onchain/test_blockchain_view.py | GarnixJu2015/GamestonkTerminal | ec400e46ddce4ac934af836b863528f14a13d865 | [
"MIT"
] | null | null | null | import pytest
from gamestonk_terminal.cryptocurrency.onchain import blockchain_view
@pytest.mark.vcr
@pytest.mark.record_stdout
def test_display_btc_circulating_supply(mocker):
# MOCK CHARTS
mocker.patch.object(target=blockchain_view.gtff, attribute="USE_ION", new=True)
mocker.patch(target="gamestonk_terminal.stocks.options.yfinance_view.plt.ion")
mocker.patch(target="gamestonk_terminal.stocks.options.yfinance_view.plt.show")
blockchain_view.display_btc_circulating_supply(1_601_596_800, 1_641_573_787, "")
@pytest.mark.vcr
@pytest.mark.record_stdout
def test_display_btc_confirmed_transactions(mocker):
# MOCK CHARTS
mocker.patch.object(target=blockchain_view.gtff, attribute="USE_ION", new=True)
mocker.patch(target="gamestonk_terminal.stocks.options.yfinance_view.plt.ion")
mocker.patch(target="gamestonk_terminal.stocks.options.yfinance_view.plt.show")
blockchain_view.display_btc_confirmed_transactions(1_601_596_800, 1_641_573_787, "")
| 41.291667 | 88 | 0.81332 | 138 | 991 | 5.521739 | 0.326087 | 0.086614 | 0.089239 | 0.136483 | 0.80315 | 0.80315 | 0.80315 | 0.80315 | 0.750656 | 0.750656 | 0 | 0.043956 | 0.081736 | 991 | 23 | 89 | 43.086957 | 0.793407 | 0.023209 | 0 | 0.625 | 0 | 0 | 0.24456 | 0.230052 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0.125 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6729f266f25597dfb094a98526dee1262abd924e | 5,365 | py | Python | tests/ut/python/fallback/python_builtin/test_graph_fallback_sum.py | httpsgithu/mindspore | c29d6bb764e233b427319cb89ba79e420f1e2c64 | [
"Apache-2.0"
] | 1 | 2022-02-23T09:13:43.000Z | 2022-02-23T09:13:43.000Z | tests/ut/python/fallback/python_builtin/test_graph_fallback_sum.py | 949144093/mindspore | c29d6bb764e233b427319cb89ba79e420f1e2c64 | [
"Apache-2.0"
] | null | null | null | tests/ut/python/fallback/python_builtin/test_graph_fallback_sum.py | 949144093/mindspore | c29d6bb764e233b427319cb89ba79e420f1e2c64 | [
"Apache-2.0"
] | null | null | null | # Copyright 2022 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ============================================================================
""" test graph fallback buildin python function sum"""
import pytest
import numpy as np
from mindspore import ms_function, context, Tensor
context.set_context(mode=context.GRAPH_MODE)
def test_fallback_sum_with_x_list_n_default():
"""
Feature: JIT Fallback
Description: Test sum() in graph mode with input x list and input n default.
Expectation: No exception.
"""
@ms_function
def foo():
x = sum([1, 2, 3])
return x
out = foo()
assert out == 6
def test_fallback_sum_with_x_tuple_n_default():
"""
Feature: JIT Fallback
Description: Test sum() in graph mode with input x tuple and input n default.
Expectation: No exception.
"""
@ms_function
def foo():
x = sum((1, 2, 3))
return x
out = foo()
assert out == 6
def test_fallback_sum_with_x_dict_n_default():
"""
Feature: JIT Fallback
Description: Test sum() in graph mode with input x dict and input n default.
Expectation: No exception.
"""
@ms_function
def foo():
x = sum({1: 10, 2: 20, 3: 30})
return x
out = foo()
assert out == 6
def test_fallback_sum_with_x_numpy_array_n_default():
"""
Feature: JIT Fallback
Description: Test sum() in graph mode with input x numpy array and input n default.
Expectation: No exception.
"""
@ms_function
def foo():
x = sum(np.array([1, 2, 3]))
return Tensor(x)
out = foo()
assert out.asnumpy() == 6
def test_fallback_sum_with_x_tensor_n_default():
"""
Feature: JIT Fallback
Description: Test sum() in graph mode with input x tensor and input n default.
Expectation: No exception.
"""
@ms_function
def foo():
x = sum(Tensor([1, 2, 3]))
return x
out = foo()
assert out.asnumpy() == 6
def test_fallback_sum_with_x_tensor_n_default_2():
"""
Feature: JIT Fallback
Description: Test sum() in graph mode with input x tensor and input n default.
Expectation: No exception.
"""
@ms_function
def foo():
x = sum(Tensor([[1, 1], [2, 2]]))
return x
out = foo()
assert np.allclose(out.asnumpy(), np.array([3, 3]))
def test_fallback_sum_with_x_numpy_array_n_default_2():
"""
Feature: JIT Fallback
Description: Test sum() in graph mode with input x numpy array and input n default.
Expectation: No exception.
"""
@ms_function
def foo():
x = sum(np.array([[1, 1], [2, 2]]))
return Tensor(x)
out = foo()
assert np.allclose(out.asnumpy(), np.array([3, 3]))
def test_fallback_sum_with_x_list_n_not_default():
"""
Feature: JIT Fallback
Description: Test sum() in graph mode with input x list and input n not default.
Expectation: No exception.
"""
@ms_function
def foo():
x = sum([1, 2, 3], 10)
return x
out = foo()
assert out == 16
def test_fallback_sum_with_x_tensor_n_not_default():
"""
Feature: JIT Fallback
Description: Test sum() in graph mode with input x tensor and input n not default.
Expectation: No exception.
"""
@ms_function
def foo():
x = sum(Tensor([1, 2, 3]), 10)
return x
out = foo()
assert out == 16
def test_fallback_sum_with_x_tuple_n_not_default():
"""
Feature: JIT Fallback
Description: Test sum() in graph mode with input x tuple and input n not default.
Expectation: No exception.
"""
@ms_function
def foo():
x = sum((1, 2, 3), 10)
return x
out = foo()
assert out == 16
def test_fallback_sum_with_x_dict_n_not_default():
"""
Feature: JIT Fallback
Description: Test sum() in graph mode with input x dict and input n not default.
Expectation: No exception.
"""
@ms_function
def foo():
x = sum({1: 10, 2: 20, 3: 30}, 10)
return x
out = foo()
assert out == 16
def test_fallback_sum_with_x_numpy_array_n_not_default():
"""
Feature: JIT Fallback
Description: Test sum() in graph mode with input x numpy array and input n default.
Expectation: No exception.
"""
@ms_function
def foo():
x = sum(np.array([[1, 1], [2, 2]]), 5)
return Tensor(x)
out = foo()
assert np.allclose(out.asnumpy(), np.array([8, 8]))
def test_fallback_sum_with_x_not_iterable():
"""
Feature: JIT Fallback
Description: Test sum() in graph mode with input x not iterable.
Expectation: TypeError.
"""
@ms_function
def foo():
x = sum(1)
return x
with pytest.raises(TypeError) as ex:
foo()
assert "object is not iterable" in str(ex.value)
| 26.428571 | 87 | 0.624231 | 771 | 5,365 | 4.193256 | 0.155642 | 0.037117 | 0.060316 | 0.072379 | 0.776678 | 0.773894 | 0.763068 | 0.756573 | 0.739561 | 0.734612 | 0 | 0.022602 | 0.257782 | 5,365 | 202 | 88 | 26.559406 | 0.789302 | 0.438956 | 0 | 0.645833 | 0 | 0 | 0.008175 | 0 | 0 | 0 | 0 | 0 | 0.135417 | 1 | 0.270833 | false | 0 | 0.03125 | 0 | 0.4375 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
67619a99b6868f8a1963d20da89cd261ba3e6626 | 12,067 | py | Python | tests/sentry/api/endpoints/test_user_notification_fine_tuning.py | AlexWayfer/sentry | ef935cda2b2e960bd602fda590540882d1b0712d | [
"BSD-3-Clause"
] | 4 | 2019-05-27T13:55:07.000Z | 2021-03-30T07:05:09.000Z | tests/sentry/api/endpoints/test_user_notification_fine_tuning.py | AlexWayfer/sentry | ef935cda2b2e960bd602fda590540882d1b0712d | [
"BSD-3-Clause"
] | 99 | 2019-05-20T14:16:33.000Z | 2021-01-19T09:25:15.000Z | tests/sentry/api/endpoints/test_user_notification_fine_tuning.py | AlexWayfer/sentry | ef935cda2b2e960bd602fda590540882d1b0712d | [
"BSD-3-Clause"
] | 1 | 2020-08-10T07:55:40.000Z | 2020-08-10T07:55:40.000Z | from __future__ import absolute_import
from sentry.models import UserEmail, UserOption
from sentry.testutils import APITestCase
from django.core.urlresolvers import reverse
class UserNotificationFineTuningTest(APITestCase):
def setUp(self):
self.user = self.create_user(email='a@example.com')
self.org = self.create_organization(name='Org Name', owner=self.user)
self.org2 = self.create_organization(name='Another Org', owner=self.user)
self.team = self.create_team(name='Team Name', organization=self.org, members=[self.user])
self.project = self.create_project(
organization=self.org,
teams=[self.team],
name='Project Name'
)
self.project2 = self.create_project(
organization=self.org,
teams=[self.team],
name='Another Name'
)
self.login_as(user=self.user)
def test_returns_correct_defaults(self):
UserOption.objects.create(user=self.user, project=self.project, key="mail:alert", value=1)
url = reverse(
'sentry-api-0-user-notifications-fine-tuning', kwargs={
'user_id': 'me',
'notification_type': 'alerts',
}
)
resp = self.client.get(url)
assert resp.data.get(self.project.id) == 1
UserOption.objects.create(
user=self.user,
organization=self.org,
key="deploy-emails",
value=1)
url = reverse(
'sentry-api-0-user-notifications-fine-tuning', kwargs={
'user_id': 'me',
'notification_type': 'deploy',
}
)
resp = self.client.get(url)
assert resp.data.get(self.org.id) == 1
UserOption.objects.create(
user=self.user,
organization=None,
key="reports:disabled-organizations",
value=[
self.org.id])
url = reverse(
'sentry-api-0-user-notifications-fine-tuning', kwargs={
'user_id': 'me',
'notification_type': 'reports',
}
)
resp = self.client.get(url)
assert resp.data.get(self.org.id) == 0
def test_invalid_notification_type(self):
url = reverse(
'sentry-api-0-user-notifications-fine-tuning', kwargs={
'user_id': 'me',
'notification_type': 'invalid',
}
)
resp = self.client.get(url)
assert resp.status_code == 404
resp = self.client.put(url)
assert resp.status_code == 404
def test_update_invalid_project(self):
url = reverse(
'sentry-api-0-user-notifications-fine-tuning', kwargs={
'user_id': 'me',
'notification_type': 'alerts',
}
)
update = {}
update['123'] = 1
resp = self.client.put(url, data=update)
assert resp.status_code == 403
def test_saves_and_returns_alerts(self):
url = reverse(
'sentry-api-0-user-notifications-fine-tuning', kwargs={
'user_id': 'me',
'notification_type': 'alerts',
}
)
update = {}
update[self.project.id] = 1
update[self.project2.id] = 2
resp = self.client.put(url, data=update)
assert resp.status_code == 204
assert UserOption.objects.get(
user=self.user,
project=self.project,
key="mail:alert").value == 1
assert UserOption.objects.get(
user=self.user,
project=self.project2,
key="mail:alert").value == 2
update = {}
update[self.project.id] = -1
# Can return to default
resp = self.client.put(url, data=update)
assert resp.status_code == 204
assert not UserOption.objects.filter(
user=self.user,
project=self.project,
key="mail:alert").exists()
assert UserOption.objects.get(
user=self.user,
project=self.project2,
key="mail:alert").value == 2
def test_saves_and_returns_workflow(self):
url = reverse(
'sentry-api-0-user-notifications-fine-tuning', kwargs={
'user_id': 'me',
'notification_type': 'workflow',
}
)
update = {}
update[self.project.id] = 1
update[self.project2.id] = 2
resp = self.client.put(url, data=update)
assert resp.status_code == 204
assert UserOption.objects.get(
user=self.user,
project=self.project,
key="workflow:notifications").value == '1'
assert UserOption.objects.get(
user=self.user,
project=self.project2,
key="workflow:notifications").value == '2'
update = {}
update[self.project.id] = -1
# Can return to default
resp = self.client.put(url, data=update)
assert resp.status_code == 204
assert not UserOption.objects.filter(
user=self.user,
project=self.project,
key="workflow:notifications")
assert UserOption.objects.get(
user=self.user,
project=self.project2,
key="workflow:notifications").value == '2'
def test_saves_and_returns_email_routing(self):
UserEmail.objects.create(user=self.user, email='alias@example.com', is_verified=True).save()
url = reverse(
'sentry-api-0-user-notifications-fine-tuning', kwargs={
'user_id': 'me',
'notification_type': 'email',
}
)
update = {}
update[self.project.id] = 'a@example.com'
update[self.project2.id] = 'alias@example.com'
resp = self.client.put(url, data=update)
assert resp.status_code == 204
assert UserOption.objects.get(
user=self.user,
project=self.project,
key="mail:email").value == 'a@example.com'
assert UserOption.objects.get(
user=self.user,
project=self.project2,
key="mail:email").value == 'alias@example.com'
def test_email_routing_emails_must_be_verified(self):
UserEmail.objects.create(
user=self.user,
email='alias@example.com',
is_verified=False).save()
url = reverse(
'sentry-api-0-user-notifications-fine-tuning', kwargs={
'user_id': 'me',
'notification_type': 'email',
}
)
update = {}
update[self.project.id] = 'alias@example.com'
resp = self.client.put(url, data=update)
assert resp.status_code == 400
def test_email_routing_emails_must_be_valid(self):
new_user = self.create_user(email="b@example.com")
UserEmail.objects.create(user=new_user, email="alias2@example.com", is_verified=True).save()
url = reverse(
'sentry-api-0-user-notifications-fine-tuning', kwargs={
'user_id': 'me',
'notification_type': 'email',
}
)
update = {}
update[self.project2.id] = 'alias2@example.com'
resp = self.client.put(url, data=update)
assert resp.status_code == 400
def test_saves_and_returns_deploy(self):
url = reverse(
'sentry-api-0-user-notifications-fine-tuning', kwargs={
'user_id': 'me',
'notification_type': 'deploy',
}
)
update = {}
update[self.org.id] = 0
resp = self.client.put(url, data=update)
assert resp.status_code == 204
assert UserOption.objects.get(
user=self.user,
organization=self.org.id,
key="deploy-emails").value == '0'
update = {}
update[self.org.id] = 1
resp = self.client.put(url, data=update)
assert UserOption.objects.get(
user=self.user,
organization=self.org,
key="deploy-emails").value == '1'
update = {}
update[self.org.id] = -1
resp = self.client.put(url, data=update)
assert not UserOption.objects.filter(
user=self.user,
organization=self.org,
key="deploy-emails").exists()
def test_saves_and_returns_weekly_reports(self):
url = reverse(
'sentry-api-0-user-notifications-fine-tuning', kwargs={
'user_id': 'me',
'notification_type': 'reports',
}
)
update = {}
update[self.org.id] = 0
update[self.org2.id] = "0"
resp = self.client.put(url, data=update)
assert resp.status_code == 204
assert set(UserOption.objects.get(
user=self.user,
key="reports:disabled-organizations").value) == set([self.org.id, self.org2.id])
update = {}
update[self.org.id] = 1
resp = self.client.put(url, data=update)
assert set(UserOption.objects.get(
user=self.user,
key="reports:disabled-organizations").value) == set([self.org2.id])
update = {}
update[self.org.id] = 0
resp = self.client.put(url, data=update)
assert set(UserOption.objects.get(
user=self.user,
key="reports:disabled-organizations").value) == set([self.org.id, self.org2.id])
def test_enable_weekly_reports_from_default_setting(self):
url = reverse(
'sentry-api-0-user-notifications-fine-tuning', kwargs={
'user_id': 'me',
'notification_type': 'reports',
}
)
update = {}
update[self.org.id] = 1
update[self.org2.id] = "1"
resp = self.client.put(url, data=update)
assert resp.status_code == 204
assert set(UserOption.objects.get(
user=self.user,
key="reports:disabled-organizations").value) == set([])
# can disable
update = {}
update[self.org.id] = 0
resp = self.client.put(url, data=update)
assert set(UserOption.objects.get(
user=self.user,
key="reports:disabled-organizations").value) == set([self.org.id])
# re-enable
update = {}
update[self.org.id] = 1
resp = self.client.put(url, data=update)
assert set(UserOption.objects.get(
user=self.user,
key="reports:disabled-organizations").value) == set([])
def test_permissions(self):
new_user = self.create_user(email='b@example.com')
new_org = self.create_organization(name='New Org')
new_team = self.create_team(name='New Team', organization=new_org, members=[new_user])
new_project = self.create_project(
organization=new_org,
teams=[new_team],
name='New Project'
)
url = reverse(
'sentry-api-0-user-notifications-fine-tuning', kwargs={
'user_id': 'me',
'notification_type': 'reports',
}
)
update = {}
update[new_org.id] = 0
resp = self.client.put(url, data=update)
assert resp.status_code == 403
assert not UserOption.objects.filter(
user=self.user,
organization=new_org,
key="reports").exists()
url = reverse(
'sentry-api-0-user-notifications-fine-tuning', kwargs={
'user_id': 'me',
'notification_type': 'alerts',
}
)
update = {}
update[new_project.id] = 1
resp = self.client.put(url, data=update)
assert resp.status_code == 403
assert not UserOption.objects.filter(
user=self.user,
project=new_project,
key="mail:alert").exists()
| 31.180879 | 100 | 0.547775 | 1,317 | 12,067 | 4.921033 | 0.085042 | 0.040734 | 0.049992 | 0.052461 | 0.846474 | 0.798488 | 0.788613 | 0.768709 | 0.767783 | 0.734455 | 0 | 0.014008 | 0.325599 | 12,067 | 386 | 101 | 31.261658 | 0.782379 | 0.005387 | 0 | 0.654952 | 0 | 0 | 0.153372 | 0.078603 | 0 | 0 | 0 | 0 | 0.124601 | 1 | 0.041534 | false | 0 | 0.01278 | 0 | 0.057508 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
67808c8de4ec0e996bd38a9feaf71abc41051560 | 29,809 | py | Python | release/stubs.min/Autodesk/Revit/Utility.py | htlcnn/ironpython-stubs | 780d829e2104b2789d5f4d6f32b0ec9f2930ca03 | [
"MIT"
] | 182 | 2017-06-27T02:26:15.000Z | 2022-03-30T18:53:43.000Z | release/stubs.min/Autodesk/Revit/Utility.py | htlcnn/ironpython-stubs | 780d829e2104b2789d5f4d6f32b0ec9f2930ca03 | [
"MIT"
] | 28 | 2017-06-27T13:38:23.000Z | 2022-03-15T11:19:44.000Z | release/stubs.min/Autodesk/Revit/Utility.py | htlcnn/ironpython-stubs | 780d829e2104b2789d5f4d6f32b0ec9f2930ca03 | [
"MIT"
] | 67 | 2017-06-28T09:43:59.000Z | 2022-03-20T21:17:10.000Z | # encoding: utf-8
# module Autodesk.Revit.Utility calls itself Utility
# from RevitAPI,Version=17.0.0.0,Culture=neutral,PublicKeyToken=null
# by generator 1.145
# no doc
# no imports
# no functions
# classes
class AssetProperty(object,IDisposable):
""" Represents a property of material. """
def Dispose(self):
""" Dispose(self: AssetProperty) """
pass
def GetAllConnectedProperties(self):
"""
GetAllConnectedProperties(self: AssetProperty) -> IList[AssetProperty]
Gets the list of the connected properties.
Connected properties are the
detachable properties of an AssetProperty.
e.g. diffuse property can have
texture as its connected property. It can also detach texture on runtime.
Returns: A list of the connected properties.
"""
pass
def GetConnectedPropertiesNames(self):
"""
GetConnectedPropertiesNames(self: AssetProperty) -> IList[str]
Gets names of all connected properties.
Returns: A list of the names of the connected properties.
"""
pass
def GetConnectedProperty(self,*__args):
"""
GetConnectedProperty(self: AssetProperty,index: int) -> AssetProperty
Gets one connected property with specified index.
Returns: The AProperty of that index.
GetConnectedProperty(self: AssetProperty,name: str) -> AssetProperty
Gets a connected property by its name.
name: Name of the property.
Returns: The property with the specified name.
"""
pass
@staticmethod
def GetTypeName(type):
"""
GetTypeName(type: AssetPropertyType) -> str
Get the name of the AssetProperty
"""
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
def __repr__(self,*args):
""" __repr__(self: object) -> str """
pass
IsReadOnly=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Identifies if the object is read-only or modifiable.
If true,the object may not be modified. If false,the object's contents may be modified.
Get: IsReadOnly(self: AssetProperty) -> bool
"""
IsValidObject=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Specifies whether the .NET object represents a valid Revit entity.
Get: IsValidObject(self: AssetProperty) -> bool
"""
Name=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get the name of the AssetProperty
Get: Name(self: AssetProperty) -> str
"""
NumberOfConnectedProperties=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""The number of currently connected properties.
Get: NumberOfConnectedProperties(self: AssetProperty) -> int
"""
Type=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Returns the type of the AssetProperty
Get: Type(self: AssetProperty) -> AssetPropertyType
"""
class AssetProperties(AssetProperty,IDisposable):
""" Represents a set of asset property(s). """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __getitem__(self,*args):
""" x.__getitem__(y) <==> x[y]x.__getitem__(y) <==> x[y] """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
Size=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Gets the number of the AssetProperty(s) in the object.
Get: Size(self: AssetProperties) -> int
"""
class Asset(AssetProperties,IDisposable):
""" Represents the properties of a material pertinent to rendering. """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __getitem__(self,*args):
""" x.__getitem__(y) <==> x[y]x.__getitem__(y) <==> x[y] """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
AssetType=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Type of the item the asset represents.
Get: AssetType(self: Asset) -> AssetType
"""
LibraryName=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""The name of the library the asset belongs to.
Get: LibraryName(self: Asset) -> str
"""
Title=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""The title of the asset.
Get: Title(self: Asset) -> str
"""
class AssetPropertyBoolean(AssetProperty,IDisposable):
""" Represents a property of Boolean value. """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
Value=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get the value of the property
Get: Value(self: AssetPropertyBoolean) -> bool
"""
class AssetPropertyDistance(AssetProperty,IDisposable):
""" Represents a property of distance value. """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
DisplayUnitType=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""The unit type of the property
Get: DisplayUnitType(self: AssetPropertyDistance) -> DisplayUnitType
"""
Value=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""The value of the property
Get: Value(self: AssetPropertyDistance) -> float
"""
class AssetPropertyDouble(AssetProperty,IDisposable):
""" Represents a property of double value. """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
Value=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get the value of the property
Get: Value(self: AssetPropertyDouble) -> float
"""
class AssetPropertyDoubleArray2d(AssetProperty,IDisposable):
""" Represents a property consisting of an array of double values. """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
Value=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get the value of the property
Get: Value(self: AssetPropertyDoubleArray2d) -> DoubleArray
"""
class AssetPropertyDoubleArray3d(AssetProperty,IDisposable):
""" Represents a property consisting of an array of double values. """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
Value=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get the value of the property
Get: Value(self: AssetPropertyDoubleArray3d) -> DoubleArray
"""
class AssetPropertyDoubleArray4d(AssetProperty,IDisposable):
""" Represents a property consisting of an array of double values. """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
Value=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get the value of the property
Get: Value(self: AssetPropertyDoubleArray4d) -> DoubleArray
"""
class AssetPropertyDoubleMatrix44(AssetProperty,IDisposable):
""" Represents a property consisting of an array of double values. """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
Value=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get the value of the property
Get: Value(self: AssetPropertyDoubleMatrix44) -> DoubleArray
"""
class AssetPropertyEnum(AssetProperty,IDisposable):
""" Represents a property of enum value. """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
Value=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get the value of the property
Get: Value(self: AssetPropertyEnum) -> int
"""
class AssetPropertyFloat(AssetProperty,IDisposable):
""" Represents a property of float value. """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
Value=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get the value of the property
Get: Value(self: AssetPropertyFloat) -> Single
"""
class AssetPropertyFloatArray(AssetProperty,IDisposable):
""" Represents a property consisting of an array of float values. """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def GetValue(self):
"""
GetValue(self: AssetPropertyFloatArray) -> IList[Single]
Get the value of the property.
"""
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
class AssetPropertyInt64(AssetProperty,IDisposable):
""" Represents a property of Int64 value. """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
Value=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get the value of the property.
Get: Value(self: AssetPropertyInt64) -> Int64
"""
class AssetPropertyInteger(AssetProperty,IDisposable):
""" Represents a property of integer value. """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
Value=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get the value of the property
Get: Value(self: AssetPropertyInteger) -> int
"""
class AssetPropertyList(AssetProperty,IDisposable):
""" Represents a list of AssetProperty(s). """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def GetValue(self):
"""
GetValue(self: AssetPropertyList) -> IList[AssetProperty]
Gets collection of properties stored in this property list.
"""
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
class AssetPropertyReference(AssetProperty,IDisposable):
""" A reference property of material. """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
class AssetPropertyString(AssetProperty,IDisposable):
""" Represents a property of string value. """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
Value=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get the value of the property
Get: Value(self: AssetPropertyString) -> str
"""
class AssetPropertyTime(AssetProperty,IDisposable):
""" Represents a property of DateTime value. """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
Value=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get the value of the property
Get: Value(self: AssetPropertyTime) -> DateTime
"""
class AssetPropertyType(Enum,IComparable,IFormattable,IConvertible):
"""
An enumerated type listing AssetProperty Types in Revit.
enum AssetPropertyType,values: APT_Asset (15),APT_Boolean (2),APT_Distance (14),APT_Double (6),APT_Double44 (10),APT_DoubleArray2d (7),APT_DoubleArray3d (8),APT_DoubleArray4d (9),APT_Enum (3),APT_Float (5),APT_FloatArray (20),APT_Int64 (17),APT_Integer (4),APT_List (19),APT_Properties (1),APT_Reference (16),APT_String (11),APT_Time (12),APT_UInt64 (18),APT_Unknown (0)
"""
def __eq__(self,*args):
""" x.__eq__(y) <==> x==yx.__eq__(y) <==> x==yx.__eq__(y) <==> x==y """
pass
def __format__(self,*args):
""" __format__(formattable: IFormattable,format: str) -> str """
pass
def __ge__(self,*args):
pass
def __gt__(self,*args):
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
def __le__(self,*args):
pass
def __lt__(self,*args):
pass
def __ne__(self,*args):
pass
def __reduce_ex__(self,*args):
pass
def __str__(self,*args):
pass
APT_Asset=None
APT_Boolean=None
APT_Distance=None
APT_Double=None
APT_Double44=None
APT_DoubleArray2d=None
APT_DoubleArray3d=None
APT_DoubleArray4d=None
APT_Enum=None
APT_Float=None
APT_FloatArray=None
APT_Int64=None
APT_Integer=None
APT_List=None
APT_Properties=None
APT_Reference=None
APT_String=None
APT_Time=None
APT_UInt64=None
APT_Unknown=None
value__=None
class AssetPropertyUInt64(AssetProperty,IDisposable):
""" Represents a property of UInt64 value. """
def Dispose(self):
""" Dispose(self: AssetProperty,A_0: bool) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetProperty,disposing: bool) """
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
Value=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get the value of the property.
Get: Value(self: AssetPropertyUInt64) -> UInt64
"""
class AssetSet(APIObject,IDisposable,IEnumerable):
"""
A set that contains assets.
AssetSet()
"""
def Clear(self):
"""
Clear(self: AssetSet)
Removes every asset from the set,rendering it empty.
"""
pass
def Contains(self,item):
"""
Contains(self: AssetSet,item: Asset) -> bool
Tests for the existence of a asset within the set.
item: The asset to be searched for.
Returns: The Contains method returns True if the asset is within the set,otherwise
False.
"""
pass
def Dispose(self):
""" Dispose(self: AssetSet,A_0: bool) """
pass
def Erase(self,item):
"""
Erase(self: AssetSet,item: Asset) -> int
Removes a specified asset from the set.
item: The asset to be erased.
Returns: The number of assets that were erased from the set.
"""
pass
def ForwardIterator(self):
"""
ForwardIterator(self: AssetSet) -> AssetSetIterator
Retrieve a forward moving iterator to the set.
Returns: Returns a forward moving iterator to the set.
"""
pass
def GetEnumerator(self):
"""
GetEnumerator(self: AssetSet) -> IEnumerator
Retrieve a forward moving iterator to the set.
Returns: Returns a forward moving iterator to the set.
"""
pass
def Insert(self,item):
"""
Insert(self: AssetSet,item: Asset) -> bool
Insert the specified asset into the set.
item: The asset to be inserted into the set.
Returns: Returns whether the asset was inserted into the set.
"""
pass
def ReleaseManagedResources(self,*args):
""" ReleaseManagedResources(self: APIObject) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetSet) """
pass
def ReverseIterator(self):
"""
ReverseIterator(self: AssetSet) -> AssetSetIterator
Retrieve a backward moving iterator to the set.
Returns: Returns a backward moving iterator to the set.
"""
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
def __iter__(self,*args):
""" __iter__(self: IEnumerable) -> object """
pass
IsEmpty=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Test to see if the set is empty.
Get: IsEmpty(self: AssetSet) -> bool
"""
Size=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Returns the number of assets that are in the set.
Get: Size(self: AssetSet) -> int
"""
class AssetSetIterator(APIObject,IDisposable,IEnumerator):
"""
An iterator to a asset set.
AssetSetIterator()
"""
def Dispose(self):
""" Dispose(self: AssetSetIterator,A_0: bool) """
pass
def MoveNext(self):
"""
MoveNext(self: AssetSetIterator) -> bool
Move the iterator one item forward.
Returns: Returns True if the iterator was successfully moved forward one item and the
Current
property will return a valid item. False will be returned
it the iterator has reached the end of
the set.
"""
pass
def next(self,*args):
""" next(self: object) -> object """
pass
def ReleaseManagedResources(self,*args):
""" ReleaseManagedResources(self: APIObject) """
pass
def ReleaseUnmanagedResources(self,*args):
""" ReleaseUnmanagedResources(self: AssetSetIterator) """
pass
def Reset(self):
"""
Reset(self: AssetSetIterator)
Bring the iterator back to the start of the set.
"""
pass
def __enter__(self,*args):
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self,*args):
""" __exit__(self: IDisposable,exc_type: object,exc_value: object,exc_back: object) """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
def __iter__(self,*args):
""" __iter__(self: IEnumerator) -> object """
pass
Current=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Retrieves the item that is the current focus of the iterator.
Get: Current(self: AssetSetIterator) -> object
"""
class AssetType(Enum,IComparable,IFormattable,IConvertible):
"""
An enumerated to list Asset Types in Revit.
enum AssetType,values: Appearance (4),Content (5)
"""
def __eq__(self,*args):
""" x.__eq__(y) <==> x==yx.__eq__(y) <==> x==yx.__eq__(y) <==> x==y """
pass
def __format__(self,*args):
""" __format__(formattable: IFormattable,format: str) -> str """
pass
def __ge__(self,*args):
pass
def __gt__(self,*args):
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
def __le__(self,*args):
pass
def __lt__(self,*args):
pass
def __ne__(self,*args):
pass
def __reduce_ex__(self,*args):
pass
def __str__(self,*args):
pass
Appearance=None
Content=None
value__=None
| 28.470869 | 372 | 0.687443 | 3,558 | 29,809 | 5.310006 | 0.078415 | 0.047425 | 0.060975 | 0.072408 | 0.730429 | 0.715291 | 0.676441 | 0.671201 | 0.667655 | 0.663738 | 0 | 0.004367 | 0.177966 | 29,809 | 1,046 | 373 | 28.498088 | 0.76665 | 0.50005 | 0 | 0.808901 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.400524 | false | 0.400524 | 0 | 0 | 0.596859 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
679cf76f9cb66a1ec6f0c9baeafab08bbc27dfe2 | 97 | py | Python | packages/file_helper/__init__.py | s3h4n/LetMeSpeak | 30cdc59719cd1c185802835e3a1c43e393cac8ea | [
"MIT"
] | null | null | null | packages/file_helper/__init__.py | s3h4n/LetMeSpeak | 30cdc59719cd1c185802835e3a1c43e393cac8ea | [
"MIT"
] | null | null | null | packages/file_helper/__init__.py | s3h4n/LetMeSpeak | 30cdc59719cd1c185802835e3a1c43e393cac8ea | [
"MIT"
] | null | null | null | # Import FileHelper from packages/file_helper/file_helper.py
from .file_helper import FileHelper
| 32.333333 | 60 | 0.85567 | 14 | 97 | 5.714286 | 0.5 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.092784 | 97 | 2 | 61 | 48.5 | 0.909091 | 0.597938 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
67af834cec1303fb65002baf29cba7e90c378cbc | 5,147 | py | Python | tests/test_path_tar.py | jhermann/dephell_archive | 582e7e38d7dd702a267b6436da42bc372c8e4d44 | [
"MIT"
] | null | null | null | tests/test_path_tar.py | jhermann/dephell_archive | 582e7e38d7dd702a267b6436da42bc372c8e4d44 | [
"MIT"
] | 19 | 2019-12-17T12:29:36.000Z | 2020-06-03T07:56:22.000Z | tests/test_path_tar.py | jhermann/dephell_archive | 582e7e38d7dd702a267b6436da42bc372c8e4d44 | [
"MIT"
] | 6 | 2019-09-04T05:30:51.000Z | 2021-09-28T02:43:22.000Z | # built-in
from pathlib import Path
# external
import pytest
# project
from dephell_archive import ArchivePath
sdist_path = Path(__file__).parent / 'requirements' / 'sdist.tar.gz'
def test_toplevel(tmpdir):
path = ArchivePath(
archive_path=sdist_path,
cache_path=Path(str(tmpdir)),
)
assert path.is_dir()
assert not path.is_file()
assert path.exists()
with pytest.raises(IsADirectoryError):
with path.open():
pass
def test_toplevel_missing_cache_path(tmpdir):
path = ArchivePath(
archive_path=sdist_path,
cache_path=Path(str(tmpdir), 'missing'),
)
assert path.is_dir()
assert not path.is_file()
assert path.exists()
with pytest.raises(IsADirectoryError):
with path.open():
pass
with pytest.raises(NotImplementedError):
with path.open('w'):
pass
def test_open(tmpdir):
path = ArchivePath(
archive_path=sdist_path,
cache_path=Path(str(tmpdir)),
)
subpath = path / 'dephell-0.2.0' / 'setup.py'
with subpath.open() as stream:
content = stream.read()
assert 'from setuptools import' in content
def test_open_write(tmpdir):
path = ArchivePath(
archive_path=sdist_path,
cache_path=Path(str(tmpdir)),
)
subpath = path / 'dephell-0.2.0'
with pytest.raises(NotImplementedError):
with subpath.open('w'):
pass
def test_open_missing_cache_path(tmpdir):
path = ArchivePath(
archive_path=sdist_path,
cache_path=Path(str(tmpdir), 'missing'),
)
subpath = path / 'dephell-0.2.0' / 'setup.py'
with subpath.open() as stream:
content = stream.read()
assert 'from setuptools import' in content
def test_glob(tmpdir):
path = ArchivePath(
archive_path=sdist_path,
cache_path=Path(str(tmpdir)),
)
paths = list(path.glob('*/setup.py'))
assert len(paths) == 1
assert paths[0].member_path.as_posix() == 'dephell-0.2.0/setup.py'
def test_glob_dir(tmpdir):
path = ArchivePath(
archive_path=sdist_path,
cache_path=Path(str(tmpdir)),
)
paths = list(path.glob('dephell-*/'))
assert len(paths) == 1
def test_iterdir_non_recursive_tarball(tmpdir):
path = ArchivePath(
archive_path=Path('tests', 'requirements', 'sdist.tar.gz'),
cache_path=Path(str(tmpdir)),
)
paths = [str(subpath) for subpath in path.iterdir(_recursive=False)]
assert paths == ['dephell-0.2.0']
def test_iterdir_recursive_tarball(tmpdir):
path = ArchivePath(
archive_path=Path('tests', 'requirements', 'sdist.tar.gz'),
cache_path=Path(str(tmpdir)),
)
paths = [str(subpath) for subpath in path.iterdir(_recursive=True)]
assert 'dephell-0.2.0' in paths
assert str(Path('dephell-0.2.0', 'setup.py')) in paths
assert str(Path('dephell-0.2.0', 'dephell', '__init__.py')) in paths
for path in paths:
assert paths.count(path) == 1, 'duplicate dir: ' + path
def test_iterdir_subpath_non_recursive(tmpdir):
path = ArchivePath(
archive_path=Path('tests', 'requirements', 'sdist.tar.gz'),
cache_path=Path(str(tmpdir)),
)
subpath = path / 'dephell-0.2.0'
paths = set(str(item) for item in subpath.iterdir(_recursive=False))
assert paths == {
'dephell',
'dephell.egg-info',
'PKG-INFO',
'README.md',
'setup.cfg',
'setup.py',
}
subpath = subpath / 'dephell.egg-info'
paths = set(str(item) for item in subpath.iterdir(_recursive=False))
assert paths == {
'dependency_links.txt',
'entry_points.txt',
'PKG-INFO',
'requires.txt',
'SOURCES.txt',
'top_level.txt',
}
def test_iterdir_subpath_recursive(tmpdir):
path = ArchivePath(
archive_path=Path('tests', 'requirements', 'sdist.tar.gz'),
cache_path=Path(str(tmpdir)),
)
subpath = path / 'dephell-0.2.0'
paths = [str(item) for item in subpath.iterdir(_recursive=True)]
assert 'dephell' in paths
assert str(Path('dephell', '__init__.py')) in paths
for path in paths:
assert paths.count(path) == 1, 'duplicate dir: ' + path
def test_exists(tmpdir):
path = ArchivePath(
archive_path=sdist_path,
cache_path=Path(str(tmpdir)),
)
subpath = path / 'dephell-0.2.0' / 'setup.py'
assert subpath.exists() is True
subpath = path / 'dephell-0.2.0' / 'not-a-setup.py'
assert subpath.exists() is False
def test_is_file(tmpdir):
path = ArchivePath(
archive_path=sdist_path,
cache_path=Path(str(tmpdir)),
)
subpath = path / 'dephell-0.2.0' / 'setup.py'
assert subpath.is_file() is True
subpath = path / 'dephell-0.2.0'
assert subpath.is_file() is False
def test_is_dir(tmpdir):
path = ArchivePath(
archive_path=sdist_path,
cache_path=Path(str(tmpdir)),
)
subpath = path / 'dephell-0.2.0' / 'setup.py'
assert subpath.is_dir() is False
subpath = path / 'dephell-0.2.0'
assert subpath.is_dir() is True
| 25.994949 | 72 | 0.626579 | 664 | 5,147 | 4.700301 | 0.126506 | 0.048702 | 0.046139 | 0.051266 | 0.835309 | 0.784044 | 0.738866 | 0.736623 | 0.713553 | 0.671259 | 0 | 0.013538 | 0.239363 | 5,147 | 197 | 73 | 26.126904 | 0.783653 | 0.004857 | 0 | 0.565789 | 0 | 0 | 0.141657 | 0.004299 | 0 | 0 | 0 | 0 | 0.177632 | 1 | 0.092105 | false | 0.026316 | 0.032895 | 0 | 0.125 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
67f7da366614a7ee6712ac472d94b476c61fbb99 | 1,216 | py | Python | test_lab4.py | jschmidtnj/CS115 | fa2374f1ae9c9b63e572850a97af6086112d7a36 | [
"MIT"
] | null | null | null | test_lab4.py | jschmidtnj/CS115 | fa2374f1ae9c9b63e572850a97af6086112d7a36 | [
"MIT"
] | null | null | null | test_lab4.py | jschmidtnj/CS115 | fa2374f1ae9c9b63e572850a97af6086112d7a36 | [
"MIT"
] | 1 | 2022-01-03T01:44:39.000Z | 2022-01-03T01:44:39.000Z | '''
Created on Feb 14, 2015
@author: Brian Borowski
CS115 - Lab 4 Test Script
'''
import unittest
import lab4
class Test(unittest.TestCase):
def test01(self):
self.assertEqual(lab4.knapsack(0, []), [0, []])
def test02(self):
self.assertEqual(lab4.knapsack(0, [[2, 100], [3, 112], [4, 125]]), [0, []])
def test03(self):
self.assertEqual(lab4.knapsack(6, [[1, 4], [5, 150], [4, 180]]), [184, [[1, 4], [4, 180]]])
def test04(self):
self.assertEqual(lab4.knapsack(7, [[2, 100], [3, 112], [4, 125]]), [237, [[3, 112], [4, 125]]])
def test05(self):
self.assertEqual(lab4.knapsack(76, [[36, 35], [10, 28], [39, 47], [8, 1], [7, 24]]), [100, [[10, 28], [39, 47], [8, 1], [7, 24]]])
def test06(self):
self.assertEqual(lab4.knapsack(8, [[36, 35], [10, 28], [39, 47], [8, 1], [7, 24]]), [24, [[7, 24]]])
def test07(self):
self.assertEqual(lab4.knapsack(24, [[36, 35], [10, 28], [39, 47], [8, 1], [7, 24]]), [52, [[10, 28], [7, 24]]])
def test08(self):
self.assertEqual(lab4.knapsack(25, [[36, 35], [10, 28], [39, 47], [8, 1], [7, 24]]), [53, [[10, 28], [8, 1], [7, 24]]])
if __name__ == "__main__":
unittest.main()
| 31.179487 | 138 | 0.518914 | 184 | 1,216 | 3.38587 | 0.320652 | 0.102729 | 0.243981 | 0.295345 | 0.569823 | 0.271268 | 0.130016 | 0.130016 | 0.109149 | 0.109149 | 0 | 0.220711 | 0.213816 | 1,216 | 38 | 139 | 32 | 0.430962 | 0.061678 | 0 | 0 | 0 | 0 | 0.007061 | 0 | 0 | 0 | 0 | 0 | 0.380952 | 1 | 0.380952 | false | 0 | 0.095238 | 0 | 0.52381 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
e1e4d2a2443432128ecaace20371853900062062 | 221 | py | Python | onmt/modules/IIDStochasticTransformer/__init__.py | DanSchum/NMTGMinor | ebcf33cc77b7c2bc73192f1975b99487db0ebc8a | [
"MIT"
] | null | null | null | onmt/modules/IIDStochasticTransformer/__init__.py | DanSchum/NMTGMinor | ebcf33cc77b7c2bc73192f1975b99487db0ebc8a | [
"MIT"
] | null | null | null | onmt/modules/IIDStochasticTransformer/__init__.py | DanSchum/NMTGMinor | ebcf33cc77b7c2bc73192f1975b99487db0ebc8a | [
"MIT"
] | null | null | null | #~ from onmt.modules.StochasticTransformer.Models import StochasticTransformerEncoder, StochasticTransformerDecoder
#~ # For flake8 compatibility.
#~ __all__ = [StochasticTransformerEncoder, StochasticTransformerDecoder]
| 55.25 | 115 | 0.855204 | 14 | 221 | 13.214286 | 0.857143 | 0.605405 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004878 | 0.072398 | 221 | 3 | 116 | 73.666667 | 0.897561 | 0.968326 | 0 | null | 0 | null | 0 | 0 | null | 0 | 0 | 0 | null | 1 | null | true | 0 | 0 | null | null | null | 1 | 0 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e1e92ad7d3ef0a42c9aabc071e312aee3964d061 | 2,057 | py | Python | tests/integration/test_themed.py | PaulPichaureau/statik | e7a2a1b1088b1f5ed9b596a9cb4cf6c3bac847c8 | [
"MIT"
] | null | null | null | tests/integration/test_themed.py | PaulPichaureau/statik | e7a2a1b1088b1f5ed9b596a9cb4cf6c3bac847c8 | [
"MIT"
] | 2 | 2021-03-20T05:39:26.000Z | 2021-03-26T00:46:50.000Z | tests/integration/test_themed.py | PaulPichaureau/statik | e7a2a1b1088b1f5ed9b596a9cb4cf6c3bac847c8 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
import os.path
import xml.etree.ElementTree as ET
import unittest
from statik.generator import generate
class TestThemedStatikProject(unittest.TestCase):
def test_theme1(self):
test_path = os.path.dirname(os.path.realpath(__file__))
output_data = generate(
os.path.join(test_path, 'data-themed', 'config-theme1.yml'),
os.path.join(test_path, 'data-themed'),
in_memory=True,
)
self.assertIn('index.html', output_data)
self.assertIn('override-me', output_data)
self.assertIn('index.html', output_data['override-me'])
# parse the home page
homepage = ET.fromstring(output_data['index.html'])
self.assertEqual('Home - Theme 1', homepage.findall('./head/title')[0].text.strip())
self.assertEqual('/assets/theme1.css', homepage.findall('./head/link')[0].attrib['href'])
self.check_override_page(output_data['override-me']['index.html'])
def test_theme2(self):
test_path = os.path.dirname(os.path.realpath(__file__))
output_data = generate(
os.path.join(test_path, 'data-themed', 'config-theme2.yml'),
os.path.join(test_path, 'data-themed'),
in_memory=True,
)
self.assertIn('index.html', output_data)
self.assertIn('override-me', output_data)
self.assertIn('index.html', output_data['override-me'])
# parse the home page
homepage = ET.fromstring(output_data['index.html'])
self.assertEqual('Home - Theme 2', homepage.findall('./head/title')[0].text.strip())
self.assertEqual('/assets/theme2.css', homepage.findall('./head/link')[0].attrib['href'])
self.check_override_page(output_data['override-me']['index.html'])
def check_override_page(self, html):
page = ET.fromstring(html)
self.assertEqual('I win all the things!', page.findall('./body')[0].text.strip())
if __name__ == "__main__":
unittest.main()
| 34.864407 | 97 | 0.644628 | 258 | 2,057 | 4.949612 | 0.286822 | 0.09397 | 0.031323 | 0.043853 | 0.729836 | 0.729836 | 0.729836 | 0.729836 | 0.729836 | 0.729836 | 0 | 0.0085 | 0.199319 | 2,057 | 58 | 98 | 35.465517 | 0.766849 | 0.029655 | 0 | 0.461538 | 1 | 0 | 0.189257 | 0 | 0 | 0 | 0 | 0 | 0.282051 | 1 | 0.076923 | false | 0 | 0.128205 | 0 | 0.230769 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e1ed97201a092c5f6b36a7a8778ed528af37ea04 | 136 | py | Python | cabot_alert_opsgenie/__init__.py | deltaroe/cabot-alert-opsgenie | ae3d790b756accc5331b1b35d6338840ff141627 | [
"MIT"
] | null | null | null | cabot_alert_opsgenie/__init__.py | deltaroe/cabot-alert-opsgenie | ae3d790b756accc5331b1b35d6338840ff141627 | [
"MIT"
] | null | null | null | cabot_alert_opsgenie/__init__.py | deltaroe/cabot-alert-opsgenie | ae3d790b756accc5331b1b35d6338840ff141627 | [
"MIT"
] | 1 | 2019-05-02T16:18:07.000Z | 2019-05-02T16:18:07.000Z | __author__ = '\n ~\n . .\n /V\\\n // \\\\\n /(___)\\\n ^ ~ ^\nd i b i t s'
# -*- coding: utf-8 -*-
__all__ = ["models",] | 34 | 89 | 0.367647 | 19 | 136 | 2.052632 | 0.684211 | 0.205128 | 0.153846 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010526 | 0.301471 | 136 | 4 | 90 | 34 | 0.4 | 0.154412 | 0 | 0 | 0 | 0.5 | 0.701754 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e1ef7b1cc9ca9740475d76193a1391fdc579dd8a | 3,295 | py | Python | test/test_execution.py | majosm/squirm | 20ce51b0fa4c73a54a7e5fa4f993882f493b0e62 | [
"MIT"
] | null | null | null | test/test_execution.py | majosm/squirm | 20ce51b0fa4c73a54a7e5fa4f993882f493b0e62 | [
"MIT"
] | 1 | 2021-02-25T06:27:39.000Z | 2021-02-25T06:27:39.000Z | test/test_execution.py | majosm/squirm | 20ce51b0fa4c73a54a7e5fa4f993882f493b0e62 | [
"MIT"
] | null | null | null | from squirm.execution import ( # noqa
get_some_executor,
ProcessError,
ExecParams,
ExecParamError)
import pytest
from functools import partial
@pytest.mark.parametrize("num_tasks", [1, 2])
def test_execute_success(num_tasks):
pytest.importorskip("mpi4py")
mpi_exec = get_some_executor()
mpi_exec(["true"], exec_params=ExecParams(num_tasks=num_tasks))
@pytest.mark.parametrize("num_tasks", [1, 2])
def test_execute_fail(num_tasks):
pytest.importorskip("mpi4py")
mpi_exec = get_some_executor()
with pytest.raises(ProcessError):
mpi_exec(["false"], exec_params=ExecParams(num_tasks=num_tasks))
@pytest.mark.parametrize("num_tasks", [1, 2])
def test_run_success(num_tasks):
pytest.importorskip("mpi4py")
mpi_exec = get_some_executor()
mpi_exec.run("from mpi4py import MPI; MPI.COMM_WORLD.Barrier(); assert True",
exec_params=ExecParams(num_tasks=num_tasks))
@pytest.mark.parametrize("num_tasks", [1, 2])
def test_run_fail(num_tasks):
pytest.importorskip("mpi4py")
mpi_exec = get_some_executor()
with pytest.raises(ProcessError):
mpi_exec.run("from mpi4py import MPI; MPI.COMM_WORLD.Barrier(); "
+ "assert False", exec_params=ExecParams(num_tasks=num_tasks))
def _test_func(arg1, arg2):
from mpi4py import MPI
MPI.COMM_WORLD.Barrier()
assert arg1 + arg2 == "hello"
@pytest.mark.parametrize("num_tasks", [1, 2])
def test_call_success(num_tasks):
pytest.importorskip("mpi4py")
mpi_exec = get_some_executor()
mpi_exec.call(partial(_test_func, "he", "llo"), exec_params=ExecParams(
num_tasks=num_tasks))
@pytest.mark.parametrize("num_tasks", [1, 2])
def test_call_fail(num_tasks):
pytest.importorskip("mpi4py")
mpi_exec = get_some_executor()
with pytest.raises(ProcessError):
mpi_exec.call(partial(_test_func, "good", "bye"), exec_params=ExecParams(
num_tasks=num_tasks))
@pytest.mark.parametrize("num_tasks", [1, 2])
def test_call_with_args(num_tasks):
pytest.importorskip("mpi4py")
mpi_exec = get_some_executor()
mpi_exec.call(_test_func, "he", "llo", exec_params=ExecParams(
num_tasks=num_tasks))
@pytest.mark.parametrize("num_tasks", [1, 2])
def test_call_with_kwargs(num_tasks):
pytest.importorskip("mpi4py")
mpi_exec = get_some_executor()
mpi_exec.call(_test_func, arg1="he", arg2="llo", exec_params=ExecParams(
num_tasks=num_tasks))
@pytest.mark.parametrize("num_tasks", [1, 2])
def test_call_with_args_and_kwargs(num_tasks):
pytest.importorskip("mpi4py")
mpi_exec = get_some_executor()
mpi_exec.call(_test_func, "he", arg2="llo",
exec_params=ExecParams(num_tasks=num_tasks))
def test_unsupported_param():
pytest.importorskip("mpi4py")
mpi_exec = get_some_executor()
try:
mpi_exec(["true"], exec_params=ExecParams(num_tasks=2, gpus_per_task=1))
pytest.skip("Oops. Unsupported param is actually supported.")
except ExecParamError as e:
assert e.param_name == "gpus_per_task"
if __name__ == "__main__":
import sys
if len(sys.argv) > 1:
exec(sys.argv[1])
else:
from pytest import main
main([__file__])
| 30.794393 | 82 | 0.689226 | 441 | 3,295 | 4.818594 | 0.165533 | 0.139294 | 0.105412 | 0.127059 | 0.828706 | 0.828706 | 0.816471 | 0.816471 | 0.771765 | 0.727529 | 0 | 0.015596 | 0.182701 | 3,295 | 106 | 83 | 31.084906 | 0.773487 | 0.001214 | 0 | 0.469136 | 0 | 0 | 0.11432 | 0.015202 | 0 | 0 | 0 | 0 | 0.049383 | 1 | 0.135802 | false | 0 | 0.222222 | 0 | 0.358025 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c044257f90c73be598aa27a72993d4cc8c625e2f | 86 | py | Python | pynemo/core/util.py | SSripilaipong/pynemo | f4dedd2599ec78b2ffe73f55b1d2b8b5da1b1e7f | [
"MIT"
] | null | null | null | pynemo/core/util.py | SSripilaipong/pynemo | f4dedd2599ec78b2ffe73f55b1d2b8b5da1b1e7f | [
"MIT"
] | null | null | null | pynemo/core/util.py | SSripilaipong/pynemo | f4dedd2599ec78b2ffe73f55b1d2b8b5da1b1e7f | [
"MIT"
] | null | null | null | class Enum(type):
def __getitem__(self, item):
return getattr(self, item)
| 21.5 | 34 | 0.651163 | 11 | 86 | 4.727273 | 0.818182 | 0.307692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.232558 | 86 | 3 | 35 | 28.666667 | 0.787879 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
c049809c6486b183f3eb55cc8403751e7b399104 | 5,620 | py | Python | load_data.py | jiaxiang-cheng/Deep-CNN-for-Estimation-of-Remaining-Useful-Life | 047d2ff262875232978bd6c6718ba305e03cc6e7 | [
"Apache-2.0"
] | 15 | 2021-05-17T01:25:59.000Z | 2022-03-30T03:12:15.000Z | load_data.py | jiaxiang-cheng/cnn-pytorch-remaining-useful-life-prediction | 047d2ff262875232978bd6c6718ba305e03cc6e7 | [
"Apache-2.0"
] | 1 | 2022-03-15T02:56:49.000Z | 2022-03-15T02:58:59.000Z | load_data.py | jiaxiang-cheng/cnn-pytorch-remaining-useful-life-prediction | 047d2ff262875232978bd6c6718ba305e03cc6e7 | [
"Apache-2.0"
] | 4 | 2021-12-08T11:55:53.000Z | 2022-03-10T12:33:09.000Z | import pandas as pd
from sklearn.preprocessing import StandardScaler, MinMaxScaler
def load_data_FD001():
dir_path = './CMAPSSData/'
# define column names for easy indexing
index_names = ['unit_nr', 'time_cycles']
setting_names = ['setting_1', 'setting_2', 'setting_3']
sensor_names = ['s_{}'.format(i) for i in range(1, 22)]
col_names = index_names + setting_names + sensor_names
# read data
train_raw = pd.read_csv((dir_path + 'train_FD001.txt'), sep='\s+', header=None, names=col_names)
test_raw = pd.read_csv((dir_path + 'test_FD001.txt'), sep='\s+', header=None, names=col_names)
y_test = pd.read_csv((dir_path + 'RUL_FD001.txt'), sep='\s+', header=None, names=['RUL']).to_numpy()
grouped_by_unit = train_raw.groupby(by="unit_nr")
max_cycle = grouped_by_unit["time_cycles"].max().to_numpy()
grouped_by_unit_t = test_raw.groupby(by="unit_nr")
max_cycle_t = grouped_by_unit_t["time_cycles"].max().to_numpy()
# drop non-informative features, derived from EDA
drop_sensors = ['s_1', 's_5', 's_10', 's_16', 's_18', 's_19']
drop_labels = setting_names + drop_sensors
train_raw.drop(labels=drop_labels, axis=1, inplace=True)
test_raw.drop(labels=drop_labels, axis=1, inplace=True)
remaining_sensors = ['s_2', 's_3', 's_4', 's_6', 's_7', 's_8', 's_9',
's_11', 's_12', 's_13', 's_14', 's_15', 's_17', 's_20', 's_21']
return train_raw, test_raw, max_cycle, max_cycle_t, y_test
def load_data_FD002():
dir_path = './CMAPSSData/'
# define column names for easy indexing
index_names = ['unit_nr', 'time_cycles']
setting_names = ['setting_1', 'setting_2', 'setting_3']
sensor_names = ['s_{}'.format(i) for i in range(1, 22)]
col_names = index_names + setting_names + sensor_names
# read data
train_raw = pd.read_csv((dir_path + 'train_FD002.txt'), sep='\s+', header=None, names=col_names)
test_raw = pd.read_csv((dir_path + 'test_FD002.txt'), sep='\s+', header=None, names=col_names)
y_test = pd.read_csv((dir_path + 'RUL_FD002.txt'), sep='\s+', header=None, names=['RUL']).to_numpy()
grouped_by_unit = train_raw.groupby(by="unit_nr")
max_cycle = grouped_by_unit["time_cycles"].max().to_numpy()
grouped_by_unit_t = test_raw.groupby(by="unit_nr")
max_cycle_t = grouped_by_unit_t["time_cycles"].max().to_numpy()
# drop non-informative features, derived from EDA
# drop_sensors = ['s_1', 's_5', 's_10', 's_16', 's_18', 's_19']
drop_labels = setting_names
train_raw.drop(labels=drop_labels, axis=1, inplace=True)
test_raw.drop(labels=drop_labels, axis=1, inplace=True)
return train_raw, test_raw, max_cycle, max_cycle_t, y_test
def load_data_FD003():
dir_path = './CMAPSSData/'
# define column names for easy indexing
index_names = ['unit_nr', 'time_cycles']
setting_names = ['setting_1', 'setting_2', 'setting_3']
sensor_names = ['s_{}'.format(i) for i in range(1, 22)]
col_names = index_names + setting_names + sensor_names
# read data
train_raw = pd.read_csv((dir_path + 'train_FD003.txt'), sep='\s+', header=None, names=col_names)
test_raw = pd.read_csv((dir_path + 'test_FD003.txt'), sep='\s+', header=None, names=col_names)
y_test = pd.read_csv((dir_path + 'RUL_FD003.txt'), sep='\s+', header=None, names=['RUL']).to_numpy()
grouped_by_unit = train_raw.groupby(by="unit_nr")
max_cycle = grouped_by_unit["time_cycles"].max().to_numpy()
grouped_by_unit_t = test_raw.groupby(by="unit_nr")
max_cycle_t = grouped_by_unit_t["time_cycles"].max().to_numpy()
# drop non-informative features, derived from EDA
drop_sensors = ['s_1', 's_5', 's_10', 's_16', 's_18', 's_19']
drop_labels = setting_names + drop_sensors
train_raw.drop(labels=drop_labels, axis=1, inplace=True)
test_raw.drop(labels=drop_labels, axis=1, inplace=True)
return train_raw, test_raw, max_cycle, max_cycle_t, y_test
def load_data_FD004():
dir_path = './CMAPSSData/'
# define column names for easy indexing
index_names = ['unit_nr', 'time_cycles']
setting_names = ['setting_1', 'setting_2', 'setting_3']
sensor_names = ['s_{}'.format(i) for i in range(1, 22)]
col_names = index_names + setting_names + sensor_names
# read data
train_raw = pd.read_csv((dir_path + 'train_FD004.txt'), sep='\s+', header=None, names=col_names)
test_raw = pd.read_csv((dir_path + 'test_FD004.txt'), sep='\s+', header=None, names=col_names)
y_test = pd.read_csv((dir_path + 'RUL_FD004.txt'), sep='\s+', header=None, names=['RUL']).to_numpy()
grouped_by_unit = train_raw.groupby(by="unit_nr")
max_cycle = grouped_by_unit["time_cycles"].max().to_numpy()
grouped_by_unit_t = test_raw.groupby(by="unit_nr")
max_cycle_t = grouped_by_unit_t["time_cycles"].max().to_numpy()
# drop non-informative features, derived from EDA
# drop_sensors = ['s_1', 's_5', 's_10', 's_16', 's_18', 's_19']
drop_labels = setting_names
train_raw.drop(labels=drop_labels, axis=1, inplace=True)
test_raw.drop(labels=drop_labels, axis=1, inplace=True)
return train_raw, test_raw, max_cycle, max_cycle_t, y_test
def get_info(train_raw, test_raw):
mm = MinMaxScaler()
ss = StandardScaler()
X = train_raw.iloc[:, 2:]
idx = train_raw.iloc[:, 0:2].to_numpy()
X_ss = ss.fit_transform(X)
X_t = test_raw.iloc[:, 2:]
idx_t = test_raw.iloc[:, 0:2].to_numpy()
Xt_ss = ss.fit_transform(X_t)
nf = X_ss.shape[1]
ns = X_ss.shape[0]
ns_t = Xt_ss.shape[0]
return X_ss, idx, Xt_ss, idx_t, nf, ns, ns_t
| 39.858156 | 104 | 0.674199 | 918 | 5,620 | 3.772331 | 0.115468 | 0.041582 | 0.060064 | 0.041582 | 0.907883 | 0.898065 | 0.888825 | 0.883049 | 0.883049 | 0.871499 | 0 | 0.032451 | 0.166548 | 5,620 | 140 | 105 | 40.142857 | 0.706874 | 0.090214 | 0 | 0.62069 | 0 | 0 | 0.13829 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.057471 | false | 0 | 0.022989 | 0 | 0.137931 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c05c06572ace6b5d97fc80c49513d3942f8e39b5 | 267 | py | Python | carpedm/nn/__init__.py | SimulatedANeal/carpedm | 22bd5d28cfff50d7462e2a8e1b8dc1675e2a4c89 | [
"MIT"
] | 2 | 2020-09-30T04:59:06.000Z | 2021-03-30T20:42:44.000Z | carpedm/nn/__init__.py | SimulatedANeal/carpedm | 22bd5d28cfff50d7462e2a8e1b8dc1675e2a4c89 | [
"MIT"
] | null | null | null | carpedm/nn/__init__.py | SimulatedANeal/carpedm | 22bd5d28cfff50d7462e2a8e1b8dc1675e2a4c89 | [
"MIT"
] | 1 | 2018-05-25T07:15:16.000Z | 2018-05-25T07:15:16.000Z | #
# Copyright (C) 2018 Neal Digre.
#
# This software may be modified and distributed under the terms
# of the MIT license. See the LICENSE file for details.
from carpedm.nn import conv
from carpedm.nn import op
from carpedm.nn import rnn
from carpedm.nn import util
| 24.272727 | 63 | 0.771536 | 45 | 267 | 4.577778 | 0.666667 | 0.213592 | 0.252427 | 0.368932 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.018182 | 0.17603 | 267 | 10 | 64 | 26.7 | 0.918182 | 0.546816 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
221c51b9d0baa8f905b0baf055db3f0f861dc1a2 | 16,824 | py | Python | sdk/python/pulumi_equinix_metal/project_ssh_key.py | pulumi/pulumi-equinix-metal | 79213497bddc7ae806d3b27c3f349fdff935a19f | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-01-08T21:57:33.000Z | 2021-01-08T21:57:33.000Z | sdk/python/pulumi_equinix_metal/project_ssh_key.py | pulumi/pulumi-equinix-metal | 79213497bddc7ae806d3b27c3f349fdff935a19f | [
"ECL-2.0",
"Apache-2.0"
] | 33 | 2020-12-23T21:37:39.000Z | 2022-03-25T19:23:17.000Z | sdk/python/pulumi_equinix_metal/project_ssh_key.py | pulumi/pulumi-equinix-metal | 79213497bddc7ae806d3b27c3f349fdff935a19f | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2021-01-08T21:24:44.000Z | 2021-01-08T21:24:44.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from . import _utilities
__all__ = ['ProjectSshKeyArgs', 'ProjectSshKey']
@pulumi.input_type
class ProjectSshKeyArgs:
def __init__(__self__, *,
project_id: pulumi.Input[str],
public_key: pulumi.Input[str],
name: Optional[pulumi.Input[str]] = None):
"""
The set of arguments for constructing a ProjectSshKey resource.
:param pulumi.Input[str] project_id: The ID of parent project
:param pulumi.Input[str] public_key: The public key. If this is a file, it can be read using the file interpolation function
:param pulumi.Input[str] name: The name of the SSH key for identification
"""
pulumi.set(__self__, "project_id", project_id)
pulumi.set(__self__, "public_key", public_key)
if name is not None:
pulumi.set(__self__, "name", name)
@property
@pulumi.getter(name="projectId")
def project_id(self) -> pulumi.Input[str]:
"""
The ID of parent project
"""
return pulumi.get(self, "project_id")
@project_id.setter
def project_id(self, value: pulumi.Input[str]):
pulumi.set(self, "project_id", value)
@property
@pulumi.getter(name="publicKey")
def public_key(self) -> pulumi.Input[str]:
"""
The public key. If this is a file, it can be read using the file interpolation function
"""
return pulumi.get(self, "public_key")
@public_key.setter
def public_key(self, value: pulumi.Input[str]):
pulumi.set(self, "public_key", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the SSH key for identification
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@pulumi.input_type
class _ProjectSshKeyState:
def __init__(__self__, *,
created: Optional[pulumi.Input[str]] = None,
fingerprint: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
owner_id: Optional[pulumi.Input[str]] = None,
project_id: Optional[pulumi.Input[str]] = None,
public_key: Optional[pulumi.Input[str]] = None,
updated: Optional[pulumi.Input[str]] = None):
"""
Input properties used for looking up and filtering ProjectSshKey resources.
:param pulumi.Input[str] created: The timestamp for when the SSH key was created
:param pulumi.Input[str] fingerprint: The fingerprint of the SSH key
:param pulumi.Input[str] name: The name of the SSH key for identification
:param pulumi.Input[str] owner_id: The ID of parent project (same as project_id)
:param pulumi.Input[str] project_id: The ID of parent project
:param pulumi.Input[str] public_key: The public key. If this is a file, it can be read using the file interpolation function
:param pulumi.Input[str] updated: The timestamp for the last time the SSH key was updated
"""
if created is not None:
pulumi.set(__self__, "created", created)
if fingerprint is not None:
pulumi.set(__self__, "fingerprint", fingerprint)
if name is not None:
pulumi.set(__self__, "name", name)
if owner_id is not None:
pulumi.set(__self__, "owner_id", owner_id)
if project_id is not None:
pulumi.set(__self__, "project_id", project_id)
if public_key is not None:
pulumi.set(__self__, "public_key", public_key)
if updated is not None:
pulumi.set(__self__, "updated", updated)
@property
@pulumi.getter
def created(self) -> Optional[pulumi.Input[str]]:
"""
The timestamp for when the SSH key was created
"""
return pulumi.get(self, "created")
@created.setter
def created(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "created", value)
@property
@pulumi.getter
def fingerprint(self) -> Optional[pulumi.Input[str]]:
"""
The fingerprint of the SSH key
"""
return pulumi.get(self, "fingerprint")
@fingerprint.setter
def fingerprint(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "fingerprint", value)
@property
@pulumi.getter
def name(self) -> Optional[pulumi.Input[str]]:
"""
The name of the SSH key for identification
"""
return pulumi.get(self, "name")
@name.setter
def name(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="ownerId")
def owner_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of parent project (same as project_id)
"""
return pulumi.get(self, "owner_id")
@owner_id.setter
def owner_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "owner_id", value)
@property
@pulumi.getter(name="projectId")
def project_id(self) -> Optional[pulumi.Input[str]]:
"""
The ID of parent project
"""
return pulumi.get(self, "project_id")
@project_id.setter
def project_id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "project_id", value)
@property
@pulumi.getter(name="publicKey")
def public_key(self) -> Optional[pulumi.Input[str]]:
"""
The public key. If this is a file, it can be read using the file interpolation function
"""
return pulumi.get(self, "public_key")
@public_key.setter
def public_key(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "public_key", value)
@property
@pulumi.getter
def updated(self) -> Optional[pulumi.Input[str]]:
"""
The timestamp for the last time the SSH key was updated
"""
return pulumi.get(self, "updated")
@updated.setter
def updated(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "updated", value)
class ProjectSshKey(pulumi.CustomResource):
@overload
def __init__(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
name: Optional[pulumi.Input[str]] = None,
project_id: Optional[pulumi.Input[str]] = None,
public_key: Optional[pulumi.Input[str]] = None,
__props__=None):
"""
Provides an Equinix Metal project SSH key resource to manage project-specific SSH keys.
Project SSH keys will only be populated onto servers that belong to that project, in contrast to User SSH Keys.
## Example Usage
```python
import pulumi
import pulumi_equinix_metal as equinix_metal
project_id = "<UUID_of_your_project>"
test_project_ssh_key = equinix_metal.ProjectSshKey("testProjectSshKey",
public_key="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDM/unxJeFqxsTJcu6mhqsMHSaVlpu+Jj/P+44zrm6X/MAoHSX3X9oLgujEjjZ74yLfdfe0bJrbL2YgJzNaEkIQQ1VPMHB5EhTKUBGnzlPP0hHTnxsjAm9qDHgUPgvgFDQSAMzdJRJ0Cexo16Ph9VxCoLh3dxiE7s2gaM2FdVg7P8aSxKypsxAhYV3D0AwqzoOyT6WWhBoQ0xZ85XevOTnJCpImSemEGs6nVGEsWcEc1d1YvdxFjAK4SdsKUMkj4Dsy/leKsdi/DEAf356vbMT1UHsXXvy5TlHu/Pa6qF53v32Enz+nhKy7/8W2Yt2yWx8HnQcT2rug9lvCXagJO6oauqRTO77C4QZn13ZLMZgLT66S/tNh2EX0gi6vmIs5dth8uF+K6nxIyKJXbcA4ASg7F1OJrHKFZdTc5v1cPeq6PcbqGgc+8SrPYQmzvQqLoMBuxyos2hUkYOmw3aeWJj9nFa8Wu5WaN89mUeOqSkU4S5cgUzWUOmKey56B/j/s1sVys9rMhZapVs0wL4L9GBBM48N5jAQZnnpo85A8KsZq5ME22bTLqnxsDXqDYZvS7PSI6Dxi7eleOFE/NYYDkrgDLHTQri8ucDMVeVWHgoMY2bPXdn7KKy5jW5jKsf8EPARXg77A4gRYmgKrcwIKqJEUPqyxJBe0CPoGTqgXPRsUiQ== tomk@hp2",
project_id=project_id)
test_device = equinix_metal.Device("testDevice",
hostname="test",
plan="c3.medium.x86",
facilities=["ny5"],
operating_system="ubuntu_20_04",
billing_cycle="hourly",
project_ssh_key_ids=[test_project_ssh_key.id],
project_id=project_id)
```
:param str resource_name: The name of the resource.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] name: The name of the SSH key for identification
:param pulumi.Input[str] project_id: The ID of parent project
:param pulumi.Input[str] public_key: The public key. If this is a file, it can be read using the file interpolation function
"""
...
@overload
def __init__(__self__,
resource_name: str,
args: ProjectSshKeyArgs,
opts: Optional[pulumi.ResourceOptions] = None):
"""
Provides an Equinix Metal project SSH key resource to manage project-specific SSH keys.
Project SSH keys will only be populated onto servers that belong to that project, in contrast to User SSH Keys.
## Example Usage
```python
import pulumi
import pulumi_equinix_metal as equinix_metal
project_id = "<UUID_of_your_project>"
test_project_ssh_key = equinix_metal.ProjectSshKey("testProjectSshKey",
public_key="ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDM/unxJeFqxsTJcu6mhqsMHSaVlpu+Jj/P+44zrm6X/MAoHSX3X9oLgujEjjZ74yLfdfe0bJrbL2YgJzNaEkIQQ1VPMHB5EhTKUBGnzlPP0hHTnxsjAm9qDHgUPgvgFDQSAMzdJRJ0Cexo16Ph9VxCoLh3dxiE7s2gaM2FdVg7P8aSxKypsxAhYV3D0AwqzoOyT6WWhBoQ0xZ85XevOTnJCpImSemEGs6nVGEsWcEc1d1YvdxFjAK4SdsKUMkj4Dsy/leKsdi/DEAf356vbMT1UHsXXvy5TlHu/Pa6qF53v32Enz+nhKy7/8W2Yt2yWx8HnQcT2rug9lvCXagJO6oauqRTO77C4QZn13ZLMZgLT66S/tNh2EX0gi6vmIs5dth8uF+K6nxIyKJXbcA4ASg7F1OJrHKFZdTc5v1cPeq6PcbqGgc+8SrPYQmzvQqLoMBuxyos2hUkYOmw3aeWJj9nFa8Wu5WaN89mUeOqSkU4S5cgUzWUOmKey56B/j/s1sVys9rMhZapVs0wL4L9GBBM48N5jAQZnnpo85A8KsZq5ME22bTLqnxsDXqDYZvS7PSI6Dxi7eleOFE/NYYDkrgDLHTQri8ucDMVeVWHgoMY2bPXdn7KKy5jW5jKsf8EPARXg77A4gRYmgKrcwIKqJEUPqyxJBe0CPoGTqgXPRsUiQ== tomk@hp2",
project_id=project_id)
test_device = equinix_metal.Device("testDevice",
hostname="test",
plan="c3.medium.x86",
facilities=["ny5"],
operating_system="ubuntu_20_04",
billing_cycle="hourly",
project_ssh_key_ids=[test_project_ssh_key.id],
project_id=project_id)
```
:param str resource_name: The name of the resource.
:param ProjectSshKeyArgs args: The arguments to use to populate this resource's properties.
:param pulumi.ResourceOptions opts: Options for the resource.
"""
...
def __init__(__self__, resource_name: str, *args, **kwargs):
resource_args, opts = _utilities.get_resource_args_opts(ProjectSshKeyArgs, pulumi.ResourceOptions, *args, **kwargs)
if resource_args is not None:
__self__._internal_init(resource_name, opts, **resource_args.__dict__)
else:
__self__._internal_init(resource_name, *args, **kwargs)
def _internal_init(__self__,
resource_name: str,
opts: Optional[pulumi.ResourceOptions] = None,
name: Optional[pulumi.Input[str]] = None,
project_id: Optional[pulumi.Input[str]] = None,
public_key: Optional[pulumi.Input[str]] = None,
__props__=None):
if opts is None:
opts = pulumi.ResourceOptions()
if not isinstance(opts, pulumi.ResourceOptions):
raise TypeError('Expected resource options to be a ResourceOptions instance')
if opts.version is None:
opts.version = _utilities.get_version()
if opts.id is None:
if __props__ is not None:
raise TypeError('__props__ is only valid when passed in combination with a valid opts.id to get an existing resource')
__props__ = ProjectSshKeyArgs.__new__(ProjectSshKeyArgs)
__props__.__dict__["name"] = name
if project_id is None and not opts.urn:
raise TypeError("Missing required property 'project_id'")
__props__.__dict__["project_id"] = project_id
if public_key is None and not opts.urn:
raise TypeError("Missing required property 'public_key'")
__props__.__dict__["public_key"] = public_key
__props__.__dict__["created"] = None
__props__.__dict__["fingerprint"] = None
__props__.__dict__["owner_id"] = None
__props__.__dict__["updated"] = None
super(ProjectSshKey, __self__).__init__(
'equinix-metal:index/projectSshKey:ProjectSshKey',
resource_name,
__props__,
opts)
@staticmethod
def get(resource_name: str,
id: pulumi.Input[str],
opts: Optional[pulumi.ResourceOptions] = None,
created: Optional[pulumi.Input[str]] = None,
fingerprint: Optional[pulumi.Input[str]] = None,
name: Optional[pulumi.Input[str]] = None,
owner_id: Optional[pulumi.Input[str]] = None,
project_id: Optional[pulumi.Input[str]] = None,
public_key: Optional[pulumi.Input[str]] = None,
updated: Optional[pulumi.Input[str]] = None) -> 'ProjectSshKey':
"""
Get an existing ProjectSshKey resource's state with the given name, id, and optional extra
properties used to qualify the lookup.
:param str resource_name: The unique name of the resulting resource.
:param pulumi.Input[str] id: The unique provider ID of the resource to lookup.
:param pulumi.ResourceOptions opts: Options for the resource.
:param pulumi.Input[str] created: The timestamp for when the SSH key was created
:param pulumi.Input[str] fingerprint: The fingerprint of the SSH key
:param pulumi.Input[str] name: The name of the SSH key for identification
:param pulumi.Input[str] owner_id: The ID of parent project (same as project_id)
:param pulumi.Input[str] project_id: The ID of parent project
:param pulumi.Input[str] public_key: The public key. If this is a file, it can be read using the file interpolation function
:param pulumi.Input[str] updated: The timestamp for the last time the SSH key was updated
"""
opts = pulumi.ResourceOptions.merge(opts, pulumi.ResourceOptions(id=id))
__props__ = _ProjectSshKeyState.__new__(_ProjectSshKeyState)
__props__.__dict__["created"] = created
__props__.__dict__["fingerprint"] = fingerprint
__props__.__dict__["name"] = name
__props__.__dict__["owner_id"] = owner_id
__props__.__dict__["project_id"] = project_id
__props__.__dict__["public_key"] = public_key
__props__.__dict__["updated"] = updated
return ProjectSshKey(resource_name, opts=opts, __props__=__props__)
@property
@pulumi.getter
def created(self) -> pulumi.Output[str]:
"""
The timestamp for when the SSH key was created
"""
return pulumi.get(self, "created")
@property
@pulumi.getter
def fingerprint(self) -> pulumi.Output[str]:
"""
The fingerprint of the SSH key
"""
return pulumi.get(self, "fingerprint")
@property
@pulumi.getter
def name(self) -> pulumi.Output[str]:
"""
The name of the SSH key for identification
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="ownerId")
def owner_id(self) -> pulumi.Output[str]:
"""
The ID of parent project (same as project_id)
"""
return pulumi.get(self, "owner_id")
@property
@pulumi.getter(name="projectId")
def project_id(self) -> pulumi.Output[str]:
"""
The ID of parent project
"""
return pulumi.get(self, "project_id")
@property
@pulumi.getter(name="publicKey")
def public_key(self) -> pulumi.Output[str]:
"""
The public key. If this is a file, it can be read using the file interpolation function
"""
return pulumi.get(self, "public_key")
@property
@pulumi.getter
def updated(self) -> pulumi.Output[str]:
"""
The timestamp for the last time the SSH key was updated
"""
return pulumi.get(self, "updated")
| 42.271357 | 759 | 0.654779 | 1,888 | 16,824 | 5.602754 | 0.10911 | 0.069673 | 0.086028 | 0.076952 | 0.804216 | 0.782379 | 0.74636 | 0.724617 | 0.696918 | 0.672244 | 0 | 0.019605 | 0.251129 | 16,824 | 397 | 760 | 42.377834 | 0.819986 | 0.379517 | 0 | 0.546729 | 1 | 0 | 0.085556 | 0.004995 | 0 | 0 | 0 | 0 | 0 | 1 | 0.158879 | false | 0.004673 | 0.023364 | 0 | 0.280374 | 0.060748 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2223c1038485f0cb8b37353bbe9c35764c8b864b | 180 | py | Python | bruhster_server/aiohttp_bruhster/views.py | j4yr0u93/bruhster | e2a2bee8e59069f6994f89bb51655f7b695a334f | [
"MIT"
] | null | null | null | bruhster_server/aiohttp_bruhster/views.py | j4yr0u93/bruhster | e2a2bee8e59069f6994f89bb51655f7b695a334f | [
"MIT"
] | null | null | null | bruhster_server/aiohttp_bruhster/views.py | j4yr0u93/bruhster | e2a2bee8e59069f6994f89bb51655f7b695a334f | [
"MIT"
] | null | null | null | from aiohttp import web
async def index(request):
return web.Response(text='Hello Aiohttp!')
async def test(request):
return web.Response(text='Test was a big success')
| 20 | 54 | 0.727778 | 27 | 180 | 4.851852 | 0.62963 | 0.122137 | 0.244275 | 0.366412 | 0.427481 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 180 | 8 | 55 | 22.5 | 0.873333 | 0 | 0 | 0 | 0 | 0 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
222aba472450907de298e6c4d80cfec7d5d266d1 | 83 | py | Python | braintree/iban_bank_account.py | futureironman/braintree_python | 26bb8a857bc29322a8bca2e8e0fe6d99cfe6a1ac | [
"MIT"
] | 182 | 2015-01-09T05:26:46.000Z | 2022-03-16T14:10:06.000Z | braintree/iban_bank_account.py | futureironman/braintree_python | 26bb8a857bc29322a8bca2e8e0fe6d99cfe6a1ac | [
"MIT"
] | 95 | 2015-02-24T23:29:56.000Z | 2022-03-13T03:27:58.000Z | braintree/iban_bank_account.py | futureironman/braintree_python | 26bb8a857bc29322a8bca2e8e0fe6d99cfe6a1ac | [
"MIT"
] | 93 | 2015-02-19T17:59:06.000Z | 2022-03-19T17:01:25.000Z | from braintree.resource import Resource
class IbanBankAccount(Resource):
pass
| 16.6 | 39 | 0.807229 | 9 | 83 | 7.444444 | 0.777778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.144578 | 83 | 4 | 40 | 20.75 | 0.943662 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
97dfae664605d2b1b845af8885e0066651458242 | 48,634 | py | Python | tests/test_api_result.py | Open-EO/openeo-geopyspark-driver | afd5902f426d2aa456d70ed6f2d51b6907de1cab | [
"Apache-2.0"
] | 12 | 2018-03-22T15:02:24.000Z | 2022-03-30T20:13:29.000Z | tests/test_api_result.py | Open-EO/openeo-geopyspark-driver | afd5902f426d2aa456d70ed6f2d51b6907de1cab | [
"Apache-2.0"
] | 116 | 2018-09-27T17:17:14.000Z | 2022-03-30T18:32:29.000Z | tests/test_api_result.py | Open-EO/openeo-geopyspark-driver | afd5902f426d2aa456d70ed6f2d51b6907de1cab | [
"Apache-2.0"
] | 3 | 2019-06-28T15:44:32.000Z | 2021-10-30T07:05:54.000Z | import contextlib
import logging
import textwrap
import numpy as np
import pytest
from numpy.testing import assert_equal
from openeo_driver.testing import TEST_USER
from openeogeotrellis.testing import random_name
from openeogeotrellis.utils import get_jvm, UtcNowClock
_log = logging.getLogger(__name__)
@contextlib.contextmanager
def set_jvm_system_properties(properties: dict):
"""Context manager to temporary set jvm System properties."""
jvm_system = get_jvm().System
orig_properties = {k: jvm_system.getProperty(k) for k in properties.keys()}
def set_all(properties: dict):
for k, v in properties.items():
if v:
jvm_system.setProperty(k, str(v))
else:
jvm_system.clearProperty(k)
set_all(properties)
yield
set_all(orig_properties)
def test_execute_math_basic(api100):
res = api100.check_result({"add": {"process_id": "add", "arguments": {"x": 3, "y": 5}, "result": True}})
assert res.json == 8
def test_load_collection_json_basic(api100):
response = api100.check_result({
"lc": {
"process_id": "load_collection",
"arguments": {
"id": "TestCollection-LonLat4x4",
"temporal_extent": ["2021-01-01", "2021-01-10"],
"spatial_extent": {"west": 0.0, "south": 0.0, "east": 1.0, "north": 1.0},
"bands": ["Flat:1", "TileRow", "Longitude", "Day"]
},
},
"save": {
"process_id": "save_result",
"arguments": {"data": {"from_node": "lc"}, "format": "json"},
"result": True,
}
})
result = response.assert_status_code(200).json
_log.info(repr(result))
assert result["dims"] == ["t", "bands", "x", "y"]
data = result["data"]
assert_equal(data, [[
np.ones((4, 4)),
np.zeros((4, 4)),
[[0, 0, 0, 0], [0.25, 0.25, 0.25, 0.25], [0.5, 0.5, 0.5, 0.5], [0.75, 0.75, 0.75, 0.75]],
np.full((4, 4), fill_value=5)
]])
def test_udp_simple_temporal_reduce(api100, user_defined_process_registry):
"""Test calling a UDP with simple temporal reduce operation"""
udp_id = random_name("udp")
udp_spec = {
"id": udp_id,
"parameters": [
{"name": "data", "schema": {"type": "object", "subtype": "raster-cube"}}
],
"process_graph": {
"reduce": {
"process_id": "reduce_dimension",
"arguments": {
"data": {"from_parameter": "data"},
"dimension": "t",
"reducer": {"process_graph": {"max": {
"process_id": "max", "arguments": {"data": {"from_parameter": "data"}}, "result": True
}}}
},
"result": True
}
}
}
user_defined_process_registry.save(user_id=TEST_USER, process_id=udp_id, spec=udp_spec)
response = api100.check_result({
"lc": {
"process_id": "load_collection",
"arguments": {
"id": "TestCollection-LonLat4x4",
"temporal_extent": ["2021-01-01", "2021-02-01"],
"spatial_extent": {"west": 0.0, "south": 0.0, "east": 1.0, "north": 1.0},
"bands": ["Longitude", "Day"]
},
},
"udp": {
"process_id": udp_id, "arguments": {"data": {"from_node": "lc"}}
},
"save": {
"process_id": "save_result",
"arguments": {"data": {"from_node": "udp"}, "format": "json"},
"result": True,
}
})
result = response.assert_status_code(200).json
_log.info(repr(result))
assert result["dims"] == ["bands", "x", "y"]
data = result["data"]
assert_equal(data, np.array([
np.array([[0, .25, .5, .75]] * 4).T,
np.full((4, 4), fill_value=25)
]))
@pytest.mark.parametrize("udf_code", [
"""
from openeo_udf.api.datacube import DataCube # Old style openeo_udf API
def apply_datacube(cube: DataCube, context: dict) -> DataCube:
return DataCube(cube.get_array().max("t"))
""",
"""
from openeo.udf import XarrayDataCube
def apply_datacube(cube: XarrayDataCube, context: dict) -> XarrayDataCube:
return XarrayDataCube(cube.get_array().max("t"))
""",
])
def test_udp_udf_reduce_temporal(api100, user_defined_process_registry, udf_code):
"""Test calling a UDP with a UDF based reduce operation"""
udf_code = textwrap.dedent(udf_code)
udp_id = random_name("udp")
udp_spec = {
"id": udp_id,
"parameters": [
{"name": "data", "schema": {"type": "object", "subtype": "raster-cube"}},
],
"process_graph": {
"reduce": {
"process_id": "reduce_dimension",
"arguments": {
"data": {"from_parameter": "data"},
"dimension": "t",
"reducer": {"process_graph": {"udf": {
"process_id": "run_udf",
"arguments": {
"data": {"from_parameter": "data"},
"udf": udf_code,
"runtime": "Python",
},
"result": True
}}}
},
"result": True
}
}
}
user_defined_process_registry.save(user_id=TEST_USER, process_id=udp_id, spec=udp_spec)
response = api100.check_result({
"lc": {
"process_id": "load_collection",
"arguments": {
"id": "TestCollection-LonLat4x4",
"temporal_extent": ["2021-01-01", "2021-02-01"],
"spatial_extent": {"west": 0.0, "south": 0.0, "east": 1.0, "north": 2.0},
"bands": ["Longitude", "Day"]
},
},
"udp": {
"process_id": udp_id, "arguments": {"data": {"from_node": "lc"}}
},
"save": {
"process_id": "save_result",
"arguments": {"data": {"from_node": "udp"}, "format": "json"},
"result": True,
}
})
result = response.assert_status_code(200).json
_log.info(repr(result))
assert result["dims"] == ["bands", "x", "y"]
data = result["data"]
assert_equal(data, np.array([
np.array([[0, .25, .5, .75]] * 8).T,
np.full((4, 8), fill_value=25)
]))
@pytest.mark.parametrize("set_offset", [False, True])
@pytest.mark.parametrize("udf_code", [
"""
from openeo_udf.api.datacube import DataCube # Old style openeo_udf API
def apply_datacube(cube: DataCube, context: dict) -> DataCube:
offset = context.get("offset", 34)
return DataCube(cube.get_array().max("t") + offset)
""",
"""
from openeo.udf import XarrayDataCube
def apply_datacube(cube: XarrayDataCube, context: dict) -> XarrayDataCube:
offset = context.get("offset", 34)
return XarrayDataCube(cube.get_array().max("t") + offset)
""",
])
def test_udp_udf_reduce_temporal_with_parameter(api100, user_defined_process_registry, set_offset, udf_code):
"""Test calling a UDP with a UDF based reduce operation and fetching a UDP parameter value (EP-3781)"""
udf_code = textwrap.dedent(udf_code)
udp_id = random_name("udp")
udp_spec = {
"id": udp_id,
"parameters": [
{"name": "data", "schema": {"type": "object", "subtype": "raster-cube"}},
{"name": "offset", "default": 12, "optional": True, "schema": {"type": "number"}},
],
"process_graph": {
"reduce": {
"process_id": "reduce_dimension",
"arguments": {
"data": {"from_parameter": "data"},
"dimension": "t",
"reducer": {"process_graph": {"udf": {
"process_id": "run_udf",
"arguments": {
"data": {"from_parameter": "data"},
"udf": udf_code,
"runtime": "Python",
"context": {"offset": {"from_parameter": "offset"}}
},
"result": True
}}}
},
"result": True
}
}
}
user_defined_process_registry.save(user_id=TEST_USER, process_id=udp_id, spec=udp_spec)
udp_args = {"data": {"from_node": "lc"}}
if set_offset:
udp_args["offset"] = 56
response = api100.check_result({
"lc": {
"process_id": "load_collection",
"arguments": {
"id": "TestCollection-LonLat4x4",
"temporal_extent": ["2021-01-01", "2021-02-01"],
"spatial_extent": {"west": 0.0, "south": 0.0, "east": 1.0, "north": 1.0},
"bands": ["Longitude", "Day"]
},
},
"udp": {"process_id": udp_id, "arguments": udp_args},
"save": {
"process_id": "save_result",
"arguments": {"data": {"from_node": "udp"}, "format": "json"},
"result": True,
}
})
result = response.assert_status_code(200).json
_log.info(repr(result))
assert result["dims"] == ["bands", "x", "y"]
data = result["data"]
expected_offset = 56 if set_offset else 12
assert_equal(data, expected_offset + np.array([
np.array([[0, .25, .5, .75]] * 4).T,
np.full((4, 4), fill_value=25)
]))
@pytest.mark.parametrize("set_parameters", [False, True])
@pytest.mark.parametrize("udf_code", [
"""
from openeo_udf.api.datacube import DataCube # Old style openeo_udf API
def apply_datacube(cube: DataCube, context: dict) -> DataCube:
l_scale = context.get("l_scale", 100)
d_scale = context.get("d_scale", 1)
array = cube.get_array()
res = l_scale * array.sel(bands="Longitude") + d_scale * array.sel(bands="Day")
return DataCube(res)
""",
"""
from openeo.udf import XarrayDataCube
def apply_datacube(cube: XarrayDataCube, context: dict) -> XarrayDataCube:
l_scale = context.get("l_scale", 100)
d_scale = context.get("d_scale", 1)
array = cube.get_array()
res = l_scale * array.sel(bands="Longitude") + d_scale * array.sel(bands="Day")
return XarrayDataCube(res)
""",
])
def test_udp_udf_reduce_bands_with_parameter(api100, user_defined_process_registry, set_parameters, udf_code):
"""Test calling a UDP with a UDF based reduce operation and fetching a UDP parameter value (EP-3781)"""
udf_code = textwrap.dedent(udf_code)
udp_id = random_name("udp")
udp_spec = {
"id": udp_id,
"parameters": [
{"name": "data", "schema": {"type": "object", "subtype": "raster-cube"}},
{"name": "l_scale", "default": 1000, "optional": True, "schema": {"type": "number"}},
{"name": "d_scale", "default": 2, "optional": True, "schema": {"type": "number"}},
],
"process_graph": {
"reduce": {
"process_id": "reduce_dimension",
"arguments": {
"data": {"from_parameter": "data"},
"dimension": "bands",
"reducer": {"process_graph": {"udf": {
"process_id": "run_udf",
"arguments": {
"data": {"from_parameter": "data"},
"udf": udf_code,
"runtime": "Python",
"context": {
"l_scale": {"from_parameter": "l_scale"},
"d_scale": {"from_parameter": "d_scale"}
}
},
"result": True
}}}
},
"result": True
}
}
}
user_defined_process_registry.save(user_id=TEST_USER, process_id=udp_id, spec=udp_spec)
udp_args = {"data": {"from_node": "lc"}}
if set_parameters:
udp_args["l_scale"] = 100000
udp_args["d_scale"] = 3
response = api100.check_result({
"lc": {
"process_id": "load_collection",
"arguments": {
"id": "TestCollection-LonLat4x4",
"temporal_extent": ["2021-01-01", "2021-02-01"],
"spatial_extent": {"west": 0.0, "south": 0.0, "east": 1.0, "north": 1.0},
"bands": ["Longitude", "Day"]
},
},
"udp": {"process_id": udp_id, "arguments": udp_args},
"save": {
"process_id": "save_result",
"arguments": {"data": {"from_node": "udp"}, "format": "json"},
"result": True,
}
})
result = response.assert_status_code(200).json
_log.info(repr(result))
assert result["dims"] == ["t", "x", "y"]
data = result["data"]
if set_parameters:
expected = np.array([
np.array([[15, 25015, 50015, 75015]] * 4).T,
np.array([[45, 25045, 50045, 75045]] * 4).T,
np.array([[75, 25075, 50075, 75075]] * 4).T,
])
else:
expected = np.array([
np.array([[10, 10 + 250, 10 + 500, 10 + 750]] * 4).T,
np.array([[30, 30 + 250, 30 + 500, 30 + 750]] * 4).T,
np.array([[50, 50 + 250, 50 + 500, 50 + 750]] * 4).T,
])
assert_equal(data, expected)
def test_apply_square_pixels(api100):
response = api100.check_result({
"lc": {
"process_id": "load_collection",
"arguments": {
"id": "TestCollection-LonLat4x4",
"temporal_extent": ["2021-01-01", "2021-02-01"],
"spatial_extent": {"west": 0.0, "south": 0.0, "east": 1.0, "north": 1.0},
"bands": ["Longitude", "Day"]
},
},
"apply": {
"process_id": "apply",
"arguments": {
"data": {"from_node": "lc"},
"process": {"process_graph": {"udf": {
"process_id": "multiply",
"arguments": {"x": {"from_parameter": "x"}, "y": {"from_parameter": "x"}},
"result": True
}}}
}
},
"save": {
"process_id": "save_result",
"arguments": {"data": {"from_node": "apply"}, "format": "json"},
"result": True,
}
})
result = response.assert_status_code(200).json
_log.info(repr(result))
assert result["dims"] == ["t", "bands", "x", "y"]
data = result["data"]
expected = np.array([
[np.array([[0, 0.25 ** 2, 0.5 ** 2, 0.75 ** 2]] * 4).T, np.full((4, 4), fill_value=5 ** 2)],
[np.array([[0, 0.25 ** 2, 0.5 ** 2, 0.75 ** 2]] * 4).T, np.full((4, 4), fill_value=15 ** 2)],
[np.array([[0, 0.25 ** 2, 0.5 ** 2, 0.75 ** 2]] * 4).T, np.full((4, 4), fill_value=25 ** 2)],
])
assert_equal(data, expected)
@pytest.mark.parametrize("udf_code", [
"""
from openeo_udf.api.datacube import DataCube # Old style openeo_udf API
def apply_datacube(cube: DataCube, context: dict) -> DataCube:
array = cube.get_array()
return DataCube(array * array)
""",
"""
from openeo.udf import XarrayDataCube
def apply_datacube(cube: XarrayDataCube, context: dict) -> XarrayDataCube:
array = cube.get_array()
return XarrayDataCube(array * array)
""",
])
def test_apply_udf_square_pixels(api100, udf_code):
udf_code = textwrap.dedent(udf_code)
response = api100.check_result({
"lc": {
"process_id": "load_collection",
"arguments": {
"id": "TestCollection-LonLat4x4",
"temporal_extent": ["2021-01-01", "2021-02-01"],
"spatial_extent": {"west": 0.0, "south": 0.0, "east": 1.0, "north": 1.0},
"bands": ["Longitude", "Day"]
},
},
"apply": {
"process_id": "apply",
"arguments": {
"data": {"from_node": "lc"},
"process": {"process_graph": {"udf": {
"process_id": "run_udf",
"arguments": {
"data": {"from_parameter": "data"},
"udf": udf_code,
"runtime": "Python",
},
"result": True
}}}
}
},
"save": {
"process_id": "save_result",
"arguments": {"data": {"from_node": "apply"}, "format": "json"},
"result": True,
}
})
result = response.assert_status_code(200).json
_log.info(repr(result))
assert result["dims"] == ["t", "bands", "x", "y"]
data = result["data"]
expected = np.array([
[np.array([[0, 0.25 ** 2, 0.5 ** 2, 0.75 ** 2]] * 4).T, np.full((4, 4), fill_value=5 ** 2)],
[np.array([[0, 0.25 ** 2, 0.5 ** 2, 0.75 ** 2]] * 4).T, np.full((4, 4), fill_value=15 ** 2)],
[np.array([[0, 0.25 ** 2, 0.5 ** 2, 0.75 ** 2]] * 4).T, np.full((4, 4), fill_value=25 ** 2)],
])
assert_equal(data, expected)
@pytest.mark.parametrize("set_parameters", [False, True])
@pytest.mark.parametrize("udf_code", [
"""
from openeo_udf.api.datacube import DataCube # Old style openeo_udf API
def apply_datacube(cube: DataCube, context: dict) -> DataCube:
offset = context.get("offset", 100)
return DataCube(cube.get_array() + offset)
""",
"""
from openeo.udf import XarrayDataCube
def apply_datacube(cube: XarrayDataCube, context: dict) -> XarrayDataCube:
offset = context.get("offset", 100)
return XarrayDataCube(cube.get_array() + offset)
""",
])
def test_udp_udf_apply_neirghborhood_with_parameter(api100, user_defined_process_registry, set_parameters, udf_code):
"""Test calling a UDP with a UDF based reduce operation and fetching a UDP parameter value (EP-3781)"""
udf_code = textwrap.dedent(udf_code)
udp_id = random_name("udp")
udp_spec = {
"id": udp_id,
"parameters": [
{"name": "data", "schema": {"type": "object", "subtype": "raster-cube"}},
{"name": "offset", "default": 10, "optional": True, "schema": {"type": "number"}},
],
"process_graph": {
"apply_neighborhood": {
"process_id": "apply_neighborhood",
"arguments": {
"data": {"from_parameter": "data"},
"process": {"process_graph": {"udf": {
"process_id": "run_udf",
"arguments": {
"data": {"from_parameter": "data"},
"udf": udf_code,
"runtime": "Python",
"context": {
"offset": {"from_parameter": "offset"},
}
},
"result": True
}}},
"size": [{'dimension': 'x', 'unit': 'px', 'value': 32},
{'dimension': 'y', 'unit': 'px', 'value': 32}],
"overlap": [{'dimension': 'x', 'unit': 'px', 'value': 8},
{'dimension': 'y', 'unit': 'px', 'value': 8}],
},
"result": True
}
}
}
user_defined_process_registry.save(user_id=TEST_USER, process_id=udp_id, spec=udp_spec)
udp_args = {"data": {"from_node": "lc"}}
if set_parameters:
udp_args["offset"] = 20
response = api100.check_result({
"lc": {
"process_id": "load_collection",
"arguments": {
"id": "TestCollection-LonLat4x4",
"temporal_extent": ["2021-01-01", "2021-02-01"],
"spatial_extent": {"west": 0.0, "south": 0.0, "east": 1.0, "north": 1.0},
"bands": ["Longitude", "Day"]
},
},
"udp": {"process_id": udp_id, "arguments": udp_args},
"save": {
"process_id": "save_result",
"arguments": {"data": {"from_node": "udp"}, "format": "json"},
"result": True,
}
})
result = response.assert_status_code(200).json
_log.info(repr(result))
assert result["dims"] == ["t", "bands", "x", "y"]
data = result["data"]
expected = np.array([
[[[.0] * 4, [.25] * 4, [.5] * 4, [.75] * 4], [[5] * 4] * 4],
[[[.0] * 4, [.25] * 4, [.5] * 4, [.75] * 4], [[15] * 4] * 4],
[[[.0] * 4, [.25] * 4, [.5] * 4, [.75] * 4], [[25] * 4] * 4],
]) + (20 if set_parameters else 10)
assert_equal(data, expected)
@pytest.mark.parametrize("geometries", [
{"type": "Polygon", "coordinates": [[[0.1, 0.1], [1.8, 0.1], [1.1, 1.8], [0.1, 0.1]]]},
{"type": "MultiPolygon", "coordinates": [[[[0.1, 0.1], [1.8, 0.1], [1.1, 1.8], [0.1, 0.1]]]]},
{
"type": "GeometryCollection",
"geometries": [{"type": "Polygon", "coordinates": [[[0.1, 0.1], [1.8, 0.1], [1.1, 1.8], [0.1, 0.1]]]}],
},
{
"type": "Feature",
"geometry": {"type": "Polygon", "coordinates": [[[0.1, 0.1], [1.8, 0.1], [1.1, 1.8], [0.1, 0.1]]]},
},
{
"type": "FeatureCollection",
"features": [{
"type": "Feature",
"geometry": {"type": "Polygon", "coordinates": [[[0.1, 0.1], [1.8, 0.1], [1.1, 1.8], [0.1, 0.1]]]},
}]
}
])
@pytest.mark.parametrize("pixels_threshold", [0, 10000])
@pytest.mark.parametrize("reducer", ["mean", "median"])
def test_ep3718_aggregate_spatial_geometries(api100, geometries, pixels_threshold, reducer):
"""EP-3718: different results when doing aggregate_spatial with Polygon or GeometryCollection"""
with set_jvm_system_properties({"pixels.treshold": pixels_threshold}):
response = api100.check_result({
"lc": {
"process_id": "load_collection",
"arguments": {
"id": "TestCollection-LonLat4x4",
"temporal_extent": ["2021-01-01", "2021-02-20"],
"spatial_extent": {"west": 0.0, "south": 0.0, "east": 2.0, "north": 2.0},
"bands": ["Flat:1", "Month", "Day"]
},
},
"aggregate": {
"process_id": "aggregate_spatial",
"arguments": {
"data": {"from_node": "lc"},
"geometries": geometries,
"reducer": {"process_graph": {
reducer: {
"process_id": reducer, "arguments": {"data": {"from_parameter": "data"}}, "result": True
}
}}
}
},
"save": {
"process_id": "save_result",
"arguments": {"data": {"from_node": "aggregate"}, "format": "json"},
"result": True,
}
})
result = response.assert_status_code(200).json
_log.info(repr(result))
# Strip out empty entries
result = {k: v for (k, v) in result.items() if v != [[]]}
expected = {
"2021-01-05T00:00:00Z": [[1.0, 1.0, 5.0]],
"2021-01-15T00:00:00Z": [[1.0, 1.0, 15.0]],
"2021-01-25T00:00:00Z": [[1.0, 1.0, 25.0]],
"2021-02-05T00:00:00Z": [[1.0, 2.0, 5.0]],
"2021-02-15T00:00:00Z": [[1.0, 2.0, 15.0]],
}
assert result == expected
@pytest.mark.parametrize("geometries", [
{"type": "Polygon", "coordinates": [[[0.1, 0.1], [1.8, 0.1], [1.1, 1.8], [0.1, 0.1]]]},
{"type": "MultiPolygon", "coordinates": [[[[0.1, 0.1], [1.8, 0.1], [1.1, 1.8], [0.1, 0.1]]]]},
{
"type": "GeometryCollection",
"geometries": [{"type": "Polygon", "coordinates": [[[0.1, 0.1], [1.8, 0.1], [1.1, 1.8], [0.1, 0.1]]]}],
},
{
"type": "Feature",
"geometry": {"type": "Polygon", "coordinates": [[[0.1, 0.1], [1.8, 0.1], [1.1, 1.8], [0.1, 0.1]]]},
},
{
"type": "FeatureCollection",
"features": [{
"type": "Feature",
"geometry": {"type": "Polygon", "coordinates": [[[0.1, 0.1], [1.8, 0.1], [1.1, 1.8], [0.1, 0.1]]]},
}]
}
])
def test_ep3887_mask_polygon(api100, geometries):
"""EP-3887: mask_polygon with GeometryCollection/FeatureCollection gives empty result"""
response = api100.check_result({
"lc": {
"process_id": "load_collection",
"arguments": {
"id": "TestCollection-LonLat4x4",
"temporal_extent": ["2021-01-04", "2021-01-06"],
"bands": ["Flat:2"]
},
},
"maskpolygon1": {
"process_id": "mask_polygon",
"arguments": {
"data": {"from_node": "lc"},
"mask": geometries,
}
},
"save": {
"process_id": "save_result",
"arguments": {"data": {"from_node": "maskpolygon1"}, "format": "json"},
"result": True,
}
})
result = response.assert_status_code(200).json
_log.info(repr(result))
assert result["dims"] == ["t", "bands", "x", "y"]
assert result["coords"]["x"]["data"] == [0.125, 0.375, 0.625, 0.875, 1.125, 1.375, 1.625, 1.875]
assert result["coords"]["y"]["data"] == [0.125, 0.375, 0.625, 0.875, 1.125, 1.375, 1.625, 1.875]
assert result["data"] == [[[
[2, 0, 0, 0, 0, 0, 0, 0],
[2, 2, 0, 0, 0, 0, 0, 0],
[2, 2, 2, 2, 0, 0, 0, 0],
[2, 2, 2, 2, 2, 2, 0, 0],
[2, 2, 2, 2, 2, 2, 2, 0],
[2, 2, 2, 2, 2, 0, 0, 0],
[2, 2, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
]]]
def test_apply_dimension_array_concat(api100):
"""EP-3775 apply_dimension with array_concat"""
response = api100.check_result({
"lc": {
"process_id": "load_collection",
"arguments": {
"id": "TestCollection-LonLat4x4",
"temporal_extent": ["2021-01-01", "2021-01-10"],
"spatial_extent": {"west": 0.0, "south": 0.0, "east": 1.0, "north": 1.0},
"bands": ["Flat:1", "TileRow", "Longitude", "Day"]
},
},
"ad": {
"process_id": "apply_dimension",
"arguments": {
"data": {"from_node": "lc"},
"dimension": "bands",
"process": {"process_graph": {
"one": {
"process_id": "array_element", "arguments": {"data": {"from_parameter": "data"}, "index": 0}
},
"lon": {
"process_id": "array_element", "arguments": {"data": {"from_parameter": "data"}, "index": 2}
},
"day": {
"process_id": "array_element", "arguments": {"data": {"from_parameter": "data"}, "index": 3}
},
"lon x day": {
"process_id": "multiply", "arguments": {"x": {"from_node": "lon"}, "y": {"from_node": "day"}}
},
"array_concat": {
"process_id": "array_concat",
"arguments": {
"array1": {"from_parameter": "data"},
"array2": [{"from_node": "one"}, {"from_node": "lon x day"}, ]
},
"result": True,
}
}},
},
},
"save": {
"process_id": "save_result",
"arguments": {"data": {"from_node": "ad"}, "format": "json"},
"result": True,
}
})
result = response.assert_status_code(200).json
_log.info(repr(result))
assert result["dims"] == ["t", "bands", "x", "y"]
data = result["data"]
assert_equal(data, [[
np.ones((4, 4)),
np.zeros((4, 4)),
[[0, 0, 0, 0], [0.25, 0.25, 0.25, 0.25], [0.5, 0.5, 0.5, 0.5], [0.75, 0.75, 0.75, 0.75]],
np.full((4, 4), fill_value=5),
np.ones((4, 4)),
[[0, 0, 0, 0], [1.25, 1.25, 1.25, 1.25], [2.5, 2.5, 2.5, 2.5], [3.75, 3.75, 3.75, 3.75]],
]])
def test_apply_dimension_reduce_time(api100):
"""EP-3775 apply_dimension with array_concat"""
response = api100.check_result({
"lc": {
"process_id": "load_collection",
"arguments": {
"id": "TestCollection-LonLat4x4",
"temporal_extent": ["2021-01-01", "2021-01-30"],
"spatial_extent": {"west": 0.0, "south": 0.0, "east": 1.0, "north": 1.0},
"bands": ["Flat:1", "TileRow", "Longitude", "Day"]
},
},
"ad": {
"process_id": "apply_dimension",
"arguments": {
"data": {"from_node": "lc"},
"dimension": "t",
"target_dimension": "bands",
"process": {"process_graph": {
"arrayconcat1": {
"arguments": {
"array1": {
"from_node": "quantiles1"
},
"array2": [
{
"from_node": "sd1"
},
{
"from_node": "mean1"
}
]
},
"process_id": "array_concat"
},
"mean1": {
"arguments": {
"data": {
"from_parameter": "data"
}
},
"process_id": "mean"
},
"quantiles1": {
"arguments": {
"data": {
"from_parameter": "data"
},
"probabilities": [
0.25,
0.5,
0.75
]
},
"process_id": "quantiles"
},
"sd1": {
"arguments": {
"data": {
"from_parameter": "data"
}
},
"result": True,
"process_id": "sd"
}
}},
},
},
"save": {
"process_id": "save_result",
"arguments": {"data": {"from_node": "ad"}, "format": "json"},
"result": True,
}
})
result = response.assert_status_code(200).json
_log.info(repr(result))
assert result["dims"] == [ "bands", "x", "y"]
data = result["data"]
#TODO EP-3916: this result is probably wrong, maybe compute expected result with numpy, also user more complex callback graph, to test feature engineering
assert_equal(data, [
np.zeros((4, 4)),
np.zeros((4, 4)),
np.zeros((4, 4)),
np.full((4, 4), fill_value=10.0),
])
@pytest.mark.parametrize("repeat", [1, 3])
def test_apply_dimension_array_create(api100, repeat):
"""EP-3775 apply_dimension with array_create"""
response = api100.check_result({
"lc": {
"process_id": "load_collection",
"arguments": {
"id": "TestCollection-LonLat4x4",
"temporal_extent": ["2021-01-01", "2021-01-10"],
"spatial_extent": {"west": 0.0, "south": 0.0, "east": 1.0, "north": 1.0},
"bands": ["Flat:1", "Day"]
},
},
"ad": {
"process_id": "apply_dimension",
"arguments": {
"data": {"from_node": "lc"},
"dimension": "bands",
"process": {"process_graph": {
"one": {
"process_id": "array_element", "arguments": {"data": {"from_parameter": "data"}, "index": 0}
},
"day": {
"process_id": "array_element", "arguments": {"data": {"from_parameter": "data"}, "index": 1}
},
"array_create": {
"process_id": "array_create",
"arguments": {
"data": [{"from_node": "one"}, {"from_node": "day"}, ],
"repeat": repeat
},
"result": True,
}
}},
},
},
"save": {
"process_id": "save_result",
"arguments": {"data": {"from_node": "ad"}, "format": "json"},
"result": True,
}
})
result = response.assert_status_code(200).json
_log.info(repr(result))
assert result["dims"] == ["t", "bands", "x", "y"]
data = result["data"]
assert_equal(data, [[np.ones((4, 4)), np.full((4, 4), fill_value=5), ] * repeat])
def test_reduce_dimension_array_create_array_concat(api100):
"""EP-3775 reduce_dimension with array_create and array_concat"""
response = api100.check_result({
"lc": {
"process_id": "load_collection",
"arguments": {
"id": "TestCollection-LonLat4x4",
"temporal_extent": ["2021-01-01", "2021-01-10"],
"spatial_extent": {"west": 0.0, "south": 0.0, "east": 1.0, "north": 1.0},
"bands": ["Flat:1", "Flat:2"]
},
},
"reduce": {
"process_id": "reduce_dimension",
"arguments": {
"data": {"from_node": "lc"},
"dimension": "bands",
"reducer": {"process_graph": {
"one": {
"process_id": "array_element", "arguments": {"data": {"from_parameter": "data"}, "index": 0}
},
"two": {
"process_id": "array_element", "arguments": {"data": {"from_parameter": "data"}, "index": 1}
},
"two ones": {
"process_id": "array_create",
"arguments": {"data": [{"from_node": "one"}, {"from_node": "one"}]}
},
"three twos": {
"process_id": "array_create",
"arguments": {"data": [{"from_node": "two"}, {"from_node": "two"}, {"from_node": "two"}]}
},
"array_concat": {
"process_id": "array_concat",
"arguments": {
"array1": {"from_node": "two ones"},
"array2": {"from_node": "three twos"},
},
},
"sum": {
"process_id": "sum",
"arguments": {"data": {"from_node": "array_concat"}},
"result": True
}
}},
},
},
"save": {
"process_id": "save_result",
"arguments": {"data": {"from_node": "reduce"}, "format": "json"},
"result": True,
}
})
result = response.assert_status_code(200).json
_log.info(repr(result))
assert result["dims"] == ["t", "x", "y"]
data = result["data"]
assert_equal(data, np.full((1, 4, 4), fill_value=8))
@pytest.mark.parametrize("udf_code", [
"""
from openeo_udf.api.datacube import DataCube # Old style openeo_udf API
def apply_datacube(cube: DataCube, context: dict) -> DataCube:
return DataCube(cube.get_array().max("t"))
""",
"""
from openeo.udf import XarrayDataCube
def apply_datacube(cube: XarrayDataCube, context: dict) -> XarrayDataCube:
return XarrayDataCube(cube.get_array().max("t"))
""",
"""
from openeo.udf import XarrayDataCube
def apply_hypercube(cube: XarrayDataCube, context: dict) -> XarrayDataCube:
return XarrayDataCube(cube.get_array().max("t"))
""",
])
def test_udf_basic_reduce_temporal(api100, user_defined_process_registry, udf_code):
"""Test doing UDF based temporal reduce"""
udf_code = textwrap.dedent(udf_code)
response = api100.check_result({
"lc": {
"process_id": "load_collection",
"arguments": {
"id": "TestCollection-LonLat4x4",
"temporal_extent": ["2021-01-01", "2021-02-01"],
"spatial_extent": {"west": 0.0, "south": 0.0, "east": 1.0, "north": 2.0},
"bands": ["Longitude", "Day"]
},
},
"reduce": {
"process_id": "reduce_dimension",
"arguments": {
"data": {"from_node": "lc"},
"dimension": "t",
"reducer": {"process_graph": {"udf": {
"process_id": "run_udf",
"arguments": {
"data": {"from_parameter": "data"},
"udf": udf_code,
"runtime": "Python",
},
"result": True
}}}
},
},
"save": {
"process_id": "save_result",
"arguments": {"data": {"from_node": "reduce"}, "format": "json"},
"result": True,
}
})
result = response.assert_status_code(200).json
_log.info(repr(result))
assert result["dims"] == ["bands", "x", "y"]
data = result["data"]
assert_equal(data, np.array([
np.array([[0, .25, .5, .75]] * 8).T,
np.full((4, 8), fill_value=25)
]))
@pytest.mark.parametrize("udf_code", [
"""
def hello(name: str):
return f"hello {name}"
""",
"""
def apply_datacube(cube, context: dict):
return DataCube(cube.get_array().max("t"))
""",
"""
def apply_hypercube(cube, context: dict):
return DataCube(cube.get_array().max("t"))
""",
"""
from openeo.udf import XarrayDataCube
def apply_datacube(cube: XarrayDataCube) -> XarrayDataCube:
return XarrayDataCube(cube.get_array().max("t"))
"""
])
def test_udf_invalid_signature(api100, user_defined_process_registry, udf_code):
"""Test doing UDF with invalid signature: should raise error"""
udf_code = textwrap.dedent(udf_code)
response = api100.result({
"lc": {
"process_id": "load_collection",
"arguments": {
"id": "TestCollection-LonLat4x4",
"temporal_extent": ["2021-01-01", "2021-02-01"],
"spatial_extent": {"west": 0.0, "south": 0.0, "east": 1.0, "north": 2.0},
"bands": ["Longitude", "Day"]
},
},
"reduce": {
"process_id": "reduce_dimension",
"arguments": {
"data": {"from_node": "lc"},
"dimension": "t",
"reducer": {"process_graph": {"udf": {
"process_id": "run_udf",
"arguments": {
"data": {"from_parameter": "data"},
"udf": udf_code,
"runtime": "Python",
},
"result": True
}}}
},
},
"save": {
"process_id": "save_result",
"arguments": {"data": {"from_node": "reduce"}, "format": "json"},
"result": True,
}
})
# TODO: improve status code, error code and message
response.assert_error(status_code=500, error_code=None, message="No UDF found")
def test_extra_validation_creo(api100, requests_mock):
pg = {"lc": {
"process_id": "load_collection",
"arguments": {
"id": "SENTINEL2_L2A_CREO",
"temporal_extent": ["2020-03-01", "2020-03-10"],
"spatial_extent": {"west": -87, "south": 67, "east": -86, "north": 68},
},
"result": True
}}
requests_mock.get(
"https://finder.creodias.eu/resto/api/collections/Sentinel2/search.json?productType=L2A&startDate=2020-03-01T00%3A00%3A00&cloudCover=%5B0%2C100%5D&page=1&maxRecords=100&sortParam=startDate&sortOrder=ascending&status=all&dataset=ESA-DATASET&completionDate=2020-03-10T23%3A59%3A59.999999&geometry=POLYGON+%28%28-87+68%2C+-86+68%2C+-86+67%2C+-87+67%2C+-87+68%29%29",
json={
"type": "FeatureCollection",
"properties": {"totalResults": 28, "itemsPerPage": 28},
"features": [
{
"type": "Feature",
"properties": {
"status": 0,
"productIdentifier": "/eodata/Sentinel-2/MSI/L2A/2020/03/01/S2A_MSIL2A_20200301T173231_N0214_R055_T16WEV_20200301T220803.SAFE",
"title": "S2A_MSIL2A_20200301T173231_N0214_R055_T16WEV_20200301T220803.SAFE",
}
},
{
"type": "Feature",
"properties": {
"status": 31,
"productIdentifier": "/eodata/Sentinel-2/MSI/L2A/2020/03/01/S2A_MSIL2A_20200301T173231_N0214_R055_T16WDA_20200301T220803.SAFE",
"title": "S2A_MSIL2A_20200301T173231_N0214_R055_T16WDA_20200301T220803.SAFE",
}
},
],
}
)
requests_mock.get(
"https://finder.creodias.eu/resto/api/collections/Sentinel2/search.json?productType=L2A&startDate=2020-03-01T00%3A00%3A00&cloudCover=%5B0%2C100%5D&page=2&maxRecords=100&sortParam=startDate&sortOrder=ascending&status=all&dataset=ESA-DATASET&completionDate=2020-03-10T23%3A59%3A59.999999&geometry=POLYGON+%28%28-87+68%2C+-86+68%2C+-86+67%2C+-87+67%2C+-87+68%29%29",
json={
"type": "FeatureCollection",
"properties": {"totalResults": 28, "itemsPerPage": 28},
"features": [],
}
)
response = api100.validation(pg)
assert response.json == {'errors': [
{'code': 'MissingProduct', 'message': "Tile '16WDA' in collection 'SENTINEL2_L2A_CREO' is not available."}
]}
def test_extra_validation_terrascope(api100, requests_mock):
pg = {"lc": {
"process_id": "load_collection",
"arguments": {
"id": "TERRASCOPE_S2_TOC_V2",
"temporal_extent": ["2020-03-01", "2020-03-10"],
"spatial_extent": {"west": -87, "south": 67, "east": -86, "north": 68},
},
"result": True
}}
requests_mock.get(
"https://finder.creodias.eu/resto/api/collections/Sentinel2/search.json?processingLevel=LEVEL1C&startDate=2020-03-01T00%3A00%3A00&cloudCover=%5B0%2C100%5D&page=1&maxRecords=100&sortParam=startDate&sortOrder=ascending&status=all&dataset=ESA-DATASET&completionDate=2020-03-10T23%3A59%3A59.999999&geometry=POLYGON+%28%28-87+68%2C+-86+68%2C+-86+67%2C+-87+67%2C+-87+68%29%29",
json={
"type": "FeatureCollection",
"properties": {"totalResults": 28, "itemsPerPage": 28},
"features": [
{
"type": "Feature",
"properties": {
"status": 0,
"productIdentifier": "/eodata/Sentinel-2/MSI/L1C/2020/03/01/S2A_MSIL1C_20200301T173231_N0209_R055_T16WEA_20200301T210331.SAFE",
"title": "S2A_MSIL1C_20200301T173231_N0209_R055_T16WEA_20200301T210331",
}
},
{
"type": "Feature",
"properties": {
"status": 0,
"productIdentifier": "/eodata/Sentinel-2/MSI/L1C/2020/03/01/S2A_MSIL1C_20200301T173231_N0209_R055_T16WDA_20200301T210331.SAFE",
"title": "S2A_MSIL1C_20200301T1732…DA_20200301T210331.SAFE",
}
},
],
}
)
requests_mock.get(
"https://finder.creodias.eu/resto/api/collections/Sentinel2/search.json?processingLevel=LEVEL1C&startDate=2020-03-01T00%3A00%3A00&cloudCover=%5B0%2C100%5D&page=2&maxRecords=100&sortParam=startDate&sortOrder=ascending&status=all&dataset=ESA-DATASET&completionDate=2020-03-10T23%3A59%3A59.999999&geometry=POLYGON+%28%28-87+68%2C+-86+68%2C+-86+67%2C+-87+67%2C+-87+68%29%29",
json={
"type": "FeatureCollection",
"properties": {"totalResults": 28, "itemsPerPage": 28},
"features": [],
}
)
requests_mock.get(
"https://services.terrascope.be/catalogue/products?collection=urn%3Aeop%3AVITO%3ATERRASCOPE_S2_TOC_V2&bbox=-87%2C67%2C-86%2C68&sortKeys=title&startIndex=1&start=2020-03-01T00%3A00%3A00&end=2020-03-10T23%3A59%3A59.999999",
json={
"type": "FeatureCollection",
"properties": {},
"features": [
{"type": "Feature", "properties": {
"title": "S2A_20200301T173231_16WEA_TOC_V200",
"identifier": "urn:eop:VITO:TERRASCOPE_S2_TOC_V2:S2A_20200301T173231_16WEA_TOC_V200",
}}
]
}
)
requests_mock.get(
"https://services.terrascope.be/catalogue/products?collection=urn%3Aeop%3AVITO%3ATERRASCOPE_S2_TOC_V2&bbox=-87%2C67%2C-86%2C68&sortKeys=title&startIndex=2&start=2020-03-01T00%3A00%3A00&end=2020-03-10T23%3A59%3A59.999999",
json={
"type": "FeatureCollection",
"properties": {},
"features": []
}
)
response = api100.validation(pg)
assert response.json == {'errors': [
{'code': 'MissingProduct', 'message': "Tile '16WDA' in collection 'TERRASCOPE_S2_TOC_V2' is not available."}
]}
@pytest.mark.parametrize(["lc_args", "expected"], [
(
{"id": "TERRASCOPE_S2_TOC_V2"},
["No temporal extent given.", "No spatial extent given."]
),
(
{"id": "TERRASCOPE_S2_TOC_V2", "temporal_extent": ["2020-03-01", "2020-03-10"]},
["No spatial extent given."]
),
(
{"id": "TERRASCOPE_S2_TOC_V2", "spatial_extent": {"west": -87, "south": 67, "east": -86, "north": 68}},
["No temporal extent given."]
),
])
def test_extra_validation_unlimited_extent(api100, lc_args, expected):
pg = {"lc": {"process_id": "load_collection", "arguments": lc_args, "result": True}}
response = api100.validation(pg)
assert response.json == {'errors': [{'code': 'UnlimitedExtent', 'message': m} for m in expected]}
@pytest.mark.parametrize(["temporal_extent", "expected"], [
(("2020-01-01", None), ("2020-01-05 00:00:00", "2020-02-15 00:00:00")),
((None, "2019-01-10"), ("2000-01-05 00:00:00", "2019-01-05 00:00:00")),
((None, None), ("2000-01-05 00:00:00", "2020-02-15 00:00:00")),
])
def test_load_collection_open_temporal_extent(api100, temporal_extent, expected):
with UtcNowClock.mock(now="2020-02-20"):
response = api100.check_result({
"lc": {
"process_id": "load_collection",
"arguments": {
"id": "TestCollection-LonLat4x4",
"temporal_extent": temporal_extent,
"spatial_extent": {"west": 0.0, "south": 0.0, "east": 1.0, "north": 1.0},
"bands": ["Day"]
},
},
"save": {
"process_id": "save_result",
"arguments": {"data": {"from_node": "lc"}, "format": "json"},
"result": True,
}
})
result = response.assert_status_code(200).json
# _log.info(repr(result))
assert result["dims"] == ["t", "bands", "x", "y"]
dates = result["coords"]["t"]["data"]
assert (min(dates), max(dates)) == expected
| 38.659777 | 379 | 0.477053 | 4,990 | 48,634 | 4.484369 | 0.080762 | 0.0366 | 0.043303 | 0.030969 | 0.82987 | 0.796264 | 0.781115 | 0.765429 | 0.748313 | 0.733432 | 0 | 0.082373 | 0.345499 | 48,634 | 1,257 | 380 | 38.690533 | 0.620527 | 0.023975 | 0 | 0.619048 | 0 | 0.005602 | 0.297552 | 0.027169 | 0 | 0 | 0 | 0.001591 | 0.051354 | 1 | 0.021475 | false | 0 | 0.008403 | 0 | 0.029879 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3f00fed8ef167ced50d8b125fd796e9094bdc3ba | 76 | py | Python | packages/pytea/pylib/torch/cuda/__init__.py | lego0901/pytea | 8ede650def2e68f4610ba816451d8b9e28f09f76 | [
"MIT"
] | null | null | null | packages/pytea/pylib/torch/cuda/__init__.py | lego0901/pytea | 8ede650def2e68f4610ba816451d8b9e28f09f76 | [
"MIT"
] | null | null | null | packages/pytea/pylib/torch/cuda/__init__.py | lego0901/pytea | 8ede650def2e68f4610ba816451d8b9e28f09f76 | [
"MIT"
] | null | null | null | def is_available():
# TODO: may return symbolic value?
return False
| 19 | 38 | 0.684211 | 10 | 76 | 5.1 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.236842 | 76 | 3 | 39 | 25.333333 | 0.87931 | 0.421053 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
3f194f24f3c66d8748b9b55d13bb3f1d67bcee1a | 96 | py | Python | venv/lib/python3.8/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_thread_lifecycle.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_thread_lifecycle.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_thread_lifecycle.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/4d/b2/26/1244d30725ae6c21c0a653440a030e7d6135ae5d812cdc4b1d447538ec | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.447917 | 0 | 96 | 1 | 96 | 96 | 0.447917 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3f1f5c9e2233424756cc4961846866e5f125c408 | 28 | py | Python | ntnui/apps/database/utils/__init__.py | kapteinstein/tdt4290 | 7bc2d2dbdbcc3fd35a05f1d2893d83255803f73b | [
"MIT"
] | null | null | null | ntnui/apps/database/utils/__init__.py | kapteinstein/tdt4290 | 7bc2d2dbdbcc3fd35a05f1d2893d83255803f73b | [
"MIT"
] | 13 | 2020-06-05T18:26:43.000Z | 2021-06-10T20:36:13.000Z | ntnui/apps/database/utils/__init__.py | kapteinstein/tdt4290 | 7bc2d2dbdbcc3fd35a05f1d2893d83255803f73b | [
"MIT"
] | 1 | 2019-04-07T23:42:22.000Z | 2019-04-07T23:42:22.000Z | from .user_manager import *
| 14 | 27 | 0.785714 | 4 | 28 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3f2502d2cdf3b36dc4e9af4f15bbfa5028f8df70 | 120 | py | Python | healthysnake/exceptions.py | jimah/healthysnake | 1cfe4c8b9437dcdf792fb790ac1b3914b73f16e9 | [
"MIT"
] | null | null | null | healthysnake/exceptions.py | jimah/healthysnake | 1cfe4c8b9437dcdf792fb790ac1b3914b73f16e9 | [
"MIT"
] | null | null | null | healthysnake/exceptions.py | jimah/healthysnake | 1cfe4c8b9437dcdf792fb790ac1b3914b73f16e9 | [
"MIT"
] | 1 | 2017-10-13T15:55:45.000Z | 2017-10-13T15:55:45.000Z | class DependencyAlreadyPresentException(Exception):
pass
class DependencyNotPresentException(Exception):
pass
| 17.142857 | 51 | 0.816667 | 8 | 120 | 12.25 | 0.625 | 0.265306 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 120 | 6 | 52 | 20 | 0.942308 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 0 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
3f38c2ec20e1c20fa514de037e4c0dcb0784b2dd | 44 | py | Python | mw_uml_generator/markdown/__init__.py | candyabc/mw-uml-generator | ccfc635fae93617000adb9e93b2821a3aa82324b | [
"MIT"
] | null | null | null | mw_uml_generator/markdown/__init__.py | candyabc/mw-uml-generator | ccfc635fae93617000adb9e93b2821a3aa82324b | [
"MIT"
] | null | null | null | mw_uml_generator/markdown/__init__.py | candyabc/mw-uml-generator | ccfc635fae93617000adb9e93b2821a3aa82324b | [
"MIT"
] | null | null | null | from .markdown_parse import MdTemplateParser | 44 | 44 | 0.909091 | 5 | 44 | 7.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 44 | 1 | 44 | 44 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3f52c90bcc762212fd33bbefe055d2763c9a4ca3 | 11,852 | py | Python | source/tests/py_tests/stack_variables_move_errors_test.py | Panzerschrek/U-00DC-Sprache | eb677a66d178985433a62eb6b8a50ce2cdb14b1a | [
"BSD-3-Clause"
] | 45 | 2016-06-21T22:28:43.000Z | 2022-03-26T12:21:46.000Z | source/tests/py_tests/stack_variables_move_errors_test.py | Panzerschrek/U-00DC-Sprache | eb677a66d178985433a62eb6b8a50ce2cdb14b1a | [
"BSD-3-Clause"
] | 6 | 2020-07-12T18:00:10.000Z | 2021-11-30T11:20:14.000Z | source/tests/py_tests/stack_variables_move_errors_test.py | Panzerschrek/U-00DC-Sprache | eb677a66d178985433a62eb6b8a50ce2cdb14b1a | [
"BSD-3-Clause"
] | 5 | 2019-09-03T17:20:34.000Z | 2022-01-30T15:10:21.000Z | from py_tests_common import *
def ExpectedReferenceValue_ForMove_Test0():
c_program_text= """
fn Foo()
{
auto imut x= 0;
move(x); // Expected mutable variable.
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "ExpectedReferenceValue" )
assert( errors_list[0].src_loc.line == 5 )
def ExpectedReferenceValue_ForMove_Test1():
c_program_text= """
fn Foo( i32 imut x )
{
move(x); // Expected mutable variable, got immutable argument.
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "ExpectedReferenceValue" )
assert( errors_list[0].src_loc.line == 4 )
def ExpectedReferenceValue_ForMove_Test2():
c_program_text= """
auto constexpr pi= 3.141592535;
fn Foo()
{
move(pi); // Expected mutable variable, got global constant.
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "ExpectedVariable" )
assert( errors_list[0].src_loc.line == 5 )
def ExpectedVariable_ForMove_Test0():
c_program_text= """
fn Foo()
{
auto mut x= 0;
auto &mut r= x;
move(r); // Expected variable, got reference
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "ExpectedVariable" )
assert( errors_list[0].src_loc.line == 6 )
def ExpectedVariable_ForMove_Test1():
c_program_text= """
struct S
{
i32 x;
fn Foo( this )
{
move(x); // Error, moving field
}
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "ExpectedVariable" )
assert( errors_list[0].src_loc.line == 7 )
def ExpectedVariable_ForMove_Test2():
c_program_text= """
fn Foo( i32 &mut x )
{
move(x);
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "ExpectedVariable" )
assert( errors_list[0].src_loc.line == 4 )
def AccessingMovedVariable_Test0():
c_program_text= """
fn Foo()
{
auto mut x= 0;
move(x);
++x;
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "AccessingMovedVariable" )
assert( errors_list[0].src_loc.line == 6 )
def AccessingMovedVariable_Test1():
c_program_text= """
fn Foo()
{
auto mut x= 0;
move(x);
move(x);
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "AccessingMovedVariable" )
assert( errors_list[0].src_loc.line == 6 )
def AccessingMovedVariable_Test2():
c_program_text= """
fn Foo()
{
auto mut x= 0;
move(x) + x;
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "AccessingMovedVariable" )
assert( errors_list[0].src_loc.line == 5 )
def AccessingMovedVariable_Test3():
c_program_text= """
fn Foo()
{
auto mut x= 0;
move(x);
x= 42; // Currently, event can not assign value to moved variable.
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "AccessingMovedVariable" )
assert( errors_list[0].src_loc.line == 6 )
def AccessingMovedVariable_Test4():
c_program_text= """
fn Foo( i32 mut x )
{
move(x);
--x;
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "AccessingMovedVariable" )
assert( errors_list[0].src_loc.line == 5 )
def AccessingMovedVariable_Test5():
c_program_text= """
fn Foo()
{
auto mut b= true;
move(b) && b; // In lazy ligical operator.
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "AccessingMovedVariable" )
assert( errors_list[0].src_loc.line == 5 )
def AccessingMovedVariable_Test6():
c_program_text= """
fn Foo()
{
auto mut b= true;
if( move(b) ) { b= false; }
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "AccessingMovedVariable" )
assert( errors_list[0].src_loc.line == 5 )
def AccessingMovedVariable_InTupleFor_Test0():
c_program_text= """
fn Foo()
{
var bool mut b= false;
var tup[ i32, f32 ] t= zero_init;
for( el : t )
{
move(b); // On second iteration thi variable is already moved.
}
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "AccessingMovedVariable" )
assert( errors_list[0].src_loc.line == 8 )
def AccessingMovedVariable_InTupleFor_Test1():
c_program_text= """
fn Foo()
{
var bool mut b= false;
var tup[ i32 ] t= zero_init;
for( el : t )
{
move(b); // Ok, move 1 time, because loop have 1 iteration.
}
}
"""
tests_lib.build_program( c_program_text )
def AccessingMovedVariable_InTupleFor_Test2():
c_program_text= """
fn Foo()
{
var bool mut b= false;
var tup[ ] t= zero_init;
for( el : t ) // Loop have zero iterations.
{
move(b);
}
b= true; // ok, 'b' is not moved.
}
"""
tests_lib.build_program( c_program_text )
def OuterVariableMoveInsideLoop_Test0():
c_program_text= """
fn Foo()
{
auto mut x= 0;
while( false ){ move(x); }
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "OuterVariableMoveInsideLoop" )
assert( errors_list[0].src_loc.line == 5 )
def OuterVariableMoveInsideLoop_Test1():
c_program_text= """
fn Foo( i32 mut x )
{
while( false ){ move(x); }
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "OuterVariableMoveInsideLoop" )
assert( errors_list[0].src_loc.line == 4 )
def OuterVariableMoveInsideLoop_Test2():
c_program_text= """
fn Foo( i32 mut x )
{
while( false )
{
if( false ) { move(x); }
else { move(x); }
}
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "OuterVariableMoveInsideLoop" )
assert( errors_list[0].src_loc.line == 8 )
def ConditionalMove_Test0():
c_program_text= """
fn Foo()
{
auto mut x= 0;
if( false ) { move(x); }
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "ConditionalMove" )
assert( errors_list[0].src_loc.line == 5 )
def ConditionalMove_Test1():
c_program_text= """
fn Foo()
{
auto mut x= 0;
if( false ) { move(x); }
else if( false ) {}
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "ConditionalMove" )
assert( errors_list[0].src_loc.line == 6 )
def ConditionalMove_Test2():
c_program_text= """
fn Foo()
{
auto mut x= 0;
if( false ) { move(x); }
else if( false ) {}
else { move(x); }
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "ConditionalMove" )
assert( errors_list[0].src_loc.line == 7 )
def ConditionalMove_Test3():
c_program_text= """
fn Foo()
{
auto mut x= 0;
if( false ) { } // Not moved in first branch
else { move(x); }
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "ConditionalMove" )
assert( errors_list[0].src_loc.line == 6 )
def ConditionalMove_InTupleFor_Test0():
c_program_text= """
fn Cond() : bool;
fn Foo()
{
var bool mut b= false;
var tup[ i32, f32 ] t= zero_init;
for( el : t )
{
if( Cond() )
{
move(b);
break;
}
} // Error, 'b' moved not in all branches.
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "ConditionalMove" )
assert( errors_list[0].src_loc.line == 14 )
def ConditionalMove_ForLazyLogicalOperators_Test0():
c_program_text= """
fn Foo()
{
auto mut b= false;
true && move(b); // Second part of lazy logical operator is conditional.
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "ConditionalMove" )
assert( errors_list[0].src_loc.line == 5 )
def ConditionalMove_ForLazyLogicalOperators_Test1():
c_program_text= """
fn Foo()
{
auto mut b= false;
false || move(b); // Second part of lazy logical operator is conditional.
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "ConditionalMove" )
assert( errors_list[0].src_loc.line == 5 )
def ConditionalMove_ForLazyLogicalOperators_Test2():
c_program_text= """
fn Foo()
{
auto mut b= false;
false || ( move(b) || true ); // Move inside right part of top-level lazy logical operator.
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "ConditionalMove" )
assert( errors_list[0].src_loc.line == 5 )
def MovedVariableHaveReferences_Test0():
c_program_text= """
fn Foo()
{
auto mut x= 3.5;
auto &ref= x;
move(x);
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "MovedVariableHaveReferences" )
assert( errors_list[0].src_loc.line == 6 )
def MovedVariableHaveReferences_Test1():
c_program_text= """
fn Foo( i16 mut short )
{
auto &mut ref= short;
move(short);
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "MovedVariableHaveReferences" )
assert( errors_list[0].src_loc.line == 5 )
def MovedVariableHaveReferences_Test2():
c_program_text= """
struct S
{
i32& r;
}
fn Foo()
{
var i32 mut x= 45;
var S s{ .r= x };
move(x);
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( len(errors_list) > 0 )
assert( errors_list[0].error_code == "MovedVariableHaveReferences" )
assert( errors_list[0].src_loc.line == 10 )
def MovedVariableHaveReferences_Test3():
c_program_text= """
struct S
{
i32& r;
fn constructor( this'a', i32&'b x ) ' a <- b '
( r= x ) {}
}
fn Bar( S s, i32 x ){}
fn Foo()
{
var i32 mut x= 45;
Bar( S(x), move(x) );
}
"""
errors_list= ConvertErrors( tests_lib.build_program_with_errors( c_program_text ) )
assert( HaveError( errors_list, "MovedVariableHaveReferences", 12 ) )
| 25.004219 | 94 | 0.683176 | 1,615 | 11,852 | 4.726316 | 0.08483 | 0.149352 | 0.121053 | 0.124722 | 0.858378 | 0.842526 | 0.827722 | 0.819599 | 0.793397 | 0.770339 | 0 | 0.022012 | 0.179717 | 11,852 | 473 | 95 | 25.057082 | 0.763115 | 0 | 0 | 0.618005 | 0 | 0.002433 | 0.324502 | 0.034509 | 0 | 0 | 0 | 0 | 0.206813 | 1 | 0.075426 | false | 0 | 0.002433 | 0 | 0.077859 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3f66fafddadd1a000fab93a7be99fc6bc4d30c74 | 135 | py | Python | scopes/custom_print_demo.py | padmacho/pythontutorial | 80c58d2d6efc0c3598f92b627338c6cd9fda1759 | [
"Apache-2.0"
] | null | null | null | scopes/custom_print_demo.py | padmacho/pythontutorial | 80c58d2d6efc0c3598f92b627338c6cd9fda1759 | [
"Apache-2.0"
] | null | null | null | scopes/custom_print_demo.py | padmacho/pythontutorial | 80c58d2d6efc0c3598f92b627338c6cd9fda1759 | [
"Apache-2.0"
] | null | null | null | from custom_print import print
print(10) # instead of buit in scope print. Global scope print defined in custom_print module is used | 67.5 | 102 | 0.8 | 23 | 135 | 4.608696 | 0.652174 | 0.207547 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017857 | 0.17037 | 135 | 2 | 102 | 67.5 | 0.928571 | 0.659259 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 1 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
58b0498234ba7e665a029c9fbe9d76d69b6fdac4 | 26 | py | Python | lltk/corpus/eebo_tcp/__init__.py | literarylab/lltk | 0e516d7fa0978c1a3bd2cb7636f0089772e515ec | [
"MIT"
] | 5 | 2021-03-15T21:05:06.000Z | 2022-03-04T10:52:16.000Z | lltk/corpus/eebo_tcp/__init__.py | literarylab/lltk | 0e516d7fa0978c1a3bd2cb7636f0089772e515ec | [
"MIT"
] | 1 | 2021-05-04T17:01:47.000Z | 2021-05-10T15:14:55.000Z | lltk/corpus/eebo_tcp/__init__.py | literarylab/lltk | 0e516d7fa0978c1a3bd2cb7636f0089772e515ec | [
"MIT"
] | null | null | null | from .eebo import EEBO_TCP | 26 | 26 | 0.846154 | 5 | 26 | 4.2 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.913043 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
58bdb9189f4e02d36ba3121fd14298582a46633d | 76 | py | Python | pycharles/__init__.py | shakedzy/pycharles | ba330050e85a2785dc1a7408b2c5460bf8519a76 | [
"Apache-2.0"
] | 1 | 2018-06-15T16:39:56.000Z | 2018-06-15T16:39:56.000Z | pycharles/__init__.py | shakedzy/pycharles | ba330050e85a2785dc1a7408b2c5460bf8519a76 | [
"Apache-2.0"
] | null | null | null | pycharles/__init__.py | shakedzy/pycharles | ba330050e85a2785dc1a7408b2c5460bf8519a76 | [
"Apache-2.0"
] | 2 | 2019-07-05T14:03:08.000Z | 2021-05-17T21:17:45.000Z | from pycharles.model import Model
from pycharles import offspring_functions
| 25.333333 | 41 | 0.881579 | 10 | 76 | 6.6 | 0.6 | 0.393939 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105263 | 76 | 2 | 42 | 38 | 0.970588 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
58cba7f400347befedbbb4e566695eec52f8d8e4 | 49 | py | Python | util/__init__.py | fendaq/TextRSN | 02a6bc06cd64b581414ed5065a8c93e0c68a807a | [
"MIT"
] | 1 | 2020-05-26T01:00:38.000Z | 2020-05-26T01:00:38.000Z | util/__init__.py | fendaq/TextRSN | 02a6bc06cd64b581414ed5065a8c93e0c68a807a | [
"MIT"
] | null | null | null | util/__init__.py | fendaq/TextRSN | 02a6bc06cd64b581414ed5065a8c93e0c68a807a | [
"MIT"
] | 3 | 2020-05-26T01:00:39.000Z | 2021-02-09T16:38:37.000Z | from .visualize import *
from .detection import * | 24.5 | 24 | 0.77551 | 6 | 49 | 6.333333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 49 | 2 | 25 | 24.5 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
451817abd7a3044624c931f8f7e88c2bb39968a6 | 36 | py | Python | remimi/sensors/__init__.py | xiong-jie-y/remimi | 0b0360728ab8abd941a35c78f5c114317fd2caed | [
"MIT"
] | 23 | 2021-03-28T05:03:02.000Z | 2021-10-03T14:45:20.000Z | remimi/sensors/__init__.py | xiong-jie-y/remimi | 0b0360728ab8abd941a35c78f5c114317fd2caed | [
"MIT"
] | null | null | null | remimi/sensors/__init__.py | xiong-jie-y/remimi | 0b0360728ab8abd941a35c78f5c114317fd2caed | [
"MIT"
] | 1 | 2021-09-03T10:53:46.000Z | 2021-09-03T10:53:46.000Z | class StreamFinished(EOFError): pass | 36 | 36 | 0.861111 | 4 | 36 | 7.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.055556 | 36 | 1 | 36 | 36 | 0.911765 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
4526e6c9c4a8771d06c7c3b392336a19da9f7f92 | 20,167 | py | Python | tests/testflows/window_functions/tests/funcs.py | pdv-ru/ClickHouse | 0ff975bcf3008fa6c6373cbdfed16328e3863ec5 | [
"Apache-2.0"
] | 15,577 | 2019-09-23T11:57:53.000Z | 2022-03-31T18:21:48.000Z | tests/testflows/window_functions/tests/funcs.py | pdv-ru/ClickHouse | 0ff975bcf3008fa6c6373cbdfed16328e3863ec5 | [
"Apache-2.0"
] | 16,476 | 2019-09-23T11:47:00.000Z | 2022-03-31T23:06:01.000Z | tests/testflows/window_functions/tests/funcs.py | pdv-ru/ClickHouse | 0ff975bcf3008fa6c6373cbdfed16328e3863ec5 | [
"Apache-2.0"
] | 3,633 | 2019-09-23T12:18:28.000Z | 2022-03-31T15:55:48.000Z | from testflows.core import *
from window_functions.requirements import *
from window_functions.tests.common import *
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_FirstValue("1.0")
)
def first_value(self):
"""Check `first_value` function.
"""
expected = convert_output("""
first_value | ten | four
-------------+-----+------
0 | 0 | 0
0 | 0 | 0
0 | 4 | 0
1 | 1 | 1
1 | 1 | 1
1 | 7 | 1
1 | 9 | 1
0 | 0 | 2
1 | 1 | 3
1 | 3 | 3
""")
with Example("using first_value"):
execute_query(
"SELECT first_value(ten) OVER (PARTITION BY four ORDER BY ten) AS first_value, ten, four FROM tenk1 WHERE unique2 < 10",
expected=expected
)
with Example("using any equivalent"):
execute_query(
"SELECT any(ten) OVER (PARTITION BY four ORDER BY ten) AS first_value, ten, four FROM tenk1 WHERE unique2 < 10",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_LastValue("1.0")
)
def last_value(self):
"""Check `last_value` function.
"""
with Example("order by window", description="""
Check that last_value returns the last row of the frame that is CURRENT ROW in ORDER BY window
"""):
expected = convert_output("""
last_value | ten | four
------------+-----+------
0 | 0 | 0
0 | 0 | 0
2 | 0 | 2
1 | 1 | 1
1 | 1 | 1
3 | 1 | 3
3 | 3 | 3
0 | 4 | 0
1 | 7 | 1
1 | 9 | 1
""")
with Check("using last_value"):
execute_query(
"SELECT last_value(four) OVER (ORDER BY ten, four) AS last_value, ten, four FROM tenk1 WHERE unique2 < 10",
expected=expected
)
with Check("using anyLast() equivalent"):
execute_query(
"SELECT anyLast(four) OVER (ORDER BY ten, four) AS last_value, ten, four FROM tenk1 WHERE unique2 < 10",
expected=expected
)
with Example("partition by window", description="""
Check that last_value returns the last row of the frame that is CURRENT ROW in ORDER BY window
"""):
expected = convert_output("""
last_value | ten | four
------------+-----+------
4 | 0 | 0
4 | 0 | 0
4 | 4 | 0
9 | 1 | 1
9 | 1 | 1
9 | 7 | 1
9 | 9 | 1
0 | 0 | 2
3 | 1 | 3
3 | 3 | 3
""")
with Check("using last_value"):
execute_query(
"""SELECT last_value(ten) OVER (PARTITION BY four) AS last_value, ten, four FROM
(SELECT * FROM tenk1 WHERE unique2 < 10 ORDER BY four, ten)
ORDER BY four, ten""",
expected=expected
)
with Check("using anyLast() equivalent"):
execute_query(
"""SELECT anyLast(ten) OVER (PARTITION BY four) AS last_value, ten, four FROM
(SELECT * FROM tenk1 WHERE unique2 < 10 ORDER BY four, ten)
ORDER BY four, ten""",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_Lag_Workaround("1.0")
)
def lag(self):
"""Check `lag` function workaround.
"""
with Example("anyOrNull"):
expected = convert_output("""
lag | ten | four
-----+-----+------
\\N | 0 | 0
0 | 0 | 0
0 | 4 | 0
\\N | 1 | 1
1 | 1 | 1
1 | 7 | 1
7 | 9 | 1
\\N | 0 | 2
\\N | 1 | 3
1 | 3 | 3
""")
execute_query(
"SELECT anyOrNull(ten) OVER (PARTITION BY four ORDER BY ten ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING) AS lag , ten, four FROM tenk1 WHERE unique2 < 10",
expected=expected
)
with Example("any"):
expected = convert_output("""
lag | ten | four
-----+-----+------
0 | 0 | 0
0 | 0 | 0
0 | 4 | 0
0 | 1 | 1
1 | 1 | 1
1 | 7 | 1
7 | 9 | 1
0 | 0 | 2
0 | 1 | 3
1 | 3 | 3
""")
execute_query(
"SELECT any(ten) OVER (PARTITION BY four ORDER BY ten ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING) AS lag , ten, four FROM tenk1 WHERE unique2 < 10",
expected=expected
)
with Example("anyOrNull with column value as offset"):
expected = convert_output("""
lag | ten | four
-----+-----+------
0 | 0 | 0
0 | 0 | 0
4 | 4 | 0
\\N | 1 | 1
1 | 1 | 1
1 | 7 | 1
7 | 9 | 1
\\N | 0 | 2
\\N | 1 | 3
\\N | 3 | 3
""")
execute_query(
"SELECT any(ten) OVER (PARTITION BY four ORDER BY ten ROWS BETWEEN four PRECEDING AND four PRECEDING) AS lag , ten, four FROM tenk1 WHERE unique2 < 10",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_Lead_Workaround("1.0")
)
def lead(self):
"""Check `lead` function workaround.
"""
with Example("anyOrNull"):
expected = convert_output("""
lead | ten | four
------+-----+------
0 | 0 | 0
4 | 0 | 0
\\N | 4 | 0
1 | 1 | 1
7 | 1 | 1
9 | 7 | 1
\\N | 9 | 1
\\N | 0 | 2
3 | 1 | 3
\\N | 3 | 3
""")
execute_query(
"SELECT anyOrNull(ten) OVER (PARTITION BY four ORDER BY ten ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING) AS lead, ten, four FROM tenk1 WHERE unique2 < 10",
expected=expected
)
with Example("any"):
expected = convert_output("""
lead | ten | four
------+-----+------
0 | 0 | 0
4 | 0 | 0
0 | 4 | 0
1 | 1 | 1
7 | 1 | 1
9 | 7 | 1
0 | 9 | 1
0 | 0 | 2
3 | 1 | 3
0 | 3 | 3
""")
execute_query(
"SELECT any(ten) OVER (PARTITION BY four ORDER BY ten ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING) AS lead, ten, four FROM tenk1 WHERE unique2 < 10",
expected=expected
)
with Example("any with arithmetic expr"):
expected = convert_output("""
lead | ten | four
------+-----+------
0 | 0 | 0
8 | 0 | 0
0 | 4 | 0
2 | 1 | 1
14 | 1 | 1
18 | 7 | 1
0 | 9 | 1
0 | 0 | 2
6 | 1 | 3
0 | 3 | 3
""")
execute_query(
"SELECT any(ten * 2) OVER (PARTITION BY four ORDER BY ten ROWS BETWEEN 1 FOLLOWING AND 1 FOLLOWING) AS lead, ten, four FROM tenk1 WHERE unique2 < 10",
expected=expected
)
with Example("subquery as offset"):
expected = convert_output("""
lead
------
0
0
4
1
7
9
\\N
0
3
\\N
""")
execute_query(
"SELECT anyNull(ten) OVER (PARTITION BY four ORDER BY ten ROWS BETWEEN (SELECT two FROM tenk1 WHERE unique2 = unique2) FOLLOWING AND (SELECT two FROM tenk1 WHERE unique2 = unique2) FOLLOWING) AS lead "
"FROM tenk1 WHERE unique2 < 10",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_RowNumber("1.0")
)
def row_number(self):
"""Check `row_number` function.
"""
expected = convert_output("""
row_number
------------
1
2
3
4
5
6
7
8
9
10
""")
execute_query(
"SELECT row_number() OVER (ORDER BY unique2) AS row_number FROM tenk1 WHERE unique2 < 10",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_Rank("1.0")
)
def rank(self):
"""Check `rank` function.
"""
expected = convert_output("""
rank_1 | ten | four
--------+-----+------
1 | 0 | 0
1 | 0 | 0
3 | 4 | 0
1 | 1 | 1
1 | 1 | 1
3 | 7 | 1
4 | 9 | 1
1 | 0 | 2
1 | 1 | 3
2 | 3 | 3
""")
execute_query(
"SELECT rank() OVER (PARTITION BY four ORDER BY ten) AS rank_1, ten, four FROM tenk1 WHERE unique2 < 10",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_DenseRank("1.0")
)
def dense_rank(self):
"""Check `dense_rank` function.
"""
expected = convert_output("""
dense_rank | ten | four
------------+-----+------
1 | 0 | 0
1 | 0 | 0
2 | 4 | 0
1 | 1 | 1
1 | 1 | 1
2 | 7 | 1
3 | 9 | 1
1 | 0 | 2
1 | 1 | 3
2 | 3 | 3
""")
execute_query(
"SELECT dense_rank() OVER (PARTITION BY four ORDER BY ten) AS dense_rank, ten, four FROM tenk1 WHERE unique2 < 10",
expected=expected
)
@TestScenario
def last_value_with_no_frame(self):
"""Check last_value function with no frame.
"""
expected = convert_output("""
four | ten | sum | last_value
------+-----+-----+------------
0 | 0 | 0 | 0
0 | 2 | 2 | 2
0 | 4 | 6 | 4
0 | 6 | 12 | 6
0 | 8 | 20 | 8
1 | 1 | 1 | 1
1 | 3 | 4 | 3
1 | 5 | 9 | 5
1 | 7 | 16 | 7
1 | 9 | 25 | 9
2 | 0 | 0 | 0
2 | 2 | 2 | 2
2 | 4 | 6 | 4
2 | 6 | 12 | 6
2 | 8 | 20 | 8
3 | 1 | 1 | 1
3 | 3 | 4 | 3
3 | 5 | 9 | 5
3 | 7 | 16 | 7
3 | 9 | 25 | 9
""")
execute_query(
"SELECT four, ten, sum(ten) over (partition by four order by ten) AS sum, "
"last_value(ten) over (partition by four order by ten) AS last_value "
"FROM (select distinct ten, four from tenk1)",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_LastValue("1.0"),
RQ_SRS_019_ClickHouse_WindowFunctions_Lag_Workaround("1.0"),
)
def last_value_with_lag_workaround(self):
"""Check last value with lag workaround.
"""
expected = convert_output("""
last_value | lag | salary
------------+------+--------
4500 | 0 | 3500
4800 | 3500 | 3900
5200 | 3900 | 4200
5200 | 4200 | 4500
5200 | 4500 | 4800
5200 | 4800 | 4800
6000 | 4800 | 5000
6000 | 5000 | 5200
6000 | 5200 | 5200
6000 | 5200 | 6000
""")
execute_query(
"select last_value(salary) over(order by salary range between 1000 preceding and 1000 following) AS last_value, "
"any(salary) over(order by salary rows between 1 preceding and 1 preceding) AS lag, "
"salary from empsalary",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_FirstValue("1.0"),
RQ_SRS_019_ClickHouse_WindowFunctions_Lead_Workaround("1.0")
)
def first_value_with_lead_workaround(self):
"""Check first value with lead workaround.
"""
expected = convert_output("""
first_value | lead | salary
-------------+------+--------
3500 | 3900 | 3500
3500 | 4200 | 3900
3500 | 4500 | 4200
3500 | 4800 | 4500
3900 | 4800 | 4800
3900 | 5000 | 4800
4200 | 5200 | 5000
4200 | 5200 | 5200
4200 | 6000 | 5200
5000 | 0 | 6000
""")
execute_query(
"select first_value(salary) over(order by salary range between 1000 preceding and 1000 following) AS first_value, "
"any(salary) over(order by salary rows between 1 following and 1 following) AS lead,"
"salary from empsalary",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_LeadInFrame("1.0")
)
def leadInFrame(self):
"""Check `leadInFrame` function.
"""
with Example("non default offset"):
expected = convert_output("""
empno | salary | lead
--------+--------+-------
1 | 5000 | 5000
2 | 3900 | 3900
3 | 4800 | 4800
4 | 4800 | 4800
5 | 3500 | 3500
7 | 4200 | 4200
8 | 6000 | 6000
9 | 4500 | 4500
10 | 5200 | 5200
11 | 5200 | 5200
""")
execute_query(
"select empno, salary, leadInFrame(salary,0) OVER (ORDER BY salary) AS lead FROM empsalary ORDER BY empno",
expected=expected
)
with Example("default offset"):
expected = convert_output("""
empno | salary | lead
--------+--------+-------
1 | 5000 | 0
2 | 3900 | 0
3 | 4800 | 4800
4 | 4800 | 0
5 | 3500 | 0
7 | 4200 | 0
8 | 6000 | 0
9 | 4500 | 0
10 | 5200 | 5200
11 | 5200 | 0
""")
execute_query(
"select empno, salary, leadInFrame(salary) OVER (ORDER BY salary) AS lead FROM (SELECT * FROM empsalary ORDER BY empno)",
expected=expected
)
with Example("explicit default value"):
expected = convert_output("""
empno | salary | lead
--------+--------+-------
1 | 5000 | 8
2 | 3900 | 8
3 | 4800 | 4800
4 | 4800 | 8
5 | 3500 | 8
7 | 4200 | 8
8 | 6000 | 8
9 | 4500 | 8
10 | 5200 | 5200
11 | 5200 | 8
""")
execute_query(
"select empno, salary, leadInFrame(salary,1,8) OVER (ORDER BY salary) AS lead FROM empsalary ORDER BY empno",
expected=expected
)
with Example("without order by"):
expected = convert_output("""
empno | salary | lead
--------+--------+-------
1 | 5000 | 3900
2 | 3900 | 4800
3 | 4800 | 4800
4 | 4800 | 3500
5 | 3500 | 4200
7 | 4200 | 6000
8 | 6000 | 4500
9 | 4500 | 5200
10 | 5200 | 5200
11 | 5200 | 0
""")
execute_query(
"select empno, salary, leadInFrame(salary) OVER () AS lead FROM (SELECT * FROM empsalary ORDER BY empno)",
expected=expected
)
with Example("with nulls"):
expected = convert_output("""
number | lead
--------+-----
1 | 1
1 | 2
2 | 3
3 | 0
\\N | 0
""")
execute_query(
"select number, leadInFrame(number,1,0) OVER () AS lead FROM values('number Nullable(Int8)', (1),(1),(2),(3),(NULL))",
expected=expected
)
@TestScenario
@Requirements(
RQ_SRS_019_ClickHouse_WindowFunctions_LagInFrame("1.0")
)
def lagInFrame(self):
"""Check `lagInFrame` function.
"""
with Example("non default offset"):
expected = convert_output("""
empno | salary | lag
--------+--------+-------
1 | 5000 | 5000
2 | 3900 | 3900
3 | 4800 | 4800
4 | 4800 | 4800
5 | 3500 | 3500
7 | 4200 | 4200
8 | 6000 | 6000
9 | 4500 | 4500
10 | 5200 | 5200
11 | 5200 | 5200
""")
execute_query(
"select empno, salary, lagInFrame(salary,0) OVER (ORDER BY salary) AS lag FROM empsalary ORDER BY empno",
expected=expected
)
with Example("default offset"):
expected = convert_output("""
empno | salary | lag
--------+--------+-------
5 | 3500 | 0
2 | 3900 | 3500
7 | 4200 | 3900
9 | 4500 | 4200
3 | 4800 | 4500
4 | 4800 | 4800
1 | 5000 | 4800
10 | 5200 | 5000
11 | 5200 | 5200
8 | 6000 | 5200
""")
execute_query(
"select empno, salary, lagInFrame(salary) OVER (ORDER BY salary) AS lag FROM (SELECT * FROM empsalary ORDER BY empno)",
expected=expected
)
with Example("explicit default value"):
expected = convert_output("""
empno | salary | lag
--------+--------+-------
1 | 5000 | 4800
2 | 3900 | 3500
3 | 4800 | 4500
4 | 4800 | 4800
5 | 3500 | 8
7 | 4200 | 3900
8 | 6000 | 5200
9 | 4500 | 4200
10 | 5200 | 5000
11 | 5200 | 5200
""")
execute_query(
"select empno, salary, lagInFrame(salary,1,8) OVER (ORDER BY salary) AS lag FROM empsalary ORDER BY empno",
expected=expected
)
with Example("without order by"):
expected = convert_output("""
empno | salary | lag
--------+--------+-------
1 | 5000 | 0
2 | 3900 | 5000
3 | 4800 | 3900
4 | 4800 | 4800
5 | 3500 | 4800
7 | 4200 | 3500
8 | 6000 | 4200
9 | 4500 | 6000
10 | 5200 | 4500
11 | 5200 | 5200
""")
execute_query(
"select empno, salary, lagInFrame(salary) OVER () AS lag FROM (SELECT * FROM empsalary ORDER BY empno)",
expected=expected
)
with Example("with nulls"):
expected = convert_output("""
number | lag
--------+-----
1 | 0
1 | 1
2 | 1
3 | 2
\\N | 3
""")
execute_query(
"select number, lagInFrame(number,1,0) OVER () AS lag FROM values('number Nullable(Int8)', (1),(1),(2),(3),(NULL))",
expected=expected
)
@TestFeature
@Name("funcs")
def feature(self):
"""Check true window functions.
"""
for scenario in loads(current_module(), Scenario):
Scenario(run=scenario, flags=TE)
| 30.055142 | 214 | 0.428472 | 2,115 | 20,167 | 3.999527 | 0.06383 | 0.015132 | 0.012767 | 0.011349 | 0.803996 | 0.738385 | 0.717342 | 0.705403 | 0.666627 | 0.626079 | 0 | 0.141484 | 0.458521 | 20,167 | 670 | 215 | 30.1 | 0.63315 | 0.023454 | 0 | 0.590051 | 0 | 0.044597 | 0.699241 | 0.029473 | 0 | 0 | 0 | 0 | 0 | 1 | 0.022298 | false | 0 | 0.005146 | 0 | 0.027444 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4542cc66d15dee533df580968d6d7003c2f467a2 | 164 | py | Python | turisteo/turisteo.py | RaulMurillo/turisteo | c26240c57d1bf184917e160cdeb9b98058ad68b6 | [
"Apache-2.0"
] | 2 | 2020-03-07T21:31:47.000Z | 2021-09-08T07:17:32.000Z | turisteo/turisteo.py | RaulMurillo/turisteo | c26240c57d1bf184917e160cdeb9b98058ad68b6 | [
"Apache-2.0"
] | null | null | null | turisteo/turisteo.py | RaulMurillo/turisteo | c26240c57d1bf184917e160cdeb9b98058ad68b6 | [
"Apache-2.0"
] | 1 | 2020-05-04T07:56:57.000Z | 2020-05-04T07:56:57.000Z | from app import app#, db
# from app.models import User, Post
# @app.shell_context_processor
# def make_shell_context():
# return {'User': User, 'Post': Post}
| 20.5 | 41 | 0.695122 | 24 | 164 | 4.583333 | 0.541667 | 0.127273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170732 | 164 | 7 | 42 | 23.428571 | 0.808824 | 0.810976 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7a081fc68ff07bb18e4d5121390d383ea3085fc1 | 14,886 | py | Python | stage/configuration/test_field_mapper_processor.py | streamsets/datacollector-tests | 6c3e908768e1d4a586e9183e2141096921ecd5be | [
"Apache-2.0"
] | 14 | 2019-03-04T10:12:39.000Z | 2021-11-24T16:17:09.000Z | stage/configuration/test_field_mapper_processor.py | Pragatibs/datacollector-tests | aac53b2f0e056009ef0e437c8430651e3cf4d502 | [
"Apache-2.0"
] | 48 | 2019-03-08T14:59:06.000Z | 2021-08-13T14:49:56.000Z | stage/configuration/test_field_mapper_processor.py | Pragatibs/datacollector-tests | aac53b2f0e056009ef0e437c8430651e3cf4d502 | [
"Apache-2.0"
] | 23 | 2018-09-24T20:49:17.000Z | 2021-11-24T16:17:11.000Z | # Copyright 2020 StreamSets Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
import json
from streamsets.testframework.decorators import stub
from streamsets.testframework.markers import sdc_min_version
@pytest.mark.parametrize('stage_attributes', [
{'record_error': None, 'error': None, 'data': {"value": "new_field_name(some"}, 'aggregation_expression': '${fields[0]}', 'expression': '/${record:value("/value")}'},
{'record_error': True, 'error': 'EXPRESSION_EVALUATION_FAILURE_03', 'data': {"value": True}, 'aggregation_expression': '${0/record:value("/value")}', 'expression': '/${record:value("/value")}'},
{'record_error': False, 'error': 'INVALID_EXPRESSION_04', 'data': {"value": "value"}, 'aggregation_expression': '${fields', 'expression': '/${record:value("/value")}'},
])
@sdc_min_version('3.8.0')
def test_aggregation_expression(sdc_builder, sdc_executor, stage_attributes):
"""
Here we want to test that
1. A record error should be added if a aggregation_expression cannot be evaluated.
2. If a aggregation_expression is invalid the pipeline should be aborted.
3. If a aggregation_expression evaluates to a valid value a corresponding field should be modified as needed.
The pipeline is as follows:
Dev Raw Data Source >> Field Mapper >> Wiretap
"""
pipeline_builder = sdc_builder.get_pipeline_builder()
dev_raw_data_source = pipeline_builder.add_stage('Dev Raw Data Source')
dev_raw_data_source.data_format = 'JSON'
dev_raw_data_source.json_content = 'MULTIPLE_OBJECTS'
dev_raw_data_source.raw_data = json.dumps(stage_attributes['data'])
dev_raw_data_source.stop_after_first_batch = True
mapper = pipeline_builder.add_stage('Field Mapper', type='processor')
mapper.operate_on = 'FIELD_PATHS'
mapper.mapping_expression = stage_attributes['expression']
mapper.aggregation_expression = stage_attributes['aggregation_expression']
wiretap = pipeline_builder.add_wiretap()
dev_raw_data_source >> mapper >> wiretap.destination
pipeline = pipeline_builder.build()
sdc_executor.add_pipeline(pipeline)
sdc_executor.start_pipeline(pipeline, wait=False).wait_for_status(
status='FINISHED' if stage_attributes['record_error'] is None or stage_attributes['record_error'] else 'START_ERROR',
ignore_errors=True
)
if stage_attributes['record_error'] is None:
assert [record.field for record in wiretap.output_records] == [{
stage_attributes['data']['value']: stage_attributes['data']['value']
}]
elif stage_attributes['record_error']:
assert len(wiretap.output_records) == 0
assert (len(wiretap.error_records) == 1
and stage_attributes['error'] in wiretap.error_records[0].header['errorMessage'])
else:
status = sdc_executor.get_pipeline_status(pipeline).response.json()
assert stage_attributes['error'] in status['message']
@stub
@pytest.mark.parametrize('stage_attributes', [{'append_list_values': False, 'operate_on': 'FIELD_NAMES'},
{'append_list_values': True, 'operate_on': 'FIELD_NAMES'},
{'append_list_values': False, 'operate_on': 'FIELD_PATHS'},
{'append_list_values': True, 'operate_on': 'FIELD_PATHS'}])
def test_append_list_values(sdc_builder, sdc_executor, stage_attributes):
pass
@pytest.mark.parametrize('stage_attributes', [
{'record_error': None, 'error': None, 'data': {"value": "new_field_name"}, 'conditional_expression': '${f:name() == "value"}', 'expression': '/${record:value("/value")}', 'operate_on': 'FIELD_PATHS'},
{'record_error': True, 'error': 'EXPRESSION_EVALUATION_FAILURE_03', 'data': {"value": True}, 'conditional_expression': '${0/record:value("/value")}', 'expression': '/${record:value("/value")}', 'operate_on': 'FIELD_PATHS'},
{'record_error': False, 'error': 'INVALID_EXPRESSION_04', 'data': {"value": "new_field_name"}, 'conditional_expression': '${f:name() == "value"', 'expression': '/${record:value("/value")}', 'operate_on': 'FIELD_PATHS'},
{'record_error': None, 'error': None, 'data': {"value": "new_field_name"}, 'conditional_expression': '${f:name() == "value"}', 'expression': '${record:value("/value")}', 'operate_on': 'FIELD_NAMES'},
{'record_error': True, 'error': 'EXPRESSION_EVALUATION_FAILURE_03', 'data': {"value": True}, 'conditional_expression': '${0/record:value("/value")}', 'expression': '${record:value("/value")}', 'operate_on': 'FIELD_NAMES'},
{'record_error': False, 'error': 'INVALID_EXPRESSION_04', 'data': {"value": "new_field_name"}, 'conditional_expression': '${f:name() == "value"', 'expression': '${record:value("/value")}', 'operate_on': 'FIELD_NAMES'},
{'record_error': None, 'error': None, 'data': {"value": "new_field_name"}, 'conditional_expression': '${f:name() == "value"}', 'expression': '${record:value("/value")}', 'operate_on': 'FIELD_VALUES'},
{'record_error': True, 'error': 'EXPRESSION_EVALUATION_FAILURE_03', 'data': {"value": True}, 'conditional_expression': '${0/record:value("/value")}', 'expression': '${record:value("/value")}', 'operate_on': 'FIELD_VALUES'},
{'record_error': False, 'error': 'INVALID_EXPRESSION_04', 'data': {"value": "new_field_name"}, 'conditional_expression': '${f:name() == "value"', 'expression': '${record:value("/value")}', 'operate_on': 'FIELD_VALUES'}
])
@sdc_min_version('3.8.0')
def test_conditional_expression(sdc_builder, sdc_executor, stage_attributes):
"""
Here we want to test that
1. A record error should be added if a conditional_expression cannot be evaluated.
2. If a conditional_expression is invalid the pipeline should be aborted.
3. If a conditional_expression evaluates to a valid value a corresponding field should be modified as needed.
The pipeline is as follows:
Dev Raw Data Source >> Field Mapper >> Wiretap
"""
pipeline_builder = sdc_builder.get_pipeline_builder()
dev_raw_data_source = pipeline_builder.add_stage('Dev Raw Data Source')
dev_raw_data_source.data_format = 'JSON'
dev_raw_data_source.json_content = 'MULTIPLE_OBJECTS'
dev_raw_data_source.raw_data = json.dumps(stage_attributes['data'])
dev_raw_data_source.stop_after_first_batch = True
mapper = pipeline_builder.add_stage('Field Mapper', type='processor')
mapper.operate_on = stage_attributes['operate_on']
mapper.mapping_expression = stage_attributes['expression']
mapper.conditional_expression = stage_attributes['conditional_expression']
mapper.aggregation_expression = ''
wiretap = pipeline_builder.add_wiretap()
dev_raw_data_source >> mapper >> wiretap.destination
pipeline = pipeline_builder.build()
sdc_executor.add_pipeline(pipeline)
sdc_executor.start_pipeline(pipeline, wait=False).wait_for_status(
status='FINISHED' if stage_attributes['record_error'] is None or stage_attributes['record_error'] else 'START_ERROR',
ignore_errors=True
)
if stage_attributes['record_error'] is None:
assert [record.field for record in wiretap.output_records] == [{
"value" if stage_attributes['operate_on'] == 'FIELD_VALUES' else stage_attributes['data']['value']: stage_attributes['data']['value']
}]
elif stage_attributes['record_error']:
assert len(wiretap.output_records) == 0
assert (len(wiretap.error_records) == 1
and stage_attributes['error'] in wiretap.error_records[0].header['errorMessage'])
else:
status = sdc_executor.get_pipeline_status(pipeline).response.json()
assert stage_attributes['error'] in status['message']
@stub
@pytest.mark.parametrize('stage_attributes', [{'maintain_original_paths': False, 'operate_on': 'FIELD_NAMES'},
{'maintain_original_paths': True, 'operate_on': 'FIELD_NAMES'},
{'maintain_original_paths': False, 'operate_on': 'FIELD_PATHS'},
{'maintain_original_paths': True, 'operate_on': 'FIELD_PATHS'}])
def test_maintain_original_paths(sdc_builder, sdc_executor, stage_attributes):
pass
@pytest.mark.parametrize('stage_attributes', [
{'record_error': None, 'error': None, 'data': {"value": "new_field_name(some"}, 'conditional_expression': '${f:name() == "value"}', 'expression': '/${record:value("/value")}', 'operate_on': 'FIELD_PATHS'},
{'record_error': True, 'error': 'INVALID_EVALUATED_FIELD_PATH_02', 'data': {"value": "new_field_name(\'some"}, 'conditional_expression': '${f:name() == "value"}', 'expression': '/${record:value("/value")}', 'operate_on': 'FIELD_PATHS'},
{'record_error': True, 'error': 'EXPRESSION_EVALUATION_FAILURE_03', 'data': {"value": True}, 'conditional_expression': '${f:name() == "value"}', 'expression': '/${0/record:value("/value")}', 'operate_on': 'FIELD_PATHS'},
{'record_error': False, 'error': 'INVALID_EXPRESSION_04', 'data': {"value": "value"}, 'conditional_expression': '${f:name() == "value"', 'expression': '/${record:value("/value")}', 'operate_on': 'FIELD_PATHS'},
{'record_error': None, 'error': None, 'data': {"value": "new_field_name(\'some"}, 'conditional_expression': '${f:name() == "value"}', 'expression': '${record:value("/value")}', 'operate_on': 'FIELD_NAMES'},
{'record_error': True, 'error': 'EXPRESSION_EVALUATION_FAILURE_03', 'data': {"value": True}, 'conditional_expression': '${f:name() == "value"}', 'expression': '${0/record:value("/value")}', 'operate_on': 'FIELD_NAMES'},
{'record_error': False, 'error': 'INVALID_EXPRESSION_04', 'data': {"value": "value"}, 'conditional_expression': '${f:name() == "value"}', 'expression': '${record:value("/value")', 'operate_on': 'FIELD_NAMES'},
{'record_error': None, 'error': None, 'data': {"value": "new_field_name(\'some"}, 'conditional_expression': '${f:name() == "value"}', 'expression': '${record:value("/value")}', 'operate_on': 'FIELD_VALUES'},
{'record_error': True, 'error': 'EXPRESSION_EVALUATION_FAILURE_03', 'data': {"value": True}, 'conditional_expression': '${f:name() == "value"}', 'expression': '${0/record:value("/value")}', 'operate_on': 'FIELD_VALUES'},
{'record_error': False, 'error': 'INVALID_EXPRESSION_04', 'data': {"value": "value"}, 'conditional_expression': '${f:name() == "value"}', 'expression': '${record:value("/value")', 'operate_on': 'FIELD_VALUES'}
])
@sdc_min_version('3.8.0')
def test_mapping_expression(sdc_builder, sdc_executor, stage_attributes):
"""
Here we want to test that
1. if mapping_expression evaluates to an invalid field name, e.g. a field name containing a quote,
a record error is added instead of a stage error as it was before.
2. A record error should also be added if a mapping_expression cannot be evaluated.
3. If a mapping_expression is invalid the pipeline should be aborted.
4. If a mapping_expression evaluates to a valid field name or value the field is modified as needed.
Please notice, cases 1 is applicable only for the FIELD_PATHS operation mode.
The pipeline is as follows:
Dev Raw Data Source >> Field Mapper >> Wiretap
"""
pipeline_builder = sdc_builder.get_pipeline_builder()
dev_raw_data_source = pipeline_builder.add_stage('Dev Raw Data Source')
dev_raw_data_source.data_format = 'JSON'
dev_raw_data_source.json_content = 'MULTIPLE_OBJECTS'
dev_raw_data_source.raw_data = json.dumps(stage_attributes['data'])
dev_raw_data_source.stop_after_first_batch = True
mapper = pipeline_builder.add_stage('Field Mapper', type='processor')
mapper.operate_on = stage_attributes['operate_on']
mapper.mapping_expression = stage_attributes['expression']
mapper.conditional_expression = stage_attributes['conditional_expression']
mapper.aggregation_expression = ''
wiretap = pipeline_builder.add_wiretap()
dev_raw_data_source >> mapper >> wiretap.destination
pipeline = pipeline_builder.build()
sdc_executor.add_pipeline(pipeline)
sdc_executor.start_pipeline(pipeline, wait=False).wait_for_status(
status='FINISHED' if stage_attributes['record_error'] is None or stage_attributes['record_error'] else 'START_ERROR',
ignore_errors=True
)
if stage_attributes['record_error'] is None:
assert [record.field for record in wiretap.output_records] == [{
"value" if stage_attributes['operate_on'] == 'FIELD_VALUES' else stage_attributes['data']['value']: stage_attributes['data']['value']
}]
elif stage_attributes['record_error']:
assert len(wiretap.output_records) == 0
assert (len(wiretap.error_records) == 1
and stage_attributes['error'] in wiretap.error_records[0].header['errorMessage'])
else:
status = sdc_executor.get_pipeline_status(pipeline).response.json()
assert stage_attributes['error'] in status['message']
@stub
@pytest.mark.parametrize('stage_attributes', [{'on_record_error': 'DISCARD'},
{'on_record_error': 'STOP_PIPELINE'},
{'on_record_error': 'TO_ERROR'}])
def test_on_record_error(sdc_builder, sdc_executor, stage_attributes):
pass
@stub
@pytest.mark.parametrize('stage_attributes', [{'operate_on': 'FIELD_NAMES'},
{'operate_on': 'FIELD_PATHS'},
{'operate_on': 'FIELD_VALUES'}])
def test_operate_on(sdc_builder, sdc_executor, stage_attributes):
pass
@stub
def test_preconditions(sdc_builder, sdc_executor):
pass
@stub
def test_required_fields(sdc_builder, sdc_executor):
pass
@stub
@pytest.mark.parametrize('stage_attributes', [{'operate_on': 'FIELD_NAMES', 'structure_change_allowed': False},
{'operate_on': 'FIELD_NAMES', 'structure_change_allowed': True},
{'operate_on': 'FIELD_PATHS', 'structure_change_allowed': False},
{'operate_on': 'FIELD_PATHS', 'structure_change_allowed': True}])
def test_structure_change_allowed(sdc_builder, sdc_executor, stage_attributes):
pass
| 58.606299 | 240 | 0.681311 | 1,785 | 14,886 | 5.398319 | 0.107563 | 0.082503 | 0.053757 | 0.039851 | 0.855023 | 0.850768 | 0.842881 | 0.775322 | 0.759963 | 0.754462 | 0 | 0.006043 | 0.166264 | 14,886 | 253 | 241 | 58.837945 | 0.770365 | 0.127368 | 0 | 0.638554 | 0 | 0 | 0.376001 | 0.145812 | 0 | 0 | 0 | 0 | 0.072289 | 1 | 0.060241 | false | 0.042169 | 0.024096 | 0 | 0.084337 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7a1833103fae019cbd10eb4da8ae09b93a5f475f | 150 | py | Python | src/onos_grpc_demo/core/__init__.py | tjian123/onos_grpc_demo | 84b3f074034e118fd7d39a346e00aa1b4cf89819 | [
"Apache-2.0"
] | null | null | null | src/onos_grpc_demo/core/__init__.py | tjian123/onos_grpc_demo | 84b3f074034e118fd7d39a346e00aa1b4cf89819 | [
"Apache-2.0"
] | null | null | null | src/onos_grpc_demo/core/__init__.py | tjian123/onos_grpc_demo | 84b3f074034e118fd7d39a346e00aa1b4cf89819 | [
"Apache-2.0"
] | null | null | null | """ Core implementation package.
"""
from __future__ import absolute_import
from ._logger import *
from ._config import *
from ._connector import *
| 16.666667 | 38 | 0.76 | 17 | 150 | 6.235294 | 0.588235 | 0.283019 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153333 | 150 | 8 | 39 | 18.75 | 0.834646 | 0.186667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7a1eeea60fa2d246b397b1c366e68d0078f37582 | 14,285 | py | Python | project/systems/Mixture_of_gaussians/latent_probability_models.py | BalintHompot/uncertainty | 544c6c5cf22464d69316a31f97fc87355cd10b7e | [
"Apache-2.0"
] | null | null | null | project/systems/Mixture_of_gaussians/latent_probability_models.py | BalintHompot/uncertainty | 544c6c5cf22464d69316a31f97fc87355cd10b7e | [
"Apache-2.0"
] | null | null | null | project/systems/Mixture_of_gaussians/latent_probability_models.py | BalintHompot/uncertainty | 544c6c5cf22464d69316a31f97fc87355cd10b7e | [
"Apache-2.0"
] | null | null | null | import torch
import torch.nn as nn
from torch.distributions import MultivariateNormal, Normal as Normal_distr
import random
from torch.nn import functional as F
class Normal(nn.Module):
def __init__(self, parameters, latent_dim, encoder_out_dim):
### we need to define the device because of lightning issues
super().__init__()
self.device = 'cuda'
self.p_mu, self.p_std = parameters
self.num_params = len(parameters)
self.latent_dim = latent_dim
self.p = self.define_prior()
self.fc_mu = nn.Linear(encoder_out_dim, self.latent_dim).to(self.device)
self.fc_var = nn.Linear(encoder_out_dim, self.latent_dim).to(self.device)
self.latent_param_network = [self.fc_mu, self.fc_var]
def define_prior(self):
return Normal_distr(torch.full([self.latent_dim], self.p_mu).float().to(self.device),
torch.full([self.latent_dim], self.p_std ).float().to(self.device))
def sample_prior(self, num_samples):
return self.p.sample_n(num_samples)
def q(self, latent_params):
mu, std = latent_params
return Normal_distr(mu, std)
def get_z(self, q):
return q.rsample()
def get_param_network(self, encoder_out_len):
fc_mu = nn.Linear(encoder_out_len, self.latent_dim).to(self.device)
fc_var = nn.Linear(encoder_out_len, self.latent_dim).to(self.device)
return [fc_mu, fc_var]
def get_latent_params(self, encoder_out):
mu = self.latent_param_network[0](encoder_out)
log_var = self.latent_param_network[1](encoder_out)
std = torch.exp(torch.sigmoid(log_var))
return [mu, std]
def sample_q(self, q):
return q.rsample()
def KL(self, p, q, z):
kl = torch.distributions.kl.kl_divergence(q,p)
return kl.mean()
class Isotropic_Normal(Normal):
def __init__(self, latent_dim, encoder_out_dim):
super(Isotropic_Normal, self).__init__([0,1], latent_dim, encoder_out_dim)
class Isotropic_Normal_With_Mean(Normal):
def __init__(self, mean, latent_dim, encoder_out_dim):
super(Isotropic_Normal, self).__init__([mean,1], latent_dim, encoder_out_dim)
class Expert_Feature_Pretrained_Normal(Normal):
### when pretrained for the expert features, the means are just identity functions
def __init__(self, latent_dim, encoder_out_dim):
super(Expert_Feature_Pretrained_Normal, self).__init__([0,1], latent_dim, encoder_out_dim)
def get_param_network(self, encoder_out_len):
id_mu = nn.Identity().to(self.device)
fc_var = nn.Linear(encoder_out_len, self.latent_dim).to(self.device)
return [id_mu, fc_var]
class LogNormal(Normal):
def __init__(self, latent_dim, encoder_out_dim):
super(LogNormal, self).__init__([0,1], latent_dim, encoder_out_dim)
def define_prior(self):
return torch.distributions.LogNormal(torch.full([self.latent_dim], self.p_mu).float().to(self.device),
torch.full([self.latent_dim], self.p_std ).float().to(self.device))
def q(self, latent_params):
mu, std = latent_params
return torch.distributions.LogNormal(mu, std)
class Mixture_of_Gaussians(Normal):
## 4 gaussians for the 4 triage cat
def __init__(self, latent_dim, encoder_out_dim):
super(Mixture_of_Gaussians, self).__init__([0,1], latent_dim, encoder_out_dim)
def define_prior(self):
return [
Normal_distr(torch.full([self.latent_dim], -4).float().to(self.device),
torch.full([self.latent_dim], 1 ).float().to(self.device)),
Normal_distr(torch.full([self.latent_dim], -2).float().to(self.device),
torch.full([self.latent_dim], 1 ).float().to(self.device)),
Normal_distr(torch.full([self.latent_dim], 0).float().to(self.device),
torch.full([self.latent_dim], 1 ).float().to(self.device)),
Normal_distr(torch.full([self.latent_dim], 2).float().to(self.device),
torch.full([self.latent_dim], 1 ).float().to(self.device)),
Normal_distr(torch.full([self.latent_dim], 4).float().to(self.device),
torch.full([self.latent_dim], 1 ).float().to(self.device)),
Normal_distr(torch.full([self.latent_dim], -6).float().to(self.device),
torch.full([self.latent_dim], 1 ).float().to(self.device))
]
def KL(self, p, q, z, triage_mask, print_acc = False):
'''
triage_cats = [1,2,3,4]
mc_sample_num = 1024
samples = q.sample([mc_sample_num])
probs_prior = torch.stack([p[triage_cat-1].log_prob(samples) for triage_cat in triage_cats],0)
latent_weights = q.log_prob(samples)
probs = torch.sum(probs_prior + latent_weights,(1,3))
probs_std = torch.std(probs_prior + latent_weights ,(1))
focal = probs_std
triage_labels = torch.sum(triage_mask, 2)
triage_labels = torch.max(triage_labels, 0)[1]
max_probs = torch.max(probs, 0)[1]
acc = max_probs == triage_labels
if print_acc:
print("train acc: ", torch.sum(acc)/acc.shape[0])
'''
kl_all = torch.stack([torch.distributions.kl.kl_divergence(q,triage_p) for triage_p in self.p], 0)
#kl_all *= focal
##kl = kl_all * (kl_all>0.05)
kl = kl_all * triage_mask
##kl_large_enough = kl > 0.5
##kl *= kl_large_enough
return kl.mean(), kl_all ##, torch.sum(acc)/acc.shape[0]
def KL_all_priors(self, p, q):
kl_all = torch.stack([torch.distributions.kl.kl_divergence(q,triage_p) for triage_p in self.p], 0)
return torch.sum(kl_all,2)
def sample_prior(self, num_samples):
gaussian = random.choice(self.p)
return gaussian.sample_n(num_samples)
class Mixture_of_Gaussians_with_common_part(Normal):
## 4 gaussians for the 4 triage cat
def __init__(self, latent_dim, encoder_out_dim):
super(Mixture_of_Gaussians_with_common_part, self).__init__([0,1], latent_dim, encoder_out_dim)
def define_prior(self):
self.specific_dimensions = 10
common_part = Normal_distr(torch.full([self.latent_dim - self.specific_dimensions], self.specific_dimensions).float().to(self.device),
torch.full([self.latent_dim - self.specific_dimensions], 1 ).float().to(self.device))
class_specific_part = [
Normal_distr(torch.full([self.specific_dimensions], -12).float().to(self.device),
torch.full([self.specific_dimensions], 1 ).float().to(self.device)),
Normal_distr(torch.full([self.specific_dimensions], -8).float().to(self.device),
torch.full([self.specific_dimensions], 1 ).float().to(self.device)),
Normal_distr(torch.full([self.specific_dimensions], -4).float().to(self.device),
torch.full([self.specific_dimensions], 1 ).float().to(self.device)),
Normal_distr(torch.full([self.specific_dimensions], 0).float().to(self.device),
torch.full([self.specific_dimensions], 1 ).float().to(self.device)),
Normal_distr(torch.full([self.specific_dimensions], 4).float().to(self.device),
torch.full([self.specific_dimensions], 1 ).float().to(self.device)),
Normal_distr(torch.full([self.specific_dimensions], -14).float().to(self.device),
torch.full([self.specific_dimensions], 1 ).float().to(self.device))
]
self.p_common = common_part
return class_specific_part
def q(self, latent_params):
mu, std = latent_params
common_part = Normal_distr(mu[:,:-self.specific_dimensions].float().to(self.device),std[:,:-self.specific_dimensions].float().to(self.device))
specific_part = Normal_distr(mu[:,-self.specific_dimensions:].float().to(self.device),
std[:,-self.specific_dimensions:].float().to(self.device))
self.q_common = common_part
return specific_part
def KL(self, p, q, z, triage_mask, print_acc = False):
'''
triage_cats = [1,2,3,4,5,6]
mc_sample_num = 1024
samples = q.sample([mc_sample_num])
probs_prior = torch.stack([p[triage_cat-1].log_prob(samples) for triage_cat in triage_cats],0)
latent_weights = q.log_prob(samples)
probs = torch.sum(probs_prior + latent_weights,(1,3))
probs = probs / torch.min(probs)
#print(probs)
#print(probs.shape)
#probs_std = torch.std(probs_prior + latent_weights ,(1))
#focal = 1 - full_prob_mass
triage_labels = torch.sum(triage_mask, 2)
triage_labels = torch.max(triage_labels, 0)[1]
#print(triage_labels.shape)
max_probs = torch.max(probs, 0)[1]
acc = max_probs == triage_labels
if print_acc:
print("train acc: ", torch.sum(acc)/acc.shape[0])
'''
kl_all = torch.stack([torch.distributions.kl.kl_divergence(q,triage_p) for triage_p in self.p], 0)
kl_common = torch.distributions.kl.kl_divergence(self.q_common,self.p_common)
#kl_common = F.cross_entropy(probs.T, triage_labels)
##kl = kl_all * (kl_all>0.05)
kl = kl_all * triage_mask[:,:,-self.specific_dimensions:]
##kl_large_enough = kl > 0.5
##kl *= kl_large_enough
#kl = torch.sum(kl, (0,2)) * focal.detach()
return kl.mean() + kl_common.mean(), kl_all ##, torch.sum(acc)/acc.shape[0]
def get_z(self, q):
common_z = self.q_common.rsample()
specific = q.rsample()
return torch.cat((common_z, specific), 1)
def KL_all_priors(self, p, q):
kl_all = torch.stack([torch.distributions.kl.kl_divergence(q,triage_p) for triage_p in self.p], 0)
return torch.sum(kl_all,2)
def sample_prior(self, num_samples):
gaussian = random.choice(self.p)
return torch.cat((self.p_common.sample_n(num_samples),gaussian.sample_n(num_samples)), 1)
class Mixture_of_Gaussians_with_regularization(Normal):
## 4 gaussians for the 4 triage cat
def __init__(self, latent_dim, encoder_out_dim):
super(Mixture_of_Gaussians_with_regularization, self).__init__([0,1], latent_dim, encoder_out_dim)
def define_prior(self):
self.uniforms = [
Normal_distr(torch.full([self.latent_dim], -4).float().to(self.device),
torch.full([self.latent_dim], 10 ).float().to(self.device)),
Normal_distr(torch.full([self.latent_dim], -2).float().to(self.device),
torch.full([self.latent_dim], 10 ).float().to(self.device)),
Normal_distr(torch.full([self.latent_dim], 0).float().to(self.device),
torch.full([self.latent_dim], 10 ).float().to(self.device)),
Normal_distr(torch.full([self.latent_dim], 2).float().to(self.device),
torch.full([self.latent_dim], 10 ).float().to(self.device))
]
return [
Normal_distr(torch.full([self.latent_dim], -4).float().to(self.device),
torch.full([self.latent_dim], 1 ).float().to(self.device)),
Normal_distr(torch.full([self.latent_dim], -2).float().to(self.device),
torch.full([self.latent_dim], 1 ).float().to(self.device)),
Normal_distr(torch.full([self.latent_dim], 0).float().to(self.device),
torch.full([self.latent_dim], 1 ).float().to(self.device)),
Normal_distr(torch.full([self.latent_dim], 2).float().to(self.device),
torch.full([self.latent_dim], 1 ).float().to(self.device))
]
def KL(self, p, q, z, triage_mask, print_acc = False):
triage_cats = [0,1,2,3,4,5]
'''
mc_sample_num = 1024
samples = q.sample([mc_sample_num])
probs_prior = torch.stack([p[triage_cat-1].log_prob(samples) for triage_cat in triage_cats],0)
latent_weights = q.log_prob(samples)
probs = torch.sum(probs_prior + latent_weights,(1,3))
probs_std = torch.std(probs_prior + latent_weights ,(1))
focal = probs_std
triage_labels = torch.sum(triage_mask, 2)
triage_labels = torch.max(triage_labels, 0)[1]
max_probs = torch.max(probs, 0)[1]
acc = max_probs == triage_labels
if print_acc:
print("train acc: ", torch.sum(acc)/acc.shape[0])
'''
kl_all = torch.stack([torch.distributions.kl.kl_divergence(q,triage_p) for triage_p in self.p], 0)
kl_uniforms = torch.stack([torch.distributions.kl.kl_divergence(q,triage_p) for triage_p in self.uniforms], 0)
#kl_all *= focal
##kl = kl_all * (kl_all>0.05)
kl = kl_all * triage_mask + ((kl_uniforms * (triage_mask == 0)) / 4)
##kl_large_enough = kl > 0.5
##kl *= kl_large_enough
return kl.mean(), kl_all ##, torch.sum(acc)/acc.shape[0]
def KL_all_priors(self, p, q):
kl_all = torch.stack([torch.distributions.kl.kl_divergence(q,triage_p) for triage_p in self.p], 0)
return torch.sum(kl_all,2)
def sample_prior(self, num_samples):
gaussian = random.choice(self.p)
return gaussian.sample_n(num_samples) | 44.089506 | 150 | 0.59363 | 1,894 | 14,285 | 4.217001 | 0.068638 | 0.071366 | 0.084137 | 0.106423 | 0.822086 | 0.800927 | 0.779517 | 0.767372 | 0.747465 | 0.738074 | 0 | 0.016032 | 0.275184 | 14,285 | 324 | 151 | 44.089506 | 0.75536 | 0.135037 | 0 | 0.497076 | 0 | 0 | 0.000353 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.192982 | false | 0 | 0.02924 | 0.035088 | 0.415205 | 0.017544 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e138d37bb6b75e0993725d53bc908d039f2fc60c | 8,025 | py | Python | tests/core_tests/ensemble_tests.py | jesse-toftum/cash_ml | 316121a41359f8d18358c17f9be2ab90ad69bcb2 | [
"MIT"
] | 4 | 2018-12-05T14:46:31.000Z | 2019-07-03T12:39:39.000Z | tests/core_tests/ensemble_tests.py | jesse-toftum/cash_ml | 316121a41359f8d18358c17f9be2ab90ad69bcb2 | [
"MIT"
] | 4 | 2018-12-16T18:16:26.000Z | 2019-01-11T00:10:02.000Z | tests/core_tests/ensemble_tests.py | jesse-toftum/cash_ml | 316121a41359f8d18358c17f9be2ab90ad69bcb2 | [
"MIT"
] | 3 | 2018-12-05T14:40:13.000Z | 2019-11-17T00:40:15.000Z | import datetime
import os
import random
import sys
sys.path = [os.path.abspath(os.path.dirname(__file__))] + sys.path
sys.path = [os.path.abspath(os.path.dirname(os.path.dirname(__file__)))] + sys.path
os.environ['is_test_suite'] = 'True'
from cash_ml import Predictor
from cash_ml.utils_models import load_ml_model
from nose.tools import assert_equal, assert_not_equal, with_setup
from sklearn.metrics import accuracy_score
import dill
import numpy as np
import tests.utils_testing as utils
def ensemble_classifier_basic_test(model_name=None):
np.random.seed(0)
df_titanic_train, df_titanic_test = utils.get_titanic_binary_classification_dataset()
column_descriptions = {
'survived': 'output',
'sex': 'categorical',
'embarked': 'categorical',
'pclass': 'categorical'
}
ensemble_config = [{'model_name': 'LGBMClassifier'}, {'model_name': 'RandomForestClassifier'}]
ml_predictor = Predictor(
type_of_estimator='classifier', column_descriptions=column_descriptions)
ml_predictor.train(df_titanic_train, ensemble_config=ensemble_config)
test_score = ml_predictor.score(df_titanic_test, df_titanic_test.survived)
print('test_score')
print(test_score)
assert -0.15 < test_score < -0.131
def ensemble_regressor_basic_test():
np.random.seed(0)
df_boston_train, df_boston_test = utils.get_boston_regression_dataset()
column_descriptions = {'MEDV': 'output', 'CHAS': 'categorical'}
ensemble_config = [{'model_name': 'LGBMRegressor'}, {'model_name': 'RandomForestRegressor'}]
ml_predictor = Predictor(type_of_estimator='regressor', column_descriptions=column_descriptions)
ml_predictor.train(df_boston_train, ensemble_config=None)
test_score = ml_predictor.score(df_boston_test, df_boston_test.MEDV)
print('test_score')
print(test_score)
assert -3.0 < test_score < -2.8
# TODO: test for warning when passing in ensemble_method!="average" and is_classifier
# TODO: make sure this works for single predictions and batch
def getting_single_predictions_classifier_test():
np.random.seed(0)
df_titanic_train, df_titanic_test = utils.get_titanic_binary_classification_dataset()
column_descriptions = {
'survived': 'output',
'sex': 'categorical',
'embarked': 'categorical',
'pclass': 'categorical',
'age_bucket': 'categorical'
}
ensemble_config = [{'model_name': 'LGBMClassifier'}, {'model_name': 'RandomForestClassifier'}]
ml_predictor = Predictor(
type_of_estimator='classifier', column_descriptions=column_descriptions)
ml_predictor.train(df_titanic_train, ensemble_config=ensemble_config)
file_name = ml_predictor.save(str(random.random()))
saved_ml_pipeline = load_ml_model(file_name)
os.remove(file_name)
try:
keras_file_name = file_name[:-5] + '_keras_deep_learning_model.h5'
os.remove(keras_file_name)
except:
pass
df_titanic_test_dictionaries = df_titanic_test.to_dict('records')
# 1. make sure the accuracy is the same
predictions = []
for row in df_titanic_test_dictionaries:
predictions.append(saved_ml_pipeline.predict_proba(row)[1])
print('predictions')
print(predictions)
first_score = utils.calculate_brier_score_loss(df_titanic_test.survived, predictions)
print('first_score')
print(first_score)
# Make sure our score is good, but not unreasonably good
lower_bound = -0.16
assert -0.15 < first_score < -0.135
# 2. make sure the speed is reasonable (do it a few extra times)
data_length = len(df_titanic_test_dictionaries)
start_time = datetime.datetime.now()
for idx in range(1000):
row_num = idx % data_length
saved_ml_pipeline.predict(df_titanic_test_dictionaries[row_num])
end_time = datetime.datetime.now()
duration = end_time - start_time
print('duration.total_seconds()')
print(duration.total_seconds())
# It's very difficult to set a benchmark for speed that will work across all machines.
# On my 2013 bottom of the line 15" MacBook Pro, this runs in about 0.8 seconds for 1000 predictions
# That's about 1 millisecond per prediction
# Assuming we might be running on a test box that's pretty weak, multiply by 3
# Also make sure we're not running unreasonably quickly
assert 0.2 < duration.total_seconds() < 60
# 3. make sure we're not modifying the dictionaries (the score is the same after running a few experiments as it is the first time)
predictions = []
for row in df_titanic_test_dictionaries:
predictions.append(saved_ml_pipeline.predict_proba(row)[1])
print('predictions')
print(predictions)
print('df_titanic_test_dictionaries')
print(df_titanic_test_dictionaries)
second_score = utils.calculate_brier_score_loss(df_titanic_test.survived, predictions)
print('second_score')
print(second_score)
# Make sure our score is good, but not unreasonably good
assert -0.15 < second_score < -0.135
def getting_single_predictions_regressor_test():
np.random.seed(0)
df_boston_train, df_boston_test = utils.get_boston_regression_dataset()
column_descriptions = {'MEDV': 'output', 'CHAS': 'categorical'}
ensemble_config = [{'model_name': 'LGBMRegressor'}, {'model_name': 'RandomForestRegressor'}]
ml_predictor = Predictor(type_of_estimator='regressor', column_descriptions=column_descriptions)
# NOTE: this is bad practice to pass in our same training set as our fl_data set, but we don't have enough data to do it any other way
ml_predictor.train(df_boston_train, ensemble_config=ensemble_config)
test_score = ml_predictor.score(df_boston_test, df_boston_test.MEDV)
print('test_score')
print(test_score)
assert -3.5 < test_score < -2.8
file_name = ml_predictor.save(str(random.random()))
saved_ml_pipeline = load_ml_model(file_name)
os.remove(file_name)
try:
keras_file_name = file_name[:-5] + '_keras_deep_learning_model.h5'
os.remove(keras_file_name)
except:
pass
df_boston_test_dictionaries = df_boston_test.to_dict('records')
# 1. make sure the accuracy is the same
predictions = []
for row in df_boston_test_dictionaries:
predictions.append(saved_ml_pipeline.predict(row))
first_score = utils.calculate_rmse(df_boston_test.MEDV, predictions)
print('first_score')
print(first_score)
# Make sure our score is good, but not unreasonably good
lower_bound = -3.5
assert lower_bound < first_score < -2.8
# 2. make sure the speed is reasonable (do it a few extra times)
data_length = len(df_boston_test_dictionaries)
start_time = datetime.datetime.now()
for idx in range(1000):
row_num = idx % data_length
saved_ml_pipeline.predict(df_boston_test_dictionaries[row_num])
end_time = datetime.datetime.now()
duration = end_time - start_time
print('duration.total_seconds()')
print(duration.total_seconds())
# It's very difficult to set a benchmark for speed that will work across all machines.
# On my 2013 bottom of the line 15" MacBook Pro, this runs in about 0.8 seconds for 1000 predictions
# That's about 1 millisecond per prediction
# Assuming we might be running on a test box that's pretty weak, multiply by 3
# Also make sure we're not running unreasonably quickly
assert 0.2 < duration.total_seconds() / 1.0 < 60
# 3. make sure we're not modifying the dictionaries (the score is the same after running a few experiments as it is the first time)
predictions = []
for row in df_boston_test_dictionaries:
predictions.append(saved_ml_pipeline.predict(row))
second_score = utils.calculate_rmse(df_boston_test.MEDV, predictions)
print('second_score')
print(second_score)
# Make sure our score is good, but not unreasonably good
assert lower_bound < second_score < -2.8
| 33.577406 | 138 | 0.726231 | 1,115 | 8,025 | 4.957848 | 0.190135 | 0.029305 | 0.032923 | 0.031657 | 0.832308 | 0.821454 | 0.814399 | 0.808249 | 0.78419 | 0.78419 | 0 | 0.015265 | 0.183676 | 8,025 | 238 | 139 | 33.718487 | 0.828576 | 0.208349 | 0 | 0.652482 | 0 | 0 | 0.113762 | 0.034761 | 0 | 0 | 0 | 0.004202 | 0.070922 | 1 | 0.028369 | false | 0.014184 | 0.078014 | 0 | 0.106383 | 0.170213 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e14870c823426bec5c19896a54d5f2f6241984d2 | 11,111 | py | Python | tests/test_native_ops.py | XanaduAI/pennylane-cirq | cb6defaf448982769ae7e7bd5761d606d4d19fb6 | [
"Apache-2.0"
] | 17 | 2019-10-23T03:01:41.000Z | 2020-07-18T02:34:41.000Z | tests/test_native_ops.py | XanaduAI/pennylane-cirq | cb6defaf448982769ae7e7bd5761d606d4d19fb6 | [
"Apache-2.0"
] | 30 | 2019-10-17T16:58:15.000Z | 2020-07-10T19:43:40.000Z | tests/test_native_ops.py | XanaduAI/pennylane-cirq | cb6defaf448982769ae7e7bd5761d606d4d19fb6 | [
"Apache-2.0"
] | 3 | 2019-11-18T19:33:12.000Z | 2020-04-01T16:12:15.000Z | # Copyright 2018 Xanadu Quantum Technologies Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Tests for the native Cirq ops
"""
import pytest
import numpy as np
from pennylane_cirq import ops, MixedStateSimulatorDevice, SimulatorDevice
@pytest.fixture(scope="function")
def simulator_device_1_wire(shots):
"""Return a single wire instance of the MixedStateSimulatorDevice class."""
yield MixedStateSimulatorDevice(1, shots=shots)
@pytest.mark.parametrize("shots", [None])
class TestApply:
"""Tests that ops are correctly applied"""
@pytest.mark.parametrize(
"par,input,expected_density_matrix",
[
([0.0], [1, 0], np.array([[1, 0], [0, 0]])),
([0.5], [1, 0], np.array([[2, 0], [0, 1]]) / 3),
([1.0], [1, 0], np.array([[1, 0], [0, 2]]) / 3),
([0.0], [0, 1], np.array([[0, 0], [0, 1]])),
([0.5], [0, 1], np.array([[1, 0], [0, 2]]) / 3),
([1.0], [0, 1], np.array([[2, 0], [0, 1]]) / 3),
([0.0], np.array([1, 1]) / np.sqrt(2), np.array([[1, 1], [1, 1]]) / 2),
([0.5], np.array([1, 1]) / np.sqrt(2), np.array([[3, 1], [1, 3]]) / 6),
([1.0], np.array([1, 1]) / np.sqrt(2), np.array([[1, -1 / 3], [-1 / 3, 1]]) / 2),
([0.0], np.array([1, -1]) / np.sqrt(2), np.array([[1, -1], [-1, 1]]) / 2),
([0.5], np.array([1, -1]) / np.sqrt(2), np.array([[3, -1], [-1, 3]]) / 6),
([1.0], np.array([1, -1]) / np.sqrt(2), np.array([[1, 1 / 3], [1 / 3, 1]]) / 2),
],
)
def test_apply_depolarize_single_wire(
self, simulator_device_1_wire, tol, par, input, expected_density_matrix
):
"""Tests that applying a depolarizing operation yields the expected output state for single wire."""
simulator_device_1_wire.reset()
init_state = np.array(input, dtype=np.complex64)
simulator_device_1_wire._initial_state = simulator_device_1_wire._convert_to_density_matrix(init_state)
simulator_device_1_wire.apply([ops.Depolarize(*par, wires=[0])])
assert np.allclose(simulator_device_1_wire.state, expected_density_matrix, **tol)
@pytest.mark.parametrize(
"par,input,expected_density_matrix",
[
([0.0], [1, 0], np.array([[1, 0], [0, 0]])),
([0.5], [1, 0], np.array([[1, 0], [0, 1]]) / 2),
([1.0], [1, 0], np.array([[0, 0], [0, 1]])),
([0.0], [0, 1], np.array([[0, 0], [0, 1]])),
([0.5], [0, 1], np.array([[1, 0], [0, 1]]) / 2),
([1.0], [0, 1], np.array([[1, 0], [0, 0]])),
([0.0], np.array([1, 1]) / np.sqrt(2), np.array([[1, 1], [1, 1]]) / 2),
([0.5], np.array([1, 1]) / np.sqrt(2), np.array([[1, 1], [1, 1]]) / 2),
([1.0], np.array([1, 1]) / np.sqrt(2), np.array([[1, 1], [1, 1]]) / 2),
([0.0], np.array([1, -1]) / np.sqrt(2), np.array([[1, -1], [-1, 1]]) / 2),
([0.5], np.array([1, -1]) / np.sqrt(2), np.array([[1, -1], [-1, 1]]) / 2),
([1.0], np.array([1, -1]) / np.sqrt(2), np.array([[1, -1], [-1, 1]]) / 2),
],
)
def test_apply_bit_flip_single_wire(
self, simulator_device_1_wire, tol, par, input, expected_density_matrix
):
"""Tests that applying a bit flip operation yields the expected output state for single wire."""
simulator_device_1_wire.reset()
init_state = np.array(input, dtype=np.complex64)
simulator_device_1_wire._initial_state = simulator_device_1_wire._convert_to_density_matrix(init_state)
simulator_device_1_wire.apply([ops.BitFlip(*par, wires=[0])])
assert np.allclose(simulator_device_1_wire.state, expected_density_matrix, **tol)
@pytest.mark.parametrize(
"par,input,expected_density_matrix",
[
([0.0], [1, 0], np.array([[1, 0], [0, 0]])),
([0.5], [1, 0], np.array([[1, 0], [0, 0]])),
([1.0], [1, 0], np.array([[1, 0], [0, 0]])),
([0.0], [0, 1], np.array([[0, 0], [0, 1]])),
([0.5], [0, 1], np.array([[0, 0], [0, 1]])),
([1.0], [0, 1], np.array([[0, 0], [0, 1]])),
([0.0], np.array([1, 1]) / np.sqrt(2), np.array([[1, 1], [1, 1]]) / 2),
([0.5], np.array([1, 1]) / np.sqrt(2), np.array([[1, 0], [0, 1]]) / 2),
([1.0], np.array([1, 1]) / np.sqrt(2), np.array([[1, -1], [-1, 1]]) / 2),
([0.0], np.array([1, -1]) / np.sqrt(2), np.array([[1, -1], [-1, 1]]) / 2),
([0.5], np.array([1, -1]) / np.sqrt(2), np.array([[1, 0], [0, 1]]) / 2),
([1.0], np.array([1, -1]) / np.sqrt(2), np.array([[1, 1], [1, 1]]) / 2),
],
)
def test_apply_phase_flip_single_wire(
self, simulator_device_1_wire, tol, par, input, expected_density_matrix
):
"""Tests that applying a phase flip operation yields the expected output state for single wire."""
simulator_device_1_wire.reset()
init_state = np.array(input, dtype=np.complex64)
simulator_device_1_wire._initial_state = simulator_device_1_wire._convert_to_density_matrix(init_state)
simulator_device_1_wire.apply([ops.PhaseFlip(*par, wires=[0])])
assert np.allclose(simulator_device_1_wire.state, expected_density_matrix, **tol)
@pytest.mark.parametrize(
"par,input,expected_density_matrix",
[
([0.0], [1, 0], np.array([[1, 0], [0, 0]])),
([0.5], [1, 0], np.array([[1, 0], [0, 0]])),
([1.0], [1, 0], np.array([[1, 0], [0, 0]])),
([0.0], [0, 1], np.array([[0, 0], [0, 1]])),
([0.5], [0, 1], np.array([[0, 0], [0, 1]])),
([1.0], [0, 1], np.array([[0, 0], [0, 1]])),
([0.0], np.array([1, 1]) / np.sqrt(2), np.array([[1, 1], [1, 1]]) / 2),
(
[0.5],
np.array([1, 1]) / np.sqrt(2),
np.array([[1, np.sqrt(1 / 2)], [np.sqrt(1 / 2), 1]]) / 2,
),
([1.0], np.array([1, 1]) / np.sqrt(2), np.array([[1, 0], [0, 1]]) / 2),
([0.0], np.array([1, -1]) / np.sqrt(2), np.array([[1, -1], [-1, 1]]) / 2),
(
[0.5],
np.array([1, -1]) / np.sqrt(2),
np.array([[1, -np.sqrt(1 / 2)], [-np.sqrt(1 / 2), 1]]) / 2,
),
([1.0], np.array([1, -1]) / np.sqrt(2), np.array([[1, 0], [0, 1]]) / 2),
],
)
def test_apply_phase_damp_single_wire(
self, simulator_device_1_wire, tol, par, input, expected_density_matrix
):
"""Tests that applying a phase damping operation yields the expected output state for single wire."""
simulator_device_1_wire.reset()
init_state = np.array(input, dtype=np.complex64)
simulator_device_1_wire._initial_state = simulator_device_1_wire._convert_to_density_matrix(init_state)
simulator_device_1_wire.apply([ops.PhaseDamp(*par, wires=[0])])
assert np.allclose(simulator_device_1_wire.state, expected_density_matrix, **tol)
@pytest.mark.parametrize(
"par,input,expected_density_matrix",
[
([0.0], [1, 0], np.array([[1, 0], [0, 0]])),
([0.5], [1, 0], np.array([[1, 0], [0, 0]])),
([1.0], [1, 0], np.array([[1, 0], [0, 0]])),
([0.0], [0, 1], np.array([[0, 0], [0, 1]])),
([0.5], [0, 1], np.array([[1, 0], [0, 1]]) / 2),
([1.0], [0, 1], np.array([[1, 0], [0, 0]])),
([0.0], np.array([1, 1]) / np.sqrt(2), np.array([[1, 1], [1, 1]]) / 2),
(
[0.5],
np.array([1, 1]) / np.sqrt(2),
np.array([[3 / 2, np.sqrt(1 / 2)], [np.sqrt(1 / 2), 1 / 2]]) / 2,
),
([1.0], np.array([1, 1]) / np.sqrt(2), np.array([[1, 0], [0, 0]])),
([0.0], np.array([1, -1]) / np.sqrt(2), np.array([[1, -1], [-1, 1]]) / 2),
(
[0.5],
np.array([1, -1]) / np.sqrt(2),
np.array([[3 / 2, -np.sqrt(1 / 2)], [-np.sqrt(1 / 2), 1 / 2]]) / 2,
),
([1.0], np.array([1, -1]) / np.sqrt(2), np.array([[1, 0], [0, 0]])),
],
)
def test_apply_amplitude_damp_single_wire(
self, simulator_device_1_wire, tol, par, input, expected_density_matrix
):
"""Tests that applying an amplitude damping operation yields the expected output state for single wire."""
simulator_device_1_wire.reset()
init_state = np.array(input, dtype=np.complex64)
simulator_device_1_wire._initial_state = simulator_device_1_wire._convert_to_density_matrix(init_state)
simulator_device_1_wire.apply([ops.AmplitudeDamp(*par, wires=[0])])
assert np.allclose(simulator_device_1_wire.state, expected_density_matrix, **tol)
@pytest.mark.parametrize(
"input",
[
np.array([1, 0, 0, 0]),
np.array([2, 1, 0, 1]) / np.sqrt(6),
np.array([0, 0, 1, 0]),
np.array([0, 1, 0, 1]) / np.sqrt(2),
np.array([0, 0, 0, 1]),
np.array([2, 1, 2, 1]) / np.sqrt(10),
],
)
def test_apply_iswap(self, tol, input, shots):
"""Tests that applying the iSWAP gate yields the expected output."""
device = SimulatorDevice(2, shots=shots)
iswap_mat = np.array([[1, 0, 0, 0], [0, 0, 1j, 0], [0, 1j, 0, 0], [0, 0, 0, 1]])
expected = iswap_mat @ input
device.reset()
device._initial_state = np.array(input, dtype=np.complex64)
device.apply([ops.ISWAP(wires=[0, 1])])
assert np.allclose(device.state, expected, **tol)
@pytest.mark.parametrize("par", [0, 0.5, 1.42, np.pi / 4, np.pi / 2, np.pi])
@pytest.mark.parametrize(
"input",
[
np.array([1, 0, 0, 0]),
np.array([2, 1, 0, 1]) / np.sqrt(6),
np.array([0, 0, 1, 0]),
np.array([0, 1, 0, 1]) / np.sqrt(2),
np.array([0, 0, 0, 1]),
np.array([2, 1, 2, 1]) / np.sqrt(10),
],
)
def test_apply_cphase(self, tol, par, input, shots):
"""Tests that applying the CPhase gate yields the expected output."""
device = SimulatorDevice(2, shots=shots)
cphase_mat = np.array(
[[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, np.exp(1j * par)]]
)
expected = cphase_mat @ input
device.reset()
device._initial_state = np.array(input, dtype=np.complex64)
device.apply([ops.CPhase(par, wires=[0, 1])])
assert np.allclose(device.state, expected, **tol)
| 45.166667 | 114 | 0.510035 | 1,687 | 11,111 | 3.240664 | 0.088322 | 0.047924 | 0.114139 | 0.07902 | 0.811414 | 0.803366 | 0.786172 | 0.78233 | 0.775562 | 0.77227 | 0 | 0.088938 | 0.269373 | 11,111 | 245 | 115 | 45.35102 | 0.584504 | 0.117901 | 0 | 0.615789 | 0 | 0 | 0.019612 | 0.016942 | 0 | 0 | 0 | 0 | 0.036842 | 1 | 0.042105 | false | 0 | 0.015789 | 0 | 0.063158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e16f84ff2807698632508f7a10fd34e49cb2192e | 120 | py | Python | ourdb/modules/export/__init__.py | SnowyCoder/telegram-ourdb | b7d2f0dd426b238b1c305e9abfd222fe597031e9 | [
"MIT"
] | null | null | null | ourdb/modules/export/__init__.py | SnowyCoder/telegram-ourdb | b7d2f0dd426b238b1c305e9abfd222fe597031e9 | [
"MIT"
] | null | null | null | ourdb/modules/export/__init__.py | SnowyCoder/telegram-ourdb | b7d2f0dd426b238b1c305e9abfd222fe597031e9 | [
"MIT"
] | null | null | null | from modules.export import import_pack, export_pack
__handlers__ = import_pack.__handlers__ + export_pack.__handlers__
| 30 | 66 | 0.858333 | 15 | 120 | 5.8 | 0.4 | 0.413793 | 0.413793 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091667 | 120 | 3 | 67 | 40 | 0.798165 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bedd672d72d1802890d3ae96b1348e23d9989116 | 6,258 | py | Python | services/frontend/tests/job_sub.py | elijahc/lyra | c80c633389ceee0b2124a0a4365f405e5d7c2d75 | [
"Apache-2.0"
] | null | null | null | services/frontend/tests/job_sub.py | elijahc/lyra | c80c633389ceee0b2124a0a4365f405e5d7c2d75 | [
"Apache-2.0"
] | null | null | null | services/frontend/tests/job_sub.py | elijahc/lyra | c80c633389ceee0b2124a0a4365f405e5d7c2d75 | [
"Apache-2.0"
] | null | null | null | import requests
import json
job = dict(
lib_name='KRAS',
guide_rna_promoter='TTGACAGCTAGCTCAGTCCTAGGTATAATACTAGT',
subpool_primer='AGCTGGTATCCTTCAAACCC',
ref_seq='TCCTAGGCGGCGGCCGCGGCGGCGGAGGCAGCAGCGGCGGCGGCAGTGGCGGCGGCGAAGGTGGCGGCGGCTCGGCCAGTACTCCCGGCCCCCGCCATTTCGGACTGGGAGCGAGCGCGGCGCAGGCACTGAAGGCGGCGGCGGGGCCAGAGGCTCAGCGGCTCCCAGGTGCGGGAGAGAGGCCTGCTGAAAATGACTGAATATAAACTTGTGGTAGTTGGAGCTGGTGGCGTAGGCAAGAGTGCCTTGACGATACAGCTAATTCAGAATCATTTTGTGGACGAATATGATCCAACAATAGAGGATTCCTACAGGAAGCAAGTAGTAATTGATGGAGAAACCTGTCTCTTGGATATTCTCGACACAGCAGGTCAAGAGGAGTACAGTGCAATGAGGGACCAGTACATGAGGACTGGGGAGGGCTTTCTTTGTGTATTTGCCATAAATAATACTAAATCATTTGAAGATATTCACCATTATAGAGAACAAATTAAAAGAGTTAAGGACTCTGAAGATGTACCTATGGTCCTAGTAGGAAATAAATGTGATTTGCCTTCTAGAACAGTAGACACAAAACAGGCTCAGGACTTAGCAAGAAGTTATGGAATTCCTTTTATTGAAACATCAGCAAAGACAAGACAGAGAGTGGAGGATGCTTTTTATACATTGGTGAGGGAGATCCGACAATACAGATTGAAAAAAATCAGCAAAGAAGAAAAGACTCCTGGCTGTGTGAAAATTAAAAAATGCATTATAATGTAATCTGGGTGTTGATGATGCCTTCTATACATTAGTTCGAGAAATTCGAAAACATAAAGAAAAGATGAGCAAAGATGGTAAAAAGAAGAAAAAGAAGTCAAAGACAAAGTGTGTAATTATGTAAATACAATTTGTACTTTTTTCTTAAGGCATACTAGTACAAGTGGTAATTTTTGTACATTACACTAAATTATTAGCATTTGTTTTAGCATTACCTAATTTTTTTCCTGCTCCATGCAGACTGTTAGCTTTTACCTTAAATGCTTATTTTAAAATGACAGTGGAAGTTTTTTTTTCCTCTAAGTGCCAGTATTCCCAGAGTTTTGGTTTTTGAACTAGCAATGCCTGTGAAAAAGAAACTGAATACCTAAGATTTCTGTCTTGGGGTTTTTGGTGCATGCAGTTGATTACTTCTTATTTTTCTTACCAATTGTGAATGTTGGTGTGAAACAAATTAATGAAGCTTTTGAATCATCCCTATTCTGTGTTTTATCTAGTCACATAAATGGATTAATTACTAATTTCAGTTGAGACCTTCTAATTGGTTTTTACTGAAACATTGAGGGAACACAAATTTATGGGCTTCCTGATGATGATTCTTCTAGGCATCATGTCCTATAGTTTGTCATCCCTGATGAATGTAAAGTTACACTGTTCACAAAGGTTTTGTCTCCTTTCCACTGCTATTAGTCATGGTCACTCTCCCCAAAATATTATATTTTTTCTATAAAAAGAAAAAAATGGAAAAAAATTACAAGGCAATGGAAACTATTATAAGGCCATTTCCTTTTCACATTAGATAAATTACTATAAAGACTCCTAATAGCTTTTCCTGTTAAGGCAGACCCAGTATGAAATGGGGATTATTATAGCAACCATTTTGGGGCTATATTTACATGCTACTAAATTTTTATAATAATTGAAAAGATTTTAACAAGTATAAAAAATTCTCATAGGAATTAAATGTAGTCTCCCTGTGTCAGACTGCTCTTTCATAGTATAACTTTAAATCTTTTCTTCAACTTGAGTCTTTGAAGATAGTTTTAATTCTGCTTGTGACATTAAAAGATTATTTGGGCCAGTTATAGCTTATTAGGTGTTGAAGAGACCAAGGTTGCAAGGCCAGGCCCTGTGTGAACCTTTGAGCTTTCATAGAGAGTTTCACAGCATGGACTGTGTCCCCACGGTCATCCAGTGTTGTCATGCATTGGTTAGTCAAAATGGGGAGGGACTAGGGCAGTTTGGATAGCTCAACAAGATACAATCTCACTCTGTGGTGGTCCTGCTGACAAATCAAGAGCATTGCTTTTGTTTCTTAAGAAAACAAACTCTTTTTTAAAAATTACTTTTAAATATTAACTCAAAAGTTGAGATTTTGGGGTGGTGGTGTGCCAAGACATTAATTTTTTTTTTAAACAATGAAGTGAAAAAGTTTTACAATCTCTAGGTTTGGCTAGTTCTCTTAACACTGGTTAAATTAACATTGCATAAACACTTTTCAAGTCTGATCCATATTTAATAATGCTTTAAAATAAAAATAAAAACAATCCTTTTGATAAATTTAAAATGTTACTTATTTTAAAATAAATGAAGTGAGATGGCATGGTGAGGTGAAAGTATCACTGGACTAGGAAGAAGGTGACTTAGGTTCTAGATAGGTGTCTTTTAGGACTCTGATTTTGAGGACATCACTTACTATCCATTTCTTCATGTTAAAAGAAGTCATCTCAAACTCTTAGTTTTTTTTTTTTACAACTATGTAATTTATATTCCATTTACATAAGGATACACTTATTTGTCAAGCTCAGCACAATCTGTAAATTTTTAACCTATGTTACACCATCTTCAGTGCCAGTCTTGGGCAAAATTGTGCAAGAGGTGAAGTTTATATTTGAATATCCATTCTCGTTTTAGGACTCTTCTTCCATATTAGTGTCATCTTGCCTCCCTACCTTCCACATGCCCCATGACTTGATGCAGTTTTAATACTTGTAATTCCCCTAACCATAAGATTTACTGCTGCTGTGGATATCTCCATGAAGTTTTCCCACTGAGTCACATCAGAAATGCCCTACATCTTATTTCCTCAGGGCTCAAGAGAATCTGACAGATACCATAAAGGGATTTGACCTAATCACTAATTTTCAGGTGGTGGCTGATGCTTTGAACATCTCTTTGCTGCCCAATCCATTAGCGACAGTAGGATTTTTCAAACCTGGTATGAATAGACAGAACCCTATCCAGTGGAAGGAGAATTTAATAAAGATAGTGCTGAAAGAATTCCTTAGGTAATCTATAACTAGGACTACTCCTGGTAACAGTAATACATTCCATTGTTTTAGTAACCAGAAATCTTCATGCAATGAAAAATACTTTAATTCATGAAGCTTACTTTTTTTTTTTGGTGTCAGAGTCTCGCTCTTGTCACCCAGGCTGGAATGCAGTGGCGCCATCTCAGCTCACTGCAACCTCCATCTCCCAGGTTCAAGCGATTCTCGTGCCTCGGCCTCCTGAGTAGCTGGGATTACAGGCGTGTGCCACTACACTCAACTAATTTTTGTATTTTTAGGAGAGACGGGGTTTCACCCTGTTGGCCAGGCTGGTCTCGAACTCCTGACCTCAAGTGATTCACCCACCTTGGCCTCATAAACCTGTTTTGCAGAACTCATTTATTCAGCAAATATTTATTGAGTGCCTACCAGATGCCAGTCACCGCACAAGGCACTGGGTATATGGTATCCCCAAACAAGAGACATAATCCCGGTCCTTAGGTAGTGCTAGTGTGGTCTGTAATATCTTACTAAGGCCTTTGGTATACGACCCAGAGATAACACGATGCGTATTTTAGTTTTGCAAAGAAGGGGTTTGGTCTCTGTGCCAGCTCTATAATTGTTTTGCTACGATTCCACTGAAACTCTTCGATCAAGCTACTTTATGTAAATCACTTCATTGTTTTAAAGGAATAAACTTGATTATATTGTTTTTTTATTTGGCATAACTGTGATTCTTTTAGGACAATTACTGTACACATTAAGGTGTATGTCAGATATTCATATTGACCCAAATGTGTAATATTCCAGTTTTCTCTGCATAAGTAATTAAAATATACTTAAAAATTAATAGTTTTATCTGGGTACAAATAAACAGGTGCCTGAACTAGTTCACAGACAAGGAAACTTCTATGTAAAAATCACTATGATTTCTGAATTGCTATGTGAAACTACAGATCTTTGGAACACTGTTTAGGTAGGGTGTTAAGACTTACACAGTACCTCGTTTCTACACAGAGAAAGAAATGGCCATACTTCAGGAACTGCAGTGCTTATGAGGGGATATTTAGGCCTCTTGAATTTTTGATGTAGATGGGCATTTTTTTAAGGTAGTGGTTAATTACCTTTATGTGAACTTTGAATGGTTTAACAAAAGATTTGTTTTTGTAGAGATTTTAAAGGGGGAGAATTCTAGAAATAAATGTTACCTAATTATTACAGCCTTAAAGACAAAAATCCTTGTTGAAGTTTTTTTAAAAAAAGCTAAATTACATAGACTTAGGCATTAACATGTTTGTGGAAGAATATAGCAGACGTATATTGTATCATTTGAGTGAATGTTCCCAAGTAGGCATTCTAGGCTCTATTTAACTGAGTCACACTGCATAGGAATTTAGAACCTAACTTTTATAGGTTATCAAAACTGTTGTCACCATTGCACAATTTTGTCCTAATATATACATAGAAACTTTGTGGGGCATGTTAAGTTACAGTTTGCACAAGTTCATCTCATTTGTATTCCATTGATTTTTTTTTTCTTCTAAACATTTTTTCTTCAAACAGTATATAACTTTTTTTAGGGGATTTTTTTTTAGACAGCAAAAACTATCTGAAGATTTCCATTTGTCAAAAAGTAATGATTTCTTGATAATTGTGTAGTAATGTTTTTTAGAACCCAGCAGTTACCTTAAAGCTGAATTTATATTTAGTAACTTCTGTGTTAATACTGGATAGCATGAATTCTGCATTGAGAAACTGAATAGCTGTCATAAAATGAAACTTTCTTTCTAAAGAAAGATACTCACATGAGTTCTTGAAGAATAGTCATAACTAGATTAAGATCTGTGTTTTAGTTTAATAGTTTGAAGTGCCTGTTTGGGATAATGATAGGTAATTTAGATGAATTTAGGGGAAAAAAAAGTTATCTGCAGATATGTTGAGGGCCCATCTCTCCCCCCACACCCCCACAGAGCTAACTGGGTTACAGTGTTTTATCCGAAAGTTTCCAATTCCACTGTCTTGTGTTTTCATGTTGAAAATACTTTTGCATTTTTCCTTTGAGTGCCAATTTCTTACTAGTACTATTTCTTAATGTAACATGTTTACCTGGAATGTATTTTAACTATTTTTGTATAGTGTAAACTGAAACATGCACATTTTGTACATTGTGCTTTCTTTTGTGGGACATATGCAGTGTGATCCAGTTGTTTTCCATCATTTGGTTGCGCTGACCTAGGAATGTTGGTCATATCAAACATTAAAAATGACCACTCTTTTAATTGAAATTAACTTTTAAATGTTTATAGGAGTATGTGCTGTGAAGTGATCTAAAATTTGTAATATTTTTGTCATGAACTGTACTACTCCTAATTATTGTAATGTAATAAAAATAGTTACAGTGACTATGAGTGTGTATTTATTCATGAAATTTGAACTGTTTGCCCCGAAATGGATATGGAATACTTTATAAGCCATAGACACTATAGTATACCAGTGAATCTTTTATGCAGCTTGTTAGAAGTATCCTTTATTTCTAAAAGGTGCTGTGGATATTATGTAAAGGCGTGTTTGCTTAAACTTAAAACCATATTTAGAAGTAGATGCAAAACAAATCTGCCTTTATGACAAAAAAATAGGATAACATTATTTATTTATTTCCTTTTATCAAAGAAGGTAATTGATACACAACAGGTGACTTGGTTTTAGGCCCAAAGGTAGCAGCAGCAACATTAATAATGGAAATAATTGAATAGTTAGTTATGTATGTTAATGCCAGTCACCAGCAGGCTATTTCAAGGTCAGAAGTAATGACTCCATACATATTATTTATTTCTATAACTACATTTAAATCATTACCAGG',
region_start=192,
region_stop=762
)
print('submitting...')
r = requests.post('http://localhost:3000/jobs/new',data=job)
print('resp: ')
print(r)
| 368.117647 | 5,908 | 0.979386 | 41 | 6,258 | 149.317073 | 0.829268 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.001618 | 0.012624 | 6,258 | 16 | 5,909 | 391.125 | 0.989157 | 0 | 0 | 0 | 0 | 0 | 0.958293 | 0.946628 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.142857 | 0 | 0.142857 | 0.214286 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
befc249ff688e508d87f099a32b43b5a267a0990 | 149 | py | Python | project-euler/0006_sum_square_difference.py | alenic/comprosol | 101d43ea7fef5e1847842420ab08e481c82bc526 | [
"MIT"
] | null | null | null | project-euler/0006_sum_square_difference.py | alenic/comprosol | 101d43ea7fef5e1847842420ab08e481c82bc526 | [
"MIT"
] | null | null | null | project-euler/0006_sum_square_difference.py | alenic/comprosol | 101d43ea7fef5e1847842420ab08e481c82bc526 | [
"MIT"
] | null | null | null | def sum_of_squares(n):
return n*(n+1)*(2*n+1)//6
def square_of_sum(n):
return (n*(n+1)//2)**2
print(sum_of_squares(100)-square_of_sum(100)) | 21.285714 | 45 | 0.651007 | 33 | 149 | 2.69697 | 0.363636 | 0.067416 | 0.269663 | 0.202247 | 0.247191 | 0.247191 | 0 | 0 | 0 | 0 | 0 | 0.099237 | 0.120805 | 149 | 7 | 45 | 21.285714 | 0.580153 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.4 | 0.8 | 0.2 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
befded9a66bb053a1d69f35a5a003e251a023c8e | 4,548 | py | Python | model-full.py | binbomb/Keras-SegNet-Basic | de0621fa734a0c3d28a19d6957160c811e5887d1 | [
"MIT"
] | 94 | 2017-01-17T06:23:46.000Z | 2022-03-14T11:32:24.000Z | model-full.py | xkp793003821/Keras-SegNet-Basic | 41746f3f53aea0881b6489b958c9fd3873759b6b | [
"MIT"
] | 10 | 2017-02-19T19:51:08.000Z | 2020-08-11T01:24:32.000Z | model-full.py | xkp793003821/Keras-SegNet-Basic | 41746f3f53aea0881b6489b958c9fd3873759b6b | [
"MIT"
] | 45 | 2017-03-06T16:05:25.000Z | 2022-01-26T06:15:26.000Z | from __future__ import absolute_import
from __future__ import print_function
import os
import keras.models as models
from keras.layers.core import Layer, Dense, Dropout, Activation, Flatten, Reshape, Merge, Permute
from keras.layers.convolutional import Convolution2D, MaxPooling2D, UpSampling2D, ZeroPadding2D
from keras.layers.normalization import BatchNormalization
from keras import backend as K
import cv2
import numpy as np
import json
np.random.seed(07) # 0bserver07 for reproducibility
img_w = 480
img_h = 360
n_labels = 12
kernel = 3
pad = 1
pool_size = 2
encoding_layers = [
Convolution2D(64, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
Convolution2D(64, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
MaxPooling2D(pool_size=(pool_size, pool_size)),
Convolution2D(128, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
Convolution2D(128, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
MaxPooling2D(pool_size=(pool_size, pool_size)),
Convolution2D(256, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
Convolution2D(256, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
Convolution2D(256, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
MaxPooling2D(pool_size=(pool_size, pool_size)),
Convolution2D(512, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
Convolution2D(512, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
Convolution2D(512, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
MaxPooling2D(pool_size=(pool_size, pool_size)),
Convolution2D(512, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
Convolution2D(512, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
Convolution2D(512, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
MaxPooling2D(pool_size=(pool_size, pool_size)),
]
decoding_layers = [
UpSampling2D(size=(pool_size,pool_size)),
Convolution2D(512, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
Convolution2D(512, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
Convolution2D(512, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
UpSampling2D(size=(pool_size,pool_size)),
Convolution2D(512, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
Convolution2D(512, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
Convolution2D(256, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
UpSampling2D(size=(pool_size,pool_size)),
Convolution2D(256, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
Convolution2D(256, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
Convolution2D(128, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
UpSampling2D(size=(pool_size,pool_size)),
Convolution2D(128, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
Convolution2D(64, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
UpSampling2D(size=(pool_size,pool_size)),
Convolution2D(64, kernel, kernel, border_mode='same'),
BatchNormalization(),
Activation('relu'),
Convolution2D(n_labels, 1, 1, border_mode='valid'),
BatchNormalization(),
]
segnet_basic = models.Sequential()
segnet_basic.add(Layer(input_shape=(3, 360, 480)))
segnet_basic.encoding_layers = encoding_layers
for l in segnet_basic.encoding_layers:
segnet_basic.add(l)
segnet_basic.decoding_layers = decoding_layers
for l in segnet_basic.decoding_layers:
segnet_basic.add(l)
segnet_basic.add(Reshape((n_labels, img_h * img_w), input_shape=(12,img_h, img_w)))
segnet_basic.add(Permute((2, 1)))
segnet_basic.add(Activation('softmax'))
with open('segNet_full_model.json', 'w') as outfile:
outfile.write(json.dumps(json.loads(segnet_basic.to_json()), indent=2))
| 30.119205 | 97 | 0.71438 | 506 | 4,548 | 6.231225 | 0.166008 | 0.065969 | 0.142721 | 0.174437 | 0.732318 | 0.732318 | 0.717729 | 0.697431 | 0.697431 | 0.697431 | 0 | 0.036683 | 0.148857 | 4,548 | 150 | 98 | 30.32 | 0.777835 | 0.006596 | 0 | 0.721311 | 0 | 0 | 0.052037 | 0.004872 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.090164 | null | null | 0.008197 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
834704a05cf9ab52225e2774bc91f0f6086528ea | 456 | py | Python | python/drawing-book.py | gajubadge11/hackerrank-3 | 132a5019b7ed21507bb95b5063fa66c446b0eff7 | [
"MIT"
] | 21 | 2015-02-09T18:08:38.000Z | 2021-11-08T15:00:48.000Z | python/drawing-book.py | gajubadge11/hackerrank-3 | 132a5019b7ed21507bb95b5063fa66c446b0eff7 | [
"MIT"
] | 7 | 2020-04-12T23:00:19.000Z | 2021-01-30T23:44:24.000Z | python/drawing-book.py | gajubadge11/hackerrank-3 | 132a5019b7ed21507bb95b5063fa66c446b0eff7 | [
"MIT"
] | 27 | 2015-07-22T18:08:12.000Z | 2022-02-28T19:50:26.000Z | #!/bin/python3
def fewest_turns(pages_in_book, target_page):
return min(turns_from_front(target_page),
turns_from_back(pages_in_book, target_page))
def turns_from_front(target_page):
return target_page // 2
def turns_from_back(pages_in_book, target_page):
return pages_in_book // 2 - target_page // 2
pages_in_book = int(input().strip())
target_page = int(input().strip())
print(fewest_turns(pages_in_book, target_page))
| 22.8 | 59 | 0.739035 | 71 | 456 | 4.309859 | 0.28169 | 0.294118 | 0.215686 | 0.222222 | 0.627451 | 0.470588 | 0.431373 | 0.222222 | 0 | 0 | 0 | 0.010309 | 0.149123 | 456 | 19 | 60 | 24 | 0.778351 | 0.028509 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.3 | false | 0 | 0 | 0.3 | 0.6 | 0.1 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
55caa83d06e95dceab0ad8f3d989ff6cab34640e | 12,862 | py | Python | openstack_dashboard/test/api_tests/ceilometer_tests.py | enovance/horizon | 2ed6e93c9c4e534883126c93d3283e8c93bc674f | [
"Apache-2.0"
] | 3 | 2016-04-05T14:25:31.000Z | 2018-11-18T16:03:14.000Z | openstack_dashboard/test/api_tests/ceilometer_tests.py | enovance/horizon | 2ed6e93c9c4e534883126c93d3283e8c93bc674f | [
"Apache-2.0"
] | 1 | 2019-10-27T15:57:25.000Z | 2019-10-27T15:57:25.000Z | openstack_dashboard/test/api_tests/ceilometer_tests.py | enovance/horizon | 2ed6e93c9c4e534883126c93d3283e8c93bc674f | [
"Apache-2.0"
] | 15 | 2017-01-12T10:40:00.000Z | 2019-04-19T08:28:05.000Z | # Copyright 2012 Canonical Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from django import http
from mox3.mox import IsA # noqa
from openstack_dashboard import api
from openstack_dashboard.test import helpers as test
class CeilometerApiTests(test.APITestCase):
def test_sample_list(self):
samples = self.samples.list()
meter_name = "meter_name"
ceilometerclient = self.stub_ceilometerclient()
ceilometerclient.samples = self.mox.CreateMockAnything()
ceilometerclient.samples.list(meter_name=meter_name,
q=[],
limit=None).AndReturn(samples)
self.mox.ReplayAll()
ret_list = api.ceilometer.sample_list(self.request,
meter_name,
query=[])
for c in ret_list:
self.assertIsInstance(c, api.ceilometer.Sample)
def test_meter_list(self):
meters = self.meters.list()
ceilometerclient = self.stub_ceilometerclient()
ceilometerclient.meters = self.mox.CreateMockAnything()
ceilometerclient.meters.list([]).AndReturn(meters)
self.mox.ReplayAll()
ret_list = api.ceilometer.meter_list(self.request, [])
for m in ret_list:
self.assertIsInstance(m, api.ceilometer.Meter)
def test_resource_list(self):
resources = self.resources.list()
ceilometerclient = self.stub_ceilometerclient()
ceilometerclient.resources = self.mox.CreateMockAnything()
ceilometerclient.resources.list(q=[]).AndReturn(resources)
self.mox.ReplayAll()
ret_list = api.ceilometer.resource_list(self.request, query=[])
for r in ret_list:
self.assertIsInstance(r, api.ceilometer.Resource)
def test_statistic_list(self):
statistics = self.statistics.list()
meter_name = "meter_name"
ceilometerclient = self.stub_ceilometerclient()
ceilometerclient.statistics = self.mox.CreateMockAnything()
ceilometerclient.statistics.list(meter_name=meter_name,
period=None, q=[]).\
AndReturn(statistics)
self.mox.ReplayAll()
ret_list = api.ceilometer.statistic_list(self.request,
meter_name,
period=None,
query=[])
for s in ret_list:
self.assertIsInstance(s, api.ceilometer.Statistic)
@test.create_stubs({api.nova: ('flavor_list',),
})
def test_meters_list_all(self):
meters = self.meters.list()
request = self.mox.CreateMock(http.HttpRequest)
api.nova.flavor_list(request, None).AndReturn([])
ceilometerclient = self.stub_ceilometerclient()
ceilometerclient.meters = self.mox.CreateMockAnything()
ceilometerclient.meters.list(None).AndReturn(meters)
self.mox.ReplayAll()
meters_object = api.ceilometer.Meters(self.request)
ret_list = meters_object.list_all()
for m in ret_list:
self.assertIsInstance(m, api.ceilometer.Meter)
self.assertEqual(3, len(ret_list))
names = ["disk.read.bytes", "disk.write.bytes", "instance"]
for ret in ret_list:
self.assertIn(ret.name, names)
names.remove(ret.name)
@test.create_stubs({api.nova: ('flavor_list',),
})
def test_meters_list_all_only(self):
meters = self.meters.list()
ceilometerclient = self.stub_ceilometerclient()
ceilometerclient.meters = self.mox.CreateMockAnything()
ceilometerclient.meters.list(None).AndReturn(meters)
request = self.mox.CreateMock(http.HttpRequest)
api.nova.flavor_list(request, None).AndReturn([])
self.mox.ReplayAll()
meters_object = api.ceilometer.Meters(self.request)
ret_list = meters_object.list_all(only_meters=["disk.read.bytes"])
self.assertEqual(1, len(ret_list))
self.assertEqual("disk.read.bytes", ret_list[0].name)
ret_list = meters_object.list_all(only_meters=["disk.read.bytes",
"instance"])
self.assertEqual(2, len(ret_list))
self.assertEqual("disk.read.bytes", ret_list[0].name)
self.assertEqual("instance", ret_list[1].name)
@test.create_stubs({api.nova: ('flavor_list',),
})
def test_meters_list_all_except(self):
meters = self.meters.list()
ceilometerclient = self.stub_ceilometerclient()
ceilometerclient.meters = self.mox.CreateMockAnything()
ceilometerclient.meters.list(None).AndReturn(meters)
request = self.mox.CreateMock(http.HttpRequest)
api.nova.flavor_list(request, None).AndReturn([])
self.mox.ReplayAll()
meters_object = api.ceilometer.Meters(self.request)
ret_list = meters_object.list_all(except_meters=["disk.write.bytes",
"instance"])
self.assertEqual(1, len(ret_list))
self.assertEqual("disk.read.bytes", ret_list[0].name)
ret_list = meters_object.list_all(except_meters=["disk.write.bytes"])
self.assertEqual(len(ret_list), 2)
names = ["disk.read.bytes", "instance"]
for ret in ret_list:
self.assertIn(ret.name, names)
names.remove(ret.name)
# TODO(lsmola) Test resource aggregates.
@test.create_stubs({api.ceilometer.CeilometerUsage: ("get_user",
"get_tenant")})
def test_global_data_get(self):
class TempUsage(api.base.APIResourceWrapper):
_attrs = ["id", "tenant", "user", "resource", "get_meter"]
meters = ["fake_meter_1",
"fake_meter_2"]
default_query = ["Fake query"]
stats_attr = "max"
resources = self.resources.list()
statistics = self.statistics.list()
user = self.ceilometer_users.list()[0]
tenant = self.ceilometer_tenants.list()[0]
ceilometerclient = self.stub_ceilometerclient()
ceilometerclient.resources = self.mox.CreateMockAnything()
# I am returning only 1 resource
ceilometerclient.resources.list(q=IsA(list)).AndReturn(resources[:1])
ceilometerclient.statistics = self.mox.CreateMockAnything()
# check that list is called twice for one resource and 2 meters
ceilometerclient.statistics.list(meter_name=IsA(str),
period=None, q=IsA(list)).\
AndReturn(statistics)
ceilometerclient.statistics.list(meter_name=IsA(str),
period=None, q=IsA(list)).\
AndReturn(statistics)
api.ceilometer.CeilometerUsage\
.get_user(IsA(str)).AndReturn(user)
api.ceilometer.CeilometerUsage\
.get_tenant(IsA(str)).AndReturn(tenant)
self.mox.ReplayAll()
# getting all resources and with statistics
ceilometer_usage = api.ceilometer.CeilometerUsage(http.HttpRequest)
data = ceilometer_usage.global_data_get(
used_cls=TempUsage, query=["fake_query"], with_statistics=True)
first = data[0]
self.assertEqual('fake_project_id__fake_user_id__'
'fake_resource_id',
first.id)
self.assertEqual('user', first.user.name)
self.assertEqual('test_tenant', first.tenant.name)
self.assertEqual('fake_resource_id', first.resource)
self.assertEqual(9, first.get_meter('fake_meter_1'),)
self.assertEqual(9, first.get_meter('fake_meter_2'),)
self.assertEqual(2, len(first.meters))
# check that only one resource is returned
self.assertEqual(1, len(data))
@test.create_stubs({api.ceilometer.CeilometerUsage: ("get_user",
"get_tenant")})
def test_global_data_get_without_statistic_data(self):
class TempUsage(api.base.APIResourceWrapper):
_attrs = ["id", "tenant", "user", "resource", "fake_meter_1",
"fake_meter_2"]
meters = ["fake_meter_1",
"fake_meter_2"]
default_query = ["Fake query"]
stats_attr = "max"
resources = self.resources.list()
user = self.ceilometer_users.list()[0]
tenant = self.ceilometer_tenants.list()[0]
ceilometerclient = self.stub_ceilometerclient()
ceilometerclient.resources = self.mox.CreateMockAnything()
ceilometerclient.resources.list(q=IsA(list)).AndReturn(resources)
api.ceilometer.CeilometerUsage\
.get_user(IsA(str)).MultipleTimes().AndReturn(user)
api.ceilometer.CeilometerUsage\
.get_tenant(IsA(str)).MultipleTimes().AndReturn(tenant)
self.mox.ReplayAll()
# getting all resources and with statistics
ceilometer_usage = api.ceilometer.CeilometerUsage(http.HttpRequest)
data = ceilometer_usage.global_data_get(
used_cls=TempUsage, query=["fake_query"], with_statistics=False)
first = data[0]
self.assertEqual('fake_project_id__fake_user_id__'
'fake_resource_id',
first.id)
self.assertEqual('user', first.user.name)
self.assertEqual('test_tenant', first.tenant.name)
self.assertEqual('fake_resource_id', first.resource)
self.assertRaises(AttributeError, getattr, first, 'fake_meter_1')
self.assertRaises(AttributeError, getattr, first, 'fake_meter_2')
self.assertEqual(len(resources), len(data))
@test.create_stubs({api.ceilometer.CeilometerUsage: ("get_user",
"get_tenant")})
def test_global_data_get_all_statistic_data(self):
class TempUsage(api.base.APIResourceWrapper):
_attrs = ["id", "tenant", "user", "resource", "get_meter", ]
meters = ["fake_meter_1",
"fake_meter_2"]
default_query = ["Fake query"]
stats_attr = None # have to return dictionary with all stats
resources = self.resources.list()
statistics = self.statistics.list()
user = self.ceilometer_users.list()[0]
tenant = self.ceilometer_tenants.list()[0]
ceilometerclient = self.stub_ceilometerclient()
ceilometerclient.resources = self.mox.CreateMockAnything()
ceilometerclient.resources.list(q=IsA(list)).AndReturn(resources)
ceilometerclient.statistics = self.mox.CreateMockAnything()
ceilometerclient.statistics.list(meter_name=IsA(str),
period=None, q=IsA(list)).\
MultipleTimes().\
AndReturn(statistics)
api.ceilometer.CeilometerUsage\
.get_user(IsA(str)).MultipleTimes().AndReturn(user)
api.ceilometer.CeilometerUsage\
.get_tenant(IsA(str)).MultipleTimes().AndReturn(tenant)
self.mox.ReplayAll()
# getting all resources and with statistics
ceilometer_usage = api.ceilometer.CeilometerUsage(http.HttpRequest)
data = ceilometer_usage.global_data_get(
used_cls=TempUsage, query=["fake_query"], with_statistics=True)
first = data[0]
self.assertEqual('fake_project_id__fake_user_id__'
'fake_resource_id',
first.id)
self.assertEqual('user', first.user.name)
self.assertEqual('test_tenant', first.tenant.name)
self.assertEqual('fake_resource_id', first.resource)
statistic_obj = api.ceilometer.Statistic(statistics[0])
# check that it returns whole statistic object
self.assertEqual(vars(first.get_meter('fake_meter_1')[0]),
vars(statistic_obj))
self.assertEqual(vars(first.get_meter('fake_meter_2')[0]),
vars(statistic_obj))
self.assertEqual(len(resources), len(data))
| 39.944099 | 77 | 0.616623 | 1,354 | 12,862 | 5.679468 | 0.133678 | 0.056567 | 0.039012 | 0.052016 | 0.804941 | 0.773602 | 0.750455 | 0.718986 | 0.691808 | 0.676723 | 0 | 0.005485 | 0.277095 | 12,862 | 321 | 78 | 40.068536 | 0.821575 | 0.073239 | 0 | 0.720524 | 0 | 0 | 0.071116 | 0.007818 | 0 | 0 | 0 | 0.003115 | 0.165939 | 1 | 0.043668 | false | 0 | 0.017467 | 0 | 0.078603 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
55df1012a365c90d51d9ae58b50a683f8b1bb7c4 | 30 | py | Python | example/azure/cookies/__init__.py | odd12258053/agraffe | 1f191e5c57c06aff2d685fd62785571137f32210 | [
"MIT"
] | 16 | 2020-09-11T07:20:26.000Z | 2022-02-08T20:41:08.000Z | example/azure/none/__init__.py | odd12258053/agraffe | 1f191e5c57c06aff2d685fd62785571137f32210 | [
"MIT"
] | 12 | 2020-09-07T00:17:41.000Z | 2022-02-04T08:05:54.000Z | example/azure/none/__init__.py | odd12258053/agraffe | 1f191e5c57c06aff2d685fd62785571137f32210 | [
"MIT"
] | 2 | 2020-11-09T09:17:49.000Z | 2020-11-10T03:19:16.000Z | from entry_point import main
| 15 | 29 | 0.833333 | 5 | 30 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 30 | 1 | 30 | 30 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
36121247d83af0fc78271322aadddedb554040a2 | 59,699 | py | Python | cogs/resources/wiki_dyk.py | EZLiang/conwaylife-caterer | 41929146623a98701ad473ed3ec8be0e752186d7 | [
"MIT"
] | 7 | 2019-02-27T20:49:51.000Z | 2021-08-06T15:09:37.000Z | cogs/resources/wiki_dyk.py | EZLiang/conwaylife-caterer | 41929146623a98701ad473ed3ec8be0e752186d7 | [
"MIT"
] | 4 | 2020-07-30T22:55:34.000Z | 2020-11-10T01:54:51.000Z | cogs/resources/wiki_dyk.py | EZLiang/conwaylife-caterer | 41929146623a98701ad473ed3ec8be0e752186d7 | [
"MIT"
] | 5 | 2019-06-27T18:29:06.000Z | 2021-02-13T21:38:38.000Z | #encoding: utf-8
trivia = [
'... that the [caterpillar](https://conwaylife.com/wiki/Caterpillar "Caterpillar") contains over 11 million [cells](https://conwaylife.com/wiki/Cell "Cell"), and the [0E0P metacell](https://conwaylife.com/wiki/0E0P_metacell "0E0P metacell") contains over 18 million cells?',
'... that the [Gosper glider gun](https://conwaylife.com/wiki/Gosper_glider_gun "Gosper glider gun") was the first pattern to be discovered that exhibits infinite growth?',
'... that the [block-laying switch engine](https://conwaylife.com/wiki/Block-laying_switch_engine "Block-laying switch engine") and the [glider-producing switch engine](https://conwaylife.com/wiki/Glider-producing_switch_engine "Glider-producing switch engine") (and [various combinations of two switch engines](http://conwaylife.com/forums/viewtopic.php?f=2&t=1452&p=25314#p25283)) are the only infinitely-growing patterns that are known to have ever occurred naturally from an asymmetric random starting configuration?',
'... that [oscillators](https://conwaylife.com/wiki/Oscillator "Oscillator") are known that oscillate at all [periods](https://conwaylife.com/wiki/Period "Period") other than 19, 34, 38 and 41?',
'... that the [pentadecathlon](https://conwaylife.com/wiki/Pentadecathlon "Pentadecathlon") and the [blinker](https://conwaylife.com/wiki/Blinker "Blinker") are the only known [oscillators](https://conwaylife.com/wiki/Oscillator "Oscillator") that are [polyominos](https://conwaylife.com/wiki/Polyomino "Polyomino") in more than one [phase](https://conwaylife.com/wiki/Phase "Phase")?',
'... that it is impossible for a [period](https://conwaylife.com/wiki/Period "Period")\-[3](https://conwaylife.com/wiki/Category:Oscillators_with_period_3 "Category:Oscillators with period 3") [oscillator](https://conwaylife.com/wiki/Oscillator "Oscillator") to be a [phoenix](https://conwaylife.com/wiki/Phoenix "Phoenix")?',
'... that the 16×16 [soup](https://conwaylife.com/wiki/Soup "Soup") with the [longest known lifespan](https://conwaylife.com/wiki/List_of_long-lived_methuselahs "List of long-lived methuselahs") lasts for [over 49,000 generations](https://conwaylife.com/wiki/49768M "49768M") before stabilizing?',
'... that [replicators](https://conwaylife.com/wiki/Replicator "Replicator") with quadratic population growth have been known to exist in [Conway\'s Game of Life](https://conwaylife.com/wiki/Conway%27s_Game_of_Life "Conway\'s Game of Life") since the early 1970s, but none were found until 2018 when [Adam P. Goucher](https://conwaylife.com/wiki/Adam_P._Goucher "Adam P. Goucher") constructed the [0E0P metacell](https://conwaylife.com/wiki/0E0P_metacell "0E0P metacell")?',
'... that the [first](https://conwaylife.com/wiki/Beluchenko%27s_p37 "Beluchenko\'s p37") [known](https://conwaylife.com/wiki/Beluchenko%27s_p51 "Beluchenko\'s p51") period 37 and 51 oscillators were found in [2009](https://conwaylife.com/wiki/Category:Patterns_found_in_2009 "Category:Patterns found in 2009")?',
'... that a pattern whose population grows without bound but does not tend to infinity is known as a [sawtooth](https://conwaylife.com/wiki/Sawtooth "Sawtooth")?',
'... that there are over 35.4 billion distinct [strict still lifes](https://conwaylife.com/wiki/Still_life#Strict_still_lifes "Still life") with 34 or fewer [cells](https://conwaylife.com/wiki/Cell "Cell")?',
'... that [some](https://conwaylife.com/wiki/Block-laying_switch_engine "Block-laying switch engine") [infinitely-growing](https://conwaylife.com/wiki/Glider-producing_switch_engine "Glider-producing switch engine") patterns can be [constructed](https://conwaylife.com/wiki/Glider_synthesis "Glider synthesis") with as few as three [gliders](https://conwaylife.com/wiki/Glider "Glider")?',
'... that quadratically-growing patterns have been found with as few as [23 initial cells](https://conwaylife.com/wiki/Switch_engine_ping-pong "Switch engine ping-pong")?',
'... that the [blinker](https://conwaylife.com/wiki/Blinker "Blinker") is the only known [oscillator](https://conwaylife.com/wiki/Oscillator "Oscillator") that is [one cell thick](https://conwaylife.com/wiki/One-cell-thick_pattern "One-cell-thick pattern")?',
'... that [Gemini](https://conwaylife.com/wiki/Gemini "Gemini"), the first [spaceship](https://conwaylife.com/wiki/Spaceship "Spaceship") found in [Conway\'s Game of Life](https://conwaylife.com/wiki/Conway%27s_Game_of_Life "Conway\'s Game of Life") that travels in an [oblique direction](https://conwaylife.com/wiki/Knightship "Knightship"), was discovered in [2010](https://conwaylife.com/wiki/Category:Patterns_found_in_2010 "Category:Patterns found in 2010")?',
'... that there are [still lifes](https://conwaylife.com/wiki/Still_life "Still life") (such as the [quad pseudo still life](https://conwaylife.com/wiki/Quad_pseudo_still_life "Quad pseudo still life")) that can be split into four stable [islands](https://conwaylife.com/wiki/Island "Island"), but not two or three?',
'... that no new [spaceship](https://conwaylife.com/wiki/Spaceship "Spaceship") [speeds](https://conwaylife.com/wiki/Speed "Speed") were discovered after [1970](https://conwaylife.com/wiki/Category:Patterns_found_in_1970 "Category:Patterns found in 1970") until [1989](https://conwaylife.com/wiki/Category:Patterns_found_in_1989 "Category:Patterns found in 1989")?',
'... that the first stable reflector was found in October 1996, and the [first fast stable reflector](https://conwaylife.com/wiki/Snark "Snark") appeared in 2013, allowing the construction of oscillators of all periods ≥43 ticks?',
'... that nineteen [spaceship](https://conwaylife.com/wiki/Spaceship "Spaceship") velocities have been constructed, excluding several infinitely adjustable families of ships?',
'... that there are 71 distinct ways for two [gliders](https://conwaylife.com/wiki/Glider "Glider") to [collide](https://conwaylife.com/wiki/Glider_synthesis "Glider synthesis"), but it is unknown how many distinct 3-glider collisions there are?',
'... that to display the smallest known gun pattern for a [Gemini](https://conwaylife.com/wiki/Gemini "Gemini") spaceship at 1 cell = 1 pixel, on a standard-density video monitor, a screen over one mile square would be needed?',
'... that no odd-period glider guns were known before 1995, when a period 565 p5-spark-assisted [B-heptomino](https://conwaylife.com/wiki/B-heptomino "B-heptomino") loop was constructed by [David Buckingham](https://conwaylife.com/wiki/David_Buckingham "David Buckingham")?',
'... that even though the speed limit for spaceships is c/2 in a vacuum, in a medium of [stripes agar](https://conwaylife.com/wiki/Zebra_stripes "Zebra stripes") there are "spaceships" that can travel at lightspeed along the stripes, or two thirds of lightspeed perpendicular to the stripes?',
'... that the smallest known [spacefiller](https://conwaylife.com/wiki/Spacefiller "Spacefiller") pattern consists of [183 cells](https://conwaylife.com/wiki/Max "Max")?',
'... that the smallest known [sawtooth](https://conwaylife.com/wiki/Sawtooth "Sawtooth") pattern in Conway\'s Life consists of only [177 cells](https://conwaylife.com/wiki/Sawtooth_177 "Sawtooth 177")?',
'... that there are now over a hundred and seventy known [Herschel conduits](https://conwaylife.com/wiki/Herschel_conduit "Herschel conduit"), counting stable conduits only, and a much larger number if oscillator-supported conduits are included?',
'... that [Demonoids](https://conwaylife.com/wiki/Types_of_spaceships#Self-constructing_spaceship "Types of spaceships"), [Caterloopillars](https://conwaylife.com/wiki/Caterloopillar "Caterloopillar"), [Orthogonoids](https://conwaylife.com/wiki/Orthogonoid "Orthogonoid"), [half-bakery knightships](https://conwaylife.com/wiki/Parallel_HBK "Parallel HBK"), [loopships](https://conwaylife.com/w/index.php?title=Loopship&action=edit&redlink=1 "Loopship (page does not exist)") and [camelships](https://conwaylife.com/wiki/Camelship "Camelship") are the only known types of spaceships with fixed [slope](https://conwaylife.com/wiki/Slope "Slope") but adjustable speed -- not counting [0E0P metacell](https://conwaylife.com/wiki/0E0P_metacell "0E0P metacell")\-based patterns?',
'... that a pattern exists in which no cell in the unbounded Life plane [ever becomes periodic](https://conwaylife.com/wiki/Total_aperiodic "Total aperiodic")?',
'... that several different [universal constructors](https://conwaylife.com/wiki/Universal_constructor "Universal constructor") in Conway’s Life have been shown to be capable of constructing their own circuitry?',
'... that there are dozens of known [Cordership](https://conwaylife.com/wiki/Cordership "Cordership") variants, including [puffers](https://conwaylife.com/wiki/Puffer "Puffer"), [rakes](https://conwaylife.com/wiki/Rake "Rake") and [wickstretchers](https://conwaylife.com/wiki/Wickstretcher "Wickstretcher"), with periods of any multiple of 96?',
'... that [greyships](https://conwaylife.com/wiki/Greyship "Greyship") have been constructed with speeds of [c/2](https://conwaylife.com/wiki/C/2_orthogonal "C/2 orthogonal"), [c/3](https://conwaylife.com/wiki/C/3_orthogonal "C/3 orthogonal"), [c/4](https://conwaylife.com/wiki/C/4_orthogonal "C/4 orthogonal"), [c/5](https://conwaylife.com/wiki/C/5_orthogonal "C/5 orthogonal"), and [2c/5](https://conwaylife.com/wiki/2c/5_orthogonal "2c/5 orthogonal")?',
'... that most [greyships](https://conwaylife.com/wiki/Greyship "Greyship") travel [parallel to the stripes](https://conwaylife.com/wiki/With_the_grain "With the grain") in their included [agars](https://conwaylife.com/wiki/Agar "Agar"), but a few travel perpendicular to the stripes, or "[against the grain](https://conwaylife.com/wiki/Against_the_grain "Against the grain")"?',
'... that a pattern has been constructed that calculates and [prints out the digits of pi in decimal](https://conwaylife.com/wiki/Pi_calculator "Pi calculator"), and a similar one prints out the decimal digits of the Golden Ratio?',
'... that several different patterns have been constructed to calculate and display the sequence of [prime numbers](https://conwaylife.com/wiki/Primer "Primer"), and some have been adapted to display only [twin primes](https://conwaylife.com/wiki/Twin_prime_calculator "Twin prime calculator") or [Fermat primes](https://conwaylife.com/wiki/Fermat_prime_calculator "Fermat prime calculator")?',
'... that two completely different types of [oblique spaceships](https://conwaylife.com/wiki/Oblique_spaceship "Oblique spaceship"), the [waterbear](https://conwaylife.com/wiki/Waterbear "Waterbear") and the [half-baked knightship](https://conwaylife.com/wiki/Half-baked_knightship "Half-baked knightship"), were constructed in 2014?',
'... that no Caterpillar-type spaceships were completed for almost ten years after the original [Caterpillar](https://conwaylife.com/wiki/Caterpillar "Caterpillar") was constructed in 2004, but that two different designs, the [waterbear](https://conwaylife.com/wiki/Waterbear "Waterbear") and the [centipede](https://conwaylife.com/wiki/Centipede "Centipede"), were finished in 2014?',
'... that the first [spiral-growth pattern](https://conwaylife.com/wiki/Spiral_growth "Spiral growth") in Conway\'s Life was constructed in [2014](https://conwaylife.com/wiki/Category:Patterns_found_in_2014 "Category:Patterns found in 2014")?',
'... that among known glider recipes for irreducible objects, the [Gemini spaceship](https://conwaylife.com/wiki/Gemini "Gemini") has the largest known minimal recipe, currently 173,449 gliders — the runner-up being the [Parallel HBK](https://conwaylife.com/wiki/Parallel_HBK "Parallel HBK") with a 38,380-glider synthesis?',
'... that it was shown in 2014 that any salvo of gliders, no matter how tightly packed, can be constructed by crashing together gliders whose initial positions are farther apart than any chosen finite distance?',
'... that no spaceships with velocities other than [c/4 diagonal](https://conwaylife.com/wiki/C/4_diagonal "C/4 diagonal") ([glider](https://conwaylife.com/wiki/Glider "Glider")), [c/2 orthogonal](https://conwaylife.com/wiki/C/2_orthogonal "C/2 orthogonal") ([\*WSS](https://conwaylife.com/wiki/*WSS "*WSS") variants), and [c/12 diagonal](https://conwaylife.com/wiki/C/12_diagonal "C/12 diagonal") ([Corderships](https://conwaylife.com/wiki/Cordership "Cordership")) had known glider syntheses until [2003](https://conwaylife.com/wiki/Category:Patterns_found_in_2003 "Category:Patterns found in 2003"), when a [2c/5 spaceship gun](https://conwaylife.com/wiki/P416_60P5H2V0_gun "P416 60P5H2V0 gun") was constructed?',
'... that after ten years with no new small [spaceship](https://conwaylife.com/wiki/Spaceship "Spaceship") [syntheses](https://conwaylife.com/wiki/Syntheses "Syntheses"), a glider construction was found for the c/7 [loafer](https://conwaylife.com/wiki/Loafer "Loafer") in [2013](https://conwaylife.com/wiki/Category:Patterns_found_in_2013 "Category:Patterns found in 2013") less than three hours after its discovery?',
'... that glider constructions for three previously inconstructible spaceships (the [dart](https://conwaylife.com/wiki/Dart "Dart"), the [crab](https://conwaylife.com/wiki/Crab "Crab"), and the [Parallel HBK](https://conwaylife.com/wiki/Parallel_HBK "Parallel HBK")) were discovered in 2014?',
'... that glider constructions for the [B29](https://conwaylife.com/wiki/B29 "B29"), [X66](https://conwaylife.com/wiki/X66 "X66"), [half-X66 with HWSS](https://conwaylife.com/wiki/X66#Image_gallery "X66"), [Pushalong 1](https://conwaylife.com/wiki/Pushalong_1 "Pushalong 1"), [25P3H1V0.1](https://conwaylife.com/wiki/25P3H1V0.1 "25P3H1V0.1"), [30P5H2V0](https://conwaylife.com/wiki/30P5H2V0 "30P5H2V0"), [30P4H2V0.4](https://conwaylife.com/wiki/30P4H2V0.4 "30P4H2V0.4"), a [pufferfish](https://conwaylife.com/wiki/Pufferfish "Pufferfish") spaceship, and the [weekender](https://conwaylife.com/wiki/Weekender "Weekender") were discovered in 2015 — after fifteen years of intermittent efforts in the case of the weekender?',
'... that since 2014, more new spaceship syntheses have been completed than were found in all the years between 1970 and 2013?',
'... that as of 19 July 2020 there are [232 different still lifes](https://www.conwaylife.com/forums/viewtopic.php?p=100655#p100655) known to be constructible by colliding four gliders, but it is likely that this list is not complete?',
'... that in 2014 a [new natural infinite growth pattern](https://conwaylife.com/wiki/Pufferfish "Pufferfish") was discovered, starting from a symmetric random starting configuration?',
'... that the [first self-constructing Conway\'s Life pattern](https://conwaylife.com/wiki/Gemini "Gemini") was built in 2010?',
'... that the first glider synthesis for a [c/3 spaceship](https://conwaylife.com/wiki/25P3H1V0.2 "25P3H1V0.2") was [completed](http://conwaylife.com/forums/viewtopic.php?f=2&t=1557&p=16241#p16241) in 2014?',
'... that the first "macro-spaceship" gun (a [Gemini](https://conwaylife.com/wiki/Gemini "Gemini") spaceship gun) was constructed in 2010, followed by the [HBK gun](https://conwaylife.com/wiki/HBK_gun "HBK gun") in January 2015 and a [Demonoid](https://conwaylife.com/wiki/Demonoid "Demonoid") gun in December 2015?',
'... that the [waterbear](https://conwaylife.com/wiki/Waterbear "Waterbear") was the first known high-speed [oblique spaceship](https://conwaylife.com/wiki/Oblique_spaceship "Oblique spaceship"), many orders of magnitude faster than [Gemini](https://conwaylife.com/wiki/Gemini "Gemini") spaceships and [half-baked knightships](https://conwaylife.com/wiki/Half-baked_knightship "Half-baked knightship")?',
'... that there are no known direct reflectors for [lightspeed wire](https://conwaylife.com/wiki/Lightspeed_wire "Lightspeed wire") signals, or for signals in [2c/3 wires](https://conwaylife.com/wiki/2c/3_wire "2c/3 wire"), but that very large reflectors for these signals can be constructed using [stable](https://conwaylife.com/wiki/P1 "P1") or [periodic](https://conwaylife.com/wiki/Periodic "Periodic") circuitry?',
'... that 24 ten-cell patterns exhibit infinite growth, with 17 unique pattern types, but that it has been proven that no nine-cell pattern exhibits infinite growth?',
'... that all still lifes up to 19 bits have a known [glider synthesis](https://conwaylife.com/wiki/Glider_synthesis "Glider synthesis"), but it is still not known whether all still lifes are synthesizable?',
'... that the [French kiss](https://conwaylife.com/wiki/French_kiss "French kiss") remained without a glider synthesis until 2013?',
'... that Adam P. Goucher\'s distributed [Catagolue](https://conwaylife.com/wiki/Catagolue "Catagolue") soup-search project, started in February 2015, has tested several orders of magnitude more random [soups](https://conwaylife.com/wiki/Soup "Soup") than any previous such project, and has contributed to the reduction of many [glider construction](https://conwaylife.com/wiki/Glider_synthesis "Glider synthesis") recipes?',
'... that with the appearance of the [0E0P](https://conwaylife.com/wiki/0E0P "0E0P") metacell, the number of periods for which [strict volatility](https://conwaylife.com/wiki/Strict_volatility "Strict volatility") 1 [oscillators](https://conwaylife.com/wiki/Oscillator "Oscillator") were known went from 12 to infinity?',
'... that [Copperhead](https://conwaylife.com/wiki/Copperhead "Copperhead") is not only the first [c/10 orthogonal](https://conwaylife.com/wiki/C/10_orthogonal "C/10 orthogonal") [spaceship](https://conwaylife.com/wiki/Spaceship "Spaceship") ever found, but also remarkably compact for a pattern not discovered until 2016?',
'... that [loafer](https://conwaylife.com/wiki/Loafer "Loafer") is the fifth smallest non-flotilla spaceship, but was discovered 43 years after the four spaceships smaller than it?',
'... that despite being the fourth smallest non-flotilla orthogonal spaceship, [loafer](https://conwaylife.com/wiki/Loafer "Loafer") did not appear from a single randomly generated [soup](https://conwaylife.com/wiki/Soup "Soup") until [2020](https://conwaylife.com/wiki/Category:Patterns_found_in_2020 "Category:Patterns found in 2020")?',
'... that all known glider eaters take at least four ticks to recover to their original state after eating a glider?',
'... that [the formerly smallest](https://conwaylife.com/wiki/Centipede_caterloopillar "Centipede caterloopillar") [31c/240 spaceship](https://conwaylife.com/wiki/31c/240_orthogonal "31c/240 orthogonal") does not make use of the [31c/240 reaction](https://conwaylife.com/wiki/31c/240_reaction "31c/240 reaction")?',
'... that there is roughly one chance in 10^(N/3) that a [still life](https://conwaylife.com/wiki/Still_life "Still life") appearing out of random [soup](https://conwaylife.com/wiki/Soup "Soup") will have a population of exactly N cells?',
'... that the number of still lifes with N+1 bits is roughly 2.48 times larger than the number of N-bit still lifes?',
'... that the odds of a randomly-chosen 20×20 soup pattern being a [methuselah](https://conwaylife.com/wiki/Methuselah "Methuselah") that lasts between 1000N and 1000(N+1) ticks, is roughly the same as the odds that it will last **any** amount of time longer than 1000(N+1) ticks?',
'... that all [still lifes](https://conwaylife.com/wiki/Still_life "Still life") up to [17](https://conwaylife.com/wiki/Category:Strict_still_lifes_with_17_cells "Category:Strict still lifes with 17 cells") cells can be [synthesized](https://conwaylife.com/wiki/Glider_synthesis "Glider synthesis") at a cost of less than one [glider](https://conwaylife.com/wiki/Glider "Glider") per cell?',
'... that the first [elementary](https://conwaylife.com/wiki/Spaceship#Elementary_spaceships "Spaceship") [knightship](https://conwaylife.com/wiki/Knightship "Knightship"), [Sir Robin](https://conwaylife.com/wiki/Sir_Robin "Sir Robin"), was discovered only in [2018](https://conwaylife.com/wiki/Category:Patterns_found_in_2018 "Category:Patterns found in 2018"), with there having been a [very close call](https://conwaylife.com/wiki/Almost_knightship "Almost knightship") in [2004](https://conwaylife.com/wiki/Category:Patterns_found_in_2004 "Category:Patterns found in 2004")?',
'... that there is a 5×2 counterexample to the [Coolout Conjecture](https://conwaylife.com/wiki/Coolout_Conjecture "Coolout Conjecture"), proving that patterns that are internally compatible with stability can not always be made part of a larger still life, no matter what cells are added around the edges?',
'... that a Conway\'s Life pattern representing a complete [programmable 8-bit computer](http://conwaylife.com/forums/viewtopic.php?f=2&t=2561), consisting only of [buckaroos](https://conwaylife.com/wiki/Buckaroo "Buckaroo"), p60 glider guns, and glider duplicators, was completed in November 2016?',
'... that whilst no elementary [oblique spaceships](https://conwaylife.com/wiki/Oblique_spaceship "Oblique spaceship") were found in B3/S23 until 2018, and none have occurred naturally, at least two naturally occurring reactions have been discovered in [B38/S23](https://conwaylife.com/wiki/Pedestrian_Life "Pedestrian Life") that travel in an oblique direction?',
'... that not all [1.00 volatility oscillators](https://conwaylife.com/wiki/Category:Oscillators_with_volatility_1.00 "Category:Oscillators with volatility 1.00") are [phoenixes](https://conwaylife.com/wiki/Phoenix "Phoenix"), but volatility 1.00 period 2 oscillators _must_ be phoenixes?',
'... that no pattern inside a 6×6 bounding box is a [Garden of Eden](https://conwaylife.com/wiki/Garden_of_Eden "Garden of Eden")?',
'... that [Garden of Eden](https://conwaylife.com/wiki/Garden_of_Eden "Garden of Eden") patterns with only 45 ON cells have been found?',
'... that it is known that no [Garden of Eden](https://conwaylife.com/wiki/Garden_of_Eden "Garden of Eden") patterns exist that are 1, 2, or 3 cells high, but that it is currently an open question whether a 4-cell-high GoE can be constructed?',
'... that 6-cell-high [Garden of Eden](https://conwaylife.com/wiki/Garden_of_Eden "Garden of Eden") patterns were constructed as far back as 1973, but 5-cell-high GoEs were unknown until Steven Eker found some in 2016?',
'... that in 2016, [patterns were found](https://conwaylife.com/wiki/Grandfather_problem "Grandfather problem") that have great-great-grandparents but no great-great-great-grandparents?',
'... that no way is known for a 3×3 pattern to be tiled into an M×N rectangle to produce a [Garden of Eden](https://conwaylife.com/wiki/Garden_of_Eden "Garden of Eden"), but that there are 4×3, 4×4 and larger tiles that can be repeated in this way to produce GoEs?',
'... that there are spaceship stabilizations of agars?',
'... that [block](https://conwaylife.com/wiki/Block "Block") is the only known finite [strict still life](https://conwaylife.com/wiki/Strict_still_life "Strict still life") where each living cell has exactly 3 neighbours?',
'... that all [strict still lifes](https://conwaylife.com/wiki/Strict_still_life "Strict still life") up to and including 14 cells have been found by [apgsearch](https://conwaylife.com/wiki/Apgsearch "Apgsearch") in asymmetrical 16×16 [soups](https://conwaylife.com/wiki/Soup "Soup")?',
'... that [c/2 orthogonal](https://conwaylife.com/wiki/C/2_orthogonal "C/2 orthogonal") and [c/4 diagonal](https://conwaylife.com/wiki/C/4_diagonal "C/4 diagonal") were the only speeds of spaceships seen to emerge from asymmetric soups on [Catagolue](https://conwaylife.com/wiki/Catagolue "Catagolue") until [2020](https://conwaylife.com/wiki/Category:Patterns_found_in_2020 "Category:Patterns found in 2020"), when a [loafer](https://conwaylife.com/wiki/Loafer "Loafer") appeared naturally?',
'... that all currently known standard Herschel conduits produce the same Herschel great-grandfather pattern, except for [Fx158](https://conwaylife.com/wiki/Fx158 "Fx158")?',
'... that [eater 1](https://conwaylife.com/wiki/Eater_1 "Eater 1") is the smallest asymmetric [still life](https://conwaylife.com/wiki/Still_life "Still life")?',
'... that in [Life](https://conwaylife.com/wiki/Conway%27s_Game_of_Life "Conway\'s Game of Life"), no [spaceship](https://conwaylife.com/wiki/Spaceship "Spaceship") can exist with a speed of (m,n)[c](https://conwaylife.com/wiki/Speed_of_light "Speed of light")/x where (m+n)/x > 0.5?',
'... that even without using fixed-cost [reverse caber tosser](https://conwaylife.com/wiki/Reverse_caber_tosser "Reverse caber tosser") technology, an N-bit [strict still life](https://conwaylife.com/wiki/Strict_still_life "Strict still life") – specifically, some length of long long (...) [boat](https://conwaylife.com/wiki/Boat "Boat") or [ship](https://conwaylife.com/wiki/Ship "Ship") – [can be constructed](https://www.conwaylife.com/forums/viewtopic.php?p=101450#p101450) for any odd integer N using no more than [20 gliders](https://catagolue.appspot.com/census/b3s23/synthesis-costs/xs401), and for any even integer N using no more than [21 gliders](https://catagolue.appspot.com/census/b3s23/synthesis-costs/xs400), using a temporary [tubstretcher](https://conwaylife.com/wiki/Tubstretcher "Tubstretcher")?',
'... that even without using fixed-cost [reverse caber tosser](https://conwaylife.com/wiki/Reverse_caber_tosser "Reverse caber tosser") technology, some specific N-bit period-2 [oscillators](https://conwaylife.com/wiki/Oscillator "Oscillator") can be constructed for any even integer N using no more than 27 gliders, and for any odd integer N using no more than 29 gliders, using a temporary [tubstretcher](https://conwaylife.com/wiki/Tubstretcher "Tubstretcher")?',
'... that it is [currently an open question](https://conwaylife.com/wiki/Unique_father_problem "Unique father problem") as to if there exists a periodic pattern whose only predecessors are its own evolutionary sequence?',
'... that it was proved in the early 1970s that [reflectorless rotating oscillators](https://conwaylife.com/wiki/Reflectorless_rotating_oscillator "Reflectorless rotating oscillator") exist in Life, but none were found until 2018 when [Adam P. Goucher](https://conwaylife.com/wiki/Adam_P._Goucher "Adam P. Goucher") completed the [0E0P metacell](https://conwaylife.com/wiki/0E0P_metacell "0E0P metacell")?',
'... that there are currently known elementary spaceships with speeds [c/7](https://conwaylife.com/wiki/C/7_orthogonal "C/7 orthogonal") and [c/10 orthogonal](https://conwaylife.com/wiki/C/10_orthogonal "C/10 orthogonal"), but none with [c/8](https://conwaylife.com/wiki/C/8_orthogonal "C/8 orthogonal") or [c/9](https://conwaylife.com/wiki/C/9_orthogonal "C/9 orthogonal")?',
'... that there are currently known elementary spaceships with speeds [c/7](https://conwaylife.com/wiki/C/7_diagonal "C/7 diagonal") and [c/12 diagonal](https://conwaylife.com/wiki/C/12_diagonal "C/12 diagonal"), but none with c/8, c/9, c/10 or c/11?',
'... that while the speed limit for orthogonal spaceships in rules using the [Moore neighbourhood](https://conwaylife.com/wiki/Moore_neighbourhood "Moore neighbourhood") is [c orthogonal](https://conwaylife.com/wiki/C_orthogonal "C orthogonal"), spaceships in [Larger than Life](https://conwaylife.com/wiki/Larger_than_Life "Larger than Life") rules are capable of breaking this barrier?',
'... that without the use of [adjustable glider loops](https://conwaylife.com/wiki/Adjustable_glider_loops "Adjustable glider loops"), there are no known oscillators with periods [43](https://conwaylife.com/wiki/Category:Oscillators_with_period_43 "Category:Oscillators with period 43") or [53](https://conwaylife.com/w/index.php?title=Category:Oscillators_with_period_53&action=edit&redlink=1 "Category:Oscillators with period 53 (page does not exist)")?',
'... that without the use of [Herschel](https://conwaylife.com/wiki/Herschel "Herschel") loops or [adjustable glider loops](https://conwaylife.com/wiki/Adjustable_glider_loops "Adjustable glider loops"), there are no known oscillators with periods [59](https://conwaylife.com/wiki/Category:Oscillators_with_period_59 "Category:Oscillators with period 59") or [61](https://conwaylife.com/wiki/Category:Oscillators_with_period_61 "Category:Oscillators with period 61")?',
'... that small [stable](https://conwaylife.com/wiki/Stable "Stable") [elementary](https://conwaylife.com/wiki/Elementary "Elementary") [period multipliers](https://conwaylife.com/wiki/Period_multiplier "Period multiplier") (also known as [pulse dividers](https://conwaylife.com/wiki/Pulse_divider "Pulse divider")) have been found for multipliers of [2x](https://conwaylife.com/wiki/Semi-Snark "Semi-Snark"), [3x](https://conwaylife.com/wiki/Tremi-Snark "Tremi-Snark"), and [4x](https://conwaylife.com/wiki/Quadri-Snark "Quadri-Snark"), but not for 5x or higher?',
'... that with [reverse caber-tosser](https://conwaylife.com/wiki/Reverse_caber-tosser "Reverse caber-tosser") [universal constructor](https://conwaylife.com/wiki/Universal_constructor "Universal constructor") technology, it is possible to build any possible glider-constructible pattern, no matter what size, using only 17 gliders?',
'... that there are at least four known ways to send information diagonally at a speed greater than the maximum spaceship speed through vacuum? (Complete mechanisms include speeds approaching c/2 via two perpendicular [telegraphs](https://conwaylife.com/wiki/Telegraph "Telegraph"), and 2c/3 via a [2c/3 wire](https://conwaylife.com/wiki/2c/3_wire "2c/3 wire").)',
'... that an [oscillator](https://conwaylife.com/wiki/Oscillator "Oscillator") with strict [volatility](https://conwaylife.com/wiki/Volatility "Volatility") 1 can be constructed for any period 3506909 or higher?',
'... that the original [Gemini](https://conwaylife.com/wiki/Gemini "Gemini")\'s "below-the-elbow" construction efficiency, roughly three gliders per still life, is about four times better than that of _any_ subsequent [self-constructing](https://conwaylife.com/wiki/Self-constructing "Self-constructing") [spaceship](https://conwaylife.com/wiki/Spaceship "Spaceship")?',
'... that an [O(sqrt(log(t))) pattern](https://conwaylife.com/wiki/Osqrtlogt "Osqrtlogt") was constructed in 2010, with a diameter that grows at the slowest possible asymptotic ("big O") rate for any Life pattern?',
'... that since the first [Cordership](https://conwaylife.com/wiki/Cordership "Cordership") was assembled from 13 [switch engines](https://conwaylife.com/wiki/Switch_engine "Switch engine") in 1991, the number of switch engines required has gradually decreased, with a [2-engine Cordership](https://conwaylife.com/wiki/2-engine_Cordership "2-engine Cordership") finally making its appearance in 2017?',
'... that the bounding box and [recovery time](https://conwaylife.com/wiki/Recovery_time "Recovery time") of the current fastest [stable reflector](https://conwaylife.com/wiki/Stable_reflector "Stable reflector"), Mike Playle\'s [Snark](https://conwaylife.com/wiki/Snark "Snark"), are both more than two full orders of magnitude smaller than the first stable reflector, constructed by Paul Callahan in 1996?',
'... that as of 2019, no elementary [replicators](https://conwaylife.com/wiki/Replicator "Replicator") have been found in Life?',
'... that while multiple [c/12 diagonal](https://conwaylife.com/wiki/C/12_diagonal "C/12 diagonal") spaceships are known, none are true period?',
'... that the first p23 oscillator, [David Hilbert](https://conwaylife.com/wiki/David_Hilbert "David Hilbert"), was created as a modification of a ["troll" pattern](https://www.conwaylife.com/forums/viewtopic.php?p=85440#p85440) posted a week earlier?',
'... that exactly two years after a fake [loafer](https://conwaylife.com/wiki/Loafer "Loafer")\-producing [soup](https://conwaylife.com/wiki/Soup "Soup") was [posted to the forums](https://www.conwaylife.com/forums/viewtopic.php?p=58727#p58727) on April Fools\' Day, a real soup was found on April 1, [2020](https://conwaylife.com/wiki/Category:Patterns_found_in_2020 "Category:Patterns found in 2020")?',
'... patterns have been constructed whose [fate is currently unknown](https://conwaylife.com/wiki/Unknown_fate "Unknown fate") (based on the twin primes and Collatz conjectures)?',
'... a [pattern of 44 cells](https://conwaylife.com/wiki/One_per_generation "One per generation") exists whose population grows by exactly one cell each generation?',
'... it is possible to send a [signal](https://conwaylife.com/wiki/Signal "Signal") from one side to the other of an infinite diagonal line of cells [without destroying the line](https://conwaylife.com/wiki/Line_crosser "Line crosser")?',
'... there are \'[Heisenburp](https://conwaylife.com/wiki/Heisenburp "Heisenburp")\' reactions which can detect the passage of a [glider](https://conwaylife.com/wiki/Glider "Glider") without affecting it in any way?',
'... [Corderships](https://conwaylife.com/wiki/Cordership "Cordership") can be constructed using individual [switch engines](https://conwaylife.com/wiki/Switch_engine "Switch engine") placed arbitrarily far from each other, that will still support each other using intermediary [gliders](https://conwaylife.com/wiki/Glider "Glider") and [stable](https://conwaylife.com/wiki/Stable "Stable") objects?',
'... [fuses](https://conwaylife.com/wiki/Fuse "Fuse") can be made that [burn](https://conwaylife.com/wiki/Burn "Burn") arbitrarily slowly, based on sending [spaceships](https://conwaylife.com/wiki/Spaceship "Spaceship") back and forth between two rows of [stable](https://conwaylife.com/wiki/Stable "Stable") objects?',
'... there exist \'lone dot\' [agars](https://conwaylife.com/wiki/Agar "Agar") consisting of isolated cells in every generation?',
'... that some types of [spaceship](https://conwaylife.com/wiki/Spaceship "Spaceship"), but not all, support [stable](https://conwaylife.com/wiki/Stable "Stable") [Heisenburp](https://conwaylife.com/wiki/Heisenburp "Heisenburp") technology, where an arrangement of [still lifes](https://conwaylife.com/wiki/Still_life "Still life") detects the passage of the spaceship, emits a [signal](https://conwaylife.com/wiki/Signal "Signal"), and returns to its original state?',
'... that, while it is impossible to build a true [stable](https://conwaylife.com/wiki/Stable "Stable") [Heisenburp](https://conwaylife.com/wiki/Heisenburp "Heisenburp") device that detects a passing glider without even temporarily affecting it, there are several known [stable pseudo-Heisenburp](https://conwaylife.com/wiki/Stable_pseudo-Heisenburp "Stable pseudo-Heisenburp") devices?',
'... that the name of the [Bandersnatch](https://conwaylife.com/wiki/Bandersnatch "Bandersnatch"), a [color-changing](https://conwaylife.com/wiki/Color-changing "Color-changing") lane-shifter device discovered in [2020](https://conwaylife.com/wiki/Category:Patterns_found_in_2020 "Category:Patterns found in 2020"), is derived from a Lewis Carroll poem that also supplied names for the [Snark](https://conwaylife.com/wiki/Snark "Snark"), the [boojum reflector](https://conwaylife.com/wiki/Boojum_reflector "Boojum reflector"), and the [Bellman](https://conwaylife.com/wiki/Bellman "Bellman") search utility?',
'... that the first [rake](https://conwaylife.com/wiki/Rake "Rake") that produces [spaceships](https://conwaylife.com/wiki/Spaceship "Spaceship") travelling in the same direction but slower was found in [2003](https://conwaylife.com/wiki/Category:Patterns_found_in_2003 "Category:Patterns found in 2003"), using a [c/2](https://conwaylife.com/wiki/C/2 "C/2") rake to produce [2c/5](https://conwaylife.com/wiki/2c/5 "2c/5") spaceships?',
'... that it is possible for a [single Life object](https://conwaylife.com/wiki/Deep_Cell "Deep Cell") to simulate the evolution of an arbitrary number of other Life objects at the same time (although at increasingly slower speeds)?',
'... that the [Fast Forward Force Field](https://conwaylife.com/wiki/Fast_Forward_Force_Field "Fast Forward Force Field") reaction can transport an [LWSS](https://conwaylife.com/wiki/LWSS "LWSS") 11 spaces in 6 [generations](https://conwaylife.com/wiki/Generation "Generation"), creating the illusion of super-light-speed travel?',
'... that there are [line puffers](https://conwaylife.com/wiki/Line_puffer "Line puffer") with a row of live cells at the back, which create very dirty [exhaust](https://conwaylife.com/wiki/Exhaust "Exhaust") whose [period](https://conwaylife.com/wiki/Period "Period") apparently (this is not proven) grows exponentially as the length of the row is increased?',
'... that there is an infinite series of period 3 [oscillators](https://conwaylife.com/wiki/Oscillator "Oscillator") that are [polyominoes](https://conwaylife.com/wiki/Polyomino "Polyomino") in one phase, starting with the [cross](https://conwaylife.com/wiki/Cross "Cross")?',
'... that there are [spaceships](https://conwaylife.com/wiki/Spaceship "Spaceship") without any [sparks](https://conwaylife.com/wiki/Spark "Spark") which can nevertheless [perturb](https://conwaylife.com/wiki/Perturb "Perturb") objects due to their ability to [repair some damage to themselves](https://conwaylife.com/wiki/Types_of_spaceships#Edge-repair_spaceship "Types of spaceships")?',
'... that the [R-pentomino](https://conwaylife.com/wiki/R-pentomino "R-pentomino") creates a [queen bee shuttle](https://conwaylife.com/wiki/Queen_bee_shuttle "Queen bee shuttle") in generation 774, which lasts 17 generations before being destroyed?',
'... that a [relay](https://conwaylife.com/wiki/Relay "Relay") glider bouncing back and forth between two [pentadecathlons](https://conwaylife.com/wiki/Pentadecathlon "Pentadecathlon") was one of the earliest constructive proofs that [oscillators](https://conwaylife.com/wiki/Oscillator "Oscillator") can have arbitrarily high [periods](https://conwaylife.com/wiki/Period "Period")?',
'... that there are [spacefiller](https://conwaylife.com/wiki/Spacefiller "Spacefiller") patterns that [grow quadratically](https://conwaylife.com/wiki/Quadratic_growth "Quadratic growth") to fill space with an [agar](https://conwaylife.com/wiki/Agar "Agar") with density 1/2 ([zebra stripes](https://conwaylife.com/wiki/Zebra_stripes "Zebra stripes"))?',
'... that a row of appropriately placed [traffic lights](https://conwaylife.com/wiki/Traffic_light "Traffic light") is one of the few known [wicks](https://conwaylife.com/wiki/Wick "Wick") that can be extended by ["pushing" from its stationary end](https://conwaylife.com/wiki/Traffic_lights_extruder "Traffic lights extruder")?',
'... that [space nonfiller](https://conwaylife.com/wiki/Space_nonfiller "Space nonfiller") patterns have been constructed that expand to affect the entire Life plane, leaving an expanding region of [vacuum](https://conwaylife.com/wiki/Vacuum "Vacuum") at their center?',
'... that there exist [non-monotonic](https://conwaylife.com/wiki/Non-monotonic "Non-monotonic") [spaceships](https://conwaylife.com/wiki/Spaceship "Spaceship"), even some with [period](https://conwaylife.com/wiki/Period "Period") as low as 3, whose leading edges fall back in at least one generation.',
'... that some periodic objects -- e.g., a [pentadecathlon](https://conwaylife.com/wiki/Pentadecathlon "Pentadecathlon") [hassled](https://conwaylife.com/wiki/Hassle "Hassle") by period-7 [pipsquirters](https://conwaylife.com/wiki/Pipsquirter "Pipsquirter") -- can be [perturbed](https://conwaylife.com/wiki/Perturb "Perturb") to cause them to skip forwards by one or more [phases](https://conwaylife.com/wiki/Phase "Phase") in their cycle?',
'... that more [stable](https://conwaylife.com/wiki/Stable "Stable") [seed](https://conwaylife.com/wiki/Seed "Seed") [constellations](https://conwaylife.com/wiki/Constellation "Constellation") for moving objects were completed in [2020](https://conwaylife.com/wiki/Category:Patterns_found_in_2020 "Category:Patterns found in 2020") than in all preceding years put together?'
]
# ---------- #
plaintext = [
'... that the caterpillar contains over 11 million cells, and the 0E0P metacell contains over 18 million cells?',
'... that the Gosper glider gun was the first pattern to be discovered that exhibits infinite growth?',
'... that the block-laying switch engine and the glider-producing switch engine (and various combinations of two switch engines) are the only infinitely-growing patterns that are known to have ever occurred naturally from an asymmetric random starting configuration?',
'... that oscillators are known that oscillate at all periods other than 19, 34, 38 and 41?',
'... that the pentadecathlon and the blinker are the only known oscillators that are polyominos in more than one phase?',
'... that it is impossible for a period-3 oscillator to be a phoenix?',
'... that the 16×16 soup with the longest known lifespan lasts for over 49,000 generations before stabilizing?',
'... that replicators with quadratic population growth have been known to exist in Conway\'s Game of Life since the early 1970s, but none were found until 2018 when Adam P. Goucher constructed the 0E0P metacell?',
'... that the first known period 37 and 51 oscillators were found in 2009?',
'... that a pattern whose population grows without bound but does not tend to infinity is known as a sawtooth?',
'... that there are over 35.4 billion distinct strict still lifes with 34 or fewer cells?',
'... that some infinitely-growing patterns can be constructed with as few as three gliders?',
'... that quadratically-growing patterns have been found with as few as 23 initial cells?',
'... that the blinker is the only known oscillator that is one cell thick?',
'... that Gemini, the first spaceship found in Conway\'s Game of Life that travels in an oblique direction, was discovered in 2010?',
'... that there are still lifes (such as the quad pseudo still life) that can be split into four stable islands, but not two or three?',
'... that no new spaceship speeds were discovered after 1970 until 1989?',
'... that the first stable reflector was found in October 1996, and the first fast stable reflector appeared in 2013, allowing the construction of oscillators of all periods ≥43 ticks?',
'... that nineteen spaceship velocities have been constructed, excluding several infinitely adjustable families of ships?',
'... that there are 71 distinct ways for two gliders to collide, but it is unknown how many distinct 3-glider collisions there are?',
'... that to display the smallest known gun pattern for a Gemini spaceship at 1 cell = 1 pixel, on a standard-density video monitor, a screen over one mile square would be needed?',
'... that no odd-period glider guns were known before 1995, when a period 565 p5-spark-assisted B-heptomino loop was constructed by David Buckingham?',
'... that even though the speed limit for spaceships is c/2 in a vacuum, in a medium of stripes agar there are "spaceships" that can travel at lightspeed along the stripes, or two thirds of lightspeed perpendicular to the stripes?',
'... that the smallest known spacefiller pattern consists of 183 cells?',
'... that the smallest known sawtooth pattern in Conway\'s Life consists of only 177 cells?',
'... that there are now over a hundred and seventy known Herschel conduits, counting stable conduits only, and a much larger number if oscillator-supported conduits are included?',
'... that Demonoids, Caterloopillars, Orthogonoids, half-bakery knightships, loopships and camelships are the only known types of spaceships with fixed slope but adjustable speed -- not counting 0E0P metacell-based patterns?',
'... that a pattern exists in which no cell in the unbounded Life plane ever becomes periodic?',
'... that several different universal constructors in Conway’s Life have been shown to be capable of constructing their own circuitry?',
'... that there are dozens of known Cordership variants, including puffers, rakes and wickstretchers, with periods of any multiple of 96?',
'... that greyships have been constructed with speeds of c/2, c/3, c/4, c/5, and 2c/5?',
'... that most greyships travel parallel to the stripes in their included agars, but a few travel perpendicular to the stripes, or "against the grain"?',
'... that a pattern has been constructed that calculates and prints out the digits of pi in decimal, and a similar one prints out the decimal digits of the Golden Ratio?',
'... that several different patterns have been constructed to calculate and display the sequence of prime numbers, and some have been adapted to display only twin primes or Fermat primes?',
'... that two completely different types of oblique spaceships, the waterbear and the half-baked knightship, were constructed in 2014?',
'... that no Caterpillar-type spaceships were completed for almost ten years after the original Caterpillar was constructed in 2004, but that two different designs, the waterbear and the centipede, were finished in 2014?',
'... that the first spiral-growth pattern in Conway\'s Life was constructed in 2014?',
'... that among known glider recipes for irreducible objects, the Gemini spaceship has the largest known minimal recipe, currently 173,449 gliders — the runner-up being the Parallel HBK with a 38,380-glider synthesis?',
'... that it was shown in 2014 that any salvo of gliders, no matter how tightly packed, can be constructed by crashing together gliders whose initial positions are farther apart than any chosen finite distance?',
'... that no spaceships with velocities other than c/4 diagonal (glider), c/2 orthogonal (*WSS variants), and c/12 diagonal (Corderships) had known glider syntheses until 2003, when a 2c/5 spaceship gun was constructed?',
'... that after ten years with no new small spaceship syntheses, a glider construction was found for the c/7 loafer in 2013 less than three hours after its discovery?',
'... that glider constructions for three previously inconstructible spaceships (the dart, the crab, and the Parallel HBK) were discovered in 2014?',
'... that glider constructions for the B29, X66, half-X66 with HWSS, Pushalong 1, 25P3H1V0.1, 30P5H2V0, 30P4H2V0.4, a pufferfish spaceship, and the weekender were discovered in 2015 — after fifteen years of intermittent efforts in the case of the weekender?',
'... that since 2014, more new spaceship syntheses have been completed than were found in all the years between 1970 and 2013?',
'... that as of 19 July 2020 there are 232 different still lifes known to be constructible by colliding four gliders, but it is likely that this list is not complete?',
'... that in 2014 a new natural infinite growth pattern was discovered, starting from a symmetric random starting configuration?',
'... that the first self-constructing Conway\'s Life pattern was built in 2010?',
'... that the first glider synthesis for a c/3 spaceship was completed in 2014?',
'... that the first "macro-spaceship" gun (a Gemini spaceship gun) was constructed in 2010, followed by the HBK gun in January 2015 and a Demonoid gun in December 2015?',
'... that the waterbear was the first known high-speed oblique spaceship, many orders of magnitude faster than Gemini spaceships and half-baked knightships?',
'... that there are no known direct reflectors for lightspeed wire signals, or for signals in 2c/3 wires, but that very large reflectors for these signals can be constructed using stable or periodic circuitry?',
'... that 24 ten-cell patterns exhibit infinite growth, with 17 unique pattern types, but that it has been proven that no nine-cell pattern exhibits infinite growth?',
'... that all still lifes up to 19 bits have a known glider synthesis, but it is still not known whether all still lifes are synthesizable?',
'... that the French kiss remained without a glider synthesis until 2013?',
'... that Adam P. Goucher\'s distributed Catagolue soup-search project, started in February 2015, has tested several orders of magnitude more random soups than any previous such project, and has contributed to the reduction of many glider construction recipes?',
'... that with the appearance of the 0E0P metacell, the number of periods for which strict volatility 1 oscillators were known went from 12 to infinity?',
'... that Copperhead is not only the first c/10 orthogonal spaceship ever found, but also remarkably compact for a pattern not discovered until 2016?',
'... that loafer is the fifth smallest non-flotilla spaceship, but was discovered 43 years after the four spaceships smaller than it?',
'... that despite being the fourth smallest non-flotilla orthogonal spaceship, loafer did not appear from a single randomly generated soup until 2020?',
'... that all known glider eaters take at least four ticks to recover to their original state after eating a glider?',
'... that the formerly smallest 31c/240 spaceship does not make use of the 31c/240 reaction?',
'... that there is roughly one chance in 10^(N/3) that a still life appearing out of random soup will have a population of exactly N cells?',
'... that the number of still lifes with N+1 bits is roughly 2.48 times larger than the number of N-bit still lifes?',
'... that the odds of a randomly-chosen 20×20 soup pattern being a methuselah that lasts between 1000N and 1000(N+1) ticks, is roughly the same as the odds that it will last any amount of time longer than 1000(N+1) ticks?',
'... that all still lifes up to 17 cells can be synthesized at a cost of less than one glider per cell?',
'... that the first elementary knightship, Sir Robin, was discovered only in 2018, with there having been a very close call in 2004?',
'... that there is a 5×2 counterexample to the Coolout Conjecture, proving that patterns that are internally compatible with stability can not always be made part of a larger still life, no matter what cells are added around the edges?',
'... that a Conway\'s Life pattern representing a complete programmable 8-bit computer, consisting only of buckaroos, p60 glider guns, and glider duplicators, was completed in November 2016?',
'... that whilst no elementary oblique spaceships were found in B3/S23 until 2018, and none have occurred naturally, at least two naturally occurring reactions have been discovered in B38/S23 that travel in an oblique direction?',
'... that not all 1.00 volatility oscillators are phoenixes, but volatility 1.00 period 2 oscillators must be phoenixes?',
'... that no pattern inside a 6×6 bounding box is a Garden of Eden?',
'... that Garden of Eden patterns with only 45 ON cells have been found?',
'... that it is known that no Garden of Eden patterns exist that are 1, 2, or 3 cells high, but that it is currently an open question whether a 4-cell-high GoE can be constructed?',
'... that 6-cell-high Garden of Eden patterns were constructed as far back as 1973, but 5-cell-high GoEs were unknown until Steven Eker found some in 2016?',
'... that in 2016, patterns were found that have great-great-grandparents but no great-great-great-grandparents?',
'... that no way is known for a 3×3 pattern to be tiled into an M×N rectangle to produce a Garden of Eden, but that there are 4×3, 4×4 and larger tiles that can be repeated in this way to produce GoEs?',
'... that there are spaceship stabilizations of agars?',
'... that block is the only known finite strict still life where each living cell has exactly 3 neighbours?',
'... that all strict still lifes up to and including 14 cells have been found by apgsearch in asymmetrical 16×16 soups?',
'... that c/2 orthogonal and c/4 diagonal were the only speeds of spaceships seen to emerge from asymmetric soups on Catagolue until 2020, when a loafer appeared naturally?',
'... that all currently known standard Herschel conduits produce the same Herschel great-grandfather pattern, except for Fx158?',
'... that eater 1 is the smallest asymmetric still life?',
'... that in Life, no spaceship can exist with a speed of (m,n)c/x where (m+n)/x > 0.5?',
'... that even without using fixed-cost reverse caber tosser technology, an N-bit strict still life – specifically, some length of long long (...) boat or ship – can be constructed for any odd integer N using no more than 20 gliders, and for any even integer N using no more than 21 gliders, using a temporary tubstretcher?',
'... that even without using fixed-cost reverse caber tosser technology, some specific N-bit period-2 oscillators can be constructed for any even integer N using no more than 27 gliders, and for any odd integer N using no more than 29 gliders, using a temporary tubstretcher?',
'... that it is currently an open question as to if there exists a periodic pattern whose only predecessors are its own evolutionary sequence?',
'... that it was proved in the early 1970s that reflectorless rotating oscillators exist in Life, but none were found until 2018 when Adam P. Goucher completed the 0E0P metacell?',
'... that there are currently known elementary spaceships with speeds c/7 and c/10 orthogonal, but none with c/8 or c/9?',
'... that there are currently known elementary spaceships with speeds c/7 and c/12 diagonal, but none with c/8, c/9, c/10 or c/11?',
'... that while the speed limit for orthogonal spaceships in rules using the Moore neighbourhood is c orthogonal, spaceships in Larger than Life rules are capable of breaking this barrier?',
'... that without the use of adjustable glider loops, there are no known oscillators with periods 43 or 53?',
'... that without the use of Herschel loops or adjustable glider loops, there are no known oscillators with periods 59 or 61?',
'... that small stable elementary period multipliers (also known as pulse dividers) have been found for multipliers of 2x, 3x, and 4x, but not for 5x or higher?',
'... that with reverse caber-tosser universal constructor technology, it is possible to build any possible glider-constructible pattern, no matter what size, using only 17 gliders?',
'... that there are at least four known ways to send information diagonally at a speed greater than the maximum spaceship speed through vacuum? (Complete mechanisms include speeds approaching c/2 via two perpendicular telegraphs, and 2c/3 via a 2c/3 wire.)',
'... that an oscillator with strict volatility 1 can be constructed for any period 3506909 or higher?',
'... that the original Gemini\'s "below-the-elbow" construction efficiency, roughly three gliders per still life, is about four times better than that of any subsequent self-constructing spaceship?',
'... that an O(sqrt(log(t))) pattern was constructed in 2010, with a diameter that grows at the slowest possible asymptotic ("big O") rate for any Life pattern?',
'... that since the first Cordership was assembled from 13 switch engines in 1991, the number of switch engines required has gradually decreased, with a 2-engine Cordership finally making its appearance in 2017?',
'... that the bounding box and recovery time of the current fastest stable reflector, Mike Playle\'s Snark, are both more than two full orders of magnitude smaller than the first stable reflector, constructed by Paul Callahan in 1996?',
'... that as of 2019, no elementary replicators have been found in Life?',
'... that while multiple c/12 diagonal spaceships are known, none are true period?',
'... that the first p23 oscillator, David Hilbert, was created as a modification of a "troll" pattern posted a week earlier?',
'... that exactly two years after a fake loafer-producing soup was posted to the forums on April Fools\' Day, a real soup was found on April 1, 2020?',
'... patterns have been constructed whose fate is currently unknown (based on the twin primes and Collatz conjectures)?',
'... a pattern of 44 cells exists whose population grows by exactly one cell each generation?',
'... it is possible to send a signal from one side to the other of an infinite diagonal line of cells without destroying the line?',
'... there are \'Heisenburp\' reactions which can detect the passage of a glider without affecting it in any way?',
'... Corderships can be constructed using individual switch engines placed arbitrarily far from each other, that will still support each other using intermediary gliders and stable objects?',
'... fuses can be made that burn arbitrarily slowly, based on sending spaceships back and forth between two rows of stable objects?',
'... there exist \'lone dot\' agars consisting of isolated cells in every generation?',
'... that some types of spaceship, but not all, support stable Heisenburp technology, where an arrangement of still lifes detects the passage of the spaceship, emits a signal, and returns to its original state?',
'... that, while it is impossible to build a true stable Heisenburp device that detects a passing glider without even temporarily affecting it, there are several known stable pseudo-Heisenburp devices?',
'... that the name of the Bandersnatch, a color-changing lane-shifter device discovered in 2020, is derived from a Lewis Carroll poem that also supplied names for the Snark, the boojum reflector, and the Bellman search utility?',
'... that the first rake that produces spaceships travelling in the same direction but slower was found in 2003, using a c/2 rake to produce 2c/5 spaceships?',
'... that it is possible for a single Life object to simulate the evolution of an arbitrary number of other Life objects at the same time (although at increasingly slower speeds)?',
'... that the Fast Forward Force Field reaction can transport an LWSS 11 spaces in 6 generations, creating the illusion of super-light-speed travel?',
'... that there are line puffers with a row of live cells at the back, which create very dirty exhaust whose period apparently (this is not proven) grows exponentially as the length of the row is increased?',
'... that there is an infinite series of period 3 oscillators that are polyominoes in one phase, starting with the cross?',
'... that there are spaceships without any sparks which can nevertheless perturb objects due to their ability to repair some damage to themselves?',
'... that the R-pentomino creates a queen bee shuttle in generation 774, which lasts 17 generations before being destroyed?',
'... that a relay glider bouncing back and forth between two pentadecathlons was one of the earliest constructive proofs that oscillators can have arbitrarily high periods?',
'... that there are spacefiller patterns that grow quadratically to fill space with an agar with density 1/2 (zebra stripes)?',
'... that a row of appropriately placed traffic lights is one of the few known wicks that can be extended by "pushing" from its stationary end?',
'... that space nonfiller patterns have been constructed that expand to affect the entire Life plane, leaving an expanding region of vacuum at their center?',
'... that there exist non-monotonic spaceships, even some with period as low as 3, whose leading edges fall back in at least one generation.',
'... that some periodic objects -- e.g., a pentadecathlon hassled by period-7 pipsquirters -- can be perturbed to cause them to skip forwards by one or more phases in their cycle?',
'... that more stable seed constellations for moving objects were completed in 2020 than in all preceding years put together?'
]
assert len(trivia) == len(plaintext)
count = len(trivia)
| 221.929368 | 820 | 0.762291 | 8,974 | 59,699 | 5.04591 | 0.103076 | 0.096749 | 0.131178 | 0.159357 | 0.875403 | 0.818118 | 0.775562 | 0.680955 | 0.615189 | 0.581048 | 0 | 0.027931 | 0.122012 | 59,699 | 268 | 821 | 222.757463 | 0.835448 | 0.000436 | 0 | 0.045802 | 0 | 0.526718 | 0.919623 | 0.004659 | 0 | 0 | 0 | 0 | 0.003817 | 1 | 0 | false | 0.022901 | 0 | 0 | 0 | 0.007634 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3615ab227f7214cf12d2fa24e1f809cd307cc7a1 | 7,128 | py | Python | ambari-agent/src/test/python/ambari_agent/TestAmsAlert.py | zyclove/ambari | 1032f0f54cb7b312b9a3b37570cd840f4e1e89d4 | [
"Apache-2.0"
] | null | null | null | ambari-agent/src/test/python/ambari_agent/TestAmsAlert.py | zyclove/ambari | 1032f0f54cb7b312b9a3b37570cd840f4e1e89d4 | [
"Apache-2.0"
] | null | null | null | ambari-agent/src/test/python/ambari_agent/TestAmsAlert.py | zyclove/ambari | 1032f0f54cb7b312b9a3b37570cd840f4e1e89d4 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/python2
'''
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
'''
from unittest import TestCase
from alerts.ams_alert import AmsAlert
from mock.mock import Mock, MagicMock, patch
from AmbariConfig import AmbariConfig
class TestAmsAlert(TestCase):
def setUp(self):
self.config = AmbariConfig()
@patch("httplib.HTTPConnection")
def test_collect_ok(self, conn_mock):
alert_meta = {
'name': 'alert1',
'label': 'label1',
'serviceName': 'service1',
'componentName': 'component1',
'uuid': '123',
'enabled': 'true'
}
alert_source_meta = {
'ams': {
'metric_list': [
'metric1'
],
"app_id": "APP_ID",
"interval": 60,
"minimum_value": -1,
"compute": "mean",
"value": "{0}"
},
'uri': {
'http': '192.168.0.10:8080',
'https_property': '{{ams-site/timeline.metrics.service.http.policy}}',
'https_property_value': 'HTTPS_ONLY'
},
"reporting": {
"ok": {
"text": "OK: {0}"
},
"warning": {
"text": "Warn: {0}",
"value": 3
},
"critical": {
"text": "Crit: {0}",
"value": 5
}
}
}
cluster = 'c1'
host = 'host1'
expected_text = 'OK: 2'
def collector_side_effect(clus, data):
self.assertEquals(data['name'], alert_meta['name'])
self.assertEquals(data['label'], alert_meta['label'])
self.assertEquals(data['text'], expected_text)
self.assertEquals(data['service'], alert_meta['serviceName'])
self.assertEquals(data['component'], alert_meta['componentName'])
self.assertEquals(data['uuid'], alert_meta['uuid'])
self.assertEquals(data['enabled'], alert_meta['enabled'])
self.assertEquals(data['cluster'], cluster)
self.assertEquals(clus, cluster)
ca_connection = MagicMock()
response = MagicMock()
response.status = 200
ca_connection.getresponse.return_value = response
conn_mock.return_value = ca_connection
response.read.return_value = '{"metrics":[{"metricname":"metric1","metrics":{"1459966360838":1,"1459966370838":3}}]}'
mock_collector = MagicMock()
mock_collector.put = Mock(side_effect=collector_side_effect)
alert = AmsAlert(alert_meta, alert_source_meta, self.config)
alert.set_helpers(mock_collector, {'foo-site/bar': 12, 'foo-site/baz': 'asd'})
alert.set_cluster(cluster, host)
alert.collect()
@patch("httplib.HTTPConnection")
def test_collect_warn(self, conn_mock):
alert_meta = {
'definitionId': 1,
'name': 'alert1',
'label': 'label1',
'serviceName': 'service1',
'componentName': 'component1',
'uuid': '123',
'enabled': 'true'
}
alert_source_meta = {
'ams': {
'metric_list': [
'metric1'
],
"app_id": "APP_ID",
"interval": 60,
"minimum_value": -1,
"compute": "mean",
"value": "{0}"
},
'uri': {
'http': '192.168.0.10:8080',
'https_property': '{{ams-site/timeline.metrics.service.http.policy}}',
'https_property_value': 'HTTPS_ONLY'
},
"reporting": {
"ok": {
"text": "OK: {0}"
},
"warning": {
"text": "Warn: {0}",
"value": 3
},
"critical": {
"text": "Crit: {0}",
"value": 5
}
}
}
cluster = 'c1'
host = 'host1'
cluster_id = '0'
expected_text = 'Warn: 4'
def collector_side_effect(clus, data):
self.assertEquals(data['name'], alert_meta['name'])
self.assertEquals(data['text'], expected_text)
self.assertEquals(data['clusterId'], cluster_id)
self.assertEquals(clus, cluster)
ca_connection = MagicMock()
response = MagicMock()
response.status = 200
ca_connection.getresponse.return_value = response
conn_mock.return_value = ca_connection
response.read.return_value = '{"metrics":[{"metricname":"metric1","metrics":{"1459966360838":3,"1459966370838":5}}]}'
mock_collector = MagicMock()
mock_collector.put = Mock(side_effect=collector_side_effect)
alert = AmsAlert(alert_meta, alert_source_meta, self.config)
alert.set_helpers(mock_collector, MagicMock(), MagicMock())#{'foo-site/bar': 12, 'foo-site/baz': 'asd'})
alert.set_cluster(cluster, cluster_id, host)
alert.collect()
@patch("httplib.HTTPConnection")
def test_collect_ok(self, conn_mock):
alert_meta = {
'definitionId': 1,
'name': 'alert1',
'label': 'label1',
'serviceName': 'service1',
'componentName': 'component1',
'uuid': '123',
'enabled': 'true'
}
alert_source_meta = {
'ams': {
'metric_list': [
'metric1'
],
"app_id": "APP_ID",
"interval": 60,
"minimum_value": -1,
"compute": "mean",
"value": "{0}"
},
'uri': {
'http': '192.168.0.10:8080',
'https_property': '{{ams-site/timeline.metrics.service.http.policy}}',
'https_property_value': 'HTTPS_ONLY'
},
"reporting": {
"ok": {
"text": "OK: {0}"
},
"warning": {
"text": "Warn: {0}",
"value": 3
},
"critical": {
"text": "Crit: {0}",
"value": 5
}
}
}
cluster = 'c1'
host = 'host1'
cluster_id = '0'
expected_text = 'Crit: 10'
def collector_side_effect(clus, data):
self.assertEquals(data['name'], alert_meta['name'])
self.assertEquals(data['text'], expected_text)
self.assertEquals(data['clusterId'], cluster_id)
self.assertEquals(clus, cluster)
ca_connection = MagicMock()
response = MagicMock()
response.status = 200
ca_connection.getresponse.return_value = response
conn_mock.return_value = ca_connection
response.read.return_value = '{"metrics":[{"metricname":"metric1","metrics":{"1459966360838":10,"1459966370838":10}}]}'
mock_collector = MagicMock()
mock_collector.put = Mock(side_effect=collector_side_effect)
alert = AmsAlert(alert_meta, alert_source_meta, self.config)
alert.set_helpers(mock_collector, MagicMock(), MagicMock())#{'foo-site/bar': 12, 'foo-site/baz': 'asd'})
alert.set_cluster(cluster, cluster_id, host)
alert.collect()
| 29.94958 | 123 | 0.60115 | 781 | 7,128 | 5.329065 | 0.230474 | 0.065353 | 0.067275 | 0.020903 | 0.736905 | 0.736905 | 0.736905 | 0.736905 | 0.736905 | 0.709995 | 0 | 0.039643 | 0.246212 | 7,128 | 237 | 124 | 30.075949 | 0.734971 | 0.120791 | 0 | 0.765306 | 0 | 0 | 0.253357 | 0.075607 | 0 | 0 | 0 | 0 | 0.086735 | 1 | 0.035714 | false | 0 | 0.020408 | 0 | 0.061224 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3620f45821651451553415122115fe8bfc3f57cf | 130 | py | Python | api/docbleach/controllers/static.py | vmalguy/DocBleach-Web | 9645317da77fb45f77af93e925386b66f032fc15 | [
"MIT"
] | 12 | 2017-03-29T21:27:02.000Z | 2020-03-17T08:22:56.000Z | api/docbleach/controllers/static.py | vmalguy/DocBleach-Web | 9645317da77fb45f77af93e925386b66f032fc15 | [
"MIT"
] | 2 | 2018-12-20T15:13:40.000Z | 2020-07-22T15:03:21.000Z | api/docbleach/controllers/static.py | vmalguy/DocBleach-Web | 9645317da77fb45f77af93e925386b66f032fc15 | [
"MIT"
] | 8 | 2017-05-03T11:32:23.000Z | 2021-09-30T03:11:50.000Z | import tornado.web
from .base import BaseHandler
class StaticFileHandler(tornado.web.StaticFileHandler, BaseHandler):
pass
| 16.25 | 68 | 0.807692 | 14 | 130 | 7.5 | 0.642857 | 0.190476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130769 | 130 | 7 | 69 | 18.571429 | 0.929204 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
3644c4a3091e0b2ea283761b31f223f9821d0d05 | 29 | py | Python | utils/data/augmentations/__init__.py | shogi880/lossyless | 04f76c765111c4bf7e6b8f870787e5a8a2b66fbd | [
"MIT"
] | 61 | 2021-07-05T20:05:51.000Z | 2022-03-19T08:52:40.000Z | utils/data/augmentations/__init__.py | shogi880/lossyless | 04f76c765111c4bf7e6b8f870787e5a8a2b66fbd | [
"MIT"
] | 2 | 2021-07-26T15:36:17.000Z | 2021-12-19T10:29:12.000Z | utils/data/augmentations/__init__.py | shogi880/lossyless | 04f76c765111c4bf7e6b8f870787e5a8a2b66fbd | [
"MIT"
] | 7 | 2021-09-14T08:35:05.000Z | 2022-02-27T20:01:02.000Z | from .label_augment import *
| 14.5 | 28 | 0.793103 | 4 | 29 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 29 | 1 | 29 | 29 | 0.88 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
365ae0533d4a04b30f7effb92e5d74fd83ae78c6 | 28 | py | Python | test/test_nop.py | amcgregor/cloudapp-rescue | 3fb9607990b7d160e4ed2eb789268f4307a35489 | [
"MIT"
] | 1 | 2021-05-11T15:18:03.000Z | 2021-05-11T15:18:03.000Z | test/test_nop.py | amcgregor/cloudapp-rescue | 3fb9607990b7d160e4ed2eb789268f4307a35489 | [
"MIT"
] | 1 | 2019-12-23T17:26:42.000Z | 2019-12-23T17:26:42.000Z | test/test_nop.py | amcgregor/cloudapp-rescue | 3fb9607990b7d160e4ed2eb789268f4307a35489 | [
"MIT"
] | null | null | null | def test_nop():
assert True | 14 | 15 | 0.75 | 5 | 28 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 2 | 16 | 14 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.5 | true | 0 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3672e4fff5dfeaaa5bdb434bb4218c0c123856bc | 359 | py | Python | Symbol.py | Alissonfelipe1234/turing_machine | 8f72ce7ac74b96ac20ec33319b2a69ef5214418d | [
"Unlicense"
] | null | null | null | Symbol.py | Alissonfelipe1234/turing_machine | 8f72ce7ac74b96ac20ec33319b2a69ef5214418d | [
"Unlicense"
] | null | null | null | Symbol.py | Alissonfelipe1234/turing_machine | 8f72ce7ac74b96ac20ec33319b2a69ef5214418d | [
"Unlicense"
] | null | null | null | class Symbol:
def __init__(self, char = 'e'):
if len(char) > 1 or not isinstance(char, str):
raise Exception('Symbol is must large')
self.char = char
def getChar(self):
return self.char
def __eq__(self, other):
return self.char == other.getChar()
def __str__(self):
return self.char
| 22.4375 | 54 | 0.573816 | 46 | 359 | 4.217391 | 0.5 | 0.206186 | 0.216495 | 0.185567 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.004082 | 0.317549 | 359 | 15 | 55 | 23.933333 | 0.787755 | 0 | 0 | 0.181818 | 0 | 0 | 0.058659 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.363636 | false | 0 | 0 | 0.272727 | 0.727273 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
368469cf9448a70924fd61cbb2638b2e38d54f2d | 112 | py | Python | spiders/__init__.py | localhost-0001/wowhead_scraper | d2eff5bdb02161afefe1da452c89f7850ec199fc | [
"MIT"
] | null | null | null | spiders/__init__.py | localhost-0001/wowhead_scraper | d2eff5bdb02161afefe1da452c89f7850ec199fc | [
"MIT"
] | null | null | null | spiders/__init__.py | localhost-0001/wowhead_scraper | d2eff5bdb02161afefe1da452c89f7850ec199fc | [
"MIT"
] | null | null | null | from .npc_spider import NPCSpider
from .object_spider import ObjectSpider
from .quest_spider import QuestSpider
| 28 | 39 | 0.866071 | 15 | 112 | 6.266667 | 0.6 | 0.382979 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107143 | 112 | 3 | 40 | 37.333333 | 0.94 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
36b4fdea456bf64198f785fce4f36b825a0d6203 | 20,394 | py | Python | observatorio/dados/processing/xlsx.py | guerrasao/Observatorio-Socioeconomico-da-COVID-19 | 15457859092a41e539e57af6cc1bc875f3fbdf93 | [
"MIT"
] | null | null | null | observatorio/dados/processing/xlsx.py | guerrasao/Observatorio-Socioeconomico-da-COVID-19 | 15457859092a41e539e57af6cc1bc875f3fbdf93 | [
"MIT"
] | null | null | null | observatorio/dados/processing/xlsx.py | guerrasao/Observatorio-Socioeconomico-da-COVID-19 | 15457859092a41e539e57af6cc1bc875f3fbdf93 | [
"MIT"
] | null | null | null | import pandas as pd
from django.db import models
from django.utils.text import slugify
from ..models import Pais, Estado, Municipio, Variavel, VariavelPaisInteiro, VariavelPaisDecimal, VariavelEstadoInteiro, VariavelEstadoDecimal, VariavelMunicipioInteiro, VariavelMunicipioDecimal
def importar_xlsx(arquivo):
conteudo_arquivo = arquivo.read()
controle = pd.read_excel(conteudo_arquivo, sheet_name='Controle')
print(controle)
abrangencia = controle.at[0,'abrangencia']
if(abrangencia != 'MUNI'):
dados = pd.read_excel(conteudo_arquivo, sheet_name='Dados')
#replace NaN with None
dados = dados.astype(object).where(pd.notnull(dados),None)
print(dados)
if(abrangencia == 'PAIS'):
pais = Pais.objects.get(nome_normalizado=controle.at[0, 'nome_normalizado'])
variaveis_pais = Variavel.objects.all().filter(abrangencia='PAIS', ativa=True)
variaveis_pais_int = VariavelPaisInteiro.objects.all()
variaveis_pais_dec = VariavelPaisDecimal.objects.all()
#print(variaveis_pais)
variaveis_tab = dados.columns
linhas_tab = dados.index
for linha_atual in linhas_tab:
for var_atual in variaveis_tab:
#print(var_atual)
if((var_atual != 'Data') and (var_atual != 'Tempo/Variável')):
var_atual_obj = variaveis_pais.filter(nome=var_atual).values()
#print(var_atual_obj[0]['unidade'])
if(var_atual_obj[0]['unidade'] == 'SEMU' or var_atual_obj[0]['unidade'] == 'INTE'):
linha_existe = VariavelPaisInteiro.objects.filter(pais=pais, data=dados.at[linha_atual, 'Data'], variavel = var_atual_obj[0]['id']).order_by('-atualizado_em')
#print(linha_existe)
cont_linha_existe = linha_existe.count()
#print(cont_linha_existe)
#linha não existe, adicionar
if cont_linha_existe == 0:
l = VariavelPaisInteiro(
variavel = Variavel.objects.get(id=var_atual_obj[0]['id']),
pais = pais,
data = dados.at[linha_atual, 'Data'],
valor = dados.at[linha_atual, var_atual_obj[0]['nome']]
)
l.save()
#linha existe, atualizar
elif cont_linha_existe == 1:
#print(linha_existe)
#print(linha_existe.values())
l = VariavelPaisInteiro.objects.get(id=linha_existe.values()[0]['id'])
l.valor = dados.at[linha_atual, var_atual_obj[0]['nome']]
l.save()
else:
#a primeira linha a ser mantida, excluindo a inconsistencia
vet_linhas_existentes = linha_existe[1:]
for atual in vet_linhas_existentes:
l = VariavelPaisInteiro.objects.get(id=atual[0]['id'])
l.delete()
#atualiza o valor da linha
l = VariavelPaisInteiro.objects.get(id=linha_existe[0]['id'])
l.valor = dados.at[linha_atual, var_atual_obj[0]['nome']]
l.save()
#erro mais de uma linha encontrada com a combinação de chaves, remover e deixar a última apenas
elif(
var_atual_obj[0]['unidade'] == 'DECI' or
var_atual_obj[0]['unidade'] == 'REAL' or
var_atual_obj[0]['unidade'] == 'DOLA' or
var_atual_obj[0]['unidade'] == 'EURO' or
var_atual_obj[0]['unidade'] == 'PORC'
):
linha_existe = VariavelPaisDecimal.objects.filter(pais=pais, data=dados.at[linha_atual, 'Data'], variavel = var_atual_obj[0]['id']).order_by('-atualizado_em')
#print(linha_existe)
cont_linha_existe = linha_existe.count()
#print(cont_linha_existe)
#linha não existe, adicionar
if cont_linha_existe == 0:
l = VariavelPaisDecimal(
variavel = Variavel.objects.get(id=var_atual_obj[0]['id']),
pais = pais,
data = dados.at[linha_atual, 'Data'],
valor = dados.at[linha_atual, var_atual_obj[0]['nome']]
)
l.save()
#linha existe, atualizar
elif cont_linha_existe == 1:
#print(linha_existe)
#print(linha_existe.values())
l = VariavelPaisDecimal.objects.get(id=linha_existe.values()[0]['id'])
l.valor = dados.at[linha_atual, var_atual_obj[0]['nome']]
l.save()
else:
#a primeira linha a ser mantida, excluindo a inconsistencia
vet_linhas_existentes = linha_existe[1:]
for atual in vet_linhas_existentes:
l = VariavelPaisDecimal.objects.get(id=atual[0]['id'])
l.delete()
#atualiza o valor da linha
l = VariavelPaisDecimal.objects.get(id=linha_existe[0]['id'])
l.valor = dados.at[linha_atual, var_atual_obj[0]['nome']]
l.save()
#erro mais de uma linha encontrada com a combinação de chaves, remover e deixar a última apenas
else:
print('Tipo de Variavel não identificado:', var_atual_obj[0]['unidade'])
print(variaveis_tab)
elif(abrangencia == 'ESTA'):
estado = Estado.objects.get(nome_normalizado=controle.at[0, 'nome_normalizado'])
if(controle.at[0, 'nome_normalizado'] == 'rio-grande-do-sul'):
variaveis_estado = Variavel.objects.all().filter(abrangencia='ESTA', ativa=True)
else:
variaveis_estado = Variavel.objects.all().filter(abrangencia='ESTA', ativa=True, variavel_exclusiva_do_estado_RS=False)
variaveis_estado = Variavel.objects.all().filter(abrangencia='ESTA', ativa=True)
variaveis_estado_int = VariavelEstadoInteiro.objects.all()
variaveis_estado_dec = VariavelEstadoDecimal.objects.all()
print(variaveis_estado)
variaveis_tab = dados.columns
linhas_tab = dados.index
for linha_atual in linhas_tab:
for var_atual in variaveis_tab:
if((var_atual != 'Data') and (var_atual != 'Tempo/Variável')):
var_atual_obj = variaveis_estado.filter(nome=var_atual).values()
#print(var_atual_obj[0]['unidade'])
if(var_atual_obj[0]['unidade'] == 'SEMU' or var_atual_obj[0]['unidade'] == 'INTE'):
linha_existe = VariavelEstadoInteiro.objects.filter(estado=estado, data=dados.at[linha_atual, 'Data'], variavel = var_atual_obj[0]['id']).order_by('-atualizado_em')
print(linha_existe)
cont_linha_existe = linha_existe.count()
print(cont_linha_existe)
#linha não existe, adicionar
if cont_linha_existe == 0:
l = VariavelEstadoInteiro(
variavel = Variavel.objects.get(id=var_atual_obj[0]['id']),
estado = estado,
data = dados.at[linha_atual, 'Data'],
valor = dados.at[linha_atual, var_atual_obj[0]['nome']]
)
l.save()
#linha existe, atualizar
elif cont_linha_existe == 1:
print(linha_existe)
print(linha_existe.values())
l = VariavelEstadoInteiro.objects.get(id=linha_existe.values()[0]['id'])
l.valor = dados.at[linha_atual, var_atual_obj[0]['nome']]
l.save()
else:
#a primeira linha a ser mantida, excluindo a inconsistencia
vet_linhas_existentes = linha_existe[1:]
for atual in vet_linhas_existentes:
l = VariavelEstadoInteiro.objects.get(id=atual[0]['id'])
l.delete()
#atualiza o valor da linha
l = VariavelEstadoInteiro.objects.get(id=linha_existe[0]['id'])
l.valor = dados.at[linha_atual, var_atual_obj[0]['nome']]
l.save()
#erro mais de uma linha encontrada com a combinação de chaves, remover e deixar a última apenas
elif(
var_atual_obj[0]['unidade'] == 'DECI' or
var_atual_obj[0]['unidade'] == 'REAL' or
var_atual_obj[0]['unidade'] == 'DOLA' or
var_atual_obj[0]['unidade'] == 'EURO' or
var_atual_obj[0]['unidade'] == 'PORC'
):
linha_existe = VariavelEstadoDecimal.objects.filter(estado=estado, data=dados.at[linha_atual, 'Data'], variavel = var_atual_obj[0]['id']).order_by('-atualizado_em')
print(linha_existe)
cont_linha_existe = linha_existe.count()
print(cont_linha_existe)
#linha não existe, adicionar
if cont_linha_existe == 0:
l = VariavelEstadoDecimal(
variavel = Variavel.objects.get(id=var_atual_obj[0]['id']),
estado = estado,
data = dados.at[linha_atual, 'Data'],
valor = dados.at[linha_atual, var_atual_obj[0]['nome']]
)
l.save()
#linha existe, atualizar
elif cont_linha_existe == 1:
print(linha_existe)
print(linha_existe.values())
l = VariavelEstadoDecimal.objects.get(id=linha_existe.values()[0]['id'])
l.valor = dados.at[linha_atual, var_atual_obj[0]['nome']]
l.save()
else:
#a primeira linha a ser mantida, excluindo a inconsistencia
vet_linhas_existentes = linha_existe[1:]
for atual in vet_linhas_existentes:
l = VariavelEstadoDecimal.objects.get(id=atual[0]['id'])
l.delete()
#atualiza o valor da linha
l = VariavelEstadoDecimal.objects.get(id=linha_existe[0]['id'])
l.valor = dados.at[linha_atual, var_atual_obj[0]['nome']]
l.save()
#erro mais de uma linha encontrada com a combinação de chaves, remover e deixar a última apenas
else:
print('Tipo de Variavel não identificado:', var_atual_obj[0]['unidade'])
print()
elif(abrangencia == 'MUNI'):
municipios = pd.read_excel(conteudo_arquivo, sheet_name='Municipios')
municipios = municipios.astype(object).where(pd.notnull(municipios),None)
print(municipios)
#replace NaN with None
#dados = dados.astype(object).where(pd.notnull(dados),None)
#print(dados)
estado = Estado.objects.get(nome_normalizado=controle.at[0, 'nome_normalizado'])
variaveis_municipio = Variavel.objects.all().filter(abrangencia='MUNI', ativa=True)
variaveis_municipio_int = VariavelMunicipioInteiro.objects.all()
variaveis_municipio_dec = VariavelMunicipioDecimal.objects.all()
print(variaveis_municipio)
municipios_index = municipios.index
for municipio_atual in municipios_index:
dados = pd.read_excel(conteudo_arquivo, sheet_name=municipios.at[municipio_atual, 'Nome_Normalizado'])
dados = dados.astype(object).where(pd.notnull(dados),None)
print(dados)
municipio = Municipio.objects.get(codigo_ibge=municipios.at[municipio_atual, 'Codigo_IBGE'])
variaveis_tab = dados.columns
linhas_tab = dados.index
for linha_atual in linhas_tab:
for var_atual in variaveis_tab:
if((var_atual != 'Data') and (var_atual != 'Tempo/Variável')):
var_atual_obj = variaveis_municipio.filter(nome=var_atual).values()
#print(var_atual_obj[0]['unidade'])
if(var_atual_obj[0]['unidade'] == 'SEMU' or var_atual_obj[0]['unidade'] == 'INTE'):
linha_existe = VariavelMunicipioInteiro.objects.filter(municipio=municipio, data=dados.at[linha_atual, 'Data'], variavel = var_atual_obj[0]['id']).order_by('-atualizado_em')
print(linha_existe)
cont_linha_existe = linha_existe.count()
print(cont_linha_existe)
#linha não existe, adicionar
if cont_linha_existe == 0:
l = VariavelMunicipioInteiro(
variavel = Variavel.objects.get(id=var_atual_obj[0]['id']),
municipio = municipio,
data = dados.at[linha_atual, 'Data'],
valor = dados.at[linha_atual, var_atual_obj[0]['nome']]
)
l.save()
#linha existe, atualizar
elif cont_linha_existe == 1:
print(linha_existe)
print(linha_existe.values())
l = VariavelMunicipioInteiro.objects.get(id=linha_existe.values()[0]['id'])
l.valor = dados.at[linha_atual, var_atual_obj[0]['nome']]
l.save()
else:
#a primeira linha a ser mantida, excluindo a inconsistencia
vet_linhas_existentes = linha_existe[1:]
for atual in vet_linhas_existentes:
l = VariavelMunicipioInteiro.objects.get(id=atual[0]['id'])
l.delete()
#atualiza o valor da linha
l = VariavelMunicipioInteiro.objects.get(id=linha_existe[0]['id'])
l.valor = dados.at[linha_atual, var_atual_obj[0]['nome']]
l.save()
#erro mais de uma linha encontrada com a combinação de chaves, remover e deixar a última apenas
elif(
var_atual_obj[0]['unidade'] == 'DECI' or
var_atual_obj[0]['unidade'] == 'REAL' or
var_atual_obj[0]['unidade'] == 'DOLA' or
var_atual_obj[0]['unidade'] == 'EURO' or
var_atual_obj[0]['unidade'] == 'PORC'
):
linha_existe = VariavelMunicipioDecimal.objects.filter(municipio=municipio, data=dados.at[linha_atual, 'Data'], variavel = var_atual_obj[0]['id']).order_by('-atualizado_em')
print(linha_existe)
cont_linha_existe = linha_existe.count()
print(cont_linha_existe)
#linha não existe, adicionar
if cont_linha_existe == 0:
l = VariavelMunicipioDecimal(
variavel = Variavel.objects.get(id=var_atual_obj[0]['id']),
municipio = municipio,
data = dados.at[linha_atual, 'Data'],
valor = dados.at[linha_atual, var_atual_obj[0]['nome']]
)
l.save()
#linha existe, atualizar
elif cont_linha_existe == 1:
print(linha_existe)
print(linha_existe.values())
l = VariavelMunicipioDecimal.objects.get(id=linha_existe.values()[0]['id'])
l.valor = dados.at[linha_atual, var_atual_obj[0]['nome']]
l.save()
else:
#a primeira linha a ser mantida, excluindo a inconsistencia
vet_linhas_existentes = linha_existe[1:]
for atual in vet_linhas_existentes:
l = VariavelMunicipioDecimal.objects.get(id=atual[0]['id'])
l.delete()
#atualiza o valor da linha
l = VariavelMunicipioDecimal.objects.get(id=linha_existe[0]['id'])
l.valor = dados.at[linha_atual, var_atual_obj[0]['nome']]
l.save()
#erro mais de uma linha encontrada com a combinação de chaves, remover e deixar a última apenas
else:
print('Tipo de Variavel não identificado:', var_atual_obj[0]['unidade'])
print()
elif(abrangencia == 'CONT'):
print()
else:
return "Erro na leitura da abrangencia"
return True
def importar_municipios_rs_xlsx(arquivo):
conteudo_arquivo = arquivo.read()
dados = pd.read_excel(conteudo_arquivo, sheet_name='Municipios')
print(dados)
linhas_tab = dados.index
estado = Estado.objects.get(nome_normalizado='rio-grande-do-sul')
u = 0
a = 0
for linha_atual in linhas_tab:
municipio_existe = Municipio.objects.filter(estado=estado, nome_normalizado=slugify(dados.at[linha_atual, 'nome']))
# inserir sem coluna de numero de habitantes
if(municipio_existe.count() == 0):
l = Municipio(
estado = estado,
codigo_ibge = dados.at[linha_atual, 'id'],
nome = dados.at[linha_atual, 'nome'],
#numero_habitantes = dados.at[linha_atual, 'numero_habitantes'],
)
l.save()
a += 1
#atualizar municipio com o numero de habitantes
else:
municipio_atual = Municipio.objects.get(nome_normalizado=slugify(dados.at[linha_atual, 'nome']))
#print(municipio_atual)
municipio_atual.numero_habitantes = dados.at[linha_atual, 'numero_habitantes']
municipio_atual.total_de_repasse_programa_apoio_financeiro = dados.at[linha_atual, 'total_de_repasse_programa_apoio_financeiro']
municipio_atual.save()
u += 1
print('Linhas atualizadas:', u, ', Municipios Inseridos:', a) | 61.8 | 201 | 0.496911 | 1,965 | 20,394 | 4.949109 | 0.077354 | 0.088226 | 0.067866 | 0.070334 | 0.812339 | 0.786015 | 0.767712 | 0.729152 | 0.701491 | 0.691208 | 0 | 0.008534 | 0.408159 | 20,394 | 330 | 202 | 61.8 | 0.797183 | 0.09645 | 0 | 0.654412 | 0 | 0 | 0.055831 | 0.002285 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007353 | false | 0.003676 | 0.022059 | 0 | 0.036765 | 0.113971 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7fc6e04830a7d4b7ea4dddc1c5e19416d2f82086 | 46 | py | Python | dizoo/slime_volley/envs/__init__.py | sailxjx/DI-engine | c6763f8e2ba885a2a02f611195a1b5f8b50bff00 | [
"Apache-2.0"
] | 464 | 2021-07-08T07:26:33.000Z | 2022-03-31T12:35:16.000Z | dizoo/slime_volley/envs/__init__.py | sailxjx/DI-engine | c6763f8e2ba885a2a02f611195a1b5f8b50bff00 | [
"Apache-2.0"
] | 177 | 2021-07-09T08:22:55.000Z | 2022-03-31T07:35:22.000Z | dizoo/slime_volley/envs/__init__.py | sailxjx/DI-engine | c6763f8e2ba885a2a02f611195a1b5f8b50bff00 | [
"Apache-2.0"
] | 92 | 2021-07-08T12:16:37.000Z | 2022-03-31T09:24:41.000Z | from .slime_volley_env import SlimeVolleyEnv
| 23 | 45 | 0.869565 | 6 | 46 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108696 | 46 | 1 | 46 | 46 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7fcaeb312701b867c180c992995067bc5e1c7ce8 | 33 | py | Python | app/core/modbus/__init__.py | zmops/zeus-modbus | 3de989f2233e994876cf2a98ac46d9213e53ff3c | [
"Apache-2.0"
] | 3 | 2022-01-26T04:27:49.000Z | 2022-03-04T14:02:41.000Z | app/core/modbus/__init__.py | zmops/zeus-modbus | 3de989f2233e994876cf2a98ac46d9213e53ff3c | [
"Apache-2.0"
] | null | null | null | app/core/modbus/__init__.py | zmops/zeus-modbus | 3de989f2233e994876cf2a98ac46d9213e53ff3c | [
"Apache-2.0"
] | null | null | null | from .client import ModbusClient
| 16.5 | 32 | 0.848485 | 4 | 33 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7fd2771ceecb002565de76716366eda2c1d3ef44 | 146 | py | Python | tools/__init__.py | supercatex/GeneralAlphaZero | 2d85aedc0db5addd353e4e25851019a2c126ca81 | [
"MIT"
] | null | null | null | tools/__init__.py | supercatex/GeneralAlphaZero | 2d85aedc0db5addd353e4e25851019a2c126ca81 | [
"MIT"
] | null | null | null | tools/__init__.py | supercatex/GeneralAlphaZero | 2d85aedc0db5addd353e4e25851019a2c126ca81 | [
"MIT"
] | null | null | null | from tools.Environment import Environment
from tools.Observation import Observation
from tools.Agent import Agent
from tools.Action import Action
| 29.2 | 41 | 0.863014 | 20 | 146 | 6.3 | 0.35 | 0.285714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109589 | 146 | 4 | 42 | 36.5 | 0.969231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3d12c138f4d8c6e5cd1411755421143acfda0815 | 6,131 | py | Python | backend/test.py | Jspsun/HackTheNorth2017 | b0fb68825ca1d98ceda8297477237aa9746074b4 | [
"MIT"
] | null | null | null | backend/test.py | Jspsun/HackTheNorth2017 | b0fb68825ca1d98ceda8297477237aa9746074b4 | [
"MIT"
] | 3 | 2021-01-28T19:53:04.000Z | 2022-03-25T18:36:14.000Z | backend/test.py | Jspsun/HackTheNorth2017 | b0fb68825ca1d98ceda8297477237aa9746074b4 | [
"MIT"
] | null | null | null | t = {0: u"m 'applspva 2x\n\n \n\nImport java.ncl).5canner:\n\nclass apple:(\n, public static vuld malnlscrlnq arqsllH\nScanner Ducky new Summarlsyscem.1nH", 130: u'1 Import java.ncl).5canner:\n\n3 class apple:(\n\n4* pm)\u201d static vuld malancrlnq axqsllH\n4, 5 Scanner Ducky new Scannellsyscem.1nH\ns\nw Int gnu, buys, DEQPJE:\n2 gun a;\n3 hay:\n\n \n\n3a peuple g\u201c): 7 Days:\na; System.oub-prlnclnlpeuple}=\n\n13) I', 261: u'[:1 thlems fa: :mdazfa Dezlalallan (E Causal: :3 \u2018m x Si" 3. RE\n\n \n\n \n\n \n\n \n\n(terminated) appls [lava Applmallan] G\\nglam nlsvavayumhmyamm {May}, zxmnamu AM)\n2', 10: u'1 Import lava.n:1).5cannex7\n3 2135: apple:(\n4* public static void malnYScrlnq arqsllH\n\nScanner Ducky new Scannerlsy:cem.1n}=\n\nI', 141: u'1 Import java.ncl).5canner:\n\n3 class apple:(\n\n4,\n\npublic static vuld malnlscrlnq arqsllH\nScanner Ducky new ScannEIKSYSCEm.1nH\n\nInt gnu, Days, DEQPJE:\n\n \n\ngm, = a:\nDay,\nMm gun]: W5;\n\n \n\nSystem. out-prlncln Apeupm:', 271: u'm appxmm 2x\n, Import java.nc)).5canner:\n\nclass apple:(\n4, public static vuld malnlscrlnq arqsllH\n\n5 Scanner Ducky new Summarksyscem.1nH\n5\n\n \n\nInt gnu, buys, DEQPJE:\ngnu = 33;\n\nbuy:\nPeuple qlr): 2. bays:\nSystem.oat-prlnclnlpeuple}F', 20: u'1 mm java...n1.5cmeu\n3 mm apple\u201c I\n\n47 mm mm mm: mm\ufb01mng mum\n\n5 5mm mm m. Semensymmm:', 151: u'1 Import java.ncl).5canner:\n\n3 class apple:(\n\n4,\n\npublic static vuld malnlscrlnq arqsllH\nScanner Ducky = new Summarlsyscem.1nH\n\nInt gnu, buys, DEQPJE:\nerls a;\n\nmy: :\n\nPenple q\u201c): ~ boys:\nSystem.out-prlnclnlpeuple}F', 281: u'Import java.ncl).5canner:\n\nclass apple:(\npublic static vuld malnlscrlnq arqsllH\nScanner Ducky new Summarlsyscem.1nH\n\nInt gnu, Days, DEQPJE:\n\n \n \n\ngnu = 7:\nbuy:\npeuple gnu 2. bays:\n\nSystem. out-prlncln Apeupm:', 30: u'1 mm java...n1.5cmeu\n3 mm apple\u201c I\n\n47 mm mm mm: mm\ufb01mng mum\n\n5 5mm mm m. Semensymmm:\n\nInt', 161: u'a\n4,\n\n \n\nImport java.ncl).5canner:\n\nclass apple:(\npublic stat\u201c: void malnkscxlnq axqsllH\nScanner Ducky = new Summarlsyscem.1nH\n\nInt gnu, buys, DEQPJE:\nq\u201c): n;\n\nmy: : ;\n\nPeuple gn : / buys:\nSystem.out-prlnclnlpeuple}F', 292: u"E 'awlmava\n; Import\n\n3 class\n4* p\n\n5 sum All 0mm All\n\nNways 53v: rsaurzs bdor: \\annzhmg\n\n>\xae", 40: u'1 mm java...n1.5cmeu\n3 mm apple\u201c I\n\n47 mm mm mm: mm\ufb01mng mum\n\n5 5mm mm m. Semensymmm:\n\nInt', 171: u'13>\n\n\xab >\n\n(a pmhmfa maazfa Wwwanfa cm\xbb: :3 \u2019 x W E}. ENE\n\n \n\n \n\n \n\n(laminated) appls [lava Appllrallan] G\\nglam rusuavayumhmuamm (May 3, 2009 11:25:57 AM)\n3', 302: u'pg pmhmm ma\ufb01a Mmg cm: :3 at am\n\n9W\n\nENE]\n\n \n\n(terminated) appls [lava Appllmlmn] G\\nglam nlsuavayumhmuamm (May 3, 2009 mum AM)\n\n \n\na', 50: u'1 mm ma...m.5cmeu\n3 mm apple\u201c I\n\n47 mm mm mm: mm\ufb01mng mum\n\n5 5mm mm m. Semensmmm=\n\nInt', 181: u'Import java.ncl).5canner:\n\nclass apple:(\npublic static void malnlscrlnq arqsllH\nScanner Ducky m- Summarlsyscem.1nH\n\nmt mu, Days, mm;\n91:): = n;\n\nm: i a;\n\npangs gm: / ms;\n5mm.oummunmeupm=', 312: u'1 Import java.ncl).5canner:\n\n3 2135: apple:(\n\n \n\n4* public static void malnlscrlnq arqsllH\n5 Scanner Ducky new Scannerlsyscem.1nH\ns\nw Int gnu, buys, DEQPJE:\n\n" erls 7;\n3 hay: 3,\n\nJo pearl): q\u201c): 1. bays:\n\n:1 System.out-prlnclnlpeuple}F\n\n:2 >', 60: u'1 mm java...n1.5cmeu\n3 mm apple\u201c I\n\n47 mm mm mm: mm\ufb01mng mum\n\n5 5mm mm m. Semensymmm:\n\nInt', 191: u'Import java.ncl).5cannex:\n\nmu mm\nmm: mm m mmmm mm\nmm mm m. Smeumm;\nI\nmm, mm, mm\ngm, 1;;\nm,\n\n \n\npeuple gnu / Days:\nSystem.oub-prlncln(PeuplmF', 322: u'1 Import java.ncl).5canner:\n\n3 2135: apple:(\n\n \n\n47 mm mm vm mummy mum\n.1 5 5mm mm m. Semeuwmmnn\n5\n~ mt mm, mm, mm;\n2 gm, 7 I\n3 boy: 3 |\n0 mm gm, 2 ms;\n\nSystem. out-prlncln Apeupm:', 70: u'1 mm mmmjcmm\n\n3 mm apple\u201c I\n\n47 mm mm mm mmkmg mum\n5mm mm m. Semexysymmm:\n\nInt gnu, mvs, DEQDJE:\ngn', 201: u'4 \xbb\n\n \n\n \n\n\u201c\u20181 thlems fez maazfa Dezhlallan', 80: u'1 mm java.nmjcmm\n\n3 mu apple\u201c I\n\n47 mm mm md mWSmng mum\n5mm Ducky m. Semeusmmm:\n\nInt gnu, Days, DEQPJE:\ngun a:', 211: u'1 Import java.ncl).5canner:\n3 2135: apple:(\n\n4* public static void malnkscxlnq arqsllH\n\nma; 5 Scanner Ducky new Scannerksy:cem.1n}=\n\ns\n7 .ntjgms, boys, mm,-\n2 gm, :3;\n\n3 hay:\n0 9mm gm, / Days;\na\n\n3\n\n \n\nSystem. out-prlncln Apeupm:', 90: u'1 Import java.ncl).5cannex:\n\n3 class apple:(\n\n4,\n\npublic static void malancrlnq arqsllH\nScanner Ducky new ScannEIYSyscen.an\n\nInt gnu, Days, DEQDJE:\ngun a:\nbuy: 7 3,\n\nPeople m', 221: u'1 Import java.ncl).5cannex:\n3 2135: apple:(\n\n4* public static void malnkscxlnq arqsllH\n\nd 5 Scanner Ducky new Scannerksy:cem.1n}=\n\n \n\ns\n7 m4 mu, mg, mp) .\n2 gm, p;\n\n3 hay:\n0 9mm gm, / Days;\na\n\n3\n\n \n\nSystem. out-prlncln Apeupm:', 100: u'; Import java.ncl).5cannex:\n\n3 class apple:(\n\n4,\n\npublic smug void malnlscrlnq arqsllH\nScanner Ducky new Scannerksy:cem.1n}=\n\nmt mm, mm, pmpk:\nan): a;\n\nbuy: 7 3, I\n9mm gm, + ms;\nSW4', 231: u'1 Import java.ncl).5cannex:\n\n \n\n3 m\u201d 39mg\n47 mm mm md mummy mum\n5 5mm mm m. Semeuwmmnn\n\nmt mm, mm, mm;\nan): n;\n\nm: ,\n\nmm gm, 4W5;\nmm.oumnmnmupm:', 110: u'Import java.ncl).5canner:\n\nclass apple:(\npublic smug void malnlscrlnq arqsllH\nScanner Ducky neI Summarlsyscem.1nH\n\nInt gnu, buys, DEQPJE:\nerls a;\n\nmy: 7 3,\n\nPeuple qlr): + buys:\nsum...ou:.punnnrpeupm:', 241: u'1 Import java.ncl).5canner:\n\n \n\n3 class apple:(\n\n \n\n4* public static vmd malnlscrlnq axqsllH\n\n5 Scanner Ducky new Summarlsyscem.1nH\nInt gnu, Days, DEQPJE:\ngnu n;\nmy: ,\n\npeople g\u201c): 1. bays:\nSystem.oub-prlncln(PeuplmF', 120: u'4\n\n \n\n \n\nIE pmhmfa mmf mmmmm @ x W\n<.=.m.m=d> appls [mmnmn] ewmgmm nusuavaVHMhmwawm {May}, mnmm AM)\n\n9W\n\nEEI\n\n \n\n \n\n9\u20181', 251: u'[:1 thlems fa: :mdazfa Dezlalallan (E Causal: :3 \u2018m x Si" 3. RE\n\n \n\n \n\n \n\n \n\n(terminated) appls [lava Applmallan] G\\nglam FIIsUavaVl\ufb01Wme/amme(May3,21mll:28:16AM)\n2'}
for x in t:
print(x, t[x]) | 1,532.75 | 6,099 | 0.699886 | 1,241 | 6,131 | 3.457695 | 0.22643 | 0.039618 | 0.036355 | 0.026101 | 0.538103 | 0.487532 | 0.466092 | 0.40014 | 0.383361 | 0.350035 | 0 | 0.080311 | 0.118578 | 6,131 | 4 | 6,100 | 1,532.75 | 0.713731 | 0 | 0 | 0 | 0 | 11 | 0.941944 | 0.202055 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
3d14e0890f212e57cbba925c4bf7a2cdfda88837 | 19,538 | py | Python | cottonformation/res/panorama.py | MacHu-GWU/cottonformation-project | 23e28c08cfb5a7cc0db6dbfdb1d7e1585c773f3b | [
"BSD-2-Clause"
] | 5 | 2021-07-22T03:45:59.000Z | 2021-12-17T21:07:14.000Z | cottonformation/res/panorama.py | MacHu-GWU/cottonformation-project | 23e28c08cfb5a7cc0db6dbfdb1d7e1585c773f3b | [
"BSD-2-Clause"
] | 1 | 2021-06-25T18:01:31.000Z | 2021-06-25T18:01:31.000Z | cottonformation/res/panorama.py | MacHu-GWU/cottonformation-project | 23e28c08cfb5a7cc0db6dbfdb1d7e1585c773f3b | [
"BSD-2-Clause"
] | 2 | 2021-06-27T03:08:21.000Z | 2021-06-28T22:15:51.000Z | # -*- coding: utf-8 -*-
"""
This module
"""
import attr
import typing
from ..core.model import (
Property, Resource, Tag, GetAtt, TypeHint, TypeCheck,
)
from ..core.constant import AttrMeta
#--- Property declaration ---
@attr.s
class PropApplicationInstanceManifestPayload(Property):
"""
AWS Object Type = "AWS::Panorama::ApplicationInstance.ManifestPayload"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-panorama-applicationinstance-manifestpayload.html
Property Document:
- ``p_PayloadData``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-panorama-applicationinstance-manifestpayload.html#cfn-panorama-applicationinstance-manifestpayload-payloaddata
"""
AWS_OBJECT_TYPE = "AWS::Panorama::ApplicationInstance.ManifestPayload"
p_PayloadData: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "PayloadData"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-panorama-applicationinstance-manifestpayload.html#cfn-panorama-applicationinstance-manifestpayload-payloaddata"""
@attr.s
class PropApplicationInstanceManifestOverridesPayload(Property):
"""
AWS Object Type = "AWS::Panorama::ApplicationInstance.ManifestOverridesPayload"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-panorama-applicationinstance-manifestoverridespayload.html
Property Document:
- ``p_PayloadData``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-panorama-applicationinstance-manifestoverridespayload.html#cfn-panorama-applicationinstance-manifestoverridespayload-payloaddata
"""
AWS_OBJECT_TYPE = "AWS::Panorama::ApplicationInstance.ManifestOverridesPayload"
p_PayloadData: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "PayloadData"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-panorama-applicationinstance-manifestoverridespayload.html#cfn-panorama-applicationinstance-manifestoverridespayload-payloaddata"""
#--- Resource declaration ---
@attr.s
class Package(Resource):
"""
AWS Object Type = "AWS::Panorama::Package"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-package.html
Property Document:
- ``rp_PackageName``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-package.html#cfn-panorama-package-packagename
- ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-package.html#cfn-panorama-package-tags
"""
AWS_OBJECT_TYPE = "AWS::Panorama::Package"
rp_PackageName: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "PackageName"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-package.html#cfn-panorama-package-packagename"""
p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
default=None,
converter=Tag.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Tags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-package.html#cfn-panorama-package-tags"""
@property
def rv_PackageId(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-package.html#aws-resource-panorama-package-return-values"""
return GetAtt(resource=self, attr_name="PackageId")
@property
def rv_Arn(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-package.html#aws-resource-panorama-package-return-values"""
return GetAtt(resource=self, attr_name="Arn")
@property
def rv_CreatedTime(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-package.html#aws-resource-panorama-package-return-values"""
return GetAtt(resource=self, attr_name="CreatedTime")
@attr.s
class PackageVersion(Resource):
"""
AWS Object Type = "AWS::Panorama::PackageVersion"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html
Property Document:
- ``rp_PackageId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html#cfn-panorama-packageversion-packageid
- ``rp_PackageVersion``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html#cfn-panorama-packageversion-packageversion
- ``rp_PatchVersion``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html#cfn-panorama-packageversion-patchversion
- ``p_MarkLatest``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html#cfn-panorama-packageversion-marklatest
- ``p_OwnerAccount``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html#cfn-panorama-packageversion-owneraccount
- ``p_UpdatedLatestPatchVersion``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html#cfn-panorama-packageversion-updatedlatestpatchversion
"""
AWS_OBJECT_TYPE = "AWS::Panorama::PackageVersion"
rp_PackageId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "PackageId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html#cfn-panorama-packageversion-packageid"""
rp_PackageVersion: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "PackageVersion"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html#cfn-panorama-packageversion-packageversion"""
rp_PatchVersion: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "PatchVersion"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html#cfn-panorama-packageversion-patchversion"""
p_MarkLatest: bool = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(bool)),
metadata={AttrMeta.PROPERTY_NAME: "MarkLatest"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html#cfn-panorama-packageversion-marklatest"""
p_OwnerAccount: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "OwnerAccount"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html#cfn-panorama-packageversion-owneraccount"""
p_UpdatedLatestPatchVersion: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "UpdatedLatestPatchVersion"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html#cfn-panorama-packageversion-updatedlatestpatchversion"""
@property
def rv_PackageArn(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html#aws-resource-panorama-packageversion-return-values"""
return GetAtt(resource=self, attr_name="PackageArn")
@property
def rv_IsLatestPatch(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html#aws-resource-panorama-packageversion-return-values"""
return GetAtt(resource=self, attr_name="IsLatestPatch")
@property
def rv_PackageName(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html#aws-resource-panorama-packageversion-return-values"""
return GetAtt(resource=self, attr_name="PackageName")
@property
def rv_Status(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html#aws-resource-panorama-packageversion-return-values"""
return GetAtt(resource=self, attr_name="Status")
@property
def rv_StatusDescription(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html#aws-resource-panorama-packageversion-return-values"""
return GetAtt(resource=self, attr_name="StatusDescription")
@property
def rv_RegisteredTime(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-packageversion.html#aws-resource-panorama-packageversion-return-values"""
return GetAtt(resource=self, attr_name="RegisteredTime")
@attr.s
class ApplicationInstance(Resource):
"""
AWS Object Type = "AWS::Panorama::ApplicationInstance"
Resource Document: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html
Property Document:
- ``rp_DefaultRuntimeContextDevice``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-defaultruntimecontextdevice
- ``rp_ManifestPayload``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-manifestpayload
- ``p_ApplicationInstanceIdToReplace``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-applicationinstanceidtoreplace
- ``p_Description``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-description
- ``p_DeviceId``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-deviceid
- ``p_ManifestOverridesPayload``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-manifestoverridespayload
- ``p_Name``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-name
- ``p_RuntimeRoleArn``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-runtimerolearn
- ``p_StatusFilter``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-statusfilter
- ``p_Tags``: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-tags
"""
AWS_OBJECT_TYPE = "AWS::Panorama::ApplicationInstance"
rp_DefaultRuntimeContextDevice: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.instance_of(TypeCheck.intrinsic_str_type),
metadata={AttrMeta.PROPERTY_NAME: "DefaultRuntimeContextDevice"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-defaultruntimecontextdevice"""
rp_ManifestPayload: typing.Union['PropApplicationInstanceManifestPayload', dict] = attr.ib(
default=None,
converter=PropApplicationInstanceManifestPayload.from_dict,
validator=attr.validators.instance_of(PropApplicationInstanceManifestPayload),
metadata={AttrMeta.PROPERTY_NAME: "ManifestPayload"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-manifestpayload"""
p_ApplicationInstanceIdToReplace: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "ApplicationInstanceIdToReplace"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-applicationinstanceidtoreplace"""
p_Description: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Description"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-description"""
p_DeviceId: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "DeviceId"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-deviceid"""
p_ManifestOverridesPayload: typing.Union['PropApplicationInstanceManifestOverridesPayload', dict] = attr.ib(
default=None,
converter=PropApplicationInstanceManifestOverridesPayload.from_dict,
validator=attr.validators.optional(attr.validators.instance_of(PropApplicationInstanceManifestOverridesPayload)),
metadata={AttrMeta.PROPERTY_NAME: "ManifestOverridesPayload"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-manifestoverridespayload"""
p_Name: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "Name"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-name"""
p_RuntimeRoleArn: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "RuntimeRoleArn"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-runtimerolearn"""
p_StatusFilter: TypeHint.intrinsic_str = attr.ib(
default=None,
validator=attr.validators.optional(attr.validators.instance_of(TypeCheck.intrinsic_str_type)),
metadata={AttrMeta.PROPERTY_NAME: "StatusFilter"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-statusfilter"""
p_Tags: typing.List[typing.Union[Tag, dict]] = attr.ib(
default=None,
converter=Tag.from_list,
validator=attr.validators.optional(attr.validators.deep_iterable(member_validator=attr.validators.instance_of(Tag), iterable_validator=attr.validators.instance_of(list))),
metadata={AttrMeta.PROPERTY_NAME: "Tags"},
)
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#cfn-panorama-applicationinstance-tags"""
@property
def rv_DefaultRuntimeContextDeviceName(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#aws-resource-panorama-applicationinstance-return-values"""
return GetAtt(resource=self, attr_name="DefaultRuntimeContextDeviceName")
@property
def rv_ApplicationInstanceId(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#aws-resource-panorama-applicationinstance-return-values"""
return GetAtt(resource=self, attr_name="ApplicationInstanceId")
@property
def rv_Status(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#aws-resource-panorama-applicationinstance-return-values"""
return GetAtt(resource=self, attr_name="Status")
@property
def rv_HealthStatus(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#aws-resource-panorama-applicationinstance-return-values"""
return GetAtt(resource=self, attr_name="HealthStatus")
@property
def rv_StatusDescription(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#aws-resource-panorama-applicationinstance-return-values"""
return GetAtt(resource=self, attr_name="StatusDescription")
@property
def rv_CreatedTime(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#aws-resource-panorama-applicationinstance-return-values"""
return GetAtt(resource=self, attr_name="CreatedTime")
@property
def rv_LastUpdatedTime(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#aws-resource-panorama-applicationinstance-return-values"""
return GetAtt(resource=self, attr_name="LastUpdatedTime")
@property
def rv_Arn(self) -> GetAtt:
"""Doc: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-panorama-applicationinstance.html#aws-resource-panorama-applicationinstance-return-values"""
return GetAtt(resource=self, attr_name="Arn")
| 60.677019 | 230 | 0.763794 | 2,048 | 19,538 | 7.198242 | 0.047363 | 0.1337 | 0.094085 | 0.071496 | 0.900285 | 0.896961 | 0.876068 | 0.858025 | 0.854294 | 0.852395 | 0 | 0.000057 | 0.108865 | 19,538 | 321 | 231 | 60.866044 | 0.846649 | 0.384123 | 0 | 0.482955 | 0 | 0 | 0.089937 | 0.051919 | 0 | 0 | 0 | 0 | 0 | 1 | 0.096591 | false | 0 | 0.022727 | 0 | 0.386364 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3d1aa36f82cbd9dbbcdd1cfa3bc4c3c07d875ee3 | 15,465 | py | Python | app/python/testing/sanity/deprecated_test_sparse.py | RadjaHachilif/agotool | 2fcc3fd5a156053b528ec927bab79ddaf7af2dec | [
"MIT"
] | 6 | 2016-04-14T11:47:43.000Z | 2022-01-29T14:34:59.000Z | app/python/testing/sanity/deprecated_test_sparse.py | RadjaHachilif/agotool | 2fcc3fd5a156053b528ec927bab79ddaf7af2dec | [
"MIT"
] | 2 | 2019-12-21T12:15:46.000Z | 2021-01-08T12:22:17.000Z | app/python/testing/sanity/deprecated_test_sparse.py | RadjaHachilif/agotool | 2fcc3fd5a156053b528ec927bab79ddaf7af2dec | [
"MIT"
] | 1 | 2021-03-04T10:26:18.000Z | 2021-03-04T10:26:18.000Z | import sys, os
from scipy import sparse
import pickle
import random
import numpy as np
import pandas as pd
# from itertools import islice
# import pytest
# sys.path.insert(0, os.path.dirname(os.path.abspath(os.path.realpath(__file__))))
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(os.path.realpath(__file__)), "../.."))) # to get from API to python directory
import conftest
import variables, query
### load data
ENSP_2_tuple_funcEnum_score_dict = conftest.ENSP_2_tuple_funcEnum_score_dict
ENSP_2_tuple_funcEnum_score_dict_keys_list = list(ENSP_2_tuple_funcEnum_score_dict.keys())
CSC_ENSPencoding_2_FuncEnum_UPS_FIN = variables.tables_dict["CSC_ENSPencoding_2_FuncEnum"]
CSC_ENSPencoding_2_FuncEnum = sparse.load_npz(CSC_ENSPencoding_2_FuncEnum_UPS_FIN)
with open(variables.tables_dict["ENSP_2_rowIndex_dict"], "rb") as fh_ENSP_2_rowIndex_dict:
ENSP_2_rowIndex_dict = pickle.load(fh_ENSP_2_rowIndex_dict)
with open(variables.tables_dict["rowIndex_2_ENSP_dict"], "rb") as fh_rowIndex_2_ENSP_dict:
rowIndex_2_ENSP_dict = pickle.load(fh_rowIndex_2_ENSP_dict)
### funtions, helpers
def slice_ScoresMatrix_for_given_ENSP(protein_AN_set, ENSP_2_rowIndex_dict, matrix):
"""
produces 2D array
number of rows corresponds to number of proteins if in ENSP_2_Score_dict
number of columns corresponds to number of funcEnum of KS_etype; encoded as funcEnumIndex range(0, max(cond_KS_etypes)+1)
"""
list_of_rowIndices = []
for ENSP in protein_AN_set:
try:
rowIndex = ENSP_2_rowIndex_dict[ENSP]
except KeyError:
continue
list_of_rowIndices.append(rowIndex)
return matrix[list_of_rowIndices], list_of_rowIndices
def get_funcEnum_and_score_from_sparse_matrix(UniProtID_2_get, ENSP_2_rowIndex_dict=ENSP_2_rowIndex_dict, CSC_ENSPencoding_2_FuncEnum=CSC_ENSPencoding_2_FuncEnum):
# grab data for specific UniProtID
fg_scores_matrix, list_of_rowIndices_fg = slice_ScoresMatrix_for_given_ENSP([UniProtID_2_get], ENSP_2_rowIndex_dict, CSC_ENSPencoding_2_FuncEnum)
funcEnum_list, score_list = [], []
m = fg_scores_matrix
for i in range(len(m.indptr[:-1])): # get column values
index_row_start = m.indptr[i]
index_row_stop = m.indptr[i + 1]
if index_row_start == index_row_stop:
continue
# funcEnum_2_score_from_matrix.append([i, m.data[index_row_start:index_row_stop][0]])
funcEnum_list.append(i)
score_list.append(m.data[index_row_start:index_row_stop][0])
return funcEnum_list, score_list
### tests
# source: Protein_2_FunctionEnum_and_Score_table_UPS_FIN
# --> SparseMatrix_ENSPencoding_2_FuncEnum_UPS_FIN
# --> Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN and Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN.pickle
# Protein_2_FunctionEnum_and_Score_table_UPS_FIN vs SparseMatrix_ENSPencoding_2_FuncEnum_UPS_FIN --> test exists
# SparseMatrix_ENSPencoding_2_FuncEnum_UPS_FIN vs Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN --> test exists
# SparseMatrix_ENSPencoding_2_FuncEnum_UPS_FIN vs Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN_pickle --> test exists
# Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN vs Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN_pickle --> test exists
# compare sparse matrix loaded from binary dump to dictionary-array loaded from flat file
def test_Protein_2_FunctionEnum_and_Score_sparse_vs_flatfile_ex_1():
"""
Protein_2_FunctionEnum_and_Score_table_UPS_FIN vs SparseMatrix_ENSPencoding_2_FuncEnum_UPS_FIN
compare sparse matrix loaded from binary dump to dictionary-array loaded from flat file
specific example (first line in Lars downloads 'Protein_2_Function_and_Score_DOID_BTO_GOCC_STS.txt.gz'
"""
UniProtID_2_test = "NAC1_ARATH"
funcEnum_list_from_sparse, score_list_from_sparse = get_funcEnum_and_score_from_sparse_matrix(UniProtID_2_test, ENSP_2_rowIndex_dict, CSC_ENSPencoding_2_FuncEnum)
funcEnum_arr, score_arr = ENSP_2_tuple_funcEnum_score_dict[UniProtID_2_test]
assert list(funcEnum_arr) == funcEnum_list_from_sparse
assert list(score_arr) == score_list_from_sparse
def test_Protein_2_FunctionEnum_and_Score_sparse_vs_flatfile_ex_2():
"""
random UniProtID
Protein_2_FunctionEnum_and_Score_table_UPS_FIN vs SparseMatrix_ENSPencoding_2_FuncEnum_UPS_FIN
"""
UniProtID_2_test = conftest.get_random_human_ENSP(num_ENSPs=1, UniProt_ID=True)[0]
funcEnum_list_from_sparse, score_list_from_sparse = get_funcEnum_and_score_from_sparse_matrix(UniProtID_2_test, ENSP_2_rowIndex_dict, CSC_ENSPencoding_2_FuncEnum)
try:
funcEnum_arr, score_arr = ENSP_2_tuple_funcEnum_score_dict[UniProtID_2_test]
except KeyError:
funcEnum_arr, score_arr = [], []
assert list(funcEnum_arr) == funcEnum_list_from_sparse
assert list(score_arr) == score_list_from_sparse
def test_Protein_2_FunctionEnum_and_Score_sparse_vs_flatfile_ex_3():
"""
20 random UniProtID that have TM scores
Protein_2_FunctionEnum_and_Score_table_UPS_FIN vs SparseMatrix_ENSPencoding_2_FuncEnum_UPS_FIN
"""
for i in range(20):
UniProtID_2_test = random.choice(ENSP_2_tuple_funcEnum_score_dict_keys_list)
funcEnum_list_from_sparse, score_list_from_sparse = get_funcEnum_and_score_from_sparse_matrix(UniProtID_2_test, ENSP_2_rowIndex_dict, CSC_ENSPencoding_2_FuncEnum)
funcEnum_arr, score_arr = ENSP_2_tuple_funcEnum_score_dict[UniProtID_2_test]
assert list(funcEnum_arr) == funcEnum_list_from_sparse
assert list(score_arr) == score_list_from_sparse
def test_Protein_2_FunctionEnum_and_Score_sparse_vs_flatfile_ex_4_multiple_entries():
"""
Protein_2_FunctionEnum_and_Score_table_UPS_FIN vs SparseMatrix_ENSPencoding_2_FuncEnum_UPS_FIN
"""
UniProtID_2_test_list = random.sample(ENSP_2_tuple_funcEnum_score_dict_keys_list, 100)
fg_scores_matrix, list_of_rowIndices_fg = slice_ScoresMatrix_for_given_ENSP(UniProtID_2_test_list, ENSP_2_rowIndex_dict, CSC_ENSPencoding_2_FuncEnum)
funcEnum_2_scores_dict = {}
for UniProtID in UniProtID_2_test_list:
funcEnum_arr, score_arr = ENSP_2_tuple_funcEnum_score_dict[UniProtID]
for funcEnum, score in zip(funcEnum_arr, score_arr):
if funcEnum not in funcEnum_2_scores_dict:
funcEnum_2_scores_dict[funcEnum] = [score]
else:
funcEnum_2_scores_dict[funcEnum].append(score)
m = fg_scores_matrix
for i in range(len(m.indptr[:-1])): # get column values
index_row_start = m.indptr[i]
index_row_stop = m.indptr[i + 1]
if index_row_start == index_row_stop:
continue
funcEnum = i
scores_list_sparse = sorted(m.data[index_row_start:index_row_stop])
scores_list_ff = sorted(funcEnum_2_scores_dict[funcEnum])
assert len(scores_list_sparse) == len(scores_list_ff)
assert scores_list_sparse == scores_list_ff
def test_Protein_2_FunctionEnum_and_Score_sparse_vs_flatfile_Yeast_acetylation():
"""
Protein_2_FunctionEnum_and_Score_table_UPS_FIN vs SparseMatrix_ENSPencoding_2_FuncEnum_UPS_FIN
"""
fn_userinput = os.path.join(variables.EXAMPLE_FOLDER, "Example_Yeast_acetylation_abundance_correction.txt")
df = pd.read_csv(fn_userinput, sep='\t')
fg = df.loc[df["Foreground"].notnull(), "Foreground"].tolist() # redundant UniProt accessions
UniProtID_2_test_list = list(query.map_secondary_2_primary_ANs(fg).values()) # no more redundancy
assert len(UniProtID_2_test_list) == 1159
fg_scores_matrix, list_of_rowIndices_fg = slice_ScoresMatrix_for_given_ENSP(UniProtID_2_test_list, ENSP_2_rowIndex_dict, CSC_ENSPencoding_2_FuncEnum)
funcEnum_2_scores_dict = {}
for UniProtID in UniProtID_2_test_list:
try:
funcEnum_arr, score_arr = ENSP_2_tuple_funcEnum_score_dict[UniProtID]
except KeyError:
continue
for funcEnum, score in zip(funcEnum_arr, score_arr):
if funcEnum not in funcEnum_2_scores_dict:
funcEnum_2_scores_dict[funcEnum] = [score]
else:
funcEnum_2_scores_dict[funcEnum].append(score)
m = fg_scores_matrix
for i in range(len(m.indptr[:-1])): # get column values
index_row_start = m.indptr[i]
index_row_stop = m.indptr[i + 1]
if index_row_start == index_row_stop:
continue
funcEnum = i
scores_list_sparse = sorted(m.data[index_row_start:index_row_stop])
scores_list_ff = sorted(funcEnum_2_scores_dict[funcEnum])
assert len(scores_list_sparse) == len(scores_list_ff)
assert scores_list_sparse == scores_list_ff
# def test_Protein_2_FunctionEnum_and_Score_Yeast_acetylation_KS_Cython_vs_Scipy_counting():
# funcName = "GOCC:0005634" # "Nucleus"
# fn_userinput = os.path.join(variables.EXAMPLE_FOLDER, "Example_Yeast_acetylation_abundance_correction.txt")
# df = pd.read_csv(fn_userinput, sep='\t')
# fg = df.loc[df["Foreground"].notnull(), "Foreground"].tolist() # redundant UniProt accessions
# UniProtID_2_test_list = list(query.map_secondary_2_primary_ANs(fg).values()) # no more redundancy
### very slow therefore commented out
# def test_Protein_2_FunctionEnum_and_Score_sparse_vs_flatfile_ex_debug_all_human_prots():
# """
# very lenghty test
# test all human proteins, since test_Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN_pickle_vs_sparse fails
# w
# Protein_2_FunctionEnum_and_Score_table_UPS_FIN vs SparseMatrix_ENSPencoding_2_FuncEnum_UPS_FIN
# """
# for UniProtID_2_test in ENSP_2_tuple_funcEnum_score_dict_keys_list[::-1]:
# funcEnum_list_from_sparse, score_list_from_sparse = get_funcEnum_and_score_from_sparse_matrix(UniProtID_2_test, ENSP_2_rowIndex_dict, CSC_ENSPencoding_2_FuncEnum)
# funcEnum_arr, score_arr = ENSP_2_tuple_funcEnum_score_dict[UniProtID_2_test]
# assert list(funcEnum_arr) == funcEnum_list_from_sparse
# assert list(score_arr) == score_list_from_sparse
def test_Taxid_2_FunctionEnum_2_Scores_flatfile_vs_pickle():
"""
passed
hopefully solved, Snakemake rule was commented out and therefore Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN_pickle not updated
Taxid_2_FunctionEnum_2_Scores_dict vs Taxid_2_FunctionEnum_2_Scores_table
Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN vs Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN_pickle
Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN = os.path.join(variables.TABLES_DIR, "Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN.txt")
testing r_Pickle_Taxid_2_FunctionEnum_2_Scores_dict
"""
Taxid_2_FunctionEnum_2_Scores_dict_ff = query.get_Taxid_2_FunctionEnum_2_Scores_dict(read_from_flat_files=True, as_array_or_as_list="array", taxid_2_proteome_count=None, from_pickle=False)
Taxid_2_FunctionEnum_2_Scores_dict_p = query.get_Taxid_2_FunctionEnum_2_Scores_dict(read_from_flat_files=False, as_array_or_as_list="array", taxid_2_proteome_count=None, from_pickle=True)
assert sorted(Taxid_2_FunctionEnum_2_Scores_dict_ff.keys()) == sorted(Taxid_2_FunctionEnum_2_Scores_dict_p.keys())
for taxid in Taxid_2_FunctionEnum_2_Scores_dict_ff.keys():
FunctionEnum_2_Scores_dict_ff = Taxid_2_FunctionEnum_2_Scores_dict_ff[taxid]
FunctionEnum_2_Scores_dict_p = Taxid_2_FunctionEnum_2_Scores_dict_p[taxid]
for funcEnum in FunctionEnum_2_Scores_dict_ff:
arr_ff = FunctionEnum_2_Scores_dict_ff[funcEnum]
arr_p = FunctionEnum_2_Scores_dict_p[funcEnum]
assert np.array_equal(arr_ff, arr_p)
def test_Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN_flatfile_vs_sparse():
"""
failed
SparseMatrix_ENSPencoding_2_FuncEnum_UPS_FIN vs Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN
Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN = os.path.join(variables.TABLES_DIR, "Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN.txt")
"""
Taxid_2_FunctionEnum_2_Scores_dict_ff = query.get_Taxid_2_FunctionEnum_2_Scores_dict(read_from_flat_files=True, as_array_or_as_list="array", taxid_2_proteome_count=None, from_pickle=False)
FunctionEnum_2_Scores_dict_ff = Taxid_2_FunctionEnum_2_Scores_dict_ff[9606]
genome_scores_matrix, list_of_rowIndices_genome = slice_ScoresMatrix_for_given_ENSP(conftest.UniProt_IDs_human_list, ENSP_2_rowIndex_dict, CSC_ENSPencoding_2_FuncEnum)
m = genome_scores_matrix
for i in range(len(m.indptr[:-1])): # get column values
index_row_start = m.indptr[i]
index_row_stop = m.indptr[i + 1]
if index_row_start == index_row_stop:
continue
funcEnum = i
score_arr_sparse = m.data[index_row_start:index_row_stop]
score_arr_ff = FunctionEnum_2_Scores_dict_ff[funcEnum]
assert score_arr_sparse.shape[0] == score_arr_ff.shape[0]
assert sorted(score_arr_sparse) == sorted(score_arr_ff)
def test_Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN_pickle_vs_sparse():
"""
SparseMatrix_ENSPencoding_2_FuncEnum_UPS_FIN vs Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN_pickle
Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN = os.path.join(variables.TABLES_DIR, "Taxid_2_FunctionEnum_2_Scores_table_UPS_FIN.txt")
"""
Taxid_2_FunctionEnum_2_Scores_dict_p = query.get_Taxid_2_FunctionEnum_2_Scores_dict(read_from_flat_files=False, as_array_or_as_list="array", taxid_2_proteome_count=None, from_pickle=True)
FunctionEnum_2_Scores_dict_p = Taxid_2_FunctionEnum_2_Scores_dict_p[9606]
genome_scores_matrix, list_of_rowIndices_genome = slice_ScoresMatrix_for_given_ENSP(conftest.UniProt_IDs_human_list, ENSP_2_rowIndex_dict, CSC_ENSPencoding_2_FuncEnum)
m = genome_scores_matrix
for i in range(len(m.indptr[:-1])): # get column values
index_row_start = m.indptr[i]
index_row_stop = m.indptr[i + 1]
if index_row_start == index_row_stop:
continue
funcEnum = i
score_arr_sparse = m.data[index_row_start:index_row_stop]
score_arr_ff = FunctionEnum_2_Scores_dict_p[funcEnum]
assert score_arr_sparse.shape[0] == score_arr_ff.shape[0]
assert sorted(score_arr_sparse) == sorted(score_arr_ff)
def test_taxid_2_tuple_funcEnum_index_2_associations_counts():
"""
snakemake r_Pickle_taxid_2_tuple_funcEnum_index_2_associations_counts_UPS_FIN
compare pickled vs flatfile data
"""
taxid_2_tuple_funcEnum_index_2_associations_counts_p = query.get_background_taxid_2_funcEnum_index_2_associations(read_from_flat_files=False, from_pickle=True)
taxid_2_tuple_funcEnum_index_2_associations_counts_ff = query.get_background_taxid_2_funcEnum_index_2_associations(read_from_flat_files=True, from_pickle=False)
assert sorted(taxid_2_tuple_funcEnum_index_2_associations_counts_p.keys()) == sorted(taxid_2_tuple_funcEnum_index_2_associations_counts_ff.keys())
for taxid in taxid_2_tuple_funcEnum_index_2_associations_counts_ff.keys():
funcEnum_index_2_associations_ff = taxid_2_tuple_funcEnum_index_2_associations_counts_ff[taxid]
funcEnum_index_positions_arr_ff, counts_arr_ff = funcEnum_index_2_associations_ff
funcEnum_index_2_associations_p = taxid_2_tuple_funcEnum_index_2_associations_counts_p[taxid]
funcEnum_index_positions_arr_p, counts_arr_p = funcEnum_index_2_associations_p
assert np.array_equal(funcEnum_index_positions_arr_ff, funcEnum_index_positions_arr_p)
assert np.array_equal(counts_arr_ff, counts_arr_p)
| 56.441606 | 192 | 0.792952 | 2,327 | 15,465 | 4.701762 | 0.100129 | 0.037108 | 0.083356 | 0.067727 | 0.84974 | 0.797642 | 0.771593 | 0.755598 | 0.716022 | 0.663193 | 0 | 0.021466 | 0.141481 | 15,465 | 273 | 193 | 56.648352 | 0.802591 | 0.291626 | 0 | 0.564417 | 0 | 0 | 0.01665 | 0.007202 | 0 | 0 | 0 | 0 | 0.122699 | 1 | 0.067485 | false | 0 | 0.04908 | 0 | 0.128834 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e9fa8958725a44f7f808f6d79e211cd76d4f3872 | 70 | py | Python | expackage/expackage/__init__.py | jrbourbeau/xmeeting-reproducible-research | 0c791e86f5885e6bb9435f89182477ac696647ff | [
"MIT"
] | null | null | null | expackage/expackage/__init__.py | jrbourbeau/xmeeting-reproducible-research | 0c791e86f5885e6bb9435f89182477ac696647ff | [
"MIT"
] | null | null | null | expackage/expackage/__init__.py | jrbourbeau/xmeeting-reproducible-research | 0c791e86f5885e6bb9435f89182477ac696647ff | [
"MIT"
] | null | null | null | # __init__.py
from .math import my_add
print('Importing expackage')
| 11.666667 | 28 | 0.757143 | 10 | 70 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 70 | 5 | 29 | 14 | 0.8 | 0.157143 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0.5 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
18585be7746168caae351a4a61a86c925865df12 | 457 | py | Python | oldstuff/python/pythonchallenge/1.py | bcherry/bcherry | 5d2f1144dbdbf35d6284018fa2c9e24ec5cecec6 | [
"MIT"
] | 3 | 2016-11-13T09:06:41.000Z | 2021-09-11T23:36:19.000Z | oldstuff/python/pythonchallenge/1.py | bcherry/bcherry | 5d2f1144dbdbf35d6284018fa2c9e24ec5cecec6 | [
"MIT"
] | null | null | null | oldstuff/python/pythonchallenge/1.py | bcherry/bcherry | 5d2f1144dbdbf35d6284018fa2c9e24ec5cecec6 | [
"MIT"
] | 2 | 2017-04-04T10:03:18.000Z | 2021-09-11T23:36:26.000Z | import string
original = "g fmnc wms bgblr rpylqjyrc gr zw fylb. rfyrq ufyr amknsrcpq ypc dmp. bmgle gr gl zw fylb gq glcddgagclr ylb rfyr'q ufw rfgq rcvr gq qm jmle. sqgle qrpgle.kyicrpylq() gq pcamkkclbcb. lmu ynnjw ml rfc spj."
print string.translate(original,string.maketrans("abcdefghijklmnopqrstuvwxyz","cdefghijklmnopqrstuvwxyzab"))
print string.translate("map",string.maketrans("abcdefghijklmnopqrstuvwxyz","cdefghijklmnopqrstuvwxyzab"))+".html"
| 57.125 | 216 | 0.803063 | 59 | 457 | 6.220339 | 0.745763 | 0.032698 | 0.108992 | 0.365123 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105033 | 457 | 7 | 217 | 65.285714 | 0.897311 | 0 | 0 | 0 | 0 | 0.25 | 0.689278 | 0.227571 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.25 | null | null | 0.5 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
1882680cc619516d0192ead3afc55a6c42184d79 | 185 | py | Python | examples/notebooks/utils.py | wqruan/tf-encrypted | 50ee4ae3ba76b7c1f70a90e18f875191adea0a07 | [
"Apache-2.0"
] | 825 | 2019-04-18T09:21:32.000Z | 2022-03-30T05:55:26.000Z | examples/notebooks/utils.py | wqruan/tf-encrypted | 50ee4ae3ba76b7c1f70a90e18f875191adea0a07 | [
"Apache-2.0"
] | 354 | 2019-04-18T08:42:40.000Z | 2022-03-31T18:06:31.000Z | examples/notebooks/utils.py | wqruan/tf-encrypted | 50ee4ae3ba76b7c1f70a90e18f875191adea0a07 | [
"Apache-2.0"
] | 161 | 2019-05-02T16:43:31.000Z | 2022-03-31T01:35:03.000Z | """
Various helpers that make using TensorFlow and TF Encrypted in notebooks
easier.
"""
import tensorflow as tf
def print_in_notebook(x):
return tf.py_func(print, [x], Tout=[])
| 16.818182 | 72 | 0.724324 | 28 | 185 | 4.678571 | 0.785714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.167568 | 185 | 10 | 73 | 18.5 | 0.850649 | 0.432432 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 6 |
a103ef0df919863d6f83423f1687c8654cf35e1b | 212 | py | Python | solutions_automation/marketplace/meetings.py | zaibon/js-sdk | cd1d26f2c3343884c1927ceef7c1e12e3f7da905 | [
"Apache-2.0"
] | 13 | 2020-09-02T09:05:08.000Z | 2022-03-12T02:43:24.000Z | solutions_automation/marketplace/meetings.py | zaibon/js-sdk | cd1d26f2c3343884c1927ceef7c1e12e3f7da905 | [
"Apache-2.0"
] | 1,998 | 2020-06-15T11:46:10.000Z | 2022-03-24T22:12:41.000Z | solutions_automation/marketplace/meetings.py | zaibon/js-sdk | cd1d26f2c3343884c1927ceef7c1e12e3f7da905 | [
"Apache-2.0"
] | 8 | 2020-09-29T06:50:35.000Z | 2021-06-14T03:30:52.000Z | from jumpscale.packages.marketplace.chats.meetings import MeetingsDeploy
from solutions_automation.utils.gedispatch import GedisChatBotPatch
class MeetingsAutomated(GedisChatBotPatch, MeetingsDeploy):
pass
| 30.285714 | 72 | 0.867925 | 20 | 212 | 9.15 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084906 | 212 | 6 | 73 | 35.333333 | 0.943299 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.25 | 0.5 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
a137fc7556c4460df2a8e455c4df6e248599676b | 26 | py | Python | derp/__init__.py | k4t0mono/probable-palm-tree | 03f649d093c6d25c0b40f517ece45435578271dc | [
"BSD-2-Clause"
] | null | null | null | derp/__init__.py | k4t0mono/probable-palm-tree | 03f649d093c6d25c0b40f517ece45435578271dc | [
"BSD-2-Clause"
] | null | null | null | derp/__init__.py | k4t0mono/probable-palm-tree | 03f649d093c6d25c0b40f517ece45435578271dc | [
"BSD-2-Clause"
] | null | null | null | from .yiffer import Yiffer | 26 | 26 | 0.846154 | 4 | 26 | 5.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.115385 | 26 | 1 | 26 | 26 | 0.956522 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a15ce04be24b3b9538943b15ae16257e9e3dd479 | 38 | py | Python | python/packages/isce3/cuda/image/__init__.py | isce-framework/isce3 | 59cdd2c659a4879367db5537604b0ca93d26b372 | [
"Apache-2.0"
] | 64 | 2019-08-06T19:22:22.000Z | 2022-03-20T17:11:46.000Z | python/packages/isce3/cuda/image/__init__.py | isce-framework/isce3 | 59cdd2c659a4879367db5537604b0ca93d26b372 | [
"Apache-2.0"
] | 8 | 2020-09-01T22:46:53.000Z | 2021-11-04T00:05:28.000Z | python/packages/isce3/cuda/image/__init__.py | isce-framework/isce3 | 59cdd2c659a4879367db5537604b0ca93d26b372 | [
"Apache-2.0"
] | 29 | 2019-08-05T21:40:55.000Z | 2022-03-23T00:17:03.000Z | from pybind_isce3.cuda.image import *
| 19 | 37 | 0.815789 | 6 | 38 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.029412 | 0.105263 | 38 | 1 | 38 | 38 | 0.852941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1a02cec9ff8036b0ef8434da1b289c520d58e20f | 36 | py | Python | base_astro_bot/fleet/__init__.py | Mirdalan/base_astro_bot | 656ebd55c0f57fc18bf95227af9e20a4c1392489 | [
"MIT"
] | 2 | 2018-11-16T11:31:53.000Z | 2019-05-19T03:07:15.000Z | base_astro_bot/fleet/__init__.py | Mirdalan/base_astro_bot | 656ebd55c0f57fc18bf95227af9e20a4c1392489 | [
"MIT"
] | null | null | null | base_astro_bot/fleet/__init__.py | Mirdalan/base_astro_bot | 656ebd55c0f57fc18bf95227af9e20a4c1392489 | [
"MIT"
] | null | null | null | from .fleet_mixin import FleetMixin
| 18 | 35 | 0.861111 | 5 | 36 | 6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1a047b6ee7e4bf9f37d2a81c3b700c48c4bf42a0 | 27 | py | Python | src/aq/cli/__main__.py | jjulien/azure-query | 2da8ee35a58602d70225946d915f172c4b7a452d | [
"MIT"
] | 2 | 2020-05-12T17:36:13.000Z | 2020-08-01T16:26:08.000Z | src/aq/cli/__main__.py | nickshine/azure-query | b7b2acc3b22f68eab3b657daefd38ded56240934 | [
"MIT"
] | null | null | null | src/aq/cli/__main__.py | nickshine/azure-query | b7b2acc3b22f68eab3b657daefd38ded56240934 | [
"MIT"
] | 1 | 2019-12-04T17:40:10.000Z | 2019-12-04T17:40:10.000Z | import aq.cli
aq.cli.run() | 9 | 13 | 0.703704 | 6 | 27 | 3.166667 | 0.666667 | 0.526316 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 27 | 3 | 14 | 9 | 0.791667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
1a160c52030aee789e90fc16bf376a43ce7e4e1f | 96 | py | Python | venv/lib/python3.8/site-packages/future/moves/itertools.py | Retraces/UkraineBot | 3d5d7f8aaa58fa0cb8b98733b8808e5dfbdb8b71 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/future/moves/itertools.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/future/moves/itertools.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/3d/5c/45/1d1941425f44952d1cb8614f714b63e777875fb6750e6820cc67e043a7 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.489583 | 0 | 96 | 1 | 96 | 96 | 0.40625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c513136debfae36bbb83c808952e9810a657d71d | 144 | py | Python | g3ts/core/__init__.py | savioli/g3ts | bca8d48f481c00d400215099eec324c03c6c0467 | [
"MIT"
] | null | null | null | g3ts/core/__init__.py | savioli/g3ts | bca8d48f481c00d400215099eec324c03c6c0467 | [
"MIT"
] | null | null | null | g3ts/core/__init__.py | savioli/g3ts | bca8d48f481c00d400215099eec324c03c6c0467 | [
"MIT"
] | null | null | null | from .G3tsCrawler import G3tsCrawler
from .G3tsSpeechCrawler import G3tsSpeechCrawler
from .G3tsTranslationCrawler import G3tsTranslationCrawler | 48 | 58 | 0.902778 | 12 | 144 | 10.833333 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.045113 | 0.076389 | 144 | 3 | 58 | 48 | 0.932331 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c534f2cf6f563b77c8bf5ef50a90964738c7cfc0 | 141 | py | Python | Pdfencrypt-main/install.py | Zusyaku/Termux-And-Lali-Linux-V2 | b1a1b0841d22d4bf2cc7932b72716d55f070871e | [
"Apache-2.0"
] | 2 | 2021-11-17T03:35:03.000Z | 2021-12-08T06:00:31.000Z | Pdfencrypt-main/install.py | Zusyaku/Termux-And-Lali-Linux-V2 | b1a1b0841d22d4bf2cc7932b72716d55f070871e | [
"Apache-2.0"
] | null | null | null | Pdfencrypt-main/install.py | Zusyaku/Termux-And-Lali-Linux-V2 | b1a1b0841d22d4bf2cc7932b72716d55f070871e | [
"Apache-2.0"
] | 2 | 2021-11-05T18:07:48.000Z | 2022-02-24T21:25:07.000Z | import os
os.system("pip install pypdf2")
os.system("pip install PyPDF2")
os.system("clear")
print("\033[1;32;40mType python Pdfencrypt.py ") | 28.2 | 48 | 0.744681 | 23 | 141 | 4.565217 | 0.652174 | 0.228571 | 0.209524 | 0.342857 | 0.533333 | 0.533333 | 0.533333 | 0 | 0 | 0 | 0 | 0.077519 | 0.085106 | 141 | 5 | 48 | 28.2 | 0.736434 | 0 | 0 | 0 | 0 | 0 | 0.56338 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.2 | 0 | 0.2 | 0.2 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c53b7efd663352c82a5196eda79815894b228169 | 263 | py | Python | turismo_v1/apps/mapa_main/models.py | diegogarciapablo/proyecto_tesis | 98882b009486e7324679654d101702cfe3d0e0ee | [
"MIT"
] | null | null | null | turismo_v1/apps/mapa_main/models.py | diegogarciapablo/proyecto_tesis | 98882b009486e7324679654d101702cfe3d0e0ee | [
"MIT"
] | null | null | null | turismo_v1/apps/mapa_main/models.py | diegogarciapablo/proyecto_tesis | 98882b009486e7324679654d101702cfe3d0e0ee | [
"MIT"
] | null | null | null | from django.db import models
#class datos_marcador(models.Model):
# =models.CharField(max_length=20, null=False)
# user_name=models.CharField(max_length=20, null=False)
# fecha_creacion=models.DateField('Fecha de creacion',auto_now=True, auto_now_add=False) | 43.833333 | 88 | 0.790875 | 40 | 263 | 5 | 0.625 | 0.15 | 0.18 | 0.24 | 0.35 | 0.35 | 0.35 | 0 | 0 | 0 | 0 | 0.016598 | 0.08365 | 263 | 6 | 88 | 43.833333 | 0.813278 | 0.840304 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c54e18044f363b5886ee49993c00d4dd6ca7d392 | 27,925 | py | Python | filebrowser/tests/test_versions.py | dteskera/django-filebrowser-suit | 502e7b56f6ab295ab1af198184c76f8c64b44b58 | [
"BSD-3-Clause"
] | null | null | null | filebrowser/tests/test_versions.py | dteskera/django-filebrowser-suit | 502e7b56f6ab295ab1af198184c76f8c64b44b58 | [
"BSD-3-Clause"
] | null | null | null | filebrowser/tests/test_versions.py | dteskera/django-filebrowser-suit | 502e7b56f6ab295ab1af198184c76f8c64b44b58 | [
"BSD-3-Clause"
] | null | null | null | # coding: utf-8
# PYTHON IMPORTS
import os
import posixpath
import shutil
# DJANGO IMPORTS
from django.conf import settings
from django.test import TestCase
from django.template import Context, Template, TemplateSyntaxError
# FILEBROWSER IMPORTS
import filebrowser
from filebrowser.settings import DEFAULT_PERMISSIONS, STRICT_PIL
from filebrowser.base import FileObject
from filebrowser.sites import site
from filebrowser.utils import scale_and_crop
# PIL import
if STRICT_PIL:
from PIL import Image
else:
try:
from PIL import Image
except ImportError:
import Image
TESTS_PATH = os.path.dirname(os.path.abspath(__file__))
FILEBROWSER_PATH = os.path.split(TESTS_PATH)[0]
class VersionTemplateTagsTests(TestCase):
def setUp(self):
"""
Save original values/functions so they can be restored in tearDown
"""
self.original_path = filebrowser.base.os.path
self.original_directory = site.directory
self.original_versions_basedir = filebrowser.base.VERSIONS_BASEDIR
self.original_versions = filebrowser.base.VERSIONS
self.original_admin_versions = filebrowser.base.ADMIN_VERSIONS
self.original_placeholder = filebrowser.templatetags.fb_versions.PLACEHOLDER
self.original_show_placeholder = filebrowser.templatetags.fb_versions.SHOW_PLACEHOLDER
self.original_force_placeholder = filebrowser.templatetags.fb_versions.FORCE_PLACEHOLDER
# DIRECTORY
# custom directory because this could be set with sites
# and we cannot rely on filebrowser.settings
# FIXME: find better directory name
self.directory = "fb_test_directory/"
self.directory_path = os.path.join(site.storage.location, self.directory)
if os.path.exists(self.directory_path):
self.fail("Test directory already exists.")
else:
os.makedirs(self.directory_path)
# set site directory
site.directory = self.directory
# VERSIONS
self.versions = "_versionstestdirectory"
self.versions_path = os.path.join(site.storage.location, self.versions)
if os.path.exists(self.versions_path):
self.fail("Versions directory already exists.")
else:
os.makedirs(self.versions_path)
# create temporary test folder and move testimage
# FIXME: find better path names
self.tmpdir_name = os.path.join("fb_tmp_dir", "fb_tmp_dir_sub")
self.tmpdir_path = os.path.join(site.storage.location, self.directory, self.tmpdir_name)
if os.path.exists(self.tmpdir_path):
self.fail("Temporary testfolder already exists.")
else:
os.makedirs(self.tmpdir_path)
# copy test image to temporary test folder
self.image_path = os.path.join(FILEBROWSER_PATH, "static", "filebrowser", "img", "testimage.jpg")
if not os.path.exists(self.image_path):
self.fail("Testimage not found.")
shutil.copy(self.image_path, self.tmpdir_path)
# create temporary test folder (placeholder) and move testimage
# FIXME: find better path names
self.tmpdir_name_ph = os.path.join("fb_tmp_dir", "fb_tmp_placeholder")
self.tmpdir_path_ph = os.path.join(site.storage.location, self.directory, self.tmpdir_name_ph)
if os.path.exists(self.tmpdir_path_ph):
self.fail("Temporary testfolder (placeholder) already exists.")
else:
os.makedirs(self.tmpdir_path_ph)
# copy test image to temporary test folder (placeholder)
shutil.copy(self.image_path, self.tmpdir_path_ph)
# set posixpath
filebrowser.base.os.path = posixpath
# fileobjects
self.f_image = FileObject(os.path.join(self.directory, self.tmpdir_name, "testimage.jpg"), site=site)
self.f_image_not_exists = FileObject(os.path.join(self.directory, self.tmpdir_name, "testimage_does_not_exist.jpg"), site=site)
self.f_folder = FileObject(os.path.join(self.directory, self.tmpdir_name), site=site)
self.f_placeholder = FileObject(os.path.join(self.directory, self.tmpdir_name_ph, "testimage.jpg"), site=site)
def test_scale_crop(self):
"""
Test scale/crop functionality
scale_and_crop(im, width, height, opts)
self.f_image (original): width 1000, height 750
"""
# new width 500 > 500/375
im = Image.open(self.f_image.path_full)
version = scale_and_crop(im, 500, "", "")
self.assertEqual(version.size[0], 500)
self.assertEqual(version.size[1], 375)
# new height 375 > 500/375
im = Image.open(self.f_image.path_full)
version = scale_and_crop(im, "", 375, "")
self.assertEqual(version.size[0], 500)
self.assertEqual(version.size[1], 375)
# SIZE TOO BIG, BUT NO UPSCALE DEFINED
# new width 1500, no upscale > False
im = Image.open(self.f_image.path_full)
version = scale_and_crop(im, 1500, "", "")
self.assertEqual(version, False)
# new height 1125, no upscale > False
im = Image.open(self.f_image.path_full)
version = scale_and_crop(im, "", 1125, "")
self.assertEqual(version, False)
# new width 1500, height 1125, no upscale > False
im = Image.open(self.f_image.path_full)
version = scale_and_crop(im, 1500, 1125, "")
self.assertEqual(version, False)
# SIZE TOO BIG, UPSCALE DEFINED
# new width 1500, upscale > 1500/1125
im = Image.open(self.f_image.path_full)
version = scale_and_crop(im, 1500, "", "upscale")
self.assertEqual(version.size[0], 1500)
self.assertEqual(version.size[1], 1125)
# new height 1125, upscale > 1500/1125
im = Image.open(self.f_image.path_full)
version = scale_and_crop(im, "", 1125, "upscale")
self.assertEqual(version.size[0], 1500)
self.assertEqual(version.size[1], 1125)
# new width 1500, new height 1125, upscale > 1500/1125
im = Image.open(self.f_image.path_full)
version = scale_and_crop(im, 1500, 1125, "upscale")
self.assertEqual(version.size[0], 1500)
self.assertEqual(version.size[1], 1125)
# new width 1500, new height 1125, upscale > 1500/1125
im = Image.open(self.f_image.path_full)
version = scale_and_crop(im, 1500, 0, "upscale")
self.assertEqual(version.size[0], 1500)
self.assertEqual(version.size[1], 1125)
# SIZE TOO SMALL, UPSCALE DEFINED
# width too small, no upscale
im = Image.open(self.f_image.path_full)
version = scale_and_crop(im, 500, "", "upscale")
self.assertEqual(version.size[0], 500)
self.assertEqual(version.size[1], 375)
# height too small, no upscale
im = Image.open(self.f_image.path_full)
version = scale_and_crop(im, "", 375, "upscale")
self.assertEqual(version.size[0], 500)
self.assertEqual(version.size[1], 375)
# CROPPING
# new width 500 and height 500 w. crop > 500/500
im = Image.open(self.f_image.path_full)
version = scale_and_crop(im, 500, 500, "crop")
self.assertEqual(version.size[0], 500)
self.assertEqual(version.size[1], 500)
# new width 1500 and height 1500 w. crop > false (upscale missing)
im = Image.open(self.f_image.path_full)
version = scale_and_crop(im, 1500, 1500, "crop")
self.assertEqual(version, False)
# new width 1500 and height 1500 w. crop > false (upscale missing)
im = Image.open(self.f_image.path_full)
version = scale_and_crop(im, 1500, 1500, "crop,upscale")
self.assertEqual(version.size[0], 1500)
self.assertEqual(version.size[1], 1500)
# SPECIAL CASES
# new width 500 and height 1125
# new width is smaller than original, but new height is bigger
# width has higher priority
im = Image.open(self.f_image.path_full)
version = scale_and_crop(im, 500, 1125, "")
self.assertEqual(version.size[0], 500)
self.assertEqual(version.size[1], 375)
# same with upscale
im = Image.open(self.f_image.path_full)
version = scale_and_crop(im, 500, 1125, "upscale")
self.assertEqual(version.size[0], 500)
self.assertEqual(version.size[1], 375)
# new width 1500 and height 375
# new width is bigger than original, but new height is smaller
# height has higher priority
im = Image.open(self.f_image.path_full)
version = scale_and_crop(im, 1500, 375, "")
self.assertEqual(version.size[0], 500)
self.assertEqual(version.size[1], 375)
# same with upscale
im = Image.open(self.f_image.path_full)
version = scale_and_crop(im, 1500, 375, "upscale")
self.assertEqual(version.size[0], 500)
self.assertEqual(version.size[1], 375)
def test_version(self):
"""
Templatetag version
"""
# new settings
filebrowser.base.VERSIONS_BASEDIR = "fb_test_directory/_versions"
filebrowser.base.VERSIONS = {
'admin_thumbnail': {'verbose_name': 'Admin Thumbnail', 'width': 60, 'height': 60, 'opts': 'crop'},
'large': {'verbose_name': 'Large', 'width': 600, 'height': '', 'opts': ''},
'fixedheight': {'verbose_name': 'Fixed height', 'width': '', 'height': 100, 'opts': ''},
}
filebrowser.base.ADMIN_VERSIONS = ['large']
filebrowser.settings.VERSIONS = filebrowser.base.VERSIONS
filebrowser.templatetags.fb_versions.VERSIONS = filebrowser.base.VERSIONS
# templatetag version with wrong token
self.assertRaises(TemplateSyntaxError, lambda: Template('{% load fb_versions %}{% version obj.path %}'))
self.assertRaises(TemplateSyntaxError, lambda: Template('{% load fb_versions %}{% version %}'))
# templatetag version without path
t = Template('{% load fb_versions %}{% version obj "medium" %}')
c = Context({"obj": self.f_image})
r = t.render(c)
self.assertEqual(r, "") # FIXME: should this throw an error?
# templatetag version with hardcoded path
t = Template('{% load fb_versions %}{% version path "large" %}')
c = Context({"obj": self.f_image, "path": "fb_test_directory/fb_tmp_dir/fb_tmp_dir_sub/testimage.jpg"})
r = t.render(c)
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# templatetag version with obj
t = Template('{% load fb_versions %}{% version obj "large" %}')
c = Context({"obj": self.f_image})
r = t.render(c)
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# templatetag version with obj.path
t = Template('{% load fb_versions %}{% version obj.path "large" %}')
c = Context({"obj": self.f_image})
r = t.render(c)
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# fixed height
t = Template('{% load fb_versions %}{% version path "fixedheight" %}')
c = Context({"obj": self.f_image, "path": "fb_test_directory/fb_tmp_dir/fb_tmp_dir_sub/testimage.jpg"})
r = t.render(c)
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_fixedheight.jpg"))
# # FIXME: templatetag version with non-existing path
# t = Template('{% load fb_versions %}{% version path "large" %}')
# c = Context({"obj": self.f_image, "path": "fb_test_directory/fb_tmp_dir/fb_tmp_dir_sub/testimagexxx.jpg"})
# r = t.render(c)
# self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# test placeholder with existing image
filebrowser.templatetags.fb_versions.PLACEHOLDER = "fb_test_directory/fb_tmp_dir/fb_tmp_placeholder/testimage.jpg"
filebrowser.templatetags.fb_versions.SHOW_PLACEHOLDER = True
filebrowser.templatetags.fb_versions.FORCE_PLACEHOLDER = True
t = Template('{% load fb_versions %}{% version obj.path suffix %}')
c = Context({"obj": self.f_image, "suffix": "large"})
r = t.render(c)
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_placeholder/testimage_large.jpg"))
filebrowser.templatetags.fb_versions.FORCE_PLACEHOLDER = False
t = Template('{% load fb_versions %}{% version obj.path suffix %}')
c = Context({"obj": self.f_image, "suffix": "large"})
r = t.render(c)
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# test placeholder with non-existing image
filebrowser.templatetags.fb_versions.FORCE_PLACEHOLDER = True
t = Template('{% load fb_versions %}{% version obj.path suffix %}')
c = Context({"obj": self.f_image_not_exists, "suffix": "large"})
r = t.render(c)
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_placeholder/testimage_large.jpg"))
filebrowser.templatetags.fb_versions.FORCE_PLACEHOLDER = False
t = Template('{% load fb_versions %}{% version obj.path suffix %}')
c = Context({"obj": self.f_image_not_exists, "suffix": "large"})
r = t.render(c)
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_placeholder/testimage_large.jpg"))
# Check permissions
if DEFAULT_PERMISSIONS is not None:
permissions_default = oct(DEFAULT_PERMISSIONS)
permissions_file = oct(os.stat(os.path.join(settings.MEDIA_ROOT, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg")).st_mode & 0o777)
self.assertEqual(permissions_default, permissions_file)
def test_version_as_object(self):
"""
Templatetag version_object
"""
# new settings
filebrowser.base.VERSIONS_BASEDIR = "fb_test_directory/_versions"
filebrowser.base.VERSIONS = {
'admin_thumbnail': {'verbose_name': 'Admin Thumbnail', 'width': 60, 'height': 60, 'opts': 'crop'},
'large': {'verbose_name': 'Large', 'width': 600, 'height': '', 'opts': ''},
}
filebrowser.base.ADMIN_VERSIONS = ['large']
filebrowser.settings.VERSIONS = filebrowser.base.VERSIONS
filebrowser.templatetags.fb_versions.VERSIONS = filebrowser.base.VERSIONS
# templatetag version with hardcoded path
t = Template('{% load fb_versions %}{% version path "large" as version_large %}{{ version_large.url }}')
c = Context({"obj": self.f_image, "path": "fb_test_directory/fb_tmp_dir/fb_tmp_dir_sub/testimage.jpg"})
r = t.render(c)
self.assertEqual(c["version_large"].url, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# templatetag version with obj.path
t = Template('{% load fb_versions %}{% version obj.path "large" as version_large %}{{ version_large.url }}')
c = Context({"obj": self.f_image})
r = t.render(c)
self.assertEqual(c["version_large"].url, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# templatetag version with obj
t = Template('{% load fb_versions %}{% version obj "large" as version_large %}{{ version_large.url }}')
c = Context({"obj": self.f_image})
r = t.render(c)
self.assertEqual(c["version_large"].url, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# templatetag version with suffix as variable
t = Template('{% load fb_versions %}{% version obj suffix as version_large %}{{ version_large.url }}')
c = Context({"obj": self.f_image, "suffix": "large"})
r = t.render(c)
self.assertEqual(c["version_large"].url, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# # FIXME: templatetag version with non-existing path
# t = Template('{% load fb_versions %}{% version path "large" as version_large %}{{ version_large.url }}')
# c = Context({"obj": self.f_image, "path": "fb_test_directory/fb_tmp_dir/fb_tmp_dir_sub/testimagexxx.jpg"})
# r = t.render(c)
# self.assertEqual(c["version_large"].url, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# test placeholder with existing image
filebrowser.templatetags.fb_versions.PLACEHOLDER = "fb_test_directory/fb_tmp_dir/fb_tmp_placeholder/testimage.jpg"
filebrowser.templatetags.fb_versions.SHOW_PLACEHOLDER = True
filebrowser.templatetags.fb_versions.FORCE_PLACEHOLDER = True
t = Template('{% load fb_versions %}{% version obj suffix as version_large %}{{ version_large.url }}')
c = Context({"obj": self.f_image, "suffix": "large"})
r = t.render(c)
self.assertEqual(c["version_large"].url, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_placeholder/testimage_large.jpg"))
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_placeholder/testimage_large.jpg"))
filebrowser.templatetags.fb_versions.FORCE_PLACEHOLDER = False
t = Template('{% load fb_versions %}{% version obj suffix as version_large %}{{ version_large.url }}')
c = Context({"obj": self.f_image, "suffix": "large"})
r = t.render(c)
self.assertEqual(c["version_large"].url, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# test placeholder with non-existing image
filebrowser.templatetags.fb_versions.FORCE_PLACEHOLDER = True
t = Template('{% load fb_versions %}{% version obj suffix as version_large %}{{ version_large.url }}')
c = Context({"obj": self.f_image_not_exists, "suffix": "large"})
r = t.render(c)
self.assertEqual(c["version_large"].url, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_placeholder/testimage_large.jpg"))
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_placeholder/testimage_large.jpg"))
filebrowser.templatetags.fb_versions.FORCE_PLACEHOLDER = False
t = Template('{% load fb_versions %}{% version obj suffix as version_large %}{{ version_large.url }}')
c = Context({"obj": self.f_image_not_exists, "suffix": "large"})
r = t.render(c)
self.assertEqual(c["version_large"].url, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_placeholder/testimage_large.jpg"))
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_placeholder/testimage_large.jpg"))
def test_version_object(self):
"""
Templatetag version_object
"""
# new settings
filebrowser.base.VERSIONS_BASEDIR = "fb_test_directory/_versions"
filebrowser.base.VERSIONS = {
'admin_thumbnail': {'verbose_name': 'Admin Thumbnail', 'width': 60, 'height': 60, 'opts': 'crop'},
'large': {'verbose_name': 'Large', 'width': 600, 'height': '', 'opts': ''},
}
filebrowser.base.ADMIN_VERSIONS = ['large']
filebrowser.settings.VERSIONS = filebrowser.base.VERSIONS
filebrowser.templatetags.fb_versions.VERSIONS = filebrowser.base.VERSIONS
# templatetag with wrong token
self.assertRaises(TemplateSyntaxError, lambda: Template('{% load fb_versions %}{% version_object obj.path %}'))
self.assertRaises(TemplateSyntaxError, lambda: Template('{% load fb_versions %}{% version_object %}'))
self.assertRaises(TemplateSyntaxError, lambda: Template('{% load fb_versions %}{% version_object obj.path "medium" %}'))
# templatetag version_object with hardcoded path
t = Template('{% load fb_versions %}{% version_object path "large" as version_large %}{{ version_large.url }}')
c = Context({"obj": self.f_image, "path": "fb_test_directory/fb_tmp_dir/fb_tmp_dir_sub/testimage.jpg"})
r = t.render(c)
self.assertEqual(c["version_large"].url, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# templatetag version_object with obj.path
t = Template('{% load fb_versions %}{% version_object obj.path "large" as version_large %}{{ version_large.url }}')
c = Context({"obj": self.f_image})
r = t.render(c)
self.assertEqual(c["version_large"].url, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# templatetag version_object with obj
t = Template('{% load fb_versions %}{% version_object obj "large" as version_large %}{{ version_large.url }}')
c = Context({"obj": self.f_image})
r = t.render(c)
self.assertEqual(c["version_large"].url, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# templatetag version_object with suffix as variable
t = Template('{% load fb_versions %}{% version_object obj suffix as version_large %}{{ version_large.url }}')
c = Context({"obj": self.f_image, "suffix": "large"})
r = t.render(c)
self.assertEqual(c["version_large"].url, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# # FIXME: templatetag version with non-existing path
# t = Template('{% load fb_versions %}{% version_object path "large" as version_large %}{{ version_large.url }}')
# c = Context({"obj": self.f_image, "path": "fb_test_directory/fb_tmp_dir/fb_tmp_dir_sub/testimagexxx.jpg"})
# r = t.render(c)
# self.assertEqual(c["version_large"].url, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# test placeholder with existing image
filebrowser.templatetags.fb_versions.PLACEHOLDER = "fb_test_directory/fb_tmp_dir/fb_tmp_placeholder/testimage.jpg"
filebrowser.templatetags.fb_versions.SHOW_PLACEHOLDER = True
filebrowser.templatetags.fb_versions.FORCE_PLACEHOLDER = True
t = Template('{% load fb_versions %}{% version_object obj suffix as version_large %}{{ version_large.url }}')
c = Context({"obj": self.f_image, "suffix": "large"})
r = t.render(c)
self.assertEqual(c["version_large"].url, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_placeholder/testimage_large.jpg"))
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_placeholder/testimage_large.jpg"))
filebrowser.templatetags.fb_versions.FORCE_PLACEHOLDER = False
t = Template('{% load fb_versions %}{% version_object obj suffix as version_large %}{{ version_large.url }}')
c = Context({"obj": self.f_image, "suffix": "large"})
r = t.render(c)
self.assertEqual(c["version_large"].url, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_dir_sub/testimage_large.jpg"))
# test placeholder with non-existing image
filebrowser.templatetags.fb_versions.FORCE_PLACEHOLDER = True
t = Template('{% load fb_versions %}{% version_object obj suffix as version_large %}{{ version_large.url }}')
c = Context({"obj": self.f_image_not_exists, "suffix": "large"})
r = t.render(c)
self.assertEqual(c["version_large"].url, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_placeholder/testimage_large.jpg"))
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_placeholder/testimage_large.jpg"))
filebrowser.templatetags.fb_versions.FORCE_PLACEHOLDER = False
t = Template('{% load fb_versions %}{% version_object obj suffix as version_large %}{{ version_large.url }}')
c = Context({"obj": self.f_image_not_exists, "suffix": "large"})
r = t.render(c)
self.assertEqual(c["version_large"].url, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_placeholder/testimage_large.jpg"))
self.assertEqual(r, os.path.join(settings.MEDIA_URL, "fb_test_directory/_versions/fb_tmp_dir/fb_tmp_placeholder/testimage_large.jpg"))
def test_version_setting(self):
pass
def tearDown(self):
"""
Restore original values/functions
"""
filebrowser.base.os.path = self.original_path
site.directory = self.original_directory
filebrowser.base.VERSIONS_BASEDIR = self.original_versions_basedir
filebrowser.base.VERSIONS = self.original_versions
filebrowser.settings.VERSIONS = self.original_versions
filebrowser.templatetags.fb_versions.VERSIONS = self.original_versions
filebrowser.base.ADMIN_VERSIONS = self.original_admin_versions
filebrowser.templatetags.fb_versions.PLACEHOLDER = self.original_placeholder
filebrowser.templatetags.fb_versions.SHOW_PLACEHOLDER = self.original_show_placeholder
filebrowser.templatetags.fb_versions.FORCE_PLACEHOLDER = self.original_force_placeholder
# remove temporary directory and test folder
shutil.rmtree(self.directory_path)
shutil.rmtree(self.versions_path)
| 56.074297 | 171 | 0.68641 | 3,697 | 27,925 | 4.935894 | 0.054639 | 0.031784 | 0.042525 | 0.031784 | 0.874397 | 0.850449 | 0.838393 | 0.797896 | 0.780469 | 0.75811 | 0 | 0.017253 | 0.19051 | 27,925 | 497 | 172 | 56.187123 | 0.790002 | 0.150797 | 0 | 0.615625 | 0 | 0 | 0.29501 | 0.15345 | 0 | 0 | 0 | 0.008048 | 0.246875 | 1 | 0.021875 | false | 0.003125 | 0.046875 | 0 | 0.071875 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c54e4e9f518f2c24baf99e9c8d409421e8883799 | 49 | py | Python | VacationPY/.ipynb_checkpoints/config_google-checkpoint.py | kfmatovic716/Python-API_challenge | e7f731ce06c49c2e10ca9370f0fec524581bc8cc | [
"Apache-2.0"
] | null | null | null | VacationPY/.ipynb_checkpoints/config_google-checkpoint.py | kfmatovic716/Python-API_challenge | e7f731ce06c49c2e10ca9370f0fec524581bc8cc | [
"Apache-2.0"
] | null | null | null | VacationPY/.ipynb_checkpoints/config_google-checkpoint.py | kfmatovic716/Python-API_challenge | e7f731ce06c49c2e10ca9370f0fec524581bc8cc | [
"Apache-2.0"
] | null | null | null | g_key = "AIzaSyARTxi16VivjD5qPXnAJc8QNalQWOWiyF0" | 49 | 49 | 0.897959 | 3 | 49 | 14.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106383 | 0.040816 | 49 | 1 | 49 | 49 | 0.808511 | 0 | 0 | 0 | 0 | 0 | 0.78 | 0.78 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c570b6ad565d9f306b32d1b3fe7d5d2cca2a8ec5 | 143 | py | Python | Python/src/bqs/bqs_exception.py | chatterjeem-nu/INFO6206_Assignment3 | d2d065a11a9a3f384500c15b7430f1343e47e42b | [
"Apache-2.0"
] | 3 | 2021-04-20T05:06:16.000Z | 2022-03-26T23:55:11.000Z | Python/src/bqs/bqs_exception.py | chatterjeem-nu/INFO6206_Assignment3 | d2d065a11a9a3f384500c15b7430f1343e47e42b | [
"Apache-2.0"
] | null | null | null | Python/src/bqs/bqs_exception.py | chatterjeem-nu/INFO6206_Assignment3 | d2d065a11a9a3f384500c15b7430f1343e47e42b | [
"Apache-2.0"
] | 1 | 2021-03-02T01:19:42.000Z | 2021-03-02T01:19:42.000Z | class BqsException(Exception):
def __init__(self, msg):
self.msg = msg
def __str__(self):
return f"Error {self.msg}"
| 17.875 | 34 | 0.608392 | 18 | 143 | 4.388889 | 0.611111 | 0.265823 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.272727 | 143 | 7 | 35 | 20.428571 | 0.759615 | 0 | 0 | 0 | 0 | 0 | 0.111888 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
3d72bb63b295a8e3b7294bcd2638055bf030bc42 | 23,031 | py | Python | python/ComputeStatsOnD4JToolsResults.py | GunpreetAhuja/StaticBugCheckers | 7d0e05afa28682c11a54ea022fd6bfabd64c5dc8 | [
"MIT"
] | 12 | 2018-09-06T08:04:06.000Z | 2021-12-14T02:56:02.000Z | python/ComputeStatsOnD4JToolsResults.py | GunpreetAhuja/StaticBugCheckers | 7d0e05afa28682c11a54ea022fd6bfabd64c5dc8 | [
"MIT"
] | 1 | 2018-11-06T20:37:33.000Z | 2018-11-14T08:29:35.000Z | python/ComputeStatsOnD4JToolsResults.py | GunpreetAhuja/StaticBugCheckers | 7d0e05afa28682c11a54ea022fd6bfabd64c5dc8 | [
"MIT"
] | 3 | 2018-11-11T22:15:07.000Z | 2020-03-29T04:08:06.000Z | '''
Created on Jan. 24, 2018
@author Andrew Habib
'''
from statistics import mean
from collections import Counter
import os
from Util import load_parsed_ep, load_parsed_inf, load_parsed_sb, load_json_list, get_list_of_uniq_jsons
def display_min_max_avg_warnings_per_bug_total():
print("\nMin, Max, Avg (warnings per bug) and Total number of warnings")
print("\nBuggy versions:\n")
rel_path = './b/'
ep_all = load_parsed_ep(rel_path + 'ep_parsed.json')
inf_all = load_parsed_inf(rel_path + 'inf_parsed.json')
sb_all = load_parsed_sb(rel_path + 'sb_parsed.json')
print("Tool Min. Max. Avg. Total")
print("Errorprone", get_min_max_avg_warnings_per_bug_total(ep_all))
print("Infer", get_min_max_avg_warnings_per_bug_total(inf_all))
print("Spotbugs", get_min_max_avg_warnings_per_bug_total(sb_all))
print("\nTotal number of warnings by all tools:",
get_min_max_avg_warnings_per_bug_total(ep_all)[3] + get_min_max_avg_warnings_per_bug_total(inf_all)[3] + get_min_max_avg_warnings_per_bug_total(sb_all)[3])
''''''
print("\nFixed versions:\n")
rel_path = './f/'
ep_all = load_parsed_ep(rel_path + 'ep_parsed.json')
inf_all = load_parsed_inf(rel_path + 'inf_parsed.json')
sb_all = load_parsed_sb(rel_path + 'sb_parsed.json')
print("Tool Total Min. Max. Avg.")
print("Errorprone", get_min_max_avg_warnings_per_bug_total(ep_all))
print("Infer", get_min_max_avg_warnings_per_bug_total(inf_all))
print("Spotbugs", get_min_max_avg_warnings_per_bug_total(sb_all))
print("\nTotal number of warnings by all tools:",
get_min_max_avg_warnings_per_bug_total(ep_all)[3] + get_min_max_avg_warnings_per_bug_total(inf_all)[3] + get_min_max_avg_warnings_per_bug_total(sb_all)[3])
def get_min_max_avg_warnings_per_bug_total(warnings):
count = [w.proj for w in warnings]
counter = Counter(count)
return min(counter.values()), max(counter.values()), mean(counter.values()), len(count)
def get_warnings_bugs_from_each_approach():
print("\nWarnings and bugs from each automatic matching approach")
print("** warnings for combined approach are not unique (duplicates exist) **\n")
rel_path = './diffs_warnings/'
ep_res1 = load_parsed_ep(rel_path + "ep_warnings.json")
inf_res1 = load_parsed_inf(rel_path + "inf_warnings.json")
sb_res1 = load_parsed_sb(rel_path + "sb_warnings.json")
rel_path = './removed_warnings/'
ep_res2 = load_parsed_ep(rel_path + "ep_warnings.json")
inf_res2 = load_parsed_inf(rel_path + "inf_warnings.json")
sb_res2 = load_parsed_sb(rel_path + "sb_warnings.json")
_all_b = []
print("Tool Diff-based Fixed-based Combined")
print(" W B W B W B")
all_b = []
b_diff = get_bugs_from_warnings(ep_res1)
b_fixed = get_bugs_from_warnings(ep_res2)
all_b.extend(b_diff)
all_b.extend(b_fixed)
_all_b.extend(all_b)
print("Error Prone ", len(ep_res1), len(b_diff), len(ep_res2), len(b_fixed), len(ep_res1) + len(ep_res2), len(b_diff | b_fixed))
all_b = []
b_diff = get_bugs_from_warnings(inf_res1)
b_fixed = get_bugs_from_warnings(inf_res2)
all_b.extend(b_diff)
all_b.extend(b_fixed)
_all_b.extend(all_b)
print("Infer ", len(inf_res1), len(b_diff), len(inf_res2), len(b_fixed), len(inf_res1) + len(inf_res2), len(b_diff | b_fixed))
all_b = []
b_diff = get_bugs_from_warnings(sb_res1)
b_fixed = get_bugs_from_warnings(sb_res2)
all_b.extend(b_diff)
all_b.extend(b_fixed)
_all_b.extend(all_b)
print("SpotBugs ", len(sb_res1), len(b_diff), len(sb_res2), len(b_fixed), len(sb_res1) + len(sb_res2), len(b_diff | b_fixed))
print("\nUnique warnings from each approachcombined approach:\n")
rel_path = './diffs_warnings/'
ep_res1 = load_json_list(rel_path + "ep_warnings.json")
inf_res1 = load_json_list(rel_path + "inf_warnings.json")
sb_res1 = load_json_list(rel_path + "sb_warnings.json")
rel_path = './removed_warnings/'
ep_res2 = load_json_list(rel_path + "ep_warnings.json")
inf_res2 = load_json_list(rel_path + "inf_warnings.json")
sb_res2 = load_json_list(rel_path + "sb_warnings.json")
print("Ep ", len(ep_res1), len(ep_res2), len(get_list_of_uniq_jsons(ep_res1 + ep_res2)))
print("Inf", len(inf_res1), len(inf_res2), len(get_list_of_uniq_jsons(inf_res1 + inf_res2)))
print("Sb ", len(sb_res1), len(sb_res2), len(get_list_of_uniq_jsons(sb_res1 + sb_res2)))
print("\nUnique bugs from combined approach: ", len(set(_all_b)))
def get_bugs_from_warnings(warnings):
bugs = set(w.proj for w in warnings)
return bugs
def count_bugs_from_warnings(warnings):
bugs = set(w.proj for w in warnings)
return(len(bugs))
def get_manually_inspected_warnings_bugs():
print("\nManual inspection of warnings aggregated on warnings and bugs levels")
print("\nDiffs-based approach:\n")
rel_path = './diffs_warnings/'
ep_res = load_parsed_ep(rel_path + "ep_warnings.json")
ep_succ = load_parsed_ep(rel_path + "ep_succ.json")
ep_part = load_parsed_ep(rel_path + "ep_part.json")
ep_fail = load_parsed_ep(rel_path + "ep_fail.json")
inf_res = load_parsed_inf(rel_path + "inf_warnings.json")
inf_succ = load_parsed_inf(rel_path + "inf_succ.json")
inf_part = load_parsed_inf(rel_path + "inf_part.json")
inf_fail = load_parsed_inf(rel_path + "inf_fail.json")
sb_res = load_parsed_sb(rel_path + "sb_warnings.json")
sb_succ = load_parsed_sb(rel_path + "sb_succ.json")
sb_part = load_parsed_sb(rel_path + "sb_part.json")
sb_fail = load_parsed_sb(rel_path + "sb_fail.json")
print("Warnings:\n")
print('Tool "Full match" "Partial match" Mismatch Total')
print('"Error Prone"', len(ep_succ), len(ep_part), len(ep_fail), len(ep_res))
print("Infer", len(inf_succ), len(inf_part), len(inf_fail), len(inf_res))
print("Spotbugs", len(sb_succ), len(sb_part), len(sb_fail), len(sb_res))
print("\nBugs:\n")
print('Tool "Full match" "Partial match" Mismatch Total')
print('"Error Prone"', count_bugs_from_warnings(ep_succ), count_bugs_from_warnings(ep_part), count_bugs_from_warnings(ep_fail), count_bugs_from_warnings(ep_res))
print("Infer", count_bugs_from_warnings(inf_succ), count_bugs_from_warnings(inf_part), count_bugs_from_warnings(inf_fail), count_bugs_from_warnings(inf_res))
print("Spotbugs", count_bugs_from_warnings(sb_succ), count_bugs_from_warnings(sb_part), count_bugs_from_warnings(sb_fail), count_bugs_from_warnings(sb_res))
print("\nFixed warnings approach\n")
rel_path = './removed_warnings/'
ep_res = load_parsed_ep(rel_path + "ep_warnings.json")
ep_succ = load_parsed_ep(rel_path + "ep_succ.json")
ep_part = load_parsed_ep(rel_path + "ep_part.json")
ep_fail = load_parsed_ep(rel_path + "ep_fail.json")
inf_res = load_parsed_inf(rel_path + "inf_warnings.json")
inf_succ = load_parsed_inf(rel_path + "inf_succ.json")
inf_part = load_parsed_inf(rel_path + "inf_part.json")
inf_fail = load_parsed_inf(rel_path + "inf_fail.json")
sb_res = load_parsed_sb(rel_path + "sb_warnings.json")
sb_succ = load_parsed_sb(rel_path + "sb_succ.json")
sb_part = load_parsed_sb(rel_path + "sb_part.json")
sb_fail = load_parsed_sb(rel_path + "sb_fail.json")
print("Warnings:\n")
print('Tool "Full match" "Partial match" Mismatch Total')
print('"Error Prone"', len(ep_succ), len(ep_part), len(ep_fail), len(ep_res))
print("Infer", len(inf_succ), len(inf_part), len(inf_fail), len(inf_res))
print("Spotbugs", len(sb_succ), len(sb_part), len(sb_fail), len(sb_res))
print("\nBugs:\n")
print('Tool "Full match" "Partial match" Mismatch Total')
print('"Error Prone"', count_bugs_from_warnings(ep_succ), count_bugs_from_warnings(ep_part), count_bugs_from_warnings(ep_fail), count_bugs_from_warnings(ep_res))
print("Infer", count_bugs_from_warnings(inf_succ), count_bugs_from_warnings(inf_part), count_bugs_from_warnings(inf_fail), count_bugs_from_warnings(inf_res))
print("Spotbugs", count_bugs_from_warnings(sb_succ), count_bugs_from_warnings(sb_part), count_bugs_from_warnings(sb_fail), count_bugs_from_warnings(sb_res))
get_manually_inspected_warnings_bugs_combined_approach()
def get_manually_inspected_warnings_bugs_combined_approach():
print("\nCombined approach\n")
rel_path = './diffs_warnings/'
ep_succ1 = load_json_list(rel_path + "ep_succ.json")
ep_part1 = load_json_list(rel_path + "ep_part.json")
ep_fail1 = load_json_list(rel_path + "ep_fail.json")
inf_succ1 = load_json_list(rel_path + "inf_succ.json")
inf_part1 = load_json_list(rel_path + "inf_part.json")
inf_fail1 = load_json_list(rel_path + "inf_fail.json")
sb_succ1 = load_json_list(rel_path + "sb_succ.json")
sb_part1 = load_json_list(rel_path + "sb_part.json")
sb_fail1 = load_json_list(rel_path + "sb_fail.json")
rel_path = './removed_warnings/'
ep_succ2 = load_json_list(rel_path + "ep_succ.json")
ep_part2 = load_json_list(rel_path + "ep_part.json")
ep_fail2 = load_json_list(rel_path + "ep_fail.json")
inf_succ2 = load_json_list(rel_path + "inf_succ.json")
inf_part2 = load_json_list(rel_path + "inf_part.json")
inf_fail2 = load_json_list(rel_path + "inf_fail.json")
sb_succ2 = load_json_list(rel_path + "sb_succ.json")
sb_part2 = load_json_list(rel_path + "sb_part.json")
sb_fail2 = load_json_list(rel_path + "sb_fail.json")
# comnined data #
ep_succ = get_list_of_uniq_jsons(ep_succ1 + ep_succ2)
ep_part = get_list_of_uniq_jsons(ep_part1 + ep_part2)
ep_fail = get_list_of_uniq_jsons(ep_fail1 + ep_fail2)
inf_succ = get_list_of_uniq_jsons(inf_succ1 + inf_succ2)
inf_part = get_list_of_uniq_jsons(inf_part1 + inf_part2)
inf_fail = get_list_of_uniq_jsons(inf_fail1 + inf_fail2)
sb_succ = get_list_of_uniq_jsons(sb_succ1 + sb_succ2)
sb_part = get_list_of_uniq_jsons(sb_part1 + sb_part2)
sb_fail = get_list_of_uniq_jsons(sb_fail1 + sb_fail2)
print("Warnings:\n")
print('Tool "Full match" "Partial match" Mismatch Total')
print('"Error Prone"', len(ep_succ), len(ep_part), len(ep_fail), len(ep_succ) + len(ep_part) + len(ep_fail))
print('Infer', len(inf_succ), len(inf_part), len(inf_fail), len(inf_succ) + len(inf_part) + len(inf_fail))
print('SpotBugs', len(sb_succ), len(sb_part), len(sb_fail), len(sb_succ) + len(sb_part) + len(sb_fail))
print("\nBugs:\n")
print('Tool "Full match" "Partial match" Mismatch Total')
b_succ, b_part, b_fail = len(Counter(p[' Proj'] for p in ep_succ)), len(Counter(p[' Proj'] for p in ep_part)), len(Counter(p[' Proj'] for p in ep_fail))
print('"Error Prone"', b_succ, b_part, b_fail, b_succ + b_part + b_fail)
b_succ, b_part, b_fail = len(Counter(p[' Proj'] for p in inf_succ)), len(Counter(p[' Proj'] for p in inf_part)), len(Counter(p[' Proj'] for p in inf_fail))
print('Infer', b_succ, b_part, b_fail, b_succ + b_part + b_fail)
b_succ, b_part, b_fail = len(Counter(p[' Proj'] for p in sb_succ)), len(Counter(p[' Proj'] for p in sb_part)), len(Counter(p[' Proj'] for p in sb_fail))
print('SpotBugs', b_succ, b_part, b_fail, b_succ + b_part + b_fail)
def get_cand_detected_bugs_tools_sets():
print("\nCandidate and detected bugs by each tool and each approach")
rel_path = './diffs_warnings/'
ep_res1 = load_parsed_ep(rel_path + "ep_warnings.json")
ep_succ1 = load_parsed_ep(rel_path + "ep_succ.json")
ep_part1 = load_parsed_ep(rel_path + "ep_part.json")
inf_res1 = load_parsed_inf(rel_path + "inf_warnings.json")
inf_succ1 = load_parsed_inf(rel_path + "inf_succ.json")
inf_part1 = load_parsed_inf(rel_path + "inf_part.json")
sb_res1 = load_parsed_sb(rel_path + "sb_warnings.json")
sb_succ1 = load_parsed_sb(rel_path + "sb_succ.json")
sb_part1 = load_parsed_sb(rel_path + "sb_part.json")
rel_path = './removed_warnings/'
ep_res2 = load_parsed_ep(rel_path + "ep_warnings.json")
ep_succ2 = load_parsed_ep(rel_path + "ep_succ.json")
ep_part2 = load_parsed_ep(rel_path + "ep_part.json")
inf_res2 = load_parsed_inf(rel_path + "inf_warnings.json")
inf_succ2 = load_parsed_inf(rel_path + "inf_succ.json")
inf_part2 = load_parsed_inf(rel_path + "inf_part.json")
sb_res2 = load_parsed_sb(rel_path + "sb_warnings.json")
sb_succ2 = load_parsed_sb(rel_path + "sb_succ.json")
sb_part2 = load_parsed_sb(rel_path + "sb_part.json")
print("\nCandidate bugs:\n")
print("Tool Diff-based Fixed-based Both")
ep_cand_diff = get_bugs_from_warnings(ep_res1)
ep_cand_fixed = get_bugs_from_warnings(ep_res2)
print('"Error Prone"', len(ep_cand_diff), len(ep_cand_fixed), len(ep_cand_diff & ep_cand_fixed))
inf_cand_diff = get_bugs_from_warnings(inf_res1)
inf_cand_fixed = get_bugs_from_warnings(inf_res2)
print("Infer", len(inf_cand_diff), len(inf_cand_fixed), len(inf_cand_diff & inf_cand_fixed))
sb_cand_diff = get_bugs_from_warnings(sb_res1)
sb_cand_fixed = get_bugs_from_warnings(sb_res2)
print("Spotbugs", len(sb_cand_diff), len(sb_cand_fixed), len(sb_cand_diff & sb_cand_fixed))
print("\nTrue bugs (fully or partially flagged)\n")
print("Tool Diff-based Fixed-based Both")
ep_succ_diff = get_bugs_from_warnings(ep_succ1) | get_bugs_from_warnings(ep_part1)
ep_succ_fixed = get_bugs_from_warnings(ep_succ2) | get_bugs_from_warnings(ep_part2)
print('"Error Prone"', len(ep_succ_diff), len(ep_succ_fixed), len(ep_succ_diff & ep_succ_fixed))
inf_succ_diff = get_bugs_from_warnings(inf_succ1) | get_bugs_from_warnings(inf_part1)
inf_succ_fixed = get_bugs_from_warnings(inf_succ2) | get_bugs_from_warnings(inf_part2)
print("Infer", len(inf_succ_diff), len(inf_succ_fixed), len(inf_succ_diff & inf_succ_fixed))
sb_succ_diff = get_bugs_from_warnings(sb_succ1) | get_bugs_from_warnings(sb_part1)
sb_succ_fixed = get_bugs_from_warnings(sb_succ2) | get_bugs_from_warnings(sb_part2)
print("Spotbugs", len(sb_succ_diff), len(sb_succ_fixed), len(sb_succ_diff & sb_succ_fixed))
print("\nTrue bugs found by all tools\n")
ep_succ = get_bugs_from_warnings(ep_succ1) | get_bugs_from_warnings(ep_succ2) | get_bugs_from_warnings(ep_part1) | get_bugs_from_warnings(ep_part2)
print("Ep:", len(ep_succ))
inf_succ = get_bugs_from_warnings(inf_succ1) | get_bugs_from_warnings(inf_succ2) | get_bugs_from_warnings(inf_part1) | get_bugs_from_warnings(inf_part2)
print("Inf:", len(inf_succ))
sb_succ = get_bugs_from_warnings(sb_succ1) | get_bugs_from_warnings(sb_succ2) | get_bugs_from_warnings(sb_part1) | get_bugs_from_warnings(sb_part2)
print("Sb:", len(sb_succ))
print("Ep & Inf:", len(ep_succ & inf_succ))
print("Ep & Sb:", len(ep_succ & sb_succ))
print("Inf & Sb:", len(inf_succ & sb_succ))
print("Ep & Inf & Sb:", len(ep_succ & inf_succ & sb_succ))
def get_cand_detected_bugs_tools_table():
print("\nAll candidate and detected bugs by each tool and each approach\n")
rel_path = './diffs_warnings/'
ep_res1 = load_parsed_ep(rel_path + "ep_warnings.json")
ep_succ1 = load_parsed_ep(rel_path + "ep_succ.json")
ep_part1 = load_parsed_ep(rel_path + "ep_part.json")
ep_fail1 = load_parsed_ep(rel_path + "ep_fail.json")
inf_res1 = load_parsed_inf(rel_path + "inf_warnings.json")
inf_succ1 = load_parsed_inf(rel_path + "inf_succ.json")
inf_part1 = load_parsed_inf(rel_path + "inf_part.json")
inf_fail1 = load_parsed_inf(rel_path + "inf_fail.json")
sb_res1 = load_parsed_sb(rel_path + "sb_warnings.json")
sb_succ1 = load_parsed_sb(rel_path + "sb_succ.json")
sb_part1 = load_parsed_sb(rel_path + "sb_part.json")
sb_fail1 = load_parsed_sb(rel_path + "sb_fail.json")
rel_path = './removed_warnings/'
ep_res2 = load_parsed_ep(rel_path + "ep_warnings.json")
ep_succ2 = load_parsed_ep(rel_path + "ep_succ.json")
ep_part2 = load_parsed_ep(rel_path + "ep_part.json")
ep_fail2 = load_parsed_ep(rel_path + "ep_fail.json")
inf_res2 = load_parsed_inf(rel_path + "inf_warnings.json")
inf_succ2 = load_parsed_inf(rel_path + "inf_succ.json")
inf_part2 = load_parsed_inf(rel_path + "inf_part.json")
inf_fail2 = load_parsed_inf(rel_path + "inf_fail.json")
sb_res2 = load_parsed_sb(rel_path + "sb_warnings.json")
sb_succ2 = load_parsed_sb(rel_path + "sb_succ.json")
sb_part2 = load_parsed_sb(rel_path + "sb_part.json")
sb_fail2 = load_parsed_sb(rel_path + "sb_fail.json")
bugs = []
bugs.extend(w.proj for w in ep_res1)
bugs.extend(w.proj for w in inf_res1)
bugs.extend(w.proj for w in sb_res1)
bugs.extend(w.proj for w in ep_res2)
bugs.extend(w.proj for w in inf_res2)
bugs.extend(w.proj for w in sb_res2)
bugs = sorted(list(set(bugs)))
print(" Removed Warnings Diffs-based Combined")
print("Tool Ep Inf SB Ep Inf SB Ep Inf SB")
for b in bugs:
entry = b + " "
#####################################
if b in get_bugs_from_warnings(ep_succ1):
entry += "& F "
elif b in get_bugs_from_warnings(ep_part1):
entry += "& P "
elif b in get_bugs_from_warnings(ep_fail1):
entry += "& M "
else:
entry += "& - "
if b in get_bugs_from_warnings(inf_succ1):
entry += "& F "
elif b in get_bugs_from_warnings(inf_part1):
entry += "& P "
elif b in get_bugs_from_warnings(inf_fail1):
entry += "& M "
else:
entry += "& - "
if b in get_bugs_from_warnings(sb_succ1):
entry += "& F "
elif b in get_bugs_from_warnings(sb_part1):
entry += "& P "
elif b in get_bugs_from_warnings(sb_fail1):
entry += "& M "
else:
entry += "& - "
#####################################
if b in get_bugs_from_warnings(ep_succ2):
entry += "& F "
elif b in get_bugs_from_warnings(ep_part2):
entry += "& P "
elif b in get_bugs_from_warnings(ep_fail2):
entry += "& M "
else:
entry += "& - "
if b in get_bugs_from_warnings(inf_succ2):
entry += "& F "
elif b in get_bugs_from_warnings(inf_part2):
entry += "& P "
elif b in get_bugs_from_warnings(inf_fail2):
entry += "& M "
else:
entry += "& - "
if b in get_bugs_from_warnings(sb_succ2):
entry += "& F "
elif b in get_bugs_from_warnings(sb_part2):
entry += "& P "
elif b in get_bugs_from_warnings(sb_fail2):
entry += "& M "
else:
entry += "& - "
#####################################
if b in get_bugs_from_warnings(ep_succ1) or b in get_bugs_from_warnings(ep_succ2):
entry += "& F "
elif b in get_bugs_from_warnings(ep_part1) or b in get_bugs_from_warnings(ep_part2):
entry += "& P "
elif b in get_bugs_from_warnings(ep_fail1) or b in get_bugs_from_warnings(ep_fail2):
entry += "& M "
else:
entry += "& - "
if b in get_bugs_from_warnings(inf_succ1) or b in get_bugs_from_warnings(inf_succ2):
entry += "& F "
elif b in get_bugs_from_warnings(inf_part1) or b in get_bugs_from_warnings(inf_part2):
entry += "& P "
elif b in get_bugs_from_warnings(inf_fail1) or b in get_bugs_from_warnings(inf_fail2):
entry += "& M "
else:
entry += "& - "
if b in get_bugs_from_warnings(sb_succ1) or b in get_bugs_from_warnings(sb_succ2):
entry += "& F "
elif b in get_bugs_from_warnings(sb_part1) or b in get_bugs_from_warnings(sb_part2):
entry += "& P "
elif b in get_bugs_from_warnings(sb_fail1) or b in get_bugs_from_warnings(sb_fail2):
entry += "& M "
else:
entry += "& - "
entry += "\\\\"
print(entry)
print()
def get_true_detected_bugs_by_each_tool():
rel_path = './diffs_warnings/'
ep_res1 = load_parsed_ep(rel_path + "ep_warnings.json")
ep_succ1 = load_parsed_ep(rel_path + "ep_succ.json")
ep_part1 = load_parsed_ep(rel_path + "ep_part.json")
inf_res1 = load_parsed_inf(rel_path + "inf_warnings.json")
inf_succ1 = load_parsed_inf(rel_path + "inf_succ.json")
inf_part1 = load_parsed_inf(rel_path + "inf_part.json")
sb_res1 = load_parsed_sb(rel_path + "sb_warnings.json")
sb_succ1 = load_parsed_sb(rel_path + "sb_succ.json")
sb_part1 = load_parsed_sb(rel_path + "sb_part.json")
rel_path = './removed_warnings/'
ep_res2 = load_parsed_ep(rel_path + "ep_warnings.json")
ep_succ2 = load_parsed_ep(rel_path + "ep_succ.json")
ep_part2 = load_parsed_ep(rel_path + "ep_part.json")
inf_res2 = load_parsed_inf(rel_path + "inf_warnings.json")
inf_succ2 = load_parsed_inf(rel_path + "inf_succ.json")
inf_part2 = load_parsed_inf(rel_path + "inf_part.json")
sb_res2 = load_parsed_sb(rel_path + "sb_warnings.json")
sb_succ2 = load_parsed_sb(rel_path + "sb_succ.json")
sb_part2 = load_parsed_sb(rel_path + "sb_part.json")
print("\nTrue bugs found by each tool\n")
ep_succ = get_bugs_from_warnings(ep_succ1) | get_bugs_from_warnings(ep_succ2) | get_bugs_from_warnings(ep_part1) | get_bugs_from_warnings(ep_part2)
print("Ep:", len(ep_succ))
with open(os.path.join(os.getcwd(), "ep_detected"), 'w') as f:
f.write("\n".join(i for i in ep_succ))
inf_succ = get_bugs_from_warnings(inf_succ1) | get_bugs_from_warnings(inf_succ2) | get_bugs_from_warnings(inf_part1) | get_bugs_from_warnings(inf_part2)
print("Inf:", len(inf_succ))
with open(os.path.join(os.getcwd(), "inf_detected"), 'w') as f:
f.write("\n".join(i for i in inf_succ))
sb_succ = get_bugs_from_warnings(sb_succ1) | get_bugs_from_warnings(sb_succ2) | get_bugs_from_warnings(sb_part1) | get_bugs_from_warnings(sb_part2)
print("Sb:", len(sb_succ))
with open(os.path.join(os.getcwd(), "sb_detected"), 'w') as f:
f.write("\n".join(i for i in sb_succ))
print()
''' this script has to be run from the results/ directory '''
if __name__ == '__main__':
# display_min_max_avg_warnings_per_bug_total()
# get_warnings_bugs_from_each_approach()
# get_manually_inspected_warnings_bugs()
# get_cand_detected_bugs_tools_sets()
# get_cand_detected_bugs_tools_table()
get_true_detected_bugs_by_each_tool()
| 46.062 | 174 | 0.682298 | 3,668 | 23,031 | 3.849237 | 0.042257 | 0.067427 | 0.124655 | 0.114385 | 0.881011 | 0.856435 | 0.809406 | 0.760819 | 0.704937 | 0.630569 | 0 | 0.012613 | 0.194434 | 23,031 | 499 | 175 | 46.154309 | 0.748396 | 0.011202 | 0 | 0.548969 | 0 | 0 | 0.176049 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.025773 | false | 0 | 0.010309 | 0 | 0.041237 | 0.226804 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3db2e02d6403d3fcacebc1c1c20fd15a880b54dc | 95 | py | Python | snsims_previous/snsims/__init__.py | rbiswas4/SNsims | 8df1fec13bcb8b06cbb460663d96625854aa97fe | [
"MIT"
] | 1 | 2017-09-01T16:17:32.000Z | 2017-09-01T16:17:32.000Z | snsims_previous/snsims/__init__.py | rbiswas4/SNsims | 8df1fec13bcb8b06cbb460663d96625854aa97fe | [
"MIT"
] | 68 | 2015-04-16T22:54:30.000Z | 2018-02-09T23:34:45.000Z | snsims_previous/snsims/__init__.py | rbiswas4/SNsims | 8df1fec13bcb8b06cbb460663d96625854aa97fe | [
"MIT"
] | 2 | 2015-02-10T19:19:04.000Z | 2015-05-05T20:45:39.000Z | from . import snObject
# from . import snUniverse
# from . import models
from . import sources
| 19 | 26 | 0.747368 | 12 | 95 | 5.916667 | 0.5 | 0.56338 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.189474 | 95 | 4 | 27 | 23.75 | 0.922078 | 0.473684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9a8590c07dec3545e1be5f31f284e2d77329a43d | 31 | py | Python | datasets/preprocess/__init__.py | 0one2/CVAHMR | b262b618c3058db15de11cc938dca111c5887081 | [
"BSD-3-Clause"
] | null | null | null | datasets/preprocess/__init__.py | 0one2/CVAHMR | b262b618c3058db15de11cc938dca111c5887081 | [
"BSD-3-Clause"
] | null | null | null | datasets/preprocess/__init__.py | 0one2/CVAHMR | b262b618c3058db15de11cc938dca111c5887081 | [
"BSD-3-Clause"
] | null | null | null | from .coco import coco_extract
| 15.5 | 30 | 0.83871 | 5 | 31 | 5 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9a8d61e8bc78fc59875ef5bcb3e6c4ba07be2f89 | 31 | py | Python | connectors/python/flask/flasking/__init__.py | pranavbaburaj/jodb | 1242a6235fd3e976b70e8fa4da83bf1108baed42 | [
"MIT"
] | 2 | 2021-03-29T18:30:48.000Z | 2021-04-10T17:44:34.000Z | connectors/python/flask/flasking/__init__.py | pranavbaburaj/json-based-database | 1242a6235fd3e976b70e8fa4da83bf1108baed42 | [
"MIT"
] | null | null | null | connectors/python/flask/flasking/__init__.py | pranavbaburaj/json-based-database | 1242a6235fd3e976b70e8fa4da83bf1108baed42 | [
"MIT"
] | null | null | null | from .connector import RollerDB | 31 | 31 | 0.870968 | 4 | 31 | 6.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096774 | 31 | 1 | 31 | 31 | 0.964286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9afd890a0ee1ee4647d38a13a7c10c3867c4dde3 | 32 | py | Python | fitfs8/__init__.py | bcarreres/fit_fs8 | b5acd87067803c536316c39d1ec7994caef4d8fa | [
"BSD-3-Clause"
] | null | null | null | fitfs8/__init__.py | bcarreres/fit_fs8 | b5acd87067803c536316c39d1ec7994caef4d8fa | [
"BSD-3-Clause"
] | null | null | null | fitfs8/__init__.py | bcarreres/fit_fs8 | b5acd87067803c536316c39d1ec7994caef4d8fa | [
"BSD-3-Clause"
] | null | null | null | from .fs8_fit import fs8_fitter
| 16 | 31 | 0.84375 | 6 | 32 | 4.166667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071429 | 0.125 | 32 | 1 | 32 | 32 | 0.821429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b12c7de9ef0723871071da5635e7ebafafab5671 | 23 | py | Python | lazycls/ext/_importlib/__init__.py | trisongz/lazycls | 701bad1a358ed3bb136347d0c5eb81de3201f6a3 | [
"MIT"
] | 2 | 2021-12-02T00:13:16.000Z | 2022-02-26T11:18:33.000Z | lazycls/ext/_importlib/__init__.py | trisongz/lazycls | 701bad1a358ed3bb136347d0c5eb81de3201f6a3 | [
"MIT"
] | null | null | null | lazycls/ext/_importlib/__init__.py | trisongz/lazycls | 701bad1a358ed3bb136347d0c5eb81de3201f6a3 | [
"MIT"
] | null | null | null |
from .core import Libs | 11.5 | 22 | 0.782609 | 4 | 23 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.173913 | 23 | 2 | 22 | 11.5 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b151a8ec5e0d6182fad29ec14ea2155369d512da | 38 | py | Python | url2text/__init__.py | B-R-P/url2text | a7e79955d03d5b1b8e3e36dc687420bba0f5e66b | [
"MIT"
] | null | null | null | url2text/__init__.py | B-R-P/url2text | a7e79955d03d5b1b8e3e36dc687420bba0f5e66b | [
"MIT"
] | null | null | null | url2text/__init__.py | B-R-P/url2text | a7e79955d03d5b1b8e3e36dc687420bba0f5e66b | [
"MIT"
] | null | null | null | from url2text.url2text import url2text | 38 | 38 | 0.894737 | 5 | 38 | 6.8 | 0.6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085714 | 0.078947 | 38 | 1 | 38 | 38 | 0.885714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b17d24b4d52e293a7f61e9df4d670b87c3b1625f | 197 | py | Python | earlier-2020/python_mod_tutorials/file_io/tt/print_dir.py | transcendentsky/py_tutorials | fed8e6c8d79f854a1cebcfd5c37297a163846208 | [
"Apache-2.0"
] | 1 | 2018-06-18T12:09:33.000Z | 2018-06-18T12:09:33.000Z | earlier-2020/python_mod_tutorials/file_io/tt/print_dir.py | transcendentsky/py_tutorials | fed8e6c8d79f854a1cebcfd5c37297a163846208 | [
"Apache-2.0"
] | null | null | null | earlier-2020/python_mod_tutorials/file_io/tt/print_dir.py | transcendentsky/py_tutorials | fed8e6c8d79f854a1cebcfd5c37297a163846208 | [
"Apache-2.0"
] | 1 | 2018-06-18T12:13:21.000Z | 2018-06-18T12:13:21.000Z | import os
a = os.path.abspath('.')
print("[print_dir.py] {}".format(a))
print('[print_dir.py] ################')
def test():
s = os.path.abspath('.')
print("[print_dir.py] {}".format(a))
| 19.7 | 40 | 0.532995 | 28 | 197 | 3.642857 | 0.428571 | 0.294118 | 0.382353 | 0.441176 | 0.686275 | 0.686275 | 0.686275 | 0.686275 | 0.686275 | 0 | 0 | 0 | 0.13198 | 197 | 9 | 41 | 21.888889 | 0.596491 | 0 | 0 | 0.285714 | 0 | 0 | 0.340102 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.142857 | false | 0 | 0.142857 | 0 | 0.285714 | 0.428571 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
b196a14cee617cf46ad1b63bf73b8470ccb72d80 | 263 | py | Python | 2019_08_24/dojo.py | samisafatli/dojo | 3751f8413e70da84e928e037193e8cb03f6b3e65 | [
"MIT"
] | 114 | 2015-03-10T22:17:42.000Z | 2022-03-09T17:49:48.000Z | 2019_08_24/dojo.py | samisafatli/dojo | 3751f8413e70da84e928e037193e8cb03f6b3e65 | [
"MIT"
] | 9 | 2018-09-04T12:49:59.000Z | 2019-11-17T21:29:51.000Z | 2019_08_24/dojo.py | samisafatli/dojo | 3751f8413e70da84e928e037193e8cb03f6b3e65 | [
"MIT"
] | 39 | 2015-01-29T01:20:56.000Z | 2022-02-17T16:26:25.000Z | def sum(num1, num2):
return num1 + num2
def sub(num1,num2):
return num1 - num2
def mult(num1, num2):
return num1 * num2
def div(num1, num2):
return num1 / num2
def calc(num1, num2):
return [sum(num1,num2), sub(num1,num2), mult(num1,num2), div(num1,num2)] | 18.785714 | 73 | 0.680608 | 45 | 263 | 3.977778 | 0.2 | 0.581006 | 0.391061 | 0.402235 | 0.558659 | 0.558659 | 0 | 0 | 0 | 0 | 0 | 0.118182 | 0.163498 | 263 | 14 | 73 | 18.785714 | 0.695455 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
4968a1fe49e590caefeb391c105b317c46ed22d0 | 150 | py | Python | py/start/partialstring.py | zhongwei/ztodo | fef4f24e65fb8d571c6c13e6f82d842023e7a8e1 | [
"CC0-1.0"
] | 1 | 2015-09-22T08:28:27.000Z | 2015-09-22T08:28:27.000Z | py/start/partialstring.py | zhongwei/ztodo | fef4f24e65fb8d571c6c13e6f82d842023e7a8e1 | [
"CC0-1.0"
] | null | null | null | py/start/partialstring.py | zhongwei/ztodo | fef4f24e65fb8d571c6c13e6f82d842023e7a8e1 | [
"CC0-1.0"
] | null | null | null | #!/usr/bin/env python
# encoding: utf-8
str = "Hello World!"
print str;
print str[0]
print str[2:5]
print str[2:]
print str * 2
print str + "TEST"
| 11.538462 | 21 | 0.646667 | 28 | 150 | 3.464286 | 0.535714 | 0.494845 | 0.278351 | 0.28866 | 0.268041 | 0 | 0 | 0 | 0 | 0 | 0 | 0.04918 | 0.186667 | 150 | 12 | 22 | 12.5 | 0.745902 | 0.24 | 0 | 0 | 0 | 0 | 0.144144 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.857143 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
49a0a3606ada34a3b41b1730f29767c704474254 | 105 | py | Python | hic/mymodule/__init__.py | zelhar/mg21 | f8392aba7deb63aa85f3d137ef81dea1bb742b41 | [
"MIT"
] | null | null | null | hic/mymodule/__init__.py | zelhar/mg21 | f8392aba7deb63aa85f3d137ef81dea1bb742b41 | [
"MIT"
] | null | null | null | hic/mymodule/__init__.py | zelhar/mg21 | f8392aba7deb63aa85f3d137ef81dea1bb742b41 | [
"MIT"
] | null | null | null | from .matrixpermutationsModule import *
from .hicCoolerModule import *
from .diffusionModule import *
| 15 | 39 | 0.8 | 9 | 105 | 9.333333 | 0.555556 | 0.238095 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 105 | 6 | 40 | 17.5 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7706632a74db385e2bec10b96d0588fe866b4a6d | 600 | py | Python | timelink/mhk/models/base.py | time-link/timelink-py | 60d51bfedb64688aa7603f074d7bbc7432b5e841 | [
"MIT"
] | null | null | null | timelink/mhk/models/base.py | time-link/timelink-py | 60d51bfedb64688aa7603f074d7bbc7432b5e841 | [
"MIT"
] | 3 | 2021-08-02T13:25:46.000Z | 2022-03-27T11:17:59.000Z | timelink/mhk/models/base.py | time-link/timelink-py | 60d51bfedb64688aa7603f074d7bbc7432b5e841 | [
"MIT"
] | null | null | null | # Import all the models, so that Base has them before being
# imported by Alembic
from timelink.mhk.models.base_class import Base # noqa
from timelink.mhk.models.entity import Entity # noqa
from timelink.mhk.models.pom_som_mapper import PomSomMapper, PomClassAttributes # noqa
from timelink.mhk.models.attribute import Attribute # noqa
from timelink.mhk.models.relation import Relation # noqa
from timelink.mhk.models.person import Person # noqa
from timelink.mhk.models.object import Object # noqa
from timelink.mhk.models.act import Act # noqa
from timelink.mhk.models.source import Source # noqa
| 46.153846 | 86 | 0.813333 | 90 | 600 | 5.388889 | 0.344444 | 0.22268 | 0.278351 | 0.389691 | 0.412371 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12 | 600 | 12 | 87 | 50 | 0.918561 | 0.203333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
77576dc59fe8eec27305feeb7e440982c9110804 | 777 | py | Python | kvdroid/jclass/android/view/__init__.py | kengoon/PyAndroidKX | 53b72b51c7b9aec06bbc330e7bf0f2e3a89736e2 | [
"MIT"
] | 1 | 2021-11-22T17:22:53.000Z | 2021-11-22T17:22:53.000Z | kvdroid/jclass/android/view/__init__.py | kengoon/PyAndroidKX | 53b72b51c7b9aec06bbc330e7bf0f2e3a89736e2 | [
"MIT"
] | null | null | null | kvdroid/jclass/android/view/__init__.py | kengoon/PyAndroidKX | 53b72b51c7b9aec06bbc330e7bf0f2e3a89736e2 | [
"MIT"
] | null | null | null | from jnius import autoclass
from kvdroid.jclass import _class_call
def View(*args, instantiate: bool = False):
return _class_call(autoclass('android.view.View'), args, instantiate)
def WindowManager(*args, instantiate: bool = False):
return _class_call(autoclass("android.view.WindowManager"), args, instantiate)
def WindowManagerLayoutParams(*args, instantiate: bool = False):
return _class_call(autoclass("android.view.WindowManager$LayoutParams"), args, instantiate)
def ViewGroupLayoutParams(*args, instantiate: bool = False):
return _class_call(autoclass('android.view.ViewGroup$LayoutParams'), args, instantiate)
def TextureView(*args, instantiate: bool = False):
return _class_call(autoclass('android.view.TextureView'), args, instantiate)
| 33.782609 | 95 | 0.773488 | 87 | 777 | 6.770115 | 0.252874 | 0.254669 | 0.16129 | 0.203735 | 0.544992 | 0.544992 | 0.544992 | 0.544992 | 0.544992 | 0.544992 | 0 | 0 | 0.113256 | 777 | 22 | 96 | 35.318182 | 0.854862 | 0 | 0 | 0 | 0 | 0 | 0.181467 | 0.159588 | 0 | 0 | 0 | 0 | 0 | 1 | 0.416667 | true | 0 | 0.166667 | 0.416667 | 1 | 0 | 0 | 0 | 0 | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
7760622cd2b0fded4e4956266b44d00906f0d389 | 28 | py | Python | src/secml/array/__init__.py | zangobot/secml | 95a293e1201c24256eb7fe2f1d2125cd5f318c8c | [
"Apache-2.0"
] | 63 | 2020-04-20T16:31:16.000Z | 2022-03-29T01:05:35.000Z | src/secml/array/__init__.py | zangobot/secml | 95a293e1201c24256eb7fe2f1d2125cd5f318c8c | [
"Apache-2.0"
] | 5 | 2020-04-21T11:31:39.000Z | 2022-03-24T13:42:56.000Z | src/secml/array/__init__.py | zangobot/secml | 95a293e1201c24256eb7fe2f1d2125cd5f318c8c | [
"Apache-2.0"
] | 8 | 2020-04-21T09:16:42.000Z | 2022-02-23T16:28:43.000Z | from .c_array import CArray
| 14 | 27 | 0.821429 | 5 | 28 | 4.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.