hexsha: f7604203cffef29f3dd74cbcb83c723b4d5a4536
size: 6,407 bytes | ext: py | lang: Python
max_stars_repo_path: test/server/test_rename.py
max_stars_repo_name: BoniLindsley/pymap
max_stars_repo_head_hexsha: b3190d20799a6d342888e51bfc55cdfcbfe3ed26
max_stars_repo_licenses: ["MIT"] | max_stars_count: 18
max_stars_repo_stars_event: 2015-06-04T21:09:37.000Z to 2022-03-04T08:14:31.000Z
max_issues_repo_path/name/head_hexsha/licenses: same as above | max_issues_count: 114
max_issues_repo_issues_event: 2018-10-17T23:11:00.000Z to 2022-03-19T16:59:16.000Z
max_forks_repo_path/name/head_hexsha/licenses: same as above | max_forks_count: 8
max_forks_repo_forks_event: 2015-02-03T19:30:52.000Z to 2021-11-20T12:47:03.000Z
import pytest

from .base import TestBase

pytestmark = pytest.mark.asyncio


class TestMailbox(TestBase):

    async def test_rename(self, imap_server):
        transport = self.new_transport(imap_server)
        transport.push_login()
        transport.push_readline(
            b'status1 STATUS Sent (MESSAGES UIDNEXT UIDVALIDITY)\r\n')
        transport.push_write(
            b'* STATUS Sent (MESSAGES 2 UIDNEXT 103 '
            b'UIDVALIDITY ', (br'\d+', b'uidval1'), b')\r\n'
            b'status1 OK STATUS completed.\r\n')
        transport.push_readline(
            b'rename1 RENAME Sent "Sent Test"\r\n')
        transport.push_write(
            b'rename1 OK RENAME completed.\r\n')
        transport.push_readline(
            b'status1 STATUS Sent (MESSAGES)\r\n')
        transport.push_write(
            b'status1 NO [NONEXISTENT] Mailbox does not exist.\r\n')
        transport.push_readline(
            b'status1 STATUS "Sent Test" (MESSAGES UIDNEXT UIDVALIDITY)\r\n')
        transport.push_write(
            b'* STATUS "Sent Test" (MESSAGES 2 UIDNEXT 103 '
            b'UIDVALIDITY ', (br'\d+', b'uidval2'), b')\r\n'
            b'status1 OK STATUS completed.\r\n')
        transport.push_logout()
        await self.run(transport)
        assert self.matches['uidval1'] == self.matches['uidval2']

    async def test_rename_inbox(self, imap_server):
        transport = self.new_transport(imap_server)
        transport.push_login()
        transport.push_readline(
            b'status1 STATUS INBOX (MESSAGES UIDNEXT UIDVALIDITY)\r\n')
        transport.push_write(
            b'* STATUS INBOX (MESSAGES 4 UIDNEXT 105 '
            b'UIDVALIDITY ', (br'\d+', b'uidval1'), b')\r\n'
            b'status1 OK STATUS completed.\r\n')
        transport.push_readline(
            b'rename1 RENAME INBOX "Inbox Test"\r\n')
        transport.push_write(
            b'rename1 OK RENAME completed.\r\n')
        transport.push_readline(
            b'status1 STATUS INBOX (MESSAGES UIDNEXT UIDVALIDITY)\r\n')
        transport.push_write(
            b'* STATUS INBOX (MESSAGES 0 UIDNEXT 101 '
            b'UIDVALIDITY ', (br'\d+', b'uidval2'), b')\r\n'
            b'status1 OK STATUS completed.\r\n')
        transport.push_readline(
            b'status1 STATUS "Inbox Test" (MESSAGES UIDNEXT UIDVALIDITY)\r\n')
        transport.push_write(
            b'* STATUS "Inbox Test" (MESSAGES 4 UIDNEXT 105 '
            b'UIDVALIDITY ', (br'\d+', b'uidval3'), b')\r\n'
            b'status1 OK STATUS completed.\r\n')
        transport.push_logout()
        await self.run(transport)
        assert self.matches['uidval1'] != self.matches['uidval2']
        assert self.matches['uidval1'] == self.matches['uidval3']

    async def test_rename_inbox_selected(self, imap_server):
        transport = self.new_transport(imap_server)
        transport.push_login()
        transport.push_select(b'INBOX')
        transport.push_readline(
            b'rename1 RENAME INBOX "Inbox Test"\r\n')
        transport.push_write(
            b'rename1 OK RENAME completed.\r\n')
        transport.push_select(b'INBOX', 0, 0, 101, False)
        transport.push_logout()
        await self.run(transport)

    async def test_rename_other_selected(self, imap_server):
        transport = self.new_transport(imap_server)
        transport.push_login()
        transport.push_select(b'Sent')
        transport.push_readline(
            b'rename1 RENAME Sent "Sent Test"\r\n')
        transport.push_write(
            b'* BYE Selected mailbox no longer exists.\r\n'
            b'rename1 OK RENAME completed.\r\n')
        await self.run(transport)

    async def test_rename_selected(self, imap_server):
        transport = self.new_transport(imap_server)
        transport.push_login()
        transport.push_select(b'Sent')
        transport.push_readline(
            b'rename1 RENAME Sent "Sent Test"\r\n')
        transport.push_write(
            b'* BYE Selected mailbox no longer exists.\r\n'
            b'rename1 OK RENAME completed.\r\n')
        await self.run(transport)

    async def test_rename_inferior(self, imap_server):
        transport = self.new_transport(imap_server)
        transport.push_login()
        transport.push_readline(
            b'create1 CREATE Test\r\n')
        transport.push_write(
            b'create1 OK [MAILBOXID (', (br'F[a-f0-9]+', ), b')]'
            b' CREATE completed.\r\n')
        transport.push_readline(
            b'create2 CREATE Test/One\r\n')
        transport.push_write(
            b'create2 OK [MAILBOXID (', (br'F[a-f0-9]+', ), b')]'
            b' CREATE completed.\r\n')
        transport.push_readline(
            b'create3 CREATE Test/One/Two\r\n')
        transport.push_write(
            b'create3 OK [MAILBOXID (', (br'F[a-f0-9]+', ), b')]'
            b' CREATE completed.\r\n')
        transport.push_readline(
            b'delete1 DELETE Test/One\r\n')
        transport.push_write(
            b'delete1 OK DELETE completed.\r\n')
        transport.push_readline(
            b'rename1 RENAME Test Foo\r\n')
        transport.push_write(
            b'rename1 OK RENAME completed.\r\n')
        transport.push_readline(
            b'list1 LIST Test *\r\n')
        transport.push_write(
            b'list1 OK LIST completed.\r\n')
        transport.push_readline(
            b'list2 LIST Foo *\r\n')
        transport.push_write(
            b'* LIST (\\HasChildren) "/" Foo\r\n'
            b'* LIST (\\Noselect \\HasChildren) "/" Foo/One\r\n'
            b'* LIST (\\HasNoChildren) "/" Foo/One/Two\r\n'
            b'list2 OK LIST completed.\r\n')
        transport.push_logout()
        await self.run(transport)

    async def test_rename_mailbox_id(self, imap_server):
        transport = self.new_transport(imap_server)
        transport.push_login()
        transport.push_readline(
            b'create1 CREATE Test\r\n')
        transport.push_write(
            b'create1 OK [MAILBOXID (', (br'F[a-f0-9]+', b'mbxid'), b')]'
            b' CREATE completed.\r\n')
        transport.push_readline(
            b'rename1 RENAME Test Foo\r\n')
        transport.push_write(
            b'rename1 OK RENAME completed.\r\n')
        transport.push_select(b'Foo', unseen=False)
        transport.push_logout()
        await self.run(transport)
        assert self.matches['mbxid1'] == self.matches['mbxid']
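The tests above script both sides of the IMAP conversation: each `push_write` argument is either a literal bytes fragment that the server output must match exactly, or a `(regex, name)` tuple whose match is captured into `self.matches` for later assertions (this is how the `uidval1`/`uidval2` comparisons work). A minimal, self-contained sketch of that capture mechanic follows; `ScriptedTransport` is a hypothetical stand-in written for illustration, not pymap's actual test transport.

```python
import re


class ScriptedTransport:
    """Toy stand-in for pymap's scripted test transport: literal bytes
    fragments must match the server output exactly, while (regex, name)
    tuples capture the matched span into ``matches``."""

    def __init__(self):
        self.matches = {}

    def expect(self, actual: bytes, *fragments) -> None:
        # Walk the expected fragments, consuming `actual` left to right.
        pos = 0
        for frag in fragments:
            if isinstance(frag, tuple):
                pattern, name = frag
                m = re.compile(pattern).match(actual, pos)
                assert m is not None, 'no match for %r at %d' % (pattern, pos)
                self.matches[name] = m.group(0)
                pos = m.end()
            else:
                assert actual.startswith(frag, pos), 'mismatch at %d' % pos
                pos += len(frag)


# The UIDVALIDITY number is unpredictable, so it is captured by regex
# instead of being hard-coded, mirroring (br'\d+', b'uidval1') above.
transport = ScriptedTransport()
transport.expect(b'* STATUS Sent (UIDVALIDITY 4097)\r\n',
                 b'* STATUS Sent (UIDVALIDITY ', (br'\d+', b'uidval1'),
                 b')\r\n')
```

After the call, `transport.matches[b'uidval1']` holds `b'4097'`, which a test can compare against a later capture exactly as `test_rename` does.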
avg_line_length: 40.550633 | max_line_length: 78 | alphanum_fraction: 0.597628
hexsha: f7838bf903a2453a819d10230da127f83e98a39d
size: 61,642 bytes | ext: py | lang: Python
max_stars_repo_path: cisco-ios-xr/ydk/models/cisco_ios_xr/Cisco_IOS_XR_infra_correlator_oper.py
max_stars_repo_name: Maikor/ydk-py
max_stars_repo_head_hexsha: b86c4a7c570ae3b2c5557d098420446df5de4929
max_stars_repo_licenses: ["ECL-2.0", "Apache-2.0"]
max_issues / max_forks repo path/name/head_hexsha/licenses: same as above
max_stars_count, max_issues_count, max_forks_count and all event datetimes: null

""" Cisco_IOS_XR_infra_correlator_oper
This module contains a collection of YANG definitions
for Cisco IOS\-XR infra\-correlator package operational data.

This module contains definitions
for the following management objects\:
  suppression\: Suppression operational data
  correlator\: correlator

Copyright (c) 2013\-2018 by Cisco Systems, Inc.
All rights reserved.

"""
from collections import OrderedDict

from ydk.types import Entity, EntityPath, Identity, Enum, YType, YLeaf, YLeafList, YList, LeafDataList, Bits, Empty, Decimal64
from ydk.filters import YFilter
from ydk.errors import YError, YModelError
from ydk.errors.error_handler import handle_type_error as _handle_type_error
class AcRuleState(Enum):
    """
    AcRuleState (Enum Class)

    Ac rule state

    .. data:: rule_unapplied = 0

        Rule is in Unapplied state

    .. data:: rule_applied = 1

        Rule is Applied to specified RacksSlots,
        Contexts and Sources

    .. data:: rule_applied_all = 2

        Rule is Applied to all of router

    """

    rule_unapplied = Enum.YLeaf(0, "rule-unapplied")

    rule_applied = Enum.YLeaf(1, "rule-applied")

    rule_applied_all = Enum.YLeaf(2, "rule-applied-all")


class AlAlarmBistate(Enum):
    """
    AlAlarmBistate (Enum Class)

    Al alarm bistate

    .. data:: not_available = 0

        not available

    .. data:: active = 1

        active

    .. data:: clear = 2

        clear

    """

    not_available = Enum.YLeaf(0, "not-available")

    active = Enum.YLeaf(1, "active")

    clear = Enum.YLeaf(2, "clear")


class AlAlarmSeverity(Enum):
    """
    AlAlarmSeverity (Enum Class)

    Al alarm severity

    .. data:: unknown = -1

        unknown

    .. data:: emergency = 0

        emergency

    .. data:: alert = 1

        alert

    .. data:: critical = 2

        critical

    .. data:: error = 3

        error

    .. data:: warning = 4

        warning

    .. data:: notice = 5

        notice

    .. data:: informational = 6

        informational

    .. data:: debugging = 7

        debugging

    """

    unknown = Enum.YLeaf(-1, "unknown")

    emergency = Enum.YLeaf(0, "emergency")

    alert = Enum.YLeaf(1, "alert")

    critical = Enum.YLeaf(2, "critical")

    error = Enum.YLeaf(3, "error")

    warning = Enum.YLeaf(4, "warning")

    notice = Enum.YLeaf(5, "notice")

    informational = Enum.YLeaf(6, "informational")

    debugging = Enum.YLeaf(7, "debugging")
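`AlAlarmSeverity` follows the syslog severity ordering, with lower numbers meaning more urgent (plus `-1` for unknown). Outside the generated bindings, the same mapping can be mirrored with a stdlib `IntEnum`, which makes severity comparison and sorting natural; this is an illustrative sketch, not part of ydk-py.

```python
from enum import IntEnum


class AlarmSeverity(IntEnum):
    # Mirrors the AlAlarmSeverity leaf values from the YANG model above.
    UNKNOWN = -1
    EMERGENCY = 0
    ALERT = 1
    CRITICAL = 2
    ERROR = 3
    WARNING = 4
    NOTICE = 5
    INFORMATIONAL = 6
    DEBUGGING = 7


# Lower value == more urgent, so min() over known severities picks the
# alarm to surface first; UNKNOWN (-1) is filtered out of the ranking.
alarms = [AlarmSeverity.WARNING, AlarmSeverity.CRITICAL, AlarmSeverity.NOTICE]
most_urgent = min(a for a in alarms if a >= AlarmSeverity.EMERGENCY)
```

Because `IntEnum` members compare as integers, `most_urgent` here is `AlarmSeverity.CRITICAL`.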
class Suppression(Entity):
    """
    Suppression operational data

    .. attribute:: rule_summaries

        Table that contains the database of suppression rule summary

        **type**\: :py:class:`RuleSummaries <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Suppression.RuleSummaries>`

    .. attribute:: rule_details

        Table that contains the database of suppression rule details

        **type**\: :py:class:`RuleDetails <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Suppression.RuleDetails>`

    """

    _prefix = 'infra-correlator-oper'
    _revision = '2017-09-07'

    def __init__(self):
        super(Suppression, self).__init__()
        self._top_entity = None

        self.yang_name = "suppression"
        self.yang_parent_name = "Cisco-IOS-XR-infra-correlator-oper"
        self.is_top_level_class = True
        self.has_list_ancestor = False
        self.ylist_key_names = []
        self._child_classes = OrderedDict([("rule-summaries", ("rule_summaries", Suppression.RuleSummaries)), ("rule-details", ("rule_details", Suppression.RuleDetails))])
        self._leafs = OrderedDict()

        self.rule_summaries = Suppression.RuleSummaries()
        self.rule_summaries.parent = self
        self._children_name_map["rule_summaries"] = "rule-summaries"

        self.rule_details = Suppression.RuleDetails()
        self.rule_details.parent = self
        self._children_name_map["rule_details"] = "rule-details"
        self._segment_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:suppression"
        self._is_frozen = True

    def __setattr__(self, name, value):
        self._perform_setattr(Suppression, [], name, value)


    class RuleSummaries(Entity):
        """
        Table that contains the database of suppression
        rule summary

        .. attribute:: rule_summary

            One of the suppression rules

            **type**\: list of :py:class:`RuleSummary <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Suppression.RuleSummaries.RuleSummary>`

        """

        _prefix = 'infra-correlator-oper'
        _revision = '2017-09-07'

        def __init__(self):
            super(Suppression.RuleSummaries, self).__init__()

            self.yang_name = "rule-summaries"
            self.yang_parent_name = "suppression"
            self.is_top_level_class = False
            self.has_list_ancestor = False
            self.ylist_key_names = []
            self._child_classes = OrderedDict([("rule-summary", ("rule_summary", Suppression.RuleSummaries.RuleSummary))])
            self._leafs = OrderedDict()

            self.rule_summary = YList(self)
            self._segment_path = lambda: "rule-summaries"
            self._absolute_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:suppression/%s" % self._segment_path()
            self._is_frozen = True

        def __setattr__(self, name, value):
            self._perform_setattr(Suppression.RuleSummaries, [], name, value)


        class RuleSummary(Entity):
            """
            One of the suppression rules

            .. attribute:: rule_name (key)

                Suppression Rule Name

                **type**\: str

                **length:** 1..32

            .. attribute:: rule_name_xr

                Suppress Rule Name

                **type**\: str

            .. attribute:: rule_state

                Applied state of the rule It could be not applied, applied or applied to all

                **type**\: :py:class:`AcRuleState <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.AcRuleState>`

            .. attribute:: suppressed_alarms_count

                Number of suppressed alarms associated with this rule

                **type**\: int

                **range:** 0..4294967295

            """

            _prefix = 'infra-correlator-oper'
            _revision = '2017-09-07'

            def __init__(self):
                super(Suppression.RuleSummaries.RuleSummary, self).__init__()

                self.yang_name = "rule-summary"
                self.yang_parent_name = "rule-summaries"
                self.is_top_level_class = False
                self.has_list_ancestor = False
                self.ylist_key_names = ['rule_name']
                self._child_classes = OrderedDict([])
                self._leafs = OrderedDict([
                    ('rule_name', (YLeaf(YType.str, 'rule-name'), ['str'])),
                    ('rule_name_xr', (YLeaf(YType.str, 'rule-name-xr'), ['str'])),
                    ('rule_state', (YLeaf(YType.enumeration, 'rule-state'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper', 'AcRuleState', '')])),
                    ('suppressed_alarms_count', (YLeaf(YType.uint32, 'suppressed-alarms-count'), ['int'])),
                ])
                self.rule_name = None
                self.rule_name_xr = None
                self.rule_state = None
                self.suppressed_alarms_count = None
                self._segment_path = lambda: "rule-summary" + "[rule-name='" + str(self.rule_name) + "']"
                self._absolute_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:suppression/rule-summaries/%s" % self._segment_path()
                self._is_frozen = True

            def __setattr__(self, name, value):
                self._perform_setattr(Suppression.RuleSummaries.RuleSummary, ['rule_name', u'rule_name_xr', u'rule_state', u'suppressed_alarms_count'], name, value)


    class RuleDetails(Entity):
        """
        Table that contains the database of suppression
        rule details

        .. attribute:: rule_detail

            Details of one of the suppression rules

            **type**\: list of :py:class:`RuleDetail <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Suppression.RuleDetails.RuleDetail>`

        """

        _prefix = 'infra-correlator-oper'
        _revision = '2017-09-07'

        def __init__(self):
            super(Suppression.RuleDetails, self).__init__()

            self.yang_name = "rule-details"
            self.yang_parent_name = "suppression"
            self.is_top_level_class = False
            self.has_list_ancestor = False
            self.ylist_key_names = []
            self._child_classes = OrderedDict([("rule-detail", ("rule_detail", Suppression.RuleDetails.RuleDetail))])
            self._leafs = OrderedDict()

            self.rule_detail = YList(self)
            self._segment_path = lambda: "rule-details"
            self._absolute_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:suppression/%s" % self._segment_path()
            self._is_frozen = True

        def __setattr__(self, name, value):
            self._perform_setattr(Suppression.RuleDetails, [], name, value)


        class RuleDetail(Entity):
            """
            Details of one of the suppression rules

            .. attribute:: rule_name (key)

                Suppression Rule Name

                **type**\: str

                **length:** 1..32

            .. attribute:: rule_summary

                Rule summary, name, etc

                **type**\: :py:class:`RuleSummary <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Suppression.RuleDetails.RuleDetail.RuleSummary>`

            .. attribute:: all_alarms

                Match any alarm

                **type**\: bool

            .. attribute:: alarm_severity

                Severity level to suppress

                **type**\: :py:class:`AlAlarmSeverity <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.AlAlarmSeverity>`

            .. attribute:: apply_source

                Sources (R/S/M) to which the rule is applied

                **type**\: list of str

                **pattern:** ([a\-zA\-Z0\-9\_]\*\\d+/){1,2}([a\-zA\-Z0\-9\_]\*\\d+)

            .. attribute:: codes

                Message codes defining the rule

                **type**\: list of :py:class:`Codes <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Suppression.RuleDetails.RuleDetail.Codes>`

            """

            _prefix = 'infra-correlator-oper'
            _revision = '2017-09-07'

            def __init__(self):
                super(Suppression.RuleDetails.RuleDetail, self).__init__()

                self.yang_name = "rule-detail"
                self.yang_parent_name = "rule-details"
                self.is_top_level_class = False
                self.has_list_ancestor = False
                self.ylist_key_names = ['rule_name']
                self._child_classes = OrderedDict([("rule-summary", ("rule_summary", Suppression.RuleDetails.RuleDetail.RuleSummary)), ("codes", ("codes", Suppression.RuleDetails.RuleDetail.Codes))])
                self._leafs = OrderedDict([
                    ('rule_name', (YLeaf(YType.str, 'rule-name'), ['str'])),
                    ('all_alarms', (YLeaf(YType.boolean, 'all-alarms'), ['bool'])),
                    ('alarm_severity', (YLeaf(YType.enumeration, 'alarm-severity'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper', 'AlAlarmSeverity', '')])),
                    ('apply_source', (YLeafList(YType.str, 'apply-source'), ['str'])),
                ])
                self.rule_name = None
                self.all_alarms = None
                self.alarm_severity = None
                self.apply_source = []

                self.rule_summary = Suppression.RuleDetails.RuleDetail.RuleSummary()
                self.rule_summary.parent = self
                self._children_name_map["rule_summary"] = "rule-summary"

                self.codes = YList(self)
                self._segment_path = lambda: "rule-detail" + "[rule-name='" + str(self.rule_name) + "']"
                self._absolute_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:suppression/rule-details/%s" % self._segment_path()
                self._is_frozen = True

            def __setattr__(self, name, value):
                self._perform_setattr(Suppression.RuleDetails.RuleDetail, ['rule_name', u'all_alarms', u'alarm_severity', u'apply_source'], name, value)


            class RuleSummary(Entity):
                """
                Rule summary, name, etc

                .. attribute:: rule_name_xr

                    Suppress Rule Name

                    **type**\: str

                .. attribute:: rule_state

                    Applied state of the rule It could be not applied, applied or applied to all

                    **type**\: :py:class:`AcRuleState <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.AcRuleState>`

                .. attribute:: suppressed_alarms_count

                    Number of suppressed alarms associated with this rule

                    **type**\: int

                    **range:** 0..4294967295

                """

                _prefix = 'infra-correlator-oper'
                _revision = '2017-09-07'

                def __init__(self):
                    super(Suppression.RuleDetails.RuleDetail.RuleSummary, self).__init__()

                    self.yang_name = "rule-summary"
                    self.yang_parent_name = "rule-detail"
                    self.is_top_level_class = False
                    self.has_list_ancestor = True
                    self.ylist_key_names = []
                    self._child_classes = OrderedDict([])
                    self._leafs = OrderedDict([
                        ('rule_name_xr', (YLeaf(YType.str, 'rule-name-xr'), ['str'])),
                        ('rule_state', (YLeaf(YType.enumeration, 'rule-state'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper', 'AcRuleState', '')])),
                        ('suppressed_alarms_count', (YLeaf(YType.uint32, 'suppressed-alarms-count'), ['int'])),
                    ])
                    self.rule_name_xr = None
                    self.rule_state = None
                    self.suppressed_alarms_count = None
                    self._segment_path = lambda: "rule-summary"
                    self._is_frozen = True

                def __setattr__(self, name, value):
                    self._perform_setattr(Suppression.RuleDetails.RuleDetail.RuleSummary, [u'rule_name_xr', u'rule_state', u'suppressed_alarms_count'], name, value)


            class Codes(Entity):
                """
                Message codes defining the rule.

                .. attribute:: category

                    Category of messages to which this alarm belongs

                    **type**\: str

                .. attribute:: group

                    Group of messages to which this alarm belongs

                    **type**\: str

                .. attribute:: code

                    Alarm code which further qualifies the alarm within a message group

                    **type**\: str

                """

                _prefix = 'infra-correlator-oper'
                _revision = '2017-09-07'

                def __init__(self):
                    super(Suppression.RuleDetails.RuleDetail.Codes, self).__init__()

                    self.yang_name = "codes"
                    self.yang_parent_name = "rule-detail"
                    self.is_top_level_class = False
                    self.has_list_ancestor = True
                    self.ylist_key_names = []
                    self._child_classes = OrderedDict([])
                    self._leafs = OrderedDict([
                        ('category', (YLeaf(YType.str, 'category'), ['str'])),
                        ('group', (YLeaf(YType.str, 'group'), ['str'])),
                        ('code', (YLeaf(YType.str, 'code'), ['str'])),
                    ])
                    self.category = None
                    self.group = None
                    self.code = None
                    self._segment_path = lambda: "codes"
                    self._is_frozen = True

                def __setattr__(self, name, value):
                    self._perform_setattr(Suppression.RuleDetails.RuleDetail.Codes, [u'category', u'group', u'code'], name, value)

    def clone_ptr(self):
        self._top_entity = Suppression()
        return self._top_entity
class Correlator(Entity):
    """
    correlator

    .. attribute:: rules

        Table that contains the database of correlation rules

        **type**\: :py:class:`Rules <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Correlator.Rules>`

    .. attribute:: buffer_status

        Describes buffer utilization and parameters configured

        **type**\: :py:class:`BufferStatus <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Correlator.BufferStatus>`

    .. attribute:: alarms

        Correlated alarms Table

        **type**\: :py:class:`Alarms <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Correlator.Alarms>`

    .. attribute:: rule_set_summaries

        Table that contains the ruleset summary info

        **type**\: :py:class:`RuleSetSummaries <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Correlator.RuleSetSummaries>`

    .. attribute:: rule_set_details

        Table that contains the ruleset detail info

        **type**\: :py:class:`RuleSetDetails <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Correlator.RuleSetDetails>`

    .. attribute:: rule_details

        Table that contains the database of correlation rule details

        **type**\: :py:class:`RuleDetails <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Correlator.RuleDetails>`

    .. attribute:: rule_summaries

        Table that contains the database of correlation rule summary

        **type**\: :py:class:`RuleSummaries <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Correlator.RuleSummaries>`

    """

    _prefix = 'infra-correlator-oper'
    _revision = '2017-09-07'

    def __init__(self):
        super(Correlator, self).__init__()
        self._top_entity = None

        self.yang_name = "correlator"
        self.yang_parent_name = "Cisco-IOS-XR-infra-correlator-oper"
        self.is_top_level_class = True
        self.has_list_ancestor = False
        self.ylist_key_names = []
        self._child_classes = OrderedDict([("rules", ("rules", Correlator.Rules)), ("buffer-status", ("buffer_status", Correlator.BufferStatus)), ("alarms", ("alarms", Correlator.Alarms)), ("rule-set-summaries", ("rule_set_summaries", Correlator.RuleSetSummaries)), ("rule-set-details", ("rule_set_details", Correlator.RuleSetDetails)), ("rule-details", ("rule_details", Correlator.RuleDetails)), ("rule-summaries", ("rule_summaries", Correlator.RuleSummaries))])
        self._leafs = OrderedDict()

        self.rules = Correlator.Rules()
        self.rules.parent = self
        self._children_name_map["rules"] = "rules"

        self.buffer_status = Correlator.BufferStatus()
        self.buffer_status.parent = self
        self._children_name_map["buffer_status"] = "buffer-status"

        self.alarms = Correlator.Alarms()
        self.alarms.parent = self
        self._children_name_map["alarms"] = "alarms"

        self.rule_set_summaries = Correlator.RuleSetSummaries()
        self.rule_set_summaries.parent = self
        self._children_name_map["rule_set_summaries"] = "rule-set-summaries"

        self.rule_set_details = Correlator.RuleSetDetails()
        self.rule_set_details.parent = self
        self._children_name_map["rule_set_details"] = "rule-set-details"

        self.rule_details = Correlator.RuleDetails()
        self.rule_details.parent = self
        self._children_name_map["rule_details"] = "rule-details"

        self.rule_summaries = Correlator.RuleSummaries()
        self.rule_summaries.parent = self
        self._children_name_map["rule_summaries"] = "rule-summaries"
        self._segment_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:correlator"
        self._is_frozen = True

    def __setattr__(self, name, value):
        self._perform_setattr(Correlator, [], name, value)


    class Rules(Entity):
        """
        Table that contains the database of correlation
        rules

        .. attribute:: rule

            One of the correlation rules

            **type**\: list of :py:class:`Rule <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Correlator.Rules.Rule>`

        """

        _prefix = 'infra-correlator-oper'
        _revision = '2017-09-07'

        def __init__(self):
            super(Correlator.Rules, self).__init__()

            self.yang_name = "rules"
            self.yang_parent_name = "correlator"
            self.is_top_level_class = False
            self.has_list_ancestor = False
            self.ylist_key_names = []
            self._child_classes = OrderedDict([("rule", ("rule", Correlator.Rules.Rule))])
            self._leafs = OrderedDict()

            self.rule = YList(self)
            self._segment_path = lambda: "rules"
            self._absolute_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:correlator/%s" % self._segment_path()
            self._is_frozen = True

        def __setattr__(self, name, value):
            self._perform_setattr(Correlator.Rules, [], name, value)


        class Rule(Entity):
            """
            One of the correlation rules

            .. attribute:: rule_name (key)

                Correlation Rule Name

                **type**\: str

                **length:** 1..32

            .. attribute:: rule_name_xr

                Correlation Rule Name

                **type**\: str

            .. attribute:: timeout

                Time window (in ms) for which root/all messages are kept in correlater before sending them to the logger

                **type**\: int

                **range:** 0..4294967295

            .. attribute:: rule_state

                Applied state of the rule It could be not applied, applied or applied to all

                **type**\: :py:class:`AcRuleState <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.AcRuleState>`

            .. attribute:: apply_location

                Locations (R/S/M) to which the rule is applied

                **type**\: list of str

                **pattern:** ([a\-zA\-Z0\-9\_]\*\\d+/){1,2}([a\-zA\-Z0\-9\_]\*\\d+)

            .. attribute:: apply_context

                Contexts (Interfaces) to which the rule is applied

                **type**\: list of str

                **length:** 0..33

            .. attribute:: codes

                Message codes defining the rule

                **type**\: list of :py:class:`Codes <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Correlator.Rules.Rule.Codes>`

            """

            _prefix = 'infra-correlator-oper'
            _revision = '2017-09-07'

            def __init__(self):
                super(Correlator.Rules.Rule, self).__init__()

                self.yang_name = "rule"
                self.yang_parent_name = "rules"
                self.is_top_level_class = False
                self.has_list_ancestor = False
                self.ylist_key_names = ['rule_name']
                self._child_classes = OrderedDict([("codes", ("codes", Correlator.Rules.Rule.Codes))])
                self._leafs = OrderedDict([
                    ('rule_name', (YLeaf(YType.str, 'rule-name'), ['str'])),
                    ('rule_name_xr', (YLeaf(YType.str, 'rule-name-xr'), ['str'])),
                    ('timeout', (YLeaf(YType.uint32, 'timeout'), ['int'])),
                    ('rule_state', (YLeaf(YType.enumeration, 'rule-state'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper', 'AcRuleState', '')])),
                    ('apply_location', (YLeafList(YType.str, 'apply-location'), ['str'])),
                    ('apply_context', (YLeafList(YType.str, 'apply-context'), ['str'])),
                ])
                self.rule_name = None
                self.rule_name_xr = None
                self.timeout = None
                self.rule_state = None
                self.apply_location = []
                self.apply_context = []

                self.codes = YList(self)
                self._segment_path = lambda: "rule" + "[rule-name='" + str(self.rule_name) + "']"
                self._absolute_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:correlator/rules/%s" % self._segment_path()
                self._is_frozen = True

            def __setattr__(self, name, value):
                self._perform_setattr(Correlator.Rules.Rule, ['rule_name', u'rule_name_xr', u'timeout', u'rule_state', u'apply_location', u'apply_context'], name, value)


            class Codes(Entity):
                """
                Message codes defining the rule.

                .. attribute:: category

                    Category of messages to which this alarm belongs

                    **type**\: str

                .. attribute:: group

                    Group of messages to which this alarm belongs

                    **type**\: str

                .. attribute:: code

                    Alarm code which further qualifies the alarm within a message group

                    **type**\: str

                """

                _prefix = 'infra-correlator-oper'
                _revision = '2017-09-07'

                def __init__(self):
                    super(Correlator.Rules.Rule.Codes, self).__init__()

                    self.yang_name = "codes"
                    self.yang_parent_name = "rule"
                    self.is_top_level_class = False
                    self.has_list_ancestor = True
                    self.ylist_key_names = []
                    self._child_classes = OrderedDict([])
                    self._leafs = OrderedDict([
                        ('category', (YLeaf(YType.str, 'category'), ['str'])),
                        ('group', (YLeaf(YType.str, 'group'), ['str'])),
                        ('code', (YLeaf(YType.str, 'code'), ['str'])),
                    ])
                    self.category = None
                    self.group = None
                    self.code = None
                    self._segment_path = lambda: "codes"
                    self._is_frozen = True

                def __setattr__(self, name, value):
                    self._perform_setattr(Correlator.Rules.Rule.Codes, [u'category', u'group', u'code'], name, value)


    class BufferStatus(Entity):
        """
        Describes buffer utilization and parameters
        configured

        .. attribute:: current_size

            Current buffer usage

            **type**\: int

            **range:** 0..4294967295

        .. attribute:: configured_size

            Configured buffer size

            **type**\: int

            **range:** 0..4294967295

        """

        _prefix = 'infra-correlator-oper'
        _revision = '2017-09-07'

        def __init__(self):
            super(Correlator.BufferStatus, self).__init__()

            self.yang_name = "buffer-status"
            self.yang_parent_name = "correlator"
            self.is_top_level_class = False
            self.has_list_ancestor = False
            self.ylist_key_names = []
            self._child_classes = OrderedDict([])
            self._leafs = OrderedDict([
                ('current_size', (YLeaf(YType.uint32, 'current-size'), ['int'])),
                ('configured_size', (YLeaf(YType.uint32, 'configured-size'), ['int'])),
            ])
            self.current_size = None
            self.configured_size = None
            self._segment_path = lambda: "buffer-status"
            self._absolute_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:correlator/%s" % self._segment_path()
            self._is_frozen = True

        def __setattr__(self, name, value):
            self._perform_setattr(Correlator.BufferStatus, [u'current_size', u'configured_size'], name, value)


    class Alarms(Entity):
        """
        Correlated alarms Table

        .. attribute:: alarm

            One of the correlated alarms

            **type**\: list of :py:class:`Alarm <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Correlator.Alarms.Alarm>`

        """

        _prefix = 'infra-correlator-oper'
        _revision = '2017-09-07'

        def __init__(self):
            super(Correlator.Alarms, self).__init__()

            self.yang_name = "alarms"
            self.yang_parent_name = "correlator"
            self.is_top_level_class = False
            self.has_list_ancestor = False
            self.ylist_key_names = []
            self._child_classes = OrderedDict([("alarm", ("alarm", Correlator.Alarms.Alarm))])
self._leafs = OrderedDict()
self.alarm = YList(self)
self._segment_path = lambda: "alarms"
self._absolute_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:correlator/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Correlator.Alarms, [], name, value)
class Alarm(Entity):
"""
One of the correlated alarms
.. attribute:: alarm_id (key)
Alarm ID
**type**\: int
**range:** 0..4294967295
.. attribute:: alarm_info
Correlated alarm information
**type**\: :py:class:`AlarmInfo <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Correlator.Alarms.Alarm.AlarmInfo>`
.. attribute:: rule_name
Correlation rule name
**type**\: str
.. attribute:: context
Context string for the alarm
**type**\: str
"""
_prefix = 'infra-correlator-oper'
_revision = '2017-09-07'
def __init__(self):
super(Correlator.Alarms.Alarm, self).__init__()
self.yang_name = "alarm"
self.yang_parent_name = "alarms"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = ['alarm_id']
self._child_classes = OrderedDict([("alarm-info", ("alarm_info", Correlator.Alarms.Alarm.AlarmInfo))])
self._leafs = OrderedDict([
('alarm_id', (YLeaf(YType.uint32, 'alarm-id'), ['int'])),
('rule_name', (YLeaf(YType.str, 'rule-name'), ['str'])),
('context', (YLeaf(YType.str, 'context'), ['str'])),
])
self.alarm_id = None
self.rule_name = None
self.context = None
self.alarm_info = Correlator.Alarms.Alarm.AlarmInfo()
self.alarm_info.parent = self
self._children_name_map["alarm_info"] = "alarm-info"
self._segment_path = lambda: "alarm" + "[alarm-id='" + str(self.alarm_id) + "']"
self._absolute_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:correlator/alarms/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Correlator.Alarms.Alarm, ['alarm_id', u'rule_name', u'context'], name, value)
class AlarmInfo(Entity):
"""
Correlated alarm information
.. attribute:: source_id
Source Identifier (Location). Indicates the node in which the alarm was generated
**type**\: str
.. attribute:: timestamp
Time when the alarm was generated. It is expressed as the number of milliseconds since 00\:00\:00 UTC, January 1, 1970
**type**\: int
**range:** 0..18446744073709551615
**units**\: millisecond
.. attribute:: category
Category of the alarm
**type**\: str
.. attribute:: group
Group of messages to which this alarm belongs
**type**\: str
.. attribute:: code
Alarm code which further qualifies the alarm within a message group
**type**\: str
.. attribute:: severity
Severity of the alarm
**type**\: :py:class:`AlAlarmSeverity <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.AlAlarmSeverity>`
.. attribute:: state
State of the alarm (bistate alarms only)
**type**\: :py:class:`AlAlarmBistate <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.AlAlarmBistate>`
.. attribute:: correlation_id
Correlation Identifier
**type**\: int
**range:** 0..4294967295
.. attribute:: is_admin
Indicates whether the event is admin\-level
**type**\: bool
.. attribute:: additional_text
Full text of the Alarm
**type**\: str
"""
_prefix = 'infra-correlator-oper'
_revision = '2017-09-07'
def __init__(self):
super(Correlator.Alarms.Alarm.AlarmInfo, self).__init__()
self.yang_name = "alarm-info"
self.yang_parent_name = "alarm"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('source_id', (YLeaf(YType.str, 'source-id'), ['str'])),
('timestamp', (YLeaf(YType.uint64, 'timestamp'), ['int'])),
('category', (YLeaf(YType.str, 'category'), ['str'])),
('group', (YLeaf(YType.str, 'group'), ['str'])),
('code', (YLeaf(YType.str, 'code'), ['str'])),
('severity', (YLeaf(YType.enumeration, 'severity'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper', 'AlAlarmSeverity', '')])),
('state', (YLeaf(YType.enumeration, 'state'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper', 'AlAlarmBistate', '')])),
('correlation_id', (YLeaf(YType.uint32, 'correlation-id'), ['int'])),
('is_admin', (YLeaf(YType.boolean, 'is-admin'), ['bool'])),
('additional_text', (YLeaf(YType.str, 'additional-text'), ['str'])),
])
self.source_id = None
self.timestamp = None
self.category = None
self.group = None
self.code = None
self.severity = None
self.state = None
self.correlation_id = None
self.is_admin = None
self.additional_text = None
self._segment_path = lambda: "alarm-info"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Correlator.Alarms.Alarm.AlarmInfo, [u'source_id', u'timestamp', u'category', u'group', u'code', u'severity', u'state', u'correlation_id', u'is_admin', u'additional_text'], name, value)
class RuleSetSummaries(Entity):
"""
Table that contains the ruleset summary info
.. attribute:: rule_set_summary
Summary of one of the correlation rulesets
**type**\: list of :py:class:`RuleSetSummary <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Correlator.RuleSetSummaries.RuleSetSummary>`
"""
_prefix = 'infra-correlator-oper'
_revision = '2017-09-07'
def __init__(self):
super(Correlator.RuleSetSummaries, self).__init__()
self.yang_name = "rule-set-summaries"
self.yang_parent_name = "correlator"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("rule-set-summary", ("rule_set_summary", Correlator.RuleSetSummaries.RuleSetSummary))])
self._leafs = OrderedDict()
self.rule_set_summary = YList(self)
self._segment_path = lambda: "rule-set-summaries"
self._absolute_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:correlator/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Correlator.RuleSetSummaries, [], name, value)
class RuleSetSummary(Entity):
"""
Summary of one of the correlation rulesets
.. attribute:: rule_set_name (key)
Ruleset Name
**type**\: str
**length:** 1..32
.. attribute:: rule_set_name_xr
Ruleset Name
**type**\: str
"""
_prefix = 'infra-correlator-oper'
_revision = '2017-09-07'
def __init__(self):
super(Correlator.RuleSetSummaries.RuleSetSummary, self).__init__()
self.yang_name = "rule-set-summary"
self.yang_parent_name = "rule-set-summaries"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = ['rule_set_name']
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('rule_set_name', (YLeaf(YType.str, 'rule-set-name'), ['str'])),
('rule_set_name_xr', (YLeaf(YType.str, 'rule-set-name-xr'), ['str'])),
])
self.rule_set_name = None
self.rule_set_name_xr = None
self._segment_path = lambda: "rule-set-summary" + "[rule-set-name='" + str(self.rule_set_name) + "']"
self._absolute_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:correlator/rule-set-summaries/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Correlator.RuleSetSummaries.RuleSetSummary, ['rule_set_name', u'rule_set_name_xr'], name, value)
class RuleSetDetails(Entity):
"""
Table that contains the ruleset detail info
.. attribute:: rule_set_detail
Detail of one of the correlation rulesets
**type**\: list of :py:class:`RuleSetDetail <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Correlator.RuleSetDetails.RuleSetDetail>`
"""
_prefix = 'infra-correlator-oper'
_revision = '2017-09-07'
def __init__(self):
super(Correlator.RuleSetDetails, self).__init__()
self.yang_name = "rule-set-details"
self.yang_parent_name = "correlator"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("rule-set-detail", ("rule_set_detail", Correlator.RuleSetDetails.RuleSetDetail))])
self._leafs = OrderedDict()
self.rule_set_detail = YList(self)
self._segment_path = lambda: "rule-set-details"
self._absolute_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:correlator/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Correlator.RuleSetDetails, [], name, value)
class RuleSetDetail(Entity):
"""
Detail of one of the correlation rulesets
.. attribute:: rule_set_name (key)
Ruleset Name
**type**\: str
**length:** 1..32
.. attribute:: rule_set_name_xr
Ruleset Name
**type**\: str
.. attribute:: rules
Rules contained in a ruleset
**type**\: list of :py:class:`Rules <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Correlator.RuleSetDetails.RuleSetDetail.Rules>`
"""
_prefix = 'infra-correlator-oper'
_revision = '2017-09-07'
def __init__(self):
super(Correlator.RuleSetDetails.RuleSetDetail, self).__init__()
self.yang_name = "rule-set-detail"
self.yang_parent_name = "rule-set-details"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = ['rule_set_name']
self._child_classes = OrderedDict([("rules", ("rules", Correlator.RuleSetDetails.RuleSetDetail.Rules))])
self._leafs = OrderedDict([
('rule_set_name', (YLeaf(YType.str, 'rule-set-name'), ['str'])),
('rule_set_name_xr', (YLeaf(YType.str, 'rule-set-name-xr'), ['str'])),
])
self.rule_set_name = None
self.rule_set_name_xr = None
self.rules = YList(self)
self._segment_path = lambda: "rule-set-detail" + "[rule-set-name='" + str(self.rule_set_name) + "']"
self._absolute_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:correlator/rule-set-details/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Correlator.RuleSetDetails.RuleSetDetail, ['rule_set_name', u'rule_set_name_xr'], name, value)
class Rules(Entity):
"""
Rules contained in a ruleset
.. attribute:: rule_name_xr
Correlation Rule Name
**type**\: str
.. attribute:: stateful
Whether the rule is stateful
**type**\: bool
.. attribute:: rule_state
Applied state of the rule. It can be not applied, applied, or applied to all
**type**\: :py:class:`AcRuleState <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.AcRuleState>`
.. attribute:: buffered_alarms_count
Number of buffered alarms correlated to this rule
**type**\: int
**range:** 0..4294967295
"""
_prefix = 'infra-correlator-oper'
_revision = '2017-09-07'
def __init__(self):
super(Correlator.RuleSetDetails.RuleSetDetail.Rules, self).__init__()
self.yang_name = "rules"
self.yang_parent_name = "rule-set-detail"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('rule_name_xr', (YLeaf(YType.str, 'rule-name-xr'), ['str'])),
('stateful', (YLeaf(YType.boolean, 'stateful'), ['bool'])),
('rule_state', (YLeaf(YType.enumeration, 'rule-state'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper', 'AcRuleState', '')])),
('buffered_alarms_count', (YLeaf(YType.uint32, 'buffered-alarms-count'), ['int'])),
])
self.rule_name_xr = None
self.stateful = None
self.rule_state = None
self.buffered_alarms_count = None
self._segment_path = lambda: "rules"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Correlator.RuleSetDetails.RuleSetDetail.Rules, [u'rule_name_xr', u'stateful', u'rule_state', u'buffered_alarms_count'], name, value)
class RuleDetails(Entity):
"""
Table that contains the database of correlation
rule details
.. attribute:: rule_detail
Details of one of the correlation rules
**type**\: list of :py:class:`RuleDetail <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Correlator.RuleDetails.RuleDetail>`
"""
_prefix = 'infra-correlator-oper'
_revision = '2017-09-07'
def __init__(self):
super(Correlator.RuleDetails, self).__init__()
self.yang_name = "rule-details"
self.yang_parent_name = "correlator"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("rule-detail", ("rule_detail", Correlator.RuleDetails.RuleDetail))])
self._leafs = OrderedDict()
self.rule_detail = YList(self)
self._segment_path = lambda: "rule-details"
self._absolute_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:correlator/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Correlator.RuleDetails, [], name, value)
class RuleDetail(Entity):
"""
Details of one of the correlation rules
.. attribute:: rule_name (key)
Correlation Rule Name
**type**\: str
**length:** 1..32
.. attribute:: rule_summary
Rule summary (name, state, etc.)
**type**\: :py:class:`RuleSummary <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Correlator.RuleDetails.RuleDetail.RuleSummary>`
.. attribute:: timeout
Time window (in ms) for which root/all messages are kept in the correlator before sending them to the logger
**type**\: int
**range:** 0..4294967295
.. attribute:: root_cause_timeout
Timeout before root cause alarm
**type**\: int
**range:** 0..4294967295
.. attribute:: internal
True if the rule is internal
**type**\: bool
.. attribute:: reissue_non_bistate
Whether to reissue non\-bistate alarms
**type**\: bool
.. attribute:: reparent
Reparent
**type**\: bool
.. attribute:: context_correlation
Whether context correlation is enabled
**type**\: bool
.. attribute:: apply_location
Locations (R/S/M) to which the rule is applied
**type**\: list of str
**pattern:** ([a\-zA\-Z0\-9\_]\*\\d+/){1,2}([a\-zA\-Z0\-9\_]\*\\d+)
.. attribute:: apply_context
Contexts (Interfaces) to which the rule is applied
**type**\: list of str
**length:** 0..33
.. attribute:: codes
Message codes defining the rule
**type**\: list of :py:class:`Codes <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Correlator.RuleDetails.RuleDetail.Codes>`
"""
_prefix = 'infra-correlator-oper'
_revision = '2017-09-07'
def __init__(self):
super(Correlator.RuleDetails.RuleDetail, self).__init__()
self.yang_name = "rule-detail"
self.yang_parent_name = "rule-details"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = ['rule_name']
self._child_classes = OrderedDict([("rule-summary", ("rule_summary", Correlator.RuleDetails.RuleDetail.RuleSummary)), ("codes", ("codes", Correlator.RuleDetails.RuleDetail.Codes))])
self._leafs = OrderedDict([
('rule_name', (YLeaf(YType.str, 'rule-name'), ['str'])),
('timeout', (YLeaf(YType.uint32, 'timeout'), ['int'])),
('root_cause_timeout', (YLeaf(YType.uint32, 'root-cause-timeout'), ['int'])),
('internal', (YLeaf(YType.boolean, 'internal'), ['bool'])),
('reissue_non_bistate', (YLeaf(YType.boolean, 'reissue-non-bistate'), ['bool'])),
('reparent', (YLeaf(YType.boolean, 'reparent'), ['bool'])),
('context_correlation', (YLeaf(YType.boolean, 'context-correlation'), ['bool'])),
('apply_location', (YLeafList(YType.str, 'apply-location'), ['str'])),
('apply_context', (YLeafList(YType.str, 'apply-context'), ['str'])),
])
self.rule_name = None
self.timeout = None
self.root_cause_timeout = None
self.internal = None
self.reissue_non_bistate = None
self.reparent = None
self.context_correlation = None
self.apply_location = []
self.apply_context = []
self.rule_summary = Correlator.RuleDetails.RuleDetail.RuleSummary()
self.rule_summary.parent = self
self._children_name_map["rule_summary"] = "rule-summary"
self.codes = YList(self)
self._segment_path = lambda: "rule-detail" + "[rule-name='" + str(self.rule_name) + "']"
self._absolute_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:correlator/rule-details/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Correlator.RuleDetails.RuleDetail, ['rule_name', u'timeout', u'root_cause_timeout', u'internal', u'reissue_non_bistate', u'reparent', u'context_correlation', u'apply_location', u'apply_context'], name, value)
class RuleSummary(Entity):
"""
Rule summary, name, etc
.. attribute:: rule_name_xr
Correlation Rule Name
**type**\: str
.. attribute:: stateful
Whether the rule is stateful
**type**\: bool
.. attribute:: rule_state
Applied state of the rule. It can be not applied, applied, or applied to all
**type**\: :py:class:`AcRuleState <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.AcRuleState>`
.. attribute:: buffered_alarms_count
Number of buffered alarms correlated to this rule
**type**\: int
**range:** 0..4294967295
"""
_prefix = 'infra-correlator-oper'
_revision = '2017-09-07'
def __init__(self):
super(Correlator.RuleDetails.RuleDetail.RuleSummary, self).__init__()
self.yang_name = "rule-summary"
self.yang_parent_name = "rule-detail"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('rule_name_xr', (YLeaf(YType.str, 'rule-name-xr'), ['str'])),
('stateful', (YLeaf(YType.boolean, 'stateful'), ['bool'])),
('rule_state', (YLeaf(YType.enumeration, 'rule-state'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper', 'AcRuleState', '')])),
('buffered_alarms_count', (YLeaf(YType.uint32, 'buffered-alarms-count'), ['int'])),
])
self.rule_name_xr = None
self.stateful = None
self.rule_state = None
self.buffered_alarms_count = None
self._segment_path = lambda: "rule-summary"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Correlator.RuleDetails.RuleDetail.RuleSummary, [u'rule_name_xr', u'stateful', u'rule_state', u'buffered_alarms_count'], name, value)
class Codes(Entity):
"""
Message codes defining the rule.
.. attribute:: category
Category of messages to which this alarm belongs
**type**\: str
.. attribute:: group
Group of messages to which this alarm belongs
**type**\: str
.. attribute:: code
Alarm code which further qualifies the alarm within a message group
**type**\: str
"""
_prefix = 'infra-correlator-oper'
_revision = '2017-09-07'
def __init__(self):
super(Correlator.RuleDetails.RuleDetail.Codes, self).__init__()
self.yang_name = "codes"
self.yang_parent_name = "rule-detail"
self.is_top_level_class = False
self.has_list_ancestor = True
self.ylist_key_names = []
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('category', (YLeaf(YType.str, 'category'), ['str'])),
('group', (YLeaf(YType.str, 'group'), ['str'])),
('code', (YLeaf(YType.str, 'code'), ['str'])),
])
self.category = None
self.group = None
self.code = None
self._segment_path = lambda: "codes"
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Correlator.RuleDetails.RuleDetail.Codes, [u'category', u'group', u'code'], name, value)
class RuleSummaries(Entity):
"""
Table that contains the database of correlation
rule summary
.. attribute:: rule_summary
One of the correlation rules
**type**\: list of :py:class:`RuleSummary <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.Correlator.RuleSummaries.RuleSummary>`
"""
_prefix = 'infra-correlator-oper'
_revision = '2017-09-07'
def __init__(self):
super(Correlator.RuleSummaries, self).__init__()
self.yang_name = "rule-summaries"
self.yang_parent_name = "correlator"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = []
self._child_classes = OrderedDict([("rule-summary", ("rule_summary", Correlator.RuleSummaries.RuleSummary))])
self._leafs = OrderedDict()
self.rule_summary = YList(self)
self._segment_path = lambda: "rule-summaries"
self._absolute_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:correlator/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Correlator.RuleSummaries, [], name, value)
class RuleSummary(Entity):
"""
One of the correlation rules
.. attribute:: rule_name (key)
Correlation Rule Name
**type**\: str
**length:** 1..32
.. attribute:: rule_name_xr
Correlation Rule Name
**type**\: str
.. attribute:: stateful
Whether the rule is stateful
**type**\: bool
.. attribute:: rule_state
Applied state of the rule. It can be not applied, applied, or applied to all
**type**\: :py:class:`AcRuleState <ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper.AcRuleState>`
.. attribute:: buffered_alarms_count
Number of buffered alarms correlated to this rule
**type**\: int
**range:** 0..4294967295
"""
_prefix = 'infra-correlator-oper'
_revision = '2017-09-07'
def __init__(self):
super(Correlator.RuleSummaries.RuleSummary, self).__init__()
self.yang_name = "rule-summary"
self.yang_parent_name = "rule-summaries"
self.is_top_level_class = False
self.has_list_ancestor = False
self.ylist_key_names = ['rule_name']
self._child_classes = OrderedDict([])
self._leafs = OrderedDict([
('rule_name', (YLeaf(YType.str, 'rule-name'), ['str'])),
('rule_name_xr', (YLeaf(YType.str, 'rule-name-xr'), ['str'])),
('stateful', (YLeaf(YType.boolean, 'stateful'), ['bool'])),
('rule_state', (YLeaf(YType.enumeration, 'rule-state'), [('ydk.models.cisco_ios_xr.Cisco_IOS_XR_infra_correlator_oper', 'AcRuleState', '')])),
('buffered_alarms_count', (YLeaf(YType.uint32, 'buffered-alarms-count'), ['int'])),
])
self.rule_name = None
self.rule_name_xr = None
self.stateful = None
self.rule_state = None
self.buffered_alarms_count = None
self._segment_path = lambda: "rule-summary" + "[rule-name='" + str(self.rule_name) + "']"
self._absolute_path = lambda: "Cisco-IOS-XR-infra-correlator-oper:correlator/rule-summaries/%s" % self._segment_path()
self._is_frozen = True
def __setattr__(self, name, value):
self._perform_setattr(Correlator.RuleSummaries.RuleSummary, ['rule_name', u'rule_name_xr', u'stateful', u'rule_state', u'buffered_alarms_count'], name, value)
def clone_ptr(self):
self._top_entity = Correlator()
return self._top_entity
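# In the generated classes above, each keyed list entry builds its YANG segment
# path by splicing the key value into an XPath-style predicate (see the
# `_segment_path` lambdas on `rule`, `alarm`, `rule-set-summary`, etc.).
# A minimal standalone sketch of that convention; `list_segment` is an
# illustrative helper, not part of YDK:

```python
def list_segment(yang_name: str, key_name: str, key_value) -> str:
    # Mirrors the generated lambdas such as:
    #   "rule" + "[rule-name='" + str(self.rule_name) + "']"
    return yang_name + "[" + key_name + "='" + str(key_value) + "']"

print(list_segment("rule", "rule-name", "sample"))  # rule[rule-name='sample']
print(list_segment("alarm", "alarm-id", 42))        # alarm[alarm-id='42']
```

The container classes without keys (e.g. `buffer-status`, `alarm-info`) simply return their bare `yang_name` as the segment path.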
# File: Clustering Methods/colorsetup.py (repo: Frightera/ML-Algorithms, license: MIT)
import seaborn as sns
my_color = {"Magenta 100":"2A0A16", "Magenta 90":"57002B", "Magenta 80":"760A3A", "Magenta 70":"A11950", "Magenta 60":"D12765", "Magenta 50":"EE538B", "Magenta 40":"FA75A6", "Magenta 30":"FFA0C2", "Magenta 20":"FFCFE1", "Magenta 10":"FFF0F6", "Purple 100":"1E1033", "Purple 90":"38146B", "Purple 80":"4F2196", "Purple 70":"6E32C9", "Purple 60":"8A3FFC", "Purple 50":"A66EFA", "Purple 40":"BB8EFF", "Purple 30":"D0B0FF", "Purple 20":"E6D6FF", "Purple 10":"F7F1FF", "Blue 100":"051243", "Blue 90":"061F80", "Blue 80":"0530AD", "Blue 70":"054ADA", "Blue 60":"0062FF", "Blue 50":"408BFC", "Blue 40":"6EA6FF", "Blue 30":"97C1FF", "Blue 20":"C9DEFF", "Blue 10":"EDF4FF", "Teal 100":"081A1C", "Teal 90":"003137", "Teal 80":"004548", "Teal 70":"006161", "Teal 60":"007D79", "Teal 50":"009C98", "Teal 40":"00BAB6", "Teal 30":"20D5D2", "Teal 20":"92EEEE", "Teal 10":"DBFBFB", "Gray 100":"171717", "Gray 90":"282828", "Gray 80":"3D3D3D", "Gray 70":"565656", "Gray 60":"6F6F6F", "Gray 50":"8C8C8C", "Gray 40":"A4A4A4", "Gray 30":"BEBEBE", "Gray 20":"DCDCDC", "Gray 10":"F3F3F3"}
colors = []
colornum = 60
for i in [f'Blue {colornum}', f'Teal {colornum}', f'Magenta {colornum}', f'Purple {colornum}', f'Gray {colornum}']:
colors.append(f'#{my_color[i]}')
palette = sns.color_palette(colors)
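# The loop above selects the five "60"-level hex codes (Blue, Teal, Magenta,
# Purple, Gray) and hands them to seaborn. A seaborn-free sketch of the same
# selection, with the hex-to-RGB conversion that `sns.color_palette()` performs
# internally made explicit; `hex_to_rgb` is an illustrative helper:

```python
def hex_to_rgb(code: str) -> tuple:
    # "0062FF" -> (0.0, 0.384..., 1.0): each byte scaled to the 0-1 range.
    return tuple(int(code[i:i + 2], 16) / 255 for i in (0, 2, 4))

carbon_60 = ["0062FF", "007D79", "D12765", "8A3FFC", "6F6F6F"]
palette = [hex_to_rgb(c) for c in carbon_60]
print(len(palette), palette[0][2])  # 5 1.0
```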
# File: clustering_outliers/app.py (repo: OpertusMundi/clustering-outliers-service, license: Apache-2.0)
import json
from enum import auto, Enum
from datetime import datetime, timezone
from os import path, getenv, stat
from apispec import APISpec
from apispec_webframeworks.flask import FlaskPlugin
from flask_cors import CORS
from flask_executor import Executor
from flask_wtf import FlaskForm
from flask import Flask, send_file, abort
from flask import make_response
from . import db
from .forms import KMeansFileForm, KMeansPathForm, DBScanFileForm, DBScanPathForm, AgglomerativeFileForm, \
AgglomerativePathForm, IsoForestFileForm, IsoForestPathForm, LOFFileForm, LOFPathForm, OCSVMPathForm, OCSVMFileForm
from .models.agglomerative_clustering import agglomerative_clustering
from .models.dbscan import dbscan
from .models.isolation_forest import isolation_forest
from .models.kmeans import kmeams
from .models.local_outlier_factor import local_outlier_factor
from .models.one_class_svm import one_class_svm
from .logging import getLoggers
from .utils import check_directory_writable, get_temp_dir, mkdir, validate_form, get_tmp_dir, create_ticket, \
save_to_temp, uncompress_file
class OutputDirNotSet(Exception):
    pass


if getenv('OUTPUT_DIR') is None:
    raise OutputDirNotSet('Environment variable OUTPUT_DIR is not set.')
FILE_NOT_FOUND_MESSAGE = "File not found"
# Logging
mainLogger, accountLogger = getLoggers()
# OpenAPI documentation
spec = APISpec(
    title="Clustering and Outlier Detection API",
    version=getenv('VERSION'),
    info=dict(
        description="",
        contact={"email": "kpsarakis94@gmail.com"}
    ),
    externalDocs={"description": "GitHub", "url": "https://github.com/OpertusMundi/clustering-outliers-service"},
    openapi_version="3.0.2",
    plugins=[FlaskPlugin()],
)
# Initialize app
app = Flask(__name__, instance_relative_config=True, instance_path=getenv('INSTANCE_PATH'))
environment = getenv('FLASK_ENV')
if environment in ('testing', 'development'):
    secret_key = environment
else:
    secret_key = getenv('SECRET_KEY') or open(getenv('SECRET_KEY_FILE')).read()
app.config.from_mapping(
    SECRET_KEY=secret_key,
    DATABASE=getenv('DATABASE'),
)
def executor_callback(future):
    """The callback function called when a job has completed."""
    ticket, result, job_type, success, comment = future.result()
    if result is not None:
        rel_path = datetime.now().strftime("%y%m%d")
        rel_path = path.join(rel_path, ticket)
        output_path: str = path.join(getenv('OUTPUT_DIR'), rel_path)
        mkdir(output_path)
        filepath = path.join(getenv('OUTPUT_DIR'), rel_path, "result.json")
        with open(filepath, 'w') as fp:
            json.dump(result, fp)
    else:
        filepath = None
    with app.app_context():
        dbc = db.get_db()
        db_result = dbc.execute('SELECT requested_time, filesize FROM tickets WHERE ticket = ?;', [ticket]).fetchone()
        time = db_result['requested_time']
        filesize = db_result['filesize']
        execution_time = round((datetime.now(timezone.utc) - time.replace(tzinfo=timezone.utc)).total_seconds(), 3)
        dbc.execute('UPDATE tickets SET result=?, success=?, status=1, execution_time=?, comment=? WHERE ticket=?;',
                    [filepath, success, execution_time, comment, ticket])
        dbc.commit()
        accountLogger(ticket=ticket, success=success, execution_start=time, execution_time=execution_time,
                      comment=comment, filesize=filesize)
        dbc.close()
    mainLogger.info(f'Processing of ticket: {ticket} is completed successfully')
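# executor_callback persists each result under OUTPUT_DIR as
# <yymmdd>/<ticket>/result.json. A sketch of just that path convention;
# `result_relpath` is an illustrative helper, not part of the service:

```python
from datetime import datetime
from os import path

def result_relpath(ticket: str, now: datetime) -> str:
    # <yymmdd>/<ticket>/result.json, as assembled in executor_callback.
    return path.join(now.strftime("%y%m%d"), ticket, "result.json")

print(result_relpath("abc123", datetime(2021, 5, 7)))  # 210507/abc123/result.json
```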
# Ensure the instance folder exists and initialize application, db and executor.
mkdir(app.instance_path)
db.init_app(app)
executor = Executor(app)
executor.add_default_done_callback(executor_callback)
# Enable CORS
if getenv('CORS') is not None:
if getenv('CORS')[0:1] == '[':
origins = json.loads(getenv('CORS'))
else:
origins = getenv('CORS')
cors = CORS(app, origins=origins)
class JobType(Enum):
    KMEANS = auto()
    DBSCAN = auto()
    AGGLO = auto()
    ISOFOREST = auto()
    LOCALOUTLIER = auto()
    SVM = auto()
@executor.job
def enqueue(ticket: str, src_path: str, form: FlaskForm, job_type: JobType) -> tuple:
    """Enqueue a job (in case the requested response type is 'deferred')."""
    filesize = stat(src_path).st_size
    dbc = db.get_db()
    dbc.execute('INSERT INTO tickets (ticket, filesize) VALUES(?, ?);', [ticket, filesize])
    dbc.commit()
    dbc.close()
    mainLogger.info(f'Starting processing ticket: {ticket}')
    try:
        if job_type is JobType.KMEANS:
            result = kmeams(form, src_path)
        elif job_type is JobType.DBSCAN:
            result = dbscan(form, src_path)
        elif job_type is JobType.AGGLO:
            result = agglomerative_clustering(form, src_path)
        elif job_type is JobType.ISOFOREST:
            result = isolation_forest(form, src_path)
        elif job_type is JobType.LOCALOUTLIER:
            result = local_outlier_factor(form, src_path)
        elif job_type is JobType.SVM:
            result = one_class_svm(form, src_path)
        else:
            result = None
    except Exception as e:
        mainLogger.error(f'Processing of ticket: {ticket} failed')
        # Return a 5-tuple so executor_callback can unpack it; the job_type
        # element was previously missing from this failure branch.
        return ticket, None, job_type, 0, str(e)
    else:
        return ticket, result, job_type, 1, None
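# The if/elif chain in enqueue maps each JobType to one model function. The
# same dispatch can be sketched as a lookup table; the lambdas below are stub
# stand-ins for the real model functions, and the enum is trimmed to two members:

```python
from enum import Enum, auto

class JobType(Enum):
    KMEANS = auto()
    DBSCAN = auto()

# Hypothetical stand-ins for the handlers imported from clustering_outliers.models.
HANDLERS = {
    JobType.KMEANS: lambda form, src_path: f"kmeans:{src_path}",
    JobType.DBSCAN: lambda form, src_path: f"dbscan:{src_path}",
}

def run_job(job_type, form, src_path):
    # Unknown job types yield None, matching the else branch in enqueue.
    handler = HANDLERS.get(job_type)
    return handler(form, src_path) if handler is not None else None

print(run_job(JobType.KMEANS, None, "points.csv"))  # kmeans:points.csv
```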
@app.route("/")
def index():
    """The index route, gives info about the API endpoints."""
    mainLogger.info('Generating OpenAPI document...')
    return make_response(spec.to_dict(), 200)
@app.route("/_health")
def health_check():
"""Perform basic health checks
---
get:
tags:
- Health
summary: Get health status
description: 'Get health status'
operationId: 'getHealth'
responses:
default:
description: An object with status information
content:
application/json:
schema:
type: object
properties:
status:
type: string
description: A status of 'OK' or 'FAILED'
reason:
type: string
description: The reason for failure (if failed)
detail:
type: string
description: More details on the failure (if failed)
examples:
example-1:
value: |-
{"status": "OK"}
"""
mainLogger.info('Performing health checks...')
# Check that temp directory is writable
try:
check_directory_writable(get_temp_dir())
except Exception as exc:
return make_response({'status': 'FAILED', 'reason': 'temp directory not writable', 'detail': str(exc)},
200)
# Check that we can connect to our SQLite backend
try:
dbc = db.get_db()
dbc.execute('SELECT 1').fetchone()
except Exception as exc:
return make_response({'status': 'FAILED', 'reason': 'cannot connect to SQLite backend', 'detail': str(exc)},
200)
return make_response({'status': 'OK'},
200)
@app.route("/kmeans/file", methods=["POST"])
def k_means_file():
"""Perform kmeans clustering to a geospatial file that is provided with the request
---
post:
summary: Perform kmeans clustering to a geospatial file that is provided with the request
tags:
- kmeans
requestBody:
required: true
content:
multipart/form-data:
schema:
type: object
properties:
resource:
type: string
format: binary
description: The geospatial file.
resource_type:
type: string
enum: [csv, shp]
description: The geospatial file type
response:
type: string
enum: [prompt, deferred]
description: Determines whether the clustering process should be initiated promptly (*prompt*) or queued (*deferred*). In the first case, the response waits for the result; in the second, it immediately returns a ticket corresponding to the request.
columns:
type: array
default: null
description: The columns to cluster
id_column:
type: string
description: The column that will serve as the id
k:
type: integer
description: The number of expected clusters; leave empty to determine it automatically
dist_measure:
type: string
default: euclidean
description: The distance measure used
required:
- resource
- resource_type
responses:
200:
description: kmeans completed and returned.
content:
application/json:
schema:
type: object
properties:
cluster_centers:
type: array
description: The cluster centers
ids:
type: array
description: The row ids
labels:
type: array
description: The row labels
202:
description: Accepted for processing, but clustering has not been completed.
content:
application/json:
schema:
type: object
properties:
ticket:
type: string
description: The ticket corresponding to the request.
endpoint:
type: string
description: The *resource* endpoint to get the resulting resource when ready.
status:
type: string
description: The *status* endpoint to poll for the status of the request.
links:
GetStatus:
operationId: getStatus
parameters:
ticket: '$response.body#/ticket'
description: The `ticket` value returned in the response can be used as the `ticket` parameter in `GET /status/{ticket}`.
400:
description: Client error.
"""
form = KMeansFileForm()
validate_form(form, mainLogger)
mainLogger.info(f"Starting /kmeans/file with file: {form.resource.data.filename}")
tmp_dir: str = get_tmp_dir("clustering_outliers")
ticket: str = create_ticket()
src_file_path: str = save_to_temp(form, tmp_dir, ticket)
src_file_path: str = uncompress_file(src_file_path)
# Immediate results
if form.response.data == "prompt":
response = kmeams(form, src_file_path)
return make_response(response, 200)
# Wait for results
else:
enqueue.submit(ticket, src_file_path, form=form, job_type=JobType.KMEANS)
response = {"ticket": ticket, "endpoint": f"/resource/{ticket}", "status": f"/status/{ticket}"}
return make_response(response, 202)
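In the deferred mode above, the client receives a ticket and then polls the status endpoint. A minimal sketch of building those links on the client side (pure helper with a hypothetical name; actual polling would issue HTTP requests against the returned URLs):

```python
def deferred_links(base_url: str, ticket: str) -> dict:
    """Build the endpoint/status links a client follows after a 202 response."""
    base = base_url.rstrip('/')
    return {
        "ticket": ticket,
        "endpoint": f"{base}/resource/{ticket}",
        "status": f"{base}/status/{ticket}",
    }
```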
@app.route("/kmeans/path", methods=["POST"])
def k_means_path():
"""Perform kmeans clustering to a geospatial file that its path provided with the request
---
post:
summary: Perform kmeans clustering to a geospatial file that its path provided with the request
tags:
- kmeans
requestBody:
required: true
content:
application/x-www-form-urlencoded:
schema:
type: object
properties:
resource:
type: string
description: The path of the geospatial file.
resource_type:
type: string
enum: [csv, shp]
description: The geospatial file type
response:
type: string
enum: [prompt, deferred]
description: Determines whether the clustering process should be initiated promptly (*prompt*) or queued (*deferred*). In the first case, the response waits for the result; in the second, it immediately returns a ticket corresponding to the request.
columns:
type: array
default: null
description: The columns to cluster
id_column:
type: string
description: The column that will serve as the id
k:
type: integer
description: The number of expected clusters; leave empty to determine it automatically
dist_measure:
type: string
default: euclidean
description: The distance measure used
required:
- resource
- resource_type
responses:
200:
description: kmeans completed and returned.
content:
application/json:
schema:
type: object
properties:
cluster_centers:
type: array
description: The cluster centers
ids:
type: array
description: The row ids
labels:
type: array
description: The row labels
202:
description: Accepted for processing, but clustering has not been completed.
content:
application/json:
schema:
type: object
properties:
ticket:
type: string
description: The ticket corresponding to the request.
endpoint:
type: string
description: The *resource* endpoint to get the resulting resource when ready.
status:
type: string
description: The *status* endpoint to poll for the status of the request.
links:
GetStatus:
operationId: getStatus
parameters:
ticket: '$response.body#/ticket'
description: The `ticket` value returned in the response can be used as the `ticket` parameter in `GET /status/{ticket}`.
400:
description: Client error.
"""
form = KMeansPathForm()
validate_form(form, mainLogger)
mainLogger.info(f"Starting /kmeans/path with file: {form.resource.data}")
src_file_path: str = form.resource.data
if not path.exists(src_file_path):
    abort(400, FILE_NOT_FOUND_MESSAGE)
src_file_path = uncompress_file(src_file_path)
# Immediate results
if form.response.data == "prompt":
response = kmeams(form, src_file_path)
return make_response(response, 200)
# Wait for results
else:
ticket: str = create_ticket()
enqueue.submit(ticket, src_file_path, form=form, job_type=JobType.KMEANS)
response = {"ticket": ticket, "endpoint": f"/resource/{ticket}", "status": f"/status/{ticket}"}
return make_response(response, 202)
@app.route("/dbscan/file", methods=["POST"])
def dbscan_file():
"""Perform dbscan clustering to a geospatial file that is provided with the request
---
post:
summary: Perform dbscan clustering to a geospatial file that is provided with the request
tags:
- dbscan
requestBody:
required: true
content:
multipart/form-data:
schema:
type: object
properties:
resource:
type: string
format: binary
description: The geospatial file.
resource_type:
type: string
enum: [csv, shp]
description: The geospatial file type
response:
type: string
enum: [prompt, deferred]
description: Determines whether the clustering process should be initiated promptly (*prompt*) or queued (*deferred*). In the first case, the response waits for the result; in the second, it immediately returns a ticket corresponding to the request.
columns:
type: array
default: null
description: The columns to cluster
id_column:
type: string
description: The column that will serve as the id
epsilon:
type: number
format: float
description: The epsilon parameter of dbscan
min_samples:
type: integer
description: The minimum number of points required to form a dense region.
dist_measure:
type: string
default: euclidean
description: The distance measure used
required:
- resource
- resource_type
responses:
200:
description: dbscan completed and returned.
content:
application/json:
schema:
type: object
properties:
core_sample_indices:
type: array
description: The core sample indices
components:
type: array
description: The components
ids:
type: array
description: The row ids
labels:
type: array
description: The row labels
202:
description: Accepted for processing, but clustering has not been completed.
content:
application/json:
schema:
type: object
properties:
ticket:
type: string
description: The ticket corresponding to the request.
endpoint:
type: string
description: The *resource* endpoint to get the resulting resource when ready.
status:
type: string
description: The *status* endpoint to poll for the status of the request.
links:
GetStatus:
operationId: getStatus
parameters:
ticket: '$response.body#/ticket'
description: The `ticket` value returned in the response can be used as the `ticket` parameter in `GET /status/{ticket}`.
400:
description: Client error.
"""
form = DBScanFileForm()
validate_form(form, mainLogger)
mainLogger.info(f"Starting /dbscan/file with file: {form.resource.data.filename}")
tmp_dir: str = get_tmp_dir("clustering_outliers")
ticket: str = create_ticket()
src_file_path: str = save_to_temp(form, tmp_dir, ticket)
src_file_path: str = uncompress_file(src_file_path)
# Immediate results
if form.response.data == "prompt":
response = dbscan(form, src_file_path)
return make_response(response, 200)
# Wait for results
else:
enqueue.submit(ticket, src_file_path, form=form, job_type=JobType.DBSCAN)
response = {"ticket": ticket, "endpoint": f"/resource/{ticket}", "status": f"/status/{ticket}"}
return make_response(response, 202)
@app.route("/dbscan/path", methods=["POST"])
def dbscan_path():
"""Perform dbscan clustering to a geospatial file that its path is provided with the request
---
post:
summary: >-
Perform dbscan clustering to a geospatial file that its path is provided
with the request
tags:
- dbscan
requestBody:
required: true
content:
application/x-www-form-urlencoded:
schema:
type: object
properties:
resource:
type: string
description: The geospatial file path.
resource_type:
type: string
enum:
- csv
- shp
description: The geospatial file type
response:
type: string
enum:
- prompt
- deferred
description: >-
Determines whether the clustering process should be initiated
promptly (*prompt*) or queued (*deferred*). In the first case,
the response waits for the result; in the second, it immediately
returns a ticket corresponding to the request.
columns:
type: array
default: null
description: The columns to cluster
id_column:
type: string
description: The column that will serve as the id
epsilon:
type: number
format: float
description: The epsilon parameter of dbscan
min_samples:
type: integer
description: The minimum number of points required to form a dense region.
dist_measure:
type: string
default: euclidean
description: The distance measure used
required:
- resource
- resource_type
responses:
'200':
description: dbscan completed and returned.
content:
application/json:
schema:
type: object
properties:
core_sample_indices:
type: array
description: The core sample indices
components:
type: array
description: The components
ids:
type: array
description: The row ids
labels:
type: array
description: The row labels
'202':
description: 'Accepted for processing, but clustering has not been completed.'
content:
application/json:
schema:
type: object
properties:
ticket:
type: string
description: The ticket corresponding to the request.
endpoint:
type: string
description: >-
The *resource* endpoint to get the resulting resource when
ready.
status:
type: string
description: The *status* endpoint to poll for the status of the request.
links:
GetStatus:
operationId: getStatus
parameters:
ticket: '$response.body#/ticket'
description: >-
The `ticket` value returned in the response can be used as the
`ticket` parameter in `GET /status/{ticket}`.
'400':
description: Client error.
"""
form = DBScanPathForm()
validate_form(form, mainLogger)
mainLogger.info(f"Starting /dbscan/path with file: {form.resource.data}")
src_file_path: str = form.resource.data
if not path.exists(src_file_path):
    abort(400, FILE_NOT_FOUND_MESSAGE)
src_file_path = uncompress_file(src_file_path)
# Immediate results
if form.response.data == "prompt":
response = dbscan(form, src_file_path)
return make_response(response, 200)
# Wait for results
else:
ticket: str = create_ticket()
enqueue.submit(ticket, src_file_path, form=form, job_type=JobType.DBSCAN)
response = {"ticket": ticket, "endpoint": f"/resource/{ticket}", "status": f"/status/{ticket}"}
return make_response(response, 202)
@app.route("/agglomerative/file", methods=["POST"])
def agglomerative_file():
"""Perform agglomerative clustering to a geospatial file that is provided with the request
---
post:
summary: Perform agglomerative clustering to a geospatial file that is provided with the request
tags:
- agglomerative
requestBody:
required: true
content:
multipart/form-data:
schema:
type: object
properties:
resource:
type: string
format: binary
description: The geospatial file.
resource_type:
type: string
enum: [csv, shp]
description: The geospatial file type
response:
type: string
enum: [prompt, deferred]
description: Determines whether the clustering process should be initiated promptly (*prompt*) or queued (*deferred*). In the first case, the response waits for the result; in the second, it immediately returns a ticket corresponding to the request.
columns:
type: array
default: null
description: The columns to cluster
id_column:
type: string
description: The column that will serve as the id
k:
type: integer
description: The number of clusters
linkage:
type: string
enum: [ward, complete, average, single]
description: The linkage type
dist_threshold:
type: number
format: float
description: The distance threshold above which clusters will not be merged
dist_measure:
type: string
default: euclidean
description: The distance measure used
required:
- resource
- resource_type
responses:
200:
description: agglomerative clustering completed and returned.
content:
application/json:
schema:
type: object
properties:
n_clusters:
type: integer
description: The number of clusters
n_leaves:
type: integer
description: The number of leaves
n_connected_components:
type: integer
description: The number of connected components
children:
type: array
description: The children produced in the clustering process
ids:
type: array
description: The row ids
labels:
type: array
description: The row labels
202:
description: Accepted for processing, but clustering has not been completed.
content:
application/json:
schema:
type: object
properties:
ticket:
type: string
description: The ticket corresponding to the request.
endpoint:
type: string
description: The *resource* endpoint to get the resulting resource when ready.
status:
type: string
description: The *status* endpoint to poll for the status of the request.
links:
GetStatus:
operationId: getStatus
parameters:
ticket: '$response.body#/ticket'
description: The `ticket` value returned in the response can be used as the `ticket` parameter in `GET /status/{ticket}`.
400:
description: Client error.
"""
form = AgglomerativeFileForm()
validate_form(form, mainLogger)
mainLogger.info(f"Starting /agglomerative/file with file: {form.resource.data.filename}")
tmp_dir: str = get_tmp_dir("clustering_outliers")
ticket: str = create_ticket()
src_file_path: str = save_to_temp(form, tmp_dir, ticket)
src_file_path: str = uncompress_file(src_file_path)
# Immediate results
if form.response.data == "prompt":
response = agglomerative_clustering(form, src_file_path)
return make_response(response, 200)
# Wait for results
else:
enqueue.submit(ticket, src_file_path, form=form, job_type=JobType.AGGLO)
response = {"ticket": ticket, "endpoint": f"/resource/{ticket}", "status": f"/status/{ticket}"}
return make_response(response, 202)
@app.route("/agglomerative/path", methods=["POST"])
def agglomerative_path():
"""Perform agglomerative clustering to a geospatial file that its path is provided with the request
---
post:
summary: Perform agglomerative clustering to a geospatial file that its path is provided with the request
tags:
- agglomerative
requestBody:
required: true
content:
application/x-www-form-urlencoded:
schema:
type: object
properties:
resource:
type: string
description: The geospatial file path.
resource_type:
type: string
enum: [csv, shp]
description: The geospatial file type
response:
type: string
enum: [prompt, deferred]
description: Determines whether the clustering process should be initiated promptly (*prompt*) or queued (*deferred*). In the first case, the response waits for the result; in the second, it immediately returns a ticket corresponding to the request.
columns:
type: array
default: null
description: The columns to cluster
id_column:
type: string
description: The column that will serve as the id
k:
type: integer
description: The number of clusters
linkage:
type: string
enum: [ward, complete, average, single]
description: The linkage type
dist_threshold:
type: number
format: float
description: The distance threshold above which clusters will not be merged
dist_measure:
type: string
default: euclidean
description: The distance measure used
required:
- resource
- resource_type
responses:
200:
description: agglomerative clustering completed and returned.
content:
application/json:
schema:
type: object
properties:
n_clusters:
type: integer
description: The number of clusters
n_leaves:
type: integer
description: The number of leaves
n_connected_components:
type: integer
description: The number of connected components
children:
type: array
description: The children produced in the clustering process
ids:
type: array
description: The row ids
labels:
type: array
description: The row labels
202:
description: Accepted for processing, but clustering has not been completed.
content:
application/json:
schema:
type: object
properties:
ticket:
type: string
description: The ticket corresponding to the request.
endpoint:
type: string
description: The *resource* endpoint to get the resulting resource when ready.
status:
type: string
description: The *status* endpoint to poll for the status of the request.
links:
GetStatus:
operationId: getStatus
parameters:
ticket: '$response.body#/ticket'
description: The `ticket` value returned in the response can be used as the `ticket` parameter in `GET /status/{ticket}`.
400:
description: Client error.
"""
form = AgglomerativePathForm()
validate_form(form, mainLogger)
mainLogger.info(f"Starting /agglomerative/path with file: {form.resource.data}")
src_file_path: str = form.resource.data
if not path.exists(src_file_path):
    abort(400, FILE_NOT_FOUND_MESSAGE)
src_file_path = uncompress_file(src_file_path)
# Immediate results
if form.response.data == "prompt":
response = agglomerative_clustering(form, src_file_path)
return make_response(response, 200)
# Wait for results
else:
ticket: str = create_ticket()
enqueue.submit(ticket, src_file_path, form=form, job_type=JobType.AGGLO)
response = {"ticket": ticket, "endpoint": f"/resource/{ticket}", "status": f"/status/{ticket}"}
return make_response(response, 202)
@app.route("/isolation_forest/file", methods=["POST"])
def isolation_forest_file():
"""Perform outlier detection with isolation forest to a geospatial file that is provided with the request
---
post:
summary: Perform outlier detection with isolation forest to a geospatial file that is provided with the request
tags:
- isolation_forest
requestBody:
required: true
content:
multipart/form-data:
schema:
type: object
properties:
resource:
type: string
format: binary
description: The geospatial file.
resource_type:
type: string
enum: [csv, shp]
description: The geospatial file type
response:
type: string
enum: [prompt, deferred]
description: Determines whether the outlier detection process should be initiated promptly (*prompt*) or queued (*deferred*). In the first case, the response waits for the result; in the second, it immediately returns a ticket corresponding to the request.
columns:
type: array
default: null
description: The columns to use for outlier detection
id_column:
type: string
description: The column that will serve as the id
n_estimators:
type: integer
description: The number of estimators
max_samples:
type: integer
description: The maximum samples
dist_measure:
type: string
default: euclidean
description: The distance measure used
required:
- resource
- resource_type
responses:
200:
description: isolation forest completed and returned.
content:
application/json:
schema:
type: object
properties:
outliers:
type: object
description: The detected outliers
202:
description: Accepted for processing, but outlier detection has not been completed.
content:
application/json:
schema:
type: object
properties:
ticket:
type: string
description: The ticket corresponding to the request.
endpoint:
type: string
description: The *resource* endpoint to get the resulting resource when ready.
status:
type: string
description: The *status* endpoint to poll for the status of the request.
links:
GetStatus:
operationId: getStatus
parameters:
ticket: '$response.body#/ticket'
description: The `ticket` value returned in the response can be used as the `ticket` parameter in `GET /status/{ticket}`.
400:
description: Client error.
"""
form = IsoForestFileForm()
validate_form(form, mainLogger)
mainLogger.info(f"Starting /isolation_forest/file with file: {form.resource.data.filename}")
tmp_dir: str = get_tmp_dir("clustering_outliers")
ticket: str = create_ticket()
src_file_path: str = save_to_temp(form, tmp_dir, ticket)
src_file_path: str = uncompress_file(src_file_path)
# Immediate results
if form.response.data == "prompt":
response = isolation_forest(form, src_file_path)
return make_response(response, 200)
# Wait for results
else:
enqueue.submit(ticket, src_file_path, form=form, job_type=JobType.ISOFOREST)
response = {"ticket": ticket, "endpoint": f"/resource/{ticket}", "status": f"/status/{ticket}"}
return make_response(response, 202)
@app.route("/isolation_forest/path", methods=["POST"])
def isolation_forest_path():
"""Perform outlier detection with isolation forest to a geospatial file that its path is provided with the request
---
post:
summary: Perform outlier detection with isolation forest to a geospatial file that its path is provided with the request
tags:
- isolation_forest
requestBody:
required: true
content:
application/x-www-form-urlencoded:
schema:
type: object
properties:
resource:
type: string
description: The geospatial file path.
resource_type:
type: string
enum: [csv, shp]
description: The geospatial file type
response:
type: string
enum: [prompt, deferred]
description: Determines whether the outlier detection process should be initiated promptly (*prompt*) or queued (*deferred*). In the first case, the response waits for the result; in the second, it immediately returns a ticket corresponding to the request.
columns:
type: array
default: null
description: The columns to use for outlier detection
id_column:
type: string
description: The column that will serve as the id
n_estimators:
type: integer
description: The number of estimators
max_samples:
type: integer
description: The maximum samples
required:
- resource
- resource_type
responses:
200:
description: isolation forest completed and returned.
content:
application/json:
schema:
type: object
properties:
outliers:
type: object
description: The detected outliers
202:
description: Accepted for processing, but outlier detection has not been completed.
content:
application/json:
schema:
type: object
properties:
ticket:
type: string
description: The ticket corresponding to the request.
endpoint:
type: string
description: The *resource* endpoint to get the resulting resource when ready.
status:
type: string
description: The *status* endpoint to poll for the status of the request.
links:
GetStatus:
operationId: getStatus
parameters:
ticket: '$response.body#/ticket'
description: The `ticket` value returned in the response can be used as the `ticket` parameter in `GET /status/{ticket}`.
400:
description: Client error.
"""
form = IsoForestPathForm()
validate_form(form, mainLogger)
mainLogger.info(f"Starting /isolation_forest/path with file: {form.resource.data}")
src_file_path: str = form.resource.data
if not path.exists(src_file_path):
    abort(400, FILE_NOT_FOUND_MESSAGE)
src_file_path = uncompress_file(src_file_path)
# Immediate results
if form.response.data == "prompt":
response = isolation_forest(form, src_file_path)
return make_response(response, 200)
# Wait for results
else:
ticket: str = create_ticket()
enqueue.submit(ticket, src_file_path, form=form, job_type=JobType.ISOFOREST)
response = {"ticket": ticket, "endpoint": f"/resource/{ticket}", "status": f"/status/{ticket}"}
return make_response(response, 202)
@app.route("/local_outlier_factor/file", methods=["POST"])
def local_outlier_factor_file():
"""Perform local outlier factor anomaly detection to a geospatial file that is provided with the request
---
post:
summary: Perform local outlier factor anomaly detection to a geospatial file that is provided with the request
tags:
- local_outlier_factor
requestBody:
required: true
content:
multipart/form-data:
schema:
type: object
properties:
resource:
type: string
format: binary
description: The geospatial file.
resource_type:
type: string
enum: [csv, shp]
description: The geospatial file type
response:
type: string
enum: [prompt, deferred]
description: Determines whether the outlier detection process should be initiated promptly (*prompt*) or queued (*deferred*). In the first case, the response waits for the result; in the second, it immediately returns a ticket corresponding to the request.
columns:
type: array
default: null
description: The columns to use for outlier detection
id_column:
type: string
description: The column that will serve as the id
n_neighbors:
type: integer
description: The number of neighbors
required:
- resource
- resource_type
responses:
200:
description: local outlier factor completed and returned.
content:
application/json:
schema:
type: object
properties:
outliers:
type: object
description: The detected outliers
202:
description: Accepted for processing, but outlier detection has not been completed.
content:
application/json:
schema:
type: object
properties:
ticket:
type: string
description: The ticket corresponding to the request.
endpoint:
type: string
description: The *resource* endpoint to get the resulting resource when ready.
status:
type: string
description: The *status* endpoint to poll for the status of the request.
links:
GetStatus:
operationId: getStatus
parameters:
ticket: '$response.body#/ticket'
description: The `ticket` value returned in the response can be used as the `ticket` parameter in `GET /status/{ticket}`.
400:
description: Client error.
"""
form = LOFFileForm()
validate_form(form, mainLogger)
mainLogger.info(f"Starting /local_outlier_factor/file with file: {form.resource.data.filename}")
tmp_dir: str = get_tmp_dir("clustering_outliers")
ticket: str = create_ticket()
src_file_path: str = save_to_temp(form, tmp_dir, ticket)
src_file_path: str = uncompress_file(src_file_path)
# Immediate results
if form.response.data == "prompt":
response = local_outlier_factor(form, src_file_path)
return make_response(response, 200)
# Wait for results
else:
enqueue.submit(ticket, src_file_path, form=form, job_type=JobType.LOCALOUTLIER)
response = {"ticket": ticket, "endpoint": f"/resource/{ticket}", "status": f"/status/{ticket}"}
return make_response(response, 202)
@app.route("/local_outlier_factor/path", methods=["POST"])
def local_outlier_factor_path():
"""Perform local outlier factor anomaly detection to a geospatial file that its path is provided with the request
---
post:
summary: Perform local outlier factor anomaly detection to a geospatial file that its path is provided with the request
tags:
- local_outlier_factor
requestBody:
required: true
content:
application/x-www-form-urlencoded:
schema:
type: object
properties:
resource:
type: string
description: The geospatial file path.
resource_type:
type: string
enum: [csv, shp]
description: The geospatial file type
response:
type: string
enum: [prompt, deferred]
description: Determines whether the outlier detection process should be initiated promptly (*prompt*) or queued (*deferred*). In the first case, the response waits for the result; in the second, it immediately returns a ticket corresponding to the request.
columns:
type: array
default: null
description: The columns to use for outlier detection
id_column:
type: string
description: The column that will serve as the id
n_neighbors:
type: integer
description: The number of neighbors
required:
- resource
- resource_type
responses:
200:
description: local outlier factor completed and returned.
content:
application/json:
schema:
type: object
properties:
outliers:
type: object
description: The detected outliers
202:
description: Accepted for processing, but outlier detection has not been completed.
content:
application/json:
schema:
type: object
properties:
ticket:
type: string
description: The ticket corresponding to the request.
endpoint:
type: string
description: The *resource* endpoint to get the resulting resource when ready.
status:
type: string
description: The *status* endpoint to poll for the status of the request.
links:
GetStatus:
operationId: getStatus
parameters:
ticket: '$response.body#/ticket'
description: The `ticket` value returned in the response can be used as the `ticket` parameter in `GET /status/{ticket}`.
400:
description: Client error.
"""
form = LOFPathForm()
validate_form(form, mainLogger)
mainLogger.info(f"Starting /local_outlier_factor/path with file: {form.resource.data}")
src_file_path: str = form.resource.data
if not path.exists(src_file_path):
    abort(400, FILE_NOT_FOUND_MESSAGE)
src_file_path = uncompress_file(src_file_path)
# Immediate results
if form.response.data == "prompt":
response = local_outlier_factor(form, src_file_path)
return make_response(response, 200)
# Wait for results
else:
ticket: str = create_ticket()
enqueue.submit(ticket, src_file_path, form=form, job_type=JobType.LOCALOUTLIER)
response = {"ticket": ticket, "endpoint": f"/resource/{ticket}", "status": f"/status/{ticket}"}
return make_response(response, 202)
@app.route("/one_class_svm/file", methods=["POST"])
def svm_file():
"""Perform one class svm anomaly detection to a geospatial file that is provided with the request
---
post:
summary: Perform one class svm anomaly detection to a geospatial file that is provided with the request
tags:
- one_class_svm
requestBody:
required: true
content:
multipart/form-data:
schema:
type: object
properties:
resource:
type: string
format: binary
description: The geospatial file.
resource_type:
type: string
enum: [csv, shp]
description: The geospatial file type
response:
type: string
enum: [prompt, deferred]
description: Determines whether the outlier detection process should be initiated promptly (*prompt*) or queued (*deferred*). In the first case, the response waits for the result; in the second, it immediately returns a ticket corresponding to the request.
columns:
type: array
default: null
description: The columns to use for outlier detection
id_column:
type: string
description: The column that will serve as the id
degree:
type: integer
description: One class svm degree
required:
- resource
- resource_type
responses:
200:
description: one class svm completed and returned.
content:
application/json:
schema:
type: object
properties:
outliers:
type: object
description: The detected outliers
202:
description: Accepted for processing, but outlier detection has not been completed.
content:
application/json:
schema:
type: object
properties:
ticket:
type: string
description: The ticket corresponding to the request.
endpoint:
type: string
description: The *resource* endpoint to get the resulting resource when ready.
status:
type: string
description: The *status* endpoint to poll for the status of the request.
links:
GetStatus:
operationId: getStatus
parameters:
ticket: '$response.body#/ticket'
description: The `ticket` value returned in the response can be used as the `ticket` parameter in `GET /status/{ticket}`.
400:
description: Client error.
"""
form = OCSVMFileForm()
validate_form(form, mainLogger)
mainLogger.info(f"Starting /one_class_svm/file with file: {form.resource.data.filename}")
tmp_dir: str = get_tmp_dir("clustering_outliers")
ticket: str = create_ticket()
src_file_path: str = save_to_temp(form, tmp_dir, ticket)
src_file_path = uncompress_file(src_file_path)
# Immediate results
if form.response.data == "prompt":
response = one_class_svm(form, src_file_path)
return make_response(response, 200)
# Wait for results
else:
enqueue.submit(ticket, src_file_path, form=form, job_type=JobType.SVM)
response = {"ticket": ticket, "endpoint": f"/resource/{ticket}", "status": f"/status/{ticket}"}
return make_response(response, 202)
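Both branches above implement the same contract: a 200 response carries the outliers inline, while a 202 response carries a ticket plus the endpoints to poll. A minimal client-side sketch of how a caller might branch on that contract (the helper name and payload handling are illustrative, not part of the service):

```python
# Hypothetical client-side helper for the prompt/deferred contract documented
# above. The payload shapes mirror the response schemas in the docstring.

def next_action(status_code: int, payload: dict) -> str:
    """Decide how a client should proceed after calling /one_class_svm/file."""
    if status_code == 200:
        # Prompt mode: the outliers are already in the body.
        return "done"
    if status_code == 202:
        # Deferred mode: poll the status endpoint named in the response.
        return f"poll {payload['status']}"
    return "client error"

print(next_action(202, {"ticket": "t1", "endpoint": "/resource/t1", "status": "/status/t1"}))
# poll /status/t1
```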
@app.route("/one_class_svm/path", methods=["POST"])
def svm_path():
"""Perform one class svm anomaly detection to a geospatial file that its path is provided with the request
---
post:
summary: Perform one-class SVM anomaly detection on a geospatial file whose path is provided with the request
tags:
- one_class_svm
requestBody:
required: true
content:
application/x-www-form-urlencoded:
schema:
type: object
properties:
resource:
type: string
description: The geospatial file path.
resource_type:
type: string
enum: [csv, shp]
description: The geospatial file type
response:
type: string
enum: [prompt, deferred]
description: Determines whether the outlier detection process should be promptly initiated (*prompt*) or queued (*deferred*). In the first case, the response waits for the result; in the second, the response returns immediately with a ticket corresponding to the request.
columns:
type: array
default: null
description: The columns to use for outlier detection
id_column:
type: string
description: The column that will serve as the id
degree:
type: integer
description: One class svm degree
required:
- resource
- resource_type
responses:
200:
description: One-class SVM completed and the detected outliers returned.
content:
application/json:
schema:
type: object
properties:
outliers:
type: object
description: The detected outliers
202:
description: Accepted for processing, but outlier detection has not been completed.
content:
application/json:
schema:
type: object
properties:
ticket:
type: string
description: The ticket corresponding to the request.
endpoint:
type: string
description: The *resource* endpoint to get the resulting resource when ready.
status:
type: string
description: The *status* endpoint to poll for the status of the request.
links:
GetStatus:
operationId: getStatus
parameters:
ticket: '$response.body#/ticket'
description: The `ticket` value returned in the response can be used as the `ticket` parameter in `GET /status/{ticket}`.
400:
description: Client error.
"""
form = OCSVMPathForm()
validate_form(form, mainLogger)
mainLogger.info(f"Starting /one_class_svm/path with file: {form.resource.data}")
src_file_path: str = form.resource.data
src_file_path = uncompress_file(src_file_path)
if not path.exists(src_file_path):
abort(400, FILE_NOT_FOUND_MESSAGE)
# Immediate results
if form.response.data == "prompt":
response = one_class_svm(form, src_file_path)
return make_response(response, 200)
# Wait for results
else:
ticket: str = create_ticket()
enqueue.submit(ticket, src_file_path, form=form, job_type=JobType.SVM)
response = {"ticket": ticket, "endpoint": f"/resource/{ticket}", "status": f"/status/{ticket}"}
return make_response(response, 202)
@app.route("/status/<ticket>")
def status(ticket):
"""Get the status of a specific ticket.
---
get:
summary: Get the status of a task request.
operationId: getStatus
description: Returns the status of a request corresponding to a specific ticket.
tags:
- Status
parameters:
- name: ticket
in: path
description: The ticket of the request
required: true
schema:
type: string
responses:
200:
description: Ticket found and status returned.
content:
application/json:
schema:
type: object
properties:
completed:
type: boolean
description: Whether the process has been completed or not.
success:
type: boolean
description: Whether the process completed successfully.
comment:
type: string
description: If the process has failed, a short comment describing the reason.
requested:
type: string
format: date-time
description: The timestamp of the request.
execution_time(s):
type: integer
description: The execution time in seconds.
404:
description: Ticket not found.
"""
if ticket is None:
return make_response('Ticket is missing.', 400)
dbc = db.get_db()
results = dbc.execute(
'SELECT status, success, requested_time, execution_time, comment FROM tickets WHERE ticket = ?',
[ticket]).fetchone()
if results is not None:
if results['success'] is not None:
success = bool(results['success'])
else:
success = None
return make_response({"completed": bool(results['status']), "success": success,
"requested": results['requested_time'], "execution_time(s)": results['execution_time'],
"comment": results['comment']}, 200)
return make_response('Not found.', 404)
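A deferred request is expected to poll this endpoint until `completed` turns true. A sketch of such a polling loop, with the HTTP call abstracted behind an injected `fetch` callable so the loop can be exercised without a live server (the helper is hypothetical, not part of the service):

```python
import time

# Hypothetical polling loop a client might run against /status/<ticket>.
# `fetch` is any callable mapping a URL to a parsed JSON dict.

def poll_status(fetch, ticket: str, interval: float = 0.0, max_polls: int = 10) -> dict:
    """Poll until the 'completed' flag in the status payload turns true."""
    for _ in range(max_polls):
        payload = fetch(f"/status/{ticket}")
        if payload["completed"]:
            return payload
        time.sleep(interval)
    raise TimeoutError(f"ticket {ticket} did not complete in time")

# Stubbed fetcher: one pending response, then a completed one.
responses = iter([
    {"completed": False, "success": None},
    {"completed": True, "success": True, "execution_time(s)": 3},
])
final = poll_status(lambda url: next(responses), "t1")
print(final["success"])  # True
```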
@app.route("/resource/<ticket>")
def resource(ticket):
"""Get the resulted resource associated with a specific ticket.
---
get:
summary: Get the resource associated to a task request.
description: Returns the resource resulting from a task request corresponding to a specific ticket.
tags:
- Resource
parameters:
- name: ticket
in: path
description: The ticket of the request
required: true
schema:
type: string
responses:
200:
description: The compressed spatial file.
content:
application/x-tar:
schema:
type: string
format: binary
404:
description: Ticket not found or task has not been completed.
507:
description: Resource does not exist.
"""
if ticket is None:
return make_response('Resource ticket is missing.', 400)
dbc = db.get_db()
row = dbc.execute('SELECT result FROM tickets WHERE ticket = ?', [ticket]).fetchone()
# fetchone() returns None when the ticket does not exist; guard before subscripting.
if row is None or row['result'] is None:
return make_response('Not found.', 404)
rel_path = row['result']
file = path.join(getenv('OUTPUT_DIR'), rel_path)
if not path.isfile(file):
return make_response('Resource does not exist.', 507)
return send_file(file, as_attachment=True)
# Views
with app.test_request_context():
spec.path(view=svm_path)
spec.path(view=svm_file)
spec.path(view=agglomerative_path)
spec.path(view=agglomerative_file)
spec.path(view=dbscan_path)
spec.path(view=dbscan_file)
spec.path(view=isolation_forest_path)
spec.path(view=isolation_forest_file)
spec.path(view=k_means_file)
spec.path(view=k_means_path)
spec.path(view=local_outlier_factor_path)
spec.path(view=local_outlier_factor_file)
spec.path(view=status)
spec.path(view=resource)
# Source: tests/unit/utils/test_vmware.py from HudsonWu/mysalt (Apache-2.0)
"""
:codeauthor: Alexandru Bleotu <alexandru.bleotu@morganstanley.com>
Tests for cluster-related functions in salt.utils.vmware
"""
import base64
import logging
import ssl
import salt.utils.vmware
from salt.exceptions import (
ArgumentValueError,
CommandExecutionError,
VMwareApiError,
VMwareConnectionError,
VMwareObjectRetrievalError,
VMwareRuntimeError,
VMwareSystemError,
)
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.mock import MagicMock, PropertyMock, call, patch
from tests.support.runtests import RUNTIME_VARS
from tests.support.unit import TestCase, skipIf
try:
from pyVmomi import vim, vmodl # pylint: disable=no-name-in-module
HAS_PYVMOMI = True
except ImportError:
HAS_PYVMOMI = False
try:
import gssapi
HAS_GSSAPI = True
except ImportError:
HAS_GSSAPI = False
log = logging.getLogger(__name__)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetClusterTestCase(TestCase):
"""
Tests for salt.utils.vmware.get_cluster
"""
def setUp(self):
patches = (
("salt.utils.vmware.get_managed_object_name", MagicMock()),
("salt.utils.vmware.get_service_instance_from_managed_object", MagicMock()),
(
"salt.utils.vmware.get_mors_with_properties",
MagicMock(
return_value=[{"name": "fake_cluster", "object": MagicMock()}]
),
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
self.mock_si = MagicMock()
self.mock_dc = MagicMock()
self.mock_cluster1 = MagicMock()
self.mock_cluster2 = MagicMock()
self.mock_entries = [
{"name": "fake_cluster1", "object": self.mock_cluster1},
{"name": "fake_cluster2", "object": self.mock_cluster2},
]
for attr in (
"mock_si",
"mock_dc",
"mock_cluster1",
"mock_cluster2",
"mock_entries",
):
self.addCleanup(delattr, self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch(
"salt.utils.vmware.get_managed_object_name", mock_get_managed_object_name
):
salt.utils.vmware.get_cluster(self.mock_dc, "fake_cluster")
mock_get_managed_object_name.assert_called_once_with(self.mock_dc)
def test_get_service_instance_from_managed_object(self):
mock_dc_name = MagicMock()
mock_get_service_instance_from_managed_object = MagicMock()
with patch(
"salt.utils.vmware.get_managed_object_name",
MagicMock(return_value=mock_dc_name),
):
with patch(
"salt.utils.vmware.get_service_instance_from_managed_object",
mock_get_service_instance_from_managed_object,
):
salt.utils.vmware.get_cluster(self.mock_dc, "fake_cluster")
mock_get_service_instance_from_managed_object.assert_called_once_with(
self.mock_dc, name=mock_dc_name
)
def test_traversal_spec_init(self):
mock_dc_name = MagicMock()
mock_traversal_spec = MagicMock()
mock_traversal_spec_ini = MagicMock(return_value=mock_traversal_spec)
mock_get_service_instance_from_managed_object = MagicMock()
patch_traversal_spec_str = (
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec"
)
with patch(patch_traversal_spec_str, mock_traversal_spec_ini):
salt.utils.vmware.get_cluster(self.mock_dc, "fake_cluster")
mock_traversal_spec_ini.assert_has_calls(
[
call(path="childEntity", skip=False, type=vim.Folder),
call(
path="hostFolder",
skip=True,
type=vim.Datacenter,
selectSet=[mock_traversal_spec],
),
]
)
def test_get_mors_with_properties_call(self):
mock_get_mors_with_properties = MagicMock(
return_value=[{"name": "fake_cluster", "object": MagicMock()}]
)
mock_traversal_spec = MagicMock()
patch_traversal_spec_str = (
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec"
)
with patch(
"salt.utils.vmware.get_service_instance_from_managed_object",
MagicMock(return_value=self.mock_si),
):
with patch(
"salt.utils.vmware.get_mors_with_properties",
mock_get_mors_with_properties,
):
with patch(
patch_traversal_spec_str,
MagicMock(return_value=mock_traversal_spec),
):
salt.utils.vmware.get_cluster(self.mock_dc, "fake_cluster")
mock_get_mors_with_properties.assert_called_once_with(
self.mock_si,
vim.ClusterComputeResource,
container_ref=self.mock_dc,
property_list=["name"],
traversal_spec=mock_traversal_spec,
)
def test_get_mors_with_properties_returns_empty_array(self):
with patch(
"salt.utils.vmware.get_managed_object_name",
MagicMock(return_value="fake_dc"),
):
with patch(
"salt.utils.vmware.get_mors_with_properties", MagicMock(return_value=[])
):
with self.assertRaises(VMwareObjectRetrievalError) as excinfo:
salt.utils.vmware.get_cluster(self.mock_dc, "fake_cluster")
self.assertEqual(
excinfo.exception.strerror,
"Cluster 'fake_cluster' was not found in " "datacenter 'fake_dc'",
)
def test_cluster_not_found(self):
with patch(
"salt.utils.vmware.get_managed_object_name",
MagicMock(return_value="fake_dc"),
):
with patch(
"salt.utils.vmware.get_mors_with_properties",
MagicMock(return_value=self.mock_entries),
):
with self.assertRaises(VMwareObjectRetrievalError) as excinfo:
salt.utils.vmware.get_cluster(self.mock_dc, "fake_cluster")
self.assertEqual(
excinfo.exception.strerror,
"Cluster 'fake_cluster' was not found in " "datacenter 'fake_dc'",
)
def test_cluster_found(self):
with patch(
"salt.utils.vmware.get_managed_object_name",
MagicMock(return_value="fake_dc"),
):
with patch(
"salt.utils.vmware.get_mors_with_properties",
MagicMock(return_value=self.mock_entries),
):
res = salt.utils.vmware.get_cluster(self.mock_dc, "fake_cluster2")
self.assertEqual(res, self.mock_cluster2)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class CreateClusterTestCase(TestCase):
"""
Tests for salt.utils.vmware.create_cluster
"""
def setUp(self):
patches = (("salt.utils.vmware.get_managed_object_name", MagicMock()),)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
self.mock_create_cluster_ex = MagicMock()
self.mock_dc = MagicMock(
hostFolder=MagicMock(CreateClusterEx=self.mock_create_cluster_ex)
)
self.mock_cluster_spec = MagicMock()
for attr in ("mock_create_cluster_ex", "mock_dc", "mock_cluster_spec"):
self.addCleanup(delattr, self, attr)
def test_get_managed_object_name(self):
mock_get_managed_object_name = MagicMock()
with patch(
"salt.utils.vmware.get_managed_object_name", mock_get_managed_object_name
):
salt.utils.vmware.create_cluster(
self.mock_dc, "fake_cluster", self.mock_cluster_spec
)
mock_get_managed_object_name.assert_called_once_with(self.mock_dc)
def test_create_cluster_call(self):
salt.utils.vmware.create_cluster(
self.mock_dc, "fake_cluster", self.mock_cluster_spec
)
self.mock_create_cluster_ex.assert_called_once_with(
"fake_cluster", self.mock_cluster_spec
)
def test_create_cluster_raise_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
self.mock_dc.hostFolder.CreateClusterEx = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.create_cluster(
self.mock_dc, "fake_cluster", self.mock_cluster_spec
)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_create_cluster_raise_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
self.mock_dc.hostFolder.CreateClusterEx = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.create_cluster(
self.mock_dc, "fake_cluster", self.mock_cluster_spec
)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_create_cluster_raise_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
self.mock_dc.hostFolder.CreateClusterEx = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.create_cluster(
self.mock_dc, "fake_cluster", self.mock_cluster_spec
)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class UpdateClusterTestCase(TestCase):
"""
Tests for salt.utils.vmware.update_cluster
"""
def setUp(self):
patches = (
("salt.utils.vmware.get_managed_object_name", MagicMock()),
("salt.utils.vmware.wait_for_task", MagicMock()),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
self.mock_task = MagicMock()
self.mock_reconfigure_compute_resource_task = MagicMock(
return_value=self.mock_task
)
self.mock_cluster = MagicMock(
ReconfigureComputeResource_Task=self.mock_reconfigure_compute_resource_task
)
self.mock_cluster_spec = MagicMock()
for attr in (
"mock_task",
"mock_reconfigure_compute_resource_task",
"mock_cluster",
"mock_cluster_spec",
):
self.addCleanup(delattr, self, attr)
def test_get_managed_object_name(self):
mock_get_managed_object_name = MagicMock()
with patch(
"salt.utils.vmware.get_managed_object_name", mock_get_managed_object_name
):
salt.utils.vmware.update_cluster(self.mock_cluster, self.mock_cluster_spec)
mock_get_managed_object_name.assert_called_once_with(self.mock_cluster)
def test_reconfigure_compute_resource_task_call(self):
salt.utils.vmware.update_cluster(self.mock_cluster, self.mock_cluster_spec)
self.mock_reconfigure_compute_resource_task.assert_called_once_with(
self.mock_cluster_spec, modify=True
)
def test_reconfigure_compute_resource_raise_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
self.mock_cluster.ReconfigureComputeResource_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.update_cluster(self.mock_cluster, self.mock_cluster_spec)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_reconfigure_compute_resource_raise_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
self.mock_cluster.ReconfigureComputeResource_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.update_cluster(self.mock_cluster, self.mock_cluster_spec)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_reconfigure_compute_resource_raise_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
self.mock_cluster.ReconfigureComputeResource_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.update_cluster(self.mock_cluster, self.mock_cluster_spec)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_wait_for_task_call(self):
mock_wait_for_task = MagicMock()
with patch(
"salt.utils.vmware.get_managed_object_name",
MagicMock(return_value="fake_cluster"),
):
with patch("salt.utils.vmware.wait_for_task", mock_wait_for_task):
salt.utils.vmware.update_cluster(
self.mock_cluster, self.mock_cluster_spec
)
mock_wait_for_task.assert_called_once_with(
self.mock_task, "fake_cluster", "ClusterUpdateTask"
)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class WaitForTaskTestCase(TestCase):
"""
Tests for salt.utils.vmware.wait_for_task
"""
def setUp(self):
patches = (
("salt.utils.vmware.time.time", MagicMock(return_value=1)),
("salt.utils.vmware.time.sleep", MagicMock(return_value=None)),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def test_first_task_info_raise_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
mock_task = MagicMock()
type(mock_task).info = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.wait_for_task(
mock_task, "fake_instance_name", "task_type"
)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_first_task_info_raise_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
mock_task = MagicMock()
type(mock_task).info = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.wait_for_task(
mock_task, "fake_instance_name", "task_type"
)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_first_task_info_raise_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
mock_task = MagicMock()
type(mock_task).info = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.wait_for_task(
mock_task, "fake_instance_name", "task_type"
)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_inner_loop_task_info_raise_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
mock_task = MagicMock()
mock_info1 = MagicMock()
type(mock_task).info = PropertyMock(side_effect=[mock_info1, exc])
type(mock_info1).state = PropertyMock(side_effect=["running", "bad"])
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.wait_for_task(
mock_task, "fake_instance_name", "task_type"
)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_inner_loop_task_info_raise_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
mock_task = MagicMock()
mock_info1 = MagicMock()
type(mock_task).info = PropertyMock(side_effect=[mock_info1, exc])
type(mock_info1).state = PropertyMock(side_effect=["running", "bad"])
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.wait_for_task(
mock_task, "fake_instance_name", "task_type"
)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_inner_loop_task_info_raise_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
mock_task = MagicMock()
mock_info1 = MagicMock()
type(mock_task).info = PropertyMock(side_effect=[mock_info1, exc])
type(mock_info1).state = PropertyMock(side_effect=["running", "bad"])
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.wait_for_task(
mock_task, "fake_instance_name", "task_type"
)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_info_state_running(self):
# The 'bad' values are invalid in the while loop
mock_task = MagicMock()
prop_mock_state = PropertyMock(side_effect=["running", "bad", "bad", "success"])
prop_mock_result = PropertyMock()
type(mock_task.info).state = prop_mock_state
type(mock_task.info).result = prop_mock_result
salt.utils.vmware.wait_for_task(mock_task, "fake_instance_name", "task_type")
self.assertEqual(prop_mock_state.call_count, 4)
self.assertEqual(prop_mock_result.call_count, 1)
def test_info_state_running_continues_loop(self):
mock_task = MagicMock()
# The 'fake' values are required to match all the lookups and end the
# loop
prop_mock_state = PropertyMock(
side_effect=["running", "fake", "fake", "success"]
)
prop_mock_result = PropertyMock()
type(mock_task.info).state = prop_mock_state
type(mock_task.info).result = prop_mock_result
salt.utils.vmware.wait_for_task(mock_task, "fake_instance_name", "task_type")
self.assertEqual(prop_mock_state.call_count, 4)
self.assertEqual(prop_mock_result.call_count, 1)
def test_info_state_queued_continues_loop(self):
mock_task = MagicMock()
# The 'fake' values are required to match all the lookups and end the
# loop
prop_mock_state = PropertyMock(
side_effect=["fake", "queued", "fake", "fake", "success"]
)
prop_mock_result = PropertyMock()
type(mock_task.info).state = prop_mock_state
type(mock_task.info).result = prop_mock_result
salt.utils.vmware.wait_for_task(mock_task, "fake_instance_name", "task_type")
self.assertEqual(prop_mock_state.call_count, 5)
self.assertEqual(prop_mock_result.call_count, 1)
def test_info_state_success(self):
mock_task = MagicMock()
prop_mock_state = PropertyMock(return_value="success")
prop_mock_result = PropertyMock()
type(mock_task.info).state = prop_mock_state
type(mock_task.info).result = prop_mock_result
salt.utils.vmware.wait_for_task(mock_task, "fake_instance_name", "task_type")
self.assertEqual(prop_mock_state.call_count, 3)
self.assertEqual(prop_mock_result.call_count, 1)
def test_info_error_exception(self):
mock_task = MagicMock()
prop_mock_state = PropertyMock(return_value="error")
prop_mock_error = PropertyMock(side_effect=Exception("error exc"))
type(mock_task.info).state = prop_mock_state
type(mock_task.info).error = prop_mock_error
with self.assertRaises(Exception) as excinfo:
salt.utils.vmware.wait_for_task(
mock_task, "fake_instance_name", "task_type"
)
self.assertEqual(str(excinfo.exception), "error exc")
def test_info_error_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
mock_task = MagicMock()
prop_mock_state = PropertyMock(return_value="error")
prop_mock_error = PropertyMock(side_effect=exc)
type(mock_task.info).state = prop_mock_state
type(mock_task.info).error = prop_mock_error
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.wait_for_task(
mock_task, "fake_instance_name", "task_type"
)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_info_error_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
mock_task = MagicMock()
prop_mock_state = PropertyMock(return_value="error")
prop_mock_error = PropertyMock(side_effect=exc)
type(mock_task.info).state = prop_mock_state
type(mock_task.info).error = prop_mock_error
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.wait_for_task(
mock_task, "fake_instance_name", "task_type"
)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_info_error_system_fault(self):
exc = vmodl.fault.SystemError()
exc.msg = "SystemError msg"
mock_task = MagicMock()
prop_mock_state = PropertyMock(return_value="error")
prop_mock_error = PropertyMock(side_effect=exc)
type(mock_task.info).state = prop_mock_state
type(mock_task.info).error = prop_mock_error
with self.assertRaises(VMwareSystemError) as excinfo:
salt.utils.vmware.wait_for_task(
mock_task, "fake_instance_name", "task_type"
)
self.assertEqual(excinfo.exception.strerror, "SystemError msg")
def test_info_error_invalid_argument_no_fault_message(self):
exc = vmodl.fault.InvalidArgument()
exc.faultMessage = None
exc.msg = "InvalidArgumentFault msg"
mock_task = MagicMock()
prop_mock_state = PropertyMock(return_value="error")
prop_mock_error = PropertyMock(side_effect=exc)
type(mock_task.info).state = prop_mock_state
type(mock_task.info).error = prop_mock_error
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.wait_for_task(
mock_task, "fake_instance_name", "task_type"
)
self.assertEqual(excinfo.exception.strerror, "InvalidArgumentFault msg")
def test_info_error_invalid_argument_with_fault_message(self):
exc = vmodl.fault.InvalidArgument()
fault_message = vim.LocalizableMessage()
fault_message.message = "LocalFault msg"
exc.faultMessage = [fault_message]
exc.msg = "InvalidArgumentFault msg"
mock_task = MagicMock()
prop_mock_state = PropertyMock(return_value="error")
prop_mock_error = PropertyMock(side_effect=exc)
type(mock_task.info).state = prop_mock_state
type(mock_task.info).error = prop_mock_error
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.wait_for_task(
mock_task, "fake_instance_name", "task_type"
)
self.assertEqual(
excinfo.exception.strerror, "InvalidArgumentFault msg (LocalFault msg)"
)
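The `PropertyMock(side_effect=[...])` pattern used throughout this class is what drives `wait_for_task` through successive polling iterations: each *read* of the mocked property consumes the next value. A standalone sketch of that mechanism:

```python
from unittest.mock import MagicMock, PropertyMock

# Each read of .state consumes the next side_effect value. Properties must be
# attached to the mock's *type*, not the instance; every mock instance gets its
# own class, so this does not leak to other mocks.
task = MagicMock()
state_prop = PropertyMock(side_effect=["running", "queued", "success"])
type(task.info).state = state_prop

seen = [task.info.state for _ in range(3)]
print(seen)                   # ['running', 'queued', 'success']
print(state_prop.call_count)  # 3
```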
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetMorsWithPropertiesTestCase(TestCase):
"""
Tests for salt.utils.vmware.get_mors_with_properties
"""
si = None
obj_type = None
prop_list = None
container_ref = None
traversal_spec = None
def setUp(self):
self.si = MagicMock()
self.obj_type = MagicMock()
self.prop_list = MagicMock()
self.container_ref = MagicMock()
self.traversal_spec = MagicMock()
def test_empty_content(self):
get_content = MagicMock(return_value=[])
with patch("salt.utils.vmware.get_content", get_content):
ret = salt.utils.vmware.get_mors_with_properties(
self.si,
self.obj_type,
self.prop_list,
self.container_ref,
self.traversal_spec,
)
get_content.assert_called_once_with(
self.si,
self.obj_type,
property_list=self.prop_list,
container_ref=self.container_ref,
traversal_spec=self.traversal_spec,
local_properties=False,
)
self.assertEqual(ret, [])
def test_local_properties_set(self):
obj_mock = MagicMock()
# obj.propSet
propSet_prop = PropertyMock(return_value=[])
type(obj_mock).propSet = propSet_prop
# obj.obj
inner_obj_mock = MagicMock()
obj_prop = PropertyMock(return_value=inner_obj_mock)
type(obj_mock).obj = obj_prop
get_content = MagicMock(return_value=[obj_mock])
with patch("salt.utils.vmware.get_content", get_content):
ret = salt.utils.vmware.get_mors_with_properties(
self.si,
self.obj_type,
self.prop_list,
self.container_ref,
self.traversal_spec,
local_properties=True,
)
get_content.assert_called_once_with(
self.si,
self.obj_type,
property_list=self.prop_list,
container_ref=self.container_ref,
traversal_spec=self.traversal_spec,
local_properties=True,
)
def test_one_element_content(self):
obj_mock = MagicMock()
# obj.propSet
propSet_prop = PropertyMock(return_value=[])
type(obj_mock).propSet = propSet_prop
# obj.obj
inner_obj_mock = MagicMock()
obj_prop = PropertyMock(return_value=inner_obj_mock)
type(obj_mock).obj = obj_prop
get_content = MagicMock(return_value=[obj_mock])
with patch("salt.utils.vmware.get_content", get_content):
ret = salt.utils.vmware.get_mors_with_properties(
self.si,
self.obj_type,
self.prop_list,
self.container_ref,
self.traversal_spec,
)
get_content.assert_called_once_with(
self.si,
self.obj_type,
property_list=self.prop_list,
container_ref=self.container_ref,
traversal_spec=self.traversal_spec,
local_properties=False,
)
self.assertEqual(propSet_prop.call_count, 1)
self.assertEqual(obj_prop.call_count, 1)
self.assertEqual(len(ret), 1)
self.assertDictEqual(ret[0], {"object": inner_obj_mock})
def test_multiple_element_content(self):
# obj1
obj1_mock = MagicMock()
# obj1.propSet
obj1_propSet_prop = PropertyMock(return_value=[])
type(obj1_mock).propSet = obj1_propSet_prop
# obj1.obj
obj1_inner_obj_mock = MagicMock()
obj1_obj_prop = PropertyMock(return_value=obj1_inner_obj_mock)
type(obj1_mock).obj = obj1_obj_prop
# obj2
obj2_mock = MagicMock()
# obj2.propSet
obj2_propSet_prop = PropertyMock(return_value=[])
type(obj2_mock).propSet = obj2_propSet_prop
# obj2.obj
obj2_inner_obj_mock = MagicMock()
obj2_obj_prop = PropertyMock(return_value=obj2_inner_obj_mock)
type(obj2_mock).obj = obj2_obj_prop
get_content = MagicMock(return_value=[obj1_mock, obj2_mock])
with patch("salt.utils.vmware.get_content", get_content):
ret = salt.utils.vmware.get_mors_with_properties(
self.si,
self.obj_type,
self.prop_list,
self.container_ref,
self.traversal_spec,
)
get_content.assert_called_once_with(
self.si,
self.obj_type,
property_list=self.prop_list,
container_ref=self.container_ref,
traversal_spec=self.traversal_spec,
local_properties=False,
)
self.assertEqual(obj1_propSet_prop.call_count, 1)
self.assertEqual(obj2_propSet_prop.call_count, 1)
self.assertEqual(obj1_obj_prop.call_count, 1)
self.assertEqual(obj2_obj_prop.call_count, 1)
self.assertEqual(len(ret), 2)
self.assertDictEqual(ret[0], {"object": obj1_inner_obj_mock})
self.assertDictEqual(ret[1], {"object": obj2_inner_obj_mock})
def test_one_elem_one_property(self):
obj_mock = MagicMock()
# property mock
prop_set_obj_mock = MagicMock()
prop_set_obj_name_prop = PropertyMock(return_value="prop_name")
prop_set_obj_val_prop = PropertyMock(return_value="prop_value")
type(prop_set_obj_mock).name = prop_set_obj_name_prop
type(prop_set_obj_mock).val = prop_set_obj_val_prop
# obj.propSet
propSet_prop = PropertyMock(return_value=[prop_set_obj_mock])
type(obj_mock).propSet = propSet_prop
# obj.obj
inner_obj_mock = MagicMock()
obj_prop = PropertyMock(return_value=inner_obj_mock)
type(obj_mock).obj = obj_prop
get_content = MagicMock(return_value=[obj_mock])
with patch("salt.utils.vmware.get_content", get_content):
ret = salt.utils.vmware.get_mors_with_properties(
self.si,
self.obj_type,
self.prop_list,
self.container_ref,
self.traversal_spec,
local_properties=False,
)
get_content.assert_called_once_with(
self.si,
self.obj_type,
property_list=self.prop_list,
container_ref=self.container_ref,
traversal_spec=self.traversal_spec,
local_properties=False,
)
self.assertEqual(propSet_prop.call_count, 1)
self.assertEqual(prop_set_obj_name_prop.call_count, 1)
self.assertEqual(prop_set_obj_val_prop.call_count, 1)
self.assertEqual(obj_prop.call_count, 1)
self.assertEqual(len(ret), 1)
self.assertDictEqual(
ret[0], {"prop_name": "prop_value", "object": inner_obj_mock}
)
def test_one_elem_multiple_properties(self):
obj_mock = MagicMock()
# property1 mock
prop_set_obj1_mock = MagicMock()
prop_set_obj1_name_prop = PropertyMock(return_value="prop_name1")
prop_set_obj1_val_prop = PropertyMock(return_value="prop_value1")
type(prop_set_obj1_mock).name = prop_set_obj1_name_prop
type(prop_set_obj1_mock).val = prop_set_obj1_val_prop
# property2 mock
prop_set_obj2_mock = MagicMock()
prop_set_obj2_name_prop = PropertyMock(return_value="prop_name2")
prop_set_obj2_val_prop = PropertyMock(return_value="prop_value2")
type(prop_set_obj2_mock).name = prop_set_obj2_name_prop
type(prop_set_obj2_mock).val = prop_set_obj2_val_prop
# obj.propSet
propSet_prop = PropertyMock(
return_value=[prop_set_obj1_mock, prop_set_obj2_mock]
)
type(obj_mock).propSet = propSet_prop
# obj.obj
inner_obj_mock = MagicMock()
obj_prop = PropertyMock(return_value=inner_obj_mock)
type(obj_mock).obj = obj_prop
get_content = MagicMock(return_value=[obj_mock])
with patch("salt.utils.vmware.get_content", get_content):
ret = salt.utils.vmware.get_mors_with_properties(
self.si,
self.obj_type,
self.prop_list,
self.container_ref,
self.traversal_spec,
)
get_content.assert_called_once_with(
self.si,
self.obj_type,
property_list=self.prop_list,
container_ref=self.container_ref,
traversal_spec=self.traversal_spec,
local_properties=False,
)
self.assertEqual(propSet_prop.call_count, 1)
self.assertEqual(prop_set_obj1_name_prop.call_count, 1)
self.assertEqual(prop_set_obj1_val_prop.call_count, 1)
self.assertEqual(prop_set_obj2_name_prop.call_count, 1)
self.assertEqual(prop_set_obj2_val_prop.call_count, 1)
self.assertEqual(obj_prop.call_count, 1)
self.assertEqual(len(ret), 1)
self.assertDictEqual(
ret[0],
{
"prop_name1": "prop_value1",
"prop_name2": "prop_value2",
"object": inner_obj_mock,
},
)
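

# The tests above follow unittest.mock's documented pattern for mocking
# attributes: a PropertyMock must be attached to the mock's *type* so that a
# plain attribute read triggers the descriptor and bumps its call_count.
# A minimal standalone sketch of that pattern (illustrative names only, not
# part of the suite under test):
def _example_property_mock_pattern():
    from unittest.mock import MagicMock, PropertyMock

    obj_mock = MagicMock()
    name_prop = PropertyMock(return_value="prop_name")
    # mock gives each instance its own subclass, so this stays per-instance
    type(obj_mock).name = name_prop
    value = obj_mock.name  # descriptor fires; call_count becomes 1
    return value, name_prop.call_count

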
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetPropertiesOfManagedObjectTestCase(TestCase):
"""
    Tests for salt.utils.vmware.get_properties_of_managed_object
"""
def setUp(self):
patches = (
("salt.utils.vmware.get_service_instance_from_managed_object", MagicMock()),
(
"salt.utils.vmware.get_mors_with_properties",
MagicMock(return_value=[MagicMock()]),
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
self.mock_si = MagicMock()
self.fake_mo_ref = vim.ManagedEntity("Fake")
self.mock_props = MagicMock()
self.mock_item_name = {"name": "fake_name"}
self.mock_item = MagicMock()
def test_get_service_instance_from_managed_object_call(self):
mock_get_instance_from_managed_object = MagicMock()
with patch(
"salt.utils.vmware.get_service_instance_from_managed_object",
mock_get_instance_from_managed_object,
):
salt.utils.vmware.get_properties_of_managed_object(
self.fake_mo_ref, self.mock_props
)
mock_get_instance_from_managed_object.assert_called_once_with(self.fake_mo_ref)
def test_get_mors_with_properties_calls(self):
mock_get_mors_with_properties = MagicMock(return_value=[MagicMock()])
with patch(
"salt.utils.vmware.get_service_instance_from_managed_object",
MagicMock(return_value=self.mock_si),
):
with patch(
"salt.utils.vmware.get_mors_with_properties",
mock_get_mors_with_properties,
):
salt.utils.vmware.get_properties_of_managed_object(
self.fake_mo_ref, self.mock_props
)
mock_get_mors_with_properties.assert_has_calls(
[
call(
self.mock_si,
vim.ManagedEntity,
container_ref=self.fake_mo_ref,
property_list=["name"],
local_properties=True,
),
call(
self.mock_si,
vim.ManagedEntity,
container_ref=self.fake_mo_ref,
property_list=self.mock_props,
local_properties=True,
),
]
)
def test_managed_object_no_name_property(self):
with patch(
"salt.utils.vmware.get_mors_with_properties",
MagicMock(side_effect=[vmodl.query.InvalidProperty(), []]),
):
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_properties_of_managed_object(
self.fake_mo_ref, self.mock_props
)
self.assertEqual(
            "Properties of managed object '<unnamed>' weren't retrieved",
excinfo.exception.strerror,
)
def test_no_items_named_object(self):
with patch(
"salt.utils.vmware.get_mors_with_properties",
MagicMock(side_effect=[[self.mock_item_name], []]),
):
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_properties_of_managed_object(
self.fake_mo_ref, self.mock_props
)
self.assertEqual(
            "Properties of managed object 'fake_name' weren't retrieved",
excinfo.exception.strerror,
)
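

# test_managed_object_no_name_property above stubs get_mors_with_properties
# with side_effect set to a list: successive calls yield successive items,
# and exception instances in the list are raised instead of returned.
# A standalone sketch of that behaviour (illustrative values only):
def _example_side_effect_sequence():
    from unittest.mock import MagicMock

    stub = MagicMock(side_effect=[ValueError("first"), []])
    results = []
    try:
        stub()  # first item is an exception instance, so it is raised
    except ValueError as exc:
        results.append(str(exc))
    results.append(stub())  # second call returns the next item: []
    return results

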
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetManagedObjectName(TestCase):
"""
    Tests for salt.utils.vmware.get_managed_object_name
"""
def setUp(self):
patches = (
(
"salt.utils.vmware.get_properties_of_managed_object",
MagicMock(return_value={"key": "value"}),
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
self.mock_mo_ref = MagicMock()
def test_get_properties_of_managed_object_call(self):
mock_get_properties_of_managed_object = MagicMock()
with patch(
"salt.utils.vmware.get_properties_of_managed_object",
mock_get_properties_of_managed_object,
):
salt.utils.vmware.get_managed_object_name(self.mock_mo_ref)
mock_get_properties_of_managed_object.assert_called_once_with(
self.mock_mo_ref, ["name"]
)
def test_no_name_in_property_dict(self):
ret = salt.utils.vmware.get_managed_object_name(self.mock_mo_ref)
self.assertIsNone(ret)
def test_return_managed_object_name(self):
with patch(
"salt.utils.vmware.get_properties_of_managed_object",
MagicMock(return_value={"name": "fake_name"}),
):
ret = salt.utils.vmware.get_managed_object_name(self.mock_mo_ref)
self.assertEqual(ret, "fake_name")
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetContentTestCase(TestCase):
"""
    Tests for salt.utils.vmware.get_content
"""
# Method names to be patched
traversal_spec_method_name = (
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec"
)
property_spec_method_name = (
"salt.utils.vmware.vmodl.query.PropertyCollector.PropertySpec"
)
obj_spec_method_name = "salt.utils.vmware.vmodl.query.PropertyCollector.ObjectSpec"
filter_spec_method_name = (
"salt.utils.vmware.vmodl.query.PropertyCollector.FilterSpec"
)
# Class variables
si_mock = None
root_folder_mock = None
root_folder_prop = None
container_view_mock = None
create_container_view_mock = None
result_mock = None
retrieve_contents_mock = None
destroy_mock = None
obj_type_mock = None
traversal_spec_ret_mock = None
traversal_spec_mock = None
property_spec_ret_mock = None
property_spec_mock = None
obj_spec_ret_mock = None
obj_spec_mock = None
filter_spec_ret_mock = None
filter_spec_mock = None
def setUp(self):
patches = (
("salt.utils.vmware.get_root_folder", MagicMock()),
(
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec",
MagicMock(return_value=MagicMock()),
),
(
"salt.utils.vmware.vmodl.query.PropertyCollector.PropertySpec",
MagicMock(return_value=MagicMock()),
),
(
"salt.utils.vmware.vmodl.query.PropertyCollector.ObjectSpec",
MagicMock(return_value=MagicMock()),
),
(
"salt.utils.vmware.vmodl.query.PropertyCollector.FilterSpec",
MagicMock(return_value=MagicMock()),
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
# setup the service instance
self.si_mock = MagicMock()
# RootFolder
self.root_folder_mock = MagicMock()
self.get_root_folder_mock = MagicMock(return_value=self.root_folder_mock)
# CreateContainerView()
self.container_view_mock = MagicMock()
self.create_container_view_mock = MagicMock(
return_value=self.container_view_mock
)
self.si_mock.content.viewManager.CreateContainerView = (
self.create_container_view_mock
)
# RetrieveContents()
self.result_mock = MagicMock()
self.retrieve_contents_mock = MagicMock(return_value=self.result_mock)
self.si_mock.content.propertyCollector.RetrieveContents = (
self.retrieve_contents_mock
)
# Destroy()
self.destroy_mock = MagicMock()
self.container_view_mock.Destroy = self.destroy_mock
# override mocks
self.obj_type_mock = MagicMock()
self.traversal_spec_ret_mock = MagicMock()
self.traversal_spec_mock = MagicMock(return_value=self.traversal_spec_ret_mock)
self.property_spec_ret_mock = MagicMock()
self.property_spec_mock = MagicMock(return_value=self.property_spec_ret_mock)
self.obj_spec_ret_mock = MagicMock()
self.obj_spec_mock = MagicMock(return_value=self.obj_spec_ret_mock)
self.filter_spec_ret_mock = MagicMock()
self.filter_spec_mock = MagicMock(return_value=self.filter_spec_ret_mock)
def test_empty_container_ref(self):
with patch("salt.utils.vmware.get_root_folder", self.get_root_folder_mock):
salt.utils.vmware.get_content(self.si_mock, self.obj_type_mock)
self.get_root_folder_mock.assert_called_once_with(self.si_mock)
self.create_container_view_mock.assert_called_once_with(
self.root_folder_mock, [self.obj_type_mock], True
)
def test_defined_container_ref(self):
container_ref_mock = MagicMock()
with patch("salt.utils.vmware.get_root_folder", self.get_root_folder_mock):
with patch(self.obj_spec_method_name, self.obj_type_mock):
salt.utils.vmware.get_content(
self.si_mock, self.obj_type_mock, container_ref=container_ref_mock
)
self.assertEqual(self.get_root_folder_mock.call_count, 0)
self.create_container_view_mock.assert_called_once_with(
container_ref_mock, [self.obj_type_mock], True
)
# Also checks destroy is called
def test_local_traversal_spec(self):
with patch("salt.utils.vmware.get_root_folder", self.get_root_folder_mock):
with patch(self.traversal_spec_method_name, self.traversal_spec_mock):
with patch(self.obj_spec_method_name, self.obj_spec_mock):
                    salt.utils.vmware.get_content(
self.si_mock, self.obj_type_mock
)
self.create_container_view_mock.assert_called_once_with(
self.root_folder_mock, [self.obj_type_mock], True
)
self.traversal_spec_mock.assert_called_once_with(
name="traverseEntities",
path="view",
skip=False,
type=vim.view.ContainerView,
)
self.obj_spec_mock.assert_called_once_with(
obj=self.container_view_mock,
skip=True,
selectSet=[self.traversal_spec_ret_mock],
)
# check destroy is called
self.assertEqual(self.destroy_mock.call_count, 1)
def test_create_container_view_raise_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
self.si_mock.content.viewManager.CreateContainerView = MagicMock(
side_effect=exc
)
with patch("salt.utils.vmware.get_root_folder", self.get_root_folder_mock):
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_content(self.si_mock, self.obj_type_mock)
self.assertEqual(
excinfo.exception.strerror,
            "Not enough permissions. Required privilege: Fake privilege",
)
def test_create_container_view_raise_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
self.si_mock.content.viewManager.CreateContainerView = MagicMock(
side_effect=exc
)
with patch("salt.utils.vmware.get_root_folder", self.get_root_folder_mock):
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_content(self.si_mock, self.obj_type_mock)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_create_container_view_raise_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
self.si_mock.content.viewManager.CreateContainerView = MagicMock(
side_effect=exc
)
with patch("salt.utils.vmware.get_root_folder", self.get_root_folder_mock):
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.get_content(self.si_mock, self.obj_type_mock)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_destroy_raise_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
self.si_mock.content.viewManager.CreateContainerView = MagicMock(
return_value=MagicMock(Destroy=MagicMock(side_effect=exc))
)
with patch("salt.utils.vmware.get_root_folder", self.get_root_folder_mock):
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_content(self.si_mock, self.obj_type_mock)
self.assertEqual(
excinfo.exception.strerror,
            "Not enough permissions. Required privilege: Fake privilege",
)
def test_destroy_raise_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
self.si_mock.content.viewManager.CreateContainerView = MagicMock(
return_value=MagicMock(Destroy=MagicMock(side_effect=exc))
)
with patch("salt.utils.vmware.get_root_folder", self.get_root_folder_mock):
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_content(self.si_mock, self.obj_type_mock)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_destroy_raise_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
self.si_mock.content.viewManager.CreateContainerView = MagicMock(
return_value=MagicMock(Destroy=MagicMock(side_effect=exc))
)
with patch("salt.utils.vmware.get_root_folder", self.get_root_folder_mock):
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.get_content(self.si_mock, self.obj_type_mock)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
# Also checks destroy is not called
def test_external_traversal_spec(self):
traversal_spec_obj_mock = MagicMock()
with patch("salt.utils.vmware.get_root_folder", self.get_root_folder_mock):
with patch(self.traversal_spec_method_name, self.traversal_spec_mock):
with patch(self.obj_spec_method_name, self.obj_spec_mock):
salt.utils.vmware.get_content(
self.si_mock,
self.obj_type_mock,
traversal_spec=traversal_spec_obj_mock,
)
self.obj_spec_mock.assert_called_once_with(
obj=self.root_folder_mock, skip=True, selectSet=[traversal_spec_obj_mock]
)
# Check local traversal methods are not called
self.assertEqual(self.create_container_view_mock.call_count, 0)
self.assertEqual(self.traversal_spec_mock.call_count, 0)
# check destroy is not called
self.assertEqual(self.destroy_mock.call_count, 0)
def test_property_obj_filter_specs_and_contents(self):
with patch(self.traversal_spec_method_name, self.traversal_spec_mock):
with patch(self.property_spec_method_name, self.property_spec_mock):
with patch(self.obj_spec_method_name, self.obj_spec_mock):
with patch(self.filter_spec_method_name, self.filter_spec_mock):
ret = salt.utils.vmware.get_content(
self.si_mock, self.obj_type_mock
)
self.traversal_spec_mock.assert_called_once_with(
name="traverseEntities",
path="view",
skip=False,
type=vim.view.ContainerView,
)
self.property_spec_mock.assert_called_once_with(
type=self.obj_type_mock, all=True, pathSet=None
)
self.obj_spec_mock.assert_called_once_with(
obj=self.container_view_mock,
skip=True,
selectSet=[self.traversal_spec_ret_mock],
)
self.retrieve_contents_mock.assert_called_once_with([self.filter_spec_ret_mock])
self.assertEqual(ret, self.result_mock)
def test_retrieve_contents_raise_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
self.si_mock.content.propertyCollector.RetrieveContents = MagicMock(
side_effect=exc
)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_content(self.si_mock, self.obj_type_mock)
self.assertEqual(
excinfo.exception.strerror,
            "Not enough permissions. Required privilege: Fake privilege",
)
def test_retrieve_contents_raise_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
self.si_mock.content.propertyCollector.RetrieveContents = MagicMock(
side_effect=exc
)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_content(self.si_mock, self.obj_type_mock)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_retrieve_contents_raise_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
self.si_mock.content.propertyCollector.RetrieveContents = MagicMock(
side_effect=exc
)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.get_content(self.si_mock, self.obj_type_mock)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_local_properties_set(self):
container_ref_mock = MagicMock()
with patch(self.traversal_spec_method_name, self.traversal_spec_mock):
with patch(self.property_spec_method_name, self.property_spec_mock):
with patch(self.obj_spec_method_name, self.obj_spec_mock):
salt.utils.vmware.get_content(
self.si_mock,
self.obj_type_mock,
container_ref=container_ref_mock,
local_properties=True,
)
self.assertEqual(self.traversal_spec_mock.call_count, 0)
self.obj_spec_mock.assert_called_once_with(
obj=container_ref_mock, skip=False, selectSet=None
)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetRootFolderTestCase(TestCase):
"""
    Tests for salt.utils.vmware.get_root_folder
"""
def setUp(self):
self.mock_root_folder = MagicMock()
self.mock_content = MagicMock(rootFolder=self.mock_root_folder)
self.mock_si = MagicMock(
RetrieveContent=MagicMock(return_value=self.mock_content)
)
def test_raise_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
type(self.mock_content).rootFolder = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_root_folder(self.mock_si)
self.assertEqual(
excinfo.exception.strerror,
            "Not enough permissions. Required privilege: Fake privilege",
)
def test_raise_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
type(self.mock_content).rootFolder = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_root_folder(self.mock_si)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_raise_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
type(self.mock_content).rootFolder = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.get_root_folder(self.mock_si)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_return(self):
ret = salt.utils.vmware.get_root_folder(self.mock_si)
self.assertEqual(ret, self.mock_root_folder)
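

# The fault tests above rely on PropertyMock(side_effect=exc) so that merely
# *reading* an attribute (e.g. content.rootFolder) raises the fault. A
# standalone sketch of that mechanism, using a plain RuntimeError in place of
# a pyVmomi fault (illustrative names only):
def _example_property_mock_side_effect():
    from unittest.mock import MagicMock, PropertyMock

    content = MagicMock()
    type(content).rootFolder = PropertyMock(side_effect=RuntimeError("boom"))
    try:
        content.rootFolder  # attribute access itself raises
    except RuntimeError as exc:
        return str(exc)
    return None

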
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetServiceInfoTestCase(TestCase):
"""
Tests for salt.utils.vmware.get_service_info
"""
def setUp(self):
self.mock_about = MagicMock()
self.mock_si = MagicMock(content=MagicMock())
type(self.mock_si.content).about = PropertyMock(return_value=self.mock_about)
def tearDown(self):
for attr in ("mock_si", "mock_about"):
delattr(self, attr)
def test_about_ret(self):
ret = salt.utils.vmware.get_service_info(self.mock_si)
self.assertEqual(ret, self.mock_about)
def test_about_raises_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
type(self.mock_si.content).about = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_service_info(self.mock_si)
self.assertEqual(
excinfo.exception.strerror,
            "Not enough permissions. Required privilege: Fake privilege",
)
def test_about_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
type(self.mock_si.content).about = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_service_info(self.mock_si)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_about_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
type(self.mock_si.content).about = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.get_service_info(self.mock_si)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
@skipIf(not HAS_GSSAPI, "The 'gssapi' library is missing")
class GssapiTokenTest(TestCase):
"""
Test cases for salt.utils.vmware.get_gssapi_token
"""
def setUp(self):
patches = (
("gssapi.Name", MagicMock(return_value="service")),
("gssapi.InitContext", MagicMock()),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def test_no_gssapi(self):
with patch("salt.utils.vmware.HAS_GSSAPI", False):
with self.assertRaises(ImportError) as excinfo:
salt.utils.vmware.get_gssapi_token("principal", "host", "domain")
self.assertIn(
"The gssapi library is not imported.", excinfo.exception.message
)
@skipIf(not HAS_GSSAPI, "The 'gssapi' library is missing")
def test_service_name(self):
mock_name = MagicMock()
with patch.object(salt.utils.vmware.gssapi, "Name", mock_name):
with self.assertRaises(CommandExecutionError):
salt.utils.vmware.get_gssapi_token("principal", "host", "domain")
mock_name.assert_called_once_with(
"principal/host@domain", gssapi.C_NT_USER_NAME
)
@skipIf(not HAS_GSSAPI, "The 'gssapi' library is missing")
def test_out_token_defined(self):
mock_context = MagicMock(return_value=MagicMock())
mock_context.return_value.established = False
mock_context.return_value.step = MagicMock(return_value="out_token")
with patch.object(salt.utils.vmware.gssapi, "InitContext", mock_context):
ret = salt.utils.vmware.get_gssapi_token("principal", "host", "domain")
            self.assertTrue(mock_context.return_value.step.called)
self.assertEqual(ret, base64.b64encode(b"out_token"))
@skipIf(not HAS_GSSAPI, "The 'gssapi' library is missing")
def test_out_token_undefined(self):
mock_context = MagicMock(return_value=MagicMock())
mock_context.return_value.established = False
mock_context.return_value.step = MagicMock(return_value=None)
with patch.object(salt.utils.vmware.gssapi, "InitContext", mock_context):
with self.assertRaises(CommandExecutionError) as excinfo:
salt.utils.vmware.get_gssapi_token("principal", "host", "domain")
            self.assertTrue(mock_context.return_value.step.called)
self.assertIn("Can't receive token", excinfo.exception.strerror)
@skipIf(not HAS_GSSAPI, "The 'gssapi' library is missing")
    def test_context_established(self):
mock_context = MagicMock(return_value=MagicMock())
mock_context.return_value.established = True
mock_context.return_value.step = MagicMock(return_value="out_token")
with patch.object(salt.utils.vmware.gssapi, "InitContext", mock_context):
mock_context.established = True
mock_context.step = MagicMock(return_value=None)
with self.assertRaises(CommandExecutionError) as excinfo:
salt.utils.vmware.get_gssapi_token("principal", "host", "domain")
            self.assertFalse(mock_context.step.called)
self.assertIn(
"Context established, but didn't receive token",
excinfo.exception.strerror,
)
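

# Every setUp in this module uses the patcher start/addCleanup idiom so that
# patches are undone even when a test errors out before tearDown. A
# standalone sketch of the same start/stop lifecycle (os.getcwd is just an
# illustrative patch target, not something these tests patch):
def _example_patcher_lifecycle():
    import os
    from unittest.mock import MagicMock, patch

    patcher = patch("os.getcwd", MagicMock(return_value="/fake"))
    patcher.start()
    try:
        return os.getcwd()  # returns the mocked value while the patch is live
    finally:
        patcher.stop()  # in a TestCase, self.addCleanup(patcher.stop) does this

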
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class PrivateGetServiceInstanceTestCase(TestCase):
"""
Tests for salt.utils.vmware._get_service_instance
"""
def setUp(self):
patches = (
("salt.utils.vmware.SmartConnect", MagicMock()),
("salt.utils.vmware.Disconnect", MagicMock()),
(
"salt.utils.vmware.get_gssapi_token",
MagicMock(return_value="fake_token"),
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
    def test_invalid_mechanism(self):
with self.assertRaises(CommandExecutionError) as excinfo:
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="invalid_mechanism",
principal="fake principal",
domain="fake_domain",
)
self.assertIn("Unsupported mechanism", excinfo.exception.strerror)
def test_userpass_mechanism_empty_username(self):
with self.assertRaises(CommandExecutionError) as excinfo:
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username=None,
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="userpass",
principal="fake principal",
domain="fake_domain",
)
self.assertIn("mandatory parameter 'username'", excinfo.exception.strerror)
def test_userpass_mechanism_empty_password(self):
with self.assertRaises(CommandExecutionError) as excinfo:
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username="fake_username",
password=None,
protocol="fake_protocol",
port=1,
mechanism="userpass",
principal="fake principal",
domain="fake_domain",
)
self.assertIn("mandatory parameter 'password'", excinfo.exception.strerror)
def test_userpass_mechanism_no_domain(self):
mock_sc = MagicMock()
with patch("salt.utils.vmware.SmartConnect", mock_sc):
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="userpass",
principal="fake principal",
domain=None,
)
mock_sc.assert_called_once_with(
host="fake_host.fqdn",
user="fake_username",
pwd="fake_password",
protocol="fake_protocol",
port=1,
b64token=None,
mechanism="userpass",
)
def test_userpass_mech_domain_unused(self):
mock_sc = MagicMock()
with patch("salt.utils.vmware.SmartConnect", mock_sc):
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username="fake_username@domain",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="userpass",
principal="fake principal",
domain="fake_domain",
)
mock_sc.assert_called_once_with(
host="fake_host.fqdn",
user="fake_username@domain",
pwd="fake_password",
protocol="fake_protocol",
port=1,
b64token=None,
mechanism="userpass",
)
mock_sc.reset_mock()
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username="domain\\fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="userpass",
principal="fake principal",
domain="fake_domain",
)
mock_sc.assert_called_once_with(
host="fake_host.fqdn",
user="domain\\fake_username",
pwd="fake_password",
protocol="fake_protocol",
port=1,
b64token=None,
mechanism="userpass",
)
def test_sspi_empty_principal(self):
with self.assertRaises(CommandExecutionError) as excinfo:
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="sspi",
principal=None,
domain="fake_domain",
)
self.assertIn("mandatory parameters are missing", excinfo.exception.strerror)
def test_sspi_empty_domain(self):
with self.assertRaises(CommandExecutionError) as excinfo:
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="sspi",
principal="fake_principal",
domain=None,
)
self.assertIn("mandatory parameters are missing", excinfo.exception.strerror)
def test_sspi_get_token_error(self):
mock_token = MagicMock(side_effect=Exception("Exception"))
with patch("salt.utils.vmware.get_gssapi_token", mock_token):
with self.assertRaises(VMwareConnectionError) as excinfo:
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="sspi",
principal="fake_principal",
domain="fake_domain",
)
mock_token.assert_called_once_with(
"fake_principal", "fake_host.fqdn", "fake_domain"
)
self.assertEqual("Exception", excinfo.exception.strerror)
    def test_sspi_get_token_success(self):
mock_token = MagicMock(return_value="fake_token")
mock_sc = MagicMock()
with patch("salt.utils.vmware.get_gssapi_token", mock_token):
with patch("salt.utils.vmware.SmartConnect", mock_sc):
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="sspi",
principal="fake_principal",
domain="fake_domain",
)
mock_token.assert_called_once_with(
"fake_principal", "fake_host.fqdn", "fake_domain"
)
mock_sc.assert_called_once_with(
host="fake_host.fqdn",
user="fake_username",
pwd="fake_password",
protocol="fake_protocol",
port=1,
b64token="fake_token",
mechanism="sspi",
)
def test_first_attempt_successful_connection(self):
mock_sc = MagicMock()
with patch("salt.utils.vmware.SmartConnect", mock_sc):
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="sspi",
principal="fake_principal",
domain="fake_domain",
)
mock_sc.assert_called_once_with(
host="fake_host.fqdn",
user="fake_username",
pwd="fake_password",
protocol="fake_protocol",
port=1,
b64token="fake_token",
mechanism="sspi",
)
def test_first_attempt_successful_connection_verify_ssl_false(self):
with patch("ssl.SSLContext", MagicMock()), patch(
"ssl._create_unverified_context", MagicMock()
):
exc = vim.fault.HostConnectFault()
exc.msg = "[SSL: CERTIFICATE_VERIFY_FAILED]"
mock_sc = MagicMock(side_effect=[None])
mock_ssl = MagicMock()
with patch("salt.utils.vmware.SmartConnect", mock_sc):
with patch("ssl._create_unverified_context", mock_ssl):
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="sspi",
principal="fake_principal",
domain="fake_domain",
verify_ssl=False,
)
mock_ssl.assert_called_once_with()
calls = [
call(
host="fake_host.fqdn",
user="fake_username",
pwd="fake_password",
protocol="fake_protocol",
port=1,
sslContext=mock_ssl.return_value,
b64token="fake_token",
mechanism="sspi",
),
]
mock_sc.assert_has_calls(calls)
def test_second_attempt_successful_connection_verify_ssl_false(self):
with patch("ssl.SSLContext", MagicMock()), patch(
"ssl._create_unverified_context", MagicMock()
):
exc = Exception("certificate verify failed")
mock_sc = MagicMock(side_effect=[exc, None])
mock_ssl_unverif = MagicMock()
mock_ssl_context = MagicMock()
with patch("salt.utils.vmware.SmartConnect", mock_sc):
with patch("ssl._create_unverified_context", mock_ssl_unverif):
with patch("ssl.SSLContext", mock_ssl_context):
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="sspi",
principal="fake_principal",
domain="fake_domain",
verify_ssl=False,
)
mock_ssl_context.assert_called_once_with(ssl.PROTOCOL_TLSv1)
mock_ssl_unverif.assert_called_once_with()
calls = [
call(
host="fake_host.fqdn",
user="fake_username",
pwd="fake_password",
protocol="fake_protocol",
port=1,
sslContext=mock_ssl_unverif.return_value,
b64token="fake_token",
mechanism="sspi",
),
call(
host="fake_host.fqdn",
user="fake_username",
pwd="fake_password",
protocol="fake_protocol",
port=1,
sslContext=mock_ssl_context.return_value,
b64token="fake_token",
mechanism="sspi",
),
]
mock_sc.assert_has_calls(calls)
def test_attempt_unsuccessful_connection_default_error(self):
exc = Exception("Exception")
mock_sc = MagicMock(side_effect=exc)
with patch("salt.utils.vmware.SmartConnect", mock_sc):
with self.assertRaises(VMwareConnectionError) as excinfo:
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="sspi",
principal="fake_principal",
domain="fake_domain",
)
self.assertEqual(mock_sc.call_count, 1)
self.assertIn(
"Could not connect to host 'fake_host.fqdn'", excinfo.exception.message,
)
def test_attempt_unsuccessful_connection_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault"
mock_sc = MagicMock(side_effect=exc)
with patch("salt.utils.vmware.SmartConnect", mock_sc):
with self.assertRaises(VMwareConnectionError) as excinfo:
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="sspi",
principal="fake_principal",
domain="fake_domain",
)
self.assertEqual(mock_sc.call_count, 1)
self.assertEqual("VimFault", excinfo.exception.message)
    def test_first_attempt_unsuccessful_connection_default_error(self):
with patch("ssl.SSLContext", MagicMock()), patch(
"ssl._create_unverified_context", MagicMock()
):
exc = vim.fault.HostConnectFault()
exc.msg = "certificate verify failed"
exc2 = Exception("Exception")
mock_sc = MagicMock(side_effect=[exc, exc2])
with patch("salt.utils.vmware.SmartConnect", mock_sc):
with self.assertRaises(VMwareConnectionError) as excinfo:
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="sspi",
principal="fake_principal",
domain="fake_domain",
verify_ssl=False,
)
self.assertEqual(mock_sc.call_count, 2)
self.assertIn(
"Could not connect to host 'fake_host.fqdn'", excinfo.exception.message
)
    def test_first_attempt_unsuccessful_cannot_vim_fault_verify_ssl(self):
with patch("ssl.SSLContext", MagicMock()), patch(
"ssl._create_unverified_context", MagicMock()
):
exc = vim.fault.VimFault()
exc.msg = "VimFault"
mock_sc = MagicMock(side_effect=[exc])
with patch("salt.utils.vmware.SmartConnect", mock_sc):
with self.assertRaises(VMwareConnectionError) as excinfo:
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="sspi",
principal="fake_principal",
domain="fake_domain",
verify_ssl=False,
)
self.assertEqual(mock_sc.call_count, 1)
self.assertIn("VimFault", excinfo.exception.message)
def test_third_attempt_unsuccessful_connection_default_error(self):
with patch("ssl.SSLContext", MagicMock()), patch(
"ssl._create_unverified_context", MagicMock()
):
exc = vim.fault.HostConnectFault()
exc.msg = "certificate verify failed"
exc2 = Exception("Exception")
mock_sc = MagicMock(side_effect=[exc, exc2])
with patch("salt.utils.vmware.SmartConnect", mock_sc):
with self.assertRaises(VMwareConnectionError) as excinfo:
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="sspi",
principal="fake_principal",
domain="fake_domain",
verify_ssl=False,
)
self.assertEqual(mock_sc.call_count, 2)
self.assertIn(
"Could not connect to host 'fake_host.fqdn'", excinfo.exception.message
)
def test_second_attempt_unsuccessful_connection_vim_fault(self):
with patch("ssl.SSLContext", MagicMock()), patch(
"ssl._create_unverified_context", MagicMock()
):
exc = vim.fault.VimFault()
exc.msg = "VimFault"
mock_sc = MagicMock(side_effect=[exc])
with patch("salt.utils.vmware.SmartConnect", mock_sc):
with self.assertRaises(VMwareConnectionError) as excinfo:
salt.utils.vmware._get_service_instance(
host="fake_host.fqdn",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="sspi",
principal="fake_principal",
domain="fake_domain",
verify_ssl=False,
)
self.assertEqual(mock_sc.call_count, 1)
self.assertIn("VimFault", excinfo.exception.message)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetServiceInstanceTestCase(TestCase):
"""
Tests for salt.utils.vmware.get_service_instance
"""
def setUp(self):
patches = (
("salt.utils.vmware.GetSi", MagicMock(return_value=None)),
(
"salt.utils.vmware._get_service_instance",
MagicMock(return_value=MagicMock()),
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def test_default_params(self):
mock_get_si = MagicMock()
with patch("salt.utils.vmware._get_service_instance", mock_get_si):
salt.utils.vmware.get_service_instance(host="fake_host")
mock_get_si.assert_called_once_with(
"fake_host",
None,
None,
"https",
443,
"userpass",
None,
None,
verify_ssl=True,
)
def test_no_cached_service_instance_same_host_on_proxy(self):
with patch("salt.utils.platform.is_proxy", MagicMock(return_value=True)):
# Service instance is uncached when using class default mock objs
mock_get_si = MagicMock()
with patch("salt.utils.vmware._get_service_instance", mock_get_si):
salt.utils.vmware.get_service_instance(
host="fake_host",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="fake_mechanism",
principal="fake_principal",
domain="fake_domain",
)
mock_get_si.assert_called_once_with(
"fake_host",
"fake_username",
"fake_password",
"fake_protocol",
1,
"fake_mechanism",
"fake_principal",
"fake_domain",
verify_ssl=True,
)
def test_cached_service_instance_different_host(self):
mock_si = MagicMock()
mock_disconnect = MagicMock()
mock_get_si = MagicMock(return_value=mock_si)
mock_getstub = MagicMock()
with patch("salt.utils.vmware.GetSi", mock_get_si):
with patch("salt.utils.vmware.GetStub", mock_getstub):
with patch("salt.utils.vmware.Disconnect", mock_disconnect):
salt.utils.vmware.get_service_instance(
host="fake_host",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="fake_mechanism",
principal="fake_principal",
domain="fake_domain",
)
self.assertEqual(mock_get_si.call_count, 1)
self.assertEqual(mock_getstub.call_count, 1)
self.assertEqual(mock_disconnect.call_count, 1)
def test_uncached_service_instance(self):
# Service instance is uncached when using class default mock objs
mock_get_si = MagicMock()
with patch("salt.utils.vmware._get_service_instance", mock_get_si):
salt.utils.vmware.get_service_instance(
host="fake_host",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="fake_mechanism",
principal="fake_principal",
domain="fake_domain",
verify_ssl=True,
)
mock_get_si.assert_called_once_with(
"fake_host",
"fake_username",
"fake_password",
"fake_protocol",
1,
"fake_mechanism",
"fake_principal",
"fake_domain",
verify_ssl=True,
)
def test_unauthenticated_service_instance(self):
mock_si_current_time = MagicMock(side_effect=vim.fault.NotAuthenticated)
mock_si = MagicMock()
mock_get_si = MagicMock(return_value=mock_si)
mock_si.CurrentTime = mock_si_current_time
mock_disconnect = MagicMock()
with patch("salt.utils.vmware._get_service_instance", mock_get_si):
with patch("salt.utils.vmware.Disconnect", mock_disconnect):
salt.utils.vmware.get_service_instance(
host="fake_host",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="fake_mechanism",
principal="fake_principal",
domain="fake_domain",
)
self.assertEqual(mock_si_current_time.call_count, 1)
self.assertEqual(mock_disconnect.call_count, 1)
self.assertEqual(mock_get_si.call_count, 2)
def test_cached_unauthenticated_service_instance(self):
mock_si_current_time = MagicMock(side_effect=vim.fault.NotAuthenticated)
mock_si = MagicMock()
mock_get_si = MagicMock(return_value=mock_si)
mock_getsi = MagicMock(return_value=mock_si)
mock_si.CurrentTime = mock_si_current_time
mock_disconnect = MagicMock()
with patch("salt.utils.vmware.GetSi", mock_getsi):
with patch("salt.utils.vmware._get_service_instance", mock_get_si):
with patch("salt.utils.vmware.Disconnect", mock_disconnect):
salt.utils.vmware.get_service_instance(
host="fake_host",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="fake_mechanism",
principal="fake_principal",
domain="fake_domain",
)
self.assertEqual(mock_si_current_time.call_count, 1)
self.assertEqual(mock_disconnect.call_count, 1)
self.assertEqual(mock_get_si.call_count, 1)
def test_current_time_raise_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
with patch(
"salt.utils.vmware._get_service_instance",
MagicMock(return_value=MagicMock(CurrentTime=MagicMock(side_effect=exc))),
):
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_service_instance(
host="fake_host",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="fake_mechanism",
principal="fake_principal",
domain="fake_domain",
)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: Fake privilege",
)
def test_current_time_raise_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
with patch(
"salt.utils.vmware._get_service_instance",
MagicMock(return_value=MagicMock(CurrentTime=MagicMock(side_effect=exc))),
):
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_service_instance(
host="fake_host",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="fake_mechanism",
principal="fake_principal",
domain="fake_domain",
)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_current_time_raise_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
with patch(
"salt.utils.vmware._get_service_instance",
MagicMock(return_value=MagicMock(CurrentTime=MagicMock(side_effect=exc))),
):
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.get_service_instance(
host="fake_host",
username="fake_username",
password="fake_password",
protocol="fake_protocol",
port=1,
mechanism="fake_mechanism",
principal="fake_principal",
domain="fake_domain",
)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class DisconnectTestCase(TestCase):
"""
Tests for salt.utils.vmware.disconnect
"""
def setUp(self):
self.mock_si = MagicMock()
self.addCleanup(delattr, self, "mock_si")
def test_disconnect(self):
mock_disconnect = MagicMock()
with patch("salt.utils.vmware.Disconnect", mock_disconnect):
salt.utils.vmware.disconnect(service_instance=self.mock_si)
mock_disconnect.assert_called_once_with(self.mock_si)
def test_disconnect_raise_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
with patch("salt.utils.vmware.Disconnect", MagicMock(side_effect=exc)):
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.disconnect(service_instance=self.mock_si)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: Fake privilege",
)
def test_disconnect_raise_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
with patch("salt.utils.vmware.Disconnect", MagicMock(side_effect=exc)):
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.disconnect(service_instance=self.mock_si)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_disconnect_raise_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
with patch("salt.utils.vmware.Disconnect", MagicMock(side_effect=exc)):
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.disconnect(service_instance=self.mock_si)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class IsConnectionToAVCenterTestCase(TestCase):
"""
Tests for salt.utils.vmware.is_connection_to_a_vcenter
"""
def test_api_type_raise_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
mock_si = MagicMock()
type(mock_si.content.about).apiType = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.is_connection_to_a_vcenter(mock_si)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: Fake privilege",
)
def test_api_type_raise_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
mock_si = MagicMock()
type(mock_si.content.about).apiType = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.is_connection_to_a_vcenter(mock_si)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_api_type_raise_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
mock_si = MagicMock()
type(mock_si.content.about).apiType = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.is_connection_to_a_vcenter(mock_si)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_connected_to_a_vcenter(self):
mock_si = MagicMock()
mock_si.content.about.apiType = "VirtualCenter"
ret = salt.utils.vmware.is_connection_to_a_vcenter(mock_si)
self.assertTrue(ret)
def test_connected_to_a_host(self):
mock_si = MagicMock()
mock_si.content.about.apiType = "HostAgent"
ret = salt.utils.vmware.is_connection_to_a_vcenter(mock_si)
self.assertFalse(ret)
def test_connected_to_invalid_entity(self):
mock_si = MagicMock()
mock_si.content.about.apiType = "UnsupportedType"
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.is_connection_to_a_vcenter(mock_si)
self.assertIn(
"Unexpected api type 'UnsupportedType'", excinfo.exception.strerror
)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetNewServiceInstanceStub(TestCase, LoaderModuleMockMixin):
"""
Tests for salt.utils.vmware.get_new_service_instance_stub
"""
def setup_loader_modules(self):
return {salt.utils.vmware: {"sys": MagicMock(), "ssl": MagicMock()}}
def setUp(self):
self.mock_stub = MagicMock(host="fake_host:1000", cookie='ignore"fake_cookie')
self.mock_si = MagicMock(_stub=self.mock_stub)
self.mock_ret = MagicMock()
self.mock_new_stub = MagicMock()
self.context_dict = {}
patches = (
(
"salt.utils.vmware.VmomiSupport.GetRequestContext",
MagicMock(return_value=self.context_dict),
),
(
"salt.utils.vmware.SoapStubAdapter",
MagicMock(return_value=self.mock_new_stub),
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
self.mock_context = MagicMock()
self.mock_create_default_context = MagicMock(return_value=self.mock_context)
salt.utils.vmware.ssl.create_default_context = self.mock_create_default_context
def tearDown(self):
for attr in (
"mock_stub",
"mock_si",
"mock_ret",
"mock_new_stub",
"context_dict",
"mock_context",
"mock_create_default_context",
):
delattr(self, attr)
def test_ssl_default_context_loaded(self):
salt.utils.vmware.get_new_service_instance_stub(self.mock_si, "fake_path")
self.mock_create_default_context.assert_called_once_with()
self.assertFalse(self.mock_context.check_hostname)
self.assertEqual(self.mock_context.verify_mode, salt.utils.vmware.ssl.CERT_NONE)
def test_session_cookie_in_context(self):
salt.utils.vmware.get_new_service_instance_stub(self.mock_si, "fake_path")
self.assertEqual(self.context_dict["vcSessionCookie"], "fake_cookie")
def test_get_new_stub(self):
mock_get_new_stub = MagicMock()
with patch("salt.utils.vmware.SoapStubAdapter", mock_get_new_stub):
salt.utils.vmware.get_new_service_instance_stub(
self.mock_si, "fake_path", "fake_ns", "fake_version"
)
mock_get_new_stub.assert_called_once_with(
host="fake_host",
ns="fake_ns",
path="fake_path",
version="fake_version",
poolSize=0,
sslContext=self.mock_context,
)
def test_new_stub_returned(self):
ret = salt.utils.vmware.get_new_service_instance_stub(self.mock_si, "fake_path")
self.assertEqual(self.mock_new_stub.cookie, 'ignore"fake_cookie')
self.assertEqual(ret, self.mock_new_stub)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetServiceInstanceFromManagedObjectTestCase(TestCase):
"""
Tests for salt.utils.vmware.get_service_instance_from_managed_object
"""
def setUp(self):
patches = (("salt.utils.vmware.vim.ServiceInstance", MagicMock()),)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
self.mock_si = MagicMock()
self.mock_stub = PropertyMock()
self.mock_mo_ref = MagicMock(_stub=self.mock_stub)
for attr in ("mock_si", "mock_stub", "mock_mo_ref"):
self.addCleanup(delattr, self, attr)
def test_default_name_parameter(self):
mock_trace = MagicMock()
type(salt.utils.vmware.log).trace = mock_trace
salt.utils.vmware.get_service_instance_from_managed_object(self.mock_mo_ref)
mock_trace.assert_called_once_with(
"[%s] Retrieving service instance from managed object", "<unnamed>"
)
def test_name_parameter_passed_in(self):
mock_trace = MagicMock()
type(salt.utils.vmware.log).trace = mock_trace
salt.utils.vmware.get_service_instance_from_managed_object(
self.mock_mo_ref, "fake_mo_name"
)
mock_trace.assert_called_once_with(
"[%s] Retrieving service instance from managed object", "fake_mo_name"
)
def test_service_instance_instantiation(self):
mock_service_instance_ini = MagicMock()
with patch("salt.utils.vmware.vim.ServiceInstance", mock_service_instance_ini):
salt.utils.vmware.get_service_instance_from_managed_object(self.mock_mo_ref)
mock_service_instance_ini.assert_called_once_with("ServiceInstance")
def test_si_return_and_stub_assignment(self):
with patch(
"salt.utils.vmware.vim.ServiceInstance",
MagicMock(return_value=self.mock_si),
):
ret = salt.utils.vmware.get_service_instance_from_managed_object(
self.mock_mo_ref
)
self.assertEqual(ret, self.mock_si)
self.assertEqual(ret._stub, self.mock_stub)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetDatacentersTestCase(TestCase):
"""
Tests for salt.utils.vmware.get_datacenters
"""
def setUp(self):
patches = (
(
"salt.utils.vmware.get_mors_with_properties",
MagicMock(return_value=[{"name": "fake_dc", "object": MagicMock()}]),
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
self.mock_si = MagicMock()
self.mock_dc1 = MagicMock()
self.mock_dc2 = MagicMock()
self.mock_entries = [
{"name": "fake_dc1", "object": self.mock_dc1},
{"name": "fake_dc2", "object": self.mock_dc2},
]
def test_get_mors_with_properties_call(self):
mock_get_mors_with_properties = MagicMock(
return_value=[{"name": "fake_dc", "object": MagicMock()}]
)
with patch(
"salt.utils.vmware.get_mors_with_properties", mock_get_mors_with_properties
):
salt.utils.vmware.get_datacenters(
self.mock_si, datacenter_names=["fake_dc1"]
)
mock_get_mors_with_properties.assert_called_once_with(
self.mock_si, vim.Datacenter, property_list=["name"]
)
def test_get_mors_with_properties_returns_empty_array(self):
with patch(
"salt.utils.vmware.get_mors_with_properties", MagicMock(return_value=[])
):
res = salt.utils.vmware.get_datacenters(
self.mock_si, datacenter_names=["fake_dc1"]
)
self.assertEqual(res, [])
def test_no_parameters(self):
with patch(
"salt.utils.vmware.get_mors_with_properties",
MagicMock(return_value=self.mock_entries),
):
res = salt.utils.vmware.get_datacenters(self.mock_si)
self.assertEqual(res, [])
def test_datacenter_not_found(self):
with patch(
"salt.utils.vmware.get_mors_with_properties",
MagicMock(return_value=self.mock_entries),
):
res = salt.utils.vmware.get_datacenters(
self.mock_si, datacenter_names=["fake_dc"]
)
self.assertEqual(res, [])
def test_datacenter_found(self):
with patch(
"salt.utils.vmware.get_mors_with_properties",
MagicMock(return_value=self.mock_entries),
):
res = salt.utils.vmware.get_datacenters(
self.mock_si, datacenter_names=["fake_dc2"]
)
self.assertEqual(res, [self.mock_dc2])
def test_get_all_datacenters(self):
with patch(
"salt.utils.vmware.get_mors_with_properties",
MagicMock(return_value=self.mock_entries),
):
res = salt.utils.vmware.get_datacenters(
self.mock_si, get_all_datacenters=True
)
self.assertEqual(res, [self.mock_dc1, self.mock_dc2])
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetDatacenterTestCase(TestCase):
"""
Tests for salt.utils.vmware.get_datacenter
"""
def setUp(self):
patches = (
(
"salt.utils.vmware.get_datacenters",
MagicMock(return_value=[MagicMock()]),
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
self.mock_si = MagicMock()
self.mock_dc = MagicMock()
def test_get_datacenters_call(self):
mock_get_datacenters = MagicMock(return_value=[MagicMock()])
with patch("salt.utils.vmware.get_datacenters", mock_get_datacenters):
salt.utils.vmware.get_datacenter(self.mock_si, "fake_dc1")
mock_get_datacenters.assert_called_once_with(
self.mock_si, datacenter_names=["fake_dc1"]
)
def test_no_datacenters_returned(self):
with patch("salt.utils.vmware.get_datacenters", MagicMock(return_value=[])):
with self.assertRaises(VMwareObjectRetrievalError) as excinfo:
salt.utils.vmware.get_datacenter(self.mock_si, "fake_dc1")
self.assertEqual(
"Datacenter 'fake_dc1' was not found", excinfo.exception.strerror
)
def test_get_datacenter_return(self):
with patch(
"salt.utils.vmware.get_datacenters", MagicMock(return_value=[self.mock_dc])
):
res = salt.utils.vmware.get_datacenter(self.mock_si, "fake_dc1")
self.assertEqual(res, self.mock_dc)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class CreateDatacenterTestCase(TestCase):
"""
Tests for salt.utils.vmware.create_datacenter
"""
def setUp(self):
patches = (("salt.utils.vmware.get_root_folder", MagicMock()),)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
self.mock_si = MagicMock()
self.mock_dc = MagicMock()
self.mock_create_datacenter = MagicMock(return_value=self.mock_dc)
self.mock_root_folder = MagicMock(CreateDatacenter=self.mock_create_datacenter)
def test_get_root_folder(self):
mock_get_root_folder = MagicMock()
with patch("salt.utils.vmware.get_root_folder", mock_get_root_folder):
salt.utils.vmware.create_datacenter(self.mock_si, "fake_dc")
mock_get_root_folder.assert_called_once_with(self.mock_si)
def test_create_datacenter_call(self):
with patch(
"salt.utils.vmware.get_root_folder",
MagicMock(return_value=self.mock_root_folder),
):
salt.utils.vmware.create_datacenter(self.mock_si, "fake_dc")
self.mock_create_datacenter.assert_called_once_with("fake_dc")
def test_create_datacenter_raise_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
self.mock_root_folder = MagicMock(CreateDatacenter=MagicMock(side_effect=exc))
with patch(
"salt.utils.vmware.get_root_folder",
MagicMock(return_value=self.mock_root_folder),
):
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.create_datacenter(self.mock_si, "fake_dc")
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: Fake privilege",
)
def test_create_datacenter_raise_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
self.mock_root_folder = MagicMock(CreateDatacenter=MagicMock(side_effect=exc))
with patch(
"salt.utils.vmware.get_root_folder",
MagicMock(return_value=self.mock_root_folder),
):
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.create_datacenter(self.mock_si, "fake_dc")
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_create_datacenter_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
self.mock_root_folder = MagicMock(CreateDatacenter=MagicMock(side_effect=exc))
with patch(
"salt.utils.vmware.get_root_folder",
MagicMock(return_value=self.mock_root_folder),
):
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.create_datacenter(self.mock_si, "fake_dc")
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_datacenter_successfully_created(self):
with patch(
"salt.utils.vmware.get_root_folder",
MagicMock(return_value=self.mock_root_folder),
):
res = salt.utils.vmware.create_datacenter(self.mock_si, "fake_dc")
self.assertEqual(res, self.mock_dc)
class FakeTaskClass:
pass
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetDvssTestCase(TestCase):
def setUp(self):
self.mock_si = MagicMock()
self.mock_dc_ref = MagicMock()
self.mock_traversal_spec = MagicMock()
self.mock_items = [
{"object": MagicMock(), "name": "fake_dvs1"},
{"object": MagicMock(), "name": "fake_dvs2"},
{"object": MagicMock(), "name": "fake_dvs3"},
]
self.mock_get_mors = MagicMock(return_value=self.mock_items)
patches = (
("salt.utils.vmware.get_managed_object_name", MagicMock()),
("salt.utils.vmware.get_mors_with_properties", self.mock_get_mors),
(
"salt.utils.vmware.get_service_instance_from_managed_object",
MagicMock(return_value=self.mock_si),
),
(
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec",
MagicMock(return_value=self.mock_traversal_spec),
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in (
"mock_si",
"mock_dc_ref",
"mock_traversal_spec",
"mock_items",
"mock_get_mors",
):
delattr(self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch(
"salt.utils.vmware.get_managed_object_name", mock_get_managed_object_name
):
salt.utils.vmware.get_dvss(self.mock_dc_ref)
mock_get_managed_object_name.assert_called_once_with(self.mock_dc_ref)
def test_traversal_spec(self):
mock_traversal_spec = MagicMock(return_value="traversal_spec")
with patch(
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec",
mock_traversal_spec,
):
salt.utils.vmware.get_dvss(self.mock_dc_ref)
mock_traversal_spec.assert_has_calls(
[
call(path="childEntity", skip=False, type=vim.Folder),
call(
path="networkFolder",
skip=True,
type=vim.Datacenter,
selectSet=["traversal_spec"],
),
]
)
def test_get_mors_with_properties(self):
salt.utils.vmware.get_dvss(self.mock_dc_ref)
self.mock_get_mors.assert_called_once_with(
self.mock_si,
vim.DistributedVirtualSwitch,
container_ref=self.mock_dc_ref,
property_list=["name"],
traversal_spec=self.mock_traversal_spec,
)
def test_get_no_dvss(self):
ret = salt.utils.vmware.get_dvss(self.mock_dc_ref)
self.assertEqual(ret, [])
def test_get_all_dvss(self):
ret = salt.utils.vmware.get_dvss(self.mock_dc_ref, get_all_dvss=True)
self.assertEqual(ret, [i["object"] for i in self.mock_items])
def test_filtered_all_dvss(self):
ret = salt.utils.vmware.get_dvss(
self.mock_dc_ref, dvs_names=["fake_dvs1", "fake_dvs3", "no_dvs"]
)
self.assertEqual(
ret, [self.mock_items[0]["object"], self.mock_items[2]["object"]]
)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetNetworkFolderTestCase(TestCase):
def setUp(self):
self.mock_si = MagicMock()
self.mock_dc_ref = MagicMock()
self.mock_traversal_spec = MagicMock()
self.mock_entries = [{"object": MagicMock(), "name": "fake_netw_folder"}]
self.mock_get_mors = MagicMock(return_value=self.mock_entries)
patches = (
(
"salt.utils.vmware.get_managed_object_name",
MagicMock(return_value="fake_dc"),
),
(
"salt.utils.vmware.get_service_instance_from_managed_object",
MagicMock(return_value=self.mock_si),
),
(
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec",
MagicMock(return_value=self.mock_traversal_spec),
),
("salt.utils.vmware.get_mors_with_properties", self.mock_get_mors),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in (
"mock_si",
"mock_dc_ref",
"mock_traversal_spec",
"mock_entries",
"mock_get_mors",
):
delattr(self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch(
"salt.utils.vmware.get_managed_object_name", mock_get_managed_object_name
):
salt.utils.vmware.get_network_folder(self.mock_dc_ref)
mock_get_managed_object_name.assert_called_once_with(self.mock_dc_ref)
def test_traversal_spec(self):
mock_traversal_spec = MagicMock(return_value="traversal_spec")
with patch(
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec",
mock_traversal_spec,
):
salt.utils.vmware.get_network_folder(self.mock_dc_ref)
mock_traversal_spec.assert_called_once_with(
path="networkFolder", skip=False, type=vim.Datacenter
)
def test_get_mors_with_properties(self):
salt.utils.vmware.get_network_folder(self.mock_dc_ref)
self.mock_get_mors.assert_called_once_with(
self.mock_si,
vim.Folder,
container_ref=self.mock_dc_ref,
property_list=["name"],
traversal_spec=self.mock_traversal_spec,
)
def test_get_no_network_folder(self):
with patch(
"salt.utils.vmware.get_mors_with_properties", MagicMock(return_value=[])
):
with self.assertRaises(VMwareObjectRetrievalError) as excinfo:
salt.utils.vmware.get_network_folder(self.mock_dc_ref)
self.assertEqual(
excinfo.exception.strerror,
"Network folder in datacenter 'fake_dc' wasn't retrieved",
)
def test_get_network_folder(self):
ret = salt.utils.vmware.get_network_folder(self.mock_dc_ref)
self.assertEqual(ret, self.mock_entries[0]["object"])
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class CreateDvsTestCase(TestCase):
def setUp(self):
self.mock_dc_ref = MagicMock()
self.mock_dvs_create_spec = MagicMock()
self.mock_task = MagicMock(spec=FakeTaskClass)
self.mock_netw_folder = MagicMock(
CreateDVS_Task=MagicMock(return_value=self.mock_task)
)
self.mock_wait_for_task = MagicMock()
patches = (
(
"salt.utils.vmware.get_managed_object_name",
MagicMock(return_value="fake_dc"),
),
(
"salt.utils.vmware.get_network_folder",
MagicMock(return_value=self.mock_netw_folder),
),
("salt.utils.vmware.wait_for_task", self.mock_wait_for_task),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in (
"mock_dc_ref",
"mock_dvs_create_spec",
"mock_task",
"mock_netw_folder",
"mock_wait_for_task",
):
delattr(self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch(
"salt.utils.vmware.get_managed_object_name", mock_get_managed_object_name
):
salt.utils.vmware.create_dvs(self.mock_dc_ref, "fake_dvs")
mock_get_managed_object_name.assert_called_once_with(self.mock_dc_ref)
def test_no_dvs_create_spec(self):
mock_spec = MagicMock(configSpec=None)
mock_config_spec = MagicMock()
mock_dvs_create_spec = MagicMock(return_value=mock_spec)
mock_vmware_dvs_config_spec = MagicMock(return_value=mock_config_spec)
with patch("salt.utils.vmware.vim.DVSCreateSpec", mock_dvs_create_spec):
with patch(
"salt.utils.vmware.vim.VMwareDVSConfigSpec", mock_vmware_dvs_config_spec
):
salt.utils.vmware.create_dvs(self.mock_dc_ref, "fake_dvs")
mock_dvs_create_spec.assert_called_once_with()
mock_vmware_dvs_config_spec.assert_called_once_with()
self.assertEqual(mock_spec.configSpec, mock_config_spec)
self.assertEqual(mock_config_spec.name, "fake_dvs")
self.mock_netw_folder.CreateDVS_Task.assert_called_once_with(mock_spec)
def test_get_network_folder(self):
mock_get_network_folder = MagicMock()
with patch("salt.utils.vmware.get_network_folder", mock_get_network_folder):
salt.utils.vmware.create_dvs(self.mock_dc_ref, "fake_dvs")
mock_get_network_folder.assert_called_once_with(self.mock_dc_ref)
def test_create_dvs_task_passed_in_spec(self):
salt.utils.vmware.create_dvs(
self.mock_dc_ref, "fake_dvs", dvs_create_spec=self.mock_dvs_create_spec
)
self.mock_netw_folder.CreateDVS_Task.assert_called_once_with(
self.mock_dvs_create_spec
)
def test_create_dvs_task_raises_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
self.mock_netw_folder.CreateDVS_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.create_dvs(
self.mock_dc_ref, "fake_dvs", dvs_create_spec=self.mock_dvs_create_spec
)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: Fake privilege",
)
def test_create_dvs_task_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
self.mock_netw_folder.CreateDVS_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.create_dvs(
self.mock_dc_ref, "fake_dvs", dvs_create_spec=self.mock_dvs_create_spec
)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_create_dvs_task_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
self.mock_netw_folder.CreateDVS_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.create_dvs(
self.mock_dc_ref, "fake_dvs", dvs_create_spec=self.mock_dvs_create_spec
)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_wait_for_tasks(self):
salt.utils.vmware.create_dvs(
self.mock_dc_ref, "fake_dvs", dvs_create_spec=self.mock_dvs_create_spec
)
self.mock_wait_for_task.assert_called_once_with(
self.mock_task,
"fake_dvs",
"<class '{}unit.utils.test_vmware.FakeTaskClass'>".format(
"tests." if RUNTIME_VARS.PYTEST_SESSION else ""
),
)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class UpdateDvsTestCase(TestCase):
def setUp(self):
self.mock_task = MagicMock(spec=FakeTaskClass)
self.mock_dvs_ref = MagicMock(
ReconfigureDvs_Task=MagicMock(return_value=self.mock_task)
)
self.mock_dvs_spec = MagicMock()
self.mock_wait_for_task = MagicMock()
patches = (
(
"salt.utils.vmware.get_managed_object_name",
MagicMock(return_value="fake_dvs"),
),
("salt.utils.vmware.wait_for_task", self.mock_wait_for_task),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in (
"mock_dvs_ref",
"mock_task",
"mock_dvs_spec",
"mock_wait_for_task",
):
delattr(self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch(
"salt.utils.vmware.get_managed_object_name", mock_get_managed_object_name
):
salt.utils.vmware.update_dvs(self.mock_dvs_ref, self.mock_dvs_spec)
mock_get_managed_object_name.assert_called_once_with(self.mock_dvs_ref)
def test_reconfigure_dvs_task(self):
salt.utils.vmware.update_dvs(self.mock_dvs_ref, self.mock_dvs_spec)
self.mock_dvs_ref.ReconfigureDvs_Task.assert_called_once_with(
self.mock_dvs_spec
)
def test_reconfigure_dvs_task_raises_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
self.mock_dvs_ref.ReconfigureDvs_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.update_dvs(self.mock_dvs_ref, self.mock_dvs_spec)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_reconfigure_dvs_task_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
self.mock_dvs_ref.ReconfigureDvs_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.update_dvs(self.mock_dvs_ref, self.mock_dvs_spec)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_reconfigure_dvs_task_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
self.mock_dvs_ref.ReconfigureDvs_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.update_dvs(self.mock_dvs_ref, self.mock_dvs_spec)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_wait_for_tasks(self):
salt.utils.vmware.update_dvs(self.mock_dvs_ref, self.mock_dvs_spec)
self.mock_wait_for_task.assert_called_once_with(
self.mock_task,
"fake_dvs",
"<class '{}unit.utils.test_vmware.FakeTaskClass'>".format(
"tests." if RUNTIME_VARS.PYTEST_SESSION else ""
),
)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class SetDvsNetworkResourceManagementEnabledTestCase(TestCase):
def setUp(self):
self.mock_enabled = MagicMock()
self.mock_dvs_ref = MagicMock(EnableNetworkResourceManagement=MagicMock())
patches = (
(
"salt.utils.vmware.get_managed_object_name",
MagicMock(return_value="fake_dvs"),
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ("mock_dvs_ref", "mock_enabled"):
delattr(self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch(
"salt.utils.vmware.get_managed_object_name", mock_get_managed_object_name
):
salt.utils.vmware.set_dvs_network_resource_management_enabled(
self.mock_dvs_ref, self.mock_enabled
)
mock_get_managed_object_name.assert_called_once_with(self.mock_dvs_ref)
def test_enable_network_resource_management(self):
salt.utils.vmware.set_dvs_network_resource_management_enabled(
self.mock_dvs_ref, self.mock_enabled
)
self.mock_dvs_ref.EnableNetworkResourceManagement.assert_called_once_with(
enable=self.mock_enabled
)
def test_enable_network_resource_management_raises_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
self.mock_dvs_ref.EnableNetworkResourceManagement = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.set_dvs_network_resource_management_enabled(
self.mock_dvs_ref, self.mock_enabled
)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_enable_network_resource_management_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
self.mock_dvs_ref.EnableNetworkResourceManagement = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.set_dvs_network_resource_management_enabled(
self.mock_dvs_ref, self.mock_enabled
)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_enable_network_resource_management_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
self.mock_dvs_ref.EnableNetworkResourceManagement = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.set_dvs_network_resource_management_enabled(
self.mock_dvs_ref, self.mock_enabled
)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
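# The *_raises_* tests above all share one shape: the mocked API call raises
# via side_effect, and the wrapper under test is expected to translate the raw
# fault into a salt exception carrying strerror. A standalone sketch of that
# pattern; FakeApiError and call_api are hypothetical stand-ins, not salt APIs:

```python
from unittest.mock import MagicMock


class FakeApiError(Exception):
    # Hypothetical stand-in for salt's VMwareApiError: carries strerror.
    def __init__(self, strerror):
        super().__init__(strerror)
        self.strerror = strerror


def call_api(api_call):
    # Hypothetical wrapper mirroring how salt.utils.vmware catches a raw
    # fault and re-raises it with the fault message as strerror.
    try:
        return api_call()
    except RuntimeError as exc:
        raise FakeApiError(str(exc)) from exc


# side_effect makes the mock raise when called, exactly like
# MagicMock(side_effect=exc) on the managed-object methods above.
mock_call = MagicMock(side_effect=RuntimeError("VimFault msg"))
try:
    call_api(mock_call)
    translated = None
except FakeApiError as err:
    translated = err.strerror
```

# The assertions in the tests then check both the translated exception type
# (via assertRaises) and the preserved message (via strerror).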
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetDvportgroupsTestCase(TestCase):
def setUp(self):
self.mock_si = MagicMock()
self.mock_dc_ref = MagicMock(spec=vim.Datacenter)
self.mock_dvs_ref = MagicMock(spec=vim.DistributedVirtualSwitch)
self.mock_traversal_spec = MagicMock()
self.mock_items = [
{"object": MagicMock(), "name": "fake_pg1"},
{"object": MagicMock(), "name": "fake_pg2"},
{"object": MagicMock(), "name": "fake_pg3"},
]
self.mock_get_mors = MagicMock(return_value=self.mock_items)
patches = (
("salt.utils.vmware.get_managed_object_name", MagicMock()),
("salt.utils.vmware.get_mors_with_properties", self.mock_get_mors),
(
"salt.utils.vmware.get_service_instance_from_managed_object",
MagicMock(return_value=self.mock_si),
),
(
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec",
MagicMock(return_value=self.mock_traversal_spec),
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in (
"mock_si",
"mock_dc_ref",
"mock_dvs_ref",
"mock_traversal_spec",
"mock_items",
"mock_get_mors",
):
delattr(self, attr)
def test_unsupported_parent(self):
with self.assertRaises(ArgumentValueError) as excinfo:
salt.utils.vmware.get_dvportgroups(MagicMock())
self.assertEqual(
excinfo.exception.strerror,
"Parent has to be either a datacenter, or a " "distributed virtual switch",
)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch(
"salt.utils.vmware.get_managed_object_name", mock_get_managed_object_name
):
salt.utils.vmware.get_dvportgroups(self.mock_dc_ref)
mock_get_managed_object_name.assert_called_once_with(self.mock_dc_ref)
def test_traversal_spec_datacenter_parent(self):
mock_traversal_spec = MagicMock(return_value="traversal_spec")
with patch(
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec",
mock_traversal_spec,
):
salt.utils.vmware.get_dvportgroups(self.mock_dc_ref)
mock_traversal_spec.assert_has_calls(
[
call(path="childEntity", skip=False, type=vim.Folder),
call(
path="networkFolder",
skip=True,
type=vim.Datacenter,
selectSet=["traversal_spec"],
),
]
)
def test_traversal_spec_dvs_parent(self):
mock_traversal_spec = MagicMock(return_value="traversal_spec")
with patch(
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec",
mock_traversal_spec,
):
salt.utils.vmware.get_dvportgroups(self.mock_dvs_ref)
mock_traversal_spec.assert_called_once_with(
path="portgroup", skip=False, type=vim.DistributedVirtualSwitch
)
def test_get_mors_with_properties(self):
salt.utils.vmware.get_dvportgroups(self.mock_dvs_ref)
self.mock_get_mors.assert_called_once_with(
self.mock_si,
vim.DistributedVirtualPortgroup,
container_ref=self.mock_dvs_ref,
property_list=["name"],
traversal_spec=self.mock_traversal_spec,
)
def test_get_no_pgs(self):
ret = salt.utils.vmware.get_dvportgroups(self.mock_dvs_ref)
self.assertEqual(ret, [])
def test_get_all_pgs(self):
ret = salt.utils.vmware.get_dvportgroups(
self.mock_dvs_ref, get_all_portgroups=True
)
self.assertEqual(ret, [i["object"] for i in self.mock_items])
def test_filtered_pgs(self):
ret = salt.utils.vmware.get_dvportgroups(
self.mock_dvs_ref, portgroup_names=["fake_pg1", "fake_pg3", "no_pg"]
)
self.assertEqual(
ret, [self.mock_items[0]["object"], self.mock_items[2]["object"]]
)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetUplinkDvportgroupTestCase(TestCase):
def setUp(self):
self.mock_si = MagicMock()
self.mock_dvs_ref = MagicMock(spec=vim.DistributedVirtualSwitch)
self.mock_traversal_spec = MagicMock()
self.mock_items = [
{"object": MagicMock(), "tag": [MagicMock(key="fake_tag")]},
{"object": MagicMock(), "tag": [MagicMock(key="SYSTEM/DVS.UPLINKPG")]},
]
self.mock_get_mors = MagicMock(return_value=self.mock_items)
patches = (
(
"salt.utils.vmware.get_managed_object_name",
MagicMock(return_value="fake_dvs"),
),
("salt.utils.vmware.get_mors_with_properties", self.mock_get_mors),
(
"salt.utils.vmware.get_service_instance_from_managed_object",
MagicMock(return_value=self.mock_si),
),
(
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec",
MagicMock(return_value=self.mock_traversal_spec),
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in (
"mock_si",
"mock_dvs_ref",
"mock_traversal_spec",
"mock_items",
"mock_get_mors",
):
delattr(self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch(
"salt.utils.vmware.get_managed_object_name", mock_get_managed_object_name
):
salt.utils.vmware.get_uplink_dvportgroup(self.mock_dvs_ref)
mock_get_managed_object_name.assert_called_once_with(self.mock_dvs_ref)
def test_traversal_spec(self):
mock_traversal_spec = MagicMock(return_value="traversal_spec")
with patch(
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec",
mock_traversal_spec,
):
salt.utils.vmware.get_uplink_dvportgroup(self.mock_dvs_ref)
mock_traversal_spec.assert_called_once_with(
path="portgroup", skip=False, type=vim.DistributedVirtualSwitch
)
def test_get_mors_with_properties(self):
salt.utils.vmware.get_uplink_dvportgroup(self.mock_dvs_ref)
self.mock_get_mors.assert_called_once_with(
self.mock_si,
vim.DistributedVirtualPortgroup,
container_ref=self.mock_dvs_ref,
property_list=["tag"],
traversal_spec=self.mock_traversal_spec,
)
def test_get_no_uplink_pg(self):
with patch(
"salt.utils.vmware.get_mors_with_properties", MagicMock(return_value=[])
):
with self.assertRaises(VMwareObjectRetrievalError) as excinfo:
salt.utils.vmware.get_uplink_dvportgroup(self.mock_dvs_ref)
self.assertEqual(
excinfo.exception.strerror,
"Uplink portgroup of DVS 'fake_dvs' wasn't found",
)
def test_get_uplink_pg(self):
ret = salt.utils.vmware.get_uplink_dvportgroup(self.mock_dvs_ref)
self.assertEqual(ret, self.mock_items[1]["object"])
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class CreateDvportgroupTestCase(TestCase):
def setUp(self):
self.mock_pg_spec = MagicMock()
self.mock_task = MagicMock(spec=FakeTaskClass)
self.mock_dvs_ref = MagicMock(
CreateDVPortgroup_Task=MagicMock(return_value=self.mock_task)
)
self.mock_wait_for_task = MagicMock()
patches = (
(
"salt.utils.vmware.get_managed_object_name",
MagicMock(return_value="fake_dvs"),
),
("salt.utils.vmware.wait_for_task", self.mock_wait_for_task),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ("mock_pg_spec", "mock_dvs_ref", "mock_task", "mock_wait_for_task"):
delattr(self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch(
"salt.utils.vmware.get_managed_object_name", mock_get_managed_object_name
):
salt.utils.vmware.create_dvportgroup(self.mock_dvs_ref, self.mock_pg_spec)
mock_get_managed_object_name.assert_called_once_with(self.mock_dvs_ref)
def test_create_dvportgroup_task(self):
salt.utils.vmware.create_dvportgroup(self.mock_dvs_ref, self.mock_pg_spec)
self.mock_dvs_ref.CreateDVPortgroup_Task.assert_called_once_with(
self.mock_pg_spec
)
def test_create_dvportgroup_task_raises_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
self.mock_dvs_ref.CreateDVPortgroup_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.create_dvportgroup(self.mock_dvs_ref, self.mock_pg_spec)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_create_dvportgroup_task_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
self.mock_dvs_ref.CreateDVPortgroup_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.create_dvportgroup(self.mock_dvs_ref, self.mock_pg_spec)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_create_dvportgroup_task_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
self.mock_dvs_ref.CreateDVPortgroup_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.create_dvportgroup(self.mock_dvs_ref, self.mock_pg_spec)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_wait_for_tasks(self):
salt.utils.vmware.create_dvportgroup(self.mock_dvs_ref, self.mock_pg_spec)
self.mock_wait_for_task.assert_called_once_with(
self.mock_task,
"fake_dvs",
"<class '{}unit.utils.test_vmware.FakeTaskClass'>".format(
"tests." if RUNTIME_VARS.PYTEST_SESSION else ""
),
)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class UpdateDvportgroupTestCase(TestCase):
def setUp(self):
self.mock_pg_spec = MagicMock()
self.mock_task = MagicMock(spec=FakeTaskClass)
self.mock_pg_ref = MagicMock(
ReconfigureDVPortgroup_Task=MagicMock(return_value=self.mock_task)
)
self.mock_wait_for_task = MagicMock()
patches = (
(
"salt.utils.vmware.get_managed_object_name",
MagicMock(return_value="fake_pg"),
),
("salt.utils.vmware.wait_for_task", self.mock_wait_for_task),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ("mock_pg_spec", "mock_pg_ref", "mock_task", "mock_wait_for_task"):
delattr(self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch(
"salt.utils.vmware.get_managed_object_name", mock_get_managed_object_name
):
salt.utils.vmware.update_dvportgroup(self.mock_pg_ref, self.mock_pg_spec)
mock_get_managed_object_name.assert_called_once_with(self.mock_pg_ref)
def test_reconfigure_dvportgroup_task(self):
salt.utils.vmware.update_dvportgroup(self.mock_pg_ref, self.mock_pg_spec)
self.mock_pg_ref.ReconfigureDVPortgroup_Task.assert_called_once_with(
self.mock_pg_spec
)
def test_reconfigure_dvportgroup_task_raises_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
self.mock_pg_ref.ReconfigureDVPortgroup_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.update_dvportgroup(self.mock_pg_ref, self.mock_pg_spec)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_reconfigure_dvportgroup_task_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
self.mock_pg_ref.ReconfigureDVPortgroup_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.update_dvportgroup(self.mock_pg_ref, self.mock_pg_spec)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_reconfigure_dvportgroup_task_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
self.mock_pg_ref.ReconfigureDVPortgroup_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.update_dvportgroup(self.mock_pg_ref, self.mock_pg_spec)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_wait_for_tasks(self):
salt.utils.vmware.update_dvportgroup(self.mock_pg_ref, self.mock_pg_spec)
self.mock_wait_for_task.assert_called_once_with(
self.mock_task,
"fake_pg",
"<class '{}unit.utils.test_vmware.FakeTaskClass'>".format(
"tests." if RUNTIME_VARS.PYTEST_SESSION else ""
),
)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class RemoveDvportgroupTestCase(TestCase):
def setUp(self):
self.mock_task = MagicMock(spec=FakeTaskClass)
self.mock_pg_ref = MagicMock(
Destroy_Task=MagicMock(return_value=self.mock_task)
)
self.mock_wait_for_task = MagicMock()
patches = (
(
"salt.utils.vmware.get_managed_object_name",
MagicMock(return_value="fake_pg"),
),
("salt.utils.vmware.wait_for_task", self.mock_wait_for_task),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ("mock_pg_ref", "mock_task", "mock_wait_for_task"):
delattr(self, attr)
def test_get_managed_object_name_call(self):
mock_get_managed_object_name = MagicMock()
with patch(
"salt.utils.vmware.get_managed_object_name", mock_get_managed_object_name
):
salt.utils.vmware.remove_dvportgroup(self.mock_pg_ref)
mock_get_managed_object_name.assert_called_once_with(self.mock_pg_ref)
def test_destroy_task(self):
salt.utils.vmware.remove_dvportgroup(self.mock_pg_ref)
self.mock_pg_ref.Destroy_Task.assert_called_once_with()
def test_destroy_task_raises_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
self.mock_pg_ref.Destroy_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.remove_dvportgroup(self.mock_pg_ref)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_destroy_task_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
self.mock_pg_ref.Destroy_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.remove_dvportgroup(self.mock_pg_ref)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_destroy_task_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
self.mock_pg_ref.Destroy_Task = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.remove_dvportgroup(self.mock_pg_ref)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_wait_for_tasks(self):
salt.utils.vmware.remove_dvportgroup(self.mock_pg_ref)
self.mock_wait_for_task.assert_called_once_with(
self.mock_task,
"fake_pg",
"<class '{}unit.utils.test_vmware.FakeTaskClass'>".format(
"tests." if RUNTIME_VARS.PYTEST_SESSION else ""
),
)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetHostsTestCase(TestCase):
"""
Tests for salt.utils.vmware.get_hosts
"""
def setUp(self):
patches = (
("salt.utils.vmware.get_mors_with_properties", MagicMock(return_value=[])),
("salt.utils.vmware.get_datacenter", MagicMock(return_value=None)),
("salt.utils.vmware.get_cluster", MagicMock(return_value=None)),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
self.mock_root_folder = MagicMock()
self.mock_si = MagicMock()
self.mock_host1, self.mock_host2, self.mock_host3 = (
MagicMock(),
MagicMock(),
MagicMock(),
)
self.mock_prop_host1 = {"name": "fake_hostname1", "object": self.mock_host1}
self.mock_prop_host2 = {"name": "fake_hostname2", "object": self.mock_host2}
self.mock_prop_host3 = {"name": "fake_hostname3", "object": self.mock_host3}
self.mock_prop_hosts = [
self.mock_prop_host1,
self.mock_prop_host2,
self.mock_prop_host3,
]
def test_cluster_no_datacenter(self):
with self.assertRaises(ArgumentValueError) as excinfo:
salt.utils.vmware.get_hosts(self.mock_si, cluster_name="fake_cluster")
self.assertEqual(
excinfo.exception.strerror,
"Must specify the datacenter when specifying the " "cluster",
)
def test_get_si_no_datacenter_no_cluster(self):
mock_get_mors = MagicMock()
mock_get_root_folder = MagicMock(return_value=self.mock_root_folder)
with patch("salt.utils.vmware.get_root_folder", mock_get_root_folder):
with patch("salt.utils.vmware.get_mors_with_properties", mock_get_mors):
salt.utils.vmware.get_hosts(self.mock_si)
mock_get_root_folder.assert_called_once_with(self.mock_si)
mock_get_mors.assert_called_once_with(
self.mock_si,
vim.HostSystem,
container_ref=self.mock_root_folder,
property_list=["name"],
)
def test_get_si_datacenter_name_no_cluster_name(self):
mock_dc = MagicMock()
mock_get_dc = MagicMock(return_value=mock_dc)
mock_get_mors = MagicMock()
with patch("salt.utils.vmware.get_datacenter", mock_get_dc):
with patch("salt.utils.vmware.get_mors_with_properties", mock_get_mors):
salt.utils.vmware.get_hosts(
self.mock_si, datacenter_name="fake_datacenter"
)
mock_get_dc.assert_called_once_with(self.mock_si, "fake_datacenter")
mock_get_mors.assert_called_once_with(
self.mock_si, vim.HostSystem, container_ref=mock_dc, property_list=["name"]
)
def test_get_si_datacenter_name_and_cluster_name(self):
mock_dc = MagicMock()
mock_get_dc = MagicMock(return_value=mock_dc)
mock_get_cl = MagicMock()
mock_get_mors = MagicMock()
with patch("salt.utils.vmware.get_datacenter", mock_get_dc):
with patch("salt.utils.vmware.get_cluster", mock_get_cl):
with patch("salt.utils.vmware.get_mors_with_properties", mock_get_mors):
salt.utils.vmware.get_hosts(
self.mock_si,
datacenter_name="fake_datacenter",
cluster_name="fake_cluster",
)
mock_get_dc.assert_called_once_with(self.mock_si, "fake_datacenter")
mock_get_mors.assert_called_once_with(
self.mock_si,
vim.HostSystem,
container_ref=mock_dc,
property_list=["name", "parent"],
)
def test_host_get_all_hosts(self):
with patch(
"salt.utils.vmware.get_root_folder",
MagicMock(return_value=self.mock_root_folder),
):
with patch(
"salt.utils.vmware.get_mors_with_properties",
MagicMock(return_value=self.mock_prop_hosts),
):
res = salt.utils.vmware.get_hosts(self.mock_si, get_all_hosts=True)
self.assertEqual(res, [self.mock_host1, self.mock_host2, self.mock_host3])
def test_filter_hostname(self):
with patch(
"salt.utils.vmware.get_mors_with_properties",
MagicMock(return_value=self.mock_prop_hosts),
):
res = salt.utils.vmware.get_hosts(
self.mock_si, host_names=["fake_hostname1", "fake_hostname2"]
)
self.assertEqual(res, [self.mock_host1, self.mock_host2])
def test_get_all_host_flag_not_set_and_no_host_names(self):
with patch(
"salt.utils.vmware.get_mors_with_properties",
MagicMock(return_value=self.mock_prop_hosts),
):
res = salt.utils.vmware.get_hosts(self.mock_si)
self.assertEqual(res, [])
def test_filter_cluster(self):
self.mock_prop_host1["parent"] = vim.ClusterComputeResource("cluster")
self.mock_prop_host2["parent"] = vim.ClusterComputeResource("cluster")
self.mock_prop_host3["parent"] = vim.Datacenter("dc")
mock_get_cl_name = MagicMock(
side_effect=["fake_bad_cluster", "fake_good_cluster"]
)
with patch(
"salt.utils.vmware.get_mors_with_properties",
MagicMock(return_value=self.mock_prop_hosts),
):
with patch("salt.utils.vmware.get_managed_object_name", mock_get_cl_name):
res = salt.utils.vmware.get_hosts(
self.mock_si,
datacenter_name="fake_datacenter",
cluster_name="fake_good_cluster",
get_all_hosts=True,
)
self.assertEqual(mock_get_cl_name.call_count, 2)
self.assertEqual(res, [self.mock_host2])
def test_no_hosts(self):
with patch(
"salt.utils.vmware.get_mors_with_properties", MagicMock(return_value=[])
):
res = salt.utils.vmware.get_hosts(self.mock_si, get_all_hosts=True)
self.assertEqual(res, [])
def test_one_host_returned(self):
with patch(
"salt.utils.vmware.get_mors_with_properties",
MagicMock(return_value=[self.mock_prop_host1]),
):
res = salt.utils.vmware.get_hosts(self.mock_si, get_all_hosts=True)
self.assertEqual(res, [self.mock_host1])
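# test_filter_cluster above relies on a second mock idiom: when side_effect is
# a list, each successive call to the mock returns the next element, which is
# how get_managed_object_name yields a different cluster name per host. A
# minimal sketch (names are illustrative):

```python
from unittest.mock import MagicMock

# First call returns "fake_bad_cluster", second returns "fake_good_cluster";
# a further call would raise StopIteration.
mock_name = MagicMock(side_effect=["fake_bad_cluster", "fake_good_cluster"])
first = mock_name()
second = mock_name()
```

# This is why the test can assert call_count == 2 and still know which host
# was matched against which cluster name.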
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetLicenseManagerTestCase(TestCase):
"""
Tests for salt.utils.vmware.get_license_manager
"""
def setUp(self):
self.mock_si = MagicMock()
self.mock_lic_mgr = MagicMock()
type(self.mock_si.content).licenseManager = PropertyMock(
return_value=self.mock_lic_mgr
)
def tearDown(self):
for attr in ("mock_si", "mock_lic_mgr"):
delattr(self, attr)
def test_raise_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
type(self.mock_si.content).licenseManager = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_license_manager(self.mock_si)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_raise_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
type(self.mock_si.content).licenseManager = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_license_manager(self.mock_si)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_raise_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
type(self.mock_si.content).licenseManager = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.get_license_manager(self.mock_si)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_valid_license_manager(self):
ret = salt.utils.vmware.get_license_manager(self.mock_si)
self.assertEqual(ret, self.mock_lic_mgr)
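# The license-manager tests attach PropertyMock to type(mock).attr rather than
# to the mock instance, because property lookup happens on the class; each
# MagicMock instance gets its own subclass, so the assignment does not leak to
# other mocks. A minimal sketch (the "fake_lic_mgr" value is illustrative):

```python
from unittest.mock import MagicMock, PropertyMock

mock_si = MagicMock()
# Reading mock_si.content.licenseManager now returns the sentinel value:
type(mock_si.content).licenseManager = PropertyMock(return_value="fake_lic_mgr")
value = mock_si.content.licenseManager

# A PropertyMock can also raise on attribute access, which is how the
# *_raises_* tests above simulate faults from a property:
type(mock_si.content).licenseManager = PropertyMock(side_effect=KeyError("fault"))
try:
    mock_si.content.licenseManager
    raised = False
except KeyError:
    raised = True
```

# The same type(...) trick appears again below for the nested
# licenseAssignmentManager and instanceUuid properties.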
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetLicenseAssignmentManagerTestCase(TestCase):
"""
Tests for salt.utils.vmware.get_license_assignment_manager
"""
def setUp(self):
self.mock_si = MagicMock()
self.mock_lic_assign_mgr = MagicMock()
type(
self.mock_si.content.licenseManager
).licenseAssignmentManager = PropertyMock(return_value=self.mock_lic_assign_mgr)
def tearDown(self):
for attr in ("mock_si", "mock_lic_assign_mgr"):
delattr(self, attr)
def test_raise_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
type(
self.mock_si.content.licenseManager
).licenseAssignmentManager = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_license_assignment_manager(self.mock_si)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_raise_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
type(
self.mock_si.content.licenseManager
).licenseAssignmentManager = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_license_assignment_manager(self.mock_si)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_raise_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
type(
self.mock_si.content.licenseManager
).licenseAssignmentManager = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.get_license_assignment_manager(self.mock_si)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_empty_license_assignment_manager(self):
type(
self.mock_si.content.licenseManager
).licenseAssignmentManager = PropertyMock(return_value=None)
with self.assertRaises(VMwareObjectRetrievalError) as excinfo:
salt.utils.vmware.get_license_assignment_manager(self.mock_si)
self.assertEqual(
excinfo.exception.strerror, "License assignment manager was not retrieved"
)
def test_valid_assignment_manager(self):
ret = salt.utils.vmware.get_license_assignment_manager(self.mock_si)
self.assertEqual(ret, self.mock_lic_assign_mgr)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetLicensesTestCase(TestCase):
"""
Tests for salt.utils.vmware.get_licenses
"""
def setUp(self):
self.mock_si = MagicMock()
self.mock_licenses = [MagicMock(), MagicMock()]
self.mock_lic_mgr = MagicMock()
type(self.mock_lic_mgr).licenses = PropertyMock(return_value=self.mock_licenses)
patches = (
(
"salt.utils.vmware.get_license_manager",
MagicMock(return_value=self.mock_lic_mgr),
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ("mock_si", "mock_lic_mgr", "mock_licenses"):
delattr(self, attr)
def test_no_license_manager_passed_in(self):
mock_get_license_manager = MagicMock()
with patch("salt.utils.vmware.get_license_manager", mock_get_license_manager):
salt.utils.vmware.get_licenses(self.mock_si)
mock_get_license_manager.assert_called_once_with(self.mock_si)
def test_license_manager_passed_in(self):
mock_licenses = PropertyMock()
mock_lic_mgr = MagicMock()
type(mock_lic_mgr).licenses = mock_licenses
mock_get_license_manager = MagicMock()
with patch("salt.utils.vmware.get_license_manager", mock_get_license_manager):
salt.utils.vmware.get_licenses(self.mock_si, license_manager=mock_lic_mgr)
self.assertEqual(mock_get_license_manager.call_count, 0)
self.assertEqual(mock_licenses.call_count, 1)
def test_raise_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
type(self.mock_lic_mgr).licenses = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_licenses(self.mock_si)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_raise_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
type(self.mock_lic_mgr).licenses = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_licenses(self.mock_si)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_raise_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
type(self.mock_lic_mgr).licenses = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.get_licenses(self.mock_si)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_valid_licenses(self):
ret = salt.utils.vmware.get_licenses(self.mock_si)
self.assertEqual(ret, self.mock_licenses)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class AddLicenseTestCase(TestCase):
"""
Tests for salt.utils.vmware.add_license
"""
def setUp(self):
self.mock_si = MagicMock()
self.mock_license = MagicMock()
self.mock_add_license = MagicMock(return_value=self.mock_license)
self.mock_lic_mgr = MagicMock(AddLicense=self.mock_add_license)
self.mock_label = MagicMock()
patches = (
(
"salt.utils.vmware.get_license_manager",
MagicMock(return_value=self.mock_lic_mgr),
),
("salt.utils.vmware.vim.KeyValue", MagicMock(return_value=self.mock_label)),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in (
"mock_si",
"mock_lic_mgr",
"mock_license",
"mock_add_license",
"mock_label",
):
delattr(self, attr)
def test_no_license_manager_passed_in(self):
mock_get_license_manager = MagicMock()
with patch("salt.utils.vmware.get_license_manager", mock_get_license_manager):
salt.utils.vmware.add_license(
self.mock_si, "fake_license_key", "fake_license_description"
)
mock_get_license_manager.assert_called_once_with(self.mock_si)
def test_license_manager_passed_in(self):
mock_get_license_manager = MagicMock()
with patch("salt.utils.vmware.get_license_manager", mock_get_license_manager):
salt.utils.vmware.add_license(
self.mock_si,
"fake_license_key",
"fake_license_description",
license_manager=self.mock_lic_mgr,
)
self.assertEqual(mock_get_license_manager.call_count, 0)
self.assertEqual(self.mock_add_license.call_count, 1)
def test_label_settings(self):
salt.utils.vmware.add_license(
self.mock_si, "fake_license_key", "fake_license_description"
)
self.assertEqual(self.mock_label.key, "VpxClientLicenseLabel")
self.assertEqual(self.mock_label.value, "fake_license_description")
def test_add_license_arguments(self):
salt.utils.vmware.add_license(
self.mock_si, "fake_license_key", "fake_license_description"
)
self.mock_add_license.assert_called_once_with(
"fake_license_key", [self.mock_label]
)
def test_add_license_raises_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
self.mock_lic_mgr.AddLicense = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.add_license(
self.mock_si, "fake_license_key", "fake_license_description"
)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_add_license_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
self.mock_lic_mgr.AddLicense = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.add_license(
self.mock_si, "fake_license_key", "fake_license_description"
)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_add_license_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
self.mock_lic_mgr.AddLicense = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.add_license(
self.mock_si, "fake_license_key", "fake_license_description"
)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_valid_license_added(self):
ret = salt.utils.vmware.add_license(
self.mock_si, "fake_license_key", "fake_license_description"
)
self.assertEqual(ret, self.mock_license)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetAssignedLicensesTestCase(TestCase):
"""
Tests for salt.utils.vmware.get_assigned_licenses
"""
def setUp(self):
self.mock_ent_id = MagicMock()
self.mock_si = MagicMock()
type(self.mock_si.content.about).instanceUuid = PropertyMock(
return_value=self.mock_ent_id
)
self.mock_moid = MagicMock()
self.prop_mock_moid = PropertyMock(return_value=self.mock_moid)
self.mock_entity_ref = MagicMock()
type(self.mock_entity_ref)._moId = self.prop_mock_moid
self.mock_assignments = [
MagicMock(entityDisplayName="fake_ent1"),
MagicMock(entityDisplayName="fake_ent2"),
]
self.mock_query_assigned_licenses = MagicMock(
return_value=[
MagicMock(assignedLicense=self.mock_assignments[0]),
MagicMock(assignedLicense=self.mock_assignments[1]),
]
)
self.mock_lic_assign_mgr = MagicMock(
QueryAssignedLicenses=self.mock_query_assigned_licenses
)
patches = (
(
"salt.utils.vmware.get_license_assignment_manager",
MagicMock(return_value=self.mock_lic_assign_mgr),
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in (
"mock_ent_id",
"mock_si",
"mock_moid",
"prop_mock_moid",
"mock_entity_ref",
"mock_assignments",
"mock_query_assigned_licenses",
"mock_lic_assign_mgr",
):
delattr(self, attr)
def test_no_license_assignment_manager_passed_in(self):
mock_get_license_assign_manager = MagicMock()
with patch(
"salt.utils.vmware.get_license_assignment_manager",
mock_get_license_assign_manager,
):
salt.utils.vmware.get_assigned_licenses(
self.mock_si, self.mock_entity_ref, "fake_entity_name"
)
mock_get_license_assign_manager.assert_called_once_with(self.mock_si)
def test_license_assignment_manager_passed_in(self):
mock_get_license_assign_manager = MagicMock()
with patch(
"salt.utils.vmware.get_license_assignment_manager",
mock_get_license_assign_manager,
):
salt.utils.vmware.get_assigned_licenses(
self.mock_si,
self.mock_entity_ref,
"fake_entity_name",
license_assignment_manager=self.mock_lic_assign_mgr,
)
self.assertEqual(mock_get_license_assign_manager.call_count, 0)
def test_entity_name(self):
mock_trace = MagicMock()
with patch("salt._logging.impl.SaltLoggingClass.trace", mock_trace):
salt.utils.vmware.get_assigned_licenses(
self.mock_si, self.mock_entity_ref, "fake_entity_name"
)
mock_trace.assert_called_once_with(
"Retrieving licenses assigned to '%s'", "fake_entity_name"
)
def test_instance_uuid(self):
mock_instance_uuid_prop = PropertyMock()
type(self.mock_si.content.about).instanceUuid = mock_instance_uuid_prop
self.mock_lic_assign_mgr.QueryAssignedLicenses = MagicMock(
return_value=[MagicMock(entityDisplayName="fake_vcenter")]
)
salt.utils.vmware.get_assigned_licenses(
self.mock_si, entity_name="fake_vcenter"
)
self.assertEqual(mock_instance_uuid_prop.call_count, 1)
def test_instance_uuid_raises_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
type(self.mock_si.content.about).instanceUuid = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_assigned_licenses(
self.mock_si, entity_name="fake_vcenter"
)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_instance_uuid_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
type(self.mock_si.content.about).instanceUuid = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_assigned_licenses(
self.mock_si, entity_name="fake_vcenter"
)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_instance_uuid_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
type(self.mock_si.content.about).instanceUuid = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.get_assigned_licenses(
self.mock_si, entity_name="fake_vcenter"
)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_vcenter_entity_too_many_assignments(self):
self.mock_lic_assign_mgr.QueryAssignedLicenses = MagicMock(
return_value=[MagicMock(), MagicMock()]
)
with self.assertRaises(VMwareObjectRetrievalError) as excinfo:
salt.utils.vmware.get_assigned_licenses(
self.mock_si, entity_name="fake_vcenter"
)
self.assertEqual(
excinfo.exception.strerror,
"Unexpected return. Expect only a single assignment",
)
def test_wrong_vcenter_name(self):
self.mock_lic_assign_mgr.QueryAssignedLicenses = MagicMock(
return_value=[MagicMock(entityDisplayName="bad_vcenter")]
)
with self.assertRaises(VMwareObjectRetrievalError) as excinfo:
salt.utils.vmware.get_assigned_licenses(
self.mock_si, entity_name="fake_vcenter"
)
self.assertEqual(
excinfo.exception.strerror,
"Got license assignment info for a different vcenter",
)
def test_query_assigned_licenses_vcenter(self):
with self.assertRaises(VMwareObjectRetrievalError) as excinfo:
salt.utils.vmware.get_assigned_licenses(
self.mock_si, entity_name="fake_vcenter"
)
self.mock_query_assigned_licenses.assert_called_once_with(self.mock_ent_id)
def test_query_assigned_licenses_with_entity(self):
salt.utils.vmware.get_assigned_licenses(
self.mock_si, self.mock_entity_ref, "fake_entity_name"
)
self.mock_query_assigned_licenses.assert_called_once_with(self.mock_moid)
def test_query_assigned_licenses_raises_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
self.mock_lic_assign_mgr.QueryAssignedLicenses = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_assigned_licenses(
self.mock_si, self.mock_entity_ref, "fake_entity_name"
)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_query_assigned_licenses_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
self.mock_lic_assign_mgr.QueryAssignedLicenses = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.get_assigned_licenses(
self.mock_si, self.mock_entity_ref, "fake_entity_name"
)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_query_assigned_licenses_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
self.mock_lic_assign_mgr.QueryAssignedLicenses = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.get_assigned_licenses(
self.mock_si, self.mock_entity_ref, "fake_entity_name"
)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_valid_assignments(self):
ret = salt.utils.vmware.get_assigned_licenses(
self.mock_si, self.mock_entity_ref, "fake_entity_name"
)
self.assertEqual(ret, self.mock_assignments)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class AssignLicenseTestCase(TestCase):
"""
Tests for salt.utils.vmware.assign_license
"""
def setUp(self):
self.mock_ent_id = MagicMock()
self.mock_si = MagicMock()
type(self.mock_si.content.about).instanceUuid = PropertyMock(
return_value=self.mock_ent_id
)
self.mock_lic_key = MagicMock()
self.mock_moid = MagicMock()
self.prop_mock_moid = PropertyMock(return_value=self.mock_moid)
self.mock_entity_ref = MagicMock()
type(self.mock_entity_ref)._moId = self.prop_mock_moid
self.mock_license = MagicMock()
self.mock_update_assigned_license = MagicMock(return_value=self.mock_license)
self.mock_lic_assign_mgr = MagicMock(
UpdateAssignedLicense=self.mock_update_assigned_license
)
patches = (
(
"salt.utils.vmware.get_license_assignment_manager",
MagicMock(return_value=self.mock_lic_assign_mgr),
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def test_no_license_assignment_manager_passed_in(self):
mock_get_license_assign_manager = MagicMock()
with patch(
"salt.utils.vmware.get_license_assignment_manager",
mock_get_license_assign_manager,
):
salt.utils.vmware.assign_license(
self.mock_si,
self.mock_lic_key,
"fake_license_name",
self.mock_entity_ref,
"fake_entity_name",
)
mock_get_license_assign_manager.assert_called_once_with(self.mock_si)
def test_license_assignment_manager_passed_in(self):
mock_get_license_assign_manager = MagicMock()
with patch(
"salt.utils.vmware.get_license_assignment_manager",
mock_get_license_assign_manager,
):
salt.utils.vmware.assign_license(
self.mock_si,
self.mock_lic_key,
"fake_license_name",
self.mock_entity_ref,
"fake_entity_name",
license_assignment_manager=self.mock_lic_assign_mgr,
)
self.assertEqual(mock_get_license_assign_manager.call_count, 0)
self.assertEqual(self.mock_update_assigned_license.call_count, 1)
def test_entity_name(self):
mock_trace = MagicMock()
with patch("salt._logging.impl.SaltLoggingClass.trace", mock_trace):
salt.utils.vmware.assign_license(
self.mock_si,
self.mock_lic_key,
"fake_license_name",
self.mock_entity_ref,
"fake_entity_name",
)
mock_trace.assert_called_once_with(
"Assigning license to '%s'", "fake_entity_name"
)
def test_instance_uuid(self):
mock_instance_uuid_prop = PropertyMock()
type(self.mock_si.content.about).instanceUuid = mock_instance_uuid_prop
self.mock_lic_assign_mgr.UpdateAssignedLicense = MagicMock(
return_value=[MagicMock(entityDisplayName="fake_vcenter")]
)
salt.utils.vmware.assign_license(
self.mock_si,
self.mock_lic_key,
"fake_license_name",
entity_name="fake_entity_name",
)
self.assertEqual(mock_instance_uuid_prop.call_count, 1)
def test_instance_uuid_raises_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
type(self.mock_si.content.about).instanceUuid = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.assign_license(
self.mock_si,
self.mock_lic_key,
"fake_license_name",
entity_name="fake_entity_name",
)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_instance_uuid_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
type(self.mock_si.content.about).instanceUuid = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.assign_license(
self.mock_si,
self.mock_lic_key,
"fake_license_name",
entity_name="fake_entity_name",
)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_instance_uuid_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
type(self.mock_si.content.about).instanceUuid = PropertyMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.assign_license(
self.mock_si,
self.mock_lic_key,
"fake_license_name",
entity_name="fake_entity_name",
)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_update_assigned_licenses_vcenter(self):
salt.utils.vmware.assign_license(
self.mock_si,
self.mock_lic_key,
"fake_license_name",
entity_name="fake_entity_name",
)
self.mock_update_assigned_license.assert_called_once_with(
self.mock_ent_id, self.mock_lic_key, "fake_license_name"
)
def test_update_assigned_licenses_call_with_entity(self):
salt.utils.vmware.assign_license(
self.mock_si,
self.mock_lic_key,
"fake_license_name",
self.mock_entity_ref,
"fake_entity_name",
)
self.mock_update_assigned_license.assert_called_once_with(
self.mock_moid, self.mock_lic_key, "fake_license_name"
)
def test_update_assigned_licenses_raises_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
self.mock_lic_assign_mgr.UpdateAssignedLicense = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.assign_license(
self.mock_si,
self.mock_lic_key,
"fake_license_name",
self.mock_entity_ref,
"fake_entity_name",
)
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_update_assigned_licenses_raises_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "VimFault msg"
self.mock_lic_assign_mgr.UpdateAssignedLicense = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.assign_license(
self.mock_si,
self.mock_lic_key,
"fake_license_name",
self.mock_entity_ref,
"fake_entity_name",
)
self.assertEqual(excinfo.exception.strerror, "VimFault msg")
def test_update_assigned_licenses_raises_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "RuntimeFault msg"
self.mock_lic_assign_mgr.UpdateAssignedLicense = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.assign_license(
self.mock_si,
self.mock_lic_key,
"fake_license_name",
self.mock_entity_ref,
"fake_entity_name",
)
self.assertEqual(excinfo.exception.strerror, "RuntimeFault msg")
def test_valid_assignments(self):
ret = salt.utils.vmware.assign_license(
self.mock_si,
self.mock_lic_key,
"fake_license_name",
self.mock_entity_ref,
"fake_entity_name",
)
self.assertEqual(ret, self.mock_license)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetStorageSystemTestCase(TestCase):
"""
Tests for salt.utils.vmware.get_storage_system
"""
def setUp(self):
self.mock_si = MagicMock(content=MagicMock())
self.mock_host_ref = MagicMock()
self.mock_get_managed_object_name = MagicMock(return_value="fake_host")
self.mock_traversal_spec = MagicMock()
self.mock_obj = MagicMock()
self.mock_get_mors = MagicMock(return_value=[{"object": self.mock_obj}])
patches = (
(
"salt.utils.vmware.get_managed_object_name",
self.mock_get_managed_object_name,
),
("salt.utils.vmware.get_mors_with_properties", self.mock_get_mors),
(
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec",
MagicMock(return_value=self.mock_traversal_spec),
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in (
"mock_si",
"mock_host_ref",
"mock_get_managed_object_name",
"mock_traversal_spec",
"mock_obj",
"mock_get_mors",
):
delattr(self, attr)
def test_no_hostname_argument(self):
salt.utils.vmware.get_storage_system(self.mock_si, self.mock_host_ref)
self.mock_get_managed_object_name.assert_called_once_with(self.mock_host_ref)
def test_hostname_argument(self):
salt.utils.vmware.get_storage_system(
self.mock_si, self.mock_host_ref, hostname="fake_host"
)
self.assertEqual(self.mock_get_managed_object_name.call_count, 0)
def test_traversal_spec(self):
mock_traversal_spec = MagicMock(return_value=[{"object": self.mock_obj}])
with patch(
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec",
mock_traversal_spec,
):
salt.utils.vmware.get_storage_system(self.mock_si, self.mock_host_ref)
mock_traversal_spec.assert_called_once_with(
path="configManager.storageSystem", type=vim.HostSystem, skip=False
)
def test_get_mors_with_properties(self):
salt.utils.vmware.get_storage_system(self.mock_si, self.mock_host_ref)
self.mock_get_mors.assert_called_once_with(
self.mock_si,
vim.HostStorageSystem,
property_list=["systemFile"],
container_ref=self.mock_host_ref,
traversal_spec=self.mock_traversal_spec,
)
def test_empty_mors_result(self):
with patch(
"salt.utils.vmware.get_mors_with_properties", MagicMock(return_value=[])
):
with self.assertRaises(VMwareObjectRetrievalError) as excinfo:
salt.utils.vmware.get_storage_system(self.mock_si, self.mock_host_ref)
self.assertEqual(
excinfo.exception.strerror,
"Host's 'fake_host' storage system was " "not retrieved",
)
def test_valid_mors_result(self):
res = salt.utils.vmware.get_storage_system(self.mock_si, self.mock_host_ref)
self.assertEqual(res, self.mock_obj)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class GetDatastoresTestCase(TestCase):
"""
Tests for salt.utils.vmware.get_datastores
"""
def setUp(self):
self.mock_si = MagicMock()
self.mock_reference = MagicMock(spec=vim.HostSystem)
self.mock_mount_infos = [
MagicMock(
volume=MagicMock(
spec=vim.HostVmfsVolume, extent=[MagicMock(diskName="fake_disk2")]
)
),
MagicMock(
volume=MagicMock(
spec=vim.HostVmfsVolume, extent=[MagicMock(diskName="fake_disk3")]
)
),
]
self.mock_mount_infos[0].volume.name = "fake_ds2"
self.mock_mount_infos[1].volume.name = "fake_ds3"
self.mock_entries = [
{"name": "fake_ds1", "object": MagicMock()},
{"name": "fake_ds2", "object": MagicMock()},
{"name": "fake_ds3", "object": MagicMock()},
]
self.mock_storage_system = MagicMock()
self.mock_get_storage_system = MagicMock(return_value=self.mock_storage_system)
self.mock_get_managed_object_name = MagicMock(return_value="fake_host")
self.mock_traversal_spec = MagicMock()
patches = (
(
"salt.utils.vmware.get_managed_object_name",
self.mock_get_managed_object_name,
),
("salt.utils.vmware.get_storage_system", self.mock_get_storage_system),
(
"salt.utils.vmware.get_properties_of_managed_object",
MagicMock(
return_value={
"fileSystemVolumeInfo.mountInfo": self.mock_mount_infos
}
),
),
(
"salt.utils.vmware.get_mors_with_properties",
MagicMock(return_value=self.mock_entries),
),
(
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec",
MagicMock(return_value=self.mock_traversal_spec),
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in (
"mock_si",
"mock_reference",
"mock_storage_system",
"mock_get_storage_system",
"mock_mount_infos",
"mock_entries",
"mock_get_managed_object_name",
"mock_traversal_spec",
):
delattr(self, attr)
def test_get_reference_name_call(self):
salt.utils.vmware.get_datastores(self.mock_si, self.mock_reference)
self.mock_get_managed_object_name.assert_called_once_with(self.mock_reference)
def test_get_no_datastores(self):
res = salt.utils.vmware.get_datastores(self.mock_si, self.mock_reference)
self.assertEqual(res, [])
def test_get_storage_system_call(self):
salt.utils.vmware.get_datastores(
self.mock_si, self.mock_reference, backing_disk_ids=["fake_disk1"]
)
self.mock_get_storage_system.assert_called_once_with(
self.mock_si, self.mock_reference, "fake_host"
)
def test_get_mount_info_call(self):
mock_get_properties_of_managed_object = MagicMock()
with patch(
"salt.utils.vmware.get_properties_of_managed_object",
mock_get_properties_of_managed_object,
):
salt.utils.vmware.get_datastores(
self.mock_si, self.mock_reference, backing_disk_ids=["fake_disk1"]
)
mock_get_properties_of_managed_object.assert_called_once_with(
self.mock_storage_system, ["fileSystemVolumeInfo.mountInfo"]
)
def test_backing_disks_no_mount_info(self):
with patch(
"salt.utils.vmware.get_properties_of_managed_object",
MagicMock(return_value={}),
):
res = salt.utils.vmware.get_datastores(
self.mock_si, self.mock_reference, backing_disk_ids=["fake_disk_id"]
)
self.assertEqual(res, [])
def test_host_traversal_spec(self):
# Reference is of type vim.HostSystem
mock_traversal_spec_init = MagicMock()
with patch(
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec",
mock_traversal_spec_init,
):
salt.utils.vmware.get_datastores(
self.mock_si, self.mock_reference, get_all_datastores=True
)
mock_traversal_spec_init.assert_called_once_with(
name="host_datastore_traversal",
path="datastore",
skip=False,
type=vim.HostSystem,
)
def test_cluster_traversal_spec(self):
mock_traversal_spec_init = MagicMock()
# Reference is of type vim.ClusterComputeResource
mock_reference = MagicMock(spec=vim.ClusterComputeResource)
with patch(
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec",
mock_traversal_spec_init,
):
salt.utils.vmware.get_datastores(
self.mock_si, mock_reference, get_all_datastores=True
)
mock_traversal_spec_init.assert_called_once_with(
name="cluster_datastore_traversal",
path="datastore",
skip=False,
type=vim.ClusterComputeResource,
)
def test_datacenter_traversal_spec(self):
mock_traversal_spec_init = MagicMock()
# Reference is of type vim.Datacenter
mock_reference = MagicMock(spec=vim.Datacenter)
with patch(
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec",
mock_traversal_spec_init,
):
salt.utils.vmware.get_datastores(
self.mock_si, mock_reference, get_all_datastores=True
)
mock_traversal_spec_init.assert_called_once_with(
name="datacenter_datastore_traversal",
path="datastore",
skip=False,
type=vim.Datacenter,
)
def test_root_folder_traversal_spec(self):
mock_traversal_spec_init = MagicMock(return_value="traversal")
mock_reference = MagicMock(spec=vim.Folder)
with patch(
"salt.utils.vmware.get_managed_object_name",
MagicMock(side_effect=["fake_host", "Datacenters"]),
):
with patch(
"salt.utils.vmware.vmodl.query.PropertyCollector.TraversalSpec",
mock_traversal_spec_init,
):
salt.utils.vmware.get_datastores(
self.mock_si, mock_reference, get_all_datastores=True
)
mock_traversal_spec_init.assert_has_calls(
[
call(path="datastore", skip=False, type=vim.Datacenter),
call(
path="childEntity",
selectSet=["traversal"],
skip=False,
type=vim.Folder,
),
]
)
def test_unsupported_reference_type(self):
class FakeClass:
pass
mock_reference = MagicMock(spec=FakeClass)
with self.assertRaises(ArgumentValueError) as excinfo:
salt.utils.vmware.get_datastores(
self.mock_si, mock_reference, get_all_datastores=True
)
self.assertEqual(
excinfo.exception.strerror, "Unsupported reference type 'FakeClass'"
)
def test_get_mors_with_properties(self):
mock_get_mors_with_properties = MagicMock()
with patch(
"salt.utils.vmware.get_mors_with_properties", mock_get_mors_with_properties
):
salt.utils.vmware.get_datastores(
self.mock_si, self.mock_reference, get_all_datastores=True
)
mock_get_mors_with_properties.assert_called_once_with(
self.mock_si,
object_type=vim.Datastore,
property_list=["name"],
container_ref=self.mock_reference,
traversal_spec=self.mock_traversal_spec,
)
def test_get_all_datastores(self):
res = salt.utils.vmware.get_datastores(
self.mock_si, self.mock_reference, get_all_datastores=True
)
self.assertEqual(
res,
[
self.mock_entries[0]["object"],
self.mock_entries[1]["object"],
self.mock_entries[2]["object"],
],
)
def test_get_datastores_filtered_by_name(self):
res = salt.utils.vmware.get_datastores(
self.mock_si, self.mock_reference, datastore_names=["fake_ds1", "fake_ds2"]
)
self.assertEqual(
res, [self.mock_entries[0]["object"], self.mock_entries[1]["object"]]
)
def test_get_datastores_filtered_by_backing_disk(self):
res = salt.utils.vmware.get_datastores(
self.mock_si,
self.mock_reference,
backing_disk_ids=["fake_disk2", "fake_disk3"],
)
self.assertEqual(
res, [self.mock_entries[1]["object"], self.mock_entries[2]["object"]]
)
def test_get_datastores_filtered_by_both_name_and_backing_disk(self):
# fake_ds1 matches by name; fake_ds3 matches via its backing disk fake_disk3
res = salt.utils.vmware.get_datastores(
self.mock_si,
self.mock_reference,
datastore_names=["fake_ds1"],
backing_disk_ids=["fake_disk3"],
)
self.assertEqual(
res, [self.mock_entries[0]["object"], self.mock_entries[2]["object"]]
)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
class RenameDatastoreTestCase(TestCase):
"""
Tests for salt.utils.vmware.rename_datastore
"""
def setUp(self):
self.mock_ds_ref = MagicMock()
self.mock_get_managed_object_name = MagicMock(return_value="fake_ds")
patches = (
(
"salt.utils.vmware.get_managed_object_name",
self.mock_get_managed_object_name,
),
)
for mod, mock in patches:
patcher = patch(mod, mock)
patcher.start()
self.addCleanup(patcher.stop)
def tearDown(self):
for attr in ("mock_ds_ref", "mock_get_managed_object_name"):
delattr(self, attr)
def test_datastore_name_call(self):
salt.utils.vmware.rename_datastore(self.mock_ds_ref, "fake_new_name")
self.mock_get_managed_object_name.assert_called_once_with(self.mock_ds_ref)
def test_rename_datastore_raise_no_permission(self):
exc = vim.fault.NoPermission()
exc.privilegeId = "Fake privilege"
type(self.mock_ds_ref).RenameDatastore = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.rename_datastore(self.mock_ds_ref, "fake_new_name")
self.assertEqual(
excinfo.exception.strerror,
"Not enough permissions. Required privilege: " "Fake privilege",
)
def test_rename_datastore_raise_vim_fault(self):
exc = vim.fault.VimFault()
exc.msg = "vim_fault"
type(self.mock_ds_ref).RenameDatastore = MagicMock(side_effect=exc)
with self.assertRaises(VMwareApiError) as excinfo:
salt.utils.vmware.rename_datastore(self.mock_ds_ref, "fake_new_name")
self.assertEqual(excinfo.exception.strerror, "vim_fault")
def test_rename_datastore_raise_runtime_fault(self):
exc = vmodl.RuntimeFault()
exc.msg = "runtime_fault"
type(self.mock_ds_ref).RenameDatastore = MagicMock(side_effect=exc)
with self.assertRaises(VMwareRuntimeError) as excinfo:
salt.utils.vmware.rename_datastore(self.mock_ds_ref, "fake_new_name")
self.assertEqual(excinfo.exception.strerror, "runtime_fault")
def test_rename_datastore(self):
salt.utils.vmware.rename_datastore(self.mock_ds_ref, "fake_new_name")
self.mock_ds_ref.RenameDatastore.assert_called_once_with("fake_new_name")
class ConvertToKbTestCase(TestCase):
"""
Tests for converting units
"""
def test_gb_conversion_call(self):
self.assertEqual(
salt.utils.vmware.convert_to_kb("Gb", 10),
{"size": 10485760, "unit": "KB"},
)
def test_mb_conversion_call(self):
self.assertEqual(
salt.utils.vmware.convert_to_kb("Mb", 10),
{"size": 10240, "unit": "KB"},
)
def test_kb_conversion_call(self):
self.assertEqual(
salt.utils.vmware.convert_to_kb("Kb", 10), {"size": 10, "unit": "KB"}
)
def test_conversion_bad_input_argument_fault(self):
self.assertRaises(
ArgumentValueError, salt.utils.vmware.convert_to_kb, "test", 10
)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
@patch("salt.utils.vmware.get_managed_object_name", MagicMock())
@patch("salt.utils.vmware.wait_for_task", MagicMock())
class CreateVirtualMachineTestCase(TestCase):
"""
Tests for salt.utils.vmware.create_vm
"""
def setUp(self):
self.vm_name = "fake_vm"
self.mock_task = MagicMock()
self.mock_config_spec = MagicMock()
self.mock_resourcepool_object = MagicMock()
self.mock_host_object = MagicMock()
self.mock_vm_create_task = MagicMock(return_value=self.mock_task)
self.mock_folder_object = MagicMock(CreateVM_Task=self.mock_vm_create_task)
def test_create_vm_pool_task_call(self):
salt.utils.vmware.create_vm(
self.vm_name,
self.mock_config_spec,
self.mock_folder_object,
self.mock_resourcepool_object,
)
self.mock_vm_create_task.assert_called_once()
def test_create_vm_host_task_call(self):
salt.utils.vmware.create_vm(
self.vm_name,
self.mock_config_spec,
self.mock_folder_object,
self.mock_resourcepool_object,
host_object=self.mock_host_object,
)
self.mock_vm_create_task.assert_called_once()
def test_create_vm_raise_no_permission(self):
exception = vim.fault.NoPermission()
exception.msg = "vim.fault.NoPermission msg"
self.mock_folder_object.CreateVM_Task = MagicMock(side_effect=exception)
with self.assertRaises(VMwareApiError) as exc:
salt.utils.vmware.create_vm(
self.vm_name,
self.mock_config_spec,
self.mock_folder_object,
self.mock_resourcepool_object,
)
self.assertEqual(
exc.exception.strerror, "Not enough permissions. Required privilege: "
)
def test_create_vm_raise_vim_fault(self):
exception = vim.fault.VimFault()
exception.msg = "vim.fault.VimFault msg"
self.mock_folder_object.CreateVM_Task = MagicMock(side_effect=exception)
with self.assertRaises(VMwareApiError) as exc:
salt.utils.vmware.create_vm(
self.vm_name,
self.mock_config_spec,
self.mock_folder_object,
self.mock_resourcepool_object,
)
self.assertEqual(exc.exception.strerror, "vim.fault.VimFault msg")
def test_create_vm_raise_runtime_fault(self):
exception = vmodl.RuntimeFault()
exception.msg = "vmodl.RuntimeFault msg"
self.mock_folder_object.CreateVM_Task = MagicMock(side_effect=exception)
with self.assertRaises(VMwareRuntimeError) as exc:
salt.utils.vmware.create_vm(
self.vm_name,
self.mock_config_spec,
self.mock_folder_object,
self.mock_resourcepool_object,
)
self.assertEqual(exc.exception.strerror, "vmodl.RuntimeFault msg")
def test_create_vm_wait_for_task(self):
mock_wait_for_task = MagicMock()
with patch("salt.utils.vmware.wait_for_task", mock_wait_for_task):
salt.utils.vmware.create_vm(
self.vm_name,
self.mock_config_spec,
self.mock_folder_object,
self.mock_resourcepool_object,
)
mock_wait_for_task.assert_called_once_with(
self.mock_task, self.vm_name, "CreateVM Task", 10, "info"
)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
@patch("salt.utils.vmware.get_managed_object_name", MagicMock())
@patch("salt.utils.vmware.wait_for_task", MagicMock())
class RegisterVirtualMachineTestCase(TestCase):
"""
Tests for salt.utils.vmware.register_vm
"""
def setUp(self):
self.vm_name = "fake_vm"
self.mock_task = MagicMock()
self.mock_vmx_path = MagicMock()
self.mock_resourcepool_object = MagicMock()
self.mock_host_object = MagicMock()
self.mock_vm_register_task = MagicMock(return_value=self.mock_task)
self.vm_folder_object = MagicMock(RegisterVM_Task=self.mock_vm_register_task)
self.datacenter = MagicMock(vmFolder=self.vm_folder_object)
def test_register_vm_pool_task_call(self):
salt.utils.vmware.register_vm(
self.datacenter,
self.vm_name,
self.mock_vmx_path,
self.mock_resourcepool_object,
)
self.mock_vm_register_task.assert_called_once()
def test_register_vm_host_task_call(self):
salt.utils.vmware.register_vm(
self.datacenter,
self.vm_name,
self.mock_vmx_path,
self.mock_resourcepool_object,
host_object=self.mock_host_object,
)
self.mock_vm_register_task.assert_called_once()
def test_register_vm_raise_no_permission(self):
exception = vim.fault.NoPermission()
self.vm_folder_object.RegisterVM_Task = MagicMock(side_effect=exception)
with self.assertRaises(VMwareApiError) as exc:
salt.utils.vmware.register_vm(
self.datacenter,
self.vm_name,
self.mock_vmx_path,
self.mock_resourcepool_object,
)
self.assertEqual(
exc.exception.strerror, "Not enough permissions. Required privilege: "
)
def test_register_vm_raise_vim_fault(self):
exception = vim.fault.VimFault()
exception.msg = "vim.fault.VimFault msg"
self.vm_folder_object.RegisterVM_Task = MagicMock(side_effect=exception)
with self.assertRaises(VMwareApiError) as exc:
salt.utils.vmware.register_vm(
self.datacenter,
self.vm_name,
self.mock_vmx_path,
self.mock_resourcepool_object,
)
self.assertEqual(exc.exception.strerror, "vim.fault.VimFault msg")
def test_register_vm_raise_runtime_fault(self):
exception = vmodl.RuntimeFault()
exception.msg = "vmodl.RuntimeFault msg"
self.vm_folder_object.RegisterVM_Task = MagicMock(side_effect=exception)
with self.assertRaises(VMwareRuntimeError) as exc:
salt.utils.vmware.register_vm(
self.datacenter,
self.vm_name,
self.mock_vmx_path,
self.mock_resourcepool_object,
)
self.assertEqual(exc.exception.strerror, "vmodl.RuntimeFault msg")
def test_register_vm_wait_for_task(self):
mock_wait_for_task = MagicMock()
with patch("salt.utils.vmware.wait_for_task", mock_wait_for_task):
salt.utils.vmware.register_vm(
self.datacenter,
self.vm_name,
self.mock_vmx_path,
self.mock_resourcepool_object,
)
mock_wait_for_task.assert_called_once_with(
self.mock_task, self.vm_name, "RegisterVM Task"
)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
@patch("salt.utils.vmware.get_managed_object_name", MagicMock())
@patch("salt.utils.vmware.wait_for_task", MagicMock())
class UpdateVirtualMachineTestCase(TestCase):
"""
Tests for salt.utils.vmware.update_vm
"""
def setUp(self):
self.mock_task = MagicMock()
self.mock_config_spec = MagicMock()
self.mock_vm_update_task = MagicMock(return_value=self.mock_task)
self.mock_vm_ref = MagicMock(ReconfigVM_Task=self.mock_vm_update_task)
def test_update_vm_task_call(self):
salt.utils.vmware.update_vm(self.mock_vm_ref, self.mock_config_spec)
self.mock_vm_update_task.assert_called_once()
def test_update_vm_raise_vim_fault(self):
exception = vim.fault.VimFault()
exception.msg = "vim.fault.VimFault"
self.mock_vm_ref.ReconfigVM_Task = MagicMock(side_effect=exception)
with self.assertRaises(VMwareApiError) as exc:
salt.utils.vmware.update_vm(self.mock_vm_ref, self.mock_config_spec)
self.assertEqual(exc.exception.strerror, "vim.fault.VimFault")
def test_update_vm_raise_runtime_fault(self):
exception = vmodl.RuntimeFault()
exception.msg = "vmodl.RuntimeFault"
self.mock_vm_ref.ReconfigVM_Task = MagicMock(side_effect=exception)
with self.assertRaises(VMwareRuntimeError) as exc:
salt.utils.vmware.update_vm(self.mock_vm_ref, self.mock_config_spec)
self.assertEqual(exc.exception.strerror, "vmodl.RuntimeFault")
def test_update_vm_wait_for_task(self):
mock_wait_for_task = MagicMock()
with patch(
"salt.utils.vmware.get_managed_object_name", MagicMock(return_value="my_vm")
):
with patch("salt.utils.vmware.wait_for_task", mock_wait_for_task):
salt.utils.vmware.update_vm(self.mock_vm_ref, self.mock_config_spec)
mock_wait_for_task.assert_called_once_with(
self.mock_task, "my_vm", "ReconfigureVM Task"
)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
@patch("salt.utils.vmware.get_managed_object_name", MagicMock())
@patch("salt.utils.vmware.wait_for_task", MagicMock())
class DeleteVirtualMachineTestCase(TestCase):
"""
Tests for salt.utils.vmware.delete_vm
"""
def setUp(self):
self.mock_task = MagicMock()
self.mock_vm_destroy_task = MagicMock(return_value=self.mock_task)
self.mock_vm_ref = MagicMock(Destroy_Task=self.mock_vm_destroy_task)
def test_destroy_vm_task_call(self):
salt.utils.vmware.delete_vm(self.mock_vm_ref)
self.mock_vm_destroy_task.assert_called_once()
def test_destroy_vm_raise_vim_fault(self):
exception = vim.fault.VimFault()
exception.msg = "vim.fault.VimFault"
self.mock_vm_ref.Destroy_Task = MagicMock(side_effect=exception)
with self.assertRaises(VMwareApiError) as exc:
salt.utils.vmware.delete_vm(self.mock_vm_ref)
self.assertEqual(exc.exception.strerror, "vim.fault.VimFault")
def test_destroy_vm_raise_runtime_fault(self):
exception = vmodl.RuntimeFault()
exception.msg = "vmodl.RuntimeFault"
self.mock_vm_ref.Destroy_Task = MagicMock(side_effect=exception)
with self.assertRaises(VMwareRuntimeError) as exc:
salt.utils.vmware.delete_vm(self.mock_vm_ref)
self.assertEqual(exc.exception.strerror, "vmodl.RuntimeFault")
def test_destroy_vm_wait_for_task(self):
mock_wait_for_task = MagicMock()
with patch(
"salt.utils.vmware.get_managed_object_name", MagicMock(return_value="my_vm")
):
with patch("salt.utils.vmware.wait_for_task", mock_wait_for_task):
salt.utils.vmware.delete_vm(self.mock_vm_ref)
mock_wait_for_task.assert_called_once_with(
self.mock_task, "my_vm", "Destroy Task"
)
@skipIf(not HAS_PYVMOMI, "The 'pyvmomi' library is missing")
@patch("salt.utils.vmware.get_managed_object_name", MagicMock())
class UnregisterVirtualMachineTestCase(TestCase):
"""
Tests for salt.utils.vmware.unregister_vm
"""
def setUp(self):
self.mock_vm_unregister = MagicMock()
self.mock_vm_ref = MagicMock(UnregisterVM=self.mock_vm_unregister)
def test_unregister_vm_task_call(self):
salt.utils.vmware.unregister_vm(self.mock_vm_ref)
self.mock_vm_unregister.assert_called_once()
def test_unregister_vm_raise_vim_fault(self):
exception = vim.fault.VimFault()
exception.msg = "vim.fault.VimFault"
self.mock_vm_ref.UnregisterVM = MagicMock(side_effect=exception)
with self.assertRaises(VMwareApiError) as exc:
salt.utils.vmware.unregister_vm(self.mock_vm_ref)
self.assertEqual(exc.exception.strerror, "vim.fault.VimFault")
def test_unregister_vm_raise_runtime_fault(self):
exception = vmodl.RuntimeFault()
exception.msg = "vmodl.RuntimeFault"
self.mock_vm_ref.UnregisterVM = MagicMock(side_effect=exception)
with self.assertRaises(VMwareRuntimeError) as exc:
salt.utils.vmware.unregister_vm(self.mock_vm_ref)
self.assertEqual(exc.exception.strerror, "vmodl.RuntimeFault")
# wildfire/deps/__init__.py (speedcell4/wildfire, MIT license)
from .conll import *
# tests/project/test_config.py (ashemedai/hatch, MIT license)
import pytest
from hatch.plugin.manager import PluginManager
from hatch.project.config import ProjectConfig
from hatch.project.env import RESERVED_OPTIONS
from hatch.utils.structures import EnvVars
from hatch.version.scheme.standard import StandardScheme
from hatchling.version.source.regex import RegexSource
ARRAY_OPTIONS = [o for o, t in RESERVED_OPTIONS.items() if t is list]
BOOLEAN_OPTIONS = [o for o, t in RESERVED_OPTIONS.items() if t is bool]
MAPPING_OPTIONS = [o for o, t in RESERVED_OPTIONS.items() if t is dict]
STRING_OPTIONS = [o for o, t in RESERVED_OPTIONS.items() if t is str and o != 'matrix-name-format']
class TestEnv:
def test_not_table(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.env` must be a table'):
_ = ProjectConfig(isolation, {'env': 9000}).env
def test_default(self, isolation):
project_config = ProjectConfig(isolation, {})
assert project_config.env == project_config.env == {}
class TestEnvCollectors:
def test_not_table(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.env.collectors` must be a table'):
_ = ProjectConfig(isolation, {'env': {'collectors': 9000}}).env_collectors
def test_collector_not_table(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.env.collectors.foo` must be a table'):
_ = ProjectConfig(isolation, {'env': {'collectors': {'foo': 9000}}}).env_collectors
def test_default(self, isolation):
project_config = ProjectConfig(isolation, {})
assert project_config.env_collectors == project_config.env_collectors == {'default': {}}
def test_defined(self, isolation):
project_config = ProjectConfig(isolation, {'env': {'collectors': {'foo': {'bar': {'baz': 9000}}}}})
assert project_config.env_collectors == {'default': {}, 'foo': {'bar': {'baz': 9000}}}
assert list(project_config.env_collectors) == ['default', 'foo']
class TestEnvs:
def test_not_table(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.envs` must be a table'):
_ = ProjectConfig(isolation, {'envs': 9000}, PluginManager()).envs
def test_config_not_table(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.envs.foo` must be a table'):
_ = ProjectConfig(isolation, {'envs': {'foo': 9000}}, PluginManager()).envs
def test_unknown_collector(self, isolation):
with pytest.raises(ValueError, match='Unknown environment collector: foo'):
_ = ProjectConfig(isolation, {'env': {'collectors': {'foo': {}}}}, PluginManager()).envs
def test_unknown_template(self, isolation):
with pytest.raises(
ValueError, match='Field `tool.hatch.envs.foo.template` refers to an unknown environment `bar`'
):
_ = ProjectConfig(isolation, {'envs': {'foo': {'template': 'bar'}}}, PluginManager()).envs
def test_default_undefined(self, isolation):
project_config = ProjectConfig(isolation, {}, PluginManager())
assert project_config.envs == project_config.envs == {'default': {'type': 'virtual'}}
assert project_config.matrices == project_config.matrices == {}
def test_default_partially_defined(self, isolation):
env_config = {'default': {'option': True}}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
assert project_config.envs == {'default': {'option': True, 'type': 'virtual'}}
def test_default_defined(self, isolation):
env_config = {'default': {'type': 'foo'}}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
assert project_config.envs == {'default': {'type': 'foo'}}
def test_basic(self, isolation):
env_config = {'foo': {'option': True}}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
assert project_config.envs == {'default': {'type': 'virtual'}, 'foo': {'option': True, 'type': 'virtual'}}
def test_basic_override(self, isolation):
env_config = {'foo': {'type': 'baz'}}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
assert project_config.envs == {'default': {'type': 'virtual'}, 'foo': {'type': 'baz'}}
def test_multiple_inheritance(self, isolation):
env_config = {
'foo': {'option1': 'foo'},
'bar': {'template': 'foo', 'option2': 'bar'},
'baz': {'template': 'bar', 'option3': 'baz'},
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
assert project_config.envs == {
'default': {'type': 'virtual'},
'foo': {'type': 'virtual', 'option1': 'foo'},
'bar': {'type': 'virtual', 'option1': 'foo', 'option2': 'bar'},
'baz': {'type': 'virtual', 'option1': 'foo', 'option2': 'bar', 'option3': 'baz'},
}
def test_circular_inheritance(self, isolation):
with pytest.raises(
ValueError, match='Circular inheritance detected for field `tool.hatch.envs.*.template`: foo -> bar -> foo'
):
_ = ProjectConfig(
isolation, {'envs': {'foo': {'template': 'bar'}, 'bar': {'template': 'foo'}}}, PluginManager()
).envs
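The cycle detection exercised here can be sketched by walking each environment's `template` chain and tracking the names already visited. This is a hypothetical helper for illustration, not hatch's internal code:

```python
def template_chain(envs, name):
    """Return the template chain for an environment, raising on cycles.

    Every environment implicitly inherits from `default` unless it names
    another template; `default` itself terminates the chain.
    """
    chain = [name]
    current = name
    while current != "default":
        template = envs.get(current, {}).get("template", "default")
        if template in chain:
            cycle = " -> ".join(chain + [template])
            raise ValueError(f"Circular inheritance detected: {cycle}")
        chain.append(template)
        current = template
    return chain
```

For `{'foo': {'template': 'bar'}, 'bar': {'template': 'foo'}}` the walk revisits `foo` and reports the `foo -> bar -> foo` cycle, matching the error message asserted above.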
def test_scripts_inheritance(self, isolation):
env_config = {
'default': {'scripts': {'cmd1': 'bar', 'cmd2': 'baz'}},
'foo': {'scripts': {'cmd1': 'foo'}},
'bar': {'template': 'foo', 'scripts': {'cmd3': 'bar'}},
'baz': {},
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
assert project_config.envs == {
'default': {'type': 'virtual', 'scripts': {'cmd1': 'bar', 'cmd2': 'baz'}},
'foo': {'type': 'virtual', 'scripts': {'cmd1': 'foo', 'cmd2': 'baz'}},
'bar': {'type': 'virtual', 'scripts': {'cmd1': 'foo', 'cmd2': 'baz', 'cmd3': 'bar'}},
'baz': {'type': 'virtual', 'scripts': {'cmd1': 'bar', 'cmd2': 'baz'}},
}
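The merge semantics asserted above — each environment inherits from its `template` (implicitly `default`), and table-valued options such as `scripts` merge key-by-key with the child winning — can be sketched roughly as follows. This is a simplified stand-in; cycle detection and the implicit `type` default are omitted:

```python
def resolve_env(envs, name):
    """Resolve one environment's config onto its template chain (child wins)."""
    config = dict(envs.get(name, {}))
    template = config.pop("template", "default" if name != "default" else None)
    if template is None:
        return config
    merged = resolve_env(envs, template)
    for key, value in config.items():
        # Table-valued options such as `scripts` merge key-by-key;
        # scalar options are overridden wholesale by the child.
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = {**merged[key], **value}
        else:
            merged[key] = value
    return merged
```

Run against the `env_config` used in `test_scripts_inheritance`, this reproduces the merged `scripts` tables the assertions expect.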
def test_matrices_not_array(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.envs.foo.matrix` must be an array'):
_ = ProjectConfig(isolation, {'envs': {'foo': {'matrix': 9000}}}, PluginManager()).envs
def test_matrix_not_table(self, isolation):
with pytest.raises(TypeError, match='Entry #1 in field `tool.hatch.envs.foo.matrix` must be a table'):
_ = ProjectConfig(isolation, {'envs': {'foo': {'matrix': [9000]}}}, PluginManager()).envs
def test_matrix_empty(self, isolation):
with pytest.raises(ValueError, match='Matrix #1 in field `tool.hatch.envs.foo.matrix` cannot be empty'):
_ = ProjectConfig(isolation, {'envs': {'foo': {'matrix': [{}]}}}, PluginManager()).envs
def test_matrix_variable_empty_string(self, isolation):
with pytest.raises(
ValueError, match='Variable #1 in matrix #1 in field `tool.hatch.envs.foo.matrix` cannot be an empty string'
):
_ = ProjectConfig(isolation, {'envs': {'foo': {'matrix': [{'': []}]}}}, PluginManager()).envs
def test_matrix_variable_not_array(self, isolation):
with pytest.raises(
TypeError, match='Variable `bar` in matrix #1 in field `tool.hatch.envs.foo.matrix` must be an array'
):
_ = ProjectConfig(isolation, {'envs': {'foo': {'matrix': [{'bar': 9000}]}}}, PluginManager()).envs
def test_matrix_variable_array_empty(self, isolation):
with pytest.raises(
ValueError, match='Variable `bar` in matrix #1 in field `tool.hatch.envs.foo.matrix` cannot be empty'
):
_ = ProjectConfig(isolation, {'envs': {'foo': {'matrix': [{'bar': []}]}}}, PluginManager()).envs
def test_matrix_variable_entry_not_string(self, isolation):
with pytest.raises(
TypeError,
match='Value #1 of variable `bar` in matrix #1 in field `tool.hatch.envs.foo.matrix` must be a string',
):
_ = ProjectConfig(isolation, {'envs': {'foo': {'matrix': [{'bar': [9000]}]}}}, PluginManager()).envs
def test_matrix_variable_entry_empty_string(self, isolation):
with pytest.raises(
ValueError,
match=(
'Value #1 of variable `bar` in matrix #1 in field `tool.hatch.envs.foo.matrix` '
'cannot be an empty string'
),
):
_ = ProjectConfig(isolation, {'envs': {'foo': {'matrix': [{'bar': ['']}]}}}, PluginManager()).envs
def test_matrix_variable_entry_duplicate(self, isolation):
with pytest.raises(
ValueError,
match='Value #2 of variable `bar` in matrix #1 in field `tool.hatch.envs.foo.matrix` is a duplicate',
):
_ = ProjectConfig(isolation, {'envs': {'foo': {'matrix': [{'bar': ['1', '1']}]}}}, PluginManager()).envs
def test_matrix_multiple_python_variables(self, isolation):
with pytest.raises(
ValueError,
match='Matrix #1 in field `tool.hatch.envs.foo.matrix` cannot contain both `py` and `python` variables',
):
_ = ProjectConfig(
isolation,
{'envs': {'foo': {'matrix': [{'py': ['39', '310'], 'python': ['39', '311']}]}}},
PluginManager(),
).envs
def test_matrix_name_format_not_string(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.envs.foo.matrix-name-format` must be a string'):
_ = ProjectConfig(isolation, {'envs': {'foo': {'matrix-name-format': 9000}}}, PluginManager()).envs
def test_matrix_name_format_invalid(self, isolation):
with pytest.raises(
ValueError,
match='Field `tool.hatch.envs.foo.matrix-name-format` must contain at least the `{value}` placeholder',
):
_ = ProjectConfig(isolation, {'envs': {'foo': {'matrix-name-format': 'bar'}}}, PluginManager()).envs
def test_overrides_not_table(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.envs.foo.overrides` must be a table'):
_ = ProjectConfig(isolation, {'envs': {'foo': {'overrides': 9000}}}, PluginManager()).envs
def test_overrides_platform_not_table(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.envs.foo.overrides.platform` must be a table'):
_ = ProjectConfig(isolation, {'envs': {'foo': {'overrides': {'platform': 9000}}}}, PluginManager()).envs
def test_overrides_env_not_table(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.envs.foo.overrides.env` must be a table'):
_ = ProjectConfig(isolation, {'envs': {'foo': {'overrides': {'env': 9000}}}}, PluginManager()).envs
def test_overrides_matrix_not_table(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.envs.foo.overrides.matrix` must be a table'):
_ = ProjectConfig(
isolation,
{'envs': {'foo': {'matrix': [{'version': ['9000']}], 'overrides': {'matrix': 9000}}}},
PluginManager(),
).envs
def test_overrides_platform_entry_not_table(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.envs.foo.overrides.platform.bar` must be a table'):
_ = ProjectConfig(
isolation, {'envs': {'foo': {'overrides': {'platform': {'bar': 9000}}}}}, PluginManager()
).envs
def test_overrides_env_entry_not_table(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.envs.foo.overrides.env.bar` must be a table'):
_ = ProjectConfig(isolation, {'envs': {'foo': {'overrides': {'env': {'bar': 9000}}}}}, PluginManager()).envs
def test_overrides_matrix_entry_not_table(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.envs.foo.overrides.matrix.bar` must be a table'):
_ = ProjectConfig(
isolation,
{'envs': {'foo': {'matrix': [{'version': ['9000']}], 'overrides': {'matrix': {'bar': 9000}}}}},
PluginManager(),
).envs
def test_matrix_simple_no_python(self, isolation):
env_config = {'foo': {'option': True, 'matrix': [{'version': ['9000', '3.14']}]}}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', 'option': True},
'foo.3.14': {'type': 'virtual', 'option': True},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]
def test_matrix_simple_no_python_custom_name_format(self, isolation):
env_config = {
'foo': {
'option': True,
'matrix-name-format': '{variable}_{value}',
'matrix': [{'version': ['9000', '3.14']}],
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.version_9000': {'type': 'virtual', 'option': True},
'foo.version_3.14': {'type': 'virtual', 'option': True},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]
@pytest.mark.parametrize('indicator', ['py', 'python'])
def test_matrix_simple_only_python(self, isolation, indicator):
env_config = {'foo': {'option': True, 'matrix': [{indicator: ['39', '310']}]}}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.py39': {'type': 'virtual', 'option': True, 'python': '39'},
'foo.py310': {'type': 'virtual', 'option': True, 'python': '310'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]
@pytest.mark.parametrize('indicator', ['py', 'python'])
def test_matrix_simple(self, isolation, indicator):
env_config = {'foo': {'option': True, 'matrix': [{'version': ['9000', '3.14'], indicator: ['39', '310']}]}}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.py39-9000': {'type': 'virtual', 'option': True, 'python': '39'},
'foo.py39-3.14': {'type': 'virtual', 'option': True, 'python': '39'},
'foo.py310-9000': {'type': 'virtual', 'option': True, 'python': '310'},
'foo.py310-3.14': {'type': 'virtual', 'option': True, 'python': '310'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]
@pytest.mark.parametrize('indicator', ['py', 'python'])
def test_matrix_simple_custom_name_format(self, isolation, indicator):
env_config = {
'foo': {
'option': True,
'matrix-name-format': '{variable}_{value}',
'matrix': [{'version': ['9000', '3.14'], indicator: ['39', '310']}],
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.py39-version_9000': {'type': 'virtual', 'option': True, 'python': '39'},
'foo.py39-version_3.14': {'type': 'virtual', 'option': True, 'python': '39'},
'foo.py310-version_9000': {'type': 'virtual', 'option': True, 'python': '310'},
'foo.py310-version_3.14': {'type': 'virtual', 'option': True, 'python': '310'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]
def test_matrix_multiple_non_python(self, isolation):
env_config = {
'foo': {
'option': True,
'matrix': [{'version': ['9000', '3.14'], 'py': ['39', '310'], 'foo': ['baz', 'bar']}],
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.py39-9000-baz': {'type': 'virtual', 'option': True, 'python': '39'},
'foo.py39-9000-bar': {'type': 'virtual', 'option': True, 'python': '39'},
'foo.py39-3.14-baz': {'type': 'virtual', 'option': True, 'python': '39'},
'foo.py39-3.14-bar': {'type': 'virtual', 'option': True, 'python': '39'},
'foo.py310-9000-baz': {'type': 'virtual', 'option': True, 'python': '310'},
'foo.py310-9000-bar': {'type': 'virtual', 'option': True, 'python': '310'},
'foo.py310-3.14-baz': {'type': 'virtual', 'option': True, 'python': '310'},
'foo.py310-3.14-bar': {'type': 'virtual', 'option': True, 'python': '310'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]
def test_matrix_series(self, isolation):
env_config = {
'foo': {
'option': True,
'matrix': [
{'version': ['9000', '3.14'], 'py': ['39', '310'], 'foo': ['baz', 'bar']},
{'version': ['9000'], 'py': ['310'], 'baz': ['foo', 'test'], 'bar': ['foobar']},
],
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.py39-9000-baz': {'type': 'virtual', 'option': True, 'python': '39'},
'foo.py39-9000-bar': {'type': 'virtual', 'option': True, 'python': '39'},
'foo.py39-3.14-baz': {'type': 'virtual', 'option': True, 'python': '39'},
'foo.py39-3.14-bar': {'type': 'virtual', 'option': True, 'python': '39'},
'foo.py310-9000-baz': {'type': 'virtual', 'option': True, 'python': '310'},
'foo.py310-9000-bar': {'type': 'virtual', 'option': True, 'python': '310'},
'foo.py310-3.14-baz': {'type': 'virtual', 'option': True, 'python': '310'},
'foo.py310-3.14-bar': {'type': 'virtual', 'option': True, 'python': '310'},
'foo.py310-9000-foo-foobar': {'type': 'virtual', 'option': True, 'python': '310'},
'foo.py310-9000-test-foobar': {'type': 'virtual', 'option': True, 'python': '310'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]
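The name generation these matrix tests verify — the Python variable becomes the leading component (bare versions get a `py` prefix) and the remaining variable values are joined with dashes, Python varying slowest — can be approximated with `itertools.product`. This is a simplified stand-in: series concatenation, `matrix-name-format`, and the `default` environment's unprefixed naming are not modeled:

```python
import itertools


def expand_matrix(env_name, matrix):
    """Approximate the generated environment names for one matrix."""
    variables = dict(matrix)
    # `py` and `python` are interchangeable; the tests forbid using both.
    pythons = variables.pop("py", None) or variables.pop("python", None) or [None]
    names = []
    for python in pythons:
        for combo in itertools.product(*variables.values()):
            parts = list(combo)
            if python is not None:
                # Bare versions like "39" become "py39"; names such as
                # "python3.9" or "pypy3" are kept verbatim.
                parts.insert(0, python if not python[0].isdigit() else f"py{python}")
            names.append(f"{env_name}.{'-'.join(parts)}")
    return names
```

For `{'version': ['9000', '3.14'], 'py': ['39', '310']}` under env `foo` this yields `foo.py39-9000`, `foo.py39-3.14`, `foo.py310-9000`, `foo.py310-3.14`, matching `test_matrix_simple`.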
def test_matrices_not_inherited(self, isolation):
env_config = {
'foo': {'option1': True, 'matrix': [{'py': ['39']}]},
'bar': {'template': 'foo', 'option2': False},
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.py39': {'type': 'virtual', 'option1': True, 'python': '39'},
'bar': {'type': 'virtual', 'option1': True, 'option2': False},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:2]
def test_matrix_default_naming(self, isolation):
env_config = {'default': {'option': True, 'matrix': [{'version': ['9000', '3.14']}]}}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'9000': {'type': 'virtual', 'option': True},
'3.14': {'type': 'virtual', 'option': True},
}
assert project_config.envs == expected_envs
assert project_config.matrices['default'] == list(expected_envs)
def test_matrix_pypy_naming(self, isolation):
env_config = {'foo': {'option': True, 'matrix': [{'py': ['python3.9', 'pypy3']}]}}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.python3.9': {'type': 'virtual', 'option': True, 'python': 'python3.9'},
'foo.pypy3': {'type': 'virtual', 'option': True, 'python': 'pypy3'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]
@pytest.mark.parametrize('option', MAPPING_OPTIONS)
def test_overrides_matrix_mapping_invalid_type(self, isolation, option):
with pytest.raises(
TypeError,
match=f'Field `tool.hatch.envs.foo.overrides.matrix.version.{option}` must be a string or an array',
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {'matrix': [{'version': ['9000']}], 'overrides': {'matrix': {'version': {option: 9000}}}}
}
},
PluginManager(),
).envs
@pytest.mark.parametrize('option', MAPPING_OPTIONS)
def test_overrides_matrix_mapping_array_entry_invalid_type(self, isolation, option):
with pytest.raises(
TypeError,
match=(
f'Entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must be a string or an inline table'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {
'matrix': [{'version': ['9000']}],
'overrides': {'matrix': {'version': {option: [9000]}}},
}
}
},
PluginManager(),
).envs
@pytest.mark.parametrize('option', MAPPING_OPTIONS)
def test_overrides_matrix_mapping_table_entry_no_key(self, isolation, option):
with pytest.raises(
ValueError,
match=(
f'Entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must have an option named `key`'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {'matrix': [{'version': ['9000']}], 'overrides': {'matrix': {'version': {option: [{}]}}}}
}
},
PluginManager(),
).envs
@pytest.mark.parametrize('option', MAPPING_OPTIONS)
def test_overrides_matrix_mapping_table_entry_key_not_string(self, isolation, option):
with pytest.raises(
TypeError,
match=(
f'Option `key` in entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must be a string'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {
'matrix': [{'version': ['9000']}],
'overrides': {'matrix': {'version': {option: [{'key': 9000}]}}},
}
}
},
PluginManager(),
).envs
@pytest.mark.parametrize('option', MAPPING_OPTIONS)
def test_overrides_matrix_mapping_table_entry_key_empty_string(self, isolation, option):
with pytest.raises(
ValueError,
match=(
f'Option `key` in entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'cannot be an empty string'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {
'matrix': [{'version': ['9000']}],
'overrides': {'matrix': {'version': {option: [{'key': ''}]}}},
}
}
},
PluginManager(),
).envs
@pytest.mark.parametrize('option', MAPPING_OPTIONS)
def test_overrides_matrix_mapping_table_entry_value_not_string(self, isolation, option):
with pytest.raises(
TypeError,
match=(
f'Option `value` in entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must be a string'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {
'matrix': [{'version': ['9000']}],
'overrides': {'matrix': {'version': {option: [{'key': 'foo', 'value': 9000}]}}},
}
}
},
PluginManager(),
).envs
@pytest.mark.parametrize('option', MAPPING_OPTIONS)
def test_overrides_matrix_mapping_table_entry_if_not_array(self, isolation, option):
with pytest.raises(
TypeError,
match=(
f'Option `if` in entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must be an array'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {
'matrix': [{'version': ['9000']}],
'overrides': {
'matrix': {'version': {option: [{'key': 'foo', 'value': 'bar', 'if': 9000}]}}
},
}
}
},
PluginManager(),
).envs
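The mapping-option override checks exercised above follow one schema: the value must be a string or an array whose entries are strings or inline tables carrying `key`, optional `value`, and optional `if` options. A hypothetical validator mirroring those error messages (not hatch's actual code):

```python
def validate_mapping_override(field, data):
    """Validate a mapping-style override value against the rules tested above."""
    if isinstance(data, str):
        return
    if not isinstance(data, list):
        raise TypeError(f"Field `{field}` must be a string or an array")
    for i, entry in enumerate(data, 1):
        if isinstance(entry, str):
            continue
        if not isinstance(entry, dict):
            raise TypeError(f"Entry #{i} in field `{field}` must be a string or an inline table")
        if "key" not in entry:
            raise ValueError(f"Entry #{i} in field `{field}` must have an option named `key`")
        if not isinstance(entry["key"], str):
            raise TypeError(f"Option `key` in entry #{i} in field `{field}` must be a string")
        if not entry["key"]:
            raise ValueError(f"Option `key` in entry #{i} in field `{field}` cannot be an empty string")
        if "value" in entry and not isinstance(entry["value"], str):
            raise TypeError(f"Option `value` in entry #{i} in field `{field}` must be a string")
        if "if" in entry and not isinstance(entry["if"], list):
            raise TypeError(f"Option `if` in entry #{i} in field `{field}` must be an array")
```

Each negative test above feeds one malformed shape through config parsing and matches the corresponding message with `pytest.raises`.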
@pytest.mark.parametrize('option', ARRAY_OPTIONS)
def test_overrides_matrix_array_invalid_type(self, isolation, option):
with pytest.raises(
TypeError, match=f'Field `tool.hatch.envs.foo.overrides.matrix.version.{option}` must be an array'
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {'matrix': [{'version': ['9000']}], 'overrides': {'matrix': {'version': {option: 9000}}}}
}
},
PluginManager(),
).envs
@pytest.mark.parametrize('option', ARRAY_OPTIONS)
def test_overrides_matrix_array_table_entry_no_value(self, isolation, option):
with pytest.raises(
ValueError,
match=(
f'Entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must have an option named `value`'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {'matrix': [{'version': ['9000']}], 'overrides': {'matrix': {'version': {option: [{}]}}}}
}
},
PluginManager(),
).envs
@pytest.mark.parametrize('option', ARRAY_OPTIONS)
def test_overrides_matrix_array_table_entry_value_not_string(self, isolation, option):
with pytest.raises(
TypeError,
match=(
f'Option `value` in entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must be a string'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {
'matrix': [{'version': ['9000']}],
'overrides': {'matrix': {'version': {option: [{'value': 9000}]}}},
}
}
},
PluginManager(),
).envs
@pytest.mark.parametrize('option', ARRAY_OPTIONS)
def test_overrides_matrix_array_table_entry_value_empty_string(self, isolation, option):
with pytest.raises(
ValueError,
match=(
f'Option `value` in entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'cannot be an empty string'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {
'matrix': [{'version': ['9000']}],
'overrides': {'matrix': {'version': {option: [{'value': ''}]}}},
}
}
},
PluginManager(),
).envs
@pytest.mark.parametrize('option', ARRAY_OPTIONS)
def test_overrides_matrix_array_table_entry_if_not_array(self, isolation, option):
with pytest.raises(
TypeError,
match=(
f'Option `if` in entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must be an array'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {
'matrix': [{'version': ['9000']}],
'overrides': {'matrix': {'version': {option: [{'value': 'foo', 'if': 9000}]}}},
}
}
},
PluginManager(),
).envs
@pytest.mark.parametrize('option', ARRAY_OPTIONS)
def test_overrides_matrix_array_entry_invalid_type(self, isolation, option):
with pytest.raises(
TypeError,
match=(
f'Entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must be a string or an inline table'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {
'matrix': [{'version': ['9000']}],
'overrides': {'matrix': {'version': {option: [9000]}}},
}
}
},
PluginManager(),
).envs
@pytest.mark.parametrize('option', STRING_OPTIONS)
def test_overrides_matrix_string_invalid_type(self, isolation, option):
with pytest.raises(
TypeError,
match=(
f'Field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must be a string, inline table, or an array'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {'matrix': [{'version': ['9000']}], 'overrides': {'matrix': {'version': {option: 9000}}}}
}
},
PluginManager(),
).envs
@pytest.mark.parametrize('option', STRING_OPTIONS)
def test_overrides_matrix_string_table_no_value(self, isolation, option):
with pytest.raises(
ValueError,
match=f'Field `tool.hatch.envs.foo.overrides.matrix.version.{option}` must have an option named `value`',
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {'matrix': [{'version': ['9000']}], 'overrides': {'matrix': {'version': {option: {}}}}}
}
},
PluginManager(),
).envs
@pytest.mark.parametrize('option', STRING_OPTIONS)
def test_overrides_matrix_string_table_value_not_string(self, isolation, option):
with pytest.raises(
TypeError,
match=f'Option `value` in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` must be a string',
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {
'matrix': [{'version': ['9000']}],
'overrides': {'matrix': {'version': {option: {'value': 9000}}}},
}
}
},
PluginManager(),
).envs
@pytest.mark.parametrize('option', STRING_OPTIONS)
def test_overrides_matrix_string_array_entry_invalid_type(self, isolation, option):
with pytest.raises(
TypeError,
match=(
f'Entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must be a string or an inline table'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {
'matrix': [{'version': ['9000']}],
'overrides': {'matrix': {'version': {option: [9000]}}},
}
}
},
PluginManager(),
).envs
@pytest.mark.parametrize('option', STRING_OPTIONS)
def test_overrides_matrix_string_array_table_no_value(self, isolation, option):
with pytest.raises(
ValueError,
match=(
f'Entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must have an option named `value`'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {'matrix': [{'version': ['9000']}], 'overrides': {'matrix': {'version': {option: [{}]}}}}
}
},
PluginManager(),
).envs

@pytest.mark.parametrize('option', STRING_OPTIONS)
def test_overrides_matrix_string_array_table_value_not_string(self, isolation, option):
with pytest.raises(
TypeError,
match=(
f'Option `value` in entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must be a string'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {
'matrix': [{'version': ['9000']}],
'overrides': {'matrix': {'version': {option: [{'value': 9000}]}}},
}
}
},
PluginManager(),
).envs

@pytest.mark.parametrize('option', STRING_OPTIONS)
def test_overrides_matrix_string_array_table_if_not_array(self, isolation, option):
with pytest.raises(
TypeError,
match=(
f'Option `if` in entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must be an array'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {
'matrix': [{'version': ['9000']}],
'overrides': {'matrix': {'version': {option: [{'value': 'foo', 'if': 9000}]}}},
}
}
},
PluginManager(),
).envs

@pytest.mark.parametrize('option', BOOLEAN_OPTIONS)
def test_overrides_matrix_boolean_invalid_type(self, isolation, option):
with pytest.raises(
TypeError,
match=(
f'Field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must be a boolean, inline table, or an array'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {'matrix': [{'version': ['9000']}], 'overrides': {'matrix': {'version': {option: 9000}}}}
}
},
PluginManager(),
).envs

@pytest.mark.parametrize('option', BOOLEAN_OPTIONS)
def test_overrides_matrix_boolean_table_no_value(self, isolation, option):
with pytest.raises(
ValueError,
match=f'Field `tool.hatch.envs.foo.overrides.matrix.version.{option}` must have an option named `value`',
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {'matrix': [{'version': ['9000']}], 'overrides': {'matrix': {'version': {option: {}}}}}
}
},
PluginManager(),
).envs

@pytest.mark.parametrize('option', BOOLEAN_OPTIONS)
def test_overrides_matrix_boolean_table_value_not_boolean(self, isolation, option):
with pytest.raises(
TypeError,
match=f'Option `value` in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` must be a boolean',
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {
'matrix': [{'version': ['9000']}],
'overrides': {'matrix': {'version': {option: {'value': 9000}}}},
}
}
},
PluginManager(),
).envs

@pytest.mark.parametrize('option', BOOLEAN_OPTIONS)
def test_overrides_matrix_boolean_array_entry_invalid_type(self, isolation, option):
with pytest.raises(
TypeError,
match=(
f'Entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must be a boolean or an inline table'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {
'matrix': [{'version': ['9000']}],
'overrides': {'matrix': {'version': {option: [9000]}}},
}
}
},
PluginManager(),
).envs

@pytest.mark.parametrize('option', BOOLEAN_OPTIONS)
def test_overrides_matrix_boolean_array_table_no_value(self, isolation, option):
with pytest.raises(
ValueError,
match=(
f'Entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must have an option named `value`'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {'matrix': [{'version': ['9000']}], 'overrides': {'matrix': {'version': {option: [{}]}}}}
}
},
PluginManager(),
).envs

@pytest.mark.parametrize('option', BOOLEAN_OPTIONS)
def test_overrides_matrix_boolean_array_table_value_not_boolean(self, isolation, option):
with pytest.raises(
TypeError,
match=(
f'Option `value` in entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must be a boolean'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {
'matrix': [{'version': ['9000']}],
'overrides': {'matrix': {'version': {option: [{'value': 9000}]}}},
}
}
},
PluginManager(),
).envs

@pytest.mark.parametrize('option', BOOLEAN_OPTIONS)
def test_overrides_matrix_boolean_array_table_if_not_array(self, isolation, option):
with pytest.raises(
TypeError,
match=(
f'Option `if` in entry #1 in field `tool.hatch.envs.foo.overrides.matrix.version.{option}` '
f'must be an array'
),
):
_ = ProjectConfig(
isolation,
{
'envs': {
'foo': {
'matrix': [{'version': ['9000']}],
'overrides': {'matrix': {'version': {option: [{'value': True, 'if': 9000}]}}},
}
}
},
PluginManager(),
).envs

@pytest.mark.parametrize('option', MAPPING_OPTIONS)
def test_overrides_matrix_mapping_string_with_value(self, isolation, option):
env_config = {
'foo': {
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: 'FOO=ok'}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: {'FOO': 'ok'}},
'foo.bar': {'type': 'virtual'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', MAPPING_OPTIONS)
def test_overrides_matrix_mapping_string_without_value(self, isolation, option):
env_config = {
'foo': {
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: 'FOO'}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: {'FOO': '9000'}},
'foo.bar': {'type': 'virtual'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', MAPPING_OPTIONS)
def test_overrides_matrix_mapping_string_override(self, isolation, option):
env_config = {
'foo': {
option: {'TEST': 'baz'},
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: 'TEST'}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: {'TEST': '9000'}},
'foo.bar': {'type': 'virtual', option: {'TEST': 'baz'}},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', MAPPING_OPTIONS)
def test_overrides_matrix_mapping_array_string_with_value(self, isolation, option):
env_config = {
'foo': {
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: ['FOO=ok']}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: {'FOO': 'ok'}},
'foo.bar': {'type': 'virtual'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', MAPPING_OPTIONS)
def test_overrides_matrix_mapping_array_string_without_value(self, isolation, option):
env_config = {
'foo': {
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: ['FOO']}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: {'FOO': '9000'}},
'foo.bar': {'type': 'virtual'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', MAPPING_OPTIONS)
def test_overrides_matrix_mapping_array_string_override(self, isolation, option):
env_config = {
'foo': {
option: {'TEST': 'baz'},
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: ['TEST']}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: {'TEST': '9000'}},
'foo.bar': {'type': 'virtual', option: {'TEST': 'baz'}},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', MAPPING_OPTIONS)
def test_overrides_matrix_mapping_array_table_key_with_value(self, isolation, option):
env_config = {
'foo': {
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: [{'key': 'FOO', 'value': 'ok'}]}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: {'FOO': 'ok'}},
'foo.bar': {'type': 'virtual'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', MAPPING_OPTIONS)
def test_overrides_matrix_mapping_array_table_key_without_value(self, isolation, option):
env_config = {
'foo': {
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: [{'key': 'FOO'}]}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: {'FOO': '9000'}},
'foo.bar': {'type': 'virtual'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', MAPPING_OPTIONS)
def test_overrides_matrix_mapping_array_table_override(self, isolation, option):
env_config = {
'foo': {
option: {'TEST': 'baz'},
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: [{'key': 'TEST'}]}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: {'TEST': '9000'}},
'foo.bar': {'type': 'virtual', option: {'TEST': 'baz'}},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', MAPPING_OPTIONS)
def test_overrides_matrix_mapping_array_table_conditional(self, isolation, option):
env_config = {
'foo': {
option: {'TEST': 'baz'},
'matrix': [{'version': ['9000', '42']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: [{'key': 'TEST', 'if': ['42']}]}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: {'TEST': 'baz'}},
'foo.42': {'type': 'virtual', option: {'TEST': '42'}},
'foo.bar': {'type': 'virtual', option: {'TEST': 'baz'}},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', MAPPING_OPTIONS)
def test_overrides_matrix_mapping_overwrite(self, isolation, option):
env_config = {
'foo': {
option: {'TEST': 'baz'},
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {f'set-{option}': ['FOO=bar', {'key': 'BAZ'}]}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: {'FOO': 'bar', 'BAZ': '9000'}},
'foo.bar': {'type': 'virtual', option: {'TEST': 'baz'}},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', ARRAY_OPTIONS)
def test_overrides_matrix_array_string(self, isolation, option):
env_config = {
'foo': {
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: ['run foo']}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: ['run foo']},
'foo.bar': {'type': 'virtual'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', ARRAY_OPTIONS)
def test_overrides_matrix_array_string_existing_append(self, isolation, option):
env_config = {
'foo': {
option: ['run baz'],
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: ['run foo']}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: ['run baz', 'run foo']},
'foo.bar': {'type': 'virtual', option: ['run baz']},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', ARRAY_OPTIONS)
def test_overrides_matrix_array_table(self, isolation, option):
env_config = {
'foo': {
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: [{'value': 'run foo'}]}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: ['run foo']},
'foo.bar': {'type': 'virtual'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', ARRAY_OPTIONS)
def test_overrides_matrix_array_table_existing_append(self, isolation, option):
env_config = {
'foo': {
option: ['run baz'],
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: [{'value': 'run foo'}]}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: ['run baz', 'run foo']},
'foo.bar': {'type': 'virtual', option: ['run baz']},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', ARRAY_OPTIONS)
def test_overrides_matrix_array_table_conditional(self, isolation, option):
env_config = {
'foo': {
option: ['run baz'],
'matrix': [{'version': ['9000', '42']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: [{'value': 'run foo', 'if': ['42']}]}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: ['run baz']},
'foo.42': {'type': 'virtual', option: ['run baz', 'run foo']},
'foo.bar': {'type': 'virtual', option: ['run baz']},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', ARRAY_OPTIONS)
def test_overrides_matrix_array_overwrite(self, isolation, option):
env_config = {
'foo': {
option: ['run baz'],
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {f'set-{option}': ['run foo', {'value': 'run bar'}]}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: ['run foo', 'run bar']},
'foo.bar': {'type': 'virtual', option: ['run baz']},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', STRING_OPTIONS)
def test_overrides_matrix_string_string_create(self, isolation, option):
env_config = {
'foo': {
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: 'baz'}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: 'baz'},
'foo.bar': {'type': 'virtual'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', STRING_OPTIONS)
def test_overrides_matrix_string_string_overwrite(self, isolation, option):
env_config = {
'foo': {
option: 'test',
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: 'baz'}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: 'baz'},
'foo.bar': {'type': 'virtual', option: 'test'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', STRING_OPTIONS)
def test_overrides_matrix_string_table_create(self, isolation, option):
env_config = {
'foo': {
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: {'value': 'baz'}}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: 'baz'},
'foo.bar': {'type': 'virtual'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', STRING_OPTIONS)
def test_overrides_matrix_string_table_override(self, isolation, option):
env_config = {
'foo': {
option: 'test',
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: {'value': 'baz'}}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: 'baz'},
'foo.bar': {'type': 'virtual', option: 'test'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', STRING_OPTIONS)
def test_overrides_matrix_string_table_conditional(self, isolation, option):
env_config = {
'foo': {
option: 'test',
'matrix': [{'version': ['9000', '42']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: {'value': 'baz', 'if': ['42']}}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: 'test'},
'foo.42': {'type': 'virtual', option: 'baz'},
'foo.bar': {'type': 'virtual', option: 'test'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', STRING_OPTIONS)
def test_overrides_matrix_string_array_table_create(self, isolation, option):
env_config = {
'foo': {
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: [{'value': 'baz'}]}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: 'baz'},
'foo.bar': {'type': 'virtual'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', STRING_OPTIONS)
def test_overrides_matrix_string_array_table_override(self, isolation, option):
env_config = {
'foo': {
option: 'test',
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: [{'value': 'baz'}]}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: 'baz'},
'foo.bar': {'type': 'virtual', option: 'test'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', STRING_OPTIONS)
def test_overrides_matrix_string_array_table_conditional(self, isolation, option):
env_config = {
'foo': {
option: 'test',
'matrix': [{'version': ['9000', '42']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: [{'value': 'baz', 'if': ['42']}]}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: 'test'},
'foo.42': {'type': 'virtual', option: 'baz'},
'foo.bar': {'type': 'virtual', option: 'test'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', STRING_OPTIONS)
def test_overrides_matrix_string_array_table_conditional_eager_string(self, isolation, option):
env_config = {
'foo': {
option: 'test',
'matrix': [{'version': ['9000', '42']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: ['baz', {'value': 'foo', 'if': ['42']}]}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: 'baz'},
'foo.42': {'type': 'virtual', option: 'baz'},
'foo.bar': {'type': 'virtual', option: 'test'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', STRING_OPTIONS)
def test_overrides_matrix_string_array_table_conditional_eager_table(self, isolation, option):
env_config = {
'foo': {
option: 'test',
'matrix': [{'version': ['9000', '42']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: [{'value': 'baz', 'if': ['42']}, 'foo']}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: 'foo'},
'foo.42': {'type': 'virtual', option: 'baz'},
'foo.bar': {'type': 'virtual', option: 'test'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', BOOLEAN_OPTIONS)
def test_overrides_matrix_boolean_boolean_create(self, isolation, option):
env_config = {
'foo': {
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: True}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: True},
'foo.bar': {'type': 'virtual'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', BOOLEAN_OPTIONS)
def test_overrides_matrix_boolean_boolean_overwrite(self, isolation, option):
env_config = {
'foo': {
option: False,
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: True}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: True},
'foo.bar': {'type': 'virtual', option: False},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', BOOLEAN_OPTIONS)
def test_overrides_matrix_boolean_table_create(self, isolation, option):
env_config = {
'foo': {
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: {'value': True}}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: True},
'foo.bar': {'type': 'virtual'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', BOOLEAN_OPTIONS)
def test_overrides_matrix_boolean_table_override(self, isolation, option):
env_config = {
'foo': {
option: False,
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: {'value': True}}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: True},
'foo.bar': {'type': 'virtual', option: False},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', BOOLEAN_OPTIONS)
def test_overrides_matrix_boolean_table_conditional(self, isolation, option):
env_config = {
'foo': {
option: False,
'matrix': [{'version': ['9000', '42']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: {'value': True, 'if': ['42']}}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: False},
'foo.42': {'type': 'virtual', option: True},
'foo.bar': {'type': 'virtual', option: False},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', BOOLEAN_OPTIONS)
def test_overrides_matrix_boolean_array_table_create(self, isolation, option):
env_config = {
'foo': {
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: [{'value': True}]}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: True},
'foo.bar': {'type': 'virtual'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', BOOLEAN_OPTIONS)
def test_overrides_matrix_boolean_array_table_override(self, isolation, option):
env_config = {
'foo': {
option: False,
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: [{'value': True}]}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: True},
'foo.bar': {'type': 'virtual', option: False},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', BOOLEAN_OPTIONS)
def test_overrides_matrix_boolean_array_table_conditional(self, isolation, option):
env_config = {
'foo': {
option: False,
'matrix': [{'version': ['9000', '42']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: [{'value': True, 'if': ['42']}]}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: False},
'foo.42': {'type': 'virtual', option: True},
'foo.bar': {'type': 'virtual', option: False},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', BOOLEAN_OPTIONS)
def test_overrides_matrix_boolean_array_table_conditional_eager_boolean(self, isolation, option):
env_config = {
'foo': {
option: False,
'matrix': [{'version': ['9000', '42']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: [True, {'value': False, 'if': ['42']}]}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: True},
'foo.42': {'type': 'virtual', option: True},
'foo.bar': {'type': 'virtual', option: False},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

@pytest.mark.parametrize('option', BOOLEAN_OPTIONS)
def test_overrides_matrix_boolean_array_table_conditional_eager_table(self, isolation, option):
env_config = {
'foo': {
option: False,
'matrix': [{'version': ['9000', '42']}, {'feature': ['bar']}],
'overrides': {'matrix': {'version': {option: [{'value': True, 'if': ['42']}, False]}}},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', option: False},
'foo.42': {'type': 'virtual', option: True},
'foo.bar': {'type': 'virtual', option: False},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]

# We assert full type coverage using matrix variable overrides; for the other override sources we just test one type
def test_overrides_platform_boolean_boolean_create(self, isolation, current_platform):
env_config = {
'foo': {
'overrides': {'platform': {'bar': {'dependencies': ['baz']}, current_platform: {'skip-install': True}}}
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo': {'type': 'virtual', 'skip-install': True},
}
assert project_config.envs == expected_envs

def test_overrides_platform_boolean_boolean_overwrite(self, isolation, current_platform):
env_config = {
'foo': {
'skip-install': True,
'overrides': {
'platform': {'bar': {'dependencies': ['baz']}, current_platform: {'skip-install': False}}
},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo': {'type': 'virtual', 'skip-install': False},
}
assert project_config.envs == expected_envs

def test_overrides_platform_boolean_table_create(self, isolation, current_platform):
env_config = {
'foo': {
'overrides': {
'platform': {
'bar': {'dependencies': ['baz']},
current_platform: {'skip-install': [{'value': True}]},
}
}
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo': {'type': 'virtual', 'skip-install': True},
}
assert project_config.envs == expected_envs

def test_overrides_platform_boolean_table_overwrite(self, isolation, current_platform):
env_config = {
'foo': {
'skip-install': True,
'overrides': {
'platform': {
'bar': {'dependencies': ['baz']},
current_platform: {'skip-install': [{'value': False}]},
}
},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo': {'type': 'virtual', 'skip-install': False},
}
assert project_config.envs == expected_envs

def test_overrides_env_boolean_boolean_create(self, isolation):
env_var_exists = 'OVERRIDES_ENV_FOO'
env_var_missing = 'OVERRIDES_ENV_BAR'
env_config = {
'foo': {
'overrides': {
'env': {env_var_missing: {'dependencies': ['baz']}, env_var_exists: {'skip-install': True}}
}
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo': {'type': 'virtual', 'skip-install': True},
}
with EnvVars({env_var_exists: 'any'}):
assert project_config.envs == expected_envs

def test_overrides_env_boolean_boolean_overwrite(self, isolation):
env_var_exists = 'OVERRIDES_ENV_FOO'
env_var_missing = 'OVERRIDES_ENV_BAR'
env_config = {
'foo': {
'skip-install': True,
'overrides': {
'env': {env_var_missing: {'dependencies': ['baz']}, env_var_exists: {'skip-install': False}}
},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo': {'type': 'virtual', 'skip-install': False},
}
with EnvVars({env_var_exists: 'any'}):
assert project_config.envs == expected_envs

def test_overrides_env_boolean_table_create(self, isolation):
env_var_exists = 'OVERRIDES_ENV_FOO'
env_var_missing = 'OVERRIDES_ENV_BAR'
env_config = {
'foo': {
'overrides': {
'env': {
env_var_missing: {'dependencies': ['baz']},
env_var_exists: {'skip-install': [{'value': True}]},
}
}
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo': {'type': 'virtual', 'skip-install': True},
}
with EnvVars({env_var_exists: 'any'}):
assert project_config.envs == expected_envs

def test_overrides_env_boolean_table_overwrite(self, isolation):
env_var_exists = 'OVERRIDES_ENV_FOO'
env_var_missing = 'OVERRIDES_ENV_BAR'
env_config = {
'foo': {
'skip-install': True,
'overrides': {
'env': {
env_var_missing: {'dependencies': ['baz']},
env_var_exists: {'skip-install': [{'value': False}]},
}
},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo': {'type': 'virtual', 'skip-install': False},
}
with EnvVars({env_var_exists: 'any'}):
assert project_config.envs == expected_envs
def test_overrides_env_boolean_conditional(self, isolation):
env_var_exists = 'OVERRIDES_ENV_FOO'
env_var_missing = 'OVERRIDES_ENV_BAR'
env_config = {
'foo': {
'overrides': {
'env': {
env_var_missing: {'dependencies': ['baz']},
env_var_exists: {'skip-install': [{'value': True, 'if': ['foo']}]},
}
}
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo': {'type': 'virtual', 'skip-install': True},
}
with EnvVars({env_var_exists: 'foo'}):
assert project_config.envs == expected_envs
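The conditional override form exercised by the tests above accepts either a plain value or a list of `{'value': ..., 'if': [...]}` tables. A minimal sketch of how one such entry might be resolved against the contextual value (the environment variable's contents here) — illustrative only, not Hatch's internals:

```python
def resolve_override(spec, context_value):
    # A plain value applies unconditionally; the table form
    # [{'value': v, 'if': [...]}] applies only when the contextual value
    # is listed under 'if' (or when 'if' is omitted entirely).
    if not isinstance(spec, list):
        return spec, True
    for entry in spec:
        conditions = entry.get('if')
        if conditions is None or context_value in conditions:
            return entry['value'], True
    return None, False
```

The second element of the returned tuple signals whether any entry matched, so a caller can leave the option unset when no condition applies.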
# Tests for source precedence
def test_overrides_matrix_precedence_over_platform(self, isolation, current_platform):
env_config = {
'foo': {
'skip-install': False,
'matrix': [{'version': ['9000', '42']}, {'feature': ['bar']}],
'overrides': {
'platform': {current_platform: {'skip-install': True}},
'matrix': {'version': {'skip-install': [{'value': False, 'if': ['42']}]}},
},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', 'skip-install': True},
'foo.42': {'type': 'virtual', 'skip-install': False},
'foo.bar': {'type': 'virtual', 'skip-install': True},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]
def test_overrides_matrix_precedence_over_env(self, isolation):
env_var = 'OVERRIDES_ENV_FOO'
env_config = {
'foo': {
'skip-install': False,
'matrix': [{'version': ['9000', '42']}, {'feature': ['bar']}],
'overrides': {
'env': {env_var: {'skip-install': True}},
'matrix': {'version': {'skip-install': [{'value': False, 'if': ['42']}]}},
},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', 'skip-install': True},
'foo.42': {'type': 'virtual', 'skip-install': False},
'foo.bar': {'type': 'virtual', 'skip-install': True},
}
with EnvVars({env_var: 'any'}):
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]
def test_overrides_env_precedence_over_platform(self, isolation, current_platform):
env_var = 'OVERRIDES_ENV_FOO'
env_config = {
'foo': {
'overrides': {
'platform': {current_platform: {'skip-install': True}},
'env': {env_var: {'skip-install': [{'value': False, 'if': ['foo']}]}},
},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo': {'type': 'virtual', 'skip-install': False},
}
with EnvVars({env_var: 'foo'}):
assert project_config.envs == expected_envs
# Test for options defined by environment plugins
def test_overrides_for_environment_plugins(self, isolation, current_platform):
env_var = 'OVERRIDES_ENV_FOO'
env_config = {
'foo': {
'matrix': [{'version': ['9000']}, {'feature': ['bar']}],
'overrides': {
'platform': {current_platform: {'foo': True}},
'env': {env_var: {'bar': [{'value': 'foobar', 'if': ['foo']}]}},
'matrix': {'version': {'baz': 'BAR=ok'}},
},
}
}
project_config = ProjectConfig(isolation, {'envs': env_config}, PluginManager())
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual'},
'foo.bar': {'type': 'virtual'},
}
with EnvVars({env_var: 'foo'}):
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]
project_config.finalize_env_overrides({'foo': bool, 'bar': str, 'baz': dict})
expected_envs = {
'default': {'type': 'virtual'},
'foo.9000': {'type': 'virtual', 'foo': True, 'bar': 'foobar', 'baz': {'BAR': 'ok'}},
'foo.bar': {'type': 'virtual', 'foo': True, 'bar': 'foobar'},
}
assert project_config.envs == expected_envs
assert project_config.matrices['foo'] == list(expected_envs)[1:]
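The precedence tests above pin down the ordering matrix > env > platform: a matrix override wins over an env-variable override, which in turn wins over a platform override. A rough sketch of that merge order (illustrative, not the project's code):

```python
def merge_overrides(base, platform=None, env=None, matrix=None):
    # Apply sources from lowest to highest precedence so later updates
    # win: platform < env < matrix, matching the tests above.
    config = dict(base)
    for overrides in (platform, env, matrix):
        if overrides:
            config.update(overrides)
    return config
```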
class TestPublish:
def test_not_table(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.publish` must be a table'):
_ = ProjectConfig(isolation, {'publish': 9000}).publish
def test_config_not_table(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.publish.foo` must be a table'):
_ = ProjectConfig(isolation, {'publish': {'foo': 9000}}).publish
def test_default(self, isolation):
project_config = ProjectConfig(isolation, {})
assert project_config.publish == project_config.publish == {}
def test_defined(self, isolation):
project_config = ProjectConfig(isolation, {'publish': {'foo': {'bar': 'baz'}}})
assert project_config.publish == {'foo': {'bar': 'baz'}}
class TestScripts:
def test_not_table(self, isolation):
config = {'scripts': 9000}
project_config = ProjectConfig(isolation, config)
with pytest.raises(TypeError, match='Field `tool.hatch.scripts` must be a table'):
_ = project_config.scripts
def test_name_contains_spaces(self, isolation):
config = {'scripts': {'foo bar': []}}
project_config = ProjectConfig(isolation, config)
with pytest.raises(
ValueError, match='Script name `foo bar` in field `tool.hatch.scripts` must not contain spaces'
):
_ = project_config.scripts
def test_default(self, isolation):
project_config = ProjectConfig(isolation, {})
assert project_config.scripts == project_config.scripts == {}
def test_single_commands(self, isolation):
config = {'scripts': {'foo': 'command1', 'bar': 'command2'}}
project_config = ProjectConfig(isolation, config)
assert project_config.scripts == {'foo': ['command1'], 'bar': ['command2']}
def test_multiple_commands(self, isolation):
config = {'scripts': {'foo': 'command1', 'bar': ['command3', 'command2']}}
project_config = ProjectConfig(isolation, config)
assert project_config.scripts == {'foo': ['command1'], 'bar': ['command3', 'command2']}
def test_multiple_commands_not_string(self, isolation):
config = {'scripts': {'foo': [9000]}}
project_config = ProjectConfig(isolation, config)
with pytest.raises(TypeError, match='Command #1 in field `tool.hatch.scripts.foo` must be a string'):
_ = project_config.scripts
def test_config_invalid_type(self, isolation):
config = {'scripts': {'foo': 9000}}
project_config = ProjectConfig(isolation, config)
with pytest.raises(TypeError, match='Field `tool.hatch.scripts.foo` must be a string or an array of strings'):
_ = project_config.scripts
def test_command_expansion_basic(self, isolation):
config = {'scripts': {'foo': 'command1', 'bar': ['command3', 'foo']}}
project_config = ProjectConfig(isolation, config)
assert project_config.scripts == {'foo': ['command1'], 'bar': ['command3', 'command1']}
def test_command_expansion_multiple_nested(self, isolation):
config = {
'scripts': {
'foo': 'command3',
'baz': ['command5', 'bar', 'foo', 'command1'],
'bar': ['command4', 'foo', 'command2'],
}
}
project_config = ProjectConfig(isolation, config)
assert project_config.scripts == {
'foo': ['command3'],
'baz': ['command5', 'command4', 'command3', 'command2', 'command3', 'command1'],
'bar': ['command4', 'command3', 'command2'],
}
def test_command_expansion_modification(self, isolation):
config = {
'scripts': {
'foo': 'command3',
'baz': ['command5', 'bar world', 'foo', 'command1'],
'bar': ['command4', 'foo hello', 'command2'],
}
}
project_config = ProjectConfig(isolation, config)
assert project_config.scripts == {
'foo': ['command3'],
'baz': ['command5', 'command4 world', 'command3 hello world', 'command2 world', 'command3', 'command1'],
'bar': ['command4', 'command3 hello', 'command2'],
}
def test_command_expansion_circular_inheritance(self, isolation):
config = {'scripts': {'foo': 'bar', 'bar': 'foo'}}
project_config = ProjectConfig(isolation, config)
with pytest.raises(
ValueError, match='Circular expansion detected for field `tool.hatch.scripts`: foo -> bar -> foo'
):
_ = project_config.scripts
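The expansion behavior these tests pin down — the first word of a command may name another script, trailing arguments are appended to each spliced-in command, and cycles are rejected with the offending chain — can be sketched roughly as follows (not Hatch's actual implementation):

```python
def expand_scripts(scripts):
    # If the first word of a command names another script, splice in that
    # script's (recursively expanded) commands, appending any trailing
    # arguments to each of them; cycles raise a ValueError naming the chain.
    def expand(name, seen):
        if name in seen:
            raise ValueError(
                'Circular expansion detected: ' + ' -> '.join([*seen, name])
            )
        commands = scripts[name]
        if isinstance(commands, str):
            commands = [commands]
        result = []
        for command in commands:
            head, _, args = command.partition(' ')
            if head in scripts:
                expanded = expand(head, [*seen, name])
                result.extend(f'{c} {args}' if args else c for c in expanded)
            else:
                result.append(command)
        return result

    return {name: expand(name, []) for name in scripts}
```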
class TestVersionConfig:
def test_missing(self, isolation):
with pytest.raises(ValueError, match='Missing `tool.hatch.version` configuration'):
_ = ProjectConfig(isolation, {}).version
def test_not_table(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.version` must be a table'):
_ = ProjectConfig(isolation, {'version': 9000}).version
def test_parse(self, isolation):
project_config = ProjectConfig(isolation, {'version': {'foo': 'bar'}})
assert project_config.version.config == project_config.version.config == {'foo': 'bar'}
class TestVersionSourceName:
def test_empty(self, isolation):
with pytest.raises(
ValueError, match='The `source` option under the `tool.hatch.version` table must not be empty if defined'
):
_ = ProjectConfig(isolation, {'version': {'source': ''}}).version.source_name
def test_not_string(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.version.source` must be a string'):
_ = ProjectConfig(isolation, {'version': {'source': 9000}}).version.source_name
def test_correct(self, isolation):
project_config = ProjectConfig(isolation, {'version': {'source': 'foo'}})
assert project_config.version.source_name == project_config.version.source_name == 'foo'
def test_default(self, isolation):
project_config = ProjectConfig(isolation, {'version': {}})
assert project_config.version.source_name == project_config.version.source_name == 'regex'
class TestVersionSchemeName:
def test_empty(self, isolation):
with pytest.raises(
ValueError, match='The `scheme` option under the `tool.hatch.version` table must not be empty if defined'
):
_ = ProjectConfig(isolation, {'version': {'scheme': ''}}).version.scheme_name
def test_not_string(self, isolation):
with pytest.raises(TypeError, match='Field `tool.hatch.version.scheme` must be a string'):
_ = ProjectConfig(isolation, {'version': {'scheme': 9000}}).version.scheme_name
def test_correct(self, isolation):
project_config = ProjectConfig(isolation, {'version': {'scheme': 'foo'}})
assert project_config.version.scheme_name == project_config.version.scheme_name == 'foo'
def test_default(self, isolation):
project_config = ProjectConfig(isolation, {'version': {}})
assert project_config.version.scheme_name == project_config.version.scheme_name == 'standard'
class TestVersionSource:
def test_unknown(self, isolation):
with pytest.raises(ValueError, match='Unknown version source: foo'):
_ = ProjectConfig(isolation, {'version': {'source': 'foo'}}, PluginManager()).version.source
def test_cached(self, isolation):
project_config = ProjectConfig(isolation, {'version': {}}, PluginManager())
assert project_config.version.source is project_config.version.source
assert isinstance(project_config.version.source, RegexSource)
class TestVersionScheme:
def test_unknown(self, isolation):
with pytest.raises(ValueError, match='Unknown version scheme: foo'):
_ = ProjectConfig(isolation, {'version': {'scheme': 'foo'}}, PluginManager()).version.scheme
def test_cached(self, isolation):
project_config = ProjectConfig(isolation, {'version': {}}, PluginManager())
assert project_config.version.scheme is project_config.version.scheme
assert isinstance(project_config.version.scheme, StandardScheme)
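The `is`-identity assertions in the two `test_cached` methods above check that the version source and scheme objects are constructed once and then memoized. A generic sketch of that caching pattern (illustrative, not the project's code):

```python
class Memoized:
    # Compute the value on first access, then hand back the same object
    # on every later access, so `obj.value is obj.value` holds.
    def __init__(self, factory):
        self._factory = factory
        self._value = None

    @property
    def value(self):
        if self._value is None:
            self._value = self._factory()
        return self._value
```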
| 41.348636 | 120 | 0.527708 | 8,373 | 90,967 | 5.554401 | 0.025558 | 0.070161 | 0.056379 | 0.067732 | 0.943127 | 0.913346 | 0.893521 | 0.869181 | 0.853549 | 0.833423 | 0 | 0.019762 | 0.30743 | 90,967 | 2,199 | 121 | 41.36744 | 0.718433 | 0.001825 | 0 | 0.631859 | 0 | 0.018647 | 0.203762 | 0.02793 | 0 | 0 | 0 | 0 | 0.07512 | 1 | 0.082046 | false | 0 | 0.003729 | 0 | 0.091103 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5821c1379d0ea7c2b0d3a7e4df7bf74a96ceee61 | 36 | py | Python | env/lib/python3.6/site-packages/torch/jit/passes/__init__.py | bopopescu/smart_contracts7 | 40a487cb3843e86ab5e4cb50b1aafa2095f648cd | [
"Apache-2.0"
] | null | null | null | env/lib/python3.6/site-packages/torch/jit/passes/__init__.py | bopopescu/smart_contracts7 | 40a487cb3843e86ab5e4cb50b1aafa2095f648cd | [
"Apache-2.0"
] | null | null | null | env/lib/python3.6/site-packages/torch/jit/passes/__init__.py | bopopescu/smart_contracts7 | 40a487cb3843e86ab5e4cb50b1aafa2095f648cd | [
"Apache-2.0"
] | 1 | 2020-07-24T17:53:25.000Z | 2020-07-24T17:53:25.000Z | from .inplace import _check_inplace
| 18 | 35 | 0.861111 | 5 | 36 | 5.8 | 0.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.90625 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
587e4fb6944bfee8df41377005377567a02555e1 | 71 | py | Python | autoscalingsim/fault/failure/realizations/__init__.py | Remit/autoscaling-simulator | 091943c0e9eedf9543e9305682a067ab60f56def | [
"MIT"
] | 6 | 2021-03-10T16:23:10.000Z | 2022-01-14T04:57:46.000Z | autoscalingsim/fault/failure/realizations/__init__.py | Remit/autoscaling-simulator | 091943c0e9eedf9543e9305682a067ab60f56def | [
"MIT"
] | null | null | null | autoscalingsim/fault/failure/realizations/__init__.py | Remit/autoscaling-simulator | 091943c0e9eedf9543e9305682a067ab60f56def | [
"MIT"
] | 1 | 2022-01-14T04:57:55.000Z | 2022-01-14T04:57:55.000Z | from . import node_group_termination
from . import service_termination
| 23.666667 | 36 | 0.859155 | 9 | 71 | 6.444444 | 0.666667 | 0.344828 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.112676 | 71 | 2 | 37 | 35.5 | 0.920635 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
5454cdfa4e38f4b706fa889307548ea8e41af295 | 5,571 | py | Python | nipy/labs/bindings/tests/test_blas3.py | bpinsard/nipy | d49e8292adad6619e3dac710752131b567efe90e | [
"BSD-3-Clause"
] | 8 | 2019-05-29T09:38:30.000Z | 2021-01-20T03:36:59.000Z | nipy/labs/bindings/tests/test_blas3.py | bpinsard/nipy | d49e8292adad6619e3dac710752131b567efe90e | [
"BSD-3-Clause"
] | 12 | 2021-03-09T03:01:16.000Z | 2022-03-11T23:59:36.000Z | nipy/labs/bindings/tests/test_blas3.py | bpinsard/nipy | d49e8292adad6619e3dac710752131b567efe90e | [
"BSD-3-Clause"
] | 1 | 2020-07-17T12:49:49.000Z | 2020-07-17T12:49:49.000Z | from __future__ import absolute_import
#!/usr/bin/env python
#
# Test BLAS 3
#
from numpy.testing import assert_almost_equal
import numpy as np
from .. import (blas_dgemm, blas_dsymm, blas_dtrmm,
blas_dtrsm, blas_dsyrk, blas_dsyr2k)
n1 = 10
n2 = 13
def test_dgemm():
A = np.random.rand(n1,n2)
B = np.random.rand(n2,n1)
C = np.random.rand(n1,n1)
C2 = np.random.rand(n2,n2)
alpha = np.double(np.random.rand(1))
beta = np.double(np.random.rand(1))
# Test: A*B
Dgold = alpha*np.dot(A,B) + beta*C
D = blas_dgemm(0, 0, alpha, A, B, beta, C)
assert_almost_equal(Dgold, D)
# Test: A^t B^t
Dgold = alpha*np.dot(A.T,B.T) + beta*C2
D = blas_dgemm(1, 1, alpha, A, B, beta, C2)
assert_almost_equal(Dgold, D)
def test_dsymm():
A = np.random.rand(n1,n1)
A = A + A.T
B = np.random.rand(n1,n2)
C = np.random.rand(n1,n2)
B2 = np.random.rand(n2,n1)
C2 = np.random.rand(n2,n1)
alpha = np.double(np.random.rand(1))
beta = np.double(np.random.rand(1))
# Test: A*B
Dgold = alpha*np.dot(A,B) + beta*C
D = blas_dsymm(0, 0, alpha, A, B, beta, C)
assert_almost_equal(Dgold, D)
D = blas_dsymm(0, 1, alpha, A, B, beta, C)
assert_almost_equal(Dgold, D)
# Test: B*A
Dgold = alpha*np.dot(B2,A) + beta*C2
D = blas_dsymm(1, 0, alpha, A, B2, beta, C2)
assert_almost_equal(Dgold, D)
D = blas_dsymm(1, 1, alpha, A, B2, beta, C2)
assert_almost_equal(Dgold, D)
def _test_dtrXm(A, U, L, B, alpha, blasfn):
# Test: U*B
Dgold = alpha*np.dot(U,B)
D = blasfn(0, 0, 0, 0, alpha, A, B)
assert_almost_equal(Dgold, D)
# Test: B*U
Dgold = alpha*np.dot(B,U)
D = blasfn(1, 0, 0, 0, alpha, A, B)
assert_almost_equal(Dgold, D)
# Test: U'*B
Dgold = alpha*np.dot(U.T,B)
D = blasfn(0, 0, 1, 0, alpha, A, B)
assert_almost_equal(Dgold, D)
# Test: B*U'
Dgold = alpha*np.dot(B,U.T)
D = blasfn(1, 0, 1, 0, alpha, A, B)
assert_almost_equal(Dgold, D)
# Test: L*B
Dgold = alpha*np.dot(L,B)
D = blasfn(0, 1, 0, 0, alpha, A, B)
assert_almost_equal(Dgold, D)
# Test: B*L
Dgold = alpha*np.dot(B,L)
D = blasfn(1, 1, 0, 0, alpha, A, B)
assert_almost_equal(Dgold, D)
# Test: L'*B
Dgold = alpha*np.dot(L.T,B)
D = blasfn(0, 1, 1, 0, alpha, A, B)
assert_almost_equal(Dgold, D)
# Test: B*L'
Dgold = alpha*np.dot(B,L.T)
D = blasfn(1, 1, 1, 0, alpha, A, B)
assert_almost_equal(Dgold, D)
def test_dtrmm():
A = np.random.rand(n1,n1)
U = np.triu(A)
L = np.tril(A)
B = np.random.rand(n1,n1)
alpha = np.double(np.random.rand(1))
_test_dtrXm(A, U, L, B, alpha, blas_dtrmm)
def test_dtrsm():
A = np.random.rand(n1,n1)
U = np.linalg.inv(np.triu(A))
L = np.linalg.inv(np.tril(A))
B = np.random.rand(n1,n1)
alpha = np.double(np.random.rand(1))
_test_dtrXm(A, U, L, B, alpha, blas_dtrsm)
def test_dsyrk():
A = np.random.rand(n1,n1)
C = np.random.rand(n1,n1)
alpha = np.double(np.random.rand(1))
beta = np.double(np.random.rand(1))
# Test A*A'
U = np.triu(blas_dsyrk(0, 0, alpha, A, beta, C))
L = np.tril(blas_dsyrk(1, 0, alpha, A, beta, C))
Dgold = alpha*np.dot(A, A.T) + beta*C
Ugold = np.triu(Dgold)
Lgold = np.tril(Dgold)
assert_almost_equal(Ugold, U)
assert_almost_equal(Lgold, L)
# Test A'*A
U = np.triu(blas_dsyrk(0, 1, alpha, A, beta, C))
L = np.tril(blas_dsyrk(1, 1, alpha, A, beta, C))
Dgold = alpha*np.dot(A.T, A) + beta*C
Ugold = np.triu(Dgold)
Lgold = np.tril(Dgold)
assert_almost_equal(Ugold, U)
assert_almost_equal(Lgold, L)
def test_dsyr2k():
A = np.random.rand(n1,n1)
B = np.random.rand(n1,n1)
C = np.random.rand(n1,n1)
alpha = np.double(np.random.rand(1))
beta = np.double(np.random.rand(1))
# Test A*B' + B*A'
U = np.triu(blas_dsyr2k(0, 0, alpha, A, B, beta, C))
L = np.tril(blas_dsyr2k(1, 0, alpha, A, B, beta, C))
Dgold = alpha*(np.dot(A,B.T) + np.dot(B,A.T)) + beta*C
Ugold = np.triu(Dgold)
Lgold = np.tril(Dgold)
assert_almost_equal(Ugold, U)
assert_almost_equal(Lgold, L)
# Test A'*B + B'*A
U = np.triu(blas_dsyr2k(0, 1, alpha, A, B, beta, C))
L = np.tril(blas_dsyr2k(1, 1, alpha, A, B, beta, C))
Dgold = alpha*(np.dot(A.T,B) + np.dot(B.T,A)) + beta*C
Ugold = np.triu(Dgold)
Lgold = np.tril(Dgold)
assert_almost_equal(Ugold, U)
assert_almost_equal(Lgold, L)
if __name__ == "__main__":
import nose
nose.run(argv=['', __file__])
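The tests above compare the bound BLAS-3 routines against explicit NumPy expressions; the dgemm contract they check can be written directly as a reference function (a sketch, assuming the flag convention used above: 1 means transpose):

```python
import numpy as np

def ref_dgemm(trans_a, trans_b, alpha, A, B, beta, C):
    # C <- alpha * op(A) @ op(B) + beta * C, where op(X) = X.T when the
    # corresponding flag is 1, else X unchanged.
    op_a = A.T if trans_a else A
    op_b = B.T if trans_b else B
    return alpha * np.dot(op_a, op_b) + beta * C
```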
| 29.47619 | 58 | 0.580147 | 1,039 | 5,571 | 3.001925 | 0.059673 | 0.021161 | 0.168964 | 0.115422 | 0.871433 | 0.839692 | 0.813402 | 0.792882 | 0.773325 | 0.730683 | 0 | 0.03848 | 0.244301 | 5,571 | 188 | 59 | 29.632979 | 0.702375 | 0.053491 | 0 | 0.645833 | 0 | 0 | 0.001526 | 0 | 0 | 0 | 0 | 0 | 0.215278 | 1 | 0.048611 | false | 0 | 0.034722 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
5457cbb6f8b2103da5594d77f5e45a5881e293ba | 2,499 | py | Python | elegy/metrics/recall_test.py | cgarciae/elegy | 3494cc7d495198f4c383d3560ea05df65bb669ff | [
"Apache-2.0"
] | 1 | 2021-09-02T20:53:09.000Z | 2021-09-02T20:53:09.000Z | elegy/metrics/recall_test.py | cgarciae/elegy | 3494cc7d495198f4c383d3560ea05df65bb669ff | [
"Apache-2.0"
] | 4 | 2020-07-02T02:18:33.000Z | 2020-07-02T02:43:57.000Z | elegy/metrics/recall_test.py | cgarciae/elegy | 3494cc7d495198f4c383d3560ea05df65bb669ff | [
"Apache-2.0"
] | null | null | null | from unittest import TestCase
import jax.numpy as jnp
import tensorflow.keras as tfk
import numpy as np
import elegy
class RecallTest(TestCase):
def test_basic(self):
y_true = (np.random.uniform(0, 1, size=(5, 6, 7)) > 0.5).astype(np.float32)
y_pred = np.random.uniform(0, 1, size=(5, 6, 7))
sample_weight = np.expand_dims(
(np.random.uniform(0, 1, size=(6, 7)) > 0.5).astype(int), axis=0
)
assert np.allclose(
tfk.metrics.Recall()(y_true, y_pred),
elegy.metrics.Recall()(jnp.asarray(y_true), jnp.asarray(y_pred)),
)
assert np.allclose(
tfk.metrics.Recall(thresholds=0.3)(y_true, y_pred),
elegy.metrics.Recall(threshold=0.3)(
jnp.asarray(y_true), jnp.asarray(y_pred)
),
)
assert np.allclose(
tfk.metrics.Recall(thresholds=0.3)(
y_true, y_pred, sample_weight=sample_weight
),
elegy.metrics.Recall(threshold=0.3)(
jnp.asarray(y_true),
jnp.asarray(y_pred),
sample_weight=jnp.asarray(sample_weight),
),
)
def test_cummulative(self):
tm = tfk.metrics.Recall(thresholds=0.3)
em = elegy.metrics.Recall(threshold=0.3)
# 1st run
y_true = (np.random.uniform(0, 1, size=(5, 6, 7)) > 0.5).astype(np.float32)
y_pred = np.random.uniform(0, 1, size=(5, 6, 7))
sample_weight = np.expand_dims(
(np.random.uniform(0, 1, size=(6, 7)) > 0.5).astype(int), axis=0
)
assert np.allclose(
tm(y_true, y_pred, sample_weight=sample_weight),
em(
jnp.asarray(y_true),
jnp.asarray(y_pred),
sample_weight=jnp.asarray(sample_weight),
),
)
# 2nd run
y_true = (np.random.uniform(0, 1, size=(5, 6, 7)) > 0.5).astype(np.float32)
y_pred = np.random.uniform(0, 1, size=(5, 6, 7))
sample_weight = np.expand_dims(
(np.random.uniform(0, 1, size=(6, 7)) > 0.5).astype(int), axis=0
)
assert np.allclose(
tm(y_true, y_pred, sample_weight=sample_weight),
em(
jnp.asarray(y_true),
jnp.asarray(y_pred),
sample_weight=jnp.asarray(sample_weight),
),
)
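Both libraries compute the same quantity in these comparisons; a plain-NumPy sketch of thresholded recall with optional sample weights (mirroring, not reproducing, either API):

```python
import numpy as np

def recall(y_true, y_pred, threshold=0.5, sample_weight=None):
    # recall = weighted true positives / weighted actual positives,
    # counting a prediction as positive when it exceeds the threshold.
    y_hat = (y_pred > threshold).astype(y_true.dtype)
    if sample_weight is None:
        w = np.ones_like(y_true)
    else:
        w = np.broadcast_to(sample_weight, y_true.shape)
    tp = np.sum(w * y_true * y_hat)
    fn = np.sum(w * y_true * (1.0 - y_hat))
    return tp / (tp + fn)
```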
| 32.454545 | 84 | 0.520608 | 326 | 2,499 | 3.849693 | 0.159509 | 0.143426 | 0.087649 | 0.114741 | 0.866135 | 0.866135 | 0.807968 | 0.785657 | 0.766534 | 0.766534 | 0 | 0.047124 | 0.346138 | 2,499 | 76 | 85 | 32.881579 | 0.72093 | 0.006002 | 0 | 0.606557 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.081967 | 1 | 0.032787 | false | 0 | 0.081967 | 0 | 0.131148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
546bed8bb1897263de6bc337e7bdb8c783c77f35 | 31 | py | Python | 0x02-python-import_modules/103-fast_alphabet.py | Rmolimock/holbertonschool-higher_level_programming | cf0421cbb6463b3960dc581badf7d4bbe1622b7d | [
"MIT"
] | 1 | 2019-05-21T09:34:41.000Z | 2019-05-21T09:34:41.000Z | 0x02-python-import_modules/103-fast_alphabet.py | Rmolimock/holbertonschool-higher_level_programming | cf0421cbb6463b3960dc581badf7d4bbe1622b7d | [
"MIT"
] | null | null | null | 0x02-python-import_modules/103-fast_alphabet.py | Rmolimock/holbertonschool-higher_level_programming | cf0421cbb6463b3960dc581badf7d4bbe1622b7d | [
"MIT"
] | null | null | null | #!/usr/bin/python3
import test
| 10.333333 | 18 | 0.741935 | 5 | 31 | 4.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.035714 | 0.096774 | 31 | 2 | 19 | 15.5 | 0.785714 | 0.548387 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
49afe8e2b8bb6e8b09ab7e4246fe8c47cf0de6a8 | 28,035 | py | Python | sdk/python/pulumi_alicloud/vpn/outputs.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 42 | 2019-03-18T06:34:37.000Z | 2022-03-24T07:08:57.000Z | sdk/python/pulumi_alicloud/vpn/outputs.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 152 | 2019-04-15T21:03:44.000Z | 2022-03-29T18:00:57.000Z | sdk/python/pulumi_alicloud/vpn/outputs.py | pulumi/pulumi-alicloud | 9c34d84b4588a7c885c6bec1f03b5016e5a41683 | [
"ECL-2.0",
"Apache-2.0"
] | 3 | 2020-08-26T17:30:07.000Z | 2021-07-05T01:37:45.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union, overload
from .. import _utilities
from . import outputs
__all__ = [
'ConnectionIkeConfig',
'ConnectionIpsecConfig',
'GetConnectionsConnectionResult',
'GetConnectionsConnectionIkeConfigResult',
'GetConnectionsConnectionIpsecConfigResult',
'GetCustomerGatewaysGatewayResult',
'GetGatewaysGatewayResult',
]
@pulumi.output_type
class ConnectionIkeConfig(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "ikeAuthAlg":
suggest = "ike_auth_alg"
elif key == "ikeEncAlg":
suggest = "ike_enc_alg"
elif key == "ikeLifetime":
suggest = "ike_lifetime"
elif key == "ikeLocalId":
suggest = "ike_local_id"
elif key == "ikeMode":
suggest = "ike_mode"
elif key == "ikePfs":
suggest = "ike_pfs"
elif key == "ikeRemoteId":
suggest = "ike_remote_id"
elif key == "ikeVersion":
suggest = "ike_version"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in ConnectionIkeConfig. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
ConnectionIkeConfig.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
ConnectionIkeConfig.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
ike_auth_alg: Optional[str] = None,
ike_enc_alg: Optional[str] = None,
ike_lifetime: Optional[int] = None,
ike_local_id: Optional[str] = None,
ike_mode: Optional[str] = None,
ike_pfs: Optional[str] = None,
ike_remote_id: Optional[str] = None,
ike_version: Optional[str] = None,
psk: Optional[str] = None):
"""
:param str ike_auth_alg: The authentication algorithm of phase-one negotiation. Valid value: md5 | sha1 | sha256 | sha384 | sha512. Default value: sha1
:param str ike_enc_alg: The encryption algorithm of phase-one negotiation. Valid value: aes | aes192 | aes256 | des | 3des. Default value: aes
:param int ike_lifetime: The SA lifecycle as the result of phase-one negotiation. The valid value of n is [0, 86400], the unit is second and the default value is 86400.
:param str ike_local_id: The identification of the VPN gateway.
:param str ike_mode: The negotiation mode of IKE V1. Valid value: main (main mode) | aggressive (aggressive mode). Default value: main
:param str ike_pfs: The Diffie-Hellman key exchange algorithm used by phase-one negotiation. Valid value: group1 | group2 | group5 | group14 | group24. Default value: group2
:param str ike_remote_id: The identification of the customer gateway.
:param str ike_version: The version of the IKE protocol. Valid value: ikev1 | ikev2. Default value: ikev1
:param str psk: Used for authentication between the IPsec VPN gateway and the customer gateway.
"""
if ike_auth_alg is not None:
pulumi.set(__self__, "ike_auth_alg", ike_auth_alg)
if ike_enc_alg is not None:
pulumi.set(__self__, "ike_enc_alg", ike_enc_alg)
if ike_lifetime is not None:
pulumi.set(__self__, "ike_lifetime", ike_lifetime)
if ike_local_id is not None:
pulumi.set(__self__, "ike_local_id", ike_local_id)
if ike_mode is not None:
pulumi.set(__self__, "ike_mode", ike_mode)
if ike_pfs is not None:
pulumi.set(__self__, "ike_pfs", ike_pfs)
if ike_remote_id is not None:
pulumi.set(__self__, "ike_remote_id", ike_remote_id)
if ike_version is not None:
pulumi.set(__self__, "ike_version", ike_version)
if psk is not None:
pulumi.set(__self__, "psk", psk)
@property
@pulumi.getter(name="ikeAuthAlg")
def ike_auth_alg(self) -> Optional[str]:
"""
The authentication algorithm of phase-one negotiation. Valid value: md5 | sha1 | sha256 | sha384 | sha512. Default value: sha1
"""
return pulumi.get(self, "ike_auth_alg")
@property
@pulumi.getter(name="ikeEncAlg")
def ike_enc_alg(self) -> Optional[str]:
"""
The encryption algorithm of phase-one negotiation. Valid value: aes | aes192 | aes256 | des | 3des. Default value: aes
"""
return pulumi.get(self, "ike_enc_alg")
@property
@pulumi.getter(name="ikeLifetime")
def ike_lifetime(self) -> Optional[int]:
"""
The SA lifecycle as the result of phase-one negotiation. The valid value of n is [0, 86400], the unit is second and the default value is 86400.
"""
return pulumi.get(self, "ike_lifetime")
@property
@pulumi.getter(name="ikeLocalId")
def ike_local_id(self) -> Optional[str]:
"""
The identification of the VPN gateway.
"""
return pulumi.get(self, "ike_local_id")
@property
@pulumi.getter(name="ikeMode")
def ike_mode(self) -> Optional[str]:
"""
The negotiation mode of IKE V1. Valid value: main (main mode) | aggressive (aggressive mode). Default value: main
"""
return pulumi.get(self, "ike_mode")
@property
@pulumi.getter(name="ikePfs")
def ike_pfs(self) -> Optional[str]:
"""
The Diffie-Hellman key exchange algorithm used by phase-one negotiation. Valid value: group1 | group2 | group5 | group14 | group24. Default value: group2
"""
return pulumi.get(self, "ike_pfs")
@property
@pulumi.getter(name="ikeRemoteId")
def ike_remote_id(self) -> Optional[str]:
"""
The identification of the customer gateway.
"""
return pulumi.get(self, "ike_remote_id")
@property
@pulumi.getter(name="ikeVersion")
def ike_version(self) -> Optional[str]:
"""
The version of the IKE protocol. Valid value: ikev1 | ikev2. Default value: ikev1
"""
return pulumi.get(self, "ike_version")
@property
@pulumi.getter
def psk(self) -> Optional[str]:
"""
Used for authentication between the IPsec VPN gateway and the customer gateway.
"""
return pulumi.get(self, "psk")
@pulumi.output_type
class ConnectionIpsecConfig(dict):
@staticmethod
def __key_warning(key: str):
suggest = None
if key == "ipsecAuthAlg":
suggest = "ipsec_auth_alg"
elif key == "ipsecEncAlg":
suggest = "ipsec_enc_alg"
elif key == "ipsecLifetime":
suggest = "ipsec_lifetime"
elif key == "ipsecPfs":
suggest = "ipsec_pfs"
if suggest:
pulumi.log.warn(f"Key '{key}' not found in ConnectionIpsecConfig. Access the value via the '{suggest}' property getter instead.")
def __getitem__(self, key: str) -> Any:
ConnectionIpsecConfig.__key_warning(key)
return super().__getitem__(key)
def get(self, key: str, default=None) -> Any:
ConnectionIpsecConfig.__key_warning(key)
return super().get(key, default)
def __init__(__self__, *,
ipsec_auth_alg: Optional[str] = None,
ipsec_enc_alg: Optional[str] = None,
ipsec_lifetime: Optional[int] = None,
ipsec_pfs: Optional[str] = None):
"""
:param str ipsec_auth_alg: The authentication algorithm of phase-two negotiation. Valid value: md5 | sha1 | sha256 | sha384 | sha512. Default value: sha1
:param str ipsec_enc_alg: The encryption algorithm of phase-two negotiation. Valid value: aes | aes192 | aes256 | des | 3des. Default value: aes
:param int ipsec_lifetime: The SA lifecycle as the result of phase-two negotiation. The valid value is [0, 86400], the unit is second and the default value is 86400.
:param str ipsec_pfs: The Diffie-Hellman key exchange algorithm used by phase-two negotiation. Valid value: group1 | group2 | group5 | group14 | group24 | disabled. Default value: group2
"""
if ipsec_auth_alg is not None:
pulumi.set(__self__, "ipsec_auth_alg", ipsec_auth_alg)
if ipsec_enc_alg is not None:
pulumi.set(__self__, "ipsec_enc_alg", ipsec_enc_alg)
if ipsec_lifetime is not None:
pulumi.set(__self__, "ipsec_lifetime", ipsec_lifetime)
if ipsec_pfs is not None:
pulumi.set(__self__, "ipsec_pfs", ipsec_pfs)
@property
@pulumi.getter(name="ipsecAuthAlg")
def ipsec_auth_alg(self) -> Optional[str]:
"""
        The authentication algorithm of phase-two negotiation. Valid value: md5 | sha1 | sha256 | sha384 | sha512. Default value: sha1
"""
return pulumi.get(self, "ipsec_auth_alg")
@property
@pulumi.getter(name="ipsecEncAlg")
def ipsec_enc_alg(self) -> Optional[str]:
"""
The encryption algorithm of phase-two negotiation. Valid value: aes | aes192 | aes256 | des | 3des. Default value: aes
"""
return pulumi.get(self, "ipsec_enc_alg")
@property
@pulumi.getter(name="ipsecLifetime")
def ipsec_lifetime(self) -> Optional[int]:
"""
        The SA lifecycle as the result of phase-two negotiation. The valid range is [0, 86400] in seconds; the default value is 86400.
"""
return pulumi.get(self, "ipsec_lifetime")
@property
@pulumi.getter(name="ipsecPfs")
def ipsec_pfs(self) -> Optional[str]:
"""
        The Diffie-Hellman key exchange algorithm used by phase-two negotiation. Valid value: group1 | group2 | group5 | group14 | group24 | disabled. Default value: group2
"""
return pulumi.get(self, "ipsec_pfs")
@pulumi.output_type
class GetConnectionsConnectionResult(dict):
def __init__(__self__, *,
create_time: str,
customer_gateway_id: str,
effect_immediately: bool,
id: str,
local_subnet: str,
name: str,
remote_subnet: str,
status: str,
vpn_gateway_id: str,
ike_configs: Optional[Sequence['outputs.GetConnectionsConnectionIkeConfigResult']] = None,
ipsec_configs: Optional[Sequence['outputs.GetConnectionsConnectionIpsecConfigResult']] = None):
"""
:param str customer_gateway_id: Use the VPN customer gateway ID as the search key.
:param str id: ID of the VPN connection.
:param str local_subnet: The local subnet of the VPN connection.
:param str name: The name of the VPN connection.
:param str remote_subnet: The remote subnet of the VPN connection.
        :param str status: The status of the VPN connection. Valid values: ike_sa_not_established, ike_sa_established, ipsec_sa_not_established, ipsec_sa_established.
:param str vpn_gateway_id: Use the VPN gateway ID as the search key.
:param Sequence['GetConnectionsConnectionIkeConfigArgs'] ike_configs: The configurations of phase-one negotiation.
:param Sequence['GetConnectionsConnectionIpsecConfigArgs'] ipsec_configs: The configurations of phase-two negotiation.
"""
pulumi.set(__self__, "create_time", create_time)
pulumi.set(__self__, "customer_gateway_id", customer_gateway_id)
pulumi.set(__self__, "effect_immediately", effect_immediately)
pulumi.set(__self__, "id", id)
pulumi.set(__self__, "local_subnet", local_subnet)
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "remote_subnet", remote_subnet)
pulumi.set(__self__, "status", status)
pulumi.set(__self__, "vpn_gateway_id", vpn_gateway_id)
if ike_configs is not None:
pulumi.set(__self__, "ike_configs", ike_configs)
if ipsec_configs is not None:
pulumi.set(__self__, "ipsec_configs", ipsec_configs)
@property
@pulumi.getter(name="createTime")
def create_time(self) -> str:
return pulumi.get(self, "create_time")
@property
@pulumi.getter(name="customerGatewayId")
def customer_gateway_id(self) -> str:
"""
Use the VPN customer gateway ID as the search key.
"""
return pulumi.get(self, "customer_gateway_id")
@property
@pulumi.getter(name="effectImmediately")
def effect_immediately(self) -> bool:
return pulumi.get(self, "effect_immediately")
@property
@pulumi.getter
def id(self) -> str:
"""
ID of the VPN connection.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="localSubnet")
def local_subnet(self) -> str:
"""
The local subnet of the VPN connection.
"""
return pulumi.get(self, "local_subnet")
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the VPN connection.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="remoteSubnet")
def remote_subnet(self) -> str:
"""
The remote subnet of the VPN connection.
"""
return pulumi.get(self, "remote_subnet")
@property
@pulumi.getter
def status(self) -> str:
"""
        The status of the VPN connection. Valid values: ike_sa_not_established, ike_sa_established, ipsec_sa_not_established, ipsec_sa_established.
"""
return pulumi.get(self, "status")
@property
@pulumi.getter(name="vpnGatewayId")
def vpn_gateway_id(self) -> str:
"""
Use the VPN gateway ID as the search key.
"""
return pulumi.get(self, "vpn_gateway_id")
@property
@pulumi.getter(name="ikeConfigs")
def ike_configs(self) -> Optional[Sequence['outputs.GetConnectionsConnectionIkeConfigResult']]:
"""
The configurations of phase-one negotiation.
"""
return pulumi.get(self, "ike_configs")
@property
@pulumi.getter(name="ipsecConfigs")
def ipsec_configs(self) -> Optional[Sequence['outputs.GetConnectionsConnectionIpsecConfigResult']]:
"""
The configurations of phase-two negotiation.
"""
return pulumi.get(self, "ipsec_configs")
@pulumi.output_type
class GetConnectionsConnectionIkeConfigResult(dict):
def __init__(__self__, *,
ike_auth_alg: Optional[str] = None,
ike_enc_alg: Optional[str] = None,
ike_lifetime: Optional[int] = None,
ike_local_id: Optional[str] = None,
ike_mode: Optional[str] = None,
ike_pfs: Optional[str] = None,
ike_remote_id: Optional[str] = None,
ike_version: Optional[str] = None,
psk: Optional[str] = None):
"""
:param str ike_auth_alg: The authentication algorithm of phase-one negotiation.
:param str ike_enc_alg: The encryption algorithm of phase-one negotiation.
:param int ike_lifetime: The SA lifecycle as the result of phase-one negotiation.
:param str ike_local_id: The identification of the VPN gateway.
:param str ike_mode: The negotiation mode of IKE phase-one.
:param str ike_pfs: The Diffie-Hellman key exchange algorithm used by phase-one negotiation.
:param str ike_remote_id: The identification of the customer gateway.
:param str ike_version: The version of the IKE protocol.
:param str psk: Used for authentication between the IPsec VPN gateway and the customer gateway.
"""
if ike_auth_alg is not None:
pulumi.set(__self__, "ike_auth_alg", ike_auth_alg)
if ike_enc_alg is not None:
pulumi.set(__self__, "ike_enc_alg", ike_enc_alg)
if ike_lifetime is not None:
pulumi.set(__self__, "ike_lifetime", ike_lifetime)
if ike_local_id is not None:
pulumi.set(__self__, "ike_local_id", ike_local_id)
if ike_mode is not None:
pulumi.set(__self__, "ike_mode", ike_mode)
if ike_pfs is not None:
pulumi.set(__self__, "ike_pfs", ike_pfs)
if ike_remote_id is not None:
pulumi.set(__self__, "ike_remote_id", ike_remote_id)
if ike_version is not None:
pulumi.set(__self__, "ike_version", ike_version)
if psk is not None:
pulumi.set(__self__, "psk", psk)
@property
@pulumi.getter(name="ikeAuthAlg")
def ike_auth_alg(self) -> Optional[str]:
"""
The authentication algorithm of phase-one negotiation.
"""
return pulumi.get(self, "ike_auth_alg")
@property
@pulumi.getter(name="ikeEncAlg")
def ike_enc_alg(self) -> Optional[str]:
"""
The encryption algorithm of phase-one negotiation.
"""
return pulumi.get(self, "ike_enc_alg")
@property
@pulumi.getter(name="ikeLifetime")
def ike_lifetime(self) -> Optional[int]:
"""
The SA lifecycle as the result of phase-one negotiation.
"""
return pulumi.get(self, "ike_lifetime")
@property
@pulumi.getter(name="ikeLocalId")
def ike_local_id(self) -> Optional[str]:
"""
The identification of the VPN gateway.
"""
return pulumi.get(self, "ike_local_id")
@property
@pulumi.getter(name="ikeMode")
def ike_mode(self) -> Optional[str]:
"""
The negotiation mode of IKE phase-one.
"""
return pulumi.get(self, "ike_mode")
@property
@pulumi.getter(name="ikePfs")
def ike_pfs(self) -> Optional[str]:
"""
The Diffie-Hellman key exchange algorithm used by phase-one negotiation.
"""
return pulumi.get(self, "ike_pfs")
@property
@pulumi.getter(name="ikeRemoteId")
def ike_remote_id(self) -> Optional[str]:
"""
The identification of the customer gateway.
"""
return pulumi.get(self, "ike_remote_id")
@property
@pulumi.getter(name="ikeVersion")
def ike_version(self) -> Optional[str]:
"""
The version of the IKE protocol.
"""
return pulumi.get(self, "ike_version")
@property
@pulumi.getter
def psk(self) -> Optional[str]:
"""
Used for authentication between the IPsec VPN gateway and the customer gateway.
"""
return pulumi.get(self, "psk")
@pulumi.output_type
class GetConnectionsConnectionIpsecConfigResult(dict):
def __init__(__self__, *,
ipsec_auth_alg: Optional[str] = None,
ipsec_enc_alg: Optional[str] = None,
ipsec_lifetime: Optional[int] = None,
ipsec_pfs: Optional[str] = None):
"""
:param str ipsec_auth_alg: The authentication algorithm of phase-two negotiation.
:param str ipsec_enc_alg: The encryption algorithm of phase-two negotiation.
:param int ipsec_lifetime: The SA lifecycle as the result of phase-two negotiation.
:param str ipsec_pfs: The Diffie-Hellman key exchange algorithm used by phase-two negotiation.
"""
if ipsec_auth_alg is not None:
pulumi.set(__self__, "ipsec_auth_alg", ipsec_auth_alg)
if ipsec_enc_alg is not None:
pulumi.set(__self__, "ipsec_enc_alg", ipsec_enc_alg)
if ipsec_lifetime is not None:
pulumi.set(__self__, "ipsec_lifetime", ipsec_lifetime)
if ipsec_pfs is not None:
pulumi.set(__self__, "ipsec_pfs", ipsec_pfs)
@property
@pulumi.getter(name="ipsecAuthAlg")
def ipsec_auth_alg(self) -> Optional[str]:
"""
The authentication algorithm of phase-two negotiation.
"""
return pulumi.get(self, "ipsec_auth_alg")
@property
@pulumi.getter(name="ipsecEncAlg")
def ipsec_enc_alg(self) -> Optional[str]:
"""
The encryption algorithm of phase-two negotiation.
"""
return pulumi.get(self, "ipsec_enc_alg")
@property
@pulumi.getter(name="ipsecLifetime")
def ipsec_lifetime(self) -> Optional[int]:
"""
The SA lifecycle as the result of phase-two negotiation.
"""
return pulumi.get(self, "ipsec_lifetime")
@property
@pulumi.getter(name="ipsecPfs")
def ipsec_pfs(self) -> Optional[str]:
"""
The Diffie-Hellman key exchange algorithm used by phase-two negotiation.
"""
return pulumi.get(self, "ipsec_pfs")
@pulumi.output_type
class GetCustomerGatewaysGatewayResult(dict):
def __init__(__self__, *,
create_time: str,
description: str,
id: str,
ip_address: str,
name: str):
"""
:param str create_time: The creation time of the VPN customer gateway.
:param str description: The description of the VPN customer gateway.
        :param str id: ID of the VPN customer gateway.
        :param str ip_address: The IP address of the VPN customer gateway.
:param str name: The name of the VPN customer gateway.
"""
pulumi.set(__self__, "create_time", create_time)
pulumi.set(__self__, "description", description)
pulumi.set(__self__, "id", id)
pulumi.set(__self__, "ip_address", ip_address)
pulumi.set(__self__, "name", name)
@property
@pulumi.getter(name="createTime")
def create_time(self) -> str:
"""
The creation time of the VPN customer gateway.
"""
return pulumi.get(self, "create_time")
@property
@pulumi.getter
def description(self) -> str:
"""
The description of the VPN customer gateway.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter
def id(self) -> str:
"""
        ID of the VPN customer gateway.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> str:
"""
        The IP address of the VPN customer gateway.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the VPN customer gateway.
"""
return pulumi.get(self, "name")
@pulumi.output_type
class GetGatewaysGatewayResult(dict):
def __init__(__self__, *,
business_status: str,
create_time: str,
description: str,
enable_ipsec: str,
enable_ssl: str,
end_time: str,
id: str,
instance_charge_type: str,
internet_ip: str,
name: str,
specification: str,
ssl_connections: int,
status: str,
vpc_id: str):
"""
        :param str business_status: Limit search to a specific business status. Valid values: "Normal", "FinancialLocked".
        :param str create_time: The creation time of the VPN gateway.
        :param str description: The description of the VPN gateway.
        :param str enable_ipsec: Whether the IPsec function is enabled.
        :param str enable_ssl: Whether the SSL function is enabled.
        :param str end_time: The expiration time of the VPN gateway.
        :param str id: ID of the VPN.
        :param str instance_charge_type: The charge type of the VPN gateway.
        :param str internet_ip: The internet IP of the VPN.
        :param str name: The name of the VPN.
        :param str specification: The specification of the VPN gateway.
        :param int ssl_connections: Total count of SSL VPN connections.
        :param str status: Limit search to a specific status. Valid values: "Init", "Provisioning", "Active", "Updating", "Deleting".
        :param str vpc_id: Use the VPC ID as the search key.
"""
pulumi.set(__self__, "business_status", business_status)
pulumi.set(__self__, "create_time", create_time)
pulumi.set(__self__, "description", description)
pulumi.set(__self__, "enable_ipsec", enable_ipsec)
pulumi.set(__self__, "enable_ssl", enable_ssl)
pulumi.set(__self__, "end_time", end_time)
pulumi.set(__self__, "id", id)
pulumi.set(__self__, "instance_charge_type", instance_charge_type)
pulumi.set(__self__, "internet_ip", internet_ip)
pulumi.set(__self__, "name", name)
pulumi.set(__self__, "specification", specification)
pulumi.set(__self__, "ssl_connections", ssl_connections)
pulumi.set(__self__, "status", status)
pulumi.set(__self__, "vpc_id", vpc_id)
@property
@pulumi.getter(name="businessStatus")
def business_status(self) -> str:
"""
        Limit search to a specific business status. Valid values: "Normal", "FinancialLocked".
"""
return pulumi.get(self, "business_status")
@property
@pulumi.getter(name="createTime")
def create_time(self) -> str:
"""
The creation time of the VPN gateway.
"""
return pulumi.get(self, "create_time")
@property
@pulumi.getter
def description(self) -> str:
"""
        The description of the VPN gateway.
"""
return pulumi.get(self, "description")
@property
@pulumi.getter(name="enableIpsec")
def enable_ipsec(self) -> str:
"""
        Whether the IPsec function is enabled.
"""
return pulumi.get(self, "enable_ipsec")
@property
@pulumi.getter(name="enableSsl")
def enable_ssl(self) -> str:
"""
        Whether the SSL function is enabled.
"""
return pulumi.get(self, "enable_ssl")
@property
@pulumi.getter(name="endTime")
def end_time(self) -> str:
"""
The expiration time of the VPN gateway.
"""
return pulumi.get(self, "end_time")
@property
@pulumi.getter
def id(self) -> str:
"""
ID of the VPN.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="instanceChargeType")
def instance_charge_type(self) -> str:
"""
The charge type of the VPN gateway.
"""
return pulumi.get(self, "instance_charge_type")
@property
@pulumi.getter(name="internetIp")
def internet_ip(self) -> str:
"""
        The internet IP of the VPN.
"""
return pulumi.get(self, "internet_ip")
@property
@pulumi.getter
def name(self) -> str:
"""
The name of the VPN.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def specification(self) -> str:
"""
        The specification of the VPN gateway.
"""
return pulumi.get(self, "specification")
@property
@pulumi.getter(name="sslConnections")
def ssl_connections(self) -> int:
"""
        Total count of SSL VPN connections.
"""
return pulumi.get(self, "ssl_connections")
@property
@pulumi.getter
def status(self) -> str:
"""
        Limit search to a specific status. Valid values: "Init", "Provisioning", "Active", "Updating", "Deleting".
"""
return pulumi.get(self, "status")
@property
@pulumi.getter(name="vpcId")
def vpc_id(self) -> str:
"""
Use the VPC ID as the search key.
"""
return pulumi.get(self, "vpc_id")
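The `__key_warning` helper repeated in each output type above implements a small compatibility shim: legacy camelCase dictionary access still resolves through `dict`, but first emits a warning pointing at the snake_case property getter. A stdlib-only sketch of the same technique (the `CamelWarnDict` name and its key table are illustrative, not part of the generated SDK):

```python
import warnings


class CamelWarnDict(dict):
    """Illustrative dict subclass mirroring the __key_warning pattern:
    accessing a known camelCase key warns and suggests the snake_case
    property instead, then delegates to the normal dict lookup."""

    _suggestions = {
        "ipsecAuthAlg": "ipsec_auth_alg",
        "ipsecEncAlg": "ipsec_enc_alg",
    }

    @classmethod
    def _key_warning(cls, key):
        suggest = cls._suggestions.get(key)
        if suggest:
            warnings.warn(
                f"Key '{key}' not found. Access the value via the "
                f"'{suggest}' property getter instead.")

    def __getitem__(self, key):
        self._key_warning(key)
        return super().__getitem__(key)

    def get(self, key, default=None):
        self._key_warning(key)
        return super().get(key, default)
```

Because the warning fires before delegating to `dict`, existing lookups keep working while steering callers toward the property getters.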
| 36.839685 | 193 | 0.618834 | 3,325 | 28,035 | 4.992782 | 0.066165 | 0.024456 | 0.043853 | 0.064093 | 0.804289 | 0.782483 | 0.758268 | 0.720378 | 0.676405 | 0.628275 | 0 | 0.007973 | 0.279757 | 28,035 | 760 | 194 | 36.888158 | 0.814184 | 0.297771 | 0 | 0.697309 | 1 | 0.004484 | 0.138971 | 0.022398 | 0 | 0 | 0 | 0 | 0 | 1 | 0.154709 | false | 0 | 0.013453 | 0.004484 | 0.318386 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
49bce43f1f2e798fc0c6a8f5152282d4b231bcbf | 1,630 | py | Python | test/test_fracdiff_fn.py | ulf1/fracdiff | 58e2152bf5f5be1a4e3c6d1aa781e0bd4b4f51f6 | [
"MIT"
] | 3 | 2020-11-18T08:28:16.000Z | 2022-01-30T07:03:23.000Z | test/test_fracdiff_fn.py | ulf1/fracdiff | 58e2152bf5f5be1a4e3c6d1aa781e0bd4b4f51f6 | [
"MIT"
] | 7 | 2020-02-17T10:12:11.000Z | 2020-05-16T15:04:47.000Z | test/test_fracdiff_fn.py | ulf1/fracdiff | 58e2152bf5f5be1a4e3c6d1aa781e0bd4b4f51f6 | [
"MIT"
] | null | null | null | from numpy_fracdiff.fracdiff_fn import fracdiff
import numpy as np
import numpy.testing as npt
def test1():
x = np.array([10, 11, 9])
w = [1.0, -1.0]
z = fracdiff(x, weights=w)
target = np.array([np.nan, 1.0, -2.0])
npt.assert_allclose(z, target)
def test2():
x = np.array([10, 11, 9])
w = [1.0, -2.0, 1.0]
z = fracdiff(x, weights=w)
target = np.array([np.nan, np.nan, -3.0])
npt.assert_allclose(z, target)
def test3():
x = np.array([10, 11, 9])
z = fracdiff(x, order=1, truncation='find')
target = np.array([np.nan, 1.0, -2.0])
npt.assert_allclose(z, target)
def test4():
x = np.array([10, 11, 9])
z = fracdiff(x, order=2, truncation='find')
target = np.array([np.nan, np.nan, -3.0])
npt.assert_allclose(z, target)
def test5():
x = np.array([10, 11, 9])
z = fracdiff(x, order=1, truncation=1)
target = np.array([np.nan, 1.0, -2.0])
npt.assert_allclose(z, target)
def test6():
x = np.array([10, 11, 9])
z = fracdiff(x, order=2, truncation=2)
target = np.array([np.nan, np.nan, -3.0])
npt.assert_allclose(z, target)
def test7():
x = np.array([10, 11, 9])
z = fracdiff(x, order=1)
target = np.array([np.nan, 1.0, -2.0])
npt.assert_allclose(z, target)
def test8():
x = np.array([10, 11, 9])
z = fracdiff(x, order=2)
target = np.array([np.nan, np.nan, -3.0])
npt.assert_allclose(z, target)
def test11():
X = np.array([[10, 11, 9], [4, 6, 9]]).T
Z = fracdiff(X, order=1)
target = np.array([[np.nan, 1.0, -2.0], [np.nan, 2.0, 3.0]]).T
npt.assert_allclose(Z, target)
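The hand-written weight vectors above ([1.0, -1.0] for a first difference, [1.0, -2.0, 1.0] for a second) are the integer-order special cases of the binomial expansion of (1 - B)^d. A dependency-free sketch of how such weights can be generated (the `frac_weights` helper is illustrative, not part of the fracdiff API):

```python
def frac_weights(d, truncation):
    """Weights of (1 - B)**d truncated after `truncation` lags:
    w_0 = 1 and w_k = -w_{k-1} * (d - k + 1) / k."""
    w = [1.0]
    for k in range(1, truncation + 1):
        w.append(-w[-1] * (d - k + 1) / k)
    return w


print(frac_weights(1, 1))  # [1.0, -1.0]
print(frac_weights(2, 2))  # [1.0, -2.0, 1.0]
```

For non-integer `d` the recursion produces slowly decaying weights, which is what makes a truncation rule (such as the `truncation='find'` option exercised above) necessary.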
| 23.623188 | 66 | 0.570552 | 290 | 1,630 | 3.168966 | 0.141379 | 0.137106 | 0.078346 | 0.097933 | 0.844396 | 0.818281 | 0.804135 | 0.79543 | 0.79543 | 0.761698 | 0 | 0.08603 | 0.222699 | 1,630 | 68 | 67 | 23.970588 | 0.639305 | 0 | 0 | 0.52 | 0 | 0 | 0.004908 | 0 | 0 | 0 | 0 | 0 | 0.18 | 1 | 0.18 | false | 0 | 0.06 | 0 | 0.24 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b717da25d14be4448abc91f6173a91a5a1b1210e | 93 | py | Python | app/explorer/__init__.py | pebblecode/cirrus-marketplace-api | 64d9e3be8705a2fe64c964b16947e9877885de7b | [
"MIT"
] | null | null | null | app/explorer/__init__.py | pebblecode/cirrus-marketplace-api | 64d9e3be8705a2fe64c964b16947e9877885de7b | [
"MIT"
] | null | null | null | app/explorer/__init__.py | pebblecode/cirrus-marketplace-api | 64d9e3be8705a2fe64c964b16947e9877885de7b | [
"MIT"
] | null | null | null | from flask import Blueprint
explorer = Blueprint('explorer', __name__)
from . import views
| 15.5 | 42 | 0.774194 | 11 | 93 | 6.181818 | 0.636364 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.150538 | 93 | 5 | 43 | 18.6 | 0.860759 | 0 | 0 | 0 | 0 | 0 | 0.086022 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0.666667 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
b72961c47f70b33a6071c17bf6f3f5f3f85a2459 | 4,507 | py | Python | tests/test_transcriptionapi.py | voxolab/voxo-dashboard | 0593248328f0f47a4c1f00d1a10080ecc559a389 | [
"MIT"
] | null | null | null | tests/test_transcriptionapi.py | voxolab/voxo-dashboard | 0593248328f0f47a4c1f00d1a10080ecc559a389 | [
"MIT"
] | null | null | null | tests/test_transcriptionapi.py | voxolab/voxo-dashboard | 0593248328f0f47a4c1f00d1a10080ecc559a389 | [
"MIT"
] | null | null | null | import io
import json
import os
from flask import url_for
def test_file_upload(app, client, user_token):
rv = client.post(
url_for('api.upload_transcription'),
data=dict(
auto_file=(io.BytesIO(b"this is a test"), 'test.xml'),
ref_file=(io.BytesIO(b"this is a test"), 'test.txt')
),
follow_redirects=True,
headers=[('Authentication-Token', user_token)]
)
assert rv.status_code == 200
result = json.loads(rv.data.decode("utf-8"))
assert result['status'] == 1
file_basename, file_extension = \
os.path.splitext(result['transcription']['auto_filename'])
    path = "{}/{}/{}/{}/{}".format(
app.config['UPLOAD_FOLDER'],
result['transcription']['user_id'],
'transcriptions',
file_basename,
result['transcription']['auto_filename'])
assert os.path.exists(path)
def test_download_file(client, user_token, server_token):
rv = client.post(
url_for('api.upload_transcription'),
data=dict(
auto_file=(io.BytesIO(b"this is a xml test"), 'test.xml'),
ref_file=(io.BytesIO(b"this is a txt test"), 'test.txt')
),
follow_redirects=True,
headers=[('Authentication-Token', user_token)])
assert rv.status_code == 200
result = json.loads(rv.data.decode("utf-8"))
# Test reference/correct transcription download
rv = client.get(
url_for(
'api.download_file',
file_id=result['transcription']['id'])
+ "?type=transcription_ref",
follow_redirects=True,
headers=[('Authentication-Token', server_token)])
assert rv.status_code == 200
assert rv.mimetype == "text/plain"
assert rv.data.decode("utf-8") == "this is a txt test"
# Test automatic transcription download
rv = client.get(
url_for(
'api.download_file',
file_id=result['transcription']['id'])
+ "?type=transcription_auto",
follow_redirects=True,
headers=[('Authentication-Token', server_token)])
assert rv.status_code == 200
assert rv.mimetype == "text/plain"
assert rv.data.decode("utf-8") == "this is a xml test"
def test_aligned_file_upload(
app, client, server_token, transcription):
rv = client.post(
url_for('api.modify_transcription', transcription_id=transcription.id),
data=dict(
file=(io.BytesIO(b"this is a test result"), 'test_result.txt')
),
follow_redirects=True,
headers=[('Authentication-Token', server_token)])
assert rv.status_code == 200
result = json.loads(rv.data.decode("utf-8"))
assert result['status'] == 1
    path = "{}/{}/{}/{}".format(
app.config['UPLOAD_FOLDER'],
result['transcription']['user_id'],
'transcriptions', result['transcription']['aligned_filename'])
assert os.path.exists(path)
def test_json_conversion_ok(user_token, client):
callId = "myid"
rv = client.post(
url_for('api.convert_transcription', call_unique_id=callId),
data=dict(
in_xml_file=(open(
"tests/fixtures/voyage-in.xml", 'rb'), 'voyage-in.xml'),
out_xml_file=(open(
"tests/fixtures/voyage-out.xml", 'rb'), 'voyage-out.xml')
),
follow_redirects=True,
headers=[('Authentication-Token', user_token)]
)
result = json.loads(rv.data.decode("utf-8"))
assert rv.status_code == 200
assert result['callUniqueId'] == callId
def test_json_conversion_fail_empty_files(user_token, client):
callId = "myid"
    # Provide two empty files
rv = client.post(
url_for('api.convert_transcription', call_unique_id=callId),
data=dict(
in_xml_file=(io.BytesIO(b""), 'voyage-in.xml'),
out_xml_file=(io.BytesIO(b""), 'voyage-out.xml'),
),
follow_redirects=True,
headers=[('Authentication-Token', user_token)]
)
assert rv.status_code == 400
def test_json_conversion_fail_one_file(user_token, client):
callId = "myid"
# Provide only one file
rv = client.post(
url_for('api.convert_transcription', call_unique_id=callId),
data=dict(
in_xml_file=(open(
"tests/fixtures/voyage-in.xml", 'rb'), 'voyage-in.xml'),
),
follow_redirects=True,
headers=[('Authentication-Token', user_token)]
)
assert rv.status_code == 400
| 28.525316 | 79 | 0.611493 | 545 | 4,507 | 4.866055 | 0.166972 | 0.036199 | 0.027149 | 0.078431 | 0.822021 | 0.78997 | 0.74095 | 0.739819 | 0.702489 | 0.688537 | 0 | 0.009406 | 0.245174 | 4,507 | 157 | 80 | 28.707006 | 0.770135 | 0.028178 | 0 | 0.675439 | 0 | 0 | 0.233371 | 0.063771 | 0 | 0 | 0 | 0 | 0.149123 | 1 | 0.052632 | false | 0 | 0.035088 | 0 | 0.087719 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3f851a6df5e9877ebe42823cc94997020d663d4b | 183 | py | Python | callback_plugins/__init__.py | SamuelMwangiW/ansible-role-degoss | 0e9252ecb1319a6bd778d171fa0944b724616984 | [
"Apache-2.0",
"MIT"
] | 36 | 2017-02-21T03:24:10.000Z | 2021-09-07T11:30:15.000Z | callback_plugins/__init__.py | SamuelMwangiW/ansible-role-degoss | 0e9252ecb1319a6bd778d171fa0944b724616984 | [
"Apache-2.0",
"MIT"
] | 33 | 2017-02-15T02:18:35.000Z | 2021-03-13T02:39:05.000Z | callback_plugins/__init__.py | SamuelMwangiW/ansible-role-degoss | 0e9252ecb1319a6bd778d171fa0944b724616984 | [
"Apache-2.0",
"MIT"
] | 12 | 2017-04-05T22:08:23.000Z | 2021-02-05T02:41:12.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import, print_function
# NOTE This is a stub file to allow unit testing of the degoss callback plugin.
| 26.142857 | 79 | 0.743169 | 29 | 183 | 4.482759 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.006536 | 0.163934 | 183 | 6 | 80 | 30.5 | 0.843137 | 0.655738 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 6 |
3f9a70478f01260528ecf6341399403f4cd94294 | 20 | py | Python | srpp/__init__.py | CREEi-models/srpp | b4fc14637d758740be76b0fe48c7073cf10548a7 | [
"MIT"
] | null | null | null | srpp/__init__.py | CREEi-models/srpp | b4fc14637d758740be76b0fe48c7073cf10548a7 | [
"MIT"
] | 2 | 2020-04-29T14:44:10.000Z | 2020-04-29T14:47:53.000Z | srpp/__init__.py | CREEi-models/srpp | b4fc14637d758740be76b0fe48c7073cf10548a7 | [
"MIT"
] | null | null | null | from .srpp import * | 20 | 20 | 0.7 | 3 | 20 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 20 | 1 | 20 | 20 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3fbbfe47e76c8a46b5c211038675f8315b5c8053 | 7,174 | py | Python | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/jumbo/phys/Phys_OOK.py | lmnotran/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 82 | 2016-06-29T17:24:43.000Z | 2021-04-16T06:49:17.000Z | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/jumbo/phys/Phys_OOK.py | lmnotran/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 6 | 2022-01-12T18:22:08.000Z | 2022-03-25T10:19:27.000Z | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/jumbo/phys/Phys_OOK.py | lmnotran/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 56 | 2016-08-02T10:50:50.000Z | 2021-07-19T08:57:34.000Z | from pyradioconfig.calculator_model_framework.interfaces.iphy import IPhy
from pyradioconfig.parts.common.phys.phy_common import PHY_COMMON_FRAME_INTERNAL
from py_2_and_3_compatibility import *
class PHYS_OOK(IPhy):
def OOK_Base(self, phy, model):
# Add values to existing inputs
phy.profile_inputs.base_frequency_hz.value = long(915000000)
phy.profile_inputs.baudrate_tol_ppm.value = 1000
phy.profile_inputs.channel_spacing_hz.value = 1000000
phy.profile_inputs.deviation.value = 0
phy.profile_inputs.diff_encoding_mode.value = model.vars.diff_encoding_mode.var_enum.DISABLED
phy.profile_inputs.dsss_chipping_code.value = long(0)
phy.profile_inputs.dsss_len.value = 0
phy.profile_inputs.dsss_spreading_factor.value = 0
phy.profile_inputs.fsk_symbol_map.value = model.vars.fsk_symbol_map.var_enum.MAP0
phy.profile_inputs.modulation_type.value = model.vars.modulation_type.var_enum.OOK
phy.profile_inputs.preamble_pattern.value = 1
phy.profile_inputs.preamble_pattern_len.value = 2
phy.profile_inputs.preamble_length.value = 40
phy.profile_inputs.rx_xtal_error_ppm.value = 0
phy.profile_inputs.shaping_filter.value = model.vars.shaping_filter.var_enum.NONE
phy.profile_inputs.shaping_filter_param.value = 1.5
phy.profile_inputs.syncword_0.value = long(0xf68d)
phy.profile_inputs.syncword_1.value = long(0x0)
phy.profile_inputs.syncword_length.value = 16
phy.profile_inputs.tx_xtal_error_ppm.value = 0
phy.profile_inputs.xtal_frequency_hz.value = 38400000
phy.profile_inputs.errors_in_timing_window.value = 0
PHY_COMMON_FRAME_INTERNAL(phy, model)
# def PHY_Internal_915M_OOK_100kbps(self, model):
# phy = self._makePhy(model, model.profiles.Base, '915M OOK 100kbps')
#
# self.OOK_Base(phy, model)
#
# # Add values to existing inputs
# phy.profile_inputs.bitrate.value = 100000
# phy.profile_inputs.bandwidth_hz.value = 600000
def PHY_Datasheet_915M_OOK_4p8kbps(self, model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='915M OOK 4.8kbps', phy_name=phy_name)
self.OOK_Base(phy, model)
# Add values to existing inputs
phy.profile_inputs.bitrate.value = 4800
phy.profile_inputs.bandwidth_hz.value = 306000
# Other overrides needed to maintain validated PHY despite OOK calculator changes
phy.profile_inputs.symbols_in_timing_window.value = 16
phy.profile_inputs.timing_resync_period.value = 2
phy.profile_inputs.frequency_comp_mode.value = model.vars.frequency_comp_mode.var_enum.INTERNAL_LOCK_AT_FRAME_DETECT
phy.profile_inputs.ook_slicer_level.value = 2
phy.profile_inputs.errors_in_timing_window.value = 1
model.vars.dynamic_slicer_enabled.value_forced = False
phy.profile_outputs.MODEM_CF_DEC0.override = 0
phy.profile_outputs.MODEM_CF_DEC1.override = 10
phy.profile_outputs.MODEM_CF_DEC2.override = 47
phy.profile_outputs.MODEM_SRCCHF_SRCRATIO1.override = 128
phy.profile_outputs.MODEM_SRCCHF_SRCRATIO2.override = 862
phy.profile_outputs.MODEM_SRCCHF_SRCENABLE1.override = 0
phy.profile_outputs.MODEM_CTRL2_DATAFILTER.override = 4
phy.profile_outputs.MODEM_TIMING_OFFSUBNUM.override = 3
phy.profile_outputs.MODEM_TIMING_OFFSUBDEN.override = 2
phy.profile_outputs.MODEM_RXBR_RXBRNUM.override = 0
phy.profile_outputs.MODEM_CTRL5_RESYNCBAUDTRANS.override = 0
phy.profile_outputs.AGC_CTRL0_MODE.override = 1
phy.profile_outputs.AGC_GAINSTEPLIM_CFLOOPSTEPMAX.override = 5
    def PHY_Datasheet_433M_OOK_4p8kbps(self, model, phy_name=None):
        phy = self._makePhy(model, model.profiles.Base, readable_name='434M OOK 4.8kbps', phy_name=phy_name)
        self.OOK_Base(phy, model)
        # Add values to existing inputs
        phy.profile_inputs.base_frequency_hz.value = long(433000000)
        phy.profile_inputs.bitrate.value = 4800
        phy.profile_inputs.bandwidth_hz.value = 306000
# Other overrides needed to maintain validated PHY despite OOK calculator changes
phy.profile_inputs.symbols_in_timing_window.value = 16
phy.profile_inputs.timing_resync_period.value = 2
phy.profile_inputs.frequency_comp_mode.value = model.vars.frequency_comp_mode.var_enum.INTERNAL_LOCK_AT_FRAME_DETECT
phy.profile_inputs.ook_slicer_level.value = 2
phy.profile_inputs.errors_in_timing_window.value = 1
model.vars.dynamic_slicer_enabled.value_forced = False
phy.profile_outputs.MODEM_CF_DEC0.override = 0
phy.profile_outputs.MODEM_CF_DEC1.override = 10
phy.profile_outputs.MODEM_CF_DEC2.override = 47
phy.profile_outputs.MODEM_SRCCHF_SRCRATIO1.override = 128
phy.profile_outputs.MODEM_SRCCHF_SRCRATIO2.override = 862
phy.profile_outputs.MODEM_SRCCHF_SRCENABLE1.override = 0
phy.profile_outputs.MODEM_CTRL2_DATAFILTER.override = 4
phy.profile_outputs.MODEM_TIMING_OFFSUBNUM.override = 3
phy.profile_outputs.MODEM_TIMING_OFFSUBDEN.override = 2
phy.profile_outputs.MODEM_RXBR_RXBRNUM.override = 0
phy.profile_outputs.MODEM_CTRL5_RESYNCBAUDTRANS.override = 0
phy.profile_outputs.AGC_CTRL0_MODE.override = 1
phy.profile_outputs.AGC_GAINSTEPLIM_CFLOOPSTEPMAX.override = 5
### Customer OOK reference PHYs, use dynamic slicing ###
def PHY_Reference_433M_OOK_4p8kbps(self,model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='433M OOK 4.8kbps', phy_name=phy_name)
self.OOK_Base(phy, model)
phy.profile_inputs.base_frequency_hz.value = long(433000000)
phy.profile_inputs.bitrate.value = 4800
phy.profile_inputs.bandwidth_hz.value = 360000
return phy
def PHY_Reference_433M_OOK_10kbps(self,model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='433M OOK 10kbps', phy_name=phy_name)
self.OOK_Base(phy, model)
phy.profile_inputs.base_frequency_hz.value = long(433000000)
phy.profile_inputs.bitrate.value = 10000
phy.profile_inputs.bandwidth_hz.value = 750000
return phy
def PHY_Reference_915M_OOK_4p8kbps(self,model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='915M OOK 4.8kbps', phy_name=phy_name)
self.OOK_Base(phy, model)
phy.profile_inputs.base_frequency_hz.value = long(915000000)
phy.profile_inputs.bitrate.value = 4800
phy.profile_inputs.bandwidth_hz.value = 360000
return phy
def PHY_Reference_915M_OOK_10kbps(self,model, phy_name=None):
phy = self._makePhy(model, model.profiles.Base, readable_name='915M OOK 10kbps', phy_name=phy_name)
self.OOK_Base(phy, model)
phy.profile_inputs.base_frequency_hz.value = long(915000000)
phy.profile_inputs.bitrate.value = 10000
phy.profile_inputs.bandwidth_hz.value = 750000
return phy
3fcc347f748e8bad983e45f9682b325bda52c24b | 29 | py | Python | pymapd/_utils.py | vishalbelsare/pymapd | 36f971f8b49a33287ccc341a30f8c43a36d379a3 | ["Apache-2.0"] | 73 | 2018-09-27T14:58:46.000Z | 2021-12-17T02:35:23.000Z
from omnisci._utils import *
3fe4112ad883d67df68167f043b79c88a57becd7 | 36 | py | Python | examples/python/hellowithdeps.py | akulakhan/http-trigger | 09d172eb5e0527c24d8cc5320e1f8e8e554c7009 | ["Apache-2.0"] | 9 | 2018-09-29T22:31:44.000Z | 2021-07-27T22:34:52.000Z
from hellowithdepshelper import foo
b75808601c5d5b09c344ce6a721758e1ed42614c | 84 | py | Python | py_tea_code/3.mypro-modules/test03.py | qq4215279/study_python | b0eb9dedfc4abb2fd6c024a599e7375869c3d77a | ["Apache-2.0"] | null | null | null
import test02
import test02
print("####")
import importlib
importlib.reload(test02)
b78ae1b0c450452460ea433a2cc5fb4127b47b00 | 131 | py | Python | ayesaac/services/common/__init__.py | jessi678/aye-saac | 30745f2a72df87487bdb3a937e5e41ab3e3f397b | ["BSD-3-Clause"] | 2 | 2021-01-27T19:17:28.000Z | 2021-04-26T16:06:42.000Z
from .queue_manager import QueueManager
from .run_service_wrapper import run_service_wrapper
from .service_base import ServiceBase
b7ab7192872fa71e991bd445fe68df8a39989375 | 49549 | py | Python | core/domain/skill_validators_test.py | luccasparoni/oppia | 988f7c1e818faf774ec424e33b5dd0267c40237b | ["Apache-2.0"] | null | null | null
# coding: utf-8
#
# Copyright 2020 The Oppia Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit tests for core.domain.skill_validators."""
from __future__ import absolute_import # pylint: disable=import-only-modules
from __future__ import unicode_literals # pylint: disable=import-only-modules
import datetime
from constants import constants
from core.domain import prod_validation_jobs_one_off
from core.domain import skill_domain
from core.domain import skill_services
from core.domain import state_domain
from core.platform import models
from core.tests import test_utils
import feconf
import python_utils
datastore_services = models.Registry.import_datastore_services()
USER_EMAIL = 'useremail@example.com'
USER_NAME = 'username'
(question_models, skill_models, user_models) = models.Registry.import_models([
models.NAMES.question, models.NAMES.skill, models.NAMES.user
])
class SkillModelValidatorTests(test_utils.AuditJobsTestBase):
def setUp(self):
super(SkillModelValidatorTests, self).setUp()
self.signup(self.OWNER_EMAIL, self.OWNER_USERNAME)
self.signup(self.ADMIN_EMAIL, self.ADMIN_USERNAME)
self.owner_id = self.get_user_id_from_email(self.OWNER_EMAIL)
self.admin_id = self.get_user_id_from_email(self.ADMIN_EMAIL)
rubrics = [
skill_domain.Rubric(
constants.SKILL_DIFFICULTIES[0], ['Explanation 1']),
skill_domain.Rubric(
constants.SKILL_DIFFICULTIES[1], ['Explanation 2']),
skill_domain.Rubric(
constants.SKILL_DIFFICULTIES[2], ['Explanation 3'])]
self.set_admins([self.ADMIN_USERNAME])
language_codes = ['ar', 'en', 'en']
skills = [skill_domain.Skill.create_default_skill(
'%s' % i,
'description %d' % i,
rubrics
) for i in python_utils.RANGE(3)]
for i in python_utils.RANGE(2):
skill = skill_domain.Skill.create_default_skill(
'%s' % (i + 3),
'description %d' % (i + 3),
rubrics)
skill_services.save_new_skill(self.owner_id, skill)
example_1 = skill_domain.WorkedExample(
state_domain.SubtitledHtml('2', '<p>Example Question 1</p>'),
state_domain.SubtitledHtml('3', '<p>Example Explanation 1</p>')
)
skill_contents = skill_domain.SkillContents(
state_domain.SubtitledHtml(
'1', '<p>Explanation</p>'), [example_1],
state_domain.RecordedVoiceovers.from_dict({
'voiceovers_mapping': {
'1': {}, '2': {}, '3': {}
}
}),
state_domain.WrittenTranslations.from_dict({
'translations_mapping': {
'1': {}, '2': {}, '3': {}
}
})
)
misconception_dict = {
'id': 0, 'name': 'name', 'notes': '<p>notes</p>',
'feedback': '<p>default_feedback</p>',
'must_be_addressed': True}
misconception = skill_domain.Misconception.from_dict(
misconception_dict)
for index, skill in enumerate(skills):
skill.language_code = language_codes[index]
skill.skill_contents = skill_contents
skill.add_misconception(misconception)
if index < 2:
skill.superseding_skill_id = '%s' % (index + 3)
skill.all_questions_merged = True
skill_services.save_new_skill(self.owner_id, skill)
self.model_instance_0 = skill_models.SkillModel.get_by_id('0')
self.model_instance_1 = skill_models.SkillModel.get_by_id('1')
self.model_instance_2 = skill_models.SkillModel.get_by_id('2')
self.superseding_skill_0 = skill_models.SkillModel.get_by_id('3')
self.superseding_skill_1 = skill_models.SkillModel.get_by_id('4')
self.job_class = (
prod_validation_jobs_one_off.SkillModelAuditOneOffJob)
def test_standard_operation(self):
skill_services.update_skill(
self.admin_id, '0', [skill_domain.SkillChange({
'cmd': 'update_skill_property',
'property_name': 'description',
'new_value': 'New description',
'old_value': 'description 0'
})], 'Changes.')
expected_output = [
u'[u\'fully-validated SkillModel\', 5]']
self.process_and_flush_pending_mapreduce_tasks()
self.run_job_and_check_output(
expected_output, sort=False, literal_eval=False)
def test_model_with_created_on_greater_than_last_updated(self):
self.model_instance_0.created_on = (
self.model_instance_0.last_updated + datetime.timedelta(days=1))
self.model_instance_0.commit(
feconf.SYSTEM_COMMITTER_ID, 'created_on test', [])
expected_output = [
(
u'[u\'failed validation check for time field relation check '
'of SkillModel\', '
'[u\'Entity id %s: The created_on field has a value '
'%s which is greater than the value '
'%s of last_updated field\']]') % (
self.model_instance_0.id,
self.model_instance_0.created_on,
self.model_instance_0.last_updated
),
u'[u\'fully-validated SkillModel\', 4]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_model_with_last_updated_greater_than_current_time(self):
self.model_instance_0.delete(feconf.SYSTEM_COMMITTER_ID, 'delete')
self.model_instance_1.delete(feconf.SYSTEM_COMMITTER_ID, 'delete')
self.superseding_skill_0.delete(feconf.SYSTEM_COMMITTER_ID, 'delete')
self.superseding_skill_1.delete(feconf.SYSTEM_COMMITTER_ID, 'delete')
expected_output = [
'[u\'fully-validated SkillModel\', 4]',
(
u'[u\'failed validation check for current time check of '
'SkillModel\', '
'[u\'Entity id %s: The last_updated field has a '
'value %s which is greater than the time when '
'the job was run\']]'
) % (self.model_instance_2.id, self.model_instance_2.last_updated)
]
mocked_datetime = datetime.datetime.utcnow() - datetime.timedelta(
hours=13)
with datastore_services.mock_datetime_for_datastore(mocked_datetime):
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_model_with_invalid_skill_schema(self):
expected_output = [
(
u'[u\'failed validation check for domain object check of '
'SkillModel\', '
'[u\'Entity id %s: Entity fails domain validation with the '
'error Invalid language code: %s\']]'
) % (self.model_instance_0.id, self.model_instance_0.language_code),
u'[u\'fully-validated SkillModel\', 4]']
with self.swap(
constants, 'SUPPORTED_CONTENT_LANGUAGES', [{
'code': 'en', 'description': 'English'}]):
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_model_with_invalid_all_questions_merged(self):
question_models.QuestionSkillLinkModel(
id='question1-0', question_id='question1', skill_id='0',
skill_difficulty=0.5
).put_for_human()
expected_output = [
(
u'[u\'failed validation check for all questions merged '
'check of SkillModel\', '
'[u"Entity id 0: all_questions_merged is True but the '
'following question ids are still linked to the skill: '
'[u\'question1\']"]]'
), u'[u\'fully-validated SkillModel\', 4]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_missing_superseding_skill_model_failure(self):
self.superseding_skill_0.delete(feconf.SYSTEM_COMMITTER_ID, '', [])
expected_output = [
(
u'[u\'failed validation check for superseding_skill_ids field '
'check of SkillModel\', '
'[u"Entity id 0: based on field superseding_skill_ids '
'having value 3, expected model SkillModel with id 3 but it '
'doesn\'t exist"]]'),
u'[u\'fully-validated SkillModel\', 4]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_missing_skill_commit_log_entry_model_failure(self):
skill_services.update_skill(
self.admin_id, '0', [skill_domain.SkillChange({
'cmd': 'update_skill_property',
'property_name': 'description',
'new_value': 'New description',
'old_value': 'description 0'
})], 'Changes.')
self.process_and_flush_pending_mapreduce_tasks()
skill_models.SkillCommitLogEntryModel.get_by_id(
'skill-0-1').delete()
expected_output = [
(
u'[u\'failed validation check for '
'skill_commit_log_entry_ids field check of '
'SkillModel\', '
'[u"Entity id 0: based on field '
'skill_commit_log_entry_ids having value '
'skill-0-1, expected model SkillCommitLogEntryModel '
'with id skill-0-1 but it doesn\'t exist"]]'),
u'[u\'fully-validated SkillModel\', 4]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_missing_summary_model_failure(self):
skill_models.SkillSummaryModel.get_by_id('0').delete()
expected_output = [
(
u'[u\'failed validation check for skill_summary_ids '
'field check of SkillModel\', '
'[u"Entity id 0: based on field skill_summary_ids having '
'value 0, expected model SkillSummaryModel with id 0 '
'but it doesn\'t exist"]]'),
u'[u\'fully-validated SkillModel\', 4]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_missing_snapshot_metadata_model_failure(self):
skill_models.SkillSnapshotMetadataModel.get_by_id(
'0-1').delete()
expected_output = [
(
u'[u\'failed validation check for snapshot_metadata_ids '
'field check of SkillModel\', '
'[u"Entity id 0: based on field snapshot_metadata_ids having '
'value 0-1, expected model SkillSnapshotMetadataModel '
'with id 0-1 but it doesn\'t exist"]]'),
u'[u\'fully-validated SkillModel\', 4]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_missing_snapshot_content_model_failure(self):
skill_models.SkillSnapshotContentModel.get_by_id(
'0-1').delete()
expected_output = [
(
u'[u\'failed validation check for snapshot_content_ids '
'field check of SkillModel\', '
'[u"Entity id 0: based on field snapshot_content_ids having '
'value 0-1, expected model SkillSnapshotContentModel '
'with id 0-1 but it doesn\'t exist"]]'),
u'[u\'fully-validated SkillModel\', 4]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
class SkillSnapshotMetadataModelValidatorTests(
test_utils.AuditJobsTestBase):
def setUp(self):
super(SkillSnapshotMetadataModelValidatorTests, self).setUp()
self.signup(self.OWNER_EMAIL, self.OWNER_USERNAME)
self.signup(USER_EMAIL, USER_NAME)
self.owner_id = self.get_user_id_from_email(self.OWNER_EMAIL)
self.user_id = self.get_user_id_from_email(USER_EMAIL)
self.signup(self.ADMIN_EMAIL, self.ADMIN_USERNAME)
self.admin_id = self.get_user_id_from_email(self.ADMIN_EMAIL)
self.set_admins([self.ADMIN_USERNAME])
rubrics = [
skill_domain.Rubric(
constants.SKILL_DIFFICULTIES[0], ['Explanation 1']),
skill_domain.Rubric(
constants.SKILL_DIFFICULTIES[1], ['Explanation 2']),
skill_domain.Rubric(
constants.SKILL_DIFFICULTIES[2], ['Explanation 3'])]
language_codes = ['ar', 'en', 'en']
skills = [skill_domain.Skill.create_default_skill(
'%s' % i,
'description %d' % i,
rubrics
) for i in python_utils.RANGE(3)]
example_1 = skill_domain.WorkedExample(
state_domain.SubtitledHtml('2', '<p>Example Question 1</p>'),
state_domain.SubtitledHtml('3', '<p>Example Explanation 1</p>')
)
skill_contents = skill_domain.SkillContents(
state_domain.SubtitledHtml(
'1', '<p>Explanation</p>'), [example_1],
state_domain.RecordedVoiceovers.from_dict({
'voiceovers_mapping': {
'1': {}, '2': {}, '3': {}
}
}),
state_domain.WrittenTranslations.from_dict({
'translations_mapping': {
'1': {}, '2': {}, '3': {}
}
})
)
misconception_dict = {
'id': 0, 'name': 'name', 'notes': '<p>notes</p>',
'feedback': '<p>default_feedback</p>',
'must_be_addressed': True}
misconception = skill_domain.Misconception.from_dict(
misconception_dict)
for index, skill in enumerate(skills):
skill.language_code = language_codes[index]
skill.skill_contents = skill_contents
skill.add_misconception(misconception)
if index == 0:
skill_services.save_new_skill(self.user_id, skill)
else:
skill_services.save_new_skill(self.owner_id, skill)
self.model_instance_0 = (
skill_models.SkillSnapshotMetadataModel.get_by_id(
'0-1'))
self.model_instance_1 = (
skill_models.SkillSnapshotMetadataModel.get_by_id(
'1-1'))
self.model_instance_2 = (
skill_models.SkillSnapshotMetadataModel.get_by_id(
'2-1'))
self.job_class = (
prod_validation_jobs_one_off
.SkillSnapshotMetadataModelAuditOneOffJob)
def test_standard_operation(self):
skill_services.update_skill(
self.admin_id, '0', [skill_domain.SkillChange({
'cmd': 'update_skill_property',
'property_name': 'description',
'new_value': 'New description',
'old_value': 'description 0'
})], 'Changes.')
expected_output = [
u'[u\'fully-validated SkillSnapshotMetadataModel\', 4]']
self.process_and_flush_pending_mapreduce_tasks()
self.run_job_and_check_output(
expected_output, sort=False, literal_eval=False)
def test_model_with_committer_id_migration_bot(self):
self.model_instance_1.committer_id = feconf.MIGRATION_BOT_USER_ID
self.model_instance_1.put_for_bot()
expected_output = [
u'[u\'fully-validated SkillSnapshotMetadataModel\', 3]']
self.process_and_flush_pending_mapreduce_tasks()
self.run_job_and_check_output(
expected_output, sort=False, literal_eval=False)
def test_model_with_pseudo_committer_id(self):
self.model_instance_1.committer_id = self.PSEUDONYMOUS_ID
self.model_instance_1.put_for_bot()
expected_output = [
u'[u\'fully-validated SkillSnapshotMetadataModel\', 3]']
self.process_and_flush_pending_mapreduce_tasks()
self.run_job_and_check_output(
expected_output, sort=False, literal_eval=False)
def test_model_with_created_on_greater_than_last_updated(self):
self.model_instance_0.created_on = (
self.model_instance_0.last_updated + datetime.timedelta(days=1))
self.model_instance_0.put_for_human()
expected_output = [(
u'[u\'failed validation check for time field relation check '
'of SkillSnapshotMetadataModel\', '
'[u\'Entity id %s: The created_on field has a value '
'%s which is greater than the value '
'%s of last_updated field\']]') % (
self.model_instance_0.id,
self.model_instance_0.created_on,
self.model_instance_0.last_updated
), (
u'[u\'fully-validated '
'SkillSnapshotMetadataModel\', 2]')]
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_model_with_last_updated_greater_than_current_time(self):
self.model_instance_1.delete()
self.model_instance_2.delete()
expected_output = [(
u'[u\'failed validation check for current time check of '
'SkillSnapshotMetadataModel\', '
'[u\'Entity id %s: The last_updated field has a '
'value %s which is greater than the time when the job was run\']]'
) % (self.model_instance_0.id, self.model_instance_0.last_updated)]
mocked_datetime = datetime.datetime.utcnow() - datetime.timedelta(
hours=13)
with datastore_services.mock_datetime_for_datastore(mocked_datetime):
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_missing_skill_model_failure(self):
skill_models.SkillModel.get_by_id('0').delete(
self.user_id, '', [])
expected_output = [
(
u'[u\'failed validation check for skill_ids '
'field check of SkillSnapshotMetadataModel\', '
'[u"Entity id 0-1: based on field skill_ids '
'having value 0, expected model SkillModel with '
'id 0 but it doesn\'t exist", u"Entity id 0-2: based on field '
'skill_ids having value 0, expected model '
'SkillModel with id 0 but it doesn\'t exist"]]'
), (
u'[u\'fully-validated '
'SkillSnapshotMetadataModel\', 2]')]
self.run_job_and_check_output(
expected_output, literal_eval=True)
def test_missing_committer_model_failure(self):
user_models.UserSettingsModel.get_by_id(self.user_id).delete()
expected_output = [
(
u'[u\'failed validation check for committer_ids field '
'check of SkillSnapshotMetadataModel\', '
'[u"Entity id 0-1: based on field committer_ids having '
'value %s, expected model UserSettingsModel with id %s '
'but it doesn\'t exist"]]'
) % (self.user_id, self.user_id), (
u'[u\'fully-validated '
'SkillSnapshotMetadataModel\', 2]')]
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_invalid_skill_version_in_model_id(self):
model_with_invalid_version_in_id = (
skill_models.SkillSnapshotMetadataModel(
id='0-3', committer_id=self.owner_id, commit_type='edit',
commit_message='msg', commit_cmds=[{}]))
model_with_invalid_version_in_id.put_for_human()
expected_output = [
(
u'[u\'failed validation check for skill model '
'version check of SkillSnapshotMetadataModel\', '
'[u\'Entity id 0-3: Skill model corresponding to '
'id 0 has a version 1 which is less than the version 3 in '
'snapshot metadata model id\']]'
), (
u'[u\'fully-validated SkillSnapshotMetadataModel\', '
'3]')]
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_model_with_invalid_commit_cmd_schema(self):
self.model_instance_0.commit_cmds = [{
'cmd': 'add_skill_misconception'
}, {
'cmd': 'delete_skill_misconception',
'invalid_attribute': 'invalid'
}]
self.model_instance_0.put_for_human()
expected_output = [
(
u'[u\'failed validation check for commit cmd '
'delete_skill_misconception check of '
'SkillSnapshotMetadataModel\', '
'[u"Entity id 0-1: Commit command domain validation '
'for command: {u\'cmd\': u\'delete_skill_misconception\', '
'u\'invalid_attribute\': u\'invalid\'} failed with error: '
'The following required attributes are missing: '
'misconception_id, The following extra attributes are present: '
'invalid_attribute"]]'
), (
u'[u\'failed validation check for commit cmd '
'add_skill_misconception check of '
'SkillSnapshotMetadataModel\', '
'[u"Entity id 0-1: Commit command domain validation '
'for command: {u\'cmd\': u\'add_skill_misconception\'} '
'failed with error: The following required attributes '
'are missing: new_misconception_dict"]]'
), u'[u\'fully-validated SkillSnapshotMetadataModel\', 2]']
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
class SkillSnapshotContentModelValidatorTests(test_utils.AuditJobsTestBase):
def setUp(self):
super(SkillSnapshotContentModelValidatorTests, self).setUp()
self.signup(self.OWNER_EMAIL, self.OWNER_USERNAME)
self.owner_id = self.get_user_id_from_email(self.OWNER_EMAIL)
self.signup(self.ADMIN_EMAIL, self.ADMIN_USERNAME)
self.admin_id = self.get_user_id_from_email(self.ADMIN_EMAIL)
self.set_admins([self.ADMIN_USERNAME])
rubrics = [
skill_domain.Rubric(
constants.SKILL_DIFFICULTIES[0], ['Explanation 1']),
skill_domain.Rubric(
constants.SKILL_DIFFICULTIES[1], ['Explanation 2']),
skill_domain.Rubric(
constants.SKILL_DIFFICULTIES[2], ['Explanation 3'])]
language_codes = ['ar', 'en', 'en']
skills = [skill_domain.Skill.create_default_skill(
'%s' % i,
'description %d' % i,
rubrics
) for i in python_utils.RANGE(3)]
example_1 = skill_domain.WorkedExample(
state_domain.SubtitledHtml('2', '<p>Example Question 1</p>'),
state_domain.SubtitledHtml('3', '<p>Example Explanation 1</p>')
)
skill_contents = skill_domain.SkillContents(
state_domain.SubtitledHtml(
'1', '<p>Explanation</p>'), [example_1],
state_domain.RecordedVoiceovers.from_dict({
'voiceovers_mapping': {
'1': {}, '2': {}, '3': {}
}
}),
state_domain.WrittenTranslations.from_dict({
'translations_mapping': {
'1': {}, '2': {}, '3': {}
}
})
)
misconception_dict = {
'id': 0, 'name': 'name', 'notes': '<p>notes</p>',
'feedback': '<p>default_feedback</p>',
'must_be_addressed': True}
misconception = skill_domain.Misconception.from_dict(
misconception_dict)
for index, skill in enumerate(skills):
skill.language_code = language_codes[index]
skill.skill_contents = skill_contents
skill.add_misconception(misconception)
skill_services.save_new_skill(self.owner_id, skill)
self.model_instance_0 = (
skill_models.SkillSnapshotContentModel.get_by_id(
'0-1'))
self.model_instance_1 = (
skill_models.SkillSnapshotContentModel.get_by_id(
'1-1'))
self.model_instance_2 = (
skill_models.SkillSnapshotContentModel.get_by_id(
'2-1'))
self.job_class = (
prod_validation_jobs_one_off
.SkillSnapshotContentModelAuditOneOffJob)
def test_standard_operation(self):
skill_services.update_skill(
self.admin_id, '0', [skill_domain.SkillChange({
'cmd': 'update_skill_property',
'property_name': 'description',
'new_value': 'New description',
'old_value': 'description 0'
})], 'Changes.')
expected_output = [
u'[u\'fully-validated SkillSnapshotContentModel\', 4]']
self.process_and_flush_pending_mapreduce_tasks()
self.run_job_and_check_output(
expected_output, sort=False, literal_eval=False)
def test_model_with_created_on_greater_than_last_updated(self):
self.model_instance_0.created_on = (
self.model_instance_0.last_updated + datetime.timedelta(days=1))
self.model_instance_0.put()
expected_output = [(
u'[u\'failed validation check for time field relation check '
'of SkillSnapshotContentModel\', '
'[u\'Entity id %s: The created_on field has a value '
'%s which is greater than the value '
'%s of last_updated field\']]') % (
self.model_instance_0.id,
self.model_instance_0.created_on,
self.model_instance_0.last_updated
), (
u'[u\'fully-validated '
'SkillSnapshotContentModel\', 2]')]
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_model_with_last_updated_greater_than_current_time(self):
self.model_instance_1.delete()
self.model_instance_2.delete()
expected_output = [(
u'[u\'failed validation check for current time check of '
'SkillSnapshotContentModel\', '
'[u\'Entity id %s: The last_updated field has a '
'value %s which is greater than the time when the job was run\']]'
) % (self.model_instance_0.id, self.model_instance_0.last_updated)]
mocked_datetime = datetime.datetime.utcnow() - datetime.timedelta(
hours=13)
with datastore_services.mock_datetime_for_datastore(mocked_datetime):
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_missing_skill_model_failure(self):
skill_models.SkillModel.get_by_id('0').delete(self.owner_id, '', [])
expected_output = [
(
u'[u\'failed validation check for skill_ids '
'field check of SkillSnapshotContentModel\', '
'[u"Entity id 0-1: based on field skill_ids '
'having value 0, expected model SkillModel with '
'id 0 but it doesn\'t exist", u"Entity id 0-2: based on field '
'skill_ids having value 0, expected model '
'SkillModel with id 0 but it doesn\'t exist"]]'
), (
u'[u\'fully-validated '
'SkillSnapshotContentModel\', 2]')]
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
def test_invalid_skill_version_in_model_id(self):
model_with_invalid_version_in_id = (
skill_models.SkillSnapshotContentModel(
id='0-3'))
model_with_invalid_version_in_id.content = {}
model_with_invalid_version_in_id.put()
expected_output = [
(
u'[u\'failed validation check for skill model '
'version check of SkillSnapshotContentModel\', '
'[u\'Entity id 0-3: Skill model corresponding to '
'id 0 has a version 1 which is less than '
'the version 3 in snapshot content model id\']]'
), (
u'[u\'fully-validated SkillSnapshotContentModel\', '
'3]')]
self.run_job_and_check_output(
expected_output, sort=True, literal_eval=False)
class SkillCommitLogEntryModelValidatorTests(test_utils.AuditJobsTestBase):
def setUp(self):
super(SkillCommitLogEntryModelValidatorTests, self).setUp()
self.signup(self.OWNER_EMAIL, self.OWNER_USERNAME)
self.signup(self.ADMIN_EMAIL, self.ADMIN_USERNAME)
self.owner_id = self.get_user_id_from_email(self.OWNER_EMAIL)
self.admin_id = self.get_user_id_from_email(self.ADMIN_EMAIL)
self.set_admins([self.ADMIN_USERNAME])
rubrics = [
skill_domain.Rubric(
constants.SKILL_DIFFICULTIES[0], ['Explanation 1']),
skill_domain.Rubric(
constants.SKILL_DIFFICULTIES[1], ['Explanation 2']),
skill_domain.Rubric(
constants.SKILL_DIFFICULTIES[2], ['Explanation 3'])]
language_codes = ['ar', 'en', 'en']
skills = [skill_domain.Skill.create_default_skill(
'%s' % i,
'description %d' % i,
rubrics
) for i in python_utils.RANGE(3)]
example_1 = skill_domain.WorkedExample(
state_domain.SubtitledHtml('2', '<p>Example Question 1</p>'),
state_domain.SubtitledHtml('3', '<p>Example Explanation 1</p>')
)
skill_contents = skill_domain.SkillContents(
state_domain.SubtitledHtml(
'1', '<p>Explanation</p>'), [example_1],
state_domain.RecordedVoiceovers.from_dict({
'voiceovers_mapping': {
'1': {}, '2': {}, '3': {}
}
}),
state_domain.WrittenTranslations.from_dict({
'translations_mapping': {
'1': {}, '2': {}, '3': {}
}
})
)
misconception_dict = {
'id': 0, 'name': 'name', 'notes': '<p>notes</p>',
'feedback': '<p>default_feedback</p>',
'must_be_addressed': True}
misconception = skill_domain.Misconception.from_dict(
misconception_dict)
for index, skill in enumerate(skills):
skill.language_code = language_codes[index]
skill.skill_contents = skill_contents
skill.add_misconception(misconception)
skill_services.save_new_skill(self.owner_id, skill)
self.model_instance_0 = (
skill_models.SkillCommitLogEntryModel.get_by_id(
'skill-0-1'))
self.model_instance_1 = (
skill_models.SkillCommitLogEntryModel.get_by_id(
'skill-1-1'))
self.model_instance_2 = (
skill_models.SkillCommitLogEntryModel.get_by_id(
'skill-2-1'))
self.job_class = (
prod_validation_jobs_one_off
.SkillCommitLogEntryModelAuditOneOffJob)
    def test_standard_operation(self):
        skill_services.update_skill(
            self.admin_id, '0', [skill_domain.SkillChange({
                'cmd': 'update_skill_property',
                'property_name': 'description',
                'new_value': 'New description',
                'old_value': 'description 0'
            })], 'Changes.')
        expected_output = [
            u'[u\'fully-validated SkillCommitLogEntryModel\', 4]']
        self.process_and_flush_pending_mapreduce_tasks()
        self.run_job_and_check_output(
            expected_output, sort=False, literal_eval=False)

    def test_model_with_user_id_migration_bot(self):
        self.model_instance_1.user_id = feconf.MIGRATION_BOT_USER_ID
        self.model_instance_1.put_for_bot()
        expected_output = [
            u'[u\'fully-validated SkillCommitLogEntryModel\', 3]'
        ]
        self.run_job_and_check_output(
            expected_output, sort=False, literal_eval=False)

    def test_model_with_pseudo_user_id(self):
        self.model_instance_1.user_id = self.PSEUDONYMOUS_ID
        self.model_instance_1.put_for_bot()
        expected_output = [
            u'[u\'fully-validated SkillCommitLogEntryModel\', 3]'
        ]
        self.run_job_and_check_output(
            expected_output, sort=False, literal_eval=False)

    def test_model_with_created_on_greater_than_last_updated(self):
        self.model_instance_0.created_on = (
            self.model_instance_0.last_updated + datetime.timedelta(days=1))
        self.model_instance_0.put_for_human()
        expected_output = [(
            u'[u\'failed validation check for time field relation check '
            'of SkillCommitLogEntryModel\', '
            '[u\'Entity id %s: The created_on field has a value '
            '%s which is greater than the value '
            '%s of last_updated field\']]') % (
                self.model_instance_0.id,
                self.model_instance_0.created_on,
                self.model_instance_0.last_updated
            ), u'[u\'fully-validated SkillCommitLogEntryModel\', 2]']
        self.run_job_and_check_output(
            expected_output, sort=True, literal_eval=False)

    def test_model_with_last_updated_greater_than_current_time(self):
        self.model_instance_1.delete()
        self.model_instance_2.delete()
        expected_output = [(
            u'[u\'failed validation check for current time check of '
            'SkillCommitLogEntryModel\', '
            '[u\'Entity id %s: The last_updated field has a '
            'value %s which is greater than the time when the job was run\']]'
        ) % (self.model_instance_0.id, self.model_instance_0.last_updated)]

        mocked_datetime = datetime.datetime.utcnow() - datetime.timedelta(
            hours=13)
        with datastore_services.mock_datetime_for_datastore(mocked_datetime):
            self.run_job_and_check_output(
                expected_output, sort=True, literal_eval=False)

    def test_missing_skill_model_failure(self):
        skill_models.SkillModel.get_by_id('0').delete(
            feconf.SYSTEM_COMMITTER_ID, '', [])
        expected_output = [
            (
                u'[u\'failed validation check for skill_ids field '
                'check of SkillCommitLogEntryModel\', '
                '[u"Entity id skill-0-1: based on field skill_ids '
                'having value 0, expected model SkillModel with id '
                '0 but it doesn\'t exist", u"Entity id skill-0-2: '
                'based on field skill_ids having value 0, expected '
                'model SkillModel with id 0 but it doesn\'t exist"]]'
            ), u'[u\'fully-validated SkillCommitLogEntryModel\', 2]']
        self.run_job_and_check_output(
            expected_output, sort=False, literal_eval=True)

    def test_invalid_skill_version_in_model_id(self):
        model_with_invalid_version_in_id = (
            skill_models.SkillCommitLogEntryModel.create(
                '0', 3, self.owner_id, 'edit', 'msg', [{}],
                constants.ACTIVITY_STATUS_PUBLIC, False))
        model_with_invalid_version_in_id.skill_id = '0'
        model_with_invalid_version_in_id.put_for_human()
        expected_output = [
            (
                u'[u\'failed validation check for skill model '
                'version check of SkillCommitLogEntryModel\', '
                '[u\'Entity id %s: Skill model corresponding '
                'to id 0 has a version 1 which is less than '
                'the version 3 in commit log entry model id\']]'
            ) % (model_with_invalid_version_in_id.id),
            u'[u\'fully-validated SkillCommitLogEntryModel\', 3]']
        self.run_job_and_check_output(
            expected_output, sort=True, literal_eval=False)

    def test_model_with_invalid_id(self):
        model_with_invalid_id = (
            skill_models.SkillCommitLogEntryModel(
                id='invalid-0-1',
                user_id=self.owner_id,
                commit_type='edit',
                commit_message='msg',
                commit_cmds=[{}],
                post_commit_status=constants.ACTIVITY_STATUS_PUBLIC,
                post_commit_is_private=False))
        model_with_invalid_id.skill_id = '0'
        model_with_invalid_id.put_for_human()
        expected_output = [
            (
                u'[u\'failed validation check for model id check of '
                'SkillCommitLogEntryModel\', '
                '[u\'Entity id %s: Entity id does not match regex pattern\']]'
            ) % (model_with_invalid_id.id), (
                u'[u\'failed validation check for commit cmd check of '
                'SkillCommitLogEntryModel\', [u\'Entity id invalid-0-1: '
                'No commit command domain object defined for entity with '
                'commands: [{}]\']]'),
            u'[u\'fully-validated SkillCommitLogEntryModel\', 3]']
        self.run_job_and_check_output(
            expected_output, sort=True, literal_eval=False)

    def test_model_with_invalid_commit_type(self):
        self.model_instance_0.commit_type = 'invalid'
        self.model_instance_0.put_for_human()
        expected_output = [
            (
                u'[u\'failed validation check for commit type check of '
                'SkillCommitLogEntryModel\', '
                '[u\'Entity id skill-0-1: Commit type invalid is '
                'not allowed\']]'
            ), u'[u\'fully-validated SkillCommitLogEntryModel\', 2]']
        self.run_job_and_check_output(
            expected_output, sort=True, literal_eval=False)

    def test_model_with_invalid_post_commit_status(self):
        self.model_instance_0.post_commit_status = 'invalid'
        self.model_instance_0.put_for_human()
        expected_output = [
            (
                u'[u\'failed validation check for post commit status check '
                'of SkillCommitLogEntryModel\', '
                '[u\'Entity id skill-0-1: Post commit status invalid '
                'is invalid\']]'
            ), u'[u\'fully-validated SkillCommitLogEntryModel\', 2]']
        self.run_job_and_check_output(
            expected_output, sort=True, literal_eval=False)

    def test_model_with_private_post_commit_status(self):
        self.model_instance_0.post_commit_status = 'private'
        self.model_instance_0.put_for_human()
        expected_output = [
            (
                u'[u\'failed validation check for post commit status check '
                'of SkillCommitLogEntryModel\', '
                '[u\'Entity id skill-0-1: Post commit status private '
                'is invalid\']]'
            ), u'[u\'fully-validated SkillCommitLogEntryModel\', 2]']
        self.run_job_and_check_output(
            expected_output, sort=True, literal_eval=False)

    def test_model_with_invalid_commit_cmd_schema(self):
        self.model_instance_0.commit_cmds = [{
            'cmd': 'add_skill_misconception'
        }, {
            'cmd': 'delete_skill_misconception',
            'invalid_attribute': 'invalid'
        }]
        self.model_instance_0.put_for_human()
        expected_output = [
            (
                u'[u\'failed validation check for commit cmd '
                'add_skill_misconception check of SkillCommitLogEntryModel\', '
                '[u"Entity id skill-0-1: Commit command domain validation '
                'for command: {u\'cmd\': u\'add_skill_misconception\'} '
                'failed with error: The following required attributes are '
                'missing: new_misconception_dict"]]'
            ), (
                u'[u\'failed validation check for commit cmd '
                'delete_skill_misconception check of '
                'SkillCommitLogEntryModel\', '
                '[u"Entity id skill-0-1: Commit command domain validation '
                'for command: {u\'cmd\': u\'delete_skill_misconception\', '
                'u\'invalid_attribute\': u\'invalid\'} failed with error: '
                'The following required attributes are missing: '
                'misconception_id, The following extra attributes are '
                'present: invalid_attribute"]]'
            ), u'[u\'fully-validated SkillCommitLogEntryModel\', 2]']
        self.run_job_and_check_output(
            expected_output, sort=True, literal_eval=False)

class SkillSummaryModelValidatorTests(test_utils.AuditJobsTestBase):

    def setUp(self):
        super(SkillSummaryModelValidatorTests, self).setUp()

        self.signup(self.OWNER_EMAIL, self.OWNER_USERNAME)
        self.signup(self.ADMIN_EMAIL, self.ADMIN_USERNAME)

        self.owner_id = self.get_user_id_from_email(self.OWNER_EMAIL)
        self.admin_id = self.get_user_id_from_email(self.ADMIN_EMAIL)
        self.set_admins([self.ADMIN_USERNAME])

        rubrics = [
            skill_domain.Rubric(
                constants.SKILL_DIFFICULTIES[0], ['Explanation 1']),
            skill_domain.Rubric(
                constants.SKILL_DIFFICULTIES[1], ['Explanation 2']),
            skill_domain.Rubric(
                constants.SKILL_DIFFICULTIES[2], ['Explanation 3'])]
        language_codes = ['ar', 'en', 'en']
        skills = [skill_domain.Skill.create_default_skill(
            '%s' % i,
            'description %d' % i,
            rubrics
        ) for i in python_utils.RANGE(3)]
        example_1 = skill_domain.WorkedExample(
            state_domain.SubtitledHtml('2', '<p>Example Question 1</p>'),
            state_domain.SubtitledHtml('3', '<p>Example Explanation 1</p>')
        )
        skill_contents = skill_domain.SkillContents(
            state_domain.SubtitledHtml(
                '1', '<p>Explanation</p>'), [example_1],
            state_domain.RecordedVoiceovers.from_dict({
                'voiceovers_mapping': {
                    '1': {}, '2': {}, '3': {}
                }
            }),
            state_domain.WrittenTranslations.from_dict({
                'translations_mapping': {
                    '1': {}, '2': {}, '3': {}
                }
            })
        )
        misconception_dict = {
            'id': 0, 'name': 'name', 'notes': '<p>notes</p>',
            'feedback': '<p>default_feedback</p>',
            'must_be_addressed': True}
        misconception = skill_domain.Misconception.from_dict(
            misconception_dict)
        for index, skill in enumerate(skills):
            skill.language_code = language_codes[index]
            skill.skill_contents = skill_contents
            skill.add_misconception(misconception)
            skill_services.save_new_skill(self.owner_id, skill)

        self.model_instance_0 = skill_models.SkillSummaryModel.get_by_id('0')
        self.model_instance_1 = skill_models.SkillSummaryModel.get_by_id('1')
        self.model_instance_2 = skill_models.SkillSummaryModel.get_by_id('2')

        self.job_class = (
            prod_validation_jobs_one_off.SkillSummaryModelAuditOneOffJob)
    def test_standard_operation(self):
        skill_services.update_skill(
            self.admin_id, '0', [skill_domain.SkillChange({
                'cmd': 'update_skill_property',
                'property_name': 'description',
                'new_value': 'New description',
                'old_value': 'description 0'
            })], 'Changes.')
        expected_output = [
            u'[u\'fully-validated SkillSummaryModel\', 3]']
        self.process_and_flush_pending_mapreduce_tasks()
        self.run_job_and_check_output(
            expected_output, sort=False, literal_eval=False)

    def test_model_with_created_on_greater_than_last_updated(self):
        self.model_instance_0.created_on = (
            self.model_instance_0.last_updated + datetime.timedelta(days=1))
        self.model_instance_0.put()
        expected_output = [(
            u'[u\'failed validation check for time field relation check '
            'of SkillSummaryModel\', '
            '[u\'Entity id %s: The created_on field has a value '
            '%s which is greater than the value '
            '%s of last_updated field\']]') % (
                self.model_instance_0.id,
                self.model_instance_0.created_on,
                self.model_instance_0.last_updated
            ), u'[u\'fully-validated SkillSummaryModel\', 2]']
        self.run_job_and_check_output(
            expected_output, sort=True, literal_eval=False)

    def test_model_with_last_updated_greater_than_current_time(self):
        skill_services.delete_skill(self.owner_id, '1')
        skill_services.delete_skill(self.owner_id, '2')
        expected_output = [(
            u'[u\'failed validation check for current time check of '
            'SkillSummaryModel\', '
            '[u\'Entity id %s: The last_updated field has a '
            'value %s which is greater than the time when the job was run\']]'
        ) % (self.model_instance_0.id, self.model_instance_0.last_updated)]

        mocked_datetime = datetime.datetime.utcnow() - datetime.timedelta(
            hours=13)
        with datastore_services.mock_datetime_for_datastore(mocked_datetime):
            self.run_job_and_check_output(
                expected_output, sort=True, literal_eval=False)

    def test_missing_skill_model_failure(self):
        skill_model = skill_models.SkillModel.get_by_id('0')
        skill_model.delete(feconf.SYSTEM_COMMITTER_ID, '', [])
        self.model_instance_0.skill_model_last_updated = (
            skill_model.last_updated)
        self.model_instance_0.put()
        expected_output = [
            (
                u'[u\'failed validation check for skill_ids '
                'field check of SkillSummaryModel\', '
                '[u"Entity id 0: based on field skill_ids having '
                'value 0, expected model SkillModel with id 0 but '
                'it doesn\'t exist"]]'),
            u'[u\'fully-validated SkillSummaryModel\', 2]']
        self.run_job_and_check_output(
            expected_output, sort=True, literal_eval=False)

    def test_model_with_invalid_misconception_count(self):
        self.model_instance_0.misconception_count = 10
        self.model_instance_0.put()
        expected_output = [
            (
                u'[u\'failed validation check for misconception count '
                'check of SkillSummaryModel\', '
                '[u"Entity id 0: Misconception count: 10 does not match '
                'the number of misconceptions in skill model: '
                '[{u\'id\': 0, u\'must_be_addressed\': True, '
                'u\'notes\': u\'<p>notes</p>\', u\'name\': u\'name\', '
                'u\'feedback\': u\'<p>default_feedback</p>\'}]"]]'
            ), u'[u\'fully-validated SkillSummaryModel\', 2]']
        self.run_job_and_check_output(
            expected_output, sort=True, literal_eval=False)

    def test_model_with_worked_examples_count(self):
        self.model_instance_0.worked_examples_count = 10
        self.model_instance_0.put()
        expected_output = [
            (
                u'[u\'failed validation check for worked examples '
                'count check of SkillSummaryModel\', '
                '[u"Entity id 0: Worked examples count: 10 does not '
                'match the number of worked examples in skill_contents '
                'in skill model: [{u\'explanation\': {u\'content_id\': u\'3\', '
                'u\'html\': u\'<p>Example Explanation 1</p>\'}, u\'question\': '
                '{u\'content_id\': u\'2\', u\'html\': u\'<p>Example Question '
                '1</p>\'}}]"]]'
            ), u'[u\'fully-validated SkillSummaryModel\', 2]']
        self.run_job_and_check_output(
            expected_output, sort=True, literal_eval=False)

    def test_model_with_invalid_skill_related_property(self):
        self.model_instance_0.description = 'invalid'
        self.model_instance_0.put()
        expected_output = [
            (
                u'[u\'failed validation check for description field check of '
                'SkillSummaryModel\', '
                '[u\'Entity id %s: description field in entity: invalid does '
                'not match corresponding skill description field: '
                'description 0\']]'
            ) % self.model_instance_0.id,
            u'[u\'fully-validated SkillSummaryModel\', 2]']
        self.run_job_and_check_output(
            expected_output, sort=True, literal_eval=False)
# ======================================================================
# File: api/handlers/bitly.py
# Repo: LostLuma/repo-import-test (license: MIT)
# ======================================================================
import urllib.parse
from urllib.parse import ParseResult


def bitly(parsed: ParseResult):
    query = urllib.parse.parse_qs(parsed.query)
    if "url" in query:
        return query["url"][0], True
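For context, a minimal sketch of how this query-parameter handler resolves a link's destination. The function is reproduced below so the snippet is self-contained, and the example URL is hypothetical, not taken from the project.

```python
import urllib.parse
from urllib.parse import ParseResult, urlparse


def bitly(parsed: ParseResult):
    # Returns (destination, True) when the query string carries a "url"
    # parameter; falls through to an implicit None otherwise.
    query = urllib.parse.parse_qs(parsed.query)
    if "url" in query:
        return query["url"][0], True


# Hypothetical redirect-style link whose query embeds the destination.
result = bitly(urlparse("https://bit.ly/redirect?url=https://example.com/page"))
print(result)  # ('https://example.com/page', True)
```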
# ======================================================================
# File: vnpy/api/uft/__init__.py
# Repo: ChaunceyDong/vnpy (license: MIT)
# ======================================================================
from vnpy_uft.api import *
# ======================================================================
# File: simple_seo/views.py
# Repo: danigosa/django-simple-seo (license: BSD-3-Clause)
# ======================================================================
from django.shortcuts import render_to_response

def template_test(request):
    """Render a simple template"""
    return render_to_response('simple_seo_test.html', locals())
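A sketch of how this view might be routed in a Django URLconf of the same era as `render_to_response`; the URL pattern, route name, and import path are assumptions for illustration, not part of the package.

```python
# urls.py (hypothetical): expose the test view at /seo-test/.
from django.conf.urls import url

from simple_seo import views

urlpatterns = [
    url(r'^seo-test/$', views.template_test, name='simple_seo_test'),
]
```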
# ======================================================================
# File: quantize/ttq.py
# Repo: jeffreyzpan/micronet-submission (license: Apache-2.0)
# ======================================================================
def calc_threshold(aw):
    # Threshold at 70% of the mean of the (absolute) weights passed in.
    # Alternative, max-based variant kept from the original:
    # return 0.05 * aw.max()
    return 0.7 * aw.mean()
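For context, a short sketch of how a mean-based threshold like this can drive weight ternarization. NumPy stands in for whatever tensor type the project actually uses, and the `ternarize` helper is an illustration, not part of the original module.

```python
import numpy as np


def calc_threshold(aw):
    # Threshold at 70% of the mean of the absolute weights.
    return 0.7 * aw.mean()


def ternarize(w):
    # Hypothetical helper: keep only weights whose magnitude clears the
    # threshold, mapping the tensor onto {-1, 0, +1}.
    t = calc_threshold(np.abs(w))
    return np.sign(w) * (np.abs(w) > t)


w = np.array([0.9, -0.05, 0.4, -0.8])
print(ternarize(w))  # large weights survive as +/-1, small ones drop to 0
```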
# ======================================================================
# File: octopus_deploy_swagger_client/__init__.py
# Repo: cvent/octopus-deploy-api-client (license: Apache-2.0)
# ======================================================================
# coding: utf-8
# flake8: noqa
"""
Octopus Server API
No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen) # noqa: E501
OpenAPI spec version: 2019.6.7+Branch.tags-2019.6.7.Sha.aa18dc6809953218c66f57eff7d26481d9b23d6a
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
# import apis into sdk package
from octopus_deploy_swagger_client.octopus_deploy_client.accounts_api import AccountsApi
from octopus_deploy_swagger_client.octopus_deploy_client.action_templates_api import ActionTemplatesApi
from octopus_deploy_swagger_client.octopus_deploy_client.api_keys_api import ApiKeysApi
from octopus_deploy_swagger_client.octopus_deploy_client.artifacts_api import ArtifactsApi
from octopus_deploy_swagger_client.octopus_deploy_client.authentication_api import AuthenticationApi
from octopus_deploy_swagger_client.octopus_deploy_client.certificate_configuration_api import CertificateConfigurationApi
from octopus_deploy_swagger_client.octopus_deploy_client.certificates_api import CertificatesApi
from octopus_deploy_swagger_client.octopus_deploy_client.channels_api import ChannelsApi
from octopus_deploy_swagger_client.octopus_deploy_client.cloud_template_api import CloudTemplateApi
from octopus_deploy_swagger_client.octopus_deploy_client.community_action_templates_api import CommunityActionTemplatesApi
from octopus_deploy_swagger_client.octopus_deploy_client.configuration_api import ConfigurationApi
from octopus_deploy_swagger_client.octopus_deploy_client.dashboard_configurations_api import DashboardConfigurationsApi
from octopus_deploy_swagger_client.octopus_deploy_client.dashboards_api import DashboardsApi
from octopus_deploy_swagger_client.octopus_deploy_client.deployment_processes_api import DeploymentProcessesApi
from octopus_deploy_swagger_client.octopus_deploy_client.deployments_api import DeploymentsApi
from octopus_deploy_swagger_client.octopus_deploy_client.environments_api import EnvironmentsApi
from octopus_deploy_swagger_client.octopus_deploy_client.events_api import EventsApi
from octopus_deploy_swagger_client.octopus_deploy_client.external_security_groups_api import ExternalSecurityGroupsApi
from octopus_deploy_swagger_client.octopus_deploy_client.features_configuration_api import FeaturesConfigurationApi
from octopus_deploy_swagger_client.octopus_deploy_client.feeds_api import FeedsApi
from octopus_deploy_swagger_client.octopus_deploy_client.home_api import HomeApi
from octopus_deploy_swagger_client.octopus_deploy_client.interruptions_api import InterruptionsApi
from octopus_deploy_swagger_client.octopus_deploy_client.invitations_api import InvitationsApi
from octopus_deploy_swagger_client.octopus_deploy_client.lets_encrypt_api import LetsEncryptApi
from octopus_deploy_swagger_client.octopus_deploy_client.library_variable_sets_api import LibraryVariableSetsApi
from octopus_deploy_swagger_client.octopus_deploy_client.licenses_api import LicensesApi
from octopus_deploy_swagger_client.octopus_deploy_client.lifecycles_api import LifecyclesApi
from octopus_deploy_swagger_client.octopus_deploy_client.machine_policies_api import MachinePoliciesApi
from octopus_deploy_swagger_client.octopus_deploy_client.machine_roles_api import MachineRolesApi
from octopus_deploy_swagger_client.octopus_deploy_client.machines_api import MachinesApi
from octopus_deploy_swagger_client.octopus_deploy_client.maintenance_configuration_api import MaintenanceConfigurationApi
from octopus_deploy_swagger_client.octopus_deploy_client.migration_api import MigrationApi
from octopus_deploy_swagger_client.octopus_deploy_client.nu_get_api import NuGetApi
from octopus_deploy_swagger_client.octopus_deploy_client.octopus_package_metadata_api import OctopusPackageMetadataApi
from octopus_deploy_swagger_client.octopus_deploy_client.octopus_server_nodes_api import OctopusServerNodesApi
from octopus_deploy_swagger_client.octopus_deploy_client.packages_api import PackagesApi
from octopus_deploy_swagger_client.octopus_deploy_client.performance_configuration_api import PerformanceConfigurationApi
from octopus_deploy_swagger_client.octopus_deploy_client.permissions_api import PermissionsApi
from octopus_deploy_swagger_client.octopus_deploy_client.progression_api import ProgressionApi
from octopus_deploy_swagger_client.octopus_deploy_client.project_groups_api import ProjectGroupsApi
from octopus_deploy_swagger_client.octopus_deploy_client.project_triggers_api import ProjectTriggersApi
from octopus_deploy_swagger_client.octopus_deploy_client.projects_api import ProjectsApi
from octopus_deploy_swagger_client.octopus_deploy_client.proxies_api import ProxiesApi
from octopus_deploy_swagger_client.octopus_deploy_client.releases_api import ReleasesApi
from octopus_deploy_swagger_client.octopus_deploy_client.reporting_api import ReportingApi
from octopus_deploy_swagger_client.octopus_deploy_client.scheduler_api import SchedulerApi
from octopus_deploy_swagger_client.octopus_deploy_client.scoped_user_role_api import ScopedUserRoleApi
from octopus_deploy_swagger_client.octopus_deploy_client.server_configuration_api import ServerConfigurationApi
from octopus_deploy_swagger_client.octopus_deploy_client.server_status_api import ServerStatusApi
from octopus_deploy_swagger_client.octopus_deploy_client.smtp_configuration_api import SmtpConfigurationApi
from octopus_deploy_swagger_client.octopus_deploy_client.space_home_api import SpaceHomeApi
from octopus_deploy_swagger_client.octopus_deploy_client.spaces_api import SpacesApi
from octopus_deploy_swagger_client.octopus_deploy_client.subscription_api import SubscriptionApi
from octopus_deploy_swagger_client.octopus_deploy_client.tag_sets_api import TagSetsApi
from octopus_deploy_swagger_client.octopus_deploy_client.tasks_api import TasksApi
from octopus_deploy_swagger_client.octopus_deploy_client.teams_api import TeamsApi
from octopus_deploy_swagger_client.octopus_deploy_client.tenant_variables_api import TenantVariablesApi
from octopus_deploy_swagger_client.octopus_deploy_client.tenants_api import TenantsApi
from octopus_deploy_swagger_client.octopus_deploy_client.upgrade_configuration_api import UpgradeConfigurationApi
from octopus_deploy_swagger_client.octopus_deploy_client.user_onboarding_api import UserOnboardingApi
from octopus_deploy_swagger_client.octopus_deploy_client.user_permissions_api import UserPermissionsApi
from octopus_deploy_swagger_client.octopus_deploy_client.user_roles_api import UserRolesApi
from octopus_deploy_swagger_client.octopus_deploy_client.user_teams_api import UserTeamsApi
from octopus_deploy_swagger_client.octopus_deploy_client.users_api import UsersApi
from octopus_deploy_swagger_client.octopus_deploy_client.variables_api import VariablesApi
from octopus_deploy_swagger_client.octopus_deploy_client.worker_pools_api import WorkerPoolsApi
from octopus_deploy_swagger_client.octopus_deploy_client.workers_api import WorkersApi
# import ApiClient
from octopus_deploy_swagger_client.api_client import ApiClient
from octopus_deploy_swagger_client.configuration import Configuration
# import models into sdk package
from octopus_deploy_swagger_client.models.account_resource import AccountResource
from octopus_deploy_swagger_client.models.account_usage_resource import AccountUsageResource
from octopus_deploy_swagger_client.models.action_template_category_resource import ActionTemplateCategoryResource
from octopus_deploy_swagger_client.models.action_template_parameter_resource import ActionTemplateParameterResource
from octopus_deploy_swagger_client.models.action_template_resource import ActionTemplateResource
from octopus_deploy_swagger_client.models.action_template_search_resource import ActionTemplateSearchResource
from octopus_deploy_swagger_client.models.action_template_usage_resource import ActionTemplateUsageResource
from octopus_deploy_swagger_client.models.action_update_removed_package_usage import ActionUpdateRemovedPackageUsage
from octopus_deploy_swagger_client.models.action_update_result_resource import ActionUpdateResultResource
from octopus_deploy_swagger_client.models.activity_log_element import ActivityLogElement
from octopus_deploy_swagger_client.models.activity_log_entry import ActivityLogEntry
from octopus_deploy_swagger_client.models.activity_log_tree_node import ActivityLogTreeNode
from octopus_deploy_swagger_client.models.api_key_resource import ApiKeyResource
from octopus_deploy_swagger_client.models.artifact_resource import ArtifactResource
from octopus_deploy_swagger_client.models.authentication_provider_element import AuthenticationProviderElement
from octopus_deploy_swagger_client.models.authentication_provider_that_supports_groups import AuthenticationProviderThatSupportsGroups
from octopus_deploy_swagger_client.models.authentication_resource import AuthenticationResource
from octopus_deploy_swagger_client.models.auto_deploy_release_override_resource import AutoDeployReleaseOverrideResource
from octopus_deploy_swagger_client.models.azure_environment_resource import AzureEnvironmentResource
from octopus_deploy_swagger_client.models.azure_resource_group_resource import AzureResourceGroupResource
from octopus_deploy_swagger_client.models.azure_storage_account_resource import AzureStorageAccountResource
from octopus_deploy_swagger_client.models.azure_web_site_resource_azure_web_sites_list_action import AzureWebSiteResourceAzureWebSitesListAction
from octopus_deploy_swagger_client.models.azure_web_site_slot_resource import AzureWebSiteSlotResource
from octopus_deploy_swagger_client.models.built_in_feed_stats_resource import BuiltInFeedStatsResource
from octopus_deploy_swagger_client.models.certificate_configuration_resource import CertificateConfigurationResource
from octopus_deploy_swagger_client.models.certificate_resource import CertificateResource
from octopus_deploy_swagger_client.models.certificate_usage_resource import CertificateUsageResource
from octopus_deploy_swagger_client.models.channel_resource import ChannelResource
from octopus_deploy_swagger_client.models.channel_version_rule_resource import ChannelVersionRuleResource
from octopus_deploy_swagger_client.models.cloud_template_metadata import CloudTemplateMetadata
from octopus_deploy_swagger_client.models.commit_details import CommitDetails
from octopus_deploy_swagger_client.models.community_action_template_resource import CommunityActionTemplateResource
from octopus_deploy_swagger_client.models.configuration_section_metadata import ConfigurationSectionMetadata
from octopus_deploy_swagger_client.models.control import Control
from octopus_deploy_swagger_client.models.dashboard_configuration_resource import DashboardConfigurationResource
from octopus_deploy_swagger_client.models.dashboard_environment_resource import DashboardEnvironmentResource
from octopus_deploy_swagger_client.models.dashboard_item_resource import DashboardItemResource
from octopus_deploy_swagger_client.models.dashboard_project_group_resource import DashboardProjectGroupResource
from octopus_deploy_swagger_client.models.dashboard_project_resource import DashboardProjectResource
from octopus_deploy_swagger_client.models.dashboard_resource import DashboardResource
from octopus_deploy_swagger_client.models.dashboard_tenant_resource import DashboardTenantResource
from octopus_deploy_swagger_client.models.defect_resource import DefectResource
from octopus_deploy_swagger_client.models.deployment_action_package_resource import DeploymentActionPackageResource
from octopus_deploy_swagger_client.models.deployment_action_resource import DeploymentActionResource
from octopus_deploy_swagger_client.models.deployment_environment_settings_metadata import DeploymentEnvironmentSettingsMetadata
from octopus_deploy_swagger_client.models.deployment_preview_resource import DeploymentPreviewResource
from octopus_deploy_swagger_client.models.deployment_process_resource import DeploymentProcessResource
from octopus_deploy_swagger_client.models.deployment_promomotion_tenant import DeploymentPromomotionTenant
from octopus_deploy_swagger_client.models.deployment_promotion_target import DeploymentPromotionTarget
from octopus_deploy_swagger_client.models.deployment_resource import DeploymentResource
from octopus_deploy_swagger_client.models.deployment_step_resource import DeploymentStepResource
from octopus_deploy_swagger_client.models.deployment_target_resource import DeploymentTargetResource
from octopus_deploy_swagger_client.models.deployment_template_resource import DeploymentTemplateResource
from octopus_deploy_swagger_client.models.deployment_template_step import DeploymentTemplateStep
from octopus_deploy_swagger_client.models.display_info import DisplayInfo
from octopus_deploy_swagger_client.models.document_type_document import DocumentTypeDocument
from octopus_deploy_swagger_client.models.endpoint_resource import EndpointResource
from octopus_deploy_swagger_client.models.environment_resource import EnvironmentResource
from octopus_deploy_swagger_client.models.event_agent_resource import EventAgentResource
from octopus_deploy_swagger_client.models.event_category_resource import EventCategoryResource
from octopus_deploy_swagger_client.models.event_group_resource import EventGroupResource
from octopus_deploy_swagger_client.models.event_notification_subscription import EventNotificationSubscription
from octopus_deploy_swagger_client.models.event_notification_subscription_filter import EventNotificationSubscriptionFilter
from octopus_deploy_swagger_client.models.event_reference import EventReference
from octopus_deploy_swagger_client.models.event_resource import EventResource
from octopus_deploy_swagger_client.models.extension_settings_values import ExtensionSettingsValues
from octopus_deploy_swagger_client.models.extensions_info_resource import ExtensionsInfoResource
from octopus_deploy_swagger_client.models.features_configuration_resource import FeaturesConfigurationResource
from octopus_deploy_swagger_client.models.feed_resource import FeedResource
from octopus_deploy_swagger_client.models.form import Form
from octopus_deploy_swagger_client.models.form_element import FormElement
from octopus_deploy_swagger_client.models.identity_claim_resource import IdentityClaimResource
from octopus_deploy_swagger_client.models.identity_resource import IdentityResource
from octopus_deploy_swagger_client.models.inline_response200 import InlineResponse200
from octopus_deploy_swagger_client.models.interruption_resource import InterruptionResource
from octopus_deploy_swagger_client.models.invitation_resource import InvitationResource
from octopus_deploy_swagger_client.models.library import Library
from octopus_deploy_swagger_client.models.library_variable_set_project_usage import LibraryVariableSetProjectUsage
from octopus_deploy_swagger_client.models.library_variable_set_release_usage_entry import LibraryVariableSetReleaseUsageEntry
from octopus_deploy_swagger_client.models.library_variable_set_resource import LibraryVariableSetResource
from octopus_deploy_swagger_client.models.library_variable_set_usage_entry import LibraryVariableSetUsageEntry
from octopus_deploy_swagger_client.models.library_variable_set_usage_resource import LibraryVariableSetUsageResource
from octopus_deploy_swagger_client.models.license_limit_status_resource import LicenseLimitStatusResource
from octopus_deploy_swagger_client.models.license_message_resource import LicenseMessageResource
from octopus_deploy_swagger_client.models.license_resource import LicenseResource
from octopus_deploy_swagger_client.models.license_status_resource import LicenseStatusResource
from octopus_deploy_swagger_client.models.lifecycle_progression_resource import LifecycleProgressionResource
from octopus_deploy_swagger_client.models.lifecycle_resource import LifecycleResource
from octopus_deploy_swagger_client.models.list_api_metadata import ListApiMetadata
from octopus_deploy_swagger_client.models.login_initiated_resource import LoginInitiatedResource
from octopus_deploy_swagger_client.models.machine_cleanup_policy import MachineCleanupPolicy
from octopus_deploy_swagger_client.models.machine_connection_status import MachineConnectionStatus
from octopus_deploy_swagger_client.models.machine_connectivity_policy import MachineConnectivityPolicy
from octopus_deploy_swagger_client.models.machine_deployment_preview import MachineDeploymentPreview
from octopus_deploy_swagger_client.models.machine_health_check_policy import MachineHealthCheckPolicy
from octopus_deploy_swagger_client.models.machine_policy_resource import MachinePolicyResource
from octopus_deploy_swagger_client.models.machine_resource import MachineResource
from octopus_deploy_swagger_client.models.machine_script_policy import MachineScriptPolicy
from octopus_deploy_swagger_client.models.machine_update_policy import MachineUpdatePolicy
from octopus_deploy_swagger_client.models.maintenance_configuration_resource import MaintenanceConfigurationResource
from octopus_deploy_swagger_client.models.metadata import Metadata
from octopus_deploy_swagger_client.models.migration_import_resource import MigrationImportResource
from octopus_deploy_swagger_client.models.migration_partial_export_resource import MigrationPartialExportResource
from octopus_deploy_swagger_client.models.multi_tenancy_status_resource import MultiTenancyStatusResource
from octopus_deploy_swagger_client.models.named_reference_item import NamedReferenceItem
from octopus_deploy_swagger_client.models.numeric_report_data import NumericReportData
from octopus_deploy_swagger_client.models.numeric_report_series import NumericReportSeries
from octopus_deploy_swagger_client.models.octopus_package_metadata_mapped_resource import OctopusPackageMetadataMappedResource
from octopus_deploy_swagger_client.models.octopus_server_node_details_resource import OctopusServerNodeDetailsResource
from octopus_deploy_swagger_client.models.octopus_server_node_resource import OctopusServerNodeResource
from octopus_deploy_swagger_client.models.onboarding_resource import OnboardingResource
from octopus_deploy_swagger_client.models.onboarding_task_resource import OnboardingTaskResource
from octopus_deploy_swagger_client.models.options_metadata import OptionsMetadata
from octopus_deploy_swagger_client.models.package_build_metadata import PackageBuildMetadata
from octopus_deploy_swagger_client.models.package_description_resource import PackageDescriptionResource
from octopus_deploy_swagger_client.models.package_from_built_in_feed_resource import PackageFromBuiltInFeedResource
from octopus_deploy_swagger_client.models.package_note import PackageNote
from octopus_deploy_swagger_client.models.package_note_list_resource import PackageNoteListResource
from octopus_deploy_swagger_client.models.package_notes_result import PackageNotesResult
from octopus_deploy_swagger_client.models.package_reference import PackageReference
from octopus_deploy_swagger_client.models.package_resource import PackageResource
from octopus_deploy_swagger_client.models.package_signature_resource import PackageSignatureResource
from octopus_deploy_swagger_client.models.package_version_resource import PackageVersionResource
from octopus_deploy_swagger_client.models.performance_configuration_resource import PerformanceConfigurationResource
from octopus_deploy_swagger_client.models.permission_description import PermissionDescription
from octopus_deploy_swagger_client.models.phase_deployment_resource import PhaseDeploymentResource
from octopus_deploy_swagger_client.models.phase_progression_resource import PhaseProgressionResource
from octopus_deploy_swagger_client.models.phase_resource import PhaseResource
from octopus_deploy_swagger_client.models.progression_resource import ProgressionResource
from octopus_deploy_swagger_client.models.project import Project
from octopus_deploy_swagger_client.models.project_connectivity_policy import ProjectConnectivityPolicy
from octopus_deploy_swagger_client.models.project_group_resource import ProjectGroupResource
from octopus_deploy_swagger_client.models.project_resource import ProjectResource
from octopus_deploy_swagger_client.models.project_settings_metadata import ProjectSettingsMetadata
from octopus_deploy_swagger_client.models.project_trigger_resource import ProjectTriggerResource
from octopus_deploy_swagger_client.models.project_variable_set_usage import ProjectVariableSetUsage
from octopus_deploy_swagger_client.models.projected_team_reference_data_item import ProjectedTeamReferenceDataItem
from octopus_deploy_swagger_client.models.property_applicability import PropertyApplicability
from octopus_deploy_swagger_client.models.property_metadata import PropertyMetadata
from octopus_deploy_swagger_client.models.property_value_resource import PropertyValueResource
from octopus_deploy_swagger_client.models.proxy_resource import ProxyResource
from octopus_deploy_swagger_client.models.reference_data_item import ReferenceDataItem
from octopus_deploy_swagger_client.models.release_changes import ReleaseChanges
from octopus_deploy_swagger_client.models.release_creation_strategy_resource import ReleaseCreationStrategyResource
from octopus_deploy_swagger_client.models.release_package_metadata_resource import ReleasePackageMetadataResource
from octopus_deploy_swagger_client.models.release_progression_resource import ReleaseProgressionResource
from octopus_deploy_swagger_client.models.release_resource import ReleaseResource
from octopus_deploy_swagger_client.models.release_template_package import ReleaseTemplatePackage
from octopus_deploy_swagger_client.models.release_template_resource import ReleaseTemplateResource
from octopus_deploy_swagger_client.models.release_usage import ReleaseUsage
from octopus_deploy_swagger_client.models.release_usage_entry import ReleaseUsageEntry
from octopus_deploy_swagger_client.models.report_deployment_count_over_time_resource import ReportDeploymentCountOverTimeResource
from octopus_deploy_swagger_client.models.resource_collection_account_resource import ResourceCollectionAccountResource
from octopus_deploy_swagger_client.models.resource_collection_action_template_resource import ResourceCollectionActionTemplateResource
from octopus_deploy_swagger_client.models.resource_collection_api_key_resource import ResourceCollectionApiKeyResource
from octopus_deploy_swagger_client.models.resource_collection_artifact_resource import ResourceCollectionArtifactResource
from octopus_deploy_swagger_client.models.resource_collection_certificate_configuration_resource import ResourceCollectionCertificateConfigurationResource
from octopus_deploy_swagger_client.models.resource_collection_certificate_resource import ResourceCollectionCertificateResource
from octopus_deploy_swagger_client.models.resource_collection_channel_resource import ResourceCollectionChannelResource
from octopus_deploy_swagger_client.models.resource_collection_community_action_template_resource import ResourceCollectionCommunityActionTemplateResource
from octopus_deploy_swagger_client.models.resource_collection_configuration_section_metadata import ResourceCollectionConfigurationSectionMetadata
from octopus_deploy_swagger_client.models.resource_collection_defect_resource import ResourceCollectionDefectResource
from octopus_deploy_swagger_client.models.resource_collection_deployment_process_resource import ResourceCollectionDeploymentProcessResource
from octopus_deploy_swagger_client.models.resource_collection_deployment_resource import ResourceCollectionDeploymentResource
from octopus_deploy_swagger_client.models.resource_collection_deployment_target_resource import ResourceCollectionDeploymentTargetResource
from octopus_deploy_swagger_client.models.resource_collection_environment_resource import ResourceCollectionEnvironmentResource
from octopus_deploy_swagger_client.models.resource_collection_feed_resource import ResourceCollectionFeedResource
from octopus_deploy_swagger_client.models.resource_collection_interruption_resource import ResourceCollectionInterruptionResource
from octopus_deploy_swagger_client.models.resource_collection_library_variable_set_resource import ResourceCollectionLibraryVariableSetResource
from octopus_deploy_swagger_client.models.resource_collection_lifecycle_resource import ResourceCollectionLifecycleResource
from octopus_deploy_swagger_client.models.resource_collection_machine_policy_resource import ResourceCollectionMachinePolicyResource
from octopus_deploy_swagger_client.models.resource_collection_octopus_server_node_resource import ResourceCollectionOctopusServerNodeResource
from octopus_deploy_swagger_client.models.resource_collection_package_description_resource import ResourceCollectionPackageDescriptionResource
from octopus_deploy_swagger_client.models.resource_collection_package_resource import ResourceCollectionPackageResource
from octopus_deploy_swagger_client.models.resource_collection_package_version_resource import ResourceCollectionPackageVersionResource
from octopus_deploy_swagger_client.models.resource_collection_project_group_resource import ResourceCollectionProjectGroupResource
from octopus_deploy_swagger_client.models.resource_collection_project_resource import ResourceCollectionProjectResource
from octopus_deploy_swagger_client.models.resource_collection_project_trigger_resource import ResourceCollectionProjectTriggerResource
from octopus_deploy_swagger_client.models.resource_collection_proxy_resource import ResourceCollectionProxyResource
from octopus_deploy_swagger_client.models.resource_collection_release_resource import ResourceCollectionReleaseResource
from octopus_deploy_swagger_client.models.resource_collection_scoped_user_role_resource import ResourceCollectionScopedUserRoleResource
from octopus_deploy_swagger_client.models.resource_collection_space_resource import ResourceCollectionSpaceResource
from octopus_deploy_swagger_client.models.resource_collection_subscription_resource import ResourceCollectionSubscriptionResource
from octopus_deploy_swagger_client.models.resource_collection_tag_set_resource import ResourceCollectionTagSetResource
from octopus_deploy_swagger_client.models.resource_collection_task_resource import ResourceCollectionTaskResource
from octopus_deploy_swagger_client.models.resource_collection_team_resource import ResourceCollectionTeamResource
from octopus_deploy_swagger_client.models.resource_collection_tenant_resource import ResourceCollectionTenantResource
from octopus_deploy_swagger_client.models.resource_collection_user_resource import ResourceCollectionUserResource
from octopus_deploy_swagger_client.models.resource_collection_user_role_resource import ResourceCollectionUserRoleResource
from octopus_deploy_swagger_client.models.resource_collection_worker_pool_resource import ResourceCollectionWorkerPoolResource
from octopus_deploy_swagger_client.models.resource_collection_worker_resource import ResourceCollectionWorkerResource
from octopus_deploy_swagger_client.models.retention_period import RetentionPeriod
from octopus_deploy_swagger_client.models.root_resource import RootResource
from octopus_deploy_swagger_client.models.scheduled_task_details_resource import ScheduledTaskDetailsResource
from octopus_deploy_swagger_client.models.scheduled_task_status_resource import ScheduledTaskStatusResource
from octopus_deploy_swagger_client.models.scheduler_status_resource import SchedulerStatusResource
from octopus_deploy_swagger_client.models.scoped_user_role_resource import ScopedUserRoleResource
from octopus_deploy_swagger_client.models.selected_package import SelectedPackage
from octopus_deploy_swagger_client.models.sensitive_value import SensitiveValue
from octopus_deploy_swagger_client.models.server_configuration_resource import ServerConfigurationResource
from octopus_deploy_swagger_client.models.server_configuration_settings_resource import ServerConfigurationSettingsResource
from octopus_deploy_swagger_client.models.server_configuration_value_resource import ServerConfigurationValueResource
from octopus_deploy_swagger_client.models.server_status_health_resource import ServerStatusHealthResource
from octopus_deploy_swagger_client.models.server_timezone_resource import ServerTimezoneResource
from octopus_deploy_swagger_client.models.smtp_is_configured_resource import SmtpIsConfiguredResource
from octopus_deploy_swagger_client.models.space_resource import SpaceResource
from octopus_deploy_swagger_client.models.space_root_resource import SpaceRootResource
from octopus_deploy_swagger_client.models.step_usage import StepUsage
from octopus_deploy_swagger_client.models.step_usage_entry import StepUsageEntry
from octopus_deploy_swagger_client.models.subscription_resource import SubscriptionResource
from octopus_deploy_swagger_client.models.tag_resource import TagResource
from octopus_deploy_swagger_client.models.tag_set_resource import TagSetResource
from octopus_deploy_swagger_client.models.target_usage_entry import TargetUsageEntry
from octopus_deploy_swagger_client.models.task_details_resource import TaskDetailsResource
from octopus_deploy_swagger_client.models.task_progress import TaskProgress
from octopus_deploy_swagger_client.models.task_resource import TaskResource
from octopus_deploy_swagger_client.models.task_type_resource import TaskTypeResource
from octopus_deploy_swagger_client.models.team_name_resource import TeamNameResource
from octopus_deploy_swagger_client.models.team_resource import TeamResource
from octopus_deploy_swagger_client.models.tenant_resource import TenantResource
from octopus_deploy_swagger_client.models.tenant_variable_resource import TenantVariableResource
from octopus_deploy_swagger_client.models.trigger_action_resource import TriggerActionResource
from octopus_deploy_swagger_client.models.trigger_filter_resource import TriggerFilterResource
from octopus_deploy_swagger_client.models.type_metadata import TypeMetadata
from octopus_deploy_swagger_client.models.user_authentication_resource import UserAuthenticationResource
from octopus_deploy_swagger_client.models.user_permission_restriction import UserPermissionRestriction
from octopus_deploy_swagger_client.models.user_permission_set_resource import UserPermissionSetResource
from octopus_deploy_swagger_client.models.user_permission_set_resource_space_permissions import UserPermissionSetResourceSpacePermissions
from octopus_deploy_swagger_client.models.user_resource import UserResource
from octopus_deploy_swagger_client.models.user_role_resource import UserRoleResource
from octopus_deploy_swagger_client.models.variable_prompt_options import VariablePromptOptions
from octopus_deploy_swagger_client.models.variable_resource import VariableResource
from octopus_deploy_swagger_client.models.variable_resource_scope import VariableResourceScope
from octopus_deploy_swagger_client.models.variable_scope_values import VariableScopeValues
from octopus_deploy_swagger_client.models.variable_set_resource import VariableSetResource
from octopus_deploy_swagger_client.models.variables_scoped_to_environment_response import VariablesScopedToEnvironmentResponse
from octopus_deploy_swagger_client.models.versioning_strategy_resource import VersioningStrategyResource
from octopus_deploy_swagger_client.models.work_item_link import WorkItemLink
from octopus_deploy_swagger_client.models.worker_pool_resource import WorkerPoolResource
from octopus_deploy_swagger_client.models.worker_resource import WorkerResource
from octopus_deploy_swagger_client.models.x509_certificate import X509Certificate
# --- tests/basicswap/__init__.py (cryptoguard/basicswap, MIT) ---
import unittest
import tests.basicswap.test_other as test_other
import tests.basicswap.test_prepare as test_prepare
import tests.basicswap.test_run as test_run
import tests.basicswap.test_reload as test_reload
def test_suite():
loader = unittest.TestLoader()
suite = loader.loadTestsFromModule(test_other)
suite.addTests(loader.loadTestsFromModule(test_prepare))
suite.addTests(loader.loadTestsFromModule(test_run))
suite.addTests(loader.loadTestsFromModule(test_reload))
# TODO: Add to ci scripts suite.addTests(loader.loadTestsFromModule(test_xmr))
return suite
# --- kivy-tiktaktoe/tic_tac_toe_board.py (zybex86/zybex86.github.io, MIT) ---
from kivy.uix.widget import Widget
class TicTacToeBoard(Widget):
pass
# --- pygsuite/drive/__init__.py (ngharrington/pygsuite, MIT) ---
from .drive import Drive, FileTypes, UserType, PermissionType
__all__ = ["Drive", "FileTypes", "UserType", "PermissionType"]
# --- pyvarinf/__init__.py (suswei/RLCT, MIT) ---
from .vi import Variationalize
from .vi import Sample
from .ivi import IVariationalize
from .ivi import ISample | 27.75 | 32 | 0.828829 | 16 | 111 | 5.75 | 0.5 | 0.130435 | 0.26087 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 111 | 4 | 33 | 27.75 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c40a77dd27d54eb6244122516b977ccf16cc9d1c | 19,873 | py | Python | server/apps/physicaldevice/tests/test_worker_reset.py | iotile/iotile_cloud | 9dc65ac86d3a730bba42108ed7d9bbb963d22ba6 | [
"MIT"
] | null | null | null | server/apps/physicaldevice/tests/test_worker_reset.py | iotile/iotile_cloud | 9dc65ac86d3a730bba42108ed7d9bbb963d22ba6 | [
"MIT"
] | null | null | null | server/apps/physicaldevice/tests/test_worker_reset.py | iotile/iotile_cloud | 9dc65ac86d3a730bba42108ed7d9bbb963d22ba6 | [
"MIT"
] | null | null | null | import datetime
import json
import dateutil.parser
from django.contrib.auth import get_user_model
from django.test import Client, TestCase
from django.utils import timezone
from apps.devicelocation.models import DeviceLocation
from apps.physicaldevice.models import Device
from apps.property.models import GenericProperty
from apps.report.models import GeneratedUserReport
from apps.sqsworker.exceptions import WorkerActionHardError
from apps.stream.models import StreamId, StreamVariable
from apps.streamdata.models import StreamData
from apps.streamer.models import *
from apps.streamevent.models import StreamEventData
from apps.streamfilter.models import *
from apps.streamnote.models import StreamNote
from apps.utils.gid.convert import *
from apps.utils.test_util import TestMixin
from ..models import *
from ..worker.device_data_reset import DeviceDataResetAction
user_model = get_user_model()
class DeviceDataResetTests(TestMixin, TestCase):
def setUp(self):
self.usersTestSetup()
self.orgTestSetup()
self.deviceTemplateTestSetup()
self.v1 = StreamVariable.objects.create_variable(
name='Var A', project=self.p1, created_by=self.u2, lid=1,
)
self.v2 = StreamVariable.objects.create_variable(
name='Var B', project=self.p1, created_by=self.u3, lid=2,
)
self.pd1 = Device.objects.create_device(project=self.p1, label='d1', template=self.dt1, created_by=self.u2)
self.pd2 = Device.objects.create_device(project=self.p1, label='d2', template=self.dt1, created_by=self.u2)
StreamId.objects.create_after_new_device(self.pd1)
StreamId.objects.create_after_new_device(self.pd2)
self.s1 = StreamId.objects.filter(variable=self.v1).first()
self.s2 = StreamId.objects.filter(variable=self.v2).first()
def tearDown(self):
StreamFilterAction.objects.all().delete()
StreamFilterTrigger.objects.all().delete()
StreamFilter.objects.all().delete()
StreamId.objects.all().delete()
StreamVariable.objects.all().delete()
GenericProperty.objects.all().delete()
Device.objects.all().delete()
StreamData.objects.all().delete()
StreamEventData.objects.all().delete()
self.deviceTemplateTestTearDown()
self.orgTestTearDown()
self.userTestTearDown()
def testDeviceResetActionBadArguments(self):
with self.assertRaises(WorkerActionHardError):
DeviceDataResetAction.schedule(args={})
with self.assertRaises(WorkerActionHardError):
DeviceDataResetAction.schedule(args={'foobar': 5})
with self.assertRaises(WorkerActionHardError):
DeviceDataResetAction.schedule(args={'device_slug': 'd--0000-0000-0000-0001', 'extra-bad-arg': 'foo'})
self.assertTrue(DeviceDataResetAction._arguments_ok({
'device_slug': 'd--0000-0000-0000-0001', 'user': 'slug'
}))
action = DeviceDataResetAction()
self.assertIsNotNone(action)
with self.assertRaises(WorkerActionHardError):
action.execute(arguments={'foobar': 5})
def testDeviceResetActionNoDataDevice(self):
action = DeviceDataResetAction()
self.assertIsNotNone(action)
with self.assertRaises(WorkerActionHardError):
action.execute({'device_slug': 'd--0000-0000-0000-0001', 'user': 'user2'})
def testPropertyDelete(self):
GenericProperty.objects.create_int_property(slug=self.pd1.slug,
created_by=self.u1,
name='prop1', value=4)
GenericProperty.objects.create_str_property(slug=self.pd1.slug,
created_by=self.u1,
name='prop2', value='4')
GenericProperty.objects.create_bool_property(slug=self.pd1.slug,
created_by=self.u1, is_system=True,
name='prop3', value=True)
self.assertEqual(GenericProperty.objects.object_properties_qs(self.pd1).count(), 3)
action = DeviceDataResetAction()
action._device = self.pd1
action._clear_properties()
self.assertEqual(GenericProperty.objects.object_properties_qs(self.pd1).count(), 1)
system_prop = GenericProperty.objects.object_properties_qs(self.pd1).first()
self.assertTrue(system_prop.is_system)
def testStreamDataDelete(self):
device = Device.objects.create_device(project=self.p1, label='d3', template=self.dt1, created_by=self.u2)
stream1 = StreamId.objects.create_stream(
project=self.p1, variable=self.v1, device=device, created_by=self.u2
)
stream2 = StreamId.objects.create_stream(
project=self.p1, variable=self.v2, device=device, created_by=self.u2
)
StreamData.objects.create(
stream_slug=stream1.slug,
type='ITR',
timestamp=timezone.now(),
streamer_local_id=5,
int_value=5
)
StreamData.objects.create(
stream_slug=stream1.slug,
type='ITR',
timestamp=timezone.now(),
streamer_local_id=6,
int_value=6
)
StreamData.objects.create(
stream_slug=stream2.slug,
type='ITR',
timestamp=timezone.now(),
streamer_local_id=7,
int_value=7
)
StreamData.objects.create(
stream_slug=stream1.slug,
type='ITR',
timestamp=timezone.now(),
streamer_local_id=8,
int_value=8
)
StreamData.objects.create(
stream_slug=stream2.slug,
type='ITR',
timestamp=timezone.now(),
streamer_local_id=9,
int_value=9
)
action = DeviceDataResetAction()
action._device = device
self.assertEqual(StreamData.objects.filter(stream_slug=stream1.slug).count(), 3)
self.assertEqual(StreamData.objects.filter(stream_slug=stream2.slug).count(), 2)
action._clear_stream_data()
self.assertEqual(StreamData.objects.filter(stream_slug=stream1.slug).count(), 0)
self.assertEqual(StreamData.objects.filter(stream_slug=stream2.slug).count(), 0)
def testStreamEventDelete(self):
device = Device.objects.create_device(project=self.p1, label='d3', template=self.dt1, created_by=self.u2)
stream1 = StreamId.objects.create_stream(
project=self.p1, variable=self.v1, device=device, created_by=self.u2
)
stream2 = StreamId.objects.create_stream(
project=self.p1, variable=self.v2, device=device, created_by=self.u2
)
StreamEventData.objects.create(
timestamp=timezone.now(),
device_timestamp=10,
stream_slug=stream1.slug,
streamer_local_id=2
)
StreamEventData.objects.create(
timestamp=timezone.now(),
device_timestamp=10,
stream_slug=stream1.slug,
streamer_local_id=3
)
StreamEventData.objects.create(
timestamp=timezone.now(),
device_timestamp=10,
stream_slug=stream2.slug,
streamer_local_id=4
)
action = DeviceDataResetAction()
action._device = device
self.assertEqual(StreamEventData.objects.filter(stream_slug=stream1.slug).count(), 2)
self.assertEqual(StreamEventData.objects.filter(stream_slug=stream2.slug).count(), 1)
action._clear_stream_data()
self.assertEqual(StreamEventData.objects.filter(stream_slug=stream1.slug).count(), 0)
self.assertEqual(StreamEventData.objects.filter(stream_slug=stream2.slug).count(), 0)
def testStreamNoteDelete(self):
device = Device.objects.create_device(project=self.p1, label='d3', template=self.dt1, created_by=self.u2)
stream1 = StreamId.objects.create_stream(
project=self.p1, variable=self.v1, device=device, created_by=self.u2
)
StreamId.objects.create_stream(
project=self.p1, variable=self.v2, device=device, created_by=self.u2
)
StreamNote.objects.create(
target_slug=stream1.slug,
timestamp=timezone.now(),
created_by=self.u2,
note='Note 1'
)
StreamNote.objects.create(
target_slug=stream1.slug,
timestamp=timezone.now(),
created_by=self.u2,
note='Note 2'
)
StreamNote.objects.create(
target_slug=stream1.slug,
timestamp=timezone.now(),
created_by=self.u2,
note='Note 3'
)
StreamNote.objects.create(
target_slug=device.slug,
timestamp=timezone.now(),
created_by=self.u1,
note='Note 4'
)
action = DeviceDataResetAction()
action._device = device
self.assertEqual(StreamNote.objects.filter(target_slug=stream1.slug).count(), 3)
self.assertEqual(StreamNote.objects.filter(target_slug=device.slug).count(), 1)
action._clear_notes_and_locations()
self.assertEqual(StreamNote.objects.filter(target_slug=stream1.slug).count(), 0)
self.assertEqual(StreamNote.objects.filter(target_slug=device.slug).count(), 0)
StreamNote.objects.create(
target_slug=device.slug,
timestamp=timezone.now(),
created_by=self.u1,
note='Note 4'
)
action = DeviceDataResetAction()
action._device = device
action.execute(arguments={
'device_slug': device.slug, 'user': self.u2.slug, 'include_notes_and_locations': False
})
# Keep Note plus the note the worker adds
self.assertEqual(StreamNote.objects.filter(target_slug=device.slug).count(), 2)
    def testDeviceResetActionDeviceLocations(self):
device = Device.objects.create_device(project=self.p1, label='d3', template=self.dt1, created_by=self.u2)
DeviceLocation.objects.create(
timestamp=timezone.now(),
target_slug=device.slug,
lat=12.1234, lon=10.000,
user=self.u2
)
DeviceLocation.objects.create(
timestamp=timezone.now(),
target_slug=device.slug,
lat=12.1234, lon=11.000,
user=self.u2
)
DeviceLocation.objects.create(
timestamp=timezone.now(),
target_slug=device.slug,
lat=12.1234, lon=12.000,
user=self.u2
)
self.assertEqual(DeviceLocation.objects.count(), 3)
action = DeviceDataResetAction()
action._device = device
self.assertEqual(DeviceLocation.objects.filter(target_slug=device.slug).count(), 3)
action._clear_notes_and_locations()
self.assertEqual(DeviceLocation.objects.filter(target_slug=device.slug).count(), 0)
DeviceLocation.objects.create(
timestamp=timezone.now(),
target_slug=device.slug,
lat=12.1234, lon=12.000,
user=self.u2
)
action = DeviceDataResetAction()
action._device = device
action.execute(arguments={
'device_slug': device.slug, 'user': self.u2.slug, 'include_notes_and_locations': False
})
self.assertEqual(DeviceLocation.objects.filter(target_slug=device.slug).count(), 1)
def testDataBlockActionResetReports(self):
device = Device.objects.create_device(project=self.p1, label='d3', template=self.dt1, created_by=self.u2)
GeneratedUserReport.objects.create(
org=device.org,
label='My report 1',
source_ref=device.slug,
created_by=self.u2
)
GeneratedUserReport.objects.create(
org=device.org,
label='My report 2',
source_ref=device.slug,
created_by=self.u2
)
self.assertEqual(GeneratedUserReport.objects.count(), 2)
action = DeviceDataResetAction()
action._device = device
action._delete_generated_reports()
self.assertEqual(GeneratedUserReport.objects.count(), 0)
def testDeviceResetActionTestAll(self):
device = Device.objects.create_device(project=self.p1, label='d3', template=self.dt1, created_by=self.u2)
stream1 = StreamId.objects.create_stream(
project=self.p1, variable=self.v1, device=device, created_by=self.u2
)
stream2 = StreamId.objects.create_stream(
project=self.p1, variable=self.v2, device=device, created_by=self.u2
)
        streamer = Streamer.objects.create(device=device, index=1, created_by=self.u1)
        StreamerReport.objects.create(streamer=streamer, actual_first_id=11, actual_last_id=20, created_by=self.u1)
GenericProperty.objects.create_int_property(slug=device.slug,
created_by=self.u1,
name='prop1', value=4)
GenericProperty.objects.create_str_property(slug=device.slug,
created_by=self.u1,
name='prop2', value='4')
GenericProperty.objects.create_bool_property(slug=device.slug,
created_by=self.u1,
name='prop3', value=True)
StreamEventData.objects.create(
timestamp=timezone.now(),
device_timestamp=10,
stream_slug=stream1.slug,
streamer_local_id=2
)
StreamEventData.objects.create(
timestamp=timezone.now(),
device_timestamp=10,
stream_slug=stream1.slug,
streamer_local_id=3
)
StreamEventData.objects.create(
timestamp=timezone.now(),
device_timestamp=10,
stream_slug=stream2.slug,
streamer_local_id=4
)
StreamData.objects.create(
stream_slug=stream1.slug,
type='ITR',
timestamp=timezone.now(),
streamer_local_id=5,
int_value=5
)
StreamData.objects.create(
stream_slug=stream1.slug,
type='ITR',
timestamp=timezone.now(),
streamer_local_id=6,
int_value=6
)
StreamData.objects.create(
stream_slug=stream2.slug,
type='ITR',
timestamp=timezone.now(),
streamer_local_id=7,
int_value=7
)
StreamData.objects.create(
stream_slug=stream1.slug,
type='ITR',
timestamp=timezone.now(),
streamer_local_id=8,
int_value=8
)
StreamData.objects.create(
stream_slug=stream2.slug,
type='ITR',
timestamp=timezone.now(),
streamer_local_id=9,
int_value=9
)
StreamNote.objects.create(
target_slug=stream1.slug,
timestamp=timezone.now(),
created_by=self.u2,
note='Note 1'
)
StreamNote.objects.create(
target_slug=stream1.slug,
timestamp=timezone.now(),
created_by=self.u2,
note='Note 2'
)
StreamNote.objects.create(
target_slug=stream1.slug,
timestamp=timezone.now(),
created_by=self.u2,
note='Note 3'
)
StreamNote.objects.create(
target_slug=device.slug,
timestamp=timezone.now(),
created_by=self.u1,
note='Note 4'
)
DeviceLocation.objects.create(
timestamp=timezone.now(),
target_slug=device.slug,
lat=12.1234, lon=10.000,
user=self.u2
)
GeneratedUserReport.objects.create(
org=device.org,
label='My report 1',
source_ref=device.slug,
created_by=self.u2
)
self.assertEqual(GenericProperty.objects.object_properties_qs(device).count(), 3)
self.assertEqual(device.streamids.count(), 2)
self.assertEqual(device.streamers.count(), 1)
self.assertEqual(StreamerReport.objects.count(), 1)
self.assertEqual(StreamData.objects.filter(stream_slug=stream1.slug).count(), 3)
self.assertEqual(StreamData.objects.filter(stream_slug=stream2.slug).count(), 2)
self.assertEqual(StreamEventData.objects.filter(stream_slug=stream1.slug).count(), 2)
self.assertEqual(StreamEventData.objects.filter(stream_slug=stream2.slug).count(), 1)
self.assertEqual(StreamNote.objects.filter(target_slug=stream1.slug).count(), 3)
self.assertEqual(StreamNote.objects.filter(target_slug=device.slug).count(), 1)
self.assertEqual(DeviceLocation.objects.filter(target_slug=device.slug).count(), 1)
self.assertEqual(GeneratedUserReport.objects.filter(source_ref=device.slug).count(), 1)
action = DeviceDataResetAction()
action._device = device
action.execute(arguments={'device_slug': device.slug, 'user': self.u2.slug})
self.assertEqual(GenericProperty.objects.object_properties_qs(device).count(), 0)
self.assertEqual(device.streamids.count(), 2)
self.assertEqual(device.streamers.count(), 0)
self.assertEqual(StreamerReport.objects.count(), 0)
self.assertEqual(StreamData.objects.filter(stream_slug=stream1.slug).count(), 0)
self.assertEqual(StreamData.objects.filter(stream_slug=stream2.slug).count(), 0)
self.assertEqual(StreamEventData.objects.filter(stream_slug=stream1.slug).count(), 0)
self.assertEqual(StreamEventData.objects.filter(stream_slug=stream2.slug).count(), 0)
self.assertEqual(StreamNote.objects.filter(target_slug=stream1.slug).count(), 0)
self.assertEqual(StreamNote.objects.filter(target_slug=device.slug).count(), 1)
system_note = StreamNote.objects.filter(target_slug=device.slug).first()
self.assertTrue('data was cleared' in system_note.note)
self.assertEqual(DeviceLocation.objects.filter(target_slug=device.slug).count(), 0)
self.assertEqual(GeneratedUserReport.objects.filter(source_ref=device.slug).count(), 0)
def testNoFullDeviceResetActionTestAll(self):
device = Device.objects.create_device(project=self.p1, label='d3', template=self.dt1, created_by=self.u2)
stream1 = StreamId.objects.create_stream(
project=self.p1, variable=self.v1, device=device, created_by=self.u2
)
        streamer = Streamer.objects.create(device=device, index=1, created_by=self.u1)
        StreamerReport.objects.create(streamer=streamer, actual_first_id=11, actual_last_id=20, created_by=self.u1)
self.assertEqual(device.streamids.count(), 1)
self.assertEqual(device.streamers.count(), 1)
self.assertEqual(StreamerReport.objects.count(), 1)
action = DeviceDataResetAction()
action._device = device
action.execute(arguments={'device_slug': device.slug, 'user': self.u2.slug, 'full': False})
self.assertEqual(GenericProperty.objects.object_properties_qs(device).count(), 0)
self.assertEqual(device.streamids.count(), 1)
self.assertEqual(device.streamers.count(), 1)
self.assertEqual(StreamerReport.objects.count(), 0)

# ---- file: constructor.py | repo: python-tpl/pyignores | license: MIT ----

def construct():
    return {}

# ---- file: Deep.Learning/2.Neural-Networks/1.Introduction-to-Neural-Networks/part31-sigmoid.py | repo: Scrier/udacity | license: MIT ----

import numpy as np
def sigmoid(x):
return 1 / (1 + np.exp(-x))
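A quick standalone sanity check of the sigmoid helper above (an added sketch, not part of the original lesson file; it re-implements the function with stdlib `math`, which is equivalent to the NumPy version for scalar inputs):

```python
import math

def sigmoid_scalar(x):
    # Same logistic function as sigmoid() above, via math.exp instead of np.exp.
    return 1 / (1 + math.exp(-x))

assert sigmoid_scalar(0) == 0.5                          # exactly 0.5 at the origin
assert 0 < sigmoid_scalar(-10) < sigmoid_scalar(10) < 1  # strictly bounded in (0, 1)
print(sigmoid_scalar(0))  # 0.5
```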
def calculate(w1, w2, bias):
    print(f"calculate({w1}, {w2}, {bias})")
    return sigmoid(w1 * 0.4 + w2 * 0.6 + bias)
print(calculate(2, 6, -2))
print(calculate(3, 5, -2.2))
print(calculate(5, 4, -3))

# ---- file: TMonline/TM/templatetags/tmmap.py | repo: KAN-RYU/TerraformingMarsPython | license: MIT ----

from django import template
register = template.Library()
"""
Map grid (1 = playable hex, 0 = padding); the bottom half mirrors the top:

0 0 1 1 1 1 1
0 0 1 1 1 1 1 1
0 1 1 1 1 1 1 1
0 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
0 1 1 1 1 1 1 1 1
0 1 1 1 1 1 1 1
0 0 1 1 1 1 1 1
0 0 1 1 1 1 1
"""
@register.simple_tag(name='tmmap')
def tmmap():
returnA = [[0,0,1,1,1,1,1],
[0,0,1,1,1,1,1,1],
[0,1,1,1,1,1,1,1],
[0,1,1,1,1,1,1,1,1],
[1,1,1,1,1,1,1,1,1],
[0,1,1,1,1,1,1,1,1],
[0,1,1,1,1,1,1,1],
[0,0,1,1,1,1,1,1],
[0,0,1,1,1,1,1]]
return returnA
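As a standalone sanity check (an added sketch, independent of Django): summing each row of the literal above shows the grid encodes 61 playable hexes in a 5-6-7-8-9-8-7-6-5 hexagonal layout.

```python
# Copy of the grid returned by tmmap(); 1 = playable hex, 0 = padding.
grid = [
    [0, 0, 1, 1, 1, 1, 1],
    [0, 0, 1, 1, 1, 1, 1, 1],
    [0, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 1, 1, 1, 1, 1, 1],
    [0, 0, 1, 1, 1, 1, 1],
]
print([sum(row) for row in grid])     # [5, 6, 7, 8, 9, 8, 7, 6, 5]
print(sum(sum(row) for row in grid))  # 61
```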

# ---- file: sandbox/lib/jumpscale/Jumpscale/servers/gedis/pytests/actors/__init__.py | repo: threefoldtech/threebot_prebuilt | license: Apache-2.0 ----

from Jumpscale import j
from .actor import SCHEMA_IN, SCHEMA_OUT
for schema in [SCHEMA_IN, SCHEMA_OUT]:
j.data.schema.get_from_text(schema)

# ---- file: tests/scripts/test_disable_two_factor_authentication.py | repo: nilsholle/sampledb | license: MIT ----

# coding: utf-8
"""
"""
import pytest
from sampledb.logic import users, authentication
import sampledb.__main__ as scripts
@pytest.fixture
def user_id():
user = users.create_user('test', 'user_id@example.com', users.UserType.PERSON)
method = authentication._create_two_factor_authentication_method(user.id, {'type': 'test'})
authentication.activate_two_factor_authentication_method(method.id)
assert user.id is not None
return user.id
def test_disable_two_factor_authentication(capsys, user_id):
assert authentication.get_active_two_factor_authentication_method(user_id) is not None
scripts.main([scripts.__file__, 'disable_two_factor_authentication', str(user_id)])
assert 'Success' in capsys.readouterr()[0]
assert authentication.get_active_two_factor_authentication_method(user_id) is None
with pytest.raises(SystemExit) as exc_info:
scripts.main([scripts.__file__, 'disable_two_factor_authentication', str(user_id)])
assert exc_info.value != 0
assert 'Error' in capsys.readouterr()[1]
assert authentication.get_active_two_factor_authentication_method(user_id) is None
def test_disable_two_factor_authentication_missing_arguments(capsys, user_id):
assert authentication.get_active_two_factor_authentication_method(user_id) is not None
with pytest.raises(SystemExit) as exc_info:
scripts.main([scripts.__file__, 'disable_two_factor_authentication'])
assert exc_info.value != 0
assert 'Usage' in capsys.readouterr()[0]
assert authentication.get_active_two_factor_authentication_method(user_id) is not None
def test_disable_two_factor_authentication_invalid_user_id(capsys, user_id):
assert authentication.get_active_two_factor_authentication_method(user_id) is not None
with pytest.raises(SystemExit) as exc_info:
scripts.main([scripts.__file__, 'disable_two_factor_authentication', 'user_id'])
assert exc_info.value != 0
assert 'Error' in capsys.readouterr()[1]
assert authentication.get_active_two_factor_authentication_method(user_id) is not None

# ---- file: datadog/api/resources.py | repo: cclauss/datadogpy | license: BSD-3-Clause ----

# Unless explicitly stated otherwise all files in this repository are licensed under the BSD-3-Clause License.
# This product includes software developed at Datadog (https://www.datadoghq.com/).
# Copyright 2015-Present Datadog, Inc
"""
Datadog API resources.
"""
from datadog.api.api_client import APIClient
class CreateableAPIResource(object):
"""
Creatable API Resource
"""
@classmethod
def create(cls, attach_host_name=False, method="POST", id=None, params=None, **body):
"""
Create a new API resource object
:param attach_host_name: link the new resource object to the host name
:type attach_host_name: bool
:param method: HTTP method to use to contact API endpoint
:type method: HTTP method string
:param id: create a new resource object as a child of the given object
:type id: id
:param params: new resource object source
:type params: dictionary
:param body: new resource object attributes
:type body: dictionary
:returns: Dictionary representing the API's JSON response
"""
if params is None:
params = {}
path = cls._resource_name
api_version = getattr(cls, "_api_version", None)
if method == "GET":
return APIClient.submit("GET", path, api_version, **body)
if id is None:
return APIClient.submit("POST", path, api_version, body, attach_host_name=attach_host_name, **params)
path = "{resource_name}/{resource_id}".format(resource_name=cls._resource_name, resource_id=id)
return APIClient.submit("POST", path, api_version, body, attach_host_name=attach_host_name, **params)
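Every mixin in this module follows the same convention: a concrete resource class sets `_resource_name` (and optionally `_api_version`) and inherits the HTTP verbs. A minimal standalone sketch of the path-building logic used in `create` above — `example_resource` is a hypothetical name and the real `APIClient.submit` call is omitted:

```python
class ExampleResource:
    _resource_name = "example_resource"  # hypothetical, for illustration only

    @classmethod
    def path_for(cls, id=None):
        # Mirrors the path construction in CreateableAPIResource.create.
        if id is None:
            return cls._resource_name
        return "{resource_name}/{resource_id}".format(
            resource_name=cls._resource_name, resource_id=id
        )

print(ExampleResource.path_for())    # example_resource
print(ExampleResource.path_for(42))  # example_resource/42
```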
class SendableAPIResource(object):
"""
Fork of CreateableAPIResource class with different method names
"""
@classmethod
def send(cls, attach_host_name=False, id=None, compress_payload=False, **body):
"""
Create an API resource object
:param attach_host_name: link the new resource object to the host name
:type attach_host_name: bool
:param id: create a new resource object as a child of the given object
:type id: id
:param compress_payload: compress the payload using zlib
:type compress_payload: bool
:param body: new resource object attributes
:type body: dictionary
:returns: Dictionary representing the API's JSON response
"""
api_version = getattr(cls, "_api_version", None)
if id is None:
return APIClient.submit(
"POST",
cls._resource_name,
api_version,
body,
attach_host_name=attach_host_name,
compress_payload=compress_payload,
)
path = "{resource_name}/{resource_id}".format(resource_name=cls._resource_name, resource_id=id)
return APIClient.submit(
"POST", path, api_version, body, attach_host_name=attach_host_name, compress_payload=compress_payload
)
class UpdatableAPIResource(object):
"""
Updatable API Resource
"""
@classmethod
def update(cls, id, params=None, **body):
"""
Update an API resource object
:param params: updated resource object source
:type params: dictionary
:param body: updated resource object attributes
:type body: dictionary
:returns: Dictionary representing the API's JSON response
"""
if params is None:
params = {}
path = "{resource_name}/{resource_id}".format(resource_name=cls._resource_name, resource_id=id)
api_version = getattr(cls, "_api_version", None)
return APIClient.submit("PUT", path, api_version, body, **params)
class CustomUpdatableAPIResource(object):
"""
Updatable API Resource with custom HTTP Verb
"""
@classmethod
def update(cls, method=None, id=None, params=None, **body):
"""
Update an API resource object
        :param method: HTTP method, defaults to PUT
        :type method: string
        :param id: updatable resource id
        :type id: id
:param params: updated resource object source
:type params: dictionary
:param body: updated resource object attributes
:type body: dictionary
:returns: Dictionary representing the API's JSON response
"""
if method is None:
method = "PUT"
if params is None:
params = {}
path = "{resource_name}/{resource_id}".format(resource_name=cls._resource_name, resource_id=id)
api_version = getattr(cls, "_api_version", None)
return APIClient.submit(method, path, api_version, body, **params)
class DeletableAPIResource(object):
"""
Deletable API Resource
"""
@classmethod
def delete(cls, id, **params):
"""
Delete an API resource object
:param id: resource object to delete
:type id: id
:returns: Dictionary representing the API's JSON response
"""
path = "{resource_name}/{resource_id}".format(resource_name=cls._resource_name, resource_id=id)
api_version = getattr(cls, "_api_version", None)
return APIClient.submit("DELETE", path, api_version, **params)
class GetableAPIResource(object):
"""
Getable API Resource
"""
@classmethod
def get(cls, id, **params):
"""
Get information about an API resource object
:param id: resource object id to retrieve
:type id: id
:param params: parameters to filter API resource stream
:type params: dictionary
:returns: Dictionary representing the API's JSON response
"""
path = "{resource_name}/{resource_id}".format(resource_name=cls._resource_name, resource_id=id)
api_version = getattr(cls, "_api_version", None)
return APIClient.submit("GET", path, api_version, **params)
class ListableAPIResource(object):
"""
Listable API Resource
"""
@classmethod
def get_all(cls, **params):
"""
List API resource objects
:param params: parameters to filter API resource stream
:type params: dictionary
:returns: Dictionary representing the API's JSON response
"""
api_version = getattr(cls, "_api_version", None)
return APIClient.submit("GET", cls._resource_name, api_version, **params)
class ListableAPISubResource(object):
"""
Listable API Sub-Resource
"""
@classmethod
def get_items(cls, id, **params):
"""
List API sub-resource objects from a resource
:param id: resource id to retrieve sub-resource objects from
:type id: id
:param params: parameters to filter API sub-resource stream
:type params: dictionary
:returns: Dictionary representing the API's JSON response
"""
path = "{resource_name}/{resource_id}/{sub_resource_name}".format(
resource_name=cls._resource_name, resource_id=id, sub_resource_name=cls._sub_resource_name
)
api_version = getattr(cls, "_api_version", None)
return APIClient.submit("GET", path, api_version, **params)
class AddableAPISubResource(object):
"""
Addable API Sub-Resource
"""
@classmethod
def add_items(cls, id, params=None, **body):
"""
Add new API sub-resource objects to a resource
:param id: resource id to add sub-resource objects to
:type id: id
:param params: request parameters
:type params: dictionary
:param body: new sub-resource objects attributes
:type body: dictionary
:returns: Dictionary representing the API's JSON response
"""
if params is None:
params = {}
path = "{resource_name}/{resource_id}/{sub_resource_name}".format(
resource_name=cls._resource_name, resource_id=id, sub_resource_name=cls._sub_resource_name
)
api_version = getattr(cls, "_api_version", None)
return APIClient.submit("POST", path, api_version, body, **params)
class UpdatableAPISubResource(object):
"""
Updatable API Sub-Resource
"""
@classmethod
def update_items(cls, id, params=None, **body):
"""
Update API sub-resource objects of a resource
:param id: resource id to update sub-resource objects from
:type id: id
:param params: request parameters
:type params: dictionary
:param body: updated sub-resource objects attributes
:type body: dictionary
:returns: Dictionary representing the API's JSON response
"""
if params is None:
params = {}
path = "{resource_name}/{resource_id}/{sub_resource_name}".format(
resource_name=cls._resource_name, resource_id=id, sub_resource_name=cls._sub_resource_name
)
api_version = getattr(cls, "_api_version", None)
return APIClient.submit("PUT", path, api_version, body, **params)
class DeletableAPISubResource(object):
"""
Deletable API Sub-Resource
"""
@classmethod
def delete_items(cls, id, params=None, **body):
"""
Delete API sub-resource objects from a resource
:param id: resource id to delete sub-resource objects from
:type id: id
:param params: request parameters
:type params: dictionary
:param body: deleted sub-resource objects attributes
:type body: dictionary
:returns: Dictionary representing the API's JSON response
"""
if params is None:
params = {}
path = "{resource_name}/{resource_id}/{sub_resource_name}".format(
resource_name=cls._resource_name, resource_id=id, sub_resource_name=cls._sub_resource_name
)
api_version = getattr(cls, "_api_version", None)
return APIClient.submit("DELETE", path, api_version, body, **params)
class SearchableAPIResource(object):
"""
Fork of ListableAPIResource class with different method names
"""
@classmethod
def _search(cls, **params):
"""
Query an API resource stream
:param params: parameters to filter API resource stream
:type params: dictionary
:returns: Dictionary representing the API's JSON response
"""
api_version = getattr(cls, "_api_version", None)
return APIClient.submit("GET", cls._resource_name, api_version, **params)
class ActionAPIResource(object):
"""
Actionable API Resource
"""
@classmethod
def _trigger_class_action(cls, method, action_name, id=None, params=None, **body):
"""
Trigger an action
:param method: HTTP method to use to contact API endpoint
:type method: HTTP method string
:param action_name: action name
:type action_name: string
:param id: trigger the action for the specified resource object
:type id: id
:param params: action parameters
:type params: dictionary
:param body: action body
:type body: dictionary
:returns: Dictionary representing the API's JSON response
"""
if params is None:
params = {}
api_version = getattr(cls, "_api_version", None)
if id is None:
path = "{resource_name}/{action_name}".format(resource_name=cls._resource_name, action_name=action_name)
else:
path = "{resource_name}/{resource_id}/{action_name}".format(
resource_name=cls._resource_name, resource_id=id, action_name=action_name
)
if method == "GET":
# Do not add body to GET requests, it causes 400 Bad request responses on EU site
body = None
return APIClient.submit(method, path, api_version, body, **params)
@classmethod
def _trigger_action(cls, method, name, id=None, **body):
"""
Trigger an action
:param method: HTTP method to use to contact API endpoint
:type method: HTTP method string
:param name: action name
:type name: string
:param id: trigger the action for the specified resource object
:type id: id
:param body: action body
:type body: dictionary
:returns: Dictionary representing the API's JSON response
"""
api_version = getattr(cls, "_api_version", None)
if id is None:
return APIClient.submit(method, name, api_version, body)
path = "{action_name}/{resource_id}".format(action_name=name, resource_id=id)
if method == "GET":
# Do not add body to GET requests, it causes 400 Bad request responses on EU site
body = None
return APIClient.submit(method, path, api_version, body)
class UpdatableAPISyntheticsSubResource(object):
"""
Update Synthetics sub resource
"""
@classmethod
def update_synthetics_items(cls, id, params=None, **body):
"""
Update API sub-resource objects of a resource
:param id: resource id to update sub-resource objects from
:type id: id
:param params: request parameters
:type params: dictionary
:param body: updated sub-resource objects attributes
:type body: dictionary
:returns: Dictionary representing the API's JSON response
"""
if params is None:
params = {}
path = "{resource_name}/tests/{resource_id}/{sub_resource_name}".format(
resource_name=cls._resource_name, resource_id=id, sub_resource_name=cls._sub_resource_name
)
api_version = getattr(cls, "_api_version", None)
return APIClient.submit("PUT", path, api_version, body, **params)
class UpdatableAPISyntheticsResource(object):
"""
Update Synthetics resource
"""
@classmethod
def update_synthetics(cls, id, params=None, **body):
"""
Update an API resource object
:param params: updated resource object source
:type params: dictionary
:param body: updated resource object attributes
:type body: dictionary
:returns: Dictionary representing the API's JSON response
"""
if params is None:
params = {}
path = "{resource_name}/tests/{resource_id}".format(resource_name=cls._resource_name, resource_id=id)
api_version = getattr(cls, "_api_version", None)
return APIClient.submit("PUT", path, api_version, body, **params)
class ActionAPISyntheticsResource(object):
"""
Actionable Synthetics API Resource
"""
@classmethod
def _trigger_synthetics_class_action(cls, method, name, id=None, params=None, **body):
"""
Trigger an action
:param method: HTTP method to use to contact API endpoint
:type method: HTTP method string
:param name: action name
:type name: string
:param id: trigger the action for the specified resource object
:type id: id
:param params: action parameters
:type params: dictionary
:param body: action body
:type body: dictionary
:returns: Dictionary representing the API's JSON response
"""
if params is None:
params = {}
api_version = getattr(cls, "_api_version", None)
if id is None:
path = "{resource_name}/{action_name}".format(resource_name=cls._resource_name, action_name=name)
else:
path = "{resource_name}/{action_name}/{resource_id}".format(
resource_name=cls._resource_name, resource_id=id, action_name=name
)
if method == "GET":
# Do not add body to GET requests, it causes 400 Bad request responses on EU site
body = None
return APIClient.submit(method, path, api_version, body, **params)

# ---- file: Tutorial/_04_Modules_Pip.py | repo: SenonLi/LearnPython | license: Apache-2.0 ----

import _03_FileManage
print(_03_FileManage.getFileExtension("../DebugPython.sln"))

# ---- file: cassiopeia/datastores/riotapi/leagues.py | repo: mikaeldui/cassiopeia | license: MIT ----

from typing import Type, TypeVar, MutableMapping, Any, Iterable, Generator
from datapipelines import (
DataSource,
PipelineContext,
Query,
NotFoundError,
validate_query,
)
from .common import RiotAPIService, APINotFoundError
from ...data import Platform, Queue, Tier, Division
from ...dto.league import (
LeagueEntriesDto,
LeagueDto,
LeagueSummonerEntriesDto,
ChallengerLeagueListDto,
MasterLeagueListDto,
GrandmasterLeagueListDto,
)
from ..uniquekeys import convert_region_to_platform
T = TypeVar("T")
class LeaguesAPI(RiotAPIService):
@DataSource.dispatch
def get(
self,
type: Type[T],
query: MutableMapping[str, Any],
context: PipelineContext = None,
) -> T:
pass
@DataSource.dispatch
def get_many(
self,
type: Type[T],
query: MutableMapping[str, Any],
context: PipelineContext = None,
) -> Iterable[T]:
pass
# League Entries
_validate_get_league_entries_query = (
Query.has("queue")
.as_(Queue)
.also.has("tier")
.as_(Tier)
.also.has("division")
.as_(Division)
.also.has("page")
.as_(int)
.also.has("platform")
.as_(Platform)
)
@get.register(LeagueEntriesDto)
@validate_query(_validate_get_league_entries_query, convert_region_to_platform)
def get_league_entries_list(
self, query: MutableMapping[str, Any], context: PipelineContext = None
) -> LeagueEntriesDto:
url = "https://{platform}.api.riotgames.com/lol/league/v4/entries/{queue}/{tier}/{division}".format(
platform=query["platform"].value.lower(),
queue=query["queue"].value,
tier=query["tier"].value,
division=query["division"].value,
)
try:
app_limiter, method_limiter = self._get_rate_limiter(
query["platform"], "leagues/paginated-entries"
)
data = self._get(
url,
parameters={"page": query["page"]},
app_limiter=app_limiter,
method_limiter=method_limiter,
)
except APINotFoundError:
data = []
region = query["platform"].region.value
for entry in data:
entry["region"] = region
return LeagueEntriesDto(
entries=data,
page=query["page"],
region=query["region"].value,
queue=query["queue"].value,
tier=query["tier"].value,
division=query["division"].value,
)
_validate_get_league_summoner_entries_query = (
Query.has("summoner.id").as_(str).also.has("platform").as_(Platform)
)
@get.register(LeagueSummonerEntriesDto)
@validate_query(
_validate_get_league_summoner_entries_query, convert_region_to_platform
)
def get_league_summoner_entries_list(
self, query: MutableMapping[str, Any], context: PipelineContext = None
) -> LeagueSummonerEntriesDto:
url = "https://{platform}.api.riotgames.com/lol/league/v4/entries/by-summoner/{id}".format(
platform=query["platform"].value.lower(), id=query["summoner.id"]
)
try:
app_limiter, method_limiter = self._get_rate_limiter(
query["platform"], "leagues/summoner-entries"
)
data = self._get(
url, app_limiter=app_limiter, method_limiter=method_limiter
)
except APINotFoundError:
data = []
region = query["platform"].region.value
for entry in data:
entry["region"] = region
return LeagueSummonerEntriesDto(
entries=data, region=region, summonerId=query["summoner.id"]
)
# Leagues
_validate_get_league_query = (
Query.has("id").as_(str).also.has("platform").as_(Platform)
)
@get.register(LeagueDto)
@validate_query(_validate_get_league_query, convert_region_to_platform)
def get_leagues_list(
self, query: MutableMapping[str, Any], context: PipelineContext = None
) -> LeagueDto:
url = "https://{platform}.api.riotgames.com/lol/league/v4/leagues/{leagueId}".format(
platform=query["platform"].value.lower(), leagueId=query["id"]
)
try:
endpoint = "leagues/leagueId {}".format(query["platform"].value)
app_limiter, method_limiter = self._get_rate_limiter(
query["platform"], endpoint
)
data = self._get(
url, {}, app_limiter=app_limiter, method_limiter=method_limiter
)
except APINotFoundError as error:
raise NotFoundError(str(error)) from error
data["region"] = query["platform"].region.value
for entry in data["entries"]:
entry["region"] = data["region"]
entry["tier"] = data["tier"]
return LeagueDto(data)
_validate_get_many_league_query = (
Query.has("ids").as_(Iterable).also.has("platform").as_(Platform)
)
@get_many.register(LeagueDto)
@validate_query(_validate_get_many_league_query, convert_region_to_platform)
def get_many_leagues_list(
self, query: MutableMapping[str, Any], context: PipelineContext = None
) -> Generator[LeagueDto, None, None]:
def generator():
for id in query["ids"]:
url = "https://{platform}.api.riotgames.com/lol/league/v4/leagues/{leagueId}".format(
platform=query["platform"].value.lower(), leagueId=id
)
try:
endpoint = "leagues/leagueId {}".format(query["platform"].value)
app_limiter, method_limiter = self._get_rate_limiter(
query["platform"], endpoint
)
data = self._get(
url, {}, app_limiter=app_limiter, method_limiter=method_limiter
)
except APINotFoundError as error:
raise NotFoundError(str(error)) from error
                data["region"] = query["platform"].region.value
                for entry in data["entries"]:
                    entry["region"] = data["region"]
                    entry["tier"] = data["tier"]
                yield LeagueDto(data)
return generator()
_validate_get_challenger_league_query = (
Query.has("queue").as_(Queue).also.has("platform").as_(Platform)
)
@get.register(ChallengerLeagueListDto)
@validate_query(_validate_get_challenger_league_query, convert_region_to_platform)
def get_challenger_league_list(
self, query: MutableMapping[str, Any], context: PipelineContext = None
) -> ChallengerLeagueListDto:
url = "https://{platform}.api.riotgames.com/lol/league/v4/challengerleagues/by-queue/{queueName}".format(
platform=query["platform"].value.lower(), queueName=query["queue"].value
)
try:
endpoint = "challengerleagues/by-queue {}".format(query["platform"].value)
app_limiter, method_limiter = self._get_rate_limiter(
query["platform"], endpoint
)
data = self._get(
url, {}, app_limiter=app_limiter, method_limiter=method_limiter
)
except APINotFoundError as error:
raise NotFoundError(str(error)) from error
data["region"] = query["platform"].region.value
data["queue"] = query["queue"].value
for entry in data["entries"]:
entry["region"] = data["region"]
return ChallengerLeagueListDto(data)
_validate_get_many_challenger_league_query = (
Query.has("queues").as_(Iterable).also.has("platform").as_(Platform)
)
@get_many.register(ChallengerLeagueListDto)
@validate_query(
_validate_get_many_challenger_league_query, convert_region_to_platform
)
def get_many_challenger_leagues_list(
self, query: MutableMapping[str, Any], context: PipelineContext = None
) -> Generator[ChallengerLeagueListDto, None, None]:
def generator():
for queue in query["queues"]:
url = "https://{platform}.api.riotgames.com/lol/league/v4/challengerleagues/by-queue/{queueName}".format(
platform=query["platform"].value.lower(), queueName=queue.value
)
try:
endpoint = "challengerleagues/by-queue {}".format(
query["platform"].value
)
app_limiter, method_limiter = self._get_rate_limiter(
query["platform"], endpoint
)
data = self._get(
url, {}, app_limiter=app_limiter, method_limiter=method_limiter
)
except APINotFoundError as error:
raise NotFoundError(str(error)) from error
data["region"] = query["platform"].region.value
data["queue"] = queue.value
for entry in data["entries"]:
entry["region"] = data["region"]
yield ChallengerLeagueListDto(data)
return generator()
_validate_get_grandmaster_league_query = (
Query.has("queue").as_(Queue).also.has("platform").as_(Platform)
)
@get.register(GrandmasterLeagueListDto)
@validate_query(_validate_get_grandmaster_league_query, convert_region_to_platform)
def get_grandmaster_league_list(
self, query: MutableMapping[str, Any], context: PipelineContext = None
) -> GrandmasterLeagueListDto:
url = "https://{platform}.api.riotgames.com/lol/league/v4/grandmasterleagues/by-queue/{queueName}".format(
platform=query["platform"].value.lower(), queueName=query["queue"].value
)
try:
endpoint = "grandmasterleagues/by-queue {}".format(query["platform"].value)
app_limiter, method_limiter = self._get_rate_limiter(
query["platform"], endpoint
)
data = self._get(
url, {}, app_limiter=app_limiter, method_limiter=method_limiter
)
except APINotFoundError as error:
raise NotFoundError(str(error)) from error
data["region"] = query["platform"].region.value
data["queue"] = query["queue"].value
for entry in data["entries"]:
entry["region"] = data["region"]
return GrandmasterLeagueListDto(data)
_validate_get_many_grandmaster_league_query = (
Query.has("queues").as_(Iterable).also.has("platform").as_(Platform)
)
@get_many.register(GrandmasterLeagueListDto)
@validate_query(
_validate_get_many_grandmaster_league_query, convert_region_to_platform
)
def get_many_grandmaster_leagues_list(
self, query: MutableMapping[str, Any], context: PipelineContext = None
) -> Generator[GrandmasterLeagueListDto, None, None]:
def generator():
for queue in query["queues"]:
url = "https://{platform}.api.riotgames.com/lol/league/v4/grandmasterleagues/by-queue/{queueName}".format(
platform=query["platform"].value.lower(), queueName=queue.value
)
try:
endpoint = "grandmasterleagues/by-queue {}".format(
query["platform"].value
)
app_limiter, method_limiter = self._get_rate_limiter(
query["platform"], endpoint
)
data = self._get(
url, {}, app_limiter=app_limiter, method_limiter=method_limiter
)
except APINotFoundError as error:
raise NotFoundError(str(error)) from error
data["region"] = query["platform"].region.value
data["queue"] = queue.value
for entry in data["entries"]:
entry["region"] = data["region"]
yield GrandmasterLeagueListDto(data)
return generator()
_validate_get_master_league_query = (
Query.has("queue").as_(Queue).also.has("platform").as_(Platform)
)
@get.register(MasterLeagueListDto)
@validate_query(_validate_get_master_league_query, convert_region_to_platform)
def get_master_league_list(
self, query: MutableMapping[str, Any], context: PipelineContext = None
) -> MasterLeagueListDto:
url = "https://{platform}.api.riotgames.com/lol/league/v4/masterleagues/by-queue/{queueName}".format(
platform=query["platform"].value.lower(), queueName=query["queue"].value
)
try:
endpoint = "masterleagues/by-queue {}".format(query["platform"].value)
app_limiter, method_limiter = self._get_rate_limiter(
query["platform"], endpoint
)
data = self._get(
url, {}, app_limiter=app_limiter, method_limiter=method_limiter
)
except APINotFoundError as error:
raise NotFoundError(str(error)) from error
data["region"] = query["platform"].region.value
data["queue"] = query["queue"].value
for entry in data["entries"]:
entry["region"] = data["region"]
return MasterLeagueListDto(data)
_validate_get_many_master_league_query = (
Query.has("queues").as_(Iterable).also.has("platform").as_(Platform)
)
@get_many.register(MasterLeagueListDto)
@validate_query(_validate_get_many_master_league_query, convert_region_to_platform)
def get_many_master_leagues_list(
self, query: MutableMapping[str, Any], context: PipelineContext = None
) -> Generator[MasterLeagueListDto, None, None]:
def generator():
for queue in query["queues"]:
url = "https://{platform}.api.riotgames.com/lol/league/v4/masterleagues/by-queue/{queueName}".format(
platform=query["platform"].value.lower(), queueName=queue.value
)
try:
endpoint = "masterleagues/by-queue {}".format(
query["platform"].value
)
app_limiter, method_limiter = self._get_rate_limiter(
query["platform"], endpoint
)
data = self._get(
url, {}, app_limiter=app_limiter, method_limiter=method_limiter
)
except APINotFoundError as error:
raise NotFoundError(str(error)) from error
data["region"] = query["platform"].region.value
data["queue"] = queue.value
for entry in data["entries"]:
entry["region"] = data["region"]
yield MasterLeagueListDto(data)
return generator()
| 39.530928 | 122 | 0.58919 | 1,476 | 15,338 | 5.911924 | 0.065041 | 0.056612 | 0.06876 | 0.052716 | 0.835893 | 0.805524 | 0.737337 | 0.732867 | 0.724616 | 0.673734 | 0 | 0.000927 | 0.296453 | 15,338 | 387 | 123 | 39.633075 | 0.80771 | 0.001434 | 0 | 0.49711 | 0 | 0.023121 | 0.127931 | 0.012995 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046243 | false | 0.00578 | 0.017341 | 0 | 0.124277 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
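The `LeaguesAPI` class above relies on datapipelines' `DataSource.dispatch` to route each `get(Type, query)` call to the handler registered for that DTO type via `@get.register(...)`. A minimal sketch of that type-keyed dispatch pattern using only the standard library (the `TypeDispatcher` class and the `LeagueEntriesDtoStub` type are illustrative stand-ins, not part of cassiopeia or datapipelines):

```python
from typing import Any, Callable, Dict, Type


class TypeDispatcher:
    """Route get(dto_type, query) calls to the handler registered for that type."""

    def __init__(self) -> None:
        self._handlers: Dict[Type, Callable[[dict], Any]] = {}

    def register(self, dto_type: Type) -> Callable:
        # Used as a decorator, mirroring @get.register(LeagueEntriesDto).
        def decorator(func: Callable[[dict], Any]) -> Callable[[dict], Any]:
            self._handlers[dto_type] = func
            return func
        return decorator

    def get(self, dto_type: Type, query: dict) -> Any:
        try:
            handler = self._handlers[dto_type]
        except KeyError:
            raise LookupError("no handler registered for {!r}".format(dto_type))
        return handler(query)


class LeagueEntriesDtoStub(dict):
    """Illustrative stand-in for cassiopeia's LeagueEntriesDto."""


dispatcher = TypeDispatcher()


@dispatcher.register(LeagueEntriesDtoStub)
def get_league_entries(query: dict) -> LeagueEntriesDtoStub:
    # A real handler would call the Riot API; here we just echo the page.
    return LeagueEntriesDtoStub(entries=[], page=query["page"])
```

Asking the dispatcher for a type with no registered handler fails loudly, which is the same property that lets the real pipeline fall through to other data sources.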
c033571dfbf426252c986ddd0977275b86dc3c83 | 71 | py | Python | main/engines/__init__.py | billtrn/Comment-Sentiment-Detector | 3cacca439cf8ada10da021ca620008d8320eeacd | [
"MIT"
] | 10 | 2021-05-19T11:24:19.000Z | 2022-01-07T16:27:23.000Z | main/engines/__init__.py | billtrn/Comment_Sentiment_Analysis | 3cacca439cf8ada10da021ca620008d8320eeacd | [
"MIT"
] | 1 | 2021-05-18T15:55:52.000Z | 2021-05-18T15:55:52.000Z | main/engines/__init__.py | billtrn/Comment_Sentiment_Analysis | 3cacca439cf8ada10da021ca620008d8320eeacd | [
"MIT"
] | null | null | null | from .evaluate import *
from .preprocess import *
from .train import *
| 17.75 | 25 | 0.746479 | 9 | 71 | 5.888889 | 0.555556 | 0.377358 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.169014 | 71 | 3 | 26 | 23.666667 | 0.898305 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
c05795da861836484cefaa160a74f646346f22a5 | 699 | py | Python | mgplvm/manifolds/base.py | rkj26/mgplvm-pytorch | 7d082d92be4d82ae8ab978e774ce83429444c14b | [
"MIT"
] | 18 | 2020-12-29T20:24:55.000Z | 2022-03-07T15:44:13.000Z | mgplvm/manifolds/base.py | rkj26/mgplvm-pytorch | 7d082d92be4d82ae8ab978e774ce83429444c14b | [
"MIT"
] | 41 | 2021-01-15T14:00:25.000Z | 2021-06-17T13:33:11.000Z | mgplvm/manifolds/base.py | rkj26/mgplvm-pytorch | 7d082d92be4d82ae8ab978e774ce83429444c14b | [
"MIT"
] | 1 | 2021-11-22T21:44:13.000Z | 2021-11-22T21:44:13.000Z | import abc
import torch
import torch.nn as nn
from torch import Tensor
from ..base import Module
from typing import Any
class Manifold(Module, metaclass=abc.ABCMeta):
def __init__(self, d: int):
"""
:param d: dimensionality of the manifold
"""
super().__init__()
self.d = d
@abc.abstractmethod
    def expmap(self, x: Tensor) -> Tensor:
pass
@abc.abstractmethod
    def logmap(self, x: Tensor) -> Tensor:
pass
@abc.abstractmethod
    def log_q(self, x: Tensor) -> Tensor:
pass
@abc.abstractmethod
    def distance(self, x: Tensor, y: Tensor) -> Tensor:
pass
@abc.abstractmethod
    def inducing_points(self, n):
pass
| 18.891892 | 49 | 0.609442 | 85 | 699 | 4.894118 | 0.435294 | 0.204327 | 0.240385 | 0.182692 | 0.353365 | 0.353365 | 0.266827 | 0 | 0 | 0 | 0 | 0 | 0.293276 | 699 | 36 | 50 | 19.416667 | 0.842105 | 0.057225 | 0 | 0.4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.24 | false | 0.2 | 0.24 | 0 | 0.52 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
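`Manifold` declares its geometry operations as abstract methods, so a concrete manifold must implement them all before it can be instantiated. A simplified sketch of that contract with plain floats standing in for torch tensors (`ManifoldSketch` and `Euclidean1D` are illustrative names, not part of mgplvm):

```python
import abc


class ManifoldSketch(metaclass=abc.ABCMeta):
    """Simplified stand-in for mgplvm's Manifold base class."""

    def __init__(self, d: int):
        self.d = d  # dimensionality of the manifold

    @abc.abstractmethod
    def distance(self, x: float, y: float) -> float:
        ...


class Euclidean1D(ManifoldSketch):
    """Trivial concrete manifold: distance is the absolute difference."""

    def distance(self, x: float, y: float) -> float:
        return abs(x - y)
```

Attempting `ManifoldSketch(d=1)` raises `TypeError`, which is how `ABCMeta` enforces that subclasses such as mgplvm's actual manifolds supply every abstract operation.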
fbf43edee6638a72546152bfe9b1e7d9b2bea6d9 | 209 | py | Python | tests/__init__.py | ningyixue/AIPI530_Final_Project | b95353ffd003692a37a59042dfcd744a18b7e802 | [
"MIT"
] | 565 | 2020-08-01T02:44:28.000Z | 2022-03-30T15:00:54.000Z | tests/__init__.py | ningyixue/AIPI530_Final_Project | b95353ffd003692a37a59042dfcd744a18b7e802 | [
"MIT"
] | 144 | 2020-08-01T03:45:10.000Z | 2022-03-30T14:51:16.000Z | tests/__init__.py | ningyixue/AIPI530_Final_Project | b95353ffd003692a37a59042dfcd744a18b7e802 | [
"MIT"
] | 103 | 2020-08-26T13:27:34.000Z | 2022-03-31T12:24:27.000Z | import os
import pytest
is_skipping_performance_test = os.environ.get("TEST_PERFORMANCE") != "TRUE"
performance_test = pytest.mark.skipif(
is_skipping_performance_test, reason="skip performance tests"
)
| 23.222222 | 75 | 0.794258 | 27 | 209 | 5.851852 | 0.555556 | 0.28481 | 0.265823 | 0.316456 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.110048 | 209 | 8 | 76 | 26.125 | 0.849462 | 0 | 0 | 0 | 0 | 0 | 0.200957 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
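The module computes the skip condition once at import time and exposes `performance_test` as a reusable marker for decorating slow tests. The gating logic itself is a plain environment check, which can be sketched without pytest (the helper name below is illustrative):

```python
import os


def should_skip_performance_tests(environ=None) -> bool:
    """True unless TEST_PERFORMANCE is exactly the string "TRUE"."""
    if environ is None:
        environ = os.environ
    return environ.get("TEST_PERFORMANCE") != "TRUE"
```

Note the comparison is case-sensitive: setting `TEST_PERFORMANCE=true` would still skip the performance tests.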
fbfb1cfad92797d0bfb1e782fc6c38c2c36a97f9 | 224 | py | Python | scripts/run_update.py | marvinsohn/ose-course-data-science | 250f2bac7a3aafe1367a970c8b09a2c9fed67f0a | [
"MIT"
] | 62 | 2019-04-02T11:51:06.000Z | 2020-07-11T05:28:27.000Z | scripts/run_update.py | marvinsohn/ose-course-data-science | 250f2bac7a3aafe1367a970c8b09a2c9fed67f0a | [
"MIT"
] | 49 | 2019-04-05T10:57:07.000Z | 2020-07-07T20:41:19.000Z | scripts/run_update.py | HumanCapitalAnalysis/ose-data-science | d5be68de68f170f8e8f11c9ed635b42f19100f87 | [
"MIT"
] | 46 | 2019-04-03T08:31:02.000Z | 2020-07-13T12:43:26.000Z | #!/usr/bin/env python
"""This script updates all files, including the submodules."""
import subprocess
subprocess.check_call(["git", "pull"])
subprocess.check_call(["git", "submodule", "update", "--recursive", "--remote"])
| 32 | 80 | 0.705357 | 27 | 224 | 5.777778 | 0.814815 | 0.192308 | 0.24359 | 0.282051 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089286 | 224 | 6 | 81 | 37.333333 | 0.764706 | 0.34375 | 0 | 0 | 0 | 0 | 0.312057 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.333333 | 0 | 0.333333 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
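`subprocess.check_call` raises `CalledProcessError` on a non-zero exit status, so a failed `git pull` stops the script before the submodule update ever runs. A sketch of that fail-fast pattern, using the Python interpreter itself as a portable stand-in for git (the `run_step` helper is hypothetical):

```python
import subprocess
import sys


def run_step(args):
    """Run one command; a non-zero exit raises CalledProcessError and aborts."""
    subprocess.check_call(args)


# Exit status 0: execution continues to the next step.
run_step([sys.executable, "-c", "print('pulled')"])

# Non-zero exit status: the exception carries the return code.
try:
    run_step([sys.executable, "-c", "raise SystemExit(1)"])
except subprocess.CalledProcessError as error:
    print("step failed with exit code", error.returncode)
```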
220f407e14efc98252b2ac8aaba3ee471172b4f3 | 46 | py | Python | test/helper/fetchHITRAN.py | Datseris/RadiativeTransfer.jl | 0fdd094f2842d574c09dfeb7cd02e40c25edaeb2 | [
"MIT"
] | 29 | 2021-05-07T21:58:21.000Z | 2022-01-20T18:03:07.000Z | test/helper/fetchHITRAN.py | Datseris/RadiativeTransfer.jl | 0fdd094f2842d574c09dfeb7cd02e40c25edaeb2 | [
"MIT"
] | 28 | 2020-08-24T21:33:12.000Z | 2021-05-03T19:30:14.000Z | test/helper/fetchHITRAN.py | Datseris/RadiativeTransfer.jl | 0fdd094f2842d574c09dfeb7cd02e40c25edaeb2 | [
"MIT"
] | 1 | 2021-06-22T23:35:29.000Z | 2021-06-22T23:35:29.000Z | from hapi import *
fetch('CO2',2,1,5500,7000)
| 15.333333 | 26 | 0.695652 | 9 | 46 | 3.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.268293 | 0.108696 | 46 | 2 | 27 | 23 | 0.512195 | 0 | 0 | 0 | 0 | 0 | 0.065217 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
22238f147780d468fbaede35f167e2d6fb9dfa6a | 129 | py | Python | test.py | Thukor/MazeSolver | c953e193ce27a7348e8ec9c5592144426dfce193 | [
"MIT"
] | 5 | 2018-02-06T22:48:34.000Z | 2020-01-07T20:19:05.000Z | test.py | Thukor/MazeSolver | c953e193ce27a7348e8ec9c5592144426dfce193 | [
"MIT"
] | 11 | 2018-01-31T21:47:49.000Z | 2018-04-21T16:42:52.000Z | test.py | Thukor/MazeSolver | c953e193ce27a7348e8ec9c5592144426dfce193 | [
"MIT"
] | 2 | 2020-06-18T05:40:03.000Z | 2022-02-02T03:46:30.000Z |
from lib.util.decorators.imagestrategy import STRATEGIES
from lib.util.ImageProcessing.strategies import *
print(STRATEGIES) | 18.428571 | 57 | 0.829457 | 15 | 129 | 7.133333 | 0.6 | 0.130841 | 0.205607 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.100775 | 129 | 7 | 58 | 18.428571 | 0.922414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0.333333 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2260f5fc0b43fa9673289fc8df72bdf9e98f0bac | 930 | py | Python | example_notebooks/scover/__init__.py | jacobhepkema/scanem | fbd4d3c8bb00e12dde6f33a18473a7775096c243 | [
"MIT"
] | 1 | 2020-09-15T09:54:55.000Z | 2020-09-15T09:54:55.000Z | scover/__init__.py | jacobhepkema/scanem | fbd4d3c8bb00e12dde6f33a18473a7775096c243 | [
"MIT"
] | null | null | null | scover/__init__.py | jacobhepkema/scanem | fbd4d3c8bb00e12dde6f33a18473a7775096c243 | [
"MIT"
] | null | null | null | from .data.utils import (
onehot_seq,
pool_anndata,
SeqDataset,
get_splits,
shan_ent,
fdr,
get_group_name,
seq_list_to_conv,
align_conv_filters,
save_meme,
read_meme,
to_z,
get_activations,
create_alignment_df,
generate_alignment_graph,
plot_alignment_graph,
generate_motif_cluster_df
)
from .net.seqnet import (
tune_scover_asha_hyperopt,
train_scover,
train_scover_bs,
SeqNet
)
__all__ = [
'onehot_seq',
'pool_anndata',
'SeqDataset',
'get_splits',
'shan_ent',
'fdr',
'get_group_name',
'seq_list_to_conv',
'align_conv_filters',
'save_meme',
'read_meme',
'to_z',
'get_activations',
'create_alignment_df',
'generate_alignment_graph',
'plot_alignment_graph',
'generate_motif_cluster_df',
'tune_scover_asha_hyperopt',
'train_scover',
'train_scover_bs',
'SeqNet'
]
| 18.6 | 32 | 0.65914 | 111 | 930 | 4.945946 | 0.387387 | 0.102004 | 0.047359 | 0.07286 | 0.925319 | 0.925319 | 0.925319 | 0.925319 | 0.925319 | 0.925319 | 0 | 0 | 0.241935 | 930 | 49 | 33 | 18.979592 | 0.778723 | 0 | 0 | 0 | 0 | 0 | 0.305376 | 0.07957 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.041667 | 0 | 0.041667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
97f2f9fe57ebe9380be62b5a602d7fdecd627a48 | 129 | py | Python | app/__init__.py | Hacker-1202/Selfium | 7e798c23c9f24aacab6f6a485d6355f1045bc65c | [
"MIT"
] | 14 | 2021-11-05T11:27:25.000Z | 2022-02-28T02:04:32.000Z | app/__init__.py | CssHammer/Selfium | 7e798c23c9f24aacab6f6a485d6355f1045bc65c | [
"MIT"
] | 2 | 2021-05-17T23:55:34.000Z | 2021-07-09T17:24:44.000Z | app/__init__.py | CssHammer/Selfium | 7e798c23c9f24aacab6f6a485d6355f1045bc65c | [
"MIT"
] | 5 | 2022-01-02T13:33:17.000Z | 2022-02-26T13:09:50.000Z | from .auth import *
from .cli import *
from .filesystem import *
from .events import *
from .helpers import *
from .vars import * | 21.5 | 25 | 0.728682 | 18 | 129 | 5.222222 | 0.444444 | 0.531915 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178295 | 129 | 6 | 26 | 21.5 | 0.886792 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
97f606d518e9238d128064b3e4090f3beeb758a9 | 8,384 | py | Python | tests/notebook/api/test_client.py | abeja-inc/abeja-platform-sdk | 97cfc99b11ffc1fccb3f527435277bc89e18b8c3 | [
"Apache-2.0"
] | 2 | 2020-10-20T18:38:16.000Z | 2020-10-20T20:12:35.000Z | tests/notebook/api/test_client.py | abeja-inc/abeja-platform-sdk | 97cfc99b11ffc1fccb3f527435277bc89e18b8c3 | [
"Apache-2.0"
] | 30 | 2020-04-07T01:15:47.000Z | 2020-11-18T03:25:19.000Z | tests/notebook/api/test_client.py | abeja-inc/abeja-platform-sdk | 97cfc99b11ffc1fccb3f527435277bc89e18b8c3 | [
"Apache-2.0"
] | null | null | null | import unittest
import requests_mock
from abeja.notebook import APIClient
ORGANIZATION_ID = '1111111111111'
SERVICE_ID = '2222222222222'
JOB_DEFINITION_NAME = 'test-job'
NOTEBOOK_ID = '4444444444444'
NOTEBOOK_RES = {
"job_definition_id": JOB_DEFINITION_NAME,
"training_notebook_id": NOTEBOOK_ID,
"name": "notebook-3",
"description": None,
"status": "Pending",
"status_message": None,
"instance_type": "cpu-1",
"image": "abeja-inc/all-cpu:18.10",
"creator": {
"updated_at": "2018-01-04T03:02:12Z",
"role": "admin",
"is_registered": True,
"id": "1122334455660",
"email": "test@abeja.asia",
"display_name": None,
"created_at": "2017-05-26T01:38:46Z"
},
"created_at": "2018-06-07T04:42:34.913644Z",
"modified_at": "2018-06-07T04:42:34.913726Z"
}
NOTEBOOK_LIST_RES = [
NOTEBOOK_RES
]
class TestAPIClient(unittest.TestCase):
@requests_mock.Mocker()
def test_create_notebook(self, m):
path = '/organizations/{}/training/definitions/{}/notebooks'.format(
ORGANIZATION_ID, JOB_DEFINITION_NAME)
m.post(path, json=NOTEBOOK_RES)
client = APIClient()
ret = client.create_notebook(ORGANIZATION_ID, JOB_DEFINITION_NAME)
self.assertDictEqual(m.request_history[0].json(), {})
self.assertDictEqual(ret, NOTEBOOK_RES)
@requests_mock.Mocker()
def test_create_notebook_with_params(self, m):
path = '/organizations/{}/training/definitions/{}/notebooks'.format(
ORGANIZATION_ID, JOB_DEFINITION_NAME)
m.post(path, json=NOTEBOOK_RES)
client = APIClient()
ret = client.create_notebook(
ORGANIZATION_ID,
JOB_DEFINITION_NAME,
instance_type="gpu-1",
image="abeja-inc/all-gpu:18.10",
notebook_type="lab")
expected_payload = {
"instance_type": "gpu-1",
"image": 'abeja-inc/all-gpu:18.10',
"notebook_type": "lab"
}
self.assertDictEqual(m.request_history[0].json(), expected_payload)
self.assertDictEqual(ret, NOTEBOOK_RES)
@requests_mock.Mocker()
def test_get_notebook(self, m):
path = '/organizations/{}/training/definitions/{}/notebooks/{}'.format(
ORGANIZATION_ID, JOB_DEFINITION_NAME, NOTEBOOK_ID)
m.get(path, json=NOTEBOOK_RES)
client = APIClient()
ret = client.get_notebook(
ORGANIZATION_ID,
JOB_DEFINITION_NAME,
NOTEBOOK_ID)
self.assertDictEqual(ret, NOTEBOOK_RES)
@requests_mock.Mocker()
def test_get_notebooks(self, m):
path = '/organizations/{}/training/definitions/{}/notebooks'.format(
ORGANIZATION_ID, JOB_DEFINITION_NAME)
m.get(path, json=NOTEBOOK_LIST_RES)
client = APIClient()
ret = client.get_notebooks(ORGANIZATION_ID, JOB_DEFINITION_NAME)
self.assertListEqual(ret, NOTEBOOK_LIST_RES)
@requests_mock.Mocker()
def test_update_notebook(self, m):
path = '/organizations/{}/training/definitions/{}/notebooks/{}'.format(
ORGANIZATION_ID, JOB_DEFINITION_NAME, NOTEBOOK_ID)
m.put(path, json=NOTEBOOK_RES)
client = APIClient()
ret = client.update_notebook(
ORGANIZATION_ID, JOB_DEFINITION_NAME, NOTEBOOK_ID)
self.assertDictEqual(m.request_history[0].json(), {})
self.assertDictEqual(ret, NOTEBOOK_RES)
@requests_mock.Mocker()
def test_update_notebook_with_params(self, m):
path = '/organizations/{}/training/definitions/{}/notebooks/{}'.format(
ORGANIZATION_ID, JOB_DEFINITION_NAME, NOTEBOOK_ID)
m.put(path, json=NOTEBOOK_RES)
client = APIClient()
ret = client.update_notebook(
ORGANIZATION_ID,
JOB_DEFINITION_NAME,
NOTEBOOK_ID,
instance_type="gpu-1",
image="abeja-inc/all-gpu:18.10",
notebook_type="lab")
expected_payload = {
"instance_type": "gpu-1",
"image": 'abeja-inc/all-gpu:18.10',
"notebook_type": 'lab'
}
self.assertDictEqual(m.request_history[0].json(), expected_payload)
self.assertDictEqual(ret, NOTEBOOK_RES)
@requests_mock.Mocker()
def test_delete_notebook(self, m):
path = '/organizations/{}/training/definitions/{}/notebooks/{}'.format(
ORGANIZATION_ID, JOB_DEFINITION_NAME, NOTEBOOK_ID)
message_res = {
"message": "abc1111111111111 deleted"
}
m.delete(path, json=message_res)
client = APIClient()
ret = client.delete_notebook(
ORGANIZATION_ID, JOB_DEFINITION_NAME, NOTEBOOK_ID)
self.assertDictEqual(ret, message_res)
@requests_mock.Mocker()
def test_stop_notebook(self, m):
path = '/organizations/{}/training/definitions/{}/notebooks/{}/stop'.format(
ORGANIZATION_ID, JOB_DEFINITION_NAME, NOTEBOOK_ID)
message_res = {
"message": "abc1111111111111 stopped"
}
m.post(path, json=message_res)
client = APIClient()
ret = client.stop_notebook(
ORGANIZATION_ID,
JOB_DEFINITION_NAME,
NOTEBOOK_ID)
self.assertDictEqual(ret, message_res)
@requests_mock.Mocker()
def test_start_notebook(self, m):
path = '/organizations/{}/training/definitions/{}/notebooks/{}/start'.format(
ORGANIZATION_ID, JOB_DEFINITION_NAME, NOTEBOOK_ID)
message_res = {
"message": "abc1111111111111 started"
}
m.post(path, json=message_res)
client = APIClient()
ret = client.start_notebook(
ORGANIZATION_ID,
JOB_DEFINITION_NAME,
NOTEBOOK_ID)
self.assertDictEqual(ret, message_res)
@requests_mock.Mocker()
def test_get_notebook_recent_logs(self, m):
path = '/organizations/{}/training/definitions/{}/notebooks/{}/recentlogs'.format(
ORGANIZATION_ID, JOB_DEFINITION_NAME, NOTEBOOK_ID)
message_res = {
"events": [
{
"message": "start executing model with abeja-runtime-python36 (version: 0.X.X)",
"timestamp": "2019-10-16T00:00:00.000Z"}],
"next_backward_token": "AAA",
"next_forward_token": "BBB"}
m.get(path, json=message_res)
client = APIClient()
ret = client.get_notebook_recent_logs(
ORGANIZATION_ID, JOB_DEFINITION_NAME, NOTEBOOK_ID)
self.assertDictEqual(ret, message_res)
@requests_mock.Mocker()
def test_get_notebook_recent_logs_next_forward_token(self, m):
path = '/organizations/{}/training/definitions/{}/notebooks/{}/recentlogs?next_forward_token=AAA'.format(
ORGANIZATION_ID, JOB_DEFINITION_NAME, NOTEBOOK_ID)
message_res = {
"events": [
{
"message": "start executing model with abeja-runtime-python36 (version: 0.X.X)",
"timestamp": "2019-10-16T00:00:00.000Z"}],
"next_backward_token": "AAA",
"next_forward_token": "BBB"}
m.get(path, json=message_res)
client = APIClient()
ret = client.get_notebook_recent_logs(
ORGANIZATION_ID, JOB_DEFINITION_NAME, NOTEBOOK_ID,
next_forward_token="AAA"
)
self.assertDictEqual(ret, message_res)
@requests_mock.Mocker()
def test_get_notebook_recent_logs_next_backward_token(self, m):
path = '/organizations/{}/training/definitions/{}/notebooks/{}/recentlogs?next_backward_token=BBB'.format(
ORGANIZATION_ID, JOB_DEFINITION_NAME, NOTEBOOK_ID)
message_res = {
"events": [
{
"message": "start executing model with abeja-runtime-python36 (version: 0.X.X)",
"timestamp": "2019-10-16T00:00:00.000Z"}],
"next_backward_token": "AAA",
"next_forward_token": "BBB"}
m.get(path, json=message_res)
client = APIClient()
ret = client.get_notebook_recent_logs(
ORGANIZATION_ID, JOB_DEFINITION_NAME, NOTEBOOK_ID,
next_backward_token="BBB"
)
self.assertDictEqual(ret, message_res)
| 35.676596 | 114 | 0.620945 | 894 | 8,384 | 5.540268 | 0.139821 | 0.070866 | 0.089239 | 0.095901 | 0.853422 | 0.839087 | 0.8191 | 0.811024 | 0.750252 | 0.750252 | 0 | 0.04011 | 0.259542 | 8,384 | 234 | 115 | 35.82906 | 0.757732 | 0 | 0 | 0.61194 | 0 | 0 | 0.223402 | 0.123688 | 0 | 0 | 0 | 0 | 0.079602 | 1 | 0.059701 | false | 0 | 0.014925 | 0 | 0.079602 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
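Every test above formats a variant of the same training-notebook REST path from organization, job-definition, and notebook identifiers. That repetition could be collapsed into one helper (`notebook_path` is a hypothetical name, not part of the abeja SDK):

```python
def notebook_path(organization_id, job_definition_name, notebook_id=None, action=None):
    """Build the training-notebook REST path the tests format by hand."""
    path = "/organizations/{}/training/definitions/{}/notebooks".format(
        organization_id, job_definition_name
    )
    if notebook_id is not None:
        path += "/" + notebook_id
    if action is not None:
        path += "/" + action  # e.g. "stop", "start", "recentlogs"
    return path
```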
3f600ecb9fa61be81a74f2c51edf8b6bfdb9b3b5 | 158 | py | Python | mmseg/models/segmentors/__init__.py | CheungBH/mmsegmentation | 9d72a35ad6d6df499dc6b61eb441b0646f35db18 | [
"Apache-2.0"
] | 903 | 2021-06-13T04:45:03.000Z | 2022-03-31T13:21:50.000Z | mmseg/models/segmentors/__init__.py | zots0127/SegFormer | 93301b33d7b7634b018386681be3a640f5979957 | [
"DOC"
] | 72 | 2021-06-13T13:01:49.000Z | 2022-03-30T09:19:34.000Z | mmseg/models/segmentors/__init__.py | zots0127/SegFormer | 93301b33d7b7634b018386681be3a640f5979957 | [
"DOC"
] | 159 | 2021-04-13T01:23:15.000Z | 2022-03-31T18:56:09.000Z | from .cascade_encoder_decoder import CascadeEncoderDecoder
from .encoder_decoder import EncoderDecoder
__all__ = ['EncoderDecoder', 'CascadeEncoderDecoder']
| 31.6 | 58 | 0.85443 | 14 | 158 | 9.142857 | 0.571429 | 0.21875 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.082278 | 158 | 4 | 59 | 39.5 | 0.882759 | 0 | 0 | 0 | 0 | 0 | 0.221519 | 0.132911 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
58ab6325f675c04ea88b72dc85652f4c4ac45651 | 3,311 | py | Python | tests/test_mof_process_parser.py | IMULMUL/etl-parser | 76b7c046866ce0469cd129ee3f7bb3799b34e271 | [
"Apache-2.0"
] | 104 | 2020-03-04T14:31:31.000Z | 2022-03-28T02:59:36.000Z | tests/test_mof_process_parser.py | IMULMUL/etl-parser | 76b7c046866ce0469cd129ee3f7bb3799b34e271 | [
"Apache-2.0"
] | 7 | 2020-04-20T09:18:39.000Z | 2022-03-19T17:06:19.000Z | tests/test_mof_process_parser.py | IMULMUL/etl-parser | 76b7c046866ce0469cd129ee3f7bb3799b34e271 | [
"Apache-2.0"
] | 16 | 2020-03-05T18:55:59.000Z | 2022-03-01T10:19:28.000Z | # -*- coding: utf-8 -*-
import unittest
from etl.parsers.kernel import Process_Terminate_TypeGroup1, Process_V4_TypeGroup1
from etl.parsers.kernel.core import build_mof
from etl.parsers.kernel.process import ImageLoad
from etl.wmi import EventTraceGroup
class TestMofProcessParser(unittest.TestCase):
def test_image_load(self):
payload = b'\x00\x00?\x87\x03\xf8\xff\xff\x00\xa0\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf9]\x01\x00\xef\xa4\xef]\x04\x01\x00\x00\x00\x00?\x87\x03\xf8\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\\\x00D\x00e\x00v\x00i\x00c\x00e\x00\\\x00H\x00a\x00r\x00d\x00d\x00i\x00s\x00k\x00V\x00o\x00l\x00u\x00m\x00e\x004\x00\\\x00W\x00i\x00n\x00d\x00o\x00w\x00s\x00\\\x00S\x00y\x00s\x00t\x00e\x00m\x003\x002\x00\\\x00d\x00r\x00i\x00v\x00e\x00r\x00s\x00\\\x00w\x00i\x00n\x00t\x00u\x00n\x00.\x00s\x00y\x00s\x00\x00\x00'
mof = build_mof(EventTraceGroup.EVENT_TRACE_GROUP_PROCESS, 3, 10, payload)
self.assertIsInstance(mof, ImageLoad)
self.assertEqual(mof.get_process_id(), 0)
self.assertEqual(mof.get_image_filename(), "\\Device\\HarddiskVolume4\\Windows\\System32\\drivers\\wintun.sys")
def test_process_terminate(self):
payload = b'\xdc\x01\x00\x00'
mof = build_mof(EventTraceGroup.EVENT_TRACE_GROUP_PROCESS, 2, 11, payload)
self.assertIsInstance(mof, Process_Terminate_TypeGroup1)
self.assertEqual(mof.get_process_id(), 476)
def test_process_v4_type_group1_type1(self):
payload = b'\x800\xa9\x98\x0b\xa5\xff\xff\x1c\x02\x00\x00\xcc\x01\x00\x00\x00\x00\x00\x00\x03\x01\x00\x00\x00po\n\x04\x00\x00\x00\x04\x00\x00\x00\xd0\x19\xeb\x0f\x8b\x8d\xff\xff\x00\x00\x00\x00\x03\x00\x00\x00\x01\x01\x00\x00\x00\x00\x00\x05\x12\x00\x00\x00smss.exe\x00\\\x00S\x00y\x00s\x00t\x00e\x00m\x00R\x00o\x00o\x00t\x00\\\x00S\x00y\x00s\x00t\x00e\x00m\x003\x002\x00\\\x00s\x00m\x00s\x00s\x00.\x00e\x00x\x00e\x00 \x000\x000\x000\x000\x000\x000\x00e\x004\x00 \x000\x000\x000\x000\x000\x000\x008\x004\x00 \x00\x00\x00\x00\x00\x00\x00'
mof = build_mof(EventTraceGroup.EVENT_TRACE_GROUP_PROCESS, 4, 1, payload)
self.assertIsInstance(mof, Process_V4_TypeGroup1)
self.assertEqual(mof.get_process_id(), 540)
self.assertEqual(mof.get_command_line(), '\\SystemRoot\\System32\\smss.exe 000000e4 00000084 ')
self.assertEqual(mof.get_image_file_name(), 'smss.exe')
def test_process_v4_type_group1_type2(self):
payload = b'\x80 \xa6\x98\x0b\xa5\xff\xff\xdc\x01\x00\x00\xcc\x01\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x00\xe0r\x95\x04\x00\x00\x00\x00\x00\x00\x00\xd0x\x91\x10\x8b\x8d\xff\xff\x00\x00\x00\x00F\x00\x00\x00\x01\x01\x00\x00\x00\x00\x00\x05\x12\x00\x00\x00autochk.exe\x00\\\x00?\x00?\x00\\\x00C\x00:\x00\\\x00W\x00i\x00n\x00d\x00o\x00w\x00s\x00\\\x00s\x00y\x00s\x00t\x00e\x00m\x003\x002\x00\\\x00a\x00u\x00t\x00o\x00c\x00h\x00k\x00.\x00e\x00x\x00e\x00 \x00*\x00\x00\x00\x00\x00\x00\x00'
mof = build_mof(EventTraceGroup.EVENT_TRACE_GROUP_PROCESS, 4, 2, payload)
self.assertIsInstance(mof, Process_V4_TypeGroup1)
self.assertEqual(mof.get_process_id(), 476)
self.assertEqual(mof.get_command_line(), '\\??\\C:\\Windows\\system32\\autochk.exe *')
self.assertEqual(mof.get_image_file_name(), 'autochk.exe')
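The `get_image_filename` assertion above recovers the driver path from the raw event payload, where it is stored as a NUL-terminated little-endian UTF-16 string. A minimal sketch of that decoding step (`read_utf16_cstring` is a hypothetical helper, not part of etl-parser's API):

```python
def read_utf16_cstring(buf: bytes, offset: int = 0) -> str:
    """Decode a NUL-terminated UTF-16LE string starting at `offset`."""
    end = offset
    # Advance two bytes at a time until the 0x0000 terminator or end of buffer.
    while end + 1 < len(buf) and buf[end:end + 2] != b"\x00\x00":
        end += 2
    return buf[offset:end].decode("utf-16-le")
```

Applied at the right offset of the ImageLoad payload, this yields the `\Device\HarddiskVolume4\...` path asserted in `test_image_load`.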
| 80.756098 | 545 | 0.735427 | 561 | 3,311 | 4.231729 | 0.235294 | 0.232519 | 0.24642 | 0.237574 | 0.59604 | 0.537911 | 0.454928 | 0.356782 | 0.31845 | 0.317186 | 0 | 0.227605 | 0.089701 | 3,311 | 40 | 546 | 82.775 | 0.560053 | 0.006342 | 0 | 0.129032 | 0 | 0.096774 | 0.520535 | 0.500761 | 0 | 0 | 0 | 0 | 0.419355 | 1 | 0.129032 | false | 0 | 0.16129 | 0 | 0.322581 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
453dd8a74257c33c1d3118209db4310be018b328 | 334 | py | Python | naturalnets/brains/__init__.py | bjuergens/NaturalNets | fd67f1b3c443761270adaf9877ed2a6358d830f0 | [
"MIT"
] | null | null | null | naturalnets/brains/__init__.py | bjuergens/NaturalNets | fd67f1b3c443761270adaf9877ed2a6358d830f0 | [
"MIT"
] | 2 | 2021-04-13T11:47:01.000Z | 2021-04-30T11:44:46.000Z | naturalnets/brains/__init__.py | bjuergens/NaturalNets | fd67f1b3c443761270adaf9877ed2a6358d830f0 | [
"MIT"
] | 1 | 2021-11-03T09:36:40.000Z | 2021-11-03T09:36:40.000Z | from naturalnets.brains.continuous_time_rnn import ContinuousTimeRNN
from naturalnets.brains.feed_forward_nn import FeedForwardNN
from naturalnets.brains.indirect_encoded_ctrnn import IndirectEncodedCtrnn
from naturalnets.brains.elman import ElmanNN
from naturalnets.brains.gru import GruNN
from naturalnets.brains.lstm import LstmNN
| 47.714286 | 74 | 0.892216 | 42 | 334 | 6.952381 | 0.52381 | 0.308219 | 0.431507 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.071856 | 334 | 6 | 75 | 55.666667 | 0.941935 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
1885998b47c70a9057511349a056bcd6da1b3131 | 48 | py | Python | hon/json/__init__.py | swquinn/hon | 333332029ee884a8822d38024659d5d7da64ff1a | [
"MIT"
] | null | null | null | hon/json/__init__.py | swquinn/hon | 333332029ee884a8822d38024659d5d7da64ff1a | [
"MIT"
] | 14 | 2019-06-23T01:49:55.000Z | 2021-02-22T01:26:51.000Z | hon/json/__init__.py | swquinn/hon | 333332029ee884a8822d38024659d5d7da64ff1a | [
"MIT"
] | null | null | null | from .json_serializable import JsonSerializable
| 24 | 47 | 0.895833 | 5 | 48 | 8.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 48 | 1 | 48 | 48 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
18c3a11f575bdc7a9987a43f5f710ff37e7b9a3d | 7,595 | py | Python | app/core/gopup/index/index_toutiao.py | ZhouRR/quotations-gateway-api | ef433fe8e461344a6c59e5edec206ad4ba7eeff6 | [
"Apache-2.0"
] | null | null | null | app/core/gopup/index/index_toutiao.py | ZhouRR/quotations-gateway-api | ef433fe8e461344a6c59e5edec206ad4ba7eeff6 | [
"Apache-2.0"
] | null | null | null | app/core/gopup/index/index_toutiao.py | ZhouRR/quotations-gateway-api | ef433fe8e461344a6c59e5edec206ad4ba7eeff6 | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time : 2020/10/23 0023
# @Author : justin.郑 3907721@qq.com
# @File : index_toutiao.py
# @Desc : Toutiao index
import json
import pandas as pd
import requests
from gopup.index.cons import index_toutiao_headers
def toutiao_index(keyword="python", start_date="20201016", end_date="20201022", app_name="toutiao"):
"""
头条指数数据
:param keyword: 关键词
:param start_date: 开始日期
:param end_date: 截止日期
:param app_name: 平台
:return:
datetime 日期
index 指数
"""
# list_keyword = '["%s"]' % keyword
try:
url = "https://trendinsight.oceanengine.com/api/open/index/get_multi_keyword_hot_trend"
data = {
"keyword_list": [keyword],
"start_date": start_date,
"end_date": end_date,
"app_name": app_name
}
res = requests.post(url, json=data, headers=index_toutiao_headers)
hot_list = json.loads(res.text)['data']['hot_list'][0]['hot_list']
df = pd.DataFrame(hot_list)
return df
    except Exception:
return None
def toutiao_relation(keyword="python", start_date="20201012", end_date="20201018", app_name="toutiao"):
"""
头条相关分析
:param keyword: 关键词
:param start_date: 开始日期
:param end_date: 截止日期
:return:
relation_word 相关词
relation_score 相关性值
score_rank 相关性值排名
search_hot 搜索热点值
search_ratio 搜索比率
"""
try:
url = "https://trendinsight.oceanengine.com/api/open/index/get_relation_word"
data = {"param": {"keyword": keyword,
"start_date": start_date,
"end_date": end_date,
"app_name": app_name}
}
res = requests.post(url, json=data, headers=index_toutiao_headers)
relation_word_list = json.loads(res.text)['data']['relation_word_list']
df = pd.DataFrame(relation_word_list)
return df
    except Exception:
return None
# def toutiao_sentiment(keyword="python", start_date="20201012", end_date="20201018"):
# """
# 头条情感分析
# :param keyword: 关键词
# :param start_date: 开始日期
# :param end_date: 截止日期
# :return:
# keyword 关键词
# score 情感值
# """
# url = "https://index.toutiao.com/api/v1/get_keyword_sentiment"
# data = {
# "keyword": keyword,
# "start_date": start_date,
# "end_date": end_date
# }
# res = requests.get(url, params=data, headers=index_toutiao_headers)
# score = json.loads(res.text)['data']['score']
# df = pd.DataFrame([{"score": score, "keyword": keyword}])
# return df
def toutiao_province(keyword="python", start_date="20201012", end_date="20201018", app_name="toutiao"):
"""
头条地域分析
:param keyword: 关键词
:param start_date: 开始日期
:param end_date: 截止日期
:return:
name 省份
value 渗透率
"""
try:
url = "https://trendinsight.oceanengine.com/api/open/index/get_portrait"
data = {"param": {"keyword": keyword,
"start_date": start_date,
"end_date": end_date,
"app_name": app_name}
}
res = requests.post(url, json=data, headers=index_toutiao_headers)
res_text = json.loads(res.text)['data']['data'][2]['label_list']
df = pd.DataFrame(res_text)
df['name'] = df['name_zh']
df = df.drop(['label_id', 'name_zh'], axis=1)
df = df.sort_values(by="value", ascending=False)
return df
    except Exception:
return None
def toutiao_city(keyword="python", start_date="20201012", end_date="20201018", app_name="toutiao"):
"""
头条城市分析
:param keyword: 关键词
:param start_date: 开始日期
:param end_date: 截止日期
:return:
name 城市
value 渗透率
"""
try:
url = "https://trendinsight.oceanengine.com/api/open/index/get_portrait"
data = {"param": {"keyword": keyword,
"start_date": start_date,
"end_date": end_date,
"app_name": app_name}
}
res = requests.post(url, json=data, headers=index_toutiao_headers)
res_text = json.loads(res.text)['data']['data'][3]['label_list']
df = pd.DataFrame(res_text)
df['name'] = df['name_zh']
df = df.drop(['label_id', 'name_zh'], axis=1)
df = df.sort_values(by="value", ascending=False)
return df
    except Exception:
return None
def toutiao_age(keyword="python", start_date="20201012", end_date="20201018", app_name="toutiao"):
"""
头条年龄分析
:param keyword: 关键词
:param start_date: 开始日期
:param end_date: 截止日期
:return:
name 年龄区间
value 渗透率
"""
try:
url = "https://trendinsight.oceanengine.com/api/open/index/get_portrait"
data = {"param": {"keyword": keyword,
"start_date": start_date,
"end_date": end_date,
"app_name": app_name}
}
res = requests.post(url, json=data, headers=index_toutiao_headers)
res_text = json.loads(res.text)['data']['data'][0]['label_list']
df = pd.DataFrame(res_text)
df['name'] = df['name_zh']
df = df.drop(['label_id', 'name_zh'], axis=1)
return df
    except Exception:
return None
def toutiao_gender(keyword="python", start_date="20201012", end_date="20201018", app_name="toutiao"):
"""
头条性别分析
:param keyword: 关键词
:param start_date: 开始日期
:param end_date: 截止日期
:return:
name 性别
value 渗透率
"""
try:
url = "https://trendinsight.oceanengine.com/api/open/index/get_portrait"
data = {"param": {"keyword": keyword,
"start_date": start_date,
"end_date": end_date,
"app_name": app_name}
}
res = requests.post(url, json=data, headers=index_toutiao_headers)
res_text = json.loads(res.text)['data']['data'][1]['label_list']
df = pd.DataFrame(res_text)
df['name'] = df['name_zh']
df = df.drop(['label_id', 'name_zh'], axis=1)
df = df.sort_values(by="value", ascending=False)
return df
    except Exception:
return None
def toutiao_interest_category(keyword="python", start_date="20201012", end_date="20201018", app_name="toutiao"):
"""
头条用户阅读兴趣分类
:param keyword: 关键词
:param start_date: 开始日期
:param end_date: 截止日期
:return:
name 分类
value 渗透率
"""
try:
url = "https://trendinsight.oceanengine.com/api/open/index/get_portrait"
data = {"param": {"keyword": keyword,
"start_date": start_date,
"end_date": end_date,
"app_name": app_name}
}
res = requests.post(url, json=data, headers=index_toutiao_headers)
res_text = json.loads(res.text)['data']['data'][4]['label_list']
df = pd.DataFrame(res_text)
df['name'] = df['name_zh']
df = df.drop(['label_id', 'name_zh'], axis=1)
df = df.sort_values(by="value", ascending=False)
return df
    except Exception:
return None
if __name__ == "__main__":
index_df = toutiao_index(keyword="口罩", start_date='20201214', end_date='20201220', app_name="aweme")
print(index_df)
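The five portrait helpers above (`toutiao_province`, `toutiao_city`, `toutiao_age`, `toutiao_gender`, `toutiao_interest_category`) differ only in which index of `data['data']` they read; the `label_list` post-processing is identical. That shared step can be sketched without pandas (`parse_portrait` is a hypothetical helper, not part of gopup):

```python
def parse_portrait(label_list, sort=True):
    """Keep each label's display name and penetration value (dropping
    label_id), optionally sorted by value descending as most helpers do."""
    rows = [{"name": item["name_zh"], "value": item["value"]}
            for item in label_list]
    if sort:
        rows.sort(key=lambda row: row["value"], reverse=True)
    return rows
```

Note that `toutiao_age` is the only helper that returns its rows unsorted.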
| 32.319149 | 112 | 0.556814 | 887 | 7,595 | 4.551297 | 0.152198 | 0.073569 | 0.043597 | 0.043597 | 0.779787 | 0.767402 | 0.755512 | 0.747089 | 0.717117 | 0.717117 | 0 | 0.033549 | 0.309282 | 7,595 | 234 | 113 | 32.457265 | 0.735989 | 0.236735 | 0 | 0.708333 | 0 | 0 | 0.228102 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058333 | false | 0 | 0.033333 | 0 | 0.208333 | 0.008333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
18da0cd032c63aac5bb1094fe3842d719f9df873 | 56 | py | Python | unit_testing/legacy_tests/__init__.py | MunSunouk/messari-python-api | 716474d784823d8a9610a054a7391c5e7d021bbc | [
"MIT"
] | 62 | 2021-11-10T15:37:13.000Z | 2022-03-30T23:01:21.000Z | unit_testing/legacy_tests/__init__.py | MunSunouk/messari-python-api | 716474d784823d8a9610a054a7391c5e7d021bbc | [
"MIT"
] | 15 | 2021-11-03T17:55:52.000Z | 2022-03-30T23:00:51.000Z | unit_testing/legacy_tests/__init__.py | MunSunouk/messari-python-api | 716474d784823d8a9610a054a7391c5e7d021bbc | [
"MIT"
] | 11 | 2021-11-11T05:00:50.000Z | 2022-03-16T06:28:38.000Z | from messari import session
from messari.utils import *
| 18.666667 | 27 | 0.821429 | 8 | 56 | 5.75 | 0.625 | 0.478261 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 56 | 2 | 28 | 28 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7a121e1779f5604a804e3f905328b491a7489b1b | 160 | py | Python | suvec/common/postproc/__init__.py | ProtsenkoAI/skady-user-vectorizer | 9114337d4a5cb176f6980e73a93eef90a49b478e | [
"MIT"
] | 1 | 2021-05-07T16:48:16.000Z | 2021-05-07T16:48:16.000Z | suvec/common/postproc/__init__.py | ProtsenkoAI/skady-user-vectorizer | 9114337d4a5cb176f6980e73a93eef90a49b478e | [
"MIT"
] | null | null | null | suvec/common/postproc/__init__.py | ProtsenkoAI/skady-user-vectorizer | 9114337d4a5cb176f6980e73a93eef90a49b478e | [
"MIT"
] | null | null | null | from .parsed_processor_impl import ParsedProcessorImpl
from .parsed_processor import ParsedProcessor
from .processor_with_hooks import ParsedProcessorWithHooks
| 40 | 58 | 0.90625 | 17 | 160 | 8.235294 | 0.588235 | 0.142857 | 0.271429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.075 | 160 | 3 | 59 | 53.333333 | 0.945946 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e13facb5cf51870c4a175ec598c1f07d55387b66 | 198 | py | Python | src/baboon_tracking/mixins/unioned_frames_mixin.py | radioactivebean0/baboon-tracking | 062351c514073aac8e1207b8b46ca89ece987928 | [
"MIT"
] | 6 | 2019-07-15T19:10:59.000Z | 2022-02-01T04:25:26.000Z | src/baboon_tracking/mixins/unioned_frames_mixin.py | radioactivebean0/baboon-tracking | 062351c514073aac8e1207b8b46ca89ece987928 | [
"MIT"
] | 86 | 2019-07-02T17:59:46.000Z | 2022-02-01T23:23:08.000Z | src/baboon_tracking/mixins/unioned_frames_mixin.py | radioactivebean0/baboon-tracking | 062351c514073aac8e1207b8b46ca89ece987928 | [
"MIT"
] | 7 | 2019-10-16T12:58:21.000Z | 2022-03-08T00:31:32.000Z | """
Mixin for returning unioned frames.
"""
class UnionedFramesMixin:
"""
Mixin for returning unioned frames.
"""
def __init__(self):
self.unioned_frames = []
| 15.230769 | 40 | 0.585859 | 18 | 198 | 6.166667 | 0.555556 | 0.351351 | 0.306306 | 0.432432 | 0.540541 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.30303 | 198 | 12 | 41 | 16.5 | 0.804348 | 0.358586 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 6 |
e14af8b907a7c5bf3b961386fe78326703341550 | 203 | py | Python | libpandadna/__init__.py | journeyfan/toontown-journey | 7a4db507e5c1c38a014fc65588086d9655aaa5b4 | [
"MIT"
] | 1 | 2020-09-27T22:12:47.000Z | 2020-09-27T22:12:47.000Z | libpandadna/__init__.py | journeyfan/toontown-journey | 7a4db507e5c1c38a014fc65588086d9655aaa5b4 | [
"MIT"
] | null | null | null | libpandadna/__init__.py | journeyfan/toontown-journey | 7a4db507e5c1c38a014fc65588086d9655aaa5b4 | [
"MIT"
] | 2 | 2020-09-26T20:37:18.000Z | 2020-11-15T20:55:33.000Z | import sys
if sys.version.startswith('3.6'):
from .py36.libpandadna import *
else:
if sys.platform == "darwin":
from .mac.libpandadna import *
else:
from .libpandadna import * | 25.375 | 38 | 0.640394 | 25 | 203 | 5.2 | 0.56 | 0.392308 | 0.323077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025974 | 0.241379 | 203 | 8 | 39 | 25.375 | 0.818182 | 0 | 0 | 0.25 | 0 | 0 | 0.044118 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
e1630f8e8bb84df5ff936b1cf85b8e3340ac1d70 | 54,632 | py | Python | cogs/other.py | SeldoW/Tomori | be610999f4002a9f2340ce430cf9d6c1c36f5034 | [
"MIT"
] | 1 | 2019-08-11T19:18:30.000Z | 2019-08-11T19:18:30.000Z | cogs/other.py | SeldoW/Tomori | be610999f4002a9f2340ce430cf9d6c1c36f5034 | [
"MIT"
] | null | null | null | cogs/other.py | SeldoW/Tomori | be610999f4002a9f2340ce430cf9d6c1c36f5034 | [
"MIT"
] | null | null | null | import discord
import asyncio
import requests
import time
from datetime import datetime, date
import string
import random
import copy
import re
import json
import asyncpg
from discord.ext import commands
from cogs.locale import *
from cogs.const import *
from cogs.help import *
from cogs.ids import *
from cogs.util import *
from cogs.discord_hooks import Webhook
support_url = "https://discord.gg/tomori"
site_url = "http://discord.band"
site_commands_url = "https://discord.band/commands"
invite_url = "https://discordapp.com/api/oauth2/authorize?client_id=491605739635212298&permissions=536341719&redirect_uri=https%3A%2F%2Fdiscord.band&scope=bot"
async def o_webhook(client, conn, context, name, value):
message = context.message
server_id = message.server.id
const = await conn.fetchrow("SELECT em_color, locale FROM settings WHERE discord_id = '{discord_id}'".format(discord_id=server_id))
    if not const or const["locale"] not in locale:
        em = discord.Embed(
            description="{who}, {response}.".format(
                who=message.author.display_name+"#"+message.author.discriminator,
                response="ошибка локализации"),
            colour=0xC5934B)
        await client.send_message(message.channel, embed=em)
        return
    lang = const["locale"]
em = discord.Embed(colour=int(const["em_color"], 16) + 512)
dat = await conn.fetchrow("SELECT * FROM mods WHERE type = 'webhook' AND name = '{name}' AND server_id = '{server_id}'".format(server_id=server_id, name=clear_name(name).lower()))
if not dat:
em.description = locale[lang]["other_webhook_not_exists"].format(
who=message.author.display_name+"#"+message.author.discriminator,
name=name
)
await client.send_message(message.channel, embed=em)
return
if dat["condition"]:
cond = dat["condition"]
else:
cond = ""
if not any(cond==role.id or role.permissions.administrator for role in message.author.roles) \
and not cond==message.author.id \
and not message.author.id == message.server.owner.id:
return
try:
await client.delete_message(message)
    except Exception:
pass
try:
ret = json.loads(value)
if ret and isinstance(ret, dict):
msg = Webhook(web_url=dat["value"], **ret)
msg.post()
else:
msg = Webhook(
web_url=dat["value"],
text=value
)
msg.post()
    except Exception:
msg = Webhook(
web_url=dat["value"],
text=value
)
msg.post()
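`o_webhook` above first tries to interpret the stored `value` as a JSON object of `Webhook` keyword arguments and falls back to sending it as plain text. That dispatch can be isolated into a small helper (`webhook_kwargs` is a hypothetical name, sketching the branching above):

```python
import json

def webhook_kwargs(value):
    """Map a user-supplied payload to Webhook keyword arguments: a non-empty
    JSON object passes through as-is, anything else becomes plain text."""
    try:
        ret = json.loads(value)
    except (ValueError, TypeError):
        return {"text": value}
    if ret and isinstance(ret, dict):
        return ret
    return {"text": value}
```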
async def o_about(client, conn, context):
message = context.message
server_id = message.server.id
const = await conn.fetchrow("SELECT em_color, locale FROM settings WHERE discord_id = '{discord_id}'".format(discord_id=server_id))
    if not const or const["locale"] not in locale:
        em = discord.Embed(
            description="{who}, {response}.".format(
                who=message.author.display_name+"#"+message.author.discriminator,
                response="ошибка локализации"),
            colour=0xC5934B)
        await client.send_message(message.channel, embed=em)
        return
    lang = const["locale"]
try:
await client.delete_message(message)
    except Exception:
pass
em = discord.Embed(colour=int(const["em_color"], 16) + 512)
if const["locale"] == "english":
em.description = "***Python-bot created by __Ананасовая Печенюха (Cookie)__\n\
supported by __Unknown__ and __Teris__.***\n\n\
**[Support server]({support_url})**\n\
**[Site]({site_url})**\n\n\
For any questions talk to <@316287332779163648>.".format(support_url=support_url, site_url=site_url)
else:
em.description = "***Python-bot написанный __Ананасовой Печенюхой__\n\
при поддержке __Unknown'a__ и __Teris'а__.***\n\n\
**[Ссылка на сервер поддержки]({support_url})**\n\
**[Ссылка на сайт]({site_url})**\n\n\
По всем вопросам обращайтесь к <@316287332779163648>.".format(support_url=support_url, site_url=site_url)
if not message.server.id in servers_without_follow_us:
em.add_field(
name=locale[lang]["global_follow_us"],
value=tomori_links,
inline=False
)
await client.send_message(message.channel, embed=em)
return
async def o_invite(client, conn, context):
message = context.message
server_id = message.server.id
const = await conn.fetchrow("SELECT em_color, locale FROM settings WHERE discord_id = '{discord_id}'".format(discord_id=server_id))
    if not const or const["locale"] not in locale:
        em = discord.Embed(
            description="{who}, {response}.".format(
                who=message.author.display_name+"#"+message.author.discriminator,
                response="ошибка локализации"),
            colour=0xC5934B)
        await client.send_message(message.channel, embed=em)
        return
    lang = const["locale"]
try:
await client.delete_message(message)
    except Exception:
pass
em = discord.Embed(colour=int(const["em_color"], 16) + 512)
em.title = locale[lang]["other_invite_title"]
em.description = invite_url
if not message.server.id in servers_without_follow_us:
em.add_field(
name=locale[lang]["global_follow_us"],
value=tomori_links,
inline=False
)
await client.send_message(message.author, embed=em)
return
async def o_server(client, conn, context):
message = context.message
server_id = message.server.id
server = message.server
const = await conn.fetchrow("SELECT em_color, prefix, locale, bank, server_money FROM settings WHERE discord_id = '{}'".format(server_id))
    if not const or const["locale"] not in locale:
        em = discord.Embed(
            description="{who}, {response}.".format(
                who=message.author.display_name+"#"+message.author.discriminator,
                response="ошибка локализации"),
            colour=0xC5934B)
        await client.send_message(message.channel, embed=em)
        return
    lang = const["locale"]
try:
await client.delete_message(message)
    except Exception:
pass
em = discord.Embed(colour=int(const["em_color"], 16) + 512)
em.set_author(name=server.name, icon_url=server.icon_url)
em.add_field(
name=locale[lang]["other_server_owner"],
value="{0.name}#{0.discriminator}".format(server.owner),
inline=True
)
em.add_field(
name=locale[lang]["other_server_prefix"],
value=const["prefix"],
inline=True
)
em.add_field(
name=locale[lang]["other_server_bank"],
value=str(const["bank"]),
inline=True
)
em.add_field(
name=locale[lang]["other_server_channels"],
value=str(len(server.channels)),
inline=True
)
em.add_field(
name=locale[lang]["other_server_members"],
value=str(len(server.members)),
inline=True
)
em.add_field(
name=locale[lang]["other_server_lifetime"],
value=locale[lang]["other_server_days"].format(int((datetime.utcnow() - server.created_at).days)),
inline=True
)
em.add_field(
name=":satellite:ID",
value=server.id,
inline=True
)
em.add_field(
name=locale[lang]["other_server_emojis"],
value=str(len(server.emojis)),
inline=True
)
em.set_thumbnail(url=message.server.icon_url)
await client.send_message(message.channel, embed=em)
return
async def o_avatar(client, conn, context, who):
message = context.message
server_id = message.server.id
const = await conn.fetchrow("SELECT em_color, is_avatar, locale FROM settings WHERE discord_id = '{discord_id}'".format(discord_id=server_id))
    if not const or const["locale"] not in locale:
        em = discord.Embed(
            description="{who}, {response}.".format(
                who=message.author.display_name+"#"+message.author.discriminator,
                response="ошибка локализации"),
            colour=0xC5934B)
        await client.send_message(message.channel, embed=em)
        return
    lang = const["locale"]
em = discord.Embed(colour=int(const["em_color"], 16) + 512)
try:
await client.delete_message(message)
    except Exception:
pass
if not who:
who = message.author
em.title = locale[lang]["other_avatar"].format(clear_name(who.display_name[:50]))
em.set_image(url=who.avatar_url)
await client.send_message(message.channel, embed=em)
return
async def o_like(client, conn, context):
message = context.message
server_id = message.server.id
if message.author.bot or message.channel.is_private:
return
const = await conn.fetchrow("SELECT em_color, locale, likes, like_one, like_time FROM settings WHERE discord_id = '{discord_id}'".format(discord_id=server_id))
    if not const or const["locale"] not in locale:
        em = discord.Embed(
            description="{who}, {response}.".format(
                who=message.author.display_name+"#"+message.author.discriminator,
                response="ошибка локализации"),
            colour=0xC5934B)
        await client.send_message(message.channel, embed=em)
        return
    lang = const["locale"]
em = discord.Embed(colour=int(const["em_color"], 16) + 512)
try:
await client.delete_message(message)
    except Exception:
pass
now = int(time.time())
if now - const["like_time"] > 14400:
await conn.execute("UPDATE settings SET likes = {likes}, like_time = {like_time} WHERE discord_id = '{discord_id}'".format(
likes=const["likes"] + const["like_one"],
like_time=now,
discord_id=server_id
))
global top_servers
top_servers = await conn.fetch("SELECT discord_id FROM settings ORDER BY likes DESC, like_time ASC LIMIT 10")
em.description = locale[lang]["other_like_success"].format(who=message.author.display_name+"#"+message.author.discriminator)
else:
t=14400 - now + const["like_time"]
h=str(t//3600)
m=str((t//60)%60)
s=str(t%60)
em.description = locale[lang]["other_like_wait"].format(
who=message.author.display_name+"#"+message.author.discriminator,
hours=h,
minutes=m,
seconds=s
)
if not message.server.id in servers_without_follow_us:
em.add_field(
name=locale[lang]["global_follow_us"],
value=tomori_links,
inline=False
)
await client.send_message(message.channel, embed=em)
return
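The remaining cooldown in `o_like` is split into hours, minutes, and seconds with integer arithmetic (`t//3600`, `(t//60)%60`, `t%60`). The same computation as a standalone sketch (hypothetical helper name):

```python
def split_hms(total_seconds):
    """Split a duration in whole seconds into (hours, minutes, seconds)."""
    hours = total_seconds // 3600
    minutes = (total_seconds // 60) % 60
    seconds = total_seconds % 60
    return hours, minutes, seconds
```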
async def o_list(client, conn, context, page):
message = context.message
server_id = message.server.id
if message.author.bot or message.channel.is_private:
return
const = await conn.fetchrow("SELECT em_color, locale FROM settings WHERE discord_id = '{discord_id}'".format(discord_id=server_id))
    if not const or const["locale"] not in locale:
        em = discord.Embed(
            description="{who}, {response}.".format(
                who=message.author.display_name+"#"+message.author.discriminator,
                response="ошибка локализации"),
            colour=0xC5934B)
        await client.send_message(message.channel, embed=em)
        return
    lang = const["locale"]
    _locale = locale[lang]
    em = discord.Embed(colour=0x87b5ff)
try:
await client.delete_message(message)
    except Exception:
pass
dat = await conn.fetchrow("SELECT COUNT(name) FROM settings")
all_count = dat[0]
pages = (((all_count - 1) // 10) + 1)
if not page:
page = 1
if page > pages:
em.description = _locale["global_page_not_exists"].format(who=message.author.display_name+"#"+message.author.discriminator, number=page)
await client.send_message(message.channel, embed=em)
return
em.title = _locale["other_top_of_servers"]
if all_count == 0:
em.description = _locale["global_list_is_empty"]
await client.send_message(message.channel, embed=em)
return
dat = await conn.fetch("SELECT name, discord_id, likes, invite FROM settings ORDER BY likes DESC, like_time DESC LIMIT 10 OFFSET {offset}".format(offset=(page-1)*10))
for index, server in enumerate(dat):
member_count = 0
serv = client.get_server(server["discord_id"])
if serv:
member_count = serv.member_count
if not server["invite"] or not await u_check_invite(client, server["invite"]):
link = await u_invite_to_server(client, server["discord_id"])
if link:
await conn.execute("UPDATE settings SET invite = '{link}' WHERE discord_id = '{id}'".format(
link=link,
id=server["discord_id"]
))
else:
link = "https://discord-server.com/"+server["discord_id"]
else:
link = server["invite"]
em.add_field(
name="#{index} {name}".format(
index=(page-1)*10+index+1,
name=server["name"]
),
value="<:likes:493040819402702869>\xa0{likes}\xa0\xa0<:users:492827033026560020>\xa0{member_count}\xa0\xa0[<:server:492861835087708162> **__join__**]({link} \"{link_message}\")".format(
likes=server["likes"],
member_count=member_count,
link=link,
link_message=_locale["other_list_link_message"]
),
inline=True
)
em.set_footer(text=_locale["other_footer_page"].format(number=page, length=pages))
await client.send_message(message.channel, embed=em)
return
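`o_list` computes the page count as `((all_count - 1) // 10) + 1` and the SQL `OFFSET` as `(page - 1) * 10`. A standalone sketch of that arithmetic (hypothetical helper; Python's floor division makes a count of 0 yield 0 pages, which the empty-list branch then reports):

```python
def page_bounds(total, page, per_page=10):
    """Return (page_count, sql_offset) for a 1-based page number."""
    pages = ((total - 1) // per_page) + 1  # total == 0 -> (-1)//10 + 1 == 0
    offset = (page - 1) * per_page
    return pages, offset
```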
async def o_report(client, conn, context):
message = context.message
server_id = message.server.id
const = await conn.fetchrow("SELECT em_color, locale, prefix, locale FROM settings WHERE discord_id = '{}'".format(server_id))
    if not const or const["locale"] not in locale:
        em = discord.Embed(
            description="{who}, {response}.".format(
                who=message.author.display_name+"#"+message.author.discriminator,
                response="ошибка локализации"),
            colour=0xC5934B)
        await client.send_message(message.channel, embed=em)
        return
    lang = const["locale"]
em = discord.Embed(colour=int(const["em_color"], 16) + 512)
try:
await client.delete_message(message)
    except Exception:
pass
eD = discord.Embed(color = 0xC5934B, title = "Report from user:", description = message.content)
eD.add_field(name = "Server", value = "Name: " + message.server.name + "\n" + "Id: `" + message.server.id + "`")
eD.add_field(name = "Settings", value = "Locale: \"" + const["locale"] + "\"\n" + "Prefix: `" + const["prefix"] + "`")
eD.add_field(name = "Chat", value = "Name: " + message.channel.name + "\n" + "Id: `" + message.channel.id + "`")
eD.add_field(name = "User", value = "Name: " + message.author.name + "\n" + "Id: `" + message.author.id + "`\n" + "Display Name: " + message.author.display_name)
eD.set_author(name = message.author.name, icon_url= message.author.avatar_url)
await client.send_message(client.get_channel(report_channel_id), embed=eD)
em.title = locale[lang]["other_report_sent_success"].format(who=message.author.display_name+"#"+message.author.discriminator)
em.set_image(url='https://media.giphy.com/media/xTkcESPybY7bmlKL7O/giphy.gif')
await client.send_message(message.channel, embed=em)
return
async def o_ping(client, conn, context):
message = context.message
server_id = message.server.id
const = await conn.fetchrow("SELECT em_color, locale FROM settings WHERE discord_id = '{}'".format(server_id))
    if not const or const["locale"] not in locale:
        em = discord.Embed(
            description="{who}, {response}.".format(
                who=message.author.display_name+"#"+message.author.discriminator,
                response="ошибка локализации"),
            colour=0xC5934B)
        await client.send_message(message.channel, embed=em)
        return
    lang = const["locale"]
em = discord.Embed(colour=int(const["em_color"], 16) + 512)
try:
await client.delete_message(message)
    except Exception:
pass
now = datetime.utcnow()
delta = now - message.timestamp
    latency = delta.total_seconds() * 1000
em.description=locale[lang]["other_ping"].format(
who=message.author.display_name+"#"+message.author.discriminator,
latency=int(latency)
)
await client.send_message(message.channel, embed=em)
return
async def o_help(client, conn, context):
    message = context.message
    server_id = message.server.id
    const = await conn.fetchrow("SELECT * FROM settings WHERE discord_id = '{}'".format(server_id))
    lang = const["locale"]
    if lang not in locale.keys():
        em = discord.Embed(
            colour=0xC5934B,
            description="{who}, {response}.".format(
                who=message.author.display_name+"#"+message.author.discriminator,
                response="localization error"))
        await client.send_message(message.channel, embed=em)
        return
    if not const:
        em = discord.Embed(colour=0xC5934B)
        em.description = locale[lang]["global_not_available"].format(who=message.author.display_name+"#"+message.author.discriminator)
        await client.send_message(message.channel, embed=em)
        return
    em = discord.Embed(colour=int(const["em_color"], 16) + 512)
    try:
        await client.delete_message(message)
    except:
        pass
    if message.content.startswith(const["prefix"]+"help ") or message.content.startswith(const["prefix"]+"h "):
        await h_check_help(client, conn, message)
        return
    if not message.content == const["prefix"]+"help" and not message.content == "!help":
        return
    em.title = locale[lang]["other_help_title"]
    em.description = locale[lang]["other_help_desc"].format(const["name"], const["prefix"])
    com_adm = ""
    com_econ = ""
    com_fun = ""
    com_stat = ""
    com_other = ""
    com_mon = ""
    if const["is_say"]:
        com_adm += "``say``, "
    if const["is_clear"]:
        com_adm += "``clear``, "
    if const["is_sex"]:
        com_fun += "``sex``, "
    if const["is_kick"]:
        com_adm += "``kick``, "
    if const["is_ban"]:
        com_adm += "``ban``, ``unban``, "
    if const["is_timely"]:
        com_econ += "``timely``, "
    if const["is_work"]:
        com_econ += "``work``, "
    if const["is_br"]:
        com_econ += "``br``, "
    if const["is_slots"]:
        com_econ += "``slots``, "
    if const["is_give"]:
        com_econ += "``give``, "
    if const["is_kiss"]:
        com_fun += "``kiss``, "
    if const["is_hug"]:
        com_fun += "``hug``, "
    if const["is_punch"]:
        com_fun += "``punch``, "
    if const["is_five"]:
        com_fun += "``five``, "
    if const["is_wink"]:
        com_fun += "``wink``, "
    if const["is_fuck"]:
        com_fun += "``fuck``, "
    if const["is_drink"]:
        com_fun += "``drink``, "
    if const["is_rep"]:
        com_fun += "``rep``, "
    if const["is_cash"]:
        com_stat += "``$``, "
    if const["is_top"]:
        com_stat += "``top``, "
    if const["is_me"]:
        com_stat += "``me``, "
    com_other = "``help``, "
    if const["is_ping"]:
        com_other += "``ping``, "
    if const["is_avatar"]:
        com_other += "``avatar``, "
    if const["is_report"]:
        com_other += "``report``, "
    if const["is_server"]:
        com_other += "``server``, "
    if const["is_invite"]:
        com_other += "``invite``, "
    if const["is_about"]:
        com_other += "``about``, "
    com_adm += "``send``, ``start``, ``stop``, ``pay``, "
    if const["is_like"]:
        com_mon += "``like``, "
    if const["is_list"]:
        com_mon += "``list``, "
    if com_adm != "":
        em.add_field(name="Admin", value=com_adm[:-2], inline=False)
    if com_econ != "":
        em.add_field(name="Economics", value=com_econ[:-2], inline=False)
    if com_fun != "":
        em.add_field(name="Fun", value=com_fun[:-2], inline=False)
    if com_stat != "":
        em.add_field(name="Stats", value=com_stat[:-2], inline=False)
    if com_mon != "":
        em.add_field(name="Monitoring", value=com_mon[:-2], inline=False)
    if com_other != "":
        em.add_field(name="Other", value=com_other[:-2], inline=False)
    if server_id not in servers_without_more_info_in_help:
        em.add_field(name=locale[lang]["help_more_info"], value=site_commands_url, inline=False)
    em.set_footer(text=locale[lang]["help_help_by_command"].format(prefix=const["prefix"]))
    await client.send_message(message.channel, embed=em)
    return
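
# The category strings above are accumulated with trailing '`, `' separators that
# are later sliced off with [:-2]. Collecting the enabled names and joining them
# avoids the slice entirely. A minimal sketch — format_category is a hypothetical
# helper of mine, not part of this bot:

```python
def format_category(const, flags):
    # Join only the enabled command names, e.g. "``say``, ``clear``".
    # const is any dict-like row exposing is_<name> boolean columns.
    enabled = ["``{}``".format(name) for name in flags if const.get("is_" + name)]
    return ", ".join(enabled)

# e.g. format_category({"is_say": True, "is_clear": True, "is_kick": False},
#                      ["say", "clear", "kick"]) -> "``say``, ``clear``"
```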
async def o_lvlup(client, conn, context, page):
    message = context.message
    server_id = message.server.id
    const = await conn.fetchrow("SELECT * FROM settings WHERE discord_id = '{}'".format(server_id))
    lang = const["locale"]
    if lang not in locale.keys():
        em = discord.Embed(
            colour=0xC5934B,
            description="{who}, {response}.".format(
                who=message.author.display_name+"#"+message.author.discriminator,
                response="localization error"))
        await client.send_message(message.channel, embed=em)
        return
    em = discord.Embed(colour=int(const["em_color"], 16) + 512)
    if not const:
        em.description = locale[lang]["global_not_available"].format(who=message.author.display_name+"#"+message.author.discriminator)
        await client.send_message(message.channel, embed=em)
        return
    try:
        await client.delete_message(message)
    except:
        pass
    autorole = const["autorole_id"]
    if autorole:
        autorole = discord.utils.get(message.server.roles, id=autorole)
    dat = await conn.fetchrow("SELECT COUNT(condition) FROM mods WHERE type = 'lvlup' AND server_id = '{server_id}'".format(server_id=server_id))
    all_count = dat[0]
    pages = ((all_count - 1) // 24) + 1
    if not page:
        page = 1
    if all_count == 0 and not autorole:
        em.description = locale[lang]["global_list_is_empty"]
        await client.send_message(message.channel, embed=em)
        return
    if page > pages and not (page == 1 and autorole):
        em.description = locale[lang]["global_page_not_exists"].format(who=message.author.display_name+"#"+message.author.discriminator, number=page)
        await client.send_message(message.channel, embed=em)
        return
    if page == 1 and autorole:
        em.add_field(
            name=locale[lang]["other_lvlup_autorole_name"],
            value=autorole.mention,
            inline=True
        )
    dat = await conn.fetch("SELECT * FROM mods WHERE type = 'lvlup' AND server_id = '{server_id}' ORDER BY condition::int ASC LIMIT 24 OFFSET {offset}".format(server_id=server_id, offset=(page-1)*24))
    if dat:
        for role in dat:
            _role = discord.utils.get(message.server.roles, id=role["value"])
            if _role:
                em.add_field(
                    name="{lvl} {name}".format(lvl=role["condition"], name=locale[lang]["other_lvlup_lvl_name"]),
                    value=_role.mention,
                    inline=True
                )
    else:
        em.description = locale[lang]["global_list_is_empty"]
    await client.send_message(message.channel, embed=em)
    return
async def o_backgrounds(client, conn, context):
    message = context.message
    server_id = message.server.id
    const = await conn.fetchrow("SELECT server_money, em_color, locale FROM settings WHERE discord_id = '{}'".format(server_id))
    lang = const["locale"]
    if lang not in locale.keys():
        em = discord.Embed(
            colour=0xC5934B,
            description="{who}, {response}.".format(
                who=message.author.display_name+"#"+message.author.discriminator,
                response="localization error"))
        await client.send_message(message.channel, embed=em)
        return
    if not const:
        em = discord.Embed(colour=0xC5934B)
        em.description = locale[lang]["global_not_available"].format(who=message.author.display_name+"#"+message.author.discriminator)
        await client.send_message(message.channel, embed=em)
        return
    em = discord.Embed(colour=int(const["em_color"], 16) + 512)
    try:
        await client.delete_message(message)
    except:
        pass
    if message.server.id not in konoha_servers:
        back_list = random.choice(background_list)
        back_name_list = background_name_list
    else:
        back_list = random.choice(konoha_background_list)
        back_name_list = konoha_background_name_list
    em.title = locale[lang]["other_backgrounds_title"]
    if len(back_list) == 0:
        em.description = locale[lang]["other_backgrounds_list_is_empty"]
        await client.send_message(message.channel, embed=em)
        return
    for i, back in enumerate(back_name_list):
        em.add_field(
            name=locale[lang]["other_backgrounds_element"].format(
                position=i+1,
                name=back
            ),
            value="-------------------------",
            inline=True
        )
    await client.send_message(message.channel, embed=em)
    return
async def o_set(client, conn, context, arg1, arg2, args):
    message = context.message
    server_id = message.server.id
    const = await conn.fetchrow("SELECT em_color, locale, server_money FROM settings WHERE discord_id = '{}'".format(server_id))
    lang = const["locale"]
    if lang not in locale.keys():
        em = discord.Embed(
            colour=0xC5934B,
            description="{who}, {response}.".format(
                who=message.author.display_name+"#"+message.author.discriminator,
                response="localization error"))
        await client.send_message(message.channel, embed=em)
        return
    if not const:
        em = discord.Embed(colour=0xC5934B)
        em.description = locale[lang]["global_not_available"].format(who=message.author.display_name+"#"+message.author.discriminator)
        await client.send_message(message.channel, embed=em)
        return
    em = discord.Embed(colour=int(const["em_color"], 16) + 512)
    try:
        await client.delete_message(message)
    except:
        pass
    if arg1 == "background" or arg1 == "back":
        if not arg2:
            em.description = locale[lang]["other_missed_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="name"
            )
            await client.send_message(message.channel, embed=em)
            return
        if message.server.id not in konoha_servers:
            back_list = random.choice(background_list)
        else:
            back_list = random.choice(konoha_background_list)
        if args:
            arg2 = arg2 + " " + args
        if arg2.isdigit() and int(arg2) <= len(back_list) and int(arg2) > 0:
            arg2 = back_list[int(arg2)-1]
        else:
            arg2 = arg2.lower().replace(" ", "_") + ".jpg"
        if arg2 not in back_list:
            em.description = locale[lang]["other_incorrect_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="name"
            )
            await client.send_message(message.channel, embed=em)
            return
        if message.server.id in local_stats_servers:
            stats_type = message.server.id
        else:
            stats_type = "global"
        dat = await conn.fetchrow("SELECT cash, background FROM users WHERE stats_type = '{stats_type}' AND discord_id = '{id}'".format(
            stats_type=stats_type,
            id=message.author.id
        ))
        if dat:
            if dat["cash"] < background_change_price:
                em.description = locale[lang]["global_dont_have_that_much_money"].format(who=message.author.display_name+"#"+message.author.discriminator, money=const["server_money"])
                await client.send_message(message.channel, embed=em)
                return
            if dat["background"] == arg2:
                em.description = locale[lang]["other_backgrounds_already_has"].format(who=message.author.display_name+"#"+message.author.discriminator)
                await client.send_message(message.channel, embed=em)
                return
            await conn.execute("UPDATE users SET cash = {cash}, background = '{back}' WHERE stats_type = '{stats_type}' AND discord_id = '{id}'".format(
                cash=dat["cash"] - background_change_price,
                back=arg2,
                stats_type=stats_type,
                id=message.author.id
            ))
            em.description = locale[lang]["other_backgrounds_success_response"].format(
                who=message.author.display_name+"#"+message.author.discriminator
            )
        else:
            await conn.execute("INSERT INTO users(name, discord_id, stats_type) VALUES('{}', '{}', '{}')".format(clear_name(message.author.display_name[:50]), message.author.id, stats_type))
            em.description = locale[lang]["global_dont_have_that_much_money"].format(who=message.author.display_name+"#"+message.author.discriminator, money=const["server_money"])
        await client.send_message(message.channel, embed=em)
        return
    if arg1 == "prefix":
        if not message.author == message.server.owner and not any(role.permissions.administrator for role in message.author.roles):
            em.description = locale[lang]["global_not_allow_to_use"].format(
                who=message.author.display_name+"#"+message.author.discriminator
            )
            await client.send_message(message.channel, embed=em)
            return
        if not arg2:
            em.description = locale[lang]["other_missed_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="prefix"
            )
            em.description += "\n" + locale[lang]["other_set_prefix_you_can_try"] + " `%s`" % "`, `".join(prefix_list)
            await client.send_message(message.channel, embed=em)
            return
        if arg2 in prefix_list:
            await conn.execute("UPDATE settings SET prefix = '{}' WHERE discord_id = '{}'".format(arg2, server_id))
            em.description = locale[lang]["other_set_prefix_success"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                prefix=arg2
            )
            await client.send_message(message.channel, embed=em)
            return
        elif arg2 == "default":
            await conn.execute("UPDATE settings SET prefix = '{}' WHERE discord_id = '{}'".format('!', server_id))
            em.description = locale[lang]["other_set_prefix_success"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                prefix='!'
            )
            em.description += "\n" + locale[lang]["other_set_prefix_you_can_try"] + " `%s`" % "`, `".join(prefix_list)
        else:
            em.description = locale[lang]["other_set_prefix_you_can_try"] + " `%s`" % "`, `".join(prefix_list)
        await client.send_message(message.channel, embed=em)
        return
    if arg1 == "autorole":
        if not message.author == message.server.owner and not any(role.permissions.administrator for role in message.author.roles):
            em.description = locale[lang]["global_not_allow_to_use"].format(
                who=message.author.display_name+"#"+message.author.discriminator
            )
            await client.send_message(message.channel, embed=em)
            return
        if not arg2:
            em.description = locale[lang]["other_missed_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="name"
            )
            await client.send_message(message.channel, embed=em)
            return
        if args:
            arg2 = arg2 + " " + args
        role = discord.utils.get(message.server.roles, name=arg2)
        if not role:
            arg2 = re.sub(r'[<@#&!>]+', '', arg2.lower())
            role = discord.utils.get(message.server.roles, id=arg2)
        if not role:
            em.description = locale[lang]["other_incorrect_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="name"
            )
            await client.send_message(message.channel, embed=em)
            return
        dat = await conn.fetchrow("SELECT prefix FROM settings WHERE discord_id = '{}'".format(message.server.id))
        if dat:
            await conn.execute("UPDATE settings SET autorole_id = '{role_id}' WHERE discord_id = '{server_id}'".format(
                role_id=role.id,
                server_id=message.server.id
            ))
        else:
            await conn.execute("INSERT INTO settings(name, discord_id, autorole_id) VALUES('{name}', '{id}', '{role}')".format(
                name=clear_name(message.server.name[:50]),
                id=message.server.id,
                role=role.id
            ))
        em.description = locale[lang]["other_autorole_success_response"].format(
            who=message.author.display_name+"#"+message.author.discriminator,
            role_id=role.id
        )
        await client.send_message(message.channel, embed=em)
        return
    if arg1 == "lvlup":
        if not message.author == message.server.owner and not any(role.permissions.administrator for role in message.author.roles):
            em.description = locale[lang]["global_not_allow_to_use"].format(
                who=message.author.display_name+"#"+message.author.discriminator
            )
            await client.send_message(message.channel, embed=em)
            return
        if not arg2:
            em.description = locale[lang]["other_missed_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="lvl"
            )
            await client.send_message(message.channel, embed=em)
            return
        if not args:
            em.description = locale[lang]["other_missed_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="role"
            )
            await client.send_message(message.channel, embed=em)
            return
        if not arg2.isdigit():
            em.description = locale[lang]["other_incorrect_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="lvl"
            )
            await client.send_message(message.channel, embed=em)
            return
        role = discord.utils.get(message.server.roles, name=args)
        if not role:
            args = re.sub(r'[<@#&!>]+', '', args.lower())
            role = discord.utils.get(message.server.roles, id=args)
        if not role:
            em.description = locale[lang]["other_incorrect_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="role"
            )
            await client.send_message(message.channel, embed=em)
            return
        dat = await conn.fetchrow("SELECT * FROM mods WHERE server_id = '{server}' AND type = 'lvlup' AND condition = '{cond}'".format(
            server=message.server.id,
            cond=arg2
        ))
        if dat:
            await conn.execute("UPDATE mods SET value = '{role}' WHERE server_id = '{server}' AND type = 'lvlup' AND condition = '{cond}'".format(
                role=role.id,
                server=message.server.id,
                cond=dat["condition"]
            ))
        else:
            await conn.execute("INSERT INTO mods(server_id, condition, value, type) VALUES('{server}', '{cond}', '{role}', 'lvlup')".format(
                role=role.id,
                server=message.server.id,
                cond=arg2
            ))
        em.description = locale[lang]["other_lvlup_success_response"].format(
            who=message.author.display_name+"#"+message.author.discriminator,
            lvl=arg2,
            role_id=role.id
        )
        await client.send_message(message.channel, embed=em)
        return
    if arg1 == "shop":
        if not message.author == message.server.owner and not any(role.permissions.administrator for role in message.author.roles):
            em.description = locale[lang]["global_not_allow_to_use"].format(
                who=message.author.display_name+"#"+message.author.discriminator
            )
            await client.send_message(message.channel, embed=em)
            return
        if not arg2:
            em.description = locale[lang]["other_missed_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="name"
            )
            await client.send_message(message.channel, embed=em)
            return
        if not args:
            em.description = locale[lang]["other_missed_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="cost"
            )
            await client.send_message(message.channel, embed=em)
            return
        args = args.rsplit(" ", 1)
        if len(args) == 2:
            arg2 = arg2 + " " + args[0]
            args = args[1]
            arg2 = arg2.rstrip()
        else:
            args = args[0]
        role = discord.utils.get(message.server.roles, name=arg2)
        if not role:
            arg2 = re.sub(r'[<@#&!>]+', '', arg2.lower())
            role = discord.utils.get(message.server.roles, id=arg2)
        if not role:
            em.description = locale[lang]["other_incorrect_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="name"
            )
            await client.send_message(message.channel, embed=em)
            return
        if not args.isdigit():
            em.description = locale[lang]["other_incorrect_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="cost"
            )
            await client.send_message(message.channel, embed=em)
            return
        dat = await conn.fetchrow("SELECT * FROM mods WHERE type = 'shop' AND name = '{name}' AND server_id = '{server_id}'".format(server_id=server_id, name=role.id))
        if dat:
            em.description = locale[lang]["other_set_shop_exists"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                role_id=role.id
            )
        else:
            await conn.execute("INSERT INTO mods(name, server_id, type, condition) VALUES('{name}', '{id}', '{type}', '{cond}')".format(
                name=role.id,
                id=message.server.id,
                type="shop",
                cond=args
            ))
            em.description = locale[lang]["other_shop_success_response"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                role_id=role.id,
                cost=args
            )
        await client.send_message(message.channel, embed=em)
        return
    if arg1 == "language" or arg1 == "lang":
        if not message.author == message.server.owner and not any(role.permissions.administrator for role in message.author.roles):
            em.description = locale[lang]["global_not_allow_to_use"].format(
                who=message.author.display_name+"#"+message.author.discriminator
            )
            await client.send_message(message.channel, embed=em)
            return
        if not arg2:
            em.description = locale[lang]["other_missed_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="name"
            )
            em.description += "\n" + locale[lang]["other_you_can_try"] + " `%s`" % "`, `".join(short_locales.keys())
            await client.send_message(message.channel, embed=em)
            return
        if args:
            arg2 = arg2 + " " + args
        arg2 = arg2.lower()
        arg2 = short_locales.get(arg2, arg2)
        if arg2 not in locale.keys():
            em.description = locale[lang]["other_incorrect_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="name"
            )
            await client.send_message(message.channel, embed=em)
            return
        dat = await conn.fetchrow("SELECT prefix FROM settings WHERE discord_id = '{}'".format(message.server.id))
        if dat:
            await conn.execute("UPDATE settings SET locale = '{lang}' WHERE discord_id = '{server_id}'".format(
                lang=arg2,
                server_id=message.server.id
            ))
        else:
            await conn.execute("INSERT INTO settings(name, discord_id, locale) VALUES('{name}', '{id}', '{lang}')".format(
                name=clear_name(message.server.name[:50]),
                id=message.server.id,
                lang=arg2
            ))
        em.description = locale[arg2]["other_lang_success_response"].format(
            who=message.author.display_name+"#"+message.author.discriminator,
            lang=arg2
        )
        await client.send_message(message.channel, embed=em)
        return
    if arg1 == "webhook" or arg1 == "wh":
        if not message.author == message.server.owner and not any(role.permissions.administrator for role in message.author.roles):
            em.description = locale[lang]["global_not_allow_to_use"].format(
                who=message.author.display_name+"#"+message.author.discriminator
            )
            await client.send_message(message.channel, embed=em)
            return
        if not arg2:
            em.description = locale[lang]["other_missed_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="name"
            )
            await client.send_message(message.channel, embed=em)
            return
        if not args:
            em.description = locale[lang]["other_missed_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="value"
            )
            await client.send_message(message.channel, embed=em)
            return
        arg2 = clear_name(arg2).lower()
        args = clear_name(args)
        if not arg2:
            em.description = locale[lang]["other_incorrect_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="name"
            )
            await client.send_message(message.channel, embed=em)
            return
        dat = await conn.fetchrow("SELECT * FROM mods WHERE type = 'webhook' AND name = '{name}' AND server_id = '{server_id}'".format(server_id=server_id, name=arg2))
        if dat:
            em.description = locale[lang]["other_set_webhook_exists"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                name=arg2
            )
        else:
            await conn.execute("INSERT INTO mods(name, server_id, type, value) VALUES('{name}', '{id}', '{type}', '{value}')".format(
                name=arg2,
                id=message.server.id,
                type="webhook",
                value=args
            ))
            em.description = locale[lang]["other_webhook_success_response"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                name=arg2
            )
        await client.send_message(message.channel, embed=em)
        return
    if not arg1:
        em.description = locale[lang]["other_missed_argument"].format(
            who=message.author.display_name+"#"+message.author.discriminator,
            arg="category"
        )
        await client.send_message(message.channel, embed=em)
        return
    em.description = locale[lang]["other_incorrect_argument"].format(
        who=message.author.display_name+"#"+message.author.discriminator,
        arg="category"
    )
    await client.send_message(message.channel, embed=em)
    return
async def o_remove(client, conn, context, arg1, arg2, args):
    message = context.message
    server_id = message.server.id
    const = await conn.fetchrow("SELECT em_color, locale, server_money FROM settings WHERE discord_id = '{}'".format(server_id))
    lang = const["locale"]
    if lang not in locale.keys():
        em = discord.Embed(
            colour=0xC5934B,
            description="{who}, {response}.".format(
                who=message.author.display_name+"#"+message.author.discriminator,
                response="localization error"))
        await client.send_message(message.channel, embed=em)
        return
    if not const:
        em = discord.Embed(colour=0xC5934B)
        em.description = locale[lang]["global_not_available"].format(who=message.author.display_name+"#"+message.author.discriminator)
        await client.send_message(message.channel, embed=em)
        return
    em = discord.Embed(colour=int(const["em_color"], 16) + 512)
    try:
        await client.delete_message(message)
    except:
        pass
    if arg1 == "autorole":
        if not message.author == message.server.owner and not any(role.permissions.administrator for role in message.author.roles):
            em.description = locale[lang]["global_not_allow_to_use"].format(
                who=message.author.display_name+"#"+message.author.discriminator
            )
            await client.send_message(message.channel, embed=em)
            return
        await conn.execute("UPDATE settings SET autorole_id = NULL WHERE discord_id = '{}'".format(message.server.id))
        em.description = locale[lang]["other_autorole_success_delete"].format(
            who=message.author.display_name+"#"+message.author.discriminator
        )
        await client.send_message(message.channel, embed=em)
        return
    if arg1 == "lvlup":
        if not message.author == message.server.owner and not any(role.permissions.administrator for role in message.author.roles):
            em.description = locale[lang]["global_not_allow_to_use"].format(
                who=message.author.display_name+"#"+message.author.discriminator
            )
            await client.send_message(message.channel, embed=em)
            return
        if not arg2:
            em.description = locale[lang]["other_missed_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="lvl"
            )
            await client.send_message(message.channel, embed=em)
            return
        if not arg2.isdigit():
            em.description = locale[lang]["other_incorrect_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="lvl"
            )
            await client.send_message(message.channel, embed=em)
            return
        dat = await conn.fetchrow("SELECT * FROM mods WHERE type = 'lvlup' AND condition = '{lvl}' AND server_id = '{server_id}'".format(server_id=server_id, lvl=arg2))
        if not dat:
            em.description = locale[lang]["other_lvlup_not_exists"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                lvl=arg2
            )
        else:
            await conn.execute("DELETE FROM mods WHERE type = 'lvlup' AND condition = '{lvl}' AND server_id = '{server_id}'".format(server_id=server_id, lvl=arg2))
            em.description = locale[lang]["other_lvlup_success_delete"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                role_id=dat["value"],
                lvl=arg2
            )
        await client.send_message(message.channel, embed=em)
        return
    if arg1 == "webhook" or arg1 == "wh":
        if not message.author == message.server.owner and not any(role.permissions.administrator for role in message.author.roles):
            em.description = locale[lang]["global_not_allow_to_use"].format(
                who=message.author.display_name+"#"+message.author.discriminator
            )
            await client.send_message(message.channel, embed=em)
            return
        if not arg2:
            em.description = locale[lang]["other_missed_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="name"
            )
            await client.send_message(message.channel, embed=em)
            return
        arg2 = clear_name(arg2).lower()
        if not arg2:
            em.description = locale[lang]["other_incorrect_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="name"
            )
            await client.send_message(message.channel, embed=em)
            return
        dat = await conn.fetchrow("SELECT * FROM mods WHERE type = 'webhook' AND name = '{name}' AND server_id = '{server_id}'".format(server_id=server_id, name=arg2))
        if not dat:
            em.description = locale[lang]["other_webhook_not_exists"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                name=arg2
            )
        else:
            await conn.execute("DELETE FROM mods WHERE type = 'webhook' AND name = '{name}' AND server_id = '{server_id}'".format(server_id=server_id, name=arg2))
            em.description = locale[lang]["other_webhook_success_delete"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                name=arg2
            )
        await client.send_message(message.channel, embed=em)
        return
    if arg1 == "shop":
        if not message.author == message.server.owner and not any(role.permissions.administrator for role in message.author.roles):
            em.description = locale[lang]["global_not_allow_to_use"].format(
                who=message.author.display_name+"#"+message.author.discriminator
            )
            await client.send_message(message.channel, embed=em)
            return
        if not arg2:
            em.description = locale[lang]["other_missed_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="name"
            )
            await client.send_message(message.channel, embed=em)
            return
        if args:
            arg2 = arg2 + " " + args
        logg.info("remove arg2 = {arg2}".format(arg2=arg2))
        role = discord.utils.get(message.server.roles, name=arg2)
        if not role:
            arg2 = re.sub(r'[<@#&!>]+', '', arg2.lower())
            role = discord.utils.get(message.server.roles, id=arg2)
        if not role:
            em.description = locale[lang]["other_incorrect_argument"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                arg="name"
            )
            await client.send_message(message.channel, embed=em)
            return
        dat = await conn.fetchrow("SELECT * FROM mods WHERE type = 'shop' AND name = '{name}' AND server_id = '{server_id}'".format(server_id=server_id, name=role.id))
        if not dat:
            em.description = locale[lang]["other_shop_not_exists"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                role_id=role.id
            )
        else:
            await conn.execute("DELETE FROM mods WHERE type = 'shop' AND name = '{name}' AND server_id = '{server_id}'".format(server_id=server_id, name=role.id))
            em.description = locale[lang]["other_shop_success_delete"].format(
                who=message.author.display_name+"#"+message.author.discriminator,
                role_id=role.id
            )
        await client.send_message(message.channel, embed=em)
        return
    if not arg1:
        em.description = locale[lang]["other_missed_argument"].format(
            who=message.author.display_name+"#"+message.author.discriminator,
            arg="category"
        )
        await client.send_message(message.channel, embed=em)
        return
    em.description = locale[lang]["other_incorrect_argument"].format(
        who=message.author.display_name+"#"+message.author.discriminator,
        arg="category"
    )
    await client.send_message(message.channel, embed=em)
    return
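
# The queries throughout this module interpolate Discord IDs into SQL with
# str.format. asyncpg also accepts $1-style positional parameters, which keeps
# untrusted text out of the statement. A hedged sketch of the pattern — the
# table and column names match the queries above, but settings_query itself is
# a hypothetical helper, not part of this bot:

```python
def settings_query(server_id):
    # Parameterized form of:
    #   "SELECT em_color, locale FROM settings WHERE discord_id = '{}'".format(server_id)
    # The driver binds $1 as a value, so quotes in server_id cannot
    # break out of the literal.
    sql = "SELECT em_color, locale FROM settings WHERE discord_id = $1"
    return sql, (server_id,)

# usage with asyncpg:
#   sql, params = settings_query(server_id)
#   const = await conn.fetchrow(sql, *params)
```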
# adoptabot/utils/utils.py
def message_by_creator(ctx):
    return ctx.message.author.id == 358938184513486852
# other stuff/process_folder_generator.py
from os.path import exists
import os
from shutil import rmtree
from sys import exit

name = input('Process name? ')
path = input('Process path? (nothing="worldtool:process") ')
if path == '':
    path = 'worldtool:process'

if exists('output'):
    if input('The directory "output" already exists.\nDelete it and continue? (y/n) ').lower() == 'y':
        rmtree('output')
    else:
        print('Cancelled.')
        exit()

os.mkdir('output')
os.mkdir('output/' + name)

directions = ['x', '-x', 'y', '-y', 'z', '-z', 'main']

def a(string):
    # Fill the {path}/{name} placeholders with the values prompted above
    return str(string).replace('{path}', path).replace('{name}', name)
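
# The a() helper above just fills the two placeholders; a standalone version
# makes the substitution easy to check in isolation. The name substitute is
# mine, not part of this script:

```python
def substitute(template, path, name):
    # Same replacement a() performs, with path/name passed explicitly
    return template.replace('{path}', path).replace('{name}', name)

# e.g. substitute("function {path}/{name}/main", "worldtool:process", "fill")
# -> "function worldtool:process/fill/main"
```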
contents = [
    a(
        "scoreboard players add #writerPosX worldtool 1"
        "\ntp ~1 ~ ~"
        "\nexecute positioned ~1 ~ ~ run function {path}/{name}/main"
        "\n"
    ),
    a(
        "scoreboard players remove #writerPosX worldtool 1"
        "\ntp ~-1 ~ ~"
        "\nexecute positioned ~-1 ~ ~ run function {path}/{name}/main"
        "\n"
    ),
    a(
        "scoreboard players operation #pos2z worldtool = #pos1z worldtool"
        "\nscoreboard players operation #pos1z worldtool = #writerPosZ worldtool"
        "\nscoreboard players operation #pos2x worldtool = #pos1x worldtool"
        "\nscoreboard players operation #pos1x worldtool = #writerPosX worldtool"
        "\n"
        "\nscoreboard players add #writerPosY worldtool 1"
        "\ntp ~ ~1 ~"
        "\nexecute positioned ~ ~1 ~ run function {path}/{name}/main"
        "\n"
    ),
    a(
        "scoreboard players operation #pos2z worldtool = #pos1z worldtool"
        "\nscoreboard players operation #pos1z worldtool = #writerPosZ worldtool"
        "\nscoreboard players operation #pos2x worldtool = #pos1x worldtool"
        "\nscoreboard players operation #pos1x worldtool = #writerPosX worldtool"
        "\n"
        "\nscoreboard players remove #writerPosY worldtool 1"
        "\ntp ~ ~-1 ~"
        "\nexecute positioned ~ ~-1 ~ run function {path}/{name}/main"
        "\n"
    ),
    a(
        "scoreboard players operation #pos2x worldtool = #pos1x worldtool"
        "\nscoreboard players operation #pos1x worldtool = #writerPosX worldtool"
        "\n"
        "\nscoreboard players add #writerPosZ worldtool 1"
        "\ntp ~ ~ ~1"
        "\nexecute positioned ~ ~ ~1 run function {path}/{name}/main"
        "\n"
    ),
    a(
        "scoreboard players operation #pos2x worldtool = #pos1x worldtool"
        "\nscoreboard players operation #pos1x worldtool = #writerPosX worldtool"
        "\n"
        "\nscoreboard players remove #writerPosZ worldtool 1"
        "\ntp ~ ~ ~-1"
        "\nexecute positioned ~ ~ ~-1 run function {path}/{name}/main"
        "\n"
    ),
    a(
        "scoreboard players add #blocksChecked worldtool 1"
        "\n"
        "\n# PROCESS-SPECIFIC COMMANDS HERE"
        "\n"
        "\n# Move the writer"
        "\nexecute if score #writerPosX worldtool < #pos2x worldtool unless score #blocksChecked worldtool >= $blocksPerTick worldtool run function {path}/{name}/x"
        "\nexecute if score #writerPosX worldtool > #pos2x worldtool unless score #blocksChecked worldtool >= $blocksPerTick worldtool run function {path}/{name}/-x"
        "\n"
        "\nexecute if score #writerPosX worldtool = #pos2x worldtool if score #writerPosZ worldtool < #pos2z worldtool unless score #blocksChecked worldtool >= $blocksPerTick worldtool run function {path}/{name}/z"
        "\nexecute if score #writerPosX worldtool = #pos2x worldtool if score #writerPosZ worldtool > #pos2z worldtool unless score #blocksChecked worldtool >= $blocksPerTick worldtool run function {path}/{name}/-z"
        "\n"
        "\nexecute if score #writerPosX worldtool = #pos2x worldtool if score #writerPosZ worldtool = #pos2z worldtool if score #writerPosY worldtool < #pos2y worldtool unless score #blocksChecked worldtool >= $blocksPerTick worldtool run function {path}/{name}/y"
        "\nexecute if score #writerPosX worldtool = #pos2x worldtool if score #writerPosZ worldtool = #pos2z worldtool if score #writerPosY worldtool > #pos2y worldtool unless score #blocksChecked worldtool >= $blocksPerTick worldtool run function {path}/{name}/-y"
"\n"
)
]
for direction, content in zip(directions, contents):
    with open('output/' + name + '/' + direction + '.mcfunction', 'w') as file:
        file.write(content)
| 42.794118 | 265 | 0.637342 | 464 | 4,365 | 5.99569 | 0.178879 | 0.081955 | 0.064702 | 0.081955 | 0.791517 | 0.791517 | 0.791517 | 0.791517 | 0.791517 | 0.791517 | 0 | 0.01493 | 0.232761 | 4,365 | 101 | 266 | 43.217822 | 0.815766 | 0 | 0 | 0.434783 | 0 | 0.065217 | 0.714482 | 0.006645 | 0 | 0 | 0 | 0 | 0 | 1 | 0.01087 | false | 0 | 0.043478 | 0.01087 | 0.065217 | 0.01087 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
befd1d98626b41d53b676f597314e2fb0a539708 | 3,345 | py | Python | test/test_scenario_defintion_parser.py | tom-010/django-presentable-exception | 9015ad11e83deb71bd7d2a01ccff72b1f8d74681 | [
"Apache-2.0"
] | null | null | null | test/test_scenario_defintion_parser.py | tom-010/django-presentable-exception | 9015ad11e83deb71bd7d2a01ccff72b1f8d74681 | [
"Apache-2.0"
] | null | null | null | test/test_scenario_defintion_parser.py | tom-010/django-presentable-exception | 9015ad11e83deb71bd7d2a01ccff72b1f8d74681 | [
"Apache-2.0"
] | null | null | null |
from unittest import TestCase

from presentable_exception.presentable_exception import ExceptionScenario, ScenarioDefinitionParser
class TestScenarioDefintionParserWithList(TestCase):

    def setUp(self):
        self.parser = ScenarioDefinitionParser()

    def test_empty_list(self):
        res = self.parser.parse_scenarios('package', [])
        self.assertEqual([], res)

    def test_list_with_one_tuple(self):
        res = self.parser.parse_scenarios('package', [('the-key', 'the description')])
        self.assertEqual(1, len(res))
        self.assertIsInstance(res[0], ExceptionScenario)
        self.assertEqual('package', res[0].package)
        self.assertEqual('the-key', res[0].key)
        self.assertEqual('the description', res[0].description)
        self.assertEqual('server', res[0].responsible)

    def test_no_description_given(self):
        res = self.parser.parse_scenarios('package', [('the-key',)])
        self.assertEqual(1, len(res))
        self.assertIsInstance(res[0], ExceptionScenario)
        self.assertEqual('package', res[0].package)
        self.assertEqual('the-key', res[0].key)
        self.assertFalse(res[0].description)

    def test_empty_tuple(self):
        res = self.parser.parse_scenarios('package', [()])
        self.assertEqual(0, len(res))

    def test_empty_key(self):
        res = self.parser.parse_scenarios('package', [('', 'd')])
        self.assertEqual(0, len(res))

    def test_lists_work_too(self):
        # note the [] around 'the-key', 'the description'
        res = self.parser.parse_scenarios('package', [['the-key', 'the description']])
        self.assertEqual(1, len(res))
        self.assertIsInstance(res[0], ExceptionScenario)
        self.assertEqual('package', res[0].package)
        self.assertEqual('the-key', res[0].key)
        self.assertEqual('the description', res[0].description)

    def test_invalid_input(self):
        for invalid_input in [set, 1, 1.1, 'abc']:
            res = self.parser.parse_scenarios('package', invalid_input)
            self.assertEqual([], res)
class TestScenarioDefintionParserWithMap(TestCase):

    def setUp(self):
        self.parser = ScenarioDefinitionParser()

    def test_empty_map(self):
        res = self.parser.parse_scenarios('package', {})
        self.assertEqual([], res)

    def test_server_fault(self):
        res = self.parser.parse_scenarios('package', {'server': [('the-key', 'the description')]})
        self.assertEqual(1, len(res))
        self.assertIsInstance(res[0], ExceptionScenario)
        self.assertEqual('package', res[0].package)
        self.assertEqual('the-key', res[0].key)
        self.assertEqual('the description', res[0].description)
        self.assertEqual('server', res[0].responsible)

    def test_client_fault(self):
        res = self.parser.parse_scenarios('package', {'client': [('the-key', 'the description')]})
        self.assertEqual(1, len(res))
        self.assertIsInstance(res[0], ExceptionScenario)
        self.assertEqual('package', res[0].package)
        self.assertEqual('the-key', res[0].key)
        self.assertEqual('the description', res[0].description)
        self.assertEqual('client', res[0].responsible)
    # def test_list_with_one_tuple(self):
    #     res = self.parser.parse_scenarios('package', [('the-key', 'the description')])
| 40.792683 | 99 | 0.656203 | 385 | 3,345 | 5.58961 | 0.137662 | 0.188197 | 0.06645 | 0.092007 | 0.81552 | 0.795074 | 0.779275 | 0.741636 | 0.699349 | 0.654275 | 0 | 0.012236 | 0.193722 | 3,345 | 82 | 100 | 40.792683 | 0.785688 | 0.049626 | 0 | 0.564516 | 0 | 0 | 0.103589 | 0 | 0 | 0 | 0 | 0 | 0.532258 | 1 | 0.193548 | false | 0 | 0.032258 | 0 | 0.258065 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
83511f6af5bba6e17f2245aa2558ca79aec90a48 | 25 | py | Python | Test.py | victoria-vargas/OPE | 86e0a7c475ddd2712f42642176312943ba707c88 | [
"Apache-2.0"
] | null | null | null | Test.py | victoria-vargas/OPE | 86e0a7c475ddd2712f42642176312943ba707c88 | [
"Apache-2.0"
] | null | null | null | Test.py | victoria-vargas/OPE | 86e0a7c475ddd2712f42642176312943ba707c88 | [
"Apache-2.0"
] | null | null | null | print("TESTANDO SA POHA") | 25 | 25 | 0.76 | 4 | 25 | 4.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.08 | 25 | 1 | 25 | 25 | 0.826087 | 0 | 0 | 0 | 0 | 0 | 0.615385 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
55d1dd87b712b83090c86f645aeb3f87a532bf73 | 226 | py | Python | src/facegraph/__init__.py | brstrat/pyFaceGraph | c1cf307b19a80dee06a384e28d93c03acfe5a490 | [
"Unlicense"
] | 13 | 2015-03-24T11:45:55.000Z | 2019-04-27T20:21:03.000Z | src/facegraph/__init__.py | brstrat/pyFaceGraph | c1cf307b19a80dee06a384e28d93c03acfe5a490 | [
"Unlicense"
] | 3 | 2015-03-30T11:50:03.000Z | 2021-11-11T16:57:08.000Z | src/facegraph/__init__.py | brstrat/pyFaceGraph | c1cf307b19a80dee06a384e28d93c03acfe5a490 | [
"Unlicense"
] | 9 | 2015-01-11T14:26:59.000Z | 2021-11-09T19:17:06.000Z | # -*- coding: utf-8 -*-
__version__ = '0.0.36'
from facegraph.api import Api
from facegraph.api import ApiException
from facegraph.fql import FQL
from facegraph.graph import Graph
from facegraph.graph import GraphException
| 22.6 | 42 | 0.778761 | 32 | 226 | 5.375 | 0.4375 | 0.377907 | 0.186047 | 0.255814 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025641 | 0.137168 | 226 | 9 | 43 | 25.111111 | 0.85641 | 0.09292 | 0 | 0 | 0 | 0 | 0.029557 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.833333 | 0 | 0.833333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
55f703fa95d52e8d4ac64af2b18c721a5724d09a | 11,059 | py | Python | tests/test_hasher.py | wfscheper/hasher | b12b17258b12ec8145e1ee75701afdae1610b564 | [
"Apache-2.0"
] | 1 | 2019-08-01T19:19:18.000Z | 2019-08-01T19:19:18.000Z | tests/test_hasher.py | wfscheper/hasher | b12b17258b12ec8145e1ee75701afdae1610b564 | [
"Apache-2.0"
] | null | null | null | tests/test_hasher.py | wfscheper/hasher | b12b17258b12ec8145e1ee75701afdae1610b564 | [
"Apache-2.0"
] | 2 | 2015-05-08T08:07:05.000Z | 2019-03-14T08:43:37.000Z | # Copyright 2013 Walter Scheper
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import hashlib
import io
import pytest
@pytest.fixture
def md5hasher(mocker):
    from hasher import hashes

    md5hasher = hashes.MD5Hasher(
        mocker.MagicMock(name="stdout"), mocker.MagicMock(name="stderr")
    )
    return md5hasher


@pytest.fixture
def args():
    from hasher.app import AttrDict

    return AttrDict(binary=False, warn=False, status=False, quiet=False)
class TestMD5Hasher:
    data = "a string of data to hash\n"
    data_md5 = hashlib.md5(data.encode("utf-8")).hexdigest()
    check_data = "3ac11b17fa463072f069580031317af2 AUTHORS\n4e6ee384b7a0a002681cda43a5ccc9d0 README.rst\n"

    def test_generate_invalid_file(self, mocker, md5hasher, args):
        _open = mocker.patch("hasher.hashes.open", mocker.mock_open())
        _open.side_effect = IOError()
        with pytest.raises(IOError):
            md5hasher.generate_hash("foo", args)
        _open.assert_called_once_with("foo", "r")

    def test_generate_display_text(self, mocker, md5hasher, args):
        _open = mocker.patch(
            "hasher.hashes.open", mocker.mock_open(read_data=self.data)
        )
        md5hasher.generate_hash("foo", args)
        _open.assert_called_once_with("foo", "r")
        md5hasher.stdout.assert_called_with("%s foo" % self.data_md5)

    def test_generate_display_text_binary(self, mocker, md5hasher, args):
        _open = mocker.patch(
            "hasher.hashes.open", mocker.mock_open(read_data=self.data.encode("utf-8"))
        )
        args.binary = True
        md5hasher.generate_hash("foo", args)
        _open.assert_called_once_with("foo", "rb")
        md5hasher.stdout.assert_called_with("%s *foo" % self.data_md5)

    def test_checkresult_display(self, mocker, md5hasher, args):
        _open = mocker.patch("hasher.hashes.open", mocker.mock_open())
        _open.side_effect = [
            io.StringIO(self.check_data),
            io.StringIO("AUTHORS\n"),
            io.StringIO("README.rst\n"),
        ]
        rc = md5hasher.check_hash("foo", args)
        expected_stdout_calls = [
            mocker.call("AUTHORS: OK"),
            mocker.call("README.rst: OK"),
        ]
        assert expected_stdout_calls == md5hasher.stdout.call_args_list
        assert rc == 0

    def test_checkresult_display_formaterror(self, mocker, md5hasher, args):
        _open = mocker.patch("hasher.hashes.open", mocker.mock_open())
        _open.side_effect = [
            io.StringIO(
                "1234 File\n"
                "1111111111111111111111111111111 File2\n"
                "f2cd884501b6913cad2ae243475a75d3 +README.rst\n"
                "111111111111111111111111111111111 File2\n"
                "1111111111111111111111111111111111 File2\n"
            )
        ]
        rc = md5hasher.check_hash("foo", args)
        assert [] == md5hasher.stdout.call_args_list
        expected_stderr_calls = [
            mocker.call("hasher md5: WARNING: 5 lines are improperly formatted")
        ]
        assert expected_stderr_calls == md5hasher.stderr.call_args_list
        assert rc == 1

    def test_checkresult_display_hasherror(self, mocker, md5hasher, args):
        _open = mocker.patch("hasher.hashes.open", mocker.mock_open())
        _open.side_effect = [
            io.StringIO(self.check_data),
            io.StringIO("AUTHORS.\n"),
            io.StringIO("README.rst\n"),
        ]
        rc = md5hasher.check_hash("foo", args)
        expected_stdout_calls = [
            mocker.call("AUTHORS: FAILED"),
            mocker.call("README.rst: OK"),
        ]
        assert expected_stdout_calls == md5hasher.stdout.call_args_list
        expected_stderr_calls = [
            mocker.call("hasher md5: WARNING: 1 computed checksum did NOT match")
        ]
        assert expected_stderr_calls == md5hasher.stderr.call_args_list
        assert rc == 1

    def test_checkresult_display_hasherror_multiple(self, mocker, md5hasher, args):
        _open = mocker.patch("hasher.hashes.open", mocker.mock_open())
        _open.side_effect = [
            io.StringIO(self.check_data),
            io.StringIO("AUTHORS.\n"),
            io.StringIO("README.rst.\n"),
        ]
        rc = md5hasher.check_hash("foo", args)
        expected_stdout_calls = [
            mocker.call("AUTHORS: FAILED"),
            mocker.call("README.rst: FAILED"),
        ]
        assert expected_stdout_calls == md5hasher.stdout.call_args_list
        expected_stderr_calls = [
            mocker.call("hasher md5: WARNING: 2 computed checksums did NOT match")
        ]
        assert expected_stderr_calls == md5hasher.stderr.call_args_list
        assert rc == 1

    def test_checkresult_display_readerror(self, mocker, md5hasher, args):
        _open = mocker.patch("hasher.hashes.open", mocker.mock_open())
        _open.side_effect = [
            io.StringIO(self.check_data),
            IOError,
            io.StringIO("README.rst\n"),
        ]
        rc = md5hasher.check_hash("foo", args)
        expected_stdout_calls = [
            mocker.call("AUTHORS: FAILED open or read"),
            mocker.call("README.rst: OK"),
        ]
        assert expected_stdout_calls == md5hasher.stdout.call_args_list
        expected_stderr_calls = [
            mocker.call("hasher md5: AUTHORS: No such file or directory"),
            mocker.call("hasher md5: WARNING: 1 listed file could not be read"),
        ]
        assert expected_stderr_calls == md5hasher.stderr.call_args_list
        assert rc == 1

    def test_checkresult_display_readerror_multiple(self, mocker, md5hasher, args):
        _open = mocker.patch("hasher.hashes.open", mocker.mock_open())
        _open.side_effect = [io.StringIO(self.check_data), IOError, IOError]
        rc = md5hasher.check_hash("foo", args)
        expected_stdout_calls = [
            mocker.call("AUTHORS: FAILED open or read"),
            mocker.call("README.rst: FAILED open or read"),
        ]
        assert expected_stdout_calls == md5hasher.stdout.call_args_list
        expected_stderr_calls = [
            mocker.call("hasher md5: AUTHORS: No such file or directory"),
            mocker.call("hasher md5: README.rst: No such file or directory"),
            mocker.call("hasher md5: WARNING: 2 listed files could not be read"),
        ]
        assert expected_stderr_calls == md5hasher.stderr.call_args_list
        assert rc == 1

    def test_checkresult_quiet(self, mocker, md5hasher, args):
        _open = mocker.patch("hasher.hashes.open", mocker.mock_open())
        _open.side_effect = [
            io.StringIO(self.check_data),
            io.StringIO("AUTHORS\n"),
            io.StringIO("README.rst\n"),
        ]
        args.quiet = True
        rc = md5hasher.check_hash("foo", args)
        expected_stdout_calls = []
        assert expected_stdout_calls == md5hasher.stdout.call_args_list
        expected_stderr_calls = []
        assert expected_stderr_calls == md5hasher.stderr.call_args_list
        assert rc == 0

    def test_checkresult_status(self, mocker, md5hasher, args):
        _open = mocker.patch("hasher.hashes.open", mocker.mock_open())
        _open.side_effect = [
            io.StringIO(self.check_data),
            io.StringIO("AUTHORS\n"),
            io.StringIO("README.rst\n"),
        ]
        args.status = True
        rc = md5hasher.check_hash("foo", args)
        expected_stdout_calls = []
        assert expected_stdout_calls == md5hasher.stdout.call_args_list
        expected_stderr_calls = []
        assert expected_stderr_calls == md5hasher.stderr.call_args_list
        assert rc == 0

    def test_checkresult_status_hasherror(self, mocker, md5hasher, args):
        _open = mocker.patch("hasher.hashes.open", mocker.mock_open())
        _open.side_effect = [
            io.StringIO(self.check_data),
            io.StringIO("AUTHORS\n"),
            io.StringIO("AUTHORS\n"),
        ]
        args.status = True
        rc = md5hasher.check_hash("foo", args)
        expected_stdout_calls = []
        assert expected_stdout_calls == md5hasher.stdout.call_args_list
        expected_stderr_calls = []
        assert expected_stderr_calls == md5hasher.stderr.call_args_list
        assert rc == 1

    def test_checkresult_status_readerror(self, mocker, md5hasher, args):
        _open = mocker.patch("hasher.hashes.open", mocker.mock_open())
        _open.side_effect = [
            io.StringIO(self.check_data),
            io.StringIO("AUTHORS\n"),
            IOError,
        ]
        args.status = True
        rc = md5hasher.check_hash("foo", args)
        expected_stdout_calls = []
        assert expected_stdout_calls == md5hasher.stdout.call_args_list
        expected_stderr_calls = [
            mocker.call("hasher md5: README.rst: No such file or directory")
        ]
        assert expected_stderr_calls == md5hasher.stderr.call_args_list
        assert rc == 1

    def test_checkresult_warn(self, mocker, md5hasher, args):
        _open = mocker.patch("hasher.hashes.open", mocker.mock_open())
        _open.side_effect = [
            io.StringIO(self.check_data),
            io.StringIO("AUTHORS\n"),
            io.StringIO("README.rst\n"),
        ]
        args.warn = True
        rc = md5hasher.check_hash("foo", args)
        expected_stdout_calls = [
            mocker.call("AUTHORS: OK"),
            mocker.call("README.rst: OK"),
        ]
        assert expected_stdout_calls == md5hasher.stdout.call_args_list
        assert [] == md5hasher.stderr.call_args_list
        assert rc == 0

    def test_checkresult_warn_formaterror(self, mocker, md5hasher, args):
        _open = mocker.patch("hasher.hashes.open", mocker.mock_open())
        _open.side_effect = [
            io.StringIO(
                "3ac11b17fa463072f069580031317af2 AUTHORS\n4e6ee384b7a0a002681cda43a5ccc9d0 +README.rst\n"
            ),
            io.StringIO("AUTHORS\n"),
        ]
        args.warn = True
        rc = md5hasher.check_hash("foo", args)
        expected_stdout_calls = [mocker.call("AUTHORS: OK")]
        assert expected_stdout_calls == md5hasher.stdout.call_args_list
        expected_stderr_calls = [
            mocker.call("hasher md5: foo: 2: improperly formatted MD5 checksum line"),
            mocker.call("hasher md5: WARNING: 1 line is improperly formatted"),
        ]
        assert expected_stderr_calls == md5hasher.stderr.call_args_list
        assert rc == 1
| 36.259016 | 108 | 0.630979 | 1,278 | 11,059 | 5.243349 | 0.129108 | 0.044769 | 0.041188 | 0.051485 | 0.813461 | 0.804954 | 0.772273 | 0.768094 | 0.768094 | 0.76496 | 0 | 0.038646 | 0.262953 | 11,059 | 304 | 109 | 36.378289 | 0.783462 | 0.049824 | 0 | 0.618644 | 0 | 0 | 0.163713 | 0.026301 | 0 | 0 | 0 | 0 | 0.169492 | 1 | 0.072034 | false | 0 | 0.021186 | 0 | 0.118644 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3644a8621263c06fc1461337f4b6b5ef0564b49b | 38 | py | Python | cupy_alias/math/ufunc.py | fixstars/clpy | 693485f85397cc110fa45803c36c30c24c297df0 | [
"BSD-3-Clause"
] | 142 | 2018-06-07T07:43:10.000Z | 2021-10-30T21:06:32.000Z | cupy_alias/math/ufunc.py | fixstars/clpy | 693485f85397cc110fa45803c36c30c24c297df0 | [
"BSD-3-Clause"
] | 282 | 2018-06-07T08:35:03.000Z | 2021-03-31T03:14:32.000Z | cupy_alias/math/ufunc.py | fixstars/clpy | 693485f85397cc110fa45803c36c30c24c297df0 | [
"BSD-3-Clause"
] | 19 | 2018-06-19T11:07:53.000Z | 2021-05-13T20:57:04.000Z | from clpy.math.ufunc import * # NOQA
| 19 | 37 | 0.710526 | 6 | 38 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.184211 | 38 | 1 | 38 | 38 | 0.870968 | 0.105263 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
367dd3c50df5131b114cbf21575fbf1a1c68f81f | 5,172 | py | Python | v_error_eval.py | OnishiItsuki/sketched-nmf_for_github | 4a01cbbf7314fb6de4a280e1f6266ddc2100aaf3 | [
"MIT"
] | null | null | null | v_error_eval.py | OnishiItsuki/sketched-nmf_for_github | 4a01cbbf7314fb6de4a280e1f6266ddc2100aaf3 | [
"MIT"
] | null | null | null | v_error_eval.py | OnishiItsuki/sketched-nmf_for_github | 4a01cbbf7314fb6de4a280e1f6266ddc2100aaf3 | [
"MIT"
] | null | null | null | import numpy as np
import functionfile as ff
def v_error_eval(n, m, r, approximate_size, v, iteration, seeds, c_mode, nmfqp, column_sketching):
    nmf_error = np.zeros(iteration)
    snmf_error = np.zeros(iteration)
    w = ff.generate_w(n, r, seeds[0], c_mode=c_mode)
    h = ff.generate_h(r, m, seeds[1], c_mode=c_mode)
    v_s = ff.uniform_sampling(v, approximate_size, seeds[0] + 1, right_product=column_sketching)

    print("\n\n\n------------------ NMF -------------------")
    if nmfqp:
        print("NMF matrix H is calculated by QP.")
        for i in range(0, iteration):
            w, h = ff.update(v, w, h, c_mode)
            h_qp = ff.calculate_h(v, w, print_interim=True)
            nmf_error[i] = np.linalg.norm(v - np.dot(w, h_qp)) ** 2
            if (i == 0) | (i % 100 == 99):
                print(str(i + 1) + " times update error: " + str(nmf_error[i]))
            h = h_qp
    else:
        for i in range(0, iteration):
            w, h = ff.update(v, w, h, c_mode)
            nmf_error[i] = np.linalg.norm(v - np.dot(w, h)) ** 2
            if (i == 0) | (i % 100 == 99):
                print(str(i + 1) + " times update error: " + str(nmf_error[i]))

    print("\n\n------------- Sketching NMF --------------")
    if column_sketching:
        w_s = ff.generate_w(n, r, seeds[0], c_mode=c_mode)
        h_s = ff.generate_h(r, approximate_size, seeds[1], c_mode=c_mode)
        for i in range(0, iteration):
            w_s, h_s = ff.update(v_s, w_s, h_s, c_mode)
            h_os = ff.calculate_h(v, w_s, print_interim=True)
            snmf_error[i] = np.linalg.norm(v - np.dot(w_s, h_os)) ** 2
            if (i == 0) | (i % 100 == 99):
                print(str(i + 1) + " times update error: " + str(snmf_error[i]))
        h_s = h_os
    else:
        w_s = ff.generate_w(approximate_size, r, seeds[0], c_mode=c_mode)
        h_s = ff.generate_h(r, m, seeds[1], c_mode=c_mode)
        for i in range(0, iteration):
            w_s, h_s = ff.update(v_s, w_s, h_s, c_mode)
            w_os = ff.calculate_h(v, h_s, print_interim=True)
            snmf_error[i] = np.linalg.norm(v - np.dot(w_os, h_s)) ** 2
            if (i == 0) | (i % 100 == 99):
                print(str(i + 1) + " times update error: " + str(snmf_error[i]))
        w_s = w_os
    return nmf_error, snmf_error, w, h, w_s, h_s
def parallel_v_error_eval(r, approximate_size, v, iteration, wh_seed, c_mode, nmfqp, t_flag, snmf_only=False):
    theta_start = 5
    n, m = v.shape
    seeds = ff.two_seeds(wh_seed)
    nmf_error = np.zeros(iteration)
    snmf_error = np.zeros(iteration)
    w = ff.generate_w(n, r, seeds[0], c_mode=c_mode)
    h = ff.generate_h(r, m, seeds[1], c_mode=c_mode)
    v_s = ff.uniform_sampling(v, approximate_size, seeds[0] + 1)

    # NMF -------------------------------------------------------------------------------------------------------------
    theta_w = theta_h = theta_start
    if not snmf_only:
        if nmfqp:
            print("NMF matrix H is calculated by QP.")
            for i in range(0, iteration):
                if c_mode != 2:
                    w, h = ff.update(v, w, h, c_mode)
                else:
                    w, h, theta_w, theta_h = ff.fgd_update(v, w, h, theta_w, theta_h)
                h_qp = ff.calculate_h(v, w, print_interim=False)
                nmf_error[i] = np.linalg.norm(v - np.dot(w, h_qp)) ** 2
                if (i == 0) | (i % 100 == 99):
                    print("NMF ( r=" + str(r) + " k=" + str(approximate_size) + " ) : " + str(i + 1) + " times update")
                h = h_qp
        else:
            for i in range(0, iteration):
                if c_mode != 2:
                    w, h = ff.update(v, w, h, c_mode)
                else:
                    w, h, theta_w, theta_h = ff.fgd_update(v, w, h, theta_w, theta_h)
                nmf_error[i] = np.linalg.norm(v - np.dot(w, h)) ** 2
                if (i == 0) | (i % 2000 == 1999):
                    print("NMF ( r=" + str(r) + " k=" + str(approximate_size) + " ) : " + str(i + 1) + " times update")

    # SNMF ------------------------------------------------------------------------------------------------------------
    w_s = ff.generate_w(n, r, seeds[0], c_mode=c_mode)
    h_s = ff.generate_h(r, approximate_size, seeds[1], c_mode=c_mode)
    for i in range(0, iteration):
        if c_mode != 2:
            w_s, h_s = ff.update(v_s, w_s, h_s, c_mode)
        else:
            w_s, h_s, theta_w, theta_h = ff.fgd_update(v_s, w_s, h_s, theta_w, theta_h)
        h_os = ff.calculate_h(v, w_s, print_interim=False)
        snmf_error[i] = np.linalg.norm(v - np.dot(w_s, h_os)) ** 2
        if (i == 0) | (i % 100 == 99):
            print("SketchingNMF ( r={} k={} seed={} ) : {} times update".format(r, approximate_size, wh_seed, i + 1))

    if t_flag & snmf_only:
        return nmf_error, snmf_error, None, None, h_os.T, w_s.T, h_s.T
    elif t_flag:
        return nmf_error, snmf_error, h.T, w.T, h_os.T, w_s.T, h_s.T
    elif snmf_only:
        return nmf_error, snmf_error, None, None, w_s, h_os, h_s
    else:
        return nmf_error, snmf_error, w, h, w_s, h_os, h_s
| 46.178571 | 120 | 0.506961 | 840 | 5,172 | 2.892857 | 0.097619 | 0.065844 | 0.016049 | 0.041152 | 0.821399 | 0.779012 | 0.774486 | 0.774074 | 0.755967 | 0.7 | 0 | 0.02316 | 0.298724 | 5,172 | 111 | 121 | 46.594595 | 0.646816 | 0.044277 | 0 | 0.639175 | 0 | 0 | 0.074696 | 0.004858 | 0 | 0 | 0 | 0 | 0 | 1 | 0.020619 | false | 0 | 0.020619 | 0 | 0.092784 | 0.164948 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
36866a86ea24bf328274bb038b85fb6204230cd5 | 15,456 | py | Python | src/openpersonen/api/tests/views/test_partner.py | maykinmedia/open-personen | ddcf083ccd4eb864c5305bcd8bc75c6c64108272 | [
"RSA-MD"
] | 2 | 2020-08-26T11:24:43.000Z | 2021-07-28T09:46:40.000Z | src/openpersonen/api/tests/views/test_partner.py | maykinmedia/open-personen | ddcf083ccd4eb864c5305bcd8bc75c6c64108272 | [
"RSA-MD"
] | 153 | 2020-08-26T10:45:35.000Z | 2021-12-10T17:33:16.000Z | src/openpersonen/api/tests/views/test_partner.py | maykinmedia/open-personen | ddcf083ccd4eb864c5305bcd8bc75c6c64108272 | [
"RSA-MD"
] | null | null | null | from django.template import loader
from django.urls import NoReverseMatch, reverse
from django.utils.module_loading import import_string
import requests_mock
from mock import patch
from rest_framework.test import APITestCase
from openpersonen.api.tests.factory_models import (
PartnerschapFactory,
PersoonFactory,
TokenFactory,
)
from openpersonen.api.tests.test_data import (
PARTNER_RETRIEVE_DATA,
PARTNER_RETRIEVE_DATA_NO_DATES,
)
from openpersonen.api.views.generic_responses import get_404_response
from openpersonen.contrib.stufbg.models import StufBGClient
from openpersonen.features.country_code_and_omschrijving.factory_models import (
CountryCodeAndOmschrijvingFactory,
)
from openpersonen.features.country_code_and_omschrijving.models import (
CountryCodeAndOmschrijving,
)
from openpersonen.features.gemeente_code_and_omschrijving.factory_models import (
GemeenteCodeAndOmschrijvingFactory,
)
from openpersonen.features.gemeente_code_and_omschrijving.models import (
GemeenteCodeAndOmschrijving,
)
@patch(
"openpersonen.api.data_classes.persoon.backend",
import_string("openpersonen.contrib.stufbg.backend.default"),
)
class TestPartner(APITestCase):
    def setUp(self):
        super().setUp()
        self.url = StufBGClient.get_solo().url
        self.persoon_bsn = 123456789
        self.partner_bsn = 987654321
        self.token = TokenFactory.create()
        CountryCodeAndOmschrijvingFactory.create()
        GemeenteCodeAndOmschrijvingFactory.create()

    def test_partner_without_token(self):
        response = self.client.get(
            reverse(
                "partners-list",
                kwargs={"ingeschrevenpersonen_burgerservicenummer": self.persoon_bsn},
            )
        )
        self.assertEqual(response.status_code, 401)

    @requests_mock.Mocker()
    def test_list_partner(self, post_mock):
        post_mock.post(
            self.url,
            content=bytes(
                loader.render_to_string("response/ResponseTwoPartners.xml"),
                encoding="utf-8",
            ),
        )
        response = self.client.get(
            reverse(
                "partners-list",
                kwargs={"ingeschrevenpersonen_burgerservicenummer": self.persoon_bsn},
            ),
            HTTP_AUTHORIZATION=f"Token {self.token.key}",
        )
        self.assertEqual(response.status_code, 200)
        self.assertTrue(post_mock.called)
        data = response.json()["_embedded"]["partners"]
        self.assertEqual(len(data), 2)
        first_bsn = data[0]["burgerservicenummer"]
        second_bsn = data[1]["burgerservicenummer"]
        self.assertTrue(first_bsn == str(self.partner_bsn) or first_bsn == "123456789")
        self.assertTrue(
            second_bsn == str(self.partner_bsn) or second_bsn == "123456789"
        )

    @requests_mock.Mocker()
    def test_list_partner_with_one_partner(self, post_mock):
        post_mock.post(
            self.url,
            content=bytes(
                loader.render_to_string("response/ResponseOnePartner.xml"),
                encoding="utf-8",
            ),
        )
        response = self.client.get(
            reverse(
                "partners-list",
                kwargs={"ingeschrevenpersonen_burgerservicenummer": self.persoon_bsn},
            ),
            HTTP_AUTHORIZATION=f"Token {self.token.key}",
        )
        self.assertEqual(response.status_code, 200)
        self.assertTrue(post_mock.called)
        data = response.json()["_embedded"]["partners"]
        self.assertEqual(len(data), 1)
        self.assertEqual(data[0]["burgerservicenummer"], str(self.partner_bsn))

    @requests_mock.Mocker()
    def test_detail_partner(self, post_mock):
        post_mock.post(
            self.url,
            content=bytes(
                loader.render_to_string("response/ResponseOnePartner.xml"),
                encoding="utf-8",
            ),
        )
        response = self.client.get(
            reverse(
                "partners-detail",
                kwargs={
                    "ingeschrevenpersonen_burgerservicenummer": self.persoon_bsn,
                    "id": self.partner_bsn,
                },
            ),
            HTTP_AUTHORIZATION=f"Token {self.token.key}",
        )
        self.assertEqual(response.status_code, 200)
        self.assertTrue(post_mock.called)
        self.maxDiff = None
        self.assertEqual(response.json(), PARTNER_RETRIEVE_DATA)

    @requests_mock.Mocker()
    def test_detail_partner_no_dates(self, post_mock):
        post_mock.post(
            self.url,
            content=bytes(
                loader.render_to_string("response/ResponsePartnerNoDates.xml"),
                encoding="utf-8",
            ),
        )
        response = self.client.get(
            reverse(
                "partners-detail",
                kwargs={
                    "ingeschrevenpersonen_burgerservicenummer": self.persoon_bsn,
                    "id": self.partner_bsn,
                },
            ),
            HTTP_AUTHORIZATION=f"Token {self.token.key}",
        )
        self.assertEqual(response.status_code, 200)
        self.assertTrue(post_mock.called)
        self.maxDiff = None
        self.assertEqual(response.json(), PARTNER_RETRIEVE_DATA_NO_DATES)

    @requests_mock.Mocker()
    def test_detail_partner_BG_response(self, post_mock):
        fake_bsn = 123456780
        fake_partner_bsn = 123456789
        post_mock.post(
            self.url,
            content=bytes(
                loader.render_to_string("response/ResponseBG.xml"),
                encoding="utf-8",
            ),
        )
        response = self.client.get(
            reverse(
                "partners-detail",
                kwargs={
                    "ingeschrevenpersonen_burgerservicenummer": fake_bsn,
                    "id": fake_partner_bsn,
                },
            ),
            HTTP_AUTHORIZATION=f"Token {self.token.key}",
        )
        self.assertEqual(response.status_code, 200)
        self.assertTrue(post_mock.called)
        self.assertEqual(response.json()["burgerservicenummer"], str(fake_partner_bsn))

    @requests_mock.Mocker()
    def test_detail_partner_when_id_does_not_match(self, post_mock):
        post_mock.post(
            self.url,
            content=bytes(
                loader.render_to_string("response/ResponseOnePartner.xml"),
                encoding="utf-8",
            ),
        )
        response = self.client.get(
            reverse(
                "partners-detail",
                kwargs={
                    "ingeschrevenpersonen_burgerservicenummer": self.persoon_bsn,
                    "id": 111111111,
                },
            ),
            HTTP_AUTHORIZATION=f"Token {self.token.key}",
        )
        self.assertEqual(response.status_code, 404)
        self.assertTrue(post_mock.called)

    @requests_mock.Mocker()
    def test_detail_partner_with_two_partners(self, post_mock):
        post_mock.post(
            self.url,
            content=bytes(
                loader.render_to_string("response/ResponseTwoPartners.xml"),
                encoding="utf-8",
            ),
        )
        response = self.client.get(
            reverse(
                "partners-detail",
                kwargs={
                    "ingeschrevenpersonen_burgerservicenummer": self.persoon_bsn,
                    "id": self.partner_bsn,
                },
            ),
            HTTP_AUTHORIZATION=f"Token {self.token.key}",
        )
        self.assertEqual(response.status_code, 200)
        self.assertTrue(post_mock.called)
        self.maxDiff = None
        self.assertEqual(response.json(), PARTNER_RETRIEVE_DATA)

    @requests_mock.Mocker()
    def test_detail_partner_when_id_does_not_match_with_two_partners(self, post_mock):
        post_mock.post(
            self.url,
            content=bytes(
                loader.render_to_string("response/ResponseTwoPartners.xml"),
                encoding="utf-8",
            ),
        )
        response = self.client.get(
            reverse(
                "partners-detail",
                kwargs={
                    "ingeschrevenpersonen_burgerservicenummer": self.persoon_bsn,
                    "id": 111111111,
                },
            ),
            HTTP_AUTHORIZATION=f"Token {self.token.key}",
        )
        self.assertEqual(response.status_code, 404)
        self.assertTrue(post_mock.called)

    def test_detail_partner_with_bad_id(self):
        with self.assertRaises(NoReverseMatch):
            self.client.get(
                reverse(
                    "partners-detail",
                    kwargs={
                        "ingeschrevenpersonen_burgerservicenummer": self.persoon_bsn,
                        "id": "badid",
                    },
                ),
                HTTP_AUTHORIZATION=f"Token {self.token.key}",
            )
class TestPartnerWithTestingModels(APITestCase):
def setUp(self):
super().setUp()
self.persoon_bsn = 123456789
self.partner_bsn = 111111111
self.persoon = PersoonFactory.create(
burgerservicenummer_persoon=self.persoon_bsn
)
self.partnerschap = PartnerschapFactory(
persoon=self.persoon,
burgerservicenummer_echtgenoot_geregistreerd_partner=self.partner_bsn,
)
self.token = TokenFactory.create()
def test_partner_without_token(self):
response = self.client.get(
reverse(
"partners-list",
kwargs={"ingeschrevenpersonen_burgerservicenummer": self.persoon_bsn},
)
)
self.assertEqual(response.status_code, 401)
def test_partner_with_token(self):
response = self.client.get(
reverse(
"partners-list",
kwargs={"ingeschrevenpersonen_burgerservicenummer": self.persoon_bsn},
),
HTTP_AUTHORIZATION=f"Token {self.token.key}",
)
self.assertEqual(response.status_code, 200)
def test_list_partner(self):
response = self.client.get(
reverse(
"partners-list",
kwargs={"ingeschrevenpersonen_burgerservicenummer": self.persoon_bsn},
),
HTTP_AUTHORIZATION=f"Token {self.token.key}",
)
self.assertEqual(response.status_code, 200)
self.assertTrue(isinstance(response.json()["_embedded"]["partners"], list))
data = response.json()["_embedded"]["partners"][0]
self.assertEqual(
data["burgerservicenummer"],
str(self.partner_bsn),
)
self.assertEqual(
data["_embedded"]["naam"]["voornamen"],
self.partnerschap.voornamen_echtgenoot_geregistreerd_partner,
)
self.assertEqual(
data["_embedded"]["geboorte"]["_embedded"]["datum"]["datum"],
str(self.partnerschap.geboortedatum_echtgenoot_geregistreerd_partner),
)
self.assertEqual(
data["_embedded"]["geboorte"]["_embedded"]["land"]["omschrijving"],
CountryCodeAndOmschrijving.get_omschrijving_from_code(
self.partnerschap.geboorteland_echtgenoot_geregistreerd_partner
),
)
self.assertEqual(
data["_embedded"]["inOnderzoek"]["_embedded"]["datumIngangOnderzoek"][
"datum"
],
str(self.partnerschap.datum_ingang_onderzoek),
)
self.assertEqual(
data["_embedded"]["aangaanHuwelijkPartnerschap"]["_embedded"]["datum"][
"datum"
],
str(
self.partnerschap.datum_huwelijkssluiting_aangaan_geregistreerd_partnerschap
),
)
self.assertEqual(
data["_embedded"]["aangaanHuwelijkPartnerschap"]["_embedded"]["land"][
"omschrijving"
],
CountryCodeAndOmschrijving.get_omschrijving_from_code(
self.partnerschap.land_ontbinding_huwelijk_geregistreerd_partnerschap
),
)
self.assertEqual(
data["_embedded"]["aangaanHuwelijkPartnerschap"]["_embedded"]["plaats"][
"omschrijving"
],
GemeenteCodeAndOmschrijving.get_omschrijving_from_code(
self.partnerschap.plaats_huwelijkssluiting_aangaan_geregistreerd_partnerschap
),
)
def test_detail_partner(self):
response = self.client.get(
reverse(
"partners-detail",
kwargs={
"ingeschrevenpersonen_burgerservicenummer": self.persoon_bsn,
"id": self.partner_bsn,
},
),
HTTP_AUTHORIZATION=f"Token {self.token.key}",
)
self.assertEqual(response.status_code, 200)
data = response.json()
self.assertEqual(
data["burgerservicenummer"],
str(self.partner_bsn),
)
self.assertEqual(
data["_embedded"]["naam"]["voornamen"],
self.partnerschap.voornamen_echtgenoot_geregistreerd_partner,
)
self.assertEqual(
data["_embedded"]["geboorte"]["_embedded"]["datum"]["datum"],
str(self.partnerschap.geboortedatum_echtgenoot_geregistreerd_partner),
)
self.assertEqual(
data["_embedded"]["geboorte"]["_embedded"]["land"]["omschrijving"],
CountryCodeAndOmschrijving.get_omschrijving_from_code(
self.partnerschap.geboorteland_echtgenoot_geregistreerd_partner
),
)
self.assertEqual(
data["_embedded"]["inOnderzoek"]["_embedded"]["datumIngangOnderzoek"][
"datum"
],
str(self.partnerschap.datum_ingang_onderzoek),
)
self.assertEqual(
data["_embedded"]["aangaanHuwelijkPartnerschap"]["_embedded"]["datum"][
"datum"
],
str(
self.partnerschap.datum_huwelijkssluiting_aangaan_geregistreerd_partnerschap
),
)
self.assertEqual(
data["_embedded"]["aangaanHuwelijkPartnerschap"]["_embedded"]["land"][
"omschrijving"
],
CountryCodeAndOmschrijving.get_omschrijving_from_code(
self.partnerschap.land_ontbinding_huwelijk_geregistreerd_partnerschap
),
)
self.assertEqual(
data["_embedded"]["aangaanHuwelijkPartnerschap"]["_embedded"]["plaats"][
"omschrijving"
],
GemeenteCodeAndOmschrijving.get_omschrijving_from_code(
self.partnerschap.plaats_huwelijkssluiting_aangaan_geregistreerd_partnerschap
),
)
def test_detail_partner_404(self):
url = reverse(
"partners-detail",
kwargs={
"ingeschrevenpersonen_burgerservicenummer": self.persoon_bsn,
"id": 222222222,
},
)
response = self.client.get(
url,
HTTP_AUTHORIZATION=f"Token {self.token.key}",
)
self.assertEqual(response.status_code, 404)
self.assertEqual(response.json(), get_404_response(url))
| 34.044053 | 93 | 0.584951 | 1,309 | 15,456 | 6.652406 | 0.114591 | 0.065457 | 0.050184 | 0.033762 | 0.815802 | 0.798576 | 0.789159 | 0.729215 | 0.727951 | 0.722095 | 0 | 0.015489 | 0.314959 | 15,456 | 453 | 94 | 34.119205 | 0.806951 | 0 | 0 | 0.661765 | 0 | 0 | 0.152886 | 0.070976 | 0 | 0 | 0 | 0 | 0.122549 | 1 | 0.041667 | false | 0 | 0.036765 | 0 | 0.083333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
36a17a05af2caa40a0932b6556d1bbad61c31afe | 44 | py | Python | coffin/contrib/auth/decorators.py | spothero/coffin | 9ea6a9173cbfed592c5b4776c489dba8d9280d52 | [
"BSD-3-Clause"
] | 1 | 2016-11-19T06:32:20.000Z | 2016-11-19T06:32:20.000Z | coffin/contrib/auth/decorators.py | spothero/coffin | 9ea6a9173cbfed592c5b4776c489dba8d9280d52 | [
"BSD-3-Clause"
] | null | null | null | coffin/contrib/auth/decorators.py | spothero/coffin | 9ea6a9173cbfed592c5b4776c489dba8d9280d52 | [
"BSD-3-Clause"
] | 1 | 2019-08-14T09:51:23.000Z | 2019-08-14T09:51:23.000Z | from django.contrib.auth.decorators import * | 44 | 44 | 0.840909 | 6 | 44 | 6.166667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.068182 | 44 | 1 | 44 | 44 | 0.902439 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
36ac0862f2c70b5d34f728980c20962f0583d449 | 32 | py | Python | install.py | paulkokos/GLEW_Tutorials | 40ed6239763cbeb70d941ea47c1b012a009c8598 | [
"BSD-3-Clause"
] | 1 | 2020-12-23T14:11:15.000Z | 2020-12-23T14:11:15.000Z | install.py | paulkokos/GLEW_Tutorials | 40ed6239763cbeb70d941ea47c1b012a009c8598 | [
"BSD-3-Clause"
] | null | null | null | install.py | paulkokos/GLEW_Tutorials | 40ed6239763cbeb70d941ea47c1b012a009c8598 | [
"BSD-3-Clause"
] | null | null | null | print("Installation on the way") | 32 | 32 | 0.78125 | 5 | 32 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 32 | 1 | 32 | 32 | 0.862069 | 0 | 0 | 0 | 0 | 0 | 0.69697 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
36b71b01efa4e5a22174b93592e680aa0a08e9a2 | 31,013 | py | Python | contrastive/losses.py | neurospin-projects/2022_jchavas_cingulate_inhibitory_control | 30e63f0af62fa83abd3858720ce3f3a15a3fbaea | [
"MIT"
] | null | null | null | contrastive/losses.py | neurospin-projects/2022_jchavas_cingulate_inhibitory_control | 30e63f0af62fa83abd3858720ce3f3a15a3fbaea | [
"MIT"
] | null | null | null | contrastive/losses.py | neurospin-projects/2022_jchavas_cingulate_inhibitory_control | 30e63f0af62fa83abd3858720ce3f3a15a3fbaea | [
"MIT"
] | null | null | null | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# This software and supporting documentation are distributed by
# Institut Federatif de Recherche 49
# CEA/NeuroSpin, Batiment 145,
# 91191 Gif-sur-Yvette cedex
# France
#
# This software is governed by the CeCILL license version 2 under
# French law and abiding by the rules of distribution of free software.
# You can use, modify and/or redistribute the software under the
# terms of the CeCILL license version 2 as circulated by CEA, CNRS
# and INRIA at the following URL "http://www.cecill.info".
#
# As a counterpart to the access to the source code and rights to copy,
# modify and redistribute granted by the license, users are provided only
# with a limited warranty and the software's author, the holder of the
# economic rights, and the successive licensors have only limited
# liability.
#
# In this respect, the user's attention is drawn to the risks associated
# with loading, using, modifying and/or developing or reproducing the
# software by the user in light of its specific status of free software,
# that may mean that it is complicated to manipulate, and that also
# therefore means that it is reserved for developers and experienced
# professionals having in-depth computer knowledge. Users are therefore
# encouraged to load and test the software's suitability as regards their
# requirements in conditions enabling the security of their systems and/or
# data to be ensured and, more generally, to use and operate it in the
# same conditions as regards security.
#
# The fact that you are presently reading this means that you have had
# knowledge of the CeCILL license version 2 and that you accept its terms.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as func
from sklearn.metrics.pairwise import rbf_kernel
def mean_off_diagonal(a):
"""Computes the mean of off-diagonal elements"""
n = a.shape[0]
return ((a.sum() - a.trace()) / (n * n - n))
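As an illustrative sanity check (not part of the original file), the off-diagonal mean formula `(sum - trace) / (n*n - n)` can be verified on a small matrix:

```python
import numpy as np

# 3x3 matrix: full sum is 45, diagonal (trace) is 1+5+9 = 15,
# so the six off-diagonal entries average to (45 - 15) / 6 = 5.0
a = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
n = a.shape[0]
off_diag_mean = (a.sum() - a.trace()) / (n * n - n)
```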
def quantile_off_diagonal(a):
"""Computes the quantile of off-diagonal elements
TODO: it is here the quantile of the whole a"""
return a.quantile(0.75)
def print_info(z_i, z_j, sim_zij, sim_zii, sim_zjj, temperature):
"""prints useful info over correlations"""
print("histogram of z_i after normalization:")
print(np.histogram(z_i.detach().cpu().numpy() * 100, bins='auto'))
print("histogram of z_j after normalization:")
print(np.histogram(z_j.detach().cpu().numpy() * 100, bins='auto'))
# Gives histogram of sim vectors
print("histogram of sim_zij:")
print(
np.histogram(
sim_zij.detach().cpu().numpy() *
temperature *
100,
bins='auto'))
# Diagonals as 1D tensor
diag_ij = sim_zij.diagonal()
# Prints quantiles of positive pairs (views from the same image)
quantile_positive_pairs = diag_ij.quantile(0.75)
print(
f"quantile of positives ij = "
f"{quantile_positive_pairs.cpu()*temperature*100}")
# Computes quantiles of negative pairs
quantile_negative_ii = quantile_off_diagonal(sim_zii)
quantile_negative_jj = quantile_off_diagonal(sim_zjj)
quantile_negative_ij = quantile_off_diagonal(sim_zij)
# Prints quantiles of negative pairs
print(
f"quantile of negatives ii = "
f"{quantile_negative_ii.cpu()*temperature*100}")
print(
f"quantile of negatives jj = "
f"{quantile_negative_jj.cpu()*temperature*100}")
print(
f"quantile of negatives ij = "
f"{quantile_negative_ij.cpu()*temperature*100}")
class NTXenLoss(nn.Module):
"""
Normalized Temperature Cross-Entropy Loss for Contrastive Learning
Refer for instance to:
Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Hinton
A Simple Framework for Contrastive Learning of Visual Representations,
arXiv 2020
"""
def __init__(self, temperature=0.1, return_logits=False):
super().__init__()
self.temperature = temperature
self.INF = 1e8
self.return_logits = return_logits
def forward(self, z_i, z_j):
N = len(z_i)
z_i = func.normalize(z_i, p=2, dim=-1) # dim [N, D]
z_j = func.normalize(z_j, p=2, dim=-1) # dim [N, D]
# dim [N, N] => Upper triangle contains incorrect pairs
sim_zii = (z_i @ z_i.T) / self.temperature
# dim [N, N] => Upper triangle contains incorrect pairs
sim_zjj = (z_j @ z_j.T) / self.temperature
# dim [N, N] => the diag contains the correct pairs (i,j)
# (x transforms via T_i and T_j)
sim_zij = (z_i @ z_j.T) / self.temperature
print_info(z_i, z_j, sim_zij, sim_zii, sim_zjj, self.temperature)
# 'Remove' the diag terms by penalizing it (exp(-inf) = 0)
sim_zii = sim_zii - self.INF * torch.eye(N, device=z_i.device)
sim_zjj = sim_zjj - self.INF * torch.eye(N, device=z_i.device)
correct_pairs = torch.arange(N, device=z_i.device).long()
loss_i = func.cross_entropy(torch.cat([sim_zij, sim_zii], dim=1),
correct_pairs)
loss_j = func.cross_entropy(torch.cat([sim_zij.T, sim_zjj], dim=1),
correct_pairs)
if self.return_logits:
return (loss_i + loss_j), sim_zij, sim_zii, sim_zjj
return (loss_i + loss_j)
def __str__(self):
return "{}(temp={})".format(type(self).__name__, self.temperature)
class GeneralizedSupervisedNTXenLoss(nn.Module):
def __init__(self, kernel='rbf',
temperature=0.1,
return_logits=False,
sigma=1.0,
proportion_pure_contrastive=1.0):
"""
:param kernel: a callable function f: [K, *] x [K, *] -> [K, K]
y1, y2 -> f(y1, y2)
where (*) is the dimension of the labels (yi)
default: an rbf kernel parametrized by 'sigma'
which corresponds to gamma=1/(2*sigma**2)
:param temperature:
:param return_logits:
"""
# sigma = prior over the label's range
super().__init__()
self.kernel = kernel
self.sigma = sigma
if self.kernel == 'rbf':
self.kernel = \
lambda y1, y2: rbf_kernel(y1, y2, gamma=1./(2*self.sigma**2))
else:
assert hasattr(self.kernel, '__call__'), \
'kernel must be a callable'
self.temperature = temperature
self.proportion_pure_contrastive = proportion_pure_contrastive
self.return_logits = return_logits
self.INF = 1e8
def forward(self, z_i, z_j, labels):
N = len(z_i)
assert N == len(labels), "Unexpected labels length: %i" % len(labels)
z_i = func.normalize(z_i, p=2, dim=-1) # dim [N, D]
z_j = func.normalize(z_j, p=2, dim=-1) # dim [N, D]
sim_zii = (z_i @ z_i.T) / self.temperature # dim [N, N]
# => Upper triangle contains incorrect pairs
sim_zjj = (z_j @ z_j.T) / self.temperature # dim [N, N]
# => Upper triangle contains incorrect pairs
sim_zij = (z_i @ z_j.T) / self.temperature # dim [N, N]
# => the diag contains the correct pairs (i,j)
# (x transforms via T_i and T_j)
# 'Remove' the diag terms by penalizing it (exp(-inf) = 0)
sim_zii = sim_zii - self.INF * torch.eye(N, device=z_i.device)
sim_zjj = sim_zjj - self.INF * torch.eye(N, device=z_i.device)
all_labels = \
labels.view(N, -1).repeat(2, 1).detach().cpu().numpy() # [2N, *]
weights = self.kernel(all_labels, all_labels) # [2N, 2N]
weights = weights * (1 - np.eye(2*N)) # puts 0 on the diagonal
# # We now apply a random mask
# random_mask = np.random.randint(0,2,weights.shape)
# random_mask = random_mask * (1 - np.eye(2*N))
# # We now assure that there is at least one 1 for each row
# for i in range(random_mask.shape[0]):
# random_mask[i][(i+1)%random_mask.shape[0]] = 1
# weights = weights * random_mask # puts 0 randomly with 50% proba
# We normalize the weights
weights /= weights.sum(axis=1).reshape(2*N,1)
# if 'rbf' kernel and sigma->0,
# we retrieve the classical NTXenLoss (without labels)
sim_Z = torch.cat([torch.cat([sim_zii, sim_zij], dim=1),
torch.cat([sim_zij.T, sim_zjj], dim=1)],
dim=0) # [2N, 2N]
log_sim_Z = func.log_softmax(sim_Z, dim=1)
weights = torch.from_numpy(weights)
loss_label = -1./N * (weights.to(z_i.device) \
* log_sim_Z).sum()
correct_pairs = torch.arange(N, device=z_i.device).long()
loss_i = func.cross_entropy(torch.cat([sim_zij, sim_zii], dim=1),
correct_pairs)
loss_j = func.cross_entropy(torch.cat([sim_zij.T, sim_zjj], dim=1),
correct_pairs)
loss_multi = self.proportion_pure_contrastive*(loss_i+loss_j) \
+ (1-self.proportion_pure_contrastive) * loss_label
if self.return_logits:
return loss_multi, sim_zij, sim_zii, sim_zjj, correct_pairs, weights
return loss_multi
def __str__(self):
return "{}(temp={}, kernel={}, sigma={})".format(type(self).__name__,
self.temperature,
self.kernel.__name__,
self.sigma)
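The label-kernel weighting used above (rbf kernel with gamma = 1/(2*sigma**2), zeroed diagonal, row normalization) can be sketched in NumPy; this is an added illustration, and in the real loss the labels are stacked twice to cover both views ([2N] entries):

```python
import numpy as np

def rbf_label_weights(labels, sigma=1.0):
    """Sketch of the weighting in GeneralizedSupervisedNTXenLoss:
    rbf kernel on labels, zeroed diagonal, rows normalized to sum to 1."""
    y = np.asarray(labels, dtype=float).reshape(-1, 1)
    # Pairwise rbf kernel: w[k, l] = exp(-(y_k - y_l)^2 / (2 sigma^2))
    w = np.exp(-((y - y.T) ** 2) / (2 * sigma ** 2))
    w = w * (1 - np.eye(len(y)))  # a sample is never its own positive
    return w / w.sum(axis=1, keepdims=True)

w = rbf_label_weights([20.0, 21.0, 60.0], sigma=5.0)
```

Samples with close labels (20 and 21) receive almost all of each other's weight, while the distant label (60) gets a negligible share, which is how the loss interpolates between supervised and purely contrastive behaviour as sigma shrinks.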
class CrossEntropyLoss(nn.Module):
"""
Weighted two-class cross-entropy loss, applied voxel-wise to the
network outputs of both views of each sample. The target is the
input sample binarized with (sample >= 1); class weights re-balance
the two classes.
"""
def __init__(self, weights=[1, 2], reduction='sum', device=None):
super().__init__()
self.class_weights = torch.FloatTensor(weights).to(device)
self.reduction = reduction
self.loss = nn.CrossEntropyLoss(weight=self.class_weights,
reduction=self.reduction)
def forward(self, sample, output_i, output_j):
sample = (sample >= 1).long()
output_i = output_i.float()
output_j = output_j.float()
loss_i = self.loss(output_i,
sample[:, 0, :, :, :])
loss_j = self.loss(output_j,
sample[:, 0, :, :, :])
return (loss_i + loss_j)
def __str__(self):
return "{}(temp={})".format(type(self).__name__, self.temperature)
class NTXenLoss_NearestNeighbours(nn.Module):
"""
Normalized Nearest Neighbour Temperature Cross-Entropy Loss
for Contrastive Learning
Refer for instance to:
Dwibedi et al., With a Little Help From My Friends: Nearest-Neighbor
Contrastive Learning of Visual Representations, ICCV 2021
"""
def __init__(self, temperature=0.1, return_logits=False):
super().__init__()
self.temperature = temperature
self.INF = 1e8
self.return_logits = return_logits
def forward(self, z_i, z_j):
N = len(z_i)
diag_inf = self.INF * torch.eye(N, device=z_i.device)
#####################################################
# Computes the classical terms for NTXenLoss
#####################################################
z_i = func.normalize(z_i, p=2, dim=-1) # dim [N, D]
z_j = func.normalize(z_j, p=2, dim=-1) # dim [N, D]
# dim [N, N] => Upper triangle contains incorrect pairs
sim_zii = (z_i @ z_i.T) / self.temperature
# dim [N, N] => Upper triangle contains incorrect pairs
sim_zjj = (z_j @ z_j.T) / self.temperature
# dim [N, N] => the diag contains the correct pairs (i,j)
# (x transforms via T_i and T_j)
sim_zij = (z_i @ z_j.T) / self.temperature
sim_zji = sim_zij.T
print("histogram of zij:")
print(
np.histogram(
sim_zij.detach().cpu().numpy() *
self.temperature,
bins='auto'))
#####################################################
# Computes the terms for NearestNeighbour NTXenLoss
# loss_i
#####################################################
max_ii = torch.max(sim_zii - diag_inf, dim=1)
max_ij = torch.max(sim_zij - diag_inf, dim=1)
# Computes nearest-neighbour of z_i
z_nn_i = torch.zeros(z_i.shape, device=z_i.device)
for i in range(N):
if max_ii.values[i] > max_ij.values[i]:
z_nn_i[i] = z_i[max_ii.indices[i]]
else:
z_nn_i[i] = z_j[max_ij.indices[i]]
# dim [N, N] => Upper triangle contains incorrect pairs (nn(i),i+)
sim_nn_zii = (z_nn_i @ z_i.T) / self.temperature
# dim [N, N] => the diag contains the correct pairs (nn(i),j)
sim_nn_zij = (z_nn_i @ z_j.T) / self.temperature
# 'Remove' the covariant vectors by penalizing it (exp(-inf) = 0)
for i in range(N):
if max_ii.values[i] > max_ij.values[i]:
sim_nn_zii[i, max_ii.indices[i]] = -self.INF
else:
sim_nn_zij[i, max_ij.indices[i]] = -self.INF
# 'Remove' the diag terms by penalizing it (exp(-inf) = 0)
sim_nn_zii = sim_nn_zii - diag_inf
# Computes nearest neighbour contrastive loss for first view i
correct_pairs = torch.arange(N, device=z_i.device).long()
loss_i = func.cross_entropy(torch.cat([sim_nn_zij, sim_nn_zii], dim=1),
correct_pairs)
#####################################################
# Computes the terms for NearestNeighbour NTXenLoss
# loss_j
#####################################################
max_jj = torch.max(sim_zjj - diag_inf, dim=1)
max_ji = torch.max(sim_zji - diag_inf, dim=1)
# Computes nearest-neighbour of z_j
z_nn_j = torch.zeros(z_j.shape, device=z_j.device)
for i in range(N):
if max_jj.values[i] > max_ji.values[i]:
z_nn_j[i] = z_j[max_jj.indices[i]]
else:
z_nn_j[i] = z_i[max_ji.indices[i]]
# dim [N, N] => Upper triangle contains incorrect pairs (nn(i),i+)
sim_nn_zjj = (z_nn_j @ z_j.T) / self.temperature
# dim [N, N] => the diag contains the correct pairs (nn(i),j)
sim_nn_zji = (z_nn_j @ z_i.T) / self.temperature
# 'Remove' the covariant vectors by penalizing it (exp(-inf) = 0)
for i in range(N):
if max_jj.values[i] > max_ji.values[i]:
sim_nn_zjj[i, max_jj.indices[i]] = -self.INF
else:
sim_nn_zji[i, max_ji.indices[i]] = -self.INF
# 'Remove' the diag terms by penalizing it (exp(-inf) = 0)
sim_nn_zjj = sim_nn_zjj - diag_inf
# Computes nearest neighbour contrastive loss for first view i
loss_j = func.cross_entropy(torch.cat([sim_nn_zji, sim_nn_zjj], dim=1),
correct_pairs)
if self.return_logits:
return (loss_i + loss_j), sim_zij, correct_pairs
return (loss_i + loss_j)
def __str__(self):
return "{}(temp={})".format(type(self).__name__, self.temperature)
class NTXenLoss_WithoutHardNegative(nn.Module):
"""
Normalized Temperature Cross-Entropy Loss for Contrastive Learning,
with hard negatives (similarities above the 0.75 quantile) masked out
to promote clustering.
Refer for instance to:
Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Hinton
A Simple Framework for Contrastive Learning of Visual Representations,
arXiv 2020
"""
def __init__(self, temperature=0.1, return_logits=False):
super().__init__()
self.temperature = temperature
self.INF = 1e8
self.return_logits = return_logits
def forward(self, z_i, z_j):
N = len(z_i)
z_i = func.normalize(z_i, p=2, dim=-1) # dim [N, D]
z_j = func.normalize(z_j, p=2, dim=-1) # dim [N, D]
# dim [N, N] => Upper triangle contains incorrect pairs
sim_zii = (z_i @ z_i.T) / self.temperature
# dim [N, N] => Upper triangle contains incorrect pairs
sim_zjj = (z_j @ z_j.T) / self.temperature
# dim [N, N] => the diag contains the correct pairs (i,j)
# (x transforms via T_i and T_j)
sim_zij = (z_i @ z_j.T) / self.temperature
# Diagonals as 1D tensor
diag_ij = sim_zij.diagonal()
# Prints quantiles of positive pairs (views from the same image)
quantile_positive_pairs = diag_ij.quantile(0.75)
print(
f"quantile of positives ij = "
f"{quantile_positive_pairs.cpu()*self.temperature*100}")
# Computes quantiles of negative pairs
quantile_negative_ii = quantile_off_diagonal(sim_zii)
quantile_negative_jj = quantile_off_diagonal(sim_zjj)
quantile_negative_ij = quantile_off_diagonal(sim_zij)
# Prints quantiles of negative pairs
print(
f"quantile of negatives ii = "
f"{quantile_negative_ii.cpu()*self.temperature*100}")
print(
f"quantile of negatives jj = "
f"{quantile_negative_jj.cpu()*self.temperature*100}")
print(
f"quantile of negatives ij = "
f"{quantile_negative_ij.cpu()*self.temperature*100}")
# 'Remove' the diag terms by penalizing it (exp(-inf) = 0)
sim_zii = sim_zii - self.INF * torch.eye(N, device=z_i.device)
sim_zjj = sim_zjj - self.INF * torch.eye(N, device=z_i.device)
# 'Remove' the parts that are hard negatives to promote clustering
sim_zii[sim_zii > quantile_negative_ii] = -self.INF
sim_zjj[sim_zjj > quantile_negative_jj] = -self.INF
negative_ij = sim_zij - diag_ij.diag()
negative_ij[negative_ij > quantile_negative_ij] = -self.INF
negative_ij.fill_diagonal_(0.)
sim_zij = negative_ij + diag_ij.diag()
correct_pairs = torch.arange(N, device=z_i.device).long()
loss_i = func.cross_entropy(torch.cat([sim_zij, sim_zii], dim=1),
correct_pairs)
loss_j = func.cross_entropy(torch.cat([sim_zij.T, sim_zjj], dim=1),
correct_pairs)
if self.return_logits:
return (loss_i + loss_j), sim_zij, correct_pairs
return (loss_i + loss_j)
def __str__(self):
return "{}(temp={})".format(type(self).__name__, self.temperature)
class NTXenLoss_Mixed(nn.Module):
"""
Mixed Normalized Temperature Cross-Entropy Loss for Contrastive
Learning: averages the nearest-neighbour (other-view) variant and
the without-hard-negative variant.
Refer for instance to:
Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Hinton
A Simple Framework for Contrastive Learning of Visual Representations,
arXiv 2020
"""
def __init__(self, temperature=0.1, return_logits=False):
super().__init__()
self.temperature = temperature
self.INF = 1e8
self.return_logits = return_logits
def forward_NearestNeighbours_OtherView(self, z_i, z_j):
N = len(z_i)
diag_inf = self.INF * torch.eye(N, device=z_i.device)
#####################################################
# Computes the classical terms for NTXenLoss
#####################################################
print("histogram of z_i before normalization:")
print(np.histogram(z_i.detach().cpu().numpy() * 100, bins='auto'))
z_i = func.normalize(z_i, p=2, dim=-1) # dim [N, D]
z_j = func.normalize(z_j, p=2, dim=-1) # dim [N, D]
print("histogram of z_i after normalization:")
print(np.histogram(z_i.detach().cpu().numpy() * 100, bins='auto'))
# dim [N, N] => Upper triangle contains incorrect pairs
sim_zii = (z_i @ z_i.T) / self.temperature
# dim [N, N] => Upper triangle contains incorrect pairs
sim_zjj = (z_j @ z_j.T) / self.temperature
# dim [N, N] => the diag contains the correct pairs (i,j)
# (x transforms via T_i and T_j)
sim_zij = (z_i @ z_j.T) / self.temperature
sim_zji = sim_zij.T
print("histogram of sim_zij:")
print(
np.histogram(
sim_zij.detach().cpu().numpy() *
self.temperature *
100,
bins='auto'))
#####################################################
# Computes the terms for NearestNeighbour NTXenLoss
# loss_i
#####################################################
max_ii = torch.max(sim_zii - diag_inf, dim=1)
max_ij = torch.max(sim_zij - diag_inf, dim=1)
# Computes nearest-neighbour of z_i
z_nn_i = torch.zeros(z_i.shape, device=z_i.device)
z_nn_i = z_j[max_ij.indices]
# dim [N, N] => Upper triangle contains incorrect pairs (nn(i),i+)
sim_nn_zii = (z_nn_i @ z_i.T) / self.temperature
# dim [N, N] => the diag contains the correct pairs (nn(i),j)
sim_nn_zij = (z_nn_i @ z_j.T) / self.temperature
# 'Remove' the covariant vectors by penalizing it (exp(-inf) = 0)
for i in range(N):
sim_nn_zij[i, max_ij.indices[i]] = -self.INF
# 'Remove' the diag terms by penalizing it (exp(-inf) = 0)
sim_nn_zii = sim_nn_zii - diag_inf
# Computes nearest neighbour contrastive loss for first view i
correct_pairs = torch.arange(N, device=z_i.device).long()
loss_i = func.cross_entropy(torch.cat([sim_nn_zij, sim_nn_zii], dim=1),
correct_pairs)
#####################################################
# Computes the terms for NearestNeighbour NTXenLoss
# loss_j
#####################################################
max_jj = torch.max(sim_zjj - diag_inf, dim=1)
max_ji = torch.max(sim_zji - diag_inf, dim=1)
# Computes nearest-neighbour of z_j
z_nn_j = torch.zeros(z_j.shape, device=z_j.device)
z_nn_j = z_i[max_ji.indices]
# dim [N, N] => Upper triangle contains incorrect pairs (nn(i),i+)
sim_nn_zjj = (z_nn_j @ z_j.T) / self.temperature
# dim [N, N] => the diag contains the correct pairs (nn(i),j)
sim_nn_zji = (z_nn_j @ z_i.T) / self.temperature
# 'Remove' the covariant vectors by penalizing it (exp(-inf) = 0)
for i in range(N):
sim_nn_zji[i, max_ji.indices[i]] = -self.INF
# 'Remove' the diag terms by penalizing it (exp(-inf) = 0)
sim_nn_zjj = sim_nn_zjj - diag_inf
# Computes nearest neighbour contrastive loss for first view i
loss_j = func.cross_entropy(torch.cat([sim_nn_zji, sim_nn_zjj], dim=1),
correct_pairs)
if self.return_logits:
return (loss_i + loss_j), sim_zij, correct_pairs
return (loss_i + loss_j)
def forward_NearestNeighbours(self, z_i, z_j):
N = len(z_i)
diag_inf = self.INF * torch.eye(N, device=z_i.device)
#####################################################
# Computes the classical terms for NTXenLoss
#####################################################
print("histogram of z_i before normalization:")
print(np.histogram(z_i.detach().cpu().numpy() * 100, bins='auto'))
z_i = func.normalize(z_i, p=2, dim=-1) # dim [N, D]
z_j = func.normalize(z_j, p=2, dim=-1) # dim [N, D]
print("histogram of z_i after normalization:")
print(np.histogram(z_i.detach().cpu().numpy() * 100, bins='auto'))
# dim [N, N] => Upper triangle contains incorrect pairs
sim_zii = (z_i @ z_i.T) / self.temperature
# dim [N, N] => Upper triangle contains incorrect pairs
sim_zjj = (z_j @ z_j.T) / self.temperature
# dim [N, N] => the diag contains the correct pairs (i,j)
# (x transforms via T_i and T_j)
sim_zij = (z_i @ z_j.T) / self.temperature
sim_zji = sim_zij.T
print("histogram of sim_zij:")
print(
np.histogram(
sim_zij.detach().cpu().numpy() *
self.temperature *
100,
bins='auto'))
#####################################################
# Computes the terms for NearestNeighbour NTXenLoss
# loss_i
#####################################################
max_ii = torch.max(sim_zii - diag_inf, dim=1)
max_ij = torch.max(sim_zij - diag_inf, dim=1)
# Computes nearest-neighbour of z_i
z_nn_i = torch.zeros(z_i.shape, device=z_i.device)
for i in range(N):
if max_ii.values[i] > max_ij.values[i]:
z_nn_i[i] = z_i[max_ii.indices[i]]
else:
z_nn_i[i] = z_j[max_ij.indices[i]]
# dim [N, N] => Upper triangle contains incorrect pairs (nn(i),i+)
sim_nn_zii = (z_nn_i @ z_i.T) / self.temperature
# dim [N, N] => the diag contains the correct pairs (nn(i),j)
sim_nn_zij = (z_nn_i @ z_j.T) / self.temperature
# 'Remove' the covariant vectors by penalizing it (exp(-inf) = 0)
for i in range(N):
if max_ii.values[i] > max_ij.values[i]:
sim_nn_zii[i, max_ii.indices[i]] = -self.INF
else:
sim_nn_zij[i, max_ij.indices[i]] = -self.INF
# 'Remove' the diag terms by penalizing it (exp(-inf) = 0)
sim_nn_zii = sim_nn_zii - diag_inf
# Computes nearest neighbour contrastive loss for first view i
correct_pairs = torch.arange(N, device=z_i.device).long()
loss_i = func.cross_entropy(torch.cat([sim_nn_zij, sim_nn_zii], dim=1),
correct_pairs)
#####################################################
# Computes the terms for NearestNeighbour NTXenLoss
# loss_j
#####################################################
max_jj = torch.max(sim_zjj - diag_inf, dim=1)
max_ji = torch.max(sim_zji - diag_inf, dim=1)
# Computes nearest-neighbour of z_j
z_nn_j = torch.zeros(z_j.shape, device=z_j.device)
for i in range(N):
if max_jj.values[i] > max_ji.values[i]:
z_nn_j[i] = z_j[max_jj.indices[i]]
else:
z_nn_j[i] = z_i[max_ji.indices[i]]
# dim [N, N] => Upper triangle contains incorrect pairs (nn(i),i+)
sim_nn_zjj = (z_nn_j @ z_j.T) / self.temperature
# dim [N, N] => the diag contains the correct pairs (nn(i),j)
sim_nn_zji = (z_nn_j @ z_i.T) / self.temperature
# 'Remove' the covariant vectors by penalizing it (exp(-inf) = 0)
for i in range(N):
if max_jj.values[i] > max_ji.values[i]:
sim_nn_zjj[i, max_jj.indices[i]] = -self.INF
else:
sim_nn_zji[i, max_ji.indices[i]] = -self.INF
# 'Remove' the diag terms by penalizing it (exp(-inf) = 0)
sim_nn_zjj = sim_nn_zjj - diag_inf
# Computes nearest neighbour contrastive loss for first view i
loss_j = func.cross_entropy(torch.cat([sim_nn_zji, sim_nn_zjj], dim=1),
correct_pairs)
if self.return_logits:
return (loss_i + loss_j), sim_zij, correct_pairs
return (loss_i + loss_j)
def forward_WithoutHardNegative(self, z_i, z_j):
N = len(z_i)
z_i = func.normalize(z_i, p=2, dim=-1) # dim [N, D]
z_j = func.normalize(z_j, p=2, dim=-1) # dim [N, D]
# dim [N, N] => Upper triangle contains incorrect pairs
sim_zii = (z_i @ z_i.T) / self.temperature
# dim [N, N] => Upper triangle contains incorrect pairs
sim_zjj = (z_j @ z_j.T) / self.temperature
# dim [N, N] => the diag contains the correct pairs (i,j)
# (x transforms via T_i and T_j)
sim_zij = (z_i @ z_j.T) / self.temperature
# Diagonals as 1D tensor
diag_ij = sim_zij.diagonal()
# Prints quantiles of positive pairs (views from the same image)
quantile_positive_pairs = diag_ij.quantile(0.75)
print(
f"quantile of positives ij = "
f"{quantile_positive_pairs.cpu()*self.temperature*100}")
# Computes quantiles of negative pairs
quantile_negative_ii = quantile_off_diagonal(sim_zii)
quantile_negative_jj = quantile_off_diagonal(sim_zjj)
quantile_negative_ij = quantile_off_diagonal(sim_zij)
# Prints quantiles of negative pairs
print(
f"quantile of negatives ii = "
f"{quantile_negative_ii.cpu()*self.temperature*100}")
print(
f"quantile of negatives jj = "
f"{quantile_negative_jj.cpu()*self.temperature*100}")
print(
f"quantile of negatives ij = "
f"{quantile_negative_ij.cpu()*self.temperature*100}")
# 'Remove' the diag terms by penalizing it (exp(-inf) = 0)
sim_zii = sim_zii - self.INF * torch.eye(N, device=z_i.device)
sim_zjj = sim_zjj - self.INF * torch.eye(N, device=z_i.device)
# 'Remove' the parts that are hard negatives to promote clustering
sim_zii[sim_zii > quantile_negative_ii] = -self.INF
sim_zjj[sim_zjj > quantile_negative_jj] = -self.INF
# 'Remove' the parts that are hard negatives to promote clustering
# We keep the positive element j (second view)
negative_ij = sim_zij - diag_ij.diag()
negative_ij[negative_ij > quantile_negative_ij] = -self.INF
negative_ij.fill_diagonal_(0.)
sim_zij = negative_ij + diag_ij.diag()
correct_pairs = torch.arange(N, device=z_i.device).long()
loss_i = func.cross_entropy(torch.cat([sim_zij, sim_zii], dim=1),
correct_pairs)
loss_j = func.cross_entropy(torch.cat([sim_zij.T, sim_zjj], dim=1),
correct_pairs)
if self.return_logits:
return (loss_i + loss_j), sim_zij, correct_pairs
return (loss_i + loss_j)
    def forward(self, z_i, z_j):
        # Both sub-losses honor self.return_logits, so only unpack the extra
        # outputs when they are actually returned (a 3-way unpack of a 0-d
        # loss tensor would raise when return_logits is False).
        if self.return_logits:
            loss_NN, _, _ = self.forward_NearestNeighbours_OtherView(z_i, z_j)
            loss_WHN, sim_zij, correct_pairs = self.forward_WithoutHardNegative(
                z_i, z_j)
            return ((loss_NN + loss_WHN) / 2), sim_zij, correct_pairs
        loss_NN = self.forward_NearestNeighbours_OtherView(z_i, z_j)
        loss_WHN = self.forward_WithoutHardNegative(z_i, z_j)
        return (loss_NN + loss_WHN) / 2
def __str__(self):
return "{}(temp={})".format(type(self).__name__, self.temperature)
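# The -self.INF masking used throughout this loss relies on the identity
# exp(-inf) = 0: adding negative infinity to a logit removes that term from
# the softmax denominator, and hence from the cross-entropy. A minimal
# stdlib sketch of the trick, independent of the class above:

import math

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Masking the middle logit with -inf "removes" it: math.exp(-inf) == 0.0
probs = softmax([2.0, float("-inf"), 0.5])
assert probs[1] == 0.0
assert abs(sum(probs) - 1.0) < 1e-9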

# --- corehq/apps/commtrack/exceptions.py (dslowikowski/commcare-hq, BSD-3-Clause) ---
class LinkedSupplyPointNotFoundError(Exception):
pass
class NotAUserClassError(Exception):
pass
class InvalidProductException(Exception):
pass
class NoDefaultLocationException(Exception):
pass
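# These marker exceptions carry no behavior of their own; callers raise and
# catch them by type. A usage sketch (the lookup helper below is illustrative,
# not from CommCare HQ):

class InvalidProductException(Exception):
    pass

def lookup_product(code, catalog):
    # Translate a missing key into the domain-specific marker exception.
    try:
        return catalog[code]
    except KeyError:
        raise InvalidProductException("unknown product code: %s" % code)

try:
    lookup_product("sku-404", {"sku-1": "rice"})
except InvalidProductException as exc:
    handled = str(exc)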

# --- Python/teste_19.py (itsmeLuizMoura/Estudo, MIT) ---
a = (2, 5, 4)
b = (5, 8, 1, 2)
c = a + b
print(c)
print(c.count(5))
print(c.index(5))
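# The tuple operations above are deterministic, so the results can be pinned
# down; the same steps with the expected values written as assertions:

a = (2, 5, 4)
b = (5, 8, 1, 2)
c = a + b                     # tuples concatenate: (2, 5, 4, 5, 8, 1, 2)
assert c == (2, 5, 4, 5, 8, 1, 2)
assert c.count(5) == 2        # 5 occurs twice in the concatenation
assert c.index(5) == 1        # the first 5 sits at index 1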

# --- src/adapter/repository/sftp/RemoteFile.py (lorenzomartino86/anomaly-detector, Apache-2.0) ---
class RemoteFile(object):
def __init__(self, path):
self.path = path
def get_name(self):
return self.path.split('/')[-1]
def get_extension(self):
return self.path.split('/')[-1].split('.')[-1]
def get_path(self):
return self.path
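# get_name() and get_extension() above are thin wrappers around str.split;
# note the edge case that a dotless file name comes back unchanged as its own
# "extension". A self-contained sketch of the same parsing (the path is
# hypothetical):

path = "/incoming/reports/daily.csv"            # hypothetical remote path

name = path.split("/")[-1]                      # last path component
extension = path.split("/")[-1].split(".")[-1]  # text after the final dot
assert name == "daily.csv"
assert extension == "csv"

# Edge case: without a dot, split(".")[-1] returns the whole name.
assert "README".split(".")[-1] == "README"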

# --- tests/reporter/test_statsd.py (miracle2k/metrology, MIT) ---
try:
from StringIO import StringIO
from mock import patch
except ImportError:
from io import StringIO # noqa
from unittest.mock import patch # noqa
from unittest import TestCase
from metrology import Metrology
from metrology.reporter.statsd import StatsDReporter
class StatsDReporterTest(TestCase):
def tearDown(self):
Metrology.stop()
@patch.object(StatsDReporter, 'socket')
def test_send_nobatch(self, mock):
self.reporter = StatsDReporter('localhost', 3333,
batch_size=1, conn_type='tcp')
Metrology.meter('meter').mark()
Metrology.counter('counter').increment()
Metrology.timer('timer').update(5)
Metrology.utilization_timer('utimer').update(5)
Metrology.histogram('histogram').update(5)
self.reporter.write()
self.assertTrue(mock.sendall.called)
self.assertEqual(37, len(mock.sendall.call_args_list))
self.reporter.stop()
@patch.object(StatsDReporter, 'socket')
def test_send_batch(self, mock):
self.reporter = StatsDReporter('localhost', 3333,
batch_size=2, conn_type='tcp')
Metrology.meter('meter').mark()
Metrology.counter('counter').increment()
Metrology.timer('timer').update(5)
Metrology.utilization_timer('utimer').update(5)
Metrology.histogram('histogram').update(5)
self.reporter.write()
self.assertTrue(mock.sendall.called)
self.assertEqual(19, len(mock.sendall.call_args_list))
self.reporter.stop()
@patch.object(StatsDReporter, 'socket')
def test_udp_send_nobatch(self, mock):
self.reporter = StatsDReporter('localhost', 3333,
batch_size=1, conn_type='udp')
Metrology.meter('meter').mark()
Metrology.counter('counter').increment()
Metrology.timer('timer').update(5)
Metrology.utilization_timer('utimer').update(5)
Metrology.histogram('histogram').update(5)
self.reporter.write()
self.assertTrue(mock.sendto.called)
self.assertEqual(37, len(mock.sendto.call_args_list))
self.reporter.stop()
@patch.object(StatsDReporter, 'socket')
def test_udp_send_batch(self, mock):
self.reporter = StatsDReporter('localhost', 3333,
batch_size=2, conn_type='udp')
Metrology.meter('meter').mark()
Metrology.counter('counter').increment()
Metrology.timer('timer').update(5)
Metrology.utilization_timer('utimer').update(5)
Metrology.histogram('histogram').update(5)
self.reporter.write()
self.assertTrue(mock.sendto.called)
self.assertEqual(19, len(mock.sendto.call_args_list))
self.reporter.stop()
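# The asserted call counts follow from how the reporter batches metric lines:
# the same 37 lines that go out as 37 one-line packets at batch_size=1 collapse
# to 19 packets at batch_size=2, i.e. ceil(37 / 2). A toy sketch of that
# batching arithmetic (not the library's actual implementation):

def batched(lines, batch_size):
    # Group metric lines into newline-joined packets of at most batch_size.
    return ["\n".join(lines[i:i + batch_size])
            for i in range(0, len(lines), batch_size)]

lines = ["metric%d:1|c" % i for i in range(37)]
assert len(batched(lines, 1)) == 37
assert len(batched(lines, 2)) == 19   # ceil(37 / 2)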

# --- totalgood/conceive/migrations/0001_initial.py (hobson/totalgood, MIT) ---
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import models, migrations
import datetime
from django.conf import settings
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='Backup',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('zip_file', models.FileField(null=True, upload_to=b'backups', blank=True)),
('backup_at', models.DateTimeField(auto_now_add=True, null=True)),
('num_posts', models.IntegerField(default=0)),
('num_revisions', models.IntegerField(default=0)),
('num_reads', models.IntegerField(default=0)),
('num_fantastics', models.IntegerField(default=0)),
],
options={
'ordering': ('-backup_at',),
},
),
migrations.CreateModel(
name='Collection',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('title', models.TextField(null=True, blank=True)),
('slug', models.CharField(max_length=800, editable=False, blank=True)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Concept',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('title', models.TextField(default=b'Title', null=True, blank=True)),
('body', models.TextField(default=b'Body', null=True, blank=True)),
('title_html', models.TextField(null=True, editable=False, blank=True)),
('body_html', models.TextField(null=True, editable=False, blank=True)),
('description', models.TextField(null=True, blank=True)),
('post_type', models.IntegerField(choices=[(1, b'Big Quote'), (2, b'Photo with caption'), (3, b'Article and a Single Image'), (4, b'Article and a multiple images'), (5, b'Body with no real title')])),
('num_images', models.IntegerField(default=0)),
('permalink_path', models.CharField(max_length=500, null=True, editable=False, blank=True)),
('is_draft', models.BooleanField(default=True)),
('allow_comments', models.BooleanField(default=True)),
('dayone_post', models.BooleanField(default=False, editable=False)),
('dayone_posted', models.DateTimeField(null=True, editable=False, blank=True)),
('dayone_last_modified', models.DateTimeField(null=True, editable=False, blank=True)),
('dayone_last_rev', models.CharField(max_length=255, null=True, editable=False, blank=True)),
('dayone_image', models.ImageField(upload_to=b'dayone_images', null=True, verbose_name=b'Hero Image', blank=True)),
('dayone_image_url', models.TextField(null=True, blank=True)),
('dayone_image_blog_size_url', models.TextField(null=True, blank=True)),
('dayone_image_thumb_size_url', models.TextField(null=True, blank=True)),
('twitter_publish_intent', models.BooleanField(default=True)),
('twitter_include_image', models.BooleanField(default=True)),
('twitter_published', models.BooleanField(default=False, editable=False)),
('twitter_status_text', models.TextField(null=True, blank=True)),
('twitter_status_id', models.CharField(max_length=255, null=True, editable=False, blank=True)),
('twitter_retweets', models.IntegerField(default=0, null=True, blank=True)),
('twitter_favorites', models.IntegerField(default=0, null=True, blank=True)),
('facebook_publish_intent', models.BooleanField(default=True)),
('facebook_published', models.BooleanField(default=False, editable=False)),
('facebook_status_text', models.TextField(null=True, blank=True)),
('facebook_status_id', models.CharField(max_length=255, null=True, editable=False, blank=True)),
('facebook_likes', models.IntegerField(default=0, null=True, blank=True)),
('facebook_shares', models.IntegerField(default=0, null=True, blank=True)),
('facebook_comments', models.IntegerField(default=0, null=True, blank=True)),
('social_shares_customized', models.BooleanField(default=False)),
('email_publish_intent', models.BooleanField(default=False)),
('allow_private_viewing', models.BooleanField(default=False)),
('started_at', models.DateTimeField(auto_now_add=True, null=True)),
('sort_datetime', models.DateTimeField(null=True, editable=False, blank=True)),
('published_at', models.DateTimeField(null=True, editable=False, blank=True)),
('written_on', models.DateTimeField(default=datetime.datetime(2015, 4, 16, 0, 52, 28, 411789), null=True, blank=True)),
('slug', models.CharField(max_length=800, verbose_name=b'url', blank=True)),
('dayone_id', models.CharField(max_length=255, unique=True, null=True, editable=False, blank=True)),
],
options={
'ordering': ('-started_at',),
},
),
migrations.CreateModel(
name='ConceptImage',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('image', models.ImageField(null=True, upload_to=b'post_images', blank=True)),
('image_url', models.TextField(null=True, blank=True)),
('blog_size_url', models.TextField(null=True, blank=True)),
('thumb_size_url', models.TextField(null=True, blank=True)),
('post', models.ForeignKey(to='conceive.Concept')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='ConceptRevision',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('title', models.TextField(default=b'Title', null=True, blank=True)),
('body', models.TextField(default=b'Body', null=True, blank=True)),
('title_html', models.TextField(null=True, editable=False, blank=True)),
('body_html', models.TextField(null=True, editable=False, blank=True)),
('description', models.TextField(null=True, blank=True)),
('post_type', models.IntegerField(choices=[(1, b'Big Quote'), (2, b'Photo with caption'), (3, b'Article and a Single Image'), (4, b'Article and a multiple images'), (5, b'Body with no real title')])),
('num_images', models.IntegerField(default=0)),
('permalink_path', models.CharField(max_length=500, null=True, editable=False, blank=True)),
('is_draft', models.BooleanField(default=True)),
('allow_comments', models.BooleanField(default=True)),
('dayone_post', models.BooleanField(default=False, editable=False)),
('dayone_posted', models.DateTimeField(null=True, editable=False, blank=True)),
('dayone_last_modified', models.DateTimeField(null=True, editable=False, blank=True)),
('dayone_last_rev', models.CharField(max_length=255, null=True, editable=False, blank=True)),
('dayone_image', models.ImageField(upload_to=b'dayone_images', null=True, verbose_name=b'Hero Image', blank=True)),
('dayone_image_url', models.TextField(null=True, blank=True)),
('dayone_image_blog_size_url', models.TextField(null=True, blank=True)),
('dayone_image_thumb_size_url', models.TextField(null=True, blank=True)),
('twitter_publish_intent', models.BooleanField(default=True)),
('twitter_include_image', models.BooleanField(default=True)),
('twitter_published', models.BooleanField(default=False, editable=False)),
('twitter_status_text', models.TextField(null=True, blank=True)),
('twitter_status_id', models.CharField(max_length=255, null=True, editable=False, blank=True)),
('twitter_retweets', models.IntegerField(default=0, null=True, blank=True)),
('twitter_favorites', models.IntegerField(default=0, null=True, blank=True)),
('facebook_publish_intent', models.BooleanField(default=True)),
('facebook_published', models.BooleanField(default=False, editable=False)),
('facebook_status_text', models.TextField(null=True, blank=True)),
('facebook_status_id', models.CharField(max_length=255, null=True, editable=False, blank=True)),
('facebook_likes', models.IntegerField(default=0, null=True, blank=True)),
('facebook_shares', models.IntegerField(default=0, null=True, blank=True)),
('facebook_comments', models.IntegerField(default=0, null=True, blank=True)),
('social_shares_customized', models.BooleanField(default=False)),
('email_publish_intent', models.BooleanField(default=False)),
('allow_private_viewing', models.BooleanField(default=False)),
('revised_at', models.DateTimeField(auto_now_add=True)),
],
options={
'ordering': ('-revised_at',),
},
),
migrations.CreateModel(
name='Contributor',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('premium_user', models.BooleanField(default=False)),
('slug', models.CharField(max_length=255, editable=False, blank=True)),
('blog_name', models.CharField(max_length=255, blank=True)),
('blog_domain', models.CharField(max_length=255, unique=True, null=True, blank=True)),
('blog_header', models.TextField(null=True, blank=True)),
('blog_footer', models.TextField(null=True, blank=True)),
('public_domain', models.BooleanField(default=False)),
('wikipedia_url', models.TextField(null=True, blank=True)),
('archive', models.BooleanField(default=False)),
('archive_name', models.CharField(max_length=255, editable=False, blank=True)),
('birthdate', models.DateField(null=True, blank=True)),
('deathdate', models.DateField(null=True, blank=True)),
('dropbox_access_token', models.CharField(max_length=255, null=True, blank=True)),
('dropbox_user_id', models.CharField(max_length=255, null=True, blank=True)),
('dropbox_url_state', models.CharField(max_length=255, null=True, blank=True)),
('dropbox_expire_date', models.DateTimeField(null=True, blank=True)),
('dropbox_dayone_folder_path', models.CharField(max_length=255, null=True, blank=True)),
('dropbox_dayone_entry_hash', models.CharField(max_length=255, null=True, blank=True)),
('dropbox_dayone_image_hash', models.CharField(max_length=255, null=True, blank=True)),
('last_dropbox_sync', models.DateTimeField(null=True, blank=True)),
('facebook_api_key', models.TextField(null=True, blank=True)),
('facebook_account_link', models.TextField(null=True, blank=True)),
('facebook_account_name', models.CharField(max_length=255, null=True, blank=True)),
('facebook_expire_date', models.DateTimeField(null=True, blank=True)),
('facebook_profile_picture_url', models.TextField(null=True, blank=True)),
('twitter_api_key', models.TextField(null=True, blank=True)),
('twitter_api_secret', models.TextField(null=True, blank=True)),
('twitter_full_name', models.CharField(max_length=255, null=True, blank=True)),
('twitter_account_name', models.CharField(max_length=255, null=True, blank=True)),
('twitter_expire_date', models.DateTimeField(null=True, blank=True)),
('twitter_profile_picture_url', models.TextField(null=True, blank=True)),
('user', models.ForeignKey(blank=True, to=settings.AUTH_USER_MODEL, null=True)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Fantastic',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('uuid', models.CharField(max_length=500, null=True, blank=True)),
('marked_at', models.DateTimeField(auto_now_add=True)),
('on', models.BooleanField(default=True)),
('post', models.ForeignKey(to='conceive.Concept')),
('reader', models.ForeignKey(blank=True, to='conceive.Contributor', null=True)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Location',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('name', models.CharField(max_length=255, null=True, editable=False, blank=True)),
('area', models.CharField(max_length=255, null=True, blank=True)),
('country', models.CharField(max_length=255, null=True, blank=True)),
('latitude', models.FloatField(null=True, blank=True)),
('longitude', models.FloatField(null=True, blank=True)),
('time_zone', models.CharField(max_length=255, null=True, editable=False, blank=True)),
],
),
migrations.CreateModel(
name='Read',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('uuid', models.CharField(max_length=500, null=True, blank=True)),
('read_at', models.DateTimeField(auto_now_add=True)),
('post', models.ForeignKey(to='conceive.Concept')),
('reader', models.ForeignKey(blank=True, to='conceive.Contributor', null=True)),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Redirect',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('created', models.DateTimeField(auto_now_add=True)),
('updated', models.DateTimeField(auto_now=True)),
('old_url', models.CharField(max_length=600, null=True, blank=True)),
('new_url', models.CharField(max_length=600, null=True, blank=True)),
('author', models.ForeignKey(to='conceive.Contributor')),
],
options={
'abstract': False,
},
),
migrations.CreateModel(
name='Weather',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('temp', models.FloatField(help_text=b'Current outdoor (shaded) temperature (float, deg C)', null=True, blank=True)),
('condition', models.CharField(help_text=b'Short weather condition description (str, categorical)', max_length=255, null=True, editable=False, blank=True)),
('icon', models.CharField(max_length=255, null=True, editable=False, blank=True)),
('pressure_mmm', models.IntegerField(null=True, blank=True)),
('relative_humidity', models.IntegerField(null=True, blank=True)),
('wind_bearing', models.IntegerField(null=True, blank=True)),
('wind_chill_c', models.IntegerField(null=True, blank=True)),
('wind_speed_kph', models.IntegerField(null=True, blank=True)),
],
),
migrations.AddField(
model_name='conceptrevision',
name='author',
field=models.ForeignKey(to='conceive.Contributor'),
),
migrations.AddField(
model_name='conceptrevision',
name='location',
field=models.ForeignKey(to='conceive.Location', null=True),
),
migrations.AddField(
model_name='conceptrevision',
name='post',
field=models.ForeignKey(to='conceive.Concept'),
),
migrations.AddField(
model_name='conceptrevision',
name='weather',
field=models.ForeignKey(to='conceive.Weather', null=True),
),
migrations.AddField(
model_name='concept',
name='author',
field=models.ForeignKey(to='conceive.Contributor'),
),
migrations.AddField(
model_name='concept',
name='location',
field=models.ForeignKey(to='conceive.Location', null=True),
),
migrations.AddField(
model_name='concept',
name='weather',
field=models.ForeignKey(to='conceive.Weather', null=True),
),
migrations.AddField(
model_name='collection',
name='author',
field=models.ForeignKey(to='conceive.Contributor'),
),
migrations.AddField(
model_name='backup',
name='author',
field=models.ForeignKey(to='conceive.Contributor'),
),
]
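# One detail worth flagging in the Concept model above: written_on uses
# default=datetime.datetime(2015, 4, 16, ...), a timestamp frozen at the
# moment makemigrations ran, so every new row gets that same 2015 value.
# Django field defaults are usually passed as a callable instead, which is
# evaluated per row. A stdlib sketch of the difference:

import datetime
import time

frozen = datetime.datetime.now()   # evaluated once, like the baked-in default above
fresh = datetime.datetime.now      # a callable, evaluated on every use

time.sleep(0.01)
assert fresh() > frozen            # each call to the callable yields a fresh timestamp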

# --- tracker/__init__.py (akkaze/rdc, BSD-3-Clause) ---
from . import utils
from . import args
from . import tracker
from . import topo

# --- benchmarks/test_raw_io_benchmarks.py (srenoes/pyogrio, MIT) ---
import os
import fiona
import pytest
from pyogrio.raw import read, write
def fiona_read(path, layer=None):
"""Read records from OGR data source using Fiona.
Note: Fiona returns different information than pyogrio and we have to
use a list here to force reading from Fiona's records generator -
both of which incur a slight performance penalty.
"""
with fiona.open(path, layer=layer) as src:
list(src)
def fiona_write(path, records, **kwargs):
with fiona.open(path, "w", **kwargs) as out:
for record in records:
out.write(record)
@pytest.mark.benchmark(group="read-lowres")
def test_read_lowres(naturalearth_lowres, benchmark):
benchmark(read, naturalearth_lowres)
@pytest.mark.benchmark(group="read-lowres")
def test_read_fiona_lowres(naturalearth_lowres, benchmark):
benchmark(fiona_read, naturalearth_lowres)
@pytest.mark.benchmark(group="read-modres-admin0")
def test_read_modres(naturalearth_modres, benchmark):
benchmark(read, naturalearth_modres)
@pytest.mark.benchmark(group="read-modres-admin0")
def test_read_vsi_modres(naturalearth_modres_vsi, benchmark):
benchmark(read, naturalearth_modres_vsi)
@pytest.mark.benchmark(group="read-modres-admin0")
def test_read_fiona_modres(naturalearth_modres, benchmark):
benchmark(fiona_read, naturalearth_modres)
@pytest.mark.benchmark(group="read-modres-admin1")
def test_read_modres1(naturalearth_modres1, benchmark):
benchmark(read, naturalearth_modres1)
@pytest.mark.benchmark(group="read-modres-admin1")
def test_read_fiona_modres1(naturalearth_modres1, benchmark):
benchmark(fiona_read, naturalearth_modres1)
@pytest.mark.benchmark(group="read-nhd_hr")
def test_read_nhd_hr(nhd_hr, benchmark):
benchmark(read, nhd_hr, layer="NHDFlowline")
@pytest.mark.benchmark(group="read-nhd_hr")
def test_read_fiona_nhd_hr(nhd_hr, benchmark):
benchmark(fiona_read, nhd_hr, layer="NHDFlowline")
@pytest.mark.benchmark(group="read-subset")
def test_read_full_modres1(naturalearth_modres1, benchmark):
benchmark(read, naturalearth_modres1)
@pytest.mark.benchmark(group="read-subset")
def test_read_no_geometry_modres1(naturalearth_modres1, benchmark):
benchmark(read, naturalearth_modres1, read_geometry=False)
@pytest.mark.benchmark(group="read-subset")
def test_read_one_column_modres1(naturalearth_modres1, benchmark):
benchmark(read, naturalearth_modres1, columns=["NAME"])
@pytest.mark.benchmark(group="read-subset")
def test_read_only_geometry_modres1(naturalearth_modres1, benchmark):
benchmark(read, naturalearth_modres1, columns=[])
@pytest.mark.benchmark(group="read-subset")
def test_read_only_meta_modres1(naturalearth_modres1, benchmark):
benchmark(read, naturalearth_modres1, columns=[], read_geometry=False)
@pytest.mark.benchmark(group="write-lowres")
def test_write_lowres_shp(tmpdir, naturalearth_lowres, benchmark):
meta, geometry, field_data = read(naturalearth_lowres)
filename = os.path.join(str(tmpdir), "test.shp")
benchmark(write, filename, geometry, field_data, driver="ESRI Shapefile", **meta)
@pytest.mark.benchmark(group="write-lowres")
def test_write_lowres_gpkg(tmpdir, naturalearth_lowres, benchmark):
meta, geometry, field_data = read(naturalearth_lowres)
filename = os.path.join(str(tmpdir), "test.gpkg")
benchmark(write, filename, geometry, field_data, driver="GPKG", **meta)
@pytest.mark.benchmark(group="write-lowres")
def test_write_lowres_geojson(tmpdir, naturalearth_lowres, benchmark):
meta, geometry, field_data = read(naturalearth_lowres)
filename = os.path.join(str(tmpdir), "test.json")
benchmark(write, filename, geometry, field_data, driver="GeoJSON", **meta)
@pytest.mark.benchmark(group="write-lowres")
def test_write_lowres_geojsonseq(tmpdir, naturalearth_lowres, benchmark):
meta, geometry, field_data = read(naturalearth_lowres)
filename = os.path.join(str(tmpdir), "test.json")
benchmark(write, filename, geometry, field_data, driver="GeoJSONSeq", **meta)
@pytest.mark.benchmark(group="write-lowres")
def test_write_fiona_lowres_shp(tmpdir, naturalearth_lowres, benchmark):
with fiona.open(naturalearth_lowres) as source:
crs = source.crs
schema = source.schema
records = list(source)
filename = os.path.join(str(tmpdir), "test.shp")
benchmark(
fiona_write, filename, records, driver="ESRI Shapefile", crs=crs, schema=schema
)
# @pytest.mark.benchmark(group="write-lowres")
# def test_write_fiona_lowres_gpkg(tmpdir, naturalearth_lowres, benchmark):
# with fiona.open(naturalearth_lowres) as source:
# crs = source.crs
# schema = source.schema
# records = list(source)
# filename = os.path.join(str(tmpdir), "test.gpkg")
# benchmark(fiona_write, filename, records, driver="GPKG", crs=crs, schema=schema)
# @pytest.mark.benchmark(group="write-lowres")
# def test_write_fiona_lowres_geojson(tmpdir, naturalearth_lowres, benchmark):
# with fiona.open(naturalearth_lowres) as source:
# crs = source.crs
# schema = source.schema
# records = list(source)
# filename = os.path.join(str(tmpdir), "test.json")
# benchmark(fiona_write, filename, records, driver="GeoJSON", crs=crs, schema=schema)
@pytest.mark.benchmark(group="write-modres")
def test_write_modres_shp(tmpdir, naturalearth_modres, benchmark):
meta, geometry, field_data = read(naturalearth_modres)
filename = os.path.join(str(tmpdir), "test.shp")
benchmark(write, filename, geometry, field_data, **meta)
@pytest.mark.benchmark(group="write-modres")
def test_write_fiona_modres_shp(tmpdir, naturalearth_modres, benchmark):
with fiona.open(naturalearth_modres) as source:
crs = source.crs
schema = source.schema
records = list(source)
filename = os.path.join(str(tmpdir), "test.shp")
benchmark(
fiona_write, filename, records, driver="ESRI Shapefile", crs=crs, schema=schema
)
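# All of these tests lean on the pytest-benchmark `benchmark` fixture:
# benchmark(fn, *args, **kwargs) calls fn repeatedly, records timings, and
# returns fn's result. A toy stand-in showing the same calling convention
# (not the plugin's real implementation):

import time

def toy_benchmark(fn, *args, rounds=5, **kwargs):
    # Call fn repeatedly, keep the best wall-clock time, return the last result.
    best = float("inf")
    result = None
    for _ in range(rounds):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        best = min(best, time.perf_counter() - start)
    return result, best

result, best = toy_benchmark(sum, range(1000))
assert result == 499500
assert best >= 0.0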
a1578de50589c4f52de8d1481d674cbf444e7131 | 12,155 | py | Python | tests/garage/torch/modules/test_gaussian_mlp_module.py | Rnhondova/garage | 0ff40022a1da0287a45af86c7b8fc9c604c13dd5 | [
"MIT"
] | 1 | 2021-01-11T18:40:52.000Z | 2021-01-11T18:40:52.000Z | tests/garage/torch/modules/test_gaussian_mlp_module.py | Rnhondova/garage | 0ff40022a1da0287a45af86c7b8fc9c604c13dd5 | [
"MIT"
] | 4 | 2021-01-18T06:16:20.000Z | 2021-03-06T08:49:03.000Z | tests/garage/torch/modules/test_gaussian_mlp_module.py | Rnhondova/garage | 0ff40022a1da0287a45af86c7b8fc9c604c13dd5 | [
"MIT"
] | 2 | 2021-07-29T22:02:36.000Z | 2021-11-13T08:26:43.000Z | # yapf: disable
import pytest
import torch
from torch import nn
from garage.torch.modules.gaussian_mlp_module import (
    GaussianMLPIndependentStdModule)  # noqa: E501
from garage.torch.modules.gaussian_mlp_module import (
    GaussianMLPTwoHeadedModule)  # noqa: E501
from garage.torch.modules.gaussian_mlp_module import GaussianMLPModule
# yapf: enable
plain_settings = [
    (1, 1, (1, )),
    (1, 2, (2, )),
    (1, 3, (3, )),
    (1, 1, (1, 2)),
    (1, 2, (2, 1)),
    (1, 3, (4, 5)),
    (2, 1, (1, )),
    (2, 2, (2, )),
    (2, 3, (3, )),
    (2, 1, (1, 2)),
    (2, 2, (2, 1)),
    (2, 3, (4, 5)),
    (5, 1, (1, )),
    (5, 2, (2, )),
    (5, 3, (3, )),
    (5, 1, (1, 2)),
    (5, 2, (2, 1)),
    (5, 3, (4, 5)),
]

different_std_settings = [(1, 1, (1, ), (1, )), (1, 2, (2, ), (2, )),
                          (1, 3, (3, ), (3, )), (1, 1, (1, 2), (1, 2)),
                          (1, 2, (2, 1), (2, 1)), (1, 3, (4, 5), (4, 5)),
                          (2, 1, (1, ), (1, )), (2, 2, (2, ), (2, )),
                          (2, 3, (3, ), (3, )), (2, 1, (1, 2), (1, 2)),
                          (2, 2, (2, 1), (2, 1)), (2, 3, (4, 5), (4, 5)),
                          (5, 1, (1, ), (1, )), (5, 2, (2, ), (2, )),
                          (5, 3, (3, ), (3, )), (5, 1, (1, 2), (1, 2)),
                          (5, 2, (2, 1), (2, 1)), (5, 3, (4, 5), (4, 5))]
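Both hand-written parameter grids above enumerate the same cross product of input dimensions and (output_dim, hidden_sizes) pairs. As an observation (not part of the original test module), they could equivalently be generated:

```python
import itertools

# The building blocks of the grids above.
input_dims = (1, 2, 5)
output_specs = [(1, (1, )), (2, (2, )), (3, (3, )),
                (1, (1, 2)), (2, (2, 1)), (3, (4, 5))]

# plain_settings is the cross product; different_std_settings additionally
# repeats hidden_sizes as std_hidden_sizes.
generated_plain = [(i, o, h)
                   for i, (o, h) in itertools.product(input_dims, output_specs)]
generated_std = [(i, o, h, h)
                 for i, (o, h) in itertools.product(input_dims, output_specs)]
```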
@pytest.mark.parametrize('input_dim, output_dim, hidden_sizes', plain_settings)
def test_std_share_network_output_values(input_dim, output_dim, hidden_sizes):
    module = GaussianMLPTwoHeadedModule(input_dim=input_dim,
                                        output_dim=output_dim,
                                        hidden_sizes=hidden_sizes,
                                        hidden_nonlinearity=None,
                                        std_parameterization='exp',
                                        hidden_w_init=nn.init.ones_,
                                        output_w_init=nn.init.ones_)
    dist = module(torch.ones(input_dim))

    exp_mean = torch.full(
        (output_dim, ),
        input_dim * (torch.Tensor(hidden_sizes).prod().item()),
        dtype=torch.float)
    exp_variance = (input_dim *
                    torch.Tensor(hidden_sizes).prod()).exp().pow(2).item()

    assert dist.mean.equal(exp_mean)
    assert dist.variance.equal(
        torch.full((output_dim, ), exp_variance, dtype=torch.float))
    assert dist.rsample().shape == (output_dim, )
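The expected mean used throughout these tests has a closed form: with all-ones weights, zero biases (an assumption about the module's defaults), and an identity hidden nonlinearity, pushing an all-ones input through the network multiplies the activation by each layer's fan-in, giving `input_dim * prod(hidden_sizes)`. A pure-Python sketch of that arithmetic:

```python
def all_ones_mlp_output(input_dim, hidden_sizes):
    # Forward pass through an MLP whose weights are all ones, biases are zero,
    # and nonlinearity is the identity, fed a vector of ones: every unit in a
    # layer sums `fan_in` equal activations, so each layer multiplies the
    # running value by its fan-in.
    value = 1.0
    fan_in = input_dim
    for width in hidden_sizes:
        value *= fan_in
        fan_in = width
    return value * fan_in  # the output head sums the last hidden layer
```

For example, `all_ones_mlp_output(2, (4, 5))` is `2 * 4 * 5 = 40.0`, the value `exp_mean` encodes for that setting.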
@pytest.mark.parametrize('input_dim, output_dim, hidden_sizes', plain_settings)
def test_std_share_network_output_values_with_batch(input_dim, output_dim,
                                                    hidden_sizes):
    module = GaussianMLPTwoHeadedModule(input_dim=input_dim,
                                        output_dim=output_dim,
                                        hidden_sizes=hidden_sizes,
                                        hidden_nonlinearity=None,
                                        std_parameterization='exp',
                                        hidden_w_init=nn.init.ones_,
                                        output_w_init=nn.init.ones_)
    batch_size = 5
    dist = module(torch.ones([batch_size, input_dim]))

    exp_mean = torch.full(
        (batch_size, output_dim),
        input_dim * (torch.Tensor(hidden_sizes).prod().item()),
        dtype=torch.float)
    exp_variance = (input_dim *
                    torch.Tensor(hidden_sizes).prod()).exp().pow(2).item()

    assert dist.mean.equal(exp_mean)
    assert dist.variance.equal(
        torch.full((batch_size, output_dim), exp_variance, dtype=torch.float))
    assert dist.rsample().shape == (batch_size, output_dim)
@pytest.mark.parametrize('input_dim, output_dim, hidden_sizes', plain_settings)
def test_std_network_output_values(input_dim, output_dim, hidden_sizes):
    init_std = 2.

    module = GaussianMLPModule(input_dim=input_dim,
                               output_dim=output_dim,
                               hidden_sizes=hidden_sizes,
                               init_std=init_std,
                               hidden_nonlinearity=None,
                               std_parameterization='exp',
                               hidden_w_init=nn.init.ones_,
                               output_w_init=nn.init.ones_)
    dist = module(torch.ones(input_dim))

    exp_mean = torch.full(
        (output_dim, ),
        input_dim * (torch.Tensor(hidden_sizes).prod().item()),
        dtype=torch.float)
    exp_variance = init_std**2

    assert dist.mean.equal(exp_mean)
    assert dist.variance.equal(
        torch.full((output_dim, ), exp_variance, dtype=torch.float))
    assert dist.rsample().shape == (output_dim, )


@pytest.mark.parametrize('input_dim, output_dim, hidden_sizes', plain_settings)
def test_std_network_output_values_with_batch(input_dim, output_dim,
                                              hidden_sizes):
    init_std = 2.

    module = GaussianMLPModule(input_dim=input_dim,
                               output_dim=output_dim,
                               hidden_sizes=hidden_sizes,
                               init_std=init_std,
                               hidden_nonlinearity=None,
                               std_parameterization='exp',
                               hidden_w_init=nn.init.ones_,
                               output_w_init=nn.init.ones_)
    batch_size = 5
    dist = module(torch.ones([batch_size, input_dim]))

    exp_mean = torch.full(
        (batch_size, output_dim),
        input_dim * (torch.Tensor(hidden_sizes).prod().item()),
        dtype=torch.float)
    exp_variance = init_std**2

    assert dist.mean.equal(exp_mean)
    assert dist.variance.equal(
        torch.full((batch_size, output_dim), exp_variance, dtype=torch.float))
    assert dist.rsample().shape == (batch_size, output_dim)
@pytest.mark.parametrize(
    'input_dim, output_dim, hidden_sizes, std_hidden_sizes',
    different_std_settings)
def test_std_adaptive_network_output_values(input_dim, output_dim,
                                            hidden_sizes, std_hidden_sizes):
    module = GaussianMLPIndependentStdModule(input_dim=input_dim,
                                             output_dim=output_dim,
                                             hidden_sizes=hidden_sizes,
                                             std_hidden_sizes=std_hidden_sizes,
                                             hidden_nonlinearity=None,
                                             hidden_w_init=nn.init.ones_,
                                             output_w_init=nn.init.ones_,
                                             std_hidden_nonlinearity=None,
                                             std_hidden_w_init=nn.init.ones_,
                                             std_output_w_init=nn.init.ones_)
    dist = module(torch.ones(input_dim))

    exp_mean = torch.full(
        (output_dim, ),
        input_dim * (torch.Tensor(hidden_sizes).prod().item()),
        dtype=torch.float)
    exp_variance = (input_dim *
                    torch.Tensor(hidden_sizes).prod()).exp().pow(2).item()

    assert dist.mean.equal(exp_mean)
    assert dist.variance.equal(
        torch.full((output_dim, ), exp_variance, dtype=torch.float))
    assert dist.rsample().shape == (output_dim, )


@pytest.mark.parametrize('input_dim, output_dim, hidden_sizes', plain_settings)
def test_softplus_std_network_output_values(input_dim, output_dim,
                                            hidden_sizes):
    init_std = 2.

    module = GaussianMLPModule(input_dim=input_dim,
                               output_dim=output_dim,
                               hidden_sizes=hidden_sizes,
                               init_std=init_std,
                               hidden_nonlinearity=None,
                               std_parameterization='softplus',
                               hidden_w_init=nn.init.ones_,
                               output_w_init=nn.init.ones_)
    dist = module(torch.ones(input_dim))

    exp_mean = input_dim * torch.Tensor(hidden_sizes).prod().item()
    exp_variance = torch.Tensor([init_std]).exp().add(1.).log()**2

    assert dist.mean.equal(
        torch.full((output_dim, ), exp_mean, dtype=torch.float))
    assert dist.variance.equal(
        torch.full((output_dim, ), exp_variance[0], dtype=torch.float))
    assert dist.rsample().shape == (output_dim, )
@pytest.mark.parametrize('input_dim, output_dim, hidden_sizes', plain_settings)
def test_exp_min_std(input_dim, output_dim, hidden_sizes):
    min_value = 10.

    module = GaussianMLPModule(input_dim=input_dim,
                               output_dim=output_dim,
                               hidden_sizes=hidden_sizes,
                               init_std=1.,
                               min_std=min_value,
                               hidden_nonlinearity=None,
                               std_parameterization='exp',
                               hidden_w_init=nn.init.zeros_,
                               output_w_init=nn.init.zeros_)
    dist = module(torch.ones(input_dim))

    exp_variance = min_value**2

    assert dist.variance.equal(
        torch.full((output_dim, ), exp_variance, dtype=torch.float))


@pytest.mark.parametrize('input_dim, output_dim, hidden_sizes', plain_settings)
def test_exp_max_std(input_dim, output_dim, hidden_sizes):
    max_value = 1.

    module = GaussianMLPModule(input_dim=input_dim,
                               output_dim=output_dim,
                               hidden_sizes=hidden_sizes,
                               init_std=10.,
                               max_std=max_value,
                               hidden_nonlinearity=None,
                               std_parameterization='exp',
                               hidden_w_init=nn.init.zeros_,
                               output_w_init=nn.init.zeros_)
    dist = module(torch.ones(input_dim))

    exp_variance = max_value**2

    assert dist.variance.equal(
        torch.full((output_dim, ), exp_variance, dtype=torch.float))
@pytest.mark.parametrize('input_dim, output_dim, hidden_sizes', plain_settings)
def test_softplus_min_std(input_dim, output_dim, hidden_sizes):
    min_value = 2.

    module = GaussianMLPModule(input_dim=input_dim,
                               output_dim=output_dim,
                               hidden_sizes=hidden_sizes,
                               init_std=1.,
                               min_std=min_value,
                               hidden_nonlinearity=None,
                               std_parameterization='softplus',
                               hidden_w_init=nn.init.zeros_,
                               output_w_init=nn.init.zeros_)
    dist = module(torch.ones(input_dim))

    exp_variance = torch.Tensor([min_value]).exp().add(1.).log()**2

    assert dist.variance.equal(
        torch.full((output_dim, ), exp_variance[0], dtype=torch.float))


@pytest.mark.parametrize('input_dim, output_dim, hidden_sizes', plain_settings)
def test_softplus_max_std(input_dim, output_dim, hidden_sizes):
    max_value = 1.

    module = GaussianMLPModule(input_dim=input_dim,
                               output_dim=output_dim,
                               hidden_sizes=hidden_sizes,
                               init_std=10,
                               max_std=max_value,
                               hidden_nonlinearity=None,
                               std_parameterization='softplus',
                               hidden_w_init=nn.init.ones_,
                               output_w_init=nn.init.ones_)
    dist = module(torch.ones(input_dim))

    exp_variance = torch.Tensor([max_value]).exp().add(1.).log()**2

    assert torch.equal(
        dist.variance,
        torch.full((output_dim, ), exp_variance[0], dtype=torch.float))
def test_unknown_std_parameterization():
    with pytest.raises(NotImplementedError):
        GaussianMLPModule(input_dim=1,
                          output_dim=1,
                          std_parameterization='unknown')
| 39.852459 | 79 | 0.529247 | 1,345 | 12,155 | 4.477323 | 0.056506 | 0.094155 | 0.079708 | 0.084689 | 0.929259 | 0.906343 | 0.883261 | 0.869811 | 0.853371 | 0.840917 | 0 | 0.024725 | 0.357795 | 12,155 | 304 | 80 | 39.983553 | 0.746733 | 0.003949 | 0 | 0.690377 | 0 | 0 | 0.034457 | 0 | 0 | 0 | 0 | 0 | 0.09205 | 1 | 0.046025 | false | 0 | 0.025105 | 0 | 0.07113 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a1a3bd64cdcd35871de97ff39a7804655d09ee5e | 7,753 | py | Python | examples/pyaos8/interfaces.py | michaelrosejr/pyaos8 | 2fc7c241692bad7bd1a5e25c87cd65d5830a9dd5 | [
"Apache-2.0"
] | 2 | 2019-07-31T07:35:47.000Z | 2020-01-10T15:45:48.000Z | examples/pyaos8/interfaces.py | michaelrosejr/pyaos8 | 2fc7c241692bad7bd1a5e25c87cd65d5830a9dd5 | [
"Apache-2.0"
] | null | null | null | examples/pyaos8/interfaces.py | michaelrosejr/pyaos8 | 2fc7c241692bad7bd1a5e25c87cd65d5830a9dd5 | [
"Apache-2.0"
] | 2 | 2018-11-17T04:33:35.000Z | 2020-09-09T16:08:34.000Z | import requests
import json
import sys
from requests.packages.urllib3.exceptions import InsecureRequestWarning
# from aosget import aosget
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
def aosget(url, auth):
    aoscookie = dict(SESSION=auth.uidaruba)
    try:
        r = requests.get(url, cookies=aoscookie, verify=False)
        if r.status_code != 200:
            print('Status:', r.status_code, 'Headers:', r.headers,
                  'Error Response:', r.reason)
        return r.text
    except requests.exceptions.RequestException as error:
        return "Error:\n" + str(error) + sys._getframe().f_code.co_name + ": An Error has occurred"


def aosput(url, auth, payload):
    aoscookie = dict(SESSION=auth.uidaruba)
    try:
        r = requests.post(url, cookies=aoscookie, data=payload, verify=False)
        if r.status_code != 200:
            print('Status:', r.status_code, 'Headers:', r.headers,
                  'Error Response:', r.reason)
    except requests.exceptions.RequestException as error:
        return "Error:\n" + str(error) + " get_interfaces: An Error has occurred"

    # Persist the configuration change with a write memory before returning.
    url_write = "https://" + auth.aos8ip + ":4343/v1/configuration/object/write_memory?json=1&UIDARUBA=" + auth.uidaruba
    try:
        rw = requests.post(url_write, cookies=aoscookie, verify=False)
        if rw.status_code != 200:
            print('Status:', rw.status_code, 'Headers:', rw.headers,
                  'Error Response:', rw.reason)
    except requests.exceptions.RequestException as error:
        return "Error:\n" + str(error) + " url_write: An Error has occurred"
    return r.text
class interfaces():

    def get_int_get(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/int_gig?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def vlan_id_get(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/vlan_id?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def vlan_id_post(auth, aosdata):
        aosdata = json.dumps(aosdata)
        print(aosdata)
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/vlan_id?json=1&UIDARUBA=" + auth.uidaruba
        response = aosput(url, auth, aosdata)
        return response

    def get_int_mgmt(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "int_mgmt?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_rad_src_int(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "rad_src_int?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_ipv6_gateway_ip(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "ipv6_gateway_ip?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_stm_tun_node_addr(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "stm_tun_node_addr?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_int_pc(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "int_pc?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_tg(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "tg?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_int_range(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "int_range?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_rad_src_int_v6(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "rad_src_int_v6?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_int_gig(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "int_gig?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_int_cell(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "int_cell?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_vlan_id(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "vlan_id?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_int_tun(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "int_tun?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_stm_tun_node_mtu(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "stm_tun_node_mtu?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_ip_flow_export_prof(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "ip_flow_export_prof?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_ping(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "ping?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_rad_cp_redir_v6(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "rad_cp_redir_v6?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_int_loop(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "int_loop?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_stm_tun_loop_prevention(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "stm_tun_loop_prevention?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_vlan_range_rem(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "vlan_range_rem?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_vlan_name_id(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "vlan_name_id?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_rad_cp_redir(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "rad_cp_redir?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_int_vlan(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "int_vlan?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response

    def get_vlan_range(auth):
        url = "https://" + auth.aos8ip + ":4343/v1/configuration/object/" \
              "vlan_range?json=1&UIDARUBA=" + auth.uidaruba
        response = aosget(url, auth)
        return response
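The two dozen `get_*` wrappers above differ only in the object name embedded in the URL. A possible refactoring (a sketch, not part of the original module) builds them from a single template; `make_getter` and its `fetch` parameter are hypothetical names, with `fetch` standing in for the module's `aosget` helper:

```python
def make_getter(object_name, fetch):
    # fetch is the transport helper, e.g. the module-level aosget(url, auth).
    def getter(auth):
        url = ("https://" + auth.aos8ip + ":4343/v1/configuration/object/"
               + object_name + "?json=1&UIDARUBA=" + auth.uidaruba)
        return fetch(url, auth)
    return getter

# e.g.: get_int_vlan = make_getter("int_vlan", aosget)
```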
| 33.708696 | 120 | 0.60454 | 925 | 7,753 | 4.934054 | 0.101622 | 0.076249 | 0.088738 | 0.112401 | 0.860868 | 0.85298 | 0.85298 | 0.83589 | 0.816608 | 0.775416 | 0 | 0.03592 | 0.260286 | 7,753 | 229 | 121 | 33.855895 | 0.759895 | 0.008642 | 0 | 0.585366 | 0 | 0 | 0.257128 | 0.199844 | 0 | 0 | 0 | 0 | 0 | 1 | 0.170732 | false | 0 | 0.02439 | 0 | 0.396341 | 0.02439 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b806e29b5ea620ff41da11213f007d6d76a4d4f2 | 49 | py | Python | enthought/pyface/split_widget.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 3 | 2016-12-09T06:05:18.000Z | 2018-03-01T13:00:29.000Z | enthought/pyface/split_widget.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | 1 | 2020-12-02T00:51:32.000Z | 2020-12-02T08:48:55.000Z | enthought/pyface/split_widget.py | enthought/etsproxy | 4aafd628611ebf7fe8311c9d1a0abcf7f7bb5347 | [
"BSD-3-Clause"
] | null | null | null | # proxy module
from pyface.split_widget import *
| 16.333333 | 33 | 0.795918 | 7 | 49 | 5.428571 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 49 | 2 | 34 | 24.5 | 0.904762 | 0.244898 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
62e64e94c51a94ca5f08fb994b77f6287272f9cc | 7,983 | py | Python | resources/dot_PyCharm/system/python_stubs/-762174762/_heapq.py | basepipe/developer_onboarding | 05b6a776f8974c89517868131b201f11c6c2a5ad | [
"MIT"
] | 1 | 2020-04-20T02:27:20.000Z | 2020-04-20T02:27:20.000Z | resources/dot_PyCharm/system/python_stubs/-762174762/_heapq.py | basepipe/developer_onboarding | 05b6a776f8974c89517868131b201f11c6c2a5ad | [
"MIT"
] | null | null | null | resources/dot_PyCharm/system/python_stubs/-762174762/_heapq.py | basepipe/developer_onboarding | 05b6a776f8974c89517868131b201f11c6c2a5ad | [
"MIT"
] | null | null | null | # encoding: utf-8
# module _heapq
# from (built-in)
# by generator 1.147
"""
Heap queue algorithm (a.k.a. priority queue).
Heaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for
all k, counting elements from 0. For the sake of comparison,
non-existing elements are considered to be infinite. The interesting
property of a heap is that a[0] is always its smallest element.
Usage:
heap = []            # creates an empty heap
heappush(heap, item) # pushes a new item on the heap
item = heappop(heap) # pops the smallest item from the heap
item = heap[0]       # smallest item on the heap without popping it
heapify(x)           # transforms list into a heap, in-place, in linear time
item = heapreplace(heap, item) # pops and returns smallest item, and adds
                               # new item; the heap size is unchanged
Our API differs from textbook heap algorithms as follows:
- We use 0-based indexing. This makes the relationship between the
index for a node and the indexes for its children slightly less
obvious, but is more suitable since Python uses 0-based indexing.
- Our heappop() method returns the smallest item, not the largest.
These two make it possible to view the heap as a regular Python list
without surprises: heap[0] is the smallest item, and heap.sort()
maintains the heap invariant!
"""
# no imports
# Variables with simple values
__about__ = 'Heap queues\n\n[explanation by Fran\xe7ois Pinard]\n\nHeaps are arrays for which a[k] <= a[2*k+1] and a[k] <= a[2*k+2] for\nall k, counting elements from 0. For the sake of comparison,\nnon-existing elements are considered to be infinite. The interesting\nproperty of a heap is that a[0] is always its smallest element.\n\nThe strange invariant above is meant to be an efficient memory\nrepresentation for a tournament. The numbers below are `k\', not a[k]:\n\n 0\n\n 1 2\n\n 3 4 5 6\n\n 7 8 9 10 11 12 13 14\n\n 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30\n\n\nIn the tree above, each cell `k\' is topping `2*k+1\' and `2*k+2\'. In\na usual binary tournament we see in sports, each cell is the winner\nover the two cells it tops, and we can trace the winner down the tree\nto see all opponents s/he had. However, in many computer applications\nof such tournaments, we do not need to trace the history of a winner.\nTo be more memory efficient, when a winner is promoted, we try to\nreplace it by something else at a lower level, and the rule becomes\nthat a cell and the two cells it tops contain three different items,\nbut the top cell "wins" over the two topped cells.\n\nIf this heap invariant is protected at all time, index 0 is clearly\nthe overall winner. The simplest algorithmic way to remove it and\nfind the "next" winner is to move some loser (let\'s say cell 30 in the\ndiagram above) into the 0 position, and then percolate this new 0 down\nthe tree, exchanging values, until the invariant is re-established.\nThis is clearly logarithmic on the total number of items in the tree.\nBy iterating over all items, you get an O(n ln n) sort.\n\nA nice feature of this sort is that you can efficiently insert new\nitems while the sort is going on, provided that the inserted items are\nnot "better" than the last 0\'th element you extracted. This is\nespecially useful in simulation contexts, where the tree holds all\nincoming events, and the "win" condition means the smallest scheduled\ntime. When an event schedule other events for execution, they are\nscheduled into the future, so they can easily go into the heap. So, a\nheap is a good structure for implementing schedulers (this is what I\nused for my MIDI sequencer :-).\n\nVarious structures for implementing schedulers have been extensively\nstudied, and heaps are good for this, as they are reasonably speedy,\nthe speed is almost constant, and the worst case is not much different\nthan the average case. However, there are other representations which\nare more efficient overall, yet the worst cases might be terrible.\n\nHeaps are also very useful in big disk sorts. You most probably all\nknow that a big sort implies producing "runs" (which are pre-sorted\nsequences, which size is usually related to the amount of CPU memory),\nfollowed by a merging passes for these runs, which merging is often\nvery cleverly organised[1]. It is very important that the initial\nsort produces the longest runs possible. Tournaments are a good way\nto that. If, using all the memory available to hold a tournament, you\nreplace and percolate items that happen to fit the current run, you\'ll\nproduce runs which are twice the size of the memory for random input,\nand much better for input fuzzily ordered.\n\nMoreover, if you output the 0\'th item on disk and get an input which\nmay not fit in the current tournament (because the value "wins" over\nthe last output value), it cannot fit in the heap, so the size of the\nheap decreases. The freed memory could be cleverly reused immediately\nfor progressively building a second heap, which grows at exactly the\nsame rate the first heap is melting. When the first heap completely\nvanishes, you switch heaps and start a new run. Clever and quite\neffective!\n\nIn a word, heaps are useful memory structures to know. I use them in\na few applications, and I think it is good to keep a `heap\' module\naround. :-)\n\n--------------------\n[1] The disk balancing algorithms which are current, nowadays, are\nmore annoying than clever, and this is a consequence of the seeking\ncapabilities of the disks. On devices which cannot seek, like big\ntape drives, the story was quite different, and one had to be very\nclever to ensure (far in advance) that each tape movement will be the\nmost effective possible (that is, will best participate at\n"progressing" the merge). Some tapes were even able to read\nbackwards, and this was also used to avoid the rewinding time.\nBelieve me, real good tape sorts were quite spectacular to watch!\nFrom all times, sorting has always been a Great Art! :-)\n'
# functions
def heapify(*args, **kwargs): # real signature unknown
    """ Transform list into a heap, in-place, in O(len(heap)) time. """
    pass

def heappop(*args, **kwargs): # real signature unknown
    """ Pop the smallest item off the heap, maintaining the heap invariant. """
    pass

def heappush(heap, item): # real signature unknown; restored from __doc__
    """ heappush(heap, item) -> None. Push item onto heap, maintaining the heap invariant. """
    pass

def heappushpop(heap, item): # real signature unknown; restored from __doc__
    """
    heappushpop(heap, item) -> value. Push item on the heap, then pop and return the smallest item
    from the heap. The combined action runs more efficiently than
    heappush() followed by a separate call to heappop().
    """
    pass

def heapreplace(heap, item): # real signature unknown; restored from __doc__
    """
    heapreplace(heap, item) -> value. Pop and return the current smallest value, and add the new item.

    This is more efficient than heappop() followed by heappush(), and can be
    more appropriate when using a fixed-size heap. Note that the value
    returned may be larger than item! That constrains reasonable uses of
    this routine unless written as part of a conditional replacement:

        if item > heap[0]:
            item = heapreplace(heap, item)
    """
    pass

def nlargest(*args, **kwargs): # real signature unknown
    """
    Find the n largest elements in a dataset.

    Equivalent to: sorted(iterable, reverse=True)[:n]
    """
    pass

def nsmallest(*args, **kwargs): # real signature unknown
    """
    Find the n smallest elements in a dataset.

    Equivalent to: sorted(iterable)[:n]
    """
    pass
# no classes
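The behaviour documented in the stubs above can be exercised through the pure-Python `heapq` module, which this C accelerator backs:

```python
import heapq

data = [5, 1, 4, 2, 3]
heapq.heapify(data)             # in-place, O(len(data))
assert data[0] == 1             # index 0 is always the smallest element

heapq.heappush(data, 0)         # push keeps the invariant...
smallest = heapq.heappop(data)  # ...and pop removes the smallest item

top3 = heapq.nlargest(3, [5, 1, 4, 2, 3])   # equivalent to sorted(..., reverse=True)[:3]
bottom2 = heapq.nsmallest(2, [5, 1, 4, 2, 3])
```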
| 84.925532 | 4,836 | 0.721032 | 1,311 | 7,983 | 4.377574 | 0.385202 | 0.015856 | 0.024394 | 0.002788 | 0.15839 | 0.142011 | 0.13295 | 0.11204 | 0.060986 | 0.041819 | 0 | 0.013928 | 0.208568 | 7,983 | 93 | 4,837 | 85.83871 | 0.894429 | 0.332958 | 0 | 0.466667 | 0 | 0.266667 | 0.610756 | 0.034243 | 0 | 0 | 0 | 0 | 0 | 1 | 0.466667 | false | 0.533333 | 0.066667 | 0 | 0.533333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 6 |
1a054fa7e328a2ec97d7be291428c2fe6fff855e | 411 | py | Python | tests/test_utils/test_config.py | uTensor/utensor_cgen | eccd6859028d0b6a350dced25ea72ff02faaf9ad | [
"Apache-2.0"
] | 49 | 2018-01-06T12:57:56.000Z | 2021-09-03T09:48:32.000Z | tests/test_utils/test_config.py | uTensor/utensor_cgen | eccd6859028d0b6a350dced25ea72ff02faaf9ad | [
"Apache-2.0"
] | 101 | 2018-01-16T19:24:21.000Z | 2021-11-10T19:39:33.000Z | tests/test_utils/test_config.py | uTensor/utensor_cgen | eccd6859028d0b6a350dced25ea72ff02faaf9ad | [
"Apache-2.0"
] | 32 | 2018-02-15T19:39:50.000Z | 2020-11-26T22:32:05.000Z | def test_user_values(config_user_values):
    assert config_user_values['x'] == 2
    assert config_user_values['y'] == 2


def test_config_nested(config_nested):
    assert isinstance(config_nested['dict1'], type(config_nested))
    assert isinstance(config_nested['dict1']['inner'], type(config_nested))
    assert config_nested['dict1']['inner']['x'] == 2
    assert config_nested['dict1']['inner']['y'] == 4
| 41.1 | 75 | 0.710462 | 56 | 411 | 4.910714 | 0.267857 | 0.349091 | 0.247273 | 0.24 | 0.530909 | 0.327273 | 0.327273 | 0 | 0 | 0 | 0 | 0.022222 | 0.124088 | 411 | 9 | 76 | 45.666667 | 0.741667 | 0 | 0 | 0 | 0 | 0 | 0.094891 | 0 | 0 | 0 | 0 | 0 | 0.75 | 1 | 0.25 | false | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
a7ed5f7aa1a0ee7f2c31124ed9c1bd6c6966b8a8 | 3,141 | py | Python | usaspending_api/disaster/tests/integration/test_object_class_spending_total.py | jbuendiallc/usaspending-api | f827870cbca4b6a6e16f1c5272bb2ff73a113d76 | [
"CC0-1.0"
] | 1 | 2020-08-14T04:14:32.000Z | 2020-08-14T04:14:32.000Z | usaspending_api/disaster/tests/integration/test_object_class_spending_total.py | jbuendiallc/usaspending-api | f827870cbca4b6a6e16f1c5272bb2ff73a113d76 | [
"CC0-1.0"
] | null | null | null | usaspending_api/disaster/tests/integration/test_object_class_spending_total.py | jbuendiallc/usaspending-api | f827870cbca4b6a6e16f1c5272bb2ff73a113d76 | [
"CC0-1.0"
] | null | null | null | import pytest
from rest_framework import status
url = "/api/v2/disaster/object_class/spending/"
@pytest.mark.django_db
def test_basic_object_class_spending_total_success(
    client, basic_fa_by_object_class_with_object_class, monkeypatch, helpers
):
    helpers.patch_datetime_now(monkeypatch, 2022, 12, 31)
    helpers.reset_dabs_cache()
    resp = helpers.post_for_spending_endpoint(client, url, def_codes=["M"], spending_type="total")
    expected_results = [
        {
            "id": "001",
            "code": "001",
            "description": "001 name",
            "award_count": None,
            "obligation": 9.0,
            "outlay": 0.0,
            "children": [
                {
                    "id": "1",
                    "code": "0001",
                    "description": "0001 name",
                    "award_count": None,
                    "obligation": 9.0,
                    "outlay": 0.0,
                }
            ],
        }
    ]
    print(resp.json()["results"])
    assert resp.status_code == status.HTTP_200_OK
    assert resp.json()["results"] == expected_results
@pytest.mark.django_db
def test_object_class_spending_filters_on_defc(
    client, basic_fa_by_object_class_with_object_class, monkeypatch, helpers
):
    helpers.patch_datetime_now(monkeypatch, 2022, 12, 31)
    resp = helpers.post_for_spending_endpoint(client, url, def_codes=["A"], spending_type="total")
    assert len(resp.json()["results"]) == 0

    resp = helpers.post_for_spending_endpoint(client, url, def_codes=["M"], spending_type="total")
    assert len(resp.json()["results"]) == 1


@pytest.mark.django_db
def test_object_class_spending_filters_on_non_zero_obligations(
    client, basic_fa_by_object_class_with_object_class_but_no_obligations, monkeypatch, helpers
):
    helpers.patch_datetime_now(monkeypatch, 2022, 12, 31)
    resp = helpers.post_for_spending_endpoint(client, url, def_codes=["M"], spending_type="total")
    assert len(resp.json()["results"]) == 0


@pytest.mark.django_db
def test_object_class_spending_adds_over_multiple_object_classes(
    client, basic_fa_by_object_class_with_multpile_object_class, monkeypatch, helpers
):
    helpers.patch_datetime_now(monkeypatch, 2022, 12, 31)
    resp = helpers.post_for_spending_endpoint(client, url, def_codes=["M"], spending_type="total")
    assert len(resp.json()["results"]) == 1
    assert len(resp.json()["results"][0]["children"]) == 3
    assert resp.json()["results"][0]["obligation"] == 11
    assert resp.json()["results"][0]["outlay"] == 22


@pytest.mark.django_db
def test_object_class_spending_adds_over_multiple_object_classes_of_same_code(
    client, basic_fa_by_object_class_with_multpile_object_class_of_same_code, monkeypatch, helpers
):
    helpers.patch_datetime_now(monkeypatch, 2022, 12, 31)
    resp = helpers.post_for_spending_endpoint(client, url, def_codes=["M"], spending_type="total")
    assert len(resp.json()["results"]) == 1
    assert len(resp.json()["results"][0]["children"]) == 1
    assert resp.json()["results"][0]["obligation"] == 10
    assert resp.json()["results"][0]["outlay"] == 22
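The "adds over multiple object classes of same code" behaviour checked above boils down to grouping rows by object-class code and summing their obligations and outlays. A standalone sketch of that aggregation (illustrative only, not the endpoint's actual implementation), using sample rows that mirror the expected totals:

```python
from collections import defaultdict

# Two rows sharing one object-class code, as in the same-code test above.
rows = [
    {"code": "0001", "obligation": 4, "outlay": 10},
    {"code": "0001", "obligation": 6, "outlay": 12},
]

totals = defaultdict(lambda: {"obligation": 0, "outlay": 0})
for row in rows:
    totals[row["code"]]["obligation"] += row["obligation"]
    totals[row["code"]]["outlay"] += row["outlay"]
```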
# tests/unit/job/test_query_pandas.py (googleapis/python-bigquery fork, Apache-2.0)
# Copyright 2015 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import concurrent.futures
import copy
import json
import mock
import pytest
try:
import pandas
except (ImportError, AttributeError): # pragma: NO COVER
pandas = None
try:
import pyarrow
except (ImportError, AttributeError): # pragma: NO COVER
pyarrow = None
try:
from google.cloud import bigquery_storage
except (ImportError, AttributeError): # pragma: NO COVER
bigquery_storage = None
try:
from tqdm import tqdm
except (ImportError, AttributeError): # pragma: NO COVER
tqdm = None
from .helpers import _make_client
from .helpers import _make_connection
from .helpers import _make_job_resource
@pytest.fixture
def table_read_options_kwarg():
# Create a BigQuery Storage table read options object with pyarrow compression
# enabled if a recent-enough version of google-cloud-bigquery-storage dependency is
# installed to support the compression.
if not hasattr(bigquery_storage, "ArrowSerializationOptions"):
return {}
read_options = bigquery_storage.ReadSession.TableReadOptions(
arrow_serialization_options=bigquery_storage.ArrowSerializationOptions(
buffer_compression=bigquery_storage.ArrowSerializationOptions.CompressionCodec.LZ4_FRAME
)
)
return {"read_options": read_options}
@pytest.mark.parametrize(
"query,expected",
(
(None, False),
("", False),
("select name, age from table", False),
("select name, age from table LIMIT 10;", False),
("select name, age from table order by other_column;", True),
("Select name, age From table Order By other_column", True),
("SELECT name, age FROM table ORDER BY other_column;", True),
("select name, age from table order\nby other_column", True),
("Select name, age From table Order\nBy other_column;", True),
("SELECT name, age FROM table ORDER\nBY other_column", True),
("SelecT name, age froM table OrdeR \n\t BY other_column;", True),
),
)
def test__contains_order_by(query, expected):
from google.cloud.bigquery import job as mut
if expected:
assert mut._contains_order_by(query)
else:
assert not mut._contains_order_by(query)
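# Hedged sketch: a minimal, stdlib-only re-implementation of the ORDER BY
# detection that the parametrized cases above exercise. The real helper is
# google.cloud.bigquery.job._contains_order_by; this regex is an assumption
# about its behavior, chosen to satisfy the same case/whitespace variations.

```python
import re

def contains_order_by(query):
    # Case-insensitive "ORDER BY", tolerating newlines/tabs between the words.
    return bool(query) and re.search(r"\border\s+by\b", query, re.IGNORECASE) is not None

assert not contains_order_by(None)
assert not contains_order_by("select name, age from table LIMIT 10;")
assert contains_order_by("SelecT name, age froM table OrdeR \n\t BY other_column;")
```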
@pytest.mark.skipif(pandas is None, reason="Requires `pandas`")
@pytest.mark.skipif(
bigquery_storage is None, reason="Requires `google-cloud-bigquery-storage`"
)
@pytest.mark.parametrize(
"query",
(
"select name, age from table order by other_column;",
"Select name, age From table Order By other_column;",
"SELECT name, age FROM table ORDER BY other_column;",
"select name, age from table order\nby other_column;",
"Select name, age From table Order\nBy other_column;",
"SELECT name, age FROM table ORDER\nBY other_column;",
"SelecT name, age froM table OrdeR \n\t BY other_column;",
),
)
def test_to_dataframe_bqstorage_preserve_order(query, table_read_options_kwarg):
from google.cloud.bigquery.job import QueryJob as target_class
job_resource = _make_job_resource(
project_id="test-project", job_type="query", ended=True
)
job_resource["configuration"]["query"]["query"] = query
job_resource["status"] = {"state": "DONE"}
get_query_results_resource = {
"jobComplete": True,
"jobReference": {"projectId": "test-project", "jobId": "test-job"},
"schema": {
"fields": [
{"name": "name", "type": "STRING", "mode": "NULLABLE"},
{"name": "age", "type": "INTEGER", "mode": "NULLABLE"},
]
},
"totalRows": "4",
}
connection = _make_connection(get_query_results_resource, job_resource)
client = _make_client(connection=connection)
job = target_class.from_api_repr(job_resource, client)
bqstorage_client = mock.create_autospec(bigquery_storage.BigQueryReadClient)
session = bigquery_storage.types.ReadSession()
session.avro_schema.schema = json.dumps(
{
"type": "record",
"name": "__root__",
"fields": [
{"name": "name", "type": ["null", "string"]},
{"name": "age", "type": ["null", "long"]},
],
}
)
bqstorage_client.create_read_session.return_value = session
job.to_dataframe(bqstorage_client=bqstorage_client)
destination_table = "projects/{projectId}/datasets/{datasetId}/tables/{tableId}".format(
**job_resource["configuration"]["query"]["destinationTable"]
)
expected_session = bigquery_storage.ReadSession(
table=destination_table,
data_format=bigquery_storage.DataFormat.ARROW,
**table_read_options_kwarg,
)
bqstorage_client.create_read_session.assert_called_once_with(
parent="projects/test-project",
read_session=expected_session,
max_stream_count=1, # Use a single stream to preserve row order.
)
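# Hedged sketch of the mocking pattern used above, with a stand-in class
# instead of the real BigQueryReadClient: mock.create_autospec builds a mock
# that enforces the target's call signature, so assert_called_once_with can
# verify both the arguments and that exactly one read session was requested.

```python
from unittest import mock

class FakeReadClient:
    # Hypothetical stand-in; the tests autospec bigquery_storage.BigQueryReadClient.
    def create_read_session(self, parent, read_session, max_stream_count):
        pass

client = mock.create_autospec(FakeReadClient)
client.create_read_session(parent="projects/p", read_session=None, max_stream_count=1)
# Passes only if called once, with exactly these keyword arguments.
client.create_read_session.assert_called_once_with(
    parent="projects/p", read_session=None, max_stream_count=1
)
```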
@pytest.mark.skipif(pyarrow is None, reason="Requires `pyarrow`")
def test_to_arrow():
from google.cloud.bigquery.job import QueryJob as target_class
begun_resource = _make_job_resource(job_type="query")
query_resource = {
"jobComplete": True,
"jobReference": begun_resource["jobReference"],
"totalRows": "4",
"schema": {
"fields": [
{
"name": "spouse_1",
"type": "RECORD",
"fields": [
{"name": "name", "type": "STRING", "mode": "NULLABLE"},
{"name": "age", "type": "INTEGER", "mode": "NULLABLE"},
],
},
{
"name": "spouse_2",
"type": "RECORD",
"fields": [
{"name": "name", "type": "STRING", "mode": "NULLABLE"},
{"name": "age", "type": "INTEGER", "mode": "NULLABLE"},
],
},
]
},
}
tabledata_resource = {
"rows": [
{
"f": [
{"v": {"f": [{"v": "Phred Phlyntstone"}, {"v": "32"}]}},
{"v": {"f": [{"v": "Wylma Phlyntstone"}, {"v": "29"}]}},
]
},
{
"f": [
{"v": {"f": [{"v": "Bhettye Rhubble"}, {"v": "27"}]}},
{"v": {"f": [{"v": "Bharney Rhubble"}, {"v": "33"}]}},
]
},
]
}
done_resource = copy.deepcopy(begun_resource)
done_resource["status"] = {"state": "DONE"}
connection = _make_connection(
begun_resource, query_resource, done_resource, tabledata_resource
)
client = _make_client(connection=connection)
job = target_class.from_api_repr(begun_resource, client)
tbl = job.to_arrow(create_bqstorage_client=False)
assert isinstance(tbl, pyarrow.Table)
assert tbl.num_rows == 2
# Check the schema.
assert tbl.schema[0].name == "spouse_1"
assert tbl.schema[0].type[0].name == "name"
assert tbl.schema[0].type[1].name == "age"
assert pyarrow.types.is_struct(tbl.schema[0].type)
assert pyarrow.types.is_string(tbl.schema[0].type[0].type)
assert pyarrow.types.is_int64(tbl.schema[0].type[1].type)
assert tbl.schema[1].name == "spouse_2"
assert tbl.schema[1].type[0].name == "name"
assert tbl.schema[1].type[1].name == "age"
assert pyarrow.types.is_struct(tbl.schema[1].type)
assert pyarrow.types.is_string(tbl.schema[1].type[0].type)
assert pyarrow.types.is_int64(tbl.schema[1].type[1].type)
# Check the data.
tbl_data = tbl.to_pydict()
spouse_1 = tbl_data["spouse_1"]
assert spouse_1 == [
{"name": "Phred Phlyntstone", "age": 32},
{"name": "Bhettye Rhubble", "age": 27},
]
spouse_2 = tbl_data["spouse_2"]
assert spouse_2 == [
{"name": "Wylma Phlyntstone", "age": 29},
{"name": "Bharney Rhubble", "age": 33},
]
@pytest.mark.skipif(pyarrow is None, reason="Requires `pyarrow`")
def test_to_arrow_max_results_no_progress_bar():
from google.cloud.bigquery import table
from google.cloud.bigquery.job import QueryJob as target_class
from google.cloud.bigquery.schema import SchemaField
connection = _make_connection({})
client = _make_client(connection=connection)
begun_resource = _make_job_resource(job_type="query")
job = target_class.from_api_repr(begun_resource, client)
schema = [
SchemaField("name", "STRING", mode="REQUIRED"),
SchemaField("age", "INTEGER", mode="REQUIRED"),
]
rows = [
{"f": [{"v": "Bharney Rhubble"}, {"v": "33"}]},
{"f": [{"v": "Wylma Phlyntstone"}, {"v": "29"}]},
]
path = "/foo"
api_request = mock.Mock(return_value={"rows": rows})
row_iterator = table.RowIterator(client, api_request, path, schema)
result_patch = mock.patch(
"google.cloud.bigquery.job.QueryJob.result", return_value=row_iterator,
)
with result_patch as result_patch_tqdm:
tbl = job.to_arrow(create_bqstorage_client=False, max_results=123)
result_patch_tqdm.assert_called_once_with(max_results=123)
assert isinstance(tbl, pyarrow.Table)
assert tbl.num_rows == 2
@pytest.mark.skipif(pyarrow is None, reason="Requires `pyarrow`")
@pytest.mark.skipif(tqdm is None, reason="Requires `tqdm`")
def test_to_arrow_w_tqdm_w_query_plan():
from google.cloud.bigquery import table
from google.cloud.bigquery.job import QueryJob as target_class
from google.cloud.bigquery.schema import SchemaField
from google.cloud.bigquery._tqdm_helpers import _PROGRESS_BAR_UPDATE_INTERVAL
begun_resource = _make_job_resource(job_type="query")
rows = [
{"f": [{"v": "Bharney Rhubble"}, {"v": "33"}]},
{"f": [{"v": "Wylma Phlyntstone"}, {"v": "29"}]},
]
schema = [
SchemaField("name", "STRING", mode="REQUIRED"),
SchemaField("age", "INTEGER", mode="REQUIRED"),
]
connection = _make_connection({})
client = _make_client(connection=connection)
job = target_class.from_api_repr(begun_resource, client)
path = "/foo"
api_request = mock.Mock(return_value={"rows": rows})
row_iterator = table.RowIterator(client, api_request, path, schema)
job._properties["statistics"] = {
"query": {
"queryPlan": [
{"name": "S00: Input", "id": "0", "status": "COMPLETE"},
{"name": "S01: Output", "id": "1", "status": "COMPLETE"},
]
},
}
reload_patch = mock.patch(
"google.cloud.bigquery.job._AsyncJob.reload", autospec=True
)
result_patch = mock.patch(
"google.cloud.bigquery.job.QueryJob.result",
side_effect=[
concurrent.futures.TimeoutError,
concurrent.futures.TimeoutError,
row_iterator,
],
)
with result_patch as result_patch_tqdm, reload_patch:
tbl = job.to_arrow(progress_bar_type="tqdm", create_bqstorage_client=False)
assert result_patch_tqdm.call_count == 3
assert isinstance(tbl, pyarrow.Table)
assert tbl.num_rows == 2
result_patch_tqdm.assert_called_with(
timeout=_PROGRESS_BAR_UPDATE_INTERVAL, max_results=None
)
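# Hedged sketch of the polling behavior the side_effect list above simulates:
# QueryJob.result() raises TimeoutError on each progress-bar refresh interval
# until the job finishes, at which point the row iterator is returned. Names
# here (poll, fn) are illustrative, not from the library.

```python
import concurrent.futures
from unittest import mock

# Two timeouts, then a result: mirrors side_effect=[TimeoutError, TimeoutError, rows].
result = mock.Mock(
    side_effect=[
        concurrent.futures.TimeoutError,
        concurrent.futures.TimeoutError,
        "rows",
    ]
)

def poll(fn):
    # Retry on timeout, as the tqdm helper does between progress updates.
    while True:
        try:
            return fn()
        except concurrent.futures.TimeoutError:
            continue

assert poll(result) == "rows"
assert result.call_count == 3
```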
@pytest.mark.skipif(pyarrow is None, reason="Requires `pyarrow`")
@pytest.mark.skipif(tqdm is None, reason="Requires `tqdm`")
def test_to_arrow_w_tqdm_w_pending_status():
from google.cloud.bigquery import table
from google.cloud.bigquery.job import QueryJob as target_class
from google.cloud.bigquery.schema import SchemaField
from google.cloud.bigquery._tqdm_helpers import _PROGRESS_BAR_UPDATE_INTERVAL
begun_resource = _make_job_resource(job_type="query")
rows = [
{"f": [{"v": "Bharney Rhubble"}, {"v": "33"}]},
{"f": [{"v": "Wylma Phlyntstone"}, {"v": "29"}]},
]
schema = [
SchemaField("name", "STRING", mode="REQUIRED"),
SchemaField("age", "INTEGER", mode="REQUIRED"),
]
connection = _make_connection({})
client = _make_client(connection=connection)
job = target_class.from_api_repr(begun_resource, client)
path = "/foo"
api_request = mock.Mock(return_value={"rows": rows})
row_iterator = table.RowIterator(client, api_request, path, schema)
job._properties["statistics"] = {
"query": {
"queryPlan": [
{"name": "S00: Input", "id": "0", "status": "PENDING"},
{"name": "S00: Input", "id": "1", "status": "COMPLETE"},
]
},
}
reload_patch = mock.patch(
"google.cloud.bigquery.job._AsyncJob.reload", autospec=True
)
result_patch = mock.patch(
"google.cloud.bigquery.job.QueryJob.result",
side_effect=[concurrent.futures.TimeoutError, row_iterator],
)
with result_patch as result_patch_tqdm, reload_patch:
tbl = job.to_arrow(progress_bar_type="tqdm", create_bqstorage_client=False)
assert result_patch_tqdm.call_count == 2
assert isinstance(tbl, pyarrow.Table)
assert tbl.num_rows == 2
result_patch_tqdm.assert_called_with(
timeout=_PROGRESS_BAR_UPDATE_INTERVAL, max_results=None
)
@pytest.mark.skipif(pyarrow is None, reason="Requires `pyarrow`")
@pytest.mark.skipif(tqdm is None, reason="Requires `tqdm`")
def test_to_arrow_w_tqdm_wo_query_plan():
from google.cloud.bigquery import table
from google.cloud.bigquery.job import QueryJob as target_class
from google.cloud.bigquery.schema import SchemaField
begun_resource = _make_job_resource(job_type="query")
rows = [
{"f": [{"v": "Bharney Rhubble"}, {"v": "33"}]},
{"f": [{"v": "Wylma Phlyntstone"}, {"v": "29"}]},
]
schema = [
SchemaField("name", "STRING", mode="REQUIRED"),
SchemaField("age", "INTEGER", mode="REQUIRED"),
]
connection = _make_connection({})
client = _make_client(connection=connection)
job = target_class.from_api_repr(begun_resource, client)
path = "/foo"
api_request = mock.Mock(return_value={"rows": rows})
row_iterator = table.RowIterator(client, api_request, path, schema)
reload_patch = mock.patch(
"google.cloud.bigquery.job._AsyncJob.reload", autospec=True
)
result_patch = mock.patch(
"google.cloud.bigquery.job.QueryJob.result",
side_effect=[concurrent.futures.TimeoutError, row_iterator],
)
with result_patch as result_patch_tqdm, reload_patch:
tbl = job.to_arrow(progress_bar_type="tqdm", create_bqstorage_client=False)
assert result_patch_tqdm.call_count == 2
assert isinstance(tbl, pyarrow.Table)
assert tbl.num_rows == 2
result_patch_tqdm.assert_called()
@pytest.mark.skipif(pandas is None, reason="Requires `pandas`")
def test_to_dataframe():
from google.cloud.bigquery.job import QueryJob as target_class
begun_resource = _make_job_resource(job_type="query")
query_resource = {
"jobComplete": True,
"jobReference": begun_resource["jobReference"],
"totalRows": "4",
"schema": {
"fields": [
{"name": "name", "type": "STRING", "mode": "NULLABLE"},
{"name": "age", "type": "INTEGER", "mode": "NULLABLE"},
]
},
}
tabledata_resource = {
"rows": [
{"f": [{"v": "Phred Phlyntstone"}, {"v": "32"}]},
{"f": [{"v": "Bharney Rhubble"}, {"v": "33"}]},
{"f": [{"v": "Wylma Phlyntstone"}, {"v": "29"}]},
{"f": [{"v": "Bhettye Rhubble"}, {"v": "27"}]},
]
}
done_resource = copy.deepcopy(begun_resource)
done_resource["status"] = {"state": "DONE"}
connection = _make_connection(
begun_resource, query_resource, done_resource, tabledata_resource
)
client = _make_client(connection=connection)
job = target_class.from_api_repr(begun_resource, client)
df = job.to_dataframe(create_bqstorage_client=False)
assert isinstance(df, pandas.DataFrame)
assert len(df) == 4 # verify the number of rows
assert list(df) == ["name", "age"] # verify the column names
@pytest.mark.skipif(pandas is None, reason="Requires `pandas`")
def test_to_dataframe_ddl_query():
from google.cloud.bigquery.job import QueryJob as target_class
# Destination table may have no schema for some DDL and DML queries.
resource = _make_job_resource(job_type="query", ended=True)
query_resource = {
"jobComplete": True,
"jobReference": resource["jobReference"],
"schema": {"fields": []},
}
connection = _make_connection(query_resource)
client = _make_client(connection=connection)
job = target_class.from_api_repr(resource, client)
df = job.to_dataframe()
assert len(df) == 0
@pytest.mark.skipif(pandas is None, reason="Requires `pandas`")
@pytest.mark.skipif(
bigquery_storage is None, reason="Requires `google-cloud-bigquery-storage`"
)
def test_to_dataframe_bqstorage(table_read_options_kwarg):
from google.cloud.bigquery.job import QueryJob as target_class
resource = _make_job_resource(job_type="query", ended=True)
query_resource = {
"jobComplete": True,
"jobReference": resource["jobReference"],
"totalRows": "4",
"schema": {
"fields": [
{"name": "name", "type": "STRING", "mode": "NULLABLE"},
{"name": "age", "type": "INTEGER", "mode": "NULLABLE"},
]
},
}
connection = _make_connection(query_resource)
client = _make_client(connection=connection)
job = target_class.from_api_repr(resource, client)
bqstorage_client = mock.create_autospec(bigquery_storage.BigQueryReadClient)
session = bigquery_storage.types.ReadSession()
session.avro_schema.schema = json.dumps(
{
"type": "record",
"name": "__root__",
"fields": [
{"name": "name", "type": ["null", "string"]},
{"name": "age", "type": ["null", "long"]},
],
}
)
bqstorage_client.create_read_session.return_value = session
job.to_dataframe(bqstorage_client=bqstorage_client)
destination_table = "projects/{projectId}/datasets/{datasetId}/tables/{tableId}".format(
**resource["configuration"]["query"]["destinationTable"]
)
expected_session = bigquery_storage.ReadSession(
table=destination_table,
data_format=bigquery_storage.DataFormat.ARROW,
**table_read_options_kwarg,
)
bqstorage_client.create_read_session.assert_called_once_with(
parent=f"projects/{client.project}",
read_session=expected_session,
max_stream_count=0, # Use default number of streams for best performance.
)
@pytest.mark.skipif(pandas is None, reason="Requires `pandas`")
@pytest.mark.skipif(
bigquery_storage is None, reason="Requires `google-cloud-bigquery-storage`"
)
def test_to_dataframe_bqstorage_no_pyarrow_compression():
from google.cloud.bigquery.job import QueryJob as target_class
resource = _make_job_resource(job_type="query", ended=True)
query_resource = {
"jobComplete": True,
"jobReference": resource["jobReference"],
"totalRows": "4",
"schema": {"fields": [{"name": "name", "type": "STRING", "mode": "NULLABLE"}]},
}
connection = _make_connection(query_resource)
client = _make_client(connection=connection)
job = target_class.from_api_repr(resource, client)
bqstorage_client = mock.create_autospec(bigquery_storage.BigQueryReadClient)
session = bigquery_storage.types.ReadSession()
session.avro_schema.schema = json.dumps(
{
"type": "record",
"name": "__root__",
"fields": [{"name": "name", "type": ["null", "string"]}],
}
)
bqstorage_client.create_read_session.return_value = session
with mock.patch(
"google.cloud.bigquery._pandas_helpers._ARROW_COMPRESSION_SUPPORT", new=False
):
job.to_dataframe(bqstorage_client=bqstorage_client)
destination_table = "projects/{projectId}/datasets/{datasetId}/tables/{tableId}".format(
**resource["configuration"]["query"]["destinationTable"]
)
expected_session = bigquery_storage.ReadSession(
table=destination_table, data_format=bigquery_storage.DataFormat.ARROW,
)
bqstorage_client.create_read_session.assert_called_once_with(
parent=f"projects/{client.project}",
read_session=expected_session,
max_stream_count=0,
)
@pytest.mark.skipif(pandas is None, reason="Requires `pandas`")
def test_to_dataframe_column_dtypes():
from google.cloud.bigquery.job import QueryJob as target_class
begun_resource = _make_job_resource(job_type="query")
query_resource = {
"jobComplete": True,
"jobReference": begun_resource["jobReference"],
"totalRows": "4",
"schema": {
"fields": [
{"name": "start_timestamp", "type": "TIMESTAMP"},
{"name": "seconds", "type": "INT64"},
{"name": "miles", "type": "FLOAT64"},
{"name": "km", "type": "FLOAT64"},
{"name": "payment_type", "type": "STRING"},
{"name": "complete", "type": "BOOL"},
{"name": "date", "type": "DATE"},
]
},
}
row_data = [
        [
            "1433836800000000",
            "420",
            "1.1",
            "1.77",
            "Cash",
            "true",
            "1999-12-01",
        ],
["1387811700000000", "2580", "17.7", "28.5", "Cash", "false", "1953-06-14"],
["1385565300000000", "2280", "4.4", "7.1", "Credit", "true", "1981-11-04"],
]
rows = [{"f": [{"v": field} for field in row]} for row in row_data]
query_resource["rows"] = rows
done_resource = copy.deepcopy(begun_resource)
done_resource["status"] = {"state": "DONE"}
connection = _make_connection(
begun_resource, query_resource, done_resource, query_resource
)
client = _make_client(connection=connection)
job = target_class.from_api_repr(begun_resource, client)
df = job.to_dataframe(dtypes={"km": "float16"}, create_bqstorage_client=False)
assert isinstance(df, pandas.DataFrame)
assert len(df) == 3 # verify the number of rows
exp_columns = [field["name"] for field in query_resource["schema"]["fields"]]
assert list(df) == exp_columns # verify the column names
assert df.start_timestamp.dtype.name == "datetime64[ns, UTC]"
assert df.seconds.dtype.name == "int64"
assert df.miles.dtype.name == "float64"
assert df.km.dtype.name == "float16"
assert df.payment_type.dtype.name == "object"
assert df.complete.dtype.name == "bool"
assert df.date.dtype.name == "object"
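# Hedged sketch of the REST tabledata encoding used throughout these tests:
# each row is a {"f": [...]} dict, each cell a {"v": value} dict, and scalar
# values arrive as strings. The list comprehension below is the same one the
# tests use to build `rows` from `row_data`.

```python
row_data = [["1433836800000000", "420"], ["1387811700000000", "2580"]]
# Wrap every cell in {"v": ...} and every row in {"f": [...]}.
rows = [{"f": [{"v": field} for field in row]} for row in row_data]
assert rows[0] == {"f": [{"v": "1433836800000000"}, {"v": "420"}]}
assert len(rows) == 2
```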
@pytest.mark.skipif(pyarrow is None, reason="Requires `pyarrow`")
@pytest.mark.skipif(pandas is None, reason="Requires `pandas`")
def test_to_dataframe_column_date_dtypes():
from google.cloud.bigquery.job import QueryJob as target_class
begun_resource = _make_job_resource(job_type="query")
query_resource = {
"jobComplete": True,
"jobReference": begun_resource["jobReference"],
"totalRows": "1",
"schema": {"fields": [{"name": "date", "type": "DATE"}]},
}
row_data = [
["1999-12-01"],
]
rows = [{"f": [{"v": field} for field in row]} for row in row_data]
query_resource["rows"] = rows
done_resource = copy.deepcopy(begun_resource)
done_resource["status"] = {"state": "DONE"}
connection = _make_connection(
begun_resource, query_resource, done_resource, query_resource
)
client = _make_client(connection=connection)
job = target_class.from_api_repr(begun_resource, client)
df = job.to_dataframe(date_as_object=False, create_bqstorage_client=False)
assert isinstance(df, pandas.DataFrame)
assert len(df) == 1 # verify the number of rows
exp_columns = [field["name"] for field in query_resource["schema"]["fields"]]
assert list(df) == exp_columns # verify the column names
assert df.date.dtype.name == "datetime64[ns]"
@pytest.mark.skipif(pandas is None, reason="Requires `pandas`")
@pytest.mark.skipif(tqdm is None, reason="Requires `tqdm`")
@mock.patch("tqdm.tqdm")
def test_to_dataframe_with_progress_bar(tqdm_mock):
from google.cloud.bigquery.job import QueryJob as target_class
begun_resource = _make_job_resource(job_type="query")
query_resource = {
"jobComplete": True,
"jobReference": begun_resource["jobReference"],
"totalRows": "4",
"schema": {"fields": [{"name": "name", "type": "STRING", "mode": "NULLABLE"}]},
}
done_resource = copy.deepcopy(begun_resource)
done_resource["status"] = {"state": "DONE"}
connection = _make_connection(
begun_resource, query_resource, done_resource, query_resource, query_resource,
)
client = _make_client(connection=connection)
job = target_class.from_api_repr(begun_resource, client)
job.to_dataframe(progress_bar_type=None, create_bqstorage_client=False)
tqdm_mock.assert_not_called()
job.to_dataframe(progress_bar_type="tqdm", create_bqstorage_client=False)
tqdm_mock.assert_called()
@pytest.mark.skipif(pandas is None, reason="Requires `pandas`")
@pytest.mark.skipif(tqdm is None, reason="Requires `tqdm`")
def test_to_dataframe_w_tqdm_pending():
from google.cloud.bigquery import table
from google.cloud.bigquery.job import QueryJob as target_class
from google.cloud.bigquery.schema import SchemaField
from google.cloud.bigquery._tqdm_helpers import _PROGRESS_BAR_UPDATE_INTERVAL
begun_resource = _make_job_resource(job_type="query")
schema = [
SchemaField("name", "STRING", mode="NULLABLE"),
SchemaField("age", "INTEGER", mode="NULLABLE"),
]
rows = [
{"f": [{"v": "Phred Phlyntstone"}, {"v": "32"}]},
{"f": [{"v": "Bharney Rhubble"}, {"v": "33"}]},
{"f": [{"v": "Wylma Phlyntstone"}, {"v": "29"}]},
{"f": [{"v": "Bhettye Rhubble"}, {"v": "27"}]},
]
connection = _make_connection({})
client = _make_client(connection=connection)
job = target_class.from_api_repr(begun_resource, client)
path = "/foo"
api_request = mock.Mock(return_value={"rows": rows})
row_iterator = table.RowIterator(client, api_request, path, schema)
job._properties["statistics"] = {
"query": {
"queryPlan": [
{"name": "S00: Input", "id": "0", "status": "PRNDING"},
{"name": "S01: Output", "id": "1", "status": "COMPLETE"},
]
},
}
reload_patch = mock.patch(
"google.cloud.bigquery.job._AsyncJob.reload", autospec=True
)
result_patch = mock.patch(
"google.cloud.bigquery.job.QueryJob.result",
side_effect=[concurrent.futures.TimeoutError, row_iterator],
)
with result_patch as result_patch_tqdm, reload_patch:
df = job.to_dataframe(progress_bar_type="tqdm", create_bqstorage_client=False)
assert result_patch_tqdm.call_count == 2
assert isinstance(df, pandas.DataFrame)
assert len(df) == 4 # verify the number of rows
assert list(df) == ["name", "age"] # verify the column names
result_patch_tqdm.assert_called_with(
timeout=_PROGRESS_BAR_UPDATE_INTERVAL, max_results=None
)
@pytest.mark.skipif(pandas is None, reason="Requires `pandas`")
@pytest.mark.skipif(tqdm is None, reason="Requires `tqdm`")
def test_to_dataframe_w_tqdm():
from google.cloud.bigquery import table
from google.cloud.bigquery.job import QueryJob as target_class
from google.cloud.bigquery.schema import SchemaField
from google.cloud.bigquery._tqdm_helpers import _PROGRESS_BAR_UPDATE_INTERVAL
begun_resource = _make_job_resource(job_type="query")
schema = [
SchemaField("name", "STRING", mode="NULLABLE"),
SchemaField("age", "INTEGER", mode="NULLABLE"),
]
rows = [
{"f": [{"v": "Phred Phlyntstone"}, {"v": "32"}]},
{"f": [{"v": "Bharney Rhubble"}, {"v": "33"}]},
{"f": [{"v": "Wylma Phlyntstone"}, {"v": "29"}]},
{"f": [{"v": "Bhettye Rhubble"}, {"v": "27"}]},
]
connection = _make_connection({})
client = _make_client(connection=connection)
job = target_class.from_api_repr(begun_resource, client)
path = "/foo"
api_request = mock.Mock(return_value={"rows": rows})
row_iterator = table.RowIterator(client, api_request, path, schema)
job._properties["statistics"] = {
"query": {
"queryPlan": [
{"name": "S00: Input", "id": "0", "status": "COMPLETE"},
{"name": "S01: Output", "id": "1", "status": "COMPLETE"},
]
},
}
reload_patch = mock.patch(
"google.cloud.bigquery.job._AsyncJob.reload", autospec=True
)
result_patch = mock.patch(
"google.cloud.bigquery.job.QueryJob.result",
side_effect=[
concurrent.futures.TimeoutError,
concurrent.futures.TimeoutError,
row_iterator,
],
)
with result_patch as result_patch_tqdm, reload_patch:
df = job.to_dataframe(progress_bar_type="tqdm", create_bqstorage_client=False)
assert result_patch_tqdm.call_count == 3
assert isinstance(df, pandas.DataFrame)
assert len(df) == 4 # verify the number of rows
    assert list(df) == ["name", "age"]  # verify the column names
result_patch_tqdm.assert_called_with(
timeout=_PROGRESS_BAR_UPDATE_INTERVAL, max_results=None
)
@pytest.mark.skipif(pandas is None, reason="Requires `pandas`")
@pytest.mark.skipif(tqdm is None, reason="Requires `tqdm`")
def test_to_dataframe_w_tqdm_max_results():
from google.cloud.bigquery import table
from google.cloud.bigquery.job import QueryJob as target_class
from google.cloud.bigquery.schema import SchemaField
from google.cloud.bigquery._tqdm_helpers import _PROGRESS_BAR_UPDATE_INTERVAL
begun_resource = _make_job_resource(job_type="query")
schema = [
SchemaField("name", "STRING", mode="NULLABLE"),
SchemaField("age", "INTEGER", mode="NULLABLE"),
]
rows = [{"f": [{"v": "Phred Phlyntstone"}, {"v": "32"}]}]
connection = _make_connection({})
client = _make_client(connection=connection)
job = target_class.from_api_repr(begun_resource, client)
path = "/foo"
api_request = mock.Mock(return_value={"rows": rows})
row_iterator = table.RowIterator(client, api_request, path, schema)
job._properties["statistics"] = {
"query": {
"queryPlan": [
{"name": "S00: Input", "id": "0", "status": "COMPLETE"},
{"name": "S01: Output", "id": "1", "status": "COMPLETE"},
]
},
}
reload_patch = mock.patch(
"google.cloud.bigquery.job._AsyncJob.reload", autospec=True
)
result_patch = mock.patch(
"google.cloud.bigquery.job.QueryJob.result",
side_effect=[concurrent.futures.TimeoutError, row_iterator],
)
with result_patch as result_patch_tqdm, reload_patch:
job.to_dataframe(
progress_bar_type="tqdm", create_bqstorage_client=False, max_results=3
)
assert result_patch_tqdm.call_count == 2
result_patch_tqdm.assert_called_with(
timeout=_PROGRESS_BAR_UPDATE_INTERVAL, max_results=3
)
# filter_functions.py (winterest/f-function, MIT)
import torch
import torch.nn as nn
class AXA(nn.Module):
def __init__(self,dim_feature = 256,dim_filter = 3, device='cuda:1'):
super(AXA, self).__init__()
self.dim_feature = dim_feature
self.dim_filter = dim_filter
#self.A =
self.A = nn.Parameter(0.1 * torch.randn(self.dim_filter, self.dim_feature)-0.05)
#self.A = torch.zeros(self.dim_filter, self.dim_feature).requires_grad_(True)
#self.A = self.A.to(device)
def forward(self, X):
#print(self.A.size())
#print(X.size())
#print(self.A)
AX = self.A@X
return AX@torch.transpose(self.A,0,1)
class BXtBXB(nn.Module):
    def __init__(self, dim_feature=256, dim_filter=3):
        super(BXtBXB, self).__init__()
        self.dim_feature = dim_feature
        self.dim_filter = dim_filter
        # Register B as an nn.Parameter (as AXA does for A) so it appears in
        # module.parameters() and is updated by the optimizer.
        self.B = nn.Parameter(torch.randn(self.dim_filter, self.dim_feature))
    def forward(self, X):
        # Quadratic form in the filter matrix: B @ X @ B^T. The original body
        # called torch.transpose without dims and had unreachable code
        # referencing a nonexistent self.A; both are fixed here.
        BX = self.B @ X
        return BX @ torch.transpose(self.B, 0, 1)
class tanhAXA(nn.Module):
def __init__(self,dim_feature = 256,dim_filter = 3, device='cuda:1'):
super(tanhAXA, self).__init__()
self.dim_feature = dim_feature
self.dim_filter = dim_filter
#self.A =
self.A = nn.Parameter(0.1 * torch.randn(self.dim_filter, self.dim_feature)-0.05)
#self.A = nn.Parameter(self.A)
#self.A = torch.zeros(self.dim_filter, self.dim_feature).requires_grad_(True)
#self.A = self.A.to(device)
self.tanh = nn.Tanh()
def forward(self, X):
#print(self.A.size())
#print(X.size())
#print(self.A)
AX = self.A@X
return self.tanh(AX@torch.transpose(self.A,0,1))
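# Hedged, torch-free sketch of what AXA.forward computes: the quadratic form
# A @ X @ A.T, shown with plain nested lists and toy 2x2 numbers (not real
# weights) so the algebra is easy to check by hand.

```python
def matmul(A, B):
    # Naive matrix product over nested lists.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 0], [0, 2]]  # 2x2 "filter" matrix
X = [[1, 2], [3, 4]]  # 2x2 feature matrix
out = matmul(matmul(A, X), transpose(A))  # A @ X @ A^T
assert out == [[1, 4], [6, 16]]
```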
# Python/random_password/random_password_test.py (toddnguyen47/utility-files, MIT)
import pytest
from random_password import RandomPassword
@pytest.fixture(scope="function")
def random_password():
random_password = RandomPassword()
random_password.reset_index_to_replace()
yield random_password
print("Teardown now!")
def string_is_all_lowercase(str_input: str) -> bool:
return all(x.islower() for x in str_input)
def test_generate_lowercase_password(random_password):
random_password.set_current_random_password_all_lowercase()
s = random_password.current_random_password_list
assert len(s) > 0
assert string_is_all_lowercase(s)
def test_password_has_one_uppercase(random_password):
random_password.set_current_random_password_all_lowercase()
random_password.uppercase_one_char()
list1 = random_password.current_random_password_list
assert len(list1) > 0
assert any(x.isupper() for x in list1)
assert random_password._password_length - 1 == len(random_password._index_to_replace)
def test_password_has_two_numbers(random_password):
random_password.set_current_random_password_all_lowercase()
random_password.uppercase_one_char()
random_password.replace_two_chars_with_number()
list1 = random_password.current_random_password_list
assert len(list1) > 0
assert 2 == sum(1 for x in list1 if x.isdigit())
assert any(x.isdigit() for x in list1)
assert random_password._password_length - 3 == len(random_password._index_to_replace)
def test_password_has_two_special_chars(random_password):
random_password.set_current_random_password_all_lowercase()
random_password.uppercase_one_char()
random_password.replace_two_chars_with_number()
random_password.replace_two_chars_with_special_chars()
list1 = random_password.current_random_password_list
assert len(list1) > 0
assert 2 == sum(
1 for x in list1 if x in random_password.SpecialCharsReplacement()._special_chars_tuple)
assert 2 == sum(1 for x in list1 if x.isdigit())
assert any(x.isdigit() for x in list1)
assert random_password._password_length - 5 == len(random_password._index_to_replace)
def test_five_replacements(random_password):
# Arrange
random_password.set_current_random_password_all_lowercase()
# Act
random_password.replace_five_lowercase_chars()
# Assert
list1 = random_password.current_random_password_list
assert len(list1) > 0
assert 2 == sum(
1 for x in list1 if x in random_password.SpecialCharsReplacement()._special_chars_tuple)
assert 2 == sum(1 for x in list1 if x.isdigit())
assert any(x.isdigit() for x in list1)
assert random_password._password_length - 5 == len(random_password._index_to_replace)
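The digit-count assertions above all use the same generator-sum idiom. A standalone sketch of that idiom (`count_digits` is a hypothetical helper, not part of the suite under test):

```python
def count_digits(chars):
    # Same pattern as the assertions above: count entries that are decimal digits.
    return sum(1 for x in chars if x.isdigit())

sample = list("aB3x_7q")
n = count_digits(sample)
print(n)  # 2
```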
| 36.581081 | 96 | 0.77355 | 379 | 2,707 | 5.118734 | 0.174142 | 0.339175 | 0.030928 | 0.051031 | 0.753093 | 0.753093 | 0.736082 | 0.736082 | 0.681959 | 0.658763 | 0 | 0.0157 | 0.152937 | 2,707 | 73 | 97 | 37.082192 | 0.830353 | 0.006649 | 0 | 0.555556 | 0 | 0 | 0.007821 | 0 | 0 | 0 | 0 | 0 | 0.351852 | 1 | 0.12963 | false | 0.62963 | 0.037037 | 0.018519 | 0.185185 | 0.018519 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
3da2c269f6898f6debd3f82f3b6a828419a392c0 | 21,634 | py | Python | tests/test_content_client.py | danwagnerco/similarweb | 3b87690869bedd695bfd95a58b67011ae9df897d | [
"MIT"
] | 38 | 2015-03-27T13:20:55.000Z | 2022-03-21T14:05:27.000Z | tests/test_content_client.py | danwagnerco/similarweb | 3b87690869bedd695bfd95a58b67011ae9df897d | [
"MIT"
] | 4 | 2015-02-11T15:33:23.000Z | 2017-03-10T15:45:23.000Z | tests/test_content_client.py | danwagnerco/similarweb | 3b87690869bedd695bfd95a58b67011ae9df897d | [
"MIT"
] | 20 | 2015-03-27T13:25:06.000Z | 2021-06-18T09:25:07.000Z | import json
import httpretty
import os
from similarweb import ContentClient
TD = os.path.dirname(os.path.realpath(__file__))
def test_content_client_has_user_key():
client = ContentClient("test_key")
assert client.user_key == "test_key"
def test_content_client_has_base_url():
client = ContentClient("test_key")
assert client.base_url == "https://api.similarweb.com/Site/{0}/v2/"
def test_content_client_has_empty_full_url():
client = ContentClient("test_key")
assert client.full_url == ""
@httpretty.activate
def test_content_client_similar_sites_completes_full_url():
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/similarsites?UserKey=test_key")
f = "{0}/fixtures/content_client_similar_sites_good_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
client.similar_sites("example.com")
assert client.full_url == target_url
@httpretty.activate
def test_content_client_similar_sites_response_from_invalid_api_key():
expected = {"Error": "user_key_invalid"}
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/similarsites?UserKey=invalid_key")
f = "{0}/fixtures/content_client_similar_sites_invalid_api_key_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("invalid_key")
result = client.similar_sites("example.com")
assert result == expected
@httpretty.activate
def test_content_client_similar_sites_response_from_malformed_url():
expected = {"Error": "Malformed or Unknown URL"}
target_url = ("https://api.similarweb.com/Site/"
"bad_url/v2/similarsites?UserKey=test_key")
f = "{0}/fixtures/content_client_similar_sites_url_malformed_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.similar_sites("bad_url")
assert result == expected
# This response is not JSON-formatted
@httpretty.activate
def test_content_client_similar_sites_response_from_malformed_url_incl_http():
expected = {"Error": "Malformed or Unknown URL"}
target_url = ("https://api.similarweb.com/Site/"
"https://example.com/v2/similarsites?UserKey=test_key")
f = "{0}/fixtures/content_client_similar_sites_url_with_http_response.json".format(TD)
with open(f) as data_file:
stringified = data_file.read().replace("\n", "")
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.similar_sites("https://example.com")
assert result == expected
@httpretty.activate
def test_content_client_similar_sites_response_from_empty_response():
expected = {"Error": "Unknown Error"}
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/similarsites?UserKey=test_key")
f = "{0}/fixtures/content_client_similar_sites_empty_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.similar_sites("example.com")
assert result == expected
@httpretty.activate
def test_content_client_similar_sites_response_from_good_inputs():
expected = {"nfl.com": 0.9999999999999999,
"espn.go.com": 0.9999999999998606,
"nhl.com": 0.9999999878602834,
"sportsillustrated.cnn.com": 0.9999872885189645,
"sports.yahoo.com": 0.9999787609071635,
"cbssports.com": 0.9997651564945856,
"golfweb.com": 0.9994886009452536,
"mlb.com": 0.9987758980414373,
"hoopshype.com": 0.9892681920426786,
"msn.foxsports.com": 0.98444827064877,
"insidehoops.com": 0.9704204922805049,
"mlb.mlb.com": 0.9610661670727825,
"sportingnews.com": 0.9379576739746633,
"nba-basketball.org": 0.5895781619019344,
"dimemag.com": 0.5761373928338995,
"sportsline.com": 0.4785488863147692,
"slamonline.com": 0.37097801648129436,
"realgm.com": 0.3262779713759013,
"basketball-reference.com": 0.2913301249701222,
"82games.com": 0.28480732814372367}
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/similarsites?UserKey=test_key")
f = "{0}/fixtures/content_client_similar_sites_good_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.similar_sites("example.com")
assert result == expected
@httpretty.activate
def test_content_client_also_visited_completes_full_url():
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/alsovisited?UserKey=test_key")
f = "{0}/fixtures/content_client_also_visited_good_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
client.also_visited("example.com")
assert client.full_url == target_url
@httpretty.activate
def test_content_client_also_visited_response_from_invalid_api_key():
expected = {"Error": "user_key_invalid"}
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/alsovisited?UserKey=invalid_key")
f = "{0}/fixtures/content_client_also_visited_invalid_api_key_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("invalid_key")
result = client.also_visited("example.com")
assert result == expected
@httpretty.activate
def test_content_client_also_visited_response_from_malformed_url():
expected = {"Error": "Malformed or Unknown URL"}
target_url = ("https://api.similarweb.com/Site/"
"bad_url/v2/alsovisited?UserKey=test_key")
f = "{0}/fixtures/content_client_also_visited_url_malformed_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.also_visited("bad_url")
assert result == expected
# This response is not JSON-formatted
@httpretty.activate
def test_content_client_also_visited_response_from_malformed_url_incl_http():
expected = {"Error": "Malformed or Unknown URL"}
target_url = ("https://api.similarweb.com/Site/"
"https://example.com/v2/alsovisited?UserKey=test_key")
f = "{0}/fixtures/content_client_also_visited_url_with_http_response.json".format(TD)
with open(f) as data_file:
stringified = data_file.read().replace("\n", "")
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.also_visited("https://example.com")
assert result == expected
@httpretty.activate
def test_content_client_also_visited_response_from_empty_response():
expected = {"Error": "Unknown Error"}
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/alsovisited?UserKey=test_key")
f = "{0}/fixtures/content_client_also_visited_empty_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.also_visited("example.com")
assert result == expected
@httpretty.activate
def test_content_client_also_visited_response_from_good_inputs():
expected = {"basketball.fantasysports.yahoo.com": 0.0044233824462893015,
"bleacherreport.com": 0.0040226422900098285,
"nfl.com": 0.003225871488152607,
"nhl.com": 0.0027238867724788027,
"nbaliveonline.tv": 0.0019106016141946106,
"basketball.realgm.com": 0.0019085029774910736,
"basketusa.com": 0.001823751528937848,
"nba-stream.com": 0.0012604654635019507,
"sbnation.com": 0.0010647115089141197,
"games.espn.go.com": 0.00103766904980084,
"espn.go.com": 0.0008876453503041353,
"scores.espn.go.com": 0.0007570183284250613,
"pba.inquirer.net": 0.0004930968059184227,
"rotoworld.com": 0.0004921489592139762}
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/alsovisited?UserKey=test_key")
f = "{0}/fixtures/content_client_also_visited_good_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.also_visited("example.com")
assert result == expected
@httpretty.activate
def test_content_client_tags_completes_full_url():
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/tags?UserKey=test_key")
f = "{0}/fixtures/content_client_tags_good_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
client.tags("example.com")
assert client.full_url == target_url
@httpretty.activate
def test_content_client_tags_response_from_invalid_api_key():
expected = {"Error": "user_key_invalid"}
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/tags?UserKey=invalid_key")
f = "{0}/fixtures/content_client_tags_invalid_api_key_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("invalid_key")
result = client.tags("example.com")
assert result == expected
@httpretty.activate
def test_content_client_tags_response_from_malformed_url():
expected = {"Error": "Malformed or Unknown URL"}
target_url = ("https://api.similarweb.com/Site/"
"bad_url/v2/tags?UserKey=test_key")
f = "{0}/fixtures/content_client_tags_url_malformed_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.tags("bad_url")
assert result == expected
# This response is not JSON-formatted
@httpretty.activate
def test_content_client_tags_response_from_malformed_url_incl_http():
expected = {"Error": "Malformed or Unknown URL"}
target_url = ("https://api.similarweb.com/Site/"
"https://example.com/v2/tags?UserKey=test_key")
f = "{0}/fixtures/content_client_tags_url_with_http_response.json".format(TD)
with open(f) as data_file:
stringified = data_file.read().replace("\n", "")
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.tags("https://example.com")
assert result == expected
@httpretty.activate
def test_content_client_tags_response_from_empty_response():
expected = {"Error": "Unknown Error"}
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/tags?UserKey=test_key")
f = "{0}/fixtures/content_client_tags_empty_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.tags("example.com")
assert result == expected
@httpretty.activate
def test_content_client_tags_response_from_good_inputs():
expected = {"nba": 0.6398514098507464,
"sports": 0.36910410054316395,
"nba draft": 0.3662137380042584,
"basketball": 0.30321123768053937,
"professional sports": 0.2537060998187944,
"us sports": 0.2537060998187944,
"pro": 0.1728851238680308,
"sport": 0.14998747927195202,
"leagues": 0.10235439910323241,
"imported": 0.09014857846589025}
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/tags?UserKey=test_key")
f = "{0}/fixtures/content_client_tags_good_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.tags("example.com")
assert result == expected
@httpretty.activate
def test_content_client_category_completes_full_url():
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/category?UserKey=test_key")
f = "{0}/fixtures/content_client_category_good_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
client.category("example.com")
assert client.full_url == target_url
@httpretty.activate
def test_content_client_category_response_from_invalid_api_key():
expected = {"Error": "user_key_invalid"}
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/category?UserKey=invalid_key")
f = "{0}/fixtures/content_client_category_invalid_api_key_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("invalid_key")
result = client.category("example.com")
assert result == expected
@httpretty.activate
def test_content_client_category_response_from_malformed_url():
expected = {"Error": "Malformed or Unknown URL"}
target_url = ("https://api.similarweb.com/Site/"
"bad_url/v2/category?UserKey=test_key")
f = "{0}/fixtures/content_client_category_url_malformed_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.category("bad_url")
assert result == expected
# This response is not JSON-formatted
@httpretty.activate
def test_content_client_category_response_from_malformed_url_incl_http():
expected = {"Error": "Malformed or Unknown URL"}
target_url = ("https://api.similarweb.com/Site/"
"https://example.com/v2/category?UserKey=test_key")
f = "{0}/fixtures/content_client_category_url_with_http_response.json".format(TD)
with open(f) as data_file:
stringified = data_file.read().replace("\n", "")
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.category("https://example.com")
assert result == expected
@httpretty.activate
def test_content_client_category_response_from_empty_response():
expected = {"Error": "Unknown Error"}
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/category?UserKey=test_key")
f = "{0}/fixtures/content_client_category_empty_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.category("example.com")
assert result == expected
@httpretty.activate
def test_content_client_category_response_from_good_inputs():
expected = {"Category": "Sports/Basketball"}
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/category?UserKey=test_key")
f = "{0}/fixtures/content_client_category_good_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.category("example.com")
assert result == expected
@httpretty.activate
def test_content_client_category_rank_completes_full_url():
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/categoryrank?UserKey=test_key")
f = "{0}/fixtures/content_client_category_rank_good_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
client.category_rank("example.com")
assert client.full_url == target_url
@httpretty.activate
def test_content_client_category_rank_response_from_invalid_api_key():
expected = {"Error": "user_key_invalid"}
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/categoryrank?UserKey=invalid_key")
f = "{0}/fixtures/content_client_category_rank_invalid_api_key_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("invalid_key")
result = client.category_rank("example.com")
assert result == expected
@httpretty.activate
def test_content_client_category_rank_response_from_malformed_url():
expected = {"Error": "Malformed or Unknown URL"}
target_url = ("https://api.similarweb.com/Site/"
"bad_url/v2/categoryrank?UserKey=test_key")
f = "{0}/fixtures/content_client_category_rank_url_malformed_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.category_rank("bad_url")
assert result == expected
# This response is not JSON-formatted
@httpretty.activate
def test_content_client_category_rank_response_from_malformed_url_incl_http():
expected = {"Error": "Malformed or Unknown URL"}
target_url = ("https://api.similarweb.com/Site/"
"https://example.com/v2/categoryrank?UserKey=test_key")
f = "{0}/fixtures/content_client_category_rank_url_with_http_response.json".format(TD)
with open(f) as data_file:
stringified = data_file.read().replace("\n", "")
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.category_rank("https://example.com")
assert result == expected
@httpretty.activate
def test_content_client_category_rank_response_from_empty_response():
expected = {"Error": "Unknown Error"}
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/categoryrank?UserKey=test_key")
f = "{0}/fixtures/content_client_category_rank_empty_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.category_rank("example.com")
assert result == expected
@httpretty.activate
def test_content_client_category_rank_response_from_good_inputs():
expected = {"Category": "Sports/Basketball",
"CategoryRank": 1}
target_url = ("https://api.similarweb.com/Site/"
"example.com/v2/categoryrank?UserKey=test_key")
f = "{0}/fixtures/content_client_category_rank_good_response.json".format(TD)
with open(f) as data_file:
stringified = json.dumps(json.load(data_file))
httpretty.register_uri(httpretty.GET, target_url, body=stringified)
client = ContentClient("test_key")
result = client.category_rank("example.com")
assert result == expected
| 41.845261 | 92 | 0.688407 | 2,611 | 21,634 | 5.44121 | 0.064343 | 0.041177 | 0.032519 | 0.046456 | 0.897163 | 0.888224 | 0.881678 | 0.875625 | 0.860843 | 0.85113 | 0 | 0.049052 | 0.193353 | 21,634 | 516 | 93 | 41.926357 | 0.765056 | 0.008274 | 0 | 0.664234 | 0 | 0 | 0.274944 | 0.139593 | 0 | 0 | 0 | 0 | 0.080292 | 1 | 0.080292 | false | 0 | 0.012165 | 0 | 0.092457 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3dc834ddee89eb0053b889ca0d67729f78ee0439 | 7,753 | py | Python | Packs/CTIX/Integrations/CTIX/CTIX_test.py | mazmat-panw/content | 024a65c1dea2548e2637a9cbbe54966e9e34a722 | [
"MIT"
] | 2 | 2021-12-06T21:38:24.000Z | 2022-01-13T08:23:36.000Z | Packs/CTIX/Integrations/CTIX/CTIX_test.py | mazmat-panw/content | 024a65c1dea2548e2637a9cbbe54966e9e34a722 | [
"MIT"
] | 87 | 2022-02-23T12:10:53.000Z | 2022-03-31T11:29:05.000Z | Packs/CTIX/Integrations/CTIX/CTIX_test.py | henry-sue-pa/content | 043c6badfb4f9c80673cad9242fdea72efe301f7 | [
"MIT"
] | 2 | 2022-01-05T15:27:01.000Z | 2022-02-01T19:27:43.000Z | import io
import json
'''CONSTANTS'''
BASE_URL = "http://test.com/"
ACCESS_ID = "access_id"
SECRET_KEY = "secret_key"
def util_load_json(path):
with io.open(path, mode='r', encoding='utf-8') as f:
return json.loads(f.read())
def test_ip(requests_mock):
from CTIX import Client, ip_details_command
from CommonServerPython import Common
ip_to_check = '6.7.8.9'
mock_response = util_load_json('test_data/ip_details.json')
requests_mock.get(f'http://test.com/objects/indicator/?q={ip_to_check}',
json=mock_response)
client = Client(
base_url=BASE_URL,
access_id=ACCESS_ID,
secret_key=SECRET_KEY,
verify=False,
proxies={}
)
args = {
'ip': ip_to_check,
'enhanced': False
}
response = ip_details_command(client, args)
assert response[0].outputs == mock_response["results"][0]
assert response[0].outputs_prefix == 'CTIX.IP'
assert response[0].outputs_key_field == 'name2'
assert isinstance(response, list)
assert len(response) == 1
assert isinstance(response[0].indicator, Common.IP)
assert response[0].indicator.ip == ip_to_check
def test_ip_not_found(requests_mock):
from CTIX import Client, ip_details_command
ip_to_check = '1.1.1.1'
mock_response = {"results": []}
requests_mock.get(f'http://test.com/objects/indicator/?q={ip_to_check}',
json=mock_response)
client = Client(
base_url=BASE_URL,
access_id=ACCESS_ID,
secret_key=SECRET_KEY,
verify=False,
proxies={}
)
args = {
'ip': ip_to_check,
'enhanced': False
}
response = ip_details_command(client, args)
assert response[0].outputs == []
assert response[0].readable_output == f"No matches found for IP {ip_to_check}"
def test_domain(requests_mock):
from CTIX import Client, domain_details_command
from CommonServerPython import Common
domain_to_check = 'testing.com'
mock_response = util_load_json('test_data/domain_details.json')
requests_mock.get(f'http://test.com/objects/indicator/?q={domain_to_check}',
json=mock_response)
client = Client(
base_url=BASE_URL,
access_id=ACCESS_ID,
secret_key=SECRET_KEY,
verify=False,
proxies={}
)
args = {
'domain': domain_to_check,
'enhanced': False
}
response = domain_details_command(client, args)
assert response[0].outputs == mock_response["results"][0]
assert response[0].outputs_prefix == 'CTIX.Domain'
assert response[0].outputs_key_field == 'name2'
assert isinstance(response, list)
assert len(response) == 1
assert isinstance(response[0].indicator, Common.Domain)
assert response[0].indicator.domain == domain_to_check
def test_domain_not_found(requests_mock):
from CTIX import Client, domain_details_command
domain_to_check = 'abc.com'
mock_response = {"results": []}
requests_mock.get(f'http://test.com/objects/indicator/?q={domain_to_check}',
json=mock_response)
client = Client(
base_url=BASE_URL,
access_id=ACCESS_ID,
secret_key=SECRET_KEY,
verify=False,
proxies={}
)
args = {
'domain': domain_to_check,
'enhanced': False
}
response = domain_details_command(client, args)
assert response[0].outputs == []
assert response[0].readable_output == f"No matches found for Domain {domain_to_check}"
def test_url(requests_mock):
from CTIX import Client, url_details_command
from CommonServerPython import Common
url_to_check = 'https://www.ibm.com/support/mynotifications/'
mock_response = util_load_json('test_data/url_details.json')
requests_mock.get(f'http://test.com/objects/indicator/?q={url_to_check}',
json=mock_response)
client = Client(
base_url=BASE_URL,
access_id=ACCESS_ID,
secret_key=SECRET_KEY,
verify=False,
proxies={}
)
args = {
'url': url_to_check,
'enhanced': False
}
response = url_details_command(client, args)
assert response[0].outputs == mock_response["results"][0]
assert response[0].outputs_prefix == 'CTIX.URL'
assert response[0].outputs_key_field == 'name2'
assert isinstance(response, list)
assert len(response) == 1
assert isinstance(response[0].indicator, Common.URL)
assert response[0].indicator.url == url_to_check
def test_url_not_found(requests_mock):
from CTIX import Client, url_details_command
url_to_check = 'https://abc.com'
mock_response = {"results": []}
requests_mock.get(f'http://test.com/objects/indicator/?q={url_to_check}',
json=mock_response)
client = Client(
base_url=BASE_URL,
access_id=ACCESS_ID,
secret_key=SECRET_KEY,
verify=False,
proxies={}
)
args = {
'url': url_to_check,
'enhanced': False
}
response = url_details_command(client, args)
assert response[0].outputs == []
assert response[0].readable_output == f"No matches found for URL {url_to_check}"
def test_file(requests_mock):
from CTIX import Client, file_details_command
from CommonServerPython import Common
file_to_check = '4d552241543b8176a3189864a16b6052f9d163a124291ec9552e1b77'
mock_response = util_load_json('test_data/file_details.json')
requests_mock.get(f'http://test.com/objects/indicator/?q={file_to_check}',
json=mock_response)
client = Client(
base_url=BASE_URL,
access_id=ACCESS_ID,
secret_key=SECRET_KEY,
verify=False,
proxies={}
)
args = {
'file': file_to_check,
'enhanced': False
}
response = file_details_command(client, args)
assert response[0].outputs == mock_response["results"][0]
assert response[0].outputs_prefix == 'CTIX.File'
assert response[0].outputs_key_field == 'name2'
assert isinstance(response, list)
assert len(response) == 1
assert isinstance(response[0].indicator, Common.File)
assert response[0].indicator.name == file_to_check
def test_file_not_found(requests_mock):
from CTIX import Client, file_details_command
file_to_check = '6AD8334857B3F054A9F93BA380B5555B'
mock_response = {"results": []}
requests_mock.get(f'http://test.com/objects/indicator/?q={file_to_check}',
json=mock_response)
client = Client(
base_url=BASE_URL,
access_id=ACCESS_ID,
secret_key=SECRET_KEY,
verify=False,
proxies={}
)
args = {
'file': file_to_check,
'enhanced': False
}
response = file_details_command(client, args)
assert response[0].outputs == []
assert response[0].readable_output == f"No matches found for FILE {file_to_check}"
def test_create_intel(requests_mock):
from CTIX import Client, create_intel_command
mock_response = util_load_json('test_data/create_intel.json')
requests_mock.post('http://test.com/create-intel/', json=mock_response)
client = Client(
base_url=BASE_URL,
access_id=ACCESS_ID,
secret_key=SECRET_KEY,
verify=False,
proxies={}
)
post_data = {
"ips": "1.2.3.4,3.45.56.78",
"urls": "https://abc_test.com,https://test_abc.com"
}
response = create_intel_command(client, post_data)
assert 'data' in response['CTIX']['Intel']['response']
assert 'status' in response['CTIX']['Intel']['response']
assert 'status' in response['CTIX']['Intel']
assert response['CTIX']['Intel']['response']['status'] == 201
| 27.492908 | 90 | 0.651877 | 977 | 7,753 | 4.907881 | 0.101331 | 0.046715 | 0.075078 | 0.07341 | 0.842961 | 0.819604 | 0.757873 | 0.724505 | 0.71074 | 0.627737 | 0 | 0.021623 | 0.230491 | 7,753 | 281 | 91 | 27.590747 | 0.782099 | 0 | 0 | 0.654028 | 0 | 0 | 0.167356 | 0.02869 | 0 | 0 | 0 | 0 | 0.184834 | 1 | 0.047393 | false | 0 | 0.07109 | 0 | 0.123223 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
3df6e26c86bb65f7616ddd01a0fadb61c72542e2 | 37 | py | Python | up/tasks/ssl/models/postprocess/__init__.py | ModelTC/EOD | 164bff80486e9ae6a095a97667b365c46ceabd86 | [
"Apache-2.0"
] | 196 | 2021-10-30T05:15:36.000Z | 2022-03-30T18:43:40.000Z | up/tasks/ssl/models/postprocess/__init__.py | ModelTC/EOD | 164bff80486e9ae6a095a97667b365c46ceabd86 | [
"Apache-2.0"
] | 12 | 2021-10-30T11:33:28.000Z | 2022-03-31T14:22:58.000Z | up/tasks/ssl/models/postprocess/__init__.py | ModelTC/EOD | 164bff80486e9ae6a095a97667b365c46ceabd86 | [
"Apache-2.0"
] | 23 | 2021-11-01T07:26:17.000Z | 2022-03-27T05:55:37.000Z | from .ssl_postprocess import * # noqa | 37 | 37 | 0.783784 | 5 | 37 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135135 | 37 | 1 | 37 | 37 | 0.875 | 0.108108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3df6ec49c824d82c99b8771d470226ce481b714f | 156 | py | Python | happyml/models/__init__.py | guiferviz/happyml-py | 4252d0cff27461e38da404553772dafbc74f3eaa | [
"BSD-Source-Code"
] | 1 | 2016-08-15T13:27:48.000Z | 2016-08-15T13:27:48.000Z | happyml/models/__init__.py | guiferviz/happyml-py | 4252d0cff27461e38da404553772dafbc74f3eaa | [
"BSD-Source-Code"
] | null | null | null | happyml/models/__init__.py | guiferviz/happyml-py | 4252d0cff27461e38da404553772dafbc74f3eaa | [
"BSD-Source-Code"
] | null | null | null |
from model import Model
from linear_regression import LinearRegression
from perceptron import Perceptron
from perceptron_kernel import PerceptronKernel
| 17.333333 | 46 | 0.871795 | 18 | 156 | 7.444444 | 0.5 | 0.208955 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.128205 | 156 | 8 | 47 | 19.5 | 0.985294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
9a789566f0121c36774d7e887219a39d48844dbc | 31,421 | py | Python | KG/DuEE_baseline/bin/reader/task_reader.py | pkulzb/Research | 88da4910a356f1e95e1e1e05316500055533683d | [
"Apache-2.0"
] | 1,319 | 2020-02-14T10:42:07.000Z | 2022-03-31T15:42:18.000Z | KG/DuEE_baseline/bin/reader/task_reader.py | pkulzb/Research | 88da4910a356f1e95e1e1e05316500055533683d | [
"Apache-2.0"
] | 192 | 2020-02-14T02:53:34.000Z | 2022-03-31T02:25:48.000Z | KG/DuEE_baseline/bin/reader/task_reader.py | pkulzb/Research | 88da4910a356f1e95e1e1e05316500055533683d | [
"Apache-2.0"
] | 720 | 2020-02-14T02:12:38.000Z | 2022-03-31T12:21:15.000Z | # coding: utf-8
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""task reader
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import sys
import os
import json
import random
import logging
import numpy as np
import six
from io import open
from collections import namedtuple
import tokenization
from batching import pad_batch_data
log = logging.getLogger(__name__)
if six.PY3:
import io
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8')
def csv_reader(fd, delimiter='\t'):
"""csv_reader"""
def gen():
"""gen"""
for i in fd:
yield i.rstrip('\n').split(delimiter)
return gen()
class BaseReader(object):
"""BaseReader
"""
def __init__(self,
vocab_path,
label_map_config=None,
max_seq_len=512,
do_lower_case=True,
in_tokens=False,
is_inference=False,
random_seed=None,
tokenizer="FullTokenizer",
is_classify=True,
is_regression=False,
for_cn=True,
task_id=0):
self.max_seq_len = max_seq_len
self.tokenizer = tokenization.FullTokenizer(
vocab_file=vocab_path, do_lower_case=do_lower_case)
self.vocab = self.tokenizer.vocab
self.pad_id = self.vocab["[PAD]"]
self.cls_id = self.vocab["[CLS]"]
self.sep_id = self.vocab["[SEP]"]
self.in_tokens = in_tokens
self.is_inference = is_inference
self.for_cn = for_cn
self.task_id = task_id
np.random.seed(random_seed)
self.is_classify = is_classify
self.is_regression = is_regression
self.current_example = 0
self.current_epoch = 0
self.num_examples = 0
if label_map_config:
with open(label_map_config, encoding='utf8') as f:
self.label_map = json.load(f)
else:
self.label_map = None
def get_train_progress(self):
"""Gets progress for training phase."""
return self.current_example, self.current_epoch
def _read_tsv(self, input_file, quotechar=None):
"""Reads a tab separated value file."""
with open(input_file, 'r', encoding='utf8') as f:
reader = csv_reader(f)
headers = next(reader)
Example = namedtuple('Example', headers)
examples = []
for line in reader:
example = Example(*line)
examples.append(example)
return examples
def _truncate_seq_pair(self, tokens_a, tokens_b, max_length):
"""Truncates a sequence pair in place to the maximum length."""
# This is a simple heuristic which will always truncate the longer sequence
# one token at a time. This makes more sense than truncating an equal percent
# of tokens from each, since if one sequence is very short then each token
# that's truncated likely contains more information than a longer sequence.
while True:
total_length = len(tokens_a) + len(tokens_b)
if total_length <= max_length:
break
if len(tokens_a) > len(tokens_b):
tokens_a.pop()
else:
tokens_b.pop()
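A standalone sketch of the truncation heuristic above (names are illustrative; this re-implements `_truncate_seq_pair` outside the class so it can be run on its own):

```python
def truncate_seq_pair(tokens_a, tokens_b, max_length):
    # Trim the longer of the two lists one token at a time,
    # in place, until the combined length fits the budget.
    while len(tokens_a) + len(tokens_b) > max_length:
        if len(tokens_a) > len(tokens_b):
            tokens_a.pop()
        else:
            tokens_b.pop()

a, b = list("abcdefg"), list("hij")
truncate_seq_pair(a, b, 6)
print(a, b)  # ['a', 'b', 'c'] ['h', 'i', 'j']
```

Note that only the longer sequence loses tokens, so a short second segment is preserved intact.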
def _convert_example_to_record(self, example, max_seq_length, tokenizer):
"""Converts a single `Example` into a single `Record`."""
text_a = tokenization.convert_to_unicode(example.text_a)
tokens_a = tokenizer.tokenize(text_a)
tokens_b = None
has_text_b = False
if isinstance(example, dict):
has_text_b = "text_b" in example.keys()
else:
has_text_b = "text_b" in example._fields
if has_text_b:
text_b = tokenization.convert_to_unicode(example.text_b)
tokens_b = tokenizer.tokenize(text_b)
if tokens_b:
# Modifies `tokens_a` and `tokens_b` in place so that the total
# length is less than the specified length.
# Account for [CLS], [SEP], [SEP] with "- 3"
self._truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)
else:
# Account for [CLS] and [SEP] with "- 2"
if len(tokens_a) > max_seq_length - 2:
tokens_a = tokens_a[0:(max_seq_length - 2)]
# The convention in BERT/ERNIE is:
# (a) For sequence pairs:
# tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
# type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1
# (b) For single sequences:
# tokens: [CLS] the dog is hairy . [SEP]
# type_ids: 0 0 0 0 0 0 0
#
# Where "type_ids" are used to indicate whether this is the first
# sequence or the second sequence. The embedding vectors for `type=0` and
# `type=1` were learned during pre-training and are added to the wordpiece
# embedding vector (and position vector). This is not *strictly* necessary
# since the [SEP] token unambiguously separates the sequences, but it makes
# it easier for the model to learn the concept of sequences.
#
# For classification tasks, the first vector (corresponding to [CLS]) is
# used as the "sentence vector". Note that this only makes sense because
# the entire model is fine-tuned.
tokens = []
text_type_ids = []
tokens.append("[CLS]")
text_type_ids.append(0)
for token in tokens_a:
tokens.append(token)
text_type_ids.append(0)
tokens.append("[SEP]")
text_type_ids.append(0)
if tokens_b:
for token in tokens_b:
tokens.append(token)
text_type_ids.append(1)
tokens.append("[SEP]")
text_type_ids.append(1)
token_ids = tokenizer.convert_tokens_to_ids(tokens)
position_ids = list(range(len(token_ids)))
if self.is_inference:
Record = namedtuple(
'Record', ['token_ids', 'text_type_ids', 'position_ids'])
record = Record(
token_ids=token_ids,
text_type_ids=text_type_ids,
position_ids=position_ids)
else:
if self.label_map:
label_id = self.label_map[example.label]
else:
label_id = example.label
Record = namedtuple('Record', [
'token_ids', 'text_type_ids', 'position_ids', 'label_id', 'qid'
])
qid = None
if "qid" in example._fields:
qid = example.qid
record = Record(
token_ids=token_ids,
text_type_ids=text_type_ids,
position_ids=position_ids,
label_id=label_id,
qid=qid)
return record
def _prepare_batch_data(self, examples, batch_size, phase=None):
"""generate batch records"""
        batch_records, max_len = [], 0
for index, example in enumerate(examples):
if phase == "train":
self.current_example = index
record = self._convert_example_to_record(example, self.max_seq_len,
self.tokenizer)
max_len = max(max_len, len(record.token_ids))
if self.in_tokens:
to_append = (len(batch_records) + 1) * max_len <= batch_size
else:
to_append = len(batch_records) < batch_size
if to_append:
batch_records.append(record)
else:
yield self._pad_batch_records(batch_records)
batch_records, max_len = [record], len(record.token_ids)
if batch_records:
yield self._pad_batch_records(batch_records)
def get_num_examples(self, input_file):
"""func"""
examples = self._read_tsv(input_file)
return len(examples)
def data_generator(self,
input_file,
batch_size,
epoch,
dev_count=1,
shuffle=True,
phase=None):
"""func"""
examples = self._read_tsv(input_file)
def wrapper():
"""func"""
all_dev_batches = []
for epoch_index in range(epoch):
if phase == "train":
self.current_example = 0
self.current_epoch = epoch_index
if shuffle:
np.random.shuffle(examples)
for batch_data in self._prepare_batch_data(
examples, batch_size, phase=phase):
if len(all_dev_batches) < dev_count:
all_dev_batches.append(batch_data)
if len(all_dev_batches) == dev_count:
for batch in all_dev_batches:
yield batch
all_dev_batches = []
return wrapper
class TriggerSequenceLabelReader(BaseReader):
"""TriggerSequenceLabelReader
"""
def __init__(self,
vocab_path,
label_map_config=None,
labels_map=None,
max_seq_len=512,
do_lower_case=True,
in_tokens=False,
is_inference=False,
random_seed=None,
tokenizer="FullTokenizer",
is_classify=True,
is_regression=False,
for_cn=True,
task_id=0):
self.max_seq_len = max_seq_len
self.tokenizer = tokenization.FullTokenizer(
vocab_file=vocab_path, do_lower_case=do_lower_case)
self.vocab = self.tokenizer.vocab
self.pad_id = self.vocab["[PAD]"]
self.cls_id = self.vocab["[CLS]"]
self.sep_id = self.vocab["[SEP]"]
self.in_tokens = in_tokens
self.is_inference = is_inference
self.for_cn = for_cn
self.task_id = task_id
np.random.seed(random_seed)
self.is_classify = is_classify
self.is_regression = is_regression
self.current_example = 0
self.current_epoch = 0
self.num_examples = 0
self.label_map = labels_map
def _process_examples_by_json(self, input_data):
"""_examples_by_json"""
def process_sent_ori_2_new(sent, start, end):
"""process_sent_ori_2_new"""
words = list(sent)
sent_ori_2_new_index = {}
new_words = []
new_start, new_end = -1, -1
for i, w in enumerate(words):
if i == start:
new_start = len(new_words)
if i == end:
new_end = len(new_words)
if len(w.strip()) == 0:
sent_ori_2_new_index[i] = -1
if i == end:
new_end -= 1
if i == start:
start += 1
else:
sent_ori_2_new_index[i] = len(new_words)
new_words.append(w)
if new_end == len(new_words):
new_end = len(new_words) - 1
return [words, new_words, sent_ori_2_new_index, new_start, new_end]
examples = []
k = 0
Example = namedtuple('Example', [
"id", "text_a", "label", "ori_text", "ori_2_new_index", "sentence"
])
for data in input_data:
event_id = data["event_id"]
sentence = data["text"]
trigger_start = data["trigger_start_index"]
trigger_text = data["trigger"]
trigger_end = trigger_start + len(trigger_text) - 1
event_type = data["event_type"]
(sent_words, new_sent_words, ori_2_new_sent_index,
new_trigger_start, new_trigger_end) = process_sent_ori_2_new(
sentence.lower(), trigger_start, trigger_end)
new_sent_labels = [u"O"] * len(new_sent_words)
for i in range(new_trigger_start, new_trigger_end + 1):
if i == new_trigger_start:
new_sent_labels[i] = u"B-{}".format(event_type)
else:
new_sent_labels[i] = u"I-{}".format(event_type)
example = Example(
id=event_id,
text_a=u" ".join(new_sent_words),
label=u" ".join(new_sent_labels),
ori_text=sent_words,
ori_2_new_index=ori_2_new_sent_index,
sentence=sentence)
if k > 0:
print(u"example {} : {}".format(
k, json.dumps(
example._asdict(), ensure_ascii=False)))
k -= 1
examples.append(example)
return examples
def _read_json_file(self, input_file):
"""_read_json_file"""
input_data = []
with open(input_file, "r", encoding='utf8') as f:
for line in f:
d_json = json.loads(line.strip())
input_data.append(d_json)
examples = self._process_examples_by_json(input_data)
return examples
def get_examples_by_file(self, input_file):
"""get_examples_by_file"""
return self._read_json_file(input_file)
def _pad_batch_records(self, batch_records):
"""_pad_batch_records"""
batch_token_ids = [record.token_ids for record in batch_records]
batch_text_type_ids = [
record.text_type_ids for record in batch_records
]
batch_position_ids = [record.position_ids for record in batch_records]
batch_label_ids = [record.label_ids for record in batch_records]
# padding
padded_token_ids, input_mask, batch_seq_lens = pad_batch_data(
batch_token_ids,
pad_idx=self.pad_id,
return_input_mask=True,
return_seq_lens=True)
padded_text_type_ids = pad_batch_data(
batch_text_type_ids, pad_idx=self.pad_id)
padded_position_ids = pad_batch_data(
batch_position_ids, pad_idx=self.pad_id)
padded_label_ids = pad_batch_data(
batch_label_ids, pad_idx=len(self.label_map) - 1)
padded_task_ids = np.ones_like(
padded_token_ids, dtype="int64") * self.task_id
return_list = [
padded_token_ids, padded_text_type_ids, padded_position_ids,
padded_task_ids, input_mask, padded_label_ids, batch_seq_lens
]
return return_list
def _reseg_token_label(self, tokens, labels, tokenizer):
"""_reseg_token_label"""
assert len(tokens) == len(labels)
ret_tokens = []
ret_labels = []
for token, label in zip(tokens, labels):
sub_token = tokenizer.tokenize(token)
if len(sub_token) == 0:
continue
ret_tokens.extend(sub_token)
if len(sub_token) == 1:
ret_labels.append(label)
continue
if label == "O" or label.startswith("I-"):
ret_labels.extend([label] * len(sub_token))
elif label.startswith("B-"):
i_label = "I-" + label[2:]
ret_labels.extend([label] + [i_label] * (len(sub_token) - 1))
assert len(ret_tokens) == len(ret_labels)
return ret_tokens, ret_labels
def _convert_example_to_record(self, example, max_seq_length, tokenizer):
"""_convert_example_to_record"""
tokens = tokenization.whitespace_tokenize(example.text_a)
labels = tokenization.whitespace_tokenize(example.label)
tokens, labels = self._reseg_token_label(tokens, labels, tokenizer)
if len(tokens) > max_seq_length - 2:
tokens = tokens[0:(max_seq_length - 2)]
labels = labels[0:(max_seq_length - 2)]
tokens = ["[CLS]"] + tokens + ["[SEP]"]
token_ids = tokenizer.convert_tokens_to_ids(tokens)
position_ids = list(range(len(token_ids)))
text_type_ids = [0] * len(token_ids)
no_entity_id = len(self.label_map) - 1
label_ids = [no_entity_id
] + [self.label_map[label]
for label in labels] + [no_entity_id]
Record = namedtuple(
'Record',
['token_ids', 'text_type_ids', 'position_ids', 'label_ids'])
record = Record(
token_ids=token_ids,
text_type_ids=text_type_ids,
position_ids=position_ids,
label_ids=label_ids)
return record
def _prepare_batch_data(self, examples, batch_size, phase=None):
"""generate batch records"""
batch_records, max_len = [], 0
k = 0
for index, example in enumerate(examples):
if phase == "train":
self.current_example = index
record = self._convert_example_to_record(example, self.max_seq_len,
self.tokenizer)
if k > 0:
print(u"feature {} : {}".format(
k, json.dumps(
record._asdict(), ensure_ascii=False)))
k -= 1
max_len = max(max_len, len(record.token_ids))
if self.in_tokens:
to_append = (len(batch_records) + 1) * max_len <= batch_size
else:
to_append = len(batch_records) < batch_size
if to_append:
batch_records.append(record)
else:
yield self._pad_batch_records(batch_records)
batch_records, max_len = [record], len(record.token_ids)
if batch_records:
yield self._pad_batch_records(batch_records)
def get_num_examples(self, input_file):
"""get_num_examples"""
examples = self._read_json_file(input_file)
return len(examples)
def data_generator(self,
input_file,
batch_size,
epoch,
dev_count=1,
shuffle=True,
phase=None):
"""data_generator"""
examples = self._read_json_file(input_file)
def wrapper():
"""wrapper"""
all_dev_batches = []
for epoch_index in range(epoch):
if phase == "train":
self.current_example = 0
self.current_epoch = epoch_index
if shuffle:
np.random.shuffle(examples)
for batch_data in self._prepare_batch_data(
examples, batch_size, phase=phase):
if len(all_dev_batches) < dev_count:
all_dev_batches.append(batch_data)
if len(all_dev_batches) == dev_count:
for batch in all_dev_batches:
yield batch
all_dev_batches = []
return wrapper
class RoleSequenceLabelReader(BaseReader):
"""RoleSequenceLabelReader
"""
def __init__(self,
vocab_path,
label_map_config=None,
labels_map=None,
max_seq_len=512,
do_lower_case=True,
in_tokens=False,
is_inference=False,
random_seed=None,
tokenizer="FullTokenizer",
is_classify=True,
is_regression=False,
for_cn=True,
task_id=0):
self.max_seq_len = max_seq_len
self.tokenizer = tokenization.FullTokenizer(
vocab_file=vocab_path, do_lower_case=do_lower_case)
self.vocab = self.tokenizer.vocab
self.pad_id = self.vocab["[PAD]"]
self.cls_id = self.vocab["[CLS]"]
self.sep_id = self.vocab["[SEP]"]
self.in_tokens = in_tokens
self.is_inference = is_inference
self.for_cn = for_cn
self.task_id = task_id
np.random.seed(random_seed)
self.is_classify = is_classify
self.is_regression = is_regression
self.current_example = 0
self.current_epoch = 0
self.num_examples = 0
self.label_map = labels_map
def _process_examples_by_json(self, input_data):
"""_examples_by_json"""
def process_sent_ori_2_new(sent, roles_list):
"""process_sent_ori_2_new"""
words = list(sent)
sent_ori_2_new_index = {}
new_words = []
new_start, new_end = -1, -1
new_roles_list = {}
for role_type, role in roles_list.items():
new_roles_list[role_type] = {
"role_type": role_type,
"start": -1,
"end": -1
}
for i, w in enumerate(words):
for role_type, role in roles_list.items():
if i == role["start"]:
new_roles_list[role_type]["start"] = len(new_words)
if i == role["end"]:
new_roles_list[role_type]["end"] = len(new_words)
if len(w.strip()) == 0:
sent_ori_2_new_index[i] = -1
for role_type, role in roles_list.items():
if i == role["start"]:
new_roles_list[role_type]["start"] += 1
if i == role["end"]:
new_roles_list[role_type]["end"] -= 1
else:
sent_ori_2_new_index[i] = len(new_words)
new_words.append(w)
for role_type, role in new_roles_list.items():
if role["start"] > -1:
role["text"] = u"".join(new_words[role["start"]:role["end"]
+ 1])
if role["end"] == len(new_words):
role["end"] = len(new_words) - 1
return [words, new_words, sent_ori_2_new_index, new_roles_list]
examples = []
k = 0
Example = namedtuple('Example', [
"id", "text_a", "label", "ori_text", "ori_2_new_index", "roles",
"sentence"
])
for data in input_data:
event_id = data["event_id"]
sentence = data["text"]
roles_list = {}
for role in data["arguments"]:
role_type = role["role"]
role_start = role["argument_start_index"]
role_text = role["argument"]
role_end = role_start + len(role_text) - 1
roles_list[role_type] = {
"role_type": role_type,
"start": role_start,
"end": role_end,
"argument": role_text
}
(sent_words, new_sent_words, ori_2_new_sent_index,
new_roles_list) = process_sent_ori_2_new(sentence.lower(),
roles_list)
new_sent_labels = [u"O"] * len(new_sent_words)
for role_type, role in new_roles_list.items():
for i in range(role["start"], role["end"] + 1):
if i == role["start"]:
new_sent_labels[i] = u"B-{}".format(role_type)
else:
new_sent_labels[i] = u"I-{}".format(role_type)
example = Example(
id=event_id,
text_a=u" ".join(new_sent_words),
label=u" ".join(new_sent_labels),
ori_text=sent_words,
ori_2_new_index=ori_2_new_sent_index,
roles=new_roles_list,
sentence=sentence)
if k > 0:
print(u"example {} : {}".format(
k, json.dumps(
example._asdict(), ensure_ascii=False)))
k -= 1
examples.append(example)
return examples
def _read_json_file(self, input_file):
"""_read_json_file"""
input_data = []
with open(input_file, "r", encoding='utf8') as f:
for line in f:
d_json = json.loads(line.strip())
input_data.append(d_json)
examples = self._process_examples_by_json(input_data)
return examples
def get_examples_by_file(self, input_file):
"""get_examples_by_file"""
return self._read_json_file(input_file)
def _pad_batch_records(self, batch_records):
"""_pad_batch_records"""
batch_token_ids = [record.token_ids for record in batch_records]
batch_text_type_ids = [
record.text_type_ids for record in batch_records
]
batch_position_ids = [record.position_ids for record in batch_records]
batch_label_ids = [record.label_ids for record in batch_records]
# padding
padded_token_ids, input_mask, batch_seq_lens = pad_batch_data(
batch_token_ids,
pad_idx=self.pad_id,
return_input_mask=True,
return_seq_lens=True)
padded_text_type_ids = pad_batch_data(
batch_text_type_ids, pad_idx=self.pad_id)
padded_position_ids = pad_batch_data(
batch_position_ids, pad_idx=self.pad_id)
padded_label_ids = pad_batch_data(
batch_label_ids, pad_idx=len(self.label_map) - 1)
padded_task_ids = np.ones_like(
padded_token_ids, dtype="int64") * self.task_id
return_list = [
padded_token_ids, padded_text_type_ids, padded_position_ids,
padded_task_ids, input_mask, padded_label_ids, batch_seq_lens
]
return return_list
def _reseg_token_label(self, tokens, labels, tokenizer):
"""_reseg_token_label"""
assert len(tokens) == len(labels)
ret_tokens = []
ret_labels = []
for token, label in zip(tokens, labels):
sub_token = tokenizer.tokenize(token)
if len(sub_token) == 0:
continue
ret_tokens.extend(sub_token)
if len(sub_token) == 1:
ret_labels.append(label)
continue
if label == "O" or label.startswith("I-"):
ret_labels.extend([label] * len(sub_token))
elif label.startswith("B-"):
i_label = "I-" + label[2:]
ret_labels.extend([label] + [i_label] * (len(sub_token) - 1))
assert len(ret_tokens) == len(ret_labels)
return ret_tokens, ret_labels
def _convert_example_to_record(self, example, max_seq_length, tokenizer):
"""_convert_example_to_record"""
tokens = tokenization.whitespace_tokenize(example.text_a)
labels = tokenization.whitespace_tokenize(example.label)
tokens, labels = self._reseg_token_label(tokens, labels, tokenizer)
if len(tokens) > max_seq_length - 2:
tokens = tokens[0:(max_seq_length - 2)]
labels = labels[0:(max_seq_length - 2)]
tokens = ["[CLS]"] + tokens + ["[SEP]"]
token_ids = tokenizer.convert_tokens_to_ids(tokens)
position_ids = list(range(len(token_ids)))
text_type_ids = [0] * len(token_ids)
no_entity_id = len(self.label_map) - 1
label_ids = [no_entity_id
] + [self.label_map[label]
for label in labels] + [no_entity_id]
Record = namedtuple(
'Record',
['token_ids', 'text_type_ids', 'position_ids', 'label_ids'])
record = Record(
token_ids=token_ids,
text_type_ids=text_type_ids,
position_ids=position_ids,
label_ids=label_ids)
return record
def _prepare_batch_data(self, examples, batch_size, phase=None):
"""generate batch records"""
batch_records, max_len = [], 0
k = 0
for index, example in enumerate(examples):
if phase == "train":
self.current_example = index
record = self._convert_example_to_record(example, self.max_seq_len,
self.tokenizer)
if k > 0:
print(u"feature {} : {}".format(
k, json.dumps(
record._asdict(), ensure_ascii=False)))
k -= 1
max_len = max(max_len, len(record.token_ids))
if self.in_tokens:
to_append = (len(batch_records) + 1) * max_len <= batch_size
else:
to_append = len(batch_records) < batch_size
if to_append:
batch_records.append(record)
else:
yield self._pad_batch_records(batch_records)
batch_records, max_len = [record], len(record.token_ids)
if batch_records:
yield self._pad_batch_records(batch_records)
def get_num_examples(self, input_file):
"""get_num_examples"""
examples = self._read_json_file(input_file)
return len(examples)
def data_generator(self,
input_file,
batch_size,
epoch,
dev_count=1,
shuffle=True,
phase=None):
"""data_generator"""
examples = self._read_json_file(input_file)
def wrapper():
"""wrapper"""
all_dev_batches = []
for epoch_index in range(epoch):
if phase == "train":
self.current_example = 0
self.current_epoch = epoch_index
if shuffle:
np.random.shuffle(examples)
for batch_data in self._prepare_batch_data(
examples, batch_size, phase=phase):
if len(all_dev_batches) < dev_count:
all_dev_batches.append(batch_data)
if len(all_dev_batches) == dev_count:
for batch in all_dev_batches:
yield batch
all_dev_batches = []
return wrapper
| 37.450536 | 85 | 0.5475 | 3,682 | 31,421 | 4.345736 | 0.096415 | 0.034498 | 0.020624 | 0.012249 | 0.779139 | 0.76414 | 0.746641 | 0.724455 | 0.719143 | 0.702394 | 0 | 0.008337 | 0.362496 | 31,421 | 838 | 86 | 37.495227 | 0.790475 | 0.086694 | 0 | 0.789954 | 0 | 0 | 0.028697 | 0 | 0 | 0 | 0 | 0 | 0.006088 | 1 | 0.053272 | false | 0 | 0.025875 | 0 | 0.120244 | 0.00761 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
b138491ecde8519e175fd3710ebb51b4254a63db | 558 | py | Python | Advanced/Exercises/Functions_Advanced_Exercise/8_age_assignment.py | tankishev/Python | 60e511fc901f136b88c681f77f209fe2f8c46447 | [
"MIT"
] | 2 | 2022-03-04T11:39:03.000Z | 2022-03-13T07:13:23.000Z | Advanced/Exercises/Functions_Advanced_Exercise/8_age_assignment.py | tankishev/Python | 60e511fc901f136b88c681f77f209fe2f8c46447 | [
"MIT"
] | null | null | null | Advanced/Exercises/Functions_Advanced_Exercise/8_age_assignment.py | tankishev/Python | 60e511fc901f136b88c681f77f209fe2f8c46447 | [
"MIT"
] | null | null | null | # Create a function called age_assignment that receives a different number of names and a different number of key-value pairs.
# The key will be a single letter (the first letter of each name) and the value - a number (age).
# For each name, find its first letter among the keys and assign the matching age to that name.
# In the end, return a dictionary with all the names and ages as shown in the example.
# Submit only the function in the judge system.
def age_assignment(*args, **kwargs):
return {arg: kwargs.get(arg[0]) for arg in args}
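A quick usage sketch of the function above (the names and ages are illustrative, not from the judge system's test data):

```python
def age_assignment(*args, **kwargs):
    # Look each name up by its first letter among the keyword arguments.
    return {arg: kwargs.get(arg[0]) for arg in args}

ages = age_assignment("Peter", "George", G=26, P=19)
print(ages)  # {'Peter': 19, 'George': 26}
```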
| 55.8 | 127 | 0.74552 | 101 | 558 | 4.09901 | 0.514851 | 0.036232 | 0.077295 | 0.086957 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002222 | 0.193548 | 558 | 9 | 128 | 62 | 0.917778 | 0.815412 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0 | 0 | 0.5 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
b17241489f7e34947b91cca48497d9f5d0378422 | 97 | py | Python | dj_pylti/context_processors.py | kajigga/dj-pylti | 2388719ee799b3033a9ab7ccf28667e69bcd8cd6 | [
"BSD-3-Clause"
] | null | null | null | dj_pylti/context_processors.py | kajigga/dj-pylti | 2388719ee799b3033a9ab7ccf28667e69bcd8cd6 | [
"BSD-3-Clause"
] | null | null | null | dj_pylti/context_processors.py | kajigga/dj-pylti | 2388719ee799b3033a9ab7ccf28667e69bcd8cd6 | [
"BSD-3-Clause"
] | null | null | null | from django.conf import settings
def settings_context(request):
return {'settings': settings}
| 19.4 | 32 | 0.783505 | 12 | 97 | 6.25 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123711 | 97 | 4 | 33 | 24.25 | 0.882353 | 0 | 0 | 0 | 0 | 0 | 0.082474 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 6 |
b1828304a0e851296c7890f1dc3d2db88334a7f2 | 30 | py | Python | cride/rides/views/__init__.py | Bruno321/cride | bfd911694e3a22f70272f17cde464f5d665d2033 | [
"MIT"
] | null | null | null | cride/rides/views/__init__.py | Bruno321/cride | bfd911694e3a22f70272f17cde464f5d665d2033 | [
"MIT"
] | null | null | null | cride/rides/views/__init__.py | Bruno321/cride | bfd911694e3a22f70272f17cde464f5d665d2033 | [
"MIT"
] | null | null | null | from .rides import RideViewSet | 30 | 30 | 0.866667 | 4 | 30 | 6.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1 | 30 | 1 | 30 | 30 | 0.962963 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b1a62468c70b8f9e16a9b4abae510dace08df84a | 128 | py | Python | src/ploomber/executors/__init__.py | edblancas/ploomber | f9cec77ba8e69cc13a238cf917914c1f19f37bd3 | [
"Apache-2.0"
] | null | null | null | src/ploomber/executors/__init__.py | edblancas/ploomber | f9cec77ba8e69cc13a238cf917914c1f19f37bd3 | [
"Apache-2.0"
] | null | null | null | src/ploomber/executors/__init__.py | edblancas/ploomber | f9cec77ba8e69cc13a238cf917914c1f19f37bd3 | [
"Apache-2.0"
] | null | null | null | from ploomber.executors.Serial import Serial
from ploomber.executors.Parallel import Parallel
__all__ = ['Serial', 'Parallel']
| 25.6 | 48 | 0.804688 | 15 | 128 | 6.6 | 0.466667 | 0.242424 | 0.424242 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101563 | 128 | 4 | 49 | 32 | 0.86087 | 0 | 0 | 0 | 0 | 0 | 0.109375 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b1a6734c520e4ea8e7eea0c1623db4326930e8b0 | 28 | py | Python | src/pymas/__init__.py | mestradam/pymas | 528aa81be9848dea65152a359290238f6ba983a7 | [
"MIT"
] | 2 | 2021-06-26T18:01:41.000Z | 2022-03-06T03:40:55.000Z | src/pymas/__init__.py | mestradam/pymas | 528aa81be9848dea65152a359290238f6ba983a7 | [
"MIT"
] | 20 | 2020-12-20T22:48:09.000Z | 2022-02-27T16:11:26.000Z | src/pymas/__init__.py | mestradam/pymas | 528aa81be9848dea65152a359290238f6ba983a7 | [
"MIT"
] | 7 | 2019-05-11T12:55:34.000Z | 2021-09-04T06:20:14.000Z | from .core import Structure
| 14 | 27 | 0.821429 | 4 | 28 | 5.75 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.958333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
b1a9622041f283a17220ca29874cf794fbfb9c90 | 29 | py | Python | src/productrec/pipelines/splitting/__init__.py | HSV-AI/product-recommendation | 6e5fabce4f7e579e78a3c59730024d221169e3c4 | [
"Apache-2.0"
] | 2 | 2021-06-04T20:04:17.000Z | 2022-02-18T05:23:55.000Z | src/productrec/pipelines/splitting/__init__.py | HSV-AI/product-recommendation | 6e5fabce4f7e579e78a3c59730024d221169e3c4 | [
"Apache-2.0"
] | 28 | 2021-06-10T00:36:58.000Z | 2022-03-14T20:21:48.000Z | src/productrec/pipelines/splitting/__init__.py | HSV-AI/product-recommendation | 6e5fabce4f7e579e78a3c59730024d221169e3c4 | [
"Apache-2.0"
] | 1 | 2022-02-18T05:23:58.000Z | 2022-02-18T05:23:58.000Z | from .nodes import split_data | 29 | 29 | 0.862069 | 5 | 29 | 4.8 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.103448 | 29 | 1 | 29 | 29 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |