hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
50487e2fa39c2e035c5e926ce8cd92f98e777c0a | 48 | py | Python | names/tests.py | housepig7/ops | ed1dc6f6e160e2a4a414c1eeeee78ded02597013 | [
"Apache-2.0"
] | 394 | 2017-09-08T04:19:06.000Z | 2022-03-25T16:43:22.000Z | names/tests.py | kevin4shey/autoops | 5e717a2d86dd37cd2cfaf6db3d9613a0c41c49ae | [
"Apache-2.0"
] | 9 | 2017-10-11T02:20:55.000Z | 2022-03-25T09:43:08.000Z | names/tests.py | kevin4shey/autoops | 5e717a2d86dd37cd2cfaf6db3d9613a0c41c49ae | [
"Apache-2.0"
] | 218 | 2017-09-10T08:10:55.000Z | 2022-03-16T08:54:27.000Z |
def a(b):
return 1,2
b = a(1)
print(b[0]) | 8 | 14 | 0.479167 | 12 | 48 | 1.916667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 0.291667 | 48 | 6 | 15 | 8 | 0.558824 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0 | 0.25 | 0.5 | 0.25 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 6 |
504afd0dd2f2ece3116d732d344126ac10101d17 | 57 | py | Python | config.py | ukkdae5/Telegram-bot-Google-Drive | 738d7a464be68e40c3f986b78caba315f42b34bd | [
"MIT"
] | null | null | null | config.py | ukkdae5/Telegram-bot-Google-Drive | 738d7a464be68e40c3f986b78caba315f42b34bd | [
"MIT"
] | null | null | null | config.py | ukkdae5/Telegram-bot-Google-Drive | 738d7a464be68e40c3f986b78caba315f42b34bd | [
"MIT"
] | null | null | null | TOKEN = "1817094534:AAFDPBfqCZhaSrDsp3k0s1VLP1xksnROuJk"
| 28.5 | 56 | 0.877193 | 3 | 57 | 16.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.259259 | 0.052632 | 57 | 1 | 57 | 57 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0.807018 | 0.807018 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
acbcf58159806bce8bc3037d22d75ce7d220fdbc | 20 | py | Python | test/__init__.py | kiziebar/pysitcom | bcc9abc47826d0cbf8be124783f4a89db6501efe | [
"MIT"
] | 1 | 2020-11-29T20:57:09.000Z | 2020-11-29T20:57:09.000Z | test/__init__.py | kiziebar/pysitcom | bcc9abc47826d0cbf8be124783f4a89db6501efe | [
"MIT"
] | null | null | null | test/__init__.py | kiziebar/pysitcom | bcc9abc47826d0cbf8be124783f4a89db6501efe | [
"MIT"
] | null | null | null | from test import *
| 10 | 19 | 0.7 | 3 | 20 | 4.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 20 | 1 | 20 | 20 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
acc7da53bb103e0b5924610cd5c370b10465a690 | 7,443 | py | Python | koku/masu/test/processor/ocp/test_ocp_cloud_parquet_report_summary_updater.py | Vasyka/koku | b5aa9ec41c3b0821e74afe9ff3a5ffaedb910614 | [
"Apache-2.0"
] | 2 | 2022-01-12T03:42:39.000Z | 2022-01-12T03:42:40.000Z | koku/masu/test/processor/ocp/test_ocp_cloud_parquet_report_summary_updater.py | Vasyka/koku | b5aa9ec41c3b0821e74afe9ff3a5ffaedb910614 | [
"Apache-2.0"
] | null | null | null | koku/masu/test/processor/ocp/test_ocp_cloud_parquet_report_summary_updater.py | Vasyka/koku | b5aa9ec41c3b0821e74afe9ff3a5ffaedb910614 | [
"Apache-2.0"
] | 1 | 2021-07-21T09:33:59.000Z | 2021-07-21T09:33:59.000Z | #
# Copyright 2021 Red Hat Inc.
# SPDX-License-Identifier: Apache-2.0
#
"""Test the OCPCloudParquetReportSummaryUpdaterTest."""
import datetime
import decimal
from unittest.mock import MagicMock
from unittest.mock import Mock
from unittest.mock import patch
from tenant_schemas.utils import schema_context
from api.models import Provider
from api.utils import DateHelper
from masu.database.ocp_report_db_accessor import OCPReportDBAccessor
from masu.database.provider_db_accessor import ProviderDBAccessor
from masu.processor.ocp.ocp_cloud_parquet_summary_updater import OCPCloudParquetReportSummaryUpdater
from masu.test import MasuTestCase
class OCPCloudParquetReportSummaryUpdaterTest(MasuTestCase):
"""Test cases for the OCPCloudParquetReportSummaryUpdaterTest class."""
@classmethod
def setUpClass(cls):
"""Set up the test class with required objects."""
super().setUpClass()
cls.dh = DateHelper()
def setUp(self):
"""Set up tests."""
super().setUp()
self.today = self.dh.today
@patch("masu.processor.ocp.ocp_cloud_updater_base.OCPCloudUpdaterBase.get_infra_map")
@patch(
"masu.processor.ocp.ocp_cloud_parquet_summary_updater.AWSReportDBAccessor.populate_ocp_on_aws_tags_summary_table" # noqa: E501
)
@patch(
"masu.processor.ocp.ocp_cloud_parquet_summary_updater.AWSReportDBAccessor.populate_ocp_on_aws_cost_daily_summary_presto" # noqa: E501
)
@patch("masu.processor.ocp.ocp_cloud_parquet_summary_updater.aws_get_bills_from_provider")
def test_update_aws_summary_tables(self, mock_utility, mock_ocp_on_aws, mock_tag_summary, mock_map):
"""Test that summary tables are properly run for an OCP provider."""
fake_bills = MagicMock()
fake_bills.__iter__.return_value = [Mock(), Mock()]
first = Mock()
bill_id = 1
first.return_value.id = bill_id
fake_bills.first = first
mock_utility.return_value = fake_bills
start_date = self.dh.today.date()
end_date = start_date + datetime.timedelta(days=1)
with ProviderDBAccessor(self.aws_provider_uuid) as provider_accessor:
provider = provider_accessor.get_provider()
with OCPReportDBAccessor(self.schema_name) as accessor:
report_period = accessor.report_periods_for_provider_uuid(self.ocp_test_provider_uuid, start_date)
with schema_context(self.schema_name):
current_ocp_report_period_id = report_period.id
mock_map.return_value = {self.ocp_test_provider_uuid: (self.aws_provider_uuid, Provider.PROVIDER_AWS)}
updater = OCPCloudParquetReportSummaryUpdater(schema="acct10001", provider=provider, manifest=None)
updater.update_aws_summary_tables(
self.ocp_test_provider_uuid, self.aws_test_provider_uuid, start_date, end_date
)
mock_ocp_on_aws.assert_called_with(
start_date,
end_date,
self.ocp_test_provider_uuid,
self.aws_test_provider_uuid,
current_ocp_report_period_id,
bill_id,
decimal.Decimal(0),
)
@patch("masu.processor.ocp.ocp_cloud_updater_base.OCPCloudUpdaterBase.get_infra_map")
@patch(
"masu.processor.ocp.ocp_cloud_parquet_summary_updater.AzureReportDBAccessor.populate_ocp_on_azure_tags_summary_table" # noqa: E501
)
@patch(
"masu.processor.ocp.ocp_cloud_parquet_summary_updater.AzureReportDBAccessor.populate_ocp_on_azure_cost_daily_summary_presto" # noqa: E501
)
@patch("masu.processor.ocp.ocp_cloud_parquet_summary_updater.azure_get_bills_from_provider")
def test_update_azure_summary_tables(self, mock_utility, mock_ocp_on_azure, mock_tag_summary, mock_map):
"""Test that summary tables are properly run for an OCP provider."""
fake_bills = MagicMock()
fake_bills.__iter__.return_value = [Mock(), Mock()]
first = Mock()
bill_id = 1
first.return_value.id = bill_id
fake_bills.first = first
mock_utility.return_value = fake_bills
start_date = self.dh.today.date()
end_date = start_date + datetime.timedelta(days=1)
with ProviderDBAccessor(self.azure_provider_uuid) as provider_accessor:
provider = provider_accessor.get_provider()
with OCPReportDBAccessor(self.schema_name) as accessor:
report_period = accessor.report_periods_for_provider_uuid(self.ocp_test_provider_uuid, start_date)
with schema_context(self.schema_name):
current_ocp_report_period_id = report_period.id
mock_map.return_value = {self.ocp_test_provider_uuid: (self.azure_provider_uuid, Provider.PROVIDER_AZURE)}
updater = OCPCloudParquetReportSummaryUpdater(schema="acct10001", provider=provider, manifest=None)
updater.update_azure_summary_tables(
self.ocp_test_provider_uuid, self.azure_test_provider_uuid, start_date, end_date
)
mock_ocp_on_azure.assert_called_with(
start_date,
end_date,
self.ocp_test_provider_uuid,
self.azure_test_provider_uuid,
current_ocp_report_period_id,
bill_id,
decimal.Decimal(0),
)
@patch("masu.processor.ocp.ocp_cloud_updater_base.OCPCloudUpdaterBase.get_infra_map")
@patch(
"masu.processor.ocp.ocp_cloud_parquet_summary_updater.AzureReportDBAccessor.populate_ocp_on_azure_tags_summary_table" # noqa: E501
)
@patch(
"masu.processor.ocp.ocp_cloud_parquet_summary_updater.AzureReportDBAccessor.populate_ocp_on_azure_cost_daily_summary_presto" # noqa: E501
)
@patch("masu.processor.ocp.ocp_cloud_parquet_summary_updater.azure_get_bills_from_provider")
def test_update_azure_summary_tables_with_string_dates(
self, mock_utility, mock_ocp_on_azure, mock_tag_summary, mock_map
):
"""Test that summary tables are properly run for an OCP provider."""
fake_bills = MagicMock()
fake_bills.__iter__.return_value = [Mock(), Mock()]
first = Mock()
bill_id = 1
first.return_value.id = bill_id
fake_bills.first = first
mock_utility.return_value = fake_bills
start_date = self.dh.today.date()
end_date = start_date + datetime.timedelta(days=1)
with ProviderDBAccessor(self.azure_provider_uuid) as provider_accessor:
provider = provider_accessor.get_provider()
with OCPReportDBAccessor(self.schema_name) as accessor:
report_period = accessor.report_periods_for_provider_uuid(self.ocp_test_provider_uuid, start_date)
with schema_context(self.schema_name):
current_ocp_report_period_id = report_period.id
mock_map.return_value = {self.ocp_test_provider_uuid: (self.azure_provider_uuid, Provider.PROVIDER_AZURE)}
updater = OCPCloudParquetReportSummaryUpdater(schema="acct10001", provider=provider, manifest=None)
updater.update_azure_summary_tables(
self.ocp_test_provider_uuid, self.azure_test_provider_uuid, str(start_date), str(end_date)
)
mock_ocp_on_azure.assert_called_with(
start_date,
end_date,
self.ocp_test_provider_uuid,
self.azure_test_provider_uuid,
current_ocp_report_period_id,
bill_id,
decimal.Decimal(0),
)
| 46.229814 | 146 | 0.723767 | 916 | 7,443 | 5.469432 | 0.129913 | 0.064671 | 0.057485 | 0.049301 | 0.834331 | 0.833134 | 0.833134 | 0.825948 | 0.806986 | 0.806986 | 0 | 0.008062 | 0.200054 | 7,443 | 160 | 147 | 46.51875 | 0.833389 | 0.066371 | 0 | 0.647059 | 0 | 0 | 0.173667 | 0.169757 | 0 | 0 | 0 | 0 | 0.022059 | 1 | 0.036765 | false | 0 | 0.088235 | 0 | 0.132353 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
4a207663502acae1032c8425c88e538e39596bea | 929 | py | Python | test/test_contact_relations_tags.py | Pluxbox/radiomanager-python-client | a25450c079110fb12d8e5b00f8b96c2619ed6172 | [
"MIT"
] | null | null | null | test/test_contact_relations_tags.py | Pluxbox/radiomanager-python-client | a25450c079110fb12d8e5b00f8b96c2619ed6172 | [
"MIT"
] | 1 | 2018-09-05T08:51:24.000Z | 2018-09-06T14:56:30.000Z | test/test_contact_relations_tags.py | Pluxbox/radiomanager-python-client | a25450c079110fb12d8e5b00f8b96c2619ed6172 | [
"MIT"
] | null | null | null | # coding: utf-8
"""
RadioManager
RadioManager # noqa: E501
OpenAPI spec version: 2.0
Contact: support@pluxbox.com
Generated by: https://github.com/swagger-api/swagger-codegen.git
"""
from __future__ import absolute_import
import unittest
import radiomanager_sdk
from radiomanager_sdk.models.contact_relations_tags import ContactRelationsTags # noqa: E501
from radiomanager_sdk.rest import ApiException
class TestContactRelationsTags(unittest.TestCase):
"""ContactRelationsTags unit test stubs"""
def setUp(self):
pass
def tearDown(self):
pass
def testContactRelationsTags(self):
"""Test ContactRelationsTags"""
# FIXME: construct object with mandatory attributes with example values
# model = radiomanager_sdk.models.contact_relations_tags.ContactRelationsTags() # noqa: E501
pass
if __name__ == '__main__':
unittest.main()
| 22.658537 | 101 | 0.723358 | 99 | 929 | 6.575758 | 0.585859 | 0.092166 | 0.058372 | 0.086022 | 0.12596 | 0.12596 | 0 | 0 | 0 | 0 | 0 | 0.016064 | 0.19591 | 929 | 40 | 102 | 23.225 | 0.855422 | 0.442411 | 0 | 0.214286 | 1 | 0 | 0.016807 | 0 | 0 | 0 | 0 | 0.025 | 0 | 1 | 0.214286 | false | 0.214286 | 0.357143 | 0 | 0.642857 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
4a231e0719f77b0cb5128a896a17d6ee62ffa1d0 | 149 | py | Python | ding/worker/__init__.py | LuciusMos/DI-engine | b040b1c36afce038effec9eb483f625131573824 | [
"Apache-2.0"
] | 464 | 2021-07-08T07:26:33.000Z | 2022-03-31T12:35:16.000Z | ding/worker/__init__.py | LuciusMos/DI-engine | b040b1c36afce038effec9eb483f625131573824 | [
"Apache-2.0"
] | 177 | 2021-07-09T08:22:55.000Z | 2022-03-31T07:35:22.000Z | ding/worker/__init__.py | LuciusMos/DI-engine | b040b1c36afce038effec9eb483f625131573824 | [
"Apache-2.0"
] | 92 | 2021-07-08T12:16:37.000Z | 2022-03-31T09:24:41.000Z | from .collector import *
from .learner import *
from .replay_buffer import *
from .coordinator import *
from .adapter import *
from .buffer import *
| 21.285714 | 28 | 0.758389 | 19 | 149 | 5.894737 | 0.421053 | 0.446429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.161074 | 149 | 6 | 29 | 24.833333 | 0.896 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a840085c86e57f0eb584fc67c116af55c6fbcbaa | 28 | py | Python | sams/__init__.py | choderalab/sams | 07d2db80b43aa60062b9855b030f729516580ffa | [
"MIT"
] | 2 | 2020-08-31T15:22:44.000Z | 2021-03-05T07:47:54.000Z | sams/__init__.py | choderalab/sams | 07d2db80b43aa60062b9855b030f729516580ffa | [
"MIT"
] | 10 | 2016-05-27T22:08:33.000Z | 2021-06-07T07:32:02.000Z | sams/__init__.py | choderalab/sams | 07d2db80b43aa60062b9855b030f729516580ffa | [
"MIT"
] | 6 | 2016-05-27T18:03:38.000Z | 2021-04-13T03:55:37.000Z | from sams.samplers import *
| 14 | 27 | 0.785714 | 4 | 28 | 5.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 28 | 1 | 28 | 28 | 0.916667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a8888b4fab31ae499dfc7a1b65c774ef2aac6a45 | 27 | py | Python | workspace/test.py | wangziling100/renju | 933a2eca2871414ad5f3478cc2e2736be697887f | [
"MIT"
] | null | null | null | workspace/test.py | wangziling100/renju | 933a2eca2871414ad5f3478cc2e2736be697887f | [
"MIT"
] | null | null | null | workspace/test.py | wangziling100/renju | 933a2eca2871414ad5f3478cc2e2736be697887f | [
"MIT"
] | null | null | null | import torch
print('test')
| 9 | 13 | 0.740741 | 4 | 27 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 27 | 2 | 14 | 13.5 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0.148148 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
a89b7002543f134ebe54433275481169fc64bbc6 | 30 | py | Python | ngboost/__init__.py | prismleong/ngboost | 298751df9441141c622074bed19f28986a7c1d7a | [
"Apache-2.0"
] | 1 | 2020-02-19T07:20:10.000Z | 2020-02-19T07:20:10.000Z | ngboost/__init__.py | prismleong/ngboost | 298751df9441141c622074bed19f28986a7c1d7a | [
"Apache-2.0"
] | null | null | null | ngboost/__init__.py | prismleong/ngboost | 298751df9441141c622074bed19f28986a7c1d7a | [
"Apache-2.0"
] | null | null | null | from .ngboost import NGBoost
| 15 | 29 | 0.8 | 4 | 30 | 6 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 30 | 1 | 30 | 30 | 0.96 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
a8da3609a9dca738c428e921744a9eba30eabe65 | 1,017 | py | Python | tests/save_clean.py | scicubator/countGauss | 7e744c3de1de342d72ef10da76c0c3b4605d70d4 | [
"BSD-2-Clause"
] | null | null | null | tests/save_clean.py | scicubator/countGauss | 7e744c3de1de342d72ef10da76c0c3b4605d70d4 | [
"BSD-2-Clause"
] | null | null | null | tests/save_clean.py | scicubator/countGauss | 7e744c3de1de342d72ef10da76c0c3b4605d70d4 | [
"BSD-2-Clause"
] | null | null | null | import pickle
import pylab as plt
D = pickle.load(open("tests/clean.pkl", "r"))
fig, ax = plt.subplots()
fig.canvas.draw()
plt.imshow(D['gauss'].T, cmap='gray_r', origin='lower')
labels = [-5, 5, 15, 25, 35, 45]
ax.set_xticklabels(labels)
ax.get_yticklabels()
ylabels = [-1, 1, 2, 3, 4, 5, 6]
ax.set_yticklabels(ylabels)
ax.set_yticklabels(ylabels, fontsize=45)
ax.set_xticklabels(labels, fontsize=45)
plt.ylabel("m/k", fontsize=50)
plt.savefig("tests/clean_gauss.pdf", transparent=True, bbox_inches='tight',
pad_inches=0)
fig, ax = plt.subplots()
fig.canvas.draw()
plt.imshow(D['countGauss'].T, cmap='gray_r', origin='lower')
labels = [-5, 5, 15, 25, 35, 45]
ax.set_xticklabels(labels)
ax.get_yticklabels()
# ylabels=[-1, 1, 2, 3, 4, 5, 6]
ylabels = []
ax.set_yticklabels(ylabels)
ax.set_yticklabels(ylabels, fontsize=45)
ax.set_xticklabels(labels, fontsize=45)
# plt.ylabel("m/k", fontsize=50)
plt.savefig("tests/clean_countgauss.pdf", transparent=True, bbox_inches='tight',
pad_inches=0)
| 30.818182 | 80 | 0.700098 | 165 | 1,017 | 4.206061 | 0.333333 | 0.057637 | 0.040346 | 0.103746 | 0.87464 | 0.864553 | 0.864553 | 0.864553 | 0.864553 | 0.740634 | 0 | 0.053215 | 0.113078 | 1,017 | 32 | 81 | 31.78125 | 0.716186 | 0.05998 | 0 | 0.642857 | 0 | 0 | 0.118573 | 0.049318 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.071429 | 0 | 0.071429 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
76386e5a44bfc216d4b1c5e64683a837c0a6520d | 515 | py | Python | book/BookRepository.py | TestowanieAutomatyczneUG/laboratorium_14-melkorw | ea733b52b6fa1836d3771b4e37be5a6b11c0765f | [
"MIT"
] | null | null | null | book/BookRepository.py | TestowanieAutomatyczneUG/laboratorium_14-melkorw | ea733b52b6fa1836d3771b4e37be5a6b11c0765f | [
"MIT"
] | null | null | null | book/BookRepository.py | TestowanieAutomatyczneUG/laboratorium_14-melkorw | ea733b52b6fa1836d3771b4e37be5a6b11c0765f | [
"MIT"
] | null | null | null | from abc import ABC
class BookRepository(ABC):
def __init__(self, data_source=[]):
self.__data_source = data_source
def find_all(self):
return self.data_source
def find_by_id(self, book_id):
return self.data_source[book_id]
def add(self, book):
self.__data_source.append(book)
return True
def delete(self, book):
self.__data_source.remove(book)
return True
@property
def data_source(self):
return self.__data_source
| 20.6 | 40 | 0.648544 | 69 | 515 | 4.463768 | 0.318841 | 0.292208 | 0.318182 | 0.194805 | 0.298701 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.266019 | 515 | 24 | 41 | 21.458333 | 0.814815 | 0 | 0 | 0.117647 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.352941 | false | 0 | 0.058824 | 0.176471 | 0.764706 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
76444f6144829331276ac52e3ac2e7e32fe50217 | 304,996 | py | Python | sdk/python/pulumi_azure_nextgen/recoveryservices/v20160810/outputs.py | test-wiz-sec/pulumi-azure-nextgen | 20a695af0d020b34b0f1c336e1b69702755174cc | [
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_azure_nextgen/recoveryservices/v20160810/outputs.py | test-wiz-sec/pulumi-azure-nextgen | 20a695af0d020b34b0f1c336e1b69702755174cc | [
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_azure_nextgen/recoveryservices/v20160810/outputs.py | test-wiz-sec/pulumi-azure-nextgen | 20a695af0d020b34b0f1c336e1b69702755174cc | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi SDK Generator. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union
from ... import _utilities, _tables
from . import outputs
__all__ = [
'A2APolicyDetailsResponse',
'A2AProtectedDiskDetailsResponse',
'A2AProtectedManagedDiskDetailsResponse',
'A2AReplicationDetailsResponse',
'AzureFabricSpecificDetailsResponse',
'AzureToAzureNetworkMappingSettingsResponse',
'AzureToAzureVmSyncedConfigDetailsResponse',
'AzureVmDiskDetailsResponse',
'CurrentScenarioDetailsResponse',
'DataStoreResponse',
'DiskDetailsResponse',
'EncryptionDetailsResponse',
'FabricPropertiesResponse',
'HealthErrorResponse',
'HyperVReplicaAzurePolicyDetailsResponse',
'HyperVReplicaAzureReplicationDetailsResponse',
'HyperVReplicaBasePolicyDetailsResponse',
'HyperVReplicaBaseReplicationDetailsResponse',
'HyperVReplicaBluePolicyDetailsResponse',
'HyperVReplicaBlueReplicationDetailsResponse',
'HyperVReplicaPolicyDetailsResponse',
'HyperVReplicaReplicationDetailsResponse',
'HyperVSiteDetailsResponse',
'InMageAgentDetailsResponse',
'InMageAzureV2PolicyDetailsResponse',
'InMageAzureV2ProtectedDiskDetailsResponse',
'InMageAzureV2ReplicationDetailsResponse',
'InMageBasePolicyDetailsResponse',
'InMagePolicyDetailsResponse',
'InMageProtectedDiskDetailsResponse',
'InMageReplicationDetailsResponse',
'InitialReplicationDetailsResponse',
'InputEndpointResponse',
'MasterTargetServerResponse',
'MobilityServiceUpdateResponse',
'NetworkMappingPropertiesResponse',
'OSDetailsResponse',
'OSDiskDetailsResponse',
'PolicyPropertiesResponse',
'ProcessServerResponse',
'ProtectionContainerMappingPropertiesResponse',
'ProtectionContainerMappingProviderSpecificDetailsResponse',
'RcmAzureMigrationPolicyDetailsResponse',
'RecoveryPlanActionResponse',
'RecoveryPlanAutomationRunbookActionDetailsResponse',
'RecoveryPlanGroupResponse',
'RecoveryPlanManualActionDetailsResponse',
'RecoveryPlanPropertiesResponse',
'RecoveryPlanProtectedItemResponse',
'RecoveryPlanScriptActionDetailsResponse',
'ReplicationProtectedItemPropertiesResponse',
'RetentionVolumeResponse',
'RoleAssignmentResponse',
'RunAsAccountResponse',
'StorageClassificationMappingPropertiesResponse',
'VCenterPropertiesResponse',
'VMNicDetailsResponse',
'VMwareDetailsResponse',
'VMwareV2FabricSpecificDetailsResponse',
'VmmDetailsResponse',
'VmmToAzureNetworkMappingSettingsResponse',
'VmmToVmmNetworkMappingSettingsResponse',
'VmwareCbtPolicyDetailsResponse',
]
@pulumi.output_type
class A2APolicyDetailsResponse(dict):
"""
A2A specific policy details.
"""
def __init__(__self__, *,
instance_type: str,
app_consistent_frequency_in_minutes: Optional[int] = None,
crash_consistent_frequency_in_minutes: Optional[int] = None,
multi_vm_sync_status: Optional[str] = None,
recovery_point_history: Optional[int] = None,
recovery_point_threshold_in_minutes: Optional[int] = None):
"""
A2A specific policy details.
:param str instance_type: Gets the class type. Overridden in derived classes.
:param int app_consistent_frequency_in_minutes: The app consistent snapshot frequency in minutes.
:param int crash_consistent_frequency_in_minutes: The crash consistent snapshot frequency in minutes.
:param str multi_vm_sync_status: A value indicating whether multi-VM sync has to be enabled.
:param int recovery_point_history: The duration in minutes until which the recovery points need to be stored.
:param int recovery_point_threshold_in_minutes: The recovery point threshold in minutes.
"""
pulumi.set(__self__, "instance_type", 'A2A')
if app_consistent_frequency_in_minutes is not None:
pulumi.set(__self__, "app_consistent_frequency_in_minutes", app_consistent_frequency_in_minutes)
if crash_consistent_frequency_in_minutes is not None:
pulumi.set(__self__, "crash_consistent_frequency_in_minutes", crash_consistent_frequency_in_minutes)
if multi_vm_sync_status is not None:
pulumi.set(__self__, "multi_vm_sync_status", multi_vm_sync_status)
if recovery_point_history is not None:
pulumi.set(__self__, "recovery_point_history", recovery_point_history)
if recovery_point_threshold_in_minutes is not None:
pulumi.set(__self__, "recovery_point_threshold_in_minutes", recovery_point_threshold_in_minutes)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the class type. Overridden in derived classes.
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="appConsistentFrequencyInMinutes")
def app_consistent_frequency_in_minutes(self) -> Optional[int]:
"""
The app consistent snapshot frequency in minutes.
"""
return pulumi.get(self, "app_consistent_frequency_in_minutes")
@property
@pulumi.getter(name="crashConsistentFrequencyInMinutes")
def crash_consistent_frequency_in_minutes(self) -> Optional[int]:
"""
The crash consistent snapshot frequency in minutes.
"""
return pulumi.get(self, "crash_consistent_frequency_in_minutes")
@property
@pulumi.getter(name="multiVmSyncStatus")
def multi_vm_sync_status(self) -> Optional[str]:
"""
A value indicating whether multi-VM sync has to be enabled.
"""
return pulumi.get(self, "multi_vm_sync_status")
@property
@pulumi.getter(name="recoveryPointHistory")
def recovery_point_history(self) -> Optional[int]:
"""
The duration in minutes until which the recovery points need to be stored.
"""
return pulumi.get(self, "recovery_point_history")
@property
@pulumi.getter(name="recoveryPointThresholdInMinutes")
def recovery_point_threshold_in_minutes(self) -> Optional[int]:
"""
The recovery point threshold in minutes.
"""
return pulumi.get(self, "recovery_point_threshold_in_minutes")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class A2AProtectedDiskDetailsResponse(dict):
    """
    A2A protected disk details.
    """
    def __init__(__self__, *,
                 data_pending_at_source_agent_in_mb: Optional[float] = None,
                 data_pending_in_staging_storage_account_in_mb: Optional[float] = None,
                 disk_capacity_in_bytes: Optional[int] = None,
                 disk_name: Optional[str] = None,
                 disk_type: Optional[str] = None,
                 disk_uri: Optional[str] = None,
                 monitoring_job_type: Optional[str] = None,
                 monitoring_percentage_completion: Optional[int] = None,
                 primary_disk_azure_storage_account_id: Optional[str] = None,
                 primary_staging_azure_storage_account_id: Optional[str] = None,
                 recovery_azure_storage_account_id: Optional[str] = None,
                 recovery_disk_uri: Optional[str] = None,
                 resync_required: Optional[bool] = None):
        """
        A2A protected disk details.
        :param float data_pending_at_source_agent_in_mb: The data pending at source virtual machine in MB.
        :param float data_pending_in_staging_storage_account_in_mb: The data pending for replication in MB at staging account.
        :param int disk_capacity_in_bytes: The disk capacity in bytes.
        :param str disk_name: The disk name.
        :param str disk_type: The type of disk.
        :param str disk_uri: The disk uri.
        :param str monitoring_job_type: The type of the monitoring job. The progress is contained in MonitoringPercentageCompletion property.
        :param int monitoring_percentage_completion: The percentage of the monitoring job. The type of the monitoring job is defined by MonitoringJobType property.
        :param str primary_disk_azure_storage_account_id: The primary disk storage account.
        :param str primary_staging_azure_storage_account_id: The primary staging storage account.
        :param str recovery_azure_storage_account_id: The recovery disk storage account.
        :param str recovery_disk_uri: Recovery disk uri.
        :param bool resync_required: A value indicating whether resync is required for this disk.
        """
        if data_pending_at_source_agent_in_mb is not None:
            pulumi.set(__self__, "data_pending_at_source_agent_in_mb", data_pending_at_source_agent_in_mb)
        if data_pending_in_staging_storage_account_in_mb is not None:
            pulumi.set(__self__, "data_pending_in_staging_storage_account_in_mb", data_pending_in_staging_storage_account_in_mb)
        if disk_capacity_in_bytes is not None:
            pulumi.set(__self__, "disk_capacity_in_bytes", disk_capacity_in_bytes)
        if disk_name is not None:
            pulumi.set(__self__, "disk_name", disk_name)
        if disk_type is not None:
            pulumi.set(__self__, "disk_type", disk_type)
        if disk_uri is not None:
            pulumi.set(__self__, "disk_uri", disk_uri)
        if monitoring_job_type is not None:
            pulumi.set(__self__, "monitoring_job_type", monitoring_job_type)
        if monitoring_percentage_completion is not None:
            pulumi.set(__self__, "monitoring_percentage_completion", monitoring_percentage_completion)
        if primary_disk_azure_storage_account_id is not None:
            pulumi.set(__self__, "primary_disk_azure_storage_account_id", primary_disk_azure_storage_account_id)
        if primary_staging_azure_storage_account_id is not None:
            pulumi.set(__self__, "primary_staging_azure_storage_account_id", primary_staging_azure_storage_account_id)
        if recovery_azure_storage_account_id is not None:
            pulumi.set(__self__, "recovery_azure_storage_account_id", recovery_azure_storage_account_id)
        if recovery_disk_uri is not None:
            pulumi.set(__self__, "recovery_disk_uri", recovery_disk_uri)
        if resync_required is not None:
            pulumi.set(__self__, "resync_required", resync_required)

    @property
    @pulumi.getter(name="dataPendingAtSourceAgentInMB")
    def data_pending_at_source_agent_in_mb(self) -> Optional[float]:
        """
        The data pending at source virtual machine in MB.
        """
        return pulumi.get(self, "data_pending_at_source_agent_in_mb")

    @property
    @pulumi.getter(name="dataPendingInStagingStorageAccountInMB")
    def data_pending_in_staging_storage_account_in_mb(self) -> Optional[float]:
        """
        The data pending for replication in MB at staging account.
        """
        return pulumi.get(self, "data_pending_in_staging_storage_account_in_mb")

    @property
    @pulumi.getter(name="diskCapacityInBytes")
    def disk_capacity_in_bytes(self) -> Optional[int]:
        """
        The disk capacity in bytes.
        """
        return pulumi.get(self, "disk_capacity_in_bytes")

    @property
    @pulumi.getter(name="diskName")
    def disk_name(self) -> Optional[str]:
        """
        The disk name.
        """
        return pulumi.get(self, "disk_name")

    @property
    @pulumi.getter(name="diskType")
    def disk_type(self) -> Optional[str]:
        """
        The type of disk.
        """
        return pulumi.get(self, "disk_type")

    @property
    @pulumi.getter(name="diskUri")
    def disk_uri(self) -> Optional[str]:
        """
        The disk uri.
        """
        return pulumi.get(self, "disk_uri")

    @property
    @pulumi.getter(name="monitoringJobType")
    def monitoring_job_type(self) -> Optional[str]:
        """
        The type of the monitoring job. The progress is contained in MonitoringPercentageCompletion property.
        """
        return pulumi.get(self, "monitoring_job_type")

    @property
    @pulumi.getter(name="monitoringPercentageCompletion")
    def monitoring_percentage_completion(self) -> Optional[int]:
        """
        The percentage of the monitoring job. The type of the monitoring job is defined by MonitoringJobType property.
        """
        return pulumi.get(self, "monitoring_percentage_completion")

    @property
    @pulumi.getter(name="primaryDiskAzureStorageAccountId")
    def primary_disk_azure_storage_account_id(self) -> Optional[str]:
        """
        The primary disk storage account.
        """
        return pulumi.get(self, "primary_disk_azure_storage_account_id")

    @property
    @pulumi.getter(name="primaryStagingAzureStorageAccountId")
    def primary_staging_azure_storage_account_id(self) -> Optional[str]:
        """
        The primary staging storage account.
        """
        return pulumi.get(self, "primary_staging_azure_storage_account_id")

    @property
    @pulumi.getter(name="recoveryAzureStorageAccountId")
    def recovery_azure_storage_account_id(self) -> Optional[str]:
        """
        The recovery disk storage account.
        """
        return pulumi.get(self, "recovery_azure_storage_account_id")

    @property
    @pulumi.getter(name="recoveryDiskUri")
    def recovery_disk_uri(self) -> Optional[str]:
        """
        Recovery disk uri.
        """
        return pulumi.get(self, "recovery_disk_uri")

    @property
    @pulumi.getter(name="resyncRequired")
    def resync_required(self) -> Optional[bool]:
        """
        A value indicating whether resync is required for this disk.
        """
        return pulumi.get(self, "resync_required")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
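The `_translate_property` helpers above all defer to a generated `CAMEL_TO_SNAKE_CASE_TABLE` lookup. A minimal standalone sketch of that lookup behavior (the table contents and `translate_property` name here are illustrative, not the SDK's actual `_tables` module):

```python
# Hypothetical stand-in for _tables.CAMEL_TO_SNAKE_CASE_TABLE: the generated
# SDK ships an explicit lookup table rather than converting algorithmically.
CAMEL_TO_SNAKE_CASE_TABLE = {
    "diskCapacityInBytes": "disk_capacity_in_bytes",
    "resyncRequired": "resync_required",
}

def translate_property(prop: str) -> str:
    # Mirrors _translate_property: use the table hit, else return the name as-is.
    return CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop

print(translate_property("diskCapacityInBytes"))  # disk_capacity_in_bytes
print(translate_property("unknownKey"))           # unknownKey
```

Keys absent from the table fall through unchanged, which keeps lookups safe for properties the table does not know about.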


@pulumi.output_type
class A2AProtectedManagedDiskDetailsResponse(dict):
    """
    A2A protected managed disk details.
    """
    def __init__(__self__, *,
                 data_pending_at_source_agent_in_mb: Optional[float] = None,
                 data_pending_in_staging_storage_account_in_mb: Optional[float] = None,
                 disk_capacity_in_bytes: Optional[int] = None,
                 disk_id: Optional[str] = None,
                 disk_name: Optional[str] = None,
                 disk_type: Optional[str] = None,
                 monitoring_job_type: Optional[str] = None,
                 monitoring_percentage_completion: Optional[int] = None,
                 primary_staging_azure_storage_account_id: Optional[str] = None,
                 recovery_azure_resource_group_id: Optional[str] = None,
                 recovery_disk_id: Optional[str] = None,
                 resync_required: Optional[bool] = None):
        """
        A2A protected managed disk details.
        :param float data_pending_at_source_agent_in_mb: The data pending at source virtual machine in MB.
        :param float data_pending_in_staging_storage_account_in_mb: The data pending for replication in MB at staging account.
        :param int disk_capacity_in_bytes: The disk capacity in bytes.
        :param str disk_id: The managed disk Arm id.
        :param str disk_name: The disk name.
        :param str disk_type: The type of disk.
        :param str monitoring_job_type: The type of the monitoring job. The progress is contained in MonitoringPercentageCompletion property.
        :param int monitoring_percentage_completion: The percentage of the monitoring job. The type of the monitoring job is defined by MonitoringJobType property.
        :param str primary_staging_azure_storage_account_id: The primary staging storage account.
        :param str recovery_azure_resource_group_id: The recovery disk resource group Arm Id.
        :param str recovery_disk_id: Recovery disk Arm Id.
        :param bool resync_required: A value indicating whether resync is required for this disk.
        """
        if data_pending_at_source_agent_in_mb is not None:
            pulumi.set(__self__, "data_pending_at_source_agent_in_mb", data_pending_at_source_agent_in_mb)
        if data_pending_in_staging_storage_account_in_mb is not None:
            pulumi.set(__self__, "data_pending_in_staging_storage_account_in_mb", data_pending_in_staging_storage_account_in_mb)
        if disk_capacity_in_bytes is not None:
            pulumi.set(__self__, "disk_capacity_in_bytes", disk_capacity_in_bytes)
        if disk_id is not None:
            pulumi.set(__self__, "disk_id", disk_id)
        if disk_name is not None:
            pulumi.set(__self__, "disk_name", disk_name)
        if disk_type is not None:
            pulumi.set(__self__, "disk_type", disk_type)
        if monitoring_job_type is not None:
            pulumi.set(__self__, "monitoring_job_type", monitoring_job_type)
        if monitoring_percentage_completion is not None:
            pulumi.set(__self__, "monitoring_percentage_completion", monitoring_percentage_completion)
        if primary_staging_azure_storage_account_id is not None:
            pulumi.set(__self__, "primary_staging_azure_storage_account_id", primary_staging_azure_storage_account_id)
        if recovery_azure_resource_group_id is not None:
            pulumi.set(__self__, "recovery_azure_resource_group_id", recovery_azure_resource_group_id)
        if recovery_disk_id is not None:
            pulumi.set(__self__, "recovery_disk_id", recovery_disk_id)
        if resync_required is not None:
            pulumi.set(__self__, "resync_required", resync_required)

    @property
    @pulumi.getter(name="dataPendingAtSourceAgentInMB")
    def data_pending_at_source_agent_in_mb(self) -> Optional[float]:
        """
        The data pending at source virtual machine in MB.
        """
        return pulumi.get(self, "data_pending_at_source_agent_in_mb")

    @property
    @pulumi.getter(name="dataPendingInStagingStorageAccountInMB")
    def data_pending_in_staging_storage_account_in_mb(self) -> Optional[float]:
        """
        The data pending for replication in MB at staging account.
        """
        return pulumi.get(self, "data_pending_in_staging_storage_account_in_mb")

    @property
    @pulumi.getter(name="diskCapacityInBytes")
    def disk_capacity_in_bytes(self) -> Optional[int]:
        """
        The disk capacity in bytes.
        """
        return pulumi.get(self, "disk_capacity_in_bytes")

    @property
    @pulumi.getter(name="diskId")
    def disk_id(self) -> Optional[str]:
        """
        The managed disk Arm id.
        """
        return pulumi.get(self, "disk_id")

    @property
    @pulumi.getter(name="diskName")
    def disk_name(self) -> Optional[str]:
        """
        The disk name.
        """
        return pulumi.get(self, "disk_name")

    @property
    @pulumi.getter(name="diskType")
    def disk_type(self) -> Optional[str]:
        """
        The type of disk.
        """
        return pulumi.get(self, "disk_type")

    @property
    @pulumi.getter(name="monitoringJobType")
    def monitoring_job_type(self) -> Optional[str]:
        """
        The type of the monitoring job. The progress is contained in MonitoringPercentageCompletion property.
        """
        return pulumi.get(self, "monitoring_job_type")

    @property
    @pulumi.getter(name="monitoringPercentageCompletion")
    def monitoring_percentage_completion(self) -> Optional[int]:
        """
        The percentage of the monitoring job. The type of the monitoring job is defined by MonitoringJobType property.
        """
        return pulumi.get(self, "monitoring_percentage_completion")

    @property
    @pulumi.getter(name="primaryStagingAzureStorageAccountId")
    def primary_staging_azure_storage_account_id(self) -> Optional[str]:
        """
        The primary staging storage account.
        """
        return pulumi.get(self, "primary_staging_azure_storage_account_id")

    @property
    @pulumi.getter(name="recoveryAzureResourceGroupId")
    def recovery_azure_resource_group_id(self) -> Optional[str]:
        """
        The recovery disk resource group Arm Id.
        """
        return pulumi.get(self, "recovery_azure_resource_group_id")

    @property
    @pulumi.getter(name="recoveryDiskId")
    def recovery_disk_id(self) -> Optional[str]:
        """
        Recovery disk Arm Id.
        """
        return pulumi.get(self, "recovery_disk_id")

    @property
    @pulumi.getter(name="resyncRequired")
    def resync_required(self) -> Optional[bool]:
        """
        A value indicating whether resync is required for this disk.
        """
        return pulumi.get(self, "resync_required")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class A2AReplicationDetailsResponse(dict):
    """
    A2A provider specific settings.
    """
    def __init__(__self__, *,
                 instance_type: str,
                 agent_version: Optional[str] = None,
                 fabric_object_id: Optional[str] = None,
                 is_replication_agent_update_required: Optional[bool] = None,
                 last_heartbeat: Optional[str] = None,
                 last_rpo_calculated_time: Optional[str] = None,
                 lifecycle_id: Optional[str] = None,
                 management_id: Optional[str] = None,
                 monitoring_job_type: Optional[str] = None,
                 monitoring_percentage_completion: Optional[int] = None,
                 multi_vm_group_id: Optional[str] = None,
                 multi_vm_group_name: Optional[str] = None,
                 os_type: Optional[str] = None,
                 primary_fabric_location: Optional[str] = None,
                 protected_disks: Optional[Sequence['outputs.A2AProtectedDiskDetailsResponse']] = None,
                 protected_managed_disks: Optional[Sequence['outputs.A2AProtectedManagedDiskDetailsResponse']] = None,
                 recovery_availability_set: Optional[str] = None,
                 recovery_azure_resource_group_id: Optional[str] = None,
                 recovery_azure_vm_name: Optional[str] = None,
                 recovery_azure_vm_size: Optional[str] = None,
                 recovery_cloud_service: Optional[str] = None,
                 recovery_fabric_location: Optional[str] = None,
                 recovery_fabric_object_id: Optional[str] = None,
                 rpo_in_seconds: Optional[int] = None,
                 selected_recovery_azure_network_id: Optional[str] = None,
                 test_failover_recovery_fabric_object_id: Optional[str] = None,
                 vm_nics: Optional[Sequence['outputs.VMNicDetailsResponse']] = None,
                 vm_protection_state: Optional[str] = None,
                 vm_protection_state_description: Optional[str] = None,
                 vm_synced_config_details: Optional['outputs.AzureToAzureVmSyncedConfigDetailsResponse'] = None):
        """
        A2A provider specific settings.
        :param str instance_type: Gets the Instance type.
        :param str agent_version: The agent version.
        :param str fabric_object_id: The fabric specific object Id of the virtual machine.
        :param bool is_replication_agent_update_required: A value indicating whether replication agent update is required.
        :param str last_heartbeat: The last heartbeat received from the source server.
        :param str last_rpo_calculated_time: The time (in UTC) when the last RPO value was calculated by Protection Service.
        :param str lifecycle_id: An id associated with the PE that survives actions like switch protection, which change the backing PE/CPE objects internally. The lifecycle id is carried forward so that a single Id continues to denote the "same" protected item even though other internal Ids/ARM Ids may change.
        :param str management_id: The management Id.
        :param str monitoring_job_type: The type of the monitoring job. The progress is contained in MonitoringPercentageCompletion property.
        :param int monitoring_percentage_completion: The percentage of the monitoring job. The type of the monitoring job is defined by MonitoringJobType property.
        :param str multi_vm_group_id: The multi vm group Id.
        :param str multi_vm_group_name: The multi vm group name.
        :param str os_type: The type of operating system.
        :param str primary_fabric_location: Primary fabric location.
        :param Sequence['A2AProtectedDiskDetailsResponseArgs'] protected_disks: The list of protected disks.
        :param Sequence['A2AProtectedManagedDiskDetailsResponseArgs'] protected_managed_disks: The list of protected managed disks.
        :param str recovery_availability_set: The recovery availability set.
        :param str recovery_azure_resource_group_id: The recovery resource group.
        :param str recovery_azure_vm_name: The name of recovery virtual machine.
        :param str recovery_azure_vm_size: The size of recovery virtual machine.
        :param str recovery_cloud_service: The recovery cloud service.
        :param str recovery_fabric_location: The recovery fabric location.
        :param str recovery_fabric_object_id: The recovery fabric object Id.
        :param int rpo_in_seconds: The last RPO value in seconds.
        :param str selected_recovery_azure_network_id: The recovery virtual network.
        :param str test_failover_recovery_fabric_object_id: The test failover fabric object Id.
        :param Sequence['VMNicDetailsResponseArgs'] vm_nics: The virtual machine nic details.
        :param str vm_protection_state: The protection state for the vm.
        :param str vm_protection_state_description: The protection state description for the vm.
        :param 'AzureToAzureVmSyncedConfigDetailsResponseArgs' vm_synced_config_details: The synced configuration details.
        """
        pulumi.set(__self__, "instance_type", 'A2A')
        if agent_version is not None:
            pulumi.set(__self__, "agent_version", agent_version)
        if fabric_object_id is not None:
            pulumi.set(__self__, "fabric_object_id", fabric_object_id)
        if is_replication_agent_update_required is not None:
            pulumi.set(__self__, "is_replication_agent_update_required", is_replication_agent_update_required)
        if last_heartbeat is not None:
            pulumi.set(__self__, "last_heartbeat", last_heartbeat)
        if last_rpo_calculated_time is not None:
            pulumi.set(__self__, "last_rpo_calculated_time", last_rpo_calculated_time)
        if lifecycle_id is not None:
            pulumi.set(__self__, "lifecycle_id", lifecycle_id)
        if management_id is not None:
            pulumi.set(__self__, "management_id", management_id)
        if monitoring_job_type is not None:
            pulumi.set(__self__, "monitoring_job_type", monitoring_job_type)
        if monitoring_percentage_completion is not None:
            pulumi.set(__self__, "monitoring_percentage_completion", monitoring_percentage_completion)
        if multi_vm_group_id is not None:
            pulumi.set(__self__, "multi_vm_group_id", multi_vm_group_id)
        if multi_vm_group_name is not None:
            pulumi.set(__self__, "multi_vm_group_name", multi_vm_group_name)
        if os_type is not None:
            pulumi.set(__self__, "os_type", os_type)
        if primary_fabric_location is not None:
            pulumi.set(__self__, "primary_fabric_location", primary_fabric_location)
        if protected_disks is not None:
            pulumi.set(__self__, "protected_disks", protected_disks)
        if protected_managed_disks is not None:
            pulumi.set(__self__, "protected_managed_disks", protected_managed_disks)
        if recovery_availability_set is not None:
            pulumi.set(__self__, "recovery_availability_set", recovery_availability_set)
        if recovery_azure_resource_group_id is not None:
            pulumi.set(__self__, "recovery_azure_resource_group_id", recovery_azure_resource_group_id)
        if recovery_azure_vm_name is not None:
            pulumi.set(__self__, "recovery_azure_vm_name", recovery_azure_vm_name)
        if recovery_azure_vm_size is not None:
            pulumi.set(__self__, "recovery_azure_vm_size", recovery_azure_vm_size)
        if recovery_cloud_service is not None:
            pulumi.set(__self__, "recovery_cloud_service", recovery_cloud_service)
        if recovery_fabric_location is not None:
            pulumi.set(__self__, "recovery_fabric_location", recovery_fabric_location)
        if recovery_fabric_object_id is not None:
            pulumi.set(__self__, "recovery_fabric_object_id", recovery_fabric_object_id)
        if rpo_in_seconds is not None:
            pulumi.set(__self__, "rpo_in_seconds", rpo_in_seconds)
        if selected_recovery_azure_network_id is not None:
            pulumi.set(__self__, "selected_recovery_azure_network_id", selected_recovery_azure_network_id)
        if test_failover_recovery_fabric_object_id is not None:
            pulumi.set(__self__, "test_failover_recovery_fabric_object_id", test_failover_recovery_fabric_object_id)
        if vm_nics is not None:
            pulumi.set(__self__, "vm_nics", vm_nics)
        if vm_protection_state is not None:
            pulumi.set(__self__, "vm_protection_state", vm_protection_state)
        if vm_protection_state_description is not None:
            pulumi.set(__self__, "vm_protection_state_description", vm_protection_state_description)
        if vm_synced_config_details is not None:
            pulumi.set(__self__, "vm_synced_config_details", vm_synced_config_details)

    @property
    @pulumi.getter(name="instanceType")
    def instance_type(self) -> str:
        """
        Gets the Instance type.
        """
        return pulumi.get(self, "instance_type")

    @property
    @pulumi.getter(name="agentVersion")
    def agent_version(self) -> Optional[str]:
        """
        The agent version.
        """
        return pulumi.get(self, "agent_version")

    @property
    @pulumi.getter(name="fabricObjectId")
    def fabric_object_id(self) -> Optional[str]:
        """
        The fabric specific object Id of the virtual machine.
        """
        return pulumi.get(self, "fabric_object_id")

    @property
    @pulumi.getter(name="isReplicationAgentUpdateRequired")
    def is_replication_agent_update_required(self) -> Optional[bool]:
        """
        A value indicating whether replication agent update is required.
        """
        return pulumi.get(self, "is_replication_agent_update_required")

    @property
    @pulumi.getter(name="lastHeartbeat")
    def last_heartbeat(self) -> Optional[str]:
        """
        The last heartbeat received from the source server.
        """
        return pulumi.get(self, "last_heartbeat")

    @property
    @pulumi.getter(name="lastRpoCalculatedTime")
    def last_rpo_calculated_time(self) -> Optional[str]:
        """
        The time (in UTC) when the last RPO value was calculated by Protection Service.
        """
        return pulumi.get(self, "last_rpo_calculated_time")

    @property
    @pulumi.getter(name="lifecycleId")
    def lifecycle_id(self) -> Optional[str]:
        """
        An id associated with the PE that survives actions like switch protection, which change the backing PE/CPE objects internally. The lifecycle id is carried forward so that a single Id continues to denote the "same" protected item even though other internal Ids/ARM Ids may change.
        """
        return pulumi.get(self, "lifecycle_id")

    @property
    @pulumi.getter(name="managementId")
    def management_id(self) -> Optional[str]:
        """
        The management Id.
        """
        return pulumi.get(self, "management_id")

    @property
    @pulumi.getter(name="monitoringJobType")
    def monitoring_job_type(self) -> Optional[str]:
        """
        The type of the monitoring job. The progress is contained in MonitoringPercentageCompletion property.
        """
        return pulumi.get(self, "monitoring_job_type")

    @property
    @pulumi.getter(name="monitoringPercentageCompletion")
    def monitoring_percentage_completion(self) -> Optional[int]:
        """
        The percentage of the monitoring job. The type of the monitoring job is defined by MonitoringJobType property.
        """
        return pulumi.get(self, "monitoring_percentage_completion")

    @property
    @pulumi.getter(name="multiVmGroupId")
    def multi_vm_group_id(self) -> Optional[str]:
        """
        The multi vm group Id.
        """
        return pulumi.get(self, "multi_vm_group_id")

    @property
    @pulumi.getter(name="multiVmGroupName")
    def multi_vm_group_name(self) -> Optional[str]:
        """
        The multi vm group name.
        """
        return pulumi.get(self, "multi_vm_group_name")

    @property
    @pulumi.getter(name="osType")
    def os_type(self) -> Optional[str]:
        """
        The type of operating system.
        """
        return pulumi.get(self, "os_type")

    @property
    @pulumi.getter(name="primaryFabricLocation")
    def primary_fabric_location(self) -> Optional[str]:
        """
        Primary fabric location.
        """
        return pulumi.get(self, "primary_fabric_location")

    @property
    @pulumi.getter(name="protectedDisks")
    def protected_disks(self) -> Optional[Sequence['outputs.A2AProtectedDiskDetailsResponse']]:
        """
        The list of protected disks.
        """
        return pulumi.get(self, "protected_disks")

    @property
    @pulumi.getter(name="protectedManagedDisks")
    def protected_managed_disks(self) -> Optional[Sequence['outputs.A2AProtectedManagedDiskDetailsResponse']]:
        """
        The list of protected managed disks.
        """
        return pulumi.get(self, "protected_managed_disks")

    @property
    @pulumi.getter(name="recoveryAvailabilitySet")
    def recovery_availability_set(self) -> Optional[str]:
        """
        The recovery availability set.
        """
        return pulumi.get(self, "recovery_availability_set")

    @property
    @pulumi.getter(name="recoveryAzureResourceGroupId")
    def recovery_azure_resource_group_id(self) -> Optional[str]:
        """
        The recovery resource group.
        """
        return pulumi.get(self, "recovery_azure_resource_group_id")

    @property
    @pulumi.getter(name="recoveryAzureVMName")
    def recovery_azure_vm_name(self) -> Optional[str]:
        """
        The name of recovery virtual machine.
        """
        return pulumi.get(self, "recovery_azure_vm_name")

    @property
    @pulumi.getter(name="recoveryAzureVMSize")
    def recovery_azure_vm_size(self) -> Optional[str]:
        """
        The size of recovery virtual machine.
        """
        return pulumi.get(self, "recovery_azure_vm_size")

    @property
    @pulumi.getter(name="recoveryCloudService")
    def recovery_cloud_service(self) -> Optional[str]:
        """
        The recovery cloud service.
        """
        return pulumi.get(self, "recovery_cloud_service")

    @property
    @pulumi.getter(name="recoveryFabricLocation")
    def recovery_fabric_location(self) -> Optional[str]:
        """
        The recovery fabric location.
        """
        return pulumi.get(self, "recovery_fabric_location")

    @property
    @pulumi.getter(name="recoveryFabricObjectId")
    def recovery_fabric_object_id(self) -> Optional[str]:
        """
        The recovery fabric object Id.
        """
        return pulumi.get(self, "recovery_fabric_object_id")

    @property
    @pulumi.getter(name="rpoInSeconds")
    def rpo_in_seconds(self) -> Optional[int]:
        """
        The last RPO value in seconds.
        """
        return pulumi.get(self, "rpo_in_seconds")

    @property
    @pulumi.getter(name="selectedRecoveryAzureNetworkId")
    def selected_recovery_azure_network_id(self) -> Optional[str]:
        """
        The recovery virtual network.
        """
        return pulumi.get(self, "selected_recovery_azure_network_id")

    @property
    @pulumi.getter(name="testFailoverRecoveryFabricObjectId")
    def test_failover_recovery_fabric_object_id(self) -> Optional[str]:
        """
        The test failover fabric object Id.
        """
        return pulumi.get(self, "test_failover_recovery_fabric_object_id")

    @property
    @pulumi.getter(name="vmNics")
    def vm_nics(self) -> Optional[Sequence['outputs.VMNicDetailsResponse']]:
        """
        The virtual machine nic details.
        """
        return pulumi.get(self, "vm_nics")

    @property
    @pulumi.getter(name="vmProtectionState")
    def vm_protection_state(self) -> Optional[str]:
        """
        The protection state for the vm.
        """
        return pulumi.get(self, "vm_protection_state")

    @property
    @pulumi.getter(name="vmProtectionStateDescription")
    def vm_protection_state_description(self) -> Optional[str]:
        """
        The protection state description for the vm.
        """
        return pulumi.get(self, "vm_protection_state_description")

    @property
    @pulumi.getter(name="vmSyncedConfigDetails")
    def vm_synced_config_details(self) -> Optional['outputs.AzureToAzureVmSyncedConfigDetailsResponse']:
        """
        The synced configuration details.
        """
        return pulumi.get(self, "vm_synced_config_details")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
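Every `__init__` above follows the same guard pattern: a field is recorded only when its argument is not None, so optional fields the API omitted never appear on the output object. A minimal standalone sketch of that pattern (illustrative only; plain `setattr` stands in for `pulumi.set`, and the class name is hypothetical):

```python
class OptionalFieldsSketch:
    """Illustrative only: mimics the guarded pulumi.set(...) calls above."""

    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            if value is not None:          # skip unset optional fields
                setattr(self, key, value)  # stand-in for pulumi.set

d = OptionalFieldsSketch(disk_name="data0", resync_required=None)
print(hasattr(d, "disk_name"))        # True
print(hasattr(d, "resync_required"))  # False
```

Because unset fields are absent rather than stored as None, readers can distinguish "not returned by the service" from an explicit null.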


@pulumi.output_type
class AzureFabricSpecificDetailsResponse(dict):
    """
    Azure Fabric Specific Details.
    """
    def __init__(__self__, *,
                 instance_type: str,
                 container_ids: Optional[Sequence[str]] = None,
                 location: Optional[str] = None):
        """
        Azure Fabric Specific Details.
        :param str instance_type: Gets the class type. Overridden in derived classes.
        :param Sequence[str] container_ids: The container Ids for the Azure fabric.
        :param str location: The Location for the Azure fabric.
        """
        pulumi.set(__self__, "instance_type", 'Azure')
        if container_ids is not None:
            pulumi.set(__self__, "container_ids", container_ids)
        if location is not None:
            pulumi.set(__self__, "location", location)

    @property
    @pulumi.getter(name="instanceType")
    def instance_type(self) -> str:
        """
        Gets the class type. Overridden in derived classes.
        """
        return pulumi.get(self, "instance_type")

    @property
    @pulumi.getter(name="containerIds")
    def container_ids(self) -> Optional[Sequence[str]]:
        """
        The container Ids for the Azure fabric.
        """
        return pulumi.get(self, "container_ids")

    @property
    @pulumi.getter
    def location(self) -> Optional[str]:
        """
        The Location for the Azure fabric.
        """
        return pulumi.get(self, "location")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class AzureToAzureNetworkMappingSettingsResponse(dict):
    """
    A2A Network Mapping fabric specific settings.
    """
    def __init__(__self__, *,
                 instance_type: str,
                 primary_fabric_location: Optional[str] = None,
                 recovery_fabric_location: Optional[str] = None):
        """
        A2A Network Mapping fabric specific settings.
        :param str instance_type: Gets the Instance type.
        :param str primary_fabric_location: The primary fabric location.
        :param str recovery_fabric_location: The recovery fabric location.
        """
        pulumi.set(__self__, "instance_type", 'AzureToAzure')
        if primary_fabric_location is not None:
            pulumi.set(__self__, "primary_fabric_location", primary_fabric_location)
        if recovery_fabric_location is not None:
            pulumi.set(__self__, "recovery_fabric_location", recovery_fabric_location)

    @property
    @pulumi.getter(name="instanceType")
    def instance_type(self) -> str:
        """
        Gets the Instance type.
        """
        return pulumi.get(self, "instance_type")

    @property
    @pulumi.getter(name="primaryFabricLocation")
    def primary_fabric_location(self) -> Optional[str]:
        """
        The primary fabric location.
        """
        return pulumi.get(self, "primary_fabric_location")

    @property
    @pulumi.getter(name="recoveryFabricLocation")
    def recovery_fabric_location(self) -> Optional[str]:
        """
        The recovery fabric location.
        """
        return pulumi.get(self, "recovery_fabric_location")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class AzureToAzureVmSyncedConfigDetailsResponse(dict):
    """
    Azure to Azure VM synced configuration details.
    """
    def __init__(__self__, *,
                 input_endpoints: Optional[Sequence['outputs.InputEndpointResponse']] = None,
                 role_assignments: Optional[Sequence['outputs.RoleAssignmentResponse']] = None,
                 tags: Optional[Mapping[str, str]] = None):
        """
        Azure to Azure VM synced configuration details.
        :param Sequence['InputEndpointResponseArgs'] input_endpoints: The Azure VM input endpoints.
        :param Sequence['RoleAssignmentResponseArgs'] role_assignments: The Azure role assignments.
        :param Mapping[str, str] tags: The Azure VM tags.
        """
        if input_endpoints is not None:
            pulumi.set(__self__, "input_endpoints", input_endpoints)
        if role_assignments is not None:
            pulumi.set(__self__, "role_assignments", role_assignments)
        if tags is not None:
            pulumi.set(__self__, "tags", tags)

    @property
    @pulumi.getter(name="inputEndpoints")
    def input_endpoints(self) -> Optional[Sequence['outputs.InputEndpointResponse']]:
        """
        The Azure VM input endpoints.
        """
        return pulumi.get(self, "input_endpoints")

    @property
    @pulumi.getter(name="roleAssignments")
    def role_assignments(self) -> Optional[Sequence['outputs.RoleAssignmentResponse']]:
        """
        The Azure role assignments.
        """
        return pulumi.get(self, "role_assignments")

    @property
    @pulumi.getter
    def tags(self) -> Optional[Mapping[str, str]]:
        """
        The Azure VM tags.
        """
        return pulumi.get(self, "tags")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
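The `@pulumi.getter(name=...)` decorators above attach the wire-format camelCase name to each snake_case Python property. A hedged sketch of the snake-to-camel derivation involved (illustrative; note a naive conversion cannot recover acronym casing such as the `MB` in `dataPendingAtSourceAgentInMB`, which is one reason the generator emits explicit names):

```python
def snake_to_camel(name: str) -> str:
    # First segment stays lowercase; each later segment is capitalized.
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

print(snake_to_camel("input_endpoints"))   # inputEndpoints
print(snake_to_camel("role_assignments"))  # roleAssignments
```

For names with acronyms, `snake_to_camel("data_pending_at_source_agent_in_mb")` yields `dataPendingAtSourceAgentInMb`, not the `...InMB` the API expects, so an explicit per-property name is the safer choice.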


@pulumi.output_type
class AzureVmDiskDetailsResponse(dict):
    """
    Disk details for E2A provider.
    """
    def __init__(__self__, *,
                 lun_id: Optional[str] = None,
                 max_size_mb: Optional[str] = None,
                 target_disk_location: Optional[str] = None,
                 target_disk_name: Optional[str] = None,
                 vhd_id: Optional[str] = None,
                 vhd_name: Optional[str] = None,
                 vhd_type: Optional[str] = None):
        """
        Disk details for E2A provider.
        :param str lun_id: Ordinal\\LunId of the disk for the Azure VM.
        :param str max_size_mb: Max size in MB.
        :param str target_disk_location: Blob uri of the Azure disk.
        :param str target_disk_name: The target Azure disk name.
        :param str vhd_id: The VHD id.
        :param str vhd_name: VHD name.
        :param str vhd_type: VHD type.
        """
        if lun_id is not None:
            pulumi.set(__self__, "lun_id", lun_id)
        if max_size_mb is not None:
            pulumi.set(__self__, "max_size_mb", max_size_mb)
        if target_disk_location is not None:
            pulumi.set(__self__, "target_disk_location", target_disk_location)
        if target_disk_name is not None:
            pulumi.set(__self__, "target_disk_name", target_disk_name)
        if vhd_id is not None:
            pulumi.set(__self__, "vhd_id", vhd_id)
        if vhd_name is not None:
            pulumi.set(__self__, "vhd_name", vhd_name)
        if vhd_type is not None:
            pulumi.set(__self__, "vhd_type", vhd_type)

    @property
    @pulumi.getter(name="lunId")
    def lun_id(self) -> Optional[str]:
        """
        Ordinal\\LunId of the disk for the Azure VM.
        """
        return pulumi.get(self, "lun_id")

    @property
    @pulumi.getter(name="maxSizeMB")
    def max_size_mb(self) -> Optional[str]:
        """
        Max size in MB.
        """
        return pulumi.get(self, "max_size_mb")

    @property
    @pulumi.getter(name="targetDiskLocation")
    def target_disk_location(self) -> Optional[str]:
        """
        Blob uri of the Azure disk.
        """
        return pulumi.get(self, "target_disk_location")

    @property
    @pulumi.getter(name="targetDiskName")
    def target_disk_name(self) -> Optional[str]:
        """
        The target Azure disk name.
        """
        return pulumi.get(self, "target_disk_name")

    @property
    @pulumi.getter(name="vhdId")
    def vhd_id(self) -> Optional[str]:
        """
        The VHD id.
        """
        return pulumi.get(self, "vhd_id")

    @property
    @pulumi.getter(name="vhdName")
    def vhd_name(self) -> Optional[str]:
        """
        VHD name.
        """
        return pulumi.get(self, "vhd_name")

    @property
    @pulumi.getter(name="vhdType")
    def vhd_type(self) -> Optional[str]:
        """
        VHD type.
        """
        return pulumi.get(self, "vhd_type")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class CurrentScenarioDetailsResponse(dict):
"""
Current scenario details of the protected entity.
"""
def __init__(__self__, *,
job_id: Optional[str] = None,
scenario_name: Optional[str] = None,
start_time: Optional[str] = None):
"""
Current scenario details of the protected entity.
:param str job_id: ARM Id of the job being executed.
:param str scenario_name: Scenario name.
:param str start_time: Start time of the workflow.
"""
if job_id is not None:
pulumi.set(__self__, "job_id", job_id)
if scenario_name is not None:
pulumi.set(__self__, "scenario_name", scenario_name)
if start_time is not None:
pulumi.set(__self__, "start_time", start_time)
@property
@pulumi.getter(name="jobId")
def job_id(self) -> Optional[str]:
"""
ARM Id of the job being executed.
"""
return pulumi.get(self, "job_id")
@property
@pulumi.getter(name="scenarioName")
def scenario_name(self) -> Optional[str]:
"""
Scenario name.
"""
return pulumi.get(self, "scenario_name")
@property
@pulumi.getter(name="startTime")
def start_time(self) -> Optional[str]:
"""
Start time of the workflow.
"""
return pulumi.get(self, "start_time")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
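# Illustrative usage sketch (assumption, not part of the generated SDK):
# @pulumi.output_type classes are dict-backed, and pulumi.set/pulumi.get map
# the snake_case properties onto the stored keys, so values round-trip, e.g.:
#
#     details = CurrentScenarioDetailsResponse(
#         job_id="/Subscriptions/.../replicationJobs/job-1",  # hypothetical Id
#         scenario_name="TestFailover",
#         start_time="2021-01-01T00:00:00Z")
#     details.scenario_name   # "TestFailover"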
@pulumi.output_type
class DataStoreResponse(dict):
"""
The data store details of the MT.
"""
def __init__(__self__, *,
capacity: Optional[str] = None,
free_space: Optional[str] = None,
symbolic_name: Optional[str] = None,
type: Optional[str] = None,
uuid: Optional[str] = None):
"""
The data store details of the MT.
:param str capacity: The capacity of the data store in GB.
:param str free_space: The free space of the data store in GB.
:param str symbolic_name: The symbolic name of the data store.
:param str type: The type of the data store.
:param str uuid: The UUID of the data store.
"""
if capacity is not None:
pulumi.set(__self__, "capacity", capacity)
if free_space is not None:
pulumi.set(__self__, "free_space", free_space)
if symbolic_name is not None:
pulumi.set(__self__, "symbolic_name", symbolic_name)
if type is not None:
pulumi.set(__self__, "type", type)
if uuid is not None:
pulumi.set(__self__, "uuid", uuid)
@property
@pulumi.getter
def capacity(self) -> Optional[str]:
"""
The capacity of the data store in GB.
"""
return pulumi.get(self, "capacity")
@property
@pulumi.getter(name="freeSpace")
def free_space(self) -> Optional[str]:
"""
The free space of the data store in GB.
"""
return pulumi.get(self, "free_space")
@property
@pulumi.getter(name="symbolicName")
def symbolic_name(self) -> Optional[str]:
"""
The symbolic name of the data store.
"""
return pulumi.get(self, "symbolic_name")
@property
@pulumi.getter
def type(self) -> Optional[str]:
"""
The type of the data store.
"""
return pulumi.get(self, "type")
@property
@pulumi.getter
def uuid(self) -> Optional[str]:
"""
The UUID of the data store.
"""
return pulumi.get(self, "uuid")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
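# Illustrative sketch (assumption): _translate_property converts a camelCase
# wire key to its snake_case attribute via the shared CAMEL_TO_SNAKE_CASE_TABLE,
# falling back to the key itself when no mapping exists, e.g.:
#
#     ds = DataStoreResponse(capacity="1024", free_space="512")
#     ds._translate_property("freeSpace")    # "free_space", if in the table
#     ds._translate_property("unknownKey")   # "unknownKey" (fallback)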
@pulumi.output_type
class DiskDetailsResponse(dict):
"""
On-prem disk details data.
"""
def __init__(__self__, *,
max_size_mb: Optional[int] = None,
vhd_id: Optional[str] = None,
vhd_name: Optional[str] = None,
vhd_type: Optional[str] = None):
"""
On-prem disk details data.
:param int max_size_mb: The hard disk max size in MB.
:param str vhd_id: The VHD Id.
:param str vhd_name: The VHD name.
:param str vhd_type: The type of the volume.
"""
if max_size_mb is not None:
pulumi.set(__self__, "max_size_mb", max_size_mb)
if vhd_id is not None:
pulumi.set(__self__, "vhd_id", vhd_id)
if vhd_name is not None:
pulumi.set(__self__, "vhd_name", vhd_name)
if vhd_type is not None:
pulumi.set(__self__, "vhd_type", vhd_type)
@property
@pulumi.getter(name="maxSizeMB")
def max_size_mb(self) -> Optional[int]:
"""
The hard disk max size in MB.
"""
return pulumi.get(self, "max_size_mb")
@property
@pulumi.getter(name="vhdId")
def vhd_id(self) -> Optional[str]:
"""
The VHD Id.
"""
return pulumi.get(self, "vhd_id")
@property
@pulumi.getter(name="vhdName")
def vhd_name(self) -> Optional[str]:
"""
The VHD name.
"""
return pulumi.get(self, "vhd_name")
@property
@pulumi.getter(name="vhdType")
def vhd_type(self) -> Optional[str]:
"""
The type of the volume.
"""
return pulumi.get(self, "vhd_type")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class EncryptionDetailsResponse(dict):
"""
Encryption details for the fabric.
"""
def __init__(__self__, *,
kek_cert_expiry_date: Optional[str] = None,
kek_cert_thumbprint: Optional[str] = None,
kek_state: Optional[str] = None):
"""
Encryption details for the fabric.
:param str kek_cert_expiry_date: The key encryption key certificate expiry date.
:param str kek_cert_thumbprint: The key encryption key certificate thumbprint.
:param str kek_state: The key encryption key state for the Vmm.
"""
if kek_cert_expiry_date is not None:
pulumi.set(__self__, "kek_cert_expiry_date", kek_cert_expiry_date)
if kek_cert_thumbprint is not None:
pulumi.set(__self__, "kek_cert_thumbprint", kek_cert_thumbprint)
if kek_state is not None:
pulumi.set(__self__, "kek_state", kek_state)
@property
@pulumi.getter(name="kekCertExpiryDate")
def kek_cert_expiry_date(self) -> Optional[str]:
"""
The key encryption key certificate expiry date.
"""
return pulumi.get(self, "kek_cert_expiry_date")
@property
@pulumi.getter(name="kekCertThumbprint")
def kek_cert_thumbprint(self) -> Optional[str]:
"""
The key encryption key certificate thumbprint.
"""
return pulumi.get(self, "kek_cert_thumbprint")
@property
@pulumi.getter(name="kekState")
def kek_state(self) -> Optional[str]:
"""
The key encryption key state for the Vmm.
"""
return pulumi.get(self, "kek_state")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class FabricPropertiesResponse(dict):
"""
Fabric properties.
"""
def __init__(__self__, *,
bcdr_state: Optional[str] = None,
custom_details: Optional[Any] = None,
encryption_details: Optional['outputs.EncryptionDetailsResponse'] = None,
friendly_name: Optional[str] = None,
health: Optional[str] = None,
health_error_details: Optional[Sequence['outputs.HealthErrorResponse']] = None,
internal_identifier: Optional[str] = None,
rollover_encryption_details: Optional['outputs.EncryptionDetailsResponse'] = None):
"""
Fabric properties.
:param str bcdr_state: BCDR state of the fabric.
:param Union['AzureFabricSpecificDetailsResponseArgs', 'HyperVSiteDetailsResponseArgs', 'VMwareDetailsResponseArgs', 'VMwareV2FabricSpecificDetailsResponseArgs', 'VmmDetailsResponseArgs'] custom_details: Fabric specific settings.
:param 'EncryptionDetailsResponseArgs' encryption_details: Encryption details for the fabric.
:param str friendly_name: Friendly name of the fabric.
:param str health: Health of fabric.
:param Sequence['HealthErrorResponseArgs'] health_error_details: Fabric health error details.
:param str internal_identifier: The DRA registration Id.
:param 'EncryptionDetailsResponseArgs' rollover_encryption_details: Rollover encryption details for the fabric.
"""
if bcdr_state is not None:
pulumi.set(__self__, "bcdr_state", bcdr_state)
if custom_details is not None:
pulumi.set(__self__, "custom_details", custom_details)
if encryption_details is not None:
pulumi.set(__self__, "encryption_details", encryption_details)
if friendly_name is not None:
pulumi.set(__self__, "friendly_name", friendly_name)
if health is not None:
pulumi.set(__self__, "health", health)
if health_error_details is not None:
pulumi.set(__self__, "health_error_details", health_error_details)
if internal_identifier is not None:
pulumi.set(__self__, "internal_identifier", internal_identifier)
if rollover_encryption_details is not None:
pulumi.set(__self__, "rollover_encryption_details", rollover_encryption_details)
@property
@pulumi.getter(name="bcdrState")
def bcdr_state(self) -> Optional[str]:
"""
BCDR state of the fabric.
"""
return pulumi.get(self, "bcdr_state")
@property
@pulumi.getter(name="customDetails")
def custom_details(self) -> Optional[Any]:
"""
Fabric specific settings.
"""
return pulumi.get(self, "custom_details")
@property
@pulumi.getter(name="encryptionDetails")
def encryption_details(self) -> Optional['outputs.EncryptionDetailsResponse']:
"""
Encryption details for the fabric.
"""
return pulumi.get(self, "encryption_details")
@property
@pulumi.getter(name="friendlyName")
def friendly_name(self) -> Optional[str]:
"""
Friendly name of the fabric.
"""
return pulumi.get(self, "friendly_name")
@property
@pulumi.getter
def health(self) -> Optional[str]:
"""
Health of fabric.
"""
return pulumi.get(self, "health")
@property
@pulumi.getter(name="healthErrorDetails")
def health_error_details(self) -> Optional[Sequence['outputs.HealthErrorResponse']]:
"""
Fabric health error details.
"""
return pulumi.get(self, "health_error_details")
@property
@pulumi.getter(name="internalIdentifier")
def internal_identifier(self) -> Optional[str]:
"""
The DRA registration Id.
"""
return pulumi.get(self, "internal_identifier")
@property
@pulumi.getter(name="rolloverEncryptionDetails")
def rollover_encryption_details(self) -> Optional['outputs.EncryptionDetailsResponse']:
"""
Rollover encryption details for the fabric.
"""
return pulumi.get(self, "rollover_encryption_details")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class HealthErrorResponse(dict):
"""
Health error.
"""
def __init__(__self__, *,
child_errors: Optional[Sequence['outputs.HealthErrorResponse']] = None,
creation_time_utc: Optional[str] = None,
entity_id: Optional[str] = None,
error_code: Optional[str] = None,
error_level: Optional[str] = None,
error_message: Optional[str] = None,
error_source: Optional[str] = None,
error_type: Optional[str] = None,
possible_causes: Optional[str] = None,
recommended_action: Optional[str] = None,
recovery_provider_error_message: Optional[str] = None):
"""
Health error.
:param Sequence['HealthErrorResponseArgs'] child_errors: The child health errors.
:param str creation_time_utc: Error creation time (UTC).
:param str entity_id: ID of the entity.
:param str error_code: Error code.
:param str error_level: Level of error.
:param str error_message: Error message.
:param str error_source: Source of error.
:param str error_type: Type of error.
:param str possible_causes: Possible causes of error.
:param str recommended_action: Recommended action to resolve error.
:param str recovery_provider_error_message: DRA error message.
"""
if child_errors is not None:
pulumi.set(__self__, "child_errors", child_errors)
if creation_time_utc is not None:
pulumi.set(__self__, "creation_time_utc", creation_time_utc)
if entity_id is not None:
pulumi.set(__self__, "entity_id", entity_id)
if error_code is not None:
pulumi.set(__self__, "error_code", error_code)
if error_level is not None:
pulumi.set(__self__, "error_level", error_level)
if error_message is not None:
pulumi.set(__self__, "error_message", error_message)
if error_source is not None:
pulumi.set(__self__, "error_source", error_source)
if error_type is not None:
pulumi.set(__self__, "error_type", error_type)
if possible_causes is not None:
pulumi.set(__self__, "possible_causes", possible_causes)
if recommended_action is not None:
pulumi.set(__self__, "recommended_action", recommended_action)
if recovery_provider_error_message is not None:
pulumi.set(__self__, "recovery_provider_error_message", recovery_provider_error_message)
@property
@pulumi.getter(name="childErrors")
def child_errors(self) -> Optional[Sequence['outputs.HealthErrorResponse']]:
"""
The child health errors.
"""
return pulumi.get(self, "child_errors")
@property
@pulumi.getter(name="creationTimeUtc")
def creation_time_utc(self) -> Optional[str]:
"""
Error creation time (UTC).
"""
return pulumi.get(self, "creation_time_utc")
@property
@pulumi.getter(name="entityId")
def entity_id(self) -> Optional[str]:
"""
ID of the entity.
"""
return pulumi.get(self, "entity_id")
@property
@pulumi.getter(name="errorCode")
def error_code(self) -> Optional[str]:
"""
Error code.
"""
return pulumi.get(self, "error_code")
@property
@pulumi.getter(name="errorLevel")
def error_level(self) -> Optional[str]:
"""
Level of error.
"""
return pulumi.get(self, "error_level")
@property
@pulumi.getter(name="errorMessage")
def error_message(self) -> Optional[str]:
"""
Error message.
"""
return pulumi.get(self, "error_message")
@property
@pulumi.getter(name="errorSource")
def error_source(self) -> Optional[str]:
"""
Source of error.
"""
return pulumi.get(self, "error_source")
@property
@pulumi.getter(name="errorType")
def error_type(self) -> Optional[str]:
"""
Type of error.
"""
return pulumi.get(self, "error_type")
@property
@pulumi.getter(name="possibleCauses")
def possible_causes(self) -> Optional[str]:
"""
Possible causes of error.
"""
return pulumi.get(self, "possible_causes")
@property
@pulumi.getter(name="recommendedAction")
def recommended_action(self) -> Optional[str]:
"""
Recommended action to resolve error.
"""
return pulumi.get(self, "recommended_action")
@property
@pulumi.getter(name="recoveryProviderErrorMessage")
def recovery_provider_error_message(self) -> Optional[str]:
"""
DRA error message.
"""
return pulumi.get(self, "recovery_provider_error_message")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
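# Illustrative sketch (not generated code): child_errors nests further
# HealthErrorResponse objects, so a recursive walk flattens the error tree:
#
#     def walk_errors(err):
#         yield err
#         for child in (err.child_errors or []):
#             yield from walk_errors(child)
#
#     messages = [e.error_message for e in walk_errors(top_error)]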
@pulumi.output_type
class HyperVReplicaAzurePolicyDetailsResponse(dict):
"""
Hyper-V Replica Azure specific protection profile details.
"""
def __init__(__self__, *,
instance_type: str,
active_storage_account_id: Optional[str] = None,
application_consistent_snapshot_frequency_in_hours: Optional[int] = None,
encryption: Optional[str] = None,
online_replication_start_time: Optional[str] = None,
recovery_point_history_duration_in_hours: Optional[int] = None,
replication_interval: Optional[int] = None):
"""
Hyper-V Replica Azure specific protection profile details.
:param str instance_type: Gets the class type. Overridden in derived classes.
:param str active_storage_account_id: The active storage account Id.
:param int application_consistent_snapshot_frequency_in_hours: The interval (in hours) at which Hyper-V Replica should create an application consistent snapshot within the VM.
:param str encryption: A value indicating whether encryption is enabled for virtual machines in this cloud.
:param str online_replication_start_time: The scheduled start time for the initial replication. If this parameter is Null, the initial replication starts immediately.
:param int recovery_point_history_duration_in_hours: The duration (in hours) for which the recovery point history needs to be maintained.
:param int replication_interval: The replication interval.
"""
pulumi.set(__self__, "instance_type", 'HyperVReplicaAzure')
if active_storage_account_id is not None:
pulumi.set(__self__, "active_storage_account_id", active_storage_account_id)
if application_consistent_snapshot_frequency_in_hours is not None:
pulumi.set(__self__, "application_consistent_snapshot_frequency_in_hours", application_consistent_snapshot_frequency_in_hours)
if encryption is not None:
pulumi.set(__self__, "encryption", encryption)
if online_replication_start_time is not None:
pulumi.set(__self__, "online_replication_start_time", online_replication_start_time)
if recovery_point_history_duration_in_hours is not None:
pulumi.set(__self__, "recovery_point_history_duration_in_hours", recovery_point_history_duration_in_hours)
if replication_interval is not None:
pulumi.set(__self__, "replication_interval", replication_interval)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the class type. Overridden in derived classes.
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="activeStorageAccountId")
def active_storage_account_id(self) -> Optional[str]:
"""
The active storage account Id.
"""
return pulumi.get(self, "active_storage_account_id")
@property
@pulumi.getter(name="applicationConsistentSnapshotFrequencyInHours")
def application_consistent_snapshot_frequency_in_hours(self) -> Optional[int]:
"""
The interval (in hours) at which Hyper-V Replica should create an application consistent snapshot within the VM.
"""
return pulumi.get(self, "application_consistent_snapshot_frequency_in_hours")
@property
@pulumi.getter
def encryption(self) -> Optional[str]:
"""
A value indicating whether encryption is enabled for virtual machines in this cloud.
"""
return pulumi.get(self, "encryption")
@property
@pulumi.getter(name="onlineReplicationStartTime")
def online_replication_start_time(self) -> Optional[str]:
"""
The scheduled start time for the initial replication. If this parameter is Null, the initial replication starts immediately.
"""
return pulumi.get(self, "online_replication_start_time")
@property
@pulumi.getter(name="recoveryPointHistoryDurationInHours")
def recovery_point_history_duration_in_hours(self) -> Optional[int]:
"""
The duration (in hours) for which the recovery point history needs to be maintained.
"""
return pulumi.get(self, "recovery_point_history_duration_in_hours")
@property
@pulumi.getter(name="replicationInterval")
def replication_interval(self) -> Optional[int]:
"""
The replication interval.
"""
return pulumi.get(self, "replication_interval")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class HyperVReplicaAzureReplicationDetailsResponse(dict):
"""
Hyper V Replica Azure provider specific settings.
"""
def __init__(__self__, *,
instance_type: str,
azure_vm_disk_details: Optional[Sequence['outputs.AzureVmDiskDetailsResponse']] = None,
enable_rdp_on_target_option: Optional[str] = None,
encryption: Optional[str] = None,
initial_replication_details: Optional['outputs.InitialReplicationDetailsResponse'] = None,
last_replicated_time: Optional[str] = None,
license_type: Optional[str] = None,
o_s_details: Optional['outputs.OSDetailsResponse'] = None,
recovery_availability_set_id: Optional[str] = None,
recovery_azure_log_storage_account_id: Optional[str] = None,
recovery_azure_resource_group_id: Optional[str] = None,
recovery_azure_storage_account: Optional[str] = None,
recovery_azure_vm_name: Optional[str] = None,
recovery_azure_vm_size: Optional[str] = None,
selected_recovery_azure_network_id: Optional[str] = None,
source_vm_cpu_count: Optional[int] = None,
source_vm_ram_size_in_mb: Optional[int] = None,
use_managed_disks: Optional[str] = None,
vm_id: Optional[str] = None,
vm_nics: Optional[Sequence['outputs.VMNicDetailsResponse']] = None,
vm_protection_state: Optional[str] = None,
vm_protection_state_description: Optional[str] = None):
"""
Hyper V Replica Azure provider specific settings.
:param str instance_type: Gets the Instance type.
:param Sequence['AzureVmDiskDetailsResponseArgs'] azure_vm_disk_details: Azure VM Disk details.
:param str enable_rdp_on_target_option: The selected option to enable RDP/SSH on the target VM after failover. String value of the {SrsDataContract.EnableRDPOnTargetOption} enum.
:param str encryption: The encryption info.
:param 'InitialReplicationDetailsResponseArgs' initial_replication_details: Initial replication details.
:param str last_replicated_time: The last replication time.
:param str license_type: License Type of the VM to be used.
:param 'OSDetailsResponseArgs' o_s_details: The operating system info.
:param str recovery_availability_set_id: The recovery availability set Id.
:param str recovery_azure_log_storage_account_id: The ARM id of the log storage account used for replication. This will be set to null if no log storage account was provided during enable protection.
:param str recovery_azure_resource_group_id: The target resource group Id.
:param str recovery_azure_storage_account: The recovery Azure storage account.
:param str recovery_azure_vm_name: The given name of the recovery Azure VM.
:param str recovery_azure_vm_size: The Recovery Azure VM size.
:param str selected_recovery_azure_network_id: The selected recovery Azure network Id.
:param int source_vm_cpu_count: The CPU count of the VM on the primary side.
:param int source_vm_ram_size_in_mb: The RAM size of the VM on the primary side.
:param str use_managed_disks: A value indicating whether managed disks should be used during failover.
:param str vm_id: The virtual machine Id.
:param Sequence['VMNicDetailsResponseArgs'] vm_nics: The protected entity (PE) network details.
:param str vm_protection_state: The protection state for the VM.
:param str vm_protection_state_description: The protection state description for the VM.
"""
pulumi.set(__self__, "instance_type", 'HyperVReplicaAzure')
if azure_vm_disk_details is not None:
pulumi.set(__self__, "azure_vm_disk_details", azure_vm_disk_details)
if enable_rdp_on_target_option is not None:
pulumi.set(__self__, "enable_rdp_on_target_option", enable_rdp_on_target_option)
if encryption is not None:
pulumi.set(__self__, "encryption", encryption)
if initial_replication_details is not None:
pulumi.set(__self__, "initial_replication_details", initial_replication_details)
if last_replicated_time is not None:
pulumi.set(__self__, "last_replicated_time", last_replicated_time)
if license_type is not None:
pulumi.set(__self__, "license_type", license_type)
if o_s_details is not None:
pulumi.set(__self__, "o_s_details", o_s_details)
if recovery_availability_set_id is not None:
pulumi.set(__self__, "recovery_availability_set_id", recovery_availability_set_id)
if recovery_azure_log_storage_account_id is not None:
pulumi.set(__self__, "recovery_azure_log_storage_account_id", recovery_azure_log_storage_account_id)
if recovery_azure_resource_group_id is not None:
pulumi.set(__self__, "recovery_azure_resource_group_id", recovery_azure_resource_group_id)
if recovery_azure_storage_account is not None:
pulumi.set(__self__, "recovery_azure_storage_account", recovery_azure_storage_account)
if recovery_azure_vm_name is not None:
pulumi.set(__self__, "recovery_azure_vm_name", recovery_azure_vm_name)
if recovery_azure_vm_size is not None:
pulumi.set(__self__, "recovery_azure_vm_size", recovery_azure_vm_size)
if selected_recovery_azure_network_id is not None:
pulumi.set(__self__, "selected_recovery_azure_network_id", selected_recovery_azure_network_id)
if source_vm_cpu_count is not None:
pulumi.set(__self__, "source_vm_cpu_count", source_vm_cpu_count)
if source_vm_ram_size_in_mb is not None:
pulumi.set(__self__, "source_vm_ram_size_in_mb", source_vm_ram_size_in_mb)
if use_managed_disks is not None:
pulumi.set(__self__, "use_managed_disks", use_managed_disks)
if vm_id is not None:
pulumi.set(__self__, "vm_id", vm_id)
if vm_nics is not None:
pulumi.set(__self__, "vm_nics", vm_nics)
if vm_protection_state is not None:
pulumi.set(__self__, "vm_protection_state", vm_protection_state)
if vm_protection_state_description is not None:
pulumi.set(__self__, "vm_protection_state_description", vm_protection_state_description)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the Instance type.
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="azureVMDiskDetails")
def azure_vm_disk_details(self) -> Optional[Sequence['outputs.AzureVmDiskDetailsResponse']]:
"""
Azure VM Disk details.
"""
return pulumi.get(self, "azure_vm_disk_details")
@property
@pulumi.getter(name="enableRDPOnTargetOption")
def enable_rdp_on_target_option(self) -> Optional[str]:
"""
The selected option to enable RDP/SSH on the target VM after failover. String value of the {SrsDataContract.EnableRDPOnTargetOption} enum.
"""
return pulumi.get(self, "enable_rdp_on_target_option")
@property
@pulumi.getter
def encryption(self) -> Optional[str]:
"""
The encryption info.
"""
return pulumi.get(self, "encryption")
@property
@pulumi.getter(name="initialReplicationDetails")
def initial_replication_details(self) -> Optional['outputs.InitialReplicationDetailsResponse']:
"""
Initial replication details.
"""
return pulumi.get(self, "initial_replication_details")
@property
@pulumi.getter(name="lastReplicatedTime")
def last_replicated_time(self) -> Optional[str]:
"""
The last replication time.
"""
return pulumi.get(self, "last_replicated_time")
@property
@pulumi.getter(name="licenseType")
def license_type(self) -> Optional[str]:
"""
License Type of the VM to be used.
"""
return pulumi.get(self, "license_type")
@property
@pulumi.getter(name="oSDetails")
def o_s_details(self) -> Optional['outputs.OSDetailsResponse']:
"""
The operating system info.
"""
return pulumi.get(self, "o_s_details")
@property
@pulumi.getter(name="recoveryAvailabilitySetId")
def recovery_availability_set_id(self) -> Optional[str]:
"""
The recovery availability set Id.
"""
return pulumi.get(self, "recovery_availability_set_id")
@property
@pulumi.getter(name="recoveryAzureLogStorageAccountId")
def recovery_azure_log_storage_account_id(self) -> Optional[str]:
"""
The ARM id of the log storage account used for replication. This will be set to null if no log storage account was provided during enable protection.
"""
return pulumi.get(self, "recovery_azure_log_storage_account_id")
@property
@pulumi.getter(name="recoveryAzureResourceGroupId")
def recovery_azure_resource_group_id(self) -> Optional[str]:
"""
The target resource group Id.
"""
return pulumi.get(self, "recovery_azure_resource_group_id")
@property
@pulumi.getter(name="recoveryAzureStorageAccount")
def recovery_azure_storage_account(self) -> Optional[str]:
"""
The recovery Azure storage account.
"""
return pulumi.get(self, "recovery_azure_storage_account")
@property
@pulumi.getter(name="recoveryAzureVMName")
def recovery_azure_vm_name(self) -> Optional[str]:
"""
The given name of the recovery Azure VM.
"""
return pulumi.get(self, "recovery_azure_vm_name")
@property
@pulumi.getter(name="recoveryAzureVMSize")
def recovery_azure_vm_size(self) -> Optional[str]:
"""
The Recovery Azure VM size.
"""
return pulumi.get(self, "recovery_azure_vm_size")
@property
@pulumi.getter(name="selectedRecoveryAzureNetworkId")
def selected_recovery_azure_network_id(self) -> Optional[str]:
"""
The selected recovery Azure network Id.
"""
return pulumi.get(self, "selected_recovery_azure_network_id")
@property
@pulumi.getter(name="sourceVmCPUCount")
def source_vm_cpu_count(self) -> Optional[int]:
"""
The CPU count of the VM on the primary side.
"""
return pulumi.get(self, "source_vm_cpu_count")
@property
@pulumi.getter(name="sourceVmRAMSizeInMB")
def source_vm_ram_size_in_mb(self) -> Optional[int]:
"""
The RAM size of the VM on the primary side.
"""
return pulumi.get(self, "source_vm_ram_size_in_mb")
@property
@pulumi.getter(name="useManagedDisks")
def use_managed_disks(self) -> Optional[str]:
"""
A value indicating whether managed disks should be used during failover.
"""
return pulumi.get(self, "use_managed_disks")
@property
@pulumi.getter(name="vmId")
def vm_id(self) -> Optional[str]:
"""
The virtual machine Id.
"""
return pulumi.get(self, "vm_id")
@property
@pulumi.getter(name="vmNics")
def vm_nics(self) -> Optional[Sequence['outputs.VMNicDetailsResponse']]:
"""
The protected entity (PE) network details.
"""
return pulumi.get(self, "vm_nics")
@property
@pulumi.getter(name="vmProtectionState")
def vm_protection_state(self) -> Optional[str]:
"""
The protection state for the VM.
"""
return pulumi.get(self, "vm_protection_state")
@property
@pulumi.getter(name="vmProtectionStateDescription")
def vm_protection_state_description(self) -> Optional[str]:
"""
The protection state description for the VM.
"""
return pulumi.get(self, "vm_protection_state_description")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class HyperVReplicaBasePolicyDetailsResponse(dict):
"""
Base class for HyperVReplica policy details.
"""
def __init__(__self__, *,
instance_type: str,
allowed_authentication_type: Optional[int] = None,
application_consistent_snapshot_frequency_in_hours: Optional[int] = None,
compression: Optional[str] = None,
initial_replication_method: Optional[str] = None,
offline_replication_export_path: Optional[str] = None,
offline_replication_import_path: Optional[str] = None,
online_replication_start_time: Optional[str] = None,
recovery_points: Optional[int] = None,
replica_deletion_option: Optional[str] = None,
replication_port: Optional[int] = None):
"""
Base class for HyperVReplica policy details.
:param str instance_type: Gets the class type. Overridden in derived classes.
:param int allowed_authentication_type: A value indicating the authentication type.
:param int application_consistent_snapshot_frequency_in_hours: A value indicating the application consistent frequency.
:param str compression: A value indicating whether compression has to be enabled.
:param str initial_replication_method: A value indicating whether IR is online.
:param str offline_replication_export_path: A value indicating the offline IR export path.
:param str offline_replication_import_path: A value indicating the offline IR import path.
:param str online_replication_start_time: A value indicating the online IR start time.
:param int recovery_points: A value indicating the number of recovery points.
:param str replica_deletion_option: A value indicating whether the VM has to be auto deleted. Supported Values: String.Empty, None, OnRecoveryCloud
:param int replication_port: A value indicating the recovery HTTPS port.
"""
pulumi.set(__self__, "instance_type", 'HyperVReplicaBasePolicyDetails')
if allowed_authentication_type is not None:
pulumi.set(__self__, "allowed_authentication_type", allowed_authentication_type)
if application_consistent_snapshot_frequency_in_hours is not None:
pulumi.set(__self__, "application_consistent_snapshot_frequency_in_hours", application_consistent_snapshot_frequency_in_hours)
if compression is not None:
pulumi.set(__self__, "compression", compression)
if initial_replication_method is not None:
pulumi.set(__self__, "initial_replication_method", initial_replication_method)
if offline_replication_export_path is not None:
pulumi.set(__self__, "offline_replication_export_path", offline_replication_export_path)
if offline_replication_import_path is not None:
pulumi.set(__self__, "offline_replication_import_path", offline_replication_import_path)
if online_replication_start_time is not None:
pulumi.set(__self__, "online_replication_start_time", online_replication_start_time)
if recovery_points is not None:
pulumi.set(__self__, "recovery_points", recovery_points)
if replica_deletion_option is not None:
pulumi.set(__self__, "replica_deletion_option", replica_deletion_option)
if replication_port is not None:
pulumi.set(__self__, "replication_port", replication_port)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the class type. Overridden in derived classes.
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="allowedAuthenticationType")
def allowed_authentication_type(self) -> Optional[int]:
"""
A value indicating the authentication type.
"""
return pulumi.get(self, "allowed_authentication_type")
@property
@pulumi.getter(name="applicationConsistentSnapshotFrequencyInHours")
def application_consistent_snapshot_frequency_in_hours(self) -> Optional[int]:
"""
A value indicating the application consistent frequency.
"""
return pulumi.get(self, "application_consistent_snapshot_frequency_in_hours")
@property
@pulumi.getter
def compression(self) -> Optional[str]:
"""
A value indicating whether compression has to be enabled.
"""
return pulumi.get(self, "compression")
@property
@pulumi.getter(name="initialReplicationMethod")
def initial_replication_method(self) -> Optional[str]:
"""
A value indicating whether IR is online.
"""
return pulumi.get(self, "initial_replication_method")
@property
@pulumi.getter(name="offlineReplicationExportPath")
def offline_replication_export_path(self) -> Optional[str]:
"""
A value indicating the offline IR export path.
"""
return pulumi.get(self, "offline_replication_export_path")
@property
@pulumi.getter(name="offlineReplicationImportPath")
def offline_replication_import_path(self) -> Optional[str]:
"""
A value indicating the offline IR import path.
"""
return pulumi.get(self, "offline_replication_import_path")
@property
@pulumi.getter(name="onlineReplicationStartTime")
def online_replication_start_time(self) -> Optional[str]:
"""
A value indicating the online IR start time.
"""
return pulumi.get(self, "online_replication_start_time")
@property
@pulumi.getter(name="recoveryPoints")
def recovery_points(self) -> Optional[int]:
"""
A value indicating the number of recovery points.
"""
return pulumi.get(self, "recovery_points")
@property
@pulumi.getter(name="replicaDeletionOption")
def replica_deletion_option(self) -> Optional[str]:
"""
A value indicating whether the VM has to be auto deleted. Supported Values: String.Empty, None, OnRecoveryCloud
"""
return pulumi.get(self, "replica_deletion_option")
@property
@pulumi.getter(name="replicationPort")
def replication_port(self) -> Optional[int]:
"""
A value indicating the recovery HTTPS port.
"""
return pulumi.get(self, "replication_port")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class HyperVReplicaBaseReplicationDetailsResponse(dict):
"""
Hyper V replica provider specific settings base class.
"""
def __init__(__self__, *,
instance_type: str,
initial_replication_details: Optional['outputs.InitialReplicationDetailsResponse'] = None,
last_replicated_time: Optional[str] = None,
v_m_disk_details: Optional[Sequence['outputs.DiskDetailsResponse']] = None,
vm_id: Optional[str] = None,
vm_nics: Optional[Sequence['outputs.VMNicDetailsResponse']] = None,
vm_protection_state: Optional[str] = None,
vm_protection_state_description: Optional[str] = None):
"""
Hyper V replica provider specific settings base class.
:param str instance_type: Gets the Instance type.
:param 'InitialReplicationDetailsResponseArgs' initial_replication_details: Initial replication details.
:param str last_replicated_time: The last replication time.
:param Sequence['DiskDetailsResponseArgs'] v_m_disk_details: VM disk details.
:param str vm_id: The virtual machine ID.
:param Sequence['VMNicDetailsResponseArgs'] vm_nics: The PE network details.
:param str vm_protection_state: The protection state for the VM.
:param str vm_protection_state_description: The protection state description for the VM.
"""
pulumi.set(__self__, "instance_type", 'HyperVReplicaBaseReplicationDetails')
if initial_replication_details is not None:
pulumi.set(__self__, "initial_replication_details", initial_replication_details)
if last_replicated_time is not None:
pulumi.set(__self__, "last_replicated_time", last_replicated_time)
if v_m_disk_details is not None:
pulumi.set(__self__, "v_m_disk_details", v_m_disk_details)
if vm_id is not None:
pulumi.set(__self__, "vm_id", vm_id)
if vm_nics is not None:
pulumi.set(__self__, "vm_nics", vm_nics)
if vm_protection_state is not None:
pulumi.set(__self__, "vm_protection_state", vm_protection_state)
if vm_protection_state_description is not None:
pulumi.set(__self__, "vm_protection_state_description", vm_protection_state_description)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the Instance type.
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="initialReplicationDetails")
def initial_replication_details(self) -> Optional['outputs.InitialReplicationDetailsResponse']:
"""
Initial replication details.
"""
return pulumi.get(self, "initial_replication_details")
@property
@pulumi.getter(name="lastReplicatedTime")
def last_replicated_time(self) -> Optional[str]:
"""
The last replication time.
"""
return pulumi.get(self, "last_replicated_time")
@property
@pulumi.getter(name="vMDiskDetails")
def v_m_disk_details(self) -> Optional[Sequence['outputs.DiskDetailsResponse']]:
"""
VM disk details.
"""
return pulumi.get(self, "v_m_disk_details")
@property
@pulumi.getter(name="vmId")
def vm_id(self) -> Optional[str]:
"""
The virtual machine ID.
"""
return pulumi.get(self, "vm_id")
@property
@pulumi.getter(name="vmNics")
def vm_nics(self) -> Optional[Sequence['outputs.VMNicDetailsResponse']]:
"""
The PE network details.
"""
return pulumi.get(self, "vm_nics")
@property
@pulumi.getter(name="vmProtectionState")
def vm_protection_state(self) -> Optional[str]:
"""
The protection state for the VM.
"""
return pulumi.get(self, "vm_protection_state")
@property
@pulumi.getter(name="vmProtectionStateDescription")
def vm_protection_state_description(self) -> Optional[str]:
"""
The protection state description for the VM.
"""
return pulumi.get(self, "vm_protection_state_description")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class HyperVReplicaBluePolicyDetailsResponse(dict):
"""
Hyper-V Replica Blue specific protection profile details.
"""
def __init__(__self__, *,
instance_type: str,
allowed_authentication_type: Optional[int] = None,
application_consistent_snapshot_frequency_in_hours: Optional[int] = None,
compression: Optional[str] = None,
initial_replication_method: Optional[str] = None,
offline_replication_export_path: Optional[str] = None,
offline_replication_import_path: Optional[str] = None,
online_replication_start_time: Optional[str] = None,
recovery_points: Optional[int] = None,
replica_deletion_option: Optional[str] = None,
replication_frequency_in_seconds: Optional[int] = None,
replication_port: Optional[int] = None):
"""
Hyper-V Replica Blue specific protection profile details.
:param str instance_type: Gets the class type. Overridden in derived classes.
:param int allowed_authentication_type: A value indicating the authentication type.
:param int application_consistent_snapshot_frequency_in_hours: A value indicating the application consistent snapshot frequency in hours.
:param str compression: A value indicating whether compression has to be enabled.
:param str initial_replication_method: A value indicating whether IR is online.
:param str offline_replication_export_path: A value indicating the offline IR export path.
:param str offline_replication_import_path: A value indicating the offline IR import path.
:param str online_replication_start_time: A value indicating the online IR start time.
:param int recovery_points: A value indicating the number of recovery points.
:param str replica_deletion_option: A value indicating whether the VM has to be auto deleted. Supported Values: String.Empty, None, OnRecoveryCloud
:param int replication_frequency_in_seconds: A value indicating the replication interval.
:param int replication_port: A value indicating the recovery HTTPS port.
"""
pulumi.set(__self__, "instance_type", 'HyperVReplica2012R2')
if allowed_authentication_type is not None:
pulumi.set(__self__, "allowed_authentication_type", allowed_authentication_type)
if application_consistent_snapshot_frequency_in_hours is not None:
pulumi.set(__self__, "application_consistent_snapshot_frequency_in_hours", application_consistent_snapshot_frequency_in_hours)
if compression is not None:
pulumi.set(__self__, "compression", compression)
if initial_replication_method is not None:
pulumi.set(__self__, "initial_replication_method", initial_replication_method)
if offline_replication_export_path is not None:
pulumi.set(__self__, "offline_replication_export_path", offline_replication_export_path)
if offline_replication_import_path is not None:
pulumi.set(__self__, "offline_replication_import_path", offline_replication_import_path)
if online_replication_start_time is not None:
pulumi.set(__self__, "online_replication_start_time", online_replication_start_time)
if recovery_points is not None:
pulumi.set(__self__, "recovery_points", recovery_points)
if replica_deletion_option is not None:
pulumi.set(__self__, "replica_deletion_option", replica_deletion_option)
if replication_frequency_in_seconds is not None:
pulumi.set(__self__, "replication_frequency_in_seconds", replication_frequency_in_seconds)
if replication_port is not None:
pulumi.set(__self__, "replication_port", replication_port)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the class type. Overridden in derived classes.
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="allowedAuthenticationType")
def allowed_authentication_type(self) -> Optional[int]:
"""
A value indicating the authentication type.
"""
return pulumi.get(self, "allowed_authentication_type")
@property
@pulumi.getter(name="applicationConsistentSnapshotFrequencyInHours")
def application_consistent_snapshot_frequency_in_hours(self) -> Optional[int]:
"""
A value indicating the application consistent snapshot frequency in hours.
"""
return pulumi.get(self, "application_consistent_snapshot_frequency_in_hours")
@property
@pulumi.getter
def compression(self) -> Optional[str]:
"""
A value indicating whether compression has to be enabled.
"""
return pulumi.get(self, "compression")
@property
@pulumi.getter(name="initialReplicationMethod")
def initial_replication_method(self) -> Optional[str]:
"""
A value indicating whether IR is online.
"""
return pulumi.get(self, "initial_replication_method")
@property
@pulumi.getter(name="offlineReplicationExportPath")
def offline_replication_export_path(self) -> Optional[str]:
"""
A value indicating the offline IR export path.
"""
return pulumi.get(self, "offline_replication_export_path")
@property
@pulumi.getter(name="offlineReplicationImportPath")
def offline_replication_import_path(self) -> Optional[str]:
"""
A value indicating the offline IR import path.
"""
return pulumi.get(self, "offline_replication_import_path")
@property
@pulumi.getter(name="onlineReplicationStartTime")
def online_replication_start_time(self) -> Optional[str]:
"""
A value indicating the online IR start time.
"""
return pulumi.get(self, "online_replication_start_time")
@property
@pulumi.getter(name="recoveryPoints")
def recovery_points(self) -> Optional[int]:
"""
A value indicating the number of recovery points.
"""
return pulumi.get(self, "recovery_points")
@property
@pulumi.getter(name="replicaDeletionOption")
def replica_deletion_option(self) -> Optional[str]:
"""
A value indicating whether the VM has to be auto deleted. Supported Values: String.Empty, None, OnRecoveryCloud
"""
return pulumi.get(self, "replica_deletion_option")
@property
@pulumi.getter(name="replicationFrequencyInSeconds")
def replication_frequency_in_seconds(self) -> Optional[int]:
"""
A value indicating the replication interval.
"""
return pulumi.get(self, "replication_frequency_in_seconds")
@property
@pulumi.getter(name="replicationPort")
def replication_port(self) -> Optional[int]:
"""
A value indicating the recovery HTTPS port.
"""
return pulumi.get(self, "replication_port")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class HyperVReplicaBlueReplicationDetailsResponse(dict):
"""
HyperV replica 2012 R2 (Blue) replication details.
"""
def __init__(__self__, *,
instance_type: str,
initial_replication_details: Optional['outputs.InitialReplicationDetailsResponse'] = None,
last_replicated_time: Optional[str] = None,
v_m_disk_details: Optional[Sequence['outputs.DiskDetailsResponse']] = None,
vm_id: Optional[str] = None,
vm_nics: Optional[Sequence['outputs.VMNicDetailsResponse']] = None,
vm_protection_state: Optional[str] = None,
vm_protection_state_description: Optional[str] = None):
"""
HyperV replica 2012 R2 (Blue) replication details.
:param str instance_type: Gets the Instance type.
:param 'InitialReplicationDetailsResponseArgs' initial_replication_details: Initial replication details.
:param str last_replicated_time: The last replication time.
:param Sequence['DiskDetailsResponseArgs'] v_m_disk_details: VM disk details.
:param str vm_id: The virtual machine ID.
:param Sequence['VMNicDetailsResponseArgs'] vm_nics: The PE network details.
:param str vm_protection_state: The protection state for the VM.
:param str vm_protection_state_description: The protection state description for the VM.
"""
pulumi.set(__self__, "instance_type", 'HyperVReplica2012R2')
if initial_replication_details is not None:
pulumi.set(__self__, "initial_replication_details", initial_replication_details)
if last_replicated_time is not None:
pulumi.set(__self__, "last_replicated_time", last_replicated_time)
if v_m_disk_details is not None:
pulumi.set(__self__, "v_m_disk_details", v_m_disk_details)
if vm_id is not None:
pulumi.set(__self__, "vm_id", vm_id)
if vm_nics is not None:
pulumi.set(__self__, "vm_nics", vm_nics)
if vm_protection_state is not None:
pulumi.set(__self__, "vm_protection_state", vm_protection_state)
if vm_protection_state_description is not None:
pulumi.set(__self__, "vm_protection_state_description", vm_protection_state_description)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the Instance type.
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="initialReplicationDetails")
def initial_replication_details(self) -> Optional['outputs.InitialReplicationDetailsResponse']:
"""
Initial replication details.
"""
return pulumi.get(self, "initial_replication_details")
@property
@pulumi.getter(name="lastReplicatedTime")
def last_replicated_time(self) -> Optional[str]:
"""
The last replication time.
"""
return pulumi.get(self, "last_replicated_time")
@property
@pulumi.getter(name="vMDiskDetails")
def v_m_disk_details(self) -> Optional[Sequence['outputs.DiskDetailsResponse']]:
"""
VM disk details.
"""
return pulumi.get(self, "v_m_disk_details")
@property
@pulumi.getter(name="vmId")
def vm_id(self) -> Optional[str]:
"""
The virtual machine ID.
"""
return pulumi.get(self, "vm_id")
@property
@pulumi.getter(name="vmNics")
def vm_nics(self) -> Optional[Sequence['outputs.VMNicDetailsResponse']]:
"""
The PE network details.
"""
return pulumi.get(self, "vm_nics")
@property
@pulumi.getter(name="vmProtectionState")
def vm_protection_state(self) -> Optional[str]:
"""
The protection state for the VM.
"""
return pulumi.get(self, "vm_protection_state")
@property
@pulumi.getter(name="vmProtectionStateDescription")
def vm_protection_state_description(self) -> Optional[str]:
"""
The protection state description for the VM.
"""
return pulumi.get(self, "vm_protection_state_description")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class HyperVReplicaPolicyDetailsResponse(dict):
"""
Hyper-V Replica specific protection profile details.
"""
def __init__(__self__, *,
instance_type: str,
allowed_authentication_type: Optional[int] = None,
application_consistent_snapshot_frequency_in_hours: Optional[int] = None,
compression: Optional[str] = None,
initial_replication_method: Optional[str] = None,
offline_replication_export_path: Optional[str] = None,
offline_replication_import_path: Optional[str] = None,
online_replication_start_time: Optional[str] = None,
recovery_points: Optional[int] = None,
replica_deletion_option: Optional[str] = None,
replication_port: Optional[int] = None):
"""
Hyper-V Replica specific protection profile details.
:param str instance_type: Gets the class type. Overridden in derived classes.
:param int allowed_authentication_type: A value indicating the authentication type.
:param int application_consistent_snapshot_frequency_in_hours: A value indicating the application consistent snapshot frequency in hours.
:param str compression: A value indicating whether compression has to be enabled.
:param str initial_replication_method: A value indicating whether IR is online.
:param str offline_replication_export_path: A value indicating the offline IR export path.
:param str offline_replication_import_path: A value indicating the offline IR import path.
:param str online_replication_start_time: A value indicating the online IR start time.
:param int recovery_points: A value indicating the number of recovery points.
:param str replica_deletion_option: A value indicating whether the VM has to be auto deleted. Supported Values: String.Empty, None, OnRecoveryCloud
:param int replication_port: A value indicating the recovery HTTPS port.
"""
pulumi.set(__self__, "instance_type", 'HyperVReplica2012')
if allowed_authentication_type is not None:
pulumi.set(__self__, "allowed_authentication_type", allowed_authentication_type)
if application_consistent_snapshot_frequency_in_hours is not None:
pulumi.set(__self__, "application_consistent_snapshot_frequency_in_hours", application_consistent_snapshot_frequency_in_hours)
if compression is not None:
pulumi.set(__self__, "compression", compression)
if initial_replication_method is not None:
pulumi.set(__self__, "initial_replication_method", initial_replication_method)
if offline_replication_export_path is not None:
pulumi.set(__self__, "offline_replication_export_path", offline_replication_export_path)
if offline_replication_import_path is not None:
pulumi.set(__self__, "offline_replication_import_path", offline_replication_import_path)
if online_replication_start_time is not None:
pulumi.set(__self__, "online_replication_start_time", online_replication_start_time)
if recovery_points is not None:
pulumi.set(__self__, "recovery_points", recovery_points)
if replica_deletion_option is not None:
pulumi.set(__self__, "replica_deletion_option", replica_deletion_option)
if replication_port is not None:
pulumi.set(__self__, "replication_port", replication_port)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the class type. Overridden in derived classes.
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="allowedAuthenticationType")
def allowed_authentication_type(self) -> Optional[int]:
"""
A value indicating the authentication type.
"""
return pulumi.get(self, "allowed_authentication_type")
@property
@pulumi.getter(name="applicationConsistentSnapshotFrequencyInHours")
def application_consistent_snapshot_frequency_in_hours(self) -> Optional[int]:
"""
A value indicating the application consistent snapshot frequency in hours.
"""
return pulumi.get(self, "application_consistent_snapshot_frequency_in_hours")
@property
@pulumi.getter
def compression(self) -> Optional[str]:
"""
A value indicating whether compression has to be enabled.
"""
return pulumi.get(self, "compression")
@property
@pulumi.getter(name="initialReplicationMethod")
def initial_replication_method(self) -> Optional[str]:
"""
A value indicating whether IR is online.
"""
return pulumi.get(self, "initial_replication_method")
@property
@pulumi.getter(name="offlineReplicationExportPath")
def offline_replication_export_path(self) -> Optional[str]:
"""
A value indicating the offline IR export path.
"""
return pulumi.get(self, "offline_replication_export_path")
@property
@pulumi.getter(name="offlineReplicationImportPath")
def offline_replication_import_path(self) -> Optional[str]:
"""
A value indicating the offline IR import path.
"""
return pulumi.get(self, "offline_replication_import_path")
@property
@pulumi.getter(name="onlineReplicationStartTime")
def online_replication_start_time(self) -> Optional[str]:
"""
A value indicating the online IR start time.
"""
return pulumi.get(self, "online_replication_start_time")
@property
@pulumi.getter(name="recoveryPoints")
def recovery_points(self) -> Optional[int]:
"""
A value indicating the number of recovery points.
"""
return pulumi.get(self, "recovery_points")
@property
@pulumi.getter(name="replicaDeletionOption")
def replica_deletion_option(self) -> Optional[str]:
"""
A value indicating whether the VM has to be auto deleted. Supported Values: String.Empty, None, OnRecoveryCloud
"""
return pulumi.get(self, "replica_deletion_option")
@property
@pulumi.getter(name="replicationPort")
def replication_port(self) -> Optional[int]:
"""
A value indicating the recovery HTTPS port.
"""
return pulumi.get(self, "replication_port")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class HyperVReplicaReplicationDetailsResponse(dict):
"""
HyperV replica 2012 replication details.
"""
def __init__(__self__, *,
instance_type: str,
initial_replication_details: Optional['outputs.InitialReplicationDetailsResponse'] = None,
last_replicated_time: Optional[str] = None,
v_m_disk_details: Optional[Sequence['outputs.DiskDetailsResponse']] = None,
vm_id: Optional[str] = None,
vm_nics: Optional[Sequence['outputs.VMNicDetailsResponse']] = None,
vm_protection_state: Optional[str] = None,
vm_protection_state_description: Optional[str] = None):
"""
HyperV replica 2012 replication details.
:param str instance_type: Gets the Instance type.
:param 'InitialReplicationDetailsResponseArgs' initial_replication_details: Initial replication details.
:param str last_replicated_time: The last replication time.
:param Sequence['DiskDetailsResponseArgs'] v_m_disk_details: VM disk details.
:param str vm_id: The virtual machine ID.
:param Sequence['VMNicDetailsResponseArgs'] vm_nics: The PE network details.
:param str vm_protection_state: The protection state for the VM.
:param str vm_protection_state_description: The protection state description for the VM.
"""
pulumi.set(__self__, "instance_type", 'HyperVReplica2012')
if initial_replication_details is not None:
pulumi.set(__self__, "initial_replication_details", initial_replication_details)
if last_replicated_time is not None:
pulumi.set(__self__, "last_replicated_time", last_replicated_time)
if v_m_disk_details is not None:
pulumi.set(__self__, "v_m_disk_details", v_m_disk_details)
if vm_id is not None:
pulumi.set(__self__, "vm_id", vm_id)
if vm_nics is not None:
pulumi.set(__self__, "vm_nics", vm_nics)
if vm_protection_state is not None:
pulumi.set(__self__, "vm_protection_state", vm_protection_state)
if vm_protection_state_description is not None:
pulumi.set(__self__, "vm_protection_state_description", vm_protection_state_description)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the Instance type.
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="initialReplicationDetails")
def initial_replication_details(self) -> Optional['outputs.InitialReplicationDetailsResponse']:
"""
Initial replication details.
"""
return pulumi.get(self, "initial_replication_details")
@property
@pulumi.getter(name="lastReplicatedTime")
def last_replicated_time(self) -> Optional[str]:
"""
The last replication time.
"""
return pulumi.get(self, "last_replicated_time")
@property
@pulumi.getter(name="vMDiskDetails")
def v_m_disk_details(self) -> Optional[Sequence['outputs.DiskDetailsResponse']]:
"""
VM disk details.
"""
return pulumi.get(self, "v_m_disk_details")
@property
@pulumi.getter(name="vmId")
def vm_id(self) -> Optional[str]:
"""
The virtual machine ID.
"""
return pulumi.get(self, "vm_id")
@property
@pulumi.getter(name="vmNics")
def vm_nics(self) -> Optional[Sequence['outputs.VMNicDetailsResponse']]:
"""
The PE network details.
"""
return pulumi.get(self, "vm_nics")
@property
@pulumi.getter(name="vmProtectionState")
def vm_protection_state(self) -> Optional[str]:
"""
The protection state for the VM.
"""
return pulumi.get(self, "vm_protection_state")
@property
@pulumi.getter(name="vmProtectionStateDescription")
def vm_protection_state_description(self) -> Optional[str]:
"""
The protection state description for the VM.
"""
return pulumi.get(self, "vm_protection_state_description")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class HyperVSiteDetailsResponse(dict):
"""
HyperVSite fabric specific details.
"""
def __init__(__self__, *,
instance_type: str):
"""
HyperVSite fabric specific details.
:param str instance_type: Gets the class type. Overridden in derived classes.
"""
pulumi.set(__self__, "instance_type", 'HyperVSite')
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the class type. Overridden in derived classes.
"""
return pulumi.get(self, "instance_type")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class InMageAgentDetailsResponse(dict):
"""
The details of the InMage agent.
"""
def __init__(__self__, *,
agent_update_status: Optional[str] = None,
agent_version: Optional[str] = None,
post_update_reboot_status: Optional[str] = None):
"""
The details of the InMage agent.
:param str agent_update_status: A value indicating whether the installed agent needs to be updated.
:param str agent_version: The agent version.
:param str post_update_reboot_status: A value indicating whether a reboot is required after the update is applied.
"""
if agent_update_status is not None:
pulumi.set(__self__, "agent_update_status", agent_update_status)
if agent_version is not None:
pulumi.set(__self__, "agent_version", agent_version)
if post_update_reboot_status is not None:
pulumi.set(__self__, "post_update_reboot_status", post_update_reboot_status)
@property
@pulumi.getter(name="agentUpdateStatus")
def agent_update_status(self) -> Optional[str]:
"""
A value indicating whether the installed agent needs to be updated.
"""
return pulumi.get(self, "agent_update_status")
@property
@pulumi.getter(name="agentVersion")
def agent_version(self) -> Optional[str]:
"""
The agent version.
"""
return pulumi.get(self, "agent_version")
@property
@pulumi.getter(name="postUpdateRebootStatus")
def post_update_reboot_status(self) -> Optional[str]:
"""
A value indicating whether a reboot is required after the update is applied.
"""
return pulumi.get(self, "post_update_reboot_status")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class InMageAzureV2PolicyDetailsResponse(dict):
"""
InMage Azure v2 specific protection profile details.
"""
def __init__(__self__, *,
instance_type: str,
app_consistent_frequency_in_minutes: Optional[int] = None,
crash_consistent_frequency_in_minutes: Optional[int] = None,
multi_vm_sync_status: Optional[str] = None,
recovery_point_history: Optional[int] = None,
recovery_point_threshold_in_minutes: Optional[int] = None):
"""
InMage Azure v2 specific protection profile details.
:param str instance_type: Gets the class type. Overridden in derived classes.
:param int app_consistent_frequency_in_minutes: The app consistent snapshot frequency in minutes.
:param int crash_consistent_frequency_in_minutes: The crash consistent snapshot frequency in minutes.
:param str multi_vm_sync_status: A value indicating whether multi-VM sync has to be enabled.
:param int recovery_point_history: The duration (in minutes) for which the recovery points need to be stored.
:param int recovery_point_threshold_in_minutes: The recovery point threshold in minutes.
"""
pulumi.set(__self__, "instance_type", 'InMageAzureV2')
if app_consistent_frequency_in_minutes is not None:
pulumi.set(__self__, "app_consistent_frequency_in_minutes", app_consistent_frequency_in_minutes)
if crash_consistent_frequency_in_minutes is not None:
pulumi.set(__self__, "crash_consistent_frequency_in_minutes", crash_consistent_frequency_in_minutes)
if multi_vm_sync_status is not None:
pulumi.set(__self__, "multi_vm_sync_status", multi_vm_sync_status)
if recovery_point_history is not None:
pulumi.set(__self__, "recovery_point_history", recovery_point_history)
if recovery_point_threshold_in_minutes is not None:
pulumi.set(__self__, "recovery_point_threshold_in_minutes", recovery_point_threshold_in_minutes)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the class type. Overridden in derived classes.
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="appConsistentFrequencyInMinutes")
def app_consistent_frequency_in_minutes(self) -> Optional[int]:
"""
The app consistent snapshot frequency in minutes.
"""
return pulumi.get(self, "app_consistent_frequency_in_minutes")
@property
@pulumi.getter(name="crashConsistentFrequencyInMinutes")
def crash_consistent_frequency_in_minutes(self) -> Optional[int]:
"""
The crash consistent snapshot frequency in minutes.
"""
return pulumi.get(self, "crash_consistent_frequency_in_minutes")
@property
@pulumi.getter(name="multiVmSyncStatus")
def multi_vm_sync_status(self) -> Optional[str]:
"""
A value indicating whether multi-VM sync has to be enabled.
"""
return pulumi.get(self, "multi_vm_sync_status")
@property
@pulumi.getter(name="recoveryPointHistory")
def recovery_point_history(self) -> Optional[int]:
"""
The duration (in minutes) for which the recovery points need to be stored.
"""
return pulumi.get(self, "recovery_point_history")
@property
@pulumi.getter(name="recoveryPointThresholdInMinutes")
def recovery_point_threshold_in_minutes(self) -> Optional[int]:
"""
The recovery point threshold in minutes.
"""
return pulumi.get(self, "recovery_point_threshold_in_minutes")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class InMageAzureV2ProtectedDiskDetailsResponse(dict):
"""
InMageAzureV2 protected disk details.
"""
def __init__(__self__, *,
disk_capacity_in_bytes: Optional[int] = None,
disk_id: Optional[str] = None,
disk_name: Optional[str] = None,
disk_resized: Optional[str] = None,
file_system_capacity_in_bytes: Optional[int] = None,
health_error_code: Optional[str] = None,
last_rpo_calculated_time: Optional[str] = None,
protection_stage: Optional[str] = None,
ps_data_in_mega_bytes: Optional[float] = None,
resync_duration_in_seconds: Optional[int] = None,
resync_progress_percentage: Optional[int] = None,
resync_required: Optional[str] = None,
rpo_in_seconds: Optional[int] = None,
source_data_in_mega_bytes: Optional[float] = None,
target_data_in_mega_bytes: Optional[float] = None):
"""
InMageAzureV2 protected disk details.
:param int disk_capacity_in_bytes: The disk capacity in bytes.
:param str disk_id: The disk ID.
:param str disk_name: The disk name.
:param str disk_resized: A value indicating whether the disk is resized.
:param int file_system_capacity_in_bytes: The disk file system capacity in bytes.
:param str health_error_code: The health error code for the disk.
:param str last_rpo_calculated_time: The last RPO calculated time.
:param str protection_stage: The protection stage.
:param float ps_data_in_mega_bytes: The PS data transit in MB.
:param int resync_duration_in_seconds: The resync duration in seconds.
:param int resync_progress_percentage: The resync progress percentage.
:param str resync_required: A value indicating whether resync is required for this disk.
:param int rpo_in_seconds: The RPO in seconds.
:param float source_data_in_mega_bytes: The source data transit in MB.
:param float target_data_in_mega_bytes: The target data transit in MB.
"""
if disk_capacity_in_bytes is not None:
pulumi.set(__self__, "disk_capacity_in_bytes", disk_capacity_in_bytes)
if disk_id is not None:
pulumi.set(__self__, "disk_id", disk_id)
if disk_name is not None:
pulumi.set(__self__, "disk_name", disk_name)
if disk_resized is not None:
pulumi.set(__self__, "disk_resized", disk_resized)
if file_system_capacity_in_bytes is not None:
pulumi.set(__self__, "file_system_capacity_in_bytes", file_system_capacity_in_bytes)
if health_error_code is not None:
pulumi.set(__self__, "health_error_code", health_error_code)
if last_rpo_calculated_time is not None:
pulumi.set(__self__, "last_rpo_calculated_time", last_rpo_calculated_time)
if protection_stage is not None:
pulumi.set(__self__, "protection_stage", protection_stage)
if ps_data_in_mega_bytes is not None:
pulumi.set(__self__, "ps_data_in_mega_bytes", ps_data_in_mega_bytes)
if resync_duration_in_seconds is not None:
pulumi.set(__self__, "resync_duration_in_seconds", resync_duration_in_seconds)
if resync_progress_percentage is not None:
pulumi.set(__self__, "resync_progress_percentage", resync_progress_percentage)
if resync_required is not None:
pulumi.set(__self__, "resync_required", resync_required)
if rpo_in_seconds is not None:
pulumi.set(__self__, "rpo_in_seconds", rpo_in_seconds)
if source_data_in_mega_bytes is not None:
pulumi.set(__self__, "source_data_in_mega_bytes", source_data_in_mega_bytes)
if target_data_in_mega_bytes is not None:
pulumi.set(__self__, "target_data_in_mega_bytes", target_data_in_mega_bytes)
@property
@pulumi.getter(name="diskCapacityInBytes")
def disk_capacity_in_bytes(self) -> Optional[int]:
"""
The disk capacity in bytes.
"""
return pulumi.get(self, "disk_capacity_in_bytes")
@property
@pulumi.getter(name="diskId")
def disk_id(self) -> Optional[str]:
"""
The disk id.
"""
return pulumi.get(self, "disk_id")
@property
@pulumi.getter(name="diskName")
def disk_name(self) -> Optional[str]:
"""
The disk name.
"""
return pulumi.get(self, "disk_name")
@property
@pulumi.getter(name="diskResized")
def disk_resized(self) -> Optional[str]:
"""
        A value indicating whether the disk is resized.
"""
return pulumi.get(self, "disk_resized")
@property
@pulumi.getter(name="fileSystemCapacityInBytes")
def file_system_capacity_in_bytes(self) -> Optional[int]:
"""
The disk file system capacity in bytes.
"""
return pulumi.get(self, "file_system_capacity_in_bytes")
@property
@pulumi.getter(name="healthErrorCode")
def health_error_code(self) -> Optional[str]:
"""
The health error code for the disk.
"""
return pulumi.get(self, "health_error_code")
@property
@pulumi.getter(name="lastRpoCalculatedTime")
def last_rpo_calculated_time(self) -> Optional[str]:
"""
The last RPO calculated time.
"""
return pulumi.get(self, "last_rpo_calculated_time")
@property
@pulumi.getter(name="protectionStage")
def protection_stage(self) -> Optional[str]:
"""
The protection stage.
"""
return pulumi.get(self, "protection_stage")
@property
@pulumi.getter(name="psDataInMegaBytes")
def ps_data_in_mega_bytes(self) -> Optional[float]:
"""
The PS data transit in MB.
"""
return pulumi.get(self, "ps_data_in_mega_bytes")
@property
@pulumi.getter(name="resyncDurationInSeconds")
def resync_duration_in_seconds(self) -> Optional[int]:
"""
The resync duration in seconds.
"""
return pulumi.get(self, "resync_duration_in_seconds")
@property
@pulumi.getter(name="resyncProgressPercentage")
def resync_progress_percentage(self) -> Optional[int]:
"""
The resync progress percentage.
"""
return pulumi.get(self, "resync_progress_percentage")
@property
@pulumi.getter(name="resyncRequired")
def resync_required(self) -> Optional[str]:
"""
A value indicating whether resync is required for this disk.
"""
return pulumi.get(self, "resync_required")
@property
@pulumi.getter(name="rpoInSeconds")
def rpo_in_seconds(self) -> Optional[int]:
"""
The RPO in seconds.
"""
return pulumi.get(self, "rpo_in_seconds")
@property
@pulumi.getter(name="sourceDataInMegaBytes")
def source_data_in_mega_bytes(self) -> Optional[float]:
"""
The source data transit in MB.
"""
return pulumi.get(self, "source_data_in_mega_bytes")
@property
@pulumi.getter(name="targetDataInMegaBytes")
def target_data_in_mega_bytes(self) -> Optional[float]:
"""
The target data transit in MB.
"""
return pulumi.get(self, "target_data_in_mega_bytes")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class InMageAzureV2ReplicationDetailsResponse(dict):
"""
InMageAzureV2 provider specific settings
"""
def __init__(__self__, *,
instance_type: str,
agent_version: Optional[str] = None,
azure_vm_disk_details: Optional[Sequence['outputs.AzureVmDiskDetailsResponse']] = None,
compressed_data_rate_in_mb: Optional[float] = None,
datastores: Optional[Sequence[str]] = None,
discovery_type: Optional[str] = None,
disk_resized: Optional[str] = None,
enable_rdp_on_target_option: Optional[str] = None,
infrastructure_vm_id: Optional[str] = None,
ip_address: Optional[str] = None,
is_agent_update_required: Optional[str] = None,
is_reboot_after_update_required: Optional[str] = None,
last_heartbeat: Optional[str] = None,
last_rpo_calculated_time: Optional[str] = None,
last_update_received_time: Optional[str] = None,
license_type: Optional[str] = None,
master_target_id: Optional[str] = None,
multi_vm_group_id: Optional[str] = None,
multi_vm_group_name: Optional[str] = None,
multi_vm_sync_status: Optional[str] = None,
os_disk_id: Optional[str] = None,
os_type: Optional[str] = None,
os_version: Optional[str] = None,
process_server_id: Optional[str] = None,
protected_disks: Optional[Sequence['outputs.InMageAzureV2ProtectedDiskDetailsResponse']] = None,
protection_stage: Optional[str] = None,
recovery_availability_set_id: Optional[str] = None,
recovery_azure_log_storage_account_id: Optional[str] = None,
recovery_azure_resource_group_id: Optional[str] = None,
recovery_azure_storage_account: Optional[str] = None,
recovery_azure_vm_name: Optional[str] = None,
recovery_azure_vm_size: Optional[str] = None,
replica_id: Optional[str] = None,
resync_progress_percentage: Optional[int] = None,
rpo_in_seconds: Optional[int] = None,
selected_recovery_azure_network_id: Optional[str] = None,
source_vm_cpu_count: Optional[int] = None,
source_vm_ram_size_in_mb: Optional[int] = None,
target_vm_id: Optional[str] = None,
uncompressed_data_rate_in_mb: Optional[float] = None,
use_managed_disks: Optional[str] = None,
v_center_infrastructure_id: Optional[str] = None,
validation_errors: Optional[Sequence['outputs.HealthErrorResponse']] = None,
vhd_name: Optional[str] = None,
vm_id: Optional[str] = None,
vm_nics: Optional[Sequence['outputs.VMNicDetailsResponse']] = None,
vm_protection_state: Optional[str] = None,
vm_protection_state_description: Optional[str] = None):
"""
InMageAzureV2 provider specific settings
:param str instance_type: Gets the Instance type.
:param str agent_version: The agent version.
:param Sequence['AzureVmDiskDetailsResponseArgs'] azure_vm_disk_details: Azure VM Disk details.
:param float compressed_data_rate_in_mb: The compressed data change rate in MB.
        :param Sequence[str] datastores: The data stores of the on-premises machine. Value can be a list of strings containing data store names.
        :param str discovery_type: A value indicating the discovery type of the machine. Value can be vCenter or physical.
        :param str disk_resized: A value indicating whether any disk is resized for this VM.
        :param str enable_rdp_on_target_option: The selected option to enable RDP/SSH on the target VM after failover. String value of {SrsDataContract.EnableRDPOnTargetOption} enum.
:param str infrastructure_vm_id: The infrastructure VM Id.
:param str ip_address: The source IP address.
        :param str is_agent_update_required: A value indicating whether the installed agent needs to be updated.
:param str is_reboot_after_update_required: A value indicating whether the source server requires a restart after update.
:param str last_heartbeat: The last heartbeat received from the source server.
:param str last_rpo_calculated_time: The last RPO calculated time.
:param str last_update_received_time: The last update time received from on-prem components.
:param str license_type: License Type of the VM to be used.
:param str master_target_id: The master target Id.
:param str multi_vm_group_id: The multi vm group Id.
:param str multi_vm_group_name: The multi vm group name.
:param str multi_vm_sync_status: A value indicating whether multi vm sync is enabled or disabled.
:param str os_disk_id: The id of the disk containing the OS.
:param str os_type: The type of the OS on the VM.
:param str os_version: The OS Version of the protected item.
:param str process_server_id: The process server Id.
:param Sequence['InMageAzureV2ProtectedDiskDetailsResponseArgs'] protected_disks: The list of protected disks.
:param str protection_stage: The protection stage.
:param str recovery_availability_set_id: The recovery availability set Id.
:param str recovery_azure_log_storage_account_id: The ARM id of the log storage account used for replication. This will be set to null if no log storage account was provided during enable protection.
:param str recovery_azure_resource_group_id: The target resource group Id.
:param str recovery_azure_storage_account: The recovery Azure storage account.
        :param str recovery_azure_vm_name: The name given to the recovery Azure VM.
:param str recovery_azure_vm_size: The Recovery Azure VM size.
:param str replica_id: The replica id of the protected item.
:param int resync_progress_percentage: The resync progress percentage.
:param int rpo_in_seconds: The RPO in seconds.
:param str selected_recovery_azure_network_id: The selected recovery azure network Id.
:param int source_vm_cpu_count: The CPU count of the VM on the primary side.
:param int source_vm_ram_size_in_mb: The RAM size of the VM on the primary side.
        :param str target_vm_id: The ARM Id of the target Azure VM. This value is null until the VM is failed over; only then is it populated with the ARM Id of the Azure VM.
:param float uncompressed_data_rate_in_mb: The uncompressed data change rate in MB.
:param str use_managed_disks: A value indicating whether managed disks should be used during failover.
:param str v_center_infrastructure_id: The vCenter infrastructure Id.
        :param Sequence['HealthErrorResponseArgs'] validation_errors: The validation errors of the on-premises machine. Value can be a list of validation errors.
:param str vhd_name: The OS disk VHD name.
:param str vm_id: The virtual machine Id.
:param Sequence['VMNicDetailsResponseArgs'] vm_nics: The PE Network details.
:param str vm_protection_state: The protection state for the vm.
:param str vm_protection_state_description: The protection state description for the vm.
"""
pulumi.set(__self__, "instance_type", 'InMageAzureV2')
if agent_version is not None:
pulumi.set(__self__, "agent_version", agent_version)
if azure_vm_disk_details is not None:
pulumi.set(__self__, "azure_vm_disk_details", azure_vm_disk_details)
if compressed_data_rate_in_mb is not None:
pulumi.set(__self__, "compressed_data_rate_in_mb", compressed_data_rate_in_mb)
if datastores is not None:
pulumi.set(__self__, "datastores", datastores)
if discovery_type is not None:
pulumi.set(__self__, "discovery_type", discovery_type)
if disk_resized is not None:
pulumi.set(__self__, "disk_resized", disk_resized)
if enable_rdp_on_target_option is not None:
pulumi.set(__self__, "enable_rdp_on_target_option", enable_rdp_on_target_option)
if infrastructure_vm_id is not None:
pulumi.set(__self__, "infrastructure_vm_id", infrastructure_vm_id)
if ip_address is not None:
pulumi.set(__self__, "ip_address", ip_address)
if is_agent_update_required is not None:
pulumi.set(__self__, "is_agent_update_required", is_agent_update_required)
if is_reboot_after_update_required is not None:
pulumi.set(__self__, "is_reboot_after_update_required", is_reboot_after_update_required)
if last_heartbeat is not None:
pulumi.set(__self__, "last_heartbeat", last_heartbeat)
if last_rpo_calculated_time is not None:
pulumi.set(__self__, "last_rpo_calculated_time", last_rpo_calculated_time)
if last_update_received_time is not None:
pulumi.set(__self__, "last_update_received_time", last_update_received_time)
if license_type is not None:
pulumi.set(__self__, "license_type", license_type)
if master_target_id is not None:
pulumi.set(__self__, "master_target_id", master_target_id)
if multi_vm_group_id is not None:
pulumi.set(__self__, "multi_vm_group_id", multi_vm_group_id)
if multi_vm_group_name is not None:
pulumi.set(__self__, "multi_vm_group_name", multi_vm_group_name)
if multi_vm_sync_status is not None:
pulumi.set(__self__, "multi_vm_sync_status", multi_vm_sync_status)
if os_disk_id is not None:
pulumi.set(__self__, "os_disk_id", os_disk_id)
if os_type is not None:
pulumi.set(__self__, "os_type", os_type)
if os_version is not None:
pulumi.set(__self__, "os_version", os_version)
if process_server_id is not None:
pulumi.set(__self__, "process_server_id", process_server_id)
if protected_disks is not None:
pulumi.set(__self__, "protected_disks", protected_disks)
if protection_stage is not None:
pulumi.set(__self__, "protection_stage", protection_stage)
if recovery_availability_set_id is not None:
pulumi.set(__self__, "recovery_availability_set_id", recovery_availability_set_id)
if recovery_azure_log_storage_account_id is not None:
pulumi.set(__self__, "recovery_azure_log_storage_account_id", recovery_azure_log_storage_account_id)
if recovery_azure_resource_group_id is not None:
pulumi.set(__self__, "recovery_azure_resource_group_id", recovery_azure_resource_group_id)
if recovery_azure_storage_account is not None:
pulumi.set(__self__, "recovery_azure_storage_account", recovery_azure_storage_account)
if recovery_azure_vm_name is not None:
pulumi.set(__self__, "recovery_azure_vm_name", recovery_azure_vm_name)
if recovery_azure_vm_size is not None:
pulumi.set(__self__, "recovery_azure_vm_size", recovery_azure_vm_size)
if replica_id is not None:
pulumi.set(__self__, "replica_id", replica_id)
if resync_progress_percentage is not None:
pulumi.set(__self__, "resync_progress_percentage", resync_progress_percentage)
if rpo_in_seconds is not None:
pulumi.set(__self__, "rpo_in_seconds", rpo_in_seconds)
if selected_recovery_azure_network_id is not None:
pulumi.set(__self__, "selected_recovery_azure_network_id", selected_recovery_azure_network_id)
if source_vm_cpu_count is not None:
pulumi.set(__self__, "source_vm_cpu_count", source_vm_cpu_count)
if source_vm_ram_size_in_mb is not None:
pulumi.set(__self__, "source_vm_ram_size_in_mb", source_vm_ram_size_in_mb)
if target_vm_id is not None:
pulumi.set(__self__, "target_vm_id", target_vm_id)
if uncompressed_data_rate_in_mb is not None:
pulumi.set(__self__, "uncompressed_data_rate_in_mb", uncompressed_data_rate_in_mb)
if use_managed_disks is not None:
pulumi.set(__self__, "use_managed_disks", use_managed_disks)
if v_center_infrastructure_id is not None:
pulumi.set(__self__, "v_center_infrastructure_id", v_center_infrastructure_id)
if validation_errors is not None:
pulumi.set(__self__, "validation_errors", validation_errors)
if vhd_name is not None:
pulumi.set(__self__, "vhd_name", vhd_name)
if vm_id is not None:
pulumi.set(__self__, "vm_id", vm_id)
if vm_nics is not None:
pulumi.set(__self__, "vm_nics", vm_nics)
if vm_protection_state is not None:
pulumi.set(__self__, "vm_protection_state", vm_protection_state)
if vm_protection_state_description is not None:
pulumi.set(__self__, "vm_protection_state_description", vm_protection_state_description)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the Instance type.
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="agentVersion")
def agent_version(self) -> Optional[str]:
"""
The agent version.
"""
return pulumi.get(self, "agent_version")
@property
@pulumi.getter(name="azureVMDiskDetails")
def azure_vm_disk_details(self) -> Optional[Sequence['outputs.AzureVmDiskDetailsResponse']]:
"""
Azure VM Disk details.
"""
return pulumi.get(self, "azure_vm_disk_details")
@property
@pulumi.getter(name="compressedDataRateInMB")
def compressed_data_rate_in_mb(self) -> Optional[float]:
"""
The compressed data change rate in MB.
"""
return pulumi.get(self, "compressed_data_rate_in_mb")
@property
@pulumi.getter
def datastores(self) -> Optional[Sequence[str]]:
"""
        The data stores of the on-premises machine. Value can be a list of strings containing data store names.
"""
return pulumi.get(self, "datastores")
@property
@pulumi.getter(name="discoveryType")
def discovery_type(self) -> Optional[str]:
"""
A value indicating the discovery type of the machine. Value can be vCenter or physical.
"""
return pulumi.get(self, "discovery_type")
@property
@pulumi.getter(name="diskResized")
def disk_resized(self) -> Optional[str]:
"""
A value indicating whether any disk is resized for this VM.
"""
return pulumi.get(self, "disk_resized")
@property
@pulumi.getter(name="enableRDPOnTargetOption")
def enable_rdp_on_target_option(self) -> Optional[str]:
"""
        The selected option to enable RDP/SSH on the target VM after failover. String value of {SrsDataContract.EnableRDPOnTargetOption} enum.
"""
return pulumi.get(self, "enable_rdp_on_target_option")
@property
@pulumi.getter(name="infrastructureVmId")
def infrastructure_vm_id(self) -> Optional[str]:
"""
The infrastructure VM Id.
"""
return pulumi.get(self, "infrastructure_vm_id")
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> Optional[str]:
"""
The source IP address.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter(name="isAgentUpdateRequired")
def is_agent_update_required(self) -> Optional[str]:
"""
        A value indicating whether the installed agent needs to be updated.
"""
return pulumi.get(self, "is_agent_update_required")
@property
@pulumi.getter(name="isRebootAfterUpdateRequired")
def is_reboot_after_update_required(self) -> Optional[str]:
"""
A value indicating whether the source server requires a restart after update.
"""
return pulumi.get(self, "is_reboot_after_update_required")
@property
@pulumi.getter(name="lastHeartbeat")
def last_heartbeat(self) -> Optional[str]:
"""
The last heartbeat received from the source server.
"""
return pulumi.get(self, "last_heartbeat")
@property
@pulumi.getter(name="lastRpoCalculatedTime")
def last_rpo_calculated_time(self) -> Optional[str]:
"""
The last RPO calculated time.
"""
return pulumi.get(self, "last_rpo_calculated_time")
@property
@pulumi.getter(name="lastUpdateReceivedTime")
def last_update_received_time(self) -> Optional[str]:
"""
The last update time received from on-prem components.
"""
return pulumi.get(self, "last_update_received_time")
@property
@pulumi.getter(name="licenseType")
def license_type(self) -> Optional[str]:
"""
License Type of the VM to be used.
"""
return pulumi.get(self, "license_type")
@property
@pulumi.getter(name="masterTargetId")
def master_target_id(self) -> Optional[str]:
"""
The master target Id.
"""
return pulumi.get(self, "master_target_id")
@property
@pulumi.getter(name="multiVmGroupId")
def multi_vm_group_id(self) -> Optional[str]:
"""
The multi vm group Id.
"""
return pulumi.get(self, "multi_vm_group_id")
@property
@pulumi.getter(name="multiVmGroupName")
def multi_vm_group_name(self) -> Optional[str]:
"""
The multi vm group name.
"""
return pulumi.get(self, "multi_vm_group_name")
@property
@pulumi.getter(name="multiVmSyncStatus")
def multi_vm_sync_status(self) -> Optional[str]:
"""
A value indicating whether multi vm sync is enabled or disabled.
"""
return pulumi.get(self, "multi_vm_sync_status")
@property
@pulumi.getter(name="osDiskId")
def os_disk_id(self) -> Optional[str]:
"""
The id of the disk containing the OS.
"""
return pulumi.get(self, "os_disk_id")
@property
@pulumi.getter(name="osType")
def os_type(self) -> Optional[str]:
"""
The type of the OS on the VM.
"""
return pulumi.get(self, "os_type")
@property
@pulumi.getter(name="osVersion")
def os_version(self) -> Optional[str]:
"""
The OS Version of the protected item.
"""
return pulumi.get(self, "os_version")
@property
@pulumi.getter(name="processServerId")
def process_server_id(self) -> Optional[str]:
"""
The process server Id.
"""
return pulumi.get(self, "process_server_id")
@property
@pulumi.getter(name="protectedDisks")
def protected_disks(self) -> Optional[Sequence['outputs.InMageAzureV2ProtectedDiskDetailsResponse']]:
"""
The list of protected disks.
"""
return pulumi.get(self, "protected_disks")
@property
@pulumi.getter(name="protectionStage")
def protection_stage(self) -> Optional[str]:
"""
The protection stage.
"""
return pulumi.get(self, "protection_stage")
@property
@pulumi.getter(name="recoveryAvailabilitySetId")
def recovery_availability_set_id(self) -> Optional[str]:
"""
The recovery availability set Id.
"""
return pulumi.get(self, "recovery_availability_set_id")
@property
@pulumi.getter(name="recoveryAzureLogStorageAccountId")
def recovery_azure_log_storage_account_id(self) -> Optional[str]:
"""
The ARM id of the log storage account used for replication. This will be set to null if no log storage account was provided during enable protection.
"""
return pulumi.get(self, "recovery_azure_log_storage_account_id")
@property
@pulumi.getter(name="recoveryAzureResourceGroupId")
def recovery_azure_resource_group_id(self) -> Optional[str]:
"""
The target resource group Id.
"""
return pulumi.get(self, "recovery_azure_resource_group_id")
@property
@pulumi.getter(name="recoveryAzureStorageAccount")
def recovery_azure_storage_account(self) -> Optional[str]:
"""
The recovery Azure storage account.
"""
return pulumi.get(self, "recovery_azure_storage_account")
@property
@pulumi.getter(name="recoveryAzureVMName")
def recovery_azure_vm_name(self) -> Optional[str]:
"""
        The name given to the recovery Azure VM.
"""
return pulumi.get(self, "recovery_azure_vm_name")
@property
@pulumi.getter(name="recoveryAzureVMSize")
def recovery_azure_vm_size(self) -> Optional[str]:
"""
The Recovery Azure VM size.
"""
return pulumi.get(self, "recovery_azure_vm_size")
@property
@pulumi.getter(name="replicaId")
def replica_id(self) -> Optional[str]:
"""
The replica id of the protected item.
"""
return pulumi.get(self, "replica_id")
@property
@pulumi.getter(name="resyncProgressPercentage")
def resync_progress_percentage(self) -> Optional[int]:
"""
The resync progress percentage.
"""
return pulumi.get(self, "resync_progress_percentage")
@property
@pulumi.getter(name="rpoInSeconds")
def rpo_in_seconds(self) -> Optional[int]:
"""
The RPO in seconds.
"""
return pulumi.get(self, "rpo_in_seconds")
@property
@pulumi.getter(name="selectedRecoveryAzureNetworkId")
def selected_recovery_azure_network_id(self) -> Optional[str]:
"""
The selected recovery azure network Id.
"""
return pulumi.get(self, "selected_recovery_azure_network_id")
@property
@pulumi.getter(name="sourceVmCPUCount")
def source_vm_cpu_count(self) -> Optional[int]:
"""
The CPU count of the VM on the primary side.
"""
return pulumi.get(self, "source_vm_cpu_count")
@property
@pulumi.getter(name="sourceVmRAMSizeInMB")
def source_vm_ram_size_in_mb(self) -> Optional[int]:
"""
The RAM size of the VM on the primary side.
"""
return pulumi.get(self, "source_vm_ram_size_in_mb")
@property
@pulumi.getter(name="targetVmId")
def target_vm_id(self) -> Optional[str]:
"""
        The ARM Id of the target Azure VM. This value is null until the VM is failed over; only then is it populated with the ARM Id of the Azure VM.
"""
return pulumi.get(self, "target_vm_id")
@property
@pulumi.getter(name="uncompressedDataRateInMB")
def uncompressed_data_rate_in_mb(self) -> Optional[float]:
"""
The uncompressed data change rate in MB.
"""
return pulumi.get(self, "uncompressed_data_rate_in_mb")
@property
@pulumi.getter(name="useManagedDisks")
def use_managed_disks(self) -> Optional[str]:
"""
A value indicating whether managed disks should be used during failover.
"""
return pulumi.get(self, "use_managed_disks")
@property
@pulumi.getter(name="vCenterInfrastructureId")
def v_center_infrastructure_id(self) -> Optional[str]:
"""
The vCenter infrastructure Id.
"""
return pulumi.get(self, "v_center_infrastructure_id")
@property
@pulumi.getter(name="validationErrors")
def validation_errors(self) -> Optional[Sequence['outputs.HealthErrorResponse']]:
"""
        The validation errors of the on-premises machine. Value can be a list of validation errors.
"""
return pulumi.get(self, "validation_errors")
@property
@pulumi.getter(name="vhdName")
def vhd_name(self) -> Optional[str]:
"""
The OS disk VHD name.
"""
return pulumi.get(self, "vhd_name")
@property
@pulumi.getter(name="vmId")
def vm_id(self) -> Optional[str]:
"""
The virtual machine Id.
"""
return pulumi.get(self, "vm_id")
@property
@pulumi.getter(name="vmNics")
def vm_nics(self) -> Optional[Sequence['outputs.VMNicDetailsResponse']]:
"""
The PE Network details.
"""
return pulumi.get(self, "vm_nics")
@property
@pulumi.getter(name="vmProtectionState")
def vm_protection_state(self) -> Optional[str]:
"""
The protection state for the vm.
"""
return pulumi.get(self, "vm_protection_state")
@property
@pulumi.getter(name="vmProtectionStateDescription")
def vm_protection_state_description(self) -> Optional[str]:
"""
The protection state description for the vm.
"""
return pulumi.get(self, "vm_protection_state_description")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class InMageBasePolicyDetailsResponse(dict):
"""
Base class for the policies of providers using InMage replication.
"""
def __init__(__self__, *,
instance_type: str,
app_consistent_frequency_in_minutes: Optional[int] = None,
multi_vm_sync_status: Optional[str] = None,
recovery_point_history: Optional[int] = None,
recovery_point_threshold_in_minutes: Optional[int] = None):
"""
Base class for the policies of providers using InMage replication.
:param str instance_type: Gets the class type. Overridden in derived classes.
:param int app_consistent_frequency_in_minutes: The app consistent snapshot frequency in minutes.
:param str multi_vm_sync_status: A value indicating whether multi-VM sync has to be enabled.
        :param int recovery_point_history: The duration in minutes for which recovery points need to be stored.
:param int recovery_point_threshold_in_minutes: The recovery point threshold in minutes.
"""
pulumi.set(__self__, "instance_type", 'InMageBasePolicyDetails')
if app_consistent_frequency_in_minutes is not None:
pulumi.set(__self__, "app_consistent_frequency_in_minutes", app_consistent_frequency_in_minutes)
if multi_vm_sync_status is not None:
pulumi.set(__self__, "multi_vm_sync_status", multi_vm_sync_status)
if recovery_point_history is not None:
pulumi.set(__self__, "recovery_point_history", recovery_point_history)
if recovery_point_threshold_in_minutes is not None:
pulumi.set(__self__, "recovery_point_threshold_in_minutes", recovery_point_threshold_in_minutes)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the class type. Overridden in derived classes.
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="appConsistentFrequencyInMinutes")
def app_consistent_frequency_in_minutes(self) -> Optional[int]:
"""
The app consistent snapshot frequency in minutes.
"""
return pulumi.get(self, "app_consistent_frequency_in_minutes")
@property
@pulumi.getter(name="multiVmSyncStatus")
def multi_vm_sync_status(self) -> Optional[str]:
"""
A value indicating whether multi-VM sync has to be enabled.
"""
return pulumi.get(self, "multi_vm_sync_status")
@property
@pulumi.getter(name="recoveryPointHistory")
def recovery_point_history(self) -> Optional[int]:
"""
        The duration in minutes for which recovery points need to be stored.
"""
return pulumi.get(self, "recovery_point_history")
@property
@pulumi.getter(name="recoveryPointThresholdInMinutes")
def recovery_point_threshold_in_minutes(self) -> Optional[int]:
"""
The recovery point threshold in minutes.
"""
return pulumi.get(self, "recovery_point_threshold_in_minutes")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class InMagePolicyDetailsResponse(dict):
"""
InMage specific protection profile details.
"""
def __init__(__self__, *,
instance_type: str,
app_consistent_frequency_in_minutes: Optional[int] = None,
multi_vm_sync_status: Optional[str] = None,
recovery_point_history: Optional[int] = None,
recovery_point_threshold_in_minutes: Optional[int] = None):
"""
InMage specific protection profile details.
:param str instance_type: Gets the class type. Overridden in derived classes.
:param int app_consistent_frequency_in_minutes: The app consistent snapshot frequency in minutes.
:param str multi_vm_sync_status: A value indicating whether multi-VM sync has to be enabled.
        :param int recovery_point_history: The duration in minutes for which recovery points need to be stored.
:param int recovery_point_threshold_in_minutes: The recovery point threshold in minutes.
"""
pulumi.set(__self__, "instance_type", 'InMage')
if app_consistent_frequency_in_minutes is not None:
pulumi.set(__self__, "app_consistent_frequency_in_minutes", app_consistent_frequency_in_minutes)
if multi_vm_sync_status is not None:
pulumi.set(__self__, "multi_vm_sync_status", multi_vm_sync_status)
if recovery_point_history is not None:
pulumi.set(__self__, "recovery_point_history", recovery_point_history)
if recovery_point_threshold_in_minutes is not None:
pulumi.set(__self__, "recovery_point_threshold_in_minutes", recovery_point_threshold_in_minutes)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the class type. Overridden in derived classes.
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="appConsistentFrequencyInMinutes")
def app_consistent_frequency_in_minutes(self) -> Optional[int]:
"""
The app consistent snapshot frequency in minutes.
"""
return pulumi.get(self, "app_consistent_frequency_in_minutes")
@property
@pulumi.getter(name="multiVmSyncStatus")
def multi_vm_sync_status(self) -> Optional[str]:
"""
A value indicating whether multi-VM sync has to be enabled.
"""
return pulumi.get(self, "multi_vm_sync_status")
@property
@pulumi.getter(name="recoveryPointHistory")
def recovery_point_history(self) -> Optional[int]:
"""
        The duration in minutes for which recovery points need to be stored.
"""
return pulumi.get(self, "recovery_point_history")
@property
@pulumi.getter(name="recoveryPointThresholdInMinutes")
def recovery_point_threshold_in_minutes(self) -> Optional[int]:
"""
The recovery point threshold in minutes.
"""
return pulumi.get(self, "recovery_point_threshold_in_minutes")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class InMageProtectedDiskDetailsResponse(dict):
"""
InMage protected disk details.
"""
def __init__(__self__, *,
disk_capacity_in_bytes: Optional[int] = None,
disk_id: Optional[str] = None,
disk_name: Optional[str] = None,
disk_resized: Optional[str] = None,
file_system_capacity_in_bytes: Optional[int] = None,
health_error_code: Optional[str] = None,
last_rpo_calculated_time: Optional[str] = None,
protection_stage: Optional[str] = None,
ps_data_in_mb: Optional[float] = None,
resync_duration_in_seconds: Optional[int] = None,
resync_progress_percentage: Optional[int] = None,
resync_required: Optional[str] = None,
rpo_in_seconds: Optional[int] = None,
source_data_in_mb: Optional[float] = None,
target_data_in_mb: Optional[float] = None):
"""
InMage protected disk details.
:param int disk_capacity_in_bytes: The disk capacity in bytes.
:param str disk_id: The disk id.
:param str disk_name: The disk name.
        :param str disk_resized: A value indicating whether the disk is resized.
:param int file_system_capacity_in_bytes: The file system capacity in bytes.
:param str health_error_code: The health error code for the disk.
:param str last_rpo_calculated_time: The last RPO calculated time.
:param str protection_stage: The protection stage.
:param float ps_data_in_mb: The PS data transit in MB.
:param int resync_duration_in_seconds: The resync duration in seconds.
:param int resync_progress_percentage: The resync progress percentage.
:param str resync_required: A value indicating whether resync is required for this disk.
:param int rpo_in_seconds: The RPO in seconds.
:param float source_data_in_mb: The source data transit in MB.
:param float target_data_in_mb: The target data transit in MB.
"""
if disk_capacity_in_bytes is not None:
pulumi.set(__self__, "disk_capacity_in_bytes", disk_capacity_in_bytes)
if disk_id is not None:
pulumi.set(__self__, "disk_id", disk_id)
if disk_name is not None:
pulumi.set(__self__, "disk_name", disk_name)
if disk_resized is not None:
pulumi.set(__self__, "disk_resized", disk_resized)
if file_system_capacity_in_bytes is not None:
pulumi.set(__self__, "file_system_capacity_in_bytes", file_system_capacity_in_bytes)
if health_error_code is not None:
pulumi.set(__self__, "health_error_code", health_error_code)
if last_rpo_calculated_time is not None:
pulumi.set(__self__, "last_rpo_calculated_time", last_rpo_calculated_time)
if protection_stage is not None:
pulumi.set(__self__, "protection_stage", protection_stage)
if ps_data_in_mb is not None:
pulumi.set(__self__, "ps_data_in_mb", ps_data_in_mb)
if resync_duration_in_seconds is not None:
pulumi.set(__self__, "resync_duration_in_seconds", resync_duration_in_seconds)
if resync_progress_percentage is not None:
pulumi.set(__self__, "resync_progress_percentage", resync_progress_percentage)
if resync_required is not None:
pulumi.set(__self__, "resync_required", resync_required)
if rpo_in_seconds is not None:
pulumi.set(__self__, "rpo_in_seconds", rpo_in_seconds)
if source_data_in_mb is not None:
pulumi.set(__self__, "source_data_in_mb", source_data_in_mb)
if target_data_in_mb is not None:
pulumi.set(__self__, "target_data_in_mb", target_data_in_mb)
@property
@pulumi.getter(name="diskCapacityInBytes")
def disk_capacity_in_bytes(self) -> Optional[int]:
"""
The disk capacity in bytes.
"""
return pulumi.get(self, "disk_capacity_in_bytes")
@property
@pulumi.getter(name="diskId")
def disk_id(self) -> Optional[str]:
"""
The disk id.
"""
return pulumi.get(self, "disk_id")
@property
@pulumi.getter(name="diskName")
def disk_name(self) -> Optional[str]:
"""
The disk name.
"""
return pulumi.get(self, "disk_name")
@property
@pulumi.getter(name="diskResized")
def disk_resized(self) -> Optional[str]:
"""
A value indicating whether disk is resized.
"""
return pulumi.get(self, "disk_resized")
@property
@pulumi.getter(name="fileSystemCapacityInBytes")
def file_system_capacity_in_bytes(self) -> Optional[int]:
"""
The file system capacity in bytes.
"""
return pulumi.get(self, "file_system_capacity_in_bytes")
@property
@pulumi.getter(name="healthErrorCode")
def health_error_code(self) -> Optional[str]:
"""
The health error code for the disk.
"""
return pulumi.get(self, "health_error_code")
@property
@pulumi.getter(name="lastRpoCalculatedTime")
def last_rpo_calculated_time(self) -> Optional[str]:
"""
The last RPO calculated time.
"""
return pulumi.get(self, "last_rpo_calculated_time")
@property
@pulumi.getter(name="protectionStage")
def protection_stage(self) -> Optional[str]:
"""
The protection stage.
"""
return pulumi.get(self, "protection_stage")
@property
@pulumi.getter(name="psDataInMB")
def ps_data_in_mb(self) -> Optional[float]:
"""
The PS data transit in MB.
"""
return pulumi.get(self, "ps_data_in_mb")
@property
@pulumi.getter(name="resyncDurationInSeconds")
def resync_duration_in_seconds(self) -> Optional[int]:
"""
The resync duration in seconds.
"""
return pulumi.get(self, "resync_duration_in_seconds")
@property
@pulumi.getter(name="resyncProgressPercentage")
def resync_progress_percentage(self) -> Optional[int]:
"""
The resync progress percentage.
"""
return pulumi.get(self, "resync_progress_percentage")
@property
@pulumi.getter(name="resyncRequired")
def resync_required(self) -> Optional[str]:
"""
A value indicating whether resync is required for this disk.
"""
return pulumi.get(self, "resync_required")
@property
@pulumi.getter(name="rpoInSeconds")
def rpo_in_seconds(self) -> Optional[int]:
"""
The RPO in seconds.
"""
return pulumi.get(self, "rpo_in_seconds")
@property
@pulumi.getter(name="sourceDataInMB")
def source_data_in_mb(self) -> Optional[float]:
"""
The source data transit in MB.
"""
return pulumi.get(self, "source_data_in_mb")
@property
@pulumi.getter(name="targetDataInMB")
def target_data_in_mb(self) -> Optional[float]:
"""
The target data transit in MB.
"""
return pulumi.get(self, "target_data_in_mb")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class InMageReplicationDetailsResponse(dict):
"""
InMage provider specific settings.
"""
def __init__(__self__, *,
instance_type: str,
active_site_type: Optional[str] = None,
agent_details: Optional['outputs.InMageAgentDetailsResponse'] = None,
azure_storage_account_id: Optional[str] = None,
compressed_data_rate_in_mb: Optional[float] = None,
consistency_points: Optional[Mapping[str, str]] = None,
datastores: Optional[Sequence[str]] = None,
discovery_type: Optional[str] = None,
disk_resized: Optional[str] = None,
infrastructure_vm_id: Optional[str] = None,
ip_address: Optional[str] = None,
last_heartbeat: Optional[str] = None,
last_rpo_calculated_time: Optional[str] = None,
last_update_received_time: Optional[str] = None,
master_target_id: Optional[str] = None,
multi_vm_group_id: Optional[str] = None,
multi_vm_group_name: Optional[str] = None,
multi_vm_sync_status: Optional[str] = None,
os_details: Optional['outputs.OSDiskDetailsResponse'] = None,
os_version: Optional[str] = None,
process_server_id: Optional[str] = None,
protected_disks: Optional[Sequence['outputs.InMageProtectedDiskDetailsResponse']] = None,
protection_stage: Optional[str] = None,
reboot_after_update_status: Optional[str] = None,
replica_id: Optional[str] = None,
resync_details: Optional['outputs.InitialReplicationDetailsResponse'] = None,
retention_window_end: Optional[str] = None,
retention_window_start: Optional[str] = None,
rpo_in_seconds: Optional[int] = None,
source_vm_cpu_count: Optional[int] = None,
source_vm_ram_size_in_mb: Optional[int] = None,
uncompressed_data_rate_in_mb: Optional[float] = None,
v_center_infrastructure_id: Optional[str] = None,
validation_errors: Optional[Sequence['outputs.HealthErrorResponse']] = None,
vm_id: Optional[str] = None,
vm_nics: Optional[Sequence['outputs.VMNicDetailsResponse']] = None,
vm_protection_state: Optional[str] = None,
vm_protection_state_description: Optional[str] = None):
"""
InMage provider specific settings.
:param str instance_type: Gets the Instance type.
:param str active_site_type: The active location of the VM. If the VM is being protected from Azure, this field will take values from { Azure, OnPrem }. If the VM is being protected between two data centers, this field will always be OnPrem.
:param 'InMageAgentDetailsResponseArgs' agent_details: The agent details.
:param str azure_storage_account_id: A value indicating the underlying Azure storage account. If the VM is not running in Azure, this value shall be set to null.
:param float compressed_data_rate_in_mb: The compressed data change rate in MB.
:param Mapping[str, str] consistency_points: The collection of Consistency points.
:param Sequence[str] datastores: The data stores of the on-premises machine. The value can be a list of strings that contain data store names.
:param str discovery_type: A value indicating the discovery type of the machine.
:param str disk_resized: A value indicating whether any disk is resized for this VM.
:param str infrastructure_vm_id: The infrastructure VM Id.
:param str ip_address: The source IP address.
:param str last_heartbeat: The last heartbeat received from the source server.
:param str last_rpo_calculated_time: The last RPO calculated time.
:param str last_update_received_time: The last update time received from on-prem components.
:param str master_target_id: The master target Id.
:param str multi_vm_group_id: The multi vm group Id, if any.
:param str multi_vm_group_name: The multi vm group name, if any.
:param str multi_vm_sync_status: A value indicating whether the multi vm sync is enabled or disabled.
:param 'OSDiskDetailsResponseArgs' os_details: The OS details.
:param str os_version: The OS Version of the protected item.
:param str process_server_id: The process server Id.
:param Sequence['InMageProtectedDiskDetailsResponseArgs'] protected_disks: The list of protected disks.
:param str protection_stage: The protection stage.
:param str reboot_after_update_status: A value indicating whether the source server requires a restart after update.
:param str replica_id: The replica id of the protected item.
:param 'InitialReplicationDetailsResponseArgs' resync_details: The resync details of the machine.
:param str retention_window_end: The retention window end time.
:param str retention_window_start: The retention window start time.
:param int rpo_in_seconds: The RPO in seconds.
:param int source_vm_cpu_count: The CPU count of the VM on the primary side.
:param int source_vm_ram_size_in_mb: The RAM size of the VM on the primary side.
:param float uncompressed_data_rate_in_mb: The uncompressed data change rate in MB.
:param str v_center_infrastructure_id: The vCenter infrastructure Id.
:param Sequence['HealthErrorResponseArgs'] validation_errors: The validation errors of the on-premises machine. The value can be a list of validation errors.
:param str vm_id: The virtual machine Id.
:param Sequence['VMNicDetailsResponseArgs'] vm_nics: The PE Network details.
:param str vm_protection_state: The protection state for the vm.
:param str vm_protection_state_description: The protection state description for the vm.
"""
pulumi.set(__self__, "instance_type", 'InMage')
if active_site_type is not None:
pulumi.set(__self__, "active_site_type", active_site_type)
if agent_details is not None:
pulumi.set(__self__, "agent_details", agent_details)
if azure_storage_account_id is not None:
pulumi.set(__self__, "azure_storage_account_id", azure_storage_account_id)
if compressed_data_rate_in_mb is not None:
pulumi.set(__self__, "compressed_data_rate_in_mb", compressed_data_rate_in_mb)
if consistency_points is not None:
pulumi.set(__self__, "consistency_points", consistency_points)
if datastores is not None:
pulumi.set(__self__, "datastores", datastores)
if discovery_type is not None:
pulumi.set(__self__, "discovery_type", discovery_type)
if disk_resized is not None:
pulumi.set(__self__, "disk_resized", disk_resized)
if infrastructure_vm_id is not None:
pulumi.set(__self__, "infrastructure_vm_id", infrastructure_vm_id)
if ip_address is not None:
pulumi.set(__self__, "ip_address", ip_address)
if last_heartbeat is not None:
pulumi.set(__self__, "last_heartbeat", last_heartbeat)
if last_rpo_calculated_time is not None:
pulumi.set(__self__, "last_rpo_calculated_time", last_rpo_calculated_time)
if last_update_received_time is not None:
pulumi.set(__self__, "last_update_received_time", last_update_received_time)
if master_target_id is not None:
pulumi.set(__self__, "master_target_id", master_target_id)
if multi_vm_group_id is not None:
pulumi.set(__self__, "multi_vm_group_id", multi_vm_group_id)
if multi_vm_group_name is not None:
pulumi.set(__self__, "multi_vm_group_name", multi_vm_group_name)
if multi_vm_sync_status is not None:
pulumi.set(__self__, "multi_vm_sync_status", multi_vm_sync_status)
if os_details is not None:
pulumi.set(__self__, "os_details", os_details)
if os_version is not None:
pulumi.set(__self__, "os_version", os_version)
if process_server_id is not None:
pulumi.set(__self__, "process_server_id", process_server_id)
if protected_disks is not None:
pulumi.set(__self__, "protected_disks", protected_disks)
if protection_stage is not None:
pulumi.set(__self__, "protection_stage", protection_stage)
if reboot_after_update_status is not None:
pulumi.set(__self__, "reboot_after_update_status", reboot_after_update_status)
if replica_id is not None:
pulumi.set(__self__, "replica_id", replica_id)
if resync_details is not None:
pulumi.set(__self__, "resync_details", resync_details)
if retention_window_end is not None:
pulumi.set(__self__, "retention_window_end", retention_window_end)
if retention_window_start is not None:
pulumi.set(__self__, "retention_window_start", retention_window_start)
if rpo_in_seconds is not None:
pulumi.set(__self__, "rpo_in_seconds", rpo_in_seconds)
if source_vm_cpu_count is not None:
pulumi.set(__self__, "source_vm_cpu_count", source_vm_cpu_count)
if source_vm_ram_size_in_mb is not None:
pulumi.set(__self__, "source_vm_ram_size_in_mb", source_vm_ram_size_in_mb)
if uncompressed_data_rate_in_mb is not None:
pulumi.set(__self__, "uncompressed_data_rate_in_mb", uncompressed_data_rate_in_mb)
if v_center_infrastructure_id is not None:
pulumi.set(__self__, "v_center_infrastructure_id", v_center_infrastructure_id)
if validation_errors is not None:
pulumi.set(__self__, "validation_errors", validation_errors)
if vm_id is not None:
pulumi.set(__self__, "vm_id", vm_id)
if vm_nics is not None:
pulumi.set(__self__, "vm_nics", vm_nics)
if vm_protection_state is not None:
pulumi.set(__self__, "vm_protection_state", vm_protection_state)
if vm_protection_state_description is not None:
pulumi.set(__self__, "vm_protection_state_description", vm_protection_state_description)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the Instance type.
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="activeSiteType")
def active_site_type(self) -> Optional[str]:
"""
The active location of the VM. If the VM is being protected from Azure, this field will take values from { Azure, OnPrem }. If the VM is being protected between two data centers, this field will always be OnPrem.
"""
return pulumi.get(self, "active_site_type")
@property
@pulumi.getter(name="agentDetails")
def agent_details(self) -> Optional['outputs.InMageAgentDetailsResponse']:
"""
The agent details.
"""
return pulumi.get(self, "agent_details")
@property
@pulumi.getter(name="azureStorageAccountId")
def azure_storage_account_id(self) -> Optional[str]:
"""
A value indicating the underlying Azure storage account. If the VM is not running in Azure, this value shall be set to null.
"""
return pulumi.get(self, "azure_storage_account_id")
@property
@pulumi.getter(name="compressedDataRateInMB")
def compressed_data_rate_in_mb(self) -> Optional[float]:
"""
The compressed data change rate in MB.
"""
return pulumi.get(self, "compressed_data_rate_in_mb")
@property
@pulumi.getter(name="consistencyPoints")
def consistency_points(self) -> Optional[Mapping[str, str]]:
"""
The collection of Consistency points.
"""
return pulumi.get(self, "consistency_points")
@property
@pulumi.getter
def datastores(self) -> Optional[Sequence[str]]:
"""
The data stores of the on-premises machine. The value can be a list of strings that contain data store names.
"""
return pulumi.get(self, "datastores")
@property
@pulumi.getter(name="discoveryType")
def discovery_type(self) -> Optional[str]:
"""
A value indicating the discovery type of the machine.
"""
return pulumi.get(self, "discovery_type")
@property
@pulumi.getter(name="diskResized")
def disk_resized(self) -> Optional[str]:
"""
A value indicating whether any disk is resized for this VM.
"""
return pulumi.get(self, "disk_resized")
@property
@pulumi.getter(name="infrastructureVmId")
def infrastructure_vm_id(self) -> Optional[str]:
"""
The infrastructure VM Id.
"""
return pulumi.get(self, "infrastructure_vm_id")
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> Optional[str]:
"""
The source IP address.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter(name="lastHeartbeat")
def last_heartbeat(self) -> Optional[str]:
"""
The last heartbeat received from the source server.
"""
return pulumi.get(self, "last_heartbeat")
@property
@pulumi.getter(name="lastRpoCalculatedTime")
def last_rpo_calculated_time(self) -> Optional[str]:
"""
The last RPO calculated time.
"""
return pulumi.get(self, "last_rpo_calculated_time")
@property
@pulumi.getter(name="lastUpdateReceivedTime")
def last_update_received_time(self) -> Optional[str]:
"""
The last update time received from on-prem components.
"""
return pulumi.get(self, "last_update_received_time")
@property
@pulumi.getter(name="masterTargetId")
def master_target_id(self) -> Optional[str]:
"""
The master target Id.
"""
return pulumi.get(self, "master_target_id")
@property
@pulumi.getter(name="multiVmGroupId")
def multi_vm_group_id(self) -> Optional[str]:
"""
The multi vm group Id, if any.
"""
return pulumi.get(self, "multi_vm_group_id")
@property
@pulumi.getter(name="multiVmGroupName")
def multi_vm_group_name(self) -> Optional[str]:
"""
The multi vm group name, if any.
"""
return pulumi.get(self, "multi_vm_group_name")
@property
@pulumi.getter(name="multiVmSyncStatus")
def multi_vm_sync_status(self) -> Optional[str]:
"""
A value indicating whether the multi vm sync is enabled or disabled.
"""
return pulumi.get(self, "multi_vm_sync_status")
@property
@pulumi.getter(name="osDetails")
def os_details(self) -> Optional['outputs.OSDiskDetailsResponse']:
"""
The OS details.
"""
return pulumi.get(self, "os_details")
@property
@pulumi.getter(name="osVersion")
def os_version(self) -> Optional[str]:
"""
The OS Version of the protected item.
"""
return pulumi.get(self, "os_version")
@property
@pulumi.getter(name="processServerId")
def process_server_id(self) -> Optional[str]:
"""
The process server Id.
"""
return pulumi.get(self, "process_server_id")
@property
@pulumi.getter(name="protectedDisks")
def protected_disks(self) -> Optional[Sequence['outputs.InMageProtectedDiskDetailsResponse']]:
"""
The list of protected disks.
"""
return pulumi.get(self, "protected_disks")
@property
@pulumi.getter(name="protectionStage")
def protection_stage(self) -> Optional[str]:
"""
The protection stage.
"""
return pulumi.get(self, "protection_stage")
@property
@pulumi.getter(name="rebootAfterUpdateStatus")
def reboot_after_update_status(self) -> Optional[str]:
"""
A value indicating whether the source server requires a restart after update.
"""
return pulumi.get(self, "reboot_after_update_status")
@property
@pulumi.getter(name="replicaId")
def replica_id(self) -> Optional[str]:
"""
The replica id of the protected item.
"""
return pulumi.get(self, "replica_id")
@property
@pulumi.getter(name="resyncDetails")
def resync_details(self) -> Optional['outputs.InitialReplicationDetailsResponse']:
"""
The resync details of the machine.
"""
return pulumi.get(self, "resync_details")
@property
@pulumi.getter(name="retentionWindowEnd")
def retention_window_end(self) -> Optional[str]:
"""
The retention window end time.
"""
return pulumi.get(self, "retention_window_end")
@property
@pulumi.getter(name="retentionWindowStart")
def retention_window_start(self) -> Optional[str]:
"""
The retention window start time.
"""
return pulumi.get(self, "retention_window_start")
@property
@pulumi.getter(name="rpoInSeconds")
def rpo_in_seconds(self) -> Optional[int]:
"""
The RPO in seconds.
"""
return pulumi.get(self, "rpo_in_seconds")
@property
@pulumi.getter(name="sourceVmCPUCount")
def source_vm_cpu_count(self) -> Optional[int]:
"""
The CPU count of the VM on the primary side.
"""
return pulumi.get(self, "source_vm_cpu_count")
@property
@pulumi.getter(name="sourceVmRAMSizeInMB")
def source_vm_ram_size_in_mb(self) -> Optional[int]:
"""
The RAM size of the VM on the primary side.
"""
return pulumi.get(self, "source_vm_ram_size_in_mb")
@property
@pulumi.getter(name="uncompressedDataRateInMB")
def uncompressed_data_rate_in_mb(self) -> Optional[float]:
"""
The uncompressed data change rate in MB.
"""
return pulumi.get(self, "uncompressed_data_rate_in_mb")
@property
@pulumi.getter(name="vCenterInfrastructureId")
def v_center_infrastructure_id(self) -> Optional[str]:
"""
The vCenter infrastructure Id.
"""
return pulumi.get(self, "v_center_infrastructure_id")
@property
@pulumi.getter(name="validationErrors")
def validation_errors(self) -> Optional[Sequence['outputs.HealthErrorResponse']]:
"""
The validation errors of the on-premises machine. The value can be a list of validation errors.
"""
return pulumi.get(self, "validation_errors")
@property
@pulumi.getter(name="vmId")
def vm_id(self) -> Optional[str]:
"""
The virtual machine Id.
"""
return pulumi.get(self, "vm_id")
@property
@pulumi.getter(name="vmNics")
def vm_nics(self) -> Optional[Sequence['outputs.VMNicDetailsResponse']]:
"""
The PE Network details.
"""
return pulumi.get(self, "vm_nics")
@property
@pulumi.getter(name="vmProtectionState")
def vm_protection_state(self) -> Optional[str]:
"""
The protection state for the vm.
"""
return pulumi.get(self, "vm_protection_state")
@property
@pulumi.getter(name="vmProtectionStateDescription")
def vm_protection_state_description(self) -> Optional[str]:
"""
The protection state description for the vm.
"""
return pulumi.get(self, "vm_protection_state_description")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
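The RPO fields above (`rpo_in_seconds`, `last_rpo_calculated_time`) are plain scalars that a caller will often fold into a status line. A minimal sketch of that, assuming a plain dict stand-in for the response so it runs outside a live Pulumi program; the helper name is hypothetical and not part of this SDK:

```python
# Hypothetical helper, not part of this module's API: summarizes the
# RPO-related fields exposed by InMageReplicationDetailsResponse, taking a
# plain dict so the sketch needs no live Pulumi deployment.

def summarize_rpo(details: dict) -> str:
    """Render rpo_in_seconds and last_rpo_calculated_time as one line."""
    rpo = details.get("rpo_in_seconds")
    if rpo is None:
        return "RPO: unknown"
    minutes, seconds = divmod(rpo, 60)
    calculated = details.get("last_rpo_calculated_time", "n/a")
    return f"RPO: {minutes}m{seconds}s (calculated at {calculated})"
```

In a real program the dict keys would instead be read off the response object's properties shown above.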
@pulumi.output_type
class InitialReplicationDetailsResponse(dict):
"""
Initial replication details.
"""
def __init__(__self__, *,
initial_replication_progress_percentage: Optional[str] = None,
initial_replication_type: Optional[str] = None):
"""
Initial replication details.
:param str initial_replication_progress_percentage: The initial replication progress percentage.
:param str initial_replication_type: Initial replication type.
"""
if initial_replication_progress_percentage is not None:
pulumi.set(__self__, "initial_replication_progress_percentage", initial_replication_progress_percentage)
if initial_replication_type is not None:
pulumi.set(__self__, "initial_replication_type", initial_replication_type)
@property
@pulumi.getter(name="initialReplicationProgressPercentage")
def initial_replication_progress_percentage(self) -> Optional[str]:
"""
The initial replication progress percentage.
"""
return pulumi.get(self, "initial_replication_progress_percentage")
@property
@pulumi.getter(name="initialReplicationType")
def initial_replication_type(self) -> Optional[str]:
"""
Initial replication type.
"""
return pulumi.get(self, "initial_replication_type")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
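Note that `initial_replication_progress_percentage` is surfaced as a string, not a number, so a caller that wants to compare or plot it has to parse defensively. A minimal sketch of such parsing; the helper name and clamping behavior are assumptions for illustration, not part of this SDK:

```python
# Hypothetical helper, not part of this module: parse the string-typed
# progress percentage into a float, tolerating missing or malformed values.
from typing import Optional


def parse_progress(progress: Optional[str]) -> Optional[float]:
    """Return the progress as a float clamped to [0, 100], or None."""
    if progress is None:
        return None
    try:
        value = float(progress)
    except ValueError:
        return None
    return min(max(value, 0.0), 100.0)
```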
@pulumi.output_type
class InputEndpointResponse(dict):
"""
Azure VM input endpoint details.
"""
def __init__(__self__, *,
endpoint_name: Optional[str] = None,
private_port: Optional[int] = None,
protocol: Optional[str] = None,
public_port: Optional[int] = None):
"""
Azure VM input endpoint details.
:param str endpoint_name: The input endpoint name.
:param int private_port: The input endpoint private port.
:param str protocol: The input endpoint protocol.
:param int public_port: The input endpoint public port.
"""
if endpoint_name is not None:
pulumi.set(__self__, "endpoint_name", endpoint_name)
if private_port is not None:
pulumi.set(__self__, "private_port", private_port)
if protocol is not None:
pulumi.set(__self__, "protocol", protocol)
if public_port is not None:
pulumi.set(__self__, "public_port", public_port)
@property
@pulumi.getter(name="endpointName")
def endpoint_name(self) -> Optional[str]:
"""
The input endpoint name.
"""
return pulumi.get(self, "endpoint_name")
@property
@pulumi.getter(name="privatePort")
def private_port(self) -> Optional[int]:
"""
The input endpoint private port.
"""
return pulumi.get(self, "private_port")
@property
@pulumi.getter
def protocol(self) -> Optional[str]:
"""
The input endpoint protocol.
"""
return pulumi.get(self, "protocol")
@property
@pulumi.getter(name="publicPort")
def public_port(self) -> Optional[int]:
"""
The input endpoint public port.
"""
return pulumi.get(self, "public_port")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
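All four fields of `InputEndpointResponse` are optional, so display code needs fallbacks for absent values. A minimal sketch of composing them into a familiar "name: PROTO public->private" line; the function and its defaults are hypothetical, not part of this SDK:

```python
# Hypothetical formatting helper, not part of this module: renders the four
# optional InputEndpointResponse fields with placeholders for missing ones.

def format_endpoint(endpoint_name=None, protocol=None,
                    public_port=None, private_port=None) -> str:
    name = endpoint_name if endpoint_name is not None else "<unnamed>"
    proto = (protocol or "tcp").upper()
    # Explicit None checks so a legitimate port 0 is not treated as missing.
    pub = public_port if public_port is not None else "?"
    priv = private_port if private_port is not None else "?"
    return f"{name}: {proto} {pub}->{priv}"
```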
@pulumi.output_type
class MasterTargetServerResponse(dict):
"""
Details of a Master Target Server.
"""
def __init__(__self__, *,
agent_version: Optional[str] = None,
data_stores: Optional[Sequence['outputs.DataStoreResponse']] = None,
disk_count: Optional[int] = None,
id: Optional[str] = None,
ip_address: Optional[str] = None,
last_heartbeat: Optional[str] = None,
name: Optional[str] = None,
os_type: Optional[str] = None,
os_version: Optional[str] = None,
retention_volumes: Optional[Sequence['outputs.RetentionVolumeResponse']] = None,
validation_errors: Optional[Sequence['outputs.HealthErrorResponse']] = None,
version_status: Optional[str] = None):
"""
Details of a Master Target Server.
:param str agent_version: The version of the scout component on the server.
:param Sequence['DataStoreResponseArgs'] data_stores: The list of data stores in the fabric.
:param int disk_count: Disk count of the master target.
:param str id: The server Id.
:param str ip_address: The IP address of the server.
:param str last_heartbeat: The last heartbeat received from the server.
:param str name: The server name.
:param str os_type: The OS type of the server.
:param str os_version: OS Version of the master target.
:param Sequence['RetentionVolumeResponseArgs'] retention_volumes: The retention volumes of Master target Server.
:param Sequence['HealthErrorResponseArgs'] validation_errors: Validation errors.
:param str version_status: The version status.
"""
if agent_version is not None:
pulumi.set(__self__, "agent_version", agent_version)
if data_stores is not None:
pulumi.set(__self__, "data_stores", data_stores)
if disk_count is not None:
pulumi.set(__self__, "disk_count", disk_count)
if id is not None:
pulumi.set(__self__, "id", id)
if ip_address is not None:
pulumi.set(__self__, "ip_address", ip_address)
if last_heartbeat is not None:
pulumi.set(__self__, "last_heartbeat", last_heartbeat)
if name is not None:
pulumi.set(__self__, "name", name)
if os_type is not None:
pulumi.set(__self__, "os_type", os_type)
if os_version is not None:
pulumi.set(__self__, "os_version", os_version)
if retention_volumes is not None:
pulumi.set(__self__, "retention_volumes", retention_volumes)
if validation_errors is not None:
pulumi.set(__self__, "validation_errors", validation_errors)
if version_status is not None:
pulumi.set(__self__, "version_status", version_status)
@property
@pulumi.getter(name="agentVersion")
def agent_version(self) -> Optional[str]:
"""
The version of the scout component on the server.
"""
return pulumi.get(self, "agent_version")
@property
@pulumi.getter(name="dataStores")
def data_stores(self) -> Optional[Sequence['outputs.DataStoreResponse']]:
"""
The list of data stores in the fabric.
"""
return pulumi.get(self, "data_stores")
@property
@pulumi.getter(name="diskCount")
def disk_count(self) -> Optional[int]:
"""
Disk count of the master target.
"""
return pulumi.get(self, "disk_count")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The server Id.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> Optional[str]:
"""
The IP address of the server.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter(name="lastHeartbeat")
def last_heartbeat(self) -> Optional[str]:
"""
The last heartbeat received from the server.
"""
return pulumi.get(self, "last_heartbeat")
@property
@pulumi.getter
def name(self) -> Optional[str]:
"""
The server name.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="osType")
def os_type(self) -> Optional[str]:
"""
The OS type of the server.
"""
return pulumi.get(self, "os_type")
@property
@pulumi.getter(name="osVersion")
def os_version(self) -> Optional[str]:
"""
OS Version of the master target.
"""
return pulumi.get(self, "os_version")
@property
@pulumi.getter(name="retentionVolumes")
def retention_volumes(self) -> Optional[Sequence['outputs.RetentionVolumeResponse']]:
"""
The retention volumes of Master target Server.
"""
return pulumi.get(self, "retention_volumes")
@property
@pulumi.getter(name="validationErrors")
def validation_errors(self) -> Optional[Sequence['outputs.HealthErrorResponse']]:
"""
Validation errors.
"""
return pulumi.get(self, "validation_errors")
@property
@pulumi.getter(name="versionStatus")
def version_status(self) -> Optional[str]:
"""
The version status.
"""
return pulumi.get(self, "version_status")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class MobilityServiceUpdateResponse(dict):
"""
The Mobility Service update details.
"""
def __init__(__self__, *,
os_type: Optional[str] = None,
reboot_status: Optional[str] = None,
version: Optional[str] = None):
"""
The Mobility Service update details.
:param str os_type: The OS type.
:param str reboot_status: The reboot status of the update - whether it is required or not.
:param str version: The version of the latest update.
"""
if os_type is not None:
pulumi.set(__self__, "os_type", os_type)
if reboot_status is not None:
pulumi.set(__self__, "reboot_status", reboot_status)
if version is not None:
pulumi.set(__self__, "version", version)
@property
@pulumi.getter(name="osType")
def os_type(self) -> Optional[str]:
"""
The OS type.
"""
return pulumi.get(self, "os_type")
@property
@pulumi.getter(name="rebootStatus")
def reboot_status(self) -> Optional[str]:
"""
The reboot status of the update - whether it is required or not.
"""
return pulumi.get(self, "reboot_status")
@property
@pulumi.getter
def version(self) -> Optional[str]:
"""
The version of the latest update.
"""
return pulumi.get(self, "version")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
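The `reboot_status` field is a free-form string, so a caller deciding whether to schedule a maintenance window would normalize it before comparing. A minimal sketch under the assumption that "required" is the value indicating a restart is needed (the exact vocabulary is service-defined):

```python
# Hypothetical helper, not part of this module: interpret the free-form
# reboot_status string from MobilityServiceUpdateResponse.

def reboot_required(reboot_status) -> bool:
    """True when the update's reboot status indicates a restart is needed."""
    # "required" is assumed here for illustration; the service defines the
    # actual status vocabulary.
    return (reboot_status or "").strip().lower() == "required"
```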
@pulumi.output_type
class NetworkMappingPropertiesResponse(dict):
"""
Network Mapping Properties.
"""
def __init__(__self__, *,
fabric_specific_settings: Optional[Any] = None,
primary_fabric_friendly_name: Optional[str] = None,
primary_network_friendly_name: Optional[str] = None,
primary_network_id: Optional[str] = None,
recovery_fabric_arm_id: Optional[str] = None,
recovery_fabric_friendly_name: Optional[str] = None,
recovery_network_friendly_name: Optional[str] = None,
recovery_network_id: Optional[str] = None,
state: Optional[str] = None):
"""
Network Mapping Properties.
:param Union['AzureToAzureNetworkMappingSettingsResponseArgs', 'VmmToAzureNetworkMappingSettingsResponseArgs', 'VmmToVmmNetworkMappingSettingsResponseArgs'] fabric_specific_settings: The fabric specific settings.
:param str primary_fabric_friendly_name: The primary fabric friendly name.
:param str primary_network_friendly_name: The primary network friendly name.
:param str primary_network_id: The primary network id for network mapping.
:param str recovery_fabric_arm_id: The recovery fabric ARM id.
:param str recovery_fabric_friendly_name: The recovery fabric friendly name.
:param str recovery_network_friendly_name: The recovery network friendly name.
:param str recovery_network_id: The recovery network id for network mapping.
:param str state: The pairing state for network mapping.
"""
if fabric_specific_settings is not None:
pulumi.set(__self__, "fabric_specific_settings", fabric_specific_settings)
if primary_fabric_friendly_name is not None:
pulumi.set(__self__, "primary_fabric_friendly_name", primary_fabric_friendly_name)
if primary_network_friendly_name is not None:
pulumi.set(__self__, "primary_network_friendly_name", primary_network_friendly_name)
if primary_network_id is not None:
pulumi.set(__self__, "primary_network_id", primary_network_id)
if recovery_fabric_arm_id is not None:
pulumi.set(__self__, "recovery_fabric_arm_id", recovery_fabric_arm_id)
if recovery_fabric_friendly_name is not None:
pulumi.set(__self__, "recovery_fabric_friendly_name", recovery_fabric_friendly_name)
if recovery_network_friendly_name is not None:
pulumi.set(__self__, "recovery_network_friendly_name", recovery_network_friendly_name)
if recovery_network_id is not None:
pulumi.set(__self__, "recovery_network_id", recovery_network_id)
if state is not None:
pulumi.set(__self__, "state", state)
@property
@pulumi.getter(name="fabricSpecificSettings")
def fabric_specific_settings(self) -> Optional[Any]:
"""
The fabric specific settings.
"""
return pulumi.get(self, "fabric_specific_settings")
@property
@pulumi.getter(name="primaryFabricFriendlyName")
def primary_fabric_friendly_name(self) -> Optional[str]:
"""
The primary fabric friendly name.
"""
return pulumi.get(self, "primary_fabric_friendly_name")
@property
@pulumi.getter(name="primaryNetworkFriendlyName")
def primary_network_friendly_name(self) -> Optional[str]:
"""
The primary network friendly name.
"""
return pulumi.get(self, "primary_network_friendly_name")
@property
@pulumi.getter(name="primaryNetworkId")
def primary_network_id(self) -> Optional[str]:
"""
The primary network id for network mapping.
"""
return pulumi.get(self, "primary_network_id")
@property
@pulumi.getter(name="recoveryFabricArmId")
def recovery_fabric_arm_id(self) -> Optional[str]:
"""
The recovery fabric ARM id.
"""
return pulumi.get(self, "recovery_fabric_arm_id")
@property
@pulumi.getter(name="recoveryFabricFriendlyName")
def recovery_fabric_friendly_name(self) -> Optional[str]:
"""
The recovery fabric friendly name.
"""
return pulumi.get(self, "recovery_fabric_friendly_name")
@property
@pulumi.getter(name="recoveryNetworkFriendlyName")
def recovery_network_friendly_name(self) -> Optional[str]:
"""
The recovery network friendly name.
"""
return pulumi.get(self, "recovery_network_friendly_name")
@property
@pulumi.getter(name="recoveryNetworkId")
def recovery_network_id(self) -> Optional[str]:
"""
The recovery network id for network mapping.
"""
return pulumi.get(self, "recovery_network_id")
@property
@pulumi.getter
def state(self) -> Optional[str]:
"""
The pairing state for network mapping.
"""
return pulumi.get(self, "state")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class OSDetailsResponse(dict):
"""
OS details.
"""
def __init__(__self__, *,
o_s_major_version: Optional[str] = None,
o_s_minor_version: Optional[str] = None,
o_s_version: Optional[str] = None,
os_edition: Optional[str] = None,
os_type: Optional[str] = None,
product_type: Optional[str] = None):
"""
OS details.
:param str o_s_major_version: The OS Major Version.
:param str o_s_minor_version: The OS Minor Version.
:param str o_s_version: The OS Version.
:param str os_edition: The OS edition.
:param str os_type: The OS type.
:param str product_type: Product type.
"""
if o_s_major_version is not None:
pulumi.set(__self__, "o_s_major_version", o_s_major_version)
if o_s_minor_version is not None:
pulumi.set(__self__, "o_s_minor_version", o_s_minor_version)
if o_s_version is not None:
pulumi.set(__self__, "o_s_version", o_s_version)
if os_edition is not None:
pulumi.set(__self__, "os_edition", os_edition)
if os_type is not None:
pulumi.set(__self__, "os_type", os_type)
if product_type is not None:
pulumi.set(__self__, "product_type", product_type)
@property
@pulumi.getter(name="oSMajorVersion")
def o_s_major_version(self) -> Optional[str]:
"""
The OS Major Version.
"""
return pulumi.get(self, "o_s_major_version")
@property
@pulumi.getter(name="oSMinorVersion")
def o_s_minor_version(self) -> Optional[str]:
"""
The OS Minor Version.
"""
return pulumi.get(self, "o_s_minor_version")
@property
@pulumi.getter(name="oSVersion")
def o_s_version(self) -> Optional[str]:
"""
The OS Version.
"""
return pulumi.get(self, "o_s_version")
@property
@pulumi.getter(name="osEdition")
def os_edition(self) -> Optional[str]:
"""
The OS edition.
"""
return pulumi.get(self, "os_edition")
@property
@pulumi.getter(name="osType")
def os_type(self) -> Optional[str]:
"""
The OS type.
"""
return pulumi.get(self, "os_type")
@property
@pulumi.getter(name="productType")
def product_type(self) -> Optional[str]:
"""
Product type.
"""
return pulumi.get(self, "product_type")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class OSDiskDetailsResponse(dict):
"""
Details of the OS Disk.
"""
def __init__(__self__, *,
os_type: Optional[str] = None,
os_vhd_id: Optional[str] = None,
vhd_name: Optional[str] = None):
"""
Details of the OS Disk.
:param str os_type: The type of the OS on the VM.
:param str os_vhd_id: The id of the disk containing the OS.
:param str vhd_name: The OS disk VHD name.
"""
if os_type is not None:
pulumi.set(__self__, "os_type", os_type)
if os_vhd_id is not None:
pulumi.set(__self__, "os_vhd_id", os_vhd_id)
if vhd_name is not None:
pulumi.set(__self__, "vhd_name", vhd_name)
@property
@pulumi.getter(name="osType")
def os_type(self) -> Optional[str]:
"""
The type of the OS on the VM.
"""
return pulumi.get(self, "os_type")
@property
@pulumi.getter(name="osVhdId")
def os_vhd_id(self) -> Optional[str]:
"""
The id of the disk containing the OS.
"""
return pulumi.get(self, "os_vhd_id")
@property
@pulumi.getter(name="vhdName")
def vhd_name(self) -> Optional[str]:
"""
The OS disk VHD name.
"""
return pulumi.get(self, "vhd_name")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class PolicyPropertiesResponse(dict):
"""
Protection profile custom data details.
"""
def __init__(__self__, *,
friendly_name: Optional[str] = None,
provider_specific_details: Optional[Any] = None):
"""
Protection profile custom data details.
:param str friendly_name: The FriendlyName.
:param Union['A2APolicyDetailsResponseArgs', 'HyperVReplicaAzurePolicyDetailsResponseArgs', 'HyperVReplicaBasePolicyDetailsResponseArgs', 'HyperVReplicaBluePolicyDetailsResponseArgs', 'HyperVReplicaPolicyDetailsResponseArgs', 'InMageAzureV2PolicyDetailsResponseArgs', 'InMageBasePolicyDetailsResponseArgs', 'InMagePolicyDetailsResponseArgs', 'RcmAzureMigrationPolicyDetailsResponseArgs', 'VmwareCbtPolicyDetailsResponseArgs'] provider_specific_details: The ReplicationChannelSetting.
"""
if friendly_name is not None:
pulumi.set(__self__, "friendly_name", friendly_name)
if provider_specific_details is not None:
pulumi.set(__self__, "provider_specific_details", provider_specific_details)
@property
@pulumi.getter(name="friendlyName")
def friendly_name(self) -> Optional[str]:
"""
The FriendlyName.
"""
return pulumi.get(self, "friendly_name")
@property
@pulumi.getter(name="providerSpecificDetails")
def provider_specific_details(self) -> Optional[Any]:
"""
The ReplicationChannelSetting.
"""
return pulumi.get(self, "provider_specific_details")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ProcessServerResponse(dict):
"""
Details of the Process Server.
"""
def __init__(__self__, *,
agent_version: Optional[str] = None,
available_memory_in_bytes: Optional[int] = None,
available_space_in_bytes: Optional[int] = None,
cpu_load: Optional[str] = None,
cpu_load_status: Optional[str] = None,
friendly_name: Optional[str] = None,
host_id: Optional[str] = None,
id: Optional[str] = None,
ip_address: Optional[str] = None,
last_heartbeat: Optional[str] = None,
machine_count: Optional[str] = None,
memory_usage_status: Optional[str] = None,
mobility_service_updates: Optional[Sequence['outputs.MobilityServiceUpdateResponse']] = None,
os_type: Optional[str] = None,
os_version: Optional[str] = None,
ps_service_status: Optional[str] = None,
replication_pair_count: Optional[str] = None,
space_usage_status: Optional[str] = None,
ssl_cert_expiry_date: Optional[str] = None,
ssl_cert_expiry_remaining_days: Optional[int] = None,
system_load: Optional[str] = None,
system_load_status: Optional[str] = None,
total_memory_in_bytes: Optional[int] = None,
total_space_in_bytes: Optional[int] = None,
version_status: Optional[str] = None):
"""
Details of the Process Server.
:param str agent_version: The version of the scout component on the server.
:param int available_memory_in_bytes: The available memory.
:param int available_space_in_bytes: The available space.
:param str cpu_load: The percentage of the CPU load.
:param str cpu_load_status: The CPU load status.
:param str friendly_name: The Process Server's friendly name.
:param str host_id: The agent-generated Id.
:param str id: The Process Server Id.
:param str ip_address: The IP address of the server.
:param str last_heartbeat: The last heartbeat received from the server.
:param str machine_count: The number of servers configured with this PS.
:param str memory_usage_status: The memory usage status.
:param Sequence['MobilityServiceUpdateResponseArgs'] mobility_service_updates: The list of the mobility service updates available on the Process Server.
:param str os_type: The OS type of the server.
:param str os_version: The OS version of the process server. Note: this is populated only if the CS version is greater than 9.12.0.0.
:param str ps_service_status: The PS service status.
:param str replication_pair_count: The number of replication pairs configured in this PS.
:param str space_usage_status: The space usage status.
:param str ssl_cert_expiry_date: The PS SSL cert expiry date.
:param int ssl_cert_expiry_remaining_days: The number of days remaining before the PS SSL cert expires.
:param str system_load: The percentage of the system load.
:param str system_load_status: The system load status.
:param int total_memory_in_bytes: The total memory.
:param int total_space_in_bytes: The total space.
:param str version_status: The version status.
"""
if agent_version is not None:
pulumi.set(__self__, "agent_version", agent_version)
if available_memory_in_bytes is not None:
pulumi.set(__self__, "available_memory_in_bytes", available_memory_in_bytes)
if available_space_in_bytes is not None:
pulumi.set(__self__, "available_space_in_bytes", available_space_in_bytes)
if cpu_load is not None:
pulumi.set(__self__, "cpu_load", cpu_load)
if cpu_load_status is not None:
pulumi.set(__self__, "cpu_load_status", cpu_load_status)
if friendly_name is not None:
pulumi.set(__self__, "friendly_name", friendly_name)
if host_id is not None:
pulumi.set(__self__, "host_id", host_id)
if id is not None:
pulumi.set(__self__, "id", id)
if ip_address is not None:
pulumi.set(__self__, "ip_address", ip_address)
if last_heartbeat is not None:
pulumi.set(__self__, "last_heartbeat", last_heartbeat)
if machine_count is not None:
pulumi.set(__self__, "machine_count", machine_count)
if memory_usage_status is not None:
pulumi.set(__self__, "memory_usage_status", memory_usage_status)
if mobility_service_updates is not None:
pulumi.set(__self__, "mobility_service_updates", mobility_service_updates)
if os_type is not None:
pulumi.set(__self__, "os_type", os_type)
if os_version is not None:
pulumi.set(__self__, "os_version", os_version)
if ps_service_status is not None:
pulumi.set(__self__, "ps_service_status", ps_service_status)
if replication_pair_count is not None:
pulumi.set(__self__, "replication_pair_count", replication_pair_count)
if space_usage_status is not None:
pulumi.set(__self__, "space_usage_status", space_usage_status)
if ssl_cert_expiry_date is not None:
pulumi.set(__self__, "ssl_cert_expiry_date", ssl_cert_expiry_date)
if ssl_cert_expiry_remaining_days is not None:
pulumi.set(__self__, "ssl_cert_expiry_remaining_days", ssl_cert_expiry_remaining_days)
if system_load is not None:
pulumi.set(__self__, "system_load", system_load)
if system_load_status is not None:
pulumi.set(__self__, "system_load_status", system_load_status)
if total_memory_in_bytes is not None:
pulumi.set(__self__, "total_memory_in_bytes", total_memory_in_bytes)
if total_space_in_bytes is not None:
pulumi.set(__self__, "total_space_in_bytes", total_space_in_bytes)
if version_status is not None:
pulumi.set(__self__, "version_status", version_status)
@property
@pulumi.getter(name="agentVersion")
def agent_version(self) -> Optional[str]:
"""
The version of the scout component on the server.
"""
return pulumi.get(self, "agent_version")
@property
@pulumi.getter(name="availableMemoryInBytes")
def available_memory_in_bytes(self) -> Optional[int]:
"""
The available memory.
"""
return pulumi.get(self, "available_memory_in_bytes")
@property
@pulumi.getter(name="availableSpaceInBytes")
def available_space_in_bytes(self) -> Optional[int]:
"""
The available space.
"""
return pulumi.get(self, "available_space_in_bytes")
@property
@pulumi.getter(name="cpuLoad")
def cpu_load(self) -> Optional[str]:
"""
The percentage of the CPU load.
"""
return pulumi.get(self, "cpu_load")
@property
@pulumi.getter(name="cpuLoadStatus")
def cpu_load_status(self) -> Optional[str]:
"""
The CPU load status.
"""
return pulumi.get(self, "cpu_load_status")
@property
@pulumi.getter(name="friendlyName")
def friendly_name(self) -> Optional[str]:
"""
The Process Server's friendly name.
"""
return pulumi.get(self, "friendly_name")
@property
@pulumi.getter(name="hostId")
def host_id(self) -> Optional[str]:
"""
The agent-generated Id.
"""
return pulumi.get(self, "host_id")
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The Process Server Id.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> Optional[str]:
"""
The IP address of the server.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter(name="lastHeartbeat")
def last_heartbeat(self) -> Optional[str]:
"""
The last heartbeat received from the server.
"""
return pulumi.get(self, "last_heartbeat")
@property
@pulumi.getter(name="machineCount")
def machine_count(self) -> Optional[str]:
"""
The number of servers configured with this PS.
"""
return pulumi.get(self, "machine_count")
@property
@pulumi.getter(name="memoryUsageStatus")
def memory_usage_status(self) -> Optional[str]:
"""
The memory usage status.
"""
return pulumi.get(self, "memory_usage_status")
@property
@pulumi.getter(name="mobilityServiceUpdates")
def mobility_service_updates(self) -> Optional[Sequence['outputs.MobilityServiceUpdateResponse']]:
"""
The list of the mobility service updates available on the Process Server.
"""
return pulumi.get(self, "mobility_service_updates")
@property
@pulumi.getter(name="osType")
def os_type(self) -> Optional[str]:
"""
The OS type of the server.
"""
return pulumi.get(self, "os_type")
@property
@pulumi.getter(name="osVersion")
def os_version(self) -> Optional[str]:
"""
The OS version of the process server. Note: this is populated only if the CS version is greater than 9.12.0.0.
"""
return pulumi.get(self, "os_version")
@property
@pulumi.getter(name="psServiceStatus")
def ps_service_status(self) -> Optional[str]:
"""
The PS service status.
"""
return pulumi.get(self, "ps_service_status")
@property
@pulumi.getter(name="replicationPairCount")
def replication_pair_count(self) -> Optional[str]:
"""
The number of replication pairs configured in this PS.
"""
return pulumi.get(self, "replication_pair_count")
@property
@pulumi.getter(name="spaceUsageStatus")
def space_usage_status(self) -> Optional[str]:
"""
The space usage status.
"""
return pulumi.get(self, "space_usage_status")
@property
@pulumi.getter(name="sslCertExpiryDate")
def ssl_cert_expiry_date(self) -> Optional[str]:
"""
The PS SSL cert expiry date.
"""
return pulumi.get(self, "ssl_cert_expiry_date")
@property
@pulumi.getter(name="sslCertExpiryRemainingDays")
def ssl_cert_expiry_remaining_days(self) -> Optional[int]:
"""
The number of days remaining before the PS SSL cert expires.
"""
return pulumi.get(self, "ssl_cert_expiry_remaining_days")
@property
@pulumi.getter(name="systemLoad")
def system_load(self) -> Optional[str]:
"""
The percentage of the system load.
"""
return pulumi.get(self, "system_load")
@property
@pulumi.getter(name="systemLoadStatus")
def system_load_status(self) -> Optional[str]:
"""
The system load status.
"""
return pulumi.get(self, "system_load_status")
@property
@pulumi.getter(name="totalMemoryInBytes")
def total_memory_in_bytes(self) -> Optional[int]:
"""
The total memory.
"""
return pulumi.get(self, "total_memory_in_bytes")
@property
@pulumi.getter(name="totalSpaceInBytes")
def total_space_in_bytes(self) -> Optional[int]:
"""
The total space.
"""
return pulumi.get(self, "total_space_in_bytes")
@property
@pulumi.getter(name="versionStatus")
def version_status(self) -> Optional[str]:
"""
The version status.
"""
return pulumi.get(self, "version_status")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
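Illustrative sketch (not part of the generated SDK): `ProcessServerResponse` exposes both `ssl_cert_expiry_date` and `ssl_cert_expiry_remaining_days`; the relation between the two can be computed client-side as below. The helper name and the ISO 8601 date format are assumptions for illustration.

```python
import datetime

def remaining_cert_days(ssl_cert_expiry_date: str,
                        today: datetime.date) -> int:
    # Days until the PS SSL cert expires (negative if already expired).
    # Assumes the expiry date is an ISO 8601 date string, e.g. "2025-01-31".
    expiry = datetime.date.fromisoformat(ssl_cert_expiry_date)
    return (expiry - today).days
```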
@pulumi.output_type
class ProtectionContainerMappingPropertiesResponse(dict):
"""
Protection container mapping properties.
"""
def __init__(__self__, *,
health: Optional[str] = None,
health_error_details: Optional[Sequence['outputs.HealthErrorResponse']] = None,
policy_friendly_name: Optional[str] = None,
policy_id: Optional[str] = None,
provider_specific_details: Optional['outputs.ProtectionContainerMappingProviderSpecificDetailsResponse'] = None,
source_fabric_friendly_name: Optional[str] = None,
source_protection_container_friendly_name: Optional[str] = None,
state: Optional[str] = None,
target_fabric_friendly_name: Optional[str] = None,
target_protection_container_friendly_name: Optional[str] = None,
target_protection_container_id: Optional[str] = None):
"""
Protection container mapping properties.
:param str health: Health of pairing.
:param Sequence['HealthErrorResponseArgs'] health_error_details: The health error details.
:param str policy_friendly_name: Friendly name of replication policy.
:param str policy_id: Policy ARM Id.
:param 'ProtectionContainerMappingProviderSpecificDetailsResponseArgs' provider_specific_details: The provider specific details.
:param str source_fabric_friendly_name: Friendly name of source fabric.
:param str source_protection_container_friendly_name: Friendly name of source protection container.
:param str state: The association status.
:param str target_fabric_friendly_name: Friendly name of target fabric.
:param str target_protection_container_friendly_name: Friendly name of paired container.
:param str target_protection_container_id: Paired protection container ARM ID.
"""
if health is not None:
pulumi.set(__self__, "health", health)
if health_error_details is not None:
pulumi.set(__self__, "health_error_details", health_error_details)
if policy_friendly_name is not None:
pulumi.set(__self__, "policy_friendly_name", policy_friendly_name)
if policy_id is not None:
pulumi.set(__self__, "policy_id", policy_id)
if provider_specific_details is not None:
pulumi.set(__self__, "provider_specific_details", provider_specific_details)
if source_fabric_friendly_name is not None:
pulumi.set(__self__, "source_fabric_friendly_name", source_fabric_friendly_name)
if source_protection_container_friendly_name is not None:
pulumi.set(__self__, "source_protection_container_friendly_name", source_protection_container_friendly_name)
if state is not None:
pulumi.set(__self__, "state", state)
if target_fabric_friendly_name is not None:
pulumi.set(__self__, "target_fabric_friendly_name", target_fabric_friendly_name)
if target_protection_container_friendly_name is not None:
pulumi.set(__self__, "target_protection_container_friendly_name", target_protection_container_friendly_name)
if target_protection_container_id is not None:
pulumi.set(__self__, "target_protection_container_id", target_protection_container_id)
@property
@pulumi.getter
def health(self) -> Optional[str]:
"""
Health of pairing.
"""
return pulumi.get(self, "health")
@property
@pulumi.getter(name="healthErrorDetails")
def health_error_details(self) -> Optional[Sequence['outputs.HealthErrorResponse']]:
"""
The health error details.
"""
return pulumi.get(self, "health_error_details")
@property
@pulumi.getter(name="policyFriendlyName")
def policy_friendly_name(self) -> Optional[str]:
"""
Friendly name of replication policy.
"""
return pulumi.get(self, "policy_friendly_name")
@property
@pulumi.getter(name="policyId")
def policy_id(self) -> Optional[str]:
"""
Policy ARM Id.
"""
return pulumi.get(self, "policy_id")
@property
@pulumi.getter(name="providerSpecificDetails")
def provider_specific_details(self) -> Optional['outputs.ProtectionContainerMappingProviderSpecificDetailsResponse']:
"""
The provider specific details.
"""
return pulumi.get(self, "provider_specific_details")
@property
@pulumi.getter(name="sourceFabricFriendlyName")
def source_fabric_friendly_name(self) -> Optional[str]:
"""
Friendly name of source fabric.
"""
return pulumi.get(self, "source_fabric_friendly_name")
@property
@pulumi.getter(name="sourceProtectionContainerFriendlyName")
def source_protection_container_friendly_name(self) -> Optional[str]:
"""
Friendly name of source protection container.
"""
return pulumi.get(self, "source_protection_container_friendly_name")
@property
@pulumi.getter
def state(self) -> Optional[str]:
"""
The association status.
"""
return pulumi.get(self, "state")
@property
@pulumi.getter(name="targetFabricFriendlyName")
def target_fabric_friendly_name(self) -> Optional[str]:
"""
Friendly name of target fabric.
"""
return pulumi.get(self, "target_fabric_friendly_name")
@property
@pulumi.getter(name="targetProtectionContainerFriendlyName")
def target_protection_container_friendly_name(self) -> Optional[str]:
"""
Friendly name of paired container.
"""
return pulumi.get(self, "target_protection_container_friendly_name")
@property
@pulumi.getter(name="targetProtectionContainerId")
def target_protection_container_id(self) -> Optional[str]:
"""
Paired protection container ARM ID.
"""
return pulumi.get(self, "target_protection_container_id")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ProtectionContainerMappingProviderSpecificDetailsResponse(dict):
"""
Container mapping provider specific details.
"""
def __init__(__self__, *,
instance_type: str):
"""
Container mapping provider specific details.
:param str instance_type: Gets the class type. Overridden in derived classes.
"""
pulumi.set(__self__, "instance_type", instance_type)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the class type. Overridden in derived classes.
"""
return pulumi.get(self, "instance_type")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
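Illustrative sketch (not part of the generated SDK): each class's `_translate_property` looks up API camelCase names in the generated `_tables.CAMEL_TO_SNAKE_CASE_TABLE`. A regex-based fallback like the hypothetical helper below approximates the same translation for simple names.

```python
import re

def camel_to_snake(prop: str) -> str:
    # Insert an underscore before every uppercase letter (except a leading
    # one), then lowercase, e.g. "recoveryFabricArmId" -> "recovery_fabric_arm_id".
    return re.sub(r'(?<!^)(?=[A-Z])', '_', prop).lower()
```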
@pulumi.output_type
class RcmAzureMigrationPolicyDetailsResponse(dict):
"""
RCM based Azure migration specific policy details.
"""
def __init__(__self__, *,
instance_type: str,
app_consistent_frequency_in_minutes: Optional[int] = None,
crash_consistent_frequency_in_minutes: Optional[int] = None,
multi_vm_sync_status: Optional[str] = None,
recovery_point_history: Optional[int] = None,
recovery_point_threshold_in_minutes: Optional[int] = None):
"""
RCM based Azure migration specific policy details.
:param str instance_type: Gets the class type. Overridden in derived classes.
:param int app_consistent_frequency_in_minutes: The app consistent snapshot frequency in minutes.
:param int crash_consistent_frequency_in_minutes: The crash consistent snapshot frequency in minutes.
:param str multi_vm_sync_status: A value indicating whether multi-VM sync has to be enabled.
:param int recovery_point_history: The duration in minutes for which the recovery points need to be stored.
:param int recovery_point_threshold_in_minutes: The recovery point threshold in minutes.
"""
pulumi.set(__self__, "instance_type", 'RcmAzureMigration')
if app_consistent_frequency_in_minutes is not None:
pulumi.set(__self__, "app_consistent_frequency_in_minutes", app_consistent_frequency_in_minutes)
if crash_consistent_frequency_in_minutes is not None:
pulumi.set(__self__, "crash_consistent_frequency_in_minutes", crash_consistent_frequency_in_minutes)
if multi_vm_sync_status is not None:
pulumi.set(__self__, "multi_vm_sync_status", multi_vm_sync_status)
if recovery_point_history is not None:
pulumi.set(__self__, "recovery_point_history", recovery_point_history)
if recovery_point_threshold_in_minutes is not None:
pulumi.set(__self__, "recovery_point_threshold_in_minutes", recovery_point_threshold_in_minutes)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the class type. Overridden in derived classes.
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="appConsistentFrequencyInMinutes")
def app_consistent_frequency_in_minutes(self) -> Optional[int]:
"""
The app consistent snapshot frequency in minutes.
"""
return pulumi.get(self, "app_consistent_frequency_in_minutes")
@property
@pulumi.getter(name="crashConsistentFrequencyInMinutes")
def crash_consistent_frequency_in_minutes(self) -> Optional[int]:
"""
The crash consistent snapshot frequency in minutes.
"""
return pulumi.get(self, "crash_consistent_frequency_in_minutes")
@property
@pulumi.getter(name="multiVmSyncStatus")
def multi_vm_sync_status(self) -> Optional[str]:
"""
A value indicating whether multi-VM sync has to be enabled.
"""
return pulumi.get(self, "multi_vm_sync_status")
@property
@pulumi.getter(name="recoveryPointHistory")
def recovery_point_history(self) -> Optional[int]:
"""
The duration in minutes for which the recovery points need to be stored.
"""
return pulumi.get(self, "recovery_point_history")
@property
@pulumi.getter(name="recoveryPointThresholdInMinutes")
def recovery_point_threshold_in_minutes(self) -> Optional[int]:
"""
The recovery point threshold in minutes.
"""
return pulumi.get(self, "recovery_point_threshold_in_minutes")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class RecoveryPlanActionResponse(dict):
"""
Recovery plan action details.
"""
def __init__(__self__, *,
action_name: str,
custom_details: Any,
failover_directions: Sequence[str],
failover_types: Sequence[str]):
"""
Recovery plan action details.
:param str action_name: The action name.
:param Union['RecoveryPlanAutomationRunbookActionDetailsResponseArgs', 'RecoveryPlanManualActionDetailsResponseArgs', 'RecoveryPlanScriptActionDetailsResponseArgs'] custom_details: The custom details.
:param Sequence[str] failover_directions: The list of failover directions.
:param Sequence[str] failover_types: The list of failover types.
"""
pulumi.set(__self__, "action_name", action_name)
pulumi.set(__self__, "custom_details", custom_details)
pulumi.set(__self__, "failover_directions", failover_directions)
pulumi.set(__self__, "failover_types", failover_types)
@property
@pulumi.getter(name="actionName")
def action_name(self) -> str:
"""
The action name.
"""
return pulumi.get(self, "action_name")
@property
@pulumi.getter(name="customDetails")
def custom_details(self) -> Any:
"""
The custom details.
"""
return pulumi.get(self, "custom_details")
@property
@pulumi.getter(name="failoverDirections")
def failover_directions(self) -> Sequence[str]:
"""
The list of failover directions.
"""
return pulumi.get(self, "failover_directions")
@property
@pulumi.getter(name="failoverTypes")
def failover_types(self) -> Sequence[str]:
"""
The list of failover types.
"""
return pulumi.get(self, "failover_types")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
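Illustrative sketch (not part of the generated SDK): `custom_details` on a recovery plan action is a union discriminated by its `instanceType` field. A hypothetical dispatcher over plain dicts shows the pattern; the `"ScriptActionDetails"` value is an assumption, while the first two values appear in the classes below.

```python
def describe_action_details(details: dict) -> str:
    # Dispatch on the 'instanceType' discriminator of the custom details.
    kind = details.get("instanceType")
    if kind == "ManualActionDetails":
        return f"manual: {details.get('description', '')}"
    if kind == "AutomationRunbookActionDetails":
        return f"runbook: {details.get('runbookId', '')}"
    if kind == "ScriptActionDetails":  # assumed discriminator value
        return f"script: {details.get('path', '')}"
    return f"unknown action type: {kind}"
```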
@pulumi.output_type
class RecoveryPlanAutomationRunbookActionDetailsResponse(dict):
"""
Recovery plan Automation runbook action details.
"""
def __init__(__self__, *,
fabric_location: str,
instance_type: str,
runbook_id: Optional[str] = None,
timeout: Optional[str] = None):
"""
Recovery plan Automation runbook action details.
:param str fabric_location: The fabric location.
:param str instance_type: Gets the type of action details (see RecoveryPlanActionDetailsTypes enum for possible values).
:param str runbook_id: The runbook ARM Id.
:param str timeout: The runbook timeout.
"""
pulumi.set(__self__, "fabric_location", fabric_location)
pulumi.set(__self__, "instance_type", 'AutomationRunbookActionDetails')
if runbook_id is not None:
pulumi.set(__self__, "runbook_id", runbook_id)
if timeout is not None:
pulumi.set(__self__, "timeout", timeout)
@property
@pulumi.getter(name="fabricLocation")
def fabric_location(self) -> str:
"""
The fabric location.
"""
return pulumi.get(self, "fabric_location")
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the type of action details (see RecoveryPlanActionDetailsTypes enum for possible values).
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="runbookId")
def runbook_id(self) -> Optional[str]:
"""
The runbook ARM Id.
"""
return pulumi.get(self, "runbook_id")
@property
@pulumi.getter
def timeout(self) -> Optional[str]:
"""
The runbook timeout.
"""
return pulumi.get(self, "timeout")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class RecoveryPlanGroupResponse(dict):
"""
Recovery plan group details.
"""
def __init__(__self__, *,
group_type: str,
end_group_actions: Optional[Sequence['outputs.RecoveryPlanActionResponse']] = None,
replication_protected_items: Optional[Sequence['outputs.RecoveryPlanProtectedItemResponse']] = None,
start_group_actions: Optional[Sequence['outputs.RecoveryPlanActionResponse']] = None):
"""
Recovery plan group details.
:param str group_type: The group type.
:param Sequence['RecoveryPlanActionResponseArgs'] end_group_actions: The end group actions.
:param Sequence['RecoveryPlanProtectedItemResponseArgs'] replication_protected_items: The list of protected items.
:param Sequence['RecoveryPlanActionResponseArgs'] start_group_actions: The start group actions.
"""
pulumi.set(__self__, "group_type", group_type)
if end_group_actions is not None:
pulumi.set(__self__, "end_group_actions", end_group_actions)
if replication_protected_items is not None:
pulumi.set(__self__, "replication_protected_items", replication_protected_items)
if start_group_actions is not None:
pulumi.set(__self__, "start_group_actions", start_group_actions)
@property
@pulumi.getter(name="groupType")
def group_type(self) -> str:
"""
The group type.
"""
return pulumi.get(self, "group_type")
@property
@pulumi.getter(name="endGroupActions")
def end_group_actions(self) -> Optional[Sequence['outputs.RecoveryPlanActionResponse']]:
"""
The end group actions.
"""
return pulumi.get(self, "end_group_actions")
@property
@pulumi.getter(name="replicationProtectedItems")
def replication_protected_items(self) -> Optional[Sequence['outputs.RecoveryPlanProtectedItemResponse']]:
"""
The list of protected items.
"""
return pulumi.get(self, "replication_protected_items")
@property
@pulumi.getter(name="startGroupActions")
def start_group_actions(self) -> Optional[Sequence['outputs.RecoveryPlanActionResponse']]:
"""
The start group actions.
"""
return pulumi.get(self, "start_group_actions")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class RecoveryPlanManualActionDetailsResponse(dict):
"""
Recovery plan manual action details.
"""
def __init__(__self__, *,
instance_type: str,
description: Optional[str] = None):
"""
Recovery plan manual action details.
:param str instance_type: Gets the type of action details (see RecoveryPlanActionDetailsTypes enum for possible values).
:param str description: The manual action description.
"""
pulumi.set(__self__, "instance_type", 'ManualActionDetails')
if description is not None:
pulumi.set(__self__, "description", description)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the type of action details (see RecoveryPlanActionDetailsTypes enum for possible values).
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter
def description(self) -> Optional[str]:
"""
The manual action description.
"""
return pulumi.get(self, "description")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class RecoveryPlanPropertiesResponse(dict):
"""
Recovery plan custom details.
"""
def __init__(__self__, *,
allowed_operations: Optional[Sequence[str]] = None,
current_scenario: Optional['outputs.CurrentScenarioDetailsResponse'] = None,
current_scenario_status: Optional[str] = None,
current_scenario_status_description: Optional[str] = None,
failover_deployment_model: Optional[str] = None,
friendly_name: Optional[str] = None,
groups: Optional[Sequence['outputs.RecoveryPlanGroupResponse']] = None,
last_planned_failover_time: Optional[str] = None,
last_test_failover_time: Optional[str] = None,
last_unplanned_failover_time: Optional[str] = None,
primary_fabric_friendly_name: Optional[str] = None,
primary_fabric_id: Optional[str] = None,
recovery_fabric_friendly_name: Optional[str] = None,
recovery_fabric_id: Optional[str] = None,
replication_providers: Optional[Sequence[str]] = None):
"""
Recovery plan custom details.
:param Sequence[str] allowed_operations: The list of allowed operations.
:param 'CurrentScenarioDetailsResponseArgs' current_scenario: The current scenario details.
:param str current_scenario_status: The recovery plan status.
:param str current_scenario_status_description: The recovery plan status description.
:param str failover_deployment_model: The failover deployment model.
:param str friendly_name: The friendly name.
:param Sequence['RecoveryPlanGroupResponseArgs'] groups: The recovery plan groups.
:param str last_planned_failover_time: The start time of the last planned failover.
:param str last_test_failover_time: The start time of the last test failover.
:param str last_unplanned_failover_time: The start time of the last unplanned failover.
:param str primary_fabric_friendly_name: The primary fabric friendly name.
:param str primary_fabric_id: The primary fabric Id.
:param str recovery_fabric_friendly_name: The recovery fabric friendly name.
:param str recovery_fabric_id: The recovery fabric Id.
:param Sequence[str] replication_providers: The list of replication providers.
"""
if allowed_operations is not None:
pulumi.set(__self__, "allowed_operations", allowed_operations)
if current_scenario is not None:
pulumi.set(__self__, "current_scenario", current_scenario)
if current_scenario_status is not None:
pulumi.set(__self__, "current_scenario_status", current_scenario_status)
if current_scenario_status_description is not None:
pulumi.set(__self__, "current_scenario_status_description", current_scenario_status_description)
if failover_deployment_model is not None:
pulumi.set(__self__, "failover_deployment_model", failover_deployment_model)
if friendly_name is not None:
pulumi.set(__self__, "friendly_name", friendly_name)
if groups is not None:
pulumi.set(__self__, "groups", groups)
if last_planned_failover_time is not None:
pulumi.set(__self__, "last_planned_failover_time", last_planned_failover_time)
if last_test_failover_time is not None:
pulumi.set(__self__, "last_test_failover_time", last_test_failover_time)
if last_unplanned_failover_time is not None:
pulumi.set(__self__, "last_unplanned_failover_time", last_unplanned_failover_time)
if primary_fabric_friendly_name is not None:
pulumi.set(__self__, "primary_fabric_friendly_name", primary_fabric_friendly_name)
if primary_fabric_id is not None:
pulumi.set(__self__, "primary_fabric_id", primary_fabric_id)
if recovery_fabric_friendly_name is not None:
pulumi.set(__self__, "recovery_fabric_friendly_name", recovery_fabric_friendly_name)
if recovery_fabric_id is not None:
pulumi.set(__self__, "recovery_fabric_id", recovery_fabric_id)
if replication_providers is not None:
pulumi.set(__self__, "replication_providers", replication_providers)
@property
@pulumi.getter(name="allowedOperations")
def allowed_operations(self) -> Optional[Sequence[str]]:
"""
The list of allowed operations.
"""
return pulumi.get(self, "allowed_operations")
@property
@pulumi.getter(name="currentScenario")
def current_scenario(self) -> Optional['outputs.CurrentScenarioDetailsResponse']:
"""
The current scenario details.
"""
return pulumi.get(self, "current_scenario")
@property
@pulumi.getter(name="currentScenarioStatus")
def current_scenario_status(self) -> Optional[str]:
"""
The recovery plan status.
"""
return pulumi.get(self, "current_scenario_status")
@property
@pulumi.getter(name="currentScenarioStatusDescription")
def current_scenario_status_description(self) -> Optional[str]:
"""
The recovery plan status description.
"""
return pulumi.get(self, "current_scenario_status_description")
@property
@pulumi.getter(name="failoverDeploymentModel")
def failover_deployment_model(self) -> Optional[str]:
"""
The failover deployment model.
"""
return pulumi.get(self, "failover_deployment_model")
@property
@pulumi.getter(name="friendlyName")
def friendly_name(self) -> Optional[str]:
"""
The friendly name.
"""
return pulumi.get(self, "friendly_name")
@property
@pulumi.getter
def groups(self) -> Optional[Sequence['outputs.RecoveryPlanGroupResponse']]:
"""
The recovery plan groups.
"""
return pulumi.get(self, "groups")
@property
@pulumi.getter(name="lastPlannedFailoverTime")
def last_planned_failover_time(self) -> Optional[str]:
"""
The start time of the last planned failover.
"""
return pulumi.get(self, "last_planned_failover_time")
@property
@pulumi.getter(name="lastTestFailoverTime")
def last_test_failover_time(self) -> Optional[str]:
"""
The start time of the last test failover.
"""
return pulumi.get(self, "last_test_failover_time")
@property
@pulumi.getter(name="lastUnplannedFailoverTime")
def last_unplanned_failover_time(self) -> Optional[str]:
"""
The start time of the last unplanned failover.
"""
return pulumi.get(self, "last_unplanned_failover_time")
@property
@pulumi.getter(name="primaryFabricFriendlyName")
def primary_fabric_friendly_name(self) -> Optional[str]:
"""
The primary fabric friendly name.
"""
return pulumi.get(self, "primary_fabric_friendly_name")
@property
@pulumi.getter(name="primaryFabricId")
def primary_fabric_id(self) -> Optional[str]:
"""
The primary fabric Id.
"""
return pulumi.get(self, "primary_fabric_id")
@property
@pulumi.getter(name="recoveryFabricFriendlyName")
def recovery_fabric_friendly_name(self) -> Optional[str]:
"""
The recovery fabric friendly name.
"""
return pulumi.get(self, "recovery_fabric_friendly_name")
@property
@pulumi.getter(name="recoveryFabricId")
def recovery_fabric_id(self) -> Optional[str]:
"""
The recovery fabric Id.
"""
return pulumi.get(self, "recovery_fabric_id")
@property
@pulumi.getter(name="replicationProviders")
def replication_providers(self) -> Optional[Sequence[str]]:
"""
The list of replication providers.
"""
return pulumi.get(self, "replication_providers")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
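Every optional field in the constructor above is stored through the same guard: `pulumi.set` runs only when the argument is not None, so unset values never appear in the backing store and getters return None for them. A dict-backed sketch of that pattern (pulumi internals elided; the stand-in names here are assumptions for illustration):

```python
class OutputValues:
    """Minimal stand-in for the storage behind pulumi.set / pulumi.get."""
    def __init__(self):
        self._values = {}

    def set(self, key, value):
        self._values[key] = value

    def get(self, key):
        return self._values.get(key)

def make_recovery_plan_properties(friendly_name=None, groups=None):
    out = OutputValues()
    # Mirror the constructor guards: only non-None arguments are stored.
    if friendly_name is not None:
        out.set("friendly_name", friendly_name)
    if groups is not None:
        out.set("groups", groups)
    return out
```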
@pulumi.output_type
class RecoveryPlanProtectedItemResponse(dict):
"""
Recovery plan protected item.
"""
def __init__(__self__, *,
id: Optional[str] = None,
virtual_machine_id: Optional[str] = None):
"""
Recovery plan protected item.
:param str id: The ARM Id of the recovery plan protected item.
:param str virtual_machine_id: The virtual machine Id.
"""
if id is not None:
pulumi.set(__self__, "id", id)
if virtual_machine_id is not None:
pulumi.set(__self__, "virtual_machine_id", virtual_machine_id)
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The ARM Id of the recovery plan protected item.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter(name="virtualMachineId")
def virtual_machine_id(self) -> Optional[str]:
"""
The virtual machine Id.
"""
return pulumi.get(self, "virtual_machine_id")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class RecoveryPlanScriptActionDetailsResponse(dict):
"""
Recovery plan script action details.
"""
def __init__(__self__, *,
fabric_location: str,
instance_type: str,
path: str,
timeout: Optional[str] = None):
"""
Recovery plan script action details.
:param str fabric_location: The fabric location.
:param str instance_type: Gets the type of action details (see RecoveryPlanActionDetailsTypes enum for possible values).
:param str path: The script path.
:param str timeout: The script timeout.
"""
pulumi.set(__self__, "fabric_location", fabric_location)
pulumi.set(__self__, "instance_type", 'ScriptActionDetails')
pulumi.set(__self__, "path", path)
if timeout is not None:
pulumi.set(__self__, "timeout", timeout)
@property
@pulumi.getter(name="fabricLocation")
def fabric_location(self) -> str:
"""
The fabric location.
"""
return pulumi.get(self, "fabric_location")
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the type of action details (see RecoveryPlanActionDetailsTypes enum for possible values).
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter
def path(self) -> str:
"""
The script path.
"""
return pulumi.get(self, "path")
@property
@pulumi.getter
def timeout(self) -> Optional[str]:
"""
The script timeout.
"""
return pulumi.get(self, "timeout")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
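Note that the constructor above pins the polymorphic discriminator: whatever `instance_type` value is passed, the stored value is always `'ScriptActionDetails'`, which is how the action-details variants stay distinguishable after round-tripping. A stripped-down sketch of that pattern (not part of the SDK):

```python
class ScriptActionDetailsSketch:
    """Illustrative stand-in for RecoveryPlanScriptActionDetailsResponse."""
    def __init__(self, fabric_location, instance_type, path, timeout=None):
        self.fabric_location = fabric_location
        # The discriminator is fixed regardless of the argument, mirroring
        # pulumi.set(__self__, "instance_type", 'ScriptActionDetails').
        self.instance_type = 'ScriptActionDetails'
        self.path = path
        self.timeout = timeout
```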
@pulumi.output_type
class ReplicationProtectedItemPropertiesResponse(dict):
"""
Replication protected item custom data details.
"""
def __init__(__self__, *,
active_location: Optional[str] = None,
allowed_operations: Optional[Sequence[str]] = None,
current_scenario: Optional['outputs.CurrentScenarioDetailsResponse'] = None,
failover_health: Optional[str] = None,
failover_health_errors: Optional[Sequence['outputs.HealthErrorResponse']] = None,
failover_recovery_point_id: Optional[str] = None,
friendly_name: Optional[str] = None,
last_successful_failover_time: Optional[str] = None,
last_successful_test_failover_time: Optional[str] = None,
policy_friendly_name: Optional[str] = None,
policy_id: Optional[str] = None,
primary_fabric_friendly_name: Optional[str] = None,
primary_protection_container_friendly_name: Optional[str] = None,
protectable_item_id: Optional[str] = None,
protected_item_type: Optional[str] = None,
protection_state: Optional[str] = None,
protection_state_description: Optional[str] = None,
provider_specific_details: Optional[Any] = None,
recovery_container_id: Optional[str] = None,
recovery_fabric_friendly_name: Optional[str] = None,
recovery_fabric_id: Optional[str] = None,
recovery_protection_container_friendly_name: Optional[str] = None,
recovery_services_provider_id: Optional[str] = None,
replication_health: Optional[str] = None,
replication_health_errors: Optional[Sequence['outputs.HealthErrorResponse']] = None,
test_failover_state: Optional[str] = None,
test_failover_state_description: Optional[str] = None):
"""
Replication protected item custom data details.
:param str active_location: The current active location of the PE.
:param Sequence[str] allowed_operations: The allowed operations on the Replication protected item.
:param 'CurrentScenarioDetailsResponseArgs' current_scenario: The current scenario.
:param str failover_health: The consolidated failover health for the VM.
:param Sequence['HealthErrorResponseArgs'] failover_health_errors: List of failover health errors.
:param str failover_recovery_point_id: The recovery point ARM Id to which the VM was failed over.
:param str friendly_name: The name.
:param str last_successful_failover_time: The last successful failover time.
:param str last_successful_test_failover_time: The last successful test failover time.
:param str policy_friendly_name: The name of Policy governing this PE.
:param str policy_id: The ID of Policy governing this PE.
:param str primary_fabric_friendly_name: The friendly name of the primary fabric.
:param str primary_protection_container_friendly_name: The friendly name of the primary protection container.
:param str protectable_item_id: The protected item ARM Id.
:param str protected_item_type: The type of the protected item.
:param str protection_state: The protection status.
:param str protection_state_description: The protection state description.
:param Union['A2AReplicationDetailsResponseArgs', 'HyperVReplicaAzureReplicationDetailsResponseArgs', 'HyperVReplicaBaseReplicationDetailsResponseArgs', 'HyperVReplicaBlueReplicationDetailsResponseArgs', 'HyperVReplicaReplicationDetailsResponseArgs', 'InMageAzureV2ReplicationDetailsResponseArgs', 'InMageReplicationDetailsResponseArgs'] provider_specific_details: The Replication provider custom settings.
:param str recovery_container_id: The recovery container Id.
:param str recovery_fabric_friendly_name: The friendly name of the recovery fabric.
:param str recovery_fabric_id: The ARM Id of the recovery fabric.
:param str recovery_protection_container_friendly_name: The friendly name of the recovery protection container.
:param str recovery_services_provider_id: The recovery provider ARM Id.
:param str replication_health: The consolidated protection health for the VM taking any issues with SRS as well as all the replication units associated with the VM's replication group into account. This is a string representation of the ProtectionHealth enumeration.
:param Sequence['HealthErrorResponseArgs'] replication_health_errors: List of replication health errors.
:param str test_failover_state: The Test failover state.
:param str test_failover_state_description: The Test failover state description.
"""
if active_location is not None:
pulumi.set(__self__, "active_location", active_location)
if allowed_operations is not None:
pulumi.set(__self__, "allowed_operations", allowed_operations)
if current_scenario is not None:
pulumi.set(__self__, "current_scenario", current_scenario)
if failover_health is not None:
pulumi.set(__self__, "failover_health", failover_health)
if failover_health_errors is not None:
pulumi.set(__self__, "failover_health_errors", failover_health_errors)
if failover_recovery_point_id is not None:
pulumi.set(__self__, "failover_recovery_point_id", failover_recovery_point_id)
if friendly_name is not None:
pulumi.set(__self__, "friendly_name", friendly_name)
if last_successful_failover_time is not None:
pulumi.set(__self__, "last_successful_failover_time", last_successful_failover_time)
if last_successful_test_failover_time is not None:
pulumi.set(__self__, "last_successful_test_failover_time", last_successful_test_failover_time)
if policy_friendly_name is not None:
pulumi.set(__self__, "policy_friendly_name", policy_friendly_name)
if policy_id is not None:
pulumi.set(__self__, "policy_id", policy_id)
if primary_fabric_friendly_name is not None:
pulumi.set(__self__, "primary_fabric_friendly_name", primary_fabric_friendly_name)
if primary_protection_container_friendly_name is not None:
pulumi.set(__self__, "primary_protection_container_friendly_name", primary_protection_container_friendly_name)
if protectable_item_id is not None:
pulumi.set(__self__, "protectable_item_id", protectable_item_id)
if protected_item_type is not None:
pulumi.set(__self__, "protected_item_type", protected_item_type)
if protection_state is not None:
pulumi.set(__self__, "protection_state", protection_state)
if protection_state_description is not None:
pulumi.set(__self__, "protection_state_description", protection_state_description)
if provider_specific_details is not None:
pulumi.set(__self__, "provider_specific_details", provider_specific_details)
if recovery_container_id is not None:
pulumi.set(__self__, "recovery_container_id", recovery_container_id)
if recovery_fabric_friendly_name is not None:
pulumi.set(__self__, "recovery_fabric_friendly_name", recovery_fabric_friendly_name)
if recovery_fabric_id is not None:
pulumi.set(__self__, "recovery_fabric_id", recovery_fabric_id)
if recovery_protection_container_friendly_name is not None:
pulumi.set(__self__, "recovery_protection_container_friendly_name", recovery_protection_container_friendly_name)
if recovery_services_provider_id is not None:
pulumi.set(__self__, "recovery_services_provider_id", recovery_services_provider_id)
if replication_health is not None:
pulumi.set(__self__, "replication_health", replication_health)
if replication_health_errors is not None:
pulumi.set(__self__, "replication_health_errors", replication_health_errors)
if test_failover_state is not None:
pulumi.set(__self__, "test_failover_state", test_failover_state)
if test_failover_state_description is not None:
pulumi.set(__self__, "test_failover_state_description", test_failover_state_description)
@property
@pulumi.getter(name="activeLocation")
def active_location(self) -> Optional[str]:
"""
The current active location of the PE.
"""
return pulumi.get(self, "active_location")
@property
@pulumi.getter(name="allowedOperations")
def allowed_operations(self) -> Optional[Sequence[str]]:
"""
The allowed operations on the Replication protected item.
"""
return pulumi.get(self, "allowed_operations")
@property
@pulumi.getter(name="currentScenario")
def current_scenario(self) -> Optional['outputs.CurrentScenarioDetailsResponse']:
"""
The current scenario.
"""
return pulumi.get(self, "current_scenario")
@property
@pulumi.getter(name="failoverHealth")
def failover_health(self) -> Optional[str]:
"""
The consolidated failover health for the VM.
"""
return pulumi.get(self, "failover_health")
@property
@pulumi.getter(name="failoverHealthErrors")
def failover_health_errors(self) -> Optional[Sequence['outputs.HealthErrorResponse']]:
"""
List of failover health errors.
"""
return pulumi.get(self, "failover_health_errors")
@property
@pulumi.getter(name="failoverRecoveryPointId")
def failover_recovery_point_id(self) -> Optional[str]:
"""
The recovery point ARM Id to which the VM was failed over.
"""
return pulumi.get(self, "failover_recovery_point_id")
@property
@pulumi.getter(name="friendlyName")
def friendly_name(self) -> Optional[str]:
"""
The name.
"""
return pulumi.get(self, "friendly_name")
@property
@pulumi.getter(name="lastSuccessfulFailoverTime")
def last_successful_failover_time(self) -> Optional[str]:
"""
The last successful failover time.
"""
return pulumi.get(self, "last_successful_failover_time")
@property
@pulumi.getter(name="lastSuccessfulTestFailoverTime")
def last_successful_test_failover_time(self) -> Optional[str]:
"""
The last successful test failover time.
"""
return pulumi.get(self, "last_successful_test_failover_time")
@property
@pulumi.getter(name="policyFriendlyName")
def policy_friendly_name(self) -> Optional[str]:
"""
The name of Policy governing this PE.
"""
return pulumi.get(self, "policy_friendly_name")
@property
@pulumi.getter(name="policyId")
def policy_id(self) -> Optional[str]:
"""
The ID of Policy governing this PE.
"""
return pulumi.get(self, "policy_id")
@property
@pulumi.getter(name="primaryFabricFriendlyName")
def primary_fabric_friendly_name(self) -> Optional[str]:
"""
The friendly name of the primary fabric.
"""
return pulumi.get(self, "primary_fabric_friendly_name")
@property
@pulumi.getter(name="primaryProtectionContainerFriendlyName")
def primary_protection_container_friendly_name(self) -> Optional[str]:
"""
The friendly name of the primary protection container.
"""
return pulumi.get(self, "primary_protection_container_friendly_name")
@property
@pulumi.getter(name="protectableItemId")
def protectable_item_id(self) -> Optional[str]:
"""
The protected item ARM Id.
"""
return pulumi.get(self, "protectable_item_id")
@property
@pulumi.getter(name="protectedItemType")
def protected_item_type(self) -> Optional[str]:
"""
The type of the protected item.
"""
return pulumi.get(self, "protected_item_type")
@property
@pulumi.getter(name="protectionState")
def protection_state(self) -> Optional[str]:
"""
The protection status.
"""
return pulumi.get(self, "protection_state")
@property
@pulumi.getter(name="protectionStateDescription")
def protection_state_description(self) -> Optional[str]:
"""
The protection state description.
"""
return pulumi.get(self, "protection_state_description")
@property
@pulumi.getter(name="providerSpecificDetails")
def provider_specific_details(self) -> Optional[Any]:
"""
The Replication provider custom settings.
"""
return pulumi.get(self, "provider_specific_details")
@property
@pulumi.getter(name="recoveryContainerId")
def recovery_container_id(self) -> Optional[str]:
"""
The recovery container Id.
"""
return pulumi.get(self, "recovery_container_id")
@property
@pulumi.getter(name="recoveryFabricFriendlyName")
def recovery_fabric_friendly_name(self) -> Optional[str]:
"""
The friendly name of the recovery fabric.
"""
return pulumi.get(self, "recovery_fabric_friendly_name")
@property
@pulumi.getter(name="recoveryFabricId")
def recovery_fabric_id(self) -> Optional[str]:
"""
The ARM Id of the recovery fabric.
"""
return pulumi.get(self, "recovery_fabric_id")
@property
@pulumi.getter(name="recoveryProtectionContainerFriendlyName")
def recovery_protection_container_friendly_name(self) -> Optional[str]:
"""
The friendly name of the recovery protection container.
"""
return pulumi.get(self, "recovery_protection_container_friendly_name")
@property
@pulumi.getter(name="recoveryServicesProviderId")
def recovery_services_provider_id(self) -> Optional[str]:
"""
The recovery provider ARM Id.
"""
return pulumi.get(self, "recovery_services_provider_id")
@property
@pulumi.getter(name="replicationHealth")
def replication_health(self) -> Optional[str]:
"""
The consolidated protection health for the VM taking any issues with SRS as well as all the replication units associated with the VM's replication group into account. This is a string representation of the ProtectionHealth enumeration.
"""
return pulumi.get(self, "replication_health")
@property
@pulumi.getter(name="replicationHealthErrors")
def replication_health_errors(self) -> Optional[Sequence['outputs.HealthErrorResponse']]:
"""
List of replication health errors.
"""
return pulumi.get(self, "replication_health_errors")
@property
@pulumi.getter(name="testFailoverState")
def test_failover_state(self) -> Optional[str]:
"""
The Test failover state.
"""
return pulumi.get(self, "test_failover_state")
@property
@pulumi.getter(name="testFailoverStateDescription")
def test_failover_state_description(self) -> Optional[str]:
"""
The Test failover state description.
"""
return pulumi.get(self, "test_failover_state_description")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class RetentionVolumeResponse(dict):
"""
The retention details of the MT.
"""
def __init__(__self__, *,
capacity_in_bytes: Optional[int] = None,
free_space_in_bytes: Optional[int] = None,
threshold_percentage: Optional[int] = None,
volume_name: Optional[str] = None):
"""
The retention details of the MT.
:param int capacity_in_bytes: The volume capacity.
:param int free_space_in_bytes: The free space available in this volume.
:param int threshold_percentage: The threshold percentage.
:param str volume_name: The volume name.
"""
if capacity_in_bytes is not None:
pulumi.set(__self__, "capacity_in_bytes", capacity_in_bytes)
if free_space_in_bytes is not None:
pulumi.set(__self__, "free_space_in_bytes", free_space_in_bytes)
if threshold_percentage is not None:
pulumi.set(__self__, "threshold_percentage", threshold_percentage)
if volume_name is not None:
pulumi.set(__self__, "volume_name", volume_name)
@property
@pulumi.getter(name="capacityInBytes")
def capacity_in_bytes(self) -> Optional[int]:
"""
The volume capacity.
"""
return pulumi.get(self, "capacity_in_bytes")
@property
@pulumi.getter(name="freeSpaceInBytes")
def free_space_in_bytes(self) -> Optional[int]:
"""
The free space available in this volume.
"""
return pulumi.get(self, "free_space_in_bytes")
@property
@pulumi.getter(name="thresholdPercentage")
def threshold_percentage(self) -> Optional[int]:
"""
The threshold percentage.
"""
return pulumi.get(self, "threshold_percentage")
@property
@pulumi.getter(name="volumeName")
def volume_name(self) -> Optional[str]:
"""
The volume name.
"""
return pulumi.get(self, "volume_name")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
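A hypothetical consumer might combine the `RetentionVolumeResponse` fields like this; the helper name and the reading of `threshold_percentage` as a minimum free-space percentage are assumptions, not SDK behavior:

```python
def is_low_on_space(capacity_in_bytes, free_space_in_bytes, threshold_percentage):
    # Assumption: threshold_percentage is the minimum acceptable free-space
    # percentage; the volume is flagged when it drops below that mark.
    if not capacity_in_bytes:
        return False
    free_pct = 100.0 * free_space_in_bytes / capacity_in_bytes
    return free_pct < threshold_percentage
```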
@pulumi.output_type
class RoleAssignmentResponse(dict):
"""
Azure role assignment details.
"""
def __init__(__self__, *,
id: Optional[str] = None,
name: Optional[str] = None,
principal_id: Optional[str] = None,
role_definition_id: Optional[str] = None,
scope: Optional[str] = None):
"""
Azure role assignment details.
:param str id: The ARM Id of the role assignment.
:param str name: The name of the role assignment.
:param str principal_id: Principal Id.
:param str role_definition_id: Role definition id.
:param str scope: Role assignment scope.
"""
if id is not None:
pulumi.set(__self__, "id", id)
if name is not None:
pulumi.set(__self__, "name", name)
if principal_id is not None:
pulumi.set(__self__, "principal_id", principal_id)
if role_definition_id is not None:
pulumi.set(__self__, "role_definition_id", role_definition_id)
if scope is not None:
pulumi.set(__self__, "scope", scope)
@property
@pulumi.getter
def id(self) -> Optional[str]:
"""
The ARM Id of the role assignment.
"""
return pulumi.get(self, "id")
@property
@pulumi.getter
def name(self) -> Optional[str]:
"""
The name of the role assignment.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="principalId")
def principal_id(self) -> Optional[str]:
"""
Principal Id.
"""
return pulumi.get(self, "principal_id")
@property
@pulumi.getter(name="roleDefinitionId")
def role_definition_id(self) -> Optional[str]:
"""
Role definition id.
"""
return pulumi.get(self, "role_definition_id")
@property
@pulumi.getter
def scope(self) -> Optional[str]:
"""
Role assignment scope.
"""
return pulumi.get(self, "scope")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class RunAsAccountResponse(dict):
"""
CS Accounts Details.
"""
def __init__(__self__, *,
account_id: Optional[str] = None,
account_name: Optional[str] = None):
"""
CS Accounts Details.
:param str account_id: The CS RunAs account Id.
:param str account_name: The CS RunAs account name.
"""
if account_id is not None:
pulumi.set(__self__, "account_id", account_id)
if account_name is not None:
pulumi.set(__self__, "account_name", account_name)
@property
@pulumi.getter(name="accountId")
def account_id(self) -> Optional[str]:
"""
The CS RunAs account Id.
"""
return pulumi.get(self, "account_id")
@property
@pulumi.getter(name="accountName")
def account_name(self) -> Optional[str]:
"""
The CS RunAs account name.
"""
return pulumi.get(self, "account_name")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class StorageClassificationMappingPropertiesResponse(dict):
"""
Storage mapping properties.
"""
def __init__(__self__, *,
target_storage_classification_id: Optional[str] = None):
"""
Storage mapping properties.
:param str target_storage_classification_id: Target storage object Id.
"""
if target_storage_classification_id is not None:
pulumi.set(__self__, "target_storage_classification_id", target_storage_classification_id)
@property
@pulumi.getter(name="targetStorageClassificationId")
def target_storage_classification_id(self) -> Optional[str]:
"""
Target storage object Id.
"""
return pulumi.get(self, "target_storage_classification_id")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class VCenterPropertiesResponse(dict):
"""
vCenter properties.
"""
def __init__(__self__, *,
discovery_status: Optional[str] = None,
fabric_arm_resource_name: Optional[str] = None,
friendly_name: Optional[str] = None,
infrastructure_id: Optional[str] = None,
internal_id: Optional[str] = None,
ip_address: Optional[str] = None,
last_heartbeat: Optional[str] = None,
port: Optional[str] = None,
process_server_id: Optional[str] = None,
run_as_account_id: Optional[str] = None):
"""
vCenter properties.
:param str discovery_status: The VCenter discovery status.
:param str fabric_arm_resource_name: The ARM resource name of the fabric containing this VCenter.
:param str friendly_name: Friendly name of the vCenter.
:param str infrastructure_id: The infrastructure Id of vCenter.
:param str internal_id: VCenter internal ID.
:param str ip_address: The IP address of the vCenter.
:param str last_heartbeat: The time when the last heartbeat was received by vCenter.
:param str port: The port number for discovery.
:param str process_server_id: The process server Id.
:param str run_as_account_id: The account Id which has privileges to discover the vCenter.
"""
if discovery_status is not None:
pulumi.set(__self__, "discovery_status", discovery_status)
if fabric_arm_resource_name is not None:
pulumi.set(__self__, "fabric_arm_resource_name", fabric_arm_resource_name)
if friendly_name is not None:
pulumi.set(__self__, "friendly_name", friendly_name)
if infrastructure_id is not None:
pulumi.set(__self__, "infrastructure_id", infrastructure_id)
if internal_id is not None:
pulumi.set(__self__, "internal_id", internal_id)
if ip_address is not None:
pulumi.set(__self__, "ip_address", ip_address)
if last_heartbeat is not None:
pulumi.set(__self__, "last_heartbeat", last_heartbeat)
if port is not None:
pulumi.set(__self__, "port", port)
if process_server_id is not None:
pulumi.set(__self__, "process_server_id", process_server_id)
if run_as_account_id is not None:
pulumi.set(__self__, "run_as_account_id", run_as_account_id)
@property
@pulumi.getter(name="discoveryStatus")
def discovery_status(self) -> Optional[str]:
"""
The VCenter discovery status.
"""
return pulumi.get(self, "discovery_status")
@property
@pulumi.getter(name="fabricArmResourceName")
def fabric_arm_resource_name(self) -> Optional[str]:
"""
The ARM resource name of the fabric containing this VCenter.
"""
return pulumi.get(self, "fabric_arm_resource_name")
@property
@pulumi.getter(name="friendlyName")
def friendly_name(self) -> Optional[str]:
"""
Friendly name of the vCenter.
"""
return pulumi.get(self, "friendly_name")
@property
@pulumi.getter(name="infrastructureId")
def infrastructure_id(self) -> Optional[str]:
"""
The infrastructure Id of vCenter.
"""
return pulumi.get(self, "infrastructure_id")
@property
@pulumi.getter(name="internalId")
def internal_id(self) -> Optional[str]:
"""
VCenter internal ID.
"""
return pulumi.get(self, "internal_id")
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> Optional[str]:
"""
The IP address of the vCenter.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter(name="lastHeartbeat")
def last_heartbeat(self) -> Optional[str]:
"""
The time when the last heartbeat was received by vCenter.
"""
return pulumi.get(self, "last_heartbeat")
@property
@pulumi.getter
def port(self) -> Optional[str]:
"""
The port number for discovery.
"""
return pulumi.get(self, "port")
@property
@pulumi.getter(name="processServerId")
def process_server_id(self) -> Optional[str]:
"""
The process server Id.
"""
return pulumi.get(self, "process_server_id")
@property
@pulumi.getter(name="runAsAccountId")
def run_as_account_id(self) -> Optional[str]:
"""
The account Id which has privileges to discover the vCenter.
"""
return pulumi.get(self, "run_as_account_id")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
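The vCenter `port` above is surfaced as an optional string. A hypothetical helper (not part of the SDK) that validates and converts it before handing it to a socket API:

```python
def parse_discovery_port(port):
    # Returns None when the port is unset; raises ValueError for
    # non-numeric or out-of-range values.
    if port is None:
        return None
    value = int(port)
    if not 1 <= value <= 65535:
        raise ValueError(f"port out of range: {value}")
    return value
```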
@pulumi.output_type
class VMNicDetailsResponse(dict):
"""
Hyper V VM network details.
"""
def __init__(__self__, *,
ip_address_type: Optional[str] = None,
nic_id: Optional[str] = None,
primary_nic_static_ip_address: Optional[str] = None,
recovery_nic_ip_address_type: Optional[str] = None,
recovery_vm_network_id: Optional[str] = None,
recovery_vm_subnet_name: Optional[str] = None,
replica_nic_id: Optional[str] = None,
replica_nic_static_ip_address: Optional[str] = None,
selection_type: Optional[str] = None,
source_nic_arm_id: Optional[str] = None,
v_m_network_name: Optional[str] = None,
v_m_subnet_name: Optional[str] = None):
"""
Hyper V VM network details.
:param str ip_address_type: Ip address type.
:param str nic_id: The nic Id.
:param str primary_nic_static_ip_address: Primary nic static IP address.
:param str recovery_nic_ip_address_type: IP allocation type for recovery VM.
:param str recovery_vm_network_id: Recovery VM network Id.
:param str recovery_vm_subnet_name: Recovery VM subnet name.
:param str replica_nic_id: The replica nic Id.
:param str replica_nic_static_ip_address: Replica nic static IP address.
:param str selection_type: Selection type for failover.
:param str source_nic_arm_id: The source nic ARM Id.
:param str v_m_network_name: VM network name.
:param str v_m_subnet_name: VM subnet name.
"""
if ip_address_type is not None:
pulumi.set(__self__, "ip_address_type", ip_address_type)
if nic_id is not None:
pulumi.set(__self__, "nic_id", nic_id)
if primary_nic_static_ip_address is not None:
pulumi.set(__self__, "primary_nic_static_ip_address", primary_nic_static_ip_address)
if recovery_nic_ip_address_type is not None:
pulumi.set(__self__, "recovery_nic_ip_address_type", recovery_nic_ip_address_type)
if recovery_vm_network_id is not None:
pulumi.set(__self__, "recovery_vm_network_id", recovery_vm_network_id)
if recovery_vm_subnet_name is not None:
pulumi.set(__self__, "recovery_vm_subnet_name", recovery_vm_subnet_name)
if replica_nic_id is not None:
pulumi.set(__self__, "replica_nic_id", replica_nic_id)
if replica_nic_static_ip_address is not None:
pulumi.set(__self__, "replica_nic_static_ip_address", replica_nic_static_ip_address)
if selection_type is not None:
pulumi.set(__self__, "selection_type", selection_type)
if source_nic_arm_id is not None:
pulumi.set(__self__, "source_nic_arm_id", source_nic_arm_id)
if v_m_network_name is not None:
pulumi.set(__self__, "v_m_network_name", v_m_network_name)
if v_m_subnet_name is not None:
pulumi.set(__self__, "v_m_subnet_name", v_m_subnet_name)
@property
@pulumi.getter(name="ipAddressType")
def ip_address_type(self) -> Optional[str]:
"""
IP address type.
"""
return pulumi.get(self, "ip_address_type")
@property
@pulumi.getter(name="nicId")
def nic_id(self) -> Optional[str]:
"""
The nic Id.
"""
return pulumi.get(self, "nic_id")
@property
@pulumi.getter(name="primaryNicStaticIPAddress")
def primary_nic_static_ip_address(self) -> Optional[str]:
"""
Primary nic static IP address.
"""
return pulumi.get(self, "primary_nic_static_ip_address")
@property
@pulumi.getter(name="recoveryNicIpAddressType")
def recovery_nic_ip_address_type(self) -> Optional[str]:
"""
IP allocation type for recovery VM.
"""
return pulumi.get(self, "recovery_nic_ip_address_type")
@property
@pulumi.getter(name="recoveryVMNetworkId")
def recovery_vm_network_id(self) -> Optional[str]:
"""
Recovery VM network Id.
"""
return pulumi.get(self, "recovery_vm_network_id")
@property
@pulumi.getter(name="recoveryVMSubnetName")
def recovery_vm_subnet_name(self) -> Optional[str]:
"""
Recovery VM subnet name.
"""
return pulumi.get(self, "recovery_vm_subnet_name")
@property
@pulumi.getter(name="replicaNicId")
def replica_nic_id(self) -> Optional[str]:
"""
The replica nic Id.
"""
return pulumi.get(self, "replica_nic_id")
@property
@pulumi.getter(name="replicaNicStaticIPAddress")
def replica_nic_static_ip_address(self) -> Optional[str]:
"""
Replica nic static IP address.
"""
return pulumi.get(self, "replica_nic_static_ip_address")
@property
@pulumi.getter(name="selectionType")
def selection_type(self) -> Optional[str]:
"""
Selection type for failover.
"""
return pulumi.get(self, "selection_type")
@property
@pulumi.getter(name="sourceNicArmId")
def source_nic_arm_id(self) -> Optional[str]:
"""
The source nic ARM Id.
"""
return pulumi.get(self, "source_nic_arm_id")
@property
@pulumi.getter(name="vMNetworkName")
def v_m_network_name(self) -> Optional[str]:
"""
VM network name.
"""
return pulumi.get(self, "v_m_network_name")
@property
@pulumi.getter(name="vMSubnetName")
def v_m_subnet_name(self) -> Optional[str]:
"""
VM subnet name.
"""
return pulumi.get(self, "v_m_subnet_name")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class VMwareDetailsResponse(dict):
"""
Store the fabric details specific to the VMware fabric.
"""
def __init__(__self__, *,
instance_type: str,
agent_count: Optional[str] = None,
agent_version: Optional[str] = None,
available_memory_in_bytes: Optional[int] = None,
available_space_in_bytes: Optional[int] = None,
cpu_load: Optional[str] = None,
cpu_load_status: Optional[str] = None,
cs_service_status: Optional[str] = None,
database_server_load: Optional[str] = None,
database_server_load_status: Optional[str] = None,
host_name: Optional[str] = None,
ip_address: Optional[str] = None,
last_heartbeat: Optional[str] = None,
master_target_servers: Optional[Sequence['outputs.MasterTargetServerResponse']] = None,
memory_usage_status: Optional[str] = None,
process_server_count: Optional[str] = None,
process_servers: Optional[Sequence['outputs.ProcessServerResponse']] = None,
protected_servers: Optional[str] = None,
ps_template_version: Optional[str] = None,
replication_pair_count: Optional[str] = None,
run_as_accounts: Optional[Sequence['outputs.RunAsAccountResponse']] = None,
space_usage_status: Optional[str] = None,
ssl_cert_expiry_date: Optional[str] = None,
ssl_cert_expiry_remaining_days: Optional[int] = None,
system_load: Optional[str] = None,
system_load_status: Optional[str] = None,
total_memory_in_bytes: Optional[int] = None,
total_space_in_bytes: Optional[int] = None,
version_status: Optional[str] = None,
web_load: Optional[str] = None,
web_load_status: Optional[str] = None):
"""
Store the fabric details specific to the VMware fabric.
:param str instance_type: Gets the class type. Overridden in derived classes.
:param str agent_count: The number of source and target servers configured to talk to this CS.
:param str agent_version: The agent Version.
:param int available_memory_in_bytes: The available memory.
:param int available_space_in_bytes: The available space.
:param str cpu_load: The percentage of the CPU load.
:param str cpu_load_status: The CPU load status.
:param str cs_service_status: The CS service status.
:param str database_server_load: The database server load.
:param str database_server_load_status: The database server load status.
:param str host_name: The host name.
:param str ip_address: The IP address.
:param str last_heartbeat: The last heartbeat received from CS server.
:param Sequence['MasterTargetServerResponseArgs'] master_target_servers: The list of Master Target servers associated with the fabric.
:param str memory_usage_status: The memory usage status.
:param str process_server_count: The number of process servers.
:param Sequence['ProcessServerResponseArgs'] process_servers: The list of Process Servers associated with the fabric.
:param str protected_servers: The number of protected servers.
:param str ps_template_version: PS template version.
:param str replication_pair_count: The number of replication pairs configured in this CS.
:param Sequence['RunAsAccountResponseArgs'] run_as_accounts: The list of run as accounts created on the server.
:param str space_usage_status: The space usage status.
:param str ssl_cert_expiry_date: CS SSL cert expiry date.
:param int ssl_cert_expiry_remaining_days: Number of days remaining before the CS SSL cert expires.
:param str system_load: The percentage of the system load.
:param str system_load_status: The system load status.
:param int total_memory_in_bytes: The total memory.
:param int total_space_in_bytes: The total space.
:param str version_status: Version status.
:param str web_load: The web load.
:param str web_load_status: The web load status.
"""
pulumi.set(__self__, "instance_type", 'VMware')
if agent_count is not None:
pulumi.set(__self__, "agent_count", agent_count)
if agent_version is not None:
pulumi.set(__self__, "agent_version", agent_version)
if available_memory_in_bytes is not None:
pulumi.set(__self__, "available_memory_in_bytes", available_memory_in_bytes)
if available_space_in_bytes is not None:
pulumi.set(__self__, "available_space_in_bytes", available_space_in_bytes)
if cpu_load is not None:
pulumi.set(__self__, "cpu_load", cpu_load)
if cpu_load_status is not None:
pulumi.set(__self__, "cpu_load_status", cpu_load_status)
if cs_service_status is not None:
pulumi.set(__self__, "cs_service_status", cs_service_status)
if database_server_load is not None:
pulumi.set(__self__, "database_server_load", database_server_load)
if database_server_load_status is not None:
pulumi.set(__self__, "database_server_load_status", database_server_load_status)
if host_name is not None:
pulumi.set(__self__, "host_name", host_name)
if ip_address is not None:
pulumi.set(__self__, "ip_address", ip_address)
if last_heartbeat is not None:
pulumi.set(__self__, "last_heartbeat", last_heartbeat)
if master_target_servers is not None:
pulumi.set(__self__, "master_target_servers", master_target_servers)
if memory_usage_status is not None:
pulumi.set(__self__, "memory_usage_status", memory_usage_status)
if process_server_count is not None:
pulumi.set(__self__, "process_server_count", process_server_count)
if process_servers is not None:
pulumi.set(__self__, "process_servers", process_servers)
if protected_servers is not None:
pulumi.set(__self__, "protected_servers", protected_servers)
if ps_template_version is not None:
pulumi.set(__self__, "ps_template_version", ps_template_version)
if replication_pair_count is not None:
pulumi.set(__self__, "replication_pair_count", replication_pair_count)
if run_as_accounts is not None:
pulumi.set(__self__, "run_as_accounts", run_as_accounts)
if space_usage_status is not None:
pulumi.set(__self__, "space_usage_status", space_usage_status)
if ssl_cert_expiry_date is not None:
pulumi.set(__self__, "ssl_cert_expiry_date", ssl_cert_expiry_date)
if ssl_cert_expiry_remaining_days is not None:
pulumi.set(__self__, "ssl_cert_expiry_remaining_days", ssl_cert_expiry_remaining_days)
if system_load is not None:
pulumi.set(__self__, "system_load", system_load)
if system_load_status is not None:
pulumi.set(__self__, "system_load_status", system_load_status)
if total_memory_in_bytes is not None:
pulumi.set(__self__, "total_memory_in_bytes", total_memory_in_bytes)
if total_space_in_bytes is not None:
pulumi.set(__self__, "total_space_in_bytes", total_space_in_bytes)
if version_status is not None:
pulumi.set(__self__, "version_status", version_status)
if web_load is not None:
pulumi.set(__self__, "web_load", web_load)
if web_load_status is not None:
pulumi.set(__self__, "web_load_status", web_load_status)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the class type. Overridden in derived classes.
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="agentCount")
def agent_count(self) -> Optional[str]:
"""
The number of source and target servers configured to talk to this CS.
"""
return pulumi.get(self, "agent_count")
@property
@pulumi.getter(name="agentVersion")
def agent_version(self) -> Optional[str]:
"""
The agent Version.
"""
return pulumi.get(self, "agent_version")
@property
@pulumi.getter(name="availableMemoryInBytes")
def available_memory_in_bytes(self) -> Optional[int]:
"""
The available memory.
"""
return pulumi.get(self, "available_memory_in_bytes")
@property
@pulumi.getter(name="availableSpaceInBytes")
def available_space_in_bytes(self) -> Optional[int]:
"""
The available space.
"""
return pulumi.get(self, "available_space_in_bytes")
@property
@pulumi.getter(name="cpuLoad")
def cpu_load(self) -> Optional[str]:
"""
The percentage of the CPU load.
"""
return pulumi.get(self, "cpu_load")
@property
@pulumi.getter(name="cpuLoadStatus")
def cpu_load_status(self) -> Optional[str]:
"""
The CPU load status.
"""
return pulumi.get(self, "cpu_load_status")
@property
@pulumi.getter(name="csServiceStatus")
def cs_service_status(self) -> Optional[str]:
"""
The CS service status.
"""
return pulumi.get(self, "cs_service_status")
@property
@pulumi.getter(name="databaseServerLoad")
def database_server_load(self) -> Optional[str]:
"""
The database server load.
"""
return pulumi.get(self, "database_server_load")
@property
@pulumi.getter(name="databaseServerLoadStatus")
def database_server_load_status(self) -> Optional[str]:
"""
The database server load status.
"""
return pulumi.get(self, "database_server_load_status")
@property
@pulumi.getter(name="hostName")
def host_name(self) -> Optional[str]:
"""
The host name.
"""
return pulumi.get(self, "host_name")
@property
@pulumi.getter(name="ipAddress")
def ip_address(self) -> Optional[str]:
"""
The IP address.
"""
return pulumi.get(self, "ip_address")
@property
@pulumi.getter(name="lastHeartbeat")
def last_heartbeat(self) -> Optional[str]:
"""
The last heartbeat received from CS server.
"""
return pulumi.get(self, "last_heartbeat")
@property
@pulumi.getter(name="masterTargetServers")
def master_target_servers(self) -> Optional[Sequence['outputs.MasterTargetServerResponse']]:
"""
The list of Master Target servers associated with the fabric.
"""
return pulumi.get(self, "master_target_servers")
@property
@pulumi.getter(name="memoryUsageStatus")
def memory_usage_status(self) -> Optional[str]:
"""
The memory usage status.
"""
return pulumi.get(self, "memory_usage_status")
@property
@pulumi.getter(name="processServerCount")
def process_server_count(self) -> Optional[str]:
"""
The number of process servers.
"""
return pulumi.get(self, "process_server_count")
@property
@pulumi.getter(name="processServers")
def process_servers(self) -> Optional[Sequence['outputs.ProcessServerResponse']]:
"""
The list of Process Servers associated with the fabric.
"""
return pulumi.get(self, "process_servers")
@property
@pulumi.getter(name="protectedServers")
def protected_servers(self) -> Optional[str]:
"""
The number of protected servers.
"""
return pulumi.get(self, "protected_servers")
@property
@pulumi.getter(name="psTemplateVersion")
def ps_template_version(self) -> Optional[str]:
"""
PS template version.
"""
return pulumi.get(self, "ps_template_version")
@property
@pulumi.getter(name="replicationPairCount")
def replication_pair_count(self) -> Optional[str]:
"""
The number of replication pairs configured in this CS.
"""
return pulumi.get(self, "replication_pair_count")
@property
@pulumi.getter(name="runAsAccounts")
def run_as_accounts(self) -> Optional[Sequence['outputs.RunAsAccountResponse']]:
"""
The list of run as accounts created on the server.
"""
return pulumi.get(self, "run_as_accounts")
@property
@pulumi.getter(name="spaceUsageStatus")
def space_usage_status(self) -> Optional[str]:
"""
The space usage status.
"""
return pulumi.get(self, "space_usage_status")
@property
@pulumi.getter(name="sslCertExpiryDate")
def ssl_cert_expiry_date(self) -> Optional[str]:
"""
CS SSL cert expiry date.
"""
return pulumi.get(self, "ssl_cert_expiry_date")
@property
@pulumi.getter(name="sslCertExpiryRemainingDays")
def ssl_cert_expiry_remaining_days(self) -> Optional[int]:
"""
Number of days remaining before the CS SSL cert expires.
"""
return pulumi.get(self, "ssl_cert_expiry_remaining_days")
@property
@pulumi.getter(name="systemLoad")
def system_load(self) -> Optional[str]:
"""
The percentage of the system load.
"""
return pulumi.get(self, "system_load")
@property
@pulumi.getter(name="systemLoadStatus")
def system_load_status(self) -> Optional[str]:
"""
The system load status.
"""
return pulumi.get(self, "system_load_status")
@property
@pulumi.getter(name="totalMemoryInBytes")
def total_memory_in_bytes(self) -> Optional[int]:
"""
The total memory.
"""
return pulumi.get(self, "total_memory_in_bytes")
@property
@pulumi.getter(name="totalSpaceInBytes")
def total_space_in_bytes(self) -> Optional[int]:
"""
The total space.
"""
return pulumi.get(self, "total_space_in_bytes")
@property
@pulumi.getter(name="versionStatus")
def version_status(self) -> Optional[str]:
"""
Version status.
"""
return pulumi.get(self, "version_status")
@property
@pulumi.getter(name="webLoad")
def web_load(self) -> Optional[str]:
"""
The web load.
"""
return pulumi.get(self, "web_load")
@property
@pulumi.getter(name="webLoadStatus")
def web_load_status(self) -> Optional[str]:
"""
The web load status.
"""
return pulumi.get(self, "web_load_status")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class VMwareV2FabricSpecificDetailsResponse(dict):
"""
VMwareV2 fabric Specific Details.
"""
def __init__(__self__, *,
instance_type: str,
rcm_service_endpoint: Optional[str] = None,
srs_service_endpoint: Optional[str] = None):
"""
VMwareV2 fabric Specific Details.
:param str instance_type: Gets the class type. Overridden in derived classes.
:param str rcm_service_endpoint: The endpoint for making requests to the RCM Service.
:param str srs_service_endpoint: The endpoint for making requests to the SRS Service.
"""
pulumi.set(__self__, "instance_type", 'VMwareV2')
if rcm_service_endpoint is not None:
pulumi.set(__self__, "rcm_service_endpoint", rcm_service_endpoint)
if srs_service_endpoint is not None:
pulumi.set(__self__, "srs_service_endpoint", srs_service_endpoint)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the class type. Overridden in derived classes.
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="rcmServiceEndpoint")
def rcm_service_endpoint(self) -> Optional[str]:
"""
The endpoint for making requests to the RCM Service.
"""
return pulumi.get(self, "rcm_service_endpoint")
@property
@pulumi.getter(name="srsServiceEndpoint")
def srs_service_endpoint(self) -> Optional[str]:
"""
The endpoint for making requests to the SRS Service.
"""
return pulumi.get(self, "srs_service_endpoint")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
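Each output class resolves incoming camelCase property names through `_translate_property`, backed by the pre-generated `CAMEL_TO_SNAKE_CASE_TABLE`. For illustration only, a regex approximation of that mapping (a hypothetical helper, not the generated table; acronym-heavy names such as `primaryNicStaticIPAddress` are exactly why the SDK ships a lookup table rather than a rule like this):

```python
import re

def camel_to_snake(name):
    # Insert an underscore before every interior upper-case letter,
    # then lower-case the whole string:
    # 'rcmServiceEndpoint' -> 'rcm_service_endpoint'
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()
```

This handles the regular names (`instanceType` -> `instance_type`, `vMSubnetName` -> `v_m_subnet_name`), but the table remains authoritative for irregular acronyms.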
@pulumi.output_type
class VmmDetailsResponse(dict):
"""
VMM fabric specific details.
"""
def __init__(__self__, *,
instance_type: str):
"""
VMM fabric specific details.
:param str instance_type: Gets the class type. Overridden in derived classes.
"""
pulumi.set(__self__, "instance_type", 'VMM')
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the class type. Overridden in derived classes.
"""
return pulumi.get(self, "instance_type")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class VmmToAzureNetworkMappingSettingsResponse(dict):
"""
E2A Network Mapping fabric specific settings.
"""
def __init__(__self__, *,
instance_type: str):
"""
E2A Network Mapping fabric specific settings.
:param str instance_type: Gets the Instance type.
"""
pulumi.set(__self__, "instance_type", 'VmmToAzure')
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the Instance type.
"""
return pulumi.get(self, "instance_type")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class VmmToVmmNetworkMappingSettingsResponse(dict):
"""
E2E Network Mapping fabric specific settings.
"""
def __init__(__self__, *,
instance_type: str):
"""
E2E Network Mapping fabric specific settings.
:param str instance_type: Gets the Instance type.
"""
pulumi.set(__self__, "instance_type", 'VmmToVmm')
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the Instance type.
"""
return pulumi.get(self, "instance_type")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class VmwareCbtPolicyDetailsResponse(dict):
"""
VMware Cbt specific policy details.
"""
def __init__(__self__, *,
instance_type: str,
app_consistent_frequency_in_minutes: Optional[int] = None,
crash_consistent_frequency_in_minutes: Optional[int] = None,
recovery_point_history: Optional[int] = None,
recovery_point_threshold_in_minutes: Optional[int] = None):
"""
VMware Cbt specific policy details.
:param str instance_type: Gets the class type. Overridden in derived classes.
:param int app_consistent_frequency_in_minutes: The app consistent snapshot frequency in minutes.
:param int crash_consistent_frequency_in_minutes: The crash consistent snapshot frequency in minutes.
:param int recovery_point_history: The duration in minutes until which the recovery points need to be stored.
:param int recovery_point_threshold_in_minutes: The recovery point threshold in minutes.
"""
pulumi.set(__self__, "instance_type", 'VMwareCbt')
if app_consistent_frequency_in_minutes is not None:
pulumi.set(__self__, "app_consistent_frequency_in_minutes", app_consistent_frequency_in_minutes)
if crash_consistent_frequency_in_minutes is not None:
pulumi.set(__self__, "crash_consistent_frequency_in_minutes", crash_consistent_frequency_in_minutes)
if recovery_point_history is not None:
pulumi.set(__self__, "recovery_point_history", recovery_point_history)
if recovery_point_threshold_in_minutes is not None:
pulumi.set(__self__, "recovery_point_threshold_in_minutes", recovery_point_threshold_in_minutes)
@property
@pulumi.getter(name="instanceType")
def instance_type(self) -> str:
"""
Gets the class type. Overridden in derived classes.
"""
return pulumi.get(self, "instance_type")
@property
@pulumi.getter(name="appConsistentFrequencyInMinutes")
def app_consistent_frequency_in_minutes(self) -> Optional[int]:
"""
The app consistent snapshot frequency in minutes.
"""
return pulumi.get(self, "app_consistent_frequency_in_minutes")
@property
@pulumi.getter(name="crashConsistentFrequencyInMinutes")
def crash_consistent_frequency_in_minutes(self) -> Optional[int]:
"""
The crash consistent snapshot frequency in minutes.
"""
return pulumi.get(self, "crash_consistent_frequency_in_minutes")
@property
@pulumi.getter(name="recoveryPointHistory")
def recovery_point_history(self) -> Optional[int]:
"""
The duration in minutes until which the recovery points need to be stored.
"""
return pulumi.get(self, "recovery_point_history")
@property
@pulumi.getter(name="recoveryPointThresholdInMinutes")
def recovery_point_threshold_in_minutes(self) -> Optional[int]:
"""
The recovery point threshold in minutes.
"""
return pulumi.get(self, "recovery_point_threshold_in_minutes")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
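Every constructor in this module repeats the same guard: store a property only when the argument is not `None`. A condensed sketch of that pattern (a hypothetical `set_optional` helper using a plain dict as a stand-in for `pulumi.set`):

```python
def set_optional(obj, **kwargs):
    # Store only explicitly provided values, mirroring the repeated
    # `if x is not None: pulumi.set(...)` blocks in the constructors.
    for key, value in kwargs.items():
        if value is not None:
            obj[key] = value
    return obj
```

For example, `set_optional({}, rcm_service_endpoint='https://rcm', srs_service_endpoint=None)` keeps only the RCM endpoint.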
# smartmin/csv_imports/tests.py (from felixonmars/smartmin, BSD-3-Clause)
from django.test import TestCase
from .models import ImportTask, generate_file_path


class ImportTest(TestCase):
    def test_csv_import(self):
        pass

    def test_generate_file_path(self):
        self.assertEqual(generate_file_path(ImportTask(), 'allo.csv'), 'csv_imports/allo.csv')
        self.assertEqual(generate_file_path(ImportTask(), 'allo.xlsx'), 'csv_imports/allo.xlsx')
        self.assertEqual(generate_file_path(ImportTask(), 'allo.foo.bar'), 'csv_imports/allo.foo.bar')

        long_name = 'foo' * 100
        self.assertEqual(generate_file_path(ImportTask(), '%s.xls.csv' % long_name),
                         'csv_imports/%s.csv' % long_name[:96])
        self.assertEqual(generate_file_path(ImportTask(), '%s.abc.xlsx' % long_name),
                         'csv_imports/%s.xlsx' % long_name[:95])
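The truncation behaviour exercised above implies an implementation along these lines (a sketch inferred from the test expectations, not the actual smartmin source; `MAX_NAME_LEN` is an assumed constant):

```python
import os

MAX_NAME_LEN = 100  # assumed cap: stem plus final extension fit in 100 chars

def generate_file_path(instance, filename):
    # Keep only the final extension and truncate the stem so the resulting
    # name never exceeds the cap; short names (even with several dots)
    # pass through unchanged.
    stem, ext = os.path.splitext(filename)
    stem = stem[:MAX_NAME_LEN - len(ext)]
    return 'csv_imports/%s%s' % (stem, ext)
```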
#!/usr/bin/python
# wizardMainGUI.py (from fireclawthefox/SetupConfigWizard, BSD-2-Clause)
# -*- coding: utf-8 -*-
# This file was created using the DirectGUI Designer
from direct.gui import DirectGuiGlobals as DGG
from direct.gui.DirectButton import DirectButton
from direct.gui.DirectLabel import DirectLabel
from direct.gui.DirectFrame import DirectFrame
from direct.gui.DirectEntry import DirectEntry
from direct.gui.DirectCheckButton import DirectCheckButton
from direct.gui.DirectScrolledFrame import DirectScrolledFrame
from direct.gui.DirectRadioButton import DirectRadioButton
from panda3d.core import (
LPoint3f,
LVecBase3f,
LVecBase4f,
TextNode
)
class GUI:
def __init__(self, rootParent=None):
self.btnDeploy = DirectButton(
borderWidth=(2, 2),
frameColor=(0.2, 0.9, 0.2, 1.0),
frameSize=(-250.0, 250.0, -8.1, 19.4),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(250, 0, -470),
scale=LVecBase3f(1, 1, 1),
text='Deploy',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0.0, 3.0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=rootParent,
command=base.messenger.send,
extraArgs=["deploy"],
)
self.btnDeploy.setTransparency(0)
self.btnLoad = DirectButton(
borderWidth=(2, 2),
frameSize=(-125.0, 125.0, -4.7, 19.4),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(375, 0, -445),
scale=LVecBase3f(1, 1, 1),
text='Load',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0.0, 4.0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=rootParent,
command=base.messenger.send,
extraArgs=["load"],
)
self.btnLoad.setTransparency(0)
self.lblHeader = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.25, 0.25, 0.25, 1.0),
frameSize=(-250.0, 250.0, -20.0, 30.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(250, 0, -30),
scale=LVecBase3f(1, 1, 1),
text='Panda3D Setup Creation Wizard',
text_align=TextNode.A_center,
text_scale=(24, 24),
text_pos=(0, 0),
text_fg=LVecBase4f(1, 1, 1, 1),
text_bg=LVecBase4f(0.25, 0.25, 0.25, 1),
parent=rootParent,
)
self.lblHeader.setTransparency(0)
self.btnSave = DirectButton(
borderWidth=(2, 2),
frameSize=(-125.0, 125.0, -4.7, 19.4),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(125, 0, -445),
scale=LVecBase3f(1, 1, 1),
text='Save',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0.0, 4.0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=rootParent,
command=base.messenger.send,
extraArgs=["save"],
)
self.btnSave.setTransparency(0)
self.frmMetadata = DirectFrame(
borderWidth=(2, 2),
frameColor=(1, 1, 1, 1),
frameSize=(-250.0, 250.0, -180.0, 150.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(250, 0, -245),
relief=3,
parent=rootParent,
)
self.frmMetadata.setTransparency(0)
self.lblAppName = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.0, 0.0, 0.0, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(-140, 0, 70),
scale=LVecBase3f(1, 1, 1),
text='Application Name',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmMetadata,
)
self.lblAppName.setTransparency(0)
self.txtAppName = DirectEntry(
borderWidth=(0.1666, 0.1666),
hpr=LVecBase3f(0, 0, 0),
overflow=1,
pos=LPoint3f(-60, 0, 70),
scale=LVecBase3f(12, 12, 12),
width=20.0,
text_align=TextNode.A_left,
text_scale=(1.0, 1.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmMetadata,
)
self.txtAppName.setTransparency(0)
self.lblMetadata = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.0, 0.0, 0.0, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(0, 0, 110),
scale=LVecBase3f(1, 1, 1),
text='Metadata',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmMetadata,
)
self.lblMetadata.setTransparency(0)
self.lblAuthor = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.0, 0.0, 0.0, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(-140, 0, 40),
scale=LVecBase3f(1, 1, 1),
text='Author',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmMetadata,
)
self.lblAuthor.setTransparency(0)
self.txtAuthor = DirectEntry(
borderWidth=(0.1666, 0.1666),
hpr=LVecBase3f(0, 0, 0),
overflow=1,
pos=LPoint3f(-60, 0, 40),
scale=LVecBase3f(12, 12, 12),
width=20.0,
text_align=TextNode.A_left,
text_scale=(1.0, 1.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmMetadata,
)
self.txtAuthor.setTransparency(0)
self.txtVersion = DirectEntry(
borderWidth=(0.167, 0.167),
hpr=LVecBase3f(0, 0, 0),
overflow=1,
pos=LPoint3f(-60, 0, 10),
scale=LVecBase3f(12, 12, 12),
width=20.0,
text_align=TextNode.A_left,
text_scale=(1, 1),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmMetadata,
)
self.txtVersion.setTransparency(0)
self.lblVersion = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.8, 0.8, 0.8, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(-140, 0, 10),
scale=LVecBase3f(1, 1, 1),
text='Version',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmMetadata,
)
self.lblVersion.setTransparency(0)
self.frmPlatforms = DirectFrame(
borderWidth=(2, 2),
frameColor=(1, 1, 1, 1),
frameSize=(-250.0, 250.0, -180.0, 150.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(250, 0, -245),
relief=3,
parent=rootParent,
)
self.frmPlatforms.setTransparency(0)
self.lblPlatforms = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.0, 0.0, 0.0, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(0, 0, 110),
scale=LVecBase3f(1, 1, 1),
text='Supported Platforms',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmPlatforms,
)
self.lblPlatforms.setTransparency(0)
self.cbLinux = DirectCheckButton(
borderWidth=(0.0, 0.0),
frameSize=(-250.0, 250.0, -12.25, 27.55),
hpr=LVecBase3f(0, 0, 0),
indicatorValue=1,
pos=LPoint3f(0, 0, 65),
scale=LVecBase3f(0.5, 0.5, 0.5),
text='Linux',
indicator_borderWidth=(2, 2),
indicator_hpr=LVecBase3f(0, 0, 0),
indicator_pos=LPoint3f(-241.4, 0, 0.450001),
indicator_relief='sunken',
indicator_text_align=TextNode.A_center,
indicator_text_scale=(24, 24),
indicator_text_pos=(0, -0.2),
indicator_text_fg=LVecBase4f(0, 0, 0, 1),
indicator_text_bg=LVecBase4f(0, 0, 0, 0),
text_align=TextNode.A_center,
text_scale=(24.0, 24.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmPlatforms,
)
self.cbLinux.setTransparency(0)
self.cbMacOS = DirectCheckButton(
borderWidth=(0.0, 0.0),
frameSize=(-250.0, 250.0, -12.25, 27.55),
hpr=LVecBase3f(0, 0, 0),
indicatorValue=1,
pos=LPoint3f(0, 0, 35),
scale=LVecBase3f(0.5, 0.5, 0.5),
text='MacOS',
indicator_borderWidth=(2, 2),
indicator_hpr=LVecBase3f(0, 0, 0),
indicator_pos=LPoint3f(-241.4, 0, 0.450001),
indicator_relief='sunken',
indicator_text_align=TextNode.A_center,
indicator_text_scale=(24, 24),
indicator_text_pos=(0, -0.2),
indicator_text_fg=LVecBase4f(0, 0, 0, 1),
indicator_text_bg=LVecBase4f(0, 0, 0, 0),
text_align=TextNode.A_center,
text_scale=(24, 24),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmPlatforms,
)
self.cbMacOS.setTransparency(0)
self.cbWindows = DirectCheckButton(
borderWidth=(0.0, 0.0),
frameSize=(-250.0, 250.0, -12.25, 27.55),
hpr=LVecBase3f(0, 0, 0),
indicatorValue=1,
pos=LPoint3f(0, 0, 5),
scale=LVecBase3f(0.5, 0.5, 0.5),
text='Windows',
indicator_borderWidth=(2, 2),
indicator_hpr=LVecBase3f(0, 0, 0),
indicator_pos=LPoint3f(-241.4, 0, 0.450001),
indicator_relief='sunken',
indicator_text_align=TextNode.A_center,
indicator_text_scale=(24, 24),
indicator_text_pos=(0, -0.2),
indicator_text_fg=LVecBase4f(0, 0, 0, 1),
indicator_text_bg=LVecBase4f(0, 0, 0, 0),
text_align=TextNode.A_center,
text_scale=(24, 24),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmPlatforms,
)
self.cbWindows.setTransparency(0)
self.cbAndroid = DirectCheckButton(
borderWidth=(0.0, 0.0),
frameSize=(-250.0, 250.0, -12.25, 27.55),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(0, 0, -70),
scale=LVecBase3f(0.5, 0.5, 0.5),
text='Android',
indicator_borderWidth=(2, 2),
indicator_hpr=LVecBase3f(0, 0, 0),
indicator_pos=LPoint3f(-241.4, 0, 0.450001),
indicator_relief='sunken',
indicator_text_align=TextNode.A_center,
indicator_text_scale=(24, 24),
indicator_text_pos=(0, -0.2),
indicator_text_fg=LVecBase4f(0, 0, 0, 1),
indicator_text_bg=LVecBase4f(0, 0, 0, 0),
text_align=TextNode.A_center,
text_scale=(24, 24),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmPlatforms,
)
self.cbAndroid.setTransparency(0)
self.lblDesktop = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.0, 0.0, 0.0, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(0, 0, 90),
scale=LVecBase3f(1, 1, 1),
text='Desktop',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmPlatforms,
)
self.lblDesktop.setTransparency(0)
self.lblMobile = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.0, 0.0, 0.0, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(0, 0, -50),
scale=LVecBase3f(1, 1, 1),
text='Mobile',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmPlatforms,
)
self.lblMobile.setTransparency(0)
self.frmPlugins = DirectFrame(
borderWidth=(2, 2),
frameColor=(1, 1, 1, 1),
frameSize=(-250.0, 250.0, -180.0, 150.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(250, 0, -245),
relief=3,
parent=rootParent,
)
self.frmPlugins.setTransparency(0)
self.lblPlugins = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.0, 0.0, 0.0, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(0, 0, 110),
scale=LVecBase3f(1, 1, 1),
text='Plugins',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmPlugins,
)
self.lblPlugins.setTransparency(0)
self.frmPluginSelection = DirectScrolledFrame(
canvasSize=(-215.0, 215.0, -300.0, 0.0),
frameColor=(1, 1, 1, 1),
frameSize=(-225.0, 225.0, -240.0, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(0, 0, 80),
scrollBarWidth=20,
state='normal',
horizontalScroll_borderWidth=(2, 2),
horizontalScroll_frameSize=(-0.05, 0.05, -10.0, 10.0),
horizontalScroll_hpr=LVecBase3f(0, 0, 0),
horizontalScroll_pos=LPoint3f(0, 0, 0),
horizontalScroll_decButton_borderWidth=(2, 2),
horizontalScroll_decButton_frameSize=(-0.05, 0.05, -10.0, 10.0),
horizontalScroll_decButton_hpr=LVecBase3f(0, 0, 0),
horizontalScroll_decButton_pos=LPoint3f(0, 0, 0),
horizontalScroll_incButton_borderWidth=(2, 2),
horizontalScroll_incButton_frameSize=(-0.05, 0.05, -10.0, 10.0),
horizontalScroll_incButton_hpr=LVecBase3f(0, 0, 0),
horizontalScroll_incButton_pos=LPoint3f(0, 0, 0),
horizontalScroll_thumb_borderWidth=(2, 2),
horizontalScroll_thumb_hpr=LVecBase3f(0, 0, 0),
horizontalScroll_thumb_pos=LPoint3f(0, 0, 0),
verticalScroll_borderWidth=(2, 2),
verticalScroll_frameSize=(-10.0, 10.0, -0.05, 0.05),
verticalScroll_hpr=LVecBase3f(0, 0, 0),
verticalScroll_pos=LPoint3f(0, 0, 0),
verticalScroll_decButton_borderWidth=(2, 2),
verticalScroll_decButton_frameSize=(-10.0, 10.0, -0.05, 0.05),
verticalScroll_decButton_hpr=LVecBase3f(0, 0, 0),
verticalScroll_decButton_pos=LPoint3f(215, 0, -10),
verticalScroll_incButton_borderWidth=(2, 2),
verticalScroll_incButton_frameSize=(-10.0, 10.0, -0.05, 0.05),
verticalScroll_incButton_hpr=LVecBase3f(0, 0, 0),
verticalScroll_incButton_pos=LPoint3f(215, 0, -230),
verticalScroll_thumb_borderWidth=(2, 2),
verticalScroll_thumb_hpr=LVecBase3f(0, 0, 0),
verticalScroll_thumb_pos=LPoint3f(215, 0, -100),
parent=self.frmPlugins,
)
self.frmPluginSelection.setTransparency(0)
self.frmApplications = DirectFrame(
borderWidth=(2.0, 2.0),
frameColor=(1, 1, 1, 1),
frameSize=(-250.0, 250.0, -180.0, 150.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(250, 0, -245),
relief=3,
parent=rootParent,
)
self.frmApplications.setTransparency(0)
self.lblApplication = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.0, 0.0, 0.0, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(0, 0, 110),
scale=LVecBase3f(1, 1, 1),
text='Applications',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmApplications,
)
self.lblApplication.setTransparency(0)
self.btnAddApplication = DirectButton(
borderWidth=(2, 2),
frameSize=(-250.0, 250.0, -12.25, 27.55),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(0, 0, 70),
scale=LVecBase3f(0.5, 0.5, 0.5),
text='Add Application',
text_align=TextNode.A_center,
text_scale=(24, 24),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmApplications,
command=base.messenger.send,
extraArgs=["addApplication"],
)
self.btnAddApplication.setTransparency(0)
self.frmApplicationSelection = DirectScrolledFrame(
canvasSize=(-215.0, 215.0, -300.0, 0.0),
frameColor=(1.0, 1.0, 1.0, 1.0),
frameSize=(-225.0, 225.0, -200.0, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(0, 0, 40),
scrollBarWidth=20,
state='normal',
horizontalScroll_borderWidth=(2, 2),
horizontalScroll_frameSize=(-0.05, 0.05, -10.0, 10.0),
horizontalScroll_hpr=LVecBase3f(0, 0, 0),
horizontalScroll_pos=LPoint3f(0, 0, 0),
horizontalScroll_decButton_borderWidth=(2, 2),
horizontalScroll_decButton_frameSize=(-0.05, 0.05, -10.0, 10.0),
horizontalScroll_decButton_hpr=LVecBase3f(0, 0, 0),
horizontalScroll_decButton_pos=LPoint3f(0, 0, 0),
horizontalScroll_incButton_borderWidth=(2, 2),
horizontalScroll_incButton_frameSize=(-0.05, 0.05, -10.0, 10.0),
horizontalScroll_incButton_hpr=LVecBase3f(0, 0, 0),
horizontalScroll_incButton_pos=LPoint3f(0, 0, 0),
horizontalScroll_thumb_borderWidth=(2, 2),
horizontalScroll_thumb_hpr=LVecBase3f(0, 0, 0),
horizontalScroll_thumb_pos=LPoint3f(0, 0, 0),
verticalScroll_borderWidth=(2, 2),
verticalScroll_frameSize=(-10.0, 10.0, -0.05, 0.05),
verticalScroll_hpr=LVecBase3f(0, 0, 0),
verticalScroll_pos=LPoint3f(0, 0, 0),
verticalScroll_decButton_borderWidth=(2, 2),
verticalScroll_decButton_frameSize=(-10.0, 10.0, -0.05, 0.05),
verticalScroll_decButton_hpr=LVecBase3f(0, 0, 0),
verticalScroll_decButton_pos=LPoint3f(215, 0, -10),
verticalScroll_incButton_borderWidth=(2, 2),
verticalScroll_incButton_frameSize=(-10.0, 10.0, -0.05, 0.05),
verticalScroll_incButton_hpr=LVecBase3f(0, 0, 0),
verticalScroll_incButton_pos=LPoint3f(215, 0, -190),
verticalScroll_thumb_borderWidth=(2, 2),
verticalScroll_thumb_hpr=LVecBase3f(0, 0, 0),
verticalScroll_thumb_pos=LPoint3f(215, 0, -73.3333),
parent=self.frmApplications,
)
self.frmApplicationSelection.setTransparency(0)
self.lblNameCol = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.8, 0.8, 0.8, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(-220, 0, 45),
scale=LVecBase3f(1, 1, 1),
text='Name',
text_align=TextNode.A_left,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmApplications,
)
self.lblNameCol.setTransparency(0)
self.lblPathCol = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.8, 0.8, 0.8, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(-95, 0, 45),
scale=LVecBase3f(1, 1, 1),
text='Path',
text_align=TextNode.A_left,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmApplications,
)
self.lblPathCol.setTransparency(0)
self.lblTerminalCol = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.8, 0.8, 0.8, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(100, 0, 45),
scale=LVecBase3f(1, 1, 1),
text='Terminal App',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmApplications,
)
self.lblTerminalCol.setTransparency(0)
self.rbMetadata = DirectRadioButton(
borderWidth=(0.0, 0.0),
frameSize=(-86.25, 69.65, -6.25, 21.55),
hpr=LVecBase3f(0, 0, 0),
indicatorValue=1,
pos=LPoint3f(60, 0, -65),
scale=LVecBase3f(0.5, 0.5, 0.5),
text='Metadata',
indicator_borderWidth=(2, 2),
indicator_hpr=LVecBase3f(0, 0, 0),
indicator_pos=LPoint3f(-77.65, 0, 0.450001),
indicator_relief=3,
indicator_text_align=TextNode.A_center,
indicator_text_scale=(24, 24),
indicator_text_pos=(0, -0.25),
indicator_text_fg=LVecBase4f(0, 0, 0, 1),
indicator_text_bg=LVecBase4f(0, 0, 0, 0),
text_align=TextNode.A_center,
text_scale=(24, 24),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=rootParent,
command=base.messenger.send,
extraArgs=["selectTab", ["metadata"]],
variable=[],
value=[],
)
self.rbMetadata.setTransparency(0)
self.rbPlatforms = DirectRadioButton(
borderWidth=(0.0, 0.0),
frameSize=(-86.25, 69.65, -6.25, 21.55),
hpr=LVecBase3f(0, 0, 0),
indicatorValue=1,
pos=LPoint3f(255, 0, -85),
scale=LVecBase3f(0.5, 0.5, 0.5),
text='Platforms',
indicator_borderWidth=(2, 2),
indicator_hpr=LVecBase3f(0, 0, 0),
indicator_pos=LPoint3f(-77.65, 0, 0.450001),
indicator_relief=3,
indicator_text_align=TextNode.A_center,
indicator_text_scale=(24, 24),
indicator_text_pos=(0, -0.25),
indicator_text_fg=LVecBase4f(0, 0, 0, 1),
indicator_text_bg=LVecBase4f(0, 0, 0, 0),
text_align=TextNode.A_center,
text_scale=(24, 24),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=rootParent,
command=base.messenger.send,
extraArgs=["selectTab", ["platforms"]],
variable=[],
value=[],
)
self.rbPlatforms.setTransparency(0)
self.rbPlugins = DirectRadioButton(
borderWidth=(0.0, 0.0),
frameSize=(-86.25, 69.65, -7.8, 21.55),
hpr=LVecBase3f(0, 0, 0),
indicatorValue=1,
pos=LPoint3f(155, 0, -85),
scale=LVecBase3f(0.5, 0.5, 0.5),
text='Plugins',
indicator_borderWidth=(2, 2),
indicator_hpr=LVecBase3f(0, 0, 0),
indicator_pos=LPoint3f(-77.65, 0, -0.324999),
indicator_relief=3,
indicator_text_align=TextNode.A_center,
indicator_text_scale=(24, 24),
indicator_text_pos=(0, -0.25),
indicator_text_fg=LVecBase4f(0, 0, 0, 1),
indicator_text_bg=LVecBase4f(0, 0, 0, 0),
text_align=TextNode.A_center,
text_scale=(24, 24),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=rootParent,
command=base.messenger.send,
extraArgs=["selectTab", ["plugins"]],
variable=[],
value=[],
)
self.rbPlugins.setTransparency(0)
self.rbApplications = DirectRadioButton(
borderWidth=(0.0, 0.0),
frameSize=(-86.25, 69.65, -8.1, 21.55),
hpr=LVecBase3f(0, 0, 0),
indicatorValue=1,
pos=LPoint3f(60, 0, -85),
scale=LVecBase3f(0.5, 0.5, 0.5),
text='Applications',
indicator_borderWidth=(2, 2),
indicator_hpr=LVecBase3f(0, 0, 0),
indicator_pos=LPoint3f(-77.65, 0, -0.474999),
indicator_relief=3,
indicator_text_align=TextNode.A_center,
indicator_text_scale=(24, 24),
indicator_text_pos=(0, -0.25),
indicator_text_fg=LVecBase4f(0, 0, 0, 1),
indicator_text_bg=LVecBase4f(0, 0, 0, 0),
text_align=TextNode.A_center,
text_scale=(24, 24),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=rootParent,
command=base.messenger.send,
extraArgs=["selectTab", ["applications"]],
variable=[],
value=[],
)
self.rbApplications.setTransparency(0)
self.rbInclude = DirectRadioButton(
borderWidth=(0.0, 0.0),
frameSize=(-86.25, 69.65, -8.1, 21.55),
hpr=LVecBase3f(0, 0, 0),
indicatorValue=1,
pos=LPoint3f(155, 0, -65),
scale=LVecBase3f(0.5, 0.5, 0.5),
text='Include',
indicator_borderWidth=(2, 2),
indicator_hpr=LVecBase3f(0, 0, 0),
indicator_pos=LPoint3f(-77.65, 0, -0.474999),
indicator_relief=3,
indicator_text_align=TextNode.A_center,
indicator_text_scale=(24, 24),
indicator_text_pos=(0, -0.25),
indicator_text_fg=LVecBase4f(0, 0, 0, 1),
indicator_text_bg=LVecBase4f(0, 0, 0, 0),
text_align=TextNode.A_center,
text_scale=(24, 24),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=rootParent,
command=base.messenger.send,
extraArgs=["selectTab", ["include"]],
variable=[],
value=[],
)
self.rbInclude.setTransparency(0)
self.frmInclude = DirectFrame(
borderWidth=(2, 2),
frameColor=(1, 1, 1, 1),
frameSize=(-250.0, 250.0, -180.0, 150.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(250, 0, -245),
relief=3,
parent=rootParent,
)
self.frmInclude.setTransparency(0)
self.btnAddIncludePattern = DirectButton(
borderWidth=(2, 2),
frameSize=(-250.0, 250.0, -12.25, 27.55),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(0, 0, 70),
scale=LVecBase3f(0.5, 0.5, 0.5),
text='Add pattern',
text_align=TextNode.A_center,
text_scale=(24, 24),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmInclude,
command=base.messenger.send,
extraArgs=["addIncludePattern"],
)
self.btnAddIncludePattern.setTransparency(0)
self.lblIncludePatterns = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.8, 0.8, 0.8, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(0, 0, 110),
scale=LVecBase3f(1, 1, 1),
text='Include Patterns',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmInclude,
)
self.lblIncludePatterns.setTransparency(0)
self.frmIncludePatterns = DirectScrolledFrame(
canvasSize=(-215.0, 215.0, -300.0, 0.0),
frameColor=(1.0, 1.0, 1.0, 1.0),
frameSize=(-225.0, 225.0, -200.0, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(0, 0, 40),
scrollBarWidth=20,
state='normal',
horizontalScroll_borderWidth=(2, 2),
horizontalScroll_frameSize=(-0.05, 0.05, -10.0, 10.0),
horizontalScroll_hpr=LVecBase3f(0, 0, 0),
horizontalScroll_pos=LPoint3f(0, 0, 0),
horizontalScroll_decButton_borderWidth=(2, 2),
horizontalScroll_decButton_frameSize=(-0.05, 0.05, -10.0, 10.0),
horizontalScroll_decButton_hpr=LVecBase3f(0, 0, 0),
horizontalScroll_decButton_pos=LPoint3f(0, 0, 0),
horizontalScroll_incButton_borderWidth=(2, 2),
horizontalScroll_incButton_frameSize=(-0.05, 0.05, -10.0, 10.0),
horizontalScroll_incButton_hpr=LVecBase3f(0, 0, 0),
horizontalScroll_incButton_pos=LPoint3f(0, 0, 0),
horizontalScroll_thumb_borderWidth=(2, 2),
horizontalScroll_thumb_hpr=LVecBase3f(0, 0, 0),
horizontalScroll_thumb_pos=LPoint3f(0, 0, 0),
verticalScroll_borderWidth=(2, 2),
verticalScroll_frameSize=(-10.0, 10.0, -0.05, 0.05),
verticalScroll_hpr=LVecBase3f(0, 0, 0),
verticalScroll_pos=LPoint3f(0, 0, 0),
verticalScroll_decButton_borderWidth=(2, 2),
verticalScroll_decButton_frameSize=(-10.0, 10.0, -0.05, 0.05),
verticalScroll_decButton_hpr=LVecBase3f(0, 0, 0),
verticalScroll_decButton_pos=LPoint3f(215, 0, -10),
verticalScroll_incButton_borderWidth=(2, 2),
verticalScroll_incButton_frameSize=(-10.0, 10.0, -0.05, 0.05),
verticalScroll_incButton_hpr=LVecBase3f(0, 0, 0),
verticalScroll_incButton_pos=LPoint3f(215, 0, -190),
verticalScroll_thumb_borderWidth=(2, 2),
verticalScroll_thumb_hpr=LVecBase3f(0, 0, 0),
verticalScroll_thumb_pos=LPoint3f(215, 0, -73.3333),
parent=self.frmInclude,
)
self.frmIncludePatterns.setTransparency(0)
self.lblPatternCol = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.8, 0.8, 0.8, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(-220, 0, 45),
scale=LVecBase3f(1, 1, 1),
text='Pattern',
text_align=TextNode.A_left,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmInclude,
)
self.lblPatternCol.setTransparency(0)
self.frmExclude = DirectFrame(
borderWidth=(2, 2),
frameColor=(1, 1, 1, 1),
frameSize=(-250.0, 250.0, -180.0, 150.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(250, 0, -245),
relief=3,
parent=rootParent,
)
self.frmExclude.setTransparency(0)
self.lblExcludePatterns = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.8, 0.8, 0.8, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(0, 0, 110),
scale=LVecBase3f(1, 1, 1),
text='Exclude Patterns',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmExclude,
)
self.lblExcludePatterns.setTransparency(0)
self.btnAddExcludePattern = DirectButton(
borderWidth=(2, 2),
frameSize=(-250.0, 250.0, -12.225, 27.55),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(0, 0, 70),
scale=LVecBase3f(0.5, 0.5, 0.5),
text='Add pattern',
text_align=TextNode.A_center,
text_scale=(24, 24),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmExclude,
)
self.btnAddExcludePattern.setTransparency(0)
self.pg23054 = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.8, 0.8, 0.8, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(-220, 0, 45),
scale=LVecBase3f(1, 1, 1),
text='Pattern',
text_align=TextNode.A_left,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmExclude,
)
self.pg23054.setTransparency(0)
self.frmExcludePatterns = DirectScrolledFrame(
canvasSize=(-215.0, 215.0, -300.0, 0.0),
frameColor=(1, 1, 1, 1),
frameSize=(-225.0, 225.0, -200.0, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(0, 0, 40),
scrollBarWidth=20,
state='normal',
horizontalScroll_borderWidth=(2, 2),
horizontalScroll_frameSize=(-0.05, 0.05, -10.0, 10.0),
horizontalScroll_hpr=LVecBase3f(0, 0, 0),
horizontalScroll_pos=LPoint3f(0, 0, 0),
horizontalScroll_decButton_borderWidth=(2, 2),
horizontalScroll_decButton_frameSize=(-0.05, 0.05, -10.0, 10.0),
horizontalScroll_decButton_hpr=LVecBase3f(0, 0, 0),
horizontalScroll_decButton_pos=LPoint3f(0, 0, 0),
horizontalScroll_incButton_borderWidth=(2, 2),
horizontalScroll_incButton_frameSize=(-0.05, 0.05, -10.0, 10.0),
horizontalScroll_incButton_hpr=LVecBase3f(0, 0, 0),
horizontalScroll_incButton_pos=LPoint3f(0, 0, 0),
horizontalScroll_thumb_borderWidth=(2, 2),
horizontalScroll_thumb_hpr=LVecBase3f(0, 0, 0),
horizontalScroll_thumb_pos=LPoint3f(0, 0, 0),
verticalScroll_borderWidth=(2, 2),
verticalScroll_frameSize=(-10.0, 10.0, -0.05, 0.05),
verticalScroll_hpr=LVecBase3f(0, 0, 0),
verticalScroll_pos=LPoint3f(0, 0, 0),
verticalScroll_decButton_borderWidth=(2, 2),
verticalScroll_decButton_frameSize=(-10.0, 10.0, -0.05, 0.05),
verticalScroll_decButton_hpr=LVecBase3f(0, 0, 0),
verticalScroll_decButton_pos=LPoint3f(215, 0, -10),
verticalScroll_incButton_borderWidth=(2, 2),
verticalScroll_incButton_frameSize=(-10.0, 10.0, -0.05, 0.05),
verticalScroll_incButton_hpr=LVecBase3f(0, 0, 0),
verticalScroll_incButton_pos=LPoint3f(215, 0, -190),
verticalScroll_thumb_borderWidth=(2, 2),
verticalScroll_thumb_hpr=LVecBase3f(0, 0, 0),
verticalScroll_thumb_pos=LPoint3f(215, 0, -73.3333),
parent=self.frmExclude,
)
self.frmExcludePatterns.setTransparency(0)
self.rbExclude = DirectRadioButton(
borderWidth=(0.0, 0.0),
frameSize=(-86.25, 69.65, -8.1, 21.55),
hpr=LVecBase3f(0, 0, 0),
indicatorValue=1,
pos=LPoint3f(255, 0, -65),
scale=LVecBase3f(0.5, 0.5, 0.5),
text='Exclude',
indicator_borderWidth=(2, 2),
indicator_hpr=LVecBase3f(0, 0, 0),
indicator_pos=LPoint3f(-77.65, 0, -0.474999),
indicator_relief=3,
indicator_text_align=TextNode.A_center,
indicator_text_scale=(24, 24),
indicator_text_pos=(0, -0.25),
indicator_text_fg=LVecBase4f(0, 0, 0, 1),
indicator_text_bg=LVecBase4f(0, 0, 0, 0),
text_align=TextNode.A_center,
text_scale=(24, 24),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=rootParent,
command=base.messenger.send,
extraArgs=["selectTab", ["exclude"]],
variable=[],
value=[],
)
self.rbExclude.setTransparency(0)
self.frmAdvanced = DirectFrame(
borderWidth=(2, 2),
frameColor=(1, 1, 1, 1),
frameSize=(-250.0, 250.0, -180.0, 150.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(250, 0, -245),
relief=3,
parent=rootParent,
)
self.frmAdvanced.setTransparency(0)
self.lblAdvanced = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.8, 0.8, 0.8, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(0, 0, 110),
scale=LVecBase3f(1, 1, 1),
text='Advanced',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmAdvanced,
)
self.lblAdvanced.setTransparency(0)
self.lblBuildBase = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.8, 0.8, 0.8, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(-180, 0, 70),
scale=LVecBase3f(1, 1, 1),
text='Build directory',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmAdvanced,
)
self.lblBuildBase.setTransparency(0)
self.txtBuildBase = DirectEntry(
borderWidth=(0.167, 0.167),
hpr=LVecBase3f(0, 0, 0),
overflow=1,
pos=LPoint3f(-60, 0, 70),
scale=LVecBase3f(12, 12, 12),
width=20.0,
text_align=TextNode.A_left,
text_scale=(1, 1),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmAdvanced,
)
self.txtBuildBase.setTransparency(0)
self.lblRequirementsPaths = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.8, 0.8, 0.8, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(-180, 0, 40),
scale=LVecBase3f(1, 1, 1),
text='Requirements file',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmAdvanced,
)
self.lblRequirementsPaths.setTransparency(0)
self.txtRequirementsPaths = DirectEntry(
borderWidth=(0.167, 0.167),
hpr=LVecBase3f(0, 0, 0),
overflow=1,
pos=LPoint3f(-60, 0, 40),
scale=LVecBase3f(12, 12, 12),
width=20.0,
text_align=TextNode.A_left,
text_scale=(1, 1),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmAdvanced,
)
self.txtRequirementsPaths.setTransparency(0)
self.txtOptimizedWheels = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.8, 0.8, 0.8, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(-180, 0, 10),
scale=LVecBase3f(1, 1, 1),
text='Use optimized wheels',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmAdvanced,
)
self.txtOptimizedWheels.setTransparency(0)
self.cbOptimizedWheels = DirectCheckButton(
borderWidth=(2, 2),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(70, 0, 15),
scale=LVecBase3f(0.5, 0.5, 0.5),
text='',
indicator_borderWidth=(2, 2),
indicator_hpr=LVecBase3f(0, 0, 0),
indicator_pos=LPoint3f(-11, 0, -7.2),
indicator_relief='sunken',
indicator_text_align=TextNode.A_center,
indicator_text_scale=(24, 24),
indicator_text_pos=(0, -0.2),
indicator_text_fg=LVecBase4f(0, 0, 0, 1),
indicator_text_bg=LVecBase4f(0, 0, 0, 0),
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmAdvanced,
)
self.cbOptimizedWheels.setTransparency(0)
self.lblOptimizedWheelsIndex = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.8, 0.8, 0.8, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(-180, 0, -20),
scale=LVecBase3f(1, 1, 1),
text='Optimized wheel index',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmAdvanced,
)
self.lblOptimizedWheelsIndex.setTransparency(0)
self.txtOptimizedWheelsIndex = DirectEntry(
borderWidth=(0.167, 0.167),
hpr=LVecBase3f(0, 0, 0),
overflow=1,
pos=LPoint3f(-60, 0, -20),
scale=LVecBase3f(12, 12, 12),
width=20.0,
text_align=TextNode.A_left,
text_scale=(1, 1),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmAdvanced,
)
self.txtOptimizedWheelsIndex.setTransparency(0)
self.lblDistDir = DirectLabel(
borderWidth=(2, 2),
frameColor=(0.8, 0.8, 0.8, 0.0),
hpr=LVecBase3f(0, 0, 0),
pos=LPoint3f(-180, 0, -50),
scale=LVecBase3f(1, 1, 1),
text='Distribution directory',
text_align=TextNode.A_center,
text_scale=(12.0, 12.0),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmAdvanced,
)
self.lblDistDir.setTransparency(0)
self.txtDistDir = DirectEntry(
borderWidth=(0.167, 0.167),
hpr=LVecBase3f(0, 0, 0),
overflow=1,
pos=LPoint3f(-60, 0, -50),
scale=LVecBase3f(12, 12, 12),
width=20.0,
text_align=TextNode.A_left,
text_scale=(1, 1),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=self.frmAdvanced,
)
self.txtDistDir.setTransparency(0)
self.rbAdvanced = DirectRadioButton(
borderWidth=(0.0, 0.0),
hpr=LVecBase3f(0, 0, 0),
indicatorValue=1,
pos=LPoint3f(350, 0, -65),
scale=LVecBase3f(0.5, 0.5, 0.5),
text='Advanced',
indicator_borderWidth=(2, 2),
indicator_hpr=LVecBase3f(0, 0, 0),
indicator_pos=LPoint3f(-64.55, 0, 0.150002),
indicator_relief=3,
indicator_text_align=TextNode.A_center,
indicator_text_scale=(24, 24),
indicator_text_pos=(0, -0.25),
indicator_text_fg=LVecBase4f(0, 0, 0, 1),
indicator_text_bg=LVecBase4f(0, 0, 0, 0),
text_align=TextNode.A_center,
text_scale=(24, 24),
text_pos=(0, 0),
text_fg=LVecBase4f(0, 0, 0, 1),
text_bg=LVecBase4f(0, 0, 0, 0),
parent=rootParent,
command=base.messenger.send,
extraArgs=["selectTab", ["advanced"]],
variable=[],
value=[],
)
self.rbAdvanced.setTransparency(0)
self.rbMetadata.setOthers([self.rbMetadata,self.rbPlatforms,self.rbPlugins,self.rbApplications,self.rbInclude,self.rbExclude,self.rbAdvanced,])
self.rbPlatforms.setOthers([self.rbMetadata,self.rbPlatforms,self.rbPlugins,self.rbApplications,self.rbInclude,self.rbExclude,self.rbAdvanced,])
self.rbPlugins.setOthers([self.rbMetadata,self.rbPlatforms,self.rbPlugins,self.rbApplications,self.rbInclude,self.rbExclude,self.rbAdvanced,])
self.rbApplications.setOthers([self.rbMetadata,self.rbPlatforms,self.rbPlugins,self.rbApplications,self.rbInclude,self.rbExclude,self.rbAdvanced,])
self.rbInclude.setOthers([self.rbMetadata,self.rbPlatforms,self.rbPlugins,self.rbApplications,self.rbInclude,self.rbExclude,self.rbAdvanced,])
self.rbExclude.setOthers([self.rbMetadata,self.rbPlatforms,self.rbPlugins,self.rbApplications,self.rbInclude,self.rbExclude,self.rbAdvanced,])
self.rbAdvanced.setOthers([self.rbMetadata,self.rbPlatforms,self.rbPlugins,self.rbApplications,self.rbInclude,self.rbExclude,self.rbAdvanced,])
def show(self):
self.btnDeploy.show()
self.btnLoad.show()
self.lblHeader.show()
self.btnSave.show()
self.frmMetadata.show()
self.frmPlatforms.show()
self.frmPlugins.show()
self.frmApplications.show()
self.rbMetadata.show()
self.rbPlatforms.show()
self.rbPlugins.show()
self.rbApplications.show()
self.rbInclude.show()
self.frmInclude.show()
self.frmExclude.show()
self.rbExclude.show()
self.frmAdvanced.show()
self.rbAdvanced.show()
def hide(self):
self.btnDeploy.hide()
self.btnLoad.hide()
self.lblHeader.hide()
self.btnSave.hide()
self.frmMetadata.hide()
self.frmPlatforms.hide()
self.frmPlugins.hide()
self.frmApplications.hide()
self.rbMetadata.hide()
self.rbPlatforms.hide()
self.rbPlugins.hide()
self.rbApplications.hide()
self.rbInclude.hide()
self.frmInclude.hide()
self.frmExclude.hide()
self.rbExclude.hide()
self.frmAdvanced.hide()
self.rbAdvanced.hide()
def destroy(self):
self.btnDeploy.destroy()
self.btnLoad.destroy()
self.lblHeader.destroy()
self.btnSave.destroy()
self.frmMetadata.destroy()
self.frmPlatforms.destroy()
self.frmPlugins.destroy()
self.frmApplications.destroy()
self.rbMetadata.destroy()
self.rbPlatforms.destroy()
self.rbPlugins.destroy()
self.rbApplications.destroy()
self.rbInclude.destroy()
self.frmInclude.destroy()
self.frmExclude.destroy()
self.rbExclude.destroy()
self.frmAdvanced.destroy()
self.rbAdvanced.destroy()
# --- workers/semantics/encodings/003-1.py (repo: meyerweb/wpt, license: BSD-3-Clause) ---
# -*- coding: utf-8 -*-
def main(request, response):
return u"PASS" if request.GET.first(b'x').decode('utf-8') == u'å' else u"FAIL"
# --- services/streaming/config/__init__.py (repo: SashaNullptr/FNWClient, license: MIT) ---
from .config import config
from .env_vars import collect_env_vars
# --- tests/test_model/test_recognizer/test_resnet.py (repo: likyoo/ZCls, license: Apache-2.0) ---
# -*- coding: utf-8 -*-
"""
@date: 2020/11/21 下午4:16
@file: test_resnest.py
@author: zj
@description:
"""
import torch
from zcls.config import cfg
# NOTE: `torch` and `cfg` imports were missing; the `cfg` path below is
# assumed from zcls conventions.
import torch

from zcls.config import cfg
from zcls.config.key_word import KEY_OUTPUT
from zcls.model.recognizers.resnet.resnet import ResNet
from zcls.model.recognizers.resnet.torchvision_resnet import build_torchvision_resnet


def test_data(model, input_shape, output_shape):
    data = torch.randn(input_shape)
    outputs = model(data)[KEY_OUTPUT]
    print(outputs.shape)

    assert outputs.shape == output_shape


def test_resnet():
    config_file = 'configs/benchmarks/resnet/r50_cifar100_224_e100_rmsprop.yaml'
    cfg.merge_from_file(config_file)
    model = ResNet(cfg)
    print(model)
    test_data(model, (1, 3, 224, 224), (1, 100))

    config_file = 'configs/benchmarks/resnet/rxt50_32x4d_cifar100_224_e100_rmsprop.yaml'
    cfg.merge_from_file(config_file)
    model = ResNet(cfg)
    print(model)
    test_data(model, (1, 3, 224, 224), (1, 100))

    config_file = 'configs/benchmarks/resnet/r50_torchvision_cifar100_224_e100_rmsprop.yaml'
    cfg.merge_from_file(config_file)
    model = build_torchvision_resnet(cfg)
    print(model)
    test_data(model, (1, 3, 224, 224), (1, 100))

    config_file = 'configs/benchmarks/resnet/rxt50_32x4d_torchvision_cifar100_224_e100_rmsprop.yaml'
    cfg.merge_from_file(config_file)
    model = build_torchvision_resnet(cfg)
    print(model)
    test_data(model, (1, 3, 224, 224), (1, 100))


def test_resnetd():
    config_file = 'configs/benchmarks/resnet/rd50_cifar100_224_e100_rmsprop.yaml'
    cfg.merge_from_file(config_file)
    model = ResNet(cfg)
    print(model)
    test_data(model, (1, 3, 224, 224), (1, 100))

    config_file = 'configs/benchmarks/resnet/rxtd50_32x4d_cifar100_224_e100_rmsprop.yaml'
    cfg.merge_from_file(config_file)
    model = ResNet(cfg)
    print(model)
    test_data(model, (1, 3, 224, 224), (1, 100))

    config_file = 'configs/benchmarks/resnet/rxtd50_32x4d_fast_avg_cifar100_224_e100_rmsprop.yaml'
    cfg.merge_from_file(config_file)
    model = ResNet(cfg)
    print(model)
    test_data(model, (1, 3, 224, 224), (1, 100))

    config_file = 'configs/benchmarks/resnet/rxtd50_32x4d_avg_cifar100_224_e100_rmsprop.yaml'
    cfg.merge_from_file(config_file)
    model = ResNet(cfg)
    print(model)
    test_data(model, (1, 3, 224, 224), (1, 100))


def test_sknet():
    config_file = 'configs/benchmarks/resnet/sknet50_cifar100_224_e100_rmsprop.yaml'
    cfg.merge_from_file(config_file)
    model = ResNet(cfg)
    print(model)
    test_data(model, (3, 3, 224, 224), (3, 100))


def test_resnest():
    config_file = 'configs/benchmarks/resnet/rstd50_2s2x40d_cifar100_224_e100_rmsprop.yaml'
    cfg.merge_from_file(config_file)
    model = ResNet(cfg)
    print(model)
    test_data(model, (3, 3, 224, 224), (3, 100))

    config_file = 'configs/benchmarks/resnet/rstd50_2s2x40d_fast_cifar100_224_e100_rmsprop.yaml'
    cfg.merge_from_file(config_file)
    model = ResNet(cfg)
    print(model)
    test_data(model, (3, 3, 224, 224), (3, 100))

    config_file = 'configs/benchmarks/resnet/rstd50_2s2x40d_fast_official_cifar100_224_e100_rmsprop.yaml'
    cfg.merge_from_file(config_file)
    model = ResNet(cfg)
    print(model)
    test_data(model, (3, 3, 224, 224), (3, 100))


if __name__ == '__main__':
    print('*' * 10 + ' resnet')
    test_resnet()
    print('*' * 10 + ' resnetd')
    test_resnetd()
    print('*' * 10 + ' sknet')
    test_sknet()
    print('*' * 10 + ' resnest')
    test_resnest()

    # print('*' * 10 + ' se resnet')
    # test_attention_resnet(with_attentions=(1, 1, 1, 1),
    #                       reduction=16,
    #                       attention_type='SqueezeAndExcitationBlock2D')
    # print('*' * 10 + ' nl resnet')
    # test_attention_resnet(with_attentions=(0, (1, 0, 1, 0), (1, 0, 1, 0, 1, 0), 0),
    #                       reduction=16,
    #                       attention_type='NonLocal2DEmbeddedGaussian')
    # print('*' * 10 + ' snl resnet')
    # test_attention_resnet(with_attentions=(0, (1, 0, 1, 0), (1, 0, 1, 0, 1, 0), 0),
    #                       reduction=16,
    #                       attention_type='SimplifiedNonLocal2DEmbeddedGaussian')
    # print('*' * 10 + ' gc resnet')
    # test_attention_resnet(with_attentions=(0, 1, 1, 1),
    #                       reduction=16,
    #                       attention_type='GlobalContextBlock2D')
    #
    # print('*' * 10 + ' se resnetd')
    # test_attention_resnetd(with_attentions=(1, 1, 1, 1),
    #                        reduction=16,
    #                        attention_type='SqueezeAndExcitationBlock2D')
    # print('*' * 10 + ' nl resnetd')
    # test_attention_resnetd(with_attentions=(0, (1, 0, 1, 0), (1, 0, 1, 0, 1, 0), 0),
    #                        reduction=16,
    #                        attention_type='NonLocal2DEmbeddedGaussian')
    # print('*' * 10 + ' snl resnetd')
    # test_attention_resnetd(with_attentions=(0, (1, 0, 1, 0), (1, 0, 1, 0, 1, 0), 0),
    #                        reduction=16,
    #                        attention_type='SimplifiedNonLocal2DEmbeddedGaussian')
    # print('*' * 10 + ' gc resnetd')
    # test_attention_resnetd(with_attentions=(0, 1, 1, 1),
    #                        reduction=16,
    #                        attention_type='GlobalContextBlock2D')
# --- tests/test_wrapper.py (JoyMonteiro/sympl, BSD-3-Clause) ---
from datetime import timedelta, datetime
import unittest

from sympl import (
    TendencyComponent, Stepper, DiagnosticComponent, UpdateFrequencyWrapper, ScalingWrapper,
    TimeDifferencingWrapper, DataArray, ImplicitTendencyComponent
)
import pytest
import numpy as np


class MockTendencyComponent(TendencyComponent):

    input_properties = None
    diagnostic_properties = None
    tendency_properties = None

    def __init__(
            self, input_properties, diagnostic_properties, tendency_properties,
            diagnostic_output, tendency_output, **kwargs):
        self.input_properties = input_properties
        self.diagnostic_properties = diagnostic_properties
        self.tendency_properties = tendency_properties
        self.diagnostic_output = diagnostic_output
        self.tendency_output = tendency_output
        self.times_called = 0
        self.state_given = None
        super(MockTendencyComponent, self).__init__(**kwargs)

    def array_call(self, state):
        self.times_called += 1
        self.state_given = state
        return self.tendency_output, self.diagnostic_output


class MockImplicitTendencyComponent(ImplicitTendencyComponent):

    input_properties = None
    diagnostic_properties = None
    tendency_properties = None

    def __init__(
            self, input_properties, diagnostic_properties, tendency_properties,
            diagnostic_output, tendency_output, **kwargs):
        self.input_properties = input_properties
        self.diagnostic_properties = diagnostic_properties
        self.tendency_properties = tendency_properties
        self.diagnostic_output = diagnostic_output
        self.tendency_output = tendency_output
        self.times_called = 0
        self.state_given = None
        self.timestep_given = None
        super(MockImplicitTendencyComponent, self).__init__(**kwargs)

    def array_call(self, state, timestep):
        self.times_called += 1
        self.state_given = state
        self.timestep_given = timestep
        return self.tendency_output, self.diagnostic_output


class MockDiagnosticComponent(DiagnosticComponent):

    input_properties = None
    diagnostic_properties = None

    def __init__(
            self, input_properties, diagnostic_properties, diagnostic_output,
            **kwargs):
        self.input_properties = input_properties
        self.diagnostic_properties = diagnostic_properties
        self.diagnostic_output = diagnostic_output
        self.times_called = 0
        self.state_given = None
        super(MockDiagnosticComponent, self).__init__(**kwargs)

    def array_call(self, state):
        self.times_called += 1
        self.state_given = state
        return self.diagnostic_output


class MockStepper(Stepper):

    input_properties = None
    diagnostic_properties = None
    output_properties = None

    def __init__(
            self, input_properties, diagnostic_properties, output_properties,
            diagnostic_output, state_output,
            **kwargs):
        self.input_properties = input_properties
        self.diagnostic_properties = diagnostic_properties
        self.output_properties = output_properties
        self.diagnostic_output = diagnostic_output
        self.state_output = state_output
        self.times_called = 0
        self.state_given = None
        self.timestep_given = None
        super(MockStepper, self).__init__(**kwargs)

    def array_call(self, state, timestep):
        self.times_called += 1
        self.state_given = state
        self.timestep_given = timestep
        return self.diagnostic_output, self.state_output


class MockEmptyPrognostic(MockTendencyComponent):

    def __init__(self, **kwargs):
        super(MockEmptyPrognostic, self).__init__(
            input_properties={},
            diagnostic_properties={},
            tendency_properties={},
            diagnostic_output={},
            tendency_output={},
            **kwargs
        )


class MockEmptyImplicitPrognostic(MockImplicitTendencyComponent):

    def __init__(self, **kwargs):
        super(MockEmptyImplicitPrognostic, self).__init__(
            input_properties={},
            diagnostic_properties={},
            tendency_properties={},
            diagnostic_output={},
            tendency_output={},
            **kwargs
        )


class MockEmptyDiagnostic(MockDiagnosticComponent):

    def __init__(self, **kwargs):
        super(MockEmptyDiagnostic, self).__init__(
            input_properties={},
            diagnostic_properties={},
            diagnostic_output={},
            **kwargs
        )


class MockEmptyImplicit(MockStepper):

    def __init__(self, **kwargs):
        super(MockEmptyImplicit, self).__init__(
            input_properties={},
            diagnostic_properties={},
            output_properties={},
            diagnostic_output={},
            state_output={},
            **kwargs
        )


class UpdateFrequencyBase(object):

    def get_component(self):
        raise NotImplementedError()

    def call_component(self, component, state):
        raise NotImplementedError()

    def test_set_update_frequency_calls_initially(self):
        component = UpdateFrequencyWrapper(self.get_component(), timedelta(hours=1))
        assert isinstance(component, self.component_type)
        state = {'time': timedelta(hours=0)}
        result = self.call_component(component, state)
        assert component.times_called == 1

    def test_set_update_frequency_does_not_repeat_call_at_same_timedelta(self):
        component = UpdateFrequencyWrapper(self.get_component(), timedelta(hours=1))
        assert isinstance(component, self.component_type)
        state = {'time': timedelta(hours=0)}
        result = self.call_component(component, state)
        result = self.call_component(component, state)
        assert component.times_called == 1

    def test_set_update_frequency_does_not_repeat_call_at_same_datetime(self):
        component = UpdateFrequencyWrapper(self.get_component(), timedelta(hours=1))
        assert isinstance(component, self.component_type)
        state = {'time': datetime(2010, 1, 1)}
        result = self.call_component(component, state)
        result = self.call_component(component, state)
        assert component.times_called == 1

    def test_set_update_frequency_updates_result_when_equal(self):
        component = UpdateFrequencyWrapper(self.get_component(), timedelta(hours=1))
        assert isinstance(component, self.component_type)
        result = self.call_component(component, {'time': timedelta(hours=0)})
        result = self.call_component(component, {'time': timedelta(hours=1)})
        assert component.times_called == 2

    def test_set_update_frequency_updates_result_when_greater(self):
        component = UpdateFrequencyWrapper(self.get_component(), timedelta(hours=1))
        assert isinstance(component, self.component_type)
        result = self.call_component(component, {'time': timedelta(hours=0)})
        result = self.call_component(component, {'time': timedelta(hours=2)})
        assert component.times_called == 2

    def test_set_update_frequency_does_not_update_when_less(self):
        component = UpdateFrequencyWrapper(self.get_component(), timedelta(hours=1))
        assert isinstance(component, self.component_type)
        result = self.call_component(component, {'time': timedelta(hours=0)})
        result = self.call_component(component, {'time': timedelta(minutes=59)})
        assert component.times_called == 1


class PrognosticUpdateFrequencyTests(unittest.TestCase, UpdateFrequencyBase):

    component_type = TendencyComponent

    def get_component(self):
        return MockEmptyPrognostic()

    def call_component(self, component, state):
        return component(state)


class ImplicitPrognosticUpdateFrequencyTests(unittest.TestCase, UpdateFrequencyBase):

    component_type = ImplicitTendencyComponent

    def get_component(self):
        return MockEmptyImplicitPrognostic()

    def call_component(self, component, state):
        return component(state, timestep=timedelta(hours=1))


class ImplicitUpdateFrequencyTests(unittest.TestCase, UpdateFrequencyBase):

    component_type = Stepper

    def get_component(self):
        return MockEmptyImplicit()

    def call_component(self, component, state):
        return component(state, timedelta(minutes=1))


class DiagnosticUpdateFrequencyTests(unittest.TestCase, UpdateFrequencyBase):

    component_type = DiagnosticComponent

    def get_component(self):
        return MockEmptyDiagnostic()

    def call_component(self, component, state):
        return component(state)


def test_scaled_component_wrong_type():
    class WrongObject(object):
        def __init__(self):
            self.a = 1

    wrong_component = WrongObject()
    with pytest.raises(TypeError):
        ScalingWrapper(wrong_component)


class ScalingInputMixin(object):

    def test_inputs_no_scaling(self):
        self.input_properties = {
            'input1': {
                'dims': ['dim1'],
                'units': 'm',
            },
        }
        state = {
            'time': timedelta(0),
            'input1': DataArray(
                np.ones([10]),
                dims=['dim1'],
                attrs={'units': 'm'}
            ),
        }
        base_component = self.get_component()
        component = ScalingWrapper(base_component, input_scale_factors={})
        assert isinstance(component, self.component_type)
        self.call_component(component, state)
        assert base_component.state_given.keys() == state.keys()
        assert np.all(base_component.state_given['input1'] == state['input1'].values)

    def test_inputs_one_scaling(self):
        self.input_properties = {
            'input1': {
                'dims': ['dim1'],
                'units': 'm',
            },
            'input2': {
                'dims': ['dim1'],
                'units': 'm',
            },
        }
        state = {
            'time': timedelta(0),
            'input1': DataArray(
                np.ones([10]),
                dims=['dim1'],
                attrs={'units': 'm'}
            ),
            'input2': DataArray(
                np.ones([10]),
                dims=['dim1'],
                attrs={'units': 'm'}
            ),
        }
        base_component = self.get_component()
        component = ScalingWrapper(
            base_component,
            input_scale_factors={
                'input1': 10.
            })
        assert isinstance(component, self.component_type)
        self.call_component(component, state)
        assert base_component.state_given.keys() == state.keys()
        assert np.all(base_component.state_given['input1'] == state['input1'].values * 10.)
        assert np.all(base_component.state_given['input2'] == state['input2'].values)

    def test_inputs_two_scalings(self):
        self.input_properties = {
            'input1': {
                'dims': ['dim1'],
                'units': 'm',
            },
            'input2': {
                'dims': ['dim1'],
                'units': 'm',
            },
        }
        state = {
            'time': timedelta(0),
            'input1': DataArray(
                np.ones([10]),
                dims=['dim1'],
                attrs={'units': 'm'}
            ),
            'input2': DataArray(
                np.ones([10]),
                dims=['dim1'],
                attrs={'units': 'm'}
            ),
        }
        base_component = self.get_component()
        component = ScalingWrapper(
            base_component,
            input_scale_factors={
                'input1': 10.,
                'input2': 5.,
            })
        assert isinstance(component, self.component_type)
        self.call_component(component, state)
        assert base_component.state_given.keys() == state.keys()
        assert np.all(base_component.state_given['input1'] == 10.)
        assert np.all(base_component.state_given['input2'] == 5.)

    def test_inputs_one_scaling_with_unit_conversion(self):
        self.input_properties = {
            'input1': {
                'dims': ['dim1'],
                'units': 'm',
            },
            'input2': {
                'dims': ['dim1'],
                'units': 'm',
            },
        }
        state = {
            'time': timedelta(0),
            'input1': DataArray(
                np.ones([10]),
                dims=['dim1'],
                attrs={'units': 'km'}
            ),
            'input2': DataArray(
                np.ones([10]),
                dims=['dim1'],
                attrs={'units': 'm'}
            ),
        }
        base_component = self.get_component()
        component = ScalingWrapper(
            base_component,
            input_scale_factors={
                'input1': 0.5
            })
        assert isinstance(component, self.component_type)
        self.call_component(component, state)
        assert base_component.state_given.keys() == state.keys()
        assert np.all(base_component.state_given['input1'] == 500.)
        assert np.all(base_component.state_given['input2'] == 1.)


class ScalingOutputMixin(object):

    def test_output_no_scaling(self):
        self.output_properties = {
            'diag1': {
                'dims': ['dim1'],
                'units': 'm',
            }
        }
        self.output_state = {
            'diag1': np.ones([10])
        }
        base_component = self.get_component()
        component = ScalingWrapper(
            base_component,
            output_scale_factors={},
        )
        assert isinstance(component, self.component_type)
        state = {'time': timedelta(0)}
        outputs = self.get_outputs(self.call_component(component, state))
        assert outputs.keys() == self.output_state.keys()
        assert np.all(outputs['diag1'] == 1.)

    def test_output_one_scaling(self):
        self.output_properties = {
            'diag1': {
                'dims': ['dim1'],
                'units': 'm',
            }
        }
        self.output_state = {
            'diag1': np.ones([10])
        }
        base_component = self.get_component()
        component = ScalingWrapper(
            base_component,
            output_scale_factors={
                'diag1': 10.,
            },
        )
        assert isinstance(component, self.component_type)
        state = {'time': timedelta(0)}
        outputs = self.get_outputs(
            self.call_component(component, state))
        assert outputs.keys() == self.output_state.keys()
        assert np.all(outputs['diag1'] == 10.)

    def test_output_no_scaling_when_input_scaled(self):
        self.input_properties = {
            'diag1': {
                'dims': ['dim1'],
                'units': 'm'
            }
        }
        self.output_properties = {
            'diag1': {
                'dims': ['dim1'],
                'units': 'm',
            }
        }
        self.output_state = {
            'diag1': np.ones([10])
        }
        base_component = self.get_component()
        component = ScalingWrapper(
            base_component,
            input_scale_factors={
                'diag1': 10.,
            },
        )
        assert isinstance(component, self.component_type)
        state = {
            'time': timedelta(0),
            'diag1': DataArray(
                np.ones([10]),
                dims=['dim1'],
                attrs={'units': 'm'}
            )
        }
        outputs = self.get_outputs(
            self.call_component(component, state))
        assert outputs.keys() == self.output_state.keys()
        assert np.all(outputs['diag1'] == 1.)


class ScalingDiagnosticMixin(object):

    def test_diagnostic_no_scaling(self):
        self.diagnostic_properties = {
            'diag1': {
                'dims': ['dim1'],
                'units': 'm',
            }
        }
        self.diagnostic_output = {
            'diag1': np.ones([10])
        }
        base_component = self.get_component()
        component = ScalingWrapper(
            base_component,
            diagnostic_scale_factors={},
        )
        assert isinstance(component, self.component_type)
        state = {'time': timedelta(0)}
        diagnostics = self.get_diagnostics(self.call_component(component, state))
        assert diagnostics.keys() == self.diagnostic_output.keys()
        assert np.all(diagnostics['diag1'] == 1.)

    def test_diagnostic_one_scaling(self):
        self.diagnostic_properties = {
            'diag1': {
                'dims': ['dim1'],
                'units': 'm',
            }
        }
        self.diagnostic_output = {
            'diag1': np.ones([10])
        }
        base_component = self.get_component()
        component = ScalingWrapper(
            base_component,
            diagnostic_scale_factors={
                'diag1': 10.,
            },
        )
        assert isinstance(component, self.component_type)
        state = {'time': timedelta(0)}
        diagnostics = self.get_diagnostics(
            self.call_component(component, state))
        assert diagnostics.keys() == self.diagnostic_output.keys()
        assert np.all(diagnostics['diag1'] == 10.)

    def test_diagnostic_no_scaling_when_input_scaled(self):
        self.input_properties = {
            'diag1': {
                'dims': ['dim1'],
                'units': 'm'
            }
        }
        self.diagnostic_properties = {
            'diag1': {
                'dims': ['dim1'],
                'units': 'm',
            }
        }
        self.diagnostic_output = {
            'diag1': np.ones([10])
        }
        base_component = self.get_component()
        component = ScalingWrapper(
            base_component,
            input_scale_factors={
                'diag1': 10.,
            },
        )
        assert isinstance(component, self.component_type)
        state = {
            'time': timedelta(0),
            'diag1': DataArray(
                np.ones([10]),
                dims=['dim1'],
                attrs={'units': 'm'}
            )
        }
        diagnostics = self.get_diagnostics(
            self.call_component(component, state))
        assert diagnostics.keys() == self.diagnostic_output.keys()
        assert np.all(diagnostics['diag1'] == 1.)


class ScalingTendencyMixin(object):

    def test_tendency_no_scaling(self):
        self.tendency_properties = {
            'diag1': {
                'dims': ['dim1'],
                'units': 'm',
            }
        }
        self.tendency_output = {
            'diag1': np.ones([10])
        }
        base_component = self.get_component()
        component = ScalingWrapper(
            base_component,
            tendency_scale_factors={},
        )
        assert isinstance(component, self.component_type)
        state = {'time': timedelta(0)}
        tendencies = self.get_tendencies(self.call_component(component, state))
        assert tendencies.keys() == self.tendency_output.keys()
        assert np.all(tendencies['diag1'] == 1.)

    def test_tendency_one_scaling(self):
        self.tendency_properties = {
            'diag1': {
                'dims': ['dim1'],
                'units': 'm',
            }
        }
        self.tendency_output = {
            'diag1': np.ones([10])
        }
        base_component = self.get_component()
        component = ScalingWrapper(
            base_component,
            tendency_scale_factors={
                'diag1': 10.,
            },
        )
        assert isinstance(component, self.component_type)
        state = {'time': timedelta(0)}
        tendencies = self.get_tendencies(
            self.call_component(component, state))
        assert tendencies.keys() == self.tendency_output.keys()
        assert np.all(tendencies['diag1'] == 10.)

    def test_tendency_no_scaling_when_input_scaled(self):
        self.input_properties = {
            'diag1': {
                'dims': ['dim1'],
                'units': 'm'
            }
        }
        self.tendency_properties = {
            'diag1': {
                'dims': ['dim1'],
                'units': 'm/s',
            }
        }
        self.tendency_output = {
            'diag1': np.ones([10])
        }
        base_component = self.get_component()
        component = ScalingWrapper(
            base_component,
            input_scale_factors={
                'diag1': 10.,
            },
        )
        assert isinstance(component, self.component_type)
        state = {
            'time': timedelta(0),
            'diag1': DataArray(
                np.ones([10]),
                dims=['dim1'],
                attrs={'units': 'm'}
            )
        }
        tendencies = self.get_tendencies(
            self.call_component(component, state))
        assert tendencies.keys() == self.tendency_output.keys()
        assert np.all(tendencies['diag1'] == 1.)


class DiagnosticScalingTests(
        unittest.TestCase, ScalingInputMixin, ScalingDiagnosticMixin):

    component_type = DiagnosticComponent

    def setUp(self):
        self.input_properties = {}
        self.diagnostic_properties = {}
        self.diagnostic_output = {}

    def get_component(self):
        return MockDiagnosticComponent(
            self.input_properties,
            self.diagnostic_properties,
            self.diagnostic_output
        )

    def get_diagnostics(self, output):
        return output

    def call_component(self, component, state):
        return component(state)


class PrognosticScalingTests(
        unittest.TestCase, ScalingInputMixin, ScalingDiagnosticMixin, ScalingTendencyMixin):

    component_type = TendencyComponent

    def setUp(self):
        self.input_properties = {}
        self.diagnostic_properties = {}
        self.tendency_properties = {}
        self.diagnostic_output = {}
        self.tendency_output = {}

    def get_component(self):
        return MockTendencyComponent(
            self.input_properties,
            self.diagnostic_properties,
            self.tendency_properties,
            self.diagnostic_output,
            self.tendency_output,
        )

    def get_diagnostics(self, output):
        return output[1]

    def get_tendencies(self, output):
        return output[0]

    def call_component(self, component, state):
        return component(state)


class ImplicitPrognosticScalingTests(
        unittest.TestCase, ScalingInputMixin, ScalingDiagnosticMixin,
        ScalingTendencyMixin):

    component_type = ImplicitTendencyComponent

    def setUp(self):
        self.input_properties = {}
        self.diagnostic_properties = {}
        self.tendency_properties = {}
        self.diagnostic_output = {}
        self.tendency_output = {}

    def get_component(self):
        return MockImplicitTendencyComponent(
            self.input_properties,
            self.diagnostic_properties,
            self.tendency_properties,
            self.diagnostic_output,
            self.tendency_output,
        )

    def get_diagnostics(self, output):
        return output[1]

    def get_tendencies(self, output):
        return output[0]

    def call_component(self, component, state):
        return component(state, timedelta(hours=1))


class ImplicitScalingTests(
        unittest.TestCase, ScalingInputMixin, ScalingDiagnosticMixin,
        ScalingOutputMixin):

    component_type = Stepper

    def setUp(self):
        self.input_properties = {}
        self.diagnostic_properties = {}
        self.output_properties = {}
        self.diagnostic_output = {}
        self.output_state = {}

    def get_component(self):
        return MockStepper(
            self.input_properties,
            self.diagnostic_properties,
            self.output_properties,
            self.diagnostic_output,
            self.output_state,
        )

    def get_diagnostics(self, output):
        return output[0]

    def get_outputs(self, output):
        return output[1]

    def call_component(self, component, state):
        return component(state, timedelta(hours=1))


if __name__ == '__main__':
    pytest.main([__file__])
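The `UpdateFrequencyWrapper` behaviour exercised by the tests above — call the wrapped component at most once per interval of model time, otherwise return the cached result — can be sketched without sympl. The class below is a hypothetical stand-in written only to illustrate the caching contract the tests assert; it is not sympl's implementation.

```python
from datetime import timedelta


class UpdateFrequencySketch:
    """Call `component` at most once per `interval` of model time."""

    def __init__(self, component, interval):
        self.component = component
        self.interval = interval
        self._last_time = None
        self._cached = None

    def __call__(self, state):
        now = state['time']
        # Call through on the first invocation, or once the interval has
        # elapsed; otherwise serve the cached result.
        if self._last_time is None or now >= self._last_time + self.interval:
            self._cached = self.component(state)
            self._last_time = now
        return self._cached


calls = []
wrapped = UpdateFrequencySketch(
    lambda state: calls.append(state['time']) or len(calls),
    timedelta(hours=1))
wrapped({'time': timedelta(hours=0)})     # first call goes through
wrapped({'time': timedelta(minutes=59)})  # within the interval: cached
wrapped({'time': timedelta(hours=1)})     # interval elapsed: calls again
print(len(calls))  # 2
```

This mirrors the test cases directly: one underlying call initially, no repeat at 59 minutes, and a second call once the hour has passed.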
# --- metaworld/envs/env_util.py (pkol/metaworld, MIT) ---
import os

ENV_ASSET_DIR = os.path.join(os.path.dirname(__file__), 'assets')


def get_asset_full_path(file_name):
    return os.path.join(ENV_ASSET_DIR, file_name)
# --- dictio_test.py (jacmba/dictio-api, MIT) ---
import unittest

class test_dictio(unittest.TestCase):
    pass
# --- logging_opentracing/__init__.py (kornerc/opentracing-logging-python, MIT) ---
from .handler import OpenTracingHandler
from .formatter import OpenTracingFormatter, OpenTracingFormatterABC
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
# --- src/nnpackage/train/loss.py (AlexanderDKazakov/schnetpack, MIT) ---
import torch

__all__ = ["build_mse_loss", "build_mae_loss", "build_sae_loss"]


class LossFnError(Exception):
    pass


def build_mse_loss(properties, loss_tradeoff=None):
    """
    Build the mean squared error loss function.

    Args:
        properties (list): mapping between the model properties and the dataset properties
        loss_tradeoff (list or None): multiply loss value of property with tradeoff factor

    Returns:
        mean squared error loss function
    """
    if loss_tradeoff is None:
        loss_tradeoff = [1] * len(properties)
    if len(properties) != len(loss_tradeoff):
        raise LossFnError("loss_tradeoff must have same length as properties!")

    def loss_fn(batch, result):
        loss = 0.0
        for prop, factor in zip(properties, loss_tradeoff):
            diff = batch[prop] - result[prop]
            diff = diff ** 2
            err_sq = factor * torch.sum(diff)
            loss += err_sq
        return loss

    return loss_fn


def build_sae_loss(properties, loss_tradeoff=None):
    """
    Build the sum abs error loss function.

    Args:
        properties (list): mapping between the model properties and the dataset properties
        loss_tradeoff (list or None): multiply loss value of property with tradeoff factor

    Returns:
        sum abs error loss function
    """
    if loss_tradeoff is None:
        loss_tradeoff = [1] * len(properties)
    if len(properties) != len(loss_tradeoff):
        raise LossFnError("loss_tradeoff must have same length as properties!")

    def loss_fn(batch, result):
        loss = 0.0
        for prop, factor in zip(properties, loss_tradeoff):
            diff = batch[prop] - result[prop]
            err = factor * torch.sum(torch.abs(diff))
            loss += err
        return loss

    return loss_fn


def build_mae_loss(properties, loss_tradeoff=None):
    """
    Build the mean absolute error loss function.

    Args:
        properties (list): mapping between the model properties and the dataset properties
        loss_tradeoff (list or None): multiply loss value of property with tradeoff factor

    Returns:
        mean absolute error loss function
    """
    if loss_tradeoff is None:
        loss_tradeoff = [1] * len(properties)
    if len(properties) != len(loss_tradeoff):
        raise LossFnError("loss_tradeoff must have same length as properties!")

    def loss_fn(batch, result):
        loss = 0.0
        for prop, factor in zip(properties, loss_tradeoff):
            diff = batch[prop] - result[prop]
            err_sq = factor * torch.mean(torch.abs(diff))
            loss += err_sq
        return loss

    return loss_fn
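All three builders share one pattern: validate the tradeoff list, then return a closure that sums per-property errors weighted by their tradeoff factors. A torch-free sketch of that pattern (a hypothetical `build_weighted_sq_loss`, with plain floats standing in for tensors):

```python
def build_weighted_sq_loss(properties, loss_tradeoff=None):
    # Same closure pattern as build_mse_loss above, minus the torch ops.
    if loss_tradeoff is None:
        loss_tradeoff = [1] * len(properties)
    if len(properties) != len(loss_tradeoff):
        raise ValueError("loss_tradeoff must have same length as properties!")

    def loss_fn(batch, result):
        loss = 0.0
        for prop, factor in zip(properties, loss_tradeoff):
            diff = batch[prop] - result[prop]
            loss += factor * diff ** 2
        return loss

    return loss_fn


loss_fn = build_weighted_sq_loss(['energy', 'forces'], loss_tradeoff=[1.0, 10.0])
print(loss_fn({'energy': 2.0, 'forces': 1.0},
              {'energy': 1.0, 'forces': 0.5}))  # 1.0 + 10.0 * 0.25 = 3.5
```

Returning a closure rather than a plain value lets the trainer treat every loss variant through the same `loss_fn(batch, result)` interface, regardless of which properties and weights were configured.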
# --- core/exploitdb.py (GDGSNF/PXXTF, MIT) ---
import os
import time
import sys
import core
R = '\033[31m' # Red
N = '\033[1;37m' # White
G = '\033[32m' # Green
O = '\033[0;33m' # Orange
B = '\033[1;34m' #Blue
E = '\033[0m' # End
def clean():
os.system("clear")
def explo():
os.system('searchsploit exploits .txt')
list = eval(
input("" + N + "Pentest>> (" + B + "modules/exploitdb)(" + R +
"exploitdb/exploits" + G +
"(open_file (ex:exploits/path/file.txt))" + N + "): "))
print(("" + G + ""))
print(('\nSelect "open" ' + list))
print('Command Description')
print('-------- ------------')
print(('open open_file => ' + list))
print('back back ')
print('exit exit PTF\n')
pl = eval(
input("" + N + "Pentest>> (" + B + "modules/exploitdb)(" + R +
"exploitdb/exploits" + N + "): "))
if pl == "back":
core.menu.exploit_db()
elif pl == 'open':
time.sleep(3)
print(("" + G + ""))
os.system('figlet open && cat /usr/share/exploitdb/%s' % (list))
print()
core.menu.exploit_db()
elif pl == 'clear':
clean()
explo()
elif pl == 'exit':
print()
print(("" + G + "Thanks for using PTF"))
print()
exit()
else:
print(("Wrong Command => ", pl))
print(("" + N + "" + B + "[" + R + "!" + B + "] " + N +
"Please enter 'show options'"))
explo()
def shel():
os.system('figlet shellcode')
time.sleep(5)
os.system('searchsploit shellcodes .txt')
list = eval(
input("" + N + "Pentest>> (" + B + "modules/exploitdb)(" + R +
"exploitdb/shellcode" + G +
"(open_file (ex:shellcodes/path/file.txt))" + N + "): "))
print(("" + G + ""))
print(('\nSelect "open" ' + list))
print('Command Description')
print('-------- ------------')
print(('open open_file => ' + list))
print('back back ')
print('exit exit PTF\n')
ope = eval(
input("" + N + "Pentest>> (" + B + "modules/exploitdb)(" + R +
"exploitdb/shellcode" + N + "): "))
if ope == "back":
core.menu.exploit_db()
elif ope == 'open':
time.sleep(3)
print(("" + G + ""))
os.system('figlet open && cat /usr/share/exploitdb/%s' % (list))
print()
core.menu.exploit_db()
elif ope == 'clear':
clean()
shel()
elif ope == 'exit':
print()
print(("" + G + "Thanks for using PTF"))
print()
exit()
else:
print(("Wrong Command => ", ope))
print(("" + N + "" + B + "[" + R + "!" + B + "] " + N +
"Please enter 'show options'"))
shel()
def searchsploit():
    list = input("" + N + "Pentest>> (" + B + "modules/exploitdb)(" + R +
                 "searchsploit" + G + "(input search cve/vulnerability)" + N +
                 "): ")
    time.sleep(6)
    os.system('searchsploit %s' % (list))
    list = input("" + N + "Pentest>> (" + B + "modules/exploitdb)(" + R +
                 "searchsploit" + G + "(open_file (ex:/path/path/file))" + N +
                 "): ")
    print("" + G + "")
    print('\nSelect "open" ' + list)
    print('Command     Description')
    print('--------    ------------')
    print('open        open_file => ' + list)
    print('copy        copy file directory ' + list)
    print('back        back')
    print('exit        exit PTF\n')
    ope = input("" + N + "Pentest>> (" + B + "modules/exploitdb)(" + R +
                "searchsploit" + N + "): ")
    if ope == "back":
        core.menu.exploit_db()
    elif ope == 'open':
        time.sleep(3)
        print("" + G + "")
        os.system('figlet open && cat /usr/share/exploitdb/%s' % (list))
        print()
        core.menu.exploit_db()
    elif ope == 'copy':
        dir = input("" + N + "Pentest>> (" + B + "modules/exploitdb)(" + R +
                    "searchsploit" + G + "(Select Dir (/home/...))" + N + "): ")
        time.sleep(3)
        print("" + G + "")
        os.system('cp /usr/share/exploitdb/%s %s' % (list, dir))
        print()
        print('copy success => ' + dir)
        core.menu.exploit_db()
    elif ope == 'clear':
        clean()
        searchsploit()  # was shel(): 'clear' should redraw this menu, not the shellcode one
    elif ope == 'exit':
        print()
        print("" + G + "Thanks for using PTF")
        print()
        exit()
    else:
        print("Wrong Command => " + ope)
        print("" + N + "" + B + "[" + R + "!" + B + "] " + N +
              "Please enter 'show options'")
        searchsploit()  # was shel(), same copy-paste slip
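The menu functions above originally read every command with `eval(input(...))`, a mechanical 2to3 translation of Python 2's `input()` that evaluates whatever the user types as Python code. A minimal standalone sketch of the safe pattern (`read_command` and the simulated reader are illustrative names, not part of PTF):

```python
# Sketch: read a menu command as a plain string instead of eval()-ing it.
# 2to3 rewrites Python 2's input() as eval(input()), which turns any typed
# expression -- e.g. __import__('os').system('...') -- into executed code.
def read_command(prompt, reader=input):
    """Return the user's command, normalised, never evaluated."""
    return reader(prompt).strip().lower()


if __name__ == "__main__":
    # Simulate a user typing "OPEN " at the prompt.
    cmd = read_command("Pentest>> ", reader=lambda _: "OPEN ")
    print(cmd)  # -> open
```

In the functions above this simply means dropping the `eval(...)` wrapper around each `input(...)` call.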
# --- shakenfist/tests/test_external_api.py (repo: mcarden/shakenfist, Apache-2.0) ---
import base64
import bcrypt
import json
import mock
import testtools
from shakenfist import config
from shakenfist.external_api import app as external_api
from shakenfist import util
class FakeResponse(object):
def __init__(self, status_code, text):
self.status_code = status_code
self.text = text
def json(self):
return json.loads(self.text)
class FakeScheduler(object):
def place_instance(self, *args, **kwargs):
return config.parsed.get('NODE_NAME')
def _encode_key(key):
return bcrypt.hashpw(key.encode('utf-8'), bcrypt.gensalt())
class AuthTestCase(testtools.TestCase):
def setUp(self):
super(AuthTestCase, self).setUp()
external_api.TESTING = True
external_api.app.testing = True
external_api.app.debug = False
self.client = external_api.app.test_client()
def test_post_auth_no_args(self):
resp = self.client.post('/auth', data=json.dumps({}))
self.assertEqual(400, resp.status_code)
self.assertEqual(
{
'error': 'missing namespace in request',
'status': 400
},
resp.get_json())
def test_post_auth_no_key(self):
resp = self.client.post(
'/auth', data=json.dumps({'namespace': 'banana'}))
self.assertEqual(400, resp.status_code)
self.assertEqual(
{
'error': 'missing key in request',
'status': 400
},
resp.get_json())
@mock.patch('shakenfist.external_api.app.Auth._get_keys',
return_value=(None, [_encode_key('cheese')]))
def test_post_auth(self, mock_get_keys):
resp = self.client.post(
'/auth', data=json.dumps({'namespace': 'banana', 'key': 'cheese'}))
self.assertEqual(200, resp.status_code)
self.assertIn('access_token', resp.get_json())
@mock.patch('shakenfist.external_api.app.Auth._get_keys',
return_value=('cheese', [_encode_key('bacon')]))
def test_post_auth_not_authorized(self, mock_get_keys):
resp = self.client.post(
'/auth', data=json.dumps({'namespace': 'banana', 'key': 'hamster'}))
self.assertEqual(401, resp.status_code)
self.assertEqual(
{
'error': 'unauthorized',
'status': 401
},
resp.get_json())
@mock.patch('shakenfist.etcd.get',
return_value={
'service_key': 'cheese',
'keys': {
'key1': str(base64.b64encode(_encode_key('bacon')), 'utf-8'),
'key2': str(base64.b64encode(_encode_key('sausage')), 'utf-8')
}
})
def test_post_auth_service_key(self, mock_get):
resp = self.client.post(
'/auth', data=json.dumps({'namespace': 'banana', 'key': 'cheese'}))
self.assertEqual(200, resp.status_code)
self.assertIn('access_token', resp.get_json())
class ExternalApiTestCase(testtools.TestCase):
    def setUp(self):
        super(ExternalApiTestCase, self).setUp()

        self.add_event = mock.patch(
            'shakenfist.db.add_event')
        self.mock_add_event = self.add_event.start()

        self.scheduler = mock.patch(
            'shakenfist.scheduler.Scheduler', FakeScheduler)
        self.mock_scheduler = self.scheduler.start()

        external_api.TESTING = True
        external_api.app.testing = True
        external_api.app.debug = False
        self.client = external_api.app.test_client()

        # Make a fake auth token
        self.get_keys = mock.patch(
            'shakenfist.external_api.app.Auth._get_keys',
            return_value=('foo', ['bar'])
        )
        self.mock_get_keys = self.get_keys.start()
        resp = self.client.post(
            '/auth', data=json.dumps({'namespace': 'system', 'key': 'foo'}))
        self.assertEqual(200, resp.status_code)
        self.auth_header = 'Bearer %s' % resp.get_json()['access_token']

    def test_get_root(self):
        resp = self.client.get('/')
        self.assertEqual('Shaken Fist REST API service',
                         resp.get_data().decode('utf-8'))
        self.assertEqual(200, resp.status_code)
        self.assertEqual('text/plain; charset=utf-8', resp.content_type)
    def test_auth_add_key_missing_args(self):
        resp = self.client.post('/auth/namespaces',
                                headers={'Authorization': self.auth_header},
                                data=json.dumps({}))
        self.assertEqual(400, resp.status_code)
        self.assertEqual(
            {
                'error': 'no namespace specified',
                'status': 400
            },
            resp.get_json())

    @mock.patch('shakenfist.db.get_lock')
    @mock.patch('shakenfist.etcd.get', return_value=None)
    @mock.patch('shakenfist.etcd.put')
    def test_auth_add_key_missing_keyname(self, mock_put, mock_get, mock_lock):
        resp = self.client.post('/auth/namespaces',
                                headers={'Authorization': self.auth_header},
                                data=json.dumps({
                                    'namespace': 'foo'
                                }))
        self.assertEqual(200, resp.status_code)
        self.assertEqual('foo', resp.get_json())

    @mock.patch('shakenfist.db.get_lock')
    @mock.patch('shakenfist.etcd.get', return_value=None)
    @mock.patch('shakenfist.etcd.put')
    def test_auth_add_key_missing_key(self, mock_put, mock_get, mock_lock):
        resp = self.client.post('/auth/namespaces',
                                headers={'Authorization': self.auth_header},
                                data=json.dumps({
                                    'namespace': 'foo',
                                    'key_name': 'bernard'
                                }))
        self.assertEqual(400, resp.status_code)
        self.assertEqual(
            {
                'error': 'no key specified',
                'status': 400
            },
            resp.get_json())
    @mock.patch('shakenfist.db.get_lock')
    @mock.patch('shakenfist.etcd.get', return_value=None)
    def test_auth_add_key_illegal_keyname(self, mock_get, mock_lock):
        resp = self.client.post('/auth/namespaces',
                                headers={'Authorization': self.auth_header},
                                data=json.dumps({
                                    'namespace': 'foo',
                                    'key_name': 'service_key',
                                    'key': 'cheese'
                                }))
        self.assertEqual(
            {
                'error': 'illegal key name',
                'status': 403
            },
            resp.get_json())
        self.assertEqual(403, resp.status_code)

    @mock.patch('shakenfist.db.get_lock')
    @mock.patch('shakenfist.etcd.get', return_value=None)
    @mock.patch('shakenfist.etcd.put')
    @mock.patch('bcrypt.hashpw', return_value='terminator'.encode('utf-8'))
    def test_auth_add_key_new_namespace(self, mock_hashpw, mock_put, mock_get, mock_lock):
        resp = self.client.post('/auth/namespaces',
                                headers={'Authorization': self.auth_header},
                                data=json.dumps({
                                    'namespace': 'foo',
                                    'key_name': 'bernard',
                                    'key': 'cheese'
                                }))
        self.assertEqual(200, resp.status_code)
        self.assertEqual('foo', resp.get_json())
        mock_put.assert_called_with(
            'namespace', None, 'foo',
            {'name': 'foo', 'keys': {'bernard': 'dGVybWluYXRvcg=='}})

    @mock.patch('shakenfist.etcd.get_all',
                return_value=[
                    {'name': 'aaa'}, {'name': 'bbb'}, {'name': 'ccc'}
                ])
    def test_get_namespaces(self, mock_get_all):
        resp = self.client.get('/auth/namespaces',
                               headers={'Authorization': self.auth_header})
        self.assertEqual(200, resp.status_code)
        self.assertEqual(['aaa', 'bbb', 'ccc'], resp.get_json())
    def test_delete_namespace_missing_args(self):
        resp = self.client.delete('/auth/namespaces',
                                  headers={'Authorization': self.auth_header})
        self.assertEqual(405, resp.status_code)
        self.assertEqual(
            {
                'message': 'The method is not allowed for the requested URL.'
            },
            resp.get_json())

    def test_delete_namespace_system(self):
        resp = self.client.delete('/auth/namespaces/system',
                                  headers={'Authorization': self.auth_header})
        self.assertEqual(403, resp.status_code)
        self.assertEqual(
            {
                'error': 'you cannot delete the system namespace',
                'status': 403
            },
            resp.get_json())

    @mock.patch('shakenfist.db.get_instances',
                return_value=[{'uuid': '123', 'state': 'created'}])
    def test_delete_namespace_with_instances(self, mock_get_instances):
        resp = self.client.delete('/auth/namespaces/foo',
                                  headers={'Authorization': self.auth_header})
        self.assertEqual(400, resp.status_code)
        self.assertEqual(
            {
                'error': 'you cannot delete a namespace with instances',
                'status': 400
            },
            resp.get_json())

    @mock.patch('shakenfist.db.get_instances', return_value=[])
    @mock.patch('shakenfist.db.get_networks',
                return_value=[{'uuid': '123', 'state': 'created'}])
    def test_delete_namespace_with_networks(self, mock_get_networks, mock_get_instances):
        resp = self.client.delete('/auth/namespaces/foo',
                                  headers={'Authorization': self.auth_header})
        self.assertEqual(400, resp.status_code)
        self.assertEqual(
            {
                'error': 'you cannot delete a namespace with networks',
                'status': 400
            },
            resp.get_json())

    @mock.patch('shakenfist.db.get_instances',
                return_value=[{'uuid': '123', 'state': 'deleted'}])
    @mock.patch('shakenfist.db.get_networks',
                return_value=[{'uuid': '123', 'state': 'deleted'}])
    @mock.patch('shakenfist.db.hard_delete_instance')
    @mock.patch('shakenfist.db.hard_delete_network')
    @mock.patch('shakenfist.etcd.delete')
    @mock.patch('shakenfist.db.get_lock')
    def test_delete_namespace_with_deleted(self, mock_lock, mock_etcd_delete,
                                           mock_hd_network, mock_hd_instance,
                                           mock_get_networks, mock_get_instances):
        resp = self.client.delete('/auth/namespaces/foo',
                                  headers={'Authorization': self.auth_header})
        self.assertEqual(None, resp.get_json())
        self.assertEqual(200, resp.status_code)
        mock_hd_instance.assert_called()
        mock_hd_network.assert_called()
        mock_etcd_delete.assert_called()

    def test_delete_namespace_key_missing_args(self):
        resp = self.client.delete('/auth/namespaces/system/',
                                  headers={'Authorization': self.auth_header})
        self.assertEqual(404, resp.status_code)
        self.assertEqual(None, resp.get_json())
    @mock.patch('shakenfist.db.get_lock')
    @mock.patch('shakenfist.etcd.get', return_value={'keys': {}})
    def test_delete_namespace_key_missing_key(self, mock_get, mock_lock):
        resp = self.client.delete('/auth/namespaces/system/keys/mykey',
                                  headers={'Authorization': self.auth_header})
        self.assertEqual(404, resp.status_code)
        self.assertEqual(
            {
                'error': 'key name not found in namespace',
                'status': 404
            },
            resp.get_json())

    @mock.patch('shakenfist.db.get_lock')
    @mock.patch('shakenfist.etcd.get', return_value={'keys': {'mykey': 'foo'}})
    @mock.patch('shakenfist.etcd.put')
    def test_delete_namespace_key(self, mock_put, mock_get, mock_lock):
        resp = self.client.delete('/auth/namespaces/system/keys/mykey',
                                  headers={'Authorization': self.auth_header})
        self.assertEqual(200, resp.status_code)
        mock_put.assert_called_with('namespace', None, 'system', {'keys': {}})

    @mock.patch('shakenfist.db.get_metadata', return_value={'a': 'a', 'b': 'b'})
    def test_get_namespace_metadata(self, mock_md_get):
        resp = self.client.get(
            '/auth/namespaces/foo/metadata', headers={'Authorization': self.auth_header})
        self.assertEqual({'a': 'a', 'b': 'b'}, resp.get_json())
        self.assertEqual(200, resp.status_code)
        self.assertEqual('application/json', resp.content_type)

    @mock.patch('shakenfist.db.get_metadata', return_value={})
    @mock.patch('shakenfist.db.persist_metadata')
    @mock.patch('shakenfist.db.get_lock')
    def test_put_namespace_metadata(self, mock_get_lock, mock_md_put,
                                    mock_md_get):
        resp = self.client.put('/auth/namespaces/foo/metadata/foo',
                               headers={'Authorization': self.auth_header},
                               data=json.dumps({
                                   'key': 'foo',
                                   'value': 'bar'
                               }))
        self.assertEqual(None, resp.get_json())
        self.assertEqual(200, resp.status_code)
        mock_md_put.assert_called_with('namespace', 'foo', {'foo': 'bar'})

    @mock.patch('shakenfist.db.get_metadata', return_value={})
    @mock.patch('shakenfist.db.persist_metadata')
    @mock.patch('shakenfist.db.get_lock')
    def test_post_namespace_metadata(self, mock_get_lock, mock_md_put,
                                     mock_md_get):
        resp = self.client.post('/auth/namespaces/foo/metadata',
                                headers={'Authorization': self.auth_header},
                                data=json.dumps({
                                    'key': 'foo',
                                    'value': 'bar'
                                }))
        self.assertEqual(None, resp.get_json())
        self.assertEqual(200, resp.status_code)
        mock_md_put.assert_called_with('namespace', 'foo', {'foo': 'bar'})

    @mock.patch('shakenfist.db.get_metadata', return_value={'foo': 'bar', 'real': 'smart'})
    @mock.patch('shakenfist.db.persist_metadata')
    @mock.patch('shakenfist.db.get_lock')
    def test_delete_namespace_metadata(self, mock_get_lock, mock_md_put,
                                       mock_md_get):
        resp = self.client.delete('/auth/namespaces/foo/metadata/foo',
                                  headers={'Authorization': self.auth_header})
        self.assertEqual(None, resp.get_json())
        self.assertEqual(200, resp.status_code)
        mock_md_put.assert_called_with('namespace', 'foo', {'real': 'smart'})

    @mock.patch('shakenfist.db.get_metadata', return_value={})
    @mock.patch('shakenfist.db.persist_metadata')
    @mock.patch('shakenfist.db.get_lock')
    def test_delete_namespace_metadata_bad_key(self, mock_get_lock,
                                               mock_md_put, mock_md_get):
        resp = self.client.delete('/auth/namespaces/foo/metadata/wrong',
                                  headers={'Authorization': self.auth_header})
        self.assertEqual({'error': 'key not found', 'status': 404},
                         resp.get_json())
        self.assertEqual(404, resp.status_code)

    @mock.patch('shakenfist.db.get_metadata', return_value={'foo': 'bar', 'real': 'smart'})
    @mock.patch('shakenfist.db.persist_metadata')
    @mock.patch('shakenfist.db.get_lock')
    def test_delete_namespace_metadata_no_keys(self, mock_get_lock,
                                               mock_md_put, mock_md_get):
        resp = self.client.delete('/auth/namespaces/foo/metadata/wrong',
                                  headers={'Authorization': self.auth_header})
        self.assertEqual({'error': 'key not found', 'status': 404},
                         resp.get_json())
        self.assertEqual(404, resp.status_code)
    @mock.patch('shakenfist.db.get_instance',
                return_value={'uuid': '123',
                              'name': 'banana',
                              'namespace': 'foo'})
    def test_get_instance(self, mock_get_instance):
        resp = self.client.get(
            '/instances/foo', headers={'Authorization': self.auth_header})
        self.assertEqual({'uuid': '123', 'name': 'banana', 'namespace': 'foo'},
                         resp.get_json())
        self.assertEqual(200, resp.status_code)
        self.assertEqual('application/json', resp.content_type)

    @mock.patch('shakenfist.db.get_instance', return_value=None)
    def test_get_instance_not_found(self, mock_get_instance):
        resp = self.client.get(
            '/instances/foo', headers={'Authorization': self.auth_header})
        self.assertEqual({'error': 'instance not found', 'status': 404},
                         resp.get_json())
        self.assertEqual(404, resp.status_code)
        self.assertEqual('application/json', resp.content_type)

    @mock.patch('shakenfist.db.get_instance',
                return_value={'uuid': 'foo',
                              'name': 'banana',
                              'namespace': 'foo'})
    @mock.patch('shakenfist.db.get_metadata', return_value={'a': 'a', 'b': 'b'})
    def test_get_instance_metadata(self, mock_get_instance, mock_md_get):
        resp = self.client.get(
            '/instances/foo/metadata', headers={'Authorization': self.auth_header})
        self.assertEqual({'a': 'a', 'b': 'b'}, resp.get_json())
        self.assertEqual(200, resp.status_code)
        self.assertEqual('application/json', resp.content_type)

    @mock.patch('shakenfist.db.get_instance',
                return_value={'uuid': 'foo',
                              'name': 'banana',
                              'namespace': 'foo'})
    @mock.patch('shakenfist.db.get_metadata', return_value={})
    @mock.patch('shakenfist.db.persist_metadata')
    @mock.patch('shakenfist.db.get_lock')
    def test_put_instance_metadata(self, mock_get_lock, mock_md_put,
                                   mock_md_get, mock_get_instance):
        resp = self.client.put('/instances/foo/metadata/foo',
                               headers={'Authorization': self.auth_header},
                               data=json.dumps({
                                   'key': 'foo',
                                   'value': 'bar'
                               }))
        self.assertEqual(None, resp.get_json())
        self.assertEqual(200, resp.status_code)
        mock_md_put.assert_called_with('instance', 'foo', {'foo': 'bar'})

    @mock.patch('shakenfist.db.get_instance',
                return_value={'uuid': 'foo',
                              'name': 'banana',
                              'namespace': 'foo'})
    @mock.patch('shakenfist.db.get_metadata', return_value={})
    @mock.patch('shakenfist.db.persist_metadata')
    @mock.patch('shakenfist.db.get_lock')
    def test_post_instance_metadata(self, mock_get_lock, mock_md_put,
                                    mock_md_get, mock_get_instance):
        resp = self.client.post('/instances/foo/metadata',
                                headers={'Authorization': self.auth_header},
                                data=json.dumps({
                                    'key': 'foo',
                                    'value': 'bar'
                                }))
        self.assertEqual(None, resp.get_json())
        self.assertEqual(200, resp.status_code)
        mock_md_put.assert_called_with('instance', 'foo', {'foo': 'bar'})
    @mock.patch('shakenfist.db.get_network',
                return_value={'uuid': 'foo',
                              'name': 'banana',
                              'namespace': 'foo'})
    @mock.patch('shakenfist.db.get_metadata', return_value={'a': 'a', 'b': 'b'})
    def test_get_network_metadata(self, mock_get_network, mock_md_get):
        resp = self.client.get(
            '/networks/foo/metadata', headers={'Authorization': self.auth_header})
        self.assertEqual({'a': 'a', 'b': 'b'}, resp.get_json())
        self.assertEqual(200, resp.status_code)
        self.assertEqual('application/json', resp.content_type)

    @mock.patch('shakenfist.db.get_network',
                return_value={'uuid': 'foo',
                              'name': 'banana',
                              'namespace': 'foo'})
    @mock.patch('shakenfist.db.get_metadata', return_value={})
    @mock.patch('shakenfist.db.persist_metadata')
    @mock.patch('shakenfist.db.get_lock')
    def test_put_network_metadata(self, mock_get_lock, mock_md_put,
                                  mock_md_get, mock_get_network):
        resp = self.client.put('/networks/foo/metadata/foo',
                               headers={'Authorization': self.auth_header},
                               data=json.dumps({
                                   'key': 'foo',
                                   'value': 'bar'
                               }))
        self.assertEqual(None, resp.get_json())
        self.assertEqual(200, resp.status_code)
        mock_md_put.assert_called_with('network', 'foo', {'foo': 'bar'})

    @mock.patch('shakenfist.db.get_network',
                return_value={'uuid': 'foo',
                              'name': 'banana',
                              'namespace': 'foo'})
    @mock.patch('shakenfist.db.get_metadata', return_value={})
    @mock.patch('shakenfist.db.persist_metadata')
    @mock.patch('shakenfist.db.get_lock')
    def test_post_network_metadata(self, mock_get_lock, mock_md_put,
                                   mock_md_get, mock_get_network):
        resp = self.client.post('/networks/foo/metadata',
                                headers={'Authorization': self.auth_header},
                                data=json.dumps({
                                    'key': 'foo',
                                    'value': 'bar'
                                }))
        self.assertEqual(None, resp.get_json())
        self.assertEqual(200, resp.status_code)
        mock_md_put.assert_called_with('network', 'foo', {'foo': 'bar'})
    @mock.patch('shakenfist.db.get_instance',
                return_value={'uuid': 'foo',
                              'name': 'banana',
                              'namespace': 'foo'})
    @mock.patch('shakenfist.db.get_metadata', return_value={'foo': 'bar', 'real': 'smart'})
    @mock.patch('shakenfist.db.persist_metadata')
    @mock.patch('shakenfist.db.get_lock')
    def test_delete_instance_metadata(self, mock_get_lock, mock_md_put,
                                      mock_md_get, mock_get_instance):
        resp = self.client.delete('/instances/foo/metadata/foo',
                                  headers={'Authorization': self.auth_header})
        self.assertEqual(None, resp.get_json())
        self.assertEqual(200, resp.status_code)
        mock_md_put.assert_called_with('instance', 'foo', {'real': 'smart'})

    @mock.patch('shakenfist.db.get_instance',
                return_value={'uuid': 'foo',
                              'name': 'banana',
                              'namespace': 'foo'})
    @mock.patch('shakenfist.db.get_metadata', return_value={'foo': 'bar', 'real': 'smart'})
    @mock.patch('shakenfist.db.persist_metadata')
    @mock.patch('shakenfist.db.get_lock')
    def test_delete_instance_metadata_bad_key(self, mock_get_lock,
                                              mock_md_put, mock_md_get,
                                              mock_get_instance):
        resp = self.client.delete('/instances/foo/metadata/wrong',
                                  headers={'Authorization': self.auth_header})
        self.assertEqual({'error': 'key not found', 'status': 404},
                         resp.get_json())
        self.assertEqual(404, resp.status_code)

    @mock.patch('shakenfist.db.get_network',
                return_value={'uuid': 'foo',
                              'name': 'banana',
                              'namespace': 'foo'})
    @mock.patch('shakenfist.db.get_metadata', return_value={'foo': 'bar', 'real': 'smart'})
    @mock.patch('shakenfist.db.persist_metadata')
    @mock.patch('shakenfist.db.get_lock')
    def test_delete_network_metadata(self, mock_get_lock, mock_md_put,
                                     mock_md_get, mock_get_network):
        resp = self.client.delete('/networks/foo/metadata/foo',
                                  headers={'Authorization': self.auth_header})
        self.assertEqual(None, resp.get_json())
        self.assertEqual(200, resp.status_code)
        mock_md_put.assert_called_with('network', 'foo', {'real': 'smart'})

    @mock.patch('shakenfist.db.get_network',
                return_value={'uuid': 'foo',
                              'name': 'banana',
                              'namespace': 'foo'})
    @mock.patch('shakenfist.db.get_metadata', return_value={'foo': 'bar', 'real': 'smart'})
    @mock.patch('shakenfist.db.persist_metadata')
    @mock.patch('shakenfist.db.get_lock')
    def test_delete_network_metadata_bad_key(self, mock_get_lock,
                                             mock_md_put, mock_md_get,
                                             mock_get_network):
        resp = self.client.delete('/networks/foo/metadata/wrong',
                                  headers={'Authorization': self.auth_header})
        self.assertEqual({'error': 'key not found', 'status': 404},
                         resp.get_json())
        self.assertEqual(404, resp.status_code)
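The tests above stack several `@mock.patch` decorators per method, which only works if you remember that decorators are applied bottom-up: the patch closest to the function supplies the first mock argument. A standalone sketch of that ordering rule (not shakenfist code; it patches stdlib `json` purely for illustration):

```python
# Demonstrates @mock.patch stacking order: the decorator nearest the function
# is applied first, so its mock arrives as the first argument.
import json
from unittest import mock


@mock.patch('json.dumps', return_value='outer')
@mock.patch('json.loads', return_value='inner')
def stacked(mock_loads, mock_dumps):
    # mock_loads corresponds to the bottom (innermost) decorator.
    return json.loads('x'), json.dumps('y')


print(stacked())  # -> ('inner', 'outer')
```

This is why, for example, `test_delete_namespace_with_deleted` lists its six mock parameters in the reverse order of its six decorators.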
# --- CLI/model-manager/utils/create_dataset.py (repo: BrokenImage/raptor, Unlicense/MIT) ---
# from pymongo import MongoClient
from sklearn.model_selection import train_test_split
from functools import reduce
import pandas as pd
import json
import os

# # MongoDb Client
# client = MongoClient()
# db = client.raptor
# images = db.images


def create_dataset_mongo(file_name, dataset_size=0, anomalies=[], test_size=0.25, complement=''):
    """Creates a new csv file to be used to train an anomaly detection model, or adds data
    to an existing csv file by creating a dataframe and then combining them.

    Params:
        file_name (str):
            The file name for the csv file. (required)
        dataset_size (int):
            The total number of images that the dataset should contain. If set to zero
            it will get all the images of the specified types in the collection. (default 0)
        anomalies ([str]):
            The anomaly classes that are to be added to the dataset. (default all)
        test_size (float or int):
            The fraction of the data that will be used as test data. (default 0.25)
    """
    if not anomalies:
        anomalies = images.distinct('anomaly')
    # Materialise each cursor so the lists support len() and slicing.
    anom_images = [list(images.find({'anomaly': anomaly})) for anomaly in anomalies]
    anom_images.sort(key=len)

    total_images = reduce(lambda x, y: x + len(y), anom_images, 0)
    anomaly_split = total_images // len(anomalies)

    # Calculate how many of each anomaly type should be added to the dataset
    anom_counts = [anomaly_split] * len(anomalies)
    for index, anomaly_lst in enumerate(anom_images):
        n = len(anomaly_lst)
        if n < anomaly_split:
            # This class cannot fill its share: take all of it and spread the
            # deficit across the remaining (larger) classes.
            anom_counts[index] = n
            remaining = len(anom_counts) - index - 1
            if remaining:
                for i in range(index + 1, len(anom_counts)):
                    anom_counts[i] += (anomaly_split - n) // remaining

    image_doc_lists = [anom_images[i][:anom_counts[i]] for i in range(len(anomalies))]
    image_docs = [doc for docs in image_doc_lists for doc in docs]
    keys = [key for key in images.find_one()]

    # Format data to be converted into a csv file
    data = {key: [doc[key] for doc in image_docs] for key in keys}
    # Create a dataframe from the formatted data
    df = pd.DataFrame(data, columns=keys)
    if complement != '':
        df = pd.merge(df, pd.read_csv(complement), how='outer', on='x1')
    # TODO: Use train test split to evenly split the dataset into train and test
    # Save the dataframe as a csv file
    df.to_csv(file_name)
def create_dataset_json(file_name, dataset_size=0, anomalies=[], test_size=0.25, complement=''):
    """Creates a new csv file to be used to train an anomaly detection model, or adds data
    to an existing csv file by creating a dataframe and then combining them.

    Params:
        file_name (str):
            The file name for the csv file. (required)
        dataset_size (int):
            The total number of images that the dataset should contain. If set to zero
            it will get all the images of the specified types in the collection. (default 0)
        anomalies ([str]):
            The anomaly classes that are to be added to the dataset. (default all)
        test_size (float or int):
            The fraction of the data that will be used as test data. (default 0.25)
    """
    with open('module_metadata.json') as f:
        image_dict = json.load(f)
    image_nums = image_dict.keys()

    if not anomalies:
        anomalies = get_distinct_anomalies(image_dict, image_nums)
    anom_images = [find_anomaly(anomaly, image_dict, image_nums) for anomaly in anomalies]
    anom_images.sort(key=len)

    total_images = reduce(lambda x, y: x + len(y), anom_images, 0)
    anomaly_split = total_images // len(anomalies)

    # Calculate how many of each anomaly type should be added to the dataset
    anom_counts = [anomaly_split] * len(anomalies)
    for index, anomaly_lst in enumerate(anom_images):
        n = len(anomaly_lst)
        if n < anomaly_split:
            # This class cannot fill its share: take all of it and spread the
            # deficit across the remaining (larger) classes. The original
            # incremented the first slots of anom_counts instead.
            anom_counts[index] = n
            remaining = len(anom_counts) - index - 1
            if remaining:
                for i in range(index + 1, len(anom_counts)):
                    anom_counts[i] += (anomaly_split - n) // remaining

    image_doc_nums = [anom_images[i][:anom_counts[i]] for i in range(len(anomalies))]
    image_docs = []
    for anomaly_nums in image_doc_nums:
        for num in anomaly_nums:
            image_docs.append(image_dict[num])

    keys = image_dict["1"].keys()
    # Format data to be converted into a csv file
    data = {key: [doc[key] for doc in image_docs] for key in keys}
    # Create a dataframe from the formatted data
    df = pd.DataFrame(data, columns=keys)
    if complement != '':
        df = pd.merge(df, pd.read_csv(complement), how='outer', on='x1')
    # TODO: Use train test split to evenly split the dataset into train and test
    # Save the dataframe as a csv file
    df.to_csv(file_name)
def get_distinct_anomalies(image_dict, image_nums):
    distinct_anomalies = []
    for num in image_nums:
        if image_dict[num]['anomaly_class'] not in distinct_anomalies:
            distinct_anomalies.append(image_dict[num]['anomaly_class'])
    return distinct_anomalies


def find_anomaly(anomaly, image_dict, image_nums):
    anomaly_nums = []
    for num in image_nums:
        if image_dict[num]['anomaly_class'] == anomaly:
            anomaly_nums.append(num)
    return anomaly_nums


if __name__ == '__main__':
    create_dataset_json('test.csv', 50, anomalies=['Diode-Multi'])
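The count-balancing step in `create_dataset_json` is easiest to see in isolation: classes are sorted by size, each gets an equal share, and any class too small to fill its share hands the deficit to the remaining larger classes. A standalone sketch with made-up class sizes (`balance_counts` is an illustrative helper, not part of this module):

```python
# Standalone sketch of the per-class balancing used when building the dataset.
# The class sizes below are invented for illustration.
def balance_counts(sizes):
    sizes = sorted(sizes)
    split = sum(sizes) // len(sizes)
    counts = [split] * len(sizes)
    for index, n in enumerate(sizes):
        if n < split:
            counts[index] = n  # take everything this small class has
            remaining = len(counts) - index - 1
            if remaining:
                deficit_each = (split - n) // remaining
                for i in range(index + 1, len(counts)):
                    counts[i] += deficit_each
    return counts


print(balance_counts([10, 40, 40]))  # -> [10, 40, 40]
```

With 90 images across three classes the equal share is 30; the class of 10 contributes all 10 images and the other two absorb 10 extra each.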
# --- Pyfiles/tmp_func.py (repo: rickyHong/Quantum_Machine_Learning_Express, MIT) ---
def task(params, id):
    # Heavy job here
    return params, id
# --- python/solver/test_solver.py (repo: ivanyu/ar-sudoku-solver, Apache-2.0) ---
# -*- coding: utf-8 -*-
import unittest
import numpy as np
from . import solve
class TestSolver(unittest.TestCase):
_FIELDS = [
[
[7, 0, 0, 3, 0, 0, 2, 0, 6],
[0, 0, 2, 0, 5, 8, 0, 0, 0],
[8, 3, 0, 0, 0, 7, 0, 4, 9],
[3, 9, 0, 0, 0, 0, 8, 5, 4],
[0, 0, 0, 7, 0, 3, 0, 0, 0],
[1, 2, 8, 0, 0, 0, 0, 6, 7],
[6, 8, 0, 5, 0, 0, 0, 2, 3],
[0, 0, 0, 8, 9, 0, 4, 0, 0],
[4, 0, 5, 0, 0, 1, 0, 0, 8],
],
[
[6, 3, 4, 0, 1, 5, 0, 0, 0],
[0, 0, 0, 6, 4, 0, 5, 0, 9],
[5, 0, 1, 2, 7, 8, 0, 0, 3],
[4, 0, 7, 3, 0, 9, 0, 8, 1],
[9, 8, 0, 4, 2, 1, 0, 5, 7],
[3, 0, 2, 8, 0, 7, 4, 9, 6],
[0, 2, 5, 0, 8, 0, 9, 0, 0],
[8, 6, 3, 0, 9, 0, 1, 7, 2],
[0, 4, 0, 0, 3, 2, 0, 6, 0],
],
[
[0, 3, 4, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 6, 0, 0, 5, 0, 9],
[5, 0, 1, 0, 7, 0, 0, 0, 3],
[4, 0, 7, 0, 0, 0, 0, 8, 1],
[9, 0, 0, 0, 2, 0, 0, 5, 7],
[3, 0, 2, 8, 0, 7, 0, 9, 6],
[0, 2, 5, 0, 8, 0, 0, 0, 0],
[8, 6, 3, 0, 9, 0, 1, 0, 2],
[0, 4, 0, 0, 3, 0, 0, 6, 0],
],
[
[0, 6, 7, 8, 0, 0, 5, 4, 0],
[2, 0, 0, 0, 3, 0, 0, 0, 7],
[0, 4, 9, 0, 7, 0, 8, 0, 0],
[0, 3, 0, 0, 0, 7, 9, 8, 4],
[0, 0, 0, 2, 0, 5, 0, 0, 0],
[7, 8, 6, 4, 0, 0, 0, 1, 0],
[0, 0, 1, 0, 5, 0, 4, 2, 0],
[8, 0, 0, 0, 4, 0, 0, 0, 3],
[0, 9, 3, 0, 0, 2, 1, 5, 0],
],
[
[0, 4, 8, 3, 0, 6, 0, 5, 0],
[0, 0, 9, 0, 2, 0, 6, 0, 8],
[0, 0, 2, 0, 1, 0, 0, 0, 7],
[2, 0, 6, 0, 3, 0, 0, 0, 5],
[0, 0, 3, 0, 0, 9, 8, 0, 0],
[8, 0, 0, 0, 7, 4, 9, 0, 2],
[5, 0, 0, 0, 8, 0, 7, 0, 0],
[9, 0, 4, 0, 6, 0, 5, 0, 0],
[0, 8, 0, 5, 0, 2, 1, 6, 0],
],
[
[4, 0, 0, 2, 0, 0, 0, 3, 0],
[0, 0, 0, 0, 0, 3, 0, 0, 4],
[0, 6, 0, 7, 0, 0, 0, 0, 9],
[0, 0, 1, 8, 5, 0, 6, 0, 0],
[0, 0, 5, 4, 0, 0, 2, 0, 0],
[0, 0, 7, 0, 1, 0, 3, 0, 0],
[1, 0, 0, 0, 0, 9, 0, 5, 0],
[3, 0, 0, 1, 0, 0, 0, 0, 0],
[0, 7, 0, 0, 0, 4, 0, 0, 3],
],
[
[0, 0, 0, 0, 0, 7, 5, 0, 0],
[7, 0, 0, 1, 0, 0, 0, 4, 0],
[5, 0, 0, 0, 0, 0, 2, 0, 0],
[0, 0, 1, 3, 9, 0, 0, 0, 8],
[3, 0, 0, 7, 8, 6, 0, 0, 4],
[8, 0, 0, 0, 4, 1, 7, 0, 0],
[0, 0, 8, 0, 0, 0, 0, 0, 9],
[0, 5, 0, 0, 0, 3, 0, 0, 1],
[0, 0, 4, 6, 0, 0, 0, 0, 0],
],
[
[8, 0, 0, 0, 1, 0, 0, 0, 9],
[0, 5, 0, 8, 0, 7, 0, 1, 0],
[0, 0, 4, 0, 9, 0, 7, 0, 0],
[0, 6, 0, 7, 0, 1, 0, 2, 0],
[5, 0, 8, 0, 6, 0, 1, 0, 7],
[0, 1, 0, 5, 0, 2, 0, 9, 0],
[0, 0, 7, 0, 4, 0, 6, 0, 0],
[0, 8, 0, 3, 0, 9, 0, 4, 0],
[3, 0, 0, 0, 5, 0, 0, 0, 8],
],
[
[0, 0, 0, 6, 0, 4, 7, 0, 0],
[7, 0, 6, 0, 0, 0, 0, 0, 9],
[0, 0, 0, 0, 0, 5, 0, 8, 0],
[0, 7, 0, 0, 2, 0, 0, 9, 3],
[8, 0, 0, 0, 0, 0, 0, 0, 5],
[4, 3, 0, 0, 1, 0, 0, 7, 0],
[0, 5, 0, 2, 0, 0, 0, 0, 0],
[3, 0, 0, 0, 0, 0, 2, 0, 8],
[0, 0, 2, 3, 0, 1, 0, 0, 0]
],
[
[0, 3, 9, 1, 0, 0, 0, 0, 0],
[4, 0, 8, 0, 6, 0, 0, 0, 2],
[2, 0, 0, 5, 8, 0, 7, 0, 0],
[8, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 2, 0, 0, 0, 9, 0, 0, 0],
[3, 0, 6, 0, 0, 0, 0, 4, 9],
[0, 0, 0, 0, 1, 0, 0, 3, 0],
[0, 4, 0, 3, 0, 0, 0, 0, 8],
[7, 0, 0, 0, 0, 0, 4, 0, 0]
]
]
def test_solvable(self):
for f in self._FIELDS:
with self.subTest(f=f):
array = np.array(f)
solution = solve(array)
self._check_solution(solution)
def _check_solution(self, solution: np.array):
expected = set(range(1, 10))
for i_row in range(9):
self.assertSetEqual(set(solution[i_row, :]), expected)
for i_col in range(9):
self.assertSetEqual(set(solution[:, i_col]), expected)
for box_row in range(3):
for box_col in range(3):
actual = set(solution[box_row * 3:(box_row + 1) * 3, box_col * 3:(box_col + 1) * 3].reshape(-1))
self.assertSetEqual(actual, expected)
| 32.147651 | 112 | 0.274113 | 925 | 4,790 | 1.401081 | 0.04973 | 0.385802 | 0.298611 | 0.175926 | 0.604167 | 0.441358 | 0.357253 | 0.22608 | 0.077932 | 0 | 0 | 0.338531 | 0.491232 | 4,790 | 148 | 113 | 32.364865 | 0.19327 | 0.004384 | 0 | 0.068182 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022727 | 1 | 0.015152 | false | 0 | 0.022727 | 0 | 0.05303 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
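The `test_solvable` cases above only require that `solve` return a grid whose rows, columns, and 3x3 boxes each contain the digits 1-9 exactly once. A minimal backtracking solver satisfying that contract might look like the following sketch — this is an illustrative stand-in, not the actual `solve` implementation in `python/solver`, which may differ:

```python
import numpy as np

def candidates(grid, r, c):
    """Digits that can legally be placed in cell (r, c)."""
    used = set(grid[r, :]) | set(grid[:, c])
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= set(grid[br:br + 3, bc:bc + 3].reshape(-1))
    return set(range(1, 10)) - used

def solve(field):
    """Solve a 9x9 sudoku (0 = empty cell) by depth-first backtracking."""
    grid = np.array(field)
    empties = list(zip(*np.where(grid == 0)))

    def backtrack(i):
        if i == len(empties):
            return True
        r, c = empties[i]
        for d in candidates(grid, r, c):
            grid[r, c] = d
            if backtrack(i + 1):
                return True
            grid[r, c] = 0  # undo and try the next digit
        return False

    if not backtrack(0):
        raise ValueError("unsolvable field")
    return grid
```

A solver like this passes the same row/column/box checks that `_check_solution` applies.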
0f9909e90bad5e92cb37c41c7e153066d3f38164 | 123 | py | Python | weboa/utils/__init__.py | lonagi/weboa | 01fcfffb945b0c77f9e365f07fafe33fe39d52cd | [
"Apache-2.0"
] | null | null | null | weboa/utils/__init__.py | lonagi/weboa | 01fcfffb945b0c77f9e365f07fafe33fe39d52cd | [
"Apache-2.0"
] | null | null | null | weboa/utils/__init__.py | lonagi/weboa | 01fcfffb945b0c77f9e365f07fafe33fe39d52cd | [
"Apache-2.0"
] | null | null | null | from .Console_Color import *
from .Printer import *
from .Processing import *
from .FileSystem import *
from .Meta import * | 24.6 | 28 | 0.764228 | 16 | 123 | 5.8125 | 0.5 | 0.430108 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.154472 | 123 | 5 | 29 | 24.6 | 0.894231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0fdc7c7961cb08e914f16a1399bfb30208162cdc | 3,878 | py | Python | tests/mechanisms/test_episodic_memory.py | AlirezaFarnia/PsyNeuLink | c66f8248d1391830e76c97df4b644e12a02c2b73 | [
"Apache-2.0"
] | null | null | null | tests/mechanisms/test_episodic_memory.py | AlirezaFarnia/PsyNeuLink | c66f8248d1391830e76c97df4b644e12a02c2b73 | [
"Apache-2.0"
] | null | null | null | tests/mechanisms/test_episodic_memory.py | AlirezaFarnia/PsyNeuLink | c66f8248d1391830e76c97df4b644e12a02c2b73 | [
"Apache-2.0"
] | null | null | null | import numpy as np
import pytest
from psyneulink.core.components.functions.statefulfunctions.memoryfunctions import ContentAddressableMemory
from psyneulink.library.components.mechanisms.processing.integrator.episodicmemorymechanism import EpisodicMemoryMechanism
import psyneulink.core.llvm as pnlvm
np.random.seed(0)
CONTENT_SIZE=10
ASSOC_SIZE=10
test_var = np.random.rand(2, CONTENT_SIZE)
test_initializer = {tuple(test_var[0]): test_var[1]}
test_data = [
(test_var, ContentAddressableMemory, {'default_variable':test_var}, [[
0.5488135039273248, 0.7151893663724195, 0.6027633760716439, 0.5448831829968969, 0.4236547993389047, 0.6458941130666561, 0.4375872112626925, 0.8917730007820798, 0.9636627605010293, 0.3834415188257777], [
0.7917250380826646, 0.5288949197529045, 0.5680445610939323, 0.925596638292661, 0.07103605819788694, 0.08712929970154071, 0.02021839744032572, 0.832619845547938, 0.7781567509498505, 0.8700121482468192 ]]),
(test_var, ContentAddressableMemory, {'default_variable':test_var, 'retrieval_prob':0.5},
[[ 0. for i in range(CONTENT_SIZE) ],[ 0. for i in range(ASSOC_SIZE) ]]),
(test_var, ContentAddressableMemory, {'default_variable':test_var, 'storage_prob':0.1},
[[ 0. for i in range(CONTENT_SIZE) ],[ 0. for i in range(ASSOC_SIZE) ]]),
(test_var, ContentAddressableMemory, {'default_variable':test_var, 'retrieval_prob':0.9, 'storage_prob':0.9}, [[
0.5488135039273248, 0.7151893663724195, 0.6027633760716439, 0.5448831829968969, 0.4236547993389047, 0.6458941130666561, 0.4375872112626925, 0.8917730007820798, 0.9636627605010293, 0.3834415188257777], [
0.7917250380826646, 0.5288949197529045, 0.5680445610939323, 0.925596638292661, 0.07103605819788694, 0.08712929970154071, 0.02021839744032572, 0.832619845547938, 0.7781567509498505, 0.8700121482468192 ]]),
]
# use list, naming function produces ugly names
names = [
"ContentAddressableMemory",
"ContentAddressableMemory Random Retrieval",
"ContentAddressableMemory Random Storage",
"ContentAddressableMemory Random Retrieval-Storage",
]
@pytest.mark.function
@pytest.mark.memory_function
@pytest.mark.parametrize("variable, func, params, expected", test_data, ids=names)
@pytest.mark.benchmark
def test_basic(variable, func, params, expected, benchmark):
f = func(seed=0, **params)

m = EpisodicMemoryMechanism(content_size=len(variable[0]), assoc_size=len(variable[1]), function=f)
m.execute(variable)
m.execute(variable)
res = [s.value for s in m.output_ports]
assert np.allclose(res[0], expected[0])
assert np.allclose(res[1], expected[1])
benchmark(m.execute, variable)
@pytest.mark.llvm
@pytest.mark.function
@pytest.mark.memory_function
@pytest.mark.parametrize("variable, func, params, expected", test_data, ids=names)
@pytest.mark.benchmark
def test_llvm(variable, func, params, expected, benchmark):
f = func(seed=0, **params)
m = EpisodicMemoryMechanism(content_size=len(variable[0]), assoc_size=len(variable[1]), function=f)
e = pnlvm.execution.MechExecution(m)
e.execute(variable)
res = e.execute(variable)
assert np.allclose(res[0], expected[0])
assert np.allclose(res[1], expected[1])
benchmark(e.execute, variable)
@pytest.mark.llvm
@pytest.mark.cuda
@pytest.mark.function
@pytest.mark.memory_function
@pytest.mark.parametrize("variable, func, params, expected", test_data, ids=names)
@pytest.mark.benchmark
def test_ptx_cuda(variable, func, params, expected, benchmark):
f = func(seed=0, **params)
m = EpisodicMemoryMechanism(content_size=len(variable[0]), assoc_size=len(variable[1]), function=f)
e = pnlvm.execution.MechExecution(m)
e.cuda_execute(variable)
res = e.cuda_execute(variable)
assert np.allclose(res[0], expected[0])
assert np.allclose(res[1], expected[1])
benchmark(e.cuda_execute, variable)
| 47.292683 | 211 | 0.758123 | 481 | 3,878 | 6.012474 | 0.205821 | 0.051867 | 0.037344 | 0.053942 | 0.742739 | 0.742739 | 0.742739 | 0.700899 | 0.700899 | 0.700899 | 0 | 0.210419 | 0.113976 | 3,878 | 81 | 212 | 47.876543 | 0.631257 | 0.011604 | 0 | 0.514286 | 0 | 0 | 0.095275 | 0.025059 | 0 | 0 | 0 | 0 | 0.085714 | 1 | 0.042857 | false | 0 | 0.071429 | 0 | 0.114286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
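The tests above exercise a common pattern: `execute` is called twice with the same `(content, assoc)` pair, and the second call retrieves what the first call stored. A toy content-addressable memory capturing just that behaviour might look like this — a simplified, hypothetical sketch; the real PsyNeuLink `ContentAddressableMemory` also supports retrieval/storage probabilities, seeds, and distance functions:

```python
import numpy as np

class ToyContentAddressableMemory:
    """Store (content, assoc) pairs; retrieve the pair closest to the cue."""

    def __init__(self):
        self._memory = []  # list of (content, assoc) pairs

    def execute(self, variable):
        content = np.asarray(variable[0])
        assoc = np.asarray(variable[1])
        if self._memory:
            # Retrieve the stored entry whose content is nearest the cue.
            dists = [np.linalg.norm(c - content) for c, _ in self._memory]
            retrieved = self._memory[int(np.argmin(dists))]
        else:
            # Nothing stored yet: return zero vectors, as the tests expect
            # when retrieval fails.
            retrieved = (np.zeros_like(content), np.zeros_like(assoc))
        self._memory.append((content, assoc))  # store after retrieval
        return retrieved
```

Used the same way as the mechanism in the tests, the first call returns zeros and the second returns the stored pair.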
e84d9dd85e71395eb3f0b336a69ee6fa601fe809 | 96 | py | Python | venv/lib/python3.8/site-packages/jedi/api/interpreter.py | GiulianaPola/select_repeats | 17a0d053d4f874e42cf654dd142168c2ec8fbd11 | [
"MIT"
] | 2 | 2022-03-13T01:58:52.000Z | 2022-03-31T06:07:54.000Z | venv/lib/python3.8/site-packages/jedi/api/interpreter.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | 19 | 2021-11-20T04:09:18.000Z | 2022-03-23T15:05:55.000Z | venv/lib/python3.8/site-packages/jedi/api/interpreter.py | DesmoSearch/Desmobot | b70b45df3485351f471080deb5c785c4bc5c4beb | [
"MIT"
] | null | null | null | /home/runner/.cache/pip/pool/92/2a/0d/26c65ed9e4695e254b4c2ea880a678ca6711c6b2fb281a4980b6fc89c0 | 96 | 96 | 0.895833 | 9 | 96 | 9.555556 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.427083 | 0 | 96 | 1 | 96 | 96 | 0.46875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e87dc62308f33cadbfc9d0cd6a7aee0b9d446605 | 16,839 | py | Python | poet/sawtooth_poet/tests/validator_registry_test/tests.py | sbilly/sawtooth-core | d3d8cf2599bef3c0424bbb5aaa8636fc39952859 | [
"Apache-2.0"
] | null | null | null | poet/sawtooth_poet/tests/validator_registry_test/tests.py | sbilly/sawtooth-core | d3d8cf2599bef3c0424bbb5aaa8636fc39952859 | [
"Apache-2.0"
] | null | null | null | poet/sawtooth_poet/tests/validator_registry_test/tests.py | sbilly/sawtooth-core | d3d8cf2599bef3c0424bbb5aaa8636fc39952859 | [
"Apache-2.0"
] | null | null | null | # Copyright 2017 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ------------------------------------------------------------------------------
import unittest
import json
import base64
from sawtooth_signing import secp256k1_signer as signing
from validator_registry_test.validator_reg_message_factory \
import ValidatorRegistryMessageFactory
from sawtooth_poet.protobuf.validator_registry_pb2 import \
ValidatorRegistryPayload
class TestValidatorRegistry(unittest.TestCase):
"""
Set of tests to run in a test suite with an existing TPTester and
transaction processor.
"""
def __init__(self, test_name, tester):
super().__init__(test_name)
self.tester = tester
self.private_key = signing.generate_privkey()
self.public_key = signing.encode_pubkey(
signing.generate_pubkey(self.private_key), "hex")
self.factory = ValidatorRegistryMessageFactory(
private=self.private_key, public=self.public_key)
self._report_private_key = \
signing.encode_privkey(
signing.decode_privkey(
'5Jz5Kaiy3kCiHE537uXcQnJuiNJshf2bZZn43CrALMGoCd3zRuo',
'wif'), 'hex')
def _expect_invalid_transaction(self):
self.tester.expect(
self.factory.create_tp_response("INVALID_TRANSACTION"))
def _expect_ok(self):
self.tester.expect(self.factory.create_tp_response("OK"))
def test_valid_signup_info(self):
"""
Testing valid validator_registry transaction. This includes sending new
signup info for a validator that has already been registered.
"""
signup_info = self.factory.create_signup_info(
self.factory.pubkey_hash, "000")
payload = ValidatorRegistryPayload(
verb="reg", name="val_1", id=self.factory.public_key,
signup_info=signup_info, block_num=0)
# Send validator registry payload
self.tester.send(
self.factory.create_tp_process_request(payload.id, payload))
# Expect Request for the ValidatorMap
received = self.tester.expect(
self.factory.create_get_request_validator_map())
# Respond with an empty validator map
self.tester.respond(
self.factory.create_get_empty_resposne_validator_map(), received)
# Expect a request to set the new validator in the ValidatorMap
received = self.tester.expect(
self.factory.create_set_request_validator_map())
# Respond with the ValidatorMap address
self.tester.respond(self.factory.create_set_response_validator_map(),
received)
# Expect a request to set ValidatorInfo for val_1
received = self.tester.expect(
self.factory.create_set_request_validator_info("val_1",
"registered"))
# Respond with address for val_1
# val_1 address is derived from the validators id
# val id is the same as the pubkey for the factory
self.tester.respond(self.factory.create_set_response_validator_info(),
received)
self._expect_ok()
# --------------------------
signup_info = self.factory.create_signup_info(
self.factory.pubkey_hash, "000")
payload = ValidatorRegistryPayload(
verb="reg", name="val_1", id=self.factory.public_key,
signup_info=signup_info, block_num=0)
# Send validator registry payload
self.tester.send(
self.factory.create_tp_process_request(payload.id, payload))
# Expect Request for the ValidatorMap
received = self.tester.expect(
self.factory.create_get_request_validator_map())
# Respond with a validator Map
self.tester.respond(self.factory.create_get_response_validator_map(),
received)
# Expect to receive a validator_info request
received = self.tester.expect(
self.factory.create_get_request_validator_info())
# Respond with the ValidatorInfo
self.tester.respond(
self.factory.create_get_response_validator_info("val_1"), received)
# Expect a request to set ValidatorInfo for val_1
received = self.tester.expect(
self.factory.create_set_request_validator_info("val_1", "revoked"))
# Respond with address for val_1
# val_1 address is derived from the validators id
# val id is the same as the pubkey for the factory
self.tester.respond(
self.factory.create_set_response_validator_info(), received)
# Expect a request to set ValidatorInfo for val_1
received = self.tester.expect(
self.factory.create_set_request_validator_info("val_1",
"registered"))
# Respond with address for val_1
# val_1 address is derived from the validators id
# val id is the same as the pubkey for the factory
self.tester.respond(self.factory.create_set_response_validator_info(),
received)
self._expect_ok()
def test_invalid_name(self):
"""
Test that a transaction with an invalid name returns an invalid
transaction.
"""
signup_info = self.factory.create_signup_info(
self.factory.pubkey_hash, "000")
# The name is longer than 64 characters
payload = ValidatorRegistryPayload(
verb="reg",
name="val_11111111111111111111111111111111111111111111111111111111"
"11111",
id=self.factory.public_key,
signup_info=signup_info,
block_num=0)
# Send validator registry payload
self.tester.send(
self.factory.create_tp_process_request(payload.id, payload))
self._expect_invalid_transaction()
def test_invalid_id(self):
"""
Test that a transaction with an id that does not match the
signer_pubkey returns an invalid transaction.
"""
signup_info = self.factory.create_signup_info(
self.factory.pubkey_hash, "000")
# The id should match the signer_pubkey in the transaction_header
payload = ValidatorRegistryPayload(
verb="reg",
name="val_1",
id="bad",
signup_info=signup_info,
block_num=0)
# Send validator registry payload
self.tester.send(
self.factory.create_tp_process_request(payload.id, payload))
self._expect_invalid_transaction()
def test_invalid_poet_pubkey(self):
"""
Test that a transaction with an invalid poet_public_key returns an invalid
transaction.
"""
signup_info = self.factory.create_signup_info(
self.factory.pubkey_hash, "000")
signup_info.poet_public_key = "bad"
payload = ValidatorRegistryPayload(
verb="reg",
name="val_1",
id=self.factory.public_key,
signup_info=signup_info,
block_num=0)
# Send validator registry payload
self.tester.send(
self.factory.create_tp_process_request(payload.id, payload))
self._expect_invalid_transaction()
def _test_bad_signup_info(self, signup_info):
payload = ValidatorRegistryPayload(
verb="reg",
name="val_1",
id=self.factory.public_key,
signup_info=signup_info,
block_num=0)
# Send validator registry payload
self.tester.send(
self.factory.create_tp_process_request(payload.id, payload))
self._expect_invalid_transaction()
def test_invalid_verfication_report(self):
"""
Test that a transaction whose verification report is invalid returns an
invalid transaction.
"""
signup_info = self.factory.create_signup_info(
self.factory.pubkey_hash, "000")
# Verification report is None
proof_data = signup_info.proof_data
signup_info.proof_data = json.dumps({})
self._test_bad_signup_info(signup_info)
# ------------------------------------------------------
# No verification signature
proof_data_dict = json.loads(proof_data)
del proof_data_dict["signature"]
signup_info.proof_data = json.dumps(proof_data_dict)
self._test_bad_signup_info(signup_info)
# ------------------------------------------------------
# Bad verification signature
proof_data_dict["signature"] = "bads"
signup_info.proof_data = json.dumps(proof_data_dict)
self._test_bad_signup_info(signup_info)
# ------------------------------------------------------
# No Nonce
verification_report = \
json.loads(proof_data_dict["verification_report"])
verification_report["nonce"] = None
proof_data_dict = {
'verification_report': json.dumps(verification_report),
'signature':
signing.sign(
json.dumps(verification_report),
self._report_private_key)
}
signup_info.proof_data = json.dumps(proof_data_dict)
self._test_bad_signup_info(signup_info)
def test_invalid_pse_manifest(self):
"""
Test that a transaction whose pse_manifest is invalid returns an
invalid transaction.
"""
signup_info = self.factory.create_signup_info(
self.factory.pubkey_hash, "000")
proof_data = signup_info.proof_data
proof_data_dict = json.loads(proof_data)
# ------------------------------------------------------
# no pseManifestStatus
verification_report = \
json.loads(proof_data_dict["verification_report"])
pse_status = verification_report['pseManifestStatus']
verification_report['pseManifestStatus'] = None
proof_data_dict = {
'verification_report': json.dumps(verification_report),
'signature':
signing.sign(
json.dumps(verification_report),
self._report_private_key)
}
signup_info.proof_data = json.dumps(proof_data_dict)
self._test_bad_signup_info(signup_info)
# ------------------------------------------------------
# Bad pseManifestStatus
verification_report = \
json.loads(proof_data_dict["verification_report"])
verification_report['pseManifestStatus'] = "bad"
proof_data_dict = {
'verification_report': json.dumps(verification_report),
'signature':
signing.sign(
json.dumps(verification_report),
self._report_private_key)
}
signup_info.proof_data = json.dumps(proof_data_dict)
self._test_bad_signup_info(signup_info)
# ------------------------------------------------------
# No pseManifestHash
verification_report = \
json.loads(proof_data_dict["verification_report"])
verification_report['pseManifestStatus'] = pse_status
verification_report['pseManifestHash'] = None
proof_data_dict = {
'verification_report': json.dumps(verification_report),
'signature':
signing.sign(
json.dumps(verification_report),
self._report_private_key)
}
signup_info.proof_data = json.dumps(proof_data_dict)
self._test_bad_signup_info(signup_info)
# ------------------------------------------------------
# Bad pseManifestHash
verification_report = \
json.loads(proof_data_dict["verification_report"])
verification_report['pseManifestHash'] = "Bad"
proof_data_dict = {
'verification_report': json.dumps(verification_report),
'signature':
signing.sign(
json.dumps(verification_report),
self._report_private_key)
}
signup_info.proof_data = json.dumps(proof_data_dict)
self._test_bad_signup_info(signup_info)
def test_invalid_enclave_body(self):
"""
Test that a transaction whose enclave_body is invalid returns an
invalid transaction.
"""
signup_info = self.factory.create_signup_info(
self.factory.pubkey_hash, "000")
proof_data = signup_info.proof_data
proof_data_dict = json.loads(proof_data)
# ------------------------------------------------------
# No isvEnclaveQuoteStatus
verification_report = \
json.loads(proof_data_dict["verification_report"])
enclave_status = verification_report["isvEnclaveQuoteStatus"]
verification_report["isvEnclaveQuoteStatus"] = None
proof_data_dict = {
'verification_report': json.dumps(verification_report),
'signature':
signing.sign(
json.dumps(verification_report),
self._report_private_key)
}
signup_info.proof_data = json.dumps(proof_data_dict)
self._test_bad_signup_info(signup_info)
# ------------------------------------------------------
# Bad isvEnclaveQuoteStatus
verification_report = \
json.loads(proof_data_dict["verification_report"])
verification_report["isvEnclaveQuoteStatus"] = "Bad"
proof_data_dict = {
'verification_report': json.dumps(verification_report),
'signature':
signing.sign(
json.dumps(verification_report),
self._report_private_key)
}
signup_info.proof_data = json.dumps(proof_data_dict)
self._test_bad_signup_info(signup_info)
# ------------------------------------------------------
# No isvEnclaveQuoteBody
verification_report = \
json.loads(proof_data_dict["verification_report"])
verification_report["isvEnclaveQuoteStatus"] = enclave_status
verification_report['isvEnclaveQuoteBody'] = None
proof_data_dict = {
'verification_report': json.dumps(verification_report),
'signature':
signing.sign(
json.dumps(verification_report),
self._report_private_key)
}
signup_info.proof_data = json.dumps(proof_data_dict)
self._test_bad_signup_info(signup_info)
# ------------------------------------------------------
# No report body in isvEnclaveQuoteBody
verification_report = \
json.loads(proof_data_dict["verification_report"])
quote = {"test": "none"}
verification_report['isvEnclaveQuoteBody'] = \
base64.b64encode(
json.dumps(quote).encode()).decode()
proof_data_dict = {
'verification_report': json.dumps(verification_report),
'signature':
signing.sign(
json.dumps(verification_report),
self._report_private_key)
}
signup_info.proof_data = json.dumps(proof_data_dict)
self._test_bad_signup_info(signup_info)
# ------------------------------------------------------
# Bad isvEnclaveQuoteBody
verification_report = \
json.loads(proof_data_dict["verification_report"])
quote = {"report_body": "none"}
verification_report['isvEnclaveQuoteBody'] = \
base64.b64encode(
json.dumps(quote).encode()).decode()
proof_data_dict = {
'verification_report': json.dumps(verification_report),
'signature':
signing.sign(
json.dumps(verification_report),
self._report_private_key)
}
signup_info.proof_data = json.dumps(proof_data_dict)
self._test_bad_signup_info(signup_info)
| 36.212903 | 80 | 0.599026 | 1,699 | 16,839 | 5.630959 | 0.117128 | 0.078394 | 0.050277 | 0.052263 | 0.765757 | 0.75081 | 0.731159 | 0.721647 | 0.713181 | 0.678478 | 0 | 0.011929 | 0.283152 | 16,839 | 464 | 81 | 36.290948 | 0.780631 | 0.207851 | 0 | 0.730909 | 0 | 0 | 0.079325 | 0.01496 | 0 | 0 | 0 | 0 | 0 | 1 | 0.04 | false | 0 | 0.021818 | 0 | 0.065455 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
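Each invalid-signup test above follows the same pattern: mutate one field of the verification report, re-sign (or fail to re-sign) it, and expect the transaction to be rejected. The pattern can be illustrated with a standalone sketch — note that HMAC-SHA256 stands in here as a hypothetical signing primitive; the real tests use secp256k1 via `sawtooth_signing`, and `KEY` is a placeholder, not a real Sawtooth key:

```python
import hashlib
import hmac
import json

KEY = b"report-private-key"  # placeholder stand-in for the report private key

def sign(report: dict) -> str:
    """Sign a canonical JSON serialization of the verification report."""
    payload = json.dumps(report, sort_keys=True).encode()
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()

def verify(report: dict, signature: str) -> bool:
    """Check that the report has not been altered since it was signed."""
    return hmac.compare_digest(sign(report), signature)

report = {"nonce": "000", "pseManifestStatus": "OK"}
proof_data = {"verification_report": report, "signature": sign(report)}
```

Mutating any field of the report — as each test case does with `pseManifestStatus`, `pseManifestHash`, `isvEnclaveQuoteStatus`, and so on — invalidates the stored signature, which is why the transaction processor rejects the payload.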
e88eb314a0a8ce9181d28641319cd4433dbba675 | 197 | py | Python | Server/Python/src/dbs/dao/MySQL/MigrationBlock/Insert.py | vkuznet/DBS | 14df8bbe8ee8f874fe423399b18afef911fe78c7 | [
"Apache-2.0"
] | 8 | 2015-08-14T04:01:32.000Z | 2021-06-03T00:56:42.000Z | Server/Python/src/dbs/dao/MySQL/MigrationBlock/Insert.py | yuyiguo/DBS | 14df8bbe8ee8f874fe423399b18afef911fe78c7 | [
"Apache-2.0"
] | 162 | 2015-01-07T21:34:47.000Z | 2021-10-13T09:42:41.000Z | Server/Python/src/dbs/dao/MySQL/MigrationBlock/Insert.py | yuyiguo/DBS | 14df8bbe8ee8f874fe423399b18afef911fe78c7 | [
"Apache-2.0"
] | 16 | 2015-01-22T15:27:29.000Z | 2021-04-28T09:23:28.000Z | #!/usr/bin/env python
""" DAO Object for MigrationBlock table """
from dbs.dao.Oracle.MigrationBlock.Insert import Insert as OraMigBlkInsert
class Insert(OraMigBlkInsert):
pass
| 21.888889 | 74 | 0.715736 | 23 | 197 | 6.130435 | 0.782609 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.192893 | 197 | 8 | 75 | 24.625 | 0.886792 | 0.28934 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 6 |
fa02f55482deb400debce54be310c83da9fed8c1 | 126 | py | Python | professor_section/admin.py | Waleed-Daud/Alzitoona | 0f8cd859dfab722050e56dc3001cd5a6c1440c97 | [
"Apache-2.0"
] | null | null | null | professor_section/admin.py | Waleed-Daud/Alzitoona | 0f8cd859dfab722050e56dc3001cd5a6c1440c97 | [
"Apache-2.0"
] | null | null | null | professor_section/admin.py | Waleed-Daud/Alzitoona | 0f8cd859dfab722050e56dc3001cd5a6c1440c97 | [
"Apache-2.0"
] | null | null | null | from django.contrib import admin
from professor_section.models import Professor_Rating
admin.site.register(Professor_Rating) | 25.2 | 53 | 0.873016 | 17 | 126 | 6.294118 | 0.647059 | 0.280374 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079365 | 126 | 5 | 54 | 25.2 | 0.922414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fa51c9cecdbe0db78b565e5875024539487018ee | 484 | py | Python | ga4ghmongo/schema/__init__.py | Phelimb/ga4gh-mongo | 5f5a3e1922be0e0d13af1874fad6eed5418ee761 | [
"MIT"
] | 2 | 2016-06-10T16:09:30.000Z | 2020-01-10T06:44:20.000Z | ga4ghmongo/schema/__init__.py | Phelimb/ga4gh-mongo | 5f5a3e1922be0e0d13af1874fad6eed5418ee761 | [
"MIT"
] | 1 | 2016-03-23T10:33:07.000Z | 2016-03-23T10:33:07.000Z | ga4ghmongo/schema/__init__.py | Phelimb/ga4gh-mongo | 5f5a3e1922be0e0d13af1874fad6eed5418ee761 | [
"MIT"
] | null | null | null | from ga4ghmongo.schema.models import VariantCallSet
from ga4ghmongo.schema.models import CallSet
from ga4ghmongo.schema.models import Call
from ga4ghmongo.schema.models import VariantCall
from ga4ghmongo.schema.models import SequenceCall
from ga4ghmongo.schema.models import Variant
from ga4ghmongo.schema.models import VariantSet
from ga4ghmongo.schema.models import VariantSetMetadata
from ga4ghmongo.schema.models import Reference
from ga4ghmongo.schema.models import ReferenceSet
| 44 | 55 | 0.876033 | 60 | 484 | 7.066667 | 0.25 | 0.330189 | 0.471698 | 0.613208 | 0.754717 | 0 | 0 | 0 | 0 | 0 | 0 | 0.022523 | 0.082645 | 484 | 10 | 56 | 48.4 | 0.932432 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fa5c806300c2085cda7e260804ede487240ca304 | 1,100 | py | Python | plenum/test/view_change/test_instance_change_from_unknown.py | spivachuk/plenum | 05123166e8ffa89520541ea3b59b20390aaf92a4 | [
"Apache-2.0"
] | null | null | null | plenum/test/view_change/test_instance_change_from_unknown.py | spivachuk/plenum | 05123166e8ffa89520541ea3b59b20390aaf92a4 | [
"Apache-2.0"
] | null | null | null | plenum/test/view_change/test_instance_change_from_unknown.py | spivachuk/plenum | 05123166e8ffa89520541ea3b59b20390aaf92a4 | [
"Apache-2.0"
] | null | null | null | def test_instance_change_from_known(fake_view_changer):
current_view = fake_view_changer.node.viewNo
proposed_view = current_view + 1
ic_msg = fake_view_changer._create_instance_change_msg(view_no=proposed_view,
suspicion_code=26)
frm = list(fake_view_changer.node.nodestack.connecteds)[0]
fake_view_changer.process_instance_change_msg(ic_msg,
frm=frm)
assert fake_view_changer.instanceChanges.hasInstChngFrom(proposed_view, frm)
def test_instance_change_from_unknown(fake_view_changer):
current_view = fake_view_changer.node.viewNo
proposed_view = current_view + 1
ic_msg = fake_view_changer._create_instance_change_msg(view_no=proposed_view,
suspicion_code=26)
frm = b'SomeUnknownNode'
fake_view_changer.process_instance_change_msg(ic_msg,
frm=frm)
assert not fake_view_changer.instanceChanges.hasInstChngFrom(proposed_view, frm)
| 52.380952 | 84 | 0.66 | 127 | 1,100 | 5.228346 | 0.267717 | 0.13253 | 0.248494 | 0.085843 | 0.888554 | 0.813253 | 0.813253 | 0.813253 | 0.63253 | 0.63253 | 0 | 0.008895 | 0.284545 | 1,100 | 20 | 85 | 55 | 0.834816 | 0 | 0 | 0.666667 | 0 | 0 | 0.013636 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 1 | 0.111111 | false | 0 | 0 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
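The two tests above hinge on one rule: an instance-change vote is recorded only when the sender is in the node's connection set; a vote from `SomeUnknownNode` is silently dropped. A minimal sketch of that bookkeeping (illustrative names, not plenum's actual API) could be:

```python
from collections import defaultdict

class ViewChanger:
    """Tallies instance-change votes per proposed view, ignoring unknown senders."""

    def __init__(self, connected_nodes):
        self.connected = set(connected_nodes)
        self._votes = defaultdict(set)  # view_no -> names of nodes that voted

    def process_instance_change(self, view_no, frm):
        if frm not in self.connected:
            return  # drop votes from senders outside the connection set
        self._votes[view_no].add(frm)

    def has_inst_chng_from(self, view_no, frm):
        return frm in self._votes[view_no]
```

Running it against the two scenarios in the tests gives the same outcome: a vote from a connected node is recorded, one from an unknown node is not.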
d7626e5d1f4dd59502ea51f9ea79b2a61fef4fb6 | 189 | py | Python | nextbus/populate/__init__.py | macph/nextbus | 4288e235a73c8949c44523c0fc7c98d233d0f75c | [
"MIT"
] | 1 | 2019-10-17T19:40:35.000Z | 2019-10-17T19:40:35.000Z | nextbus/populate/__init__.py | macph/nextbus | 4288e235a73c8949c44523c0fc7c98d233d0f75c | [
"MIT"
] | 5 | 2021-03-31T18:52:25.000Z | 2022-02-22T14:25:41.000Z | nextbus/populate/__init__.py | macph/nextbus | 4288e235a73c8949c44523c0fc7c98d233d0f75c | [
"MIT"
] | null | null | null | """
Populating database with data from NPTG, NaPTAN and NSPL.
"""
from nextbus.populate.file_ops import backup_database, restore_database
from nextbus.populate.runner import run_population
| 31.5 | 71 | 0.825397 | 26 | 189 | 5.846154 | 0.730769 | 0.144737 | 0.25 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10582 | 189 | 5 | 72 | 37.8 | 0.899408 | 0.301587 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d7732e82a92521c20d57d82858b3708247ffb1dc | 26 | py | Python | aiotikpy/__init__.py | OofChair/aiotikpy | 146b85d226ae28b9ab2baf3c1724c08fa4888992 | [
"MIT"
] | 3 | 2021-03-02T13:24:17.000Z | 2021-05-23T14:03:26.000Z | aiotikpy/__init__.py | OofChair/aiotikpy | 146b85d226ae28b9ab2baf3c1724c08fa4888992 | [
"MIT"
] | 1 | 2021-03-03T19:09:55.000Z | 2021-03-04T11:34:12.000Z | aiotikpy/__init__.py | OofChair/aiotikpy | 146b85d226ae28b9ab2baf3c1724c08fa4888992 | [
"MIT"
] | 1 | 2021-05-23T14:03:28.000Z | 2021-05-23T14:03:28.000Z | from .aiotikpy import API
| 13 | 25 | 0.807692 | 4 | 26 | 5.25 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.153846 | 26 | 1 | 26 | 26 | 0.954545 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
d7806482a6c581ff39fe46daaf4cab73989cb05b | 27 | py | Python | website/server/codenames/collocations/__init__.py | mderijk/codenames | 7133a8e85243550dddf4a64e90c9550f3b9e2cb4 | [
"MIT"
] | 2 | 2021-06-10T20:53:06.000Z | 2021-06-11T10:45:16.000Z | website/server/codenames/collocations/__init__.py | mderijk/codenames | 7133a8e85243550dddf4a64e90c9550f3b9e2cb4 | [
"MIT"
] | null | null | null | website/server/codenames/collocations/__init__.py | mderijk/codenames | 7133a8e85243550dddf4a64e90c9550f3b9e2cb4 | [
"MIT"
] | 1 | 2021-07-26T07:05:38.000Z | 2021-07-26T07:05:38.000Z |
from .collocator import *
| 9 | 25 | 0.740741 | 3 | 27 | 6.666667 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.185185 | 27 | 2 | 26 | 13.5 | 0.909091 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ad59f902574d774145340fa013e3f55c8b0639cd | 462 | py | Python | sdk/python/pulumi_akamai/TrafficManagement/__init__.py | yliu-d/pulumi-akamai | 06f734863e453a04475fa3bd419a10c3758e3909 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_akamai/TrafficManagement/__init__.py | yliu-d/pulumi-akamai | 06f734863e453a04475fa3bd419a10c3758e3909 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2020-04-28T09:19:32.000Z | 2020-04-28T09:19:32.000Z | sdk/python/pulumi_akamai/TrafficManagement/__init__.py | yliu-d/pulumi-akamai | 06f734863e453a04475fa3bd419a10c3758e3909 | [
"ECL-2.0",
"Apache-2.0"
] | 1 | 2020-04-28T09:18:19.000Z | 2020-04-28T09:18:19.000Z | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
# Export this package's modules as members:
from .get_gtm_default_datacenter import *
from .gtm_a_smap import *
from .gtm_cidrmap import *
from .gtm_datacenter import *
from .gtm_domain import *
from .gtm_geomap import *
from .gtm_property import *
from .gtm_resource import *
| 33 | 87 | 0.748918 | 72 | 462 | 4.652778 | 0.638889 | 0.208955 | 0.271642 | 0.137313 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.002577 | 0.160173 | 462 | 13 | 88 | 35.538462 | 0.860825 | 0.474026 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ad69b0c5e66e236f64494a607e4a527e538545fd | 94 | py | Python | rec/models/__init__.py | gergely-flamich/relative-entropy-coding | c99d90cabec4395de2d01d889bd2b7ed7b7453d7 | [
"MIT"
] | 14 | 2020-11-17T23:31:10.000Z | 2022-01-28T04:23:38.000Z | rec/models/__init__.py | gergely-flamich/relative-entropy-coding | c99d90cabec4395de2d01d889bd2b7ed7b7453d7 | [
"MIT"
] | null | null | null | rec/models/__init__.py | gergely-flamich/relative-entropy-coding | c99d90cabec4395de2d01d889bd2b7ed7b7453d7 | [
"MIT"
] | 1 | 2021-05-05T04:08:23.000Z | 2021-05-05T04:08:23.000Z | from .mnist_vae import MNISTVAE, MNISTVampVAE
from .resnet_vae import BidirectionalResNetVAE
| 23.5 | 46 | 0.861702 | 11 | 94 | 7.181818 | 0.727273 | 0.227848 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106383 | 94 | 3 | 47 | 31.333333 | 0.940476 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
ad73264e6a706f6158c33dfa324a41f02bc5727d | 40 | py | Python | Utils/SystemUtils.py | lsliluu/android_package_tools | 800dd82c49ece7411dd47e7cdfc77b1f68812e8c | [
"Apache-2.0"
] | null | null | null | Utils/SystemUtils.py | lsliluu/android_package_tools | 800dd82c49ece7411dd47e7cdfc77b1f68812e8c | [
"Apache-2.0"
] | null | null | null | Utils/SystemUtils.py | lsliluu/android_package_tools | 800dd82c49ece7411dd47e7cdfc77b1f68812e8c | [
"Apache-2.0"
] | null | null | null |
def get_system_operating():
    pass
| 6.666667 | 27 | 0.675 | 5 | 40 | 5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.25 | 40 | 5 | 28 | 8 | 0.833333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | true | 0.5 | 0 | 0 | 0.5 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 6 |
ad755ba2d08850a340f68a676aaf8f78de575b6d | 146 | py | Python | chap8/8-1.py | StewedChickenwithStats/Answers-to-Python-Crash-Course | 9ffbe02abba5d111f702d920db7932303daf59d4 | [
"MIT"
] | 1 | 2022-02-21T07:05:48.000Z | 2022-02-21T07:05:48.000Z | chap8/8-1.py | StewedChickenwithStats/Answers-to-Python-Crash-Course | 9ffbe02abba5d111f702d920db7932303daf59d4 | [
"MIT"
] | null | null | null | chap8/8-1.py | StewedChickenwithStats/Answers-to-Python-Crash-Course | 9ffbe02abba5d111f702d920db7932303daf59d4 | [
"MIT"
] | null | null | null | def display_message():
"""show what you will learn in this chapter"""
print("You will learn function in this chapter.")
display_message() | 29.2 | 53 | 0.712329 | 21 | 146 | 4.857143 | 0.619048 | 0.27451 | 0.235294 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.178082 | 146 | 5 | 54 | 29.2 | 0.85 | 0.273973 | 0 | 0 | 0 | 0 | 0.39604 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | true | 0 | 0 | 0 | 0.333333 | 0.333333 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d127272170bdfa36948532744150becb605915c9 | 2,807 | py | Python | smoke/layers/uncertainty_loss.py | ZhuokunYao/smoke | d524fbe43b1aba6078c25d9aca7924b71a635e1d | [
"MIT"
] | null | null | null | smoke/layers/uncertainty_loss.py | ZhuokunYao/smoke | d524fbe43b1aba6078c25d9aca7924b71a635e1d | [
"MIT"
] | null | null | null | smoke/layers/uncertainty_loss.py | ZhuokunYao/smoke | d524fbe43b1aba6078c25d9aca7924b71a635e1d | [
"MIT"
] | null | null | null | import numpy as np
import torch
from torch.nn import functional as F
def laplacian_aleatoric_uncertainty_loss_original(input, target, log_variance, reduction='mean'):
    '''
    References:
        MonoPair: Monocular 3D Object Detection Using Pairwise Spatial Relationships, CVPR'20
        Geometry and Uncertainty in Deep Learning for Computer Vision, University of Cambridge
    '''
    assert reduction in ['mean', 'sum']
    loss = 1.4142 * torch.exp(-0.5*log_variance) * torch.abs(input - target) + 0.5*log_variance
    return loss.mean() if reduction == 'mean' else loss.sum()

"""
def laplacian_aleatoric_uncertainty_loss(input, target, log_std, reduction='sum', reg_weight=None):
    '''
    References:
        MonoPair: Monocular 3D Object Detection Using Pairwise Spatial Relationships, CVPR'20
        Geometry and Uncertainty in Deep Learning for Computer Vision, University of Cambridge
    '''
    assert reduction in ['mean', 'sum']
    loss1 = 1.4142 * torch.exp(-log_std) * torch.abs(input - target)
    loss2 = log_std
    loss = loss1 + loss2
    #if reg_weight is not None:
    #    loss1 *= reg_weight
    #    loss2 *= reg_weight
    #    loss *= reg_weight
    loss1 = loss1.mean() if reduction == 'mean' else loss1.sum()
    loss2 = loss2.mean() if reduction == 'mean' else loss2.sum()
    loss = loss.mean() if reduction == 'mean' else loss.sum()
    return loss, loss1, loss2
"""

def laplacian_aleatoric_uncertainty_loss(input, target, log_std, FUNCTION, reduction='sum', reg_weight=None):
    '''
    References:
        MonoPair: Monocular 3D Object Detection Using Pairwise Spatial Relationships, CVPR'20
        Geometry and Uncertainty in Deep Learning for Computer Vision, University of Cambridge
    '''
    assert reduction in ['mean', 'sum']
    loss1 = 1.4142 * torch.exp(-log_std) * FUNCTION(input, target, reduce=False)
    loss2 = log_std
    loss = loss1 + loss2
    #if reg_weight is not None:
    #    loss1 *= reg_weight
    #    loss2 *= reg_weight
    #    loss *= reg_weight
    loss1 = loss1.mean() if reduction == 'mean' else loss1.sum()
    loss2 = loss2.mean() if reduction == 'mean' else loss2.sum()
    loss = loss.mean() if reduction == 'mean' else loss.sum()
    return loss, loss1, loss2

def gaussian_aleatoric_uncertainty_loss(input, target, log_variance, reduction='mean'):
    '''
    References:
        What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?, Neuips'17
        Geometry and Uncertainty in Deep Learning for Computer Vision, University of Cambridge
    '''
    assert reduction in ['mean', 'sum']
    loss = 0.5 * torch.exp(-log_variance) * torch.abs(input - target)**2 + 0.5 * log_variance
    return loss.mean() if reduction == 'mean' else loss.sum()

if __name__ == '__main__':
    pass
| 41.279412 | 109 | 0.676167 | 366 | 2,807 | 5.068306 | 0.215847 | 0.070081 | 0.06469 | 0.081941 | 0.885714 | 0.850674 | 0.805391 | 0.762264 | 0.762264 | 0.706739 | 0 | 0.028636 | 0.216245 | 2,807 | 67 | 110 | 41.895522 | 0.814545 | 0.24047 | 0 | 0.227273 | 0 | 0 | 0.0489 | 0 | 0 | 0 | 0 | 0 | 0.136364 | 1 | 0.136364 | false | 0.045455 | 0.136364 | 0 | 0.409091 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
d13c72cc274cf4838e7eccebbbdc8e1c8f7697bb | 139 | py | Python | tests/autojsonrpc_test.py | evolvIQ/python-pipe-rpc | c1e4607a152f57326f79da9528e8dd6c97a841e2 | [
"Apache-2.0"
] | 2 | 2019-01-24T22:06:59.000Z | 2019-03-19T13:30:58.000Z | tests/autojsonrpc_test.py | rickardp/streamrpc | c1e4607a152f57326f79da9528e8dd6c97a841e2 | [
"Apache-2.0"
] | null | null | null | tests/autojsonrpc_test.py | rickardp/streamrpc | c1e4607a152f57326f79da9528e8dd6c97a841e2 | [
"Apache-2.0"
] | null | null | null | from . import jsonrpc_test
class JsonAutoDetectTests(jsonrpc_test.JsonTests):
def _servertype(self):
return "streamrpc.Server" | 27.8 | 50 | 0.76259 | 15 | 139 | 6.866667 | 0.866667 | 0.213592 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.158273 | 139 | 5 | 51 | 27.8 | 0.880342 | 0 | 0 | 0 | 0 | 0 | 0.114286 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 6 |
0f5b86aad0cc87f5ca9f0083933aacbe8782dda3 | 35 | py | Python | spell/__init__.py | modul/spell | 2f1410ee374517075bc38368ff0eaf1d087dc994 | [
"MIT"
] | null | null | null | spell/__init__.py | modul/spell | 2f1410ee374517075bc38368ff0eaf1d087dc994 | [
"MIT"
] | 2 | 2020-03-09T17:41:40.000Z | 2020-03-10T10:34:31.000Z | spell/__init__.py | modul/spell | 2f1410ee374517075bc38368ff0eaf1d087dc994 | [
"MIT"
] | null | null | null | from .spell import Speller, TABLES
| 17.5 | 34 | 0.8 | 5 | 35 | 5.6 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 35 | 1 | 35 | 35 | 0.933333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7e3de3bc4d0c6b6cbaf976573590f8a553d36f9a | 32 | py | Python | qakgc/linker/__init__.py | pbmstrk/odqa | b5917237d5162eae208382e5d25b0c8c47018681 | [
"BSD-3-Clause"
] | 4 | 2020-09-04T16:58:52.000Z | 2021-06-23T03:37:18.000Z | qakgc/linker/__init__.py | pbmstrk/QAKGC | b5917237d5162eae208382e5d25b0c8c47018681 | [
"BSD-3-Clause"
] | 3 | 2020-07-07T23:45:21.000Z | 2020-07-09T11:49:23.000Z | qakgc/linker/__init__.py | pbmstrk/odqa | b5917237d5162eae208382e5d25b0c8c47018681 | [
"BSD-3-Clause"
] | null | null | null | from .linker import EntityLinker | 32 | 32 | 0.875 | 4 | 32 | 7 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.09375 | 32 | 1 | 32 | 32 | 0.965517 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7e509d7e1210be9a15605a5ddbbf8d42251b1510 | 37 | py | Python | src/__init__.py | sfwatanabe/lockstep-sdk-python | b388c818663a4b090debb68c65c18728a082fec0 | [
"MIT"
] | 1 | 2022-01-07T17:21:29.000Z | 2022-01-07T17:21:29.000Z | src/__init__.py | sfwatanabe/lockstep-sdk-python | b388c818663a4b090debb68c65c18728a082fec0 | [
"MIT"
] | 14 | 2022-01-13T19:58:35.000Z | 2022-02-14T20:50:49.000Z | src/__init__.py | sfwatanabe/lockstep-sdk-python | b388c818663a4b090debb68c65c18728a082fec0 | [
"MIT"
] | 5 | 2021-12-30T16:41:20.000Z | 2022-01-14T20:11:36.000Z | from lockstep_api import LockstepApi
| 18.5 | 36 | 0.891892 | 5 | 37 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 37 | 1 | 37 | 37 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
7e63b826d4715d8f83a8bfe22b1f094e2b09dd5c | 10,252 | py | Python | genomic_data_service/rnaseq/tests/test_rnaseq_repository.py | ENCODE-DCC/genomic-data-service | 954017a5bcc5f448fbe2867768186df5e066c67c | [
"MIT"
] | 3 | 2020-10-26T02:15:55.000Z | 2022-01-26T18:39:09.000Z | genomic_data_service/rnaseq/tests/test_rnaseq_repository.py | ENCODE-DCC/genomic-data-service | 954017a5bcc5f448fbe2867768186df5e066c67c | [
"MIT"
] | 3 | 2021-08-17T02:01:54.000Z | 2022-03-30T17:14:02.000Z | genomic_data_service/rnaseq/tests/test_rnaseq_repository.py | ENCODE-DCC/genomic-data-service | 954017a5bcc5f448fbe2867768186df5e066c67c | [
"MIT"
] | 1 | 2022-03-24T21:15:34.000Z | 2022-03-24T21:15:34.000Z | import pytest
def test_rnaseq_repository_memory_init():
    from genomic_data_service.rnaseq.repository.memory import Memory
    memory = Memory()
    assert isinstance(memory, Memory)
def test_rnaseq_repository_memory_load(as_expressions):
    from genomic_data_service.rnaseq.repository.memory import Memory
    memory = Memory()
    memory.load(as_expressions[1])
    assert len(memory.data) == 1
def test_rnaseq_repository_memory_bulk_load(as_expressions):
    from genomic_data_service.rnaseq.repository.memory import Memory
    memory = Memory()
    memory.bulk_load(as_expressions)
    assert len(memory.data) == 3
def test_rnaseq_repository_memory_bulk_load_from_files(mocker, mock_portal, raw_expressions):
    from genomic_data_service.rnaseq.repository.memory import Memory
    mocker.patch(
        'genomic_data_service.rnaseq.domain.file.get_expression_generator',
        return_value=raw_expressions,
    )
    memory = Memory()
    files = list(mock_portal.get_rna_seq_files())
    assert len(files) == 4
    memory.bulk_load_from_files(files)
    # Four expression values for four files.
    assert len(memory.data) == 16
    assert memory.data[0] == {
        'embedded': {
            'expression': {
                'gene_id': 'ENSG00000034677.12',
                'transcript_ids': [
                    'ENST00000341084.6',
                    'ENST00000432381.2',
                    'ENST00000517584.5',
                    'ENST00000519342.1',
                    'ENST00000519449.5',
                    'ENST00000519527.5',
                    'ENST00000520071.1',
                    'ENST00000520903.1',
                    'ENST00000522182.1',
                    'ENST00000522369.5',
                    'ENST00000523167.1',
                    'ENST00000523255.5',
                    'ENST00000523481.5',
                    'ENST00000523644.1',
                    'ENST00000524233.1'
                ],
                'tpm': 9.34,
                'fpkm': 14.49
            },
            'file': {
                '@id': '/files/ENCFF241WYH/',
                'assay_title': 'polyA plus RNA-seq',
                'assembly': 'GRCh38',
                'biosample_ontology': {
                    'organ_slims': ['musculature of body'],
                    'term_name': 'muscle of trunk',
                    'synonyms': [
                        'torso muscle organ',
                        'trunk musculature',
                        'trunk muscle',
                        'muscle of trunk',
                        'muscle organ of torso',
                        'trunk muscle organ',
                        'muscle organ of trunk',
                        'body musculature'
                    ],
                    'name': 'tissue_UBERON_0001774',
                    'term_id': 'UBERON:0001774',
                    'classification': 'tissue'
                },
                'dataset': '/experiments/ENCSR906HEV/',
                'donors': ['/human-donors/ENCDO676JUB/'],
                'genome_annotation': 'V29'
            },
            'dataset': {
                '@id': '/experiments/ENCSR906HEV/',
                'biosample_summary': 'muscle of trunk tissue female embryo (113 days)',
                'replicates': [
                    {
                        'library': {
                            'biosample': {
                                'age_units': 'day',
                                'sex': 'female',
                                'age': '113',
                                'donor': {
                                    'organism': {
                                        'scientific_name': 'Homo sapiens'
                                    }
                                }
                            }
                        }
                    }
                ]
            },
            'gene': {
                'geneid': '25897',
                'symbol': 'RNF19A',
                'name': 'ring finger protein 19A, RBR E3 ubiquitin protein ligase',
                'synonyms': ['DKFZp566B1346', 'RNF19', 'dorfin'],
                '@id': '/genes/25897/',
                'title': 'RNF19A (Homo sapiens)'
            },
            '@id': '/expressions/ENCFF241WYH/ENSG00000034677.12/',
            '@type': ['RNAExpression', 'Item'],
        },
        '_index': 'rna-expression',
        '_type': 'rna-expression',
        'principals_allowed': {
            'view': ['system.Everyone']
        },
        '_id': '/expressions/ENCFF241WYH/ENSG00000034677.12/'
    }
def test_rnaseq_repository_memory_clear(as_expressions):
    from genomic_data_service.rnaseq.repository.memory import Memory
    memory = Memory()
    memory.bulk_load(as_expressions)
    assert len(memory.data) == 3
    memory.clear()
    assert len(memory.data) == 0
def test_rnaseq_repository_elasticsearch_init():
    from genomic_data_service.rnaseq.repository.elasticsearch import Elasticsearch
    es = Elasticsearch({})
    assert isinstance(es, Elasticsearch)
@pytest.mark.integration
def test_rnaseq_repository_elasticsearch_load(mocker, mock_portal, raw_expressions, elasticsearch_client):
    from genomic_data_service.rnaseq.repository.elasticsearch import Elasticsearch
    es = Elasticsearch(
        elasticsearch_client
    )
    item = {
        'embedded': {
            'expression': {
                'gene_id': 'ENSG00000034677.12',
                'transcript_ids': [
                    'ENST00000341084.6',
                    'ENST00000432381.2',
                    'ENST00000517584.5',
                    'ENST00000519342.1',
                    'ENST00000519449.5',
                    'ENST00000519527.5',
                    'ENST00000520071.1',
                    'ENST00000520903.1',
                    'ENST00000522182.1',
                    'ENST00000522369.5',
                    'ENST00000523167.1',
                    'ENST00000523255.5',
                    'ENST00000523481.5',
                    'ENST00000523644.1',
                    'ENST00000524233.1'
                ],
                'tpm': 9.34,
                'fpkm': 14.49
            },
            'file': {
                '@id': '/files/ENCFF241WYH/',
                'assay_title': 'polyA plus RNA-seq',
                'assembly': 'GRCh38',
                'biosample_ontology': {
                    'organ_slims': ['musculature of body'],
                    'term_name': 'muscle of trunk',
                    'synonyms': [
                        'torso muscle organ',
                        'trunk musculature',
                        'trunk muscle',
                        'muscle of trunk',
                        'muscle organ of torso',
                        'trunk muscle organ',
                        'muscle organ of trunk',
                        'body musculature'
                    ],
                    'name': 'tissue_UBERON_0001774',
                    'term_id': 'UBERON:0001774',
                    'classification': 'tissue'
                },
                'dataset': '/experiments/ENCSR906HEV/',
                'donors': ['/human-donors/ENCDO676JUB/'],
                'genome_annotation': 'V29'
            },
            'dataset': {
                '@id': '/experiments/ENCSR906HEV/',
                'biosample_summary': 'muscle of trunk tissue female embryo (113 days)',
                'replicates': [
                    {
                        'library': {
                            'biosample': {
                                'age_units': 'day',
                                'sex': 'female',
                                'age': '113'
                            }
                        }
                    }
                ]
            },
            'gene': {
                'geneid': '25897',
                'symbol': 'RNF19A',
                'name': 'ring finger protein 19A, RBR E3 ubiquitin protein ligase',
                'synonyms': ['DKFZp566B1346', 'RNF19', 'dorfin'],
                '@id': '/genes/25897/',
                'title': 'RNF19A (Homo sapiens)'
            },
            '@id': '/files/ENCFF241WYH/',
            '@type': ['RNAExpression', 'Item'],
            'expression_id': '/expressions/ENCFF241WYH/ENSG00000034677.12/'
        },
        'principals_allowed': {
            'view': ['system.Everyone']
        }
    }
    es.load(item)
    data = es.data
    assert len(data) == 1
    assert data[0]['_source']['embedded']['expression_id'] == '/expressions/ENCFF241WYH/ENSG00000034677.12/'
    es.clear()
@pytest.mark.integration
def test_rnaseq_repository_elasticsearch_bulk_load(mocker, raw_files, raw_expressions, repositories, elasticsearch_client):
    from genomic_data_service.rnaseq.repository.elasticsearch import Elasticsearch
    from genomic_data_service.rnaseq.domain.file import RnaSeqFile
    mocker.patch(
        'genomic_data_service.rnaseq.domain.file.get_expression_generator',
        return_value=raw_expressions,
    )
    es = Elasticsearch(
        elasticsearch_client
    )
    rna_file = RnaSeqFile(raw_files[0], repositories)
    as_documents = list(rna_file.as_documents())
    es.bulk_load(as_documents)
    data = es.data
    assert len(data) == 4
    data.sort(key=lambda d: d['_id'])
    assert data[0]['_id'] == '/expressions/ENCFF241WYH/ENSG00000034677.12/'
    assert data[3]['_id'] == '/expressions/ENCFF241WYH/ENSG00000060982.14/'
    es.clear()
@pytest.mark.integration
def test_rnaseq_repository_elasticsearch_bulk_load_from_files(mocker, mock_portal, raw_expressions, elasticsearch_client):
    from genomic_data_service.rnaseq.repository.elasticsearch import Elasticsearch
    mocker.patch(
        'genomic_data_service.rnaseq.domain.file.get_expression_generator',
        return_value=raw_expressions,
    )
    es = Elasticsearch(
        elasticsearch_client
    )
    files = mock_portal.get_rna_seq_files()
    es.bulk_load_from_files(files)
    data = es.data
    assert len(data) == 16
    data.sort(key=lambda d: d['_id'])
    assert data[0]['_id'] == '/expressions/ENCFF106SZG/ENSG00000034677.12/'
    assert data[15]['_id'] == '/expressions/ENCFF730OTJ/ENSG00000060982.14/'
    es.clear()
| 37.691176 | 123 | 0.502341 | 834 | 10,252 | 5.978417 | 0.199041 | 0.057762 | 0.046931 | 0.062575 | 0.850983 | 0.779783 | 0.729041 | 0.715002 | 0.70357 | 0.694545 | 0 | 0.107194 | 0.385778 | 10,252 | 271 | 124 | 37.830258 | 0.684612 | 0.003707 | 0 | 0.675889 | 0 | 0 | 0.290443 | 0.072268 | 0 | 0 | 0 | 0 | 0.067194 | 1 | 0.035573 | false | 0 | 0.043478 | 0 | 0.079051 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
7e6d1cbd2f291d30043c75b7662645673a7b054e | 154 | py | Python | slack/web/classes/dialogs.py | priya1puresoftware/python-slack-sdk | 3503182feaaf4d41b57fd8bf10038ebc99f1f3c7 | [
"MIT"
] | 2,486 | 2016-11-03T14:31:43.000Z | 2020-10-26T23:07:44.000Z | slack/web/classes/dialogs.py | priya1puresoftware/python-slack-sdk | 3503182feaaf4d41b57fd8bf10038ebc99f1f3c7 | [
"MIT"
] | 721 | 2016-11-03T21:26:56.000Z | 2020-10-26T12:41:29.000Z | slack/web/classes/dialogs.py | priya1puresoftware/python-slack-sdk | 3503182feaaf4d41b57fd8bf10038ebc99f1f3c7 | [
"MIT"
] | 627 | 2016-11-02T19:04:19.000Z | 2020-10-25T19:21:13.000Z | from slack_sdk.models.dialogs import DialogBuilder # noqa
from slack import deprecation
deprecation.show_message(__name__, "slack_sdk.models.dialogs")
| 25.666667 | 62 | 0.831169 | 20 | 154 | 6.05 | 0.6 | 0.14876 | 0.231405 | 0.347107 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.097403 | 154 | 5 | 63 | 30.8 | 0.870504 | 0.025974 | 0 | 0 | 0 | 0 | 0.162162 | 0.162162 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
7eae79cc34bb990b956807d03e7ab112b2cb806f | 88 | py | Python | poseidon/__init__.py | movermeyer/poseidon | 6d1cecbe02f1e510dd185fe23f88f7af35eb737f | [
"MIT"
] | 25 | 2015-01-16T15:47:51.000Z | 2021-07-17T20:35:11.000Z | poseidon/__init__.py | movermeyer/poseidon | 6d1cecbe02f1e510dd185fe23f88f7af35eb737f | [
"MIT"
] | 1 | 2015-06-16T10:46:16.000Z | 2015-06-16T10:46:16.000Z | poseidon/__init__.py | movermeyer/poseidon | 6d1cecbe02f1e510dd185fe23f88f7af35eb737f | [
"MIT"
] | 4 | 2015-04-04T16:26:37.000Z | 2018-03-04T20:52:47.000Z | from poseidon.client import connect
from poseidon.version import version as __version__
| 29.333333 | 51 | 0.863636 | 12 | 88 | 6 | 0.583333 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.113636 | 88 | 2 | 52 | 44 | 0.923077 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
0e5f56798e9407b54072a4448e6371089d6ef149 | 127 | py | Python | bayes_implicit_solvent/gb_models/__init__.py | openforcefield/bayes-implicit-solvent | 067239fcbb8af28eb6310d702804887662692ec2 | [
"MIT"
] | 4 | 2019-11-12T16:23:26.000Z | 2021-07-01T05:37:37.000Z | bayes_implicit_solvent/gb_models/__init__.py | openforcefield/bayes-implicit-solvent | 067239fcbb8af28eb6310d702804887662692ec2 | [
"MIT"
] | 4 | 2019-01-18T22:05:03.000Z | 2019-11-12T18:37:31.000Z | bayes_implicit_solvent/gb_models/__init__.py | openforcefield/bayes-implicit-solvent | 067239fcbb8af28eb6310d702804887662692ec2 | [
"MIT"
] | 2 | 2019-12-02T20:23:56.000Z | 2021-03-25T23:28:36.000Z | from . import obc2_parameters, jax_gb_models, numpy_gb_models
__all__ = ["jax_gb_models", "numpy_gb_models", "obc2_parameters"] | 63.5 | 65 | 0.811024 | 19 | 127 | 4.684211 | 0.473684 | 0.359551 | 0.247191 | 0.359551 | 0.539326 | 0.539326 | 0 | 0 | 0 | 0 | 0 | 0.017094 | 0.07874 | 127 | 2 | 65 | 63.5 | 0.74359 | 0 | 0 | 0 | 0 | 0 | 0.335938 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
7d2af4c6cd637bc26b338971398f403bac22846a | 42,948 | py | Python | app/script/infoFinder.py | VanBronckhorst/VizProject3 | b380cdca4f7dddbfc2297d2bfd3fb8912d9d4362 | [
"Apache-2.0"
] | null | null | null | app/script/infoFinder.py | VanBronckhorst/VizProject3 | b380cdca4f7dddbfc2297d2bfd3fb8912d9d4362 | [
"Apache-2.0"
] | null | null | null | app/script/infoFinder.py | VanBronckhorst/VizProject3 | b380cdca4f7dddbfc2297d2bfd3fb8912d9d4362 | [
"Apache-2.0"
] | null | null | null | __author__ = 'Filippo'
topArtists = topArtists = [{'images': [ {'url': 'https://i.scdn.co/image/0059f04fc4565a8d73516ad6fc70cb2e0513b67c', 'width': 565, 'height': 600}, {'url': 'https://i.scdn.co/image/9dd6761fca722ea3aac3b268cbf471e35c06def7', 'width': 200, 'height': 212}, {'url': 'https://i.scdn.co/image/c2bee31e3a75e26e6de847c0e1d95ad2c3d51f5e', 'width': 64, 'height': 68}], 'id': 'ARMHUOV12FE0873941', 'name': 'Glenn Miller His Orchestra'}, {'images': [ {'url': 'https://i.scdn.co/image/8f7e7e08c10d883ae1c64c43d8aba5f5e8c72bac', 'width': 1000, 'height': 1254}, {'url': 'https://i.scdn.co/image/7e7522c92a9b85c5ca78f559e6178bffccf03a7f', 'width': 640, 'height': 803}, {'url': 'https://i.scdn.co/image/4687b89f3440e6eb391857ce1d6ba0d80ba66826', 'width': 200, 'height': 251}, {'url': 'https://i.scdn.co/image/29ad263783e79fef9116b6c97a2271901848e9f1', 'width': 64, 'height': 80}], 'id': 'ARRL2QH1187B9AC814', 'name': 'Bing Crosby', 'location': {'latlon': {'lat': 47.2528768, 'lon': -122.4442906}, 'city': 'Tacoma', 'region': 'WA', 'location': 'tacoma, washington', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/e55960863d1db9f0630889ea713d57cda4164240', 'width': 640, 'height': 671}, {'url': 'https://i.scdn.co/image/ac70fff871e66803fb6b28486cebdf4bfc5d6245', 'width': 200, 'height': 210}, {'url': 'https://i.scdn.co/image/f8e9b545d717a0b7f876a4852fbaddd0e9000b50', 'width': 64, 'height': 67}], 'id': 'ARX5KSY1187FB3CBDC', 'name': 'Perry Como', 'location': {'latlon': {'lat': 40.2625702, 'lon': -80.18727969999999}, 'city': 'Canonsburg', 'region': 'PA', 'location': 'Canonsburg, PA', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/ff13be54a95720ac688dde1fa3ed36db3cc5207f', 'width': 640, 'height': 640}, {'url': 'https://i.scdn.co/image/09dc16998a5738f3109f40c1c038380243a37b69', 'width': 300, 'height': 300}, {'url': 'https://i.scdn.co/image/50243ff2de671190cc1bc9791cade52c2e00aaf2', 'width': 64, 'height': 64}], 'id': 'ARD6CFI1187FB5650B', 'name': 'Kay 
Kyser', 'location': {'latlon': {'lat': 35.9382103, 'lon': -77.7905339}, 'city': 'Rocky Mount', 'region': 'North Carolina', 'location': 'Rocky Mount, NC, S', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/2623fc9e3e8c59e2df8b1a57ce7d697e4949b744', 'width': 562, 'height': 686}, {'url': 'https://i.scdn.co/image/a9b6327cde18bfab7001cc4a6b41e76ec712ee08', 'width': 200, 'height': 244}, {'url': 'https://i.scdn.co/image/6f688d4c087e9de14a2e9f5b1143c26be2d57eb3', 'width': 64, 'height': 78}], 'id': 'ARZPHO81187B9ADA0E', 'name': 'Dinah Shore', 'location': {'latlon': {'lat': 35.1859163, 'lon': -86.11220709999999}, 'city': 'Winchester', 'region': 'Tennessee', 'location': 'Winchester, TN, S', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/a451df3e51f5da6a87b3e0bfe00195150749a646', 'width': 640, 'height': 640}, {'url': 'https://i.scdn.co/image/e42478529a2a227573af02a43bf4e873d690fb2b', 'width': 300, 'height': 300}, {'url': 'https://i.scdn.co/image/ed7612c2fba9af73a12dbd015d26adee3fe7e8ac', 'width': 64, 'height': 64}], 'id': 'ARIVACK12FE087A9F8', 'name': 'Jimmy Dorsey His Orchestra'}, {'images': [ {'url': 'https://i.scdn.co/image/ae48cc43ef35b816071df230666665af3ec79c42', 'width': 600, 'height': 600}, {'url': 'https://i.scdn.co/image/718095e1177d2f32b13b264339dc6dfb3d9ab9a8', 'width': 300, 'height': 300}, {'url': 'https://i.scdn.co/image/4dd79922c3ea16e707409406bd0826f8bda6ecc6', 'width': 64, 'height': 64}], 'id': 'ARMFCXT1241B9C6990', 'name': 'His Orchestra', 'location': {'latlon': {'lat': 34.0522342, 'lon': -118.2436849}, 'city': 'Los Angeles', 'region': 'California', 'location': 'Los Angeles, CA, S', 'country': 'United States'}}, {'images': [], 'id': 'ARGOMOJ12F8DB5D5CE', 'name': 'Vaughn Monroe his Orchestra'}, {'id': 'ARXTEOZ12D5CD7C16D', 'name': 'Harry James His Orchestra'}, {'images': [ {'url': 'https://i.scdn.co/image/d0a3c1d046165b767e2924dd144882e2944d5dd9', 'width': 999, 'height': 650}, {'url': 
'https://i.scdn.co/image/d4da0d72899bb9267744e5f0c77c6560caac4fa9', 'width': 640, 'height': 416}, {'url': 'https://i.scdn.co/image/8821d7a555d81884dca496f2c5d19ad906ce9289', 'width': 200, 'height': 130}, {'url': 'https://i.scdn.co/image/6e842df4553e17b5cfb4d5bf490f17c77a45c138', 'width': 64, 'height': 42}], 'id': 'ARTASUV1187B9A2B67', 'name': 'Frank Sinatra', 'location': {'latlon': {'lat': 40.7439905, 'lon': -74.0323626}, 'city': 'Hoboken', 'region': 'NJ', 'location': 'Hoboken, NJ', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/c7b8708eab6d0f0902908c1b9f9ba1daaeed06af', 'width': 1000, 'height': 1296}, {'url': 'https://i.scdn.co/image/e25cb372ca9a5317c17d5f62b3556f76ce2edde8', 'width': 640, 'height': 829}, {'url': 'https://i.scdn.co/image/7d0e683d6bb4cbb384586cd6d9007f5a40928251', 'width': 200, 'height': 259}, {'url': 'https://i.scdn.co/image/16045a251c9e9f5772d4aeb3f6fa23fe4fdeb54a', 'width': 64, 'height': 83}], 'id': 'ARULZ741187B9AD2EF', 'name': 'Elvis Presley', 'location': {'latlon': {'lat': 35.1495343, 'lon': -90.0489801}, 'city': 'Memphis', 'region': 'TN', 'location': 'Memphis, TN', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/e55960863d1db9f0630889ea713d57cda4164240', 'width': 640, 'height': 671}, {'url': 'https://i.scdn.co/image/ac70fff871e66803fb6b28486cebdf4bfc5d6245', 'width': 200, 'height': 210}, {'url': 'https://i.scdn.co/image/f8e9b545d717a0b7f876a4852fbaddd0e9000b50', 'width': 64, 'height': 67}], 'id': 'ARX5KSY1187FB3CBDC', 'name': 'Perry Como', 'location': {'latlon': {'lat': 40.2625702, 'lon': -80.18727969999999}, 'city': 'Canonsburg', 'region': 'PA', 'location': 'Canonsburg, PA', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/6a42043a38a415f876cbaa24a2837e1822b94472', 'width': 1000, 'height': 1236}, {'url': 'https://i.scdn.co/image/8d1565ea84ddea3592e75196c758db146c5e7639', 'width': 640, 'height': 791}, {'url': 
'https://i.scdn.co/image/4d19dce246c4465806bed02a164282b0c175ae3f', 'width': 200, 'height': 247}, {'url': 'https://i.scdn.co/image/649d7fdd28ca4c19ddc27c091bc681d74bab7250', 'width': 64, 'height': 79}], 'id': 'ARO302W1187FB3C3FF', 'name': 'Pat Boone', 'location': {'latlon': {'lat': 36.1626638, 'lon': -86.7816016}, 'city': 'Nashville', 'region': 'TN', 'location': 'Nashville, TN', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/817a06ed9ae98622151c9c1eeb7a27733d231e76', 'width': 1000, 'height': 1215}, {'url': 'https://i.scdn.co/image/af28bf19e31fec3ad4294a29a25d72a7df0d204a', 'width': 640, 'height': 777}, {'url': 'https://i.scdn.co/image/9508a6b52dfdf7839fdc8e51475b64473bbd7518', 'width': 200, 'height': 243}, {'url': 'https://i.scdn.co/image/d9f9145f50f7e177eb044f6d76874a28dd012c2d', 'width': 64, 'height': 78}], 'id': 'AR24XPH1187FB58AA5', 'name': 'Patti Page', 'location': {'latlon': {'lat': 36.3125963, 'lon': -95.61609}, 'city': 'Claremore', 'region': 'OK', 'location': 'Claremore, OK', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/4d18abc275714e2b9b829098a94ccfd705d1f24b', 'width': 640, 'height': 640}, {'url': 'https://i.scdn.co/image/6e23cfcf9508c01a10fd06d59b5be2dade4dd980', 'width': 300, 'height': 300}, {'url': 'https://i.scdn.co/image/090a964b999fab76f984f5fc3734b90f6426b3fe', 'width': 64, 'height': 64}], 'id': 'AR8LA5Y1187B98D9E2', 'name': 'Eddie Fisher', 'location': {'latlon': {'lat': 40.6331249, 'lon': -89.3985283}, 'city': '', 'region': 'Illinois', 'location': 'Illinois, S', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/d0a3c1d046165b767e2924dd144882e2944d5dd9', 'width': 999, 'height': 650}, {'url': 'https://i.scdn.co/image/d4da0d72899bb9267744e5f0c77c6560caac4fa9', 'width': 640, 'height': 416}, {'url': 'https://i.scdn.co/image/8821d7a555d81884dca496f2c5d19ad906ce9289', 'width': 200, 'height': 130}, {'url': 
'https://i.scdn.co/image/6e842df4553e17b5cfb4d5bf490f17c77a45c138', 'width': 64, 'height': 42}], 'id': 'ARTASUV1187B9A2B67', 'name': 'Frank Sinatra', 'location': {'latlon': {'lat': 40.7439905, 'lon': -74.0323626}, 'city': 'Hoboken', 'region': 'NJ', 'location': 'Hoboken, NJ', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/1e7e3ddbe8c3862d32d35aef5e4a763718f1e370', 'width': 1000, 'height': 1170}, {'url': 'https://i.scdn.co/image/172221e04fef2e038871248b3abdecbcf8f5c131', 'width': 640, 'height': 749}, {'url': 'https://i.scdn.co/image/5ee1c7e5f1a45125ee8315d90ca62e6afb04cc25', 'width': 200, 'height': 234}, {'url': 'https://i.scdn.co/image/afe5d30d0286526a60aa0d37c02d5864eb24f67b', 'width': 64, 'height': 75}], 'id': 'ARD5ZH01187FB3C0E5', 'name': 'Fats Domino', 'location': {'latlon': {'lat': 29.95106579999999, 'lon': -90.0715323}, 'city': 'New Orleans', 'region': 'LA', 'location': 'New Orleans, LA, S ', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/6534f65aa75d38ffc2c436fc4d3bf5ac0f99e3e3', 'width': 640, 'height': 640}, {'url': 'https://i.scdn.co/image/71e8a8ab7d1b8151018c1e2d759285c62a300382', 'width': 300, 'height': 300}, {'url': 'https://i.scdn.co/image/dcab95dfe84aaa44aa6b7449a767cc93c867aaa9', 'width': 64, 'height': 64}], 'id': 'AR9H4Y51187B99939B', 'name': 'Four', 'location': {'latlon': {'lat': 37.8393332, 'lon': -84.2700179}, 'city': '', 'region': 'Kentucky', 'location': 'Kentucky, S', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/10557069b43b1059e6490d062a5d21154a78d69d', 'width': 689, 'height': 857}, {'url': 'https://i.scdn.co/image/f6fa63eb8267c9381557adbb37119900c49c3734', 'width': 640, 'height': 796}, {'url': 'https://i.scdn.co/image/634e2cd425103bfd8766a7f31adcaa0bdfedb3ac', 'width': 200, 'height': 249}, {'url': 'https://i.scdn.co/image/864692e8803ffa885e34cbcde41acb218019c17e', 'width': 64, 'height': 80}], 'id': 'AR59BQE1187FB3CBE0', 'name': 'The Platters', 'location': 
{'latlon': {'lat': 34.0522342, 'lon': -118.2436849}, 'city': 'Los Angeles', 'region': 'California', 'location': 'Los Angeles, CA, S', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/f4fe8c49e3a3f92e40d3a9f3c4d9653c11aaf48c', 'width': 1000, 'height': 1278}, {'url': 'https://i.scdn.co/image/eb641a1787c4b017efb8f702bdaa29c2bb730c6c', 'width': 640, 'height': 818}, {'url': 'https://i.scdn.co/image/02f6a44aa76d7aa0983c0bdaa3fa47a49985c509', 'width': 200, 'height': 256}, {'url': 'https://i.scdn.co/image/ea87d0b6ea0363411f93c5bfb94c2a59a3cec70f', 'width': 64, 'height': 82}], 'id': 'AREIN101187FB5A5FA', 'name': 'Nat King Cole', 'location': {'latlon': {'lat': 32.3668052, 'lon': -86.2999689}, 'city': 'Montgomery', 'region': 'AL', 'location': 'Montgomery, AL', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/8c2622e70133d2e80b00e81b353c908657d5e7f6', 'width': 1000, 'height': 982}, {'url': 'https://i.scdn.co/image/e5db293abd499b37934e485559f49d9caf9bf008', 'width': 640, 'height': 628}, {'url': 'https://i.scdn.co/image/6a95b472121e3b8c5e19615c67716265dd3abdfc', 'width': 200, 'height': 196}, {'url': 'https://i.scdn.co/image/fce3a73bfc6028030554cd0af14e42b227e2e80b', 'width': 64, 'height': 63}], 'id': 'AR6XZ861187FB4CECD', 'name': 'The Beatles', 'location': {'latlon': {'lat': 53.4083714, 'lon': -2.9915726}, 'city': 'Liverpool', 'region': 'England', 'location': 'Liverpool, England, GB', 'country': 'United Kingdom'}}, {'images': [ {'url': 'https://i.scdn.co/image/c7b8708eab6d0f0902908c1b9f9ba1daaeed06af', 'width': 1000, 'height': 1296}, {'url': 'https://i.scdn.co/image/e25cb372ca9a5317c17d5f62b3556f76ce2edde8', 'width': 640, 'height': 829}, {'url': 'https://i.scdn.co/image/7d0e683d6bb4cbb384586cd6d9007f5a40928251', 'width': 200, 'height': 259}, {'url': 'https://i.scdn.co/image/16045a251c9e9f5772d4aeb3f6fa23fe4fdeb54a', 'width': 64, 'height': 83}], 'id': 'ARULZ741187B9AD2EF', 'name': 'Elvis Presley', 'location': {'latlon': 
{'lat': 35.1495343, 'lon': -90.0489801}, 'city': 'Memphis', 'region': 'TN', 'location': 'Memphis, TN', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/b2d04f712c91bcf98a28ce1a8c2f674ddb724ec6', 'width': 1000, 'height': 1469}, {'url': 'https://i.scdn.co/image/4ca270764861f2e13851b8e5110bb96ba7f39359', 'width': 640, 'height': 940}, {'url': 'https://i.scdn.co/image/89ecdb230bcc12e980ce58fd88d20cc6dbc5b388', 'width': 200, 'height': 294}, {'url': 'https://i.scdn.co/image/2e615b79eb4c945b7a57e241448e681d7f2da8bd', 'width': 64, 'height': 94}], 'id': 'ARGKD4W1187B990E04', 'name': 'Brenda Lee', 'location': {'latlon': {'lat': 33.7489954, 'lon': -84.3879824}, 'city': 'Atlanta', 'region': 'GA', 'location': 'Atlanta, GA', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/7832094ca0d3539486b74f02924ba746c70b8951', 'width': 1000, 'height': 1239}, {'url': 'https://i.scdn.co/image/833f9840930f1e50b9583bd08267ab2bf47f17a9', 'width': 640, 'height': 793}, {'url': 'https://i.scdn.co/image/bec5833f085cf25d49c8919a2396283c578e8751', 'width': 200, 'height': 248}, {'url': 'https://i.scdn.co/image/86086d26f614a233fbeb4f1eda0a4b731bdb3766', 'width': 64, 'height': 79}], 'id': 'AR20CFC1187B98A25D', 'name': 'Ray Charles', 'location': {'latlon': {'lat': 30.46937389999999, 'lon': -83.6301544}, 'city': 'Greenville', 'region': 'FL', 'location': 'Greenville, Florida', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/078dabab8bc05dbe533d7765ce00637a22619ec7', 'width': 1000, 'height': 1196}, {'url': 'https://i.scdn.co/image/f7254dff6ba9780bab64f135d04b624c3a08d978', 'width': 640, 'height': 765}, {'url': 'https://i.scdn.co/image/55887d09c47f121b77d14f351426553195482391', 'width': 200, 'height': 239}, {'url': 'https://i.scdn.co/image/3539139b14d49d8971731fa53b645738aacd5a72', 'width': 64, 'height': 77}], 'id': 'AR392MV1187FB3C3FE', 'name': 'Connie Francis', 'location': {'latlon': {'lat': 40.735657, 'lon': -74.1723667}, 
'city': 'Newark', 'region': 'NJ', 'location': 'Newark, NJ', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/c54e83c7f03513ac2bad82efad454d2b111de6f8', 'width': 500, 'height': 518}, {'url': 'https://i.scdn.co/image/6df67ef68e08b052eb1a816422ab82ba95927bd1', 'width': 200, 'height': 207}, {'url': 'https://i.scdn.co/image/97eca5166c41afb8c2268ebcc3ab115ec2c10eb9', 'width': 64, 'height': 66}], 'id': 'AR2DGPY1187FB4CECF', 'name': 'The Beach Boys', 'location': {'latlon': {'lat': 33.9164032, 'lon': -118.3525748}, 'city': 'Hawthorne', 'region': 'California', 'location': 'Hawthorne, CA, S', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/703fe6e6d231364377958c1cd0725a8e6e1d7f6c', 'width': 1000, 'height': 1547}, {'url': 'https://i.scdn.co/image/200a0f2569b4d05518fbbcaabe0ec3add46dae6b', 'width': 640, 'height': 990}, {'url': 'https://i.scdn.co/image/ac7284c3f76b45703a219c56f3f170ec381bbe21', 'width': 200, 'height': 309}, {'url': 'https://i.scdn.co/image/9615b471ae7971293fa3bda6c825751d07628dc1', 'width': 64, 'height': 99}], 'id': 'ARVNNXD1187B9AE50D', 'name': 'Marvin Gaye', 'location': {'latlon': {'lat': 38.9071923, 'lon': -77.0368707}, 'city': 'Washington', 'region': 'DC', 'location': 'Washington, DC', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/8467571f46c4a18f7c2baa1e581c9573b5e596aa', 'width': 465, 'height': 507}, {'url': 'https://i.scdn.co/image/3e64d0df8c0f523a1aca92aa2ea5f603d9e003e9', 'width': 200, 'height': 218}, {'url': 'https://i.scdn.co/image/623af55fd16535a7de4fd049ea7cbfaaf7c3528a', 'width': 64, 'height': 70}], 'id': 'ARRDLVE1187FB48F11', 'name': 'James Brown', 'location': {'latlon': {'lat': 34.5773206, 'lon': -83.3323851}, 'city': 'Toccoa', 'region': 'GA', 'location': 'Taccoa, Georgia', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/10d80af483070c9a1d4636a36ca2d1f89289c933', 'width': 1000, 'height': 1250}, {'url': 
'https://i.scdn.co/image/0b5b079ad92eac89fad895f309499ff772ce08c1', 'width': 640, 'height': 800}, {'url': 'https://i.scdn.co/image/d0a244ebffff84aa94682338ca70b5d0e18790fa', 'width': 200, 'height': 250}, {'url': 'https://i.scdn.co/image/0fdfd8a3beef84b7bc9cf9191519f6192a54764e', 'width': 64, 'height': 80}], 'id': 'ARUA34B1187FB3DF74', 'name': 'Chubby Checker', 'location': {'latlon': {'lat': 33.836081, 'lon': -81.1637245}, 'city': '', 'region': 'South Carolina', 'location': 'South Carolina, S', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/fa37079d82ebbec1c47174920f113dabdac6dbea', 'width': 1000, 'height': 1222}, {'url': 'https://i.scdn.co/image/91d7bc79c679635efacb821e0fcf95635630cf28', 'width': 640, 'height': 782}, {'url': 'https://i.scdn.co/image/9cfb52968425a1c94ed4a4c1af6bd80e0186a5ef', 'width': 200, 'height': 244}, {'url': 'https://i.scdn.co/image/0251a22fb108ef7c6ef50a9e90ff82c8731490a0', 'width': 64, 'height': 78}], 'id': 'ARVTPMZ1187FB44AEB', 'name': 'Bobby Vinton', 'location': {'latlon': {'lat': 40.2625702, 'lon': -80.18727969999999}, 'city': 'Canonsburg', 'region': 'PA', 'location': 'Canonsburg, PA', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/8ee2f7de8a8ba9c8c994f7d44c76a5cbe8abfa94', 'width': 1000, 'height': 1314}, {'url': 'https://i.scdn.co/image/b0703a467b1b6b66f7892d8c6688c6d77919d77b', 'width': 640, 'height': 841}, {'url': 'https://i.scdn.co/image/d3e24f26140a24deebc402beee18de5f4ce96075', 'width': 200, 'height': 263}, {'url': 'https://i.scdn.co/image/c052227a2cd1cb7fa7ec3430c4e3b43f6229a4a2', 'width': 64, 'height': 84}], 'id': 'ARTB3OO1187FB49BDA', 'name': 'Elton John', 'location': {'latlon': {'lat': 51.595172, 'lon': -0.378002}, 'city': 'Pinner', 'region': '', 'location': 'Pinner, K', 'country': 'United Kingdom'}}, {'images': [ {'url': 'https://i.scdn.co/image/1d5a05673975ba0c378cd280344e000b0b865620', 'width': 1000, 'height': 733}, {'url': 
'https://i.scdn.co/image/1ea4795e17ffa658d7ad23095d58997a278179a9', 'width': 640, 'height': 469}, {'url': 'https://i.scdn.co/image/1f363042bdd59e0f80cb7b9cac1b312f9389b019', 'width': 200, 'height': 147}, {'url': 'https://i.scdn.co/image/c707552215757c7f2dc6071e32d487a3c7b28d3f', 'width': 64, 'height': 47}], 'id': 'AR6LKUT1187FB57287', 'name': 'Bee Gees', 'location': {'latlon': {'lat': -27.2297407, 'lon': 153.1082561}, 'city': 'Brisbane', 'region': 'Queensland', 'location': 'Redcliffe, Brisbane, Queensland, AU', 'country': 'Australia'}}, {'images': [ {'url': 'https://i.scdn.co/image/ed45f575339b044d7168780a15ff457b85be851a', 'width': 500, 'height': 493}, {'url': 'https://i.scdn.co/image/6d1f554e76304a29b8baa3f61213a88702c0f599', 'width': 200, 'height': 197}, {'url': 'https://i.scdn.co/image/53a2c2a3d9f4c61f20b04d95936fadc83b7b08b5', 'width': 64, 'height': 63}], 'id': 'AR58JT11187B9AF4CC', 'name': 'Carpenters', 'location': {'latlon': {'lat': 34.0522342, 'lon': -118.2436849}, 'city': 'Los Angeles', 'region': 'California', 'location': 'Los Angeles, CA, S', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/a09994fb177b1c7d9b0c2ebdccdeb3e0bba387f4', 'width': 1000, 'height': 1055}, {'url': 'https://i.scdn.co/image/2427903500a2213e46fe901c702322bbb98a9f62', 'width': 640, 'height': 675}, {'url': 'https://i.scdn.co/image/65db42200ec816648fd69140835131eff09a320f', 'width': 200, 'height': 211}, {'url': 'https://i.scdn.co/image/999dbe2ab26f4f607a820fe44d2f349f8f86bf58', 'width': 64, 'height': 68}], 'id': 'AR4EUFM1187B99592B', 'name': 'Chicago', 'location': {'latlon': {'lat': 41.8781136, 'lon': -87.6297982}, 'city': 'Chicago', 'region': 'IL', 'location': 'Chicago, IL', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/860e29514323260192168a1fdd64ceaf3073b92b', 'width': 350, 'height': 463}, {'url': 'https://i.scdn.co/image/539fc51c588801578cc9297aeb13f1a4e9dd396d', 'width': 200, 'height': 265}, {'url': 
'https://i.scdn.co/image/130125ae017e93d1b7ef701aa181f35b46a86312', 'width': 64, 'height': 85}], 'id': 'ARUM60L1187B9A7E1F', 'name': 'Stevie Wonder', 'location': {'latlon': {'lat': 42.331427, 'lon': -83.0457538}, 'city': 'Detroit', 'region': 'MI', 'location': 'Detroit Michigan', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/5c895601cd21d72e72504af83699d0fce60cae65', 'width': 999, 'height': 649}, {'url': 'https://i.scdn.co/image/e0f8d2173ce2d363b3f4d8a014e834f4c4854d1b', 'width': 640, 'height': 416}, {'url': 'https://i.scdn.co/image/ce2f498468cb5f4c48b91b7dc2e10fdeb9a40760', 'width': 200, 'height': 130}, {'url': 'https://i.scdn.co/image/773ceedfa2d02ca77abd5d1cb603ab3968d56a64', 'width': 64, 'height': 42}], 'id': 'AROQSCE1187B99F0FC', 'name': 'John Denver', 'location': {'latlon': {'lat': 39.1910983, 'lon': -106.8175387}, 'city': 'Aspen', 'region': 'CO', 'location': 'Aspen, CO', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/8467571f46c4a18f7c2baa1e581c9573b5e596aa', 'width': 465, 'height': 507}, {'url': 'https://i.scdn.co/image/3e64d0df8c0f523a1aca92aa2ea5f603d9e003e9', 'width': 200, 'height': 218}, {'url': 'https://i.scdn.co/image/623af55fd16535a7de4fd049ea7cbfaaf7c3528a', 'width': 64, 'height': 70}], 'id': 'ARRDLVE1187FB48F11', 'name': 'James Brown', 'location': {'latlon': {'lat': 34.5773206, 'lon': -83.3323851}, 'city': 'Toccoa', 'region': 'GA', 'location': 'Taccoa, Georgia', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/c7b8708eab6d0f0902908c1b9f9ba1daaeed06af', 'width': 1000, 'height': 1296}, {'url': 'https://i.scdn.co/image/e25cb372ca9a5317c17d5f62b3556f76ce2edde8', 'width': 640, 'height': 829}, {'url': 'https://i.scdn.co/image/7d0e683d6bb4cbb384586cd6d9007f5a40928251', 'width': 200, 'height': 259}, {'url': 'https://i.scdn.co/image/16045a251c9e9f5772d4aeb3f6fa23fe4fdeb54a', 'width': 64, 'height': 83}], 'id': 'ARULZ741187B9AD2EF', 'name': 'Elvis Presley', 'location': 
{'latlon': {'lat': 35.1495343, 'lon': -90.0489801}, 'city': 'Memphis', 'region': 'TN', 'location': 'Memphis, TN', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/f280ce2ac687d7a5982fc4b8dcbdcb4577bc6f43', 'width': 500, 'height': 681}, {'url': 'https://i.scdn.co/image/d91b4fe4aca47c221a4fd4503cb3344c96cc14a9', 'width': 200, 'height': 272}, {'url': 'https://i.scdn.co/image/7c63b7355fb1152e3e4d56de45bec2b11dd7d4a5', 'width': 64, 'height': 87}], 'id': 'AR8L6W21187B9AD317', 'name': 'Diana Ross', 'location': {'latlon': {'lat': 42.331427, 'lon': -83.0457538}, 'city': 'Detroit', 'region': 'MI', 'location': 'Detroit, MI ', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/041abf224de6851dec32ca50cede99539c88fbfc', 'width': 1000, 'height': 1108}, {'url': 'https://i.scdn.co/image/c8a5abb41349b1e6b7505ee3fa56d4b855b48c90', 'width': 640, 'height': 709}, {'url': 'https://i.scdn.co/image/1d5af4bd60f01af5a63c65fa233d294e69d53324', 'width': 200, 'height': 222}, {'url': 'https://i.scdn.co/image/7f8fbc2ba80681f576d5a6440971ad243db19202', 'width': 64, 'height': 71}], 'id': 'AR5J7RY1187FB3781C', 'name': 'Helen Reddy', 'location': {'latlon': {'lat': -37.814107, 'lon': 144.96328}, 'city': 'Melbourne', 'region': '', 'location': 'melbourne, australia', 'country': 'Australia'}}, {'images': [ {'url': 'https://i.scdn.co/image/c9b53f16231496566a250b980853688d0060c5f7', 'width': 450, 'height': 668}, {'url': 'https://i.scdn.co/image/b7d1f37f8fa6cc5a1c676fb9a90c7401d5dcc21d', 'width': 200, 'height': 297}, {'url': 'https://i.scdn.co/image/6fe4d999bcd666990dab0cc614eb3efbaa8c2294', 'width': 64, 'height': 95}], 'id': 'ARBEOHF1187B9B044D', 'name': 'Madonna', 'location': {'latlon': {'lat': 40.7127837, 'lon': -74.0059413}, 'city': 'New York', 'region': 'NY', 'location': 'New York City, NY', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/fb71d7e04f2ba840b5a68c07ee62653317ad3609', 'width': 1000, 'height': 982}, {'url': 
'https://i.scdn.co/image/9cbd37b1208bca44e08323cc33b081778051ccb5', 'width': 640, 'height': 629}, {'url': 'https://i.scdn.co/image/bf5cbf103efc965156264559b4ee8f289f484412', 'width': 200, 'height': 196}, {'url': 'https://i.scdn.co/image/ba18b9077f61cdbb0360b12f7b337246076315ad', 'width': 64, 'height': 63}], 'id': 'AR5HOKG1187FB3873E', 'name': "Shakin' Stevens", 'location': {'latlon': {'lat': 51.483308, 'lon': -3.238867}, 'city': 'Ely', 'region': 'Wales', 'location': 'Ely, Wales, GB', 'country': 'United Kingdom'}}, {'images': [ {'url': 'https://i.scdn.co/image/610f183c9b2f78b7ce2fe17c7501e4287869f457', 'width': 1000, 'height': 1420}, {'url': 'https://i.scdn.co/image/a78ccaeb59704bce7f7d93155a89bfeaebe4ff0d', 'width': 640, 'height': 909}, {'url': 'https://i.scdn.co/image/9f318d1e29b276f3392498e9784ddde92bc3c149', 'width': 200, 'height': 284}, {'url': 'https://i.scdn.co/image/6b0e7ba60a075aad5bfa83e8a6f8065077055050', 'width': 64, 'height': 91}], 'id': 'AR1WWVL1187B9B0306', 'name': 'UB40', 'location': {'latlon': {'lat': 52.48624299999999, 'lon': -1.890401}, 'city': 'Birmingham', 'region': '', 'location': 'Birmingham, England', 'country': 'United Kingdom'}}, {'images': [ {'url': 'https://i.scdn.co/image/e1af28249a29a2c5cf0c0a961e476691e850972a', 'width': 1000, 'height': 1022}, {'url': 'https://i.scdn.co/image/017e962e3ab8ee27d8d4afd59b5cee2fe626302e', 'width': 640, 'height': 654}, {'url': 'https://i.scdn.co/image/f2ca392aa5331f47d8ceecf8000025596db2aae9', 'width': 200, 'height': 204}, {'url': 'https://i.scdn.co/image/69d4809c84154b3cacaf506ea4dba32b04ce8992', 'width': 64, 'height': 65}], 'id': 'ARJIQGE127D395D4E3', 'name': 'Madness', 'location': {'latlon': {'lat': 51.5390111, 'lon': -0.1425553}, 'city': 'London', 'region': 'England', 'location': 'Camden Town, London, England, GB', 'country': 'United Kingdom'}}, {'images': [ {'url': 'https://i.scdn.co/image/85328c931a10c2b83ad6edc448b209f07a7348f0', 'width': 999, 'height': 670}, {'url': 
'https://i.scdn.co/image/6fa6ff6c55e5c14175c9af96cbd81335271937c0', 'width': 640, 'height': 429}, {'url': 'https://i.scdn.co/image/9d44aa7261262168a2747b8aab4ab2f72a5e0c03', 'width': 200, 'height': 134}, {'url': 'https://i.scdn.co/image/c81eaba9fe88dc55379cb756f1efea88fd68b797', 'width': 64, 'height': 43}], 'id': 'ARTMOFA11F50C4879F', 'name': 'Kool', 'location': {'latlon': {'lat': 27.6648274, 'lon': -81.5157535}, 'city': '', 'region': 'Florida', 'location': 'Florida, S', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/f804f486e59e8b50f340c5102d68314cd3e67166', 'width': 1000, 'height': 756}, {'url': 'https://i.scdn.co/image/8e40d2bc8225a6cd16cfe58b0c0e3d18ab978189', 'width': 640, 'height': 484}, {'url': 'https://i.scdn.co/image/8cab23e5f43e00e15ba7c86c1304a65ea997e639', 'width': 200, 'height': 151}, {'url': 'https://i.scdn.co/image/f847a32191ba414e63f0dd31a761b8377e5ddf45', 'width': 64, 'height': 48}], 'id': 'ARJG2ID1187B9A767E', 'name': 'The Jam', 'location': {'latlon': {'lat': 51.316774, 'lon': -0.5600349}, 'city': 'Woking', 'region': 'Surrey', 'location': 'Woking, Surrey, England', 'country': 'United Kingdom'}}, {'images': [ {'url': 'https://i.scdn.co/image/738650ce127e119788a7bca020fbd054b5aa57b5', 'width': 1000, 'height': 881}, {'url': 'https://i.scdn.co/image/8675fae7dd68a7f8ee97d65106c2f68c8026498b', 'width': 640, 'height': 564}, {'url': 'https://i.scdn.co/image/e8f489fb953e2d1cb00d1c18c903d6149cb1196d', 'width': 200, 'height': 176}, {'url': 'https://i.scdn.co/image/06e195eaaa853b397ccaa971edaa25554dba3c05', 'width': 64, 'height': 56}], 'id': 'ARXPPEY1187FB51DF4', 'name': 'Michael Jackson', 'location': {'latlon': {'lat': 41.5933696, 'lon': -87.3464271}, 'city': 'Gary', 'region': 'Indiana ', 'location': 'Gary, IN', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/a83dde6a555a82ae9f5b30639e20026338d23dd8', 'width': 320, 'height': 480}, {'url': 
'https://i.scdn.co/image/d086a88a34a4fd21d48b83af9947806688e3c36a', 'width': 200, 'height': 300}, {'url': 'https://i.scdn.co/image/197a7bf3eac3d431f90c7d06a7067a667ba1eb85', 'width': 64, 'height': 96}], 'id': 'ARJIAK51187B9B0C57', 'name': 'Cliff Richard', 'location': {'latlon': {'lat': 51.5073509, 'lon': -0.1277583}, 'city': 'London', 'region': '', 'location': 'London, K', 'country': 'United Kingdom'}}, {'images': [ {'url': 'https://i.scdn.co/image/2e6d5966dafa119f4a75e7df62ffa163e0d861cd', 'width': 1000, 'height': 1000}, {'url': 'https://i.scdn.co/image/4189f0e3b7d74c13fae3dea9810e4665235dc4ef', 'width': 640, 'height': 640}, {'url': 'https://i.scdn.co/image/8f9cb6c496286cb24855e3d2cf57df4da0a55b9f', 'width': 200, 'height': 200}, {'url': 'https://i.scdn.co/image/a02747bd3dd58d89752903a767117cb3eb928836', 'width': 64, 'height': 64}], 'id': 'AR9L6R11187B9AA3F2', 'name': 'Duran Duran', 'location': {'latlon': {'lat': 52.48624299999999, 'lon': -1.890401}, 'city': 'Birmingham', 'region': '', 'location': 'Birmingham, K', 'country': 'United Kingdom'}}, {'images': [ {'url': 'https://i.scdn.co/image/300d845955ce68f61fa5156cec232feeee2e49fc', 'width': 1000, 'height': 872}, {'url': 'https://i.scdn.co/image/cfb0b13fb0cddbca799d2d09ff78ba5e5d7d5aa4', 'width': 640, 'height': 558}, {'url': 'https://i.scdn.co/image/e26ea765886bbf49615c6fdbfb535bfe57740c4b', 'width': 200, 'height': 174}, {'url': 'https://i.scdn.co/image/be0277dda220ed9edd151bd9ad4c727fdec17850', 'width': 64, 'height': 56}], 'id': 'ARA1FK71187FB5AFAD', 'name': 'Wham!', 'location': {'latlon': {'lat': 51.64761499999999, 'lon': -0.35842}, 'city': '', 'region': '', 'location': 'Bushey, Hertfordshire, England', 'country': 'United Kingdom'}}, {'images': [ {'url': 'https://i.scdn.co/image/c9b53f16231496566a250b980853688d0060c5f7', 'width': 450, 'height': 668}, {'url': 'https://i.scdn.co/image/b7d1f37f8fa6cc5a1c676fb9a90c7401d5dcc21d', 'width': 200, 'height': 297}, {'url': 
'https://i.scdn.co/image/6fe4d999bcd666990dab0cc614eb3efbaa8c2294', 'width': 64, 'height': 95}], 'id': 'ARBEOHF1187B9B044D', 'name': 'Madonna', 'location': {'latlon': {'lat': 40.7127837, 'lon': -74.0059413}, 'city': 'New York', 'region': 'NY', 'location': 'New York City, NY', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/6483f64e640565f7f5bd82fb91cb76045408d992', 'width': 1000, 'height': 1207}, {'url': 'https://i.scdn.co/image/19faf3653d06d970e73adad7376c6280385f427f', 'width': 640, 'height': 773}, {'url': 'https://i.scdn.co/image/57b9286bd7aa077b266bd0be980279c3775d4428', 'width': 200, 'height': 241}, {'url': 'https://i.scdn.co/image/be4c529320ee4208751787831a4e4952ac89f2e0', 'width': 64, 'height': 77}], 'id': 'ARBUOOR1187B997391', 'name': 'Oasis', 'location': {'latlon': {'lat': 53.4807593, 'lon': -2.2426305}, 'city': 'Manchester', 'region': 'England', 'location': 'Manchester, England, GB', 'country': 'United Kingdom'}}, {'images': [ {'url': 'https://i.scdn.co/image/4e292498ce6004e7e5936a10a02c2f74410055a3', 'width': 400, 'height': 300}, {'url': 'https://i.scdn.co/image/e48568563ccf0745521b624444ff743e629d157c', 'width': 200, 'height': 150}, {'url': 'https://i.scdn.co/image/895c52f99eeb85e05e6ff14e5f6f125c6b4985ba', 'width': 64, 'height': 48}], 'id': 'ARCQ0AA1187B9AE55A', 'name': 'Boyzone', 'location': {'latlon': {'lat': 53.3498053, 'lon': -6.2603097}, 'city': 'Dublin', 'region': 'Dublin', 'location': 'Dublin, Dublin, IE', 'country': 'Ireland'}}, {'images': [ {'url': 'https://i.scdn.co/image/20e78a9ef314b580776bff93e8e073f74c79b969', 'width': 1000, 'height': 1000}, {'url': 'https://i.scdn.co/image/e5e75d3f721a31445755f639c9317f2a46ee93a7', 'width': 640, 'height': 640}, {'url': 'https://i.scdn.co/image/6f0a4da1b1e4addd84be4c177a63fcbb721d6453', 'width': 200, 'height': 200}, {'url': 'https://i.scdn.co/image/0e96bfe9b4eed6b1c3b1026e9a0461bfb0fff8dc', 'width': 64, 'height': 64}], 'id': 'ARKSZW81187B9B695D', 'name': 'Mariah Carey', 
'location': {'latlon': {'lat': 40.7127837, 'lon': -74.0059413}, 'city': 'New York', 'region': 'NY', 'location': 'New York, NY', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/770d5b1c4033a02da7a443f496673a6f2512ef57', 'width': 1000, 'height': 718}, {'url': 'https://i.scdn.co/image/753a74d543748de3652d829e31209905485445cc', 'width': 639, 'height': 459}, {'url': 'https://i.scdn.co/image/31d259ac0224423ae0e1bed08c4c1d5f0c6ee8f3', 'width': 200, 'height': 144}, {'url': 'https://i.scdn.co/image/898be37fe04e74b4398cab38d74c8bb10c85bf1b', 'width': 64, 'height': 46}], 'id': 'AR7VWZ11187B98DA42', 'name': 'Spice Girls', 'location': {'latlon': {'lat': 51.5073509, 'lon': -0.1277583}, 'city': 'London', 'region': '', 'location': 'London', 'country': 'United Kingdom'}}, {'images': [ {'url': 'https://i.scdn.co/image/738650ce127e119788a7bca020fbd054b5aa57b5', 'width': 1000, 'height': 881}, {'url': 'https://i.scdn.co/image/8675fae7dd68a7f8ee97d65106c2f68c8026498b', 'width': 640, 'height': 564}, {'url': 'https://i.scdn.co/image/e8f489fb953e2d1cb00d1c18c903d6149cb1196d', 'width': 200, 'height': 176}, {'url': 'https://i.scdn.co/image/06e195eaaa853b397ccaa971edaa25554dba3c05', 'width': 64, 'height': 56}], 'id': 'ARXPPEY1187FB51DF4', 'name': 'Michael Jackson', 'location': {'latlon': {'lat': 41.5933696, 'lon': -87.3464271}, 'city': 'Gary', 'region': 'Indiana ', 'location': 'Gary, IN', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/e4fdc45cbefaecc3ea4dc506c6a2688ffc5c5818', 'width': 600, 'height': 600}, {'url': 'https://i.scdn.co/image/cbc8f36eac517aadc57abdde2c8a157025de23c2', 'width': 300, 'height': 300}, {'url': 'https://i.scdn.co/image/1f3e51b3a3a745ee4ca9a59f79924b94ea0201a2', 'width': 64, 'height': 64}], 'id': 'ARSWWY01187FB41587', 'name': 'Take', 'location': {'latlon': {'lat': 40.7127837, 'lon': -74.0059413}, 'city': 'New York', 'region': 'New York', 'location': 'New York, NY, S', 'country': 'United States'}}, {'images': [ 
{'url': 'https://i.scdn.co/image/226917d0f37fc00c8d1bd903d9418c7221425cbc', 'width': 1000, 'height': 1335}, {'url': 'https://i.scdn.co/image/a3261c713329f5633cf299913589fd445d88d898', 'width': 640, 'height': 854}, {'url': 'https://i.scdn.co/image/ff8e0db0d34d498fcd0311c55350cc55d78265ef', 'width': 200, 'height': 267}, {'url': 'https://i.scdn.co/image/1d2efc02a19317459027562167cee64ff90d1e57', 'width': 64, 'height': 85}], 'id': 'ARFWL8S1187B9B4B44', 'name': 'C\xe9line Dion', 'location': {'latlon': {'lat': 45.7215131, 'lon': -73.49402649999999}, 'city': "l'Assomption", 'region': 'Qu\xe9bec', 'location': 'Charlemagne, Charlemagne, Quebec, CA', 'country': 'Canada'}}, {'images': [ {'url': 'https://i.scdn.co/image/a58a98315691d06293a08c30bca3c16b2ac624d7', 'width': 1000, 'height': 1000}, {'url': 'https://i.scdn.co/image/6ea87265a172c1b870377bcaf156286837006dfa', 'width': 640, 'height': 640}, {'url': 'https://i.scdn.co/image/ccee26d8d147eb73787ad3b8040d82175f98be4b', 'width': 200, 'height': 200}, {'url': 'https://i.scdn.co/image/104ea495b075b3d426c199fdb01b765c28fe749b', 'width': 64, 'height': 64}], 'id': 'AR4L4WQ1187FB51996', 'name': 'The Prodigy', 'location': {'latlon': {'lat': 51.880087, 'lon': 0.5509269}, 'city': 'Braintree', 'region': '', 'location': 'Braintree, England', 'country': 'United Kingdom'}}, {'images': [ {'url': 'https://i.scdn.co/image/4d3ab2b999520e779bc4aa3711a9d203105e90be', 'width': 1000, 'height': 1335}, {'url': 'https://i.scdn.co/image/40987e153474ea4924086fd6bcaea29f14453a2f', 'width': 640, 'height': 854}, {'url': 'https://i.scdn.co/image/42f87c5b3008daa98cb0008ce2549fa330c8eb20', 'width': 200, 'height': 267}, {'url': 'https://i.scdn.co/image/1ce8552dd1ecfa989e3a3a5421e7bfa4e937ea6a', 'width': 64, 'height': 85}], 'id': 'ARENMIP1187FB5638D', 'name': 'Robbie Williams', 'location': {'latlon': {'lat': 53.002668, 'lon': -2.179404}, 'city': 'Stoke-on-Trent', 'region': 'England', 'location': 'Stoke-on-Trent, England, GB', 'country': 'United Kingdom'}}, 
{'images': [ {'url': 'https://i.scdn.co/image/f08ed487e3894e0f9ab1c199cbd449d0fb7e244c', 'width': 1000, 'height': 621}, {'url': 'https://i.scdn.co/image/65b89be0a27e4486b2dc0825defa793e9ab18822', 'width': 640, 'height': 397}, {'url': 'https://i.scdn.co/image/e7f36f618a1e577741be6d1f108d27dd693f5b81', 'width': 200, 'height': 124}, {'url': 'https://i.scdn.co/image/d808c91f555eae0cd83500e38a5c047b19d157de', 'width': 64, 'height': 40}], 'id': 'ARTH9041187FB43E1F', 'name': 'Eminem', 'location': {'latlon': {'lat': 42.331427, 'lon': -83.0457538}, 'city': 'Detroit', 'region': 'MI', 'location': 'detroit, MI', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/aa77841809dd8bc82563155439da231c24715f26', 'width': 400, 'height': 492}, {'url': 'https://i.scdn.co/image/75334630786298b885319b60f71de1fe053576ea', 'width': 200, 'height': 246}, {'url': 'https://i.scdn.co/image/7cc2561129441ee83c978e1d8c519d88732b9b04', 'width': 64, 'height': 79}], 'id': 'ARPAAHK1187B9ABB00', 'name': 'Nickelback', 'location': {'latlon': {'lat': 51.6440582, 'lon': -111.92872}, 'city': 'Hanna', 'region': '', 'location': 'Hanna, Canada', 'country': 'Canada'}}, {'images': [ {'url': 'https://i.scdn.co/image/d26467f9fcb679e0d46abd3ca893d81ac6f812f6', 'width': 1000, 'height': 667}, {'url': 'https://i.scdn.co/image/cd2803ed795f5f77c652febd7f9a1450f48fb099', 'width': 640, 'height': 427}, {'url': 'https://i.scdn.co/image/caca2f9d2c835592c52777aae03bbaf4e272ea8f', 'width': 200, 'height': 133}, {'url': 'https://i.scdn.co/image/04bdcfa06410b3e3fffe93e2a000d0d79521e5ac', 'width': 64, 'height': 43}], 'id': 'ARS54I31187FB46721', 'name': 'Taylor Swift', 'location': {'latlon': {'lat': 40.3356483, 'lon': -75.9268747}, 'city': 'Reading', 'region': 'PA', 'location': 'Reading, PA', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/31adc01e0d401ebb60dbf7af7e2522c2aa6d277b', 'width': 600, 'height': 800}, {'url': 
'https://i.scdn.co/image/69d7d1d6bc3e05541200972f0f61f17268b2748e', 'width': 200, 'height': 267}, {'url': 'https://i.scdn.co/image/a0776727706817fe9b9232a0c330e54841026adc', 'width': 64, 'height': 85}], 'id': 'ARHATLI1187FB52C76', 'name': 'Toby Keith', 'location': {'latlon': {'lat': 36.1626638, 'lon': -86.7816016}, 'city': 'Nashville', 'region': 'TN', 'location': 'Nashville, TN', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/e58df4e4c4b6b2136fe134eaef9ba1dbb089b5f3', 'width': 1000, 'height': 563}, {'url': 'https://i.scdn.co/image/5edfa77f1c15abf58b73c3a1654f6eb8fe1e5fe7', 'width': 640, 'height': 360}, {'url': 'https://i.scdn.co/image/28b7305fd2233f316db955cc8173317bf885e50f', 'width': 200, 'height': 113}, {'url': 'https://i.scdn.co/image/95e2e1627b4443f85545232a84d90b50bb55b58a', 'width': 64, 'height': 36}], 'id': 'ARZ9S861187B9B5B0A', 'name': 'Kenny Chesney', 'location': {'latlon': {'lat': 36.1528688, 'lon': -83.784746}, 'city': 'Corryton', 'region': 'TN', 'location': 'Corryton, Tennessee', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/9eab4c6ef2d1b5b9a3a6cac281df8440d261af44', 'width': 1000, 'height': 667}, {'url': 'https://i.scdn.co/image/819a4083cd3c8a4fcc640eda718ec2b05f5365a8', 'width': 640, 'height': 427}, {'url': 'https://i.scdn.co/image/8642e0662d87fcb3d6bcd2473cfdcce314556a97', 'width': 200, 'height': 133}, {'url': 'https://i.scdn.co/image/4eaeab7d6e59c1768dc010bac7145aa27bab177d', 'width': 64, 'height': 43}], 'id': 'AR7ZFLN1187FB4830A', 'name': 'Rascal Flatts', 'location': {'latlon': {'lat': 39.9611755, 'lon': -82.99879419999999}, 'city': 'Columbus', 'region': 'OH', 'location': 'Columbus, OH', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/c7c2a5af3ae2ae4ecf0a66fedb81f03c5c1686aa', 'width': 1000, 'height': 1000}, {'url': 'https://i.scdn.co/image/d128b2bc39f2f698b0b4f474e951cd46e058b9b1', 'width': 640, 'height': 640}, {'url': 
'https://i.scdn.co/image/ca26e27a926ea38c08f06499da3cc598f2c6ce69', 'width': 200, 'height': 200}, {'url': 'https://i.scdn.co/image/fc2ec392f330c96e6332ab8aa6db4af119011a17', 'width': 64, 'height': 64}], 'id': 'ARLE2071187FB3A270', 'name': 'Carrie Underwood', 'location': {'latlon': {'lat': 35.4700993, 'lon': -95.5230356}, 'city': 'Checotah', 'region': 'Oklahoma', 'location': 'Checotah, OK, S', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/2c838adf68a4b24d58c5617f99c7153bf7484bcd', 'width': 1000, 'height': 1000}, {'url': 'https://i.scdn.co/image/c51b19fe1a528f56f9a0de6123dfa39bceff3e79', 'width': 640, 'height': 640}, {'url': 'https://i.scdn.co/image/d8491b247c35b3cd8c24b967dd3614375e60a5a2', 'width': 200, 'height': 200}, {'url': 'https://i.scdn.co/image/8d55dfb5fecd6593bc2d56f469396544198afa76', 'width': 64, 'height': 64}], 'id': 'ARQUMH41187B9AF699', 'name': 'Linkin Park', 'location': {'latlon': {'lat': 34.1533395, 'lon': -118.7616764}, 'city': 'Agoura Hills', 'region': 'California', 'location': 'Agoura Hills, CA, S', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/7252f8b4fdaf235cc05bbdb575b2e7a447adbe75', 'width': 1000, 'height': 1333}, {'url': 'https://i.scdn.co/image/584b6f93f4830594e6dcc81a02a5a46e0d096507', 'width': 640, 'height': 853}, {'url': 'https://i.scdn.co/image/5e00c9accaf78a61e47e952b6e7060549ad6f290', 'width': 200, 'height': 267}, {'url': 'https://i.scdn.co/image/16c4686cb1951a01299f5f60cd44c7bb7468e523', 'width': 64, 'height': 85}], 'id': 'AR52EZT1187B9900BF', 'name': 'Alicia Keys', 'location': {'latlon': {'lat': 40.7127837, 'lon': -74.0059413}, 'city': 'New York', 'region': 'New York', 'location': 'New York, NY, S', 'country': 'United States'}}, {'images': [ {'url': 'https://i.scdn.co/image/51725c003bbf5c6ffb2fe4bc0b25b44236ace1b3', 'width': 1000, 'height': 667}, {'url': 'https://i.scdn.co/image/f09a550e849c12f28207f902310ca774a739a7b0', 'width': 640, 'height': 427}, {'url': 
'https://i.scdn.co/image/b94c58adf8b88d567dba11d8550a8061c75340a3', 'width': 200, 'height': 133}, {'url': 'https://i.scdn.co/image/88fe9dceae2b5c84ab0a59642b17927f8e5600e9', 'width': 64, 'height': 43}], 'id': 'ARLGIX31187B9AE9A0', 'name': 'Jay-Z', 'location': {'latlon': {'lat': 40.6781784, 'lon': -73.9441579}, 'city': 'Brooklyn', 'region': 'NY', 'location': 'Brooklyn, NY', 'country': 'United States'}}]
from urllib2 import Request, urlopen, URLError
keys= ['CFLBFVAPYPMUYTTSR','QVJX27LZP1Q9GYBYV','45PFVVAQZQJD5BIV5','PKT63BEXLXWEDVVKE','ZTFPNPKXWOQAHAG2G'];
request = Request('http://developer.echonest.com/api/v4/genre/similar?api_key='+keys[0]+'&name=hard+rock')
toRedo=[]
import json
res = []
import geocoder
g = geocoder.google('Mountain View, CA')
g.latlng
for g in topArtists:
    request = Request('http://developer.echonest.com/api/v4/artist/profile?api_key='+keys[0]+'&id='+g['id']+"&bucket=genre")
    k = keys[0]
    keys.remove(k)
    keys.append(k)
    try:
        response = urlopen(request)
        sim = response.read()
        sim = json.loads(sim)
        print sim
        g["genres"] = sim["response"]["artist"]["genres"]
        #res.append(sim["response"]["artists"][0])
    except URLError, e:
        print "change key"
        toRedo.append(g)
print topArtists | 1,193 | 42,023 | 0.679892 | 4,509 | 42,948 | 6.474606 | 0.190508 | 0.068507 | 0.077071 | 0.111324 | 0.471946 | 0.448654 | 0.385627 | 0.383846 | 0.375933 | 0.368261 | 0 | 0.240469 | 0.080237 | 42,948 | 36 | 42,024 | 1,193 | 0.498582 | 0.000955 | 0 | 0 | 0 | 0 | 0.659729 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.115385 | null | null | 0.115385 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
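The loop above round-robins its Echo Nest API keys: each request uses the head of the key list, the key is rotated to the back, and any artist that fails with a `URLError` is queued for a retry. A minimal, self-contained sketch of that rotate-and-retry pattern (the `RateLimited` exception and `fetch` callable are stand-ins, not part of any real client):

```python
class RateLimited(Exception):
    """Stand-in for the URLError raised when a key is throttled."""


def fetch_with_rotation(item, keys, fetch, to_redo):
    # Use the current head key, then rotate it to the back so the
    # next call uses a fresh key (round-robin over all keys).
    key = keys[0]
    keys.remove(key)
    keys.append(key)
    try:
        return fetch(item, key)
    except RateLimited:
        # Queue the item for a later retry instead of crashing the loop.
        to_redo.append(item)
        return None
```

Callers drain `to_redo` once the throttled keys have cooled down, mirroring the `toRedo` list in the script above.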
ada9011690166d7c35db5c4bd61263aa610094f5 | 39 | py | Python | src/lib/wsgiref/__init__.py | DTenore/skulpt | 098d20acfb088d6db85535132c324b7ac2f2d212 | [
"MIT"
] | 2,671 | 2015-01-03T08:23:25.000Z | 2022-03-31T06:15:48.000Z | src/lib/wsgiref/__init__.py | wakeupmuyunhe/skulpt | a8fb11a80fb6d7c016bab5dfe3712517a350b347 | [
"MIT"
] | 972 | 2015-01-05T08:11:00.000Z | 2022-03-29T13:47:15.000Z | src/lib/wsgiref/__init__.py | wakeupmuyunhe/skulpt | a8fb11a80fb6d7c016bab5dfe3712517a350b347 | [
"MIT"
] | 845 | 2015-01-03T19:53:36.000Z | 2022-03-29T18:34:22.000Z | import _sk_fail; _sk_fail._("wsgiref")
| 19.5 | 38 | 0.769231 | 6 | 39 | 4.166667 | 0.666667 | 0.48 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.076923 | 39 | 1 | 39 | 39 | 0.694444 | 0 | 0 | 0 | 0 | 0 | 0.179487 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
add6d9ac2b6829d3b34c3e658f2578feac7c0f3a | 12,317 | py | Python | src/tf_transformers/models/gpt2/convert.py | legacyai/tf-transformers | 65a5f9a4bcb3236483daa598a37b91673f56cb97 | [
"Apache-2.0"
] | 116 | 2021-03-15T09:48:41.000Z | 2022-03-24T05:15:51.000Z | src/tf_transformers/models/gpt2/convert.py | legacyai/tf-transformers | 65a5f9a4bcb3236483daa598a37b91673f56cb97 | [
"Apache-2.0"
] | 4 | 2021-03-20T11:20:57.000Z | 2022-01-05T04:59:07.000Z | src/tf_transformers/models/gpt2/convert.py | legacyai/tf-transformers | 65a5f9a4bcb3236483daa598a37b91673f56cb97 | [
"Apache-2.0"
] | 9 | 2021-03-17T04:14:48.000Z | 2021-09-13T07:15:31.000Z | # coding=utf-8
# Copyright 2021 TF-Transformers Authors and The TensorFlow Authors.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import numpy as np
import tensorflow as tf
from absl import logging
from tf_transformers.core import keras_utils

def convert_gpt2_pt(model, config, model_name):
    """PT converter
    Args:
        model_hf: HuggingFace Model (PT)
        model: tf_transformers model/layer
        config: dict
    Returns:
        a function
    """
    # When dropout, use_auto_regressive is enabled assertion won't work
    SKIP_ASSERT = False
    try:
        # LegacyLayer
        local_config = model._config_dict
    except Exception:
        # LegacyModel
        local_config = model.model_config

    if local_config['use_dropout']:
        logging.warn("Note: As `use_dropout` is True we will skip Assertions, please verify the model.")
        SKIP_ASSERT = True
    if local_config['use_auto_regressive']:
        raise ValueError(
            "Please save model checkpoint without `use_auto_regressive` and then reload it with `use_auto_regressive`."
        )

    import torch
    import transformers

    transformers.logging.set_verbosity_error()

    # From vars (Transformer variables)
    from_model_vars = [
        "h.{}.ln_1.weight",
        "h.{}.ln_1.bias",
        "h.{}.attn.c_attn.weight",
        "h.{}.attn.c_attn.bias",
        "h.{}.attn.c_proj.weight",
        "h.{}.attn.c_proj.bias",
        "h.{}.ln_2.weight",
        "h.{}.ln_2.bias",
        "h.{}.mlp.c_fc.weight",
        "h.{}.mlp.c_fc.bias",
        "h.{}.mlp.c_proj.weight",
        "h.{}.mlp.c_proj.bias",
    ]

    # To vars (Transformer variables)
    to_model_vars = [
        "tf_transformers/gpt2/transformer/layer_{}/ln_1/layer_norm/gamma:0",
        "tf_transformers/gpt2/transformer/layer_{}/ln_1/layer_norm/beta:0",
        "tf_transformers/gpt2/transformer/layer_{}/self_attention/qkv/kernel:0",
        "tf_transformers/gpt2/transformer/layer_{}/self_attention/qkv/bias:0",
        "tf_transformers/gpt2/transformer/layer_{}/self_attention_output/kernel:0",
        "tf_transformers/gpt2/transformer/layer_{}/self_attention_output/bias:0",
        "tf_transformers/gpt2/transformer/layer_{}/self_attention_layer_norm/gamma:0",
        "tf_transformers/gpt2/transformer/layer_{}/self_attention_layer_norm/beta:0",
        "tf_transformers/gpt2/transformer/layer_{}/intermediate/kernel:0",
        "tf_transformers/gpt2/transformer/layer_{}/intermediate/bias:0",
        "tf_transformers/gpt2/transformer/layer_{}/output/kernel:0",
        "tf_transformers/gpt2/transformer/layer_{}/output/bias:0",
    ]

    # Simple Assertion
    assert len(from_model_vars) == len(to_model_vars)
    mapping_dict = {}
    for index in range(len(from_model_vars)):
        for i in range(config["num_hidden_layers"]):
            mapping_dict[from_model_vars[index].format(i)] = to_model_vars[index].format(i)
    # Word Embeddings
    mapping_dict["wte.weight"] = "tf_transformers/gpt2/word_embeddings/embeddings:0"
    # Positional Embedding
    mapping_dict["wpe.weight"] = "tf_transformers/gpt2/positional_embeddings/embeddings:0"
    mapping_dict["ln_f.weight"] = "tf_transformers/gpt2/ln_f/layer_norm/gamma:0"
    mapping_dict["ln_f.bias"] = "tf_transformers/gpt2/ln_f/layer_norm/beta:0"

    # GPT2Model
    from transformers import GPT2Model

    model_hf = GPT2Model.from_pretrained(model_name)
    # HF model variable name to variable values, for fast retrieval
    from_to_variable_dict = {name: var.detach().numpy() for name, var in model_hf.named_parameters()}

    # We need variable name to the index where it is stored inside tf_transformers model
    tf_transformers_model_index_dict = {}
    for index, var in enumerate(model.variables):
        tf_transformers_model_index_dict[var.name] = index
        # In auto_regressive mode, positional embeddings variable name has
        # cond extra name. So, in case someone converts in that mode,
        # replace above mapping here, only for positional embeddings
        if var.name == "tf_transformers/gpt2/cond/positional_embeddings/embeddings:0":
            mapping_dict["wpe.weight"] = "tf_transformers/gpt2/cond/positional_embeddings/embeddings:0"

    # Start assigning HF values to tf_transformers
    # assigned_map and assigned_map_values are used for sanity check if needed
    assigned_map = []
    # assigned_map_values = []
    for original_var, legacy_var in mapping_dict.items():
        index = tf_transformers_model_index_dict[legacy_var]
        from_shape = from_to_variable_dict.get(original_var).shape
        to_shape = model.variables[index].shape
        if len(from_shape) == 2:
            if len(to_shape) == 1:
                model.variables[index].assign(np.squeeze(from_to_variable_dict.get(original_var)))
                continue
        model.variables[index].assign(from_to_variable_dict.get(original_var))
        assigned_map.append((original_var, legacy_var))

    if SKIP_ASSERT is False:
        from transformers import GPT2Tokenizer

        tokenizer = GPT2Tokenizer.from_pretrained(model_name)
        text = "This is a long sentence to check how close models are."
        inputs = tokenizer(text, return_tensors="pt")
        outputs_hf = model_hf(**inputs)
        outputs_hf = torch.sum(outputs_hf["last_hidden_state"], dim=-1).detach().numpy()
        inputs_tf = {}
        inputs_tf["input_ids"] = tf.cast(tf.constant(inputs["input_ids"].numpy()), tf.int32)
        outputs_tf = model(inputs_tf)
        outputs_tf = tf.reduce_sum(outputs_tf["token_embeddings"], axis=-1).numpy()
        tf.debugging.assert_near(outputs_hf, outputs_tf, rtol=1.0)
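The converter builds its name mapping by pairing two parallel template lists and expanding the `{}` placeholder once per layer index. A stripped-down sketch of that expansion, using one real template pair from the lists above:

```python
def build_mapping(from_templates, to_templates, num_layers):
    # Expand each (source, target) template pair once per layer index,
    # producing a flat source-name -> target-name dict.
    assert len(from_templates) == len(to_templates)
    mapping = {}
    for src, dst in zip(from_templates, to_templates):
        for i in range(num_layers):
            mapping[src.format(i)] = dst.format(i)
    return mapping
```

With 12 template pairs and `config["num_hidden_layers"]` layers this yields `12 * num_hidden_layers` entries, to which the embedding and final-layer-norm names are then added by hand.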

def convert_gpt2_tf(model, config, model_name):
    """TF converter
    Args:
        model_hf: HuggingFace Model (TF)
        model: tf_transformers model/layer
        config: dict
    Returns:
        a function
    """
    # When dropout, use_auto_regressive is enabled assertion won't work
    SKIP_ASSERT = False
    try:
        # LegacyLayer
        local_config = model._config_dict
    except Exception:
        # LegacyModel
        local_config = model.model_config

    if local_config['use_dropout']:
        logging.warn("Note: As `use_dropout` is True we will skip Assertions, please verify the model.")
        SKIP_ASSERT = True
    if local_config['use_auto_regressive']:
        raise ValueError(
            "Please save model checkpoint without `use_auto_regressive` and then reload it with `use_auto_regressive`."
        )

    import transformers

    transformers.logging.set_verbosity_error()

    # From vars (Transformer variables)
    from_model_vars = [
        "tfgp_t2model/transformer/h_._{}/ln_1/gamma:0",
        "tfgp_t2model/transformer/h_._{}/ln_1/beta:0",
        "tfgp_t2model/transformer/h_._{}/attn/c_attn/weight:0",
        "tfgp_t2model/transformer/h_._{}/attn/c_attn/bias:0",
        "tfgp_t2model/transformer/h_._{}/attn/c_proj/weight:0",
        "tfgp_t2model/transformer/h_._{}/attn/c_proj/bias:0",
        "tfgp_t2model/transformer/h_._{}/ln_2/gamma:0",
        "tfgp_t2model/transformer/h_._{}/ln_2/beta:0",
        "tfgp_t2model/transformer/h_._{}/mlp/c_fc/weight:0",
        "tfgp_t2model/transformer/h_._{}/mlp/c_fc/bias:0",
        "tfgp_t2model/transformer/h_._{}/mlp/c_proj/weight:0",
        "tfgp_t2model/transformer/h_._{}/mlp/c_proj/bias:0",
    ]

    # To vars (Transformer variables)
    to_model_vars = [
        "tf_transformers/gpt2/transformer/layer_{}/ln_1/layer_norm/gamma:0",
        "tf_transformers/gpt2/transformer/layer_{}/ln_1/layer_norm/beta:0",
        "tf_transformers/gpt2/transformer/layer_{}/self_attention/qkv/kernel:0",
        "tf_transformers/gpt2/transformer/layer_{}/self_attention/qkv/bias:0",
        "tf_transformers/gpt2/transformer/layer_{}/self_attention_output/kernel:0",
        "tf_transformers/gpt2/transformer/layer_{}/self_attention_output/bias:0",
        "tf_transformers/gpt2/transformer/layer_{}/self_attention_layer_norm/gamma:0",
        "tf_transformers/gpt2/transformer/layer_{}/self_attention_layer_norm/beta:0",
        "tf_transformers/gpt2/transformer/layer_{}/intermediate/kernel:0",
        "tf_transformers/gpt2/transformer/layer_{}/intermediate/bias:0",
        "tf_transformers/gpt2/transformer/layer_{}/output/kernel:0",
        "tf_transformers/gpt2/transformer/layer_{}/output/bias:0",
    ]

    # Simple Assertion
    assert len(from_model_vars) == len(to_model_vars)
    mapping_dict = {}
    for index in range(len(from_model_vars)):
        for i in range(config["num_hidden_layers"]):
            mapping_dict[from_model_vars[index].format(i)] = to_model_vars[index].format(i)
    # Word Embeddings
    mapping_dict["tfgp_t2model/transformer/wte/weight:0"] = "tf_transformers/gpt2/word_embeddings/embeddings:0"
    # Positional Embedding
    mapping_dict[
        "tfgp_t2model/transformer/wpe/embeddings:0"
    ] = "tf_transformers/gpt2/positional_embeddings/embeddings:0"
    mapping_dict["tfgp_t2model/transformer/ln_f/gamma:0"] = "tf_transformers/gpt2/ln_f/layer_norm/gamma:0"
    mapping_dict["tfgp_t2model/transformer/ln_f/beta:0"] = "tf_transformers/gpt2/ln_f/layer_norm/beta:0"

    # GPT2Model
    from transformers import TFGPT2Model

    model_hf = TFGPT2Model.from_pretrained(model_name)
    # HF model variable name to variable values, for fast retrieval
    from_to_variable_dict = {var.name: var for var in model_hf.variables}

    # We need variable name to the index where it is stored inside tf_transformers model
    tf_transformers_model_index_dict = {}
    for index, var in enumerate(model.variables):
        tf_transformers_model_index_dict[var.name] = index
        # In auto_regressive mode, positional embeddings variable name has
        # cond extra name. So, in case someone converts in that mode,
        # replace above mapping here, only for positional embeddings
        if var.name == "tf_transformers/gpt2/cond/positional_embeddings/embeddings:0":
            mapping_dict[
                "tfgp_t2model/transformer/wpe/embeddings:0"
            ] = "tf_transformers/gpt2/cond/positional_embeddings/embeddings:0"

    # Start assigning HF values to tf_transformers
    # assigned_map and assigned_map_values are used for sanity check if needed
    assigned_map = []
    # assigned_map_values = []
    for original_var, legacy_var in mapping_dict.items():
        index = tf_transformers_model_index_dict[legacy_var]
        from_shape = from_to_variable_dict.get(original_var).shape
        to_shape = model.variables[index].shape
        if len(from_shape) == 2:
            if len(to_shape) == 1:
                model.variables[index].assign(tf.squeeze(from_to_variable_dict.get(original_var)))
                continue
        model.variables[index].assign(from_to_variable_dict.get(original_var))
        assigned_map.append((original_var, legacy_var))

    if SKIP_ASSERT is False:
        from transformers import GPT2Tokenizer

        tokenizer = GPT2Tokenizer.from_pretrained(model_name)
        text = "This is a long sentence to check how close models are."
        inputs = tokenizer(text, return_tensors="tf")
        outputs_hf = model_hf(**inputs)
        outputs_hf = tf.reduce_sum(outputs_hf["last_hidden_state"], axis=-1).numpy()
        del model_hf
        inputs_tf = {}
        inputs_tf["input_ids"] = inputs["input_ids"]
        outputs_tf = model(inputs_tf)
        outputs_tf = tf.reduce_sum(outputs_tf["token_embeddings"], axis=-1).numpy()
        if keras_utils.get_policy_name() == 'float32':
            tf.debugging.assert_near(outputs_hf, outputs_tf, rtol=1.0)
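Both converters special-case the assignment step: when the source tensor is 2-D but the target variable is 1-D (e.g. a `(1, n)` bias stored by HF's Conv1D-style layers vs. an `(n,)` bias), the singleton axis is squeezed away before assigning; otherwise the value is copied as-is. A numpy-only sketch of that shape-adaptation rule (hypothetical shapes, no TF/torch dependency):

```python
import numpy as np


def adapt_value(value, target_shape):
    # Mirror the converter's rule: squeeze a 2-D source down to 1-D
    # when the destination variable is 1-D; pass everything else through.
    if value.ndim == 2 and len(target_shape) == 1:
        return np.squeeze(value)
    return value
```

Keeping the rule this narrow means genuinely 2-D kernels assigned to 2-D variables are never touched; only the bias-shape mismatch is papered over.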
| 42.619377 | 120 | 0.688642 | 1,624 | 12,317 | 4.956281 | 0.155172 | 0.086967 | 0.080507 | 0.063735 | 0.864952 | 0.845944 | 0.827432 | 0.806311 | 0.767797 | 0.743695 | 0 | 0.015579 | 0.197451 | 12,317 | 288 | 121 | 42.767361 | 0.798685 | 0.191443 | 0 | 0.648045 | 0 | 0 | 0.400061 | 0.321919 | 0 | 0 | 0 | 0 | 0.078212 | 1 | 0.011173 | false | 0 | 0.061453 | 0 | 0.072626 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
adf177cd5beee1fc7849b7dd0f2569591ed85802 | 126 | py | Python | src/start_ssl_server.py | calleggr/PieServer | d3302600dfcd9466b4f0c5082a2de8bb2622ac24 | [
"MIT"
] | 1 | 2015-05-22T01:15:39.000Z | 2015-05-22T01:15:39.000Z | src/start_ssl_server.py | calleggr/PieServer | d3302600dfcd9466b4f0c5082a2de8bb2622ac24 | [
"MIT"
] | 8 | 2015-04-13T17:06:37.000Z | 2015-04-16T04:13:46.000Z | src/start_ssl_server.py | rockwotj/PieServer | d3302600dfcd9466b4f0c5082a2de8bb2622ac24 | [
"MIT"
] | null | null | null | from test_app.framework.server.server import PieServer
from test_app.main import app
PieServer(app,port=8443,ssl=True).run()
| 25.2 | 54 | 0.81746 | 21 | 126 | 4.809524 | 0.619048 | 0.158416 | 0.217822 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.034483 | 0.079365 | 126 | 4 | 55 | 31.5 | 0.836207 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.666667 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
adfcf995caa59434260ef0a9040cc271638addbb | 1,146 | py | Python | alpyro_msgs/actionlib_tutorials/averagingactionfeedback.py | rho2/alpyro_msgs | b5a680976c40c83df70d61bb2db1de32a1cde8d3 | [
"MIT"
] | 1 | 2020-12-13T13:07:10.000Z | 2020-12-13T13:07:10.000Z | alpyro_msgs/actionlib_tutorials/averagingactionfeedback.py | rho2/alpyro_msgs | b5a680976c40c83df70d61bb2db1de32a1cde8d3 | [
"MIT"
] | null | null | null | alpyro_msgs/actionlib_tutorials/averagingactionfeedback.py | rho2/alpyro_msgs | b5a680976c40c83df70d61bb2db1de32a1cde8d3 | [
"MIT"
] | null | null | null | from typing import Final
from alpyro_msgs import RosMessage
from alpyro_msgs.actionlib_msgs.goalstatus import GoalStatus
from alpyro_msgs.actionlib_tutorials.averagingfeedback import AveragingFeedback
from alpyro_msgs.std_msgs.header import Header
class AveragingActionFeedback(RosMessage):
    __msg_typ__ = "actionlib_tutorials/AveragingActionFeedback"
    __msg_def__ = "c3RkX21zZ3MvSGVhZGVyIGhlYWRlcgogIHVpbnQzMiBzZXEKICB0aW1lIHN0YW1wCiAgc3RyaW5nIGZyYW1lX2lkCmFjdGlvbmxpYl9tc2dzL0dvYWxTdGF0dXMgc3RhdHVzCiAgdWludDggUEVORElORz0wCiAgdWludDggQUNUSVZFPTEKICB1aW50OCBQUkVFTVBURUQ9MgogIHVpbnQ4IFNVQ0NFRURFRD0zCiAgdWludDggQUJPUlRFRD00CiAgdWludDggUkVKRUNURUQ9NQogIHVpbnQ4IFBSRUVNUFRJTkc9NgogIHVpbnQ4IFJFQ0FMTElORz03CiAgdWludDggUkVDQUxMRUQ9OAogIHVpbnQ4IExPU1Q9OQogIGFjdGlvbmxpYl9tc2dzL0dvYWxJRCBnb2FsX2lkCiAgICB0aW1lIHN0YW1wCiAgICBzdHJpbmcgaWQKICB1aW50OCBzdGF0dXMKICBzdHJpbmcgdGV4dAphY3Rpb25saWJfdHV0b3JpYWxzL0F2ZXJhZ2luZ0ZlZWRiYWNrIGZlZWRiYWNrCiAgaW50MzIgc2FtcGxlCiAgZmxvYXQzMiBkYXRhCiAgZmxvYXQzMiBtZWFuCiAgZmxvYXQzMiBzdGRfZGV2Cgo="
    __md5_sum__ = "78a4a09241b1791069223ae7ebd5b16b"

    header: Header
    status: GoalStatus
    feedback: AveragingFeedback
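The `__msg_def__` blob above is simply the full ROS message-definition text, base64-encoded (it begins with the `std_msgs/Header header` line). A small stdlib-only sketch of decoding it back:

```python
import base64


def decode_msg_def(encoded):
    # The encoded blob is plain UTF-8 ROS message-definition text.
    return base64.b64decode(encoded).decode("utf-8")
```

Decoding the class's `__msg_def__` yields the human-readable definition: the `std_msgs/Header`, the `actionlib_msgs/GoalStatus` constants (PENDING=0 through LOST=9), and the `AveragingFeedback` fields.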
| 71.625 | 670 | 0.934555 | 53 | 1,146 | 19.773585 | 0.45283 | 0.038168 | 0.053435 | 0.043893 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.080439 | 0.045375 | 1,146 | 15 | 671 | 76.4 | 0.877514 | 0 | 0 | 0 | 0 | 0 | 0.63438 | 0.63438 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.416667 | 0 | 1 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
bc30706739971e383b3c845c1119ef44f340f4f0 | 24 | py | Python | python/testData/resolve/multiFile/relativeAndSameDirectoryImports/plainDirectoryImportResolveSameDirectoryModuleNotThrowsException/not-valid-identifier/script.py | 06needhamt/intellij-community | 63d7b8030e4fdefeb4760e511e289f7e6b3a5c5b | [
"Apache-2.0"
] | 2 | 2019-04-28T07:48:50.000Z | 2020-12-11T14:18:08.000Z | python/testData/resolve/multiFile/relativeAndSameDirectoryImports/plainDirectoryImportResolveSameDirectoryModuleNotThrowsException/not-valid-identifier/script.py | 06needhamt/intellij-community | 63d7b8030e4fdefeb4760e511e289f7e6b3a5c5b | [
"Apache-2.0"
] | null | null | null | python/testData/resolve/multiFile/relativeAndSameDirectoryImports/plainDirectoryImportResolveSameDirectoryModuleNotThrowsException/not-valid-identifier/script.py | 06needhamt/intellij-community | 63d7b8030e4fdefeb4760e511e289f7e6b3a5c5b | [
"Apache-2.0"
] | null | null | null | import lib
# <ref> | 12 | 13 | 0.5 | 3 | 24 | 4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.375 | 24 | 2 | 13 | 12 | 0.8 | 0.208333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
70b048b4a00c8d829ff96af21606834b58d8417a | 84 | py | Python | titan/react_pkg/module/props.py | mnieber/gen | 65f8aa4fb671c4f90d5cbcb1a0e10290647a31d9 | [
"MIT"
] | null | null | null | titan/react_pkg/module/props.py | mnieber/gen | 65f8aa4fb671c4f90d5cbcb1a0e10290647a31d9 | [
"MIT"
] | null | null | null | titan/react_pkg/module/props.py | mnieber/gen | 65f8aa4fb671c4f90d5cbcb1a0e10290647a31d9 | [
"MIT"
] | null | null | null | from pathlib import Path
def module_path(self):
    return Path(self.output_path)
| 14 | 33 | 0.761905 | 13 | 84 | 4.769231 | 0.692308 | 0.258065 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 84 | 5 | 34 | 16.8 | 0.885714 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 6 |
cb3eb069731550d5487cbaba216324ea90d54e5c | 27,991 | py | Python | classes.py | whitevegagabriel/pygame-blueprint | 33fd11c13f8fc85a006891ecd4ad617444999edd | [
"MIT"
] | null | null | null | classes.py | whitevegagabriel/pygame-blueprint | 33fd11c13f8fc85a006891ecd4ad617444999edd | [
"MIT"
] | null | null | null | classes.py | whitevegagabriel/pygame-blueprint | 33fd11c13f8fc85a006891ecd4ad617444999edd | [
"MIT"
] | null | null | null | import pygame
win = pygame.display.set_mode((500,500))
class Camera:
def __init(self, x, y):
self.x
self.y
self.Camvel = 10

def _frames(folder, first, last):
    # Load a consecutive run of sprite frames; file names are zero-padded
    # to three digits, e.g. _frames('manwalk', 0, 8) loads
    # 'manwalk/000.png' .. 'manwalk/008.png'.
    return [pygame.image.load('%s/%03d.png' % (folder, i)) for i in range(first, last + 1)]


class Player():
    def __init__(self, x, y, height, width, health, CameraX, CameraY, varwidth, varheight):
        self.x = x
        self.y = y
        self.win = win
        self.width = width
        self.height = height
        self.vel = 5
        self.CameraX = CameraX
        self.CameraY = CameraY
        self.varwidth = varwidth
        self.varheight = varheight
        self.camvel = 10
        self.inventory = []
        self.facing = 1
        self.walkCount = 0
        self.ani_count = 0
        self.hitbox = (self.x - self.CameraX, self.y - self.CameraY, self.height, self.width)
        self.health = 50
        self.healthbar = (self.x - self.CameraX, self.y - self.CameraY, self.health, 10)
        self.isright = False
        self.action = False
        # Walk cycles: nine frames per direction.
        self.right = _frames('manwalk', 27, 35)
        self.isleft = False
        self.left = _frames('manwalk', 9, 17)
        self.isstanding = False
        self.standing = pygame.image.load('manwalk/018.png')
        self.isup = False
        self.up = _frames('manwalk', 0, 8)
        self.isdown = False
        self.down = _frames('manwalk', 18, 26)
        self.isdead = False
        self.dead = _frames('mandie', 0, 4)
        # Robe, skirt and hood overlays use the same sprite-sheet layout.
        self.isrobe = True
        self.robestanding = pygame.image.load('robe/018.png')
        self.robeup = _frames('robe', 0, 8)
        self.robedown = _frames('robe', 18, 26)
        self.robert = _frames('robe', 27, 35)
        self.robelft = _frames('robe', 9, 17)
        self.robeskrt_standing = pygame.image.load('robe/robe_skirt/018.png')
        self.robeskrt_up = _frames('robe/robe_skirt', 0, 8)
        self.robeskrt_rt = _frames('robe/robe_skirt', 9, 17)
        self.robeskrt_down = _frames('robe/robe_skirt', 18, 26)
        self.robeskrt_lft = _frames('robe/robe_skirt', 27, 35)
        self.robehood_standing = pygame.image.load('robe/robe_hood/018.png')
        self.robehood_up = _frames('robe/robe_hood', 0, 8)
        self.robehood_lft = _frames('robe/robe_hood', 9, 17)
        self.robehood_down = _frames('robe/robe_hood', 18, 26)
        self.robehood_rt = _frames('robe/robe_hood', 27, 35)
        # Thrust (attack) cycles: eight frames per direction.
        self.thrust = False
        self.thrust_up = _frames('manwalk/man_thrust', 0, 7)
        self.thrust_lft = _frames('manwalk/man_thrust', 8, 15)
        self.thrust_down = _frames('manwalk/man_thrust', 16, 23)
        self.thrust_rt = _frames('manwalk/man_thrust', 24, 31)
        self.robeskrt_thrust_up = _frames('robe/robe_skrt_thrust', 0, 7)
        self.robeskrt_thrust_lft = _frames('robe/robe_skrt_thrust', 8, 15)
        self.robeskrt_thrust_down = _frames('robe/robe_skrt_thrust', 16, 23)
        self.robeskrt_thrust_rt = _frames('robe/robe_skrt_thrust', 24, 31)
        self.robeshirt_thrust_up = _frames('robe/robe_shirt_thrust', 0, 7)
        self.robeshirt_thrust_lft = _frames('robe/robe_shirt_thrust', 8, 15)
        self.robeshirt_thrust_down = _frames('robe/robe_shirt_thrust', 16, 23)
        self.robeshirt_thrust_rt = _frames('robe/robe_shirt_thrust', 24, 31)
        self.robe_hood_thrust_up = _frames('robe/robe_hood_thrust', 0, 7)
        self.robe_hood_thrust_lft = _frames('robe/robe_hood_thrust', 8, 15)
self.robe_hood_thrust_down=[
pygame.image.load('robe/robe_hood_thrust/016.png'),
pygame.image.load('robe/robe_hood_thrust/017.png'),
pygame.image.load('robe/robe_hood_thrust/018.png'),
pygame.image.load('robe/robe_hood_thrust/019.png'),
pygame.image.load('robe/robe_hood_thrust/020.png'),
pygame.image.load('robe/robe_hood_thrust/021.png'),
pygame.image.load('robe/robe_hood_thrust/022.png'),
pygame.image.load('robe/robe_hood_thrust/023.png')]
self.robe_hood_thrust_rt= [
pygame.image.load('robe/robe_hood_thrust/024.png'),
pygame.image.load('robe/robe_hood_thrust/025.png'),
pygame.image.load('robe/robe_hood_thrust/026.png'),
pygame.image.load('robe/robe_hood_thrust/027.png'),
pygame.image.load('robe/robe_hood_thrust/028.png'),
pygame.image.load('robe/robe_hood_thrust/029.png'),
pygame.image.load('robe/robe_hood_thrust/030.png'),
pygame.image.load('robe/robe_hood_thrust/031.png')]
        self.weapon_staff = True
        self.staff_thrust_up = [pygame.image.load(f'weapon/staff_thrust/{i:03d}.png') for i in range(8)]
        self.staff_thrust_lft = [pygame.image.load(f'weapon/staff_thrust/{i:03d}.png') for i in range(8, 16)]
        self.staff_thrust_down = [pygame.image.load(f'weapon/staff_thrust/{i:03d}.png') for i in range(16, 24)]
        self.staff_thrust_rt = [pygame.image.load(f'weapon/staff_thrust/{i:03d}.png') for i in range(24, 32)]
        self.chain_helm_up = [pygame.image.load(f'chain/chain_helm/{i:03d}.png') for i in range(9)]
        self.chain_helm_lft = [pygame.image.load(f'chain/chain_helm/{i:03d}.png') for i in range(9, 18)]
        self.chain_helm_down = [pygame.image.load(f'chain/chain_helm/{i:03d}.png') for i in range(18, 27)]
        self.chain_helm_rt = [pygame.image.load(f'chain/chain_helm/{i:03d}.png') for i in range(27, 36)]
        self.chain_helm_standing = pygame.image.load('chain/chain_helm/018.png')
        self.ischain = True
        self.ischain_helm = True
    def hit(self):
        if self.health > 0:
            print('HIT')
            self.health -= 20
        # Plain `if` (not `elif`) so a hit that drops health to 0 kills immediately,
        # rather than one hit later.
        if self.health <= 0:
            print('DEAD DEAD DEAD')
            self.isdead = True
            self.isstanding = False
            self.isright = False
            self.isleft = False
            self.isup = False
            self.isdown = False
            self.action = False
        if self.isdead:
            # `win` here is the module-level display surface.
            win.blit(self.dead[round(self.ani_count)], (self.x - self.CameraX, self.y - self.CameraY))
    def draw(self, win):
        pygame.draw.rect(win, (255, 0, 0), self.hitbox, 2)
        pygame.draw.rect(win, (255, 0, 0), self.healthbar, 0)
        if self.isstanding:
            win.blit(self.standing, (self.x - self.CameraX, self.y - self.CameraY))
        if self.walkCount + 1 >= 27:
            self.walkCount = 0
        if self.ani_count + 1 >= 17:
            self.ani_count = 0
        if self.isleft:
            self.walkCount += 1
            win.blit(self.left[self.walkCount // 3], (self.x - self.CameraX, self.y - self.CameraY))
            self.facing = -1
        if self.isright:
            self.facing = 1
            win.blit(self.right[self.walkCount // 3], (self.x - self.CameraX, self.y - self.CameraY))
            self.walkCount += 1
        if self.isup:
            self.walkCount += 1
            win.blit(self.up[self.walkCount // 3], (self.x - self.CameraX, self.y - self.CameraY))
            self.facing = -2
        if self.isdown:
            self.walkCount += 1
            win.blit(self.down[self.walkCount // 3], (self.x - self.CameraX, self.y - self.CameraY))
            self.facing = 2
if self.ischain:
if self.isstanding:
win.blit(self.chain_helm_standing, (self.x - self.CameraX, self.y - self.CameraY))
if self.isup:
win.blit(self.chain_helm_up[self.walkCount//3], (self.x - self.CameraX, self.y - self.CameraY))
if self.isdown:
win.blit(self.chain_helm_down[self.walkCount//3], (self.x - self.CameraX, self.y - self.CameraY))
if self.isright:
win.blit(self.chain_helm_rt[self.walkCount//3], (self.x - self.CameraX, self.y - self.CameraY))
if self.isleft:
win.blit(self.chain_helm_lft[self.walkCount//3], (self.x - self.CameraX, self.y - self.CameraY))
if self.isrobe:
if self.isstanding:
win.blit(self.robestanding, (round(self.x - self.CameraX), round(self.y - self.CameraY)))
win.blit(self.robeskrt_standing,
(round(self.x - self.CameraX), round(self.y - self.CameraY)))
win.blit(self.robehood_standing,
(round(self.x - self.CameraX), round(self.y - self.CameraY)))
if self.isup:
win.blit(self.robeup[round(self.walkCount // 3)],
(round(self.x - self.CameraX), round(self.y - self.CameraY)))
win.blit(self.robeskrt_up[round(self.walkCount // 3)],
(round(self.x - self.CameraX), round(self.y - self.CameraY)))
win.blit(self.robehood_up[round(self.walkCount // 3)],
(round(self.x - self.CameraX), round(self.y - self.CameraY)))
if self.isdown:
win.blit(self.robedown[round(self.walkCount // 3)],
(round(self.x - self.CameraX), round(self.y - self.CameraY)))
win.blit(self.robeskrt_down[round(self.walkCount // 3)],
(round(self.x - self.CameraX), round(self.y - self.CameraY)))
win.blit(self.robehood_down[round(self.walkCount // 3)],
(round(self.x - self.CameraX), round(self.y - self.CameraY)))
if self.isright:
win.blit(self.robeskrt_rt[round(self.walkCount // 3)],
(round(self.x - self.CameraX), round(self.y - self.CameraY)))
win.blit(self.robert[round(self.walkCount // 3)],
(round(self.x - self.CameraX), round(self.y - self.CameraY)))
win.blit(self.robehood_rt[round(self.walkCount // 3)],
(round(self.x - self.CameraX), round(self.y - self.CameraY)))
if self.isleft:
win.blit(self.robeskrt_lft[round(self.walkCount // 3)],
(round(self.x - self.CameraX), round(self.y - self.CameraY)))
win.blit(self.robehood_lft[round(self.walkCount // 3)],
(round(self.x - self.CameraX), round(self.y - self.CameraY)))
win.blit(self.robelft[round(self.walkCount // 3)],
(round(self.x - self.CameraX), round(self.y - self.CameraY)))
        if self.action:  # thrust animation
            self.ani_count += 1
            if self.weapon_staff:
                # The inner `if self.weapon_staff:` checks were redundant; this
                # whole branch is already guarded by it.
                frame = self.ani_count // 3
                pos = (self.x - self.CameraX, self.y - self.CameraY)
                if self.facing == 1:
                    win.blit(self.thrust_rt[frame], pos)
                    win.blit(self.robeskrt_thrust_rt[frame], pos)
                    win.blit(self.robeshirt_thrust_rt[frame], pos)
                    win.blit(self.robe_hood_thrust_rt[frame], pos)
                    win.blit(self.staff_thrust_rt[frame], pos)
                if self.facing == -1:
                    win.blit(self.thrust_lft[frame], pos)
                    win.blit(self.robeskrt_thrust_lft[frame], pos)
                    win.blit(self.robeshirt_thrust_lft[frame], pos)
                    win.blit(self.robe_hood_thrust_lft[frame], pos)
                    win.blit(self.staff_thrust_lft[frame], pos)
class Map:
    def __init__(self, x, y, CameraX, CameraY, varwidth, varheight):
        self.x = x
        self.y = y
        self.cam = self.Camera(x, y)
        self.win = win  # module-level display surface
        self.vel = 5
        self.pic = pygame.image.load('bigmap2.png')
        self.CameraX = CameraX
        self.CameraY = CameraY
        self.varwidth = varwidth
        self.varheight = varheight
        self.camvel = 10

    class Camera:
        def __init__(self, x, y):
            self.x = x
            self.y = y
            self.Camvel = 10

    def draw(self, win):
        win.blit(self.pic, (self.x - self.CameraX, self.y - self.CameraY))
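The per-frame `pygame.image.load(...)` sprite lists above could be generated by a small helper. A sketch; `frame_paths` and `load_frames` are hypothetical names, not defined in the original code:

```python
def frame_paths(folder, start, stop):
    """Build zero-padded frame paths such as 'robe/robe_hood_thrust/016.png'."""
    return ['{}/{:03d}.png'.format(folder, i) for i in range(start, stop)]


def load_frames(folder, start, stop):
    """Load each numbered frame as a pygame Surface (the files must exist)."""
    import pygame  # deferred so frame_paths stays usable without pygame installed
    return [pygame.image.load(p) for p in frame_paths(folder, start, stop)]
```

Usage would look like `self.staff_thrust_rt = load_frames('weapon/staff_thrust', 24, 32)`.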

# NekoGram/storages/pg/__init__.py (lyteloli/NekoGram, MIT)
from .pg import PGStorage

# sylver/backend/__init__.py (jdclarke5/sylver, MIT)
from .backend import MemoryBackend

# conftest.py (glebite/Bragi, CC0-1.0)
import pytest
import src.simple_word_cloud

# ros_start/scritps/bumper_client_use_lib.py (OTL/ros_book_programs, BSD-2-Clause)
#!/usr/bin/env python
import rospy
from ros_start.bumper_client import go_until_bumper

rospy.init_node('bumper_client_use_lib')
go_until_bumper()

# service/files/__init__.py (vishu221b/bookme-flask-REST-API-Collection, MIT)
from .fileServiceBaseModel import FileServiceBaseModel

# core/utils/db_routers.py (Fowerus/drf-crm, Apache-2.0)
class DataBaseRouter:
"""
A router to control if database should use
primary database or mongo one.
"""
nonrel_models = {'log'}
marketplace_model = {'Marketplace'}
    def db_for_read(self, model, **hints):
        if model._meta.app_label == 'Marketplace' or model._meta.model_name in self.nonrel_models:
            return 'mongo'
        return 'default'

    def db_for_write(self, model, **hints):
        if model._meta.app_label == 'Marketplace' or model._meta.model_name in self.nonrel_models:
            return 'mongo'
        return 'default'
def allow_relation(self, obj1, obj2, **hints):
return True
def allow_migrate(self, db, app_label, model_name=None, **hints):
if app_label == 'Marketplace' or model_name == 'log':
return db == 'mongo'
else:
return db == 'default'
# def db_for_read(self, model, **_hints):
# if model._meta.model_name in self.nonrel_models or model._meta.model_name in self.marketplace_model :
# return 'mongo'
# return 'default'
# def db_for_write(self, model, **_hints):
# if model._meta.model_name in self.nonrel_models or model._meta.model_name in self.marketplace_model:
# return 'mongo'
# return 'default'
# def allow_migrate(self, _db, _app_label, model_name=None, **_hints):
# if _db == 'mongo' or model_name in self.nonrel_models or model_name in self.marketplace_model:
# return False
# return True
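A router like `DataBaseRouter` takes effect only when registered in Django settings. A sketch; the dotted path is inferred from this file's location (core/utils/db_routers.py) and should be verified against the actual project layout:

```python
# settings.py fragment (assumed module path)
DATABASE_ROUTERS = ['core.utils.db_routers.DataBaseRouter']
```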

# trial_types.py (cravatsc/info_sample_task, MIT)
pic_cat = 'category_of_pic'
distribution_type = 'probability_dist'
reward = 'reward_type'
majority_cat = 'majority_cat'
indoor_outdoor_cat = 'ioc'
living_nonliving_cat = 'lnc'
easy_dist = 0.65
hard_dist = 0.60
high_reward = 5
low_reward = 1
indoor = 'indoor'
outdoor = 'outdoor'
living = 'living'
nonliving = 'nonliving'
trial_types = {}
trial_types[1] = {pic_cat: indoor_outdoor_cat, distribution_type: easy_dist, reward: high_reward, majority_cat: indoor}
trial_types[2] = {pic_cat: indoor_outdoor_cat, distribution_type: hard_dist, reward: high_reward, majority_cat: indoor}
trial_types[3] = {pic_cat: indoor_outdoor_cat, distribution_type: easy_dist, reward: low_reward, majority_cat: indoor}
trial_types[4] = {pic_cat: indoor_outdoor_cat, distribution_type: hard_dist, reward: low_reward, majority_cat: indoor}
trial_types[5] = {pic_cat: indoor_outdoor_cat, distribution_type: easy_dist, reward: high_reward, majority_cat: outdoor}
trial_types[6] = {pic_cat: indoor_outdoor_cat, distribution_type: hard_dist, reward: high_reward, majority_cat: outdoor}
trial_types[7] = {pic_cat: indoor_outdoor_cat, distribution_type: easy_dist, reward: low_reward, majority_cat: outdoor}
trial_types[8] = {pic_cat: indoor_outdoor_cat, distribution_type: hard_dist, reward: low_reward, majority_cat: outdoor}
trial_types[9] = {pic_cat: living_nonliving_cat, distribution_type: easy_dist, reward: high_reward, majority_cat: living}
trial_types[10] = {pic_cat: living_nonliving_cat, distribution_type: hard_dist, reward: high_reward, majority_cat: living}
trial_types[11] = {pic_cat: living_nonliving_cat, distribution_type: easy_dist, reward: low_reward, majority_cat: living}
trial_types[12] = {pic_cat: living_nonliving_cat, distribution_type: hard_dist, reward: low_reward, majority_cat: living}
trial_types[13] = {pic_cat: living_nonliving_cat, distribution_type: easy_dist, reward: high_reward, majority_cat: nonliving}
trial_types[14] = {pic_cat: living_nonliving_cat, distribution_type: hard_dist, reward: high_reward, majority_cat: nonliving}
trial_types[15] = {pic_cat: living_nonliving_cat, distribution_type: easy_dist, reward: low_reward, majority_cat: nonliving}
trial_types[16] = {pic_cat: living_nonliving_cat, distribution_type: hard_dist, reward: low_reward, majority_cat: nonliving}
minority_category = {indoor:outdoor, outdoor:indoor, living:nonliving, nonliving:living}
| 64.378378 | 125 | 0.808564 | 346 | 2,382 | 5.144509 | 0.121387 | 0.111236 | 0.170787 | 0.096067 | 0.807865 | 0.807865 | 0.802247 | 0.792135 | 0.792135 | 0.660674 | 0 | 0.014305 | 0.09026 | 2,382 | 36 | 126 | 66.166667 | 0.807107 | 0 | 0 | 0 | 0 | 0 | 0.036944 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |

# delta_node/crypto/aes/__init__.py (delta-mpc/delta-node, Apache-2.0)
from .aes import encrypt, decrypt

# public/Python27/Tools/Scripts/2to3.py (NingrumFadillah/cekmutasi, MIT)
#!/usr/bin/env python
from lib2to3.main import main
import sys
import os

sys.exit(main("lib2to3.fixes"))

# device/imu/__init__.py (ZJU-Robotics-Lab/CICT, MIT)
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from .mtdevice import *
from .mtdef import *
from .mtnode import *

# tests/test_github_api.py (sgibson91/bump-helm-deps-action, MIT)
import base64
import unittest
from unittest.mock import call, patch
from helm_bot.github_api import GitHubAPI
from helm_bot.main import UpdateHelmDeps
from helm_bot.yaml_parser import YamlParser
yaml = YamlParser()
class TestGitHubAPI(unittest.TestCase):
def test_assign_labels(self):
helm_deps = UpdateHelmDeps(
"octocat/octocat",
"ThIs_Is_A_t0k3n",
"chart-name/Chart.yaml",
{"some_chart": "https://some-chart.com"},
labels=["label1", "label2"],
)
github = GitHubAPI(helm_deps)
pr_url = "/".join([github.api_url, "issues", "1"])
with patch("helm_bot.github_api.post_request") as mock:
github._assign_labels(pr_url)
self.assertEqual(mock.call_count, 1)
mock.assert_called_with(
"/".join([pr_url, "labels"]),
headers=helm_deps.headers,
json={"labels": helm_deps.labels},
)
def test_assign_reviewers(self):
helm_deps = UpdateHelmDeps(
"octocat/octocat",
"ThIs_Is_A_t0k3n",
"chart_name/Chart.yaml",
{"some_chart": "https://some-chart.com"},
reviewers=["reviewer1", "reviewer2"],
)
github = GitHubAPI(helm_deps)
pr_url = "/".join([github.api_url, "pull", "1"])
with patch("helm_bot.github_api.post_request") as mock:
github._assign_reviewers(pr_url)
self.assertEqual(mock.call_count, 1)
mock.assert_called_with(
"/".join([pr_url, "requested_reviewers"]),
headers=helm_deps.headers,
json={"reviewers": helm_deps.reviewers},
)
def test_assign_team_reviewers(self):
helm_deps = UpdateHelmDeps(
"octocat/octocat",
"ThIs_Is_A_t0k3n",
"chart_name/Chart.yaml",
{"some_chart": "https://some-chart.com"},
team_reviewers=["team1", "team2"],
)
github = GitHubAPI(helm_deps)
pr_url = "/".join([github.api_url, "pull", "1"])
with patch("helm_bot.github_api.post_request") as mock:
github._assign_reviewers(pr_url)
self.assertEqual(mock.call_count, 1)
mock.assert_called_with(
"/".join([pr_url, "requested_reviewers"]),
headers=helm_deps.headers,
json={"team_reviewers": helm_deps.team_reviewers},
)
def test_create_commit(self):
helm_deps = UpdateHelmDeps(
"octocat/octocat",
"ThIs_Is_A_t0k3n",
"chart-name/Chart.yaml",
{"some_chart": "https://some-chart.com"},
)
github = GitHubAPI(helm_deps)
helm_deps.sha = "test_sha"
commit_msg = "This is a commit message"
contents = {"key1": "This is a test"}
contents = yaml.object_to_yaml_str(contents).encode("utf-8")
contents = base64.b64encode(contents)
contents = contents.decode("utf-8")
body = {
"message": commit_msg,
"content": contents,
"sha": helm_deps.sha,
"branch": helm_deps.head_branch,
}
with patch("helm_bot.github_api.put") as mock:
github.create_commit(
commit_msg,
contents,
)
self.assertEqual(mock.call_count, 1)
mock.assert_called_with(
"/".join([github.api_url, "contents", helm_deps.chart_path]),
json=body,
headers=helm_deps.headers,
)
def test_create_update_pull_request_no_labels_no_reviewers(self):
helm_deps = UpdateHelmDeps(
"octocat/octocat",
"ThIs_Is_A_t0k3n",
"chart-name/Chart.yaml",
{"some_chart": "https://some-chart.com"},
)
github = GitHubAPI(helm_deps)
helm_deps.chart_name = "chart-name"
github.pr_exists = False
helm_deps.chart_versions = {
"chart1": {"current": "1.2.3", "latest": "7.8.9"},
"chart2": {"current": "4.5.6", "latest": "10.11.12"},
}
helm_deps.charts_to_update = ["chart1", "chart2"]
expected_pr = {
"title": f"Bumping helm chart dependency versions: {helm_deps.chart_name}",
"body": (
f"This Pull Request is bumping the dependencies of the `{helm_deps.chart_name}` chart to the following versions.\n\n"
+ "\n".join(
[
f"- {chart}: `{helm_deps.chart_versions[chart]['current']}` -> `{helm_deps.chart_versions[chart]['latest']}`"
for chart in helm_deps.charts_to_update
]
)
),
"base": helm_deps.base_branch,
"head": helm_deps.head_branch,
}
with patch("helm_bot.github_api.post_request") as mock:
github.create_update_pull_request()
self.assertEqual(mock.call_count, 1)
mock.assert_called_with(
"/".join([github.api_url, "pulls"]),
headers=helm_deps.headers,
json=expected_pr,
return_json=True,
)
def test_create_update_pull_request_with_labels_no_reviewers(self):
helm_deps = UpdateHelmDeps(
"octocat/octocat",
"ThIs_Is_A_t0k3n",
"chart-name/Chart.yaml",
{"some_chart": "https://some-chart.com"},
labels=["label1", "label2"],
)
github = GitHubAPI(helm_deps)
helm_deps.chart_name = "chart-name"
github.pr_exists = False
helm_deps.chart_versions = {
"chart1": {"current": "1.2.3", "latest": "7.8.9"},
"chart2": {"current": "4.5.6", "latest": "10.11.12"},
}
helm_deps.charts_to_update = ["chart1", "chart2"]
expected_pr = {
"title": f"Bumping helm chart dependency versions: {helm_deps.chart_name}",
"body": (
f"This Pull Request is bumping the dependencies of the `{helm_deps.chart_name}` chart to the following versions.\n\n"
+ "\n".join(
[
f"- {chart}: `{helm_deps.chart_versions[chart]['current']}` -> `{helm_deps.chart_versions[chart]['latest']}`"
for chart in helm_deps.charts_to_update
]
)
),
"base": helm_deps.base_branch,
"head": helm_deps.head_branch,
}
mock_post = patch(
"helm_bot.github_api.post_request",
return_value={
"issue_url": "/".join([github.api_url, "issues", "1"]),
"number": 1,
},
)
calls = [
call(
"/".join([github.api_url, "pulls"]),
headers=helm_deps.headers,
json=expected_pr,
return_json=True,
),
call(
"/".join([github.api_url, "issues", "1", "labels"]),
headers=helm_deps.headers,
json={"labels": helm_deps.labels},
),
]
with mock_post as mock:
github.create_update_pull_request()
self.assertEqual(mock.call_count, 2)
self.assertDictEqual(
mock.return_value,
{
"issue_url": "/".join([github.api_url, "issues", "1"]),
"number": 1,
},
)
mock.assert_has_calls(calls)
def test_create_update_pull_request_no_labels_with_reviewers(self):
helm_deps = UpdateHelmDeps(
"octocat/octocat",
"ThIs_Is_A_t0k3n",
"chart-name/Chart.yaml",
{"some_chart": "https://some-chart.com"},
reviewers=["reviewer1", "reviewer2"],
)
github = GitHubAPI(helm_deps)
helm_deps.chart_name = "chart-name"
github.pr_exists = False
helm_deps.chart_versions = {
"chart1": {"current": "1.2.3", "latest": "7.8.9"},
"chart2": {"current": "4.5.6", "latest": "10.11.12"},
}
helm_deps.charts_to_update = ["chart1", "chart2"]
expected_pr = {
"title": f"Bumping helm chart dependency versions: {helm_deps.chart_name}",
"body": (
f"This Pull Request is bumping the dependencies of the `{helm_deps.chart_name}` chart to the following versions.\n\n"
+ "\n".join(
[
f"- {chart}: `{helm_deps.chart_versions[chart]['current']}` -> `{helm_deps.chart_versions[chart]['latest']}`"
for chart in helm_deps.charts_to_update
]
)
),
"base": helm_deps.base_branch,
"head": helm_deps.head_branch,
}
mock_post = patch(
"helm_bot.github_api.post_request",
return_value={
"url": "/".join([github.api_url, "pulls", "1"]),
"number": 1,
},
)
calls = [
call(
"/".join([github.api_url, "pulls"]),
headers=helm_deps.headers,
json=expected_pr,
return_json=True,
),
call(
"/".join([github.api_url, "pulls", "1", "requested_reviewers"]),
headers=helm_deps.headers,
json={"reviewers": helm_deps.reviewers},
),
]
with mock_post as mock:
github.create_update_pull_request()
self.assertEqual(mock.call_count, 2)
self.assertDictEqual(
mock.return_value,
{
"url": "/".join([github.api_url, "pulls", "1"]),
"number": 1,
},
)
mock.assert_has_calls(calls)
def test_create_update_pull_request_with_labels_and_reviewers(self):
helm_deps = UpdateHelmDeps(
"octocat/octocat",
"ThIs_Is_A_t0k3n",
"chart-name/Chart.yaml",
{"some_chart": "https://some-chart.com"},
labels=["label1", "label2"],
reviewers=["reviewer1", "reviewer2"],
)
github = GitHubAPI(helm_deps)
helm_deps.chart_name = "chart-name"
github.pr_exists = False
helm_deps.chart_versions = {
"chart1": {"current": "1.2.3", "latest": "7.8.9"},
"chart2": {"current": "4.5.6", "latest": "10.11.12"},
}
helm_deps.charts_to_update = ["chart1", "chart2"]
expected_pr = {
"title": f"Bumping helm chart dependency versions: {helm_deps.chart_name}",
"body": (
f"This Pull Request is bumping the dependencies of the `{helm_deps.chart_name}` chart to the following versions.\n\n"
+ "\n".join(
[
f"- {chart}: `{helm_deps.chart_versions[chart]['current']}` -> `{helm_deps.chart_versions[chart]['latest']}`"
for chart in helm_deps.charts_to_update
]
)
),
"base": helm_deps.base_branch,
"head": helm_deps.head_branch,
}
mock_post = patch(
"helm_bot.github_api.post_request",
return_value={
"issue_url": "/".join([github.api_url, "issues", "1"]),
"url": "/".join([github.api_url, "pulls", "1"]),
"number": 1,
},
)
calls = [
call(
"/".join([github.api_url, "pulls"]),
headers=helm_deps.headers,
json=expected_pr,
return_json=True,
),
call(
"/".join([github.api_url, "issues", "1", "labels"]),
headers=helm_deps.headers,
json={"labels": helm_deps.labels},
),
call(
"/".join([github.api_url, "pulls", "1", "requested_reviewers"]),
headers=helm_deps.headers,
json={"reviewers": helm_deps.reviewers},
),
]
with mock_post as mock:
github.create_update_pull_request()
self.assertEqual(mock.call_count, 3)
self.assertDictEqual(
mock.return_value,
{
"issue_url": "/".join([github.api_url, "issues", "1"]),
"url": "/".join([github.api_url, "pulls", "1"]),
"number": 1,
},
)
mock.assert_has_calls(calls)
def test_create_ref(self):
helm_deps = UpdateHelmDeps(
"octocat/octocat",
"ThIs_Is_A_t0k3n",
"chart-name/Chart.yaml",
{"some_chart": "https://some-chart.com"},
)
github = GitHubAPI(helm_deps)
test_ref = "test_ref"
test_sha = "test_sha"
test_body = {"ref": f"refs/heads/{test_ref}", "sha": test_sha}
with patch("helm_bot.github_api.post_request") as mock:
github.create_ref(test_ref, test_sha)
self.assertEqual(mock.call_count, 1)
mock.assert_called_with(
"/".join([github.api_url, "git", "refs"]),
headers=helm_deps.headers,
json=test_body,
)
def test_find_existing_pull_request_no_matches(self):
helm_deps = UpdateHelmDeps(
"octocat/octocat",
"ThIs_Is_A_t0k3n",
"chart-name/Chart.yaml",
{"some_chart": "https://some-chart.com"},
)
github = GitHubAPI(helm_deps)
mock_get = patch(
"helm_bot.github_api.get_request",
return_value=[
{
"head": {
"label": "some_branch",
}
}
],
)
with mock_get as mock:
github.find_existing_pull_request()
self.assertEqual(mock.call_count, 1)
mock.assert_called_with(
"/".join([github.api_url, "pulls"]),
headers=helm_deps.headers,
params={"state": "open", "sort": "created", "direction": "desc"},
output="json",
)
self.assertFalse(github.pr_exists)
self.assertTrue(
helm_deps.head_branch.startswith(
"/".join(["bump-helm-deps", "chart-name"])
)
)
def test_find_existing_pull_request_match(self):
helm_deps = UpdateHelmDeps(
"octocat/octocat",
"ThIs_Is_A_t0k3n",
"chart-name/Chart.yaml",
{"some_chart": "https://some-chart.com"},
)
github = GitHubAPI(helm_deps)
mock_get = patch(
"helm_bot.github_api.get_request",
return_value=[
{
"head": {
"label": "bump-helm-deps/chart-name",
},
"number": 1,
}
],
)
with mock_get as mock:
github.find_existing_pull_request()
self.assertEqual(mock.call_count, 1)
mock.assert_called_with(
"/".join([github.api_url, "pulls"]),
headers=helm_deps.headers,
params={"state": "open", "sort": "created", "direction": "desc"},
output="json",
)
self.assertTrue(github.pr_exists)
self.assertEqual(
helm_deps.head_branch, "/".join(["bump-helm-deps", "chart-name"])
)
self.assertEqual(github.pr_number, 1)
def test_get_ref(self):
helm_deps = UpdateHelmDeps(
"octocat/octocat",
"ThIs_Is_A_t0k3n",
"chart-name/Chart.yaml",
{"some_chart": "https://some-chart.com"},
)
github = GitHubAPI(helm_deps)
test_ref = "test_ref"
mock_get = patch(
"helm_bot.github_api.get_request", return_value={"object": {"sha": "sha"}}
)
with mock_get as mock:
resp = github.get_ref(test_ref)
self.assertEqual(mock.call_count, 1)
mock.assert_called_with(
"/".join([github.api_url, "git", "ref", "heads", test_ref]),
headers=helm_deps.headers,
output="json",
)
self.assertDictEqual(resp, {"object": {"sha": "sha"}})
def test_update_existing_pr(self):
helm_deps = UpdateHelmDeps(
"octocat/octocat",
"ThIs_Is_A_t0k3n",
"chart-name/Chart.yaml",
{"some_chart": "https://some-chart.com"},
)
github = GitHubAPI(helm_deps)
github.pr_exists = True
github.pr_number = 1
helm_deps.chart_versions = {
"chart": {"current": "old_version", "latest": "new_version"},
}
helm_deps.charts_to_update = ["chart"]
helm_deps.chart_name = "chart-name"
expected_pr = {
"title": f"Bumping helm chart dependency versions: {helm_deps.chart_name}",
"body": (
f"This Pull Request is bumping the dependencies of the `{helm_deps.chart_name}` chart to the following versions.\n\n"
+ "\n".join(
[
f"- {chart}: `{helm_deps.chart_versions[chart]['current']}` -> `{helm_deps.chart_versions[chart]['latest']}`"
for chart in helm_deps.charts_to_update
]
)
),
"base": helm_deps.base_branch,
"state": "open",
}
mock_patch = patch(
"helm_bot.github_api.patch_request", return_value={"number": 1}
)
with mock_patch as mock:
github.create_update_pull_request()
mock.assert_called_with(
"/".join([github.api_url, "pulls", str(github.pr_number)]),
headers=helm_deps.headers,
json=expected_pr,
return_json=True,
)
self.assertDictEqual(mock.return_value, {"number": 1})
if __name__ == "__main__":
unittest.main()
| 34.460111 | 133 | 0.505007 | 1,891 | 18,574 | 4.68588 | 0.080381 | 0.097506 | 0.049882 | 0.045142 | 0.860964 | 0.842343 | 0.823271 | 0.818643 | 0.809164 | 0.804537 | 0 | 0.01381 | 0.364542 | 18,574 | 538 | 134 | 34.524164 | 0.736931 | 0 | 0 | 0.66879 | 0 | 0.021231 | 0.229784 | 0.075859 | 0 | 0 | 0 | 0 | 0.07431 | 1 | 0.027601 | false | 0 | 0.012739 | 0 | 0.042463 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
aab6d3f34daac872e37d769f68b2034f8b4a6e6b | 7,985 | py | Python | elegantrl/agents/AgentStep1AC.py | tnerush71/ElegantRL | 051a67f2f7dd03c0fa497f85e4e1bb57f76c7dcf | [
"Apache-2.0"
] | 759 | 2021-09-03T01:25:57.000Z | 2022-03-31T14:55:40.000Z | elegantrl/agents/AgentStep1AC.py | Z-2ez4U/ElegantRL | 177c6e12dfa29d9d5a2d78f3eeebddf5c91404d4 | [
"Apache-2.0"
] | 89 | 2021-09-05T01:19:21.000Z | 2022-03-31T10:46:12.000Z | elegantrl/agents/AgentStep1AC.py | Z-2ez4U/ElegantRL | 177c6e12dfa29d9d5a2d78f3eeebddf5c91404d4 | [
"Apache-2.0"
] | 187 | 2021-09-03T03:41:29.000Z | 2022-03-31T14:00:20.000Z | import torch
import numpy as np
from copy import deepcopy
from elegantrl.agents.AgentBase import AgentBase
from elegantrl.agents.net import ActorBiConv, CriticBiConv, ShareBiConv
class AgentStep1AC(AgentBase):
def __init__(self):
AgentBase.__init__(self)
self.ClassAct = ActorBiConv
self.ClassCri = CriticBiConv
self.if_use_cri_target = False
self.if_use_act_target = False
self.explore_noise = 2 ** -8
self.obj_critic = (-np.log(0.5)) ** 0.5 # for reliable_lambda
def init(self, net_dim=256, state_dim=8, action_dim=2, reward_scale=1.0, gamma=0.99,
learning_rate=1e-4, if_per_or_gae=False, env_num=1, gpu_id=0):
AgentBase.init(self, net_dim=net_dim, state_dim=state_dim, action_dim=action_dim,
reward_scale=reward_scale, gamma=gamma,
learning_rate=learning_rate, if_per_or_gae=if_per_or_gae,
env_num=env_num, gpu_id=gpu_id, )
if if_per_or_gae: # if_use_per
self.criterion = torch.nn.MSELoss(reduction='none')
self.get_obj_critic = self.get_obj_critic_per
else:
self.criterion = torch.nn.MSELoss(reduction='mean')
self.get_obj_critic = self.get_obj_critic_raw
def select_actions(self, state: torch.Tensor) -> torch.Tensor:
action = self.act.get_action(state.to(self.device), self.explore_noise)
return action.detach().cpu()
def update_net(self, buffer, batch_size, repeat_times, soft_update_tau) -> (float, float):
buffer.update_now_len()
obj_actor = None
update_a = 0
for update_c in range(1, int(buffer.now_len / batch_size * repeat_times)):
'''objective of critic (loss function of critic)'''
obj_critic, state = self.get_obj_critic(buffer, batch_size)
self.obj_critic = 0.99 * self.obj_critic + 0.01 * obj_critic.item() # for reliable_lambda
self.optim_update(self.cri_optim, obj_critic)
if self.if_use_cri_target:
self.soft_update(self.cri_target, self.cri, soft_update_tau)
'''objective of actor using reliable_lambda and TTUR (Two Time-scales Update Rule)'''
reliable_lambda = np.exp(-self.obj_critic ** 2) # for reliable_lambda
if_update_a = update_a / update_c < 1 / (2 - reliable_lambda)
if if_update_a: # auto TTUR
update_a += 1
obj_actor = -self.cri(state, self.act(state)).mean() # policy gradient
self.optim_update(self.act_optim, obj_actor)
if self.if_use_act_target:
self.soft_update(self.act_target, self.act, soft_update_tau)
return self.obj_critic, obj_actor.item()
def get_obj_critic_raw(self, buffer, batch_size):
with torch.no_grad():
# reward, mask, action, state, next_s = buffer.sample_batch(batch_size)
q_label, action, state = buffer.sample_batch_one_step(batch_size)
q_value = self.cri(state, action)
obj_critic = self.criterion(q_value, q_label)
return obj_critic, state
def get_obj_critic_per(self, buffer, batch_size):
with torch.no_grad():
# reward, mask, action, state, next_s, is_weights = buffer.sample_batch(batch_size)
q_label, action, state, is_weights = buffer.sample_batch_one_step(batch_size)
q_value = self.cri(state, action)
td_error = self.criterion(q_value, q_label) # or td_error = (q_value - q_label).abs()
obj_critic = (td_error * is_weights).mean()
buffer.td_error_update(td_error.detach())
        return obj_critic, state
class AgentShareStep1AC(AgentBase):
def __init__(self):
AgentBase.__init__(self)
self.ClassAct = ShareBiConv
self.ClassCri = self.ClassAct
self.if_use_cri_target = True
        self.if_use_act_target = True
        self.explore_noise = 2 ** -8  # used by select_actions(); matches AgentStep1AC
self.obj_critic = (-np.log(0.5)) ** 0.5 # for reliable_lambda
def init(self, net_dim=256, state_dim=8, action_dim=2, reward_scale=1.0, gamma=0.99,
learning_rate=1e-4, if_per_or_gae=False, env_num=1, gpu_id=0):
AgentBase.init(self, net_dim=net_dim, state_dim=state_dim, action_dim=action_dim,
reward_scale=reward_scale, gamma=gamma,
learning_rate=learning_rate, if_per_or_gae=if_per_or_gae,
env_num=env_num, gpu_id=gpu_id, )
self.act = self.cri = self.ClassAct(net_dim, state_dim, action_dim).to(self.device)
if self.if_use_act_target:
self.act_target = self.cri_target = deepcopy(self.act)
else:
self.act_target = self.cri_target = self.act
self.cri_optim = torch.optim.Adam(
[{'params': self.act.enc_s.parameters(), 'lr': learning_rate * 1.25},
{'params': self.act.enc_a.parameters(), },
{'params': self.act.mid_n.parameters(), 'lr': learning_rate * 1.25},
{'params': self.act.dec_a.parameters(), },
{'params': self.act.dec_q.parameters(), },
], lr=learning_rate)
self.act_optim = self.cri_optim
if if_per_or_gae: # if_use_per
self.criterion = torch.nn.MSELoss(reduction='none')
self.get_obj_critic = self.get_obj_critic_per
else:
self.criterion = torch.nn.MSELoss(reduction='mean')
self.get_obj_critic = self.get_obj_critic_raw
def select_actions(self, state: torch.Tensor) -> torch.Tensor:
action = self.act.get_action(state.to(self.device), self.explore_noise)
return action.detach().cpu()
def update_net(self, buffer, batch_size, repeat_times, soft_update_tau) -> (float, float):
buffer.update_now_len()
obj_critic = None
obj_actor = None
update_a = 0
for update_c in range(1, int(buffer.now_len / batch_size * repeat_times)):
'''objective of critic'''
obj_critic, state = self.get_obj_critic(buffer, batch_size)
self.obj_critic = 0.995 * self.obj_critic + 0.005 * obj_critic.item() # for reliable_lambda
reliable_lambda = np.exp(-self.obj_critic ** 2) # for reliable_lambda
'''objective of actor using reliable_lambda and TTUR (Two Time-scales Update Rule)'''
if_update_a = update_a / update_c < 1 / (2 - reliable_lambda)
if if_update_a: # auto TTUR
update_a += 1
action_pg = self.act(state) # policy gradient
obj_actor = -self.act_target.critic(state, action_pg).mean()
obj_united = obj_critic + obj_actor * reliable_lambda
else:
obj_united = obj_critic
self.optim_update(self.cri_optim, obj_united)
if self.if_use_act_target:
self.soft_update(self.act_target, self.act, soft_update_tau)
return obj_critic.item(), obj_actor.item()
def get_obj_critic_raw(self, buffer, batch_size):
with torch.no_grad():
# reward, mask, action, state, next_s = buffer.sample_batch(batch_size)
q_label, action, state = buffer.sample_batch_one_step(batch_size)
q_value = self.act.critic(state, action)
obj_critic = self.criterion(q_value, q_label)
return obj_critic, state
def get_obj_critic_per(self, buffer, batch_size):
with torch.no_grad():
# reward, mask, action, state, next_s, is_weights = buffer.sample_batch(batch_size)
q_label, action, state, is_weights = buffer.sample_batch_one_step(batch_size)
q_value = self.act.critic(state, action)
td_error = self.criterion(q_value, q_label) # or td_error = (q_value - q_label).abs()
obj_critic = (td_error * is_weights).mean()
buffer.td_error_update(td_error.detach())
        return obj_critic, state
| 45.628571 | 104 | 0.641077 | 1,120 | 7,985 | 4.246429 | 0.128571 | 0.079479 | 0.04037 | 0.04037 | 0.826745 | 0.789739 | 0.767031 | 0.749369 | 0.749369 | 0.711943 | 0 | 0.012123 | 0.25623 | 7,985 | 174 | 105 | 45.890805 | 0.788685 | 0.07226 | 0 | 0.686567 | 0 | 0 | 0.007 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.089552 | false | 0 | 0.037313 | 0 | 0.201493 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2ac0bbad717fb61566e6d51bcd2020eb15a074b5 | 179 | py | Python | client.py | BontaVlad/jira-cli | 263473400e6b1842af531002d23af1ed4b56bfba | [
"MIT"
] | null | null | null | client.py | BontaVlad/jira-cli | 263473400e6b1842af531002d23af1ed4b56bfba | [
"MIT"
] | null | null | null | client.py | BontaVlad/jira-cli | 263473400e6b1842af531002d23af1ed4b56bfba | [
"MIT"
] | null | null | null | import requests
class Client(object):
def _get(self, url, *args, **kwargs):
return requests.get(url, *args, **kwargs)
def get_issue(self, issue):
pass
| 16.272727 | 49 | 0.614525 | 23 | 179 | 4.695652 | 0.608696 | 0.111111 | 0.240741 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.251397 | 179 | 10 | 50 | 17.9 | 0.80597 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0.166667 | 0.166667 | 0.166667 | 0.833333 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 6 |
2af29eb875c27c29e3fa69b04cf250db7680e382 | 96 | py | Python | test/test_coniql.py | callumforrester/coniql | 6b0b217d37a93acc680946b8eb2a40db0da551d2 | [
"Apache-2.0"
] | null | null | null | test/test_coniql.py | callumforrester/coniql | 6b0b217d37a93acc680946b8eb2a40db0da551d2 | [
"Apache-2.0"
] | null | null | null | test/test_coniql.py | callumforrester/coniql | 6b0b217d37a93acc680946b8eb2a40db0da551d2 | [
"Apache-2.0"
] | null | null | null | import pytest
def test_delete_this_directory_if_you_do_not_want_unit_tests():
assert False | 19.2 | 63 | 0.854167 | 16 | 96 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.114583 | 96 | 5 | 64 | 19.2 | 0.847059 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 1 | 0.333333 | true | 0 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
6303cecf5330316b97e77e046db0bf27ec6dd570 | 3,555 | py | Python | src/util/dendrogram.py | yizumi1012xxx/cookpad | 48bafd2e5dc7d99fc79df43d95ae46a7cb08bb14 | [
"MIT"
] | null | null | null | src/util/dendrogram.py | yizumi1012xxx/cookpad | 48bafd2e5dc7d99fc79df43d95ae46a7cb08bb14 | [
"MIT"
] | null | null | null | src/util/dendrogram.py | yizumi1012xxx/cookpad | 48bafd2e5dc7d99fc79df43d95ae46a7cb08bb14 | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
'''
dendrogram.py
'''
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, dendrogram
from matplotlib import pyplot as plt
CONFUSION_MATRIX = [
[32, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 2, 0, 0, 1, 0, 0],
[2, 33, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 2, 23, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 3, 0, 3, 0, 0],
[1, 1, 5, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 18, 3, 1, 0, 4, 2, 1, 1, 2, 0, 1, 3, 3, 1, 0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 1, 27, 3, 1, 3, 0, 0, 1, 1, 2, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 26, 0, 6, 6, 2, 0, 0, 0, 0, 2, 0, 0, 0, 0, 1, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 1, 0, 33, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 1, 2, 3, 0, 15, 10, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 1, 2, 0, 7, 0, 10, 24, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 1, 1, 0, 3, 0, 39, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 17, 2, 0, 8, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 3, 3, 27, 2, 4, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 1, 0, 35, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 2],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 2, 2, 34, 3, 1, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 4, 23, 0, 5, 1, 0, 0, 0, 1, 0, 1],
[2, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 3, 2, 1, 0, 0, 18, 0, 0, 1, 1, 1, 0, 1, 1],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 26, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 33, 0, 0, 0, 0, 0, 1],
[1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 29, 0, 0, 2, 0, 7],
[0, 3, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 0, 0, 0, 28, 3, 1, 0, 1],
[0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 1, 0, 0, 0, 0, 0, 23, 0, 1, 1],
[2, 0, 4, 0, 0, 0, 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 0, 0, 1, 3, 29, 2, 2],
[1, 2, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 2, 1, 2, 31, 1],
[0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 6, 2, 3, 0, 0, 0, 30],
]
LABELS = [
'bread_sandwich',
'bread_sliced',
'bread_sweets',
'bread_table',
'noodle_somen',
'noodle_udon',
'pasta_cream',
'pasta_gratin',
'pasta_japanese',
'pasta_oil',
'pasta_tomato',
'rice_boiled',
'rice_bowl',
'rice_curry',
'rice_fried',
'rice_risotto',
'rice_sushi',
'soup_miso',
'soup_potage',
'sweets_cheese',
'sweets_cookie',
'sweets_muffin',
'sweets_pie',
'sweets_pound',
'sweets_pudding'
]
def get_dendrogram(confusion_mat, labels=None, a=0.5):
# calculate distance matrix
    distance_mat = np.zeros_like(confusion_mat, dtype=np.float64)
row, col = distance_mat.shape
for r in range(row):
for c in range(col):
if r < c:
val = 1 / (confusion_mat[c][r] + confusion_mat[r][c] + a)
distance_mat[r][c] = val
distance_mat[c][r] = val
# change format
dist_vec = squareform(distance_mat)
result = linkage(dist_vec, method='average')
# draw dendrogram
dendrogram(result, labels=labels, orientation='right', color_threshold=0.4, leaf_font_size=6)
plt.savefig('result/dendrogram.png')
if __name__ == '__main__':
get_dendrogram(CONFUSION_MATRIX, labels=LABELS)
| 38.641304 | 97 | 0.453165 | 811 | 3,555 | 1.922318 | 0.144266 | 0.441309 | 0.542656 | 0.600385 | 0.345093 | 0.329057 | 0.311738 | 0.293137 | 0.286722 | 0.279666 | 0 | 0.265833 | 0.302672 | 3,555 | 91 | 98 | 39.065934 | 0.36305 | 0.025879 | 0 | 0 | 0 | 0 | 0.095017 | 0.006083 | 0 | 0 | 0 | 0 | 0 | 1 | 0.013699 | false | 0 | 0.054795 | 0 | 0.068493 | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
6332ead5a78cda36894f41f3133c002fa52e9f13 | 7,237 | py | Python | test/test_condition.py | featureflow/featureflow-python-sdk | a84cf54812fdc65d9aa52d10b17325504e67057f | [
"Apache-2.0"
] | null | null | null | test/test_condition.py | featureflow/featureflow-python-sdk | a84cf54812fdc65d9aa52d10b17325504e67057f | [
"Apache-2.0"
] | null | null | null | test/test_condition.py | featureflow/featureflow-python-sdk | a84cf54812fdc65d9aa52d10b17325504e67057f | [
"Apache-2.0"
] | 2 | 2020-06-01T05:37:16.000Z | 2020-07-15T08:17:18.000Z | from datetime import date
import unittest
from featureflow.condition import Condition
from .test_helpers import values, fake
class ConditionTest(unittest.TestCase):
"""Tests for Featureflow.Condition"""
def test_equals(self):
"""Test 'equals' operator for all supported types"""
operator = 'equals'
vals = [fake.word(), fake.random_int(0, 100), fake.date()]
# Equals case
for val in vals:
condition = Condition(operator=operator, values=values(value=val))
self.assertTrue(condition.evaluate(val))
# Not equals case
for val in vals:
condition = Condition(operator=operator, values=values())
self.assertFalse(condition.evaluate(val))
def test_contains(self):
"""Test 'contains' operator for strings"""
operator = 'contains'
val = fake.word()
length = len(val) // 2
substr = val[fake.random_int(0, length):fake.random_int(1, length)]
condition = Condition(operator=operator, values=values(value=substr))
self.assertTrue(condition.evaluate(val))
# Not
condition = Condition(operator=operator, values=values(value=fake.word()))
self.assertFalse(condition.evaluate(val))
def test_starts_with(self):
"""Test 'startsWith' operator for strings"""
operator = 'startsWith'
val = fake.word()
length = len(val) // 2
substr = val[0:fake.random_int(1, length)]
condition = Condition(operator=operator, values=values(value=substr))
self.assertTrue(condition.evaluate(val))
# Not
condition = Condition(operator=operator, values=values(value=fake.word()))
self.assertFalse(condition.evaluate(val))
def test_ends_with(self):
"""Test 'endsWith' operator for strings"""
operator = 'endsWith'
val = fake.word()
length = len(val) // 2
substr = val[fake.random_int(0, length):]
condition = Condition(operator=operator, values=values(value=substr))
self.assertTrue(condition.evaluate(val))
# Not
condition = Condition(operator=operator, values=values(value=fake.word()))
self.assertFalse(condition.evaluate(val))
def test_matches(self):
"""Test 'matches' operator for strings"""
operator = 'matches'
val = fake.word()
length = len(val) // 2
substr = val[fake.random_int(0, length):fake.random_int(1, length)]
regex = ".*{}.*".format(substr)
condition = Condition(operator=operator, values=values(value=regex))
self.assertTrue(condition.evaluate(val))
# Not
condition = Condition(operator=operator, values=values(value=fake.word()))
self.assertFalse(condition.evaluate(val))
def test_in(self):
"""Test 'in' operator for strings"""
operator = 'in'
val = fake.word()
condition = Condition(operator=operator, values=values(value=val))
self.assertTrue(condition.evaluate(val))
# Not
condition = Condition(operator=operator, values=values(value=fake.word()))
self.assertFalse(condition.evaluate(val))
def test_not_in(self):
"""Test 'notIn' operator for strings"""
operator = 'notIn'
val = fake.word()
condition = Condition(operator=operator, values=values(value=val))
self.assertFalse(condition.evaluate(val))
# Not
condition = Condition(operator=operator, values=values(value=fake.word()))
self.assertTrue(condition.evaluate(val))
def test_before(self):
        """Test 'before' operator for dates"""
operator = 'before'
val = fake.date()
val_true = fake.date(end_datetime=date.fromisoformat(val))
val_false = fake.date_between(start_date=date.fromisoformat(val)).isoformat()
condition = Condition(operator=operator, values=values(value=val_true))
self.assertTrue(condition.evaluate(val))
# Not
condition = Condition(operator=operator, values=values(value=val_false))
self.assertFalse(condition.evaluate(val))
def test_after(self):
        """Test 'after' operator for dates"""
operator = 'after'
val = fake.date()
val_true = fake.date_between(start_date=date.fromisoformat(val)).isoformat()
val_false = fake.date(end_datetime=date.fromisoformat(val))
condition = Condition(operator=operator, values=values(value=val_true))
self.assertTrue(condition.evaluate(val))
# Not
condition = Condition(operator=operator, values=values(value=val_false))
self.assertFalse(condition.evaluate(val))
def test_greater_than(self):
        """Test 'greaterThan' operator for numbers"""
operator = 'greaterThan'
val = fake.random_int(0, 100)
val_true = val - fake.random_int(1, val)
val_false = val + fake.random_int(0, 100)
condition = Condition(operator=operator, values=values(value=val_true))
self.assertTrue(condition.evaluate(val))
# Not
condition = Condition(operator=operator, values=values(value=val_false))
self.assertFalse(condition.evaluate(val))
def test_less_than(self):
        """Test 'lessThan' operator for numbers"""
operator = 'lessThan'
val = fake.random_int(0, 100)
val_true = val + fake.random_int(1, val)
val_false = val - fake.random_int(0, 100)
condition = Condition(operator=operator, values=values(value=val_true))
self.assertTrue(condition.evaluate(val))
# Not
condition = Condition(operator=operator, values=values(value=val_false))
self.assertFalse(condition.evaluate(val))
def test_greater_than_or_equal(self):
        """Test 'greaterThanOrEqual' operator for numbers"""
operator = 'greaterThanOrEqual'
val = fake.random_int(0, 100)
val_true = val - fake.random_int(1, val)
val_false = val + fake.random_int(1, 100)
# Greater
condition = Condition(operator=operator, values=values(value=val_true))
self.assertTrue(condition.evaluate(val))
# Equal
condition = Condition(operator=operator, values=values(value=val))
self.assertTrue(condition.evaluate(val))
# Not
condition = Condition(operator=operator, values=values(value=val_false))
self.assertFalse(condition.evaluate(val))
def test_less_than_or_equal(self):
        """Test 'lessThanOrEqual' operator for numbers"""
operator = 'lessThanOrEqual'
val = fake.random_int(0, 100)
val_true = val + fake.random_int(1, val)
val_false = val - fake.random_int(1, 100)
# Less
condition = Condition(operator=operator, values=values(value=val_true))
self.assertTrue(condition.evaluate(val))
# Equal
condition = Condition(operator=operator, values=values(value=val))
self.assertTrue(condition.evaluate(val))
# Not
condition = Condition(operator=operator, values=values(value=val_false))
self.assertFalse(condition.evaluate(val))
| 32.308036 | 85 | 0.641841 | 810 | 7,237 | 5.64321 | 0.087654 | 0.11026 | 0.159265 | 0.20827 | 0.790855 | 0.770072 | 0.770072 | 0.724787 | 0.724787 | 0.694378 | 0 | 0.009234 | 0.236838 | 7,237 | 223 | 86 | 32.452915 | 0.818396 | 0.087191 | 0 | 0.634146 | 0 | 0 | 0.017643 | 0 | 0 | 0 | 0 | 0 | 0.227642 | 1 | 0.105691 | false | 0 | 0.03252 | 0 | 0.146341 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
2d4e6d7aefc0e938d7ce26d0ff64ad4962c220fc | 43 | py | Python | dev/Tools/Python/2.7.13/mac/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyxb/bundles/wssplat/wsu.py | jeikabu/lumberyard | 07228c605ce16cbf5aaa209a94a3cb9d6c1a4115 | [
"AML"
] | 123 | 2015-01-12T06:43:22.000Z | 2022-03-20T18:06:46.000Z | dev/Tools/Python/2.7.13/mac/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyxb/bundles/wssplat/wsu.py | jeikabu/lumberyard | 07228c605ce16cbf5aaa209a94a3cb9d6c1a4115 | [
"AML"
] | 103 | 2015-01-08T18:35:57.000Z | 2022-01-18T01:44:14.000Z | dev/Tools/Python/2.7.13/mac/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyxb/bundles/wssplat/wsu.py | jeikabu/lumberyard | 07228c605ce16cbf5aaa209a94a3cb9d6c1a4115 | [
"AML"
] | 54 | 2015-02-15T17:12:00.000Z | 2022-03-07T23:02:32.000Z | from pyxb.bundles.wssplat.raw.wsu import *
| 21.5 | 42 | 0.790698 | 7 | 43 | 4.857143 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.093023 | 43 | 1 | 43 | 43 | 0.871795 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
2da7f848ad64044364bdd8149d34c121e9ca5027 | 88 | py | Python | config.py | RynX0z/ASCJH867ATYOIC8GIGU6F | fefb11a760f7023dc5b747644417de1a414cf97c | [
"MIT"
] | null | null | null | config.py | RynX0z/ASCJH867ATYOIC8GIGU6F | fefb11a760f7023dc5b747644417de1a414cf97c | [
"MIT"
] | null | null | null | config.py | RynX0z/ASCJH867ATYOIC8GIGU6F | fefb11a760f7023dc5b747644417de1a414cf97c | [
"MIT"
] | null | null | null |
# Bot token issued by BotFather
token_bot = "5011944187:AAFk0sJb9vXHorfuwy6eHw9yHKw6efO0JJ0"
| 22 | 60 | 0.852273 | 8 | 88 | 9.25 | 0.75 | 0.216216 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2125 | 0.090909 | 88 | 3 | 61 | 29.333333 | 0.7125 | 0.272727 | 0 | 0 | 0 | 0 | 0.741935 | 0.741935 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
93062e380863b8bd2e3de0d8577091d5de9af98d | 1,197 | py | Python | tests/python_parser/conftest.py | abersheeran/mingshe | a68901a41f152764d2e81b61770c30d5be2aadc2 | [
"Apache-2.0"
] | 45 | 2021-05-17T06:16:00.000Z | 2022-03-22T08:10:03.000Z | tests/python_parser/conftest.py | abersheeran/mingshe | a68901a41f152764d2e81b61770c30d5be2aadc2 | [
"Apache-2.0"
] | 16 | 2021-05-17T01:33:27.000Z | 2021-12-31T15:04:30.000Z | tests/python_parser/conftest.py | abersheeran/mingshe | a68901a41f152764d2e81b61770c30d5be2aadc2 | [
"Apache-2.0"
] | 2 | 2021-09-02T04:54:44.000Z | 2021-09-22T09:21:53.000Z | """"Conftest for pure python parser."""
from pathlib import Path
import pytest
from pegen.build import build_parser
from .utils import generate_parser
@pytest.fixture(scope="session")
def python_parser_cls():
    grammar_path = Path(__file__).parent.parent.parent / "mingshe.gram"
    grammar = build_parser(grammar_path)[0]
    source_path = str(Path(__file__).parent / "parser_cache" / "py_parser.py")
    parser_cls = generate_parser(grammar, source_path, "PythonParser")
    return parser_cls
@pytest.fixture(scope="session")
def python_parse_file():
    grammar_path = Path(__file__).parent.parent.parent / "mingshe.gram"
    grammar = build_parser(grammar_path)[0]
    source_path = str(Path(__file__).parent / "parser_cache" / "py_parser.py")
    parser_cls = generate_parser(grammar, source_path, "parse_file")
    return parser_cls
@pytest.fixture(scope="session")
def python_parse_str():
    grammar_path = Path(__file__).parent.parent.parent / "mingshe.gram"
    grammar = build_parser(grammar_path)[0]
    source_path = str(Path(__file__).parent / "parser_cache" / "py_parser.py")
    parser_cls = generate_parser(grammar, source_path, "parse_string")
    return parser_cls
| 31.5 | 78 | 0.741019 | 159 | 1,197 | 5.176101 | 0.207547 | 0.076549 | 0.102066 | 0.09113 | 0.789793 | 0.789793 | 0.748481 | 0.748481 | 0.748481 | 0.748481 | 0 | 0.002907 | 0.137845 | 1,197 | 37 | 79 | 32.351351 | 0.794574 | 0.027569 | 0 | 0.6 | 1 | 0 | 0.14076 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.12 | false | 0 | 0.16 | 0 | 0.4 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
930d8a731b592e2eef5ef27d290261235eb04ddc | 9,187 | py | Python | rangedet/core/detection_metric.py | jie311/RangeDet | 5078ce339c6d27a009aed1ca2790911ce4d10bc7 | [
"Apache-2.0"
] | 125 | 2021-08-09T02:14:04.000Z | 2022-03-30T03:41:56.000Z | rangedet/core/detection_metric.py | jie311/RangeDet | 5078ce339c6d27a009aed1ca2790911ce4d10bc7 | [
"Apache-2.0"
] | 15 | 2021-08-31T06:12:31.000Z | 2022-03-17T00:21:35.000Z | rangedet/core/detection_metric.py | jie311/RangeDet | 5078ce339c6d27a009aed1ca2790911ce4d10bc7 | [
"Apache-2.0"
] | 8 | 2021-08-10T03:08:10.000Z | 2022-03-09T06:21:11.000Z | import mxnet as mx
import numpy as np
class LossWithIgnore(mx.metric.EvalMetric):
    def __init__(self, name, output_names, label_names, ignore_label=-1):
        super(LossWithIgnore, self).__init__(name, output_names, label_names)
        self.ignore_label = ignore_label

    def update(self, labels, preds):
        raise NotImplementedError
class FgLossWithIgnore(LossWithIgnore):
    def __init__(self, name, output_names, label_names, bg_label=0, ignore_label=-1):
        super(FgLossWithIgnore, self).__init__(name, output_names, label_names, ignore_label)
        self.bg_label = bg_label

    def update(self, labels, preds):
        raise NotImplementedError
class AccWithIgnore(LossWithIgnore):
    def __init__(self, name, output_names, label_names, ignore_label=-1, axis=1):
        super(AccWithIgnore, self).__init__(name, output_names, label_names, ignore_label)
        self.axis = axis

    def update(self, labels, preds):
        if len(preds) == 1 and len(labels) == 1:
            pred = preds[0]
            label = labels[0]
        elif len(preds) == 2:
            pred = preds[0]
            label = preds[1]
        else:
            raise Exception(
                "unknown loss output: len(preds): {}, len(labels): {}".format(
                    len(preds), len(labels)
                )
            )
        if pred.shape[self.axis] == 1:
            pred_label = pred > 0.5
        else:
            pred_label = mx.ndarray.argmax(pred, axis=self.axis)
        pred_label = pred_label.astype('int32').asnumpy().reshape(-1)
        label = label.astype('int32').asnumpy().reshape(-1)
        keep_inds = np.where(label != self.ignore_label)[0]
        pred_label = pred_label[keep_inds]
        label = label[keep_inds]
        self.sum_metric += np.sum(pred_label == label)
        self.num_inst += len(pred_label)
class AccWithIgnoreDebug(LossWithIgnore):
    def __init__(self, name, output_names, label_names, ignore_label=-1, axis=1):
        super(AccWithIgnoreDebug, self).__init__(name, output_names, label_names, ignore_label)
        self.axis = axis

    def update(self, labels, preds):
        if len(preds) == 1 and len(labels) == 1:
            pred = preds[0]
            label = labels[0]
        elif len(preds) == 2:
            pred = preds[0]
            label = preds[1]
        else:
            raise Exception(
                "unknown loss output: len(preds): {}, len(labels): {}".format(
                    len(preds), len(labels)
                )
            )
        print(label)
        if pred.shape[self.axis] == 1:
            pred_label = pred > 0.5
        else:
            pred_label = mx.ndarray.argmax(pred, axis=self.axis)
        pred_label = pred_label.astype('int32').asnumpy().reshape(-1)
        label = label.astype('int32').asnumpy().reshape(-1)
        keep_inds = np.where(label != self.ignore_label)[0]
        pred_label = pred_label[keep_inds]
        label = label[keep_inds]
        self.sum_metric += np.sum(pred_label == label)
        self.num_inst += len(pred_label)
class FgAccWithIgnore(FgLossWithIgnore):
    def __init__(self, name, output_names, label_names, bg_label=0, ignore_label=-1, axis=1):
        super(FgAccWithIgnore, self).__init__(name, output_names, label_names, bg_label, ignore_label)
        self.axis = axis

    def update(self, labels, preds):
        pred = preds[0]
        label = labels[0]
        if pred.shape[self.axis] == 1:
            pred_label = pred > 0.5
        else:
            pred_label = mx.ndarray.argmax(pred, axis=self.axis)
        pred_label = pred_label.astype('int32').asnumpy().reshape(-1)
        label = label.astype('int32').asnumpy().reshape(-1)
        keep_inds = np.where((label != self.bg_label) & (label != self.ignore_label))[0]
        pred_label = pred_label[keep_inds]
        label = label[keep_inds]
        self.sum_metric += np.sum(pred_label == label)
        self.num_inst += len(pred_label)
class CeWithIgnore(LossWithIgnore):
    def __init__(self, name, output_names, label_names, ignore_label=-1):
        super(CeWithIgnore, self).__init__(name, output_names, label_names, ignore_label)

    def update(self, labels, preds):
        pred = preds[0]
        label = labels[0]
        label = label.astype('int32').asnumpy().reshape(-1)
        pred = pred.asnumpy().astype('float32').reshape((pred.shape[0], pred.shape[1], -1)).transpose((0, 2, 1))
        pred = pred.reshape((label.shape[0], -1))  # -1 x c
        keep_inds = np.where(label != self.ignore_label)[0]
        label = label[keep_inds]
        prob = pred[keep_inds, label]
        prob += 1e-14
        ce_loss = -1 * np.log(prob)
        ce_loss = np.sum(ce_loss)
        self.sum_metric += ce_loss
        self.num_inst += label.shape[0]
class FgCeWithIgnore(FgLossWithIgnore):
    def __init__(self, name, output_names, label_names, bg_label=0, ignore_label=-1):
        super(FgCeWithIgnore, self).__init__(name, output_names, label_names, bg_label, ignore_label)

    def update(self, labels, preds):
        pred = preds[0]
        label = labels[0]
        label = label.astype('int32').asnumpy().reshape(-1)
        pred = pred.asnumpy().reshape((pred.shape[0], pred.shape[1], -1)).transpose((0, 2, 1))
        pred = pred.reshape((label.shape[0], -1))  # -1 x c
        keep_inds = np.where((label != self.ignore_label) & (label != self.bg_label))[0]
        label = label[keep_inds]
        prob = pred[keep_inds, label]
        prob += 1e-14
        ce_loss = -1 * np.log(prob)
        ce_loss = np.sum(ce_loss)
        self.sum_metric += ce_loss
        self.num_inst += label.shape[0]
class L1(FgLossWithIgnore):
    def __init__(self, name, output_names, label_names, bg_label=0, ignore_label=-1):
        super(L1, self).__init__(name, output_names, label_names, bg_label, ignore_label)

    def update(self, labels, preds):
        if len(preds) == 1 and len(labels) == 1:
            pred = preds[0].asnumpy()
            label = labels[0].asnumpy()
        elif len(preds) == 2:
            pred = preds[0].asnumpy()
            label = preds[1].asnumpy()
        else:
            raise Exception(
                "unknown loss output: len(preds): {}, len(labels): {}".format(
                    len(preds), len(labels)
                )
            )
        label = label.reshape(-1)
        num_inst = len(np.where((label != self.bg_label) & (label != self.ignore_label))[0])
        self.sum_metric += np.sum(pred)
        self.num_inst += num_inst
class SigmoidCrossEntropy(mx.metric.EvalMetric):
    def __init__(self, name, output_names, label_names):
        super(SigmoidCrossEntropy, self).__init__(name, output_names, label_names)

    def update(self, labels, preds):
        x = preds[0].reshape(-1)  # logit
        z = preds[1].reshape(-1)  # label
        l = mx.nd.relu(x) - x * z + mx.nd.log1p(mx.nd.exp(-mx.nd.abs(x)))
        l = l.mean().asnumpy()
        self.num_inst += 1
        self.sum_metric += l
class ScalarLoss(mx.metric.EvalMetric):
    def __init__(self, name, output_names, label_names, reduction='sum'):
        super(ScalarLoss, self).__init__(name, output_names, label_names)
        self.reduction = reduction

    def update(self, labels, preds):
        if self.reduction == 'sum':
            loss = preds[0].asnumpy().sum()
        elif self.reduction == 'mean':
            loss = preds[0].asnumpy().mean()
        self.num_inst += 1
        self.sum_metric += loss
class ScalarLossFP16(mx.metric.EvalMetric):
    def __init__(self, name, output_names, label_names):
        super(ScalarLossFP16, self).__init__(name, output_names, label_names)

    def update(self, labels, preds):
        loss = preds[0].asnumpy().sum()
        self.num_inst += 1
        self.sum_metric += loss
class RegLossFP16(mx.metric.EvalMetric):
    def __init__(self, name, output_names, label_names):
        super(RegLossFP16, self).__init__(name, output_names, label_names)

    def update(self, labels, preds):
        loss = preds[0].asnumpy().sum()
        self.num_inst += 1
        self.sum_metric += loss
class AccWithLogits(LossWithIgnore):
    def __init__(self, name, output_names, label_names, ignore_label=-1, axis=1):
        super(AccWithLogits, self).__init__(name, output_names, label_names, ignore_label)
        self.axis = axis

    def update(self, labels, preds):
        if len(preds) == 1 and len(labels) == 1:
            pred = preds[0]
            label = labels[0]
        elif len(preds) == 2:
            pred = preds[0]
            label = preds[1]
        else:
            raise Exception(
                "unknown loss output: len(preds): {}, len(labels): {}".format(
                    len(preds), len(labels)
                )
            )
        pred_label = pred > 0
        pred_label = pred_label.astype('int32').asnumpy().reshape(-1)
        label = label.astype('int32').asnumpy().reshape(-1)
        keep_inds = np.where(label != self.ignore_label)[0]
        pred_label = pred_label[keep_inds]
        label = label[keep_inds]
        self.sum_metric += np.sum(pred_label == label)
        self.num_inst += len(pred_label)
| 35.064885 | 112 | 0.598345 | 1,170 | 9,187 | 4.463248 | 0.07265 | 0.053428 | 0.074684 | 0.099579 | 0.864994 | 0.856377 | 0.84527 | 0.836078 | 0.819801 | 0.783991 | 0 | 0.022606 | 0.268096 | 9,187 | 261 | 113 | 35.199234 | 0.754015 | 0.002721 | 0 | 0.699507 | 0 | 0 | 0.030032 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.128079 | false | 0 | 0.009852 | 0 | 0.20197 | 0.004926 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
93150ffde70af0c0af06df775f3d8313e0151c8f | 112 | py | Python | pyck/controllers/__init__.py | kashifpk/PyCK | 11513c6b928d37afcf83de717e8d2f74fce731af | [
"Ruby"
] | 2 | 2015-01-11T22:23:58.000Z | 2016-05-17T06:57:57.000Z | pyck/controllers/__init__.py | kashifpk/PyCK | 11513c6b928d37afcf83de717e8d2f74fce731af | [
"Ruby"
] | 31 | 2015-01-14T11:30:50.000Z | 2017-01-31T14:35:48.000Z | pyck/controllers/__init__.py | kashifpk/PyCK | 11513c6b928d37afcf83de717e8d2f74fce731af | [
"Ruby"
] | null | null | null | from .crud_controller import CRUDController, add_crud_handler
__all__ = ['CRUDController', 'add_crud_handler']
| 28 | 61 | 0.821429 | 13 | 112 | 6.384615 | 0.615385 | 0.409639 | 0.506024 | 0.674699 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089286 | 112 | 3 | 62 | 37.333333 | 0.813725 | 0 | 0 | 0 | 0 | 0 | 0.267857 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.5 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 6 |
933d677982c2c530dd72a0cb3754e790261982f0 | 76 | py | Python | elecsim/__init__.py | alexanderkell/elecsim | 35e400809759a8e9a9baa3776344e383b13d8c54 | [
"MIT"
] | 18 | 2019-01-18T21:41:49.000Z | 2022-02-14T15:49:40.000Z | elecsim/__init__.py | alexanderkell/elecsim | 35e400809759a8e9a9baa3776344e383b13d8c54 | [
"MIT"
] | 40 | 2020-01-28T22:37:53.000Z | 2022-03-12T01:00:07.000Z | elecsim/__init__.py | alexanderkell/elecsim | 35e400809759a8e9a9baa3776344e383b13d8c54 | [
"MIT"
] | 3 | 2020-08-03T16:45:54.000Z | 2021-08-04T07:45:16.000Z | from elecsim.model.world import World
import elecsim.scenario.scenario_data
| 25.333333 | 37 | 0.868421 | 11 | 76 | 5.909091 | 0.636364 | 0.338462 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.078947 | 76 | 2 | 38 | 38 | 0.928571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fa7ed8c90d560a49364813a42cb291ae139b6b36 | 130 | py | Python | ds2/stack/__init__.py | aslisabanci/datastructures | f7952801245bc8d386a03d92a38121f558bdacca | [
"MIT"
] | 159 | 2017-10-02T22:03:14.000Z | 2022-03-10T23:02:22.000Z | ds2/stack/__init__.py | aslisabanci/datastructures | f7952801245bc8d386a03d92a38121f558bdacca | [
"MIT"
] | 9 | 2019-02-04T14:55:09.000Z | 2021-06-05T13:30:28.000Z | ds2/stack/__init__.py | aslisabanci/datastructures | f7952801245bc8d386a03d92a38121f558bdacca | [
"MIT"
] | 49 | 2017-09-29T17:51:16.000Z | 2022-03-10T23:12:17.000Z | from ds2.stack.liststack import ListStack
from ds2.stack.anotherstack import AnotherStack
from ds2.stack.badstack import BadStack
| 32.5 | 47 | 0.861538 | 18 | 130 | 6.222222 | 0.388889 | 0.1875 | 0.321429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.025424 | 0.092308 | 130 | 3 | 48 | 43.333333 | 0.923729 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
fa828be2abafb4c1a3fe69d29314c922932d2969 | 45 | py | Python | pygman/__init__.py | pygman/pygman | ea2048d32b81c061054f46e6074fe3acca0e6921 | [
"MIT"
] | null | null | null | pygman/__init__.py | pygman/pygman | ea2048d32b81c061054f46e6074fe3acca0e6921 | [
"MIT"
] | null | null | null | pygman/__init__.py | pygman/pygman | ea2048d32b81c061054f46e6074fe3acca0e6921 | [
"MIT"
] | null | null | null | from pygman.pygman import *
print("pygman")
| 11.25 | 27 | 0.733333 | 6 | 45 | 5.5 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 45 | 3 | 28 | 15 | 0.846154 | 0 | 0 | 0 | 0 | 0 | 0.133333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.5 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 6 |
faf420f64ad5247db827bed6531f85d0e2ee0b5f | 199 | py | Python | doommoses/__init__.py | sortiz/doommoses | 337fb5533608e56ed6dfddc58dcf66f6cafb6a37 | [
"MIT"
] | 1 | 2022-01-19T11:57:29.000Z | 2022-01-19T11:57:29.000Z | doommoses/__init__.py | sortiz/doommoses | 337fb5533608e56ed6dfddc58dcf66f6cafb6a37 | [
"MIT"
] | null | null | null | doommoses/__init__.py | sortiz/doommoses | 337fb5533608e56ed6dfddc58dcf66f6cafb6a37 | [
"MIT"
] | 1 | 2022-01-19T09:22:46.000Z | 2022-01-19T09:22:46.000Z | from doommoses.corpus import *
from doommoses.tokenize import *
from doommoses.truecase import *
from doommoses.normalize import *
# from doommoses.subwords import *
__version__ = "0.0.44"
| 22.111111 | 35 | 0.748744 | 24 | 199 | 6.041667 | 0.458333 | 0.448276 | 0.524138 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.024242 | 0.170854 | 199 | 8 | 36 | 24.875 | 0.854545 | 0.160804 | 0 | 0 | 0 | 0 | 0.038217 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.8 | 0 | 0.8 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
4f0c2599487fb2e9022bd4c4900ca768f3598152 | 36 | py | Python | favorite-animals/jerron-pa.py | jasonstewartpariveda/learn-git-1 | ae981f5a3d787860240ce658f46f1d98d0caf76e | [
"MIT"
] | 1 | 2021-09-29T18:48:12.000Z | 2021-09-29T18:48:12.000Z | favorite-animals/jerron-pa.py | jasonstewartpariveda/learn-git-1 | ae981f5a3d787860240ce658f46f1d98d0caf76e | [
"MIT"
] | 21 | 2021-09-27T17:19:45.000Z | 2021-09-30T04:07:26.000Z | favorite-animals/jerron-pa.py | jasonstewartpariveda/learn-git-1 | ae981f5a3d787860240ce658f46f1d98d0caf76e | [
"MIT"
] | 192 | 2021-09-27T17:10:51.000Z | 2021-10-05T03:06:36.000Z | print("My favorite animal is duck.") | 36 | 36 | 0.75 | 6 | 36 | 4.5 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 36 | 1 | 36 | 36 | 0.84375 | 0 | 0 | 0 | 0 | 0 | 0.72973 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 6 |
877e968c643bcaeedadf25c8b80ea961c6b659ad | 11,211 | py | Python | backend_old/encoding.py | Jemoka/gregarious | 014dcb62114bf8361732a1ac4ee5525f831cd64f | [
"Apache-2.0"
] | null | null | null | backend_old/encoding.py | Jemoka/gregarious | 014dcb62114bf8361732a1ac4ee5525f831cd64f | [
"Apache-2.0"
] | null | null | null | backend_old/encoding.py | Jemoka/gregarious | 014dcb62114bf8361732a1ac4ee5525f831cd64f | [
"Apache-2.0"
] | null | null | null | import time
import tensorflow as tf
import keras
import numpy as np
from keras.utils import to_categorical
from nltk import sent_tokenize, word_tokenize
from nltk.tokenize.treebank import TreebankWordDetokenizer
from nltk.stem import WordNetLemmatizer
class SentenceOneHotEncoder(object):
    def __init__(self, minval=2):
        self.encoding_dict = {"__$UNDEF$__": 1}
        self.encoding_dict_rev = {1: "__$UNDEF$__"}
        self.occurence = {}
        self.trained = False
        self.__currentID = 2
        self.__minval = minval
        self.__maxWords = 0
        self.__lemma = WordNetLemmatizer()

    @property
    def vocabSize(self):
        return self.__currentID

    @property
    def sentSize(self):
        return self.__maxWords
    def train(self, sentences):
        assert not self.trained, "Please call SentenceOneHotEncoder.untrain to reset training."
        assert type(sentences)==list or type(sentences)==np.ndarray, "Please supply an *array* of string sentences."
        sents_tokenized = []
        for item in sentences:
            assert type(item)==str, "Please supply an array of *string* sentences."
            for sent in sent_tokenize(item):
                sents_tokenized.append(sent)
        for sent in sents_tokenized:
            words = word_tokenize(sent)
            if len(words) > self.__maxWords:
                self.__maxWords = len(words)
            for word in words:
                preppedWord = self.__lemma.lemmatize(self.__lemma.lemmatize(word.lower(), "v"), "n")
                count = self.occurence.get(preppedWord)
                if not count:
                    self.occurence[preppedWord] = 1
                else:
                    self.occurence[preppedWord] = count+1
                    if count+1>=self.__minval:
                        id = self.encoding_dict.get(preppedWord)
                        if not id:
                            self.encoding_dict[preppedWord] = self.__currentID
                            self.encoding_dict_rev[self.__currentID] = preppedWord
                            self.__currentID += 1
        self.trained = True
    def untrain(self):
        print("DANGER AHEAD: you are resetting the training of this vectorizer and ALL DATA WILL BE LOST!")
        print("You have 5 seconds to kill this...")
        time.sleep(5)
        print("Welp. Your training weights is going now.")
        self.encoding_dict = {"__$UNDEF$__": 1}
        self.encoding_dict_rev = {1: "__$UNDEF$__"}
        self.trained = False
        self.occurence = {}
        self.__currentID = 2
        print("Done.")
    def encode(self, sentences):
        assert self.trained, "Model not trained! Please call SentenceOneHotEncoder.train to train."
        assert type(sentences)==list or type(sentences)==np.ndarray or type(sentences)==str, "Please supply an *array* of string sentences or a string of sentences."
        if type(sentences) == str:
            sentences = [sentences]
        sents = []
        for item in sentences:
            assert type(item)==str, "Please supply an array of *string* sentences."
            for sent in sent_tokenize(item):
                sents.append(sent)
        sents_encoded = []
        for sent in sents:
            words = word_tokenize(sent)
            if len(words) > self.__maxWords:
                self.__maxWords = len(words)
        for sent in sents:
            words = word_tokenize(sent)
            word_vectors = []
            for word in words:
                id = self.encoding_dict.get(self.__lemma.lemmatize(self.__lemma.lemmatize(word.lower(), "v"), "n"), 1)
                word_vectors.append(id)
            while len(word_vectors) < self.__maxWords:
                word_vectors.append(0)
            cats = to_categorical(np.array(word_vectors), num_classes=self.__currentID).tolist()
            cats_n = [0]*len(cats)
            for i, item in enumerate(cats):
                if item[0] == 1:
                    item = [0]*self.__currentID
                cats_n[i] = item
            sents_encoded.append(cats_n)
        return np.asarray(sents_encoded)
    def decode(self, sentences):
        assert self.trained, "Model not trained! Please call SentenceVectorizer.train to train."
        assert type(sentences)==list or type(sentences)==np.ndarray, "Please supply an *array* of string sentences."
        detokenizer = TreebankWordDetokenizer()
        sents_decoded = []
        for sent in sentences:
            assert type(sent)==list or type(sent)==np.ndarray, "Please supply an array of array vector sentences."
            if type(sent) == np.ndarray:
                sent = sent.tolist()
            words_decoded = []
            for w in sent:
                word = w.index(1)
                if word == 0:
                    continue
                word_decoded = self.encoding_dict_rev[word]
                words_decoded.append(word_decoded)
            sents_decoded.append(detokenizer.detokenize(words_decoded))
        return sents_decoded
class SentenceVectorizer(object):
    def __init__(self, pad=False, minval=2):
        self.encoding_dict = {"__$UNDEF$__": 1}
        self.encoding_dict_rev = {1: "__$UNDEF$__"}
        self.occurence = {}
        self.trained = False
        self.pad = pad
        self.__currentID = 2
        self.__minval = minval

    @property
    def sequenceLength(self):
        return self.__currentID
    def train(self, sentences):
        assert not self.trained, "Please call SentenceVectorizer.untrain to reset training."
        assert type(sentences)==list or type(sentences)==np.ndarray, "Please supply an *array* of string sentences."
        sents_tokenized = []
        for item in sentences:
            assert type(item)==str, "Please supply an array of *string* sentences."
            for sent in sent_tokenize(item):
                sents_tokenized.append(sent)
        for sent in sents_tokenized:
            words = word_tokenize(sent)
            for word in words:
                count = self.occurence.get(word.lower())
                if not count:
                    self.occurence[word.lower()] = 1
                else:
                    self.occurence[word.lower()] = count+1
                    if count+1>=self.__minval:
                        id = self.encoding_dict.get(word.lower())
                        if not id:
                            self.encoding_dict[word.lower()] = self.__currentID
                            self.encoding_dict_rev[self.__currentID] = word.lower()
                            self.__currentID += 1
        self.trained = True
    def untrain(self):
        print("DANGER AHEAD: you are resetting the training of this vectorizer and ALL DATA WILL BE LOST!")
        print("You have 5 seconds to kill this...")
        time.sleep(5)
        print("Welp. Your training weights is going now.")
        self.encoding_dict = {"__$UNDEF$__": 1}
        self.encoding_dict_rev = {1: "__$UNDEF$__"}
        self.trained = False
        self.occurence = {}
        self.__currentID = 2
        print("Done.")
    def encode(self, sentences):
        assert self.trained, "Model not trained! Please call SentenceVectorizer.train to train."
        assert type(sentences)==list or type(sentences)==np.ndarray or type(sentences)==str, "Please supply an *array* of string sentences or a string of sentences."
        if type(sentences) == str:
            sentences = [sentences]
        sents = []
        for item in sentences:
            assert type(item)==str, "Please supply an array of *string* sentences."
            for sent in sent_tokenize(item):
                sents.append(sent)
        sents_encoded = []
        for sent in sents:
            words = word_tokenize(sent)
            word_vectors = []
            for word in words:
                id = self.encoding_dict.get(word.lower(), 1)
                word_vectors.append(id)
            if self.pad:
                while len(word_vectors)<self.__currentID:
                    word_vectors.append(0)
            sents_encoded.append(word_vectors)
        return sents_encoded
    def decode(self, sentences):
        assert self.trained, "Model not trained! Please call SentenceVectorizer.train to train."
        assert type(sentences)==list or type(sentences)==np.ndarray, "Please supply an *array* of string sentences."
        detokenizer = TreebankWordDetokenizer()
        sents_decoded = []
        for sent in sentences:
            assert type(sent)==list or type(sent)==np.ndarray, "Please supply an array of array vector sentences."
            if type(sent) == np.ndarray:
                sent = sent.tolist()
            words_decoded = []
            for word in sent:
                if word == 0:
                    continue
                word_decoded = self.encoding_dict_rev[word]
                words_decoded.append(word_decoded)
            sents_decoded.append(detokenizer.detokenize(words_decoded))
        return sents_decoded
| 52.633803 | 173 | 0.471858 | 1,017 | 11,211 | 5.021632 | 0.130777 | 0.042295 | 0.056393 | 0.044645 | 0.793421 | 0.765224 | 0.745252 | 0.741727 | 0.724104 | 0.724104 | 0 | 0.006189 | 0.452324 | 11,211 | 212 | 174 | 52.882075 | 0.82557 | 0 | 0 | 0.728205 | 0 | 0 | 0.125769 | 0.013737 | 0 | 0 | 0 | 0 | 0.092308 | 1 | 0.066667 | false | 0 | 0.041026 | 0.015385 | 0.153846 | 0.041026 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
87abc499167e159189844c7311c46303cdfc0b20 | 1,083 | py | Python | 2021/d06/d06.py | pravin/advent-2016 | ecb0f72b9152c13e9c05d3ed2510bf7b8aa0907c | [
"Apache-2.0"
] | null | null | null | 2021/d06/d06.py | pravin/advent-2016 | ecb0f72b9152c13e9c05d3ed2510bf7b8aa0907c | [
"Apache-2.0"
] | null | null | null | 2021/d06/d06.py | pravin/advent-2016 | ecb0f72b9152c13e9c05d3ed2510bf7b8aa0907c | [
"Apache-2.0"
] | null | null | null | #!/usr/bin/env python3
inp = [1,3,4,1,5,2,1,1,1,1,5,1,5,1,1,1,1,3,1,1,1,1,1,1,1,2,1,5,1,1,1,1,1,4,4,1,1,4,1,1,2,3,1,5,1,4,1,2,4,1,1,1,1,1,1,1,1,2,5,3,3,5,1,1,1,1,4,1,1,3,1,1,1,2,3,4,1,1,5,1,1,1,1,1,2,1,3,1,3,1,2,5,1,1,1,1,5,1,5,5,1,1,1,1,3,4,4,4,1,5,1,1,4,4,1,1,1,1,3,1,1,1,1,1,1,3,2,1,4,1,1,4,1,5,5,1,2,2,1,5,4,2,1,1,5,1,5,1,3,1,1,1,1,1,4,1,2,1,1,5,1,1,4,1,4,5,3,5,5,1,2,1,1,1,1,1,3,5,1,2,1,2,1,3,1,1,1,1,1,4,5,4,1,3,3,1,1,1,1,1,1,1,1,1,5,1,1,1,5,1,1,4,1,5,2,4,1,1,1,2,1,1,4,4,1,2,1,1,1,1,5,3,1,1,1,1,4,1,4,1,1,1,1,1,1,3,1,1,2,1,1,1,1,1,2,1,1,1,1,1,1,1,2,1,1,1,1,1,1,4,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,1,1,2,5,1,2,1,1,1,1,1,1,1,1,1]
def solve(vec, days):
for j in range(days):
zero = vec[0]
for i in range(1, 9):
vec[i-1] = vec[i] # reduce internal timer of fishes
vec[8] = zero # new lantern fishes born
vec[6] += zero # lantern fishes with zero timer reset
return sum(vec)
def arr2vec(arr):
vec = [0] * 9
for x in arr:
vec[x] += 1
return vec
print(solve(arr2vec(inp), 80))
print(solve(arr2vec(inp), 256))
| 45.125 | 607 | 0.527239 | 377 | 1,083 | 1.514589 | 0.111406 | 0.455342 | 0.478109 | 0.455342 | 0.448336 | 0.401051 | 0.32049 | 0.227671 | 0.162872 | 0.103328 | 0 | 0.339744 | 0.135734 | 1,083 | 23 | 608 | 47.086957 | 0.270299 | 0.105263 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.125 | false | 0 | 0 | 0 | 0.25 | 0.125 | 0 | 0 | 1 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
87d4dece268509b4ef4dcf33db8a3ea86c9d02ed | 102 | py | Python | Project-Django/PetShelter/views/__init__.py | wyattjoh/CMPUT410-Project | 36255dd530b1e4d7510e89f0e3ccc7447e01edac | [
"Apache-2.0"
] | null | null | null | Project-Django/PetShelter/views/__init__.py | wyattjoh/CMPUT410-Project | 36255dd530b1e4d7510e89f0e3ccc7447e01edac | [
"Apache-2.0"
] | null | null | null | Project-Django/PetShelter/views/__init__.py | wyattjoh/CMPUT410-Project | 36255dd530b1e4d7510e89f0e3ccc7447e01edac | [
"Apache-2.0"
] | null | null | null | from mainviews import *
from activities import *
from applications import *
from management import *
| 17 | 26 | 0.794118 | 12 | 102 | 6.75 | 0.5 | 0.37037 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.166667 | 102 | 5 | 27 | 20.4 | 0.952941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
87e209c367fd532edfe50ebb14cc6d14abfc80be | 2,468 | py | Python | run.py | shubhamg0sai/termux_voice_calculator | c80e490d2ab5dab6d42ac76fc6fdd5e0ee5d4687 | [
"MIT"
] | 1 | 2022-02-04T16:23:11.000Z | 2022-02-04T16:23:11.000Z | run.py | shubhamg0sai/termux_voice_calculator | c80e490d2ab5dab6d42ac76fc6fdd5e0ee5d4687 | [
"MIT"
] | null | null | null | run.py | shubhamg0sai/termux_voice_calculator | c80e490d2ab5dab6d42ac76fc6fdd5e0ee5d4687 | [
"MIT"
] | null | null | null | import os
import time
import subprocess
from word2number import w2n
from num2words import num2words
nt1 = "speak Your first number"
nt2 = "speak Your second number"
op = "speak Your math operator like addition subtraction multiplication division"
r = "the result is"
n = 1
def add():
    subprocess.call(["termux-tts-speak",nt1])
    n1 = subprocess.getoutput("termux-speech-to-text")
    print("......",str(n1))
    subprocess.call(["termux-tts-speak",nt2])
    n2 = subprocess.getoutput("termux-speech-to-text")
    print("......",str(n2))
    res = w2n.word_to_num(n1) + w2n.word_to_num(n2)
    print(res)
    result = num2words(res)
    print(result)
    subprocess.call(["termux-tts-speak",r,result])
def sub():
    subprocess.call(["termux-tts-speak",nt1])
    n1 = subprocess.getoutput("termux-speech-to-text")
    print("......",str(n1))
    subprocess.call(["termux-tts-speak",nt2])
    n2 = subprocess.getoutput("termux-speech-to-text")
    print("......",str(n2))
    res = w2n.word_to_num(n1) - w2n.word_to_num(n2)
    print(res)
    result = num2words(res)
    print(result)
    subprocess.call(["termux-tts-speak",r,result])
def mul():
    subprocess.call(["termux-tts-speak",nt1])
    n1 = subprocess.getoutput("termux-speech-to-text")
    print("......",str(n1))
    subprocess.call(["termux-tts-speak",nt2])
    n2 = subprocess.getoutput("termux-speech-to-text")
    print("......",str(n2))
    res = w2n.word_to_num(n1) * w2n.word_to_num(n2)
    print(res)
    result = num2words(res)
    print(result)
    subprocess.call(["termux-tts-speak",r,result])
def div():
    subprocess.call(["termux-tts-speak",nt1])
    n1 = subprocess.getoutput("termux-speech-to-text")
    print("......",str(n1))
    subprocess.call(["termux-tts-speak",nt2])
    n2 = subprocess.getoutput("termux-speech-to-text")
    print("......",str(n2))
    res = w2n.word_to_num(n1) / w2n.word_to_num(n2)
    print(res)
    result = num2words(res)
    print(result)
    subprocess.call(["termux-tts-speak",r,result])
def calculator():
    subprocess.call(["termux-tts-speak",op])
    inp = subprocess.getoutput("termux-speech-to-text")
    print("......",str(inp))
    if "addition" in inp:
        add()
    elif "subtraction" in inp:
        sub()
    elif "multiplication" in inp:
        mul()
    elif "division" in inp:
        div()
    else:
        subprocess.call(["termux-tts-speak","wrong operator"])
while n>0:
    calculator()
| 28.045455 | 81 | 0.628444 | 326 | 2,468 | 4.708589 | 0.177914 | 0.127687 | 0.18241 | 0.209772 | 0.758958 | 0.722476 | 0.722476 | 0.722476 | 0.69316 | 0.69316 | 0 | 0.025896 | 0.186386 | 2,468 | 87 | 82 | 28.367816 | 0.738546 | 0 | 0 | 0.540541 | 0 | 0 | 0.266207 | 0.07658 | 0 | 0 | 0 | 0 | 0 | 1 | 0.067568 | false | 0 | 0.067568 | 0 | 0.135135 | 0.22973 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
e201bc20cd0f44f300b7d1ca7bde70faaf409ce3 | 72 | py | Python | test/domain/core/__init__.py | trcox/py-core-domain | ca490809b247aef08e7de8981432f31b4f9d31a1 | [
"Apache-2.0"
] | null | null | null | test/domain/core/__init__.py | trcox/py-core-domain | ca490809b247aef08e7de8981432f31b4f9d31a1 | [
"Apache-2.0"
] | null | null | null | test/domain/core/__init__.py | trcox/py-core-domain | ca490809b247aef08e7de8981432f31b4f9d31a1 | [
"Apache-2.0"
] | null | null | null | from .event_test import EventTest
from .reading_test import ReadingTest
| 24 | 37 | 0.861111 | 10 | 72 | 6 | 0.7 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.111111 | 72 | 2 | 38 | 36 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
e20ac79651ce0fdc3607557e5079b29fa0d333dd | 101 | py | Python | point_seg/dataloader/tfrecord/__init__.py | soumik12345/point-cloud-segmentation | ee57007d981f2f7fcf208dee5a6bbefe311022f8 | [
"MIT"
] | 28 | 2021-10-03T08:56:48.000Z | 2022-03-11T18:15:42.000Z | point_seg/dataloader/tfrecord/__init__.py | soumik12345/point-cloud-segmentation | ee57007d981f2f7fcf208dee5a6bbefe311022f8 | [
"MIT"
] | 16 | 2021-09-30T15:44:50.000Z | 2021-12-31T11:43:39.000Z | point_seg/dataloader/tfrecord/__init__.py | soumik12345/point-cloud-segmentation | ee57007d981f2f7fcf208dee5a6bbefe311022f8 | [
"MIT"
] | 5 | 2021-11-03T16:24:59.000Z | 2022-02-14T14:01:51.000Z | from .tfrecord_creator import ShapeNetCoreTFRecordWriter
from .tfrecord_loader import TFRecordLoader
| 33.666667 | 56 | 0.90099 | 10 | 101 | 8.9 | 0.7 | 0.269663 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079208 | 101 | 2 | 57 | 50.5 | 0.956989 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
3550f00d63baf17b2d5dca4e57d4c0823674356f | 24,235 | py | Python | tests/data/test_dataset.py | Tokkiu/RecBole-1 | fbf146c419cb472cf704788b31e0c7aa00601405 | [
"MIT"
] | 4 | 2021-06-06T03:48:12.000Z | 2021-12-02T00:20:56.000Z | tests/data/test_dataset.py | Tokkiu/RecBole-1 | fbf146c419cb472cf704788b31e0c7aa00601405 | [
"MIT"
] | null | null | null | tests/data/test_dataset.py | Tokkiu/RecBole-1 | fbf146c419cb472cf704788b31e0c7aa00601405 | [
"MIT"
] | 1 | 2021-08-11T20:17:10.000Z | 2021-08-11T20:17:10.000Z | # -*- coding: utf-8 -*-
# @Time : 2021/1/3
# @Author : Yushuo Chen
# @Email : chenyushuo@ruc.edu.cn
# UPDATE
# @Time : 2020/1/3
# @Author : Yushuo Chen
# @email : chenyushuo@ruc.edu.cn
import logging
import os
import pytest
from recbole.config import Config, EvalSetting
from recbole.data import create_dataset
from recbole.utils import init_seed
current_path = os.path.dirname(os.path.realpath(__file__))
def new_dataset(config_dict=None, config_file_list=None):
config = Config(config_dict=config_dict, config_file_list=config_file_list)
init_seed(config['seed'], config['reproducibility'])
logging.basicConfig(level=logging.ERROR)
return create_dataset(config)
def split_dataset(config_dict=None, config_file_list=None):
dataset = new_dataset(config_dict=config_dict, config_file_list=config_file_list)
config = dataset.config
es_str = [_.strip() for _ in config['eval_setting'].split(',')]
es = EvalSetting(config)
es.set_ordering_and_splitting(es_str[0])
return dataset.build(es)
class TestDataset:
def test_filter_nan_user_or_item(self):
config_dict = {
'model': 'BPR',
'dataset': 'filter_nan_user_or_item',
'data_path': current_path,
'load_col': None,
}
dataset = new_dataset(config_dict=config_dict)
assert len(dataset.inter_feat) == 1
assert len(dataset.user_feat) == 3
assert len(dataset.item_feat) == 3
def test_remove_duplication_by_first(self):
config_dict = {
'model': 'BPR',
'dataset': 'remove_duplication',
'data_path': current_path,
'load_col': None,
'rm_dup_inter': 'first',
}
dataset = new_dataset(config_dict=config_dict)
assert dataset.inter_feat[dataset.time_field][0] == 0
def test_remove_duplication_by_last(self):
config_dict = {
'model': 'BPR',
'dataset': 'remove_duplication',
'data_path': current_path,
'load_col': None,
'rm_dup_inter': 'last',
}
dataset = new_dataset(config_dict=config_dict)
assert dataset.inter_feat[dataset.time_field][0] == 2
def test_filter_by_field_value_with_lowest_val(self):
config_dict = {
'model': 'BPR',
'dataset': 'filter_by_field_value',
'data_path': current_path,
'load_col': None,
'lowest_val': {
'timestamp': 4,
},
}
dataset = new_dataset(config_dict=config_dict)
assert len(dataset.inter_feat) == 6
def test_filter_by_field_value_with_highest_val(self):
config_dict = {
'model': 'BPR',
'dataset': 'filter_by_field_value',
'data_path': current_path,
'load_col': None,
'highest_val': {
'timestamp': 4,
},
}
dataset = new_dataset(config_dict=config_dict)
assert len(dataset.inter_feat) == 5
def test_filter_by_field_value_with_equal_val(self):
config_dict = {
'model': 'BPR',
'dataset': 'filter_by_field_value',
'data_path': current_path,
'load_col': None,
'equal_val': {
'rating': 0,
},
}
dataset = new_dataset(config_dict=config_dict)
assert len(dataset.inter_feat) == 3
def test_filter_by_field_value_with_not_equal_val(self):
config_dict = {
'model': 'BPR',
'dataset': 'filter_by_field_value',
'data_path': current_path,
'load_col': None,
'not_equal_val': {
'rating': 4,
},
}
dataset = new_dataset(config_dict=config_dict)
assert len(dataset.inter_feat) == 9
def test_filter_by_field_value_in_same_field(self):
config_dict = {
'model': 'BPR',
'dataset': 'filter_by_field_value',
'data_path': current_path,
'load_col': None,
'lowest_val': {
'timestamp': 3,
},
'highest_val': {
'timestamp': 8,
},
}
dataset = new_dataset(config_dict=config_dict)
assert len(dataset.inter_feat) == 6
def test_filter_by_field_value_in_different_field(self):
config_dict = {
'model': 'BPR',
'dataset': 'filter_by_field_value',
'data_path': current_path,
'load_col': None,
'lowest_val': {
'timestamp': 3,
},
'highest_val': {
'timestamp': 8,
},
'not_equal_val': {
'rating': 4,
}
}
dataset = new_dataset(config_dict=config_dict)
assert len(dataset.inter_feat) == 5
def test_filter_inter_by_user_or_item_is_true(self):
config_dict = {
'model': 'BPR',
'dataset': 'filter_inter_by_user_or_item',
'data_path': current_path,
'load_col': None,
'filter_inter_by_user_or_item': True,
}
dataset = new_dataset(config_dict=config_dict)
assert len(dataset.inter_feat) == 1
def test_filter_inter_by_user_or_item_is_false(self):
config_dict = {
'model': 'BPR',
'dataset': 'filter_inter_by_user_or_item',
'data_path': current_path,
'load_col': None,
'filter_inter_by_user_or_item': False,
}
dataset = new_dataset(config_dict=config_dict)
assert len(dataset.inter_feat) == 2
def test_filter_by_inter_num_in_min_user_inter_num(self):
config_dict = {
'model': 'BPR',
'dataset': 'filter_by_inter_num',
'data_path': current_path,
'load_col': None,
'min_user_inter_num': 2,
}
dataset = new_dataset(config_dict=config_dict)
assert dataset.user_num == 6
assert dataset.item_num == 7
def test_filter_by_inter_num_in_min_item_inter_num(self):
config_dict = {
'model': 'BPR',
'dataset': 'filter_by_inter_num',
'data_path': current_path,
'load_col': None,
'min_item_inter_num': 2,
}
dataset = new_dataset(config_dict=config_dict)
assert dataset.user_num == 7
assert dataset.item_num == 6
def test_filter_by_inter_num_in_max_user_inter_num(self):
config_dict = {
'model': 'BPR',
'dataset': 'filter_by_inter_num',
'data_path': current_path,
'load_col': None,
'max_user_inter_num': 2,
}
dataset = new_dataset(config_dict=config_dict)
assert dataset.user_num == 6
assert dataset.item_num == 7
def test_filter_by_inter_num_in_max_item_inter_num(self):
config_dict = {
'model': 'BPR',
'dataset': 'filter_by_inter_num',
'data_path': current_path,
'load_col': None,
'max_item_inter_num': 2,
}
dataset = new_dataset(config_dict=config_dict)
assert dataset.user_num == 5
assert dataset.item_num == 5
def test_filter_by_inter_num_in_min_inter_num(self):
config_dict = {
'model': 'BPR',
'dataset': 'filter_by_inter_num',
'data_path': current_path,
'load_col': None,
'min_user_inter_num': 2,
'min_item_inter_num': 2,
}
dataset = new_dataset(config_dict=config_dict)
assert dataset.user_num == 5
assert dataset.item_num == 5
def test_filter_by_inter_num_in_complex_way(self):
config_dict = {
'model': 'BPR',
'dataset': 'filter_by_inter_num',
'data_path': current_path,
'load_col': None,
'max_user_inter_num': 3,
'min_user_inter_num': 2,
'min_item_inter_num': 2,
}
dataset = new_dataset(config_dict=config_dict)
assert dataset.user_num == 3
assert dataset.item_num == 3
def test_rm_dup_by_first_and_filter_value(self):
config_dict = {
'model': 'BPR',
'dataset': 'rm_dup_and_filter_value',
'data_path': current_path,
'load_col': None,
'rm_dup_inter': 'first',
'highest_val': {
'rating': 4,
},
}
dataset = new_dataset(config_dict=config_dict)
assert len(dataset.inter_feat) == 1
def test_rm_dup_by_last_and_filter_value(self):
config_dict = {
'model': 'BPR',
'dataset': 'rm_dup_and_filter_value',
'data_path': current_path,
'load_col': None,
'rm_dup_inter': 'last',
'highest_val': {
'rating': 4,
},
}
dataset = new_dataset(config_dict=config_dict)
assert len(dataset.inter_feat) == 2
def test_rm_dup_and_filter_by_inter_num(self):
config_dict = {
'model': 'BPR',
'dataset': 'rm_dup_and_filter_by_inter_num',
'data_path': current_path,
'load_col': None,
'rm_dup_inter': 'first',
'min_user_inter_num': 2,
'min_item_inter_num': 2,
}
dataset = new_dataset(config_dict=config_dict)
assert len(dataset.inter_feat) == 4
assert dataset.user_num == 3
assert dataset.item_num == 3
def test_filter_value_and_filter_inter_by_ui(self):
config_dict = {
'model': 'BPR',
'dataset': 'filter_value_and_filter_inter_by_ui',
'data_path': current_path,
'load_col': None,
'highest_val': {
'age': 2,
},
'not_equal_val': {
'price': 2,
},
'filter_inter_by_user_or_item': True,
}
dataset = new_dataset(config_dict=config_dict)
assert len(dataset.inter_feat) == 2
assert dataset.user_num == 3
assert dataset.item_num == 3
def test_filter_value_and_inter_num(self):
config_dict = {
'model': 'BPR',
'dataset': 'filter_value_and_inter_num',
'data_path': current_path,
'load_col': None,
'highest_val': {
'rating': 0,
'age': 0,
'price': 0,
},
'min_user_inter_num': 2,
'min_item_inter_num': 2,
}
dataset = new_dataset(config_dict=config_dict)
assert len(dataset.inter_feat) == 4
assert dataset.user_num == 3
assert dataset.item_num == 3
def test_filter_inter_by_ui_and_inter_num(self):
config_dict = {
'model': 'BPR',
'dataset': 'filter_inter_by_ui_and_inter_num',
'data_path': current_path,
'load_col': None,
'filter_inter_by_user_or_item': True,
'min_user_inter_num': 2,
'min_item_inter_num': 2,
}
dataset = new_dataset(config_dict=config_dict)
assert len(dataset.inter_feat) == 4
assert dataset.user_num == 3
assert dataset.item_num == 3
def test_remap_id(self):
config_dict = {
'model': 'BPR',
'dataset': 'remap_id',
'data_path': current_path,
'load_col': None,
'fields_in_same_space': None,
}
dataset = new_dataset(config_dict=config_dict)
user_list = dataset.token2id('user_id', ['ua', 'ub', 'uc', 'ud'])
item_list = dataset.token2id('item_id', ['ia', 'ib', 'ic', 'id'])
assert (user_list == [1, 2, 3, 4]).all()
assert (item_list == [1, 2, 3, 4]).all()
assert (dataset.inter_feat['user_id'] == [1, 2, 3, 4]).all()
assert (dataset.inter_feat['item_id'] == [1, 2, 3, 4]).all()
assert (dataset.inter_feat['add_user'] == [1, 2, 3, 4]).all()
assert (dataset.inter_feat['add_item'] == [1, 2, 3, 4]).all()
assert (dataset.inter_feat['user_list'][0] == [1, 2]).all()
assert (dataset.inter_feat['user_list'][1] == []).all()
assert (dataset.inter_feat['user_list'][2] == [3, 4, 1]).all()
assert (dataset.inter_feat['user_list'][3] == [5]).all()
def test_remap_id_with_fields_in_same_space(self):
config_dict = {
'model': 'BPR',
'dataset': 'remap_id',
'data_path': current_path,
'load_col': None,
'fields_in_same_space': [
['user_id', 'add_user', 'user_list'],
['item_id', 'add_item'],
],
}
dataset = new_dataset(config_dict=config_dict)
user_list = dataset.token2id('user_id', ['ua', 'ub', 'uc', 'ud', 'ue', 'uf'])
item_list = dataset.token2id('item_id', ['ia', 'ib', 'ic', 'id', 'ie', 'if'])
assert (user_list == [1, 2, 3, 4, 5, 6]).all()
assert (item_list == [1, 2, 3, 4, 5, 6]).all()
assert (dataset.inter_feat['user_id'] == [1, 2, 3, 4]).all()
assert (dataset.inter_feat['item_id'] == [1, 2, 3, 4]).all()
assert (dataset.inter_feat['add_user'] == [2, 5, 4, 6]).all()
assert (dataset.inter_feat['add_item'] == [5, 3, 6, 1]).all()
assert (dataset.inter_feat['user_list'][0] == [3, 5]).all()
assert (dataset.inter_feat['user_list'][1] == []).all()
assert (dataset.inter_feat['user_list'][2] == [1, 2, 3]).all()
assert (dataset.inter_feat['user_list'][3] == [6]).all()
def test_ui_feat_preparation_and_fill_nan(self):
config_dict = {
'model': 'BPR',
'dataset': 'ui_feat_preparation_and_fill_nan',
'data_path': current_path,
'load_col': None,
'filter_inter_by_user_or_item': False,
'normalize_field': None,
'normalize_all': None,
}
dataset = new_dataset(config_dict=config_dict)
user_token_list = dataset.id2token('user_id', dataset.user_feat['user_id'])
item_token_list = dataset.id2token('item_id', dataset.item_feat['item_id'])
assert (user_token_list == ['[PAD]', 'ua', 'ub', 'uc', 'ud', 'ue']).all()
assert (item_token_list == ['[PAD]', 'ia', 'ib', 'ic', 'id', 'ie']).all()
assert dataset.inter_feat['rating'][3] == 1.0
assert dataset.user_feat['age'][4] == 1.5
assert dataset.item_feat['price'][4] == 1.5
assert (dataset.inter_feat['time_list'][0] == [1., 2., 3.]).all()
assert (dataset.inter_feat['time_list'][1] == [2.]).all()
assert (dataset.inter_feat['time_list'][2] == []).all()
assert (dataset.inter_feat['time_list'][3] == [5, 4]).all()
assert (dataset.user_feat['profile'][0] == []).all()
assert (dataset.user_feat['profile'][1] == [1, 2, 3]).all()
assert (dataset.user_feat['profile'][2] == []).all()
assert (dataset.user_feat['profile'][3] == [3]).all()
assert (dataset.user_feat['profile'][4] == []).all()
assert (dataset.user_feat['profile'][5] == [3, 2]).all()
def test_set_label_by_threshold(self):
config_dict = {
'model': 'BPR',
'dataset': 'set_label_by_threshold',
'data_path': current_path,
'load_col': None,
'threshold': {
'rating': 4,
},
'normalize_field': None,
'normalize_all': None,
}
dataset = new_dataset(config_dict=config_dict)
assert (dataset.inter_feat['label'] == [1., 0., 1., 0.]).all()
def test_normalize_all(self):
config_dict = {
'model': 'BPR',
'dataset': 'normalize',
'data_path': current_path,
'load_col': None,
'normalize_all': True,
}
dataset = new_dataset(config_dict=config_dict)
assert (dataset.inter_feat['rating'] == [0., .25, 1., .75, .5]).all()
assert (dataset.inter_feat['star'] == [1., .5, 0., .25, 0.75]).all()
def test_normalize_field(self):
config_dict = {
'model': 'BPR',
'dataset': 'normalize',
'data_path': current_path,
'load_col': None,
'normalize_field': ['rating'],
'normalize_all': False,
}
dataset = new_dataset(config_dict=config_dict)
assert (dataset.inter_feat['rating'] == [0., .25, 1., .75, .5]).all()
assert (dataset.inter_feat['star'] == [4., 2., 0., 1., 3.]).all()
def test_TO_RS_811(self):
config_dict = {
'model': 'BPR',
'dataset': 'build_dataset',
'data_path': current_path,
'load_col': None,
'eval_setting': 'TO_RS',
'split_ratio': [0.8, 0.1, 0.1],
}
train_dataset, valid_dataset, test_dataset = split_dataset(config_dict=config_dict)
assert (train_dataset.inter_feat['item_id'].numpy() == list(range(1, 17)) + [1] + [1] + [1] + [1, 2, 3] +
list(range(1, 8)) + list(range(1, 9)) + list(range(1, 10))).all()
assert (valid_dataset.inter_feat['item_id'].numpy() == list(range(17, 19)) + [] + [] + [2] + [4] +
[8] + [9] + [10]).all()
assert (test_dataset.inter_feat['item_id'].numpy() == list(range(19, 21)) + [] + [2] + [3] + [5] +
[9] + [10] + [11]).all()
def test_TO_RS_820(self):
config_dict = {
'model': 'BPR',
'dataset': 'build_dataset',
'data_path': current_path,
'load_col': None,
'eval_setting': 'TO_RS',
'split_ratio': [0.8, 0.2, 0.0],
}
train_dataset, valid_dataset, test_dataset = split_dataset(config_dict=config_dict)
assert (train_dataset.inter_feat['item_id'].numpy() == list(range(1, 17)) + [1] + [1] + [1, 2] + [1, 2, 3, 4] +
list(range(1, 9)) + list(range(1, 9)) + list(range(1, 10))).all()
assert (valid_dataset.inter_feat['item_id'].numpy() == list(range(17, 21)) + [] + [2] + [3] + [5] +
[9] + [9, 10] + [10, 11]).all()
assert len(test_dataset.inter_feat) == 0
def test_TO_RS_802(self):
config_dict = {
'model': 'BPR',
'dataset': 'build_dataset',
'data_path': current_path,
'load_col': None,
'eval_setting': 'TO_RS',
'split_ratio': [0.8, 0.0, 0.2],
}
train_dataset, valid_dataset, test_dataset = split_dataset(config_dict=config_dict)
assert (train_dataset.inter_feat['item_id'].numpy() == list(range(1, 17)) + [1] + [1] + [1, 2] + [1, 2, 3, 4] +
list(range(1, 9)) + list(range(1, 9)) + list(range(1, 10))).all()
assert len(valid_dataset.inter_feat) == 0
assert (test_dataset.inter_feat['item_id'].numpy() == list(range(17, 21)) + [] + [2] + [3] + [5] +
[9] + [9, 10] + [10, 11]).all()
def test_TO_LS(self):
config_dict = {
'model': 'BPR',
'dataset': 'build_dataset',
'data_path': current_path,
'load_col': None,
'eval_setting': 'TO_LS',
'leave_one_num': 2,
}
train_dataset, valid_dataset, test_dataset = split_dataset(config_dict=config_dict)
assert (train_dataset.inter_feat['item_id'].numpy() == list(range(1, 19)) + [1] + [1] + [1] + [1, 2, 3] +
list(range(1, 8)) + list(range(1, 9)) + list(range(1, 10))).all()
assert (valid_dataset.inter_feat['item_id'].numpy() == list(range(19, 20)) + [] + [] + [2] + [4] +
[8] + [9] + [10]).all()
assert (test_dataset.inter_feat['item_id'].numpy() == list(range(20, 21)) + [] + [2] + [3] + [5] +
[9] + [10] + [11]).all()
def test_RO_RS_811(self):
config_dict = {
'model': 'BPR',
'dataset': 'build_dataset',
'data_path': current_path,
'load_col': None,
'eval_setting': 'RO_RS',
'split_ratio': [0.8, 0.1, 0.1],
}
train_dataset, valid_dataset, test_dataset = split_dataset(config_dict=config_dict)
assert len(train_dataset.inter_feat) == 16 + 1 + 1 + 1 + 3 + 7 + 8 + 9
assert len(valid_dataset.inter_feat) == 2 + 0 + 0 + 1 + 1 + 1 + 1 + 1
assert len(test_dataset.inter_feat) == 2 + 0 + 1 + 1 + 1 + 1 + 1 + 1
def test_RO_RS_820(self):
config_dict = {
'model': 'BPR',
'dataset': 'build_dataset',
'data_path': current_path,
'load_col': None,
'eval_setting': 'RO_RS',
'split_ratio': [0.8, 0.2, 0.0],
}
train_dataset, valid_dataset, test_dataset = split_dataset(config_dict=config_dict)
assert len(train_dataset.inter_feat) == 16 + 1 + 1 + 2 + 4 + 8 + 8 + 9
assert len(valid_dataset.inter_feat) == 4 + 0 + 1 + 1 + 1 + 1 + 2 + 2
assert len(test_dataset.inter_feat) == 0
def test_RO_RS_802(self):
config_dict = {
'model': 'BPR',
'dataset': 'build_dataset',
'data_path': current_path,
'load_col': None,
'eval_setting': 'RO_RS',
'split_ratio': [0.8, 0.0, 0.2],
}
train_dataset, valid_dataset, test_dataset = split_dataset(config_dict=config_dict)
assert len(train_dataset.inter_feat) == 16 + 1 + 1 + 2 + 4 + 8 + 8 + 9
assert len(valid_dataset.inter_feat) == 0
assert len(test_dataset.inter_feat) == 4 + 0 + 1 + 1 + 1 + 1 + 2 + 2
class TestSeqDataset:
def test_seq_leave_one_out(self):
config_dict = {
'model': 'GRU4Rec',
'dataset': 'seq_dataset',
'data_path': current_path,
'load_col': None,
'training_neg_sample_num': 0
}
train_dataset, valid_dataset, test_dataset = split_dataset(config_dict=config_dict)
assert (train_dataset.uid_list == [1, 1, 1, 1, 1, 2, 2, 3, 4]).all()
assert (train_dataset.item_list_index == [slice(0, 1), slice(0, 2), slice(0, 3), slice(0, 4), slice(0, 5),
slice(8, 9), slice(8, 10), slice(13, 14), slice(16, 17)]).all()
assert (train_dataset.target_index == [1, 2, 3, 4, 5, 9, 10, 14, 17]).all()
assert (train_dataset.item_list_length == [1, 2, 3, 4, 5, 1, 2, 1, 1]).all()
assert (valid_dataset.uid_list == [1, 2]).all()
assert (valid_dataset.item_list_index == [slice(0, 6), slice(8, 11)]).all()
assert (valid_dataset.target_index == [6, 11]).all()
assert (valid_dataset.item_list_length == [6, 3]).all()
assert (test_dataset.uid_list == [1, 2, 3]).all()
assert (test_dataset.item_list_index == [slice(0, 7), slice(8, 12), slice(13, 15)]).all()
assert (test_dataset.target_index == [7, 12, 15]).all()
assert (test_dataset.item_list_length == [7, 4, 2]).all()
assert (train_dataset.inter_matrix().toarray() == [
[0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 1., 1., 1., 1., 1., 0., 0.],
[0., 0., 0., 0., 1., 1., 1., 0., 0.],
[0., 0., 0., 0., 1., 1., 0., 0., 0.],
[0., 0., 0., 1., 1., 0., 0., 0., 0.],
]).all()
assert (valid_dataset.inter_matrix().toarray() == [
[0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 1., 1., 1., 1., 1., 1., 0.],
[0., 0., 0., 0., 1., 1., 1., 1., 0.],
[0., 0., 0., 0., 1., 1., 0., 0., 0.],
[0., 0., 0., 1., 1., 0., 0., 0., 0.],
]).all()
assert (test_dataset.inter_matrix().toarray() == [
[0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 1., 1., 1., 1., 1., 1., 1.],
[0., 0., 0., 0., 1., 1., 1., 1., 1.],
[0., 0., 0., 0., 1., 1., 1., 0., 0.],
[0., 0., 0., 1., 1., 0., 0., 0., 0.],
]).all()
if __name__ == "__main__":
pytest.main()
| 38.468254 | 119 | 0.531752 | 3,042 | 24,235 | 3.911571 | 0.060487 | 0.098328 | 0.086058 | 0.014791 | 0.875452 | 0.848475 | 0.809312 | 0.771409 | 0.730145 | 0.706614 | 0 | 0.042524 | 0.312028 | 24,235 | 629 | 120 | 38.529412 | 0.671145 | 0.007551 | 0 | 0.610915 | 0 | 0 | 0.148026 | 0.024581 | 0 | 0 | 0 | 0 | 0.202465 | 1 | 0.068662 | false | 0 | 0.010563 | 0 | 0.086268 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
35c6c0ec4c5b53de540191e7a38188c03c417ccb | 24 | py | Python | modules/histogram/__init__.py | plusterm/plusterm | 45e9382accdaae7d51c65cab77e571bc6d264936 | [
"MIT"
] | 2 | 2018-01-10T16:20:45.000Z | 2018-01-16T12:04:13.000Z | modules/histogram/__init__.py | plusterm/plusterm | 45e9382accdaae7d51c65cab77e571bc6d264936 | [
"MIT"
] | 14 | 2018-01-10T12:56:43.000Z | 2018-05-11T16:28:31.000Z | modules/histogram/__init__.py | plusterm/plusterm | 45e9382accdaae7d51c65cab77e571bc6d264936 | [
"MIT"
] | null | null | null | from .histogram import * | 24 | 24 | 0.791667 | 3 | 24 | 6.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.125 | 24 | 1 | 24 | 24 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
35d46ec7ed1b661cf4768b1eb1a458e5e4f47f29 | 65 | py | Python | src/language/parser/__init__.py | ArielTriana/battle-sim | 75205bbff62024d28b42fd25ce268440ecc6f009 | [
"MIT"
] | 2 | 2021-11-23T15:47:07.000Z | 2022-03-03T01:38:19.000Z | src/language/parser/__init__.py | ArielTriana/battle-sim | 75205bbff62024d28b42fd25ce268440ecc6f009 | [
"MIT"
] | 11 | 2021-11-05T15:47:39.000Z | 2022-02-07T05:05:11.000Z | src/language/parser/__init__.py | ArielTriana/battle-sim | 75205bbff62024d28b42fd25ce268440ecc6f009 | [
"MIT"
] | 1 | 2021-12-07T00:00:48.000Z | 2021-12-07T00:00:48.000Z | from .build_parser import build_parser
from .parser import Parser | 32.5 | 38 | 0.861538 | 10 | 65 | 5.4 | 0.4 | 0.407407 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.107692 | 65 | 2 | 39 | 32.5 | 0.931034 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
35dcc0d4deceb389dbb8eb31943319e5b035cbe7 | 5,574 | py | Python | lib/cogs/reactions.py | doobdev/doob | 0d444e613621f5cf8e6617c859dc183149d77a3b | [
"MIT"
] | null | null | null | lib/cogs/reactions.py | doobdev/doob | 0d444e613621f5cf8e6617c859dc183149d77a3b | [
"MIT"
] | 53 | 2020-06-05T02:26:21.000Z | 2020-11-03T02:05:59.000Z | lib/cogs/reactions.py | doobdev/doob | 0d444e613621f5cf8e6617c859dc183149d77a3b | [
"MIT"
] | 6 | 2020-08-15T18:55:23.000Z | 2020-10-05T04:12:29.000Z | from datetime import datetime
from discord import Embed
from discord.ext.commands import Cog
from ..db import db # pylint: disable=relative-beyond-top-level
class Reactions(Cog):
def __init__(self, bot):
self.bot = bot
@Cog.listener()
async def on_raw_reaction_add(self, payload):
if payload.emoji.name != "⭐":
return
guild = self.bot.get_guild(payload.guild_id)
starboardchannel = await self.bot.fetch_channel(
db.field("SELECT StarBoardChannel from guilds WHERE GuildID = ?", guild.id)
)
message = await self.bot.get_channel(payload.channel_id).fetch_message(
payload.message_id
)
if not message.author.bot and payload.member.id != message.author.id:
msg_id, stars = (
db.record(
"SELECT StarMessageID, Stars from starboard WHERE (GuildID, MessageID) = (?, ?)",
guild.id,
message.id,
)
or (None, 0)
)
embed = Embed(
title=f"⭐ x{stars+1}",
colour=message.author.colour,
timestamp=datetime.utcnow(),
)
embed.set_thumbnail(url=message.author.avatar_url)
embed.set_footer(text=f"⭐ x{stars+1}")
fields = [
("Author", message.author.mention, False),
("Content", message.content or "Image", False),
(
"Jump To Link",
f"[Jump](https://discord.com/channels/{message.guild.id}/{message.channel.id}/{message.id})",
False,
),
]
for name, value, inline in fields:
embed.add_field(name=name, value=value, inline=inline)
if len(message.attachments):
embed.set_image(url=message.attachments[0].url)
if not stars:
star_message = await starboardchannel.send(embed=embed)
db.execute(
"INSERT INTO starboard (MessageID, StarMessageID, GuildID) VALUES (?, ?, ?)",
message.id,
star_message.id,
message.guild.id,
)
else:
star_message = await starboardchannel.fetch_message(msg_id)
await star_message.edit(embed=embed)
db.execute(
"UPDATE starboard SET Stars = Stars + 1 WHERE (GuildID, MessageID) = (?, ?)",
message.guild.id,
message.id,
)
db.commit()
else:
await message.remove_reaction(payload.emoji, payload.member)
# async def on_starboard_remove(self, payload):
# if payload.emoji.name == "⭐":
# guild = self.bot.get_guild(payload.guild_id)
# starboardchannel = await self.bot.fetch_channel(db.field("SELECT StarBoardChannel from guilds WHERE GuildID = ?", guild.id))
# message = await self.bot.get_channel(payload.channel_id).fetch_message(payload.message_id)
# if not message.author.bot and payload.member.id != message.author.id:
# msg_id, stars = db.record("SELECT StarMessageID, Stars from starboard WHERE (GuildID, MessageID) = (?, ?)", guild.id, message.id) or (None, 0)
# embed = Embed(title=f"⭐ x{stars-1}", colour=message.author.colour, timestamp=datetime.utcnow())
# embed.set_thumbnail(url=message.author.avatar_url)
# embed.set_footer(text=f"⭐ x{stars-1}")
# #pip ( gotta get that proper indentation B) )
# fields = [("Author", message.author.mention, False),
# ("Content", message.content or "Image", False),
# ("Jump To Link", f"[Jump](https://discord.com/channels/{message.guild.id}/{message.channel.id}/{message.id})", False)]
# for name, value, inline in fields:
# embed.add_field(name=name, value=value, inline=inline)
# if len(message.attachments):
# embed.set_image(url=message.attachments[0].url)
# if not stars:
# star_message = await starboardchannel.send(embed=embed)
# db.execute("INSERT INTO starboard (MessageID, StarMessageID, GuildID) VALUES (?, ?, ?)", message.id, star_message.id, message.guild.id)
# db.commit()
# else:
# star_message = await starboardchannel.fetch_message(msg_id)
# await star_message.edit(embed=embed)
# db.execute("UPDATE starboard SET Stars = Stars - 1 WHERE (GuildID, MessageID) = (?, ?)", message.guild.id, message.id)
# db.commit()
# else:
# await message.remove_reaction(payload.emoji, payload.member)
# @Cog.listener()
# async def on_reaction_remove(self, payload):
# await self.on_starboard_remove(payload)
# @Cog.listener()
# async def on_reaction_clear_emoji(self, reaction):
# await self.on_starboard_remove(payload=reaction)
# @Cog.listener()
# async def on_reaction_clear(self, reaction):
# await self.on_starboard_remove(payload=reaction)
@Cog.listener()
async def on_ready(self):
if not self.bot.ready:
self.bot.cogs_ready.ready_up("reactions")
def setup(bot):
bot.add_cog(Reactions(bot))
| 40.985294 | 160 | 0.554898 | 602 | 5,574 | 5.038206 | 0.184385 | 0.041543 | 0.036927 | 0.031322 | 0.878998 | 0.872074 | 0.851632 | 0.816353 | 0.816353 | 0.816353 | 0 | 0.002675 | 0.329207 | 5,574 | 135 | 161 | 41.288889 | 0.8069 | 0.435414 | 0 | 0.152778 | 0 | 0.013889 | 0.139086 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027778 | false | 0 | 0.055556 | 0 | 0.111111 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 |
35df8ef3ee363dd0fb961b655e0e6949bc6f8aa6 | 56 | py | Python | io_scene_xray/details/__init__.py | clayne/blender-xray | 84d5d52049ec9e22c85ba8544995bd39c3a83e55 | [
"BSD-2-Clause"
] | 93 | 2016-12-02T14:42:18.000Z | 2022-03-23T08:15:41.000Z | io_scene_xray/details/__init__.py | clayne/blender-xray | 84d5d52049ec9e22c85ba8544995bd39c3a83e55 | [
"BSD-2-Clause"
] | 276 | 2018-07-04T20:13:22.000Z | 2022-03-31T09:13:37.000Z | io_scene_xray/details/__init__.py | clayne/blender-xray | 84d5d52049ec9e22c85ba8544995bd39c3a83e55 | [
"BSD-2-Clause"
] | 31 | 2018-07-04T20:03:17.000Z | 2022-01-27T18:37:36.000Z | # addon modules
from . import ops
from . import utility
| 14 | 21 | 0.75 | 8 | 56 | 5.25 | 0.75 | 0.47619 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.196429 | 56 | 3 | 22 | 18.666667 | 0.933333 | 0.232143 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
57e28a793173d493e03072f8ce10374841e083d4 | 36 | py | Python | src/pyFPLdata/__init__.py | andrewl776/FPLdata_py | 5bb5520dc8a4754cf356f57cd73e4c467ea2d649 | [
"MIT"
] | null | null | null | src/pyFPLdata/__init__.py | andrewl776/FPLdata_py | 5bb5520dc8a4754cf356f57cd73e4c467ea2d649 | [
"MIT"
] | null | null | null | src/pyFPLdata/__init__.py | andrewl776/FPLdata_py | 5bb5520dc8a4754cf356f57cd73e4c467ea2d649 | [
"MIT"
] | null | null | null | from pyFPLdata.script import FPLdata | 36 | 36 | 0.888889 | 5 | 36 | 6.4 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.083333 | 36 | 1 | 36 | 36 | 0.969697 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 6 |
57f464dab2d970526944c51f47f3ffd89706f3ac | 8,264 | py | Python | src/waldur_ansible/python_management/tests/unittests/test_service.py | opennode/waldur-ansible | c81c5f0491be02fa9a55a6d5bf9d845750fd1ba9 | [
"MIT"
] | 1 | 2017-09-05T08:09:47.000Z | 2017-09-05T08:09:47.000Z | src/waldur_ansible/python_management/tests/unittests/test_service.py | opennode/waldur-ansible | c81c5f0491be02fa9a55a6d5bf9d845750fd1ba9 | [
"MIT"
] | null | null | null | src/waldur_ansible/python_management/tests/unittests/test_service.py | opennode/waldur-ansible | c81c5f0491be02fa9a55a6d5bf9d845750fd1ba9 | [
"MIT"
] | 3 | 2017-09-24T03:13:19.000Z | 2018-08-12T07:44:38.000Z | from django.test import TestCase
from mock import patch
from rest_framework.exceptions import APIException
from waldur_ansible.python_management import python_management_service
from waldur_ansible.python_management.tests import factories, fixtures
class PythonManagementServiceTest(TestCase):
def setUp(self):
self.fixture = fixtures.PythonManagementFixture()
def test_identifies_removed_virtual_envs(self):
virtual_env = factories.VirtualEnvironmentFactory(name='first-virt-env', python_management=self.fixture.python_management)
        factories.InstalledLibraryFactory(name='lib1', version='11', virtual_environment=virtual_env)
        persisted_virtual_envs = [virtual_env]
        transient_virtual_envs = []
        transient_libs = [self.transient_lib('lib2', '22')]
        transient_virtual_envs.append(self.transient_virtual_env('second-virt-env', transient_libs))
        _, _, removed_virtual_envs = python_management_service.PythonManagementService().identify_changed_created_removed_envs(transient_virtual_envs, persisted_virtual_envs)
        self.assertIn(virtual_env, removed_virtual_envs)

    def test_identifies_changed_virtual_envs(self):
        virtual_env = factories.VirtualEnvironmentFactory(name='first-virt-env', python_management=self.fixture.python_management)
        library = factories.InstalledLibraryFactory(name='lib1', version='11', virtual_environment=virtual_env)
        persisted_virtual_envs = [virtual_env]
        transient_virtual_envs = []
        transient_libs = [self.transient_lib('lib2', '22')]
        transient_virtual_envs.append(self.transient_virtual_env(virtual_env.name, transient_libs))
        _, changed_virtual_envs, _ = python_management_service.PythonManagementService().identify_changed_created_removed_envs(transient_virtual_envs, persisted_virtual_envs)
        expected_change = dict(
            name=virtual_env.name,
            libraries_to_install=transient_libs,
            libraries_to_remove=[self.transient_lib(library.name, library.version)],
        )
        self.assertIn(expected_change, changed_virtual_envs)

    def test_identifies_created_virtual_envs(self):
        virtual_env = factories.VirtualEnvironmentFactory(name='first-virt-env', python_management=self.fixture.python_management)
        library = factories.InstalledLibraryFactory(name='lib1', version='11', virtual_environment=virtual_env)
        persisted_virtual_envs = [virtual_env]
        transient_virtual_envs = []
        existing_virtual_env = self.transient_virtual_env(virtual_env.name, [self.transient_lib(library.name, library.version)])
        transient_virtual_envs.append(existing_virtual_env)
        new_virtual_env = self.transient_virtual_env('second-virt-env', [self.transient_lib('lib2', '22')])
        transient_virtual_envs.append(new_virtual_env)
        created_virtual_envs, _, _ = python_management_service.PythonManagementService().identify_changed_created_removed_envs(transient_virtual_envs, persisted_virtual_envs)
        self.assertIn(new_virtual_env, created_virtual_envs)

    def test_identifies_blocked_requests(self):
        sync_request = factories.PythonManagementSynchronizeRequestFactory(
            python_management=self.fixture.python_management, virtual_env_name='virtual-env')
        with patch(
                'waldur_ansible.python_management.python_management_service.locking_service.PythonManagementBackendLockingService.is_processing_allowed') as mocked_locking_service, \
                patch('waldur_ansible.python_management.python_management_service.executors.PythonManagementRequestExecutor.execute') as executor:
            mocked_locking_service.return_value = False
            python_management_service.PythonManagementService().create_or_refuse_request(sync_request)
            executor.assert_not_called()

    def returns_true(self, lock_value):
        return True

    def returns_false(self, lock_value):
        return False

    def test_removal_not_possible_to_process_if_is_processing(self):
        python_management = self.fixture.python_management
        with patch('waldur_ansible.python_management.python_management_service.locking_service.PythonManagementBackendLockingService.is_processing_allowed',
                   side_effect=self.returns_false):
            self.assertRaises(APIException, lambda: python_management_service.PythonManagementService().schedule_python_management_removal(python_management))

    def test_removal_possible_when_not_processing(self):
        python_management = self.fixture.python_management
        with patch('waldur_ansible.python_management.python_management_service.executors.PythonManagementRequestExecutor.execute') as execute:
            python_management_service.PythonManagementService().schedule_python_management_removal(python_management)
            execute.assert_called_once()

    def test_virtual_env_search_not_possible_to_process_if_is_processing(self):
        python_management = self.fixture.python_management
        with patch('waldur_ansible.python_management.python_management_service.locking_service.PythonManagementBackendLockingService.is_processing_allowed',
                   side_effect=self.returns_false):
            self.assertRaises(APIException, lambda: python_management_service.PythonManagementService().schedule_virtual_environments_search(python_management))

    def test_virtual_env_search_possible_when_not_processing(self):
        python_management = self.fixture.python_management
        with patch('waldur_ansible.python_management.python_management_service.executors.PythonManagementRequestExecutor.execute') as execute:
            python_management_service.PythonManagementService().schedule_virtual_environments_search(python_management)
            execute.assert_called_once()

    def test_installed_libs_search_not_possible_to_process_if_is_processing(self):
        python_management = self.fixture.python_management
        virtual_env_name = 'oh-my-env'
        with patch('waldur_ansible.python_management.python_management_service.locking_service.PythonManagementBackendLockingService.is_processing_allowed',
                   side_effect=self.returns_false):
            self.assertRaises(APIException, lambda: python_management_service.PythonManagementService().schedule_installed_libraries_search(python_management, virtual_env_name))

    def test_installed_libs_search_possible_when_not_processing(self):
        python_management = self.fixture.python_management
        virtual_env_name = 'oh-my-env'
        with patch('waldur_ansible.python_management.python_management_service.executors.PythonManagementRequestExecutor.execute') as execute:
            python_management_service.PythonManagementService().schedule_installed_libraries_search(python_management, virtual_env_name)
            execute.assert_called_once()

    def test_update_not_possible_to_process_if_is_processing(self):
        python_management = self.fixture.python_management
        with patch('waldur_ansible.python_management.python_management_service.cache_utils.is_syncing', side_effect=self.returns_true):
            self.assertRaises(APIException, lambda: python_management_service.PythonManagementService().schedule_virtual_environments_update([], python_management))

    def test_update_possible_when_not_processing(self):
        python_management = self.fixture.python_management
        with patch('waldur_ansible.python_management.python_management_service.executors.PythonManagementRequestExecutor.execute'), \
                patch('waldur_ansible.python_management.python_management_service.PythonManagementService.create_or_refuse_requests') as create_or_refuse_requests:
            create_or_refuse_requests.return_value = []
            python_management_service.PythonManagementService().schedule_virtual_environments_update([], python_management)
            create_or_refuse_requests.assert_called_once()

    def transient_lib(self, name, version):
        return dict(
            name=name,
            version=version,
        )

    def transient_virtual_env(self, name, libs):
        return dict(
            name=name,
            installed_libraries=libs,
        )
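The tests above exercise a three-way diff between the desired (transient) and persisted virtual environments, returning them in the order created, changed, removed. The service implementation itself is not part of this excerpt; the following is a minimal standalone sketch of that diff logic, assuming both sides are plain dicts of the shape built by `transient_lib` / `transient_virtual_env` (in the real tests the persisted side is Django model objects). The function name `diff_envs` is a hypothetical stand-in for `identify_changed_created_removed_envs`.

```python
def diff_envs(transient_envs, persisted_envs):
    """Split envs into (created, changed, removed) relative to persisted state.

    Each env is a dict: {'name': str,
                         'installed_libraries': [{'name': str, 'version': str}, ...]}.
    """
    persisted_by_name = {env['name']: env for env in persisted_envs}
    transient_by_name = {env['name']: env for env in transient_envs}

    # Envs requested but not yet persisted must be created; persisted envs
    # no longer requested must be removed.
    created = [env for name, env in transient_by_name.items() if name not in persisted_by_name]
    removed = [env for name, env in persisted_by_name.items() if name not in transient_by_name]

    # For envs present on both sides, diff the library lists.
    changed = []
    for name in transient_by_name.keys() & persisted_by_name.keys():
        wanted = transient_by_name[name]['installed_libraries']
        present = persisted_by_name[name]['installed_libraries']
        to_install = [lib for lib in wanted if lib not in present]
        to_remove = [lib for lib in present if lib not in wanted]
        if to_install or to_remove:
            changed.append(dict(name=name,
                                libraries_to_install=to_install,
                                libraries_to_remove=to_remove))
    return created, changed, removed
```

This mirrors the assertions above: an env that exists only in the transient list lands in `created`, one that exists only in the persisted list lands in `removed`, and a shared env with differing libraries produces a change record with `libraries_to_install` and `libraries_to_remove`.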
# src/transmittals/signals.py (Talengi/phase, MIT license)
# -*- coding: utf-8 -*-
from django.dispatch import Signal
transmittal_created = Signal(providing_args=['document', 'metadata', 'revision'])
transmittal_pdf_generated = Signal(providing_args=['document', 'metadata', 'revision'])
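For context, `providing_args` was documentation-only in Django: it never affected dispatch and was deprecated in Django 3.0 and removed in 4.0, so newer code would declare these as bare `Signal()`. The dispatch pattern behind these signals can be illustrated with a minimal dependency-free sketch (a stand-in `Signal` class rather than Django's real one, so it runs without Django; the receiver name is invented for illustration):

```python
class Signal:
    """Tiny stand-in for django.dispatch.Signal (illustration only)."""

    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        self._receivers.append(receiver)

    def send(self, sender, **kwargs):
        # Mirrors Django's return shape: a list of (receiver, response) pairs.
        return [(r, r(sender=sender, **kwargs)) for r in self._receivers]


transmittal_created = Signal()


def on_transmittal_created(sender, document, metadata, revision, **kwargs):
    # A receiver gets the sender plus the keyword arguments passed to send().
    return 'created %s rev %s' % (document, revision)


transmittal_created.connect(on_transmittal_created)
responses = transmittal_created.send(sender=None, document='doc-1', metadata={}, revision=2)
```

With Django itself, the only differences are importing `Signal` from `django.dispatch` and optionally using the `@receiver` decorator to connect handlers.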