import unittest
from datetime import datetime
import pytest
from pepys_import.core.formats import unit_registry
from pepys_import.core.formats.location import Location
from pepys_import.core.store.data_store import DataStore
from pepys_import.core.validators import constants as validation_constants
from pepys_import.file.importer import Importer


class TestStateSpeedProperty(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_state_speed_none(self):
        state = self.store.db_classes.State()
        state.speed = None
        assert state.speed is None

    def test_state_speed_scalar(self):
        state = self.store.db_classes.State()
        # Check setting with a scalar (float) gives error
        with pytest.raises(TypeError) as exception:
            state.speed = 5
        assert "Speed must be a Quantity" in str(exception.value)

    def test_state_speed_wrong_units(self):
        state = self.store.db_classes.State()
        # Check setting with a Quantity of the wrong units gives error
        with pytest.raises(ValueError) as exception:
            state.speed = 5 * unit_registry.metre
        assert "Speed must be a Quantity with a dimensionality of [length]/[time]" in str(
            exception.value
        )

    def test_state_speed_right_units(self):
        state = self.store.db_classes.State()
        # Check setting with a Quantity of the right SI units succeeds
        state.speed = 5 * (unit_registry.metre / unit_registry.second)
        # Check setting with a Quantity of strange but valid units succeeds
        state.speed = 5 * (unit_registry.angstrom / unit_registry.day)

    def test_state_speed_roundtrip(self):
        state = self.store.db_classes.State()
        # Check setting and retrieving field works, and gives units as a result
        state.speed = 10 * (unit_registry.metre / unit_registry.second)
        assert state.speed == 10 * (unit_registry.metre / unit_registry.second)
        assert state.speed.check("[length]/[time]")

    def test_state_speed_class_attribute(self):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        assert hasattr(self.store.db_classes.State.speed, "expression")


class TestStateHeadingProperty(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_state_heading_none(self):
        state = self.store.db_classes.State()
        state.heading = None
        assert state.heading is None

    def test_state_heading_scalar(self):
        state = self.store.db_classes.State()
        # Check setting with a scalar (float) gives error
        with pytest.raises(TypeError) as exception:
            state.heading = 5
        assert "Heading must be a Quantity" in str(exception.value)

    def test_state_heading_wrong_units(self):
        state = self.store.db_classes.State()
        # Check setting with a Quantity of the wrong units gives error
        with pytest.raises(ValueError) as exception:
            state.heading = 5 * unit_registry.second
        assert "Heading must be a Quantity with a dimensionality of ''" in str(exception.value)

    def test_state_heading_wrong_units_dimensionless(self):
        state = self.store.db_classes.State()
        # Check setting with a dimensionless Quantity (no angular units) gives error
        with pytest.raises(ValueError) as exception:
            state.heading = unit_registry.Quantity(5)
        assert "Heading must be a Quantity with angular units (degree or radian)" in str(
            exception.value
        )

    def test_state_heading_right_units(self):
        state = self.store.db_classes.State()
        # Check setting with a Quantity of the right SI units succeeds
        state.heading = 57 * unit_registry.degree
        # Check setting with a Quantity of strange but valid units succeeds
        state.heading = 0.784 * unit_registry.radian

    def test_state_heading_roundtrip(self):
        state = self.store.db_classes.State()
        # Check setting and retrieving field works, and gives units as a result
        state.heading = 157 * unit_registry.degree
        assert state.heading == 157 * unit_registry.degree
        assert state.heading.check("")

    def test_state_heading_class_attribute(self):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        assert hasattr(self.store.db_classes.State.heading, "expression")


class TestStateCourseProperty(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_state_course_none(self):
        state = self.store.db_classes.State()
        state.course = None
        assert state.course is None

    def test_state_course_scalar(self):
        state = self.store.db_classes.State()
        # Check setting with a scalar (float) gives error
        with pytest.raises(TypeError) as exception:
            state.course = 5
        assert "Course must be a Quantity" in str(exception.value)

    def test_state_course_wrong_units(self):
        state = self.store.db_classes.State()
        # Check setting with a Quantity of the wrong units gives error
        with pytest.raises(ValueError) as exception:
            state.course = 5 * unit_registry.second
        assert "Course must be a Quantity with a dimensionality of ''" in str(exception.value)

    def test_state_course_wrong_units_dimensionless(self):
        state = self.store.db_classes.State()
        # Check setting with a dimensionless Quantity (no angular units) gives error
        with pytest.raises(ValueError) as exception:
            state.course = unit_registry.Quantity(5)
        assert "Course must be a Quantity with angular units (degree or radian)" in str(
            exception.value
        )

    def test_state_course_right_units(self):
        state = self.store.db_classes.State()
        # Check setting with a Quantity of the right SI units succeeds
        state.course = 57 * unit_registry.degree
        # Check setting with a Quantity of strange but valid units succeeds
        state.course = 0.784 * unit_registry.radian

    def test_state_course_roundtrip(self):
        state = self.store.db_classes.State()
        # Check setting and retrieving field works, and gives units as a result
        state.course = 157 * unit_registry.degree
        assert state.course == 157 * unit_registry.degree
        assert state.course.check("")

    def test_state_course_class_attribute(self):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        assert hasattr(self.store.db_classes.State.course, "expression")


class TestContactBearingProperty(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_contact_bearing_none(self):
        contact = self.store.db_classes.Contact()
        contact.bearing = None
        assert contact.bearing is None

    def test_contact_bearing_scalar(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a scalar (float) gives error
        with pytest.raises(TypeError) as exception:
            contact.bearing = 5
        assert "Bearing must be a Quantity" in str(exception.value)

    def test_contact_bearing_wrong_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the wrong units gives error
        with pytest.raises(ValueError) as exception:
            contact.bearing = 5 * unit_registry.second
        assert "Bearing must be a Quantity with a dimensionality of ''" in str(exception.value)

    def test_contact_bearing_wrong_units_dimensionless(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a dimensionless Quantity (no angular units) gives error
        with pytest.raises(ValueError) as exception:
            contact.bearing = unit_registry.Quantity(5)
        assert "Bearing must be a Quantity with angular units (degree or radian)" in str(
            exception.value
        )

    def test_contact_bearing_right_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the right SI units succeeds
        contact.bearing = 57 * unit_registry.degree
        # Check setting with a Quantity of strange but valid units succeeds
        contact.bearing = 0.784 * unit_registry.radian

    def test_contact_bearing_roundtrip(self):
        contact = self.store.db_classes.Contact()
        # Check setting and retrieving field works, and gives units as a result
        contact.bearing = 157 * unit_registry.degree
        assert contact.bearing == 157 * unit_registry.degree
        assert contact.bearing.check("")

    def test_contact_bearing_class_attribute(self):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        assert hasattr(self.store.db_classes.Contact.bearing, "expression")


class TestContactRelBearingProperty(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_contact_rel_bearing_none(self):
        contact = self.store.db_classes.Contact()
        contact.rel_bearing = None
        assert contact.rel_bearing is None

    def test_contact_rel_bearing_scalar(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a scalar (float) gives error
        with pytest.raises(TypeError) as exception:
            contact.rel_bearing = 5
        assert "Relative Bearing must be a Quantity" in str(exception.value)

    def test_contact_rel_bearing_wrong_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the wrong units gives error
        with pytest.raises(ValueError) as exception:
            contact.rel_bearing = 5 * unit_registry.second
        assert "Relative Bearing must be a Quantity with a dimensionality of ''" in str(
            exception.value
        )

    def test_contact_rel_bearing_wrong_units_dimensionless(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a dimensionless Quantity (no angular units) gives error
        with pytest.raises(ValueError) as exception:
            contact.rel_bearing = unit_registry.Quantity(5)
        assert "Relative Bearing must be a Quantity with angular units (degree or radian)" in str(
            exception.value
        )

    def test_contact_rel_bearing_right_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the right SI units succeeds
        contact.rel_bearing = 57 * unit_registry.degree
        # Check setting with a Quantity of strange but valid units succeeds
        contact.rel_bearing = 0.784 * unit_registry.radian

    def test_contact_rel_bearing_roundtrip(self):
        contact = self.store.db_classes.Contact()
        # Check setting and retrieving field works, and gives units as a result
        contact.rel_bearing = 157 * unit_registry.degree
        assert contact.rel_bearing == 157 * unit_registry.degree
        assert contact.rel_bearing.check("")

    def test_contact_rel_bearing_class_attribute(self):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        assert hasattr(self.store.db_classes.Contact.rel_bearing, "expression")


class TestContactAmbigBearingProperty(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_contact_ambig_bearing_none(self):
        contact = self.store.db_classes.Contact()
        contact.ambig_bearing = None
        assert contact.ambig_bearing is None

    def test_contact_ambig_bearing_scalar(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a scalar (float) gives error
        with pytest.raises(TypeError) as exception:
            contact.ambig_bearing = 5
        assert "Ambig Bearing must be a Quantity" in str(exception.value)

    def test_contact_ambig_bearing_wrong_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the wrong units gives error
        with pytest.raises(ValueError) as exception:
            contact.ambig_bearing = 5 * unit_registry.second
        assert "Ambig Bearing must be a Quantity with a dimensionality of ''" in str(
            exception.value
        )

    def test_contact_ambig_bearing_wrong_units_dimensionless(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a dimensionless Quantity (no angular units) gives error
        with pytest.raises(ValueError) as exception:
            contact.ambig_bearing = unit_registry.Quantity(5)
        assert "Ambig Bearing must be a Quantity with angular units (degree or radian)" in str(
            exception.value
        )

    def test_contact_ambig_bearing_right_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the right SI units succeeds
        contact.ambig_bearing = 178 * unit_registry.degree
        # Check setting with a Quantity of strange but valid units succeeds
        contact.ambig_bearing = 0.324 * unit_registry.radian

    def test_contact_ambig_bearing_roundtrip(self):
        contact = self.store.db_classes.Contact()
        # Check setting and retrieving field works, and gives units as a result
        contact.ambig_bearing = 234 * unit_registry.degree
        assert contact.ambig_bearing == 234 * unit_registry.degree
        assert contact.ambig_bearing.check("")

    def test_contact_ambig_bearing_class_attribute(self):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        assert hasattr(self.store.db_classes.Contact.ambig_bearing, "expression")


class TestContactMLAProperty(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_contact_mla_none(self):
        contact = self.store.db_classes.Contact()
        contact.mla = None
        assert contact.mla is None

    def test_contact_mla_scalar(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a scalar (float) gives error
        with pytest.raises(TypeError) as exception:
            contact.mla = 5
        assert "MLA must be a Quantity" in str(exception.value)

    def test_contact_mla_wrong_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the wrong units gives error
        with pytest.raises(ValueError) as exception:
            contact.mla = 5 * unit_registry.second
        assert "MLA must be a Quantity with a dimensionality of ''" in str(exception.value)

    def test_contact_mla_wrong_units_dimensionless(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a dimensionless Quantity (no angular units) gives error
        with pytest.raises(ValueError) as exception:
            contact.mla = unit_registry.Quantity(5)
        assert "MLA must be a Quantity with angular units (degree or radian)" in str(
            exception.value
        )

    def test_contact_mla_right_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the right SI units succeeds
        contact.mla = 57 * unit_registry.degree
        # Check setting with a Quantity of strange but valid units succeeds
        contact.mla = 0.784 * unit_registry.radian

    def test_contact_mla_roundtrip(self):
        contact = self.store.db_classes.Contact()
        # Check setting and retrieving field works, and gives units as a result
        contact.mla = 234 * unit_registry.degree
        assert contact.mla == 234 * unit_registry.degree
        assert contact.mla.check("")

    def test_contact_mla_class_attribute(self):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        assert hasattr(self.store.db_classes.Contact.mla, "expression")


class TestContactSLAProperty(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_contact_soa_none(self):
        contact = self.store.db_classes.Contact()
        contact.soa = None
        assert contact.soa is None

    def test_contact_soa_scalar(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a scalar (float) gives error
        with pytest.raises(TypeError) as exception:
            contact.soa = 5
        assert "SOA must be a Quantity" in str(exception.value)

    def test_contact_soa_wrong_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the wrong units gives error
        with pytest.raises(ValueError) as exception:
            contact.soa = 5 * unit_registry.second
        assert "SOA must be a Quantity with a dimensionality of [length]/[time]" in str(
            exception.value
        )

    def test_contact_soa_right_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the right SI units succeeds
        contact.soa = 57 * (unit_registry.metre / unit_registry.second)
        # Check setting with a Quantity of strange but valid units succeeds
        contact.soa = 0.784 * (unit_registry.angstrom / unit_registry.day)

    def test_contact_soa_roundtrip(self):
        contact = self.store.db_classes.Contact()
        # Check setting and retrieving field works, and gives units as a result
        contact.soa = 19 * unit_registry.knot
        assert contact.soa == 19 * unit_registry.knot
        assert contact.soa.check("[length]/[time]")

    def test_contact_soa_class_attribute(self):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        assert hasattr(self.store.db_classes.Contact.soa, "expression")


class TestContactOrientationProperty(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_contact_orientation_none(self):
        contact = self.store.db_classes.Contact()
        contact.orientation = None
        assert contact.orientation is None

    def test_contact_orientation_scalar(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a scalar (float) gives error
        with pytest.raises(TypeError) as exception:
            contact.orientation = 5
        assert "Orientation must be a Quantity" in str(exception.value)

    def test_contact_orientation_wrong_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the wrong units gives error
        with pytest.raises(ValueError) as exception:
            contact.orientation = 5 * unit_registry.second
        assert "Orientation must be a Quantity with a dimensionality of ''" in str(exception.value)

    def test_contact_orientation_wrong_units_dimensionless(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a dimensionless Quantity (no angular units) gives error
        with pytest.raises(ValueError) as exception:
            contact.orientation = unit_registry.Quantity(5)
        assert "Orientation must be a Quantity with angular units (degree or radian)" in str(
            exception.value
        )

    def test_contact_orientation_right_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the right SI units succeeds
        contact.orientation = 57 * unit_registry.degree
        # Check setting with a Quantity of strange but valid units succeeds
        contact.orientation = 0.784 * unit_registry.radian

    def test_contact_orientation_roundtrip(self):
        contact = self.store.db_classes.Contact()
        # Check setting and retrieving field works, and gives units as a result
        contact.orientation = 53 * unit_registry.degree
        assert contact.orientation == 53 * unit_registry.degree
        assert contact.orientation.check("")

    def test_contact_orientation_class_attribute(self):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        assert hasattr(self.store.db_classes.Contact.orientation, "expression")


class TestContactMajorProperty(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_contact_major_none(self):
        contact = self.store.db_classes.Contact()
        contact.major = None
        assert contact.major is None

    def test_contact_major_scalar(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a scalar (float) gives error
        with pytest.raises(TypeError) as exception:
            contact.major = 5
        assert "Major must be a Quantity" in str(exception.value)

    def test_contact_major_wrong_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the wrong units gives error
        with pytest.raises(ValueError) as exception:
            contact.major = 5 * unit_registry.second
        assert "Major must be a Quantity with a dimensionality of [length]" in str(exception.value)

    def test_contact_major_right_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the right SI units succeeds
        contact.major = 57 * unit_registry.kilometre
        # Check setting with a Quantity of strange but valid units succeeds
        contact.major = 1523 * unit_registry.angstrom

    def test_contact_major_roundtrip(self):
        contact = self.store.db_classes.Contact()
        # Check setting and retrieving field works, and gives units as a result
        contact.major = 1234 * unit_registry.metre
        assert contact.major == 1234 * unit_registry.metre
        assert contact.major.check("[length]")

    def test_contact_major_class_attribute(self):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        assert hasattr(self.store.db_classes.Contact.major, "expression")


class TestContactMinorProperty(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_contact_minor_none(self):
        contact = self.store.db_classes.Contact()
        contact.minor = None
        assert contact.minor is None

    def test_contact_minor_scalar(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a scalar (float) gives error
        with pytest.raises(TypeError) as exception:
            contact.minor = 5
        assert "Minor must be a Quantity" in str(exception.value)

    def test_contact_minor_wrong_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the wrong units gives error
        with pytest.raises(ValueError) as exception:
            contact.minor = 5 * unit_registry.second
        assert "Minor must be a Quantity with a dimensionality of [length]" in str(exception.value)

    def test_contact_minor_right_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the right SI units succeeds
        contact.minor = 32 * unit_registry.kilometre
        # Check setting with a Quantity of strange but valid units succeeds
        contact.minor = 1943 * unit_registry.angstrom

    def test_contact_minor_roundtrip(self):
        contact = self.store.db_classes.Contact()
        # Check setting and retrieving field works, and gives units as a result
        contact.minor = 1023 * unit_registry.metre
        assert contact.minor == 1023 * unit_registry.metre
        assert contact.minor.check("[length]")

    def test_contact_minor_class_attribute(self):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        assert hasattr(self.store.db_classes.Contact.minor, "expression")


class TestContactRangeProperty(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_contact_range_none(self):
        contact = self.store.db_classes.Contact()
        contact.range = None
        assert contact.range is None

    def test_contact_range_scalar(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a scalar (float) gives error
        with pytest.raises(TypeError) as exception:
            contact.range = 5
        assert "Range must be a Quantity" in str(exception.value)

    def test_contact_range_wrong_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the wrong units gives error
        with pytest.raises(ValueError) as exception:
            contact.range = 5 * unit_registry.second
        assert "Range must be a Quantity with a dimensionality of [length]" in str(exception.value)

    def test_contact_range_right_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the right SI units succeeds
        contact.range = 19 * unit_registry.yard
        # Check setting with a Quantity of strange but valid units succeeds
        contact.range = 2341 * unit_registry.angstrom

    def test_contact_range_roundtrip(self):
        contact = self.store.db_classes.Contact()
        # Check setting and retrieving field works, and gives units as a result
        contact.range = 976 * unit_registry.metre
        assert contact.range == 976 * unit_registry.metre
        assert contact.range.check("[length]")

    def test_contact_range_class_attribute(self):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        assert hasattr(self.store.db_classes.Contact.range, "expression")


class TestContactFreqProperty(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_contact_freq_none(self):
        contact = self.store.db_classes.Contact()
        contact.freq = None
        assert contact.freq is None

    def test_contact_freq_scalar(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a scalar (float) gives error
        with pytest.raises(TypeError) as exception:
            contact.freq = 5
        assert "Freq must be a Quantity" in str(exception.value)

    def test_contact_freq_wrong_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the wrong units gives error
        with pytest.raises(ValueError) as exception:
            contact.freq = 5 * unit_registry.kilogram
        assert "Freq must be a Quantity with a dimensionality of [time]^-1" in str(exception.value)

    def test_contact_freq_right_units(self):
        contact = self.store.db_classes.Contact()
        # Check setting with a Quantity of the right SI units succeeds
        contact.freq = 32 * unit_registry.hertz

    def test_contact_freq_roundtrip(self):
        contact = self.store.db_classes.Contact()
        # Check setting and retrieving field works, and gives units as a result
        contact.freq = 567 * unit_registry.hertz
        assert contact.freq == 567 * unit_registry.hertz
        assert contact.freq.check("[time]^-1")

    def test_contact_freq_class_attribute(self):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        assert hasattr(self.store.db_classes.Contact.freq, "expression")
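

# NOTE: illustrative example, not part of the original suite. It sketches the
# pint behaviour the assertions above rely on: Quantity.check() compares a
# quantity's dimensionality against a dimensionality string, and angular units
# such as degree and radian are dimensionless in pint (hence .check("")).
def test_unit_registry_dimensionality_examples():
    speed = 5 * (unit_registry.metre / unit_registry.second)
    # A speed matches [length]/[time] but not plain [length]
    assert speed.check("[length]/[time]")
    assert not speed.check("[length]")
    # Angles are dimensionless, so they match the empty dimensionality string
    heading = 57 * unit_registry.degree
    assert heading.check("")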


CLASSES_WITH_ELEVATION = [
    pytest.param("State", id="state"),
    pytest.param("Media", id="media"),
    pytest.param("Contact", id="contact"),
]


@pytest.mark.parametrize(
    "class_name",
    CLASSES_WITH_ELEVATION,
)
class TestElevationProperty:
    def setup_class(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_elevation_none(self, class_name):
        obj = eval(f"self.store.db_classes.{class_name}()")
        obj.elevation = None
        assert obj.elevation is None

    def test_elevation_scalar(self, class_name):
        obj = eval(f"self.store.db_classes.{class_name}()")
        # Check setting with a scalar (float) gives error
        with pytest.raises(TypeError) as exception:
            obj.elevation = 5
        assert "Elevation must be a Quantity" in str(exception.value)

    def test_elevation_wrong_units(self, class_name):
        obj = eval(f"self.store.db_classes.{class_name}()")
        # Check setting with a Quantity of the wrong units gives error
        with pytest.raises(ValueError) as exception:
            obj.elevation = 5 * unit_registry.second
        assert "Elevation must be a Quantity with a dimensionality of [length]" in str(
            exception.value
        )

    def test_elevation_right_units(self, class_name):
        obj = eval(f"self.store.db_classes.{class_name}()")
        # Check setting with a Quantity of the right SI units succeeds
        obj.elevation = 5 * unit_registry.metre
        # Check setting with a Quantity of strange but valid units succeeds
        obj.elevation = 5 * unit_registry.angstrom

    def test_state_elevation_roundtrip(self, class_name):
        obj = eval(f"self.store.db_classes.{class_name}()")
        # Check setting and retrieving field works, and gives units as a result
        obj.elevation = 10 * unit_registry.metre
        assert obj.elevation == 10 * unit_registry.metre
        assert obj.elevation.check("[length]")

    def test_elevation_class_attribute(self, class_name):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        obj = eval(f"self.store.db_classes.{class_name}.elevation")
        assert hasattr(obj, "expression")
CLASSES_WITH_LOCATION = [
    pytest.param("State", id="state"),
    pytest.param("Media", id="media"),
    pytest.param("Contact", id="contact"),
]


@pytest.mark.parametrize(
    "class_name",
    CLASSES_WITH_LOCATION,
)
class TestLocationProperty:
    def setup_class(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_location_property_none(self, class_name):
        obj = getattr(self.store.db_classes, class_name)()
        obj.location = None
        assert obj.location is None

    def test_location_property_invalid_type(self, class_name):
        obj = getattr(self.store.db_classes, class_name)()
        with pytest.raises(TypeError) as exception:
            obj.location = (50, -1)
        assert "location value must be an instance of the Location class" in str(exception.value)

    def test_location_invalid_location(self, class_name):
        obj = getattr(self.store.db_classes, class_name)()
        # Check setting with an empty Location object gives error
        with pytest.raises(ValueError) as exception:
            obj.location = Location()
        assert "location object does not have valid values" in str(exception.value)

    def test_location_valid_location(self, class_name):
        obj = getattr(self.store.db_classes, class_name)()
        loc = Location()
        loc.set_latitude_decimal_degrees(50.23)
        loc.set_longitude_decimal_degrees(-1.34)
        obj.location = loc

    def test_location_roundtrip_not_to_db(self, class_name):
        # Tests a roundtrip of a Location object, but without
        # actually committing to the DB - so the Location object
        # is converted to and from a string, but not actually stored
        # in the database as a WKBElement.
        obj = getattr(self.store.db_classes, class_name)()
        loc = Location()
        loc.set_latitude_decimal_degrees(50.23)
        loc.set_longitude_decimal_degrees(-1.34)
        obj.location = loc
        assert obj.location.latitude == 50.23
        assert obj.location.longitude == -1.34

    def test_location_class_attribute(self, class_name):
        attr = getattr(self.store.db_classes, class_name).location
        assert hasattr(attr, "expression")
class TestActivationMinRangeProperty(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_activation_min_range_none(self):
        activation = self.store.db_classes.Activation()
        activation.min_range = None
        assert activation.min_range is None

    def test_activation_min_range_scalar(self):
        activation = self.store.db_classes.Activation()
        # Check setting with a scalar (float) gives error
        with pytest.raises(TypeError) as exception:
            activation.min_range = 5
        assert "min_range must be a Quantity" in str(exception.value)

    def test_activation_min_range_wrong_units(self):
        activation = self.store.db_classes.Activation()
        # Check setting with a Quantity of the wrong units gives error
        with pytest.raises(ValueError) as exception:
            activation.min_range = 5 * unit_registry.second
        assert "min_range must be a Quantity with a dimensionality of [length]" in str(
            exception.value
        )

    def test_activation_min_range_right_units(self):
        activation = self.store.db_classes.Activation()
        # Check setting with a Quantity of the right SI units succeeds
        activation.min_range = 57 * unit_registry.kilometre
        # Check setting with a Quantity of strange but valid units succeeds
        activation.min_range = 1523 * unit_registry.angstrom

    def test_activation_min_range_roundtrip(self):
        activation = self.store.db_classes.Activation()
        # Check setting and retrieving field works, and gives units as a result
        activation.min_range = 99 * unit_registry.metre
        assert activation.min_range == 99 * unit_registry.metre
        assert activation.min_range.check("[length]")

    def test_activation_min_range_class_attribute(self):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        assert hasattr(self.store.db_classes.Activation.min_range, "expression")
class TestActivationMaxRangeProperty(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_activation_max_range_none(self):
        activation = self.store.db_classes.Activation()
        activation.max_range = None
        assert activation.max_range is None

    def test_activation_max_range_scalar(self):
        activation = self.store.db_classes.Activation()
        # Check setting with a scalar (float) gives error
        with pytest.raises(TypeError) as exception:
            activation.max_range = 5
        assert "max_range must be a Quantity" in str(exception.value)

    def test_activation_max_range_wrong_units(self):
        activation = self.store.db_classes.Activation()
        # Check setting with a Quantity of the wrong units gives error
        with pytest.raises(ValueError) as exception:
            activation.max_range = 5 * unit_registry.second
        assert "max_range must be a Quantity with a dimensionality of [length]" in str(
            exception.value
        )

    def test_activation_max_range_right_units(self):
        activation = self.store.db_classes.Activation()
        # Check setting with a Quantity of the right SI units succeeds
        activation.max_range = 23 * unit_registry.kilometre
        # Check setting with a Quantity of strange but valid units succeeds
        activation.max_range = 978 * unit_registry.angstrom

    def test_activation_max_range_roundtrip(self):
        activation = self.store.db_classes.Activation()
        # Check setting and retrieving field works, and gives units as a result
        activation.max_range = 143 * unit_registry.metre
        assert activation.max_range == 143 * unit_registry.metre
        assert activation.max_range.check("[length]")

    def test_activation_max_range_class_attribute(self):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        assert hasattr(self.store.db_classes.Activation.max_range, "expression")
class TestActivationLeftArcProperty(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_activation_left_arc_none(self):
        activation = self.store.db_classes.Activation()
        activation.left_arc = None
        assert activation.left_arc is None

    def test_activation_left_arc_scalar(self):
        activation = self.store.db_classes.Activation()
        # Check setting with a scalar (float) gives error
        with pytest.raises(TypeError) as exception:
            activation.left_arc = 5
        assert "left_arc must be a Quantity" in str(exception.value)

    def test_activation_left_arc_wrong_units(self):
        activation = self.store.db_classes.Activation()
        # Check setting with a Quantity of the wrong units gives error
        with pytest.raises(ValueError) as exception:
            activation.left_arc = 5 * unit_registry.second
        assert "left_arc must be a Quantity with a dimensionality of ''" in str(exception.value)

    def test_activation_left_arc_wrong_units_dimensionless(self):
        activation = self.store.db_classes.Activation()
        # Check setting with a dimensionless (non-angular) Quantity gives error
        with pytest.raises(ValueError) as exception:
            activation.left_arc = unit_registry.Quantity(5)
        assert "left_arc must be a Quantity with angular units (degree or radian)" in str(
            exception.value
        )

    def test_activation_left_arc_right_units(self):
        activation = self.store.db_classes.Activation()
        # Check setting with a Quantity of the right SI units succeeds
        activation.left_arc = 57 * unit_registry.degree
        # Check setting with a Quantity of strange but valid units succeeds
        activation.left_arc = 0.784 * unit_registry.radian

    def test_activation_left_arc_roundtrip(self):
        activation = self.store.db_classes.Activation()
        # Check setting and retrieving field works, and gives units as a result
        activation.left_arc = 157 * unit_registry.degree
        assert activation.left_arc == 157 * unit_registry.degree
        assert activation.left_arc.check("")

    def test_activation_left_arc_class_attribute(self):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        assert hasattr(self.store.db_classes.Activation.left_arc, "expression")
class TestActivationRightArcProperty(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_activation_right_arc_none(self):
        activation = self.store.db_classes.Activation()
        activation.right_arc = None
        assert activation.right_arc is None

    def test_activation_right_arc_scalar(self):
        activation = self.store.db_classes.Activation()
        # Check setting with a scalar (float) gives error
        with pytest.raises(TypeError) as exception:
            activation.right_arc = 5
        assert "right_arc must be a Quantity" in str(exception.value)

    def test_activation_right_arc_wrong_units(self):
        activation = self.store.db_classes.Activation()
        # Check setting with a Quantity of the wrong units gives error
        with pytest.raises(ValueError) as exception:
            activation.right_arc = 5 * unit_registry.second
        assert "right_arc must be a Quantity with a dimensionality of ''" in str(exception.value)

    def test_activation_right_arc_wrong_units_dimensionless(self):
        activation = self.store.db_classes.Activation()
        # Check setting with a dimensionless (non-angular) Quantity gives error
        with pytest.raises(ValueError) as exception:
            activation.right_arc = unit_registry.Quantity(5)
        assert "right_arc must be a Quantity with angular units (degree or radian)" in str(
            exception.value
        )

    def test_activation_right_arc_right_units(self):
        activation = self.store.db_classes.Activation()
        # Check setting with a Quantity of the right SI units succeeds
        activation.right_arc = 98 * unit_registry.degree
        # Check setting with a Quantity of strange but valid units succeeds
        activation.right_arc = 0.523 * unit_registry.radian

    def test_activation_right_arc_roundtrip(self):
        activation = self.store.db_classes.Activation()
        # Check setting and retrieving field works, and gives units as a result
        activation.right_arc = 121 * unit_registry.degree
        assert activation.right_arc == 121 * unit_registry.degree
        assert activation.right_arc.check("")

    def test_activation_right_arc_class_attribute(self):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        assert hasattr(self.store.db_classes.Activation.right_arc, "expression")
class TestGeometryGeometryProperty(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()

    def tearDown(self):
        pass

    def test_geometry_geometry_none(self):
        geometry = self.store.db_classes.Geometry1()
        geometry.geometry = None
        assert geometry.geometry is None

    def test_geometry_geometry_loc(self):
        geometry = self.store.db_classes.Geometry1()
        loc = Location()
        loc.set_latitude_decimal_degrees(50)
        loc.set_longitude_decimal_degrees(-1)
        geometry.geometry = loc
        assert geometry.geometry == loc.to_wkt()

    def test_geometry_geometry_other(self):
        geometry = self.store.db_classes.Geometry1()
        geometry.geometry = "Test String"
        assert geometry.geometry == "Test String"

    def test_geometry_class_attribute(self):
        # Check this is a valid SQLAlchemy column when used as a class attribute
        assert hasattr(self.store.db_classes.Geometry1.geometry, "expression")
class TestLocationRoundtripToDB(unittest.TestCase):
    def setUp(self):
        self.store = DataStore("", "", "", 0, ":memory:", db_type="sqlite")
        self.store.initialise()
        with self.store.session_scope():
            self.change_id = self.store.add_to_changes("TEST", datetime.utcnow(), "TEST").change_id
            self.nationality = self.store.add_to_nationalities(
                "test_nationality", self.change_id
            ).name
            self.platform_type = self.store.add_to_platform_types(
                "test_platform_type", self.change_id
            ).name
            self.sensor_type = self.store.add_to_sensor_types(
                "test_sensor_type", self.change_id
            ).name
            self.privacy = self.store.add_to_privacies("test_privacy", 0, self.change_id).name

            self.platform = self.store.get_platform(
                platform_name="Test Platform",
                nationality=self.nationality,
                platform_type=self.platform_type,
                privacy=self.privacy,
                change_id=self.change_id,
            )
            self.sensor = self.platform.get_sensor(
                self.store, "gps", self.sensor_type, change_id=self.change_id
            )
            self.file = self.store.get_datafile(
                "test_file", "csv", 0, "HASHED-1", change_id=self.change_id
            )
            self.current_time = datetime.utcnow()

            self.store.session.expunge(self.sensor)
            self.store.session.expunge(self.platform)
            self.store.session.expunge(self.file)

        class TestParser(Importer):
            def __init__(
                self,
                name="Test Importer",
                validation_level=validation_constants.NONE_LEVEL,
                short_name="Test Importer",
                datafile_type="Test",
            ):
                super().__init__(name, validation_level, short_name, datafile_type)
                self.text_label = None
                self.depth = 0.0
                self.errors = list()

            def can_load_this_header(self, header) -> bool:
                return True

            def can_load_this_filename(self, filename):
                return True

            def can_load_this_type(self, suffix):
                return True

            def can_load_this_file(self, file_contents):
                return True

            def _load_this_file(self, data_store, path, file_contents, datafile):
                pass

        self.parser = TestParser()
        self.file.measurements[self.parser.short_name] = dict()
    def tearDown(self):
        pass

    def test_location_roundtrip_to_db(self):
        with self.store.session_scope():
            states = self.store.session.query(self.store.db_classes.State).all()
            # there must be no entry at the beginning
            self.assertEqual(len(states), 0)

            state = self.file.create_state(
                self.store,
                self.platform,
                self.sensor,
                self.current_time,
                parser_name=self.parser.short_name,
            )
            loc = Location()
            loc.set_latitude_decimal_degrees(50.23)
            loc.set_longitude_decimal_degrees(-1.35)
            state.location = loc

            # there must be no entry because it's kept in-memory
            states = self.store.session.query(self.store.db_classes.State).all()
            self.assertEqual(len(states), 0)

            self.assertEqual(state.time, self.current_time)

            # Commit to the DB
            if self.file.validate():
                self.file.commit(self.store, change_id=self.change_id)

        # In a separate session, check that we get a Location class with the right
        # lat and lon
        with self.store.session_scope():
            states = self.store.session.query(self.store.db_classes.State).all()
            self.assertEqual(len(states), 1)
            loc = states[0].location
            assert loc.latitude == 50.23
            assert loc.longitude == -1.35
# youtube_api/__init__.py
from youtube_api.youtube_api import YoutubeDataApi, YouTubeDataAPI
import youtube_api.parsers as P
import youtube_api.youtube_api_utils as youtube_api_utils
__version__ = '0.0.16'
# suche.py
import time
import webbrowser
SEPARATOR = "-" * 87

print(SEPARATOR)
print("Search on:")
print("b for Google Images")
print("c for Wikimedia Commons")
print("g for Google")
print("m for Google Maps")
print("s for Google Shopping")
print("t for Google's calculator")
print("w for Wikipedia")
print("y for Youtube")
choice = input("Where do you want to search? : ").strip().lower()
print(SEPARATOR)

# Search services: key -> (name, base URL, URL suffix)
SERVICES = {
    "g": ("Google", "https://www.google.com/search?q=", ""),
    "w": ("Wikipedia", "https://de.wikipedia.org/wiki/", ""),
    "c": ("Wikimedia Commons", "https://commons.wikimedia.org/w/index.php?search=", ""),
    "m": ("Google Maps", "https://www.google.com/maps/search/", ""),
    "y": ("Youtube", "https://www.youtube.com/results?search_query=", ""),
    "b": (
        "Google Images",
        "https://www.google.de/search?q=",
        "&hl=de&authuser=0&tbm=isch&sxsrf=AOaemvIy0b1tErToWnyfge9YuociFQM9lA%3A1636633673968&source=hp&biw=1366&bih=625&ei=SQyNYcTGN8H67_UPvua4yAs&iflsig=ALs-wAMAAAAAYY0aWSs5aCcdJj6xqgZpqcFw4KP89eKh&oq=5&gs_lcp=CgNpbWcQAzIHCCMQ7wMQJzIHCCMQ7wMQJzIFCAAQgAQyCAgAEIAEELEDMggIABCABBCxAzILCAAQgAQQsQMQgwEyCAgAEIAEELEDMggIABCxAxCDATIFCAAQgAQyCAgAEIAEELEDUABYAGD2A2gAcAB4AIABPYgBPZIBATGYAQCgAQGqAQtnd3Mtd2l6LWltZw&sclient=img&ved=0ahUKEwiEj4vGp5D0AhVB_bsIHT4zDrkQ4dUDCAY&uact=5",
    ),
    "s": (
        "Google Shopping",
        "https://www.google.de/search?q=",
        "&source=lmns&tbm=shop&authuser=0&bih=625&biw=1366&hl=de&sa=X&ved=2ahUKEwi7n5nwqJD0AhUbgv0HHZv8DzsQ_AUoAHoECAEQBw",
    ),
}

if choice == "t":
    # The calculator needs no search term
    print("Opening the Google calculator")
    print(SEPARATOR)
    time.sleep(5)
    url = "https://www.google.com/search?q=Taschenrechner"
    webbrowser.open(url)
    print("Calculator opened")
    print(url)
    print(SEPARATOR)
elif choice in SERVICES:
    name, url, suffix = SERVICES[choice]
    print(f"Searching on {name}")
    print(SEPARATOR)
    term = input("What are you searching for?: ")
    print(SEPARATOR)
    print("Searching for", term)
    print(SEPARATOR)
    time.sleep(5)
    webbrowser.open(url + term + suffix)
    print("Search finished")
    print(url + term + suffix)
    print(SEPARATOR)
# utest/test_get_keyword_types.py
import pytest
from robotlibcore import PY2, RF31

if not PY2:
    from typing import List, Union, Dict

    from DynamicTypesAnnotationsLibrary import DynamicTypesAnnotationsLibrary
    from DynamicTypesAnnotationsLibrary import CustomObject
from DynamicTypesLibrary import DynamicTypesLibrary


@pytest.fixture(scope='module')
def lib():
    return DynamicTypesLibrary()


@pytest.fixture(scope='module')
def lib_types():
    return DynamicTypesAnnotationsLibrary('aaa')


def test_using_keyword_types(lib):
    types = lib.get_keyword_types('keyword_with_types')
    assert types == {'arg1': str}


def test_types_disabled(lib):
    types = lib.get_keyword_types('keyword_with_disabled_types')
    assert types is None


@pytest.mark.skipif(not RF31, reason='Only for RF3.1')
def test_keyword_types_and_bool_default_rf31(lib):
    types = lib.get_keyword_types('keyword_robot_types_and_bool_default')
    assert types == {'arg1': str}


@pytest.mark.skipif(RF31, reason='Only for RF3.2+')
def test_keyword_types_and_bool_default_rf32(lib):
    types = lib.get_keyword_types('keyword_robot_types_and_bool_default')
    assert types == {'arg1': str}


def test_one_keyword_type_defined(lib):
    types = lib.get_keyword_types('keyword_with_one_type')
    assert types == {'arg1': str}


def test_keyword_no_args(lib):
    types = lib.get_keyword_types('keyword_with_no_args')
    assert types == {}


def test_not_keyword(lib):
    with pytest.raises(ValueError):
        lib.get_keyword_types('not_keyword')


@pytest.mark.skipif(RF31, reason='Only for RF3.2+')
def test_keyword_none_rf32(lib):
    types = lib.get_keyword_types('keyword_none')
    assert types == {}


@pytest.mark.skipif(not RF31, reason='Only for RF3.1')
def test_keyword_none_rf31(lib):
    types = lib.get_keyword_types('keyword_none')
    assert types == {}


@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_single_annotation(lib_types):
    types = lib_types.get_keyword_types('keyword_with_one_annotation')
    assert types == {'arg': str}


@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_multiple_annotations(lib_types):
    types = lib_types.get_keyword_types('keyword_with_multiple_annotations')
    assert types == {'arg1': str, 'arg2': List}


@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_multiple_types(lib_types):
    types = lib_types.get_keyword_types('keyword_multiple_types')
    assert types == {'arg': Union[List, None]}


@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_new_type(lib_types):
    types = lib_types.get_keyword_types('keyword_new_type')
    assert len(types) == 1
    assert types['arg']


@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_return_type(lib_types):
    types = lib_types.get_keyword_types('keyword_define_return_type')
    assert types == {'arg': str}


@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_forward_references(lib_types):
    types = lib_types.get_keyword_types('keyword_forward_references')
    assert types == {'arg': CustomObject}


@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_with_annotation_and_default(lib_types):
    types = lib_types.get_keyword_types('keyword_with_annotations_and_default')
    assert types == {'arg': str}


def test_keyword_with_many_defaults(lib):
    types = lib.get_keyword_types('keyword_many_default_types')
    assert types == {}


@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_with_annotation_external_class(lib_types):
    types = lib_types.get_keyword_types('keyword_with_webdriver')
    assert types == {'arg': CustomObject}


@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_with_default_and_annotation(lib_types):
    types = lib_types.get_keyword_types('keyword_default_and_annotation')
    assert types == {'arg1': int, 'arg2': Union[bool, str]}


@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_with_robot_types_and_annotations(lib_types):
    types = lib_types.get_keyword_types('keyword_robot_types_and_annotations')
    assert types == {'arg': str}


@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_with_robot_types_disabled_and_annotations(lib_types):
    types = lib_types.get_keyword_types('keyword_robot_types_disabled_and_annotations')
    assert types is None


@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_with_robot_types_and_bool_annotations(lib_types):
    types = lib_types.get_keyword_types('keyword_robot_types_and_bool_hint')
    assert types == {'arg1': str}


@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_init_args(lib_types):
    types = lib_types.get_keyword_types('__init__')
    assert types == {'arg': str}


def test_dummy_magic_method(lib):
    with pytest.raises(ValueError):
        lib.get_keyword_types('__foobar__')


def test_varargs(lib):
    types = lib.get_keyword_types('varargs_and_kwargs')
    assert types == {}


@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_init_args_with_annotation(lib_types):
    types = lib_types.get_keyword_types('__init__')
    assert types == {'arg': str}


@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_exception_in_annotations(lib_types):
    types = lib_types.get_keyword_types('keyword_exception_annotations')
    assert types == {'arg': 'NotHere'}


@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_only_arguments(lib_types):
    types = lib_types.get_keyword_types('keyword_only_arguments')
    assert types == {}


@pytest.mark.skipif(RF31, reason='Only for RF3.2+')
@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_only_arguments_many_rf32(lib_types):
    types = lib_types.get_keyword_types('keyword_only_arguments_many')
    assert types == {}


@pytest.mark.skipif(not RF31, reason='Only for RF3.1')
@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_only_arguments_many_rf31(lib_types):
    types = lib_types.get_keyword_types('keyword_only_arguments_many')
    assert types == {}


@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_mandatory_and_keyword_only_arguments(lib_types):
types = lib_types.get_keyword_types('keyword_mandatory_and_keyword_only_arguments')
assert types == {'arg': int, 'some': bool}
@pytest.mark.skipif(RF31, reason='Only for RF3.2+')
@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_only_arguments_many_positional_and_default_rf32(lib_types):
types = lib_types.get_keyword_types('keyword_only_arguments_many_positional_and_default')
assert types == {'four': Union[int, str], 'six': Union[bool, str]}
@pytest.mark.skipif(not RF31, reason='Only for RF3.1')
@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_only_arguments_many_positional_and_default_rf31(lib_types):
types = lib_types.get_keyword_types('keyword_only_arguments_many_positional_and_default')
assert types == {'four': Union[int, str], 'six': Union[bool, str]}
@pytest.mark.skipif(RF31, reason='Only for RF3.2+')
@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_all_args_rf32(lib_types):
types = lib_types.get_keyword_types('keyword_all_args')
assert types == {}
@pytest.mark.skipif(not RF31, reason='Only for RF3.1')
@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_all_args_rf31(lib_types):
types = lib_types.get_keyword_types('keyword_all_args')
assert types == {}
@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_self_and_types(lib_types):
types = lib_types.get_keyword_types('keyword_self_and_types')
assert types == {'mandatory': str, 'other': bool}
@pytest.mark.skipif(PY2, reason='Only applicable on Python 3')
def test_keyword_self_and_keyword_only_types(lib_types):
types = lib_types.get_keyword_types('keyword_self_and_keyword_only_types')
assert types == {'varargs': int, 'other': bool, 'kwargs': int}
from data_utils import DataStruct
from tqdm import trange
import torch
from data_utils.save_uds_utils import data_load
import numpy as np


def S_get_g_data_loader_split():
    text_list, edge_index_list, data_confidence, test_mask, dev_mask, train_mask, data_trigger_index = data_load()
    train_list = []
    dev_list = []
    test_list = []
    for i in trange(len(data_confidence)):
        x = text_list[i]
        # Stack source and target node indices into a (2, num_edges) array.
        edge = np.stack([edge_index_list[i][0], edge_index_list[i][1]], 0)
        # Build a dense (len(x), len(x)) adjacency matrix from the edge list.
        edge_index = torch.sparse_coo_tensor(
            torch.tensor(edge), torch.ones(len(edge[0])), (len(x), len(x))
        ).to_dense()
        eep = torch.tensor(data_confidence[i]).unsqueeze(0)
        trigger = ["uds"]
        trigger_index = torch.tensor(np.array(data_trigger_index[i], dtype=int)).unsqueeze(0)
        if test_mask[i]:
            data = DataStruct(tuple(text_list[i]), edge_index.numpy().tolist(),
                              tuple(trigger), tuple(trigger_index.numpy().tolist()),
                              tuple(eep.numpy().tolist()), tuple([len(test_list)]))
            test_list.append(data)
        if train_mask[i]:
            data = DataStruct(tuple(text_list[i]), edge_index.numpy().tolist(),
                              tuple(trigger), tuple(trigger_index.numpy().tolist()),
                              tuple(eep.numpy().tolist()), tuple([len(train_list)]))
            train_list.append(data)
        if dev_mask[i]:
            data = DataStruct(tuple(text_list[i]), edge_index.numpy().tolist(),
                              tuple(trigger), tuple(trigger_index.numpy().tolist()),
                              tuple(eep.numpy().tolist()), tuple([len(dev_list)]))
            dev_list.append(data)
    return train_list, dev_list, test_list


def S_get_g_data_loader_split_xlnet():
    text_list, text_list_emb, edge_index_list, data_confidence, test_mask, dev_mask, train_mask, data_trigger_index = data_load()
    train_list = []
    dev_list = []
    test_list = []
    for i in trange(len(data_confidence)):
        x = text_list[i]
        x_emb = torch.tensor(text_list_emb[i])
        # Stack source and target node indices into a (2, num_edges) array.
        edge = np.stack([edge_index_list[i][0], edge_index_list[i][1]], 0)
        # Build a dense (len(x), len(x)) adjacency matrix from the edge list.
        edge_index = torch.sparse_coo_tensor(
            torch.tensor(edge), torch.ones(len(edge[0])), (len(x), len(x))
        ).to_dense()
        eep = torch.tensor(data_confidence[i]).unsqueeze(0)
        trigger = ["uds"]
        trigger_index = torch.tensor(np.array(data_trigger_index[i], dtype=int)).unsqueeze(0)
        if test_mask[i]:
            data = DataStruct(tuple(text_list[i]), x_emb, edge_index.numpy().tolist(),
                              tuple(trigger), tuple(trigger_index.numpy().tolist()),
                              tuple([eep.numpy().tolist()]), tuple([len(test_list)]))
            test_list.append(data)
        if train_mask[i]:
            data = DataStruct(tuple(text_list[i]), x_emb, edge_index.numpy().tolist(),
                              tuple(trigger), tuple(trigger_index.numpy().tolist()),
                              tuple([eep.numpy().tolist()]), tuple([len(train_list)]))
            train_list.append(data)
        if dev_mask[i]:
            data = DataStruct(tuple(text_list[i]), x_emb, edge_index.numpy().tolist(),
                              tuple(trigger), tuple(trigger_index.numpy().tolist()),
                              tuple([eep.numpy().tolist()]), tuple([len(dev_list)]))
            dev_list.append(data)
    return train_list, dev_list, test_list
from typing import Type
import torch
import os
import unittest
import heat as ht
import numpy as np
from ...tests.test_suites.basic_test import TestCase


class TestLinalgBasics(TestCase):
    def test_cross(self):
        a = ht.eye(3)
        b = ht.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])
        # different types
        cross = ht.cross(a, b)
        self.assertEqual(cross.shape, a.shape)
        self.assertEqual(cross.dtype, a.dtype)
        self.assertEqual(cross.split, a.split)
        self.assertEqual(cross.comm, a.comm)
        self.assertEqual(cross.device, a.device)
        self.assertTrue(ht.equal(cross, ht.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])))

        # axis
        a = ht.eye(3, split=0)
        b = ht.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=ht.float, split=0)
        cross = ht.cross(a, b)
        self.assertEqual(cross.shape, a.shape)
        self.assertEqual(cross.dtype, a.dtype)
        self.assertEqual(cross.split, a.split)
        self.assertEqual(cross.comm, a.comm)
        self.assertEqual(cross.device, a.device)
        self.assertTrue(ht.equal(cross, ht.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])))

        a = ht.eye(3, dtype=ht.int8, split=1)
        b = ht.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=ht.int8, split=1)
        cross = ht.cross(a, b, axis=0)
        self.assertEqual(cross.shape, a.shape)
        self.assertEqual(cross.dtype, a.dtype)
        self.assertEqual(cross.split, a.split)
        self.assertEqual(cross.comm, a.comm)
        self.assertEqual(cross.device, a.device)
        self.assertTrue(ht.equal(cross, ht.array([[0, 0, -1], [-1, 0, 0], [0, -1, 0]])))

        # test axisa, axisb, axisc
        np.random.seed(42)
        np_a = np.random.randn(40, 3, 50)
        np_b = np.random.randn(3, 40, 50)
        np_cross = np.cross(np_a, np_b, axisa=1, axisb=0)
        a = ht.array(np_a, split=0)
        b = ht.array(np_b, split=1)
        cross = ht.cross(a, b, axisa=1, axisb=0)
        self.assert_array_equal(cross, np_cross)

        cross_axisc = ht.cross(a, b, axisa=1, axisb=0, axisc=1)
        np_cross_axisc = np.cross(np_a, np_b, axisa=1, axisb=0, axisc=1)
        self.assert_array_equal(cross_axisc, np_cross_axisc)

        # test vector axes with 2 elements
        b_2d = ht.array(np_b[:-1, :, :], split=1)
        cross_3d_2d = ht.cross(a, b_2d, axisa=1, axisb=0)
        np_cross_3d_2d = np.cross(np_a, np_b[:-1, :, :], axisa=1, axisb=0)
        self.assert_array_equal(cross_3d_2d, np_cross_3d_2d)

        a_2d = ht.array(np_a[:, :-1, :], split=0)
        cross_2d_3d = ht.cross(a_2d, b, axisa=1, axisb=0)
        np_cross_2d_3d = np.cross(np_a[:, :-1, :], np_b, axisa=1, axisb=0)
        self.assert_array_equal(cross_2d_3d, np_cross_2d_3d)

        cross_z_comp = ht.cross(a_2d, b_2d, axisa=1, axisb=0)
        np_cross_z_comp = np.cross(np_a[:, :-1, :], np_b[:-1, :, :], axisa=1, axisb=0)
        self.assert_array_equal(cross_z_comp, np_cross_z_comp)

        a_wrong_split = ht.array(np_a[:, :-1, :], split=2)
        with self.assertRaises(ValueError):
            ht.cross(a_wrong_split, b, axisa=1, axisb=0)
        with self.assertRaises(ValueError):
            ht.cross(ht.eye(3), ht.eye(4))
        with self.assertRaises(ValueError):
            ht.cross(ht.eye(3, split=0), ht.eye(3, split=1))
        if torch.cuda.is_available():
            with self.assertRaises(ValueError):
                ht.cross(ht.eye(3, device="gpu"), ht.eye(3, device="cpu"))
        with self.assertRaises(TypeError):
            ht.cross(ht.eye(3), ht.eye(3), axis="wasd")
        with self.assertRaises(ValueError):
            ht.cross(ht.eye(3, split=0), ht.eye(3, split=0), axis=0)

    def test_dot(self):
        # ONLY TESTING CORRECTNESS! ALL CALLS IN DOT ARE PREVIOUSLY TESTED
        # cases to test:
        data2d = np.ones((10, 10))
        data3d = np.ones((10, 10, 10))
        data1d = np.arange(10)

        # 2 1D arrays,
        a1d = ht.array(data1d, dtype=ht.float32, split=0)
        b1d = ht.array(data1d, dtype=ht.float32, split=0)
        self.assertEqual(ht.dot(a1d, b1d), np.dot(data1d, data1d))
        ret = []
        self.assertEqual(ht.dot(a1d, b1d, out=ret), np.dot(data1d, data1d))

        a1d = ht.array(data1d, dtype=ht.float32, split=None)
        b1d = ht.array(data1d, dtype=ht.float32, split=0)
        self.assertEqual(ht.dot(a1d, b1d), np.dot(data1d, data1d))

        a1d = ht.array(data1d, dtype=ht.float32, split=None)
        b1d = ht.array(data1d, dtype=ht.float32, split=None)
        self.assertEqual(ht.dot(a1d, b1d), np.dot(data1d, data1d))

        a1d = ht.array(data1d, dtype=ht.float32, split=0)
        b1d = ht.array(data1d, dtype=ht.float32, split=0)
        self.assertEqual(ht.dot(a1d, b1d), np.dot(data1d, data1d))

        # 2 2D arrays,
        a2d = ht.array(data2d, split=1)
        b2d = ht.array(data2d, split=1)
        res = ht.dot(a2d, b2d) - ht.array(np.dot(data2d, data2d))
        self.assertEqual(ht.equal(res, ht.zeros(res.shape)), 1)
        ret = ht.array(data2d, split=1)
        ht.dot(a2d, b2d, out=ret)
        res = ret - ht.array(np.dot(data2d, data2d))
        self.assertEqual(ht.equal(res, ht.zeros(res.shape)), 1)

        const1 = 5
        const2 = 6
        # a is const
        res = ht.dot(const1, b2d) - ht.array(np.dot(const1, data2d))
        ret = 0
        ht.dot(const1, b2d, out=ret)
        self.assertEqual(ht.equal(res, ht.zeros(res.shape)), 1)
        # b is const
        res = ht.dot(a2d, const2) - ht.array(np.dot(data2d, const2))
        self.assertEqual(ht.equal(res, ht.zeros(res.shape)), 1)
        # a and b are const
        self.assertEqual(ht.dot(const2, const1), 5 * 6)

        with self.assertRaises(NotImplementedError):
            ht.dot(ht.array(data3d), ht.array(data1d))

    def test_matmul(self):
        with self.assertRaises(ValueError):
            ht.matmul(ht.ones((25, 25)), ht.ones((42, 42)))

        # cases to test:
        n, m = 21, 31
        j, k = m, 45
        a_torch = torch.ones((n, m), device=self.device.torch_device)
        a_torch[0] = torch.arange(1, m + 1, device=self.device.torch_device)
        a_torch[:, -1] = torch.arange(1, n + 1, device=self.device.torch_device)
        b_torch = torch.ones((j, k), device=self.device.torch_device)
        b_torch[0] = torch.arange(1, k + 1, device=self.device.torch_device)
        b_torch[:, 0] = torch.arange(1, j + 1, device=self.device.torch_device)

        # splits None None
        a = ht.ones((n, m), split=None)
        b = ht.ones((j, k), split=None)
        a[0] = ht.arange(1, m + 1)
        a[:, -1] = ht.arange(1, n + 1)
        b[0] = ht.arange(1, k + 1)
        b[:, 0] = ht.arange(1, j + 1)
        ret00 = ht.matmul(a, b)
        self.assertEqual(ht.all(ret00 == ht.array(a_torch @ b_torch)), 1)
        self.assertIsInstance(ret00, ht.DNDarray)
        self.assertEqual(ret00.shape, (n, k))
        self.assertEqual(ret00.dtype, ht.float)
        self.assertEqual(ret00.split, None)
        self.assertEqual(a.split, None)
        self.assertEqual(b.split, None)

        # splits None None, allow_resplit
        a = ht.ones((n, m), split=None)
        b = ht.ones((j, k), split=None)
        a[0] = ht.arange(1, m + 1)
        a[:, -1] = ht.arange(1, n + 1)
        b[0] = ht.arange(1, k + 1)
        b[:, 0] = ht.arange(1, j + 1)
        ret00 = ht.matmul(a, b, allow_resplit=True)
        self.assertEqual(ht.all(ret00 == ht.array(a_torch @ b_torch)), 1)
        self.assertIsInstance(ret00, ht.DNDarray)
        self.assertEqual(ret00.shape, (n, k))
        self.assertEqual(ret00.dtype, ht.float)
        self.assertEqual(ret00.split, None)
        self.assertEqual(a.split, 0)
        self.assertEqual(b.split, None)

        if a.comm.size > 1:
            # splits 00
            a = ht.ones((n, m), split=0, dtype=ht.float64)
            b = ht.ones((j, k), split=0)
            a[0] = ht.arange(1, m + 1)
            a[:, -1] = ht.arange(1, n + 1)
            b[0] = ht.arange(1, k + 1)
            b[:, 0] = ht.arange(1, j + 1)
            ret00 = a @ b
            ret_comp00 = ht.array(a_torch @ b_torch, split=0)
            self.assertTrue(ht.equal(ret00, ret_comp00))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n, k))
            self.assertEqual(ret00.dtype, ht.float64)
            self.assertEqual(ret00.split, 0)

            # splits 00 (numpy)
            a = ht.array(np.ones((n, m)), split=0)
            b = ht.array(np.ones((j, k)), split=0)
            a[0] = ht.arange(1, m + 1)
            a[:, -1] = ht.arange(1, n + 1)
            b[0] = ht.arange(1, k + 1)
            b[:, 0] = ht.arange(1, j + 1)
            ret00 = a @ b
            ret_comp00 = ht.array(a_torch @ b_torch, split=0)
            self.assertTrue(ht.equal(ret00, ret_comp00))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n, k))
            self.assertEqual(ret00.dtype, ht.float64)
            self.assertEqual(ret00.split, 0)

            # splits 01
            a = ht.ones((n, m), split=0)
            b = ht.ones((j, k), split=1, dtype=ht.float64)
            a[0] = ht.arange(1, m + 1)
            a[:, -1] = ht.arange(1, n + 1)
            b[0] = ht.arange(1, k + 1)
            b[:, 0] = ht.arange(1, j + 1)
            ret00 = ht.matmul(a, b)
            ret_comp01 = ht.array(a_torch @ b_torch, split=0)
            self.assertTrue(ht.equal(ret00, ret_comp01))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n, k))
            self.assertEqual(ret00.dtype, ht.float64)
            self.assertEqual(ret00.split, 0)

            # splits 10
            a = ht.ones((n, m), split=1)
            b = ht.ones((j, k), split=0)
            a[0] = ht.arange(1, m + 1)
            a[:, -1] = ht.arange(1, n + 1)
            b[0] = ht.arange(1, k + 1)
            b[:, 0] = ht.arange(1, j + 1)
            ret00 = ht.matmul(a, b)
            ret_comp10 = ht.array(a_torch @ b_torch, split=1)
            self.assertTrue(ht.equal(ret00, ret_comp10))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n, k))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, 1)

            # splits 11
            a = ht.ones((n, m), split=1)
            b = ht.ones((j, k), split=1)
            a[0] = ht.arange(1, m + 1)
            a[:, -1] = ht.arange(1, n + 1)
            b[0] = ht.arange(1, k + 1)
            b[:, 0] = ht.arange(1, j + 1)
            ret00 = ht.matmul(a, b)
            ret_comp11 = ht.array(a_torch @ b_torch, split=1)
            self.assertTrue(ht.equal(ret00, ret_comp11))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n, k))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, 1)

            # splits 11 (torch)
            a = ht.array(torch.ones((n, m), device=self.device.torch_device), split=1)
            b = ht.array(torch.ones((j, k), device=self.device.torch_device), split=1)
            a[0] = ht.arange(1, m + 1)
            a[:, -1] = ht.arange(1, n + 1)
            b[0] = ht.arange(1, k + 1)
            b[:, 0] = ht.arange(1, j + 1)
            ret00 = ht.matmul(a, b)
            ret_comp11 = ht.array(a_torch @ b_torch, split=1)
            self.assertTrue(ht.equal(ret00, ret_comp11))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n, k))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, 1)

            # splits 0 None
            a = ht.ones((n, m), split=0)
            b = ht.ones((j, k), split=None)
            a[0] = ht.arange(1, m + 1)
            a[:, -1] = ht.arange(1, n + 1)
            b[0] = ht.arange(1, k + 1)
            b[:, 0] = ht.arange(1, j + 1)
            ret00 = ht.matmul(a, b)
            ret_comp0 = ht.array(a_torch @ b_torch, split=0)
            self.assertTrue(ht.equal(ret00, ret_comp0))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n, k))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, 0)

            # splits 1 None
            a = ht.ones((n, m), split=1)
            b = ht.ones((j, k), split=None)
            a[0] = ht.arange(1, m + 1)
            a[:, -1] = ht.arange(1, n + 1)
            b[0] = ht.arange(1, k + 1)
            b[:, 0] = ht.arange(1, j + 1)
            ret00 = ht.matmul(a, b)
            ret_comp1 = ht.array(a_torch @ b_torch, split=1)
            self.assertTrue(ht.equal(ret00, ret_comp1))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n, k))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, 1)

            # splits None 0
            a = ht.ones((n, m), split=None)
            b = ht.ones((j, k), split=0)
            a[0] = ht.arange(1, m + 1)
            a[:, -1] = ht.arange(1, n + 1)
            b[0] = ht.arange(1, k + 1)
            b[:, 0] = ht.arange(1, j + 1)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array(a_torch @ b_torch, split=0)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n, k))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, 0)

            # splits None 1
            a = ht.ones((n, m), split=None)
            b = ht.ones((j, k), split=1)
            a[0] = ht.arange(1, m + 1)
            a[:, -1] = ht.arange(1, n + 1)
            b[0] = ht.arange(1, k + 1)
            b[:, 0] = ht.arange(1, j + 1)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array(a_torch @ b_torch, split=1)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n, k))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, 1)

            # vector matrix mult:
            # a -> vector
            a_torch = torch.ones((m), device=self.device.torch_device)
            b_torch = torch.ones((j, k), device=self.device.torch_device)
            b_torch[0] = torch.arange(1, k + 1, device=self.device.torch_device)
            b_torch[:, 0] = torch.arange(1, j + 1, device=self.device.torch_device)

            # splits None None
            a = ht.ones((m), split=None)
            b = ht.ones((j, k), split=None)
            b[0] = ht.arange(1, k + 1)
            b[:, 0] = ht.arange(1, j + 1)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array(a_torch @ b_torch, split=None)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (k,))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, None)

            # splits None 0
            a = ht.ones((m), split=None)
            b = ht.ones((j, k), split=0)
            b[0] = ht.arange(1, k + 1)
            b[:, 0] = ht.arange(1, j + 1)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array(a_torch @ b_torch, split=None)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (k,))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, 0)

            # splits None 1
            a = ht.ones((m), split=None)
            b = ht.ones((j, k), split=1)
            b[0] = ht.arange(1, k + 1)
            b[:, 0] = ht.arange(1, j + 1)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array(a_torch @ b_torch, split=0)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (k,))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, 0)

            # splits 0 None
            a = ht.ones((m), split=None)
            b = ht.ones((j, k), split=0)
            b[0] = ht.arange(1, k + 1)
            b[:, 0] = ht.arange(1, j + 1)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array(a_torch @ b_torch, split=None)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (k,))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, 0)

            # splits 0 0
            a = ht.ones((m), split=0)
            b = ht.ones((j, k), split=0)
            b[0] = ht.arange(1, k + 1)
            b[:, 0] = ht.arange(1, j + 1)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array(a_torch @ b_torch, split=None)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (k,))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, 0)

            # splits 0 1
            a = ht.ones((m), split=0)
            b = ht.ones((j, k), split=1)
            b[0] = ht.arange(1, k + 1)
            b[:, 0] = ht.arange(1, j + 1)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array(a_torch @ b_torch, split=None)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (k,))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, 0)

            # b -> vector
            a_torch = torch.ones((n, m), device=self.device.torch_device)
            a_torch[0] = torch.arange(1, m + 1, device=self.device.torch_device)
            a_torch[:, -1] = torch.arange(1, n + 1, device=self.device.torch_device)
            b_torch = torch.ones((j), device=self.device.torch_device)

            # splits None None
            a = ht.ones((n, m), split=None)
            b = ht.ones((j), split=None)
            a[0] = ht.arange(1, m + 1)
            a[:, -1] = ht.arange(1, n + 1)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array(a_torch @ b_torch, split=None)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n,))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, None)

            a = ht.ones((n, m), split=None, dtype=ht.int64)
            b = ht.ones((j), split=None, dtype=ht.int64)
            a[0] = ht.arange(1, m + 1, dtype=ht.int64)
            a[:, -1] = ht.arange(1, n + 1, dtype=ht.int64)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array((a_torch @ b_torch), split=None)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n,))
            self.assertEqual(ret00.dtype, ht.int64)
            self.assertEqual(ret00.split, None)

            # splits 0 None
            a = ht.ones((n, m), split=0)
            b = ht.ones((j), split=None)
            a[0] = ht.arange(1, m + 1)
            a[:, -1] = ht.arange(1, n + 1)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array((a_torch @ b_torch), split=None)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n,))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, 0)

            a = ht.ones((n, m), split=0, dtype=ht.int64)
            b = ht.ones((j), split=None, dtype=ht.int64)
            a[0] = ht.arange(1, m + 1, dtype=ht.int64)
            a[:, -1] = ht.arange(1, n + 1, dtype=ht.int64)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array((a_torch @ b_torch), split=None)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n,))
            self.assertEqual(ret00.dtype, ht.int64)
            self.assertEqual(ret00.split, 0)

            # splits 1 None
            a = ht.ones((n, m), split=1)
            b = ht.ones((j), split=None)
            a[0] = ht.arange(1, m + 1)
            a[:, -1] = ht.arange(1, n + 1)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array((a_torch @ b_torch), split=None)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n,))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, 0)

            a = ht.ones((n, m), split=1, dtype=ht.int64)
            b = ht.ones((j), split=None, dtype=ht.int64)
            a[0] = ht.arange(1, m + 1, dtype=ht.int64)
            a[:, -1] = ht.arange(1, n + 1, dtype=ht.int64)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array((a_torch @ b_torch), split=None)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n,))
            self.assertEqual(ret00.dtype, ht.int64)
            self.assertEqual(ret00.split, 0)

            # splits None 0
            a = ht.ones((n, m), split=None)
            b = ht.ones((j), split=0)
            a[0] = ht.arange(1, m + 1)
            a[:, -1] = ht.arange(1, n + 1)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array((a_torch @ b_torch), split=None)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n,))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, 0)

            a = ht.ones((n, m), split=None, dtype=ht.int64)
            b = ht.ones((j), split=0, dtype=ht.int64)
            a[0] = ht.arange(1, m + 1, dtype=ht.int64)
            a[:, -1] = ht.arange(1, n + 1, dtype=ht.int64)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array((a_torch @ b_torch), split=None)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n,))
            self.assertEqual(ret00.dtype, ht.int64)
            self.assertEqual(ret00.split, 0)

            # splits 0 0
            a = ht.ones((n, m), split=0)
            b = ht.ones((j), split=0)
            a[0] = ht.arange(1, m + 1)
            a[:, -1] = ht.arange(1, n + 1)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array((a_torch @ b_torch), split=None)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n,))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, 0)

            a = ht.ones((n, m), split=0, dtype=ht.int64)
            b = ht.ones((j), split=0, dtype=ht.int64)
            a[0] = ht.arange(1, m + 1, dtype=ht.int64)
            a[:, -1] = ht.arange(1, n + 1, dtype=ht.int64)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array((a_torch @ b_torch), split=None)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n,))
            self.assertEqual(ret00.dtype, ht.int64)
            self.assertEqual(ret00.split, 0)

            # splits 1 0
            a = ht.ones((n, m), split=1)
            b = ht.ones((j), split=0)
            a[0] = ht.arange(1, m + 1)
            a[:, -1] = ht.arange(1, n + 1)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array((a_torch @ b_torch), split=None)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n,))
            self.assertEqual(ret00.dtype, ht.float)
            self.assertEqual(ret00.split, 0)

            a = ht.ones((n, m), split=1, dtype=ht.int64)
            b = ht.ones((j), split=0, dtype=ht.int64)
            a[0] = ht.arange(1, m + 1, dtype=ht.int64)
            a[:, -1] = ht.arange(1, n + 1, dtype=ht.int64)
            ret00 = ht.matmul(a, b)
            ret_comp = ht.array((a_torch @ b_torch), split=None)
            self.assertTrue(ht.equal(ret00, ret_comp))
            self.assertIsInstance(ret00, ht.DNDarray)
            self.assertEqual(ret00.shape, (n,))
            self.assertEqual(ret00.dtype, ht.int64)
            self.assertEqual(ret00.split, 0)

        with self.assertRaises(NotImplementedError):
            a = ht.zeros((3, 3, 3), split=2)
            b = a.copy()
            a @ b

    def test_matrix_norm(self):
        a = ht.arange(9, dtype=ht.float) - 4
        b = a.reshape((3, 3))
        b0 = a.reshape((3, 3), new_split=0)
        b1 = a.reshape((3, 3), new_split=1)

        # different ord
        mn = ht.linalg.matrix_norm(b, ord="fro")
        self.assertEqual(mn.split, b.split)
        self.assertEqual(mn.dtype, b.dtype)
        self.assertEqual(mn.device, b.device)
        self.assertTrue(ht.allclose(mn, ht.array(7.745966692414834)))

        mn = ht.linalg.matrix_norm(b0, ord=1)
        self.assertEqual(mn.split, b.split)
        self.assertEqual(mn.dtype, b.dtype)
        self.assertEqual(mn.device, b.device)
        self.assertEqual(mn.item(), 7.0)

        mn = ht.linalg.matrix_norm(b0, ord=-1)
        self.assertEqual(mn.split, b.split)
        self.assertEqual(mn.dtype, b.dtype)
        self.assertEqual(mn.device, b.device)
        self.assertEqual(mn.item(), 6.0)

        mn = ht.linalg.matrix_norm(b1)
        self.assertEqual(mn.split, b.split)
        self.assertEqual(mn.dtype, b.dtype)
        self.assertEqual(mn.device, b.device)
        self.assertTrue(ht.allclose(mn, ht.array(7.745966692414834)))

        # higher dimension + different dtype
        m = ht.arange(8).reshape(2, 2, 2)
        mn = ht.linalg.matrix_norm(m, axis=(2, 1), ord=ht.inf)
        self.assertEqual(mn.split, m.split)
        self.assertEqual(mn.dtype, ht.float)
        self.assertEqual(mn.device, m.device)
        self.assertTrue(ht.equal(mn, ht.array([4.0, 12.0])))

        mn = ht.linalg.matrix_norm(m, axis=(2, 1), ord=-ht.inf)
        self.assertEqual(mn.split, m.split)
        self.assertEqual(mn.dtype, ht.float)
        self.assertEqual(mn.device, m.device)
        self.assertTrue(ht.equal(mn, ht.array([2.0, 10.0])))

        # too many axes to infer
        with self.assertRaises(ValueError):
            ht.linalg.matrix_norm(ht.ones((2, 2, 2)))
        # bad axis
        with self.assertRaises(TypeError):
            ht.linalg.matrix_norm(ht.ones((2, 2)), axis=1)
        with self.assertRaises(TypeError):
            ht.linalg.matrix_norm(ht.ones(2, 2), axis=(1, 2, 3))
        # bad array
        with self.assertRaises(ValueError):
            ht.linalg.matrix_norm(ht.array([1, 2, 3]))
        # bad ord
        with self.assertRaises(ValueError):
            ht.linalg.matrix_norm(ht.ones((2, 2)), ord=3)
        # not implemented yet; SVD needed
        with self.assertRaises(NotImplementedError):
            ht.linalg.matrix_norm(ht.ones((2, 2)), ord=2)
        with self.assertRaises(NotImplementedError):
            ht.linalg.matrix_norm(ht.ones((2, 2)), ord=-2)
        with self.assertRaises(NotImplementedError):
            ht.linalg.matrix_norm(ht.ones((2, 2)), ord="nuc")
def test_norm(self):
a = ht.arange(9, dtype=ht.float) - 4
a0 = ht.array([1 + 1j, 2 - 2j, 0 + 1j, 2 + 1j], dtype=ht.complex64, split=0)
b = a.reshape((3, 3))
b0 = a.reshape((3, 3), new_split=0)
b1 = a.reshape((3, 3), new_split=1)
# vectors
gn = ht.linalg.norm(a, axis=0, ord=1)
self.assertEqual(gn.split, a.split)
self.assertEqual(gn.dtype, a.dtype)
self.assertEqual(gn.device, a.device)
self.assertEqual(gn.item(), 20.0)
# complex type
gn = ht.linalg.norm(a0, keepdims=True)
self.assertEqual(gn.split, None)
self.assertEqual(gn.dtype, ht.float)
self.assertEqual(gn.device, a0.device)
self.assertEqual(gn.item(), 4.0)
# matrices
gn = ht.linalg.norm(b, ord="fro")
self.assertEqual(gn.split, None)
self.assertEqual(gn.dtype, b.dtype)
self.assertEqual(gn.device, b.device)
self.assertTrue(ht.allclose(gn, ht.array(7.745966692414834)))
gn = ht.linalg.norm(b0, ord=ht.inf)
self.assertEqual(gn.split, None)
self.assertEqual(gn.dtype, b0.dtype)
self.assertEqual(gn.device, b0.device)
self.assertEqual(gn.item(), 9.0)
gn = ht.linalg.norm(b1, axis=(0,), ord=-ht.inf, keepdims=True)
self.assertEqual(gn.split, b1.split)
self.assertEqual(gn.dtype, b1.dtype)
self.assertEqual(gn.device, b1.device)
self.assertTrue(ht.equal(gn, ht.array([[1.0, 0.0, 1.0]])))
# higher dimension + different dtype
gn = ht.linalg.norm(ht.ones((3, 3, 3), dtype=ht.int), axis=(-2, -1))
self.assertEqual(gn.split, None)
self.assertEqual(gn.dtype, ht.float)
self.assertTrue(ht.equal(gn, ht.array([3.0, 3.0, 3.0])))
# bad axis
with self.assertRaises(ValueError):
ht.linalg.norm(ht.ones(2), axis=(0, 1, 2))
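# Worked check for the values asserted above: a = [-4, ..., 4], so the
# vector 1-norm is sum(|a_i|) = 2 * (1 + 2 + 3 + 4) = 20, and the
# Frobenius norm of its (3, 3) reshape is sqrt(2 * (16 + 9 + 4 + 1))
# = sqrt(60) ~= 7.7460.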
def test_outer(self):
# test outer, a and b local, different dtypes
a = ht.arange(3, dtype=ht.int32)
b = ht.arange(8, dtype=ht.float32)
ht_outer = ht.outer(a, b, split=None)
np_outer = np.outer(a.numpy(), b.numpy())
t_outer = torch.einsum("i,j->ij", a.larray, b.larray)
self.assertTrue((ht_outer.numpy() == np_outer).all())
self.assertTrue(ht_outer.larray.dtype is t_outer.dtype)
# test outer, a and b distributed, no data on some ranks
a_split = ht.arange(3, dtype=ht.float32, split=0)
b_split = ht.arange(8, dtype=ht.float32, split=0)
ht_outer_split = ht.outer(a_split, b_split, split=None)
# a and b split 0, outer split 1
ht_outer_split = ht.outer(a_split, b_split, split=1)
self.assertTrue(ht_outer_split.split == 1)
self.assertTrue((ht_outer_split.numpy() == np_outer).all())
# a and b distributed, outer split unspecified
ht_outer_split = ht.outer(a_split, b_split, split=None)
self.assertTrue(ht_outer_split.split == 0)
self.assertTrue((ht_outer_split.numpy() == np_outer).all())
# a not distributed, outer.split = 1
ht_outer_split = ht.outer(a, b_split, split=1)
self.assertTrue(ht_outer_split.split == 1)
self.assertTrue((ht_outer_split.numpy() == np_outer).all())
# b not distributed, outer.split = 0
ht_outer_split = ht.outer(a_split, b, split=0)
self.assertTrue(ht_outer_split.split == 0)
self.assertTrue((ht_outer_split.numpy() == np_outer).all())
# a_split.ndim > 1 and a.split != 0
a_split_3d = ht.random.randn(3, 3, 3, dtype=ht.float64, split=2)
ht_outer_split = ht.outer(a_split_3d, b_split)
np_outer_3d = np.outer(a_split_3d.numpy(), b_split.numpy())
self.assertTrue(ht_outer_split.split == 0)
self.assertTrue((ht_outer_split.numpy() == np_outer_3d).all())
# write to out buffer
ht_out = ht.empty((a.gshape[0], b.gshape[0]), dtype=ht.float32)
ht.outer(a, b, out=ht_out)
self.assertTrue((ht_out.numpy() == np_outer).all())
ht_out_split = ht.empty((a_split.gshape[0], b_split.gshape[0]), dtype=ht.float32, split=1)
ht.outer(a_split, b_split, out=ht_out_split, split=1)
self.assertTrue((ht_out_split.numpy() == np_outer).all())
# test exceptions
t_a = torch.arange(3)
with self.assertRaises(TypeError):
ht.outer(t_a, b)
np_b = np.arange(8)
with self.assertRaises(TypeError):
ht.outer(a, np_b)
a_0d = ht.array(2.3)
with self.assertRaises(RuntimeError):
ht.outer(a_0d, b)
t_out = torch.empty((a.gshape[0], b.gshape[0]), dtype=torch.float32)
with self.assertRaises(TypeError):
ht.outer(a, b, out=t_out)
ht_out_wrong_shape = ht.empty((7, b.gshape[0]), dtype=ht.float32)
with self.assertRaises(ValueError):
ht.outer(a, b, out=ht_out_wrong_shape)
ht_out_wrong_split = ht.empty(
(a_split.gshape[0], b_split.gshape[0]), dtype=ht.float32, split=1
)
with self.assertRaises(ValueError):
ht.outer(a_split, b_split, out=ht_out_wrong_split, split=0)
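# Split semantics assumed above: for 1-D inputs, outer(a, b) has shape
# (a.gshape[0], b.gshape[0]); split=0 distributes the rows (a's axis),
# split=1 the columns (b's axis), and split=None infers the split from
# the inputs (here split 0, since a_split is distributed along axis 0).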
def test_projection(self):
a = ht.arange(1, 4, dtype=ht.float32, split=None)
e1 = ht.array([1, 0, 0], dtype=ht.float32, split=None)
self.assertTrue(ht.equal(ht.linalg.projection(a, e1), e1))
a.resplit_(axis=0)
self.assertTrue(ht.equal(ht.linalg.projection(a, e1), e1))
e2 = ht.array([0, 1, 0], dtype=ht.float32, split=0)
self.assertTrue(ht.equal(ht.linalg.projection(a, e2), e2 * 2))
a = ht.arange(1, 4, dtype=ht.float32, split=None)
e3 = ht.array([0, 0, 1], dtype=ht.float32, split=0)
self.assertTrue(ht.equal(ht.linalg.projection(a, e3), e3 * 3))
a = np.arange(1, 4)
with self.assertRaises(TypeError):
ht.linalg.projection(a, e1)
a = ht.array([[1], [2], [3]], dtype=ht.float32, split=None)
with self.assertRaises(RuntimeError):
ht.linalg.projection(a, e1)
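# The projection tested above is the standard vector projection,
# proj_b(a) = (a . b / b . b) * b; e.g. for a = [1, 2, 3] and
# e2 = [0, 1, 0] this gives (2 / 1) * e2 = 2 * e2, matching the assertion.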
def test_trace(self):
# ------------------------------------------------
# UNDISTRIBUTED CASE
# ------------------------------------------------
# CASE 2-D
# ------------------------------------------------
x = ht.arange(24).reshape((6, 4))
x_np = x.numpy()
dtype = ht.float32
result = ht.trace(x)
result_np = np.trace(x_np)
self.assertIsInstance(result, int)
self.assertEqual(result, result_np)
# direct call
result = x.trace()
self.assertIsInstance(result, int)
self.assertEqual(result, result_np)
# input = array_like (other than DNDarray)
result = ht.trace(x.tolist())
self.assertIsInstance(result, int)
self.assertEqual(result, result_np)
# dtype
result = ht.trace(x, dtype=dtype)
result_np = np.trace(x_np, dtype=np.float32)
self.assertIsInstance(result, float)
self.assertEqual(result, result_np)
# offset != 0
# negative offset
o = -(x.gshape[0] - 1)
result = ht.trace(x, offset=o)
result_np = np.trace(x_np, offset=o)
self.assertIsInstance(result, int)
self.assertEqual(result, result_np)
# positive offset
o = x.gshape[1] - 1
result = ht.trace(x, offset=o)
result_np = np.trace(x_np, offset=o)
self.assertIsInstance(result, int)
self.assertEqual(result, result_np)
# offset resulting in an empty array
# negative
o = -x.gshape[0]
result = ht.trace(x, offset=o)
result_np = np.trace(x_np, offset=o)
self.assertIsInstance(result, int)
self.assertEqual(result, 0)
self.assertEqual(result, result_np)
# positive
o = x.gshape[1]
result = ht.trace(x, offset=o)
result_np = np.trace(x_np, offset=o)
self.assertIsInstance(result, int)
self.assertEqual(result, 0)
self.assertEqual(result, result_np)
# Exceptions
with self.assertRaises(TypeError):
x = "[[1, 2], [3, 4]]"
ht.trace(x)
with self.assertRaises(ValueError):
x = ht.arange(24)
ht.trace(x)
with self.assertRaises(TypeError):
x = ht.arange(24).reshape((6, 4))
ht.trace(x, axis1=0.2)
with self.assertRaises(TypeError):
ht.trace(x, axis2=1.4)
with self.assertRaises(ValueError):
ht.trace(x, axis1=2)
with self.assertRaises(ValueError):
ht.trace(x, axis2=2)
with self.assertRaises(TypeError):
ht.trace(x, offset=1.2)
with self.assertRaises(ValueError):
ht.trace(x, axis1=1, axis2=1)
with self.assertRaises(ValueError):
ht.trace(x, dtype="ht.int64")
with self.assertRaises(TypeError):
ht.trace(x, out=[])
with self.assertRaises(ValueError):
# invalid out buffer: the result is a scalar
out = ht.array([])
ht.trace(x, out=out)
with self.assertRaises(ValueError):
ht.trace(x, dtype="ht.float32")
# ------------------------------------------------
# CASE > 2-D (4D)
# ------------------------------------------------
x = ht.arange(24).reshape((1, 2, 3, 4))
x_np = x.numpy()
out = ht.empty((3, 4))
axis1 = 1
axis2 = 3
result = ht.trace(x)
result_np = np.trace(x_np)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# input = array_like (other than DNDarray)
result = ht.trace(x.tolist())
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# out
result = ht.trace(x, out=out)
result_np = np.trace(x_np)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
self.assert_array_equal(out, result_np)
result = ht.trace(x, axis1=axis1, axis2=axis2)
result_np = np.trace(x_np, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# reversed axes order
result = ht.trace(x, axis1=axis2, axis2=axis1)
result_np = np.trace(x_np, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# negative axes
axis1 = 1
axis2 = 2
result = ht.trace(x, axis1=axis1, axis2=-axis2)
result_np = np.trace(x_np, axis1=axis1, axis2=-axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
result = ht.trace(x, axis1=-axis1, axis2=axis2)
result_np = np.trace(x_np, axis1=-axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
result = ht.trace(x, axis1=-axis1, axis2=-axis2)
result_np = np.trace(x_np, axis1=-axis1, axis2=-axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# different axes
axis1 = 1
axis2 = 2
o = 0
result = ht.trace(x, offset=o, axis1=axis1, axis2=axis2, dtype=dtype)
result_np = np.trace(x_np, offset=o, axis1=axis1, axis2=axis2, dtype=np.float32)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# offset != 0
# negative offset
o = -(x.gshape[0] - 1)
result = ht.trace(x, offset=o)
result_np = np.trace(x_np, offset=o)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# positive offset
o = x.gshape[1] - 1
result = ht.trace(x, offset=o)
result_np = np.trace(x_np, offset=o)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# offset resulting in a zero array
axis1 = 1
axis2 = 2
# negative
o = -x.gshape[axis1]
result = ht.trace(x, offset=o, axis1=axis1, axis2=axis2)
result_np = np.trace(x_np, offset=o, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, np.zeros((1, 4)))
self.assert_array_equal(result, result_np)
# positive
o = x.gshape[axis2]
result = ht.trace(x, offset=o, axis1=axis1, axis2=axis2)
result_np = np.trace(x_np, offset=o, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, np.zeros((1, 4)))
self.assert_array_equal(result, result_np)
# Exceptions
with self.assertRaises(ValueError):
out = ht.array([])
ht.trace(x, out=out)
# ------------------------------------------------
# DISTRIBUTED CASE
# ------------------------------------------------
# CASE 2-D
# ------------------------------------------------
x = ht.arange(24, split=0).reshape((6, 4))
x_np = np.arange(24).reshape((6, 4))
dtype = ht.float32
result = ht.trace(x)
result_np = np.trace(x_np)
self.assertIsInstance(result, int)
self.assertEqual(result, result_np)
# different split axis
x_2 = ht.array(torch.arange(24).reshape((6, 4)), split=1)
result = ht.trace(x_2)
result_np = np.trace(x_np)
self.assertIsInstance(result, int)
self.assertEqual(result, result_np)
# input = array_like (other than DNDarray)
result = ht.trace(x.tolist())
self.assertIsInstance(result, int)
self.assertEqual(result, result_np)
# dtype
result = ht.trace(x, dtype=dtype)
result_np = np.trace(x_np, dtype=np.float32)
self.assertIsInstance(result, float)
self.assertEqual(result, result_np)
# offset != 0
# negative offset
o = -(x.gshape[0] - 1)
result = ht.trace(x, offset=o)
result_np = np.trace(x_np, offset=o)
self.assertIsInstance(result, int)
self.assertEqual(result, result_np)
# positive offset
o = x.gshape[1] - 1
result = ht.trace(x, offset=o)
result_np = np.trace(x_np, offset=o)
self.assertIsInstance(result, int)
self.assertEqual(result, result_np)
# offset resulting in an empty array
# negative
o = -x.gshape[0]
result = ht.trace(x, offset=o)
result_np = np.trace(x_np, offset=o)
self.assertIsInstance(result, int)
self.assertEqual(result, 0)
self.assertEqual(result, result_np)
# positive
o = x.gshape[1]
result = ht.trace(x, offset=o)
result_np = np.trace(x_np, offset=o)
self.assertIsInstance(result, int)
self.assertEqual(result, 0)
self.assertEqual(result, result_np)
# Exceptions
with self.assertRaises(TypeError):
x = "[[1, 2], [3, 4]]"
ht.trace(x)
with self.assertRaises(ValueError):
x = ht.arange(24)
ht.trace(x)
with self.assertRaises(TypeError):
x = ht.arange(24).reshape((6, 4))
ht.trace(x, axis1=0.2)
with self.assertRaises(TypeError):
ht.trace(x, axis2=1.4)
with self.assertRaises(ValueError):
ht.trace(x, axis1=2)
with self.assertRaises(ValueError):
ht.trace(x, axis2=2)
with self.assertRaises(TypeError):
ht.trace(x, offset=1.2)
with self.assertRaises(ValueError):
ht.trace(x, axis1=1, axis2=1)
with self.assertRaises(ValueError):
ht.trace(x, dtype="ht.int64")
with self.assertRaises(TypeError):
ht.trace(x, out=[])
with self.assertRaises(ValueError):
# invalid out buffer: the result is a scalar
out = ht.array([])
ht.trace(x, out=out)
# ------------------------------------------------
# CASE > 2-D (4D)
# ------------------------------------------------
x = ht.arange(24, split=0).reshape((1, 2, 3, 4))
x_np = x.numpy()
# ------------------------------------------------
# CASE split axis NOT in (axis1, axis2)
# ------------------------------------------------
axis1 = 1
axis2 = 2
out = ht.empty((1, 4), split=0, dtype=x.dtype)
result = ht.trace(x, axis1=axis1, axis2=axis2)
result_np = np.trace(x_np, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# input = array_like (other than DNDarray)
result = ht.trace(x.tolist(), axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# out
result = ht.trace(x, out=out, axis1=axis1, axis2=axis2)
result_np = np.trace(x_np, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
self.assert_array_equal(out, result_np)
# reversed axes order
result = ht.trace(x, axis1=axis2, axis2=axis1)
result_np = np.trace(x_np, axis1=axis2, axis2=axis1)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# different axes (still not containing the split axis, x.split = 0)
axis1 = 1
axis2 = 3
result = ht.trace(x, offset=0, axis1=axis1, axis2=axis2, dtype=dtype)
result_np = np.trace(x_np, offset=0, axis1=axis1, axis2=axis2, dtype=np.float32)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# negative axes
axis1 = 1
axis2 = 2
result = ht.trace(x, axis1=axis1, axis2=-axis2)
result_np = np.trace(x_np, axis1=axis1, axis2=-axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
result = ht.trace(x, axis1=-axis1, axis2=axis2)
result_np = np.trace(x_np, axis1=-axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
result = ht.trace(x, axis1=-axis1, axis2=-axis2)
result_np = np.trace(x_np, axis1=-axis1, axis2=-axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# offset != 0
# negative offset
axis1 = 1
axis2 = 2
o = -(x.gshape[axis1] - 1)
result = ht.trace(x, offset=o, axis1=axis1, axis2=axis2)
result_np = np.trace(x_np, offset=o, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# positive offset
o = x.gshape[axis2] - 1
result = ht.trace(x, offset=o, axis1=axis1, axis2=axis2)
result_np = np.trace(x_np, offset=o, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# offset resulting in a zero array
axis1 = 1
axis2 = 2
# negative
o = -x.gshape[axis1]
result = ht.trace(x, offset=o, axis1=axis1, axis2=axis2)
result_np = np.trace(x_np, offset=o, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, np.zeros((1, 4)))
self.assert_array_equal(result, result_np)
# positive
o = x.gshape[axis2]
result = ht.trace(x, offset=o, axis1=axis1, axis2=axis2)
result_np = np.trace(x_np, offset=o, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, np.zeros((1, 4)))
self.assert_array_equal(result, result_np)
# different split axis (that is still not in (axis1, axis2))
x = ht.arange(24).reshape((1, 2, 3, 4, 1))
x = ht.array(x, split=2, dtype=dtype)
x_np = x.numpy()
axis1 = 0
axis2 = 1
out = ht.empty((3, 4, 1), split=2, dtype=x.dtype)
result = ht.trace(x, axis1=axis1, axis2=axis2, out=out)
result_np = np.trace(x_np, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
self.assert_array_equal(out, result_np)
# different split axis (that is still not in (axis1, axis2))
x = ht.arange(24).reshape((1, 2, 3, 4, 1))
x = ht.array(x, split=3, dtype=dtype)
x_np = x.numpy()
axis1 = 2
axis2 = 4
out = ht.empty((1, 2, 4), split=1, dtype=x.dtype)
result = ht.trace(x, axis1=axis1, axis2=axis2, out=out)
result_np = np.trace(x_np, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# Exceptions
with self.assertRaises(ValueError):
out = ht.array([])
ht.trace(x, out=out, axis1=axis1, axis2=axis2)
# ------------------------------------------------
# CASE split axis IN (axis1, axis2)
# ------------------------------------------------
x = ht.arange(24).reshape((1, 2, 3, 4))
split_axis = 1
x = ht.array(x, split=split_axis, dtype=dtype)
x_np = x.numpy()
axis1 = 1
axis2 = 2
result_shape = list(x.gshape)
del result_shape[axis1], result_shape[axis2 - 1]
out = ht.empty(tuple(result_shape), split=split_axis, dtype=x.dtype)
result = ht.trace(x, axis1=axis1, axis2=axis2)
result_np = np.trace(x_np, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# input = array_like (other than DNDarray)
result = ht.trace(x.tolist(), axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# out
result = ht.trace(x, out=out, axis1=axis1, axis2=axis2)
result_np = np.trace(x_np, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
self.assert_array_equal(out, result_np)
# reversed axes order
result = ht.trace(x, axis1=axis2, axis2=axis1)
result_np = np.trace(x_np, axis1=axis2, axis2=axis1)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# axis2 = x.split
axis1 = 0
axis2 = 1
result = ht.trace(x, axis1=axis1, axis2=axis2)
result_np = np.trace(x_np, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# offset != 0
# negative offset
o = -(x.gshape[0] - 1)
result = ht.trace(x, offset=o, axis1=axis1, axis2=axis2)
result_np = np.trace(x_np, offset=o, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# positive offset
o = x.gshape[1] - 1
result = ht.trace(x, offset=o, axis1=axis1, axis2=axis2)
result_np = np.trace(x_np, offset=o, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# different axes
axis1 = 1
axis2 = 2
result_shape = list(x.gshape)
del result_shape[axis1], result_shape[axis2 - 1]
o = 0
result = ht.trace(x, offset=o, axis1=axis1, axis2=axis2, dtype=dtype)
result_np = np.trace(x_np, offset=o, axis1=axis1, axis2=axis2, dtype=np.float32)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, result_np)
# offset resulting in a zero array
# negative
o = -x.gshape[axis1]
result = ht.trace(x, offset=o, axis1=axis1, axis2=axis2)
result_np = np.trace(x_np, offset=o, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, np.zeros(result_shape, dtype=result_np.dtype))
self.assert_array_equal(result, result_np)
# positive
o = x.gshape[axis2]
result = ht.trace(x, offset=o, axis1=axis1, axis2=axis2)
result_np = np.trace(x_np, offset=o, axis1=axis1, axis2=axis2)
self.assertIsInstance(result, ht.DNDarray)
self.assert_array_equal(result, np.zeros(result_shape, dtype=result_np.dtype))
self.assert_array_equal(result, result_np)
# Exceptions
with self.assertRaises(ValueError):
out = ht.array([])
ht.trace(x, out=out, axis1=axis1, axis2=axis2)
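# Offset convention assumed throughout test_trace: trace(x, offset=k)
# sums x[..., i, i + k, ...] over the two trace axes, so an offset of
# k <= -x.gshape[axis1] or k >= x.gshape[axis2] selects no diagonal
# elements at all and yields zero(s), as checked above.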
def test_transpose(self):
# vector transpose, not distributed
vector = ht.arange(10)
vector_t = vector.T
self.assertIsInstance(vector_t, ht.DNDarray)
self.assertEqual(vector_t.dtype, ht.int32)
self.assertEqual(vector_t.split, None)
self.assertEqual(vector_t.shape, (10,))
# simple matrix transpose, not distributed
simple_matrix = ht.zeros((2, 4))
simple_matrix_t = simple_matrix.transpose()
self.assertIsInstance(simple_matrix_t, ht.DNDarray)
self.assertEqual(simple_matrix_t.dtype, ht.float32)
self.assertEqual(simple_matrix_t.split, None)
self.assertEqual(simple_matrix_t.shape, (4, 2))
self.assertEqual(simple_matrix_t.larray.shape, (4, 2))
# 4D array, not distributed, with given axis
array_4d = ht.zeros((2, 3, 4, 5))
array_4d_t = ht.transpose(array_4d, axes=(-1, 0, 2, 1))
self.assertIsInstance(array_4d_t, ht.DNDarray)
self.assertEqual(array_4d_t.dtype, ht.float32)
self.assertEqual(array_4d_t.split, None)
self.assertEqual(array_4d_t.shape, (5, 2, 4, 3))
self.assertEqual(array_4d_t.larray.shape, (5, 2, 4, 3))
# vector transpose, distributed
vector_split = ht.arange(10, split=0)
vector_split_t = vector_split.T
self.assertIsInstance(vector_split_t, ht.DNDarray)
self.assertEqual(vector_split_t.dtype, ht.int32)
self.assertEqual(vector_split_t.split, 0)
self.assertEqual(vector_split_t.shape, (10,))
self.assertLessEqual(vector_split_t.lshape[0], 10)
# matrix transpose, distributed
matrix_split = ht.ones((10, 20), split=1)
matrix_split_t = matrix_split.transpose()
self.assertIsInstance(matrix_split_t, ht.DNDarray)
self.assertEqual(matrix_split_t.dtype, ht.float32)
self.assertEqual(matrix_split_t.split, 0)
self.assertEqual(matrix_split_t.shape, (20, 10))
self.assertLessEqual(matrix_split_t.lshape[0], 20)
self.assertEqual(matrix_split_t.lshape[1], 10)
# 4D array, distributed
array_4d_split = ht.ones((3, 4, 5, 6), split=3)
array_4d_split_t = ht.transpose(array_4d_split, axes=(1, 0, 3, 2))
self.assertIsInstance(array_4d_split_t, ht.DNDarray)
self.assertEqual(array_4d_split_t.dtype, ht.float32)
self.assertEqual(array_4d_split_t.split, 2)
self.assertEqual(array_4d_split_t.shape, (4, 3, 6, 5))
self.assertEqual(array_4d_split_t.lshape[0], 4)
self.assertEqual(array_4d_split_t.lshape[1], 3)
self.assertLessEqual(array_4d_split_t.lshape[2], 6)
self.assertEqual(array_4d_split_t.lshape[3], 5)
# exceptions
with self.assertRaises(TypeError):
ht.transpose(1)
with self.assertRaises(ValueError):
ht.transpose(ht.zeros((2, 3)), axes=1.0)
with self.assertRaises(ValueError):
ht.transpose(ht.zeros((2, 3)), axes=(-1,))
with self.assertRaises(TypeError):
ht.zeros((2, 3)).transpose(axes="01")
with self.assertRaises(TypeError):
ht.zeros((2, 3)).transpose(axes=(0, 1.0))
with self.assertRaises((ValueError, IndexError)):
ht.zeros((2, 3)).transpose(axes=(0, 3))
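# Split mapping assumed above: transposing permutes the split axis along
# with the data, so new_split = axes.index(old_split); e.g. with split=3
# and axes=(1, 0, 3, 2), axis 3 moves to position 2, hence split == 2.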
def test_tril(self):
local_ones = ht.ones((5,))
# 1D case, no offset, data is not split, module-level call
result = ht.tril(local_ones)
comparison = torch.ones((5, 5), device=self.device.torch_device).tril()
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (5, 5))
self.assertEqual(result.lshape, (5, 5))
self.assertEqual(result.split, None)
self.assertTrue((result.larray == comparison).all())
# 1D case, positive offset, data is not split, module-level call
result = ht.tril(local_ones, k=2)
comparison = torch.ones((5, 5), device=self.device.torch_device).tril(diagonal=2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (5, 5))
self.assertEqual(result.lshape, (5, 5))
self.assertEqual(result.split, None)
self.assertTrue((result.larray == comparison).all())
# 1D case, negative offset, data is not split, module-level call
result = ht.tril(local_ones, k=-2)
comparison = torch.ones((5, 5), device=self.device.torch_device).tril(diagonal=-2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (5, 5))
self.assertEqual(result.lshape, (5, 5))
self.assertEqual(result.split, None)
self.assertTrue((result.larray == comparison).all())
local_ones = ht.ones((4, 5))
# 2D case, no offset, data is not split, method
result = local_ones.tril()
comparison = torch.ones((4, 5), device=self.device.torch_device).tril()
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (4, 5))
self.assertEqual(result.lshape, (4, 5))
self.assertEqual(result.split, None)
self.assertTrue((result.larray == comparison).all())
# 2D case, positive offset, data is not split, method
result = local_ones.tril(k=2)
comparison = torch.ones((4, 5), device=self.device.torch_device).tril(diagonal=2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (4, 5))
self.assertEqual(result.lshape, (4, 5))
self.assertEqual(result.split, None)
self.assertTrue((result.larray == comparison).all())
# 2D case, negative offset, data is not split, method
result = local_ones.tril(k=-2)
comparison = torch.ones((4, 5), device=self.device.torch_device).tril(diagonal=-2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (4, 5))
self.assertEqual(result.lshape, (4, 5))
self.assertEqual(result.split, None)
self.assertTrue((result.larray == comparison).all())
local_ones = ht.ones((3, 4, 5, 6))
# 2D+ case, no offset, data is not split, method
result = local_ones.tril()
comparison = torch.ones((5, 6), device=self.device.torch_device).tril()
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (3, 4, 5, 6))
self.assertEqual(result.lshape, (3, 4, 5, 6))
self.assertEqual(result.split, None)
for i in range(3):
for j in range(4):
self.assertTrue((result.larray[i, j] == comparison).all())
# 2D+ case, positive offset, data is not split, method
result = local_ones.tril(k=2)
comparison = torch.ones((5, 6), device=self.device.torch_device).tril(diagonal=2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (3, 4, 5, 6))
self.assertEqual(result.lshape, (3, 4, 5, 6))
self.assertEqual(result.split, None)
for i in range(3):
for j in range(4):
self.assertTrue((result.larray[i, j] == comparison).all())
# 2D+ case, negative offset, data is not split, method
result = local_ones.tril(k=-2)
comparison = torch.ones((5, 6), device=self.device.torch_device).tril(diagonal=-2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (3, 4, 5, 6))
self.assertEqual(result.lshape, (3, 4, 5, 6))
self.assertEqual(result.split, None)
for i in range(3):
for j in range(4):
self.assertTrue((result.larray[i, j] == comparison).all())
distributed_ones = ht.ones((5,), split=0)
# 1D case, no offset, data is split, method
result = distributed_ones.tril()
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (5, 5))
self.assertEqual(result.split, 1)
self.assertTrue(result.lshape[0] == 5 or result.lshape[0] == 0)
self.assertLessEqual(result.lshape[1], 5)
self.assertEqual(result.sum(), 15)
if result.comm.rank == 0:
self.assertTrue(result.larray[-1, 0] == 1)
if result.comm.rank == result.shape[0] - 1:
self.assertTrue(result.larray[0, -1] == 0)
# 1D case, positive offset, data is split, method
result = distributed_ones.tril(k=2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (5, 5))
self.assertEqual(result.split, 1)
self.assertEqual(result.lshape[0], 5)
self.assertLessEqual(result.lshape[1], 5)
self.assertEqual(result.sum(), 22)
if result.comm.rank == 0:
self.assertTrue(result.larray[-1, 0] == 1)
if result.comm.rank == result.shape[0] - 1:
self.assertTrue(result.larray[0, -1] == 0)
# 1D case, negative offset, data is split, method
result = distributed_ones.tril(k=-2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (5, 5))
self.assertEqual(result.split, 1)
self.assertEqual(result.lshape[0], 5)
self.assertLessEqual(result.lshape[1], 5)
self.assertEqual(result.sum(), 6)
if result.comm.rank == 0:
self.assertTrue(result.larray[-1, 0] == 1)
if result.comm.rank == result.shape[0] - 1:
self.assertTrue(result.larray[0, -1] == 0)
distributed_ones = ht.ones((4, 5), split=0)
# 2D case, no offset, data is horizontally split, method
result = distributed_ones.tril()
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (4, 5))
self.assertEqual(result.split, 0)
self.assertLessEqual(result.lshape[0], 4)
self.assertEqual(result.lshape[1], 5)
self.assertEqual(result.sum(), 10)
if result.comm.rank == 0:
self.assertTrue(result.larray[0, -1] == 0)
if result.comm.rank == result.shape[0] - 1:
self.assertTrue(result.larray[-1, 0] == 1)
# 2D case, positive offset, data is horizontally split, method
result = distributed_ones.tril(k=2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (4, 5))
self.assertEqual(result.split, 0)
self.assertLessEqual(result.lshape[0], 4)
self.assertEqual(result.lshape[1], 5)
self.assertEqual(result.sum(), 17)
if result.comm.rank == 0:
self.assertTrue(result.larray[0, -1] == 0)
if result.comm.rank == result.shape[0] - 1:
self.assertTrue(result.larray[-1, 0] == 1)
# 2D case, negative offset, data is horizontally split, method
result = distributed_ones.tril(k=-2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (4, 5))
self.assertEqual(result.split, 0)
self.assertLessEqual(result.lshape[0], 4)
self.assertEqual(result.lshape[1], 5)
self.assertEqual(result.sum(), 3)
if result.comm.rank == 0:
self.assertTrue(result.larray[0, -1] == 0)
if result.comm.rank == result.shape[0] - 1:
self.assertTrue(result.larray[-1, 0] == 1)
distributed_ones = ht.ones((4, 5), split=1)
# 2D case, no offset, data is vertically split, method
result = distributed_ones.tril()
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (4, 5))
self.assertEqual(result.split, 1)
self.assertEqual(result.lshape[0], 4)
self.assertLessEqual(result.lshape[1], 5)
self.assertEqual(result.sum(), 10)
if result.comm.rank == 0:
self.assertTrue(result.larray[-1, 0] == 1)
if result.comm.rank == result.shape[0] - 1:
self.assertTrue(result.larray[0, -1] == 0)
# 2D case, positive offset, data is vertically split, method
result = distributed_ones.tril(k=2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (4, 5))
self.assertEqual(result.split, 1)
self.assertEqual(result.lshape[0], 4)
self.assertLessEqual(result.lshape[1], 5)
self.assertEqual(result.sum(), 17)
if result.comm.rank == 0:
self.assertTrue(result.larray[-1, 0] == 1)
if result.comm.rank == result.shape[0] - 1:
self.assertTrue(result.larray[0, -1] == 0)
# 2D case, negative offset, data is vertically split, method
result = distributed_ones.tril(k=-2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (4, 5))
self.assertEqual(result.split, 1)
self.assertEqual(result.lshape[0], 4)
self.assertLessEqual(result.lshape[1], 5)
self.assertEqual(result.sum(), 3)
if result.comm.rank == 0:
self.assertTrue(result.larray[-1, 0] == 1)
if result.comm.rank == result.shape[0] - 1:
self.assertTrue(result.larray[0, -1] == 0)
with self.assertRaises(TypeError):
ht.tril("asdf")
with self.assertRaises(TypeError):
ht.tril(distributed_ones, m=["sdf", "sf"])
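# Worked check for the sums asserted above: tril of an all-ones matrix
# keeps the entries with j <= i + k, so a (5, 5) matrix keeps 15 (k=0),
# 22 (k=2) or 6 (k=-2) ones, and a (4, 5) matrix keeps 10, 17 or 3.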
def test_triu(self):
local_ones = ht.ones((5,))
# 1D case, no offset, data is not split, module-level call
result = ht.triu(local_ones)
comparison = torch.ones((5, 5), device=self.device.torch_device).triu()
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (5, 5))
self.assertEqual(result.lshape, (5, 5))
self.assertEqual(result.split, None)
self.assertTrue((result.larray == comparison).all())
# 1D case, positive offset, data is not split, module-level call
result = ht.triu(local_ones, k=2)
comparison = torch.ones((5, 5), device=self.device.torch_device).triu(diagonal=2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (5, 5))
self.assertEqual(result.lshape, (5, 5))
self.assertEqual(result.split, None)
self.assertTrue((result.larray == comparison).all())
# 1D case, negative offset, data is not split, module-level call
result = ht.triu(local_ones, k=-2)
comparison = torch.ones((5, 5), device=self.device.torch_device).triu(diagonal=-2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (5, 5))
self.assertEqual(result.lshape, (5, 5))
self.assertEqual(result.split, None)
self.assertTrue((result.larray == comparison).all())
local_ones = ht.ones((4, 5))
# 2D case, no offset, data is not split, method
result = local_ones.triu()
comparison = torch.ones((4, 5), device=self.device.torch_device).triu()
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (4, 5))
self.assertEqual(result.lshape, (4, 5))
self.assertEqual(result.split, None)
self.assertTrue((result.larray == comparison).all())
# 2D case, positive offset, data is not split, method
result = local_ones.triu(k=2)
comparison = torch.ones((4, 5), device=self.device.torch_device).triu(diagonal=2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (4, 5))
self.assertEqual(result.lshape, (4, 5))
self.assertEqual(result.split, None)
self.assertTrue((result.larray == comparison).all())
# 2D case, negative offset, data is not split, method
result = local_ones.triu(k=-2)
comparison = torch.ones((4, 5), device=self.device.torch_device).triu(diagonal=-2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (4, 5))
self.assertEqual(result.lshape, (4, 5))
self.assertEqual(result.split, None)
self.assertTrue((result.larray == comparison).all())
local_ones = ht.ones((3, 4, 5, 6))
# 2D+ case, no offset, data is not split, method
result = local_ones.triu()
comparison = torch.ones((5, 6), device=self.device.torch_device).triu()
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (3, 4, 5, 6))
self.assertEqual(result.lshape, (3, 4, 5, 6))
self.assertEqual(result.split, None)
for i in range(3):
for j in range(4):
self.assertTrue((result.larray[i, j] == comparison).all())
# 2D+ case, positive offset, data is not split, method
result = local_ones.triu(k=2)
comparison = torch.ones((5, 6), device=self.device.torch_device).triu(diagonal=2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (3, 4, 5, 6))
self.assertEqual(result.lshape, (3, 4, 5, 6))
self.assertEqual(result.split, None)
for i in range(3):
for j in range(4):
self.assertTrue((result.larray[i, j] == comparison).all())
# 2D+ case, negative offset, data is not split, method
result = local_ones.triu(k=-2)
comparison = torch.ones((5, 6), device=self.device.torch_device).triu(diagonal=-2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (3, 4, 5, 6))
self.assertEqual(result.lshape, (3, 4, 5, 6))
self.assertEqual(result.split, None)
for i in range(3):
for j in range(4):
self.assertTrue((result.larray[i, j] == comparison).all())
distributed_ones = ht.ones((5,), split=0)
# 1D case, no offset, data is split, method
result = distributed_ones.triu()
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (5, 5))
self.assertEqual(result.split, 1)
self.assertEqual(result.lshape[0], 5)
self.assertLessEqual(result.lshape[1], 5)
self.assertEqual(result.sum(), 15)
if result.comm.rank == 0:
self.assertTrue(result.larray[-1, 0] == 0)
if result.comm.rank == result.shape[0] - 1:
self.assertTrue(result.larray[0, -1] == 1)
# 1D case, positive offset, data is split, method
result = distributed_ones.triu(k=2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (5, 5))
self.assertEqual(result.split, 1)
self.assertEqual(result.lshape[0], 5)
self.assertLessEqual(result.lshape[1], 5)
self.assertEqual(result.sum(), 6)
if result.comm.rank == 0:
self.assertTrue(result.larray[-1, 0] == 0)
if result.comm.rank == result.shape[0] - 1:
self.assertTrue(result.larray[0, -1] == 1)
# 1D case, negative offset, data is split, method
result = distributed_ones.triu(k=-2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (5, 5))
self.assertEqual(result.split, 1)
self.assertEqual(result.lshape[0], 5)
self.assertLessEqual(result.lshape[1], 5)
self.assertEqual(result.sum(), 22)
if result.comm.rank == 0:
self.assertTrue(result.larray[-1, 0] == 0)
if result.comm.rank == result.shape[0] - 1:
self.assertTrue(result.larray[0, -1] == 1)
distributed_ones = ht.ones((4, 5), split=0)
# 2D case, no offset, data is horizontally split, method
result = distributed_ones.triu()
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (4, 5))
self.assertEqual(result.split, 0)
self.assertLessEqual(result.lshape[0], 4)
self.assertEqual(result.lshape[1], 5)
self.assertEqual(result.sum(), 14)
if result.comm.rank == 0:
self.assertTrue(result.larray[0, -1] == 1)
if result.comm.rank == result.shape[0] - 1:
self.assertTrue(result.larray[-1, 0] == 0)
# 2D case, positive offset, data is horizontally split, method
result = distributed_ones.triu(k=2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (4, 5))
self.assertEqual(result.split, 0)
self.assertLessEqual(result.lshape[0], 4)
self.assertEqual(result.lshape[1], 5)
self.assertEqual(result.sum(), 6)
if result.comm.rank == 0:
self.assertTrue(result.larray[0, -1] == 1)
if result.comm.rank == result.shape[0] - 1:
self.assertTrue(result.larray[-1, 0] == 0)
# 2D case, negative offset, data is horizontally split, method
result = distributed_ones.triu(k=-2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (4, 5))
self.assertEqual(result.split, 0)
self.assertLessEqual(result.lshape[0], 4)
self.assertEqual(result.lshape[1], 5)
self.assertEqual(result.sum(), 19)
if result.comm.rank == 0:
self.assertTrue(result.larray[0, -1] == 1)
if result.comm.rank == result.shape[0] - 1:
self.assertTrue(result.larray[-1, 0] == 0)
distributed_ones = ht.ones((4, 5), split=1)
# 2D case, no offset, data is vertically split, method
result = distributed_ones.triu()
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (4, 5))
self.assertEqual(result.split, 1)
self.assertEqual(result.lshape[0], 4)
self.assertLessEqual(result.lshape[1], 5)
self.assertEqual(result.sum(), 14)
if result.comm.rank == 0:
self.assertTrue(result.larray[-1, 0] == 0)
if result.comm.rank == result.shape[0] - 1:
self.assertTrue(result.larray[0, -1] == 1)
# 2D case, positive offset, data is vertically split, method
result = distributed_ones.triu(k=2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (4, 5))
self.assertEqual(result.split, 1)
self.assertEqual(result.lshape[0], 4)
self.assertLessEqual(result.lshape[1], 5)
self.assertEqual(result.sum(), 6)
if result.comm.rank == 0:
self.assertTrue(result.larray[-1, 0] == 0)
if result.comm.rank == result.shape[0] - 1:
self.assertTrue(result.larray[0, -1] == 1)
# 2D case, negative offset, data is vertically split, method
result = distributed_ones.triu(k=-2)
self.assertIsInstance(result, ht.DNDarray)
self.assertEqual(result.shape, (4, 5))
self.assertEqual(result.split, 1)
self.assertEqual(result.lshape[0], 4)
self.assertLessEqual(result.lshape[1], 5)
self.assertEqual(result.sum(), 19)
if result.comm.rank == 0:
self.assertTrue(result.larray[-1, 0] == 0)
if result.comm.rank == result.shape[0] - 1:
self.assertTrue(result.larray[0, -1] == 1)
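The per-offset sums asserted above (15, 6, 22 for the 5x5 case; 14, 6, 19 for 4x5) follow directly from the definition of `triu` with diagonal offset `k`. A minimal pure-Python sketch of that definition (a hypothetical helper, not part of Heat):

```python
def triu_ones(rows, cols, k=0):
    # ones matrix with everything below the k-th diagonal zeroed,
    # mirroring torch.ones(rows, cols).triu(diagonal=k)
    return [[1 if j - i >= k else 0 for j in range(cols)] for i in range(rows)]

print(sum(map(sum, triu_ones(5, 5))))      # → 15
print(sum(map(sum, triu_ones(5, 5, 2))))   # → 6
print(sum(map(sum, triu_ones(5, 5, -2))))  # → 22
```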
def test_vdot(self):
a = ht.array([[1 + 1j, 2 + 2j], [3 + 3j, 4 + 4j]], split=0)
b = ht.array([[1 + 2j, 3 + 4j], [5 + 6j, 7 + 8j]], split=0)
vdot = ht.vdot(a, b)
self.assertEqual(vdot.dtype, a.dtype)
self.assertEqual(vdot.split, None)
self.assertTrue(ht.equal(vdot, ht.array([110 + 10j])))
vdot = ht.vdot(b, a)
self.assertTrue(ht.equal(vdot, ht.array([110 - 10j])))
with self.assertRaises(ValueError):
ht.vdot(ht.array([1, 2, 3]), ht.array([[1, 2], [3, 4]]))
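`vdot` conjugates its first argument, which is why swapping the operands above flips the sign of the imaginary part (110 + 10j vs. 110 - 10j). A scalar sketch of the same contraction (hypothetical helper, not Heat's implementation):

```python
def vdot(a, b):
    # vdot(a, b) = sum(conj(a_i) * b_i) over the flattened inputs
    return sum(x.conjugate() * y for x, y in zip(a, b))

a = [1 + 1j, 2 + 2j, 3 + 3j, 4 + 4j]
b = [1 + 2j, 3 + 4j, 5 + 6j, 7 + 8j]
print(vdot(a, b))  # → (110+10j)
print(vdot(b, a))  # → (110-10j)
```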
def test_vecdot(self):
a = ht.array([1, 1, 1])
b = ht.array([1, 2, 3])
c = ht.linalg.vecdot(a, b)
self.assertEqual(c.dtype, ht.int64)
self.assertEqual(c.device, a.device)
self.assertTrue(ht.equal(c, ht.array([6])))
a = ht.full((4, 4), 2, split=0)
b = ht.ones(4)
c = ht.linalg.vecdot(a, b, axis=0, keepdim=True)
self.assertEqual(c.dtype, ht.float32)
self.assertEqual(c.device, a.device)
self.assertTrue(ht.equal(c, ht.array([[8, 8, 8, 8]])))
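`vecdot`, by contrast, is a plain (non-conjugating) dot product along one axis, so [1, 1, 1] · [1, 2, 3] = 6 and each column of a 4x4 array of 2s dotted with ones gives 8. A one-axis sketch (hypothetical helper):

```python
def vecdot(a, b):
    # plain dot product along one axis, no conjugation
    return sum(x * y for x, y in zip(a, b))

print(vecdot([1, 1, 1], [1, 2, 3]))        # → 6
print(vecdot([2, 2, 2, 2], [1, 1, 1, 1]))  # → 8
```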
def test_vector_norm(self):
a = ht.arange(9, dtype=ht.float) - 4
a_split = ht.arange(9, dtype=ht.float, split=0) - 4
b = a.reshape((3, 3))
b0 = ht.reshape(a, (3, 3), new_split=0)
b1 = ht.reshape(a, (3, 3), new_split=1)
# vector infinity norm
vn = ht.vector_norm(a, ord=ht.inf)
self.assertEqual(vn.split, a.split)
self.assertEqual(vn.dtype, a.dtype)
self.assertEqual(vn.device, a.device)
self.assertEqual(vn.item(), 4.0)
# vector 0 norm
vn = ht.vector_norm(a, ord=0)
self.assertEqual(vn.split, a.split)
self.assertEqual(vn.dtype, a.dtype)
self.assertEqual(vn.device, a.device)
self.assertEqual(vn.item(), 8.0)
# split vector -infinity
vn = ht.vector_norm(a_split, ord=-ht.inf)
self.assertEqual(vn.split, a.split)
self.assertEqual(vn.dtype, a.dtype)
self.assertEqual(vn.device, a.device)
self.assertEqual(vn.item(), 0.0)
# matrix 1 norm no axis
vn = ht.vector_norm(b, ord=1)
self.assertEqual(vn.split, b.split)
self.assertEqual(vn.dtype, b.dtype)
self.assertEqual(vn.device, b.device)
self.assertEqual(vn.item(), 20.0)
# split matrix axis l2-norm
vn = ht.vector_norm(b0, axis=1, ord=2)
self.assertEqual(vn.split, 0)
self.assertEqual(vn.dtype, b0.dtype)
self.assertEqual(vn.device, b0.device)
self.assertTrue(ht.allclose(vn, ht.array([5.38516481, 1.41421356, 5.38516481], split=0)))
# split matrix axis keepdim norm 3
vn = ht.vector_norm(b1, axis=1, keepdims=True, ord=3)
self.assertEqual(vn.split, None)
self.assertEqual(vn.dtype, b1.dtype)
self.assertEqual(vn.device, b1.device)
self.assertTrue(
ht.allclose(vn, ht.array([[4.62606501], [1.25992105], [4.62606501]], split=None))
)
# different dtype
vn = ht.linalg.vector_norm(ht.full((4, 4, 4), 1 + 1j, dtype=ht.int), axis=0, ord=4)
self.assertEqual(vn.split, None)
self.assertEqual(vn.dtype, ht.float)
self.assertTrue(
ht.equal(
vn,
ht.array(
[
[2.0, 2.0, 2.0, 2.0],
[2.0, 2.0, 2.0, 2.0],
[2.0, 2.0, 2.0, 2.0],
[2.0, 2.0, 2.0, 2.0],
]
),
)
)
# bad ord
with self.assertRaises(ValueError):
ht.vector_norm(ht.array([1, 2, 3]), ord="fro")
# bad axis
with self.assertRaises(TypeError):
ht.vector_norm(ht.array([1, 2, 3]), axis=(1, 2))
with self.assertRaises(TypeError):
ht.vector_norm(ht.array([1, 2, 3]), axis="r")
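The expected values in this test come from applying the usual `ord` conventions to a = arange(9) - 4 = [-4, ..., 4]: the infinity norm is the largest magnitude (4), the 0-"norm" counts nonzero entries (8), and ord=1 sums the magnitudes (20). A minimal reference covering those cases (hypothetical helper, not Heat's implementation):

```python
import math

def vector_norm(v, ord=2):
    # covers only the ord values exercised in the tests above
    if ord == math.inf:
        return max(abs(x) for x in v)
    if ord == -math.inf:
        return min(abs(x) for x in v)
    if ord == 0:
        return float(sum(1 for x in v if x != 0))
    return sum(abs(x) ** ord for x in v) ** (1.0 / ord)

a = [i - 4 for i in range(9)]    # [-4, ..., 4]
print(vector_norm(a, math.inf))  # → 4
print(vector_norm(a, 0))         # → 8.0
print(vector_norm(a, 1))         # → 20.0
```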
| 40.915902 | 98 | 0.57253 | 10,938 | 80,277 | 4.12772 | 0.026422 | 0.11894 | 0.066513 | 0.045272 | 0.9128 | 0.868923 | 0.846021 | 0.825777 | 0.816674 | 0.801214 | 0 | 0.048997 | 0.277452 | 80,277 | 1,961 | 99 | 40.936767 | 0.729381 | 0.068787 | 0 | 0.767396 | 0 | 0 | 0.001328 | 0 | 0 | 0 | 0 | 0 | 0.490391 | 1 | 0.009278 | false | 0 | 0.004639 | 0 | 0.014579 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
4552252a9ed101924dd53915bdd54dee6967c926 | 78 | py | Python | RECOVERED_FILES/root/ez-segway/simulator/ez_lib/test_dhcp.py | AlsikeE/Ez | 2f84ac1896a5b6d8f467c14d3618274bdcfd2cad | [
"Apache-2.0"
] | null | null | null | RECOVERED_FILES/root/ez-segway/simulator/ez_lib/test_dhcp.py | AlsikeE/Ez | 2f84ac1896a5b6d8f467c14d3618274bdcfd2cad | [
"Apache-2.0"
] | null | null | null | RECOVERED_FILES/root/ez-segway/simulator/ez_lib/test_dhcp.py | AlsikeE/Ez | 2f84ac1896a5b6d8f467c14d3618274bdcfd2cad | [
"Apache-2.0"
] | 1 | 2021-05-08T02:23:00.000Z | 2021-05-08T02:23:00.000Z | 0 1 1 1 0 1
1 0 1 0 1 1
1 1 0 1 1 0
1 0 1 0 1 1
0 1 1 1 0 1
1 1 0 1 1 0
| 11.142857 | 12 | 0.461538 | 36 | 78 | 1 | 0.055556 | 0.722222 | 0.833333 | 0.777778 | 1 | 1 | 0.972222 | 0.972222 | 0.916667 | 0.555556 | 0 | 1 | 0.538462 | 78 | 6 | 13 | 13 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 15 |
4562a8a41918f972906c893fcbc06dc05b5a57ab | 19,790 | py | Python | utils/evaluate_model.py | alessiabertugli/AC-VRNN | 3a204bd23a7b90c3939efc6468fa6477c31a733f | [
"Apache-2.0"
] | 23 | 2020-08-10T07:52:30.000Z | 2022-03-30T13:24:49.000Z | utils/evaluate_model.py | alessiabertugli/AC-VRNN | 3a204bd23a7b90c3939efc6468fa6477c31a733f | [
"Apache-2.0"
] | 3 | 2021-02-11T02:54:24.000Z | 2021-11-08T06:40:59.000Z | utils/evaluate_model.py | alessiabertugli/AC-VRNN | 3a204bd23a7b90c3939efc6468fa6477c31a733f | [
"Apache-2.0"
] | 2 | 2020-09-14T00:37:12.000Z | 2021-07-25T21:39:40.000Z | from utils.metrics import displacement_error, final_displacement_error, cal_l2_losses, cal_fde, cal_ade, \
l2_loss, miss_rate, linear_velocity_acceleration_1D
from utils.absolute import relative_to_abs
from utils.adj_matrix import compute_adjs_distsim, compute_adjs_knnsim, compute_adjs
from utils.losses import l2_error_graph
import os
import torch
import argparse
import random
import numpy as np
from dataset_processing.dataset_loader import data_loader
from dataset_processing.dataloader_sdd import data_loader_sdd
from dataset_processing.dataloader_sways import data_loader_sways
from attrdict import AttrDict
from models.vrnn.vrnn_model import VRNN
from models.graph.graph_vrnn_model import GraphVRNN
def evaluate_helper(error, seq_start_end):
sum_ = 0
error = torch.stack(error, dim=1)
for (start, end) in seq_start_end:
start = start.item()
end = end.item()
_error = error[start:end]
_error = torch.sum(_error, dim=0)
_error = torch.min(_error)
sum_ += _error
return sum_
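`evaluate_helper` implements the usual best-of-N evaluation: for each scene (one `(start, end)` pedestrian range per sequence), the error is summed over that scene's pedestrians separately for each sample, and only the best (minimum) sample counts toward the total. A torch-free sketch of the same reduction (hypothetical helper):

```python
def best_of_n(errors, seq_start_end):
    # errors[s][p] = error of sample s for pedestrian p
    total = 0.0
    for start, end in seq_start_end:
        per_sample = [sum(sample[start:end]) for sample in errors]
        total += min(per_sample)  # keep the best sample for this scene
    return total

# two samples, one scene with two pedestrians
print(best_of_n([[1.0, 2.0], [0.5, 4.0]], [(0, 2)]))  # → 3.0
```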
def evaluate_helper_l2(error, seq_start_end):
sum_ = 0
error = torch.stack(error, dim=1)
for (start, end) in seq_start_end:
start = start.item()
end = end.item()
_error = error[start:end]
_error = torch.sum(_error, dim=0)
_error = torch.min(_error, 0)
sum_ += _error[0]
return sum_
def evaluate_baseline(args, loader, model, num_samples):
ade_outer, fde_outer, miss_rate_outer, mean_l2_outer, best_l2_outer, max_l2_outer = [], [], [], [], [], []
total_traj = 0
threshold = 3
model.eval()
with torch.no_grad():
for batch in loader:
(obs_traj, pred_traj_gt, obs_traj_rel, pred_traj_gt_rel, non_linear_ped,
loss_mask, seq_start_end, maps, dnames) = batch
ade, fde, l2, losses = [], [], [], []
total_traj += pred_traj_gt.size(1)
for idx in range(num_samples):
if args.model == 'vrnn':
kld_loss, nll_loss, _, h = model(obs_traj_rel.cuda(), obs_traj[0])
loss = kld_loss + nll_loss
elif args.model == 'rnn':
loss, _, h = model(obs_traj_rel.cuda())
sample_traj_rel = model.sample(args.pred_len, obs_traj_rel.size(1), obs_traj[-1], dnames, h)
sample_traj = relative_to_abs(sample_traj_rel, obs_traj[-1])
ade.append(displacement_error(sample_traj, pred_traj_gt.cpu(), mode='raw'))
fde.append(final_displacement_error(sample_traj[-1], pred_traj_gt[-1].cpu(), mode='raw'))
l2.append(l2_loss(relative_to_abs(sample_traj, obs_traj[-1]), pred_traj_gt.cpu(), loss_mask[:, args.obs_len:]))
losses.append(loss)
ade_sum = evaluate_helper(ade, seq_start_end)
fde_sum = evaluate_helper(fde, seq_start_end)
ade_outer.append(ade_sum)
fde_outer.append(fde_sum)
miss_rate_outer.append(miss_rate(losses, threshold))
mean_l2_outer.append(torch.mean(torch.stack(l2)))
best_l2_outer.append(torch.max(torch.stack(l2)))
max_l2_outer.append(torch.min(torch.stack(l2)))
ade = sum(ade_outer) / (total_traj * args.pred_len)
fde = sum(fde_outer) / total_traj
m_rate = sum(miss_rate_outer) / total_traj
mean_l2 = sum(mean_l2_outer) / total_traj
best_l2 = sum(best_l2_outer) / total_traj
max_l2 = sum(max_l2_outer) / total_traj
return ade, fde, m_rate, mean_l2, best_l2, max_l2
def check_accuracy_baseline(args, loader, model, limit=False):
losses = []
metrics = {}
val_loss = 0
l2_losses_abs, l2_losses_rel = [], []  # separate lists; ([],) * 2 would alias one list
disp_error, disp_error_l, disp_error_nl = [], [], []
f_disp_error, f_disp_error_l, f_disp_error_nl = [], [], []
total_traj, total_traj_l, total_traj_nl = 0, 0, 0
loss_mask_sum = 0
model.eval()
with torch.no_grad():
for batch in loader:
(obs_traj, pred_traj_gt, obs_traj_rel, pred_traj_gt_rel, non_linear_ped,
loss_mask, seq_start_end, maps, dnames) = batch
linear_ped = 1 - non_linear_ped
loss_mask = loss_mask[:, args.obs_len:]
if args.model == 'vrnn':
kld_loss, nll_loss, _, h = model(obs_traj_rel.cuda(), obs_traj[0])
loss = kld_loss + nll_loss
elif args.model == 'rnn':
loss, _, h = model(obs_traj_rel.cuda())
val_loss += loss.item()
pred_traj_rel = model.sample(args.pred_len, obs_traj_rel.size(1), obs_traj[-1], dnames, h)
pred_traj = relative_to_abs(pred_traj_rel, obs_traj[-1])
l2_loss_abs, l2_loss_rel = cal_l2_losses(pred_traj_gt, pred_traj_gt_rel, pred_traj, pred_traj_rel, loss_mask)
ade, ade_l, ade_nl = cal_ade(pred_traj_gt, pred_traj, linear_ped, non_linear_ped)
fde, fde_l, fde_nl = cal_fde(pred_traj_gt, pred_traj, linear_ped, non_linear_ped)
losses.append(loss.item())
l2_losses_abs.append(l2_loss_abs.item())
l2_losses_rel.append(l2_loss_rel.item())
disp_error.append(ade.item())
disp_error_l.append(ade_l.item())
disp_error_nl.append(ade_nl.item())
f_disp_error.append(fde.item())
f_disp_error_l.append(fde_l.item())
f_disp_error_nl.append(fde_nl.item())
loss_mask_sum += torch.numel(loss_mask.data)
total_traj += pred_traj_gt.size(1)
total_traj_l += torch.sum(linear_ped).item()
total_traj_nl += torch.sum(non_linear_ped).item()
if limit and total_traj >= args.num_samples_check:
break
metrics['loss'] = sum(losses) / len(losses)
metrics['l2_loss_abs'] = sum(l2_losses_abs) / loss_mask_sum
metrics['l2_loss_rel'] = sum(l2_losses_rel) / loss_mask_sum
metrics['ade'] = sum(disp_error) / (total_traj * args.pred_len)
metrics['fde'] = sum(f_disp_error) / total_traj
if total_traj_l != 0:
metrics['ade_l'] = sum(disp_error_l) / (total_traj_l * args.pred_len)
metrics['fde_l'] = sum(f_disp_error_l) / total_traj_l
else:
metrics['ade_l'] = 0
metrics['fde_l'] = 0
if total_traj_nl != 0:
metrics['ade_nl'] = sum(disp_error_nl) / (
total_traj_nl * args.pred_len)
metrics['fde_nl'] = sum(f_disp_error_nl) / total_traj_nl
else:
metrics['ade_nl'] = 0
metrics['fde_nl'] = 0
model.train()
return metrics, val_loss/len(loader)
def evaluate_graph(args, loader, model, num_samples, epoch):
ade_outer, fde_outer, miss_rate_outer, mean_l2_outer, best_l2_outer, max_l2_outer = [], [], [], [], [], []
mean_l2_graph = []
total_traj = 0
threshold = 3
model.eval()
with torch.no_grad():
for batch in loader:
(obs_traj, pred_traj_gt, obs_traj_rel, pred_traj_gt_rel, non_linear_ped,
loss_mask, seq_start_end, maps, dnames) = batch
if args.adj_type == 0:
adj_out = compute_adjs(args, seq_start_end)
elif args.adj_type == 1:
adj_out = compute_adjs_distsim(args, seq_start_end, obs_traj, pred_traj_gt)
elif args.adj_type == 2:
adj_out = compute_adjs_knnsim(args, seq_start_end, obs_traj, pred_traj_gt)
ade, fde, l2, losses = [], [], [], []
l2_graph = []
total_traj += pred_traj_gt.size(1)
kld_loss, nll_loss, kld_hm, h = model(obs_traj_rel.cuda(), adj_out.cuda(), seq_start_end.cuda(),
obs_traj[0], maps[:args.obs_len], epoch)
for idx in range(num_samples):
sample_traj_rel = model.sample(args.pred_len, seq_start_end.cuda(), False, maps[args.obs_len-1:],
obs_traj[-1], dnames, h).cpu()
sample_traj = relative_to_abs(sample_traj_rel, obs_traj[-1])
ade.append(displacement_error(sample_traj, pred_traj_gt.cpu(), mode='raw'))
fde.append(final_displacement_error(sample_traj[-1], pred_traj_gt[-1].cpu(), mode='raw'))
l2.append(l2_loss(relative_to_abs(sample_traj, obs_traj[-1]), pred_traj_gt.cpu(), loss_mask[:, args.obs_len:]))
loss = kld_loss + nll_loss + kld_hm
losses.append(loss)
l2_graph.append(l2_error_graph(sample_traj, pred_traj_gt.cpu()))
ade_sum = evaluate_helper(ade, seq_start_end)
fde_sum = evaluate_helper(fde, seq_start_end)
l2_sum = evaluate_helper_l2(l2_graph, seq_start_end)
ade_outer.append(ade_sum)
fde_outer.append(fde_sum)
miss_rate_outer.append(miss_rate(losses, threshold))
mean_l2_outer.append(torch.mean(torch.stack(l2)))
best_l2_outer.append(torch.max(torch.stack(l2)))
max_l2_outer.append(torch.min(torch.stack(l2)))
mean_l2_graph.append(l2_sum)
ade = sum(ade_outer) / (total_traj * args.pred_len)
fde = sum(fde_outer) / total_traj
m_rate = sum(miss_rate_outer) / total_traj
mean_l2 = sum(mean_l2_outer) / total_traj
best_l2 = sum(best_l2_outer) / total_traj
max_l2 = sum(max_l2_outer) / total_traj
l2_graph_steps = sum(mean_l2_graph) / total_traj
mean_velocity1d, mean_velocity1d_v2, mean_acceleration1d = linear_velocity_acceleration_1D(l2_graph_steps)
return ade, fde, m_rate, mean_l2, best_l2, max_l2
def check_accuracy_graph(args, loader, model, epoch, limit=False):
losses = []
val_loss = 0
metrics = {}
l2_losses_abs, l2_losses_rel = [], []  # separate lists; ([],) * 2 would alias one list
disp_error, disp_error_l, disp_error_nl = [], [], []
f_disp_error, f_disp_error_l, f_disp_error_nl = [], [], []
total_traj, total_traj_l, total_traj_nl = 0, 0, 0
loss_mask_sum = 0
model.eval()
with torch.no_grad():
for batch in loader:
(obs_traj, pred_traj_gt, obs_traj_rel, pred_traj_gt_rel, non_linear_ped,
loss_mask, seq_start_end, maps, dnames) = batch
linear_ped = 1 - non_linear_ped
loss_mask = loss_mask[:, args.obs_len:]
if args.adj_type == 0:
adj_out = compute_adjs(args, seq_start_end)
elif args.adj_type == 1:
adj_out = compute_adjs_distsim(args, seq_start_end, obs_traj, pred_traj_gt)
elif args.adj_type == 2:
adj_out = compute_adjs_knnsim(args, seq_start_end, obs_traj, pred_traj_gt)
kld_loss, nll_loss, kld_hm, h = model(obs_traj_rel.cuda(), adj_out.cuda(), seq_start_end.cuda(),
obs_traj[0], maps[:args.obs_len], epoch)
loss = kld_loss + nll_loss + kld_hm
val_loss += loss.item()
pred_traj_rel = model.sample(args.pred_len, seq_start_end.cuda(), False, maps[args.obs_len-1:],
obs_traj[-1], dnames, h).cpu()
pred_traj = relative_to_abs(pred_traj_rel, obs_traj[-1])
l2_loss_abs, l2_loss_rel = cal_l2_losses(pred_traj_gt, pred_traj_gt_rel, pred_traj, pred_traj_rel, loss_mask)
ade, ade_l, ade_nl = cal_ade(pred_traj_gt, pred_traj, linear_ped, non_linear_ped)
fde, fde_l, fde_nl = cal_fde(pred_traj_gt, pred_traj, linear_ped, non_linear_ped)
losses.append(loss.item())
l2_losses_abs.append(l2_loss_abs.item())
l2_losses_rel.append(l2_loss_rel.item())
disp_error.append(ade.item())
disp_error_l.append(ade_l.item())
disp_error_nl.append(ade_nl.item())
f_disp_error.append(fde.item())
f_disp_error_l.append(fde_l.item())
f_disp_error_nl.append(fde_nl.item())
loss_mask_sum += torch.numel(loss_mask.data)
total_traj += pred_traj_gt.size(1)
total_traj_l += torch.sum(linear_ped).item()
total_traj_nl += torch.sum(non_linear_ped).item()
if limit and total_traj >= args.num_samples_check:
break
metrics['loss'] = sum(losses) / len(losses)
metrics['l2_loss_abs'] = sum(l2_losses_abs) / loss_mask_sum
metrics['l2_loss_rel'] = sum(l2_losses_rel) / loss_mask_sum
metrics['ade'] = sum(disp_error) / (total_traj * args.pred_len)
metrics['fde'] = sum(f_disp_error) / total_traj
if total_traj_l != 0:
metrics['ade_l'] = sum(disp_error_l) / (total_traj_l * args.pred_len)
metrics['fde_l'] = sum(f_disp_error_l) / total_traj_l
else:
metrics['ade_l'] = 0
metrics['fde_l'] = 0
if total_traj_nl != 0:
metrics['ade_nl'] = sum(disp_error_nl) / (
total_traj_nl * args.pred_len)
metrics['fde_nl'] = sum(f_disp_error_nl) / total_traj_nl
else:
metrics['ade_nl'] = 0
metrics['fde_nl'] = 0
model.train()
return metrics, val_loss/len(loader)
def evaluate_graph_sways(args, loader, model, num_samples, epoch):
ade_outer, fde_outer = [], []
total_traj = 0
model.eval()
with torch.no_grad():
for batch in loader:
(obs_traj, pred_traj_gt, obs_traj_rel, pred_traj_gt_rel, seq_start_end, maps, dnames) = batch
if args.adj_type == 0:
adj_out = compute_adjs(args, seq_start_end)
elif args.adj_type == 1:
adj_out = compute_adjs_distsim(args, seq_start_end, obs_traj, pred_traj_gt)
elif args.adj_type == 2:
adj_out = compute_adjs_knnsim(args, seq_start_end, obs_traj, pred_traj_gt)
ade, fde, l2, losses = [], [], [], []
total_traj += pred_traj_gt.size(1)
kld_loss, nll_loss, kld_hm, h = model(obs_traj_rel.cuda(), adj_out.cuda(), seq_start_end.cuda(),
obs_traj[0], maps[:args.obs_len], epoch)
for idx in range(num_samples):
sample_traj_rel = model.sample(args.pred_len, seq_start_end.cuda(), False, maps[args.obs_len-1:],
obs_traj[-1], dnames, h).cpu()
sample_traj = relative_to_abs(sample_traj_rel, obs_traj[-1])
ade.append(displacement_error(sample_traj, pred_traj_gt.cpu(), mode='raw'))
fde.append(final_displacement_error(sample_traj[-1], pred_traj_gt[-1].cpu(), mode='raw'))
loss = kld_loss + nll_loss + kld_hm
losses.append(loss)
ade_sum = evaluate_helper(ade, seq_start_end)
fde_sum = evaluate_helper(fde, seq_start_end)
ade_outer.append(ade_sum)
fde_outer.append(fde_sum)
ade = sum(ade_outer) / (total_traj * args.pred_len)
fde = sum(fde_outer) / total_traj
return ade, fde
def check_accuracy_graph_sways(args, loader, model, epoch, limit=False):
losses = []
val_loss = 0
metrics = {}
disp_error = []
f_disp_error = []
total_traj = 0
model.eval()
with torch.no_grad():
for batch in loader:
(obs_traj, pred_traj_gt, obs_traj_rel, pred_traj_gt_rel, seq_start_end, maps, dnames) = batch
if args.adj_type == 0:
adj_out = compute_adjs(args, seq_start_end)
elif args.adj_type == 1:
adj_out = compute_adjs_distsim(args, seq_start_end, obs_traj, pred_traj_gt)
elif args.adj_type == 2:
adj_out = compute_adjs_knnsim(args, seq_start_end, obs_traj, pred_traj_gt)
kld_loss, nll_loss, kld_hm, h = model(obs_traj_rel.cuda(), adj_out.cuda(), seq_start_end.cuda(), obs_traj[0],
maps[:args.obs_len], epoch)
loss = kld_loss + nll_loss + kld_hm
val_loss += loss.item()
pred_traj_rel = model.sample(args.pred_len, seq_start_end.cuda(), False, maps[args.obs_len-1:],
obs_traj[-1], dnames, h).cpu()
pred_traj = relative_to_abs(pred_traj_rel, obs_traj[-1])
ade, ade_l, ade_nl = cal_ade(pred_traj_gt, pred_traj, linear_ped=None, non_linear_ped=None)
fde, fde_l, fde_nl = cal_fde(pred_traj_gt, pred_traj, linear_ped=None, non_linear_ped=None)
losses.append(loss.item())
disp_error.append(ade.item())
f_disp_error.append(fde.item())
total_traj += pred_traj_gt.size(1)
if limit and total_traj >= args.num_samples_check:
break
metrics['loss'] = sum(losses) / len(losses)
metrics['ade'] = sum(disp_error) / (total_traj * args.pred_len)
metrics['fde'] = sum(f_disp_error) / total_traj
metrics['ade_l'] = 0
metrics['fde_l'] = 0
metrics['ade_nl'] = 0
metrics['fde_nl'] = 0
model.train()
return metrics, val_loss/len(loader)
def min_nll_sampling_strategy(args, model, pred_traj_gt_rel, seq_start_end, maps, obs_traj, h, dnames):
min_nll = 1e10
best_nll_sample = torch.zeros(pred_traj_gt_rel.shape)
for s in range(1000):
sample_traj_rel, nll_loss_pred = model.sample_likelihood(args.pred_len, seq_start_end.cuda(),
maps[args.obs_len - 1:], obs_traj[-1], h,
dnames, pred_traj_gt_rel.cuda())
if nll_loss_pred < min_nll:
min_nll = nll_loss_pred
best_nll_sample = sample_traj_rel
return best_nll_sample
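`min_nll_sampling_strategy` draws a fixed budget of samples and keeps the one with the lowest negative log-likelihood under the model. The selection loop reduces to a running arg-min (sketch with hypothetical `(sample, nll)` pairs):

```python
def keep_best(samples_with_nll):
    # running arg-min over negative log-likelihood
    best_nll, best = float("inf"), None
    for sample, nll in samples_with_nll:
        if nll < best_nll:
            best_nll, best = nll, sample
    return best

print(keep_best([("a", 3.2), ("b", 1.1), ("c", 2.7)]))  # → b
```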
def get_model_baseline(checkpoint):
args = AttrDict(checkpoint['args'])
model = VRNN(x_dim=args.x_dim,
h_dim=args.h_dim,
z_dim=args.z_dim,
n_layers=args.n_layers,
writer=None)
model.load_state_dict(checkpoint['best_state_ade'])
model.cuda()
model.train()
return model
def get_model_graph(checkpoint):
args = AttrDict(checkpoint['args'])
model = GraphVRNN(args=args,
writer=None)
model.load_state_dict(checkpoint['best_state_ade'])
model.cuda()
model.train()
return model
def main(args):
if os.path.isdir(args.model_path):
filenames = os.listdir(args.model_path)
filenames.sort()
paths = [
os.path.join(args.model_path, file_) for file_ in filenames
]
else:
paths = [args.model_path]
for path in paths:
checkpoint = torch.load(path)
model = get_model_graph(checkpoint)
_args = AttrDict(checkpoint['args'])
_args.test_data = os.path.join(os.path.dirname(os.path.realpath(__file__)), '../../trj2020/datasets/sdd_npy/test.npy')
if _args.model == 'gat' or _args.model == 'gcn':
_args.hmap_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), '../../trj2020/dataset_processing/local_hm_5x5_sdd')
_, loader = data_loader_sdd(_args, args.dset_type)
ade, fde, m_rate, mean_l2, best_l2, max_l2 = evaluate_graph(_args, loader, model, args.num_samples, epoch=500)
print('Dataset: {}, Pred Len: {}, ADE: {:.2f}, FDE: {:.2f}'.format(
_args.dname, _args.pred_len, ade, fde))
def set_random_seed(seed):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
if __name__ == '__main__':
set_random_seed(76)
parser = argparse.ArgumentParser()
parser.add_argument('--model_path', type=str)
parser.add_argument('--num_samples', default=20, type=int)
parser.add_argument('--dset_type', default='test', type=str)
args = parser.parse_args()
main(args)
| 41.401674 | 140 | 0.617079 | 2,837 | 19,790 | 3.915051 | 0.067677 | 0.048258 | 0.043216 | 0.030251 | 0.819033 | 0.808409 | 0.797065 | 0.790132 | 0.774016 | 0.774016 | 0 | 0.015282 | 0.269277 | 19,790 | 477 | 141 | 41.48847 | 0.752783 | 0 | 0 | 0.757033 | 0 | 0 | 0.022688 | 0.004447 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033248 | false | 0 | 0.038363 | 0 | 0.099744 | 0.002558 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
456a14bf6510ad53b9ae4250aedb9455dbb9c1e0 | 124 | py | Python | Bosikov_Garik_dz_02/Task_2_1.py | gbosikov/Python_Course | 79d70dd6cd48dff158310ac82093c8a8c57ea7c4 | [
"MIT"
] | null | null | null | Bosikov_Garik_dz_02/Task_2_1.py | gbosikov/Python_Course | 79d70dd6cd48dff158310ac82093c8a8c57ea7c4 | [
"MIT"
] | null | null | null | Bosikov_Garik_dz_02/Task_2_1.py | gbosikov/Python_Course | 79d70dd6cd48dff158310ac82093c8a8c57ea7c4 | [
"MIT"
] | null | null | null | """"
15 * 3
15 / 3
15 // 2
15 ** 2
"""
print(type(15 * 3))   # <class 'int'>
print(type(15 / 3))   # <class 'float'>
print(type(15 // 2))  # <class 'int'>
print(type(15 ** 2))  # <class 'int'>
4572a7862d7b4ef233103af19233a61fe715b2ec | 40,705 | py | Python | HO-ResNet.py | zlannnn/HO-ResNet | 262c243d1c4f8396fe7ef403d5f2b2e5f7fc9ffe | [
"MIT"
] | 5 | 2021-04-12T04:13:04.000Z | 2021-04-20T09:33:11.000Z | HO-ResNet.py | zlannnn/HO-ResNet | 262c243d1c4f8396fe7ef403d5f2b2e5f7fc9ffe | [
"MIT"
] | null | null | null | HO-ResNet.py | zlannnn/HO-ResNet | 262c243d1c4f8396fe7ef403d5f2b2e5f7fc9ffe | [
"MIT"
] | null | null | null | import torch.nn.functional as func
import torch
import torch.nn as nn
import torch.nn.init as init
def _weights_init(m):
classname = m.__class__.__name__
#print(classname)
if isinstance(m, nn.Linear) or isinstance(m, nn.Conv2d):
init.kaiming_normal_(m.weight)
class LambdaLayer(nn.Module):
def __init__(self, lambd):
super(LambdaLayer, self).__init__()
self.lambd = lambd
def forward(self, x):
return self.lambd(x)
"""
Use the Euler method, which is the standard ResNet
1 block = 2 layers of parameters and FLOPs
"""
"""
For the shortcut option, I only tested A and identity.
If you want to use option B, I guess the BN in B should be removed.
"""
class BasicBlock(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1, option='A'):
super(BasicBlock, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != planes:
if option == 'A':
"""
For CIFAR10 ResNet paper uses option A.
"""
self.shortcut = LambdaLayer(lambda x:
func.pad(x[:, :, ::2, ::2], (0, 0, 0, 0, planes//4, planes//4), "constant", 0))
elif option == 'B':
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion * planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion * planes)
)
def forward(self, x):
out = self.conv2(func.relu(self.bn2(self.conv1(func.relu(self.bn1(x))))))
out += self.shortcut(x)
out = func.relu(out)
return out
"""
Use the midpoint method, which needs half the number of blocks Euler does
1 block = 4 layers of parameters and FLOPs
"""
class MidBlock(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1, option='A'):
super(MidBlock, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes)
self.conv4 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn4 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != planes:
if option == 'A':
"""
                For CIFAR-10, the ResNet paper uses option A.
"""
self.shortcut = LambdaLayer(lambda x:
func.pad(x[:, :, ::2, ::2], (0, 0, 0, 0, planes // 4, planes // 4),
"constant", 0))
elif option == 'B':
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion * planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion * planes)
)
def forward(self, x):
shortcut = self.shortcut(x)
out = self.conv2(func.relu(self.bn2(self.conv1(func.relu(self.bn1(x))))))
out = self.conv4(func.relu(self.bn4(self.conv3(func.relu(self.bn3(0.5 * out + shortcut))))))
out += shortcut
out = func.relu(out)
return out
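# Scalar reference for the midpoint update (illustrative assumption only):
# y1 = y0 + h * f(y0 + (h/2) * f(y0)). The `0.5 * out + shortcut` fed to the
# second conv pair above is the half-step evaluation point.
def _midpoint_step_ref(f, y0, h=1.0):
    return y0 + h * f(y0 + 0.5 * h * f(y0))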
"""
Use the improved Euler (Heun) method; needs half the number of blocks Euler does.
1 block = 4 layers of parameters and FLOPs.
"""
class ImprovedEuler(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1, option='A'):
super(ImprovedEuler, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes)
self.conv4 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn4 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != planes:
if option == 'A':
"""
                For CIFAR-10, the ResNet paper uses option A.
"""
self.shortcut = LambdaLayer(lambda x:
func.pad(x[:, :, ::2, ::2], (0, 0, 0, 0, planes // 4, planes // 4),
"constant", 0))
elif option == 'B':
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion * planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion * planes)
)
def forward(self, x):
shortcut = self.shortcut(x)
out = self.conv2(func.relu(self.bn2(self.conv1(func.relu(self.bn1(x))))))
outx = 0.5 * out
out = self.conv4(func.relu(self.bn4(self.conv3(func.relu(self.bn3(0.5 * out + shortcut))))))
outx += 0.5 * out
        outx += shortcut  # bug fix: add the shortcut tensor, not the module itself
out = func.relu(outx)
return out
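# Scalar improved-Euler (Heun) reference, an illustrative sketch only:
# y1 = y0 + (h/2) * (k1 + k2) with k2 = f(y0 + h * k1). Note the block above
# evaluates its second stage at 0.5 * out + shortcut, a halved variant of this.
def _heun_step_ref(f, y0, h=1.0):
    k1 = f(y0)
    k2 = f(y0 + h * k1)
    return y0 + 0.5 * h * (k1 + k2)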
"""
Use the RK2 (Ralston) method; needs half the number of blocks Euler does.
1 block = 4 layers of parameters and FLOPs.
"""
class RK2Block(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1, option='A'):
super(RK2Block, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes)
self.conv4 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn4 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != planes:
if option == 'A':
"""
                For CIFAR-10, the ResNet paper uses option A.
"""
self.shortcut = LambdaLayer(lambda x:
func.pad(x[:, :, ::2, ::2], (0, 0, 0, 0, planes // 4, planes // 4),
"constant", 0))
elif option == 'B':
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion * planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion * planes)
)
def forward(self, x):
shortcut = self.shortcut(x)
out = self.conv2(func.relu(self.bn2(self.conv1(func.relu(self.bn1(x))))))
outx = 0.25 * out
        # Ralston RK2: the second stage is evaluated at y + (2/3)*k1; the
        # original fed only 0.666666 * shortcut and dropped the k1 term.
        out = self.conv4(func.relu(self.bn4(self.conv3(func.relu(self.bn3(0.666666 * out + shortcut))))))
outx += 0.75 * out
outx += shortcut
out = func.relu(outx)
return out
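# The 1/4, 3/4 weights above are Ralston's RK2, whose second stage sits at
# y0 + (2/3)*h*k1. A scalar sketch (illustrative assumption only):
def _ralston_rk2_ref(f, y0, h=1.0):
    k1 = f(y0)
    k2 = f(y0 + (2.0 / 3.0) * h * k1)
    return y0 + h * (0.25 * k1 + 0.75 * k2)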
"""
Use Heun's third-order method; needs one third the number of blocks Euler does.
1 block = 6 layers of parameters and FLOPs.
"""
class Heun3Block(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1, option='A'):
super(Heun3Block, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes)
self.conv4 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn4 = nn.BatchNorm2d(planes)
self.conv5 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn5 = nn.BatchNorm2d(planes)
self.conv6 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn6 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != planes:
if option == 'A':
"""
                For CIFAR-10, the ResNet paper uses option A.
"""
self.shortcut = LambdaLayer(lambda x:
func.pad(x[:, :, ::2, ::2], (0, 0, 0, 0, planes // 4, planes // 4),
"constant", 0))
elif option == 'B':
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion * planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion * planes)
)
def forward(self, x):
shortcut = self.shortcut(x)
out = self.conv2(func.relu(self.bn2(self.conv1(func.relu(self.bn1(x))))))
outx = 0.25 * out
out = self.conv4(func.relu(self.bn4(self.conv3(func.relu(self.bn3(0.333333 * out + shortcut))))))
out = self.conv6(func.relu(self.bn6(self.conv5(func.relu(self.bn5(0.666666 * out + shortcut))))))
outx += 0.75 * out
outx += shortcut
out = func.relu(outx)
return out
"""
Use the RK3 method; needs one third the number of blocks Euler does.
1 block = 6 layers of parameters and FLOPs.
"""
class RK3Block(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1, option='A'):
super(RK3Block, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes)
self.conv4 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn4 = nn.BatchNorm2d(planes)
self.conv5 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn5 = nn.BatchNorm2d(planes)
self.conv6 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn6 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != planes:
if option == 'A':
"""
                For CIFAR-10, the ResNet paper uses option A.
"""
self.shortcut = LambdaLayer(lambda x:
func.pad(x[:, :, ::2, ::2], (0, 0, 0, 0, planes // 4, planes // 4),
"constant", 0))
elif option == 'B':
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion * planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion * planes)
)
def forward(self, x):
shortcut = self.shortcut(x)
k1 = self.conv2(func.relu(self.bn2(self.conv1(func.relu(self.bn1(x))))))
outx = 1/6. * k1
k2 = self.conv4(func.relu(self.bn4(self.conv3(func.relu(self.bn3(0.5 * k1 + shortcut))))))
outx += 2/3. * k2
k3 = self.conv6(func.relu(self.bn6(self.conv5(func.relu(self.bn5(2*k2-k1 + shortcut))))))
outx += 1/6. * k3
outx += shortcut
out = func.relu(outx)
return out
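# Scalar reference for the classic third-order Runge-Kutta scheme used above
# (weights 1/6, 2/3, 1/6; third stage at y0 + h*(2*k2 - k1)); a sketch only:
def _rk3_step_ref(f, y0, h=1.0):
    k1 = f(y0)
    k2 = f(y0 + 0.5 * h * k1)
    k3 = f(y0 + h * (2.0 * k2 - k1))
    return y0 + h * (k1 + 4.0 * k2 + k3) / 6.0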
"""
Use the classic RK4 method; needs one quarter the number of blocks Euler does.
1 block = 8 layers of parameters and FLOPs.
"""
class RK4Block(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1, option='A'):
super(RK4Block, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes)
self.conv4 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn4 = nn.BatchNorm2d(planes)
self.conv5 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn5 = nn.BatchNorm2d(planes)
self.conv6 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn6 = nn.BatchNorm2d(planes)
self.conv7 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn7 = nn.BatchNorm2d(planes)
self.conv8 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn8 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != planes:
if option == 'A':
"""
                For CIFAR-10, the ResNet paper uses option A.
"""
self.shortcut = LambdaLayer(lambda x:
func.pad(x[:, :, ::2, ::2], (0, 0, 0, 0, planes // 4, planes // 4),
"constant", 0))
elif option == 'B':
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion * planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion * planes)
)
def forward(self, x):
shortcut = self.shortcut(x)
out = self.conv2(func.relu(self.bn2(self.conv1(func.relu(self.bn1(x))))))
outx = 1 / 6. * out
out = self.conv4(func.relu(self.bn4(self.conv3(func.relu(self.bn3(0.5 * out + shortcut))))))
outx += 1 / 3. * out
out = self.conv6(func.relu(self.bn6(self.conv5(func.relu(self.bn5(0.5 * out + shortcut))))))
outx += 1 / 3. * out
out = self.conv8(func.relu(self.bn8(self.conv7(func.relu(self.bn7(out + shortcut))))))
outx += 1 / 6. * out
out = outx + shortcut
out = func.relu(out)
return out
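# Scalar classic RK4 reference mirroring the stage inputs (0.5*k, 0.5*k, k)
# and weights (1/6, 1/3, 1/3, 1/6) of the block above; illustrative only:
def _rk4_step_ref(f, y0, h=1.0):
    k1 = f(y0)
    k2 = f(y0 + 0.5 * h * k1)
    k3 = f(y0 + 0.5 * h * k2)
    k4 = f(y0 + h * k3)
    return y0 + h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0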
"""
Use Gill's fourth-order method; needs one quarter the number of blocks Euler does.
1 block = 8 layers of parameters and FLOPs.
"""
class Gill4Block(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1, option='A'):
super(Gill4Block, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes)
self.conv4 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn4 = nn.BatchNorm2d(planes)
self.conv5 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn5 = nn.BatchNorm2d(planes)
self.conv6 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn6 = nn.BatchNorm2d(planes)
self.conv7 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn7 = nn.BatchNorm2d(planes)
self.conv8 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn8 = nn.BatchNorm2d(planes)
self.relu = nn.ReLU(inplace=True)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != planes:
if option == 'A':
"""
                For CIFAR-10, the ResNet paper uses option A.
"""
self.shortcut = LambdaLayer(lambda x:
func.pad(x[:, :, ::2, ::2], (0, 0, 0, 0, planes // 4, planes // 4),
"constant", 0))
elif option == 'B':
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion * planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion * planes)
)
def forward(self, x):
shortcut = self.shortcut(x)
k1 = self.conv2(func.relu(self.bn2(self.conv1(func.relu(self.bn1(x))))))
outx = 1 / 6. * k1
k2 = self.conv4(func.relu(self.bn4(self.conv3(func.relu(self.bn3(0.5 * k1 + shortcut))))))
outx += 0.097631 * k2
k3 = self.conv6(func.relu(self.bn6(self.conv5(func.relu(self.bn5(0.2071 * k1 + 0.29289 * k2 + shortcut))))))
outx += 0.569 * k3
out = self.conv8(func.relu(self.bn8(self.conv7(func.relu(self.bn7(1.7071 * k3 - 0.7071 * k2 + shortcut))))))
outx += 1 / 6. * out
out = outx + shortcut
out = func.relu(out)
return out
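# The decimal constants in Gill's method above are approximations of exact
# sqrt(2) expressions; computing them explicitly makes the tableau auditable
# (these names are new here, not part of the original code):
import math
GILL_B2 = (2.0 - math.sqrt(2.0)) / 6.0   # ~0.097631, weight of k2
GILL_B3 = (2.0 + math.sqrt(2.0)) / 6.0   # ~0.569036, weight of k3
GILL_A31 = (math.sqrt(2.0) - 1.0) / 2.0  # ~0.207107, k1 coefficient in stage 3
GILL_A32 = 1.0 - math.sqrt(2.0) / 2.0    # ~0.292893, k2 coefficient in stage 3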
"""
Use the Kutta-Nystrom 5-6 method; needs one sixth the number of blocks Euler does.
1 block = 12 layers of parameters and FLOPs.
"""
class KuttaNys56Block(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1, option='A'):
super(KuttaNys56Block, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes)
self.conv4 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn4 = nn.BatchNorm2d(planes)
self.conv5 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn5 = nn.BatchNorm2d(planes)
self.conv6 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn6 = nn.BatchNorm2d(planes)
self.conv7 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn7 = nn.BatchNorm2d(planes)
self.conv8 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn8 = nn.BatchNorm2d(planes)
self.conv9 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn9 = nn.BatchNorm2d(planes)
self.conv10 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn10 = nn.BatchNorm2d(planes)
self.conv11 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn11 = nn.BatchNorm2d(planes)
self.conv12 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn12 = nn.BatchNorm2d(planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != planes:
if option == 'A':
"""
                For CIFAR-10, the ResNet paper uses option A.
"""
self.shortcut = LambdaLayer(lambda x:
func.pad(x[:, :, ::2, ::2], (0, 0, 0, 0, planes // 4, planes // 4),
"constant", 0))
elif option == 'B':
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion * planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion * planes)
)
def forward(self, x):
shortcut = self.shortcut(x)
k1 = self.conv2(func.relu(self.bn2(self.conv1(func.relu(self.bn1(x))))))
k2 = self.conv4(func.relu(self.bn4(self.conv3(func.relu(self.bn3(1/3. * k1 + shortcut))))))
k3 = self.conv6(func.relu(self.bn6(self.conv5(func.relu(self.bn5(1/25. * (4 * k1 + 6 * k2) + shortcut))))))
k4 = self.conv8(func.relu(self.bn8(self.conv7(func.relu(self.bn7(1/4. * (k1 - 12 * k2 + 15 * k3) + shortcut))))))
k5 = self.conv10(func.relu(self.bn10(self.conv9(func.relu(self.bn9(1/81. * (6 * k1 + 90 * k2 + - 50 * k3 + 8 * k4) + shortcut))))))
k6 = self.conv12(func.relu(self.bn12(self.conv11(func.relu(self.bn11(1/75. *(6 * k1 + 36 * k2 + 10 * k3 + 8 * k4) + shortcut))))))
out = 1/192. * (23 * k1 + 125 * k2 - 81 * k5 + 125 * k6)
out = out + shortcut
out = func.relu(out)
return out
"""
Use the Huta 6-8 method; needs one eighth the number of blocks Euler does.
1 block = 16 layers of parameters and FLOPs.
"""
class Huta68Block(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1, option='A'):
super(Huta68Block, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes)
self.conv4 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn4 = nn.BatchNorm2d(planes)
self.conv5 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn5 = nn.BatchNorm2d(planes)
self.conv6 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn6 = nn.BatchNorm2d(planes)
self.conv7 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn7 = nn.BatchNorm2d(planes)
self.conv8 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn8 = nn.BatchNorm2d(planes)
self.conv9 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn9 = nn.BatchNorm2d(planes)
self.conv10 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn10 = nn.BatchNorm2d(planes)
self.conv11 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn11 = nn.BatchNorm2d(planes)
self.conv12 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn12 = nn.BatchNorm2d(planes)
self.conv13 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn13 = nn.BatchNorm2d(planes)
self.conv14 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn14 = nn.BatchNorm2d(planes)
self.conv15 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn15 = nn.BatchNorm2d(planes)
self.conv16 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn16 = nn.BatchNorm2d(planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != planes:
if option == 'A':
"""
                For CIFAR-10, the ResNet paper uses option A.
"""
self.shortcut = LambdaLayer(lambda x:
func.pad(x[:, :, ::2, ::2], (0, 0, 0, 0, planes // 4, planes // 4),
"constant", 0))
elif option == 'B':
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion * planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion * planes)
)
def forward(self, x):
shortcut = self.shortcut(x)
k1 = self.conv2(func.relu(self.bn2(self.conv1(func.relu(self.bn1(x))))))
k2 = self.conv4(func.relu(self.bn4(self.conv3(func.relu(self.bn3(1/9. * k1 + shortcut))))))
k3 = self.conv6(func.relu(self.bn6(self.conv5(func.relu(self.bn5(1/24. * (k1 + 3 * k2) + shortcut))))))
k4 = self.conv8(func.relu(self.bn8(self.conv7(func.relu(
self.bn7(1/6. * (k1 - 3 * k2 + 4 * k3) + shortcut))))))
k5 = self.conv10(func.relu(self.bn10(self.conv9(func.relu(
self.bn9(1/8. * (-5 * k1 + 27 * k2 - 24 * k3 + 6 * k4) + shortcut))))))
k6 = self.conv12(func.relu(self.bn12(self.conv11(func.relu(
self.bn11(1/9. * (221 * k1 - 981 * k2 + 867 * k3 - 102*k4 + k5) + shortcut))))))
k7 = self.conv14(func.relu(self.bn14(self.conv13(func.relu(
self.bn13(1/48. *(-183 * k1 + 678 * k2 - 472 * k3 -66 * k4 +80 * k5 + 3*k6) + shortcut))))))
k8 = self.conv16(func.relu(self.bn16(self.conv15(func.relu(
self.bn15(1/82. * (716 * k1 - 2079 * k2 + 1002 * k3 + 834 * k4 -454 * k5 - 9*k6 + 72 * k7) + shortcut))))))
        out = 1/840. * (41 * k1 + 216 * k3 + 24 * k4 + 272 * k5 + 27 * k6 + 216 * k7 + 41 * k8)
# 8
out = out + shortcut
out = func.relu(out)
return out
"""
Use the Runge-Kutta-Fehlberg 6-8 method; needs one eighth the number of blocks Euler does.
1 block = 16 layers of parameters and FLOPs.
"""
class RKFehlberg(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1, option='A'):
super(RKFehlberg, self).__init__()
        self.h = 1     # current step size; rescales the final stage combination
        self.e = 9999  # error tolerance used by the step-size heuristic
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes)
self.conv4 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn4 = nn.BatchNorm2d(planes)
self.conv5 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn5 = nn.BatchNorm2d(planes)
self.conv6 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn6 = nn.BatchNorm2d(planes)
self.conv7 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn7 = nn.BatchNorm2d(planes)
self.conv8 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn8 = nn.BatchNorm2d(planes)
self.conv9 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn9 = nn.BatchNorm2d(planes)
self.conv10 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn10 = nn.BatchNorm2d(planes)
self.conv11 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn11 = nn.BatchNorm2d(planes)
self.conv12 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn12 = nn.BatchNorm2d(planes)
self.conv13 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn13 = nn.BatchNorm2d(planes)
self.conv14 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn14 = nn.BatchNorm2d(planes)
self.conv15 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn15 = nn.BatchNorm2d(planes)
self.conv16 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn16 = nn.BatchNorm2d(planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != planes:
if option == 'A':
"""
                For CIFAR-10, the ResNet paper uses option A.
"""
self.shortcut = LambdaLayer(lambda x:
func.pad(x[:, :, ::2, ::2], (0, 0, 0, 0, planes // 4, planes // 4),
"constant", 0))
elif option == 'B':
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion * planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion * planes)
)
def forward(self, x):
shortcut = self.shortcut(x)
k1 = self.conv2(func.relu(self.bn2(self.conv1(func.relu(self.bn1(x))))))
k2 = self.conv4(func.relu(self.bn4(self.conv3(func.relu(self.bn3(1/4. * k1 + shortcut))))))
k3 = self.conv6(func.relu(self.bn6(self.conv5(func.relu(self.bn5(3/32. * (k1 + 3 * k2) + shortcut))))))
        k4 = self.conv8(func.relu(self.bn8(self.conv7(func.relu(
            self.bn7((1932/2197 * k1 - 7200/2197 * k2 + 7296/2197 * k3) + shortcut))))))  # Fehlberg: 7296/2197, not 7296/32
k5 = self.conv10(func.relu(self.bn10(self.conv9(func.relu(
self.bn9((439/216. * k1 - 8 * k2 + 3680/513 * k3 - 845/4104 * k4) + shortcut))))))
        k6 = self.conv12(func.relu(self.bn12(self.conv11(func.relu(
            self.bn11((-8/27 * k1 + 2 * k2 + 3544/2565 * k3 + 1859/4104 * k4 - 11/40 * k5) + shortcut))))))  # Fehlberg: +2*k2 and 1859/4104
k7 = self.conv14(func.relu(self.bn14(self.conv13(func.relu(
self.bn13(1/48. *(-183 * k1 + 678 * k2 - 472 * k3 -66 * k4 +80 * k5 + 3*k6) + shortcut))))))
k8 = self.conv16(func.relu(self.bn16(self.conv15(func.relu(
self.bn15(1/82. * (716 * k1 - 2079 * k2 + 1002 * k3 + 834 * k4 -454 * k5 - 9*k6 + 72 * k7) + shortcut))))))
        # 4th-order solution of the embedded pair (kept for reference; built on
        # the shortcut branch so shapes match when the block downsamples)
        y1 = shortcut + 25/216 * k1 + 1408/2565 * k3 + 2197/4104 * k4 - 1/5 * k5
        # Error estimate: difference coefficients between the 5th- and 4th-order
        # solutions (note the minus sign on the k4 term)
        y2 = 1/360 * k1 - 128/4275 * k3 - 2197/75240 * k4 + 1/50 * k5 + 2/55 * k6
        err = y2.abs().mean().item()
        # RKF step-size rule: q = 0.84 * (tol * h / err) ** 0.25 ('^' is XOR in Python)
        q = 0.84 * (self.h * self.e / max(err, 1e-12)) ** 0.25
        if err / self.h > self.e:
            self.h *= q
out = self.h * (1 / 840. * (41 * k1 + 216 * k3 + 24 * k4 + 272 * k5 + 27 * k6 + 216 * k7 + 41 * k8))
# 8
out = out + shortcut
out = func.relu(out)
return out
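# Scalar sketch of the RKF step-size rule used above (illustrative assumption
# only): the classical factor is q = 0.84 * (tol * h / err) ** 0.25, and the
# step shrinks when the per-unit-step error exceeds the tolerance.
def _rkf_step_factor(h, err, tol):
    return 0.84 * (tol * h / err) ** 0.25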
"""
Use the Verner 8-9 method; needs one fourteenth the number of blocks Euler does.
1 block = 28 layers of parameters and FLOPs.
"""
class Verner89Block(nn.Module):
expansion = 1
def __init__(self, in_planes, planes, stride=1, option='A'):
super(Verner89Block, self).__init__()
self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=3, stride=stride, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
self.conv3 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn3 = nn.BatchNorm2d(planes)
self.conv4 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn4 = nn.BatchNorm2d(planes)
self.conv5 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn5 = nn.BatchNorm2d(planes)
self.conv6 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn6 = nn.BatchNorm2d(planes)
self.conv7 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn7 = nn.BatchNorm2d(planes)
self.conv8 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn8 = nn.BatchNorm2d(planes)
self.conv9 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn9 = nn.BatchNorm2d(planes)
self.conv10 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn10 = nn.BatchNorm2d(planes)
self.conv11 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn11 = nn.BatchNorm2d(planes)
self.conv12 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn12 = nn.BatchNorm2d(planes)
self.conv13 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn13 = nn.BatchNorm2d(planes)
self.conv14 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn14 = nn.BatchNorm2d(planes)
self.conv15 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn15 = nn.BatchNorm2d(planes)
self.conv16 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn16 = nn.BatchNorm2d(planes)
self.conv17 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn17 = nn.BatchNorm2d(planes)
self.conv18 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn18 = nn.BatchNorm2d(planes)
self.conv19 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn19 = nn.BatchNorm2d(planes)
self.conv20 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn20 = nn.BatchNorm2d(planes)
self.conv21 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn21 = nn.BatchNorm2d(planes)
self.conv22 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn22 = nn.BatchNorm2d(planes)
self.conv23 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn23 = nn.BatchNorm2d(planes)
self.conv24 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn24 = nn.BatchNorm2d(planes)
self.conv25 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn25 = nn.BatchNorm2d(planes)
self.conv26 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn26 = nn.BatchNorm2d(planes)
self.conv27 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn27 = nn.BatchNorm2d(planes)
self.conv28 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn28 = nn.BatchNorm2d(planes)
self.conv29 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn29 = nn.BatchNorm2d(planes)
self.conv30 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn30 = nn.BatchNorm2d(planes)
self.conv31 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn31 = nn.BatchNorm2d(planes)
self.conv32 = nn.Conv2d(planes, planes, kernel_size=3, stride=1, padding=1, bias=False)
self.bn32 = nn.BatchNorm2d(planes)
self.shortcut = nn.Sequential()
if stride != 1 or in_planes != planes:
if option == 'A':
"""
                For CIFAR-10, the ResNet paper uses option A.
"""
self.shortcut = LambdaLayer(lambda x:
func.pad(x[:, :, ::2, ::2], (0, 0, 0, 0, planes // 4, planes // 4),
"constant", 0))
elif option == 'B':
self.shortcut = nn.Sequential(
nn.Conv2d(in_planes, self.expansion * planes, kernel_size=1, stride=stride, bias=False),
nn.BatchNorm2d(self.expansion * planes)
)
def forward(self, x):
shortcut = self.shortcut(x)
k1 = self.conv2(func.relu(self.bn2(self.conv1(func.relu(self.bn1(x))))))
outx = 103 / 1680. * k1
k2 = self.conv4(func.relu(self.bn4(self.conv3(func.relu(self.bn3(1/12. * k1 + shortcut))))))
k3 = self.conv6(func.relu(self.bn6(self.conv5(func.relu(self.bn5(1/27. * (k1 + 2 * k2) + shortcut))))))
k4 = self.conv8(func.relu(self.bn8(self.conv7(func.relu(self.bn7(1/24. * (k1 + 3 * k3) + shortcut))))))
k5 = self.conv10(func.relu(self.bn10(self.conv9(func.relu(self.bn9(1/375. * (234.25203582132 * k1 - 899.27141518056 * k3 + 837.49386649824 * k4) + shortcut))))))
k6 = self.conv12(func.relu(self.bn12(self.conv11(func.relu(self.bn11((0.053333333333 * k1 + 0.2739534538729544 * k4 + 0.24567579393091227 * k5) + shortcut))))))
k7 = self.conv14(func.relu(self.bn14(self.conv13(func.relu(self.bn13((0.06162164740427197 * k1 + 0.1815318224097963 * k4 - 0.013477689611 * k5 + 0.007024903611899742 * k6) + shortcut))))))
k8 = self.conv16(func.relu(self.bn16(self.conv15(func.relu(self.bn15(1/54. * (4*k1+ 13.550510257220001 * k6 + 18.44948974278* k7) + shortcut))))))
outx -= -27 / 140. * k8
#8
k9 = self.conv18(func.relu(self.bn18(self.conv17(func.relu(self.bn17(1/512 * (38*k1 + 61.66173591606*k6 + 174.33826408394 * k7 - 18 * k8) + shortcut))))))
outx += 76 / 105. * k9
k10 = self.conv20(func.relu(self.bn20(self.conv19(func.relu(self.bn19(11/144. * k1 + 0.30503531279770835 * k6 + 0.31070542794303235 * k7 - 1/16. * k8 -8/27. * k9 + shortcut))))))
outx -= 201 / 280. * k10
k11 = self.conv22(func.relu(self.bn22(self.conv21(func.relu(self.bn21(0.07112936653168327*k1 + 0.37852828889059764 * k7 - 0.01174633003514941 * k8 + 0.07272054197227078*k9 - 0.26063186735940236* k10 + shortcut))))))
outx += 1024 / 1365. * k11
k12 = self.conv24(func.relu(self.bn24(self.conv23(func.relu(self.bn23(-8.141639713845233 * k1 -574.4363925621823 * k6 + 847.8814814814815 * k7 + 113.71920186905155 * k8 + 626.9414848959715* k9 + 605.7315968367965 * k10 -328.69135802469134 * k11 + shortcut))))))
outx += 3 / 7280. * k12
k13 = self.conv26(func.relu(self.bn26(self.conv25(func.relu(self.bn25(0.0878037592818966 * k1+0.6933735017296832*k6-1.9030978898036277*k7 + 0.22886338868515282*k8 -0.6904282483623702*k9 -0.07691188807394458* k10 +2624/ 1053.*k11 +3/1664.*k12 + shortcut))))))
outx += 12 / 35. * k13
k14 = self.conv28(func.relu(self.bn28(self.conv27(func.relu(self.bn27(-137/1296.*k1 + 5.5746781906054865*k6 + 7.485506994579699 * k7 - 299/48. * k8 + 184/81. * k9 -44/9.* k10-5120/1053.*k11 - 11/468.* k12 +16/9.* k13 + shortcut))))))
outx += 9 / 280. * k14
out = outx + shortcut
out = func.relu(out)
return out
class CFResNet(nn.Module):
def __init__(self, block, num_blocks, num_classes=10):
super(CFResNet, self).__init__()
self.in_planes = 16
self.conv1 = nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(16)
self.layer1 = self._make_layer(block, 16, num_blocks[0], stride=1)
self.layer2 = self._make_layer(block, 32, num_blocks[1], stride=2)
self.layer3 = self._make_layer(block, 64, num_blocks[2], stride=2)
self.linear = nn.Linear(64, num_classes)
self.apply(_weights_init)
def _make_layer(self, block, planes, num_blocks, stride):
strides = [stride] + [1]*(num_blocks-1)
layers = []
for stride in strides:
layers.append(block(self.in_planes, planes, stride))
self.in_planes = planes * block.expansion
return nn.Sequential(*layers)
def forward(self, x):
out = func.relu(self.bn1(self.conv1(x)))
out = self.layer1(out)
out = self.layer2(out)
out = self.layer3(out)
out = func.avg_pool2d(out, out.size()[3])
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
if __name__ == '__main__':
net = CFResNet(Gill4Block, [1, 1, 1], num_classes=10)
a = torch.rand(7, 3, 32, 32)
print(net(a).shape)
# ==== Project-Source-and-Support-Files/Player.py (lawrence914/15-112-Term-Project) ====
import pygame
from GameSprite import GameSprite
from Bullet import Bullet
from Bullet import PlayerBullet1
from Bullet import PlayerBullet2
from Bullet import PlayerBullet3
class Player(GameSprite):
def __init__(self,x,y,number):
#player's bullet sprite group
self.bullets = pygame.sprite.Group()
self.bulletSize = 10
self.number = number
#animates the player's sprite with numerous images
self.images = []
if number == 0:
self.appendImages1()
else:
self.appendImages2()
self.index = 0
#scales the image down to match the size
if number == 0:
self.image = pygame.transform.scale(self.images\
[self.index].convert_alpha(),(40,60))
else:
self.image = pygame.transform.scale(self.images\
[self.index].convert_alpha(),(60,60))
#animation timer
self.timer = 0
#invincibility after being hit
self.countdown = 70
self.isHit = False
self.width,self.height = self.image.get_size()
super(Player, self).__init__(x, y, self.image, self.height/2)
def appendImages1(self):
''' The source of the images is a gif file from \
http://vignette3.wikia.nocookie.net/dragons-crown/images/9/97/DC_-_\
Wizard_Sprite.gif/revision/latest?cb=20130424094257'''
        for i in range(1, 31):
            # Frames 1.gif through 30.gif animate the wizard sprite.
            self.images.append(
                pygame.image.load('images/player_gif_files/%d.gif' % i))
def appendImages2(self):
''' The source of the images is a gif file from \
http://img1.wikia.nocookie.net/__cb20130424094151/\
dragons-crown/images/2/24/DC_-_Fighter_Sprite.gif'''
        for i in range(1, 40):
            # Frames 1.gif through 39.gif animate the fighter sprite.
            self.images.append(
                pygame.image.load('images/player2_gif_files/%d.gif' % i))
def update(self, screenWidth, screenHeight):
#slows down the animation to simulate flying
if self.timer % 2 == 0:
#updates image animation
self.index += 1
self.timer += 1
if self.index >= len(self.images):
self.index = 0
if self.number == 0:
self.image = pygame.transform.scale(self.images\
[self.index].convert_alpha(),(40,60))
else:
self.image = pygame.transform.scale(self.images\
[self.index].convert_alpha(),(60,60))
        if self.isHit:
self.countdown -= 1
if self.countdown == 0:
self.isHit = False
self.countdown = 100
super(Player, self).update(screenWidth, screenHeight)
def fireBullet(self, weaponLevel):
#fires a bullet sprite
if weaponLevel == 1:
self.bullets.add(PlayerBullet1(self.x,self.y,self.bulletSize))
elif weaponLevel == 2:
#fires two bullets
self.bullets.add(PlayerBullet2(self.x-5,self.y,self.bulletSize*2))
elif weaponLevel == 3:
#fires three bullets
self.bullets.add(PlayerBullet3(self.x-10,self.y,self.bulletSize*2))
def getPlayerBounds(self):
        # gets the bounding box of the player sprite
(x0, y0) = (self.x-self.width/2, self.y-self.height/2)
(x1, y1) = (self.x + self.width/2, self.y + self.height/2)
return (x0, y0, x1, y1) | 54.345912 | 80 | 0.679435 | 1,209 | 8,641 | 4.724566 | 0.123242 | 0.131303 | 0.193277 | 0.265756 | 0.757003 | 0.75 | 0.75 | 0.75 | 0.75 | 0.75 | 0 | 0.036693 | 0.176831 | 8,641 | 159 | 81 | 54.345912 | 0.766343 | 0.074413 | 0 | 0.164063 | 0 | 0 | 0.262425 | 0.262425 | 0 | 0 | 0 | 0 | 0 | 1 | 0.046875 | false | 0 | 0.046875 | 0 | 0.109375 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
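The bounding-box computation in `getPlayerBounds` above reduces to a small pure function; `center_bounds` below is a hypothetical standalone sketch of it (no pygame required):

```python
def center_bounds(x, y, width, height):
    # Axis-aligned bounding box of a sprite centred at (x, y).
    x0, y0 = x - width / 2, y - height / 2
    x1, y1 = x + width / 2, y + height / 2
    return (x0, y0, x1, y1)

print(center_bounds(100, 50, 40, 60))  # -> (80.0, 20.0, 120.0, 80.0)
```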
45b19fc87b87f4292e326e1fcb84b9cb82a964ea | 130 | py | Python | trendmicro_deepsecurity/icon_trendmicro_deepsecurity/actions/__init__.py | OSSSP/insightconnect-plugins | 846758dab745170cf1a8c146211a8bea9592e8ff | [
"MIT"
] | 1 | 2020-03-18T09:14:55.000Z | 2020-03-18T09:14:55.000Z | trendmicro_deepsecurity/icon_trendmicro_deepsecurity/actions/__init__.py | OSSSP/insightconnect-plugins | 846758dab745170cf1a8c146211a8bea9592e8ff | [
"MIT"
] | null | null | null | trendmicro_deepsecurity/icon_trendmicro_deepsecurity/actions/__init__.py | OSSSP/insightconnect-plugins | 846758dab745170cf1a8c146211a8bea9592e8ff | [
"MIT"
] | null | null | null | # GENERATED BY KOMAND SDK - DO NOT EDIT
from .deploy_rules.action import DeployRules
from .search_rules.action import SearchRules
| 32.5 | 44 | 0.823077 | 19 | 130 | 5.526316 | 0.789474 | 0.209524 | 0.32381 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130769 | 130 | 3 | 45 | 43.333333 | 0.929204 | 0.284615 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
2fa6bfac0dd02744c2763d14e84280a474b81531 | 3,232 | py | Python | test/statements/import9.py | abjugard/MagicPython | 2802ded681e0ab1a1057821c1da287147d639505 | [
"MIT"
] | 1,482 | 2015-10-16T21:59:32.000Z | 2022-03-30T11:44:40.000Z | test/statements/import9.py | abjugard/MagicPython | 2802ded681e0ab1a1057821c1da287147d639505 | [
"MIT"
] | 226 | 2015-10-15T15:53:44.000Z | 2022-03-25T03:08:27.000Z | test/statements/import9.py | abjugard/MagicPython | 2802ded681e0ab1a1057821c1da287147d639505 | [
"MIT"
] | 129 | 2015-10-20T02:41:49.000Z | 2022-03-22T01:44:36.000Z | from . . . foo import \
(
# XXX: legal comment inside import
time as bar,
# another comment
baz,
datetime as ham
)
raise Exception('!') from None
from : keyword.control.import.python, source.python
: source.python
. : punctuation.separator.period.python, source.python
: source.python
. : punctuation.separator.period.python, source.python
: source.python
. : punctuation.separator.period.python, source.python
: source.python
foo : source.python
: source.python
import : keyword.control.import.python, source.python
: source.python
\ : punctuation.separator.continuation.line.python, source.python
: source.python
: source.python
( : punctuation.parenthesis.begin.python, source.python
: source.python
# : comment.line.number-sign.python, punctuation.definition.comment.python, source.python
: comment.line.number-sign.python, source.python
XXX : comment.line.number-sign.python, keyword.codetag.notation.python, source.python
: legal comment inside import : comment.line.number-sign.python, source.python
: source.python
time : source.python
: source.python
as : keyword.control.import.python, source.python
: source.python
bar : source.python
, : punctuation.separator.element.python, source.python
: source.python
# : comment.line.number-sign.python, punctuation.definition.comment.python, source.python
another comment : comment.line.number-sign.python, source.python
: source.python
baz : source.python
, : punctuation.separator.element.python, source.python
: source.python
datetime : source.python
: source.python
as : keyword.control.import.python, source.python
: source.python
ham : source.python
: source.python
) : punctuation.parenthesis.end.python, source.python
raise : keyword.control.flow.python, source.python
: source.python
Exception : meta.function-call.python, source.python, support.type.exception.python
( : meta.function-call.python, punctuation.definition.arguments.begin.python, source.python
' : meta.function-call.arguments.python, meta.function-call.python, punctuation.definition.string.begin.python, source.python, string.quoted.single.python
! : meta.function-call.arguments.python, meta.function-call.python, source.python, string.quoted.single.python
' : meta.function-call.arguments.python, meta.function-call.python, punctuation.definition.string.end.python, source.python, string.quoted.single.python
) : meta.function-call.python, punctuation.definition.arguments.end.python, source.python
: source.python
from : keyword.control.flow.python, source.python
: source.python
None : constant.language.python, source.python
| 47.529412 | 166 | 0.627166 | 327 | 3,232 | 6.198777 | 0.143731 | 0.319684 | 0.426246 | 0.248643 | 0.809571 | 0.77553 | 0.710409 | 0.704489 | 0.597435 | 0.547114 | 0 | 0 | 0.274752 | 3,232 | 67 | 167 | 48.238806 | 0.864761 | 0.076733 | 0 | 0.474576 | 0 | 0.033898 | 0.000336 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.101695 | null | null | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
2fa72d8b74280f18c55d94c51297b2791dcefeb1 | 9,727 | py | Python | tests/verbs/test_nested_player.py | RathmoreChaos/intficpy | a5076bba93208dc18dcbf2e4ad720af9e2127eda | [
"MIT"
] | 25 | 2019-04-30T23:51:44.000Z | 2022-03-23T02:02:54.000Z | tests/verbs/test_nested_player.py | RathmoreChaos/intficpy | a5076bba93208dc18dcbf2e4ad720af9e2127eda | [
"MIT"
] | 4 | 2019-07-09T03:43:35.000Z | 2022-01-10T23:41:46.000Z | tests/verbs/test_nested_player.py | RathmoreChaos/intficpy | a5076bba93208dc18dcbf2e4ad720af9e2127eda | [
"MIT"
] | 5 | 2021-04-24T03:54:39.000Z | 2022-01-06T20:59:03.000Z | from ..helpers import IFPTestCase
from intficpy.thing_base import Thing
from intficpy.things import (
Surface,
Container,
)
class TestPlayerGetOn(IFPTestCase):
def setUp(self):
super().setUp()
self.surface = Surface(self.game, "bench")
self.start_room.addThing(self.surface)
def test_climb_on_cannot_sit_stand_lie(self):
FAILURE_MSG = f"You cannot climb on {self.surface.lowNameArticle(True)}. "
self.assertIs(
self.me.location, self.start_room, "Player needs to start in start_room"
)
self.game.turnMain("climb on bench")
self.assertIs(self.me.location, self.start_room)
msg = self.app.print_stack.pop()
self.assertEqual(msg, FAILURE_MSG)
def test_climb_on_can_lie(self):
SUCCESS_MSG = f"You lie on {self.surface.lowNameArticle(True)}. "
self.surface.can_contain_lying_player = True
self.assertIs(
self.me.location, self.start_room, "Player needs to start in start_room"
)
self.game.turnMain("climb on bench")
self.assertIs(self.me.location, self.surface)
msg = self.app.print_stack.pop()
self.assertEqual(msg, SUCCESS_MSG)
def test_climb_on_can_sit(self):
SUCCESS_MSG = f"You sit on {self.surface.lowNameArticle(True)}. "
self.surface.can_contain_sitting_player = True
self.assertIs(
self.me.location, self.start_room, "Player needs to start in start_room"
)
self.game.turnMain("climb on bench")
self.assertIs(self.me.location, self.surface)
msg = self.app.print_stack.pop()
self.assertEqual(msg, SUCCESS_MSG)
def test_climb_on_can_stand(self):
SUCCESS_MSG = f"You stand on {self.surface.lowNameArticle(True)}. "
self.surface.can_contain_standing_player = True
self.assertIs(
self.me.location, self.start_room, "Player needs to start in start_room"
)
self.game.turnMain("climb on bench")
self.assertIs(self.me.location, self.surface)
msg = self.app.print_stack.pop()
self.assertEqual(msg, SUCCESS_MSG)
class TestPlayerGetOff(IFPTestCase):
def setUp(self):
super().setUp()
self.surface = Surface(self.game, "bench")
self.surface.can_contain_standing_player = True
self.start_room.addThing(self.surface)
self.game.turnMain("climb on bench")
self.assertIs(self.me.location, self.surface)
def test_climb_down_from(self):
SUCCESS_MSG = f"You climb down from {self.surface.lowNameArticle(True)}. "
self.game.turnMain("climb down from bench")
self.assertIs(self.me.location, self.start_room)
msg = self.app.print_stack.pop()
self.assertEqual(msg, SUCCESS_MSG)
def test_climb_down(self):
SUCCESS_MSG = f"You climb down from {self.surface.lowNameArticle(True)}. "
self.game.turnMain("climb down")
self.assertIs(self.me.location, self.start_room)
msg = self.app.print_stack.pop()
self.assertEqual(msg, SUCCESS_MSG)
class TestPlayerGetIn(IFPTestCase):
def setUp(self):
super().setUp()
self.container = Container(self.game, "box")
self.start_room.addThing(self.container)
def test_climb_in_cannot_sit_stand_lie(self):
FAILURE_MSG = f"You cannot climb into {self.container.lowNameArticle(True)}. "
self.assertIs(
self.me.location, self.start_room, "Player needs to start in start_room"
)
self.game.turnMain("climb in box")
self.assertIs(self.me.location, self.start_room)
msg = self.app.print_stack.pop()
self.assertEqual(msg, FAILURE_MSG)
def test_climb_in_can_lie(self):
SUCCESS_MSG = f"You lie in {self.container.lowNameArticle(True)}. "
self.container.can_contain_lying_player = True
self.assertIs(
self.me.location, self.start_room, "Player needs to start in start_room"
)
self.game.turnMain("climb in box")
self.assertIs(self.me.location, self.container)
msg = self.app.print_stack.pop()
self.assertEqual(msg, SUCCESS_MSG)
def test_climb_in_can_sit(self):
SUCCESS_MSG = f"You sit in {self.container.lowNameArticle(True)}. "
self.container.can_contain_sitting_player = True
self.assertIs(
self.me.location, self.start_room, "Player needs to start in start_room"
)
self.game.turnMain("climb in box")
self.assertIs(self.me.location, self.container)
msg = self.app.print_stack.pop()
self.assertEqual(msg, SUCCESS_MSG)
def test_climb_in_can_stand(self):
SUCCESS_MSG = f"You stand in {self.container.lowNameArticle(True)}. "
self.container.can_contain_standing_player = True
self.assertIs(
self.me.location, self.start_room, "Player needs to start in start_room"
)
self.game.turnMain("climb in box")
self.assertIs(self.me.location, self.container)
msg = self.app.print_stack.pop()
self.assertEqual(msg, SUCCESS_MSG)
class TestPlayerGetInOpenLid(IFPTestCase):
def setUp(self):
super().setUp()
self.container = Container(self.game, "box")
self.container.has_lid = True
self.container.is_open = True
self.start_room.addThing(self.container)
def test_climb_in_can_lie(self):
SUCCESS_MSG = f"You lie in {self.container.lowNameArticle(True)}. "
self.container.can_contain_lying_player = True
self.assertIs(
self.me.location, self.start_room, "Player needs to start in start_room"
)
self.game.turnMain("climb in box")
self.assertIs(self.me.location, self.container)
msg = self.app.print_stack.pop()
self.assertEqual(msg, SUCCESS_MSG)
def test_climb_in_can_sit(self):
SUCCESS_MSG = f"You sit in {self.container.lowNameArticle(True)}. "
self.container.can_contain_sitting_player = True
self.assertIs(
self.me.location, self.start_room, "Player needs to start in start_room"
)
self.game.turnMain("climb in box")
self.assertIs(self.me.location, self.container)
msg = self.app.print_stack.pop()
self.assertEqual(msg, SUCCESS_MSG)
def test_climb_in_can_stand(self):
SUCCESS_MSG = f"You stand in {self.container.lowNameArticle(True)}. "
self.container.can_contain_standing_player = True
self.assertIs(
self.me.location, self.start_room, "Player needs to start in start_room"
)
self.game.turnMain("climb in box")
self.assertIs(self.me.location, self.container)
msg = self.app.print_stack.pop()
self.assertEqual(msg, SUCCESS_MSG)
class TestPlayerGetInClosedLid(IFPTestCase):
def setUp(self):
super().setUp()
self.container = Container(self.game, "box")
self.container.has_lid = True
self.container.is_open = False
self.start_room.addThing(self.container)
def test_climb_in_can_lie(self):
FAILURE_MSG = (
f"You cannot climb into {self.container.lowNameArticle(True)}, "
"since it is closed. "
)
self.container.can_contain_lying_player = True
self.assertIs(
self.me.location, self.start_room, "Player needs to start in start_room"
)
self.game.turnMain("climb in box")
self.assertIs(self.me.location, self.start_room)
msg = self.app.print_stack.pop()
self.assertEqual(msg, FAILURE_MSG)
def test_climb_in_can_sit(self):
FAILURE_MSG = (
f"You cannot climb into {self.container.lowNameArticle(True)}, "
"since it is closed. "
)
self.container.can_contain_sitting_player = True
self.assertIs(
self.me.location, self.start_room, "Player needs to start in start_room"
)
self.game.turnMain("climb in box")
self.assertIs(self.me.location, self.start_room)
msg = self.app.print_stack.pop()
self.assertEqual(msg, FAILURE_MSG)
def test_climb_in_can_stand(self):
FAILURE_MSG = (
f"You cannot climb into {self.container.lowNameArticle(True)}, "
"since it is closed. "
)
self.container.can_contain_standing_player = True
self.assertIs(
self.me.location, self.start_room, "Player needs to start in start_room"
)
self.game.turnMain("climb in box")
self.assertIs(self.me.location, self.start_room)
msg = self.app.print_stack.pop()
self.assertEqual(msg, FAILURE_MSG)
class TestPlayerGetOut(IFPTestCase):
def setUp(self):
super().setUp()
self.container = Container(self.game, "box")
self.container.can_contain_standing_player = True
self.start_room.addThing(self.container)
self.game.turnMain("climb in box")
self.assertIs(self.me.location, self.container)
def test_climb_out_of(self):
SUCCESS_MSG = f"You climb out of {self.container.lowNameArticle(True)}. "
self.game.turnMain("climb out of box")
self.assertIs(self.me.location, self.start_room)
msg = self.app.print_stack.pop()
self.assertEqual(msg, SUCCESS_MSG)
def test_climb_out(self):
SUCCESS_MSG = f"You climb out of {self.container.lowNameArticle(True)}. "
self.game.turnMain("climb out")
self.assertIs(self.me.location, self.start_room)
msg = self.app.print_stack.pop()
self.assertEqual(msg, SUCCESS_MSG)
| 32.750842 | 86 | 0.650766 | 1,256 | 9,727 | 4.869427 | 0.056529 | 0.063277 | 0.088947 | 0.100065 | 0.95896 | 0.953074 | 0.944899 | 0.942773 | 0.927404 | 0.906802 | 0 | 0 | 0.242829 | 9,727 | 296 | 87 | 32.861486 | 0.830414 | 0 | 0 | 0.764977 | 0 | 0 | 0.185772 | 0.069086 | 0 | 0 | 0 | 0 | 0.239631 | 1 | 0.110599 | false | 0 | 0.013825 | 0 | 0.152074 | 0.082949 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2fd60118158a572399f6fb138af5091860363cdf | 2,744 | py | Python | src/AuShadha/immunisation/dijit_fields_constants.py | GosthMan/AuShadha | 3ab48825a0dba19bf880b6ac6141ab7a6adf1f3e | [
"PostgreSQL"
] | 46 | 2015-03-04T14:19:47.000Z | 2021-12-09T02:58:46.000Z | src/AuShadha/immunisation/dijit_fields_constants.py | aytida23/AuShadha | 3ab48825a0dba19bf880b6ac6141ab7a6adf1f3e | [
"PostgreSQL"
] | 2 | 2015-06-05T10:29:04.000Z | 2015-12-06T16:54:10.000Z | src/AuShadha/immunisation/dijit_fields_constants.py | aytida23/AuShadha | 3ab48825a0dba19bf880b6ac6141ab7a6adf1f3e | [
"PostgreSQL"
] | 24 | 2015-03-23T01:38:11.000Z | 2022-01-24T16:23:42.000Z | IMMUNISATION_FORM_CONSTANTS ={'vaccine_detail':{
'max_length': 30,
"data-dojo-type": "dijit.form.FilteringSelect",
"data-dojo-props": r"'required': true"
},
'route':{
'max_length': 30,
"data-dojo-type": "dijit.form.Select",
"data-dojo-props": r"'required' : true ,'regExp':'','invalidMessage' : 'Invalid Character'"
},
'injection_site':{
'max_length': 30,
"data-dojo-type": "dijit.form.Select",
"data-dojo-props": r"'required' : true ,'regExp':'','invalidMessage' : 'Invalid Character'"
},
'dose':{
'max_length': 30,
"data-dojo-type": "dijit.form.FilteringSelect",
"data-dojo-props": r"'required' : true ,'regExp':'','invalidMessage' : 'Invalid Character'"
},
#'administrator':{
#'max_length': 30,
#"data-dojo-type": "dijit.form.Select",
#"data-dojo-props": r"'required': true"
#},
'vaccination_date':{
'max_length': 30,
"data-dojo-type": "dijit.form.DateTextBox",
"data-dojo-props": r"'required' : true ,'regExp':'','invalidMessage' : 'Invalid Character'"
},
'next_due':{
'max_length': 30,
"data-dojo-type": "dijit.form.DateTextBox",
"data-dojo-props": r"'required' : true ,'regExp':'','invalidMessage' : 'Invalid Character'"
},
'adverse_reaction':{
'max_length': 150,
"data-dojo-type": "dijit.form.Textarea",
"data-dojo-props": r"'required' : true ,'regExp':'[\\w]+','invalidMessage' : 'Invalid Character'"
}
}
| 65.333333 | 135 | 0.322522 | 161 | 2,744 | 5.403727 | 0.229814 | 0.147126 | 0.110345 | 0.156322 | 0.814943 | 0.790805 | 0.790805 | 0.754023 | 0.754023 | 0.754023 | 0 | 0.014061 | 0.559402 | 2,744 | 41 | 136 | 66.926829 | 0.705542 | 0.040816 | 0 | 0.472222 | 0 | 0 | 0.355919 | 0.105063 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
2fdbcaed70cbc246dbbb66ebc0730bfc3e40c0bf | 12,937 | py | Python | test/unit/Physics/models/GenericSourceTerm.py | thirtywang/OpenPNM | e55ee7ae69a8be3e2b0e6bf24c9ff92b6d24e16a | [
"MIT"
] | 1 | 2021-03-30T21:38:26.000Z | 2021-03-30T21:38:26.000Z | test/unit/Physics/models/GenericSourceTerm.py | thirtywang/OpenPNM | e55ee7ae69a8be3e2b0e6bf24c9ff92b6d24e16a | [
"MIT"
] | null | null | null | test/unit/Physics/models/GenericSourceTerm.py | thirtywang/OpenPNM | e55ee7ae69a8be3e2b0e6bf24c9ff92b6d24e16a | [
"MIT"
] | 1 | 2020-07-02T02:21:10.000Z | 2020-07-02T02:21:10.000Z | import OpenPNM
import numpy as np
import OpenPNM.Physics.models as pm
class GenericSourceTermTest:
def setup_class(self):
self.net = OpenPNM.Network.Cubic(shape=[5, 5, 5])
self.phase = OpenPNM.Phases.GenericPhase(network=self.net)
Ps = self.net.Ps
Ts = self.net.Ts
self.phys = OpenPNM.Physics.GenericPhysics(network=self.net,
phase=self.phase,
pores=Ps, throats=Ts)
self.phys['throat.diffusive_conductance'] = 5e-8
self.phase['pore.mole_fraction'] = 0.
self.alg = OpenPNM.Algorithms.GenericLinearTransport(network=self.net,
phase=self.phase)
BC_pores = np.arange(20, 30)
self.S_pores = np.arange(55, 85)
self.alg.set_boundary_conditions(bctype='Dirichlet',
bcvalue=0.4,
pores=BC_pores)
def test_linear(self):
self.phys['pore.item1'] = 0.5e-11
self.phys['pore.item2'] = 1.5e-12
self.phys.models.add(propname='pore.source1',
model=pm.generic_source_term.linear,
A1='pore.item1',
A2='pore.item2',
x='mole_fraction',
return_rate=False,
regen_mode='on_demand')
self.phys.models.add(propname='pore.source2',
model=pm.generic_source_term.linear,
A1='pore.item1',
A2='pore.item2',
x='mole_fraction',
return_rate=True,
regen_mode='on_demand')
self.alg.set_source_term(source_name='pore.source1',
pores=self.S_pores,
mode='overwrite')
self.alg.run(conductance='throat.diffusive_conductance',
quantity='pore.mole_fraction',
super_pore_conductance=None)
self.alg.return_results()
self.phys.regenerate(props='pore.source1')
self.phys.regenerate(props='pore.source2')
X = self.phase['pore.mole_fraction']
r1 = np.round(np.sum(0.5e-11 * X[self.S_pores] + 1.5e-12), 20)
r2 = np.round(np.sum(self.phys['pore.source2'][self.S_pores]), 20)
r3 = np.round(self.alg.rate(pores=self.S_pores)[0], 20)
assert r1 == r2
assert r2 == -r3
def test_power_law(self):
self.phys['pore.item1'] = 0.5e-12
self.phys['pore.item2'] = 2.5
self.phys['pore.item3'] = -1.4e-11
self.phys.models.add(propname='pore.source1',
model=pm.generic_source_term.power_law,
A1='pore.item1',
A2='pore.item2',
A3='pore.item3',
x='mole_fraction',
return_rate=False,
regen_mode='on_demand')
self.phys.models.add(propname='pore.source2',
model=pm.generic_source_term.power_law,
A1='pore.item1',
A2='pore.item2',
A3='pore.item3',
x='mole_fraction',
return_rate=True,
regen_mode='on_demand')
self.alg.set_source_term(source_name='pore.source1',
pores=self.S_pores,
mode='overwrite')
self.alg.run(conductance='throat.diffusive_conductance',
quantity='pore.mole_fraction',
super_pore_conductance=None)
self.alg.return_results()
self.phys.regenerate(props='pore.source1')
self.phys.regenerate(props='pore.source2')
X = self.phase['pore.mole_fraction']
r1 = np.round(np.sum(0.5e-12 * X[self.S_pores] ** 2.5 - 1.4e-11), 20)
r2 = np.round(np.sum(self.phys['pore.source2'][self.S_pores]), 20)
r3 = np.round(self.alg.rate(pores=self.S_pores)[0], 20)
assert r1 == r2
assert r2 == -r3
def test_exponential(self):
self.phys['pore.item1'] = 0.8e-11
self.phys['pore.item2'] = 3
self.phys['pore.item3'] = 0.5
self.phys['pore.item4'] = 2
self.phys['pore.item5'] = -0.34
self.phys['pore.item6'] = 2e-14
self.phys.models.add(propname='pore.source1',
model=pm.generic_source_term.exponential,
A1='pore.item1',
A2='pore.item2',
A3='pore.item3',
A4='pore.item4',
A5='pore.item5',
A6='pore.item6',
x='mole_fraction',
return_rate=False,
regen_mode='on_demand')
self.phys.models.add(propname='pore.source2',
model=pm.generic_source_term.exponential,
A1='pore.item1',
A2='pore.item2',
A3='pore.item3',
A4='pore.item4',
A5='pore.item5',
A6='pore.item6',
x='mole_fraction',
return_rate=True,
regen_mode='on_demand')
self.alg.set_source_term(source_name='pore.source1',
pores=self.S_pores,
mode='overwrite')
self.alg.run(conductance='throat.diffusive_conductance',
quantity='pore.mole_fraction',
super_pore_conductance=None)
self.alg.return_results()
self.phys.regenerate(props='pore.source1')
self.phys.regenerate(props='pore.source2')
X = self.phase['pore.mole_fraction']
r1 = np.round(np.sum(0.8e-11 * 3 ** (0.5 * X[self.S_pores] ** 2 -
0.34) + 2e-14), 20)
r2 = np.round(np.sum(self.phys['pore.source2'][self.S_pores]), 20)
r3 = np.round(self.alg.rate(pores=self.S_pores)[0], 20)
assert r1 == r2
assert r2 == -r3
def test_natural_exponential(self):
self.phys['pore.item1'] = 0.8e-11
self.phys['pore.item2'] = 0.5
self.phys['pore.item3'] = 2
self.phys['pore.item4'] = -0.34
self.phys['pore.item5'] = 2e-14
self.phys.models.add(propname='pore.source1',
model=pm.generic_source_term.natural_exponential,
A1='pore.item1',
A2='pore.item2',
A3='pore.item3',
A4='pore.item4',
A5='pore.item5',
x='mole_fraction',
return_rate=False,
regen_mode='on_demand')
self.phys.models.add(propname='pore.source2',
model=pm.generic_source_term.natural_exponential,
A1='pore.item1',
A2='pore.item2',
A3='pore.item3',
A4='pore.item4',
A5='pore.item5',
x='mole_fraction',
return_rate=True,
regen_mode='on_demand')
self.alg.set_source_term(source_name='pore.source1',
pores=self.S_pores,
mode='overwrite')
self.alg.run(conductance='throat.diffusive_conductance',
quantity='pore.mole_fraction',
super_pore_conductance=None)
self.alg.return_results()
self.phys.regenerate(props='pore.source1')
self.phys.regenerate(props='pore.source2')
X = self.phase['pore.mole_fraction']
r1 = np.round(np.sum(0.8e-11 * np.exp(0.5 * X[self.S_pores] ** 2 -
0.34) + 2e-14), 20)
r2 = np.round(np.sum(self.phys['pore.source2'][self.S_pores]), 20)
r3 = np.round(self.alg.rate(pores=self.S_pores)[0], 20)
assert r1 == r2
assert r2 == -r3
def test_logarithm(self):
self.phys['pore.item1'] = 0.16e-13
self.phys['pore.item2'] = 10
self.phys['pore.item3'] = 4
self.phys['pore.item4'] = 1.4
self.phys['pore.item5'] = 0.133
self.phys['pore.item6'] = -5.1e-13
self.phys.models.add(propname='pore.source1',
model=pm.generic_source_term.logarithm,
A1='pore.item1',
A2='pore.item2',
A3='pore.item3',
A4='pore.item4',
A5='pore.item5',
A6='pore.item6',
x='mole_fraction',
return_rate=False,
regen_mode='on_demand')
self.phys.models.add(propname='pore.source2',
model=pm.generic_source_term.logarithm,
A1='pore.item1',
A2='pore.item2',
A3='pore.item3',
A4='pore.item4',
A5='pore.item5',
A6='pore.item6',
x='mole_fraction',
return_rate=True,
regen_mode='on_demand')
self.alg.set_source_term(source_name='pore.source1',
pores=self.S_pores,
mode='overwrite')
self.alg.run(conductance='throat.diffusive_conductance',
quantity='pore.mole_fraction',
super_pore_conductance=None)
self.alg.return_results()
self.phys.regenerate(props='pore.source1')
self.phys.regenerate(props='pore.source2')
X = self.phase['pore.mole_fraction']
r1 = np.round(np.sum(0.16e-13 * np.log(4 * X[self.S_pores] ** (1.4) +
0.133) / np.log(10) - 5.1e-13), 20)
r2 = np.round(np.sum(self.phys['pore.source2'][self.S_pores]), 20)
r3 = np.round(self.alg.rate(pores=self.S_pores)[0], 20)
assert r1 == r2
assert r2 == -r3
def test_natural_logarithm(self):
self.phys['pore.item1'] = 0.16e-14
self.phys['pore.item2'] = 4
self.phys['pore.item3'] = 1.4
self.phys['pore.item4'] = 0.133
self.phys['pore.item5'] = -5.1e-14
self.phys.models.add(propname='pore.source1',
model=pm.generic_source_term.natural_logarithm,
A1='pore.item1',
A2='pore.item2',
A3='pore.item3',
A4='pore.item4',
A5='pore.item5',
x='mole_fraction',
return_rate=False,
regen_mode='on_demand')
self.phys.models.add(propname='pore.source2',
model=pm.generic_source_term.natural_logarithm,
A1='pore.item1',
A2='pore.item2',
A3='pore.item3',
A4='pore.item4',
A5='pore.item5',
x='mole_fraction',
return_rate=True,
regen_mode='on_demand')
self.alg.set_source_term(source_name='pore.source1',
pores=self.S_pores,
mode='overwrite')
self.alg.run(conductance='throat.diffusive_conductance',
quantity='pore.mole_fraction',
super_pore_conductance=None)
self.alg.return_results()
self.phys.regenerate(props='pore.source1')
self.phys.regenerate(props='pore.source2')
X = self.phase['pore.mole_fraction']
r1 = np.round(np.sum(0.16e-14 * np.log(4 * X[self.S_pores] ** (1.4) +
0.133) - 5.1e-14), 20)
r2 = np.round(np.sum(self.phys['pore.source2'][self.S_pores]), 20)
r3 = np.round(self.alg.rate(pores=self.S_pores)[0], 20)
assert r1 == r2
assert r2 == -r3
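The three tests above each assert a closed-form expression against the generated source term. For reference, the logarithm and natural-logarithm forms checked by `test_logarithm` and `test_natural_logarithm` can be sketched as plain numpy functions (the parameter names A1–A6 mirror the model arguments; the sample X values below are made up):

```python
import numpy as np

def logarithm_source(X, A1, A2, A3, A4, A5, A6):
    """S = A1 * log_A2(A3 * X**A4 + A5) + A6 -- the form asserted in test_logarithm."""
    return A1 * np.log(A3 * X ** A4 + A5) / np.log(A2) + A6

def natural_logarithm_source(X, A1, A2, A3, A4, A5):
    """S = A1 * ln(A2 * X**A3 + A4) + A5 -- the form asserted in test_natural_logarithm."""
    return A1 * np.log(A2 * X ** A3 + A4) + A5

# Hypothetical mole fractions standing in for X[self.S_pores]
X = np.array([0.1, 0.2, 0.3])
r_log = np.sum(logarithm_source(X, 0.16e-13, 10, 4, 1.4, 0.133, -5.1e-13))
r_ln = np.sum(natural_logarithm_source(X, 0.16e-14, 4, 1.4, 0.133, -5.1e-14))
```

With `return_rate=True` the model stores exactly these values per pore, which is why the tests can compare `np.sum` over the source pores against `alg.rate` with only a sign flip.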
# ---- File: temboo/core/Library/Twilio/AvailablePhoneNumbers/__init__.py
# ---- Repo: jordanemedlock/psychtruths (Apache-2.0)
from temboo.Library.Twilio.AvailablePhoneNumbers.LocalList import LocalList, LocalListInputSet, LocalListResultSet, LocalListChoreographyExecution
from temboo.Library.Twilio.AvailablePhoneNumbers.TollFreeList import TollFreeList, TollFreeListInputSet, TollFreeListResultSet, TollFreeListChoreographyExecution
# ---- File: tests/unit_tests/uncached_workflow/complex_workflow.py
# ---- Repo: kyocum/disdat-step-functions (Apache-2.0)
from stepfunctions.steps import states
from stepfunctions.steps import ChoiceRule
from disdat_step_function.caching_wrapper import Caching
class ComplexWorkflow:
@classmethod
def get_workflow(cls):
start = states.Pass(state_id='start')
choice = states.Choice(state_id='choice')
error = states.Chain([states.Wait(state_id='wait_fail', seconds=10), states.Fail(state_id='fail')])
wait_state = states.Wait(state_id='wait', seconds=1)
task_1 = states.Task(state_id='task_1')
task_1.add_catch(states.Catch(next_step=error))
task_2 = states.Task(state_id='task_2')
chain = states.Chain([task_1, task_2])
chain_2 = states.Chain([wait_state, chain])
choice.add_choice(rule=ChoiceRule.BooleanEquals('$', True), next_step=chain_2)
task_3 = states.Task(state_id='task_3')
pass_1 = states.Pass(state_id='pass_1')
choice.default_choice(next_step=start)
parallel = states.Parallel(state_id='parallel')
parallel.add_branch(task_3)
parallel.add_branch(pass_1)
end = states.Pass(state_id='end')
success = states.Succeed(state_id='over')
return states.Chain([start, choice, states.Chain([parallel, end, success])])
@classmethod
def get_expected_def(cls):
caching = Caching(caching_lambda_name='',
s3_bucket_url='s3://...',
context_name='',
verbose=True)
start = states.Pass(state_id='start')
choice = states.Choice(state_id='choice')
error = states.Chain([states.Wait(state_id='wait_fail', seconds=10), states.Fail(state_id='fail')])
wait_state = states.Wait(state_id='wait', seconds=1)
task_1 = states.Task(state_id='task_1')
task_1.add_catch(states.Catch(next_step=error))
task_2 = states.Task(state_id='task_2')
task_1 = caching.cache_step(task_1)
task_2 = caching.cache_step(task_2)
chain = states.Chain([task_1, task_2])
chain_2 = states.Chain([wait_state, chain])
choice.add_choice(rule=ChoiceRule.BooleanEquals('$', True), next_step=chain_2)
task_3 = states.Task(state_id='task_3')
task_3 = caching.cache_step(task_3)
pass_1 = states.Pass(state_id='pass_1')
choice.default_choice(next_step=start)
parallel = states.Parallel(state_id='parallel')
parallel.add_branch(task_3)
parallel.add_branch(pass_1)
end = states.Pass(state_id='end')
success = states.Succeed(state_id='over')
        return states.Chain([start, choice, states.Chain([parallel, end, success])])
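The only difference between `get_workflow` and `get_expected_def` above is that selected `Task` states are passed through `caching.cache_step`, which returns a wrapped step. A minimal sketch of that wrap-a-step idea in plain Python (the `Step` and `Caching` classes here are hypothetical stand-ins, not the disdat or stepfunctions API):

```python
class Step:
    """Minimal stand-in for a state-machine step."""
    def __init__(self, state_id):
        self.state_id = state_id

    def run(self, data):
        return f"{self.state_id}({data})"

class Caching:
    """Hypothetical cache wrapper: cache_step returns a step that
    consults a cache before delegating to the wrapped step."""
    def __init__(self):
        self._cache = {}

    def cache_step(self, step):
        outer = self

        class CachedStep(Step):
            def run(self, data):
                key = (step.state_id, data)
                if key not in outer._cache:
                    outer._cache[key] = step.run(data)  # cache miss: execute
                return outer._cache[key]                # cache hit: reuse

        return CachedStep(step.state_id)

caching = Caching()
task = caching.cache_step(Step('task_1'))
first = task.run('x')    # executes and stores the result
second = task.run('x')   # served from the cache
```

The wrapped step keeps the original `state_id`, which is why the cached and uncached workflow definitions above differ only in the bodies of the wrapped tasks.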
# ---- File: tnetwork/dyn_graph/__init__.py
# ---- Repo: tomjorquera/tnetwork (BSD-2-Clause)
from tnetwork.dyn_graph.dyn_graph_ig import DynGraphIG
from tnetwork.dyn_graph.function import *
from tnetwork.dyn_graph.dyn_graph_sn import DynGraphSN | 50.333333 | 54 | 0.880795 | 24 | 151 | 5.25 | 0.416667 | 0.31746 | 0.357143 | 0.47619 | 0.444444 | 0.444444 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072848 | 151 | 3 | 55 | 50.333333 | 0.9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
6437ce0b407ea992d3f301b599c64ba250ffafe8 | 24,163 | py | Python | src/deoxys/data/data_reader.py | huynhngoc/deoxys | b2e9936b723807e129fda36d8d6131ca00db558f | [
"MIT"
] | 1 | 2021-12-28T15:48:45.000Z | 2021-12-28T15:48:45.000Z | src/deoxys/data/data_reader.py | huynhngoc/deoxys | b2e9936b723807e129fda36d8d6131ca00db558f | [
"MIT"
] | 2 | 2020-06-26T11:03:53.000Z | 2020-06-26T11:05:09.000Z | src/deoxys/data/data_reader.py | huynhngoc/deoxys | b2e9936b723807e129fda36d8d6131ca00db558f | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
__author__ = "Ngoc Huynh Bao"
__email__ = "ngoc.huynh.bao@nmbu.no"
import h5py
import numpy as np
from deoxys.keras.preprocessing import ImageDataGenerator
from .data_generator import DataGenerator, HDF5DataGenerator, \
H5DataGenerator, H5PatchGenerator
from ..utils import Singleton, file_finder
class DataReader:
"""
    The base class of all data readers. Any newly created DataReader
    should inherit from this class.
"""
def __init__(self, *args, **kwargs):
# the existence of the data reader is True by default
# if the data reader cannot be loaded because of IO reason,
# set this value to false
self.ready = True
@property
def train_generator(self):
"""
Data Generator for the training dataset
Returns
-------
deoxys.data.DataGenerator
            A DataGenerator instance that generates the train dataset
"""
return DataGenerator().generate()
@property
def test_generator(self):
"""
Data Generator for the test dataset
Returns
-------
deoxys.data.DataGenerator
            A DataGenerator instance that generates the test dataset
"""
return DataGenerator().generate()
@property
def val_generator(self):
"""
Data Generator for the validation dataset
Returns
-------
deoxys.data.DataGenerator
            A DataGenerator instance that generates the validation dataset
"""
return DataGenerator().generate()
@property
def original_test(self):
pass
class HDF5Reader(DataReader):
"""DataReader that use data from an hdf5 file.
Initialize a HDF5 Data Reader, which reads data from a HDF5
file. This file should be split into groups. Each group contain
datasets, each of which is a column in the data.
Example:
        The dataset X contains 1000 samples, with 4 columns:
x, y, z, t. Where x is the main input, y and z are supporting
information (index, descriptions) and t is the target for
prediction. We want to test 30% of this dataset, and have a
cross validation of 100 samples.
Then, the hdf5 containing dataset X should have 10 groups,
each group contains 100 samples. We can name these groups
'fold_1', 'fold_2', 'fold_3', ... , 'fold_9', 'fold_10'.
Each group will then have 4 datasets: x, y, z and t, each of
which has 100 items.
Since x is the main input, then `x_name='x'`, and t is the
target for prediction, then `y_name='t'`. We named the groups
in the form of fold_n, then `fold_prefix='fold'`.
Let's assume the data is stratified, we want to test on the
last 30% of the data, so `test_folds=[8, 9, 10]`.
100 samples is used for cross-validation. Thus, one option for
`train_folds` and `val_folds` is `train_folds=[1,2,3,4,5,6]`
and `val_folds=[7]`. Or in another experiment, you can set
`train_folds=[2,3,4,5,6,7]` and `val_folds=[1]`.
    If the hdf5 file doesn't follow any naming formula for its
    groups, you can set `fold_prefix=None` and put the full group
    names directly into `train_folds`, `val_folds` and `test_folds`.
    Parameters
    ----------
    filename : str
        The hdf5 file name that contains the data.
    batch_size : int, optional
        Number of samples to feed into
        the neural network in each step, by default 32
    preprocessors : list of deoxys.data.Preprocessor, optional
        List of preprocessors to apply on the data, by default None
    x_name : str, optional
        Dataset name to be used as input, by default 'x'
    y_name : str, optional
        Dataset name to be used as target, by default 'y'
    batch_cache : int, optional
        Number of batches to be cached when reading the
        file, by default 10
    train_folds : list of int, or list of str, optional
        List of folds to be used as train data, by default None
    test_folds : list of int, or list of str, optional
        List of folds to be used as test data, by default None
    val_folds : list of int, or list of str, optional
        List of folds to be used as validation data, by default None
    fold_prefix : str, optional
        The prefix of the group name in the HDF5 file, by default 'fold'
    """
def __init__(self, filename, batch_size=32, preprocessors=None,
x_name='x', y_name='y', batch_cache=10,
train_folds=None, test_folds=None, val_folds=None,
fold_prefix='fold'):
"""
Initialize a HDF5 Data Reader, which reads data from a HDF5
        file. This file should be split into groups. Each group contains
datasets, each of which is a column in the data.
"""
super().__init__()
h5_filename = file_finder(filename)
if h5_filename is None:
# HDF5DataReader is created, but won't be loaded into model
self.ready = False
return
self.hf = h5py.File(h5_filename, 'r')
self.batch_size = batch_size
self.batch_cache = batch_cache
self.preprocessors = preprocessors
self.x_name = x_name
self.y_name = y_name
self.fold_prefix = fold_prefix
train_folds = list(train_folds) if train_folds else [0]
test_folds = list(test_folds) if test_folds else [2]
val_folds = list(val_folds) if val_folds else [1]
if fold_prefix:
self.train_folds = ['{}_{}'.format(
fold_prefix, train_fold) for train_fold in train_folds]
self.test_folds = ['{}_{}'.format(
fold_prefix, test_fold) for test_fold in test_folds]
self.val_folds = ['{}_{}'.format(
fold_prefix, val_fold) for val_fold in val_folds]
else:
self.train_folds = train_folds
self.test_folds = test_folds
self.val_folds = val_folds
self._original_test = None
self._original_val = None
@property
def train_generator(self):
"""
Returns
-------
deoxys.data.DataGenerator
A DataGenerator for generating batches of data for training
"""
return HDF5DataGenerator(
self.hf, batch_size=self.batch_size, batch_cache=self.batch_cache,
preprocessors=self.preprocessors,
x_name=self.x_name, y_name=self.y_name,
folds=self.train_folds)
@property
def test_generator(self):
"""
Returns
-------
deoxys.data.DataGenerator
A DataGenerator for generating batches of data for testing
"""
return HDF5DataGenerator(
self.hf, batch_size=self.batch_size, batch_cache=self.batch_cache,
preprocessors=self.preprocessors,
x_name=self.x_name, y_name=self.y_name,
folds=self.test_folds)
@property
def val_generator(self):
"""
Returns
-------
deoxys.data.DataGenerator
A DataGenerator for generating batches of data for validation
"""
return HDF5DataGenerator(
self.hf, batch_size=self.batch_size, batch_cache=self.batch_cache,
preprocessors=self.preprocessors,
x_name=self.x_name, y_name=self.y_name,
folds=self.val_folds)
@property
def original_test(self):
"""
Return a dictionary of all data in the test set
"""
if self._original_test is None:
self._original_test = {}
for key in self.hf[self.test_folds[0]].keys():
data = None
for fold in self.test_folds:
new_data = self.hf[fold][key][:]
if data is None:
data = new_data
else:
data = np.concatenate((data, new_data))
self._original_test[key] = data
return self._original_test
@property
def original_val(self):
"""
Return a dictionary of all data in the val set
"""
if self._original_val is None:
self._original_val = {}
for key in self.hf[self.val_folds[0]].keys():
data = None
for fold in self.val_folds:
new_data = self.hf[fold][key][:]
if data is None:
data = new_data
else:
data = np.concatenate((data, new_data))
self._original_val[key] = data
return self._original_val
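`original_test` and `original_val` above walk the configured folds and concatenate each dataset column across them. That lookup logic can be exercised on its own with plain dicts standing in for the HDF5 groups (numpy only; the toy data below is made up):

```python
import numpy as np

# Toy stand-in for the HDF5 layout described in the class docstring:
# groups named '<fold_prefix>_<n>', each holding the same dataset columns.
hf = {
    'fold_8': {'x': np.arange(3), 't': np.zeros(3)},
    'fold_9': {'x': np.arange(3, 6), 't': np.ones(3)},
}

def collect(hf, folds):
    """Concatenate every column across the given folds,
    mirroring the loop in original_test / original_val."""
    out = {}
    for key in hf[folds[0]].keys():
        out[key] = np.concatenate([hf[fold][key] for fold in folds])
    return out

# Same name construction as the fold_prefix branch of __init__
test_folds = ['{}_{}'.format('fold', i) for i in (8, 9)]
original_test = collect(hf, test_folds)
```

The reader caches the result in `self._original_test`, so repeated property access does not re-read the file.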
class H5Reader(DataReader):
"""DataReader that use data from an hdf5 file.
Initialize a HDF5 Data Reader, which reads data from a HDF5
file. This file should be split into groups. Each group contain
datasets, each of which is a column in the data.
Example:
        The dataset X contains 1000 samples, with 4 columns:
x, y, z, t. Where x is the main input, y and z are supporting
information (index, descriptions) and t is the target for
prediction. We want to test 30% of this dataset, and have a
cross validation of 100 samples.
Then, the hdf5 containing dataset X should have 10 groups,
each group contains 100 samples. We can name these groups
'fold_1', 'fold_2', 'fold_3', ... , 'fold_9', 'fold_10'.
Each group will then have 4 datasets: x, y, z and t, each of
which has 100 items.
Since x is the main input, then `x_name='x'`, and t is the
target for prediction, then `y_name='t'`. We named the groups
in the form of fold_n, then `fold_prefix='fold'`.
Let's assume the data is stratified, we want to test on the
last 30% of the data, so `test_folds=[8, 9, 10]`.
100 samples is used for cross-validation. Thus, one option for
`train_folds` and `val_folds` is `train_folds=[1,2,3,4,5,6]`
and `val_folds=[7]`. Or in another experiment, you can set
`train_folds=[2,3,4,5,6,7]` and `val_folds=[1]`.
    If the hdf5 file doesn't follow any naming formula for its
    groups, you can set `fold_prefix=None` and put the full group
    names directly into `train_folds`, `val_folds` and `test_folds`.
    Parameters
    ----------
    filename : str
        The hdf5 file name that contains the data.
    batch_size : int, optional
        Number of samples to feed into
        the neural network in each step, by default 32
    preprocessors : list of deoxys.data.Preprocessor, optional
        List of preprocessors to apply on the data, by default None
    x_name : str, optional
        Dataset name to be used as input, by default 'x'
    y_name : str, optional
        Dataset name to be used as target, by default 'y'
    batch_cache : int, optional
        Number of batches to be cached when reading the
        file, by default 10
    train_folds : list of int, or list of str, optional
        List of folds to be used as train data, by default None
    test_folds : list of int, or list of str, optional
        List of folds to be used as test data, by default None
    val_folds : list of int, or list of str, optional
        List of folds to be used as validation data, by default None
    fold_prefix : str, optional
        The prefix of the group name in the HDF5 file, by default 'fold'
    shuffle : bool, optional
        Shuffle data while training, by default False
    augmentations : list of deoxys.data.Preprocessor, optional
        Apply augmentation when generating training data, by default None
    """
def __init__(self, filename, batch_size=32, preprocessors=None,
x_name='x', y_name='y', batch_cache=10,
train_folds=None, test_folds=None, val_folds=None,
fold_prefix='fold', shuffle=False, augmentations=None):
"""
Initialize a HDF5 Data Reader, which reads data from a HDF5
        file. This file should be split into groups. Each group contains
datasets, each of which is a column in the data.
"""
super().__init__()
h5_filename = file_finder(filename)
if h5_filename is None:
# HDF5DataReader is created, but won't be loaded into model
self.ready = False
return
self.hf = h5py.File(h5_filename, 'r')
self.batch_size = batch_size
self.batch_cache = batch_cache
self.shuffle = shuffle
self.preprocessors = preprocessors
self.augmentations = augmentations
self.x_name = x_name
self.y_name = y_name
self.fold_prefix = fold_prefix
train_folds = list(train_folds) if train_folds else [0]
test_folds = list(test_folds) if test_folds else [2]
val_folds = list(val_folds) if val_folds else [1]
if fold_prefix:
self.train_folds = ['{}_{}'.format(
fold_prefix, train_fold) for train_fold in train_folds]
self.test_folds = ['{}_{}'.format(
fold_prefix, test_fold) for test_fold in test_folds]
self.val_folds = ['{}_{}'.format(
fold_prefix, val_fold) for val_fold in val_folds]
else:
self.train_folds = train_folds
self.test_folds = test_folds
self.val_folds = val_folds
self._original_test = None
self._original_val = None
@property
def train_generator(self):
"""
Returns
-------
deoxys.data.DataGenerator
A DataGenerator for generating batches of data for training
"""
return H5DataGenerator(
self.hf, batch_size=self.batch_size, batch_cache=self.batch_cache,
preprocessors=self.preprocessors,
x_name=self.x_name, y_name=self.y_name,
folds=self.train_folds, shuffle=self.shuffle,
augmentations=self.augmentations)
@property
def test_generator(self):
"""
Returns
-------
deoxys.data.DataGenerator
A DataGenerator for generating batches of data for testing
"""
return H5DataGenerator(
self.hf, batch_size=self.batch_size, batch_cache=self.batch_cache,
preprocessors=self.preprocessors,
x_name=self.x_name, y_name=self.y_name,
folds=self.test_folds, shuffle=False)
@property
def val_generator(self):
"""
Returns
-------
deoxys.data.DataGenerator
A DataGenerator for generating batches of data for validation
"""
return H5DataGenerator(
self.hf, batch_size=self.batch_size, batch_cache=self.batch_cache,
preprocessors=self.preprocessors,
x_name=self.x_name, y_name=self.y_name,
folds=self.val_folds, shuffle=False)
@property
def original_test(self):
"""
Return a dictionary of all data in the test set
"""
if self._original_test is None:
self._original_test = {}
for key in self.hf[self.test_folds[0]].keys():
data = None
for fold in self.test_folds:
new_data = self.hf[fold][key][:]
if data is None:
data = new_data
else:
data = np.concatenate((data, new_data))
self._original_test[key] = data
return self._original_test
@property
def original_val(self):
"""
Return a dictionary of all data in the val set
"""
if self._original_val is None:
self._original_val = {}
for key in self.hf[self.val_folds[0]].keys():
data = None
for fold in self.val_folds:
new_data = self.hf[fold][key][:]
if data is None:
data = new_data
else:
data = np.concatenate((data, new_data))
self._original_val[key] = data
return self._original_val
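H5PatchReader (below) slices each sample into patches of `patch_size` with a fractional `overlap`. A minimal 2D sketch of that patching arithmetic (the stride rule `patch_size * (1 - overlap)` is an assumption for illustration; the real logic lives in `H5PatchGenerator`):

```python
import numpy as np

def extract_patches(image, patch_size, overlap):
    """Collect square patches with stride patch_size * (1 - overlap)."""
    step = max(1, int(patch_size * (1 - overlap)))
    h, w = image.shape[:2]
    patches = []
    for i in range(0, h - patch_size + 1, step):
        for j in range(0, w - patch_size + 1, step):
            patches.append(image[i:i + patch_size, j:j + patch_size])
    return patches

image = np.arange(16).reshape(4, 4)
patches = extract_patches(image, patch_size=2, overlap=0.5)
# step = 1, so the 4x4 image yields 3 x 3 = 9 overlapping 2x2 patches
```

Options such as `drop_fraction` and `check_drop_channel` would then filter out patches that are mostly background before batching.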
class H5PatchReader(DataReader):
    """DataReader that reads data from an hdf5 file and generates
    overlapping patches from each sample, configured by `patch_size`
    and `overlap`. See `HDF5Reader` for the expected file layout.
    """
def __init__(self, filename, batch_size=32, preprocessors=None,
x_name='x', y_name='y', batch_cache=10,
train_folds=None, test_folds=None, val_folds=None,
fold_prefix='fold',
patch_size=128, overlap=0.5, shuffle=False,
augmentations=False, preprocess_first=True,
drop_fraction=0.1, check_drop_channel=None,
bounding_box=False):
super().__init__()
h5_filename = file_finder(filename)
if h5_filename is None:
# HDF5DataReader is created, but won't be loaded into model
self.ready = False
return
self.hf = h5_filename
self.batch_size = batch_size
self.batch_cache = batch_cache
self.shuffle = shuffle
self.patch_size = patch_size
self.overlap = overlap
self.preprocess_first = preprocess_first
self.drop_fraction = drop_fraction
self.check_drop_channel = check_drop_channel
self.bounding_box = bounding_box
self.preprocessors = preprocessors
self.augmentations = augmentations
if preprocessors:
if '__iter__' not in dir(preprocessors):
self.preprocessors = [preprocessors]
if augmentations:
if '__iter__' not in dir(augmentations):
self.augmentations = [augmentations]
self.x_name = x_name
self.y_name = y_name
self.fold_prefix = fold_prefix
train_folds = list(train_folds) if train_folds else [0]
test_folds = list(test_folds) if test_folds else [2]
val_folds = list(val_folds) if val_folds else [1]
if fold_prefix:
self.train_folds = ['{}_{}'.format(
fold_prefix, train_fold) for train_fold in train_folds]
self.test_folds = ['{}_{}'.format(
fold_prefix, test_fold) for test_fold in test_folds]
self.val_folds = ['{}_{}'.format(
fold_prefix, val_fold) for val_fold in val_folds]
else:
self.train_folds = train_folds
self.test_folds = test_folds
self.val_folds = val_folds
self._original_test = None
self._original_val = None
@property
def train_generator(self):
"""
Returns
-------
deoxys.data.DataGenerator
A DataGenerator for generating batches of data for training
"""
return H5PatchGenerator(
self.hf, batch_size=self.batch_size, batch_cache=self.batch_cache,
preprocessors=self.preprocessors,
x_name=self.x_name, y_name=self.y_name,
folds=self.train_folds,
patch_size=self.patch_size, overlap=self.overlap,
shuffle=self.shuffle,
augmentations=self.augmentations,
preprocess_first=self.preprocess_first,
drop_fraction=self.drop_fraction,
check_drop_channel=self.check_drop_channel,
bounding_box=self.bounding_box)
@property
def test_generator(self):
"""
Returns
-------
deoxys.data.DataGenerator
A DataGenerator for generating batches of data for testing
"""
return H5PatchGenerator(
self.hf, batch_size=self.batch_size, batch_cache=self.batch_cache,
preprocessors=self.preprocessors,
x_name=self.x_name, y_name=self.y_name,
folds=self.test_folds,
patch_size=self.patch_size, overlap=self.overlap,
shuffle=False, preprocess_first=self.preprocess_first,
drop_fraction=0)
@property
def val_generator(self):
"""
Returns
-------
deoxys.data.DataGenerator
A DataGenerator for generating batches of data for validation
"""
return H5PatchGenerator(
self.hf, batch_size=self.batch_size, batch_cache=self.batch_cache,
preprocessors=self.preprocessors,
x_name=self.x_name, y_name=self.y_name,
folds=self.val_folds,
patch_size=self.patch_size, overlap=self.overlap,
shuffle=False, preprocess_first=self.preprocess_first,
drop_fraction=0)
@property
def original_test(self):
"""
Return a dictionary of all data in the test set
"""
if self._original_test is None:
self._original_test = {}
for key in self.hf[self.test_folds[0]].keys():
data = None
for fold in self.test_folds:
new_data = self.hf[fold][key][:]
if data is None:
data = new_data
else:
data = np.concatenate((data, new_data))
self._original_test[key] = data
return self._original_test
@property
def original_val(self):
"""
Return a dictionary of all data in the val set
"""
if self._original_val is None:
self._original_val = {}
for key in self.hf[self.val_folds[0]].keys():
data = None
for fold in self.val_folds:
new_data = self.hf[fold][key][:]
if data is None:
data = new_data
else:
data = np.concatenate((data, new_data))
self._original_val[key] = data
return self._original_val
class DataReaders(metaclass=Singleton):
"""
A singleton that contains all the registered customized DataReaders
"""
def __init__(self):
self._dataReaders = {
'HDF5Reader': HDF5Reader,
'H5Reader': H5Reader,
'H5PatchReader': H5PatchReader
}
def register(self, key, dr):
if not issubclass(dr, DataReader):
raise ValueError(
"The customized data reader has to be a subclass"
+ " of deoxys.data.DataReader"
)
if key in self._dataReaders:
raise KeyError(
"Duplicated key, please use another key for this data reader"
)
else:
self._dataReaders[key] = dr
def unregister(self, key):
if key in self._dataReaders:
del self._dataReaders[key]
@property
def data_readers(self):
return self._dataReaders
def register_datareader(key, dr):
"""Register the customized data reader.
If the key name is already registered, it will raise a KeyError exception.
Parameters
----------
key : str
The unique key-name of the data reader
dr : deoxys.data.DataReader
The customized data reader class
"""
DataReaders().register(key, dr)
def unregister_datareader(key):
"""
Remove the registered data reader with the key-name
Parameters
----------
key : str
The key-name of the data reader to be removed
"""
DataReaders().unregister(key)
def _deserialize(config, custom_objects={}):
return custom_objects[config['class_name']](**config['config'])
def datareader_from_config(config):
if 'class_name' not in config:
raise ValueError('class_name is needed to define data reader')
if 'config' not in config:
# auto add empty config for data reader with only class_name
config['config'] = {}
return _deserialize(config, custom_objects=DataReaders().data_readers)
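`datareader_from_config` resolves `class_name` through the `DataReaders` singleton and instantiates the class with the `config` dict as keyword arguments. The same lookup can be exercised end to end with a toy registry and reader (both hypothetical, mirroring the code above):

```python
class Registry:
    """Minimal version of the DataReaders lookup used by
    datareader_from_config: class_name -> class, instantiated
    with the 'config' dict as keyword arguments."""
    def __init__(self):
        self._readers = {}

    def register(self, key, cls):
        if key in self._readers:
            raise KeyError('Duplicated key, please use another key')
        self._readers[key] = cls

    def from_config(self, config):
        if 'class_name' not in config:
            raise ValueError('class_name is needed to define data reader')
        config.setdefault('config', {})  # allow class_name-only configs
        return self._readers[config['class_name']](**config['config'])

class ToyReader:
    """Hypothetical reader standing in for a DataReader subclass."""
    def __init__(self, batch_size=32):
        self.batch_size = batch_size

registry = Registry()
registry.register('ToyReader', ToyReader)
reader = registry.from_config({'class_name': 'ToyReader',
                               'config': {'batch_size': 8}})
```

As in the module above, omitting `'config'` falls back to an empty dict, so a reader can be declared by class name alone.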
# ---- File: venv/Lib/site-packages/OpenGL/raw/GL/VERSION/GL_1_0.py
# ---- Repo: Timicxx/pyGL (MIT)
'''Autogenerated by xml_generate script, do not edit!'''
from OpenGL import platform as _p, arrays
# Code generation uses this
from OpenGL.raw.GL import _types as _cs
# End users want this...
from OpenGL.raw.GL._types import *
from OpenGL.raw.GL import _errors
from OpenGL.constant import Constant as _C
import ctypes
_EXTENSION_NAME = 'GL_VERSION_GL_1_0'
def _f( function ):
return _p.createFunction( function,_p.PLATFORM.GL,'GL_VERSION_GL_1_0',error_checker=_errors._error_checker)
@_f
@_p.types(None,_cs.GLenum,_cs.GLfloat)
def glAccum(op,value):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLfloat)
def glAlphaFunc(func,ref):pass
@_f
@_p.types(None,_cs.GLenum)
def glBegin(mode):pass
@_f
@_p.types(None,_cs.GLsizei,_cs.GLsizei,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat,arrays.GLubyteArray)
def glBitmap(width,height,xorig,yorig,xmove,ymove,bitmap):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum)
def glBlendFunc(sfactor,dfactor):pass
@_f
@_p.types(None,_cs.GLuint)
def glCallList(list):pass
@_f
@_p.types(None,_cs.GLsizei,_cs.GLenum,ctypes.c_void_p)
def glCallLists(n,type,lists):pass
@_f
@_p.types(None,_cs.GLbitfield)
def glClear(mask):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat)
def glClearAccum(red,green,blue,alpha):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat)
def glClearColor(red,green,blue,alpha):pass
@_f
@_p.types(None,_cs.GLdouble)
def glClearDepth(depth):pass
@_f
@_p.types(None,_cs.GLfloat)
def glClearIndex(c):pass
@_f
@_p.types(None,_cs.GLint)
def glClearStencil(s):pass
@_f
@_p.types(None,_cs.GLenum,arrays.GLdoubleArray)
def glClipPlane(plane,equation):pass
@_f
@_p.types(None,_cs.GLbyte,_cs.GLbyte,_cs.GLbyte)
def glColor3b(red,green,blue):pass
@_f
@_p.types(None,arrays.GLbyteArray)
def glColor3bv(v):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble)
def glColor3d(red,green,blue):pass
@_f
@_p.types(None,arrays.GLdoubleArray)
def glColor3dv(v):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat)
def glColor3f(red,green,blue):pass
@_f
@_p.types(None,arrays.GLfloatArray)
def glColor3fv(v):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLint,_cs.GLint)
def glColor3i(red,green,blue):pass
@_f
@_p.types(None,arrays.GLintArray)
def glColor3iv(v):pass
@_f
@_p.types(None,_cs.GLshort,_cs.GLshort,_cs.GLshort)
def glColor3s(red,green,blue):pass
@_f
@_p.types(None,arrays.GLshortArray)
def glColor3sv(v):pass
@_f
@_p.types(None,_cs.GLubyte,_cs.GLubyte,_cs.GLubyte)
def glColor3ub(red,green,blue):pass
@_f
@_p.types(None,arrays.GLubyteArray)
def glColor3ubv(v):pass
@_f
@_p.types(None,_cs.GLuint,_cs.GLuint,_cs.GLuint)
def glColor3ui(red,green,blue):pass
@_f
@_p.types(None,arrays.GLuintArray)
def glColor3uiv(v):pass
@_f
@_p.types(None,_cs.GLushort,_cs.GLushort,_cs.GLushort)
def glColor3us(red,green,blue):pass
@_f
@_p.types(None,arrays.GLushortArray)
def glColor3usv(v):pass
@_f
@_p.types(None,_cs.GLbyte,_cs.GLbyte,_cs.GLbyte,_cs.GLbyte)
def glColor4b(red,green,blue,alpha):pass
@_f
@_p.types(None,arrays.GLbyteArray)
def glColor4bv(v):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble)
def glColor4d(red,green,blue,alpha):pass
@_f
@_p.types(None,arrays.GLdoubleArray)
def glColor4dv(v):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat)
def glColor4f(red,green,blue,alpha):pass
@_f
@_p.types(None,arrays.GLfloatArray)
def glColor4fv(v):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLint,_cs.GLint,_cs.GLint)
def glColor4i(red,green,blue,alpha):pass
@_f
@_p.types(None,arrays.GLintArray)
def glColor4iv(v):pass
@_f
@_p.types(None,_cs.GLshort,_cs.GLshort,_cs.GLshort,_cs.GLshort)
def glColor4s(red,green,blue,alpha):pass
@_f
@_p.types(None,arrays.GLshortArray)
def glColor4sv(v):pass
@_f
@_p.types(None,_cs.GLubyte,_cs.GLubyte,_cs.GLubyte,_cs.GLubyte)
def glColor4ub(red,green,blue,alpha):pass
@_f
@_p.types(None,arrays.GLubyteArray)
def glColor4ubv(v):pass
@_f
@_p.types(None,_cs.GLuint,_cs.GLuint,_cs.GLuint,_cs.GLuint)
def glColor4ui(red,green,blue,alpha):pass
@_f
@_p.types(None,arrays.GLuintArray)
def glColor4uiv(v):pass
@_f
@_p.types(None,_cs.GLushort,_cs.GLushort,_cs.GLushort,_cs.GLushort)
def glColor4us(red,green,blue,alpha):pass
@_f
@_p.types(None,arrays.GLushortArray)
def glColor4usv(v):pass
@_f
@_p.types(None,_cs.GLboolean,_cs.GLboolean,_cs.GLboolean,_cs.GLboolean)
def glColorMask(red,green,blue,alpha):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum)
def glColorMaterial(face,mode):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLint,_cs.GLsizei,_cs.GLsizei,_cs.GLenum)
def glCopyPixels(x,y,width,height,type):pass
@_f
@_p.types(None,_cs.GLenum)
def glCullFace(mode):pass
@_f
@_p.types(None,_cs.GLuint,_cs.GLsizei)
def glDeleteLists(list,range):pass
@_f
@_p.types(None,_cs.GLenum)
def glDepthFunc(func):pass
@_f
@_p.types(None,_cs.GLboolean)
def glDepthMask(flag):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble)
def glDepthRange(near,far):pass
@_f
@_p.types(None,_cs.GLenum)
def glDisable(cap):pass
@_f
@_p.types(None,_cs.GLenum)
def glDrawBuffer(buf):pass
@_f
@_p.types(None,_cs.GLsizei,_cs.GLsizei,_cs.GLenum,_cs.GLenum,ctypes.c_void_p)
def glDrawPixels(width,height,format,type,pixels):pass
@_f
@_p.types(None,_cs.GLboolean)
def glEdgeFlag(flag):pass
@_f
@_p.types(None,arrays.GLbooleanArray)
def glEdgeFlagv(flag):pass
@_f
@_p.types(None,_cs.GLenum)
def glEnable(cap):pass
@_f
@_p.types(None,)
def glEnd():pass
@_f
@_p.types(None,)
def glEndList():pass
@_f
@_p.types(None,_cs.GLdouble)
def glEvalCoord1d(u):pass
@_f
@_p.types(None,arrays.GLdoubleArray)
def glEvalCoord1dv(u):pass
@_f
@_p.types(None,_cs.GLfloat)
def glEvalCoord1f(u):pass
@_f
@_p.types(None,arrays.GLfloatArray)
def glEvalCoord1fv(u):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble)
def glEvalCoord2d(u,v):pass
@_f
@_p.types(None,arrays.GLdoubleArray)
def glEvalCoord2dv(u):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat)
def glEvalCoord2f(u,v):pass
@_f
@_p.types(None,arrays.GLfloatArray)
def glEvalCoord2fv(u):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLint,_cs.GLint)
def glEvalMesh1(mode,i1,i2):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLint,_cs.GLint,_cs.GLint,_cs.GLint)
def glEvalMesh2(mode,i1,i2,j1,j2):pass
@_f
@_p.types(None,_cs.GLint)
def glEvalPoint1(i):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLint)
def glEvalPoint2(i,j):pass
@_f
@_p.types(None,_cs.GLsizei,_cs.GLenum,arrays.GLfloatArray)
def glFeedbackBuffer(size,type,buffer):pass
@_f
@_p.types(None,)
def glFinish():pass
@_f
@_p.types(None,)
def glFlush():pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLfloat)
def glFogf(pname,param):pass
@_f
@_p.types(None,_cs.GLenum,arrays.GLfloatArray)
def glFogfv(pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLint)
def glFogi(pname,param):pass
@_f
@_p.types(None,_cs.GLenum,arrays.GLintArray)
def glFogiv(pname,params):pass
@_f
@_p.types(None,_cs.GLenum)
def glFrontFace(mode):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble)
def glFrustum(left,right,bottom,top,zNear,zFar):pass
@_f
@_p.types(_cs.GLuint,_cs.GLsizei)
def glGenLists(range):pass
@_f
@_p.types(None,_cs.GLenum,arrays.GLbooleanArray)
def glGetBooleanv(pname,data):pass
@_f
@_p.types(None,_cs.GLenum,arrays.GLdoubleArray)
def glGetClipPlane(plane,equation):pass
@_f
@_p.types(None,_cs.GLenum,arrays.GLdoubleArray)
def glGetDoublev(pname,data):pass
@_f
@_p.types(_cs.GLenum,)
def glGetError():pass
@_f
@_p.types(None,_cs.GLenum,arrays.GLfloatArray)
def glGetFloatv(pname,data):pass
@_f
@_p.types(None,_cs.GLenum,arrays.GLintArray)
def glGetIntegerv(pname,data):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLfloatArray)
def glGetLightfv(light,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLintArray)
def glGetLightiv(light,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLdoubleArray)
def glGetMapdv(target,query,v):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLfloatArray)
def glGetMapfv(target,query,v):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLintArray)
def glGetMapiv(target,query,v):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLfloatArray)
def glGetMaterialfv(face,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLintArray)
def glGetMaterialiv(face,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,arrays.GLfloatArray)
def glGetPixelMapfv(map,values):pass
@_f
@_p.types(None,_cs.GLenum,arrays.GLuintArray)
def glGetPixelMapuiv(map,values):pass
@_f
@_p.types(None,_cs.GLenum,arrays.GLushortArray)
def glGetPixelMapusv(map,values):pass
@_f
@_p.types(None,arrays.GLubyteArray)
def glGetPolygonStipple(mask):pass
@_f
@_p.types(arrays.GLubyteArray,_cs.GLenum)
def glGetString(name):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLfloatArray)
def glGetTexEnvfv(target,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLintArray)
def glGetTexEnviv(target,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLdoubleArray)
def glGetTexGendv(coord,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLfloatArray)
def glGetTexGenfv(coord,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLintArray)
def glGetTexGeniv(coord,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLint,_cs.GLenum,_cs.GLenum,ctypes.c_void_p)
def glGetTexImage(target,level,format,type,pixels):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLint,_cs.GLenum,arrays.GLfloatArray)
def glGetTexLevelParameterfv(target,level,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLint,_cs.GLenum,arrays.GLintArray)
def glGetTexLevelParameteriv(target,level,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLfloatArray)
def glGetTexParameterfv(target,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLintArray)
def glGetTexParameteriv(target,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum)
def glHint(target,mode):pass
@_f
@_p.types(None,_cs.GLuint)
def glIndexMask(mask):pass
@_f
@_p.types(None,_cs.GLdouble)
def glIndexd(c):pass
@_f
@_p.types(None,arrays.GLdoubleArray)
def glIndexdv(c):pass
@_f
@_p.types(None,_cs.GLfloat)
def glIndexf(c):pass
@_f
@_p.types(None,arrays.GLfloatArray)
def glIndexfv(c):pass
@_f
@_p.types(None,_cs.GLint)
def glIndexi(c):pass
@_f
@_p.types(None,arrays.GLintArray)
def glIndexiv(c):pass
@_f
@_p.types(None,_cs.GLshort)
def glIndexs(c):pass
@_f
@_p.types(None,arrays.GLshortArray)
def glIndexsv(c):pass
@_f
@_p.types(None,)
def glInitNames():pass
@_f
@_p.types(_cs.GLboolean,_cs.GLenum)
def glIsEnabled(cap):pass
@_f
@_p.types(_cs.GLboolean,_cs.GLuint)
def glIsList(list):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLfloat)
def glLightModelf(pname,param):pass
@_f
@_p.types(None,_cs.GLenum,arrays.GLfloatArray)
def glLightModelfv(pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLint)
def glLightModeli(pname,param):pass
@_f
@_p.types(None,_cs.GLenum,arrays.GLintArray)
def glLightModeliv(pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,_cs.GLfloat)
def glLightf(light,pname,param):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLfloatArray)
def glLightfv(light,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,_cs.GLint)
def glLighti(light,pname,param):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLintArray)
def glLightiv(light,pname,params):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLushort)
def glLineStipple(factor,pattern):pass
@_f
@_p.types(None,_cs.GLfloat)
def glLineWidth(width):pass
@_f
@_p.types(None,_cs.GLuint)
def glListBase(base):pass
@_f
@_p.types(None,)
def glLoadIdentity():pass
@_f
@_p.types(None,arrays.GLdoubleArray)
def glLoadMatrixd(m):pass
@_f
@_p.types(None,arrays.GLfloatArray)
def glLoadMatrixf(m):pass
@_f
@_p.types(None,_cs.GLuint)
def glLoadName(name):pass
@_f
@_p.types(None,_cs.GLenum)
def glLogicOp(opcode):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLdouble,_cs.GLdouble,_cs.GLint,_cs.GLint,arrays.GLdoubleArray)
def glMap1d(target,u1,u2,stride,order,points):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLfloat,_cs.GLfloat,_cs.GLint,_cs.GLint,arrays.GLfloatArray)
def glMap1f(target,u1,u2,stride,order,points):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLdouble,_cs.GLdouble,_cs.GLint,_cs.GLint,_cs.GLdouble,_cs.GLdouble,_cs.GLint,_cs.GLint,arrays.GLdoubleArray)
def glMap2d(target,u1,u2,ustride,uorder,v1,v2,vstride,vorder,points):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLfloat,_cs.GLfloat,_cs.GLint,_cs.GLint,_cs.GLfloat,_cs.GLfloat,_cs.GLint,_cs.GLint,arrays.GLfloatArray)
def glMap2f(target,u1,u2,ustride,uorder,v1,v2,vstride,vorder,points):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLdouble,_cs.GLdouble)
def glMapGrid1d(un,u1,u2):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLfloat,_cs.GLfloat)
def glMapGrid1f(un,u1,u2):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLdouble,_cs.GLdouble,_cs.GLint,_cs.GLdouble,_cs.GLdouble)
def glMapGrid2d(un,u1,u2,vn,v1,v2):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLfloat,_cs.GLfloat,_cs.GLint,_cs.GLfloat,_cs.GLfloat)
def glMapGrid2f(un,u1,u2,vn,v1,v2):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,_cs.GLfloat)
def glMaterialf(face,pname,param):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLfloatArray)
def glMaterialfv(face,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,_cs.GLint)
def glMateriali(face,pname,param):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLintArray)
def glMaterialiv(face,pname,params):pass
@_f
@_p.types(None,_cs.GLenum)
def glMatrixMode(mode):pass
@_f
@_p.types(None,arrays.GLdoubleArray)
def glMultMatrixd(m):pass
@_f
@_p.types(None,arrays.GLfloatArray)
def glMultMatrixf(m):pass
@_f
@_p.types(None,_cs.GLuint,_cs.GLenum)
def glNewList(list,mode):pass
@_f
@_p.types(None,_cs.GLbyte,_cs.GLbyte,_cs.GLbyte)
def glNormal3b(nx,ny,nz):pass
@_f
@_p.types(None,arrays.GLbyteArray)
def glNormal3bv(v):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble)
def glNormal3d(nx,ny,nz):pass
@_f
@_p.types(None,arrays.GLdoubleArray)
def glNormal3dv(v):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat)
def glNormal3f(nx,ny,nz):pass
@_f
@_p.types(None,arrays.GLfloatArray)
def glNormal3fv(v):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLint,_cs.GLint)
def glNormal3i(nx,ny,nz):pass
@_f
@_p.types(None,arrays.GLintArray)
def glNormal3iv(v):pass
@_f
@_p.types(None,_cs.GLshort,_cs.GLshort,_cs.GLshort)
def glNormal3s(nx,ny,nz):pass
@_f
@_p.types(None,arrays.GLshortArray)
def glNormal3sv(v):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble)
def glOrtho(left,right,bottom,top,zNear,zFar):pass
@_f
@_p.types(None,_cs.GLfloat)
def glPassThrough(token):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLsizei,arrays.GLfloatArray)
def glPixelMapfv(map,mapsize,values):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLsizei,arrays.GLuintArray)
def glPixelMapuiv(map,mapsize,values):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLsizei,arrays.GLushortArray)
def glPixelMapusv(map,mapsize,values):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLfloat)
def glPixelStoref(pname,param):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLint)
def glPixelStorei(pname,param):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLfloat)
def glPixelTransferf(pname,param):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLint)
def glPixelTransferi(pname,param):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat)
def glPixelZoom(xfactor,yfactor):pass
@_f
@_p.types(None,_cs.GLfloat)
def glPointSize(size):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum)
def glPolygonMode(face,mode):pass
@_f
@_p.types(None,arrays.GLubyteArray)
def glPolygonStipple(mask):pass
@_f
@_p.types(None,)
def glPopAttrib():pass
@_f
@_p.types(None,)
def glPopMatrix():pass
@_f
@_p.types(None,)
def glPopName():pass
@_f
@_p.types(None,_cs.GLbitfield)
def glPushAttrib(mask):pass
@_f
@_p.types(None,)
def glPushMatrix():pass
@_f
@_p.types(None,_cs.GLuint)
def glPushName(name):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble)
def glRasterPos2d(x,y):pass
@_f
@_p.types(None,arrays.GLdoubleArray)
def glRasterPos2dv(v):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat)
def glRasterPos2f(x,y):pass
@_f
@_p.types(None,arrays.GLfloatArray)
def glRasterPos2fv(v):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLint)
def glRasterPos2i(x,y):pass
@_f
@_p.types(None,arrays.GLintArray)
def glRasterPos2iv(v):pass
@_f
@_p.types(None,_cs.GLshort,_cs.GLshort)
def glRasterPos2s(x,y):pass
@_f
@_p.types(None,arrays.GLshortArray)
def glRasterPos2sv(v):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble)
def glRasterPos3d(x,y,z):pass
@_f
@_p.types(None,arrays.GLdoubleArray)
def glRasterPos3dv(v):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat)
def glRasterPos3f(x,y,z):pass
@_f
@_p.types(None,arrays.GLfloatArray)
def glRasterPos3fv(v):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLint,_cs.GLint)
def glRasterPos3i(x,y,z):pass
@_f
@_p.types(None,arrays.GLintArray)
def glRasterPos3iv(v):pass
@_f
@_p.types(None,_cs.GLshort,_cs.GLshort,_cs.GLshort)
def glRasterPos3s(x,y,z):pass
@_f
@_p.types(None,arrays.GLshortArray)
def glRasterPos3sv(v):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble)
def glRasterPos4d(x,y,z,w):pass
@_f
@_p.types(None,arrays.GLdoubleArray)
def glRasterPos4dv(v):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat)
def glRasterPos4f(x,y,z,w):pass
@_f
@_p.types(None,arrays.GLfloatArray)
def glRasterPos4fv(v):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLint,_cs.GLint,_cs.GLint)
def glRasterPos4i(x,y,z,w):pass
@_f
@_p.types(None,arrays.GLintArray)
def glRasterPos4iv(v):pass
@_f
@_p.types(None,_cs.GLshort,_cs.GLshort,_cs.GLshort,_cs.GLshort)
def glRasterPos4s(x,y,z,w):pass
@_f
@_p.types(None,arrays.GLshortArray)
def glRasterPos4sv(v):pass
@_f
@_p.types(None,_cs.GLenum)
def glReadBuffer(src):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLint,_cs.GLsizei,_cs.GLsizei,_cs.GLenum,_cs.GLenum,ctypes.c_void_p)
def glReadPixels(x,y,width,height,format,type,pixels):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble)
def glRectd(x1,y1,x2,y2):pass
@_f
@_p.types(None,arrays.GLdoubleArray,arrays.GLdoubleArray)
def glRectdv(v1,v2):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat)
def glRectf(x1,y1,x2,y2):pass
@_f
@_p.types(None,arrays.GLfloatArray,arrays.GLfloatArray)
def glRectfv(v1,v2):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLint,_cs.GLint,_cs.GLint)
def glRecti(x1,y1,x2,y2):pass
@_f
@_p.types(None,arrays.GLintArray,arrays.GLintArray)
def glRectiv(v1,v2):pass
@_f
@_p.types(None,_cs.GLshort,_cs.GLshort,_cs.GLshort,_cs.GLshort)
def glRects(x1,y1,x2,y2):pass
@_f
@_p.types(None,arrays.GLshortArray,arrays.GLshortArray)
def glRectsv(v1,v2):pass
@_f
@_p.types(_cs.GLint,_cs.GLenum)
def glRenderMode(mode):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble)
def glRotated(angle,x,y,z):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat)
def glRotatef(angle,x,y,z):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble)
def glScaled(x,y,z):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat)
def glScalef(x,y,z):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLint,_cs.GLsizei,_cs.GLsizei)
def glScissor(x,y,width,height):pass
@_f
@_p.types(None,_cs.GLsizei,arrays.GLuintArray)
def glSelectBuffer(size,buffer):pass
@_f
@_p.types(None,_cs.GLenum)
def glShadeModel(mode):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLint,_cs.GLuint)
def glStencilFunc(func,ref,mask):pass
@_f
@_p.types(None,_cs.GLuint)
def glStencilMask(mask):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,_cs.GLenum)
def glStencilOp(fail,zfail,zpass):pass
@_f
@_p.types(None,_cs.GLdouble)
def glTexCoord1d(s):pass
@_f
@_p.types(None,arrays.GLdoubleArray)
def glTexCoord1dv(v):pass
@_f
@_p.types(None,_cs.GLfloat)
def glTexCoord1f(s):pass
@_f
@_p.types(None,arrays.GLfloatArray)
def glTexCoord1fv(v):pass
@_f
@_p.types(None,_cs.GLint)
def glTexCoord1i(s):pass
@_f
@_p.types(None,arrays.GLintArray)
def glTexCoord1iv(v):pass
@_f
@_p.types(None,_cs.GLshort)
def glTexCoord1s(s):pass
@_f
@_p.types(None,arrays.GLshortArray)
def glTexCoord1sv(v):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble)
def glTexCoord2d(s,t):pass
@_f
@_p.types(None,arrays.GLdoubleArray)
def glTexCoord2dv(v):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat)
def glTexCoord2f(s,t):pass
@_f
@_p.types(None,arrays.GLfloatArray)
def glTexCoord2fv(v):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLint)
def glTexCoord2i(s,t):pass
@_f
@_p.types(None,arrays.GLintArray)
def glTexCoord2iv(v):pass
@_f
@_p.types(None,_cs.GLshort,_cs.GLshort)
def glTexCoord2s(s,t):pass
@_f
@_p.types(None,arrays.GLshortArray)
def glTexCoord2sv(v):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble)
def glTexCoord3d(s,t,r):pass
@_f
@_p.types(None,arrays.GLdoubleArray)
def glTexCoord3dv(v):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat)
def glTexCoord3f(s,t,r):pass
@_f
@_p.types(None,arrays.GLfloatArray)
def glTexCoord3fv(v):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLint,_cs.GLint)
def glTexCoord3i(s,t,r):pass
@_f
@_p.types(None,arrays.GLintArray)
def glTexCoord3iv(v):pass
@_f
@_p.types(None,_cs.GLshort,_cs.GLshort,_cs.GLshort)
def glTexCoord3s(s,t,r):pass
@_f
@_p.types(None,arrays.GLshortArray)
def glTexCoord3sv(v):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble)
def glTexCoord4d(s,t,r,q):pass
@_f
@_p.types(None,arrays.GLdoubleArray)
def glTexCoord4dv(v):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat)
def glTexCoord4f(s,t,r,q):pass
@_f
@_p.types(None,arrays.GLfloatArray)
def glTexCoord4fv(v):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLint,_cs.GLint,_cs.GLint)
def glTexCoord4i(s,t,r,q):pass
@_f
@_p.types(None,arrays.GLintArray)
def glTexCoord4iv(v):pass
@_f
@_p.types(None,_cs.GLshort,_cs.GLshort,_cs.GLshort,_cs.GLshort)
def glTexCoord4s(s,t,r,q):pass
@_f
@_p.types(None,arrays.GLshortArray)
def glTexCoord4sv(v):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,_cs.GLfloat)
def glTexEnvf(target,pname,param):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLfloatArray)
def glTexEnvfv(target,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,_cs.GLint)
def glTexEnvi(target,pname,param):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLintArray)
def glTexEnviv(target,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,_cs.GLdouble)
def glTexGend(coord,pname,param):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLdoubleArray)
def glTexGendv(coord,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,_cs.GLfloat)
def glTexGenf(coord,pname,param):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLfloatArray)
def glTexGenfv(coord,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,_cs.GLint)
def glTexGeni(coord,pname,param):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLintArray)
def glTexGeniv(coord,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLint,_cs.GLint,_cs.GLsizei,_cs.GLint,_cs.GLenum,_cs.GLenum,ctypes.c_void_p)
def glTexImage1D(target,level,internalformat,width,border,format,type,pixels):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLint,_cs.GLint,_cs.GLsizei,_cs.GLsizei,_cs.GLint,_cs.GLenum,_cs.GLenum,ctypes.c_void_p)
def glTexImage2D(target,level,internalformat,width,height,border,format,type,pixels):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,_cs.GLfloat)
def glTexParameterf(target,pname,param):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLfloatArray)
def glTexParameterfv(target,pname,params):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,_cs.GLint)
def glTexParameteri(target,pname,param):pass
@_f
@_p.types(None,_cs.GLenum,_cs.GLenum,arrays.GLintArray)
def glTexParameteriv(target,pname,params):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble)
def glTranslated(x,y,z):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat)
def glTranslatef(x,y,z):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble)
def glVertex2d(x,y):pass
@_f
@_p.types(None,arrays.GLdoubleArray)
def glVertex2dv(v):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat)
def glVertex2f(x,y):pass
@_f
@_p.types(None,arrays.GLfloatArray)
def glVertex2fv(v):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLint)
def glVertex2i(x,y):pass
@_f
@_p.types(None,arrays.GLintArray)
def glVertex2iv(v):pass
@_f
@_p.types(None,_cs.GLshort,_cs.GLshort)
def glVertex2s(x,y):pass
@_f
@_p.types(None,arrays.GLshortArray)
def glVertex2sv(v):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble)
def glVertex3d(x,y,z):pass
@_f
@_p.types(None,arrays.GLdoubleArray)
def glVertex3dv(v):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat)
def glVertex3f(x,y,z):pass
@_f
@_p.types(None,arrays.GLfloatArray)
def glVertex3fv(v):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLint,_cs.GLint)
def glVertex3i(x,y,z):pass
@_f
@_p.types(None,arrays.GLintArray)
def glVertex3iv(v):pass
@_f
@_p.types(None,_cs.GLshort,_cs.GLshort,_cs.GLshort)
def glVertex3s(x,y,z):pass
@_f
@_p.types(None,arrays.GLshortArray)
def glVertex3sv(v):pass
@_f
@_p.types(None,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble,_cs.GLdouble)
def glVertex4d(x,y,z,w):pass
@_f
@_p.types(None,arrays.GLdoubleArray)
def glVertex4dv(v):pass
@_f
@_p.types(None,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat,_cs.GLfloat)
def glVertex4f(x,y,z,w):pass
@_f
@_p.types(None,arrays.GLfloatArray)
def glVertex4fv(v):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLint,_cs.GLint,_cs.GLint)
def glVertex4i(x,y,z,w):pass
@_f
@_p.types(None,arrays.GLintArray)
def glVertex4iv(v):pass
@_f
@_p.types(None,_cs.GLshort,_cs.GLshort,_cs.GLshort,_cs.GLshort)
def glVertex4s(x,y,z,w):pass
@_f
@_p.types(None,arrays.GLshortArray)
def glVertex4sv(v):pass
@_f
@_p.types(None,_cs.GLint,_cs.GLint,_cs.GLsizei,_cs.GLsizei)
def glViewport(x,y,width,height):pass
| 27.348339 | 139 | 0.782176 | 4,300 | 25,516 | 4.362326 | 0.111163 | 0.032626 | 0.114191 | 0.178857 | 0.767299 | 0.753012 | 0.739311 | 0.71804 | 0.599371 | 0.556136 | 0 | 0.008337 | 0.0504 | 25,516 | 932 | 140 | 27.377682 | 0.765827 | 0.003919 | 0 | 0.603021 | 1 | 0 | 0.001338 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.331176 | false | 0.330097 | 0.006472 | 0.001079 | 0.338727 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
444b284d759e7b5dc38ffc029b785e26ad25c427 | 1,776 | py | Python | main.py | weslanra/marioai | 356a26b1575505cdfca7fe1ce5de43d0d07de45f | [
"MIT"
] | null | null | null | main.py | weslanra/marioai | 356a26b1575505cdfca7fe1ce5de43d0d07de45f | [
"MIT"
] | null | null | null | main.py | weslanra/marioai | 356a26b1575505cdfca7fe1ce5de43d0d07de45f | [
"MIT"
] | null | null | null | import marioai
import agents
import random
def main():
agent = agents.DecisionTreeAgent()
task = marioai.Task()
exp = marioai.Experiment(task, agent)
exp.max_fps = 20
task.env.level_type = 0
task.env.level_difficulty = 1
task.env.init_mario_mode = 2
task.env.time_limit = 100
random.seed(20)
    # run 10 episodes, each on a randomly seeded level
    # (random.seed(20) above makes the level sequence reproducible)
    for _ in range(10):
        task.env.level_seed = random.randint(0, 500)
        print "Level: " + str(task.env.level_seed)
        exp.doEpisodes(1)
if __name__ == '__main__':
main()
| 25.014085 | 48 | 0.640766 | 261 | 1,776 | 4.229885 | 0.168582 | 0.152174 | 0.23913 | 0.289855 | 0.769928 | 0.769928 | 0.769928 | 0.769928 | 0.769928 | 0.769928 | 0 | 0.051524 | 0.224099 | 1,776 | 70 | 49 | 25.371429 | 0.749637 | 0.085023 | 0 | 0.666667 | 0 | 0 | 0.048297 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0.066667 | null | null | 0.222222 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
922b5313f1852e20b8cb3353838f0e540abf1252 | 9,279 | py | Python | tests/unit/test_imageselection.py | shawnmjones/MementoEmbed | 4d1b2eafc934502ff8a9e3ad3efeec8c0ddc8602 | [
"MIT"
] | 11 | 2018-06-27T07:00:20.000Z | 2021-07-14T06:51:46.000Z | tests/unit/test_imageselection.py | shawnmjones/MementoEmbed | 4d1b2eafc934502ff8a9e3ad3efeec8c0ddc8602 | [
"MIT"
] | 131 | 2018-06-07T22:42:20.000Z | 2021-11-15T01:08:53.000Z | tests/unit/test_imageselection.py | shawnmjones/MementoEmbed | 4d1b2eafc934502ff8a9e3ad3efeec8c0ddc8602 | [
"MIT"
] | 2 | 2019-06-06T07:50:54.000Z | 2019-10-29T10:20:04.000Z | import os
import unittest
from mementoembed.imageselection import get_image_list, score_image, get_best_image
class TestImageSelection(unittest.TestCase):
def test_get_image_list(self):
expected_imagelist = [
"http://example.com/images/image1.test", # absolute uri, same domain
"https://example2.com/myimage.test", # absolute uri, different domain
"http://example.com/images/image2.test", # relative uri
"""data:image/gif;base64,R0lGODlhyAAiALMAAFONvX2pzbPN4p6/2tTi7mibxYiw0d/q86nG3r7U5l2UwZO31unx98nb6nOiyf///yH5BAUUAA8ALAAAAADIACIAAAT/8MlJq7046827/2AojmRpnmiqriwGvG/Qjklg28es73wHxz0P4gcgBI9IHVGWzAx/xqZ0KlpSLU9Y9MrtVqzeBwFBJjPCaC44zW4HD4TzZI0h2OUjON7EsMd1fXcrfnsfgYUSeoYLPwoLZ3QTDAgORAoGWxQHNzYSBAY/BQ0XNZw5mgMBRACOpxSpnLE3qKqWC64hk5WNmBebnA8MjC8KFAygMAUCErA2CZoKq6wHkQ8C0dIxhQRED8OrC1hEmQ+12QADFebnABTr0ukh1+wB20QMu0ASCdn16wgTDmCTNlDfhG/sFODi9iMLvAoOi6hj92LZhHfZ3FEEYNEDwnMK/ykwhDEATAN2C/5d3PiDiYSIrALkg6EAz0hiFDNFJKeqgIEyM1nhwShNo0+glhBhgKlA5qqaE25KY1KAYkGAYlYVSEAgQdU1DFbFe3DgKwysWcHZ+QjAAIWdFQaMgkjk2b4ySLtNkCvuh90NYYmMLUsErVRiC8o8OLmkAYF5hZkRKYCHgVmDAiJJLeZpVUdrq/DA7XB5rAV+gkn/MJ0hc8sKm6OuclDoo8tgBQFgffd335p3cykEjSK1gIXLEl+Oq9OgTIKZtymg/hHuAoHmZJ6/5gDcwvDOyysEDS7B9VkJoSsEhuEyN6KSPyxKrf4qsnIoFQ4syL0qum8i9AW0H/9F/l3gngXwwSAfEQ5csIoFUmH1oAVrTEhXQ+Cdd6GGD4z230b+TQdDgB8S6INeG76AlVSsoYeibBg+cOAX2z1g4Vv2sYggER15uFliZFwWnUAAQmhLGUKe+MMFEa1oH40/FMKYht1RMKVB7+AiwTvEMehdeB2CicwLlAlXI1m5kSjBmACUOQF0HWRpAZcZqngBbxWwqZtkZz4QlEsJvkDiejDIcRh5h4kG5pPBrEHkDw06GKMEhAJwGxx+uBIoAIOmlxaH9TWCh4h2fgqDAWcc019AqwTHwDtu1UmMRQnkdpuHRU6gZ3uWOOaHILmuScc6LlFDhKuwwgiqsjQNgAD/UWgFZaKuq/w0AHIAuHIYReR5+A4C12HkEksSfRvuqiuxR4GebSFw7SraMqoRuXvK2t+Z+JDb22bsxDqBh+YRVCO5RgT81JnEGiNtNvvKKwl/IzJKql8ORadqQuSZis7CANCWYnIScOyAiJHayFIUIpM8r0GUstsrbA4HhC2nJi9LwDuihKkuhEQpgAAiEQpjyc99aWHMppz2gSLBlCL9iFQrW2pdz0TDPCkGCRgQjU9GVPpZQAkgIICWHfQhABkNkM1svQxg9wcJfWSn1AlxI5DA3COYjbbaLJBKzhQRuiF4Cn8nMiMXgQ+uOAkBFDDA2wxABkPJiMe8+OUaECVNLMZUJI755xtoHmwXnoNuugUQp4bGLzf0dvrriy2wsAMD4A377YJjSgDfD0QAADs="""
]
htmlcontent = """<html>
<head>
<title>Is this a good title?</title>
</head>
<body>
<p>some text</p>
<img src="{}">
<img src="{}">
<img src="/images/image2.test">
<img src="{}">
</body>
</html>""".format(
expected_imagelist[0], expected_imagelist[1],
expected_imagelist[3]
)
class mock_httpcache:
def get(self, uri):
return mock_Response()
class mock_Response:
@property
def text(self):
return htmlcontent
@property
def status_code(self):
return 200
mh = mock_httpcache()
uri = "http://example.com/example.html"
self.assertEqual(
get_image_list(uri, mh),
expected_imagelist
)
def test_score_image(self):
imagedata = [
"NYT_home_banner.gif",
"dis_PAGEONE_75.jpg",
"go_button.gif",
"jobs.gif",
"line2gray5x468.gif",
"mm_1b.gif",
"mostemailed.gif",
"onthisday.gif",
"p_videopageone.gif",
"serbia.184.1.jpg",
"sfu-160x105.jpg",
"spacer.gif"
]
imagedir = "{}/samples/images".format(
os.path.dirname(os.path.realpath(__file__)
))
#print()
maxscore = None
        for imagefile in imagedata:
            # read the file into its own name so we don't shadow the
            # imagedata list we are iterating over
            with open("{}/{}".format(imagedir, imagefile), 'rb') as f:
                filedata = f.read()
            score = score_image(filedata, 0, 0)
            # track the best-scoring image; the combined condition also
            # records the first image, fixing a NameError when the first
            # image happened to hold the maximum score
            if maxscore is None or score > maxscore:
                maxscore = score
                max_score_image = imagefile
            # print("{}: {}".format(imagefile, score))

        self.assertEqual(max_score_image, "serbia.184.1.jpg")
def test_best_image(self):
expected_imagelist = [
"http://example.com/images/image1.test", # absolute uri, same domain
"https://example2.com/myimage.test", # absolute uri, different domain
"http://example.com/images/image2.test", # relative uri
"""data:image/gif;base64,R0lGODlhyAAiALMAAFONvX2pzbPN4p6/2tTi7mibxYiw0d/q86nG3r7U5l2UwZO31unx98nb6nOiyf///yH5BAUUAA8ALAAAAADIACIAAAT/8MlJq7046827/2AojmRpnmiqriwGvG/Qjklg28es73wHxz0P4gcgBI9IHVGWzAx/xqZ0KlpSLU9Y9MrtVqzeBwFBJjPCaC44zW4HD4TzZI0h2OUjON7EsMd1fXcrfnsfgYUSeoYLPwoLZ3QTDAgORAoGWxQHNzYSBAY/BQ0XNZw5mgMBRACOpxSpnLE3qKqWC64hk5WNmBebnA8MjC8KFAygMAUCErA2CZoKq6wHkQ8C0dIxhQRED8OrC1hEmQ+12QADFebnABTr0ukh1+wB20QMu0ASCdn16wgTDmCTNlDfhG/sFODi9iMLvAoOi6hj92LZhHfZ3FEEYNEDwnMK/ykwhDEATAN2C/5d3PiDiYSIrALkg6EAz0hiFDNFJKeqgIEyM1nhwShNo0+glhBhgKlA5qqaE25KY1KAYkGAYlYVSEAgQdU1DFbFe3DgKwysWcHZ+QjAAIWdFQaMgkjk2b4ySLtNkCvuh90NYYmMLUsErVRiC8o8OLmkAYF5hZkRKYCHgVmDAiJJLeZpVUdrq/DA7XB5rAV+gkn/MJ0hc8sKm6OuclDoo8tgBQFgffd335p3cykEjSK1gIXLEl+Oq9OgTIKZtymg/hHuAoHmZJ6/5gDcwvDOyysEDS7B9VkJoSsEhuEyN6KSPyxKrf4qsnIoFQ4syL0qum8i9AW0H/9F/l3gngXwwSAfEQ5csIoFUmH1oAVrTEhXQ+Cdd6GGD4z230b+TQdDgB8S6INeG76AlVSsoYeibBg+cOAX2z1g4Vv2sYggER15uFliZFwWnUAAQmhLGUKe+MMFEa1oH40/FMKYht1RMKVB7+AiwTvEMehdeB2CicwLlAlXI1m5kSjBmACUOQF0HWRpAZcZqngBbxWwqZtkZz4QlEsJvkDiejDIcRh5h4kG5pPBrEHkDw06GKMEhAJwGxx+uBIoAIOmlxaH9TWCh4h2fgqDAWcc019AqwTHwDtu1UmMRQnkdpuHRU6gZ3uWOOaHILmuScc6LlFDhKuwwgiqsjQNgAD/UWgFZaKuq/w0AHIAuHIYReR5+A4C12HkEksSfRvuqiuxR4GebSFw7SraMqoRuXvK2t+Z+JDb22bsxDqBh+YRVCO5RgT81JnEGiNtNvvKKwl/IzJKql8ORadqQuSZis7CANCWYnIScOyAiJHayFIUIpM8r0GUstsrbA4HhC2nJi9LwDuihKkuhEQpgAAiEQpjyc99aWHMppz2gSLBlCL9iFQrW2pdz0TDPCkGCRgQjU9GVPpZQAkgIICWHfQhABkNkM1svQxg9wcJfWSn1AlxI5DA3COYjbbaLJBKzhQRuiF4Cn8nMiMXgQ+uOAkBFDDA2wxABkPJiMe8+OUaECVNLMZUJI755xtoHmwXnoNuugUQp4bGLzf0dvrriy2wsAMD4A377YJjSgDfD0QAADs="""
]
htmlcontent = """<html>
<head>
<title>Is this a good title?</title>
</head>
<body>
<p>some text</p>
<img src="{}">
<img src="{}">
<img src="/images/image2.test">
<img src="{}">
</body>
</html>""".format(
expected_imagelist[0], expected_imagelist[1],
expected_imagelist[3]
)
class mock_Response:
def __init__(self, content, headers):
self.content = content
self.headersdict = headers
self.status_code = 200
@property
def text(self):
return self.content
@property
def headers(self):
return self.headersdict
class mock_httpcache:
def __init__(self):
self.uri_to_content = {}
self.uri_to_headers = {}
self.timeout = 15
imagedir = "{}/samples/images".format(
os.path.dirname(os.path.realpath(__file__))
)
with open("{}/spacer.gif".format(imagedir), 'rb') as f:
data = f.read()
uri = "http://example.com/images/image1.test"
self.uri_to_content[uri] = data
self.uri_to_headers[uri] = {'content-type': 'image/testing', 'memento-datetime': 'cheese'}
with open("{}/mm_1b.gif".format(imagedir), 'rb') as f:
data = f.read()
uri = "https://example2.com/myimage.test"
self.uri_to_content[uri] = data
self.uri_to_headers[uri] = {'content-type': 'image/testing', 'memento-datetime': 'cheese'}
with open("{}/serbia.184.1.jpg".format(imagedir), 'rb') as f:
data = f.read()
uri = "http://example.com/images/image2.test"
self.uri_to_content[uri] = data
self.uri_to_headers[uri] = {'content-type': 'image/testing', 'memento-datetime': 'cheese'}
uri = "http://example.com/example.html"
self.uri_to_content[uri] = htmlcontent
self.uri_to_headers[uri] = {'content-type': 'text/html'}
def get(self, uri):
return mock_Response(
self.uri_to_content[uri],
self.uri_to_headers[uri]
)
class mock_future:
def __init__(self, uri, httpcache):
self.uri = uri
self.httpcache = httpcache
def done(self):
return True
def result(self):
return self.httpcache.get(self.uri)
def cancel(self):
pass
class mock_futuressession:
def __init__(self, httpcache):
self.httpcache = httpcache
def get(self, uri):
return mock_future(uri, self.httpcache)
mh = mock_httpcache()
uri = "http://example.com/example.html"
self.assertEqual(
get_best_image(uri, mh, futuressession=mock_futuressession(mh)),
"http://example.com/images/image2.test"
)
| 44.610577 | 1,576 | 0.653088 | 692 | 9,279 | 8.612717 | 0.241329 | 0.021141 | 0.018121 | 0.02349 | 0.790604 | 0.769463 | 0.762416 | 0.734732 | 0.734732 | 0.734732 | 0 | 0.069511 | 0.257355 | 9,279 | 207 | 1,577 | 44.826087 | 0.795385 | 0.020045 | 0 | 0.458065 | 0 | 0 | 0.24618 | 0.008732 | 0 | 1 | 0 | 0 | 0.019355 | 1 | 0.109677 | false | 0.006452 | 0.019355 | 0.058065 | 0.232258 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
925ad26631a48654f3482331f34260dcdb1672ba | 132 | py | Python | actions/autotag.py | jespino/matteractions | a9cfe75271c554ab2df831c0c950be4f1f4e6db9 | [
"BSD-2-Clause"
] | null | null | null | actions/autotag.py | jespino/matteractions | a9cfe75271c554ab2df831c0c950be4f1f4e6db9 | [
"BSD-2-Clause"
] | null | null | null | actions/autotag.py | jespino/matteractions | a9cfe75271c554ab2df831c0c950be4f1f4e6db9 | [
"BSD-2-Clause"
] | null | null | null | import RAKE
Rake = RAKE.Rake(RAKE.SmartStopList())
def autotag(text):
return Rake.run(text, minCharacters=3, maxWords=1)[:4]
| 16.5 | 58 | 0.712121 | 19 | 132 | 4.947368 | 0.684211 | 0.340426 | 0.382979 | 0.340426 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.026316 | 0.136364 | 132 | 7 | 59 | 18.857143 | 0.798246 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.25 | 0.25 | 0.75 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 7 |
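`autotag` delegates to the python-RAKE package. When that dependency is unavailable, the core RAKE idea can be sketched in pure Python: split the text into candidate phrases on stopwords and punctuation, score each word by degree/frequency, then keep the top candidates. The tiny stopword list below is an illustrative stand-in for `RAKE.SmartStopList()`, not the real list:

```python
import re

# Illustrative stopword list; RAKE ships a much larger SmartStopList.
STOPWORDS = {"a", "an", "and", "the", "of", "is", "are", "to", "in", "for", "on"}


def rake_keywords(text, min_chars=3, max_words=1):
    """RAKE-style keyword extraction sketch (not the python-RAKE code)."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    # Candidate phrases are maximal runs of non-stopwords.
    phrases, current = [], []
    for w in words:
        if w in STOPWORDS:
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(w)
    if current:
        phrases.append(current)
    # Word score = (degree + frequency) / frequency.
    freq, degree = {}, {}
    for phrase in phrases:
        for w in phrase:
            freq[w] = freq.get(w, 0) + 1
            degree[w] = degree.get(w, 0) + len(phrase) - 1
    scored = []
    for phrase in phrases:
        if len(phrase) > max_words:
            continue
        if sum(len(w) for w in phrase) < min_chars:
            continue
        score = sum((degree[w] + freq[w]) / freq[w] for w in phrase)
        scored.append((" ".join(phrase), score))
    scored.sort(key=lambda kv: -kv[1])
    return scored[:4]
```

With `max_words=1`, as in `autotag`, only single-word candidates survive the filter.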
9269711865cf790f9cc7a177882f4bae237963e8 | 2,575 | py | Python | rapidsms/contrib/echo/tests.py | catalpainternational/rapidsms | eb7234d04ceb31e4d57187f2d6ba2806d0c54e15 | [
"BSD-3-Clause"
] | 330 | 2015-01-11T03:00:14.000Z | 2022-03-21T11:34:23.000Z | rapidsms/contrib/echo/tests.py | catalpainternational/rapidsms | eb7234d04ceb31e4d57187f2d6ba2806d0c54e15 | [
"BSD-3-Clause"
] | 45 | 2015-01-06T16:14:19.000Z | 2022-03-16T13:12:53.000Z | rapidsms/contrib/echo/tests.py | catalpainternational/rapidsms | eb7234d04ceb31e4d57187f2d6ba2806d0c54e15 | [
"BSD-3-Clause"
] | 166 | 2015-01-30T19:53:38.000Z | 2021-11-09T18:44:44.000Z | #!/usr/bin/env python
# vim: ai ts=4 sts=4 et sw=4
from rapidsms.messages import IncomingMessage
from rapidsms.tests.harness import RapidTest
from rapidsms.contrib.echo.handlers.echo import EchoHandler
from rapidsms.contrib.echo.handlers.ping import PingHandler
class TestEchoHandler(RapidTest):
def setUp(self):
self.connection = self.create_connection()
def _test_handle(self, text, correct_response):
msg = IncomingMessage(self.connection, text)
retVal = EchoHandler.dispatch(self.connection, msg)
if correct_response is not None:
self.assertTrue(retVal)
self.assertEqual(len(msg.responses), 1)
self.assertEqual(msg.responses[0]['text'], correct_response)
else:
self.assertFalse(retVal)
self.assertEqual(len(msg.responses), 0)
def test_no_match(self):
self._test_handle('no match', None)
def test_only_keyword(self):
self._test_handle('echo', 'To echo some text, send: ECHO <ANYTHING>')
def test_keyword_and_whitespace(self):
self._test_handle('echo ', 'To echo some text, send: ECHO <ANYTHING>')
def test_match(self):
self._test_handle('echo hello', 'hello')
def test_case_insensitive_match(self):
self._test_handle('EcHo hello', 'hello')
def test_leading_whitespace(self):
self._test_handle(' echo hello', 'hello')
def test_trailing_whitespace(self):
self._test_handle('echo hello ', 'hello ')
def test_whitespace_after_keyword(self):
self._test_handle('echo hello', 'hello')
class TestPingHandler(RapidTest):
def setUp(self):
self.connection = self.create_connection()
def _test_handle(self, text, correct_response):
msg = IncomingMessage(self.connection, text)
retVal = PingHandler.dispatch(self.connection, msg)
if correct_response is not None:
self.assertTrue(retVal)
self.assertEqual(len(msg.responses), 1)
self.assertEqual(msg.responses[0]['text'], correct_response)
else:
self.assertFalse(retVal)
self.assertEqual(len(msg.responses), 0)
def test_no_match(self):
self._test_handle('no match', None)
def test_match(self):
self._test_handle('ping', 'pong')
def test_leading_whitespace(self):
self._test_handle(' ping', None)
def test_trailing_whitespace(self):
self._test_handle('ping ', None)
def test_case_sensitivity(self):
self._test_handle('PiNg', None)
| 31.402439 | 79 | 0.667184 | 314 | 2,575 | 5.264331 | 0.226115 | 0.072595 | 0.094374 | 0.141561 | 0.831216 | 0.791289 | 0.761041 | 0.727768 | 0.705384 | 0.653358 | 0 | 0.0045 | 0.223301 | 2,575 | 81 | 80 | 31.790123 | 0.822 | 0.018252 | 0 | 0.714286 | 0 | 0 | 0.089074 | 0 | 0 | 0 | 0 | 0 | 0.178571 | 1 | 0.303571 | false | 0 | 0.071429 | 0 | 0.410714 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
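The handlers exercised above follow RapidSMS's keyword-handler pattern. A dependency-free sketch that reproduces exactly the behaviours the tests assert (case-insensitive `echo` keyword tolerating leading whitespace, help text when no argument follows, and a strict exact-match `ping`) could look like this; the function names are hypothetical, not the rapidsms API:

```python
import re

HELP = "To echo some text, send: ECHO <ANYTHING>"


def echo_handle(text):
    """Return the response for an incoming message, or None on no match.
    A sketch mirroring TestEchoHandler above, not the actual handler code."""
    m = re.match(r"\s*echo(\s+(?P<rest>.*))?$", text, re.IGNORECASE | re.DOTALL)
    if m is None:
        return None
    rest = m.group("rest")
    if not rest:
        return HELP          # bare keyword (or keyword + whitespace)
    return rest              # echo back verbatim, trailing spaces included


def ping_handle(text):
    # The ping tests show no tolerance for whitespace or case variation,
    # so only the exact lowercase keyword matches.
    return "pong" if text == "ping" else None
```

Note how the regex's greedy `\s+` absorbs the whitespace after the keyword, so `'echo hello '` echoes `'hello '` while `'echo '` still yields the help text.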
927a648f173592e09948ec35786efce26c9bc265 | 17,662 | py | Python | test/test_trace_gen.py | gonzalorodrigo/ScSFWorkload | 2301dacf486df8ed783c0ba33cbbde6e9978c17e | [
"BSD-3-Clause-LBNL"
] | 1 | 2019-03-18T18:27:49.000Z | 2019-03-18T18:27:49.000Z | test/test_trace_gen.py | gonzalorodrigo/ScSFWorkload | 2301dacf486df8ed783c0ba33cbbde6e9978c17e | [
"BSD-3-Clause-LBNL"
] | 1 | 2020-12-17T21:33:15.000Z | 2020-12-17T21:35:41.000Z | test/test_trace_gen.py | gonzalorodrigo/ScSFWorkload | 2301dacf486df8ed783c0ba33cbbde6e9978c17e | [
"BSD-3-Clause-LBNL"
] | 1 | 2021-01-05T08:23:20.000Z | 2021-01-05T08:23:20.000Z | """ Unittests for the slurm simulator trace generator.
python -m unittest test_trace_gen
"""
import unittest
import slurm.trace_gen as trace_gen
class TestTraceGen(unittest.TestCase):
def test_one_record(self):
record=trace_gen.get_job_trace(job_id=1, username="name",
submit_time=1034,
duration=102,
wclimit=101,
tasks = 23,
cpus_per_task= 11,
tasks_per_node= 2,
qosname="theqos",
partition="thepartition",
account="theaccount",
reservation="thereservation",
dependency="thedependency")
f = open('tmp.trace', 'bw')
f.write(record)
f.close()
records=trace_gen.extract_records(file_name="tmp.trace",
list_trace_location="../bin/list_trace")
# There should be only one record
self.assertEqual(len(records), 1)
read_record=records[0]
self.assertEqual(read_record["JOBID"], "1");
self.assertEqual(read_record["USERNAME"], "name");
self.assertEqual(read_record["PARTITION"], "thepartition");
self.assertEqual(read_record["ACCOUNT"], "theaccount");
self.assertEqual(read_record["QOS"], "theqos");
self.assertEqual(read_record["SUBMIT"], "1034");
self.assertEqual(read_record["DURATION"], "102");
self.assertEqual(read_record["WCLIMIT"], "101");
self.assertEqual(read_record["TASKS"], "23(2,11)");
self.assertEqual(read_record["NUM_TASKS"], 23);
self.assertEqual(read_record["TASKS_PER_NODE"], 2);
self.assertEqual(read_record["CORES_PER_TASK"], 11);
self.assertEqual(read_record["PARTITION"], "thepartition");
self.assertEqual(read_record["RES"], "thereservation");
self.assertEqual(read_record["DEP"], "thedependency");
def test_one_record_workflow(self):
record=trace_gen.get_job_trace(job_id=1, username="name",
submit_time=1034,
duration=102,
wclimit=101,
tasks = 23,
cpus_per_task= 11,
tasks_per_node= 2,
qosname="theqos",
partition="thepartition",
account="theaccount",
reservation="thereservation",
dependency="thedependency",
workflow_manifest="my_manifest.json")
f = open('tmp.trace', 'bw')
f.write(record)
f.close()
records=trace_gen.extract_records(file_name="tmp.trace",
list_trace_location="../bin/list_trace")
# There should be only one record
self.assertEqual(len(records), 1)
read_record=records[0]
self.assertEqual(read_record["JOBID"], "1");
self.assertEqual(read_record["USERNAME"], "name");
self.assertEqual(read_record["PARTITION"], "thepartition");
self.assertEqual(read_record["ACCOUNT"], "theaccount");
self.assertEqual(read_record["QOS"], "theqos");
self.assertEqual(read_record["SUBMIT"], "1034");
self.assertEqual(read_record["DURATION"], "102");
self.assertEqual(read_record["WCLIMIT"], "101");
self.assertEqual(read_record["TASKS"], "23(2,11)");
self.assertEqual(read_record["PARTITION"], "thepartition");
self.assertEqual(read_record["RES"], "thereservation");
self.assertEqual(read_record["DEP"], "thedependency");
self.assertEqual(read_record["WF"], "my_manifest.json");
def test_extract_records(self):
records = trace_gen.extract_records(file_name="ref.trace",
list_trace_location="../bin/list_trace")
self.assertGreater(len(records), 1)
self.assertIsNot(records[0]["JOBID"], None)
def test_dump_trace(self):
generator = trace_gen.TraceGenerator()
generator.add_job(job_id=1, username="name",
submit_time=1034,
duration=102,
wclimit=101,
tasks = 23,
cpus_per_task= 11,
tasks_per_node= 2,
qosname="theqos",
partition="thepartition",
account="theaccount",
reservation="thereservation",
dependency="thedependency")
generator.add_job(job_id=2, username="name2",
submit_time=10342,
duration=1022,
wclimit=1012,
tasks = 232,
cpus_per_task= 112,
tasks_per_node= 22,
qosname="theqos2",
partition="thepartition2",
account="theaccount2",
reservation="thereservation2",
dependency="thedependency2")
generator.dump_trace('tmp.trace')
records=trace_gen.extract_records(file_name="tmp.trace",
list_trace_location="../bin/list_trace")
# There should be exactly two records
self.assertEqual(len(records), 2)
read_record=records[0]
self.assertEqual(read_record["JOBID"], "1");
self.assertEqual(read_record["USERNAME"], "name");
self.assertEqual(read_record["PARTITION"], "thepartition");
self.assertEqual(read_record["ACCOUNT"], "theaccount");
self.assertEqual(read_record["QOS"], "theqos");
self.assertEqual(read_record["SUBMIT"], "1034");
self.assertEqual(read_record["DURATION"], "102");
self.assertEqual(read_record["WCLIMIT"], "101");
self.assertEqual(read_record["TASKS"], "23(2,11)");
self.assertEqual(read_record["PARTITION"], "thepartition");
self.assertEqual(read_record["RES"], "thereservation");
self.assertEqual(read_record["DEP"], "thedependency");
read_record=records[1]
self.assertEqual(read_record["JOBID"], "2");
self.assertEqual(read_record["USERNAME"], "name2");
self.assertEqual(read_record["PARTITION"], "thepartition2");
self.assertEqual(read_record["ACCOUNT"], "theaccount2");
self.assertEqual(read_record["QOS"], "theqos2");
self.assertEqual(read_record["SUBMIT"], "10342");
self.assertEqual(read_record["DURATION"], "1022");
self.assertEqual(read_record["WCLIMIT"], "1012");
self.assertEqual(read_record["TASKS"], "232(22,112)");
self.assertEqual(read_record["PARTITION"], "thepartition2");
self.assertEqual(read_record["RES"], "thereservation2");
self.assertEqual(read_record["DEP"], "thedependency2");
def test_dump_qos(self):
generator = trace_gen.TraceGenerator()
generator.add_job(job_id=1, username="name",
submit_time=1034,
duration=102,
wclimit=101,
tasks = 23,
cpus_per_task= 11,
tasks_per_node= 2,
qosname="theqos",
partition="thepartition",
account="theaccount",
reservation="thereservation",
dependency="thedependency")
generator.add_job(job_id=2, username="name",
submit_time=1034,
duration=102,
wclimit=101,
tasks = 23,
cpus_per_task= 11,
tasks_per_node= 2,
qosname="theqos",
partition="thepartition",
account="theaccount",
reservation="thereservation",
dependency="thedependency")
generator.add_job(job_id=2, username="name2",
submit_time=10342,
duration=1022,
wclimit=1012,
tasks = 232,
cpus_per_task= 112,
tasks_per_node= 22,
qosname="theqos2",
partition="thepartition2",
account="theaccount2",
reservation="thereservation2",
dependency="thedependency2")
generator.dump_qos("qos.sim")
f = open("qos.sim", "r")
lines = f.readlines()
self.assertEqual(len(lines), 2)
self.assertEqual("theqos", lines[0].strip())
self.assertEqual("theqos2", lines[1].strip())
def test_dump_users(self):
generator = trace_gen.TraceGenerator()
generator.add_job(job_id=1, username="name",
submit_time=1034,
duration=102,
wclimit=101,
tasks = 23,
cpus_per_task= 11,
tasks_per_node= 2,
qosname="theqos",
partition="thepartition",
account="theaccount",
reservation="thereservation",
dependency="thedependency")
generator.add_job(job_id=2, username="name",
submit_time=1034,
duration=102,
wclimit=101,
tasks = 23,
cpus_per_task= 11,
tasks_per_node= 2,
qosname="theqos",
partition="thepartition",
account="theaccount",
reservation="thereservation",
dependency="thedependency")
generator.add_job(job_id=2, username="name2",
submit_time=10342,
duration=1022,
wclimit=1012,
tasks = 232,
cpus_per_task= 112,
tasks_per_node= 22,
qosname="theqos2",
partition="thepartition2",
account="theaccount2",
reservation="thereservation2",
dependency="thedependency2")
generator.dump_users("users.sim")
f = open("users.sim", "r")
lines = f.readlines()
self.assertEqual(len(lines), 2)
self.assertEqual("name:1024", lines[0].strip())
self.assertEqual("name2:1025", lines[1].strip())
def test_get_core_s_per_period_s(self):
generator = trace_gen.TraceGenerator()
generator.add_job(job_id=1, username="name",
submit_time=1034,
duration=102,
wclimit=101,
tasks = 23,
cpus_per_task= 11,
tasks_per_node= 2,
qosname="theqos",
partition="thepartition",
account="theaccount",
reservation="thereservation",
dependency="thedependency")
generator.add_job(job_id=2, username="name",
submit_time=1034,
duration=102,
wclimit=101,
tasks = 23,
cpus_per_task= 11,
tasks_per_node= 2,
qosname="theqos",
partition="thepartition",
account="theaccount",
reservation="thereservation",
dependency="thedependency")
generator.add_job(job_id=2, username="name2",
submit_time=1037,
duration=1022,
wclimit=1012,
tasks = 232,
cpus_per_task= 112,
tasks_per_node= 22,
qosname="theqos2",
partition="thepartition2",
account="theaccount2",
reservation="thereservation2",
dependency="thedependency2")
self.assertEqual(generator.get_submitted_core_s(),
(23*11*102+23*11*102+232*112*1022, 3.0))
def test_get_core_s_per_period_s_decay(self):
generator = trace_gen.TraceGenerator()
generator.set_submitted_cores_decay(2)
generator.add_job(job_id=1, username="name",
submit_time=1034,
duration=102,
wclimit=101,
tasks = 23,
cpus_per_task= 11,
tasks_per_node= 2,
qosname="theqos",
partition="thepartition",
account="theaccount",
reservation="thereservation",
dependency="thedependency")
generator.add_job(job_id=2, username="name",
submit_time=1035,
duration=102,
wclimit=101,
tasks = 23,
cpus_per_task= 11,
tasks_per_node= 2,
qosname="theqos",
partition="thepartition",
account="theaccount",
reservation="thereservation",
dependency="thedependency")
self.assertEqual(generator.get_submitted_core_s(),
(23*11*102+23*11*102, 1.0))
generator.add_job(job_id=2, username="name2",
submit_time=1038,
duration=1022,
wclimit=1012,
tasks = 232,
cpus_per_task= 112,
tasks_per_node= 22,
qosname="theqos2",
partition="thepartition2",
account="theaccount2",
reservation="thereservation2",
dependency="thedependency2")
self.assertEqual(generator.get_submitted_core_s(),
(232*112*1022, 1.0))
| 51.643275 | 76 | 0.410259 | 1,254 | 17,662 | 5.575758 | 0.096491 | 0.1373 | 0.141304 | 0.185927 | 0.904891 | 0.848541 | 0.842248 | 0.832094 | 0.824943 | 0.824943 | 0 | 0.053925 | 0.500226 | 17,662 | 342 | 77 | 51.643275 | 0.73819 | 0.010361 | 0 | 0.839344 | 0 | 0 | 0.10968 | 0 | 0 | 0 | 0 | 0 | 0.216393 | 1 | 0.02623 | false | 0 | 0.006557 | 0 | 0.036066 | 0.003279 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
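The accounting that `test_get_core_s_per_period_s` asserts can be sketched compactly: each job contributes `tasks * cpus_per_task * duration` core-seconds, and the reported period is the span between the first and last submit times. This is a minimal sketch of that bookkeeping only; the decay variant (`set_submitted_cores_decay`, which drops jobs outside a sliding window) and the trace/qos/users dumping are omitted:

```python
class MiniTraceGenerator:
    """Core-second accounting sketch matching the no-decay test above."""

    def __init__(self):
        self.jobs = []   # list of (submit_time, core_seconds)

    def add_job(self, submit_time, duration, tasks, cpus_per_task, **ignored):
        # Extra keyword arguments (qosname, partition, ...) are accepted
        # but irrelevant to the core-second total.
        self.jobs.append((submit_time, tasks * cpus_per_task * duration))

    def get_submitted_core_s(self):
        # Period = span between first and last submission (requires >= 1 job).
        submits = [s for s, _ in self.jobs]
        period = float(max(submits) - min(submits))
        return sum(cs for _, cs in self.jobs), period
```

Feeding it the same three jobs as the test reproduces the expected `(total_core_s, 3.0)` tuple.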
2bc2126000e66ea84f8f088f8b42bff3b3f055d0 | 375 | py | Python | lessons/or_statement.py | thepros847/python_programiing | d177f79d0d1f21df434bf3f8663ae6469fcf8357 | [
"MIT"
] | null | null | null | lessons/or_statement.py | thepros847/python_programiing | d177f79d0d1f21df434bf3f8663ae6469fcf8357 | [
"MIT"
] | null | null | null | lessons/or_statement.py | thepros847/python_programiing | d177f79d0d1f21df434bf3f8663ae6469fcf8357 | [
"MIT"
] | null | null | null | won_bet = True
big_win = True
if won_bet or big_win:
print("You can now stop betting!")
won_bet = False
big_win = True
if won_bet or big_win:
print("You can now stop betting!")
won_bet = True
big_win = False
if won_bet or big_win:
print("You can now stop betting!")
won_bet = False
big_win = False
if won_bet or big_win:
print("You can now stop betting!") | 16.304348 | 35 | 0.704 | 72 | 375 | 3.444444 | 0.194444 | 0.193548 | 0.129032 | 0.16129 | 1 | 0.943548 | 0.943548 | 0.943548 | 0.943548 | 0.943548 | 0 | 0 | 0.210667 | 375 | 23 | 36 | 16.304348 | 0.837838 | 0 | 0 | 1 | 0 | 0 | 0.265957 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.25 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
2bee543c940ab0516f3020d405609e85fe964e9a | 27,878 | py | Python | jasmin/protocols/smpp/test/test_smpp_server_credentials.py | balsagoth/jasmin | 53d55f6af8c0d5faca51849e5953452a0dd93452 | [
"Apache-2.0"
] | null | null | null | jasmin/protocols/smpp/test/test_smpp_server_credentials.py | balsagoth/jasmin | 53d55f6af8c0d5faca51849e5953452a0dd93452 | [
"Apache-2.0"
] | null | null | null | jasmin/protocols/smpp/test/test_smpp_server_credentials.py | balsagoth/jasmin | 53d55f6af8c0d5faca51849e5953452a0dd93452 | [
"Apache-2.0"
] | null | null | null | import mock
import copy
from datetime import datetime
from twisted.internet import defer
from jasmin.protocols.smpp.test.test_smpp_server import SMPPClientTestCases
from jasmin.vendor.smpp.twisted.protocol import SMPPSessionStates
from jasmin.vendor.smpp.pdu import pdu_types
from jasmin.vendor.smpp.pdu.constants import priority_flag_value_map
from jasmin.vendor.smpp.pdu.pdu_types import RegisteredDeliveryReceipt, RegisteredDelivery
class AuthorizationsTestCases(SMPPClientTestCases):
@defer.inlineCallbacks
def test_authorized_smpps_send(self):
user = self.routerpb_factory.getUser('u1')
user.mt_credential.setAuthorization('smpps_send', True)
# Connect and bind
yield self.smppc_factory.connectAndBind()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)
# Install mockers
self.smppc_factory.lastProto.PDUReceived = mock.Mock(wraps=self.smppc_factory.lastProto.PDUReceived)
# SMPPClient > SMPPServer
yield self.smppc_factory.lastProto.sendDataRequest(self.SubmitSmPDU)
# Unbind & Disconnect
yield self.smppc_factory.smpp.unbindAndDisconnect()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)
# Asserts SMPPClient side
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_count, 2)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].id,
pdu_types.CommandId.submit_sm_resp)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].status,
pdu_types.CommandStatus.ESME_ROK)
@defer.inlineCallbacks
def test_nonauthorized_smpps_send(self):
user = self.routerpb_factory.getUser('u1')
user.mt_credential.setAuthorization('smpps_send', False)
# Connect and bind
yield self.smppc_factory.connectAndBind()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)
# Install mockers
self.smppc_factory.lastProto.PDUReceived = mock.Mock(wraps=self.smppc_factory.lastProto.PDUReceived)
# SMPPClient > SMPPServer
yield self.smppc_factory.lastProto.sendDataRequest(self.SubmitSmPDU)
# Unbind & Disconnect
yield self.smppc_factory.smpp.unbindAndDisconnect()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)
# Asserts SMPPClient side
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_count, 2)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].id,
pdu_types.CommandId.submit_sm_resp)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].status,
pdu_types.CommandStatus.ESME_RINVSYSID)
@defer.inlineCallbacks
def test_authorized_set_dlr_level(self):
user = self.routerpb_factory.getUser('u1')
user.mt_credential.setAuthorization('set_dlr_level', True)
# Connect and bind
yield self.smppc_factory.connectAndBind()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)
# Install mockers
self.smppc_factory.lastProto.PDUReceived = mock.Mock(wraps=self.smppc_factory.lastProto.PDUReceived)
# SMPPClient > SMPPServer
SubmitSmPDU = copy.deepcopy(self.SubmitSmPDU)
SubmitSmPDU.params['registered_delivery'] = RegisteredDelivery(RegisteredDeliveryReceipt.SMSC_DELIVERY_RECEIPT_REQUESTED_FOR_FAILURE)
yield self.smppc_factory.lastProto.sendDataRequest(SubmitSmPDU)
# Unbind & Disconnect
yield self.smppc_factory.smpp.unbindAndDisconnect()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)
# Asserts SMPPClient side
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_count, 2)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].id,
pdu_types.CommandId.submit_sm_resp)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].status,
pdu_types.CommandStatus.ESME_ROK)
@defer.inlineCallbacks
def test_nonauthorized_set_dlr_level(self):
user = self.routerpb_factory.getUser('u1')
user.mt_credential.setAuthorization('set_dlr_level', False)
# Connect and bind
yield self.smppc_factory.connectAndBind()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)
# Install mockers
self.smppc_factory.lastProto.PDUReceived = mock.Mock(wraps=self.smppc_factory.lastProto.PDUReceived)
# SMPPClient > SMPPServer
SubmitSmPDU = copy.deepcopy(self.SubmitSmPDU)
SubmitSmPDU.params['registered_delivery'] = RegisteredDelivery(RegisteredDeliveryReceipt.SMSC_DELIVERY_RECEIPT_REQUESTED_FOR_FAILURE)
yield self.smppc_factory.lastProto.sendDataRequest(SubmitSmPDU)
# Unbind & Disconnect
yield self.smppc_factory.smpp.unbindAndDisconnect()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)
# Asserts SMPPClient side
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_count, 2)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].id,
pdu_types.CommandId.submit_sm_resp)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].status,
pdu_types.CommandStatus.ESME_RINVSYSID)
@defer.inlineCallbacks
def test_authorized_set_source_address(self):
user = self.routerpb_factory.getUser('u1')
user.mt_credential.setAuthorization('set_source_address', True)
# Connect and bind
yield self.smppc_factory.connectAndBind()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)
# Install mockers
self.smppc_factory.lastProto.PDUReceived = mock.Mock(wraps=self.smppc_factory.lastProto.PDUReceived)
# SMPPClient > SMPPServer
SubmitSmPDU = copy.deepcopy(self.SubmitSmPDU)
SubmitSmPDU.params['source_addr'] = 'DEFINED'
yield self.smppc_factory.lastProto.sendDataRequest(SubmitSmPDU)
# Unbind & Disconnect
yield self.smppc_factory.smpp.unbindAndDisconnect()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)
# Asserts SMPPClient side
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_count, 2)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].id,
pdu_types.CommandId.submit_sm_resp)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].status,
pdu_types.CommandStatus.ESME_ROK)
@defer.inlineCallbacks
def test_nonauthorized_set_source_address(self):
user = self.routerpb_factory.getUser('u1')
user.mt_credential.setAuthorization('set_source_address', False)
# Connect and bind
yield self.smppc_factory.connectAndBind()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)
# Install mockers
self.smppc_factory.lastProto.PDUReceived = mock.Mock(wraps=self.smppc_factory.lastProto.PDUReceived)
# SMPPClient > SMPPServer
SubmitSmPDU = copy.deepcopy(self.SubmitSmPDU)
SubmitSmPDU.params['source_addr'] = 'DEFINED'
yield self.smppc_factory.lastProto.sendDataRequest(SubmitSmPDU)
# Unbind & Disconnect
yield self.smppc_factory.smpp.unbindAndDisconnect()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)
# Asserts SMPPClient side
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_count, 2)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].id,
pdu_types.CommandId.submit_sm_resp)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].status,
pdu_types.CommandStatus.ESME_RINVSYSID)
@defer.inlineCallbacks
def test_authorized_set_priority(self):
user = self.routerpb_factory.getUser('u1')
user.mt_credential.setAuthorization('set_priority', True)
# Connect and bind
yield self.smppc_factory.connectAndBind()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)
# Install mockers
self.smppc_factory.lastProto.PDUReceived = mock.Mock(wraps=self.smppc_factory.lastProto.PDUReceived)
# SMPPClient > SMPPServer
SubmitSmPDU = copy.deepcopy(self.SubmitSmPDU)
SubmitSmPDU.params['priority_flag'] = priority_flag_value_map[3]
yield self.smppc_factory.lastProto.sendDataRequest(SubmitSmPDU)
# Unbind & Disconnect
yield self.smppc_factory.smpp.unbindAndDisconnect()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)
# Asserts SMPPClient side
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_count, 2)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].id,
pdu_types.CommandId.submit_sm_resp)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].status,
pdu_types.CommandStatus.ESME_ROK)
@defer.inlineCallbacks
def test_nonauthorized_set_priority(self):
user = self.routerpb_factory.getUser('u1')
user.mt_credential.setAuthorization('set_priority', False)
# Connect and bind
yield self.smppc_factory.connectAndBind()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)
# Install mockers
self.smppc_factory.lastProto.PDUReceived = mock.Mock(wraps=self.smppc_factory.lastProto.PDUReceived)
# SMPPClient > SMPPServer
SubmitSmPDU = copy.deepcopy(self.SubmitSmPDU)
SubmitSmPDU.params['priority_flag'] = priority_flag_value_map[3]
yield self.smppc_factory.lastProto.sendDataRequest(SubmitSmPDU)
# Unbind & Disconnect
yield self.smppc_factory.smpp.unbindAndDisconnect()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)
# Asserts SMPPClient side
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_count, 2)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].id,
pdu_types.CommandId.submit_sm_resp)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].status,
pdu_types.CommandStatus.ESME_RINVSYSID)
class FiltersTestCases(SMPPClientTestCases):
@defer.inlineCallbacks
def test_filter_destination_address(self):
user = self.routerpb_factory.getUser('u1')
user.mt_credential.setValueFilter('destination_address', r'^A.*')
# Connect and bind
yield self.smppc_factory.connectAndBind()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)
# Install mockers
self.smppc_factory.lastProto.PDUReceived = mock.Mock(wraps=self.smppc_factory.lastProto.PDUReceived)
# SMPPClient > SMPPServer
yield self.smppc_factory.lastProto.sendDataRequest(self.SubmitSmPDU)
# Unbind & Disconnect
yield self.smppc_factory.smpp.unbindAndDisconnect()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)
# Asserts SMPPClient side
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_count, 2)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].id,
pdu_types.CommandId.submit_sm_resp)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].status,
pdu_types.CommandStatus.ESME_RINVDSTADR)
@defer.inlineCallbacks
def test_filter_source_address(self):
user = self.routerpb_factory.getUser('u1')
user.mt_credential.setValueFilter('source_address', r'^A.*')
# Connect and bind
yield self.smppc_factory.connectAndBind()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)
# Install mockers
self.smppc_factory.lastProto.PDUReceived = mock.Mock(wraps=self.smppc_factory.lastProto.PDUReceived)
# SMPPClient > SMPPServer
yield self.smppc_factory.lastProto.sendDataRequest(self.SubmitSmPDU)
# Unbind & Disconnect
yield self.smppc_factory.smpp.unbindAndDisconnect()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)
# Asserts SMPPClient side
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_count, 2)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].id,
pdu_types.CommandId.submit_sm_resp)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].status,
pdu_types.CommandStatus.ESME_RINVSRCADR)
@defer.inlineCallbacks
def test_filter_priority(self):
user = self.routerpb_factory.getUser('u1')
user.mt_credential.setValueFilter('priority', r'^A.*')
# Connect and bind
yield self.smppc_factory.connectAndBind()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)
# Install mockers
self.smppc_factory.lastProto.PDUReceived = mock.Mock(wraps=self.smppc_factory.lastProto.PDUReceived)
# SMPPClient > SMPPServer
yield self.smppc_factory.lastProto.sendDataRequest(self.SubmitSmPDU)
# Unbind & Disconnect
yield self.smppc_factory.smpp.unbindAndDisconnect()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)
# Asserts SMPPClient side
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_count, 2)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].id,
pdu_types.CommandId.submit_sm_resp)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].status,
pdu_types.CommandStatus.ESME_RINVPRTFLG)
@defer.inlineCallbacks
def test_filter_content(self):
user = self.routerpb_factory.getUser('u1')
user.mt_credential.setValueFilter('content', r'^A.*')
# Connect and bind
yield self.smppc_factory.connectAndBind()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)
# Install mockers
self.smppc_factory.lastProto.PDUReceived = mock.Mock(wraps=self.smppc_factory.lastProto.PDUReceived)
# SMPPClient > SMPPServer
yield self.smppc_factory.lastProto.sendDataRequest(self.SubmitSmPDU)
# Unbind & Disconnect
yield self.smppc_factory.smpp.unbindAndDisconnect()
self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)
# Asserts SMPPClient side
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_count, 2)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].id,
pdu_types.CommandId.submit_sm_resp)
self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].status,
pdu_types.CommandStatus.ESME_RSYSERR)

class QuotasTestCases(SMPPClientTestCases):

    @defer.inlineCallbacks
    def test_default_unrated_route(self):
        """
        Default quotas, everything is unlimited
        """
        user = self.routerpb_factory.getUser('u1')

        # Connect and bind
        yield self.smppc_factory.connectAndBind()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)

        # SMPPClient > SMPPServer
        yield self.smppc_factory.lastProto.sendDataRequest(self.SubmitSmPDU)

        # Unbind & Disconnect
        yield self.smppc_factory.smpp.unbindAndDisconnect()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)

        # Assert User quotas still unlimited
        self.assertEqual(user.mt_credential.getQuota('balance'), None)
        self.assertEqual(user.mt_credential.getQuota('submit_sm_count'), None)

    @defer.inlineCallbacks
    def test_unrated_route_limited_quotas(self):
        user = self.routerpb_factory.getUser('u1')
        user.mt_credential.setQuota('balance', 10.0)
        user.mt_credential.setQuota('submit_sm_count', 10)

        # Connect and bind
        yield self.smppc_factory.connectAndBind()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)

        # SMPPClient > SMPPServer
        yield self.smppc_factory.lastProto.sendDataRequest(self.SubmitSmPDU)

        # Unbind & Disconnect
        yield self.smppc_factory.smpp.unbindAndDisconnect()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)

        # Assert quotas after SMS is sent
        self.assertEqual(user.mt_credential.getQuota('balance'), 10)
        self.assertEqual(user.mt_credential.getQuota('submit_sm_count'), 9)

    @defer.inlineCallbacks
    def test_unrated_route_unlimited_quotas(self):
        user = self.routerpb_factory.getUser('u1')
        user.mt_credential.setQuota('balance', None)
        user.mt_credential.setQuota('submit_sm_count', None)

        # Connect and bind
        yield self.smppc_factory.connectAndBind()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)

        # SMPPClient > SMPPServer
        yield self.smppc_factory.lastProto.sendDataRequest(self.SubmitSmPDU)

        # Unbind & Disconnect
        yield self.smppc_factory.smpp.unbindAndDisconnect()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)

        # Assert quotas after SMS is sent
        self.assertEqual(user.mt_credential.getQuota('balance'), None)
        self.assertEqual(user.mt_credential.getQuota('submit_sm_count'), None)

    @defer.inlineCallbacks
    def test_rated_route_limited_quotas(self):
        user = self.routerpb_factory.getUser('u1')
        user.mt_credential.setQuota('balance', 10.0)
        user.mt_credential.setQuota('submit_sm_count', 10)
        default_route = self.routerpb_factory.getMTRoutingTable().getAll()[0][0]
        default_route.rate = 1.2

        # Connect and bind
        yield self.smppc_factory.connectAndBind()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)

        # SMPPClient > SMPPServer
        yield self.smppc_factory.lastProto.sendDataRequest(self.SubmitSmPDU)

        # Unbind & Disconnect
        yield self.smppc_factory.smpp.unbindAndDisconnect()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)

        # Assert quotas after SMS is sent
        self.assertEqual(user.mt_credential.getQuota('balance'), 8.8)
        self.assertEqual(user.mt_credential.getQuota('submit_sm_count'), 9)

    @defer.inlineCallbacks
    def test_rated_route_unlimited_quotas(self):
        user = self.routerpb_factory.getUser('u1')
        user.mt_credential.setQuota('balance', None)
        user.mt_credential.setQuota('submit_sm_count', None)
        default_route = self.routerpb_factory.getMTRoutingTable().getAll()[0][0]
        default_route.rate = 1.2

        # Connect and bind
        yield self.smppc_factory.connectAndBind()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)

        # SMPPClient > SMPPServer
        yield self.smppc_factory.lastProto.sendDataRequest(self.SubmitSmPDU)

        # Unbind & Disconnect
        yield self.smppc_factory.smpp.unbindAndDisconnect()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)

        # Assert quotas after SMS is sent
        self.assertEqual(user.mt_credential.getQuota('balance'), None)
        self.assertEqual(user.mt_credential.getQuota('submit_sm_count'), None)

    @defer.inlineCallbacks
    def test_rated_route_insufficient_balance(self):
        user = self.routerpb_factory.getUser('u1')
        user.mt_credential.setQuota('balance', 1.1)
        user.mt_credential.setQuota('submit_sm_count', None)
        default_route = self.routerpb_factory.getMTRoutingTable().getAll()[0][0]
        default_route.rate = 1.2

        # Connect and bind
        yield self.smppc_factory.connectAndBind()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)

        # Install mockers
        self.smppc_factory.lastProto.PDUReceived = mock.Mock(wraps=self.smppc_factory.lastProto.PDUReceived)

        # SMPPClient > SMPPServer
        yield self.smppc_factory.lastProto.sendDataRequest(self.SubmitSmPDU)

        # Unbind & Disconnect
        yield self.smppc_factory.smpp.unbindAndDisconnect()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)

        # Assert quotas after SMS is sent
        self.assertEqual(user.mt_credential.getQuota('balance'), 1.1)
        self.assertEqual(user.mt_credential.getQuota('submit_sm_count'), None)

        # Asserts SMPPClient side
        self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_count, 2)
        self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].id,
                         pdu_types.CommandId.submit_sm_resp)
        self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].status,
                         pdu_types.CommandStatus.ESME_RSYSERR)

    @defer.inlineCallbacks
    def test_unrated_route_insufficient_submit_sm_count(self):
        user = self.routerpb_factory.getUser('u1')
        user.mt_credential.setQuota('balance', None)
        user.mt_credential.setQuota('submit_sm_count', 0)

        # Connect and bind
        yield self.smppc_factory.connectAndBind()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)

        # Install mockers
        self.smppc_factory.lastProto.PDUReceived = mock.Mock(wraps=self.smppc_factory.lastProto.PDUReceived)

        # SMPPClient > SMPPServer
        yield self.smppc_factory.lastProto.sendDataRequest(self.SubmitSmPDU)

        # Unbind & Disconnect
        yield self.smppc_factory.smpp.unbindAndDisconnect()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)

        # Assert quotas after SMS is sent
        self.assertEqual(user.mt_credential.getQuota('balance'), None)
        self.assertEqual(user.mt_credential.getQuota('submit_sm_count'), 0)

        # Asserts SMPPClient side
        self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_count, 2)
        self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].id,
                         pdu_types.CommandId.submit_sm_resp)
        self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].status,
                         pdu_types.CommandStatus.ESME_RSYSERR)

    @defer.inlineCallbacks
    def test_rated_route_insufficient_submit_sm_count(self):
        user = self.routerpb_factory.getUser('u1')
        user.mt_credential.setQuota('balance', None)
        user.mt_credential.setQuota('submit_sm_count', 0)
        default_route = self.routerpb_factory.getMTRoutingTable().getAll()[0][0]
        default_route.rate = 1.2

        # Connect and bind
        yield self.smppc_factory.connectAndBind()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)

        # Install mockers
        self.smppc_factory.lastProto.PDUReceived = mock.Mock(wraps=self.smppc_factory.lastProto.PDUReceived)

        # SMPPClient > SMPPServer
        yield self.smppc_factory.lastProto.sendDataRequest(self.SubmitSmPDU)

        # Unbind & Disconnect
        yield self.smppc_factory.smpp.unbindAndDisconnect()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)

        # Assert quotas after SMS is sent
        self.assertEqual(user.mt_credential.getQuota('balance'), None)
        self.assertEqual(user.mt_credential.getQuota('submit_sm_count'), 0)

        # Asserts SMPPClient side
        self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_count, 2)
        self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].id,
                         pdu_types.CommandId.submit_sm_resp)
        self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].status,
                         pdu_types.CommandStatus.ESME_RSYSERR)
    @defer.inlineCallbacks
    def test_rated_route_early_decrement_balance_percent_insufficient_balance(self):
        """Balance is greater than the early_decrement_balance_percent share but lower than the
        total rate; the user must not be charged in this case, since a balance covering the
        total rate is required."""
        user = self.routerpb_factory.getUser('u1')
        user.mt_credential.setQuota('balance', 1.0)
        user.mt_credential.setQuota('early_decrement_balance_percent', 25)
        default_route = self.routerpb_factory.getMTRoutingTable().getAll()[0][0]
        default_route.rate = 2.0

        # Connect and bind
        yield self.smppc_factory.connectAndBind()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)

        # Install mockers
        self.smppc_factory.lastProto.PDUReceived = mock.Mock(wraps=self.smppc_factory.lastProto.PDUReceived)

        # SMPPClient > SMPPServer
        yield self.smppc_factory.lastProto.sendDataRequest(self.SubmitSmPDU)

        # Unbind & Disconnect
        yield self.smppc_factory.smpp.unbindAndDisconnect()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)

        # Assert quotas after SMS is sent
        self.assertEqual(user.mt_credential.getQuota('balance'), 1)

        # Asserts SMPPClient side
        self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_count, 2)
        self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].id,
                         pdu_types.CommandId.submit_sm_resp)
        self.assertEqual(self.smppc_factory.lastProto.PDUReceived.call_args_list[0][0][0].status,
                         pdu_types.CommandStatus.ESME_RSYSERR)

    @defer.inlineCallbacks
    def test_rated_route_early_decrement_balance_percent(self):
        """Note:
        Since this test case has no SMPPClientManagerPB set, the message will not be sent
        to the routed connector; the user will only be charged early (on submit_sm).
        The complete test (with charging on submit_sm_resp) is done in
        test_router_smpps.BillRequestSubmitSmRespCallbackingTestCases
        """
        user = self.routerpb_factory.getUser('u1')
        user.mt_credential.setQuota('balance', 10.0)
        user.mt_credential.setQuota('early_decrement_balance_percent', 25)
        default_route = self.routerpb_factory.getMTRoutingTable().getAll()[0][0]
        default_route.rate = 2.0

        # Connect and bind
        yield self.smppc_factory.connectAndBind()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)

        # SMPPClient > SMPPServer
        yield self.smppc_factory.lastProto.sendDataRequest(self.SubmitSmPDU)

        # Unbind & Disconnect
        yield self.smppc_factory.smpp.unbindAndDisconnect()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)

        # Assert quotas after SMS is sent
        self.assertEqual(user.mt_credential.getQuota('balance'), 9.5)

    @defer.inlineCallbacks
    def test_rated_route_early_decrement_balance_100_percent(self):
        """Note:
        Since this test case has no SMPPClientManagerPB set, the message will not be sent
        to the routed connector; the user will only be charged early (on submit_sm).
        The complete test (with charging on submit_sm_resp) is done in
        test_router_smpps.BillRequestSubmitSmRespCallbackingTestCases
        """
        user = self.routerpb_factory.getUser('u1')
        user.mt_credential.setQuota('balance', 10.0)
        user.mt_credential.setQuota('early_decrement_balance_percent', 100)
        default_route = self.routerpb_factory.getMTRoutingTable().getAll()[0][0]
        default_route.rate = 2.0

        # Connect and bind
        yield self.smppc_factory.connectAndBind()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)

        # SMPPClient > SMPPServer
        yield self.smppc_factory.lastProto.sendDataRequest(self.SubmitSmPDU)

        # Unbind & Disconnect
        yield self.smppc_factory.smpp.unbindAndDisconnect()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)

        # Assert quotas after SMS is sent
        self.assertEqual(user.mt_credential.getQuota('balance'), 8.0)

    @defer.inlineCallbacks
    def test_throughput_limit_rejection(self):
        user = self.routerpb_factory.getUser('u1')
        user.mt_credential.setQuota('smpps_throughput', 2)

        # Connect and bind
        yield self.smppc_factory.connectAndBind()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.BOUND_TRX)

        # SMPPClient > SMPPServer
        # Send a bunch of MT messages:
        # we should receive an ESME_ROK on success and an ESME_RTHROTTLED when the
        # throughput quota is exceeded
        start_time = datetime.now()
        throughput_exceeded_errors = 0
        request_counter = 0
        for x in range(5000):
            responsePDU = yield self.smppc_factory.lastProto.sendDataRequest(self.SubmitSmPDU)
            request_counter += 1
            if str(responsePDU.response.status) == 'ESME_RTHROTTLED':
                throughput_exceeded_errors += 1
        end_time = datetime.now()

        # Unbind & Disconnect
        yield self.smppc_factory.smpp.unbindAndDisconnect()
        self.assertEqual(self.smppc_factory.smpp.sessionState, SMPPSessionStates.UNBOUND)

        # Asserts (tolerance of -/+ 3 messages)
        # 'throughput' here is the minimum delay, in seconds, between two accepted messages
        throughput = 1 / float(user.mt_credential.getQuota('smpps_throughput'))
        dt = end_time - start_time
        max_unsuccessful_requests = request_counter - (dt.seconds / throughput)
        unsuccessful_requests = throughput_exceeded_errors
        self.assertGreaterEqual(unsuccessful_requests, max_unsuccessful_requests - 3)
        self.assertLessEqual(unsuccessful_requests, max_unsuccessful_requests + 3)
| 41.608955 | 135 | 0.805761 | 3,467 | 27,878 | 6.276896 | 0.063167 | 0.082713 | 0.147045 | 0.119474 | 0.939803 | 0.925007 | 0.918114 | 0.917563 | 0.914806 | 0.91214 | 0 | 0.008942 | 0.093371 | 27,878 | 669 | 136 | 41.671151 | 0.852067 | 0.09373 | 0 | 0.806122 | 0 | 0 | 0.034147 | 0.003826 | 0 | 0 | 0 | 0 | 0.298469 | 0 | null | null | 0 | 0.022959 | null | null | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
9200ec7a667b649615da94c10e7b07c233540c87 | 81 | py | Python | algorithms/algos/__init__.py | FHL1998/ME5406_Project2_Dynamic_Obstacle_Grid_FHL | 61e0beb5689d91faf18126ed752733db9beb147f | [
"MIT"
] | 6 | 2021-12-22T02:14:22.000Z | 2022-02-22T08:57:48.000Z | algorithms/algos/__init__.py | FHL1998/ME5406_Project2_Dynamic_Obstacle_Grid_FHL | 61e0beb5689d91faf18126ed752733db9beb147f | [
"MIT"
] | null | null | null | algorithms/algos/__init__.py | FHL1998/ME5406_Project2_Dynamic_Obstacle_Grid_FHL | 61e0beb5689d91faf18126ed752733db9beb147f | [
"MIT"
] | null | null | null | from algorithms.algos.a2c import A2CAlgo
from algorithms.algos.ppo import PPOAlgo | 40.5 | 40 | 0.864198 | 12 | 81 | 5.833333 | 0.666667 | 0.4 | 0.542857 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.027027 | 0.08642 | 81 | 2 | 41 | 40.5 | 0.918919 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
# ---- File: src/solveda.py (repo: ymatsumoto/qualign @ bdb182f, license: MIT) ----
import requests
import json
import functools
import operator


def _solve(api_key, q, offset, options, mode):
    tmp = [[k1, k2] for ((k1, k2), v) in q.items()]
    keys = sorted(set([k for keys in tmp for k in keys]))
    #keys = sorted(set(functools.reduce(operator.add, ((k1,k2) for ((k1,k2),v) in q.items()))))
    k2i = {keys[i]: i for i in range(len(keys))}
    url = "https://api.jp-east-1.digitalannealer.global.fujitsu.com/v1/qubo/solve"
    headers = {'X-DA-Access-Key': api_key,
               'Accept': 'application/json',
               'Content-type': 'application/json'}
    qubo = [{"coefficient": v, "polynomials": (k2i[k1], k2i[k2])}
            for ((k1, k2), v) in q.items()]
    qubo.append({"coefficient": offset})
    m = {'DAPT': "fujitsuDAPT",
         'DA': "fujitsuDA",
         'DAMixed': "fujitsuDAMixedMode"}
    payload = {'binary_polynomial': {'terms': qubo},
               m[mode]: options}
    return requests.post(url, headers=headers, data=json.dumps(payload))


def _solve_v2(api_key, q, offset, options, mode):
    tmp = [[k1, k2] for ((k1, k2), v) in q.items()]
    keys = sorted(set([k for keys in tmp for k in keys]))
    #keys = sorted(set(functools.reduce(operator.add, ((k1,k2) for ((k1,k2),v) in q.items()))))
    k2i = {keys[i]: i for i in range(len(keys))}
    url = "https://api.jp-east-1.digitalannealer.global.fujitsu.com/v2/async/qubo/solve"
    headers = {'X-DA-Access-Key': api_key,
               'Accept': 'application/json',
               'Content-type': 'application/json'}
    qubo = [{"coefficient": v, "polynomials": (k2i[k1], k2i[k2])}
            for ((k1, k2), v) in q.items()]
    qubo.append({"coefficient": offset})
    m = {'DAPT': "fujitsuDA2PT",
         'DA': "fujitsuDA2",
         'DAMixed': "fujitsuDA2MixedMode"}
    payload = {'binary_polynomial': {'terms': qubo},
               m[mode]: options}
    return requests.post(url, headers=headers, data=json.dumps(payload))


def hobo2qubo(api_key, q, offset, options, mode):
    tmp = [[k1, k2] for ((k1, k2), v) in q.items()]
    keys = sorted(set([k for keys in tmp for k in keys]))
    #keys = sorted(set(functools.reduce(operator.add, ((k1,k2) for ((k1,k2),v) in q.items()))))
    k2i = {keys[i]: i for i in range(len(keys))}
    url = "https://api.jp-east-1.digitalannealer.global.fujitsu.com/v2/async/qubo/solve"
    headers = {'X-DA-Access-Key': api_key,
               'Accept': 'application/json',
               'Content-type': 'application/json'}
    qubo = [{"coefficient": v, "polynomials": (k2i[k1], k2i[k2])}
            for ((k1, k2), v) in q.items()]
    qubo.append({"coefficient": offset})
    m = {'DAPT': "fujitsuDA2PT",
         'DA': "fujitsuDA2",
         'DAMixed': "fujitsuDA2MixedMode"}
    payload = {'binary_polynomial': {'terms': qubo},
               m[mode]: options}
    return requests.post(url, headers=headers, data=json.dumps(payload))


def jobs(api_key):
    url = "https://api.jp-east-1.digitalannealer.global.fujitsu.com/v2/async/jobs"
    headers = {'X-DA-Access-Key': api_key,
               'Accept': 'application/json',
               'Content-type': 'application/json'}
    return requests.get(url, headers=headers)


def result(api_key, job):
    job_id = json.loads(job.text)['job_id']
    url = "https://api.jp-east-1.digitalannealer.global.fujitsu.com/v2/async/jobs/result/" + job_id
    headers = {'X-DA-Access-Key': api_key,
               'Accept': 'application/json',
               'Content-type': 'application/json'}
    return requests.get(url, headers=headers)


def to_sol(q, solution):
    keys = sorted(set(functools.reduce(operator.add, ((k1, k2) for ((k1, k2), v) in q.items()))))
    i2k = {i: keys[i] for i in range(len(keys))}
    return {i2k[int(i)]: 1 if v else 0 for (i, v) in solution.items()}


def solve_DAPT(api_key, q, offset,
               number_iterations=100000,
               number_replicas=100,
               offset_increase_rate=1000,
               solution_mode="COMPLETE",
               guidance_config=None):
    options = {'number_iterations': number_iterations,
               'number_replicas': number_replicas,
               'offset_increase_rate': offset_increase_rate,
               'solution_mode': solution_mode,
               'guidance_config': guidance_config}
    options = {k: options[k] for k in options if options[k] is not None}
    return _solve(api_key, q, offset, options, mode="DAPT")


def solve_DA2PT(api_key, q, offset,
                number_iterations=100000,
                number_replicas=100,
                offset_increase_rate=1000,
                solution_mode="COMPLETE",
                guidance_config=None):
    options = {'number_iterations': number_iterations,
               'number_replicas': number_replicas,
               'offset_increase_rate': offset_increase_rate,
               'solution_mode': solution_mode,
               'guidance_config': guidance_config}
    options = {k: options[k] for k in options if options[k] is not None}
    return _solve_v2(api_key, q, offset, options, mode="DAPT")


def solve_DA(api_key,
             q,
             offset,
             expert_mode=False,
             noise_model=None,
             number_iterations=100000,
             number_runs=100,
             offset_increase_rate=None,
             temperature_decay=None,
             temperature_interval=None,
             temperature_mode=None,
             temperature_start=None,
             solution_mode="COMPLETE",
             guidance_config=None):
    options = {'expert_mode': expert_mode,
               'noise_model': noise_model,
               'number_iterations': number_iterations,
               'number_runs': number_runs,
               'offset_increase_rate': offset_increase_rate,
               'temperature_decay': temperature_decay,
               'temperature_interval': temperature_interval,
               'temperature_mode': temperature_mode,
               'temperature_start': temperature_start,
               'solution_mode': solution_mode,
               'guidance_config': guidance_config}
    options = {k: options[k] for k in options if options[k] is not None}
    return _solve(api_key, q, offset, options, mode="DA")


def solve_DA2(api_key,
              q,
              offset,
              expert_mode=False,
              noise_model=None,
              number_iterations=100000,
              number_runs=100,
              offset_increase_rate=None,
              temperature_decay=None,
              temperature_interval=None,
              temperature_mode=None,
              temperature_start=None,
              solution_mode="COMPLETE",
              guidance_config=None):
    options = {'expert_mode': expert_mode,
               'noise_model': noise_model,
               'number_iterations': number_iterations,
               'number_runs': number_runs,
               'offset_increase_rate': offset_increase_rate,
               'temperature_decay': temperature_decay,
               'temperature_interval': temperature_interval,
               'temperature_mode': temperature_mode,
               'temperature_start': temperature_start,
               'solution_mode': solution_mode,
               'guidance_config': guidance_config}
    options = {k: options[k] for k in options if options[k] is not None}
    return _solve_v2(api_key, q, offset, options, mode="DA")


def solve_DAMixed(api_key, q, offset,
                  noise_model="METROPOLIS",
                  number_iterations=100000,
                  number_runs=100,
                  offset_increase_rate=1000,
                  temperature_decay=0.0001,
                  temperature_interval=100,
                  temperature_mode="EXPONENTIAL",
                  temperature_start=1000,
                  solution_mode="COMPLETE",
                  guidance_config=None):
    options = {'noise_model': noise_model,
               'number_iterations': number_iterations,
               'number_runs': number_runs,
               'offset_increase_rate': offset_increase_rate,
               'temperature_decay': temperature_decay,
               'temperature_interval': temperature_interval,
               'temperature_mode': temperature_mode,
               'temperature_start': temperature_start,
               'solution_mode': solution_mode,
               'guidance_config': guidance_config}
    options = {k: options[k] for k in options if options[k] is not None}
    return _solve(api_key, q, offset, options, mode="DAMixed")


def solve_DA2Mixed(api_key, q, offset,
                   noise_model="METROPOLIS",
                   number_iterations=100000,
                   number_runs=100,
                   offset_increase_rate=1000,
                   temperature_decay=0.0001,
                   temperature_interval=100,
                   temperature_mode="EXPONENTIAL",
                   temperature_start=1000,
                   solution_mode="COMPLETE",
                   guidance_config=None):
    options = {'noise_model': noise_model,
               'number_iterations': number_iterations,
               'number_runs': number_runs,
               'offset_increase_rate': offset_increase_rate,
               'temperature_decay': temperature_decay,
               'temperature_interval': temperature_interval,
               'temperature_mode': temperature_mode,
               'temperature_start': temperature_start,
               'solution_mode': solution_mode,
               'guidance_config': guidance_config}
    options = {k: options[k] for k in options if options[k] is not None}
    return _solve_v2(api_key, q, offset, options, mode="DAMixed")
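The core data transformation in this module is the one shared by `_solve`/`_solve_v2` (QUBO dict keyed by variable-name pairs, converted to the Digital Annealer's dense-index `terms` list) and its inverse in `to_sol` (string indices back to variable names). A self-contained sketch of that round trip, with no network call; the sample `q`, `offset`, and `solution` values are made up for illustration:

```python
# A toy QUBO: coefficients keyed by pairs of variable names, plus a constant offset.
q = {('x0', 'x1'): 2.0, ('x0', 'x0'): -1.0, ('x1', 'x1'): -1.0}
offset = 0.5

# Same flattening trick as _solve(): collect every variable name appearing in a
# key pair, sort, and assign each one a dense integer index for the API payload.
keys = sorted({k for pair in q for k in pair})
k2i = {k: i for i, k in enumerate(keys)}

terms = [{"coefficient": v, "polynomials": (k2i[k1], k2i[k2])}
         for (k1, k2), v in q.items()]
terms.append({"coefficient": offset})  # constant term carries no 'polynomials'

# Reverse mapping, as in to_sol(): the solver reports variables by string index.
i2k = {i: k for k, i in k2i.items()}
solution = {'0': True, '1': False}
decoded = {i2k[int(i)]: 1 if v else 0 for i, v in solution.items()}
print(decoded)  # {'x0': 1, 'x1': 0}
```

Note that the sort order of `keys` must be identical on the encode and decode sides, which is why both `_solve` and `to_sol` rebuild it from the same `q`.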
| 42.713656 | 98 | 0.588903 | 1,097 | 9,696 | 5.003646 | 0.105743 | 0.024048 | 0.059027 | 0.035526 | 0.948078 | 0.948078 | 0.948078 | 0.941155 | 0.93988 | 0.931864 | 0 | 0.024765 | 0.287851 | 9,696 | 226 | 99 | 42.902655 | 0.770167 | 0.027847 | 0 | 0.838384 | 0 | 0.025253 | 0.212157 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.060606 | false | 0 | 0.020202 | 0 | 0.141414 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
92091d39659d9ebd137b4d43e0a68bf5b45df77f | 8,358 | py | Python | utils_hdf5.py | billmetangmo/MinIO-HDF5-benchmark | 1ccc849aae8685e460eeb078c313439833f3a8e8 | [
"MIT"
] | null | null | null | utils_hdf5.py | billmetangmo/MinIO-HDF5-benchmark | 1ccc849aae8685e460eeb078c313439833f3a8e8 | [
"MIT"
] | null | null | null | utils_hdf5.py | billmetangmo/MinIO-HDF5-benchmark | 1ccc849aae8685e460eeb078c313439833f3a8e8 | [
"MIT"
] | null | null | null | import tempfile
import time
import os
import random
import threading
import queue  # was "import Queue" (Python 2); the code below uses the Python 3 'queue' module

import h5py


def sequential_batch_rw(data, size):
    with tempfile.TemporaryDirectory(prefix="format", suffix="-tmp") as temp_dir:
        # Time posix sequential (write)
        start = time.time()
        for i in range(0, size):
            path = os.path.join(temp_dir, str(i))
            with open(path, 'wb') as fd:
                fd.write(data)
        end = time.time()
        print("time posix seq write =" + str(end - start) + "s")

        # Time posix sequential (read)
        start = time.time()
        for i in range(0, size):
            path = os.path.join(temp_dir, str(i))
            with open(path, 'rb') as fd:
                fd.read()
        end = time.time()
        print("time posix seq read =" + str(end - start) + "s")

    with tempfile.TemporaryDirectory(prefix="format", suffix="-tmp") as temp_dir:
        f = h5py.File(os.path.join(temp_dir, 'mydataset.hdf5'), 'a')
        grp = f.create_group("tmp")

        # Time hdf5 sequential (write)
        start = time.time()
        for i in range(0, size):
            grp.create_dataset(str(i), data=data)
        end = time.time()
        print("time hdf5 seq write =" + str(end - start) + "s")

        # Time hdf5 sequential (read)
        start = time.time()
        for i in range(0, size):
            data = grp[str(i)]
        end = time.time()
        print("time hdf5 seq read =" + str(end - start) + "s")


def sequential_random_rw(data, size):
    with tempfile.TemporaryDirectory(prefix="format", suffix="-tmp") as temp_dir:
        # Prepare for data read and write (update precedent data)
        for i in range(0, size):
            path = os.path.join(temp_dir, str(i))
            with open(path, 'wb') as fd:
                fd.write(data)
        start = time.time()
        for i in range(0, size):
            choice = random.randint(1, 2)
            path = os.path.join(temp_dir, str(i))
            if choice == 1:
                with open(path, 'rb') as fd:
                    fd.read()
            else:
                with open(path, 'wb') as fd:
                    fd.write(data)
        end = time.time()
        print("time posix seq read/write =" + str(end - start) + "s")

    with tempfile.TemporaryDirectory(prefix="format", suffix="-tmp") as temp_dir:
        # Prepare for data read and write (update precedent data)
        f = h5py.File(os.path.join(temp_dir, 'mydataset.hdf5'), 'a')
        grp = f.create_group("tmp")
        for i in range(0, size):
            grp.create_dataset(str(i), data=data)

        # Time hdf5 sequential (read)
        start = time.time()
        for i in range(0, size):
            choice = random.randint(1, 2)
            if choice == 1:
                data = grp[str(i)]
            else:
                grp.create_dataset("k" + str(i), data=data)
        end = time.time()
        print("time hdf5 seq read/write =" + str(end - start) + "s")


def parallel_batch_rw(data, size, threads):
    with tempfile.TemporaryDirectory(prefix="format", suffix="-tmp") as temp_dir:
        f = h5py.File(os.path.join(temp_dir, 'mydataset.hdf5'), 'a')
        grp = f.create_group("tmp")

        def write_file_from_q():
            while True:
                index = q.get()
                grp.create_dataset(str(index), data=data)
                q.task_done()

        def read_file_from_q():
            while True:
                index = q.get()
                data = grp[str(index)]
                q.task_done()

        # Time hdf5 parallel (write)
        start = time.time()
        q = queue.Queue()
        for i in range(0, threads):
            thread = threading.Thread(target=write_file_from_q)
            thread.setDaemon(True)
            thread.start()
        for i in range(0, size):
            q.put(i)
        q.join()
        end = time.time()
        print("time hdf5 parallel write =" + str(end - start) + "s")

        # Time hdf5 parallel (read)
        start = time.time()
        q = queue.Queue()
        for i in range(0, threads):
            thread = threading.Thread(target=read_file_from_q)
            thread.setDaemon(True)
            thread.start()
        for i in range(0, size):
            q.put(i)
        q.join()
        end = time.time()
        print("time hdf5 parallel read =" + str(end - start) + "s")

    with tempfile.TemporaryDirectory(prefix="format", suffix="-tmp") as temp_dir:
        def write_file(path):
            with open(path, 'wb') as fd:
                fd.write(data)

        def read_file(path):
            with open(path, 'rb') as fd:
                fd.read()

        def write_file_from_q_seq():
            while True:
                index = q.get()
                path = os.path.join(temp_dir, str(index))
                write_file(path)
                q.task_done()

        def read_file_from_q_seq():
            while True:
                index = q.get()
                path = os.path.join(temp_dir, str(index))
                read_file(path)
                q.task_done()

        # Time posix parallel (write)
        start = time.time()
        q = queue.Queue()
        for i in range(0, threads):
            thread = threading.Thread(target=write_file_from_q_seq)
            thread.setDaemon(True)
            thread.start()
        for i in range(0, size):
            q.put(i)
        q.join()
        end = time.time()
        print("time posix parallel write =" + str(end - start) + "s")

        # Time posix parallel (read)
        start = time.time()
        q = queue.Queue()
        for i in range(0, threads):
            thread = threading.Thread(target=read_file_from_q_seq)
            thread.setDaemon(True)
            thread.start()
        for i in range(0, size):
            q.put(i)
        q.join()
        end = time.time()
        print("time posix parallel read =" + str(end - start) + "s")


def parallel_random_rw(data, size, threads):
    with tempfile.TemporaryDirectory(prefix="format", suffix="-tmp") as temp_dir:
        def random_rw_file(path):
            choice = random.randint(1, 2)
            if choice == 1:
                with open(path, 'wb') as fd:
                    fd.write(data)
            else:
                with open(path, 'rb') as fd:
                    fd.read()

        def random_rw_file_from_q_seq():
            while True:
                index = q.get()
                path = os.path.join(temp_dir, str(index))
                random_rw_file(path)
                q.task_done()

        # Prepare for data read and write (update precedent data)
        for i in range(0, size):
            path = os.path.join(temp_dir, str(i))
            with open(path, 'wb') as fd:
                fd.write(data)

        # Time posix parallel (read)
        start = time.time()
        q = queue.Queue()
        for i in range(0, threads):
            thread = threading.Thread(target=random_rw_file_from_q_seq)
            thread.setDaemon(True)
            thread.start()
        for i in range(0, size):
            q.put(i)
        q.join()
        end = time.time()
        print("time posix parallel read/write =" + str(end - start) + "s")

    with tempfile.TemporaryDirectory(prefix="format", suffix="-tmp") as temp_dir:
        # Prepare for data read and write (update precedent data)
        f = h5py.File(os.path.join(temp_dir, 'mydataset.hdf5'), 'a')
        grp = f.create_group("tmp")
        for i in range(0, size):
            grp.create_dataset(str(i), data=data)

        def random_rw_file_from_q():
            while True:
                index = q.get()
                choice = random.randint(1, 2)
                if choice == 1:
                    data2 = grp[str(index)]
                else:
                    grp.create_dataset("k" + str(index), data=data)
                q.task_done()

        # Time hdf5 parallel (write)
        start = time.time()
        q = queue.Queue()
        for i in range(0, threads):
            thread = threading.Thread(target=random_rw_file_from_q)
            thread.setDaemon(True)
            thread.start()
        for i in range(0, size):
            q.put(i)
        q.join()
        end = time.time()
        print("time hdf5 parallel read/write =" + str(end - start) + "s")
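All of the `parallel_*` benchmarks above use the same pattern: daemon worker threads looping on a shared `queue.Queue`, with the producer calling `q.join()` to block until every queued index has been `task_done()`'d. A stdlib-only distillation of that pattern (no h5py; the `parallel_write` helper name is illustrative), assuming Python 3's `queue` module:

```python
import os
import queue
import tempfile
import threading

def parallel_write(data, size, threads, temp_dir):
    q = queue.Queue()

    def worker():
        # Daemon workers loop forever; they die with the process after q.join().
        while True:
            index = q.get()
            with open(os.path.join(temp_dir, str(index)), 'wb') as fd:
                fd.write(data)
            q.task_done()

    for _ in range(threads):
        threading.Thread(target=worker, daemon=True).start()
    for i in range(size):
        q.put(i)
    q.join()  # blocks until every queued index has been processed

with tempfile.TemporaryDirectory() as d:
    parallel_write(b"payload", 20, 4, d)
    written = sorted(os.listdir(d), key=int)
    print(len(written))  # 20
```

Because the workers are daemon threads that never exit their loop, correctness hinges entirely on `task_done()`/`join()` pairing; a missed `task_done()` would make `q.join()` hang forever.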
# ---- File: hfcuda_automate/lib/cudaToolkit91/cusparse/level3.py (repo: IBM/hf @ 5245b5e, license: Apache-2.0) ----
from ...doc import *
cuSPARSE_level3 = [
    # 8.1. cusparse<t>csrmm()
    func_decl(["cusparseScsrmm", "cusparseDcsrmm", "cusparseCcsrmm", "cusparseZcsrmm"],
              [parm_def('handle', PASSBYVALUE, INOUT_IN),
               parm_def('transA', PASSBYVALUE, INOUT_IN),
               parm_def('m', PASSBYVALUE, INOUT_IN),
               parm_def('n', PASSBYVALUE, INOUT_IN),
               parm_def('k', PASSBYVALUE, INOUT_IN),
               parm_def('nnz', PASSBYVALUE, INOUT_IN),
               parm_def('alpha', [MEMORY_HoD_CUSPARSEPOINTERMODE, SCALAR], INOUT_IN),
               parm_def('descrA', PASSBYVALUE, INOUT_IN),
               parm_def('csrSortedValA', MEMORY_DEVICE, INOUT_IN),
               parm_def('csrSortedRowPtrA', MEMORY_DEVICE, INOUT_IN),
               parm_def('csrSortedColIndA', MEMORY_DEVICE, INOUT_IN),
               parm_def('B', MEMORY_DEVICE, INOUT_IN),
               parm_def('ldb', PASSBYVALUE, INOUT_IN),
               parm_def('beta', [MEMORY_HoD_CUSPARSEPOINTERMODE, SCALAR], INOUT_IN),
               parm_def('C', [MEMORY_DEVICE, VECTOR], INOUT_INOUT),
               parm_def('ldc', PASSBYVALUE, INOUT_IN)]),

    # 8.2. cusparse<t>csrmm2()
    func_decl(["cusparseScsrmm2", "cusparseDcsrmm2", "cusparseCcsrmm2", "cusparseZcsrmm2"],
              [parm_def('handle', PASSBYVALUE, INOUT_IN),
               parm_def('transA', PASSBYVALUE, INOUT_IN),
               parm_def('transB', PASSBYVALUE, INOUT_IN),
               parm_def('m', PASSBYVALUE, INOUT_IN),
               parm_def('n', PASSBYVALUE, INOUT_IN),
               parm_def('k', PASSBYVALUE, INOUT_IN),
               parm_def('nnz', PASSBYVALUE, INOUT_IN),
               parm_def('alpha', [MEMORY_HoD_CUSPARSEPOINTERMODE, SCALAR], INOUT_IN),
               parm_def('descrA', PASSBYVALUE, INOUT_IN),
               parm_def('csrSortedValA', MEMORY_DEVICE, INOUT_IN),
               parm_def('csrSortedRowPtrA', MEMORY_DEVICE, INOUT_IN),
               parm_def('csrSortedColIndA', MEMORY_DEVICE, INOUT_IN),
               parm_def('B', MEMORY_DEVICE, INOUT_IN),
               parm_def('ldb', PASSBYVALUE, INOUT_IN),
               parm_def('beta', [MEMORY_HoD_CUSPARSEPOINTERMODE, SCALAR], INOUT_IN),
               parm_def('C', [MEMORY_DEVICE, VECTOR], INOUT_INOUT),
               parm_def('ldc', PASSBYVALUE, INOUT_IN)]),

    # 8.3. cusparse<t>csrsm_analysis()
    func_decl(["cusparseScsrsm_analysis", "cusparseDcsrsm_analysis", "cusparseCcsrsm_analysis", "cusparseZcsrsm_analysis"],
              [parm_def('handle', PASSBYVALUE, INOUT_IN),
               parm_def('transA', PASSBYVALUE, INOUT_IN),
               parm_def('m', PASSBYVALUE, INOUT_IN),
               parm_def('nnz', PASSBYVALUE, INOUT_IN),
               parm_def('descrA', PASSBYVALUE, INOUT_IN),
               parm_def('csrSortedValA', MEMORY_DEVICE, INOUT_IN),
               parm_def('csrSortedRowPtrA', MEMORY_DEVICE, INOUT_IN),
               parm_def('csrSortedColIndA', MEMORY_DEVICE, INOUT_IN),
               parm_def('info', [MEMORY_HOST, SCALAR], INOUT_INOUT)]),

    # 8.4. cusparse<t>csrsm_solve()
    func_decl(["cusparseScsrsm_solve", "cusparseDcsrsm_solve", "cusparseCcsrsm_solve", "cusparseZcsrsm_solve"],
              [parm_def('handle', PASSBYVALUE, INOUT_IN),
               parm_def('transA', PASSBYVALUE, INOUT_IN),
               parm_def('m', PASSBYVALUE, INOUT_IN),
               parm_def('n', PASSBYVALUE, INOUT_IN),
               parm_def('alpha', [MEMORY_HoD_CUSPARSEPOINTERMODE, SCALAR], INOUT_IN),
               parm_def('descrA', PASSBYVALUE, INOUT_IN),
               parm_def('csrSortedValA', MEMORY_DEVICE, INOUT_IN),
               parm_def('csrSortedRowPtrA', MEMORY_DEVICE, INOUT_IN),
               parm_def('csrSortedColIndA', MEMORY_DEVICE, INOUT_IN),
               parm_def('info', PASSBYVALUE, INOUT_IN),
               parm_def('F', MEMORY_DEVICE, INOUT_IN),
               parm_def('ldf', PASSBYVALUE, INOUT_IN),
               parm_def('X', [MEMORY_DEVICE, VECTOR], INOUT_OUT),
               parm_def('ldx', PASSBYVALUE, INOUT_IN)]),

    # 8.5. cusparse<t>csrsm2_bufferSizeExt()
    func_decl(["cusparseScsrsm2_bufferSizeExt", "cusparseDcsrsm2_bufferSizeExt", "cusparseCcsrsm2_bufferSizeExt", "cusparseZcsrsm2_bufferSizeExt"],
              [parm_def('handle', PASSBYVALUE, INOUT_IN),
               parm_def('algo', PASSBYVALUE, INOUT_IN),
               parm_def('transA', PASSBYVALUE, INOUT_IN),
               parm_def('transB', PASSBYVALUE, INOUT_IN),
               parm_def('m', PASSBYVALUE, INOUT_IN),
               parm_def('nrhs', PASSBYVALUE, INOUT_IN),
               parm_def('nnz', PASSBYVALUE, INOUT_IN),
               parm_def('alpha', [MEMORY_HoD_CUSPARSEPOINTERMODE, SCALAR], INOUT_IN),
               parm_def('descrA', PASSBYVALUE, INOUT_IN),
               parm_def('csrSortedValA', MEMORY_DEVICE, INOUT_IN),
               parm_def('csrSortedRowPtrA', MEMORY_DEVICE, INOUT_IN),
               parm_def('csrSortedColIndA', MEMORY_DEVICE, INOUT_IN),
parm_def('B', MEMORY_DEVICE, INOUT_IN ),
parm_def('ldb', PASSBYVALUE, INOUT_IN ),
parm_def('info', [ MEMORY_HOST, SCALAR ], INOUT_INOUT ),
parm_def('policy', PASSBYVALUE, INOUT_IN ),
parm_def('pBufferSize', [ MEMORY_HOST, SCALAR ], INOUT_OUT ) ] ),
# 8.6. cusparse<t>csrsm2_analysis()
func_decl( [ "cusparseScsrsm2_analysis", "cusparseDcsrsm2_analysis", "cusparseCcsrsm2_analysis", "cusparseZcsrsm2_analysis" ],
[ parm_def('handle', PASSBYVALUE, INOUT_IN ),
parm_def('algo', PASSBYVALUE, INOUT_IN ),
parm_def('transA', PASSBYVALUE, INOUT_IN ),
parm_def('transB', PASSBYVALUE, INOUT_IN ),
parm_def('m', PASSBYVALUE, INOUT_IN ),
parm_def('nrhs', PASSBYVALUE, INOUT_IN ),
parm_def('nnz', PASSBYVALUE, INOUT_IN ),
parm_def('alpha', [ MEMORY_HoD_CUSPARSEPOINTERMODE, SCALAR ], INOUT_IN ),
parm_def('descrA', PASSBYVALUE, INOUT_IN ),
parm_def('csrSortedValA', MEMORY_DEVICE, INOUT_IN ),
parm_def('csrSortedRowPtrA', MEMORY_DEVICE, INOUT_IN ),
parm_def('csrSortedColIndA', MEMORY_DEVICE, INOUT_IN ),
parm_def('B', MEMORY_DEVICE, INOUT_IN ),
parm_def('ldb', PASSBYVALUE, INOUT_IN ),
parm_def('info', [ MEMORY_HOST, SCALAR ], INOUT_INOUT ),
parm_def('policy', PASSBYVALUE, INOUT_IN ),
parm_def('pBuffer', MEMORY_DEVICE, INOUT_IN ) ] ),
# 8.7. cusparse<t>csrsm2_solve()
func_decl( [ "cusparseScsrsm2_solve", "cusparseDcsrsm2_solve", "cusparseCcsrsm2_solve", "cusparseZcsrsm2_solve" ],
[ parm_def('handle', PASSBYVALUE, INOUT_IN ),
parm_def('algo', PASSBYVALUE, INOUT_IN ),
parm_def('transA', PASSBYVALUE, INOUT_IN ),
parm_def('transB', PASSBYVALUE, INOUT_IN ),
parm_def('m', PASSBYVALUE, INOUT_IN ),
parm_def('nrhs', PASSBYVALUE, INOUT_IN ),
parm_def('nnz', PASSBYVALUE, INOUT_IN ),
parm_def('alpha', [ MEMORY_HoD_CUSPARSEPOINTERMODE, SCALAR ], INOUT_IN ),
parm_def('descrA', PASSBYVALUE, INOUT_IN ),
parm_def('csrSortedValA', MEMORY_DEVICE, INOUT_IN ),
parm_def('csrSortedRowPtrA', MEMORY_DEVICE, INOUT_IN ),
parm_def('csrSortedColIndA', MEMORY_DEVICE, INOUT_IN ),
parm_def('B', MEMORY_DEVICE, INOUT_OUT ),
parm_def('ldb', PASSBYVALUE, INOUT_IN ),
parm_def('info', PASSBYVALUE, INOUT_IN ),
parm_def('policy', PASSBYVALUE, INOUT_IN ),
parm_def('pBuffer', MEMORY_DEVICE, INOUT_IN ) ] ),
# 8.8. cusparseXcsrsm2_zeroPivot()
func_decl( [ "cusparseXcsrsm2_zeroPivot" ],
[ parm_def('handle', PASSBYVALUE, INOUT_IN ),
parm_def('info', PASSBYVALUE, INOUT_IN ),
parm_def('position', [ MEMORY_HOST, SCALAR ], INOUT_OUT ) ] ),
# 8.9. cusparse<t>bsrmm()
func_decl( [ "cusparseSbsrmm", "cusparseDbsrmm", "cusparseCbsrmm", "cusparseZbsrmm" ],
[ parm_def('handle', PASSBYVALUE, INOUT_IN ),
parm_def('dirA', PASSBYVALUE, INOUT_IN ),
parm_def('transA', PASSBYVALUE, INOUT_IN ),
parm_def('transB', PASSBYVALUE, INOUT_IN ),
parm_def('mb', PASSBYVALUE, INOUT_IN ),
parm_def('n', PASSBYVALUE, INOUT_IN ),
parm_def('kb', PASSBYVALUE, INOUT_IN ),
parm_def('nnzb', PASSBYVALUE, INOUT_IN ),
parm_def('alpha', [ MEMORY_HoD_CUSPARSEPOINTERMODE, SCALAR ], INOUT_IN ),
parm_def('descrA', PASSBYVALUE, INOUT_IN ),
parm_def('bsrSortedValA', MEMORY_DEVICE, INOUT_IN ),
parm_def('bsrSortedRowPtrA', MEMORY_DEVICE, INOUT_IN ),
parm_def('bsrSortedColIndA', MEMORY_DEVICE, INOUT_IN ),
parm_def('blockSize', PASSBYVALUE, INOUT_IN ),
parm_def('B', MEMORY_DEVICE, INOUT_IN ),
parm_def('ldb', PASSBYVALUE, INOUT_IN ),
parm_def('beta', [ MEMORY_HoD_CUSPARSEPOINTERMODE, SCALAR ], INOUT_IN ),
parm_def('C', [ MEMORY_DEVICE, VECTOR], INOUT_INOUT ),
parm_def('ldc', PASSBYVALUE, INOUT_IN ) ] ),
# 8.10. cusparse<t>bsrsm2_bufferSize()
func_decl( [ "cusparseSbsrsm2_bufferSize", "cusparseDbsrsm2_bufferSize", "cusparseCbsrsm2_bufferSize", "cusparseZbsrsm2_bufferSize" ],
[ parm_def('handle', PASSBYVALUE, INOUT_IN ),
parm_def('dirA', PASSBYVALUE, INOUT_IN ),
parm_def('transA', PASSBYVALUE, INOUT_IN ),
parm_def('transXY', PASSBYVALUE, INOUT_IN ),
parm_def('mb', PASSBYVALUE, INOUT_IN ),
parm_def('n', PASSBYVALUE, INOUT_IN ),
parm_def('nnzb', PASSBYVALUE, INOUT_IN ),
parm_def('descrA', PASSBYVALUE, INOUT_IN ),
parm_def('bsrSortedVal', MEMORY_DEVICE, INOUT_IN ),
parm_def('bsrSortedRowPtr', MEMORY_DEVICE, INOUT_IN ),
parm_def('bsrSortedColInd', MEMORY_DEVICE, INOUT_IN ),
parm_def('blockSize', PASSBYVALUE, INOUT_IN ),
parm_def('info', [ MEMORY_HOST, SCALAR ], INOUT_OUT ),
parm_def('pBufferSizeInBytes', [ MEMORY_HOST, SCALAR ], INOUT_OUT ) ] ),
# 8.11. cusparse<t>bsrsm2_analysis()
func_decl( [ "cusparseSbsrsm2_analysis", "cusparseDbsrsm2_analysis", "cusparseCbsrsm2_analysis", "cusparseZbsrsm2_analysis" ],
[ parm_def('handle', PASSBYVALUE, INOUT_IN ),
parm_def('dirA', PASSBYVALUE, INOUT_IN ),
parm_def('transA', PASSBYVALUE, INOUT_IN ),
parm_def('transXY', PASSBYVALUE, INOUT_IN ),
parm_def('mb', PASSBYVALUE, INOUT_IN ),
parm_def('n', PASSBYVALUE, INOUT_IN ),
parm_def('nnzb', PASSBYVALUE, INOUT_IN ),
parm_def('descrA', PASSBYVALUE, INOUT_IN ),
parm_def('bsrSortedVal', MEMORY_DEVICE, INOUT_IN ),
parm_def('bsrSortedRowPtr', MEMORY_DEVICE, INOUT_IN ),
parm_def('bsrSortedColInd', MEMORY_DEVICE, INOUT_IN ),
parm_def('blockSize', PASSBYVALUE, INOUT_IN ),
parm_def('info', [ MEMORY_HOST, SCALAR ], INOUT_INOUT ),
parm_def('policy', PASSBYVALUE, INOUT_IN ),
parm_def('pBuffer', MEMORY_DEVICE, INOUT_IN ) ] ),
# 8.12. cusparse<t>bsrsm2_solve()
func_decl( [ "cusparseSbsrsm2_solve", "cusparseDbsrsm2_solve", "cusparseCbsrsm2_solve", "cusparseZbsrsm2_solve" ],
[ parm_def('handle', PASSBYVALUE, INOUT_IN ),
parm_def('dirA', PASSBYVALUE, INOUT_IN ),
parm_def('transA', PASSBYVALUE, INOUT_IN ),
parm_def('transXY', PASSBYVALUE, INOUT_IN ),
parm_def('mb', PASSBYVALUE, INOUT_IN ),
parm_def('n', PASSBYVALUE, INOUT_IN ),
parm_def('nnzb', PASSBYVALUE, INOUT_IN ),
parm_def('alpha', [ MEMORY_HoD_CUSPARSEPOINTERMODE, SCALAR ], INOUT_IN ),
parm_def('descrA', PASSBYVALUE, INOUT_IN ),
parm_def('bsrSortedVal', MEMORY_DEVICE, INOUT_IN ),
parm_def('bsrSortedRowPtr', MEMORY_DEVICE, INOUT_IN ),
parm_def('bsrSortedColInd', MEMORY_DEVICE, INOUT_IN ),
parm_def('blockSize', PASSBYVALUE, INOUT_IN ),
parm_def('info', PASSBYVALUE, INOUT_IN ),
parm_def('F', MEMORY_DEVICE, INOUT_IN ),
parm_def('ldf', PASSBYVALUE, INOUT_IN ),
parm_def('X', MEMORY_DEVICE, INOUT_OUT ),
parm_def('ldx', PASSBYVALUE, INOUT_IN ),
parm_def('policy', PASSBYVALUE, INOUT_IN ),
parm_def('pBuffer', MEMORY_DEVICE, INOUT_IN ) ] ),
# 8.13. cusparseXbsrsm2_zeroPivot()
func_decl( [ "cusparseXbsrsm2_zeroPivot" ],
[ parm_def('handle', PASSBYVALUE, INOUT_IN ),
parm_def('info', PASSBYVALUE, INOUT_IN ),
parm_def('position', [ MEMORY_HOST, SCALAR ], INOUT_OUT ) ] ),
# 8.14. cusparse<t>gemmi()
func_decl( [ "cusparseSgemmi", "cusparseDgemmi", "cusparseCgemmi", "cusparseZgemmi" ],
[ parm_def('handle', PASSBYVALUE, INOUT_IN ),
parm_def('m', PASSBYVALUE, INOUT_IN ),
parm_def('n', PASSBYVALUE, INOUT_IN ),
parm_def('k', PASSBYVALUE, INOUT_IN ),
parm_def('nnz', PASSBYVALUE, INOUT_IN ),
parm_def('alpha', [ MEMORY_HoD_CUSPARSEPOINTERMODE, SCALAR ], INOUT_IN ),
parm_def('A', MEMORY_DEVICE, INOUT_IN ),
parm_def('lda', PASSBYVALUE, INOUT_IN ),
parm_def('cscValB', MEMORY_DEVICE, INOUT_IN ),
parm_def('cscColPtrB', MEMORY_DEVICE, INOUT_IN ),
parm_def('cscRowIndB', MEMORY_DEVICE, INOUT_IN ),
parm_def('beta', [ MEMORY_HoD_CUSPARSEPOINTERMODE, SCALAR ], INOUT_IN ),
parm_def('C', [ MEMORY_DEVICE, VECTOR], INOUT_INOUT ),
parm_def('ldc', PASSBYVALUE, INOUT_IN ) ] )
]
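The `func_decl`/`parm_def` entries above are plain data describing each cuSPARSE entry point (names of the typed variants, plus per-parameter memory and in/out flags). As a rough illustration of how such a table can be consumed, here is a minimal sketch; the stand-in `parm_def`/`func_decl` helpers and the `MEMORY_*`/`INOUT_*` constants below are assumptions, not the real definitions, which live elsewhere in this file:

```python
from collections import namedtuple

# Stand-ins: the real parm_def/func_decl helpers and the MEMORY_*/INOUT_*
# constants are assumed to be defined elsewhere in this module.
PASSBYVALUE, MEMORY_DEVICE, MEMORY_HOST, SCALAR = "byval", "device", "host", "scalar"
INOUT_IN, INOUT_OUT, INOUT_INOUT = "in", "out", "inout"

Parm = namedtuple("Parm", "name memory inout")
Func = namedtuple("Func", "names parms")

def parm_def(name, memory, inout):
    # Normalize the memory spec to a list; some entries pass a single flag.
    return Parm(name, memory if isinstance(memory, list) else [memory], inout)

def func_decl(names, parms):
    return Func(names, parms)

decl = func_decl(
    ["cusparseXcsrsm2_zeroPivot"],
    [parm_def("handle", PASSBYVALUE, INOUT_IN),
     parm_def("info", PASSBYVALUE, INOUT_IN),
     parm_def("position", [MEMORY_HOST, SCALAR], INOUT_OUT)])

def device_parms(func):
    # Parameters whose data lives in device memory.
    return [p.name for p in func.parms if MEMORY_DEVICE in p.memory]

def out_parms(func):
    # Parameters the call writes to.
    return [p.name for p in func.parms if p.inout in (INOUT_OUT, INOUT_INOUT)]

print(out_parms(decl))  # ['position']
```

Queries like these (which arguments are device pointers, which are outputs) are exactly what a wrapper generator needs to emit marshalling code from the table.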
# =============================================================================
# File: pccm/builder/pybind.py  (repo: FindDefinition/PCCM, license: MIT)
# =============================================================================
from pathlib import Path
from typing import Dict, List, Optional, Union
import ccimport
from ccimport.buildtools.writer import DEFAULT_MSVC_DEP_PREFIX
from pccm.core import Class, CodeFormatter, CodeGenerator, ManualClassGenerator
from pccm.core.buildmeta import BuildMeta
from pccm.middlewares import expose_main, pybind
def build_pybind(cus: List[Class],
out_path: Union[str, Path],
includes: Optional[List[Union[str, Path]]] = None,
libpaths: Optional[List[Union[str, Path]]] = None,
libraries: Optional[List[str]] = None,
compile_options: Optional[List[str]] = None,
link_options: Optional[List[str]] = None,
std="c++14",
disable_hash=True,
load_library=True,
pybind_file_suffix: str = ".cc",
additional_cflags: Optional[Dict[str, List[str]]] = None,
additional_lflags: Optional[Dict[str, List[str]]] = None,
msvc_deps_prefix=DEFAULT_MSVC_DEP_PREFIX,
build_dir: Optional[Union[str, Path]] = None,
namespace_root: Optional[Union[str, Path]] = None,
code_fmt: Optional[CodeFormatter] = None,
out_root: Optional[Union[str, Path]] = None,
suffix_to_compiler: Optional[Dict[str, List[str]]] = None,
disable_pch: bool = False,
disable_anno: bool = False,
objects_folder: Optional[Union[str, Path]] = None,
debug_file_gen: bool = False,
verbose=False):
mod_name = Path(out_path).stem
if build_dir is None:
build_dir = Path(out_path).parent / "build"
if includes is None:
includes = []
if libpaths is None:
libpaths = []
if libraries is None:
libraries = []
if additional_cflags is None:
additional_cflags = {}
if additional_lflags is None:
additional_lflags = {}
if out_root is None:
out_root = build_dir
build_dir = Path(build_dir)
build_dir.mkdir(exist_ok=True, parents=True, mode=0o755)
pb = pybind.Pybind11SplitImpl(mod_name, mod_name, pybind_file_suffix)
cg = CodeGenerator([pb], verbose=verbose)
user_cus = cg.build_graph(cus, namespace_root)
HEADER_ROOT = build_dir / "include"
SRC_ROOT = build_dir / "src"
    # Build the graph for the middleware's own code units (run_middleware=False),
    # since the middleware must not be applied a second time.
cg.build_graph(pb.get_code_units(), namespace_root, run_middleware=False)
header_dict, impl_dict, header_to_impls = cg.code_generation(user_cus)
pch_to_sources = {} # type: Dict[Path, List[Path]]
pch_to_include = {} # type: Dict[Path, str]
if not disable_pch:
for header, impls in header_to_impls.items():
pch_to_sources[HEADER_ROOT /
header] = [SRC_ROOT / p for p in impls]
pch_to_include[HEADER_ROOT / header] = header
includes.append(HEADER_ROOT)
extern_build_meta = BuildMeta(includes, libpaths, libraries,
additional_cflags, additional_lflags)
for cu in user_cus:
extern_build_meta += cu.build_meta
for cu in pb.get_code_units():
extern_build_meta += cu.build_meta
if debug_file_gen:
print("------------PCCM Headers-----------")
        for k, v in header_dict.items():
print(k)
print(v.to_string())
print("------------PCCM Impls-----------")
        for k, v in impl_dict.items():
print(k)
print(v.to_string())
cg.code_written(HEADER_ROOT, header_dict, code_fmt)
paths = cg.code_written(SRC_ROOT, impl_dict, code_fmt)
header_dict, impl_dict, header_to_impls = cg.code_generation(
pb.get_code_units())
if debug_file_gen:
print("------------PCCM Pybind Headers-----------")
        for k, v in header_dict.items():
print(k)
print(v.to_string())
print("------------PCCM Pybind Impls-----------")
        for k, v in impl_dict.items():
print(k)
print(v.to_string())
cg.code_written(HEADER_ROOT, header_dict, code_fmt)
paths += cg.code_written(SRC_ROOT, impl_dict, code_fmt)
if not disable_anno:
pyi = pb.generate_python_interface()
for k, v in pyi.items():
k_path = k.replace(".", "/") + ".pyi"
k_path_parts = k.split(".")[:-1]
pyi_path = Path(out_path) / k_path
pyi_path.parent.mkdir(exist_ok=True, parents=True, mode=0o755)
mk_init = Path(out_path)
init_path = (mk_init / "__init__.pyi")
if not init_path.exists():
with init_path.open("w") as f:
f.write("")
for part in k_path_parts:
init_path = (mk_init / part / "__init__.pyi")
if not init_path.exists():
with init_path.open("w") as f:
f.write("")
mk_init = mk_init / part
with pyi_path.open("w") as f:
f.write(v)
return ccimport.ccimport(
paths,
out_path,
extern_build_meta.includes,
extern_build_meta.libpaths,
extern_build_meta.libraries,
compile_options,
link_options,
std=std,
source_paths_for_hash=None,
disable_hash=disable_hash,
load_library=load_library,
additional_cflags=extern_build_meta.compiler_to_cflags,
additional_lflags=extern_build_meta.compiler_to_ldflags,
msvc_deps_prefix=msvc_deps_prefix,
build_dir=build_dir,
out_root=out_root,
pch_to_sources=pch_to_sources,
pch_to_include=pch_to_include,
suffix_to_compiler=suffix_to_compiler,
verbose=verbose,
objects_folder=objects_folder)
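Both builders construct `pch_to_sources`/`pch_to_include` by re-rooting the `header_to_impls` map returned from code generation under the generated `include/` and `src/` trees. A standalone sketch of that re-rooting (the helper name and the example mapping are made up for illustration):

```python
from pathlib import Path

def build_pch_maps(header_to_impls, header_root, src_root):
    """Re-root a header -> impl-files mapping under the build trees."""
    pch_to_sources = {}  # absolute header path -> absolute impl source paths
    pch_to_include = {}  # absolute header path -> include-relative header path
    for header, impls in header_to_impls.items():
        pch_to_sources[header_root / header] = [src_root / p for p in impls]
        pch_to_include[header_root / header] = header
    return pch_to_sources, pch_to_include

# Hypothetical mapping; real values come from cg.code_generation().
srcs, incs = build_pch_maps(
    {"pkg/a.h": ["pkg/a.cc", "pkg/a_extra.cc"]},
    Path("build/include"), Path("build/src"))
print(incs[Path("build/include/pkg/a.h")])  # pkg/a.h
```

The include-relative path is kept alongside the absolute one because the compiler needs the former for `#include` injection and the latter to locate the precompiled header on disk.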
def build_library(cus: List[Class],
out_path: Union[str, Path],
middlewares: Optional[List[ManualClassGenerator]] = None,
includes: Optional[List[Union[str, Path]]] = None,
libpaths: Optional[List[Union[str, Path]]] = None,
libraries: Optional[List[str]] = None,
compile_options: Optional[List[str]] = None,
link_options: Optional[List[str]] = None,
std="c++14",
disable_hash=True,
shared: bool = True,
main_file_suffix: str = ".cc",
additional_cflags: Optional[Dict[str, List[str]]] = None,
additional_lflags: Optional[Dict[str, List[str]]] = None,
msvc_deps_prefix=DEFAULT_MSVC_DEP_PREFIX,
build_dir: Optional[Union[str, Path]] = None,
namespace_root: Optional[Union[str, Path]] = None,
code_fmt: Optional[CodeFormatter] = None,
out_root: Optional[Union[str, Path]] = None,
suffix_to_compiler: Optional[Dict[str, List[str]]] = None,
disable_pch: bool = False,
objects_folder: Optional[Union[str, Path]] = None,
verbose=False):
subnamespace = Path(out_path).stem
if build_dir is None:
build_dir = Path(out_path).parent / "build"
if includes is None:
includes = []
if libpaths is None:
libpaths = []
if libraries is None:
libraries = []
if additional_cflags is None:
additional_cflags = {}
if additional_lflags is None:
additional_lflags = {}
if out_root is None:
out_root = build_dir
build_dir = Path(build_dir)
build_dir.mkdir(exist_ok=True, parents=True, mode=0o755)
em = expose_main.ExposeMain(subnamespace, main_file_suffix)
cg = CodeGenerator([em], verbose=verbose)
user_cus = cg.build_graph(cus, namespace_root)
cg.build_graph(em.get_code_units(), namespace_root, run_middleware=False)
HEADER_ROOT = build_dir / "include"
SRC_ROOT = build_dir / "src"
    # The middleware's code units were added to the graph above with
    # run_middleware=False, since the middleware must not be applied a second time.
header_dict, impl_dict, header_to_impls = cg.code_generation(user_cus)
pch_to_sources = {} # type: Dict[Path, List[Path]]
pch_to_include = {} # type: Dict[Path, str]
if not disable_pch:
for header, impls in header_to_impls.items():
pch_to_sources[HEADER_ROOT /
header] = [SRC_ROOT / p for p in impls]
pch_to_include[HEADER_ROOT / header] = header
includes.append(HEADER_ROOT)
extern_build_meta = BuildMeta(includes, libpaths, libraries,
additional_cflags, additional_lflags)
for cu in user_cus:
extern_build_meta += cu.build_meta
cg.code_written(HEADER_ROOT, header_dict, code_fmt)
paths = cg.code_written(SRC_ROOT, impl_dict, code_fmt)
em_cus = em.get_code_units()
if em_cus:
header_dict, impl_dict, header_to_impls = cg.code_generation(em_cus)
cg.code_written(HEADER_ROOT, header_dict, code_fmt)
paths += cg.code_written(SRC_ROOT, impl_dict, code_fmt)
return ccimport.ccimport(
paths,
out_path,
extern_build_meta.includes,
extern_build_meta.libpaths,
extern_build_meta.libraries,
compile_options,
link_options,
std=std,
load_library=False,
source_paths_for_hash=None,
disable_hash=disable_hash,
additional_cflags=extern_build_meta.compiler_to_cflags,
additional_lflags=extern_build_meta.compiler_to_ldflags,
msvc_deps_prefix=msvc_deps_prefix,
build_dir=build_dir,
build_ctype=True,
shared=shared,
out_root=out_root,
pch_to_sources=pch_to_sources,
pch_to_include=pch_to_include,
suffix_to_compiler=suffix_to_compiler,
verbose=verbose,
objects_folder=objects_folder)
# =============================================================================
# File: common/db/models/__init__.py  (repo: nmfzone/django-modern-boilerplate, license: MIT)
# =============================================================================
from common.db.models.fields import *
from common.db.models.fields import __all__ as fields_all
__all__ = fields_all
# =============================================================================
# File: Python/FunctionCallers.py  (repo: ibrahimadlani/ProjectM3202c, license: MIT)
# =============================================================================
#!/usr/bin/env python3
import Functions as Functions
import Constants as Constants
def callMalthus(number_of_individuals:float, step:float):
result = Functions.malthus(number_of_individuals)
if number_of_individuals + step * result < 0:
return (0 - number_of_individuals) / step
else:
return result
def callVerhulst(number_of_individuals:float, step:float):
result = Functions.verhulst(number_of_individuals)
if number_of_individuals + step * result < 0:
return (0 - number_of_individuals) / step
elif number_of_individuals + step * result > Constants.ENVIRONMENTAL_CAPACITY:
return (Constants.ENVIRONMENTAL_CAPACITY - number_of_individuals) / step
else:
return result
def callLotkaVolterraPrey(number_of_preys:float, number_of_predators:float, step:float):
result = Functions.lotkaVolterraPrey(number_of_preys, number_of_predators)
if number_of_preys + step * result < 0:
return (0 - number_of_preys) / step
else:
return result
def callLotkaVolterraPredator(number_of_preys:float, number_of_predators:float, step:float):
result = Functions.lotkaVolterraPredator(number_of_preys, number_of_predators)
if number_of_predators + step * result < 0:
        return (0 - number_of_predators) / step
else:
return result
def callLotkaVolterraVerhulstPrey(number_of_preys:float, number_of_predators:float, step:float):
result = Functions.lotkaVolterraVerhulstPrey(number_of_preys, number_of_predators)
if number_of_preys + step * result < 0:
return (0 - number_of_preys) / step
elif number_of_preys + step * result > Constants.ENVIRONMENTAL_CAPACITY:
return (Constants.ENVIRONMENTAL_CAPACITY - number_of_preys) / step
else:
return result
def callLotkaVolterraVerhulstPredator(number_of_preys:float, number_of_predators:float, step:float):
result = Functions.lotkaVolterraVerhulstPredator(number_of_preys, number_of_predators)
if number_of_predators + step * result < 0:
        return (0 - number_of_predators) / step
else:
return result
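Every caller above applies the same guard: if the forward-Euler update `x + step * f(x)` would leave the admissible range, the returned rate is replaced by `(bound - x) / step` so the next state lands exactly on the bound. A generic helper expressing that pattern (hypothetical; the module deliberately keeps the checks inline per model) could look like:

```python
def clamp_rate(x, rate, step, lower=0.0, upper=None):
    """Clamp a derivative so that x + step * rate stays in [lower, upper]."""
    nxt = x + step * rate
    if nxt < lower:
        # Rate that moves x exactly onto the lower bound in one step.
        return (lower - x) / step
    if upper is not None and nxt > upper:
        # Rate that moves x exactly onto the upper bound in one step.
        return (upper - x) / step
    return rate

# With x = 10, rate = -300, step = 0.1 the raw update would reach -20;
# the clamped rate -100 lands exactly on the lower bound 0.
print(clamp_rate(10.0, -300.0, 0.1))  # -100.0
```

The `upper` branch corresponds to the `Constants.ENVIRONMENTAL_CAPACITY` check used by the Verhulst-style callers, while the `lower` branch is the non-negativity check shared by all of them.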
# def callGausePrey(number_of_preys:float, number_of_predators:float, step:float):
# result = Functions.gausePrey(number_of_preys, number_of_predators)
# if number_of_preys + step * result < 0:
# return (0 - number_of_preys) / step
# elif number_of_preys + step * result > Constants.ENVIRONMENTAL_CAPACITY:
# return (Constants.ENVIRONMENTAL_CAPACITY - number_of_preys) / step
# else:
# return result
# def callHollingIIPrey(number_of_preys:float, number_of_predators:float, step:float):
# result = Functions.hollingIIPrey(number_of_preys, number_of_predators)
# if number_of_preys + step * result < 0:
# return (0 - number_of_preys) / step
# elif number_of_preys + step * result > Constants.ENVIRONMENTAL_CAPACITY:
# return (Constants.ENVIRONMENTAL_CAPACITY - number_of_preys) / step
# else:
# return result
# def callHollingIIPredator(number_of_preys:float, number_of_predators:float, step:float):
# result = Functions.hollingIIPredator(number_of_preys, number_of_predators)
# if number_of_preys + step * result < 0:
# return (0 - number_of_preys) / step
# else:
#         return result

# =============================================================================
# File: models/recall/ncf/net.py  (repo: ziyoujiyi/PaddleRec, license: Apache-2.0)
# =============================================================================
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
import numpy as np
import math
class NCF_NeuMF_Layer(nn.Layer):
def __init__(self, num_users, num_items, mf_dim, layers):
super(NCF_NeuMF_Layer, self).__init__()
self.num_users = num_users
self.num_items = num_items
self.mf_dim = mf_dim
self.layers = layers
self.MF_Embedding_User = paddle.nn.Embedding(
self.num_users,
self.mf_dim,
sparse=False,
weight_attr=paddle.ParamAttr(
initializer=nn.initializer.Normal(
mean=0.0, std=0.01),
regularizer=paddle.regularizer.L2Decay(coeff=0)))
self.MF_Embedding_Item = paddle.nn.Embedding(
self.num_items,
self.mf_dim,
sparse=False,
weight_attr=paddle.ParamAttr(
initializer=nn.initializer.Normal(
mean=0.0, std=0.01),
regularizer=paddle.regularizer.L2Decay(coeff=0)))
self.MLP_Embedding_User = paddle.nn.Embedding(
self.num_users,
int(self.layers[0] / 2),
sparse=False,
weight_attr=paddle.ParamAttr(
initializer=nn.initializer.Normal(
mean=0.0, std=0.01),
regularizer=paddle.regularizer.L2Decay(coeff=0)))
self.MLP_Embedding_Item = paddle.nn.Embedding(
self.num_items,
int(self.layers[0] / 2),
sparse=False,
weight_attr=paddle.ParamAttr(
initializer=nn.initializer.Normal(
mean=0.0, std=0.01),
regularizer=paddle.regularizer.L2Decay(coeff=0)))
num_layer = len(self.layers)
self.MLP_fc = []
for i in range(1, num_layer):
Linear = paddle.nn.Linear(
in_features=self.layers[i - 1],
out_features=self.layers[i],
weight_attr=paddle.ParamAttr(
initializer=nn.initializer.TruncatedNormal(
mean=0.0, std=1.0 / math.sqrt(self.layers[i - 1])),
regularizer=paddle.regularizer.L2Decay(coeff=0)),
name='layer_' + str(i))
self.add_sublayer('layer_%d' % i, Linear)
self.MLP_fc.append(Linear)
act = paddle.nn.ReLU()
self.add_sublayer('act_%d' % i, act)
self.MLP_fc.append(act)
self.prediction = paddle.nn.Linear(
in_features=self.layers[2],
out_features=1,
weight_attr=nn.initializer.KaimingUniform(fan_in=self.layers[2] *
2),
name='prediction')
self.sigmoid = paddle.nn.Sigmoid()
def forward(self, input_data):
user_input = input_data[0]
item_input = input_data[1]
label = input_data[2]
# MF part
user_embedding_mf = self.MF_Embedding_User(user_input)
mf_user_latent = paddle.flatten(
x=user_embedding_mf, start_axis=1, stop_axis=2)
item_embedding_mf = self.MF_Embedding_Item(item_input)
mf_item_latent = paddle.flatten(
x=item_embedding_mf, start_axis=1, stop_axis=2)
mf_vector = paddle.multiply(mf_user_latent, mf_item_latent)
# MLP part
# The 0-th layer is the concatenation of embedding layers
user_embedding_mlp = self.MLP_Embedding_User(user_input)
mlp_user_latent = paddle.flatten(
x=user_embedding_mlp, start_axis=1, stop_axis=2)
item_embedding_mlp = self.MLP_Embedding_Item(item_input)
mlp_item_latent = paddle.flatten(
x=item_embedding_mlp, start_axis=1, stop_axis=2)
mlp_vector = paddle.concat(
x=[mlp_user_latent, mlp_item_latent], axis=-1)
for n_layer in self.MLP_fc:
mlp_vector = n_layer(mlp_vector)
# Concatenate MF and MLP parts
predict_vector = paddle.concat(x=[mf_vector, mlp_vector], axis=-1)
# Final prediction layer
prediction = self.prediction(predict_vector)
prediction = self.sigmoid(prediction)
return prediction
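The NeuMF forward pass above fuses two branches: a GMF branch that takes the element-wise product of user and item MF embeddings, and an MLP branch that concatenates a second pair of embeddings and passes them through ReLU layers. The two branch outputs are concatenated and fed to a final sigmoid-activated linear layer. Reduced to plain Python on toy vectors (all shapes and values below are made up, and the MLP stack is omitted; this is a data-flow sketch, not the Paddle computation):

```python
import math

def neumf_score(mf_u, mf_i, mlp_u, mlp_i, w, b):
    # GMF branch: element-wise product of user/item latent factors.
    mf_vec = [u * i for u, i in zip(mf_u, mf_i)]
    # MLP branch: concatenation of the second embedding pair (the real model
    # then applies several Linear+ReLU layers, skipped here).
    mlp_vec = mlp_u + mlp_i
    # Final prediction: one linear layer over both branches, then sigmoid.
    fused = mf_vec + mlp_vec
    logit = sum(x * wi for x, wi in zip(fused, w)) + b
    return 1.0 / (1.0 + math.exp(-logit))

score = neumf_score([0.5, -0.2], [0.1, 0.4], [0.3], [0.2],
                    w=[1.0, 1.0, 1.0, 1.0], b=0.0)
print(round(score, 4))  # 0.6154
```

Keeping separate embedding tables for the two branches (as `MF_Embedding_*` vs `MLP_Embedding_*` above) lets each branch learn latent factors suited to its own interaction function.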
class NCF_GMF_Layer(nn.Layer):
def __init__(self, num_users, num_items, mf_dim, layers):
super(NCF_GMF_Layer, self).__init__()
self.num_users = num_users
self.num_items = num_items
self.mf_dim = mf_dim
self.layers = layers
self.MF_Embedding_User = paddle.nn.Embedding(
self.num_users,
self.mf_dim,
sparse=True,
weight_attr=nn.initializer.Normal(
mean=0.0, std=0.01))
self.MF_Embedding_Item = paddle.nn.Embedding(
self.num_items,
self.mf_dim,
sparse=True,
weight_attr=nn.initializer.Normal(
mean=0.0, std=0.01))
self.prediction = paddle.nn.Linear(
in_features=self.layers[3],
out_features=1,
weight_attr=nn.initializer.KaimingUniform(fan_in=None),
name='prediction')
self.sigmoid = paddle.nn.Sigmoid()
def forward(self, input_data):
user_input = input_data[0]
item_input = input_data[1]
label = input_data[2]
user_embedding_mf = self.MF_Embedding_User(user_input)
mf_user_latent = paddle.flatten(
x=user_embedding_mf, start_axis=1, stop_axis=2)
item_embedding_mf = self.MF_Embedding_Item(item_input)
mf_item_latent = paddle.flatten(
x=item_embedding_mf, start_axis=1, stop_axis=2)
mf_vector = paddle.multiply(mf_user_latent, mf_item_latent)
prediction = self.prediction(mf_vector)
prediction = self.sigmoid(prediction)
return prediction
class NCF_MLP_Layer(nn.Layer):
def __init__(self, num_users, num_items, mf_dim, layers):
super(NCF_MLP_Layer, self).__init__()
self.num_users = num_users
self.num_items = num_items
self.mf_dim = mf_dim
self.layers = layers
self.MLP_Embedding_User = paddle.nn.Embedding(
self.num_users,
int(self.layers[0] / 2),
sparse=True,
weight_attr=nn.initializer.Normal(
mean=0.0, std=0.01))
self.MLP_Embedding_Item = paddle.nn.Embedding(
self.num_items,
int(self.layers[0] / 2),
sparse=True,
weight_attr=nn.initializer.Normal(
mean=0.0, std=0.01))
num_layer = len(self.layers)
self.MLP_fc = []
for i in range(1, num_layer):
Linear = paddle.nn.Linear(
in_features=self.layers[i - 1],
out_features=self.layers[i],
weight_attr=paddle.ParamAttr(
initializer=nn.initializer.TruncatedNormal(
mean=0.0, std=1.0 / math.sqrt(self.layers[i - 1]))),
name='layer_' + str(i))
self.add_sublayer('layer_%d' % i, Linear)
self.MLP_fc.append(Linear)
act = paddle.nn.ReLU()
self.add_sublayer('act_%d' % i, act)
self.MLP_fc.append(act)
self.prediction = paddle.nn.Linear(
in_features=self.layers[3],
out_features=1,
weight_attr=nn.initializer.KaimingUniform(fan_in=self.layers[3] *
2),
name='prediction')
self.sigmoid = paddle.nn.Sigmoid()
def forward(self, input_data):
user_input = input_data[0]
item_input = input_data[1]
label = input_data[2]
user_embedding_mlp = self.MLP_Embedding_User(user_input)
mlp_user_latent = paddle.flatten(
x=user_embedding_mlp, start_axis=1, stop_axis=2)
item_embedding_mlp = self.MLP_Embedding_Item(item_input)
mlp_item_latent = paddle.flatten(
x=item_embedding_mlp, start_axis=1, stop_axis=2)
mlp_vector = paddle.concat(
x=[mlp_user_latent, mlp_item_latent], axis=-1)
for n_layer in self.MLP_fc:
mlp_vector = n_layer(mlp_vector)
prediction = self.prediction(mlp_vector)
prediction = self.sigmoid(prediction)
return prediction
# =============================================================================
# File: dnacentersdk/api/v2_2_2_3/devices.py  (repo: oboehmer/dnacentersdk, license: MIT)
# =============================================================================
# -*- coding: utf-8 -*-
"""Cisco DNA Center Devices API wrapper.
Copyright (c) 2019-2021 Cisco Systems.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from builtins import *
from past.builtins import basestring
from ...restsession import RestSession
from ...utils import (
check_type,
dict_from_items_with_values,
apply_path_params,
dict_of_str,
)
class Devices(object):
"""Cisco DNA Center Devices API (version: 2.2.2.3).
Wraps the DNA Center Devices
API and exposes the API as native Python
methods that return native Python objects.
"""
def __init__(self, session, object_factory, request_validator):
"""Initialize a new Devices
object with the provided RestSession.
Args:
session(RestSession): The RESTful session object to be used for
API calls to the DNA Center service.
Raises:
TypeError: If the parameter types are incorrect.
"""
check_type(session, RestSession)
super(Devices, self).__init__()
self._session = session
self._object_factory = object_factory
self._request_validator = request_validator
    def get_device_detail(self,
                          identifier,
                          search_by,
                          timestamp=None,
                          headers=None,
                          **request_parameters):
        """Returns detailed Network Device information retrieved by Mac Address, Device Name or UUID for any given point of
        time. .

        Args:
            timestamp(basestring): timestamp query parameter. Epoch time(in milliseconds) when the device data is
                required .
            search_by(basestring): searchBy query parameter. MAC Address or Device Name value or UUID of the network
                device .
            identifier(basestring): identifier query parameter. One of keywords : macAddress or uuid or nwDeviceName
                .
            headers(dict): Dictionary of HTTP Headers to send with the Request
                .
            **request_parameters: Additional request parameters (provides
                support for parameters that may be added in the future).

        Returns:
            MyDict: JSON response. Access the object's properties by using
            the dot notation or the bracket notation.

        Raises:
            TypeError: If the parameter types are incorrect.
            MalformedRequest: If the request body created is invalid.
            ApiError: If the DNA Center cloud returns an error.
        """
        check_type(headers, dict)
        check_type(timestamp, basestring)
        check_type(search_by, basestring,
                   may_be_none=False)
        check_type(identifier, basestring,
                   may_be_none=False)
        if headers is not None:
            if 'X-Auth-Token' in headers:
                check_type(headers.get('X-Auth-Token'),
                           basestring, may_be_none=False)

        _params = {
            'timestamp':
                timestamp,
            'searchBy':
                search_by,
            'identifier':
                identifier,
        }

        if _params['timestamp'] is None:
            _params['timestamp'] = ''
        _params.update(request_parameters)
        _params = dict_from_items_with_values(_params)

        path_params = {
        }

        with_custom_headers = False
        _headers = self._session.headers or {}
        if headers:
            _headers.update(dict_of_str(headers))
            with_custom_headers = True

        e_url = ('/dna/intent/api/v1/device-detail')
        endpoint_full_url = apply_path_params(e_url, path_params)
        if with_custom_headers:
            json_data = self._session.get(endpoint_full_url, params=_params,
                                          headers=_headers)
        else:
            json_data = self._session.get(endpoint_full_url, params=_params)

        return self._object_factory('bpm_c9ee787eb5a0391309f45ddf392ca_v2_2_2_3', json_data)
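Every wrapper method in this class assembles its query string the same way: optional arguments default to `None`, and `dict_from_items_with_values()` filters the dict so absent parameters never reach the wire. A minimal local stand-in for that filtering step (a hypothetical reimplementation for illustration, assuming the real helper drops only valueless entries) shows why `get_device_detail` first coerces a `None` timestamp to `''`, so the key survives filtering:

```python
def params_with_values(raw):
    # Keep a query parameter only when the caller actually supplied a value.
    # Stand-in for dnacentersdk's dict_from_items_with_values helper.
    return {k: v for k, v in raw.items() if v is not None}


# Mirrors get_device_detail's assembly: timestamp is optional.
raw = {'timestamp': None, 'searchBy': 'my-switch', 'identifier': 'nwDeviceName'}
if raw['timestamp'] is None:
    raw['timestamp'] = ''  # force the key to survive the filter
query = params_with_values(raw)
```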
    def get_device_enrichment_details(self,
                                      headers=None,
                                      **request_parameters):
        """Enriches a given network device context (device id or device Mac Address or device management IP address) with
        details about the device and neighbor topology .

        Args:
            headers(dict): Dictionary of HTTP Headers to send with the Request
                .
            **request_parameters: Additional request parameters (provides
                support for parameters that may be added in the future).

        Returns:
            list: JSON response. A list of MyDict objects.
            Access the object's properties by using the dot notation
            or the bracket notation.

        Raises:
            TypeError: If the parameter types are incorrect.
            MalformedRequest: If the request body created is invalid.
            ApiError: If the DNA Center cloud returns an error.
        """
        check_type(headers, dict)
        if headers is not None:
            if 'entity_type' in headers:
                check_type(headers.get('entity_type'),
                           basestring, may_be_none=False)
            if 'entity_value' in headers:
                check_type(headers.get('entity_value'),
                           basestring, may_be_none=False)
            if 'X-Auth-Token' in headers:
                check_type(headers.get('X-Auth-Token'),
                           basestring, may_be_none=False)

        _params = {
        }
        _params.update(request_parameters)
        _params = dict_from_items_with_values(_params)

        path_params = {
        }

        with_custom_headers = False
        _headers = self._session.headers or {}
        if headers:
            _headers.update(dict_of_str(headers))
            with_custom_headers = True

        e_url = ('/dna/intent/api/v1/device-enrichment-details')
        endpoint_full_url = apply_path_params(e_url, path_params)
        if with_custom_headers:
            json_data = self._session.get(endpoint_full_url, params=_params,
                                          headers=_headers)
        else:
            json_data = self._session.get(endpoint_full_url, params=_params)

        return self._object_factory('bpm_a20c25e0fa518bb186fd7747450ef6_v2_2_2_3', json_data)
    def devices(self,
                device_role=None,
                end_time=None,
                health=None,
                limit=None,
                offset=None,
                site_id=None,
                start_time=None,
                headers=None,
                **request_parameters):
        """Intent API for accessing DNA Assurance Device object for generating reports, creating dashboards or creating
        additional value added services. .

        Args:
            device_role(basestring): deviceRole query parameter. The device role (One of CORE, ACCESS, DISTRIBUTION,
                ROUTER, WLC, AP) .
            site_id(basestring): siteId query parameter. Assurance site UUID value .
            health(basestring): health query parameter. The device overall health (One of POOR, FAIR, GOOD) .
            start_time(int): startTime query parameter. UTC epoch time in milliseconds .
            end_time(int): endTime query parameter. UTC epoch time in milliseconds .
            limit(int): limit query parameter. Max number of device entries in the response (default to 50. Max at
                1000) .
            offset(int): offset query parameter. The offset of the first device in the returned data .
            headers(dict): Dictionary of HTTP Headers to send with the Request
                .
            **request_parameters: Additional request parameters (provides
                support for parameters that may be added in the future).

        Returns:
            MyDict: JSON response. Access the object's properties by using
            the dot notation or the bracket notation.

        Raises:
            TypeError: If the parameter types are incorrect.
            MalformedRequest: If the request body created is invalid.
            ApiError: If the DNA Center cloud returns an error.
        """
        check_type(headers, dict)
        check_type(device_role, basestring)
        check_type(site_id, basestring)
        check_type(health, basestring)
        check_type(start_time, int)
        check_type(end_time, int)
        check_type(limit, int)
        check_type(offset, int)
        if headers is not None:
            if 'X-Auth-Token' in headers:
                check_type(headers.get('X-Auth-Token'),
                           basestring, may_be_none=False)

        _params = {
            'deviceRole':
                device_role,
            'siteId':
                site_id,
            'health':
                health,
            'startTime':
                start_time,
            'endTime':
                end_time,
            'limit':
                limit,
            'offset':
                offset,
        }
        _params.update(request_parameters)
        _params = dict_from_items_with_values(_params)

        path_params = {
        }

        with_custom_headers = False
        _headers = self._session.headers or {}
        if headers:
            _headers.update(dict_of_str(headers))
            with_custom_headers = True

        e_url = ('/dna/intent/api/v1/device-health')
        endpoint_full_url = apply_path_params(e_url, path_params)
        if with_custom_headers:
            json_data = self._session.get(endpoint_full_url, params=_params,
                                          headers=_headers)
        else:
            json_data = self._session.get(endpoint_full_url, params=_params)

        return self._object_factory('bpm_c75e364632e15384a18063458e2ba0e3_v2_2_2_3', json_data)
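The `devices` endpoint pages its results through `limit` (default 50, max 1000) and `offset`. A sketch of walking every page against a stubbed fetch function (the names `fetch_all`/`fake_page` are hypothetical, and a 1-based offset is assumed here):

```python
def fetch_all(fetch_page, limit=50):
    """Collect every record by walking limit/offset pages.

    fetch_page(limit=..., offset=...) stands in for Devices.devices();
    it must return the page's list of device records.
    """
    out, offset = [], 1  # assuming offsets are 1-based
    while True:
        page = fetch_page(limit=limit, offset=offset)
        out.extend(page)
        if len(page) < limit:   # short page means we reached the end
            return out
        offset += limit


# Stub standing in for the API: 7 fake device records served 3 at a time.
records = [{'id': i} for i in range(7)]


def fake_page(limit, offset):
    return records[offset - 1:offset - 1 + limit]
```

`fetch_all(fake_page, limit=3)` issues three requests (offsets 1, 4, 7) and returns all seven records in order.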
    def get_all_interfaces(self,
                           limit=None,
                           offset=None,
                           headers=None,
                           **request_parameters):
        """Returns all available interfaces. This endpoint can return a maximum of 500 interfaces .

        Args:
            offset(int): offset query parameter.
            limit(int): limit query parameter.
            headers(dict): Dictionary of HTTP Headers to send with the Request
                .
            **request_parameters: Additional request parameters (provides
                support for parameters that may be added in the future).

        Returns:
            MyDict: JSON response. Access the object's properties by using
            the dot notation or the bracket notation.

        Raises:
            TypeError: If the parameter types are incorrect.
            MalformedRequest: If the request body created is invalid.
            ApiError: If the DNA Center cloud returns an error.
        """
        check_type(headers, dict)
        check_type(offset, int)
        check_type(limit, int)
        if headers is not None:
            if 'X-Auth-Token' in headers:
                check_type(headers.get('X-Auth-Token'),
                           basestring, may_be_none=False)

        _params = {
            'offset':
                offset,
            'limit':
                limit,
        }
        _params.update(request_parameters)
        _params = dict_from_items_with_values(_params)

        path_params = {
        }

        with_custom_headers = False
        _headers = self._session.headers or {}
        if headers:
            _headers.update(dict_of_str(headers))
            with_custom_headers = True

        e_url = ('/dna/intent/api/v1/interface')
        endpoint_full_url = apply_path_params(e_url, path_params)
        if with_custom_headers:
            json_data = self._session.get(endpoint_full_url, params=_params,
                                          headers=_headers)
        else:
            json_data = self._session.get(endpoint_full_url, params=_params)

        return self._object_factory('bpm_d3d71136d95562afc211b40004d109_v2_2_2_3', json_data)
    def get_device_interface_count(self,
                                   headers=None,
                                   **request_parameters):
        """Returns the count of interfaces for all devices .

        Args:
            headers(dict): Dictionary of HTTP Headers to send with the Request
                .
            **request_parameters: Additional request parameters (provides
                support for parameters that may be added in the future).

        Returns:
            MyDict: JSON response. Access the object's properties by using
            the dot notation or the bracket notation.

        Raises:
            TypeError: If the parameter types are incorrect.
            MalformedRequest: If the request body created is invalid.
            ApiError: If the DNA Center cloud returns an error.
        """
        check_type(headers, dict)
        if headers is not None:
            if 'X-Auth-Token' in headers:
                check_type(headers.get('X-Auth-Token'),
                           basestring, may_be_none=False)

        _params = {
        }
        _params.update(request_parameters)
        _params = dict_from_items_with_values(_params)

        path_params = {
        }

        with_custom_headers = False
        _headers = self._session.headers or {}
        if headers:
            _headers.update(dict_of_str(headers))
            with_custom_headers = True

        e_url = ('/dna/intent/api/v1/interface/count')
        endpoint_full_url = apply_path_params(e_url, path_params)
        if with_custom_headers:
            json_data = self._session.get(endpoint_full_url, params=_params,
                                          headers=_headers)
        else:
            json_data = self._session.get(endpoint_full_url, params=_params)

        return self._object_factory('bpm_da44fbc3e415a99aac0bdd291e9a87a_v2_2_2_3', json_data)
    def get_interface_by_ip(self,
                            ip_address,
                            headers=None,
                            **request_parameters):
        """Returns list of interfaces for specified device management IP address .

        Args:
            ip_address(basestring): ipAddress path parameter. IP address of the interface .
            headers(dict): Dictionary of HTTP Headers to send with the Request
                .
            **request_parameters: Additional request parameters (provides
                support for parameters that may be added in the future).

        Returns:
            MyDict: JSON response. Access the object's properties by using
            the dot notation or the bracket notation.

        Raises:
            TypeError: If the parameter types are incorrect.
            MalformedRequest: If the request body created is invalid.
            ApiError: If the DNA Center cloud returns an error.
        """
        check_type(headers, dict)
        check_type(ip_address, basestring,
                   may_be_none=False)
        if headers is not None:
            if 'X-Auth-Token' in headers:
                check_type(headers.get('X-Auth-Token'),
                           basestring, may_be_none=False)

        _params = {
        }
        _params.update(request_parameters)
        _params = dict_from_items_with_values(_params)

        path_params = {
            'ipAddress': ip_address,
        }

        with_custom_headers = False
        _headers = self._session.headers or {}
        if headers:
            _headers.update(dict_of_str(headers))
            with_custom_headers = True

        e_url = ('/dna/intent/api/v1/interface/ip-address/{ipAddress}')
        endpoint_full_url = apply_path_params(e_url, path_params)
        if with_custom_headers:
            json_data = self._session.get(endpoint_full_url, params=_params,
                                          headers=_headers)
        else:
            json_data = self._session.get(endpoint_full_url, params=_params)

        return self._object_factory('bpm_cf7fa95e3ed4527aa5ba8ca871a8c142_v2_2_2_3', json_data)
    def get_isis_interfaces(self,
                            headers=None,
                            **request_parameters):
        """Returns the interfaces that have ISIS enabled .

        Args:
            headers(dict): Dictionary of HTTP Headers to send with the Request
                .
            **request_parameters: Additional request parameters (provides
                support for parameters that may be added in the future).

        Returns:
            MyDict: JSON response. Access the object's properties by using
            the dot notation or the bracket notation.

        Raises:
            TypeError: If the parameter types are incorrect.
            MalformedRequest: If the request body created is invalid.
            ApiError: If the DNA Center cloud returns an error.
        """
        check_type(headers, dict)
        if headers is not None:
            if 'X-Auth-Token' in headers:
                check_type(headers.get('X-Auth-Token'),
                           basestring, may_be_none=False)

        _params = {
        }
        _params.update(request_parameters)
        _params = dict_from_items_with_values(_params)

        path_params = {
        }

        with_custom_headers = False
        _headers = self._session.headers or {}
        if headers:
            _headers.update(dict_of_str(headers))
            with_custom_headers = True

        e_url = ('/dna/intent/api/v1/interface/isis')
        endpoint_full_url = apply_path_params(e_url, path_params)
        if with_custom_headers:
            json_data = self._session.get(endpoint_full_url, params=_params,
                                          headers=_headers)
        else:
            json_data = self._session.get(endpoint_full_url, params=_params)

        return self._object_factory('bpm_af71ea437c8755869b00d26ba9234dff_v2_2_2_3', json_data)
    def get_interface_info_by_id(self,
                                 device_id,
                                 headers=None,
                                 **request_parameters):
        """Returns list of interfaces by specified device .

        Args:
            device_id(basestring): deviceId path parameter. Device ID .
            headers(dict): Dictionary of HTTP Headers to send with the Request
                .
            **request_parameters: Additional request parameters (provides
                support for parameters that may be added in the future).

        Returns:
            MyDict: JSON response. Access the object's properties by using
            the dot notation or the bracket notation.

        Raises:
            TypeError: If the parameter types are incorrect.
            MalformedRequest: If the request body created is invalid.
            ApiError: If the DNA Center cloud returns an error.
        """
        check_type(headers, dict)
        check_type(device_id, basestring,
                   may_be_none=False)
        if headers is not None:
            if 'X-Auth-Token' in headers:
                check_type(headers.get('X-Auth-Token'),
                           basestring, may_be_none=False)

        _params = {
        }
        _params.update(request_parameters)
        _params = dict_from_items_with_values(_params)

        path_params = {
            'deviceId': device_id,
        }

        with_custom_headers = False
        _headers = self._session.headers or {}
        if headers:
            _headers.update(dict_of_str(headers))
            with_custom_headers = True

        e_url = ('/dna/intent/api/v1/interface/network-device/{deviceId}')
        endpoint_full_url = apply_path_params(e_url, path_params)
        if with_custom_headers:
            json_data = self._session.get(endpoint_full_url, params=_params,
                                          headers=_headers)
        else:
            json_data = self._session.get(endpoint_full_url, params=_params)

        return self._object_factory('bpm_e057192b97615f0d99a10e2b66bab13a_v2_2_2_3', json_data)
    def get_device_interface_count_by_id(self,
                                         device_id,
                                         headers=None,
                                         **request_parameters):
        """Returns the interface count for the given device .

        Args:
            device_id(basestring): deviceId path parameter. Device ID .
            headers(dict): Dictionary of HTTP Headers to send with the Request
                .
            **request_parameters: Additional request parameters (provides
                support for parameters that may be added in the future).

        Returns:
            MyDict: JSON response. Access the object's properties by using
            the dot notation or the bracket notation.

        Raises:
            TypeError: If the parameter types are incorrect.
            MalformedRequest: If the request body created is invalid.
            ApiError: If the DNA Center cloud returns an error.
        """
        check_type(headers, dict)
        check_type(device_id, basestring,
                   may_be_none=False)
        if headers is not None:
            if 'X-Auth-Token' in headers:
                check_type(headers.get('X-Auth-Token'),
                           basestring, may_be_none=False)

        _params = {
        }
        _params.update(request_parameters)
        _params = dict_from_items_with_values(_params)

        path_params = {
            'deviceId': device_id,
        }

        with_custom_headers = False
        _headers = self._session.headers or {}
        if headers:
            _headers.update(dict_of_str(headers))
            with_custom_headers = True

        e_url = ('/dna/intent/api/v1/interface/network-'
                 + 'device/{deviceId}/count')
        endpoint_full_url = apply_path_params(e_url, path_params)
        if with_custom_headers:
            json_data = self._session.get(endpoint_full_url, params=_params,
                                          headers=_headers)
        else:
            json_data = self._session.get(endpoint_full_url, params=_params)

        return self._object_factory('bpm_b7d6c62ea6522081fcf55de7eb9fd7_v2_2_2_3', json_data)
    def get_interface_details(self,
                              device_id,
                              name,
                              headers=None,
                              **request_parameters):
        """Returns interface by specified device Id and interface name .

        Args:
            device_id(basestring): deviceId path parameter. Device ID .
            name(basestring): name query parameter. Interface name .
            headers(dict): Dictionary of HTTP Headers to send with the Request
                .
            **request_parameters: Additional request parameters (provides
                support for parameters that may be added in the future).

        Returns:
            MyDict: JSON response. Access the object's properties by using
            the dot notation or the bracket notation.

        Raises:
            TypeError: If the parameter types are incorrect.
            MalformedRequest: If the request body created is invalid.
            ApiError: If the DNA Center cloud returns an error.
        """
        check_type(headers, dict)
        check_type(name, basestring,
                   may_be_none=False)
        check_type(device_id, basestring,
                   may_be_none=False)
        if headers is not None:
            if 'X-Auth-Token' in headers:
                check_type(headers.get('X-Auth-Token'),
                           basestring, may_be_none=False)

        _params = {
            'name':
                name,
        }
        _params.update(request_parameters)
        _params = dict_from_items_with_values(_params)

        path_params = {
            'deviceId': device_id,
        }

        with_custom_headers = False
        _headers = self._session.headers or {}
        if headers:
            _headers.update(dict_of_str(headers))
            with_custom_headers = True

        e_url = ('/dna/intent/api/v1/interface/network-'
                 + 'device/{deviceId}/interface-name')
        endpoint_full_url = apply_path_params(e_url, path_params)
        if with_custom_headers:
            json_data = self._session.get(endpoint_full_url, params=_params,
                                          headers=_headers)
        else:
            json_data = self._session.get(endpoint_full_url, params=_params)

        return self._object_factory('bpm_bef9e9b306085d879b877598fad71b51_v2_2_2_3', json_data)
    def get_device_interfaces_by_specified_range(self,
                                                 device_id,
                                                 records_to_return,
                                                 start_index,
                                                 headers=None,
                                                 **request_parameters):
        """Returns the list of interfaces for the device for the specified range .

        Args:
            device_id(basestring): deviceId path parameter. Device ID .
            start_index(int): startIndex path parameter. Start index .
            records_to_return(int): recordsToReturn path parameter. Number of records to return .
            headers(dict): Dictionary of HTTP Headers to send with the Request
                .
            **request_parameters: Additional request parameters (provides
                support for parameters that may be added in the future).

        Returns:
            MyDict: JSON response. Access the object's properties by using
            the dot notation or the bracket notation.

        Raises:
            TypeError: If the parameter types are incorrect.
            MalformedRequest: If the request body created is invalid.
            ApiError: If the DNA Center cloud returns an error.
        """
        check_type(headers, dict)
        check_type(device_id, basestring,
                   may_be_none=False)
        check_type(start_index, int,
                   may_be_none=False)
        check_type(records_to_return, int,
                   may_be_none=False)
        if headers is not None:
            if 'X-Auth-Token' in headers:
                check_type(headers.get('X-Auth-Token'),
                           basestring, may_be_none=False)

        _params = {
        }
        _params.update(request_parameters)
        _params = dict_from_items_with_values(_params)

        path_params = {
            'deviceId': device_id,
            'startIndex': start_index,
            'recordsToReturn': records_to_return,
        }

        with_custom_headers = False
        _headers = self._session.headers or {}
        if headers:
            _headers.update(dict_of_str(headers))
            with_custom_headers = True

        e_url = ('/dna/intent/api/v1/interface/network-'
                 + 'device/{deviceId}/{startIndex}/{recordsToReturn}')
        endpoint_full_url = apply_path_params(e_url, path_params)
        if with_custom_headers:
            json_data = self._session.get(endpoint_full_url, params=_params,
                                          headers=_headers)
        else:
            json_data = self._session.get(endpoint_full_url, params=_params)

        return self._object_factory('bpm_a3d52c630ba5deaada16fe3b07af744_v2_2_2_3', json_data)
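This method is a good illustration of the path-parameter mechanism used throughout the class: the endpoint template carries `{deviceId}`, `{startIndex}` and `{recordsToReturn}` placeholders, and `apply_path_params` substitutes the `path_params` values into them. A minimal local stand-in for that substitution (a hypothetical reimplementation, not the real `...utils.apply_path_params`):

```python
def fill_path_params(url_template, path_params):
    # Substitute each {placeholder} in the endpoint template with its value.
    for key, value in path_params.items():
        url_template = url_template.replace('{%s}' % key, str(value))
    return url_template


url = fill_path_params(
    '/dna/intent/api/v1/interface/network-device'
    '/{deviceId}/{startIndex}/{recordsToReturn}',
    {'deviceId': 'abc123', 'startIndex': 1, 'recordsToReturn': 25})
```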
    def get_ospf_interfaces(self,
                            headers=None,
                            **request_parameters):
        """Returns the interfaces that have OSPF enabled .

        Args:
            headers(dict): Dictionary of HTTP Headers to send with the Request
                .
            **request_parameters: Additional request parameters (provides
                support for parameters that may be added in the future).

        Returns:
            MyDict: JSON response. Access the object's properties by using
            the dot notation or the bracket notation.

        Raises:
            TypeError: If the parameter types are incorrect.
            MalformedRequest: If the request body created is invalid.
            ApiError: If the DNA Center cloud returns an error.
        """
        check_type(headers, dict)
        if headers is not None:
            if 'X-Auth-Token' in headers:
                check_type(headers.get('X-Auth-Token'),
                           basestring, may_be_none=False)

        _params = {
        }
        _params.update(request_parameters)
        _params = dict_from_items_with_values(_params)

        path_params = {
        }

        with_custom_headers = False
        _headers = self._session.headers or {}
        if headers:
            _headers.update(dict_of_str(headers))
            with_custom_headers = True

        e_url = ('/dna/intent/api/v1/interface/ospf')
        endpoint_full_url = apply_path_params(e_url, path_params)
        if with_custom_headers:
            json_data = self._session.get(endpoint_full_url, params=_params,
                                          headers=_headers)
        else:
            json_data = self._session.get(endpoint_full_url, params=_params)

        return self._object_factory('bpm_a2868ff45f5621965f6ece01a742ce_v2_2_2_3', json_data)
    def get_interface_by_id(self,
                            id,
                            headers=None,
                            **request_parameters):
        """Returns the interface for the given interface ID .

        Args:
            id(basestring): id path parameter. Interface ID .
            headers(dict): Dictionary of HTTP Headers to send with the Request
                .
            **request_parameters: Additional request parameters (provides
                support for parameters that may be added in the future).

        Returns:
            MyDict: JSON response. Access the object's properties by using
            the dot notation or the bracket notation.

        Raises:
            TypeError: If the parameter types are incorrect.
            MalformedRequest: If the request body created is invalid.
            ApiError: If the DNA Center cloud returns an error.
        """
        check_type(headers, dict)
        check_type(id, basestring,
                   may_be_none=False)
        if headers is not None:
            if 'X-Auth-Token' in headers:
                check_type(headers.get('X-Auth-Token'),
                           basestring, may_be_none=False)

        _params = {
        }
        _params.update(request_parameters)
        _params = dict_from_items_with_values(_params)

        path_params = {
            'id': id,
        }

        with_custom_headers = False
        _headers = self._session.headers or {}
        if headers:
            _headers.update(dict_of_str(headers))
            with_custom_headers = True

        e_url = ('/dna/intent/api/v1/interface/{id}')
        endpoint_full_url = apply_path_params(e_url, path_params)
        if with_custom_headers:
            json_data = self._session.get(endpoint_full_url, params=_params,
                                          headers=_headers)
        else:
            json_data = self._session.get(endpoint_full_url, params=_params)

        return self._object_factory('bpm_b16bff74ae54ca88a02b34df169218_v2_2_2_3', json_data)
def get_device_list(self,
associated_wlc_ip=None,
collection_interval=None,
collection_status=None,
device_support_level=None,
error_code=None,
error_description=None,
family=None,
hostname=None,
id=None,
license_name=None,
license_status=None,
license_type=None,
location=None,
location_name=None,
mac_address=None,
management_ip_address=None,
module_equpimenttype=None,
module_name=None,
module_operationstatecode=None,
module_partnumber=None,
module_servicestate=None,
module_vendorequipmenttype=None,
not_synced_for_minutes=None,
platform_id=None,
reachability_status=None,
role=None,
serial_number=None,
series=None,
software_type=None,
software_version=None,
type=None,
up_time=None,
headers=None,
**request_parameters):
"""Returns list of network devices based on filter criteria such as management IP address, mac address, hostname,
etc. You can use the .* in any value to conduct a wildcard search. For example, to find all hostnames
beginning with myhost in the IP address range 192.25.18.n, issue the following request: GET
/dna/intent/api/v1/network-device?hostname=myhost.*&managementIpAddress=192.25.18..* If id parameter is
provided with comma separated ids, it will return the list of network-devices for the given ids and
ignores the other request parameters. .
Args:
hostname(basestring, list, set, tuple): hostname query parameter.
management_ip_address(basestring, list, set, tuple): managementIpAddress query parameter.
mac_address(basestring, list, set, tuple): macAddress query parameter.
location_name(basestring, list, set, tuple): locationName query parameter.
serial_number(basestring, list, set, tuple): serialNumber query parameter.
location(basestring, list, set, tuple): location query parameter.
family(basestring, list, set, tuple): family query parameter.
type(basestring, list, set, tuple): type query parameter.
series(basestring, list, set, tuple): series query parameter.
collection_status(basestring, list, set, tuple): collectionStatus query parameter.
collection_interval(basestring, list, set, tuple): collectionInterval query parameter.
not_synced_for_minutes(basestring, list, set, tuple): notSyncedForMinutes query parameter.
error_code(basestring, list, set, tuple): errorCode query parameter.
error_description(basestring, list, set, tuple): errorDescription query parameter.
software_version(basestring, list, set, tuple): softwareVersion query parameter.
software_type(basestring, list, set, tuple): softwareType query parameter.
platform_id(basestring, list, set, tuple): platformId query parameter.
role(basestring, list, set, tuple): role query parameter.
reachability_status(basestring, list, set, tuple): reachabilityStatus query parameter.
up_time(basestring, list, set, tuple): upTime query parameter.
associated_wlc_ip(basestring, list, set, tuple): associatedWlcIp query parameter.
license_name(basestring, list, set, tuple): license.name query parameter.
license_type(basestring, list, set, tuple): license.type query parameter.
license_status(basestring, list, set, tuple): license.status query parameter.
module_name(basestring, list, set, tuple): module+name query parameter.
module_equpimenttype(basestring, list, set, tuple): module+equpimenttype query parameter.
module_servicestate(basestring, list, set, tuple): module+servicestate query parameter.
module_vendorequipmenttype(basestring, list, set, tuple): module+vendorequipmenttype query parameter.
module_partnumber(basestring, list, set, tuple): module+partnumber query parameter.
module_operationstatecode(basestring, list, set, tuple): module+operationstatecode query parameter.
id(basestring): id query parameter. Accepts comma separated ids and return list of network-devices for
the given ids. If invalid or not-found ids are provided, null entry will be returned in
the list. .
device_support_level(basestring): deviceSupportLevel query parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(hostname, (basestring, list, set, tuple))
check_type(management_ip_address, (basestring, list, set, tuple))
check_type(mac_address, (basestring, list, set, tuple))
check_type(location_name, (basestring, list, set, tuple))
check_type(serial_number, (basestring, list, set, tuple))
check_type(location, (basestring, list, set, tuple))
check_type(family, (basestring, list, set, tuple))
check_type(type, (basestring, list, set, tuple))
check_type(series, (basestring, list, set, tuple))
check_type(collection_status, (basestring, list, set, tuple))
check_type(collection_interval, (basestring, list, set, tuple))
check_type(not_synced_for_minutes, (basestring, list, set, tuple))
check_type(error_code, (basestring, list, set, tuple))
check_type(error_description, (basestring, list, set, tuple))
check_type(software_version, (basestring, list, set, tuple))
check_type(software_type, (basestring, list, set, tuple))
check_type(platform_id, (basestring, list, set, tuple))
check_type(role, (basestring, list, set, tuple))
check_type(reachability_status, (basestring, list, set, tuple))
check_type(up_time, (basestring, list, set, tuple))
check_type(associated_wlc_ip, (basestring, list, set, tuple))
check_type(license_name, (basestring, list, set, tuple))
check_type(license_type, (basestring, list, set, tuple))
check_type(license_status, (basestring, list, set, tuple))
check_type(module_name, (basestring, list, set, tuple))
check_type(module_equpimenttype, (basestring, list, set, tuple))
check_type(module_servicestate, (basestring, list, set, tuple))
check_type(module_vendorequipmenttype, (basestring, list, set, tuple))
check_type(module_partnumber, (basestring, list, set, tuple))
check_type(module_operationstatecode, (basestring, list, set, tuple))
check_type(id, basestring)
check_type(device_support_level, basestring)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
'hostname':
hostname,
'managementIpAddress':
management_ip_address,
'macAddress':
mac_address,
'locationName':
location_name,
'serialNumber':
serial_number,
'location':
location,
'family':
family,
'type':
type,
'series':
series,
'collectionStatus':
collection_status,
'collectionInterval':
collection_interval,
'notSyncedForMinutes':
not_synced_for_minutes,
'errorCode':
error_code,
'errorDescription':
error_description,
'softwareVersion':
software_version,
'softwareType':
software_type,
'platformId':
platform_id,
'role':
role,
'reachabilityStatus':
reachability_status,
'upTime':
up_time,
'associatedWlcIp':
associated_wlc_ip,
'license.name':
license_name,
'license.type':
license_type,
'license.status':
license_status,
'module+name':
module_name,
'module+equpimenttype':
module_equpimenttype,
'module+servicestate':
module_servicestate,
'module+vendorequipmenttype':
module_vendorequipmenttype,
'module+partnumber':
module_partnumber,
'module+operationstatecode':
module_operationstatecode,
'id':
id,
'deviceSupportLevel':
device_support_level,
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_fe602e8165035b5cbc304fada4ee2f26_v2_2_2_3', json_data)
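Every lookup method in this class builds a `_params` dict mapping camelCase query keys to the snake_case arguments, then prunes unset entries with `dict_from_items_with_values` so `None` values never reach the wire. A minimal sketch of that pruning contract (the helper is imported from the SDK's utilities; this implementation is an assumption for illustration, not the SDK's actual code):

```python
def dict_from_items_with_values(mapping):
    # Keep only the pairs whose value is not None, so optional
    # arguments that were never supplied drop out of the query string.
    return {k: v for k, v in mapping.items() if v is not None}

params = {'hostname': 'sw1.example.com', 'macAddress': None, 'limit': 25}
print(dict_from_items_with_values(params))  # → {'hostname': 'sw1.example.com', 'limit': 25}
```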
def add_device(self,
cliTransport=None,
computeDevice=None,
enablePassword=None,
extendedDiscoveryInfo=None,
httpPassword=None,
httpPort=None,
httpSecure=None,
httpUserName=None,
ipAddress=None,
merakiOrgId=None,
netconfPort=None,
password=None,
serialNumber=None,
snmpAuthPassphrase=None,
snmpAuthProtocol=None,
snmpMode=None,
snmpPrivPassphrase=None,
snmpPrivProtocol=None,
snmpROCommunity=None,
snmpRWCommunity=None,
snmpRetry=None,
snmpTimeout=None,
snmpUserName=None,
snmpVersion=None,
type=None,
updateMgmtIPaddressList=None,
userName=None,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""Adds the device with the given credentials.
Args:
cliTransport(string): Device's cliTransport.
computeDevice(boolean): Device's computeDevice.
enablePassword(string): Device's enablePassword.
extendedDiscoveryInfo(string): Device's extendedDiscoveryInfo.
httpPassword(string): Device's httpPassword.
httpPort(string): Device's httpPort.
httpSecure(boolean): Device's httpSecure.
httpUserName(string): Device's httpUserName.
ipAddress(list): Device's ipAddress (list of strings).
merakiOrgId(list): Device's merakiOrgId (list of strings).
netconfPort(string): Device's netconfPort.
password(string): Device's password.
serialNumber(string): Device's serialNumber.
snmpAuthPassphrase(string): Device's snmpAuthPassphrase.
snmpAuthProtocol(string): Device's snmpAuthProtocol.
snmpMode(string): Device's snmpMode.
snmpPrivPassphrase(string): Device's snmpPrivPassphrase.
snmpPrivProtocol(string): Device's snmpPrivProtocol.
snmpROCommunity(string): Device's snmpROCommunity.
snmpRWCommunity(string): Device's snmpRWCommunity.
snmpRetry(integer): Device's snmpRetry.
snmpTimeout(integer): Device's snmpTimeout.
snmpUserName(string): Device's snmpUserName.
snmpVersion(string): Device's snmpVersion.
type(string): Device's type. Available values are 'COMPUTE_DEVICE', 'MERAKI_DASHBOARD',
'NETWORK_DEVICE' and 'NODATACHANGE'.
updateMgmtIPaddressList(list): Device's updateMgmtIPaddressList (list of objects).
userName(string): Device's userName.
headers(dict): Dictionary of HTTP Headers to send with the Request.
payload(dict): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, dict)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
}
_payload = {
'cliTransport':
cliTransport,
'computeDevice':
computeDevice,
'enablePassword':
enablePassword,
'extendedDiscoveryInfo':
extendedDiscoveryInfo,
'httpPassword':
httpPassword,
'httpPort':
httpPort,
'httpSecure':
httpSecure,
'httpUserName':
httpUserName,
'ipAddress':
ipAddress,
'merakiOrgId':
merakiOrgId,
'netconfPort':
netconfPort,
'password':
password,
'serialNumber':
serialNumber,
'snmpAuthPassphrase':
snmpAuthPassphrase,
'snmpAuthProtocol':
snmpAuthProtocol,
'snmpMode':
snmpMode,
'snmpPrivPassphrase':
snmpPrivPassphrase,
'snmpPrivProtocol':
snmpPrivProtocol,
'snmpROCommunity':
snmpROCommunity,
'snmpRWCommunity':
snmpRWCommunity,
'snmpRetry':
snmpRetry,
'snmpTimeout':
snmpTimeout,
'snmpUserName':
snmpUserName,
'snmpVersion':
snmpVersion,
'type':
type,
'updateMgmtIPaddressList':
updateMgmtIPaddressList,
'userName':
userName,
}
_payload.update(payload or {})
_payload = dict_from_items_with_values(_payload)
if active_validation:
self._request_validator('jsd_fe3ec7651e79d891fce37a0d860_v2_2_2_3')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.post(endpoint_full_url, params=_params,
json=_payload,
headers=_headers)
else:
json_data = self._session.post(endpoint_full_url, params=_params,
json=_payload)
return self._object_factory('bpm_fe3ec7651e79d891fce37a0d860_v2_2_2_3', json_data)
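`add_device`, like the other endpoints here, validates inputs with `check_type` before building the request, which is what backs the `TypeError` entry in each docstring's Raises section. A sketch of the contract the calls above imply, with Python 3's `str` standing in for `basestring` (a hypothetical implementation; the real helper lives in the SDK's utilities):

```python
def check_type(obj, acceptable_types, may_be_none=True):
    # Accept a single type or a tuple of types. None passes unless
    # may_be_none=False, as for the X-Auth-Token header above.
    if not isinstance(acceptable_types, tuple):
        acceptable_types = (acceptable_types,)
    if obj is None:
        if not may_be_none:
            raise TypeError('expected one of {}, got None'.format(acceptable_types))
        return
    if not isinstance(obj, acceptable_types):
        raise TypeError('expected one of {}, got {}'.format(
            acceptable_types, type(obj).__name__))

check_type('my-token', str)   # passes silently
check_type(None, dict)        # None is allowed by default
```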
def sync_devices(self,
cliTransport=None,
computeDevice=None,
enablePassword=None,
extendedDiscoveryInfo=None,
httpPassword=None,
httpPort=None,
httpSecure=None,
httpUserName=None,
ipAddress=None,
merakiOrgId=None,
netconfPort=None,
password=None,
serialNumber=None,
snmpAuthPassphrase=None,
snmpAuthProtocol=None,
snmpMode=None,
snmpPrivPassphrase=None,
snmpPrivProtocol=None,
snmpROCommunity=None,
snmpRWCommunity=None,
snmpRetry=None,
snmpTimeout=None,
snmpUserName=None,
snmpVersion=None,
type=None,
updateMgmtIPaddressList=None,
userName=None,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""Syncs the devices provided as input.
Args:
cliTransport(string): Device's cliTransport.
computeDevice(boolean): Device's computeDevice.
enablePassword(string): Device's enablePassword.
extendedDiscoveryInfo(string): Device's extendedDiscoveryInfo.
httpPassword(string): Device's httpPassword.
httpPort(string): Device's httpPort.
httpSecure(boolean): Device's httpSecure.
httpUserName(string): Device's httpUserName.
ipAddress(list): Device's ipAddress (list of strings).
merakiOrgId(list): Device's merakiOrgId (list of strings).
netconfPort(string): Device's netconfPort.
password(string): Device's password.
serialNumber(string): Device's serialNumber.
snmpAuthPassphrase(string): Device's snmpAuthPassphrase.
snmpAuthProtocol(string): Device's snmpAuthProtocol.
snmpMode(string): Device's snmpMode.
snmpPrivPassphrase(string): Device's snmpPrivPassphrase.
snmpPrivProtocol(string): Device's snmpPrivProtocol.
snmpROCommunity(string): Device's snmpROCommunity.
snmpRWCommunity(string): Device's snmpRWCommunity.
snmpRetry(integer): Device's snmpRetry.
snmpTimeout(integer): Device's snmpTimeout.
snmpUserName(string): Device's snmpUserName.
snmpVersion(string): Device's snmpVersion.
type(string): Device's type. Available values are 'COMPUTE_DEVICE', 'MERAKI_DASHBOARD',
'NETWORK_DEVICE' and 'NODATACHANGE'.
updateMgmtIPaddressList(list): Device's updateMgmtIPaddressList (list of objects).
userName(string): Device's userName.
headers(dict): Dictionary of HTTP Headers to send with the Request.
payload(dict): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, dict)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
}
_payload = {
'cliTransport':
cliTransport,
'computeDevice':
computeDevice,
'enablePassword':
enablePassword,
'extendedDiscoveryInfo':
extendedDiscoveryInfo,
'httpPassword':
httpPassword,
'httpPort':
httpPort,
'httpSecure':
httpSecure,
'httpUserName':
httpUserName,
'ipAddress':
ipAddress,
'merakiOrgId':
merakiOrgId,
'netconfPort':
netconfPort,
'password':
password,
'serialNumber':
serialNumber,
'snmpAuthPassphrase':
snmpAuthPassphrase,
'snmpAuthProtocol':
snmpAuthProtocol,
'snmpMode':
snmpMode,
'snmpPrivPassphrase':
snmpPrivPassphrase,
'snmpPrivProtocol':
snmpPrivProtocol,
'snmpROCommunity':
snmpROCommunity,
'snmpRWCommunity':
snmpRWCommunity,
'snmpRetry':
snmpRetry,
'snmpTimeout':
snmpTimeout,
'snmpUserName':
snmpUserName,
'snmpVersion':
snmpVersion,
'type':
type,
'updateMgmtIPaddressList':
updateMgmtIPaddressList,
'userName':
userName,
}
_payload.update(payload or {})
_payload = dict_from_items_with_values(_payload)
if active_validation:
self._request_validator('jsd_fe06867e548bba1919024b40d992_v2_2_2_3')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.put(endpoint_full_url, params=_params,
json=_payload,
headers=_headers)
else:
json_data = self._session.put(endpoint_full_url, params=_params,
json=_payload)
return self._object_factory('bpm_fe06867e548bba1919024b40d992_v2_2_2_3', json_data)
def retrieves_all_network_devices(self,
associated_wlc_ip=None,
collection_interval=None,
collection_status=None,
error_code=None,
family=None,
hostname=None,
limit=None,
mac_address=None,
management_ip_address=None,
offset=None,
platform_id=None,
reachability_failure_reason=None,
reachability_status=None,
role=None,
role_source=None,
serial_number=None,
series=None,
software_type=None,
software_version=None,
type=None,
up_time=None,
vrf_name=None,
headers=None,
**request_parameters):
"""Gets the list of the first 500 network devices, sorted lexicographically by hostname. The list can be
filtered by management IP address, MAC address, hostname and location name. If the id parameter is
provided, the network devices for the given ids are returned and all other request parameters are
ignored. For an autocomplete request, the list of specified attributes is returned.
Args:
vrf_name(basestring): vrfName query parameter.
management_ip_address(basestring): managementIpAddress query parameter.
hostname(basestring): hostname query parameter.
mac_address(basestring): macAddress query parameter.
family(basestring): family query parameter.
collection_status(basestring): collectionStatus query parameter.
collection_interval(basestring): collectionInterval query parameter.
software_version(basestring): softwareVersion query parameter.
software_type(basestring): softwareType query parameter.
reachability_status(basestring): reachabilityStatus query parameter.
reachability_failure_reason(basestring): reachabilityFailureReason query parameter.
error_code(basestring): errorCode query parameter.
platform_id(basestring): platformId query parameter.
series(basestring): series query parameter.
type(basestring): type query parameter.
serial_number(basestring): serialNumber query parameter.
up_time(basestring): upTime query parameter.
role(basestring): role query parameter.
role_source(basestring): roleSource query parameter.
associated_wlc_ip(basestring): associatedWlcIp query parameter.
offset(basestring): offset query parameter.
limit(basestring): limit query parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(vrf_name, basestring)
check_type(management_ip_address, basestring)
check_type(hostname, basestring)
check_type(mac_address, basestring)
check_type(family, basestring)
check_type(collection_status, basestring)
check_type(collection_interval, basestring)
check_type(software_version, basestring)
check_type(software_type, basestring)
check_type(reachability_status, basestring)
check_type(reachability_failure_reason, basestring)
check_type(error_code, basestring)
check_type(platform_id, basestring)
check_type(series, basestring)
check_type(type, basestring)
check_type(serial_number, basestring)
check_type(up_time, basestring)
check_type(role, basestring)
check_type(role_source, basestring)
check_type(associated_wlc_ip, basestring)
check_type(offset, basestring)
check_type(limit, basestring)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
'vrfName':
vrf_name,
'managementIpAddress':
management_ip_address,
'hostname':
hostname,
'macAddress':
mac_address,
'family':
family,
'collectionStatus':
collection_status,
'collectionInterval':
collection_interval,
'softwareVersion':
software_version,
'softwareType':
software_type,
'reachabilityStatus':
reachability_status,
'reachabilityFailureReason':
reachability_failure_reason,
'errorCode':
error_code,
'platformId':
platform_id,
'series':
series,
'type':
type,
'serialNumber':
serial_number,
'upTime':
up_time,
'role':
role,
'roleSource':
role_source,
'associatedWlcIp':
associated_wlc_ip,
'offset':
offset,
'limit':
limit,
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/autocomplete')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_b5a5c8da4aaa526da6a06e97c80a38be_v2_2_2_3', json_data)
def update_device_role(self,
id=None,
role=None,
roleSource=None,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""Updates the role of the device (access, core, distribution or border router).
Args:
id(string): Device's id.
role(string): Device's role.
roleSource(string): Device's roleSource.
headers(dict): Dictionary of HTTP Headers to send with the Request.
payload(dict): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, dict)
if headers is not None:
if 'Content-Type' in headers:
check_type(headers.get('Content-Type'),
basestring, may_be_none=False)
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
}
_payload = {
'id':
id,
'role':
role,
'roleSource':
roleSource,
}
_payload.update(payload or {})
_payload = dict_from_items_with_values(_payload)
if active_validation:
self._request_validator('jsd_aa11f09d28165f4ea6c81b8642e59cc4_v2_2_2_3')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/brief')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.put(endpoint_full_url, params=_params,
json=_payload,
headers=_headers)
else:
json_data = self._session.put(endpoint_full_url, params=_params,
json=_payload)
return self._object_factory('bpm_aa11f09d28165f4ea6c81b8642e59cc4_v2_2_2_3', json_data)
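Custom headers are normalized through `dict_of_str` before being merged, so non-string header values cannot leak into the HTTP layer. A plausible sketch of that normalization (an assumption about the helper's behavior, inferred from its name and call site):

```python
def dict_of_str(mapping):
    # Coerce keys and values to str so callers can pass ints,
    # UUIDs, etc. as header values without breaking the transport.
    return {str(k): str(v) for k, v in (mapping or {}).items()}

print(dict_of_str({'X-Auth-Token': 12345}))  # → {'X-Auth-Token': '12345'}
```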
def get_polling_interval_for_all_devices(self,
headers=None,
**request_parameters):
"""Returns the polling interval of all devices.
Args:
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/collection-'
+ 'schedule/global')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_ce94ab18ad505e8a9846f6c4c9df0d2b_v2_2_2_3', json_data)
def get_device_config_for_all_devices(self,
headers=None,
**request_parameters):
"""Returns the config for all devices.
Args:
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/config')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_ed2bca4be412527198720a4dfec9604a_v2_2_2_3', json_data)
def get_device_config_count(self,
headers=None,
**request_parameters):
"""Returns the count of device configs.
Args:
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/config/count')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_dc0a72537a3578ca31cc5ef29131d35_v2_2_2_3', json_data)
def get_device_count(self,
headers=None,
**request_parameters):
"""Returns the count of network devices matching the filter criteria: management IP address, MAC
address, hostname and location name.
Args:
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/count')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_bbfe7340fe6752e5bc273a303d165654_v2_2_2_3', json_data)
def export_device_list(self,
deviceUuids=None,
id=None,
operationEnum=None,
parameters=None,
password=None,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""Exports the selected network devices to a file.
Args:
deviceUuids(list): Device's deviceUuids (list of strings).
id(string): Device's id.
operationEnum(string): Device's operationEnum. Available values are 'CREDENTIALDETAILS' and
'DEVICEDETAILS'.
parameters(list): Device's parameters (list of strings).
password(string): Device's password.
headers(dict): Dictionary of HTTP Headers to send with the Request.
payload(dict): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, dict)
if headers is not None:
if 'Content-Type' in headers:
check_type(headers.get('Content-Type'),
basestring, may_be_none=False)
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
}
_payload = {
'deviceUuids':
deviceUuids,
'id':
id,
'operationEnum':
operationEnum,
'parameters':
parameters,
'password':
password,
}
_payload.update(payload or {})
_payload = dict_from_items_with_values(_payload)
if active_validation:
self._request_validator('jsd_e6ec627d3c587288978990aae75228_v2_2_2_3')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/file')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.post(endpoint_full_url, params=_params,
json=_payload,
headers=_headers)
else:
json_data = self._session.post(endpoint_full_url, params=_params,
json=_payload)
return self._object_factory('bpm_e6ec627d3c587288978990aae75228_v2_2_2_3', json_data)
def get_functional_capability_for_devices(self,
device_id,
function_name=None,
headers=None,
**request_parameters):
"""Returns the functional capability for the given devices.
Args:
device_id(basestring): deviceId query parameter. Accepts comma-separated device ids and returns the list of
functional capabilities for the given ids. If invalid or not-found ids are
provided, a null entry is returned in the list.
function_name(basestring, list, set, tuple): functionName query parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_id, basestring,
may_be_none=False)
check_type(function_name, (basestring, list, set, tuple))
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
'deviceId':
device_id,
'functionName':
function_name,
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/functional-capability')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_ad8cea95d71352f0842a2c869765e6cf_v2_2_2_3', json_data)
def get_functional_capability_by_id(self,
id,
headers=None,
**request_parameters):
"""Returns the functional capability with the given id.
Args:
id(basestring): id path parameter. Functional Capability UUID.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(id, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'id': id,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/functional-'
+ 'capability/{id}')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_f494532c45654fdaeda8d46a0d9753d_v2_2_2_3', json_data)
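Path templates such as `'/dna/intent/api/v1/network-device/functional-capability/{id}'` are expanded by `apply_path_params` using the `path_params` dict. A minimal sketch, under the assumption that the helper does straightforward placeholder substitution with URL quoting (the real implementation may differ):

```python
from urllib.parse import quote  # Python 3; Python 2 uses urllib.quote

def apply_path_params(url, path_params):
    # Replace each {name} placeholder with its URL-quoted value.
    for name, value in path_params.items():
        url = url.replace('{' + name + '}', quote(str(value), safe=''))
    return url

e_url = '/dna/intent/api/v1/network-device/functional-capability/{id}'
print(apply_path_params(e_url, {'id': 'abc-123'}))
# → /dna/intent/api/v1/network-device/functional-capability/abc-123
```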
def inventory_insight_device_link_mismatch(self,
category,
site_id,
limit=None,
offset=None,
order=None,
sort_by=None,
headers=None,
**request_parameters):
"""Finds all devices with a link mismatch (speed/vlan).
Args:
site_id(basestring): siteId path parameter.
offset(basestring): offset query parameter. Row number. Default value is 1.
limit(basestring): limit query parameter. Default value is 500.
category(basestring): category query parameter. Link mismatch category. Value can be speed-duplex or
vlan.
sort_by(basestring): sortBy query parameter. Sort by.
order(basestring): order query parameter. Order. Value can be asc or desc. Default value is asc.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(offset, basestring)
check_type(limit, basestring)
check_type(category, basestring,
may_be_none=False)
check_type(sort_by, basestring)
check_type(order, basestring)
check_type(site_id, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
'offset':
offset,
'limit':
limit,
'category':
category,
'sortBy':
sort_by,
'order':
order,
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'siteId': site_id,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-'
+ 'device/insight/{siteId}/device-link')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_eed1595442b757bf94938c858a257ced_v2_2_2_3', json_data)
def get_devices_with_snmpv3_des(self,
site_id,
limit=None,
offset=None,
order=None,
sort_by=None,
headers=None,
**request_parameters):
"""Returns devices added to DNAC with SNMPv3 DES. The siteId path parameter is mandatory; offset,
limit, sortBy and order are optional.
Args:
site_id(basestring): siteId path parameter.
offset(basestring): offset query parameter. Row number. Default value is 1.
limit(basestring): limit query parameter. Default value is 500.
sort_by(basestring): sortBy query parameter. Sort by.
order(basestring): order query parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(offset, basestring)
check_type(limit, basestring)
check_type(sort_by, basestring)
check_type(order, basestring)
check_type(site_id, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
'offset':
offset,
'limit':
limit,
'sortBy':
sort_by,
'order':
order,
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'siteId': site_id,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-'
+ 'device/insight/{siteId}/insecure-connection')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_bbc074b061d3575d8247084ca33c95d9_v2_2_2_3', json_data)
def get_network_device_by_ip(self,
ip_address,
headers=None,
**request_parameters):
"""Returns the network device by specified IP address .
Args:
ip_address(basestring): ipAddress path parameter. Device IP address .
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(ip_address, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'ipAddress': ip_address,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/ip-address/{ipAddress}')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_dc74c2052a3a4eb7e2a01eaa8e7_v2_2_2_3', json_data)
def get_modules(self,
device_id,
limit=None,
name_list=None,
offset=None,
operational_state_code_list=None,
part_number_list=None,
vendor_equipment_type_list=None,
headers=None,
**request_parameters):
"""Returns modules by specified device id .
Args:
device_id(basestring): deviceId query parameter.
limit(basestring): limit query parameter.
offset(basestring): offset query parameter.
name_list(basestring, list, set, tuple): nameList query parameter.
vendor_equipment_type_list(basestring, list, set, tuple): vendorEquipmentTypeList query parameter.
part_number_list(basestring, list, set, tuple): partNumberList query parameter.
operational_state_code_list(basestring, list, set, tuple): operationalStateCodeList query parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_id, basestring,
may_be_none=False)
check_type(limit, basestring)
check_type(offset, basestring)
check_type(name_list, (basestring, list, set, tuple))
check_type(vendor_equipment_type_list, (basestring, list, set, tuple))
check_type(part_number_list, (basestring, list, set, tuple))
check_type(operational_state_code_list, (basestring, list, set, tuple))
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
'deviceId':
device_id,
'limit':
limit,
'offset':
offset,
'nameList':
name_list,
'vendorEquipmentTypeList':
vendor_equipment_type_list,
'partNumberList':
part_number_list,
'operationalStateCodeList':
operational_state_code_list,
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/module')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_ce9e547725c45c66824afda98179d12f_v2_2_2_3', json_data)
def get_module_count(self,
device_id,
name_list=None,
operational_state_code_list=None,
part_number_list=None,
vendor_equipment_type_list=None,
headers=None,
**request_parameters):
"""Returns Module Count .
Args:
device_id(basestring): deviceId query parameter.
name_list(basestring, list, set, tuple): nameList query parameter.
vendor_equipment_type_list(basestring, list, set, tuple): vendorEquipmentTypeList query parameter.
part_number_list(basestring, list, set, tuple): partNumberList query parameter.
operational_state_code_list(basestring, list, set, tuple): operationalStateCodeList query parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_id, basestring,
may_be_none=False)
check_type(name_list, (basestring, list, set, tuple))
check_type(vendor_equipment_type_list, (basestring, list, set, tuple))
check_type(part_number_list, (basestring, list, set, tuple))
check_type(operational_state_code_list, (basestring, list, set, tuple))
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
'deviceId':
device_id,
'nameList':
name_list,
'vendorEquipmentTypeList':
vendor_equipment_type_list,
'partNumberList':
part_number_list,
'operationalStateCodeList':
operational_state_code_list,
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/module/count')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_fb11f997009751c991884b5fc02087c5_v2_2_2_3', json_data)
def get_module_info_by_id(self,
id,
headers=None,
**request_parameters):
"""Returns Module info by id .
Args:
id(basestring): id path parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(id, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'id': id,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/module/{id}')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_a4588640da5b018b499c5760f4092a_v2_2_2_3', json_data)
def get_device_by_serial_number(self,
serial_number,
headers=None,
**request_parameters):
"""Returns the network device with given serial number .
Args:
serial_number(basestring): serialNumber path parameter. Device serial number .
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(serial_number, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'serialNumber': serial_number,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/serial-'
+ 'number/{serialNumber}')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_c53d56c282e5f108c659009d21f9d26_v2_2_2_3', json_data)
def sync_devices_using_forcesync(self,
force_sync=None,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""Synchronizes the devices. If forceSync param is false (default) then the sync would run in normal priority
thread. If forceSync param is true then the sync would run in high priority thread if available, else
the sync will fail. Result can be seen in the child task of each device .
Args:
force_sync(bool): forceSync query parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request
.
payload(list): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, list)
check_type(force_sync, bool)
if headers is not None:
if 'Content-Type' in headers:
check_type(headers.get('Content-Type'),
basestring, may_be_none=False)
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
'forceSync':
force_sync,
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
}
_payload = payload or []
if active_validation:
self._request_validator('jsd_f2c120b855cb8c852806ce72e54d_v2_2_2_3')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/sync')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.put(endpoint_full_url, params=_params,
json=_payload,
headers=_headers)
else:
json_data = self._session.put(endpoint_full_url, params=_params,
json=_payload)
return self._object_factory('bpm_f2c120b855cb8c852806ce72e54d_v2_2_2_3', json_data)
def register_device_for_wsa(self,
macaddress=None,
serial_number=None,
headers=None,
**request_parameters):
"""Registers a device for WSA notification .
Args:
serial_number(basestring): serialNumber query parameter. Serial number of the device .
            macaddress(basestring): macaddress query parameter. MAC address of the device .
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(serial_number, basestring)
check_type(macaddress, basestring)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
'serialNumber':
serial_number,
'macaddress':
macaddress,
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/tenantinfo/macaddress')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_b2c39feb5e48913492c33add7f13_v2_2_2_3', json_data)
def get_chassis_details_for_device(self,
device_id,
headers=None,
**request_parameters):
"""Returns chassis details for given device ID .
Args:
device_id(basestring): deviceId path parameter. Device ID .
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_id, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'deviceId': device_id,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/{deviceId}/chassis')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_a03cee8dfd7514487a134a422f5e0d7_v2_2_2_3', json_data)
def get_stack_details_for_device(self,
device_id,
headers=None,
**request_parameters):
"""Retrieves complete stack details for given device ID .
Args:
device_id(basestring): deviceId path parameter. Device ID .
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_id, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'deviceId': device_id,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/{deviceId}/stack')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_c07eaefa1fa45faa801764d9094336ae_v2_2_2_3', json_data)
def return_power_supply_fan_details_for_the_given_device(self,
device_uuid,
type,
headers=None,
**request_parameters):
"""Return PowerSupply/ Fan details for the Given device .
Args:
device_uuid(basestring): deviceUuid path parameter.
type(basestring): type query parameter. Type value should be PowerSupply or Fan .
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(type, basestring,
may_be_none=False)
check_type(device_uuid, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
'type':
type,
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'deviceUuid': device_uuid,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/{deviceUuid}/equipment')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_c1cb24a2b53ce8d29d119c6ee1112_v2_2_2_3', json_data)
def poe_interface_details(self,
device_uuid,
interface_name_list=None,
headers=None,
**request_parameters):
"""Returns POE interface details for the device, where deviceuuid is mandatory & accepts comma seperated interface
names which is optional and returns information for that particular interfaces where(operStatus =
operationalStatus) .
Args:
device_uuid(basestring): deviceUuid path parameter. uuid of the device .
            interface_name_list(basestring): interfaceNameList query parameter. Comma-separated interface names .
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(interface_name_list, basestring)
check_type(device_uuid, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
'interfaceNameList':
interface_name_list,
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'deviceUuid': device_uuid,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-'
+ 'device/{deviceUuid}/interface/poe-detail')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_ab3215d9be065533b7cbbc978cb4d905_v2_2_2_3', json_data)
def get_linecard_details(self,
device_uuid,
headers=None,
**request_parameters):
"""Get line card detail for a given deviceuuid. Response will contain serial no, part no, switch no and slot no. .
Args:
            device_uuid(basestring): deviceUuid path parameter. instanceUuid of the device .
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_uuid, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'deviceUuid': device_uuid,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/{deviceUuid}/line-card')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_bd31690b61f45d9f880d74d4e682b070_v2_2_2_3', json_data)
def poe_details_(self,
device_uuid,
headers=None,
**request_parameters):
"""Returns POE details for device. .
Args:
device_uuid(basestring): deviceUuid path parameter. uuid of the device .
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
return self.poe_details(device_uuid,
headers=headers,
**request_parameters)
def poe_details(self,
device_uuid,
headers=None,
**request_parameters):
"""Returns POE details for device. .
Args:
device_uuid(basestring): deviceUuid path parameter. uuid of the device .
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_uuid, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'deviceUuid': device_uuid,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/{deviceUuid}/poe')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_f7a67aba0b365a1e9dae62d148511a25_v2_2_2_3', json_data)
def get_supervisor_card_detail(self,
device_uuid,
headers=None,
**request_parameters):
"""Get supervisor card detail for a given deviceuuid. Response will contain serial no, part no, switch no and slot
no. .
Args:
            device_uuid(basestring): deviceUuid path parameter. instanceUuid of the device .
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(device_uuid, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'deviceUuid': device_uuid,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-'
+ 'device/{deviceUuid}/supervisor-card')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_eb13516155a28570e542dcf10a91_v2_2_2_3', json_data)
def get_device_by_id(self,
id,
headers=None,
**request_parameters):
"""Returns the network device details for the given device ID .
Args:
id(basestring): id path parameter. Device ID .
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(id, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'id': id,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/{id}')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_d86f657f8592f97014d2ebf8d37ac_v2_2_2_3', json_data)
def delete_device_by_id(self,
id,
is_force_delete=None,
headers=None,
**request_parameters):
"""Deletes the network device for the given Id .
Args:
id(basestring): id path parameter. Device ID .
is_force_delete(bool): isForceDelete query parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(is_force_delete, bool)
check_type(id, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
'isForceDelete':
is_force_delete,
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'id': id,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/{id}')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.delete(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.delete(endpoint_full_url, params=_params)
return self._object_factory('bpm_e01233fa258e393239c4b41882806_v2_2_2_3', json_data)
def get_device_summary(self,
id,
headers=None,
**request_parameters):
"""Returns brief summary of device info such as hostname, management IP address for the given device Id .
Args:
id(basestring): id path parameter. Device ID .
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(id, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'id': id,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/{id}/brief')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_fe0153ca24205608b8741d51f5a6d54a_v2_2_2_3', json_data)
def get_polling_interval_by_id(self,
id,
headers=None,
**request_parameters):
"""Returns polling interval by device id .
Args:
id(basestring): id path parameter. Device ID .
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(id, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'id': id,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/{id}/collection-'
+ 'schedule')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_f90daf1c279351f884ba3198d3b2d641_v2_2_2_3', json_data)
def get_organization_list_for_meraki(self,
id,
headers=None,
**request_parameters):
"""Returns list of organizations for meraki dashboard .
Args:
id(basestring): id path parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(id, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'id': id,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/{id}/meraki-'
+ 'organization')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_b4ba6d23d5e7eb62cbba4c9e1a29d_v2_2_2_3', json_data)
def get_device_interface_vlans(self,
id,
interface_type=None,
headers=None,
**request_parameters):
"""Returns Device Interface VLANs .
Args:
id(basestring): id path parameter.
            interface_type(basestring): interfaceType query parameter. VLAN associated with sub-interface.
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(interface_type, basestring)
check_type(id, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
'interfaceType':
interface_type,
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'id': id,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/{id}/vlan')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_fd5fb603cba6523abb25c8ec131fbb8b_v2_2_2_3', json_data)
def get_wireless_lan_controller_details_by_id(self,
id,
headers=None,
**request_parameters):
"""Returns the wireless lan controller info with given device ID .
Args:
id(basestring): id path parameter. Device ID .
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(id, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'id': id,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-device/{id}/wireless-info')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_c01ee650fcf858789ca00c8deda969b9_v2_2_2_3', json_data)
def get_device_config_by_id(self,
network_device_id,
headers=None,
**request_parameters):
"""Returns the device config by specified device ID .
Args:
network_device_id(basestring): networkDeviceId path parameter.
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(network_device_id, basestring,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'networkDeviceId': network_device_id,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-'
+ 'device/{networkDeviceId}/config')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_af0bbf34adb5146b931ec874fc2cc40_v2_2_2_3', json_data)
def get_network_device_by_pagination_range(self,
records_to_return,
start_index,
headers=None,
**request_parameters):
"""Returns the list of network devices for the given pagination range .
Args:
start_index(int): startIndex path parameter. Start index .
records_to_return(int): recordsToReturn path parameter. Number of records to return .
headers(dict): Dictionary of HTTP Headers to send with the Request
.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(start_index, int,
may_be_none=False)
check_type(records_to_return, int,
may_be_none=False)
if headers is not None:
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
'startIndex': start_index,
'recordsToReturn': records_to_return,
}
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/network-'
+ 'device/{startIndex}/{recordsToReturn}')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.get(endpoint_full_url, params=_params,
headers=_headers)
else:
json_data = self._session.get(endpoint_full_url, params=_params)
return self._object_factory('bpm_d7b6ce5abd5dad837e22ace817a6f0_v2_2_2_3', json_data)
def threat_details(self,
endTime=None,
isNewThreat=None,
limit=None,
offset=None,
siteId=None,
startTime=None,
threatLevel=None,
threatType=None,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""The details for the Rogue and aWIPS threats .
Args:
            endTime(integer): Devices' End Time.
            isNewThreat(boolean): Devices' Is New Threat.
            limit(integer): Devices' Limit.
            offset(integer): Devices' Offset.
            siteId(list): Devices' Site Id (list of strings).
            startTime(integer): Devices' Start Time.
            threatLevel(list): Devices' Threat Level (list of strings).
            threatType(list): Devices' Threat Type (list of strings).
headers(dict): Dictionary of HTTP Headers to send with the Request
.
payload(dict): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, dict)
if headers is not None:
if 'Content-Type' in headers:
check_type(headers.get('Content-Type'),
basestring, may_be_none=False)
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
}
_payload = {
'offset':
offset,
'limit':
limit,
'startTime':
startTime,
'endTime':
endTime,
'siteId':
siteId,
'threatLevel':
threatLevel,
'threatType':
threatType,
'isNewThreat':
isNewThreat,
}
_payload.update(payload or {})
_payload = dict_from_items_with_values(_payload)
if active_validation:
self._request_validator('jsd_f4ce55b5f235924903516ef31dc9e3c_v2_2_2_3')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/security/threats/details')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.post(endpoint_full_url, params=_params,
json=_payload,
headers=_headers)
else:
json_data = self._session.post(endpoint_full_url, params=_params,
json=_payload)
return self._object_factory('bpm_f4ce55b5f235924903516ef31dc9e3c_v2_2_2_3', json_data)
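Every POST wrapper here follows the same shape: gather the keyword arguments into a candidate payload, drop the entries that were never supplied, and send the rest as JSON. A minimal sketch of that filtering step — `drop_none_values` is a hypothetical stand-in for the SDK's `dict_from_items_with_values`, which may differ in detail:

```python
def drop_none_values(items):
    # keep only the arguments the caller actually supplied
    return {k: v for k, v in items.items() if v is not None}

demo_payload = drop_none_values({
    'offset': 0,
    'limit': 25,
    'startTime': None,       # unset keyword arguments vanish from the body
    'threatLevel': ['High'],
})
```

This is why omitting `startTime` above produces a request body without that key at all, rather than one with an explicit null.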
def threat_detail_count(self,
endTime=None,
isNewThreat=None,
limit=None,
offset=None,
siteId=None,
startTime=None,
threatLevel=None,
threatType=None,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""The details count for the Rogue and aWIPS threats .
Args:
            endTime(integer): Devices' End Time.
            isNewThreat(boolean): Devices' Is New Threat.
            limit(integer): Devices' Limit.
            offset(integer): Devices' Offset.
            siteId(list): Devices' Site Id (list of strings).
            startTime(integer): Devices' Start Time.
            threatLevel(list): Devices' Threat Level (list of strings).
            threatType(list): Devices' Threat Type (list of strings).
headers(dict): Dictionary of HTTP Headers to send with the Request
.
payload(dict): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, dict)
if headers is not None:
if 'Content-Type' in headers:
check_type(headers.get('Content-Type'),
basestring, may_be_none=False)
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
}
_payload = {
'offset':
offset,
'limit':
limit,
'startTime':
startTime,
'endTime':
endTime,
'siteId':
siteId,
'threatLevel':
threatLevel,
'threatType':
threatType,
'isNewThreat':
isNewThreat,
}
_payload.update(payload or {})
_payload = dict_from_items_with_values(_payload)
if active_validation:
self._request_validator('jsd_c7266d89581c9601b79b7304fda3_v2_2_2_3')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/security/threats/details/count')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.post(endpoint_full_url, params=_params,
json=_payload,
headers=_headers)
else:
json_data = self._session.post(endpoint_full_url, params=_params,
json=_payload)
return self._object_factory('bpm_c7266d89581c9601b79b7304fda3_v2_2_2_3', json_data)
def threat_summary(self,
endTime=None,
siteId=None,
startTime=None,
threatLevel=None,
threatType=None,
headers=None,
payload=None,
active_validation=True,
**request_parameters):
"""The Threat Summary for the Rogues and aWIPS .
Args:
            endTime(integer): Devices' End Time.
            siteId(list): Devices' Site Id (list of strings).
            startTime(integer): Devices' Start Time.
            threatLevel(list): Devices' Threat Level (list of strings).
            threatType(list): Devices' Threat Type (list of strings).
headers(dict): Dictionary of HTTP Headers to send with the Request
.
payload(dict): A JSON serializable Python object to send in the
body of the Request.
active_validation(bool): Enable/Disable payload validation.
Defaults to True.
**request_parameters: Additional request parameters (provides
support for parameters that may be added in the future).
Returns:
MyDict: JSON response. Access the object's properties by using
the dot notation or the bracket notation.
Raises:
TypeError: If the parameter types are incorrect.
MalformedRequest: If the request body created is invalid.
ApiError: If the DNA Center cloud returns an error.
"""
check_type(headers, dict)
check_type(payload, dict)
if headers is not None:
if 'Content-Type' in headers:
check_type(headers.get('Content-Type'),
basestring, may_be_none=False)
if 'X-Auth-Token' in headers:
check_type(headers.get('X-Auth-Token'),
basestring, may_be_none=False)
_params = {
}
_params.update(request_parameters)
_params = dict_from_items_with_values(_params)
path_params = {
}
_payload = {
'startTime':
startTime,
'endTime':
endTime,
'siteId':
siteId,
'threatLevel':
threatLevel,
'threatType':
threatType,
}
_payload.update(payload or {})
_payload = dict_from_items_with_values(_payload)
if active_validation:
self._request_validator('jsd_e6eed78cb55d51a1bfe669729df25689_v2_2_2_3')\
.validate(_payload)
with_custom_headers = False
_headers = self._session.headers or {}
if headers:
_headers.update(dict_of_str(headers))
with_custom_headers = True
e_url = ('/dna/intent/api/v1/security/threats/summary')
endpoint_full_url = apply_path_params(e_url, path_params)
if with_custom_headers:
json_data = self._session.post(endpoint_full_url, params=_params,
json=_payload,
headers=_headers)
else:
json_data = self._session.post(endpoint_full_url, params=_params,
json=_payload)
return self._object_factory('bpm_e6eed78cb55d51a1bfe669729df25689_v2_2_2_3', json_data)
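All of these GET wrappers build their final URL by substituting path parameters into an endpoint template. A self-contained sketch of that substitution step — `fill_path_params` is an illustrative stand-in, not the SDK's actual `apply_path_params`:

```python
from urllib.parse import quote

def fill_path_params(url_template, path_params):
    # replace each {name} placeholder with its URL-encoded value
    for name, value in path_params.items():
        url_template = url_template.replace('{%s}' % name, quote(str(value), safe=''))
    return url_template

demo_url = fill_path_params('/dna/intent/api/v1/network-device/{id}/brief',
                            {'id': 'e4f1 a873'})
```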
# --- tools/patch_codegen/kernwin.py (repo: cclauss/src, license: BSD-3-Clause) ---
{
"vask_file" : [
("va_copy", ("arg4", "temp")),
],
"SwigDirector_UI_Hooks::populating_widget_popup" : [
("director_method_call_arity_cap", (
"populating_widget_popup",
"(method ,(PyObject *)obj0,(PyObject *)obj1,(__argcnt < 3 ? NULL : (PyObject *)obj2), NULL)",
"(swig_get_self(), (PyObject *) swig_method_name ,(PyObject *)obj0,(PyObject *)obj1,(__argcnt < 4 ? NULL : (PyObject *)obj2), NULL)",
)),
],
"SwigDirector_UI_Hooks::finish_populating_widget_popup" : [
("director_method_call_arity_cap", (
"finish_populating_widget_popup",
"(method ,(PyObject *)obj0,(PyObject *)obj1,(__argcnt < 3 ? NULL : (PyObject *)obj2), NULL)",
"(swig_get_self(), (PyObject *) swig_method_name ,(PyObject *)obj0,(PyObject *)obj1,(__argcnt < 4 ? NULL : (PyObject *)obj2), NULL)",
)),
],
"__additional_thread_unsafe__" : ["py_get_ask_form", "py_get_open_form"],
}
# --- src/main.py (repo: Krzem5/Python-2D_Magical_Shapes, license: BSD-3-Clause) ---
import turtle
t=turtle
t.speed(0)
t.hideturtle()
t.pensize(5)
t.color('white')
t.bgcolor('black')
t.penup()
t.left(90)
t.fd(100)
t.left(90)
t.pendown()
t.fd(90)
t.left(45)
t.fd(24)
t.left(45)
t.fd(180)
t.left(45)
t.fd(24)
t.left(45)
t.fd(180)
t.left(45)
t.fd(24)
t.left(45)
t.fd(180)
t.left(45)
t.fd(24)
t.left(45)
t.fd(90)
t.penup()
t.left(180)
t.fd(90)
t.right(60)
t.fd(24)
t.right(120)
t.pendown()
t.fd(170.6)
t.left(90)
t.fd(150)
t.penup()
t.setposition(-90,100)
t.pendown()
t.fd(170.6)
t.left(90)
t.fd(150)
t.penup()
t.setposition(-107,-95)
t.pendown()
t.fd(170.6)
t.left(90)
t.fd(150)
t.penup()
t.setposition(86,-113)
t.pendown()
t.fd(170.6)
t.left(90)
t.fd(150)
########
t.penup()
t.setposition(-300,0)
t.right(90)
t.fd(100)
t.left(90)
t.pendown()
t.fd(12)
t.left(60)
t.fd(180)
t.left(60)
t.fd(24)
t.left(60)
t.fd(180)
t.left(60)
t.fd(24)
t.left(60)
t.fd(180)
t.left(60)
t.fd(12)
t.left(180)
t.fd(12)
t.right(120)
t.fd(150.6)
t.left(120)
t.fd(100.8)
t.penup()
t.setposition(-400,-55)
t.pendown()
t.fd(150.6)
t.left(120)
t.fd(100.8)
t.penup()
t.setposition(-210,-73)
t.pendown()
t.fd(150.6)
t.left(120)
t.fd(100.8)
#########
t.penup()
t.setposition(350,0)
t.left(210)
t.fd(100)
t.left(90)
t.pendown()
t.fd(150)
t.left(45)
t.fd(24)
t.left(45)
t.fd(180)
t.left(45)
t.fd(24)
t.left(45)
t.fd(300)
t.left(45)
t.fd(24)
t.left(45)
t.fd(180)
t.left(45)
t.fd(24)
t.left(45)
t.fd(150)
t.penup()
t.left(180)
t.fd(150)
t.right(60)
t.fd(24)
t.right(120)
t.pendown()
t.fd(290.6)
t.left(90)
t.fd(150)
t.penup()
t.setposition(200,100)
t.pendown()
t.fd(170.6)
t.left(90)
t.fd(270)
t.penup()
t.setposition(183,-97)
t.pendown()
t.fd(290.6)
t.left(90)
t.fd(150)
t.penup()
t.setposition(495,-114)
t.pendown()
t.fd(170.6)
t.left(90)
t.fd(270)
########
t.penup()
t.setposition(0,0)
t.pendown()
t.stamp()
t.left(90)
t.stamp()
t.left(90)
t.stamp()
t.left(90)
t.stamp()
########
t.penup()
t.setposition(-300,-10)
t.pendown()
t.stamp()
t.left(90)
t.stamp()
t.left(90)
t.stamp()
t.left(90)
t.stamp()
########
t.penup()
t.setposition(350,0)
t.pendown()
t.stamp()
t.left(90)
t.stamp()
t.left(90)
t.stamp()
t.left(90)
t.stamp()
t.done()
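Each outline above is a closed sequence of fd/left moves; for instance the first chamfered square is fd(90), then alternating 45-degree turns over sides 24 and 180, ending with another fd(90). A headless sketch that checks such a sequence really returns to its starting point — the `walk` simulator is hypothetical and not part of this script:

```python
from math import cos, sin, radians

def walk(moves):
    # simulate turtle fd/left commands; return the final (x, y, heading)
    x = y = heading = 0.0
    for cmd, val in moves:
        if cmd == 'left':
            heading += val
        else:  # 'fd'
            x += val * cos(radians(heading))
            y += val * sin(radians(heading))
    return x, y, heading % 360

# the chamfered-square outline traced by the first figure
chamfered_square = [('fd', 90)] \
    + [('left', 45), ('fd', 24), ('left', 45), ('fd', 180)] * 3 \
    + [('left', 45), ('fd', 24), ('left', 45), ('fd', 90)]
```

The eight 45-degree turns sum to 360, and opposite sides are equal, so the walk closes exactly.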
# --- dbcli/dbcli_tests/test_database.py (repo: FabienArcellier/blueprint-database-postgresql, license: Unlicense) ---
import unittest
from dbcli.database import Database
class TestDatabase(unittest.TestCase):
def test_init_should_calculate_dsn_from_connection_string(self):
# Acts
database = Database('postgresql://postgres:1234@localhost:5432/postgres')
# Assert
self.assertEqual("dbname='postgres' user='postgres' host='localhost' port='5432' password='1234'", database.postgres_dsn)
def test_init_should_calculate_dsn_for_postgresql_from_connection_string(self):
# Acts
database = Database('postgresql://postgres:1234@localhost:5432/database')
# Assert
self.assertEqual("dbname='postgres' user='postgres' host='localhost' port='5432' password='1234'", database.postgres_dsn)
def test_init_should_calculate_database_name_from_connection_string(self):
# Acts
database = Database('postgresql://postgres:1234@localhost:5432/database')
# Assert
self.assertEqual("database", database.database_name)
if __name__ == '__main__':
unittest.main()
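The first test above exercises a URL-to-DSN conversion. A simplified, hypothetical sketch of that mapping using only the standard library (the real `Database` differs in detail — per the second test it keeps dbname='postgres' even when the URL path names another database):

```python
from urllib.parse import urlparse

def dsn_from_url(url):
    # postgresql://user:pwd@host:port/db  ->  libpq keyword/value DSN string
    p = urlparse(url)
    return "dbname='%s' user='%s' host='%s' port='%s' password='%s'" % (
        p.path.lstrip('/'), p.username, p.hostname, p.port, p.password)

demo_dsn = dsn_from_url('postgresql://postgres:1234@localhost:5432/postgres')
```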
# --- w6_cmeans/w6_compare_two_means.py (repo: RadarSun/Advanced-algorithm, license: Apache-2.0) ---
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn import metrics
from matplotlib import animation
from PIL import Image
import matplotlib.patches as mpatches
from math import pi
from numpy import cos, sin
import copy
################################################################################################################################
def kmeans_cal_global_center(sites):
sum_x = 0
sum_y = 0
cnt_sites = 0
for i in range(len(sites)):
cnt_sites = cnt_sites+1
sum_x = sum_x + sites[i].x_location
sum_y = sum_y + sites[i].y_location
x_location = sum_x/cnt_sites
y_location = sum_y/cnt_sites
return x_location,y_location
def kmeans_cal_nextcenter(density,distance_matrix,centers,pre_siteid_of_center):
distance_all_sites = []
n_sites = len(density)
n_centers = len(centers)
for i in range(n_sites):
total_distance = 1
for j in range(n_centers):
total_distance = total_distance * distance_matrix[i][pre_siteid_of_center[j]]
distance_all_sites.append(total_distance)
if density[i] == 0:
distance_all_sites[i] = 0
next_center_id = distance_all_sites.index(max(distance_all_sites))
return next_center_id
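kmeans_cal_nextcenter picks, among the sites with nonzero density, the one whose product of distances to the already-chosen centers is largest. A compact restatement of that selection rule on plain lists (helper and parameter names are illustrative):

```python
def next_center_index(center_distances, eligible):
    # center_distances[i] lists site i's distances to the centers chosen so far;
    # the next center is the eligible site with the largest distance product
    scores = []
    for dists, ok in zip(center_distances, eligible):
        score = 1.0
        for d in dists:
            score *= d
        scores.append(score if ok else 0.0)
    return scores.index(max(scores))
```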
def kmeans_assign_center(centers,site):
min_distance = [( ((site.x_location-centers[0].x_location) ** 2) + ((site.y_location-centers[0].y_location)**2) ) ** 0.5,0]
for i in range(len(centers)):
distance = ( ((site.x_location-centers[i].x_location) ** 2) + ((site.y_location-centers[i].y_location) **2) ) ** 0.5
if distance < min_distance[0]:
min_distance[0] = distance
min_distance[1] = centers[i].id
return centers[min_distance[1]]
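The assignment step simply picks the center at the smallest Euclidean distance. A stripped-down sketch of the same rule on plain coordinate tuples, independent of the Site/Center classes:

```python
def nearest_center(point, centers):
    # index of the closest center under Euclidean distance
    px, py = point
    dists = [((px - cx) ** 2 + (py - cy) ** 2) ** 0.5 for cx, cy in centers]
    return dists.index(min(dists))
```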
def kmeans_cal_center(id,sites):
center = Center(id,0,0)
sum_x = 0
sum_y = 0
cnt_sites = 0
for i in range(len(sites)):
if sites[i].center == id:
cnt_sites = cnt_sites+1
sum_x = sum_x + sites[i].x_location
sum_y = sum_y + sites[i].y_location
center.x_location = sum_x/cnt_sites
center.y_location = sum_y/cnt_sites
return center
def k_means(sites, init_centers,algorithm_kind):
centers = init_centers[:]
n_fig = 0
while True:
new_centers = []
changed_centers = []
# assign the center to the site
for site in sites:
center = kmeans_assign_center(centers, site)
site.center = center.id
center.sites.append(site)
# recalculate center
for center in centers:
new_center = kmeans_cal_center(center.id, center.sites)
new_centers.append(new_center)
            # use abs(): a signed coordinate change can be negative and would
            # otherwise be mistaken for convergence
            if (abs(new_center.x_location - center.x_location) > 0.01) or (abs(new_center.y_location - center.y_location) > 0.01):
                changed_centers.append(new_center)
if len(changed_centers) == 0:
return centers,n_fig
centers = new_centers[:]
plt.clf()
color_squence = ['darkorchid','limegreen','sandybrown','lightslategrey','rosybrown','sienna','seagreen']
for j in range(len(centers)):
x_sample_location = []
y_sample_location = []
for i in range(len(sites)):
if sites[i].center == j:
x_sample_location.append(sites[i].x_location)
y_sample_location.append(sites[i].y_location)
plt.scatter(x_sample_location,y_sample_location,marker='o', c = 'white',edgecolors = color_squence[j%7])
x_center_location = []
y_center_location = []
for i in range(len(centers)):
x_center_location.append(centers[i].x_location)
y_center_location.append(centers[i].y_location)
plt.scatter(x_center_location,y_center_location,s=400,marker='*',c='red')
plt.title(algorithm_kind)
plt.xlabel('Number of iterations:' + str(n_fig+1))
plt.savefig(str(algorithm_kind)+ '_' + str(n_fig+1)+'.png')
n_fig = n_fig+1
plt.pause(0.01)
################################################################################################################################
def cmeans_cal_center(sites,initial_centers,nu_matrix,m):
centers = copy.deepcopy(initial_centers)
n_centers = nu_matrix.shape[1]
n_sites = len(sites)
for j in range(n_centers):
up_xtotal = 0.0
down_xtotal = 0.0
up_ytotal = 0.0
down_ytotal = 0.0
for i in range(n_sites):
up_xtotal = up_xtotal + sites[i].x_location * nu_matrix[i][j] ** m
up_ytotal = up_ytotal + sites[i].y_location * nu_matrix[i][j] ** m
down_xtotal = down_xtotal + nu_matrix[i][j] ** m
down_ytotal = down_ytotal + nu_matrix[i][j] ** m
centers[j].x_location = up_xtotal / down_xtotal
centers[j].y_location = up_ytotal / down_ytotal
return centers
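cmeans_cal_center implements the fuzzy centroid update c_j = sum_i(u_ij^m * x_i) / sum_i(u_ij^m). A vectorised NumPy sketch of the same formula (illustrative, not a drop-in replacement for the object-based version above):

```python
import numpy as np

def fuzzy_centers(points, u, m):
    # points: (n, 2) coordinates; u: (n, K) memberships; returns (K, 2) centers
    w = u ** m                               # weight every site by membership^m
    return (w.T @ points) / w.sum(axis=0)[:, None]

demo_centers = fuzzy_centers(np.array([[0.0, 0.0], [2.0, 0.0]]),
                             np.array([[1.0, 0.0], [0.0, 1.0]]), m=2)
```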
def cal_numatrix(distance_matrix,centers,m,n,K):
nu_matrix = np.zeros((n,K))
for i in range(n):
for j in range(K):
total = 0
for k in range(K):
total = total + (distance_matrix[i][j] / distance_matrix[i][k]) ** (2/(m-1))
total = total ** (-1)
nu_matrix[i][j] = total
return nu_matrix
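cal_numatrix implements the standard fuzzy c-means membership update u_ij = (sum_k (d_ij/d_ik)^(2/(m-1)))^(-1); each row of the result sums to 1. A vectorised NumPy sketch of the same update (names are illustrative; distances are assumed nonzero):

```python
import numpy as np

def memberships(dist, m):
    # dist: (n, K) site-to-center distances; returns the (n, K) membership matrix
    ratio = dist[:, :, None] / dist[:, None, :]          # ratio[i, j, k] = d_ij / d_ik
    return 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)

demo_u = memberships(np.array([[1.0, 1.0], [1.0, 3.0]]), m=2)
```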
def cmeans_cal_distance(sites,centers):
n_sites = len(sites)
n_centers = len(centers)
distance_matrix = np.zeros((n_sites,n_centers))
for i in range(n_sites):
for j in range(n_centers):
distance_matrix[i][j] = ( ((sites[i].x_location - centers[j].x_location) **2)+((sites[i].y_location - centers[j].y_location) **2) ) **0.5
return distance_matrix
def cmeans_assign_center(site,centers,distance_matrix):
    # return the center object with the shortest distance from the site
    site_id = site.id
min_distance = [distance_matrix[site_id][0],0]
for j in range(distance_matrix.shape[1]):# compare distances from site to K centers
if distance_matrix[site_id][j] < min_distance[0]:
min_distance[0] = distance_matrix[site_id][j]
min_distance[1] = j
return centers[min_distance[1]]
def c_means(sites, init_centers,algorithm_kind,m):
centers = copy.deepcopy(init_centers)
n_fig = 0
while True:
distance_matrix = cmeans_cal_distance(sites,centers)
changed_centers = []
# assign the center to the site
for site in sites:
center = cmeans_assign_center(site, centers, distance_matrix)
site.center = center.id
center.sites.append(site)
# recalculate center
nu_matrix = cal_numatrix(distance_matrix,centers,m,len(sites),len(centers))
new_centers = copy.deepcopy(cmeans_cal_center(sites,centers,nu_matrix,m))
for i in range(len(centers)):
if ((abs(centers[i].x_location-new_centers[i].x_location)>0.01) or (abs(centers[i].y_location-new_centers[i].y_location)>0.01)):
changed_centers.append(centers[i])
if len(changed_centers) == 0:
return centers,n_fig
centers = copy.deepcopy(new_centers)
plt.clf()
color_squence = ['darkorchid','limegreen','sandybrown','lightslategrey','rosybrown','sienna','seagreen']
for j in range(len(centers)):
x_sample_location = []
y_sample_location = []
for i in range(len(sites)):
x_sample_location.append(sites[i].x_location)
y_sample_location.append(sites[i].y_location)
plt.scatter(sites[i].x_location,sites[i].y_location,marker='o',c = 'white',alpha = nu_matrix[i][j],edgecolors=color_squence[j%7])
x_center_location = []
y_center_location = []
for i in range(len(centers)):
x_center_location.append(centers[i].x_location)
y_center_location.append(centers[i].y_location)
plt.scatter(x_center_location,y_center_location,s=400,marker='*',c='red')
plt.title(algorithm_kind + ' M=' + str(m))
plt.xlabel('Number of iterations:' + str(n_fig+1))
plt.savefig(str(algorithm_kind)+ '_' +'M=' + str(m) + '_' + str(n_fig+1)+'.png')
n_fig = n_fig+1
plt.pause(0.01)
class Site:
    def __init__(self, id, x_location, y_location):
        self.id = id
        self.x_location = x_location
        self.y_location = y_location
        self.center = 0
class Center:
    def __init__(self, id, x_location, y_location):
        self.id = id
        self.x_location = x_location
        self.y_location = y_location
        # per-instance list: a class-level `sites = []` would be shared by
        # every Center object, so one center's assignments would leak into all
        self.sites = []
if __name__ == "__main__":
'''
Define the clusters using super param
'''
CENTERS=[[-1.15,-1.15], [-1.15,1.15], [2.5,2.5], [1.15,-1.15],[1.15,1.15]]
K = len(CENTERS)
    N_SAMPLES = 600 # number of samples; K of them are used as the initial centers
    CLUSTER_STD = [0.6, 0.6, 0.05, 0.6, 0.6] # std of each cluster
###############################################################################################################
# K-Means
algorithm_kind = 'K-Means'
'''
Initial sample sites
'''
Object_sites = []
sample_sites_locations = []
temp_sample_sites_locations, cluster_id = make_blobs(n_samples=N_SAMPLES, n_features=2, centers = CENTERS, cluster_std=CLUSTER_STD, random_state =9)
x_sample_location = temp_sample_sites_locations[:,0]
x_sample_location = x_sample_location.tolist()
y_sample_location = temp_sample_sites_locations[:,1]
y_sample_location = y_sample_location.tolist()
for n_sample in range(N_SAMPLES):
# collect site locations as [[x,y],...], then build the Site objects
sample_sites_locations.append([x_sample_location[n_sample],y_sample_location[n_sample]])
Object_sites.append( Site(n_sample,sample_sites_locations[n_sample][0],sample_sites_locations[n_sample][1]) )
'''
Initial centers
'''
# Random
Object_centers = []
initial_centers_locations= []
x_center_location = []
y_center_location = []
# initialize centers randomly
algorithm_kind = 'K-Means'
rand_sequence = sorted(np.random.choice(N_SAMPLES, K, replace=False), reverse=True) # K distinct site indices, descending so the pops below do not shift later indices
for k in range(K):
# choose sites as centers from samples
x_center_location.append(x_sample_location[rand_sequence[k]])
y_center_location.append(y_sample_location[rand_sequence[k]])
# remove the sites that were chosen as centers
x_sample_location.pop(rand_sequence[k])
y_sample_location.pop(rand_sequence[k])
sample_sites_locations.pop(rand_sequence[k])
Object_sites.pop(rand_sequence[k])
# collect center locations as [[x,y],...], then build the Center objects
initial_centers_locations.append([x_center_location[k],y_center_location[k]])
Object_centers.append( Center(k,initial_centers_locations[k][0],initial_centers_locations[k][1]) )
fig = plt.figure(figsize=(5,5))
plt.scatter(x_sample_location,y_sample_location, marker='o') # the sites before k-means
plt.scatter(x_center_location,y_center_location,s = 300,marker='*',c = 'red')
plt.title('Init by '+ algorithm_kind)
plt.savefig('KMeans_initial_centers.png')
'''
K-Means and plt.show
'''
plt.ion()
[Object_centers,n_fig] = k_means(Object_sites, Object_centers,algorithm_kind)
plt.ioff()
plt.show()
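`k_means` itself is defined earlier in the file and not shown in this excerpt. For reference, a self-contained sketch of one Lloyd iteration (nearest-center assignment, then mean update) — the name `kmeans_step` and the toy data are illustrative, not from the source:

```python
import numpy as np

def kmeans_step(points, centers):
    """One Lloyd iteration: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    x = np.asarray(points, dtype=float)
    c = np.asarray(centers, dtype=float)
    # d[i, j] = Euclidean distance from point i to center j
    d = np.linalg.norm(x[:, None, :] - c[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    new_c = np.array([x[labels == j].mean(axis=0) if (labels == j).any() else c[j]
                      for j in range(len(c))])
    return labels, new_c

labels, new_c = kmeans_step([[0, 0], [0, 1], [10, 10]], [[0, 0], [9, 9]])
```

The convergence test in the full script (center movement below 0.01 on both axes) would simply repeat this step until `new_c` stops moving.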
'''
Save figs as gif
'''
im = Image.open(str(algorithm_kind) + "_1.png")
images=[]
for i in range(n_fig+1):
if i>1: # frame 1 is already open as im, so skip it here
fpath = str(algorithm_kind) + '_' + str(i) + ".png"
images.append(Image.open(fpath))
im.save(str(algorithm_kind) + '.gif', save_all=True, append_images=images,loop=1000,duration=500)
###########################################################################################################################
algorithm_kind = 'C-Means'
M = 1.1
'''
Initial sample sites
'''
Object_sites = []
sample_sites_locations = []
temp_sample_sites_locations, cluster_id = make_blobs(n_samples=N_SAMPLES, n_features=2, centers = CENTERS, cluster_std=CLUSTER_STD, random_state =9)
x_sample_location = temp_sample_sites_locations[:,0]
x_sample_location = x_sample_location.tolist()
y_sample_location = temp_sample_sites_locations[:,1]
y_sample_location = y_sample_location.tolist()
for n_sample in range(N_SAMPLES):
# collect site locations as [[x,y],...], then build the Site objects
sample_sites_locations.append([x_sample_location[n_sample],y_sample_location[n_sample]])
Object_sites.append( Site(n_sample,sample_sites_locations[n_sample][0],sample_sites_locations[n_sample][1]) )
'''
Init nu matrix and centers
'''
Object_centers = []
initial_centers_locations= []
x_center_location = []
y_center_location = []
# init nu matrix
nu_matrix = np.zeros((N_SAMPLES,K))
for i in range(N_SAMPLES):
for j in range(K):
nu_matrix[i][j] = np.random.uniform(0,1)
row_total = []
for i in range(N_SAMPLES):
row_total.append(sum(nu_matrix[i][:]))
for i in range(N_SAMPLES):
for j in range(K):
nu_matrix[i][j] = nu_matrix[i][j] / row_total[i]
# init centers
for k in range(K):
Object_centers.append( Center(k,0,0) )
Object_centers = cmeans_cal_center(Object_sites,Object_centers,nu_matrix,M)
for k in range(K):
x_center_location.append(Object_centers[k].x_location)
y_center_location.append(Object_centers[k].y_location)
initial_centers_locations.append([x_center_location[k],y_center_location[k]])
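The center update `cmeans_cal_center` used above is likewise defined earlier and not shown here. A minimal NumPy sketch of the standard membership-weighted mean it presumably computes (the name `fcm_centers` is hypothetical):

```python
import numpy as np

def fcm_centers(points, u, m):
    """Standard FCM center update: v_j = sum_i u_ij**m * x_i / sum_i u_ij**m."""
    x = np.asarray(points, dtype=float)   # shape (n_samples, 2)
    w = np.asarray(u, dtype=float) ** m   # fuzzified weights, shape (n_samples, K)
    return (w.T @ x) / w.sum(axis=0)[:, None]

# With crisp memberships this degenerates to ordinary per-cluster means:
centers = fcm_centers([[0.0, 0.0], [2.0, 2.0]], [[1.0, 0.0], [0.0, 1.0]], m=2.0)
```

With the random row-normalized `nu_matrix` built above, this yields the initial center positions that the script then plots.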
# show the initial situation
fig = plt.figure(figsize=(5,5))
plt.scatter(x_sample_location,y_sample_location, marker='o',edgecolor = 'green',c = 'white') # the sites before clustering
plt.scatter(x_center_location,y_center_location,s = 300,marker='*',c = 'red')
plt.title('Init by '+ algorithm_kind)
plt.savefig('CMeans_initial_centers_M=' + str(M) + '.png')
'''
C-Means and plt.show
'''
plt.ion()
[Object_centers,n_fig] = c_means(Object_sites, Object_centers,algorithm_kind,M)
plt.ioff()
plt.show()
'''
Save figs as gif
'''
im = Image.open(str(algorithm_kind)+ '_' +'M=' + str(M) + "_1.png")
images=[]
for i in range(n_fig+1):
if i>1:
fpath = str(algorithm_kind)+ '_' +'M=' + str(M) + '_' + str(i)+ ".png"
images.append(Image.open(fpath))
im.save(str(algorithm_kind)+ '_' +'M=' + str(M) + '.gif', save_all=True, append_images=images,loop=1000,duration=500)
###########################################################################################################################
M = 1.5
'''
Initial sample sites
'''
Object_sites = []
sample_sites_locations = []
temp_sample_sites_locations, cluster_id = make_blobs(n_samples=N_SAMPLES, n_features=2, centers = CENTERS, cluster_std=CLUSTER_STD, random_state =9)
x_sample_location = temp_sample_sites_locations[:,0]
x_sample_location = x_sample_location.tolist()
y_sample_location = temp_sample_sites_locations[:,1]
y_sample_location = y_sample_location.tolist()
for n_sample in range(N_SAMPLES):
# collect site locations as [[x,y],...], then build the Site objects
sample_sites_locations.append([x_sample_location[n_sample],y_sample_location[n_sample]])
Object_sites.append( Site(n_sample,sample_sites_locations[n_sample][0],sample_sites_locations[n_sample][1]) )
'''
Init nu matrix and centers
'''
Object_centers = []
initial_centers_locations= []
x_center_location = []
y_center_location = []
# init nu matrix
nu_matrix = np.zeros((N_SAMPLES,K))
for i in range(N_SAMPLES):
for j in range(K):
nu_matrix[i][j] = np.random.uniform(0,1)
row_total = []
for i in range(N_SAMPLES):
row_total.append(sum(nu_matrix[i][:]))
for i in range(N_SAMPLES):
for j in range(K):
nu_matrix[i][j] = nu_matrix[i][j] / row_total[i]
# init centers
for k in range(K):
Object_centers.append( Center(k,0,0) )
Object_centers = cmeans_cal_center(Object_sites,Object_centers,nu_matrix,M)
for k in range(K):
x_center_location.append(Object_centers[k].x_location)
y_center_location.append(Object_centers[k].y_location)
initial_centers_locations.append([x_center_location[k],y_center_location[k]])
# show the initial situation
fig = plt.figure(figsize=(5,5))
plt.scatter(x_sample_location,y_sample_location, marker='o',edgecolor = 'green',c = 'white') # the sites before clustering
plt.scatter(x_center_location,y_center_location,s = 300,marker='*',c = 'red')
plt.title('Init by '+ algorithm_kind)
plt.savefig('CMeans_initial_centers_M=' + str(M) + '.png')
'''
C-Means and plt.show
'''
plt.ion()
[Object_centers,n_fig] = c_means(Object_sites, Object_centers,algorithm_kind,M)
plt.ioff()
plt.show()
'''
Save figs as gif
'''
im = Image.open(str(algorithm_kind)+ '_' +'M=' + str(M) + "_1.png")
images=[]
for i in range(n_fig+1):
if i>1:
fpath = str(algorithm_kind)+ '_' +'M=' + str(M) + '_' + str(i)+ ".png"
images.append(Image.open(fpath))
im.save(str(algorithm_kind)+ '_' +'M=' + str(M) + '.gif', save_all=True, append_images=images,loop=1000,duration=500)
###########################################################################################################################
M = 2
'''
Initial sample sites
'''
Object_sites = []
sample_sites_locations = []
temp_sample_sites_locations, cluster_id = make_blobs(n_samples=N_SAMPLES, n_features=2, centers = CENTERS, cluster_std=CLUSTER_STD, random_state =9)
x_sample_location = temp_sample_sites_locations[:,0]
x_sample_location = x_sample_location.tolist()
y_sample_location = temp_sample_sites_locations[:,1]
y_sample_location = y_sample_location.tolist()
for n_sample in range(N_SAMPLES):
# collect site locations as [[x,y],...], then build the Site objects
sample_sites_locations.append([x_sample_location[n_sample],y_sample_location[n_sample]])
Object_sites.append( Site(n_sample,sample_sites_locations[n_sample][0],sample_sites_locations[n_sample][1]) )
'''
Init nu matrix and centers
'''
Object_centers = []
initial_centers_locations= []
x_center_location = []
y_center_location = []
# init nu matrix
nu_matrix = np.zeros((N_SAMPLES,K))
for i in range(N_SAMPLES):
for j in range(K):
nu_matrix[i][j] = np.random.uniform(0,1)
row_total = []
for i in range(N_SAMPLES):
row_total.append(sum(nu_matrix[i][:]))
for i in range(N_SAMPLES):
for j in range(K):
nu_matrix[i][j] = nu_matrix[i][j] / row_total[i]
# init centers
for k in range(K):
Object_centers.append( Center(k,0,0) )
Object_centers = cmeans_cal_center(Object_sites,Object_centers,nu_matrix,M)
for k in range(K):
x_center_location.append(Object_centers[k].x_location)
y_center_location.append(Object_centers[k].y_location)
initial_centers_locations.append([x_center_location[k],y_center_location[k]])
# show the initial situation
fig = plt.figure(figsize=(5,5))
plt.scatter(x_sample_location,y_sample_location, marker='o',edgecolor = 'green',c = 'white') # the sites before clustering
plt.scatter(x_center_location,y_center_location,s = 300,marker='*',c = 'red')
plt.title('Init by '+ algorithm_kind)
plt.savefig('CMeans_initial_centers_M=' + str(M) + '.png')
'''
C-Means and plt.show
'''
plt.ion()
[Object_centers,n_fig] = c_means(Object_sites, Object_centers,algorithm_kind,M)
plt.ioff()
plt.show()
'''
Save figs as gif
'''
im = Image.open(str(algorithm_kind)+ '_' +'M=' + str(M) + "_1.png")
images=[]
for i in range(n_fig+1):
if i>1:
fpath = str(algorithm_kind)+ '_' +'M=' + str(M) + '_' + str(i)+ ".png"
images.append(Image.open(fpath))
im.save(str(algorithm_kind)+ '_' +'M=' + str(M) + '.gif', save_all=True, append_images=images,loop=1000,duration=500)
###########################################################################################################################
M = 3
'''
Initial sample sites
'''
Object_sites = []
sample_sites_locations = []
temp_sample_sites_locations, cluster_id = make_blobs(n_samples=N_SAMPLES, n_features=2, centers = CENTERS, cluster_std=CLUSTER_STD, random_state =9)
x_sample_location = temp_sample_sites_locations[:,0]
x_sample_location = x_sample_location.tolist()
y_sample_location = temp_sample_sites_locations[:,1]
y_sample_location = y_sample_location.tolist()
for n_sample in range(N_SAMPLES):
# collect site locations as [[x,y],...], then build the Site objects
sample_sites_locations.append([x_sample_location[n_sample],y_sample_location[n_sample]])
Object_sites.append( Site(n_sample,sample_sites_locations[n_sample][0],sample_sites_locations[n_sample][1]) )
'''
Init nu matrix and centers
'''
Object_centers = []
initial_centers_locations= []
x_center_location = []
y_center_location = []
# init nu matrix
nu_matrix = np.zeros((N_SAMPLES,K))
for i in range(N_SAMPLES):
for j in range(K):
nu_matrix[i][j] = np.random.uniform(0,1)
row_total = []
for i in range(N_SAMPLES):
row_total.append(sum(nu_matrix[i][:]))
for i in range(N_SAMPLES):
for j in range(K):
nu_matrix[i][j] = nu_matrix[i][j] / row_total[i]
# init centers
for k in range(K):
Object_centers.append( Center(k,0,0) )
Object_centers = cmeans_cal_center(Object_sites,Object_centers,nu_matrix,M)
for k in range(K):
x_center_location.append(Object_centers[k].x_location)
y_center_location.append(Object_centers[k].y_location)
initial_centers_locations.append([x_center_location[k],y_center_location[k]])
# show the initial situation
fig = plt.figure(figsize=(5,5))
plt.scatter(x_sample_location,y_sample_location, marker='o',edgecolor = 'green',c = 'white') # the sites before clustering
plt.scatter(x_center_location,y_center_location,s = 300,marker='*',c = 'red')
plt.title('Init by '+ algorithm_kind)
plt.savefig('CMeans_initial_centers_M=' + str(M) + '.png')
'''
C-Means and plt.show
'''
plt.ion()
[Object_centers,n_fig] = c_means(Object_sites, Object_centers,algorithm_kind,M)
plt.ioff()
plt.show()
'''
Save figs as gif
'''
im = Image.open(str(algorithm_kind)+ '_' +'M=' + str(M) + "_1.png")
images=[]
for i in range(n_fig+1):
if i>1:
fpath = str(algorithm_kind)+ '_' +'M=' + str(M) + '_' + str(i)+ ".png"
images.append(Image.open(fpath))
im.save(str(algorithm_kind)+ '_' +'M=' + str(M) + '.gif', save_all=True, append_images=images,loop=1000,duration=500)
| 39.32715 | 152 | 0.62373 | 3,235 | 23,321 | 4.211747 | 0.05966 | 0.065761 | 0.052844 | 0.023413 | 0.824367 | 0.790606 | 0.756477 | 0.715963 | 0.703927 | 0.703927 | 0 | 0.014262 | 0.215257 | 23,321 | 592 | 153 | 39.393581 | 0.730233 | 0.053557 | 0 | 0.707944 | 0 | 0 | 0.028727 | 0.006373 | 0 | 0 | 0 | 0 | 0 | 1 | 0.028037 | false | 0 | 0.025701 | 0 | 0.086449 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7512fe4941a76979dd118f547608fcc853505af1 | 49 | py | Python | project/hello/__init__.py | LloydTao/python-starter-ci | 0e869913ea01e9f188988b8e9c27e963dff36a54 | [
"MIT"
] | null | null | null | project/hello/__init__.py | LloydTao/python-starter-ci | 0e869913ea01e9f188988b8e9c27e963dff36a54 | [
"MIT"
] | null | null | null | project/hello/__init__.py | LloydTao/python-starter-ci | 0e869913ea01e9f188988b8e9c27e963dff36a54 | [
"MIT"
] | null | null | null | from .hello import hello
from .hello import main
| 16.333333 | 24 | 0.795918 | 8 | 49 | 4.875 | 0.5 | 0.461538 | 0.769231 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.163265 | 49 | 2 | 25 | 24.5 | 0.95122 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
7546320e4f3dc5b48e150ea31a471bf261ad20b0 | 646 | py | Python | src/models/modules/swish.py | takedarts/skipresnet | d6f1e16042f8433a287355009e17e4e5768ad319 | [
"MIT"
] | 3 | 2022-02-03T13:25:12.000Z | 2022-02-04T16:12:23.000Z | src/models/modules/swish.py | takedarts/skipresnet | d6f1e16042f8433a287355009e17e4e5768ad319 | [
"MIT"
] | null | null | null | src/models/modules/swish.py | takedarts/skipresnet | d6f1e16042f8433a287355009e17e4e5768ad319 | [
"MIT"
] | 1 | 2022-02-04T12:28:02.000Z | 2022-02-04T12:28:02.000Z | from ..functions import swish, h_swish
import torch.nn as nn
class Swish(nn.Module):
def __init__(self, inplace: bool = False):
super().__init__()
self.inplace = inplace
def forward(self, x):
return swish(x, inplace=self.inplace)
def extra_repr(self):
return 'inplace={}'.format(self.inplace)
class HSwish(nn.Module):
def __init__(self, inplace=False):
super().__init__()
self.inplace = inplace
def forward(self, x):
return h_swish(x, inplace=self.inplace)
def extra_repr(self):
return 'inplace={}'.format(self.inplace)
| 22.275862 | 49 | 0.605263 | 79 | 646 | 4.696203 | 0.303797 | 0.237197 | 0.161725 | 0.080863 | 0.803235 | 0.803235 | 0.663073 | 0.663073 | 0.663073 | 0.663073 | 0 | 0 | 0.272446 | 646 | 28 | 50 | 23.071429 | 0.789362 | 0 | 0 | 0.555556 | 0 | 0 | 0.032362 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.111111 | 0.222222 | 0.777778 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
755e4edfa4231d57e30973ff48011975baa74fb9 | 207 | py | Python | alfworld/agents/environment/__init__.py | vzhong/alfworld | b0c78fca459020969c9d94eb4bef4085d5ccba6c | [
"MIT"
] | null | null | null | alfworld/agents/environment/__init__.py | vzhong/alfworld | b0c78fca459020969c9d94eb4bef4085d5ccba6c | [
"MIT"
] | null | null | null | alfworld/agents/environment/__init__.py | vzhong/alfworld | b0c78fca459020969c9d94eb4bef4085d5ccba6c | [
"MIT"
] | null | null | null | from alfworld.agents.environment.alfred_tw_env import AlfredTWEnv
# from alfworld.agents.environment.alfred_thor_env import AlfredThorEnv
# from alfworld.agents.environment.alfred_hybrid import AlfredHybrid
| 51.75 | 71 | 0.879227 | 26 | 207 | 6.807692 | 0.5 | 0.20339 | 0.305085 | 0.491525 | 0.59322 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.067633 | 207 | 3 | 72 | 69 | 0.917098 | 0.657005 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
f33c2dc6c95440d0e03e4ff86495c987c82e64d8 | 13,083 | py | Python | mld/rwlock/RWLock.py | leoplo/mld | 07bd19c129acd48ced43df9d480b9cf7eca59e84 | [
"MIT"
] | 3 | 2020-08-07T21:26:09.000Z | 2021-06-12T10:21:41.000Z | mld/rwlock/RWLock.py | leoplo/mld | 07bd19c129acd48ced43df9d480b9cf7eca59e84 | [
"MIT"
] | 6 | 2022-01-21T17:17:12.000Z | 2022-01-26T09:45:53.000Z | mld/rwlock/RWLock.py | leoplo/mld | 07bd19c129acd48ced43df9d480b9cf7eca59e84 | [
"MIT"
] | 3 | 2022-01-24T12:59:00.000Z | 2022-03-25T14:28:56.000Z | #!/usr/bin/env python3
"""
Read Write Lock
"""
import threading
import time
class RWLockRead(object):
"""
A Read/Write lock giving preference to Reader
"""
def __init__(self):
self.V_ReadCount = 0
self.A_Resource = threading.Lock()
self.A_LockReadCount = threading.Lock()
class _aReader(object):
def __init__(self, p_RWLock):
self.A_RWLock = p_RWLock
self.V_Locked = False
def acquire(self, blocking=1, timeout=-1):
p_TimeOut = None if (blocking and timeout < 0) else (timeout if blocking else 0)
c_DeadLine = None if p_TimeOut is None else (time.time() + p_TimeOut)
if not self.A_RWLock.A_LockReadCount.acquire(blocking=1, timeout=-1 if c_DeadLine is None else max(0, c_DeadLine - time.time())):
return False
self.A_RWLock.V_ReadCount += 1
if self.A_RWLock.V_ReadCount == 1:
if not self.A_RWLock.A_Resource.acquire(blocking=1, timeout=-1 if c_DeadLine is None else max(0, c_DeadLine - time.time())):
self.A_RWLock.V_ReadCount -= 1
self.A_RWLock.A_LockReadCount.release()
return False
self.A_RWLock.A_LockReadCount.release()
self.V_Locked = True
return True
def release(self):
if not self.V_Locked: raise RuntimeError("cannot release un-acquired lock")
self.V_Locked = False
self.A_RWLock.A_LockReadCount.acquire()
self.A_RWLock.V_ReadCount -= 1
if self.A_RWLock.V_ReadCount == 0:
self.A_RWLock.A_Resource.release()
self.A_RWLock.A_LockReadCount.release()
def locked(self):
return self.V_Locked
def __enter__(self):
self.acquire()
def __exit__(self, p_Type, p_Value, p_Traceback):
self.release()
class _aWriter(object):
def __init__(self, p_RWLock):
self.A_RWLock = p_RWLock
self.V_Locked = False
def acquire(self, blocking=1, timeout=-1):
self.V_Locked = self.A_RWLock.A_Resource.acquire(blocking, timeout)
return self.V_Locked
def release(self):
if not self.V_Locked: raise RuntimeError("cannot release un-acquired lock")
self.V_Locked = False
self.A_RWLock.A_Resource.release()
def locked(self):
return self.V_Locked
def __enter__(self):
self.acquire()
def __exit__(self, p_Type, p_Value, p_Traceback):
self.release()
def genRlock(self):
"""
Generate a reader lock
"""
return RWLockRead._aReader(self)
def genWlock(self):
"""
Generate a writer lock
"""
return RWLockRead._aWriter(self)
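For readers, the protocol above boils down to a counter guarded by one lock: the first reader takes the resource lock, the last one releases it, and a writer simply takes the resource lock directly. A compact restatement of that reader-preference pattern (`SimpleRWLockRead` is a hypothetical illustration, not part of the original API):

```python
import threading

class SimpleRWLockRead:
    """Compact sketch of the reader-preference pattern: the first
    reader locks the shared resource, the last reader unlocks it."""
    def __init__(self):
        self._readers = 0
        self._resource = threading.Lock()
        self._count_lock = threading.Lock()

    def acquire_read(self):
        with self._count_lock:
            self._readers += 1
            if self._readers == 1:
                self._resource.acquire()   # first reader shuts out writers

    def release_read(self):
        with self._count_lock:
            self._readers -= 1
            if self._readers == 0:
                self._resource.release()   # last reader lets writers back in

    def acquire_write(self):
        self._resource.acquire()

    def release_write(self):
        self._resource.release()

lock = SimpleRWLockRead()
lock.acquire_read()
lock.acquire_read()                                 # concurrent readers are fine
held = not lock._resource.acquire(blocking=False)   # a writer would block here
lock.release_read()
lock.release_read()
```

The class above layers timeouts and context-manager support on top of this same core.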
class RWLockWrite(object):
"""
A Read/Write lock giving preference to Writer
"""
def __init__(self):
self.V_ReadCount = 0
self.V_WriteCount = 0
self.A_LockReadCount = threading.Lock()
self.A_LockWriteCount = threading.Lock()
self.A_LockReadEntry = threading.Lock()
self.A_LockReadTry = threading.Lock()
self.A_Resource = threading.Lock()
class _aReader(object):
def __init__(self, p_RWLock):
self.A_RWLock = p_RWLock
self.V_Locked = False
def acquire(self, blocking=1, timeout=-1):
p_TimeOut = None if (blocking and timeout < 0) else (timeout if blocking else 0)
c_DeadLine = None if p_TimeOut is None else (time.time() + p_TimeOut)
if not self.A_RWLock.A_LockReadEntry.acquire(blocking=1, timeout=-1 if c_DeadLine is None else max(0,
c_DeadLine - time.time())):
return False
if not self.A_RWLock.A_LockReadTry.acquire(blocking=1, timeout=-1 if c_DeadLine is None else max(0,
c_DeadLine - time.time())):
self.A_RWLock.A_LockReadEntry.release()
return False
if not self.A_RWLock.A_LockReadCount.acquire(blocking=1, timeout=-1 if c_DeadLine is None else max(0,
c_DeadLine - time.time())):
self.A_RWLock.A_LockReadTry.release()
self.A_RWLock.A_LockReadEntry.release()
return False
self.A_RWLock.V_ReadCount += 1
if (self.A_RWLock.V_ReadCount == 1):
if not self.A_RWLock.A_Resource.acquire(blocking=1, timeout=-1 if c_DeadLine is None else max(0,
c_DeadLine - time.time())):
self.A_RWLock.A_LockReadTry.release()
self.A_RWLock.A_LockReadEntry.release()
self.A_RWLock.V_ReadCount -= 1
self.A_RWLock.A_LockReadCount.release()
return False
self.A_RWLock.A_LockReadCount.release()
self.A_RWLock.A_LockReadTry.release()
self.A_RWLock.A_LockReadEntry.release()
self.V_Locked = True
return True
def release(self):
if not self.V_Locked: raise RuntimeError("cannot release un-acquired lock")
self.V_Locked = False
self.A_RWLock.A_LockReadCount.acquire()
self.A_RWLock.V_ReadCount -= 1
if (self.A_RWLock.V_ReadCount == 0):
self.A_RWLock.A_Resource.release()
self.A_RWLock.A_LockReadCount.release()
def locked(self):
return self.V_Locked
def __enter__(self):
self.acquire()
def __exit__(self, p_Type, p_Value, p_Traceback):
self.release()
class _aWriter(object):
def __init__(self, p_RWLock):
self.A_RWLock = p_RWLock
self.V_Locked = False
def acquire(self, blocking=1, timeout=-1):
p_TimeOut = None if (blocking and timeout < 0) else (timeout if blocking else 0)
c_DeadLine = None if p_TimeOut is None else (time.time() + p_TimeOut)
if not self.A_RWLock.A_LockWriteCount.acquire(blocking=1, timeout=-1 if c_DeadLine is None else max(0,
c_DeadLine - time.time())):
return False
self.A_RWLock.V_WriteCount += 1
if (self.A_RWLock.V_WriteCount == 1):
if not self.A_RWLock.A_LockReadTry.acquire(blocking=1, timeout=-1 if c_DeadLine is None else max(0,
c_DeadLine - time.time())):
self.A_RWLock.V_WriteCount -= 1
self.A_RWLock.A_LockWriteCount.release()
return False
self.A_RWLock.A_LockWriteCount.release()
if not self.A_RWLock.A_Resource.acquire(blocking=1, timeout=-1 if c_DeadLine is None else max(0,
c_DeadLine - time.time())):
self.A_RWLock.A_LockWriteCount.acquire()
self.A_RWLock.V_WriteCount -= 1
if self.A_RWLock.V_WriteCount == 0:
self.A_RWLock.A_LockReadTry.release()
self.A_RWLock.A_LockWriteCount.release()
return False
self.V_Locked = True
return True
def release(self):
if not self.V_Locked: raise RuntimeError("cannot release un-acquired lock")
self.V_Locked = False
self.A_RWLock.A_Resource.release()
self.A_RWLock.A_LockWriteCount.acquire()
self.A_RWLock.V_WriteCount -= 1
if (self.A_RWLock.V_WriteCount == 0):
self.A_RWLock.A_LockReadTry.release()
self.A_RWLock.A_LockWriteCount.release()
def locked(self):
return self.V_Locked
def __enter__(self):
self.acquire()
def __exit__(self, p_Type, p_Value, p_Traceback):
self.release()
def genRlock(self):
"""
Generate a reader lock
"""
return RWLockWrite._aReader(self)
def genWlock(self):
"""
Generate a writer lock
"""
return RWLockWrite._aWriter(self)
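The writer-preference variant adds a read gate that the first pending writer closes, so newly arriving readers queue behind waiting writers instead of starving them. A compact hypothetical sketch of that pattern (`SimpleRWLockWrite` is illustrative only):

```python
import threading

class SimpleRWLockWrite:
    """Compact sketch of the writer-preference pattern: a pending
    writer closes the read gate so new readers queue behind it."""
    def __init__(self):
        self._readers = 0
        self._writers = 0
        self._read_count_lock = threading.Lock()
        self._write_count_lock = threading.Lock()
        self._read_gate = threading.Lock()   # closed while writers are waiting
        self._resource = threading.Lock()

    def acquire_read(self):
        with self._read_gate:                # blocks if a writer is pending
            with self._read_count_lock:
                self._readers += 1
                if self._readers == 1:
                    self._resource.acquire()

    def release_read(self):
        with self._read_count_lock:
            self._readers -= 1
            if self._readers == 0:
                self._resource.release()

    def acquire_write(self):
        with self._write_count_lock:
            self._writers += 1
            if self._writers == 1:
                self._read_gate.acquire()    # first writer shuts out new readers
        self._resource.acquire()

    def release_write(self):
        self._resource.release()
        with self._write_count_lock:
            self._writers -= 1
            if self._writers == 0:
                self._read_gate.release()

lk = SimpleRWLockWrite()
lk.acquire_write()
gate_closed = not lk._read_gate.acquire(blocking=False)  # new readers queue here
lk.release_write()
```

The extra entry/try locks in the full class above serve to make this handoff safe under timeouts as well.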
class RWLockFair(object):
"""
A Read/Write lock giving fairness to both Reader and Writer
"""
def __init__(self):
self.V_ReadCount = 0
self.A_LockReadCount = threading.Lock()
self.A_LockRead = threading.Lock()
self.A_LockWrite = threading.Lock()
class _aReader(object):
def __init__(self, p_RWLock):
self.A_RWLock = p_RWLock
self.V_Locked = False
def acquire(self, blocking=1, timeout=-1):
p_TimeOut = None if (blocking and timeout < 0) else (timeout if blocking else 0)
c_DeadLine = None if p_TimeOut is None else (time.time() + p_TimeOut)
if not self.A_RWLock.A_LockRead.acquire(blocking=1, timeout=-1 if c_DeadLine is None else max(0,
c_DeadLine - time.time())):
return False
if not self.A_RWLock.A_LockReadCount.acquire(blocking=1, timeout=-1 if c_DeadLine is None else max(0,
c_DeadLine - time.time())):
self.A_RWLock.A_LockRead.release()
return False
self.A_RWLock.V_ReadCount += 1
if self.A_RWLock.V_ReadCount == 1:
if not self.A_RWLock.A_LockWrite.acquire(blocking=1, timeout=-1 if c_DeadLine is None else max(0,
c_DeadLine - time.time())):
self.A_RWLock.V_ReadCount -= 1
self.A_RWLock.A_LockReadCount.release()
self.A_RWLock.A_LockRead.release()
return False
self.A_RWLock.A_LockReadCount.release()
self.A_RWLock.A_LockRead.release()
self.V_Locked = True
return True
def release(self):
if not self.V_Locked: raise RuntimeError("cannot release un-acquired lock")
self.V_Locked = False
self.A_RWLock.A_LockReadCount.acquire()
self.A_RWLock.V_ReadCount -= 1
if self.A_RWLock.V_ReadCount == 0:
self.A_RWLock.A_LockWrite.release()
self.A_RWLock.A_LockReadCount.release()
def locked(self):
return self.V_Locked
def __enter__(self):
self.acquire()
def __exit__(self, p_Type, p_Value, p_Traceback):
self.release()
class _aWriter(object):
def __init__(self, p_RWLock):
self.A_RWLock = p_RWLock
self.V_Locked = False
def acquire(self, blocking=1, timeout=-1):
p_TimeOut = None if (blocking and timeout < 0) else (timeout if blocking else 0)
c_DeadLine = None if p_TimeOut is None else (time.time() + p_TimeOut)
if not self.A_RWLock.A_LockRead.acquire(blocking=1, timeout=-1 if c_DeadLine is None else max(0,
c_DeadLine - time.time())):
return False
if not self.A_RWLock.A_LockWrite.acquire(blocking=1, timeout=-1 if c_DeadLine is None else max(0,
c_DeadLine - time.time())):
self.A_RWLock.A_LockRead.release()
return False
self.V_Locked = True
return True
def release(self):
if not self.V_Locked: raise RuntimeError("cannot release un-acquired lock")
self.V_Locked = False
self.A_RWLock.A_LockWrite.release()
self.A_RWLock.A_LockRead.release()
def locked(self):
return self.V_Locked
def __enter__(self):
self.acquire()
def __exit__(self, p_Type, p_Value, p_Traceback):
self.release()
def genRlock(self):
"""
Generate a reader lock
"""
return RWLockFair._aReader(self)
def genWlock(self):
"""
Generate a writer lock
"""
return RWLockFair._aWriter(self)
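All three classes repeat the expression `-1 if c_DeadLine is None else max(0, c_DeadLine - time.time())` to turn an absolute deadline into the timeout value `threading.Lock.acquire` expects. A small hypothetical helper (`remaining` is not in the module) makes that convention explicit:

```python
import time

def remaining(deadline):
    """Map an absolute deadline (or None) onto the timeout argument
    threading.Lock.acquire expects: -1 means block forever, 0 means
    poll once because the deadline has already passed."""
    return -1 if deadline is None else max(0, deadline - time.time())

no_deadline = remaining(None)          # -1: block indefinitely
expired = remaining(time.time() - 5)   # 0: deadline already passed
```

Computing the deadline once up front, as the code above does, keeps a multi-lock acquire from exceeding the caller's total timeout.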
| 40.504644 | 141 | 0.531759 | 1,497 | 13,083 | 4.396794 | 0.048096 | 0.069128 | 0.135369 | 0.096627 | 0.944698 | 0.930568 | 0.919629 | 0.911729 | 0.886813 | 0.863719 | 0 | 0.011299 | 0.384392 | 13,083 | 322 | 142 | 40.630435 | 0.805935 | 0.024994 | 0 | 0.883333 | 0 | 0 | 0.014841 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.1875 | false | 0 | 0.008333 | 0.025 | 0.366667 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
f3bc855d6fc535613a79a201ece5278720197652 | 4,624 | py | Python | example/tasks/migrations/0001_initial.py | morlandi/django-task | 19c00fd2f73e60c0c11a33fe195546f567f29361 | [
"MIT"
] | 46 | 2017-11-02T22:23:14.000Z | 2022-02-16T11:56:58.000Z | example/tasks/migrations/0001_initial.py | morlandi/django-task | 19c00fd2f73e60c0c11a33fe195546f567f29361 | [
"MIT"
] | 10 | 2018-08-28T06:56:14.000Z | 2021-12-27T17:49:30.000Z | example/tasks/migrations/0001_initial.py | morlandi/django-task | 19c00fd2f73e60c0c11a33fe195546f567f29361 | [
"MIT"
] | 6 | 2018-02-01T12:26:02.000Z | 2021-09-07T11:13:04.000Z | # Generated by Django 2.0.7 on 2018-07-29 07:11
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import uuid
class Migration(migrations.Migration):
initial = True
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
operations = [
migrations.CreateModel(
name='CountBeansTask',
fields=[
('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False, unique=True, verbose_name='id')),
('description', models.CharField(blank=True, max_length=256, verbose_name='description')),
('created_on', models.DateTimeField(auto_now_add=True, verbose_name='created on')),
('started_on', models.DateTimeField(null=True, verbose_name='started on')),
('completed_on', models.DateTimeField(null=True, verbose_name='completed on')),
('job_id', models.CharField(blank=True, max_length=128, verbose_name='job id')),
('status', models.CharField(choices=[('PENDING', 'PENDING'), ('RECEIVED', 'RECEIVED'), ('STARTED', 'STARTED'), ('PROGESS', 'PROGESS'), ('SUCCESS', 'SUCCESS'), ('FAILURE', 'FAILURE'), ('REVOKED', 'REVOKED'), ('REJECTED', 'REJECTED'), ('RETRY', 'RETRY'), ('IGNORED', 'IGNORED')], db_index=True, default='PENDING', max_length=128, verbose_name='status')),
('mode', models.CharField(choices=[('UNKNOWN', 'UNKNOWN'), ('SYNC', 'SYNC'), ('ASYNC', 'ASYNC')], db_index=True, default='UNKNOWN', max_length=128, verbose_name='mode')),
('failure_reason', models.CharField(blank=True, max_length=256, verbose_name='failure reason')),
('progress', models.IntegerField(blank=True, null=True, verbose_name='progress')),
('log_text', models.TextField(blank=True, verbose_name='log text')),
('num_beans', models.PositiveIntegerField(default=100)),
('created_by', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ('-created_on',),
'get_latest_by': 'created_on',
'abstract': False,
},
),
migrations.CreateModel(
name='SendEmailTask',
fields=[
('id', models.UUIDField(default=uuid.uuid4, editable=False, primary_key=True, serialize=False, unique=True, verbose_name='id')),
('description', models.CharField(blank=True, max_length=256, verbose_name='description')),
('created_on', models.DateTimeField(auto_now_add=True, verbose_name='created on')),
('started_on', models.DateTimeField(null=True, verbose_name='started on')),
('completed_on', models.DateTimeField(null=True, verbose_name='completed on')),
('job_id', models.CharField(blank=True, max_length=128, verbose_name='job id')),
('status', models.CharField(choices=[('PENDING', 'PENDING'), ('RECEIVED', 'RECEIVED'), ('STARTED', 'STARTED'), ('PROGESS', 'PROGESS'), ('SUCCESS', 'SUCCESS'), ('FAILURE', 'FAILURE'), ('REVOKED', 'REVOKED'), ('REJECTED', 'REJECTED'), ('RETRY', 'RETRY'), ('IGNORED', 'IGNORED')], db_index=True, default='PENDING', max_length=128, verbose_name='status')),
('mode', models.CharField(choices=[('UNKNOWN', 'UNKNOWN'), ('SYNC', 'SYNC'), ('ASYNC', 'ASYNC')], db_index=True, default='UNKNOWN', max_length=128, verbose_name='mode')),
('failure_reason', models.CharField(blank=True, max_length=256, verbose_name='failure reason')),
('progress', models.IntegerField(blank=True, null=True, verbose_name='progress')),
('log_text', models.TextField(blank=True, verbose_name='log text')),
('sender', models.CharField(max_length=256)),
('recipients', models.TextField(help_text='put addresses in separate rows')),
('subject', models.CharField(max_length=256)),
('message', models.TextField(blank=True)),
('created_by', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to=settings.AUTH_USER_MODEL)),
],
options={
'ordering': ('-created_on',),
'get_latest_by': 'created_on',
'abstract': False,
},
),
]
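Outside Django, the shape these migrations give each task row — a UUID primary key plus a status constrained to a fixed choice list (kept verbatim here, including the source's 'PROGESS' spelling) — can be sketched as plain Python; `new_task_row` and `TASK_STATUSES` are illustrative names, not part of the app:

```python
import uuid

# Status vocabulary mirrored from the migration above.
TASK_STATUSES = ['PENDING', 'RECEIVED', 'STARTED', 'PROGESS', 'SUCCESS',
                 'FAILURE', 'REVOKED', 'REJECTED', 'RETRY', 'IGNORED']

def new_task_row(status='PENDING'):
    """Plain-dict analogue of one task row: UUID primary key, bounded
    status, progress left unset until the job reports it."""
    if status not in TASK_STATUSES:
        raise ValueError('invalid status: %s' % status)
    return {'id': uuid.uuid4(), 'status': status, 'progress': None}

row = new_task_row()
```

Django enforces the choice list at form/validation time rather than in the database, which is why the migration stores it as a plain `CharField` with `choices`.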
| 68 | 394 | 0.611375 | 486 | 4,624 | 5.650206 | 0.22428 | 0.088128 | 0.06555 | 0.05244 | 0.827385 | 0.80772 | 0.80772 | 0.80772 | 0.80772 | 0.80772 | 0 | 0.015414 | 0.214317 | 4,624 | 67 | 395 | 69.014925 | 0.740435 | 0.009732 | 0 | 0.666667 | 1 | 0 | 0.215425 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.066667 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
45eb908a8e946fd554c68395879f136b233b8813 | 86 | py | Python | models/modules/__init__.py | lulindev/UNet-pytorch | cf91e251891a2926f46b628985ebdda66bc637a2 | [
"MIT"
] | 3 | 2021-04-07T08:05:44.000Z | 2021-06-25T16:55:56.000Z | models/modules/__init__.py | lulindev/UNet-pytorch | cf91e251891a2926f46b628985ebdda66bc637a2 | [
"MIT"
] | null | null | null | models/modules/__init__.py | lulindev/UNet-pytorch | cf91e251891a2926f46b628985ebdda66bc637a2 | [
"MIT"
] | 2 | 2021-08-19T10:23:32.000Z | 2021-12-15T03:26:11.000Z | import models.modules.aspp
import models.modules.attention
import models.modules.conv
| 21.5 | 31 | 0.860465 | 12 | 86 | 6.166667 | 0.5 | 0.486486 | 0.77027 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.069767 | 86 | 3 | 32 | 28.666667 | 0.925 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
45fe54473410f68e02c9684e2073ff9d98899ac2 | 118 | py | Python | pandapower/test/__init__.py | Zamwell/pandapower | ce51946342109e969b87b60c8883d7eec02d3060 | [
"BSD-3-Clause"
] | 104 | 2017-02-21T17:13:51.000Z | 2022-03-21T13:52:27.000Z | pandapower/test/__init__.py | lvzhibai/pandapower | 24ed3056558887cc89f67d15b5527523990ae9a1 | [
"BSD-3-Clause"
] | 126 | 2017-02-15T17:09:08.000Z | 2018-07-16T13:25:15.000Z | pandapower/test/__init__.py | gdgarcia/pandapower | 630e3278ca012535f78282ae73f1b86f3fe932fc | [
"BSD-3-Clause"
] | 57 | 2017-03-08T13:49:32.000Z | 2022-02-28T10:36:55.000Z | from pandapower.test.conftest import *
from pandapower.test.toolbox import *
from pandapower.test.run_tests import *
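The wildcard imports above pull every public name from each test module into the package namespace; a module can narrow what `from module import *` exposes by defining `__all__`. A small self-contained sketch of that mechanism (the `demo_toolbox` module here is synthetic, not part of pandapower):

```python
import sys
import types

# Build a throwaway module that defines __all__ plus a "private" helper.
toolbox = types.ModuleType("demo_toolbox")
exec(
    "__all__ = ['run_tests']\n"
    "def run_tests():\n"
    "    return 'ok'\n"
    "def _helper():\n"
    "    return 'hidden'\n",
    toolbox.__dict__,
)
sys.modules["demo_toolbox"] = toolbox

# "from demo_toolbox import *" binds only the names listed in __all__.
namespace = {}
exec("from demo_toolbox import *", namespace)
print(sorted(k for k in namespace if not k.startswith("__")))  # ['run_tests']
```

Without `__all__`, a star import would instead bind every name that does not start with an underscore.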
| 23.6 | 39 | 0.813559 | 16 | 118 | 5.9375 | 0.5 | 0.442105 | 0.568421 | 0.505263 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.110169 | 118 | 4 | 40 | 29.5 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
3417b1766c58a5716fa4d15e6bb5ae54c6e0bf00 | 102 | py | Python | src/sensai/sklearn/__init__.py | schroedk/sensAI | a2d6d7c6ab7bed9ccd5eac216dd988c49d69aec7 | [
"MIT"
] | 10 | 2020-02-19T09:16:54.000Z | 2022-02-04T16:19:33.000Z | src/sensai/sklearn/__init__.py | schroedk/sensAI | a2d6d7c6ab7bed9ccd5eac216dd988c49d69aec7 | [
"MIT"
] | 47 | 2020-03-11T16:26:51.000Z | 2022-02-04T15:29:40.000Z | src/sensai/sklearn/__init__.py | schroedk/sensAI | a2d6d7c6ab7bed9ccd5eac216dd988c49d69aec7 | [
"MIT"
] | 5 | 2020-03-12T21:33:22.000Z | 2020-12-21T14:43:04.000Z | from . import sklearn_regression as regression
from . import sklearn_classification as classification
| 34 | 54 | 0.862745 | 12 | 102 | 7.166667 | 0.5 | 0.232558 | 0.395349 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.117647 | 102 | 2 | 55 | 51 | 0.955556 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
1b041f9cbe6ccc1bf607d8719dc90e2a28cefc29 | 11,008 | py | Python | testcases/generated/kms_test.py | Tanc009/jdcloud-cli | 4e11de77c68501f44e7026c0ad1c24e5d043197e | [
"Apache-2.0"
] | 95 | 2018-06-05T10:49:32.000Z | 2019-12-31T11:07:36.000Z | testcases/generated/kms_test.py | Tanc009/jdcloud-cli | 4e11de77c68501f44e7026c0ad1c24e5d043197e | [
"Apache-2.0"
] | 22 | 2018-06-05T10:58:59.000Z | 2020-07-31T12:13:19.000Z | testcases/generated/kms_test.py | Tanc009/jdcloud-cli | 4e11de77c68501f44e7026c0ad1c24e5d043197e | [
"Apache-2.0"
] | 21 | 2018-06-04T12:50:27.000Z | 2020-11-05T10:55:28.000Z | # coding=utf8
# Copyright 2018 JDCLOUD.COM
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# NOTE: This class is auto generated by the jdcloud code generator program.
import unittest
import os
import json
class KmsTest(unittest.TestCase):
def test_describe_key_list(self):
cmd = """python ../../main.py kms describe-key-list """
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_create_key(self):
cmd = """python ../../main.py kms create-key --key-cfg '{"":""}'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_describe_key(self):
cmd = """python ../../main.py kms describe-key --key-id 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_update_key_description(self):
cmd = """python ../../main.py kms update-key-description --key-id 'xxx' --key-cfg '{"":""}'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_enable_key(self):
cmd = """python ../../main.py kms enable-key --key-id 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_disable_key(self):
cmd = """python ../../main.py kms disable-key --key-id 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_schedule_key_deletion(self):
cmd = """python ../../main.py kms schedule-key-deletion --key-id 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_cancel_key_deletion(self):
cmd = """python ../../main.py kms cancel-key-deletion --key-id 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_key_rotation(self):
cmd = """python ../../main.py kms key-rotation --key-id 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_encrypt(self):
cmd = """python ../../main.py kms encrypt --key-id 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_decrypt(self):
cmd = """python ../../main.py kms decrypt --key-id 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_get_public_key(self):
cmd = """python ../../main.py kms get-public-key --key-id 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_sign(self):
cmd = """python ../../main.py kms sign --key-id 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_validate(self):
cmd = """python ../../main.py kms validate --key-id 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_generate_data_key(self):
cmd = """python ../../main.py kms generate-data-key --key-id 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_describe_key_detail(self):
cmd = """python ../../main.py kms describe-key-detail --key-id 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_enable_key_version(self):
cmd = """python ../../main.py kms enable-key-version --key-id 'xxx' --version 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_disable_key_version(self):
cmd = """python ../../main.py kms disable-key-version --key-id 'xxx' --version 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_schedule_key_version_deletion(self):
cmd = """python ../../main.py kms schedule-key-version-deletion --key-id 'xxx' --version 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_cancel_key_version_deletion(self):
cmd = """python ../../main.py kms cancel-key-version-deletion --key-id 'xxx' --version 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_describe_secret_list(self):
cmd = """python ../../main.py kms describe-secret-list """
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_create_secret(self):
cmd = """python ../../main.py kms create-secret --secret-cfg '{"":""}'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_import_secret(self):
cmd = """python ../../main.py kms import-secret """
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_describe_secret_version_list(self):
cmd = """python ../../main.py kms describe-secret-version-list --secret-id 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_update_secret(self):
cmd = """python ../../main.py kms update-secret --secret-id 'xxx' --secret-desc-cfg '{"":""}'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_enable_secret(self):
cmd = """python ../../main.py kms enable-secret --secret-id 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_disable_secret(self):
cmd = """python ../../main.py kms disable-secret --secret-id 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_delete_secret(self):
cmd = """python ../../main.py kms delete-secret --secret-id 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_create_secret_version(self):
cmd = """python ../../main.py kms create-secret-version --secret-id 'xxx' --secret-version-cfg '{"":""}'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_export_secret(self):
cmd = """python ../../main.py kms export-secret --secret-id 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_describe_secret_version_info(self):
cmd = """python ../../main.py kms describe-secret-version-info --secret-id 'xxx' --version 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_update_secret_version(self):
cmd = """python ../../main.py kms update-secret-version --secret-id 'xxx' --version 'xxx' --secret-time-cfg '{"":""}'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_enable_secret_version(self):
cmd = """python ../../main.py kms enable-secret-version --secret-id 'xxx' --version 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_disable_secret_version(self):
cmd = """python ../../main.py kms disable-secret-version --secret-id 'xxx' --version 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
def test_delete_secret_version(self):
cmd = """python ../../main.py kms delete-secret-version --secret-id 'xxx' --version 'xxx'"""
with os.popen(cmd) as f:
content = f.read()
print(content)
result = json.loads(content)
self.assertIsInstance(result, dict)
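Every generated test above repeats the same three steps: run the CLI, read its stdout, parse it as JSON, and assert the result is a dict. A hypothetical helper (`run_cli_json` is not part of the generated suite) could centralize that, using `subprocess.run` instead of `os.popen` for explicit error handling; the runner is injectable so the parsing logic can be exercised without spawning a real process:

```python
import json
import shlex
import subprocess


def run_cli_json(command, runner=None):
    """Run a CLI command string and parse its stdout as a JSON object.

    `runner` maps an argument list to the command's stdout; the default
    spawns a real process, while tests can inject a stub instead.
    """
    args = shlex.split(command)
    if runner is None:
        runner = lambda argv: subprocess.run(
            argv, capture_output=True, text=True
        ).stdout
    result = json.loads(runner(args))
    if not isinstance(result, dict):
        raise ValueError("expected a JSON object in CLI output")
    return result


# Exercised with a stub runner instead of a real process:
print(run_cli_json("kms describe-key --key-id 'xxx'",
                   runner=lambda argv: '{"requestId": "123"}'))
# prints {'requestId': '123'}
```

Each generated test body would then shrink to a single call plus its assertion.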
| 32.281525 | 129 | 0.581668 | 1,360 | 11,008 | 4.642647 | 0.091912 | 0.038803 | 0.072062 | 0.094235 | 0.866012 | 0.863161 | 0.84574 | 0.815173 | 0.725055 | 0.678809 | 0 | 0.001126 | 0.274073 | 11,008 | 340 | 130 | 32.376471 | 0.789013 | 0.057594 | 0 | 0.702811 | 0 | 0.048193 | 0.218907 | 0.025203 | 0 | 0 | 0 | 0 | 0.140562 | 1 | 0.140562 | false | 0 | 0.02008 | 0 | 0.164659 | 0.140562 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1b341eb72e2a8c8ad0b3d7ae52c414c77bf45b5e | 137 | py | Python | utils/__init__.py | ibug-group/face_reid | 85355ab2e19276d23557402f22f44e66527d9448 | [
"MIT"
] | 4 | 2021-02-08T08:18:59.000Z | 2022-02-07T11:57:44.000Z | utils/__init__.py | ibug-group/face_reid | 85355ab2e19276d23557402f22f44e66527d9448 | [
"MIT"
] | 1 | 2020-12-19T04:05:37.000Z | 2021-01-21T04:33:45.000Z | utils/__init__.py | IntelligentBehaviourUnderstandingGroup/face_reid | 85355ab2e19276d23557402f22f44e66527d9448 | [
"MIT"
] | 2 | 2021-05-14T11:20:40.000Z | 2022-02-07T11:57:46.000Z | from .dlib_utils import *
from .naive_face_tracker import *
from .head_pose_estimator import *
from .retina_face_pose_estimator import *
| 27.4 | 41 | 0.824818 | 20 | 137 | 5.25 | 0.55 | 0.285714 | 0.361905 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.116788 | 137 | 4 | 42 | 34.25 | 0.867769 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
84b11ee38eb6ee3b188c9788524592dd09e55e68 | 2,326 | py | Python | tests/device/test_get_set_temperature_offset_parameters.py | Sensirion/python-i2c-sen5x | de1fbf0cc73b8fa3f89d6bcd59db321d9bd168d3 | [
"BSD-3-Clause"
] | null | null | null | tests/device/test_get_set_temperature_offset_parameters.py | Sensirion/python-i2c-sen5x | de1fbf0cc73b8fa3f89d6bcd59db321d9bd168d3 | [
"BSD-3-Clause"
] | 1 | 2022-02-21T05:55:15.000Z | 2022-02-21T07:39:58.000Z | tests/device/test_get_set_temperature_offset_parameters.py | Sensirion/python-i2c-sen5x | de1fbf0cc73b8fa3f89d6bcd59db321d9bd168d3 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
# (c) Copyright 2022 Sensirion AG, Switzerland
import pytest
@pytest.mark.needs_device
def test_no_arg(device):
"""
Test if get_temperature_offset_parameters() and
set_temperature_offset_parameters() work as expected when not passing the
raw parameter.
"""
result = device.set_temperature_offset_parameters(1.2, 0.34, 5.6)
assert result is None
offset, slope, time_constant = device.get_temperature_offset_parameters()
assert type(offset) is float
assert type(slope) is float
assert type(time_constant) is int
assert offset == 1.2
assert slope == 0.34
assert time_constant == 6
@pytest.mark.needs_device
def test_raw_false(device):
"""
Test if get_temperature_offset_parameters() and
set_temperature_offset_parameters() work as expected when passing
raw=False.
"""
result = device.set_temperature_offset_parameters(1.2, 0.34, 5.6,
raw=False)
assert result is None
offset, slope, time_constant = \
device.get_temperature_offset_parameters(raw=False)
assert type(offset) is float
assert type(slope) is float
assert type(time_constant) is int
assert offset == 1.2
assert slope == 0.34
assert time_constant == 6
# Check scaling
offset, slope, time_constant = \
device.get_temperature_offset_parameters(raw=True)
assert offset == 240
assert slope == 3400
assert time_constant == 6
@pytest.mark.needs_device
def test_raw_true(device):
"""
Test if get_temperature_offset_parameters() and
set_temperature_offset_parameters() work as expected when passing
raw=True.
"""
result = device.set_temperature_offset_parameters(11, 22, 33, raw=True)
assert result is None
offset, slope, time_constant = \
device.get_temperature_offset_parameters(raw=True)
assert type(offset) is int
assert type(slope) is int
assert type(time_constant) is int
assert offset == 11
assert slope == 22
assert time_constant == 33
# Check scaling
offset, slope, time_constant = \
device.get_temperature_offset_parameters(raw=False)
assert offset == pytest.approx(0.055)
assert slope == pytest.approx(0.0022)
assert time_constant == 33
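Judging from the assertion pairs above (1.2 ↔ raw 240, 0.34 ↔ raw 3400, raw 11 ↔ 0.055, raw 22 ↔ 0.0022), the device appears to scale the offset by 200 and the slope by 10000, while the time constant passes through unchanged. A sketch of that conversion; the factors are inferred from the test values, not taken from a datasheet:

```python
OFFSET_SCALE = 200      # inferred: 1.2 degC <-> raw 240
SLOPE_SCALE = 10_000    # inferred: 0.34 <-> raw 3400


def to_raw(offset, slope, time_constant):
    """Physical units -> raw integer encoding (rounded to nearest)."""
    return (round(offset * OFFSET_SCALE),
            round(slope * SLOPE_SCALE),
            round(time_constant))


def from_raw(offset_raw, slope_raw, time_constant_raw):
    """Raw integer encoding -> physical units."""
    return (offset_raw / OFFSET_SCALE,
            slope_raw / SLOPE_SCALE,
            time_constant_raw)


print(to_raw(1.2, 0.34, 5.6))  # (240, 3400, 6)
```

The rounding of `5.6` to `6` matches the `time_constant == 6` assertion in the tests above.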
| 29.443038 | 77 | 0.688306 | 308 | 2,326 | 4.99026 | 0.201299 | 0.154847 | 0.245934 | 0.156148 | 0.81067 | 0.81067 | 0.765127 | 0.765127 | 0.739753 | 0.739753 | 0 | 0.03454 | 0.228289 | 2,326 | 78 | 78 | 29.820513 | 0.821727 | 0.206793 | 0 | 0.638298 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.574468 | 1 | 0.06383 | false | 0 | 0.021277 | 0 | 0.085106 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
1700b171da116d6ff1eb2a08a358c93e852a3d71 | 128 | py | Python | src/ostorlab/cli/ci_scan/__init__.py | bbhunter/ostorlab | 968fe4e5b927c0cd159594c13b73f95b71150154 | [
"Apache-2.0"
] | 113 | 2022-02-21T09:30:14.000Z | 2022-03-31T21:54:26.000Z | src/ostorlab/cli/ci_scan/__init__.py | bbhunter/ostorlab | 968fe4e5b927c0cd159594c13b73f95b71150154 | [
"Apache-2.0"
] | 2 | 2022-02-25T10:56:55.000Z | 2022-03-24T13:08:06.000Z | src/ostorlab/cli/ci_scan/__init__.py | bbhunter/ostorlab | 968fe4e5b927c0cd159594c13b73f95b71150154 | [
"Apache-2.0"
] | 20 | 2022-02-28T14:25:04.000Z | 2022-03-30T23:01:11.000Z | """Module for the root command ci_scan"""
from ostorlab.cli.ci_scan import run
from ostorlab.cli.ci_scan.ci_scan import ci_scan
| 32 | 48 | 0.804688 | 24 | 128 | 4.083333 | 0.5 | 0.306122 | 0.306122 | 0.346939 | 0.428571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.109375 | 128 | 3 | 49 | 42.666667 | 0.859649 | 0.273438 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
ca0f331c09766abf64d30233e7da255adc559da1 | 24,362 | py | Python | demo/demo/stubs.py | uralov/swagger-django-generator | 1decd178a4f0041ab74de185bc27a05894fb8dda | [
"BSD-3-Clause"
] | 46 | 2018-01-17T16:49:32.000Z | 2022-01-19T06:15:47.000Z | demo/demo/stubs.py | peppelinux/swagger-django-generator | 43042717d638f9b02f41cf8a09155b011816abf5 | [
"BSD-3-Clause"
] | 17 | 2017-11-07T11:32:17.000Z | 2021-06-30T10:25:50.000Z | demo/demo/stubs.py | peppelinux/swagger-django-generator | 43042717d638f9b02f41cf8a09155b011816abf5 | [
"BSD-3-Clause"
] | 25 | 2018-02-01T19:42:38.000Z | 2021-07-27T18:26:21.000Z | """
Do not modify this file. It is generated from the Swagger specification.
"""
import json
from apitools.datagenerator import DataGenerator
import demo.schemas as schemas
class AbstractStubClass(object):
"""
Implementations need to be derived from this class.
"""
# addPet -- Synchronisation point for meld
@staticmethod
def addPet(request, body, *args, **kwargs):
"""
:param request: An HttpRequest
:param body: A dictionary containing the parsed and validated body
:type body: dict
"""
raise NotImplementedError()
# updatePet -- Synchronisation point for meld
@staticmethod
def updatePet(request, body, *args, **kwargs):
"""
:param request: An HttpRequest
:param body: A dictionary containing the parsed and validated body
:type body: dict
"""
raise NotImplementedError()
# findPetsByStatus -- Synchronisation point for meld
@staticmethod
def findPetsByStatus(request, status=None, *args, **kwargs):
"""
:param request: An HttpRequest
:param status: (optional) Status values that need to be considered for filter
:type status: array
"""
raise NotImplementedError()
# findPetsByTags -- Synchronisation point for meld
@staticmethod
def findPetsByTags(request, tags=None, *args, **kwargs):
"""
:param request: An HttpRequest
:param tags: (optional) Tags to filter by
:type tags: array
"""
raise NotImplementedError()
# deletePet -- Synchronisation point for meld
@staticmethod
def deletePet(request, petId, *args, **kwargs):
"""
:param request: An HttpRequest
:param petId: Pet id to delete
:type petId: integer
"""
raise NotImplementedError()
# getPetById -- Synchronisation point for meld
@staticmethod
def getPetById(request, petId, *args, **kwargs):
"""
:param request: An HttpRequest
:param petId: ID of pet that needs to be fetched
:type petId: integer
"""
raise NotImplementedError()
# updatePetWithForm -- Synchronisation point for meld
@staticmethod
def updatePetWithForm(request, form_data, petId, *args, **kwargs):
"""
:param request: An HttpRequest
:param form_data: A dictionary containing form fields and their values. In the case where the form fields refer to uploaded files, the values will be instances of `django.core.files.uploadedfile.UploadedFile`
:type form_data: dict
:param petId: ID of pet that needs to be updated
:type petId: string
"""
raise NotImplementedError()
# uploadFile -- Synchronisation point for meld
@staticmethod
def uploadFile(request, form_data, petId, *args, **kwargs):
"""
:param request: An HttpRequest
:param form_data: A dictionary containing form fields and their values. In the case where the form fields refer to uploaded files, the values will be instances of `django.core.files.uploadedfile.UploadedFile`
:type form_data: dict
:param petId: ID of pet to update
:type petId: integer
"""
raise NotImplementedError()
# getInventory -- Synchronisation point for meld
@staticmethod
def getInventory(request, *args, **kwargs):
"""
:param request: An HttpRequest
"""
raise NotImplementedError()
# placeOrder -- Synchronisation point for meld
@staticmethod
def placeOrder(request, body, *args, **kwargs):
"""
:param request: An HttpRequest
:param body: A dictionary containing the parsed and validated body
:type body: dict
"""
raise NotImplementedError()
# deleteOrder -- Synchronisation point for meld
@staticmethod
def deleteOrder(request, orderId, *args, **kwargs):
"""
:param request: An HttpRequest
:param orderId: ID of the order that needs to be deleted
:type orderId: string
"""
raise NotImplementedError()
# getOrderById -- Synchronisation point for meld
@staticmethod
def getOrderById(request, orderId, *args, **kwargs):
"""
:param request: An HttpRequest
:param orderId: ID of pet that needs to be fetched
:type orderId: string
"""
raise NotImplementedError()
# createUser -- Synchronisation point for meld
@staticmethod
def createUser(request, body, *args, **kwargs):
"""
:param request: An HttpRequest
:param body: A dictionary containing the parsed and validated body
:type body: dict
"""
raise NotImplementedError()
# createUsersWithArrayInput -- Synchronisation point for meld
@staticmethod
def createUsersWithArrayInput(request, body, *args, **kwargs):
"""
:param request: An HttpRequest
:param body: A dictionary containing the parsed and validated body
:type body: dict
"""
raise NotImplementedError()
# createUsersWithListInput -- Synchronisation point for meld
@staticmethod
def createUsersWithListInput(request, body, *args, **kwargs):
"""
:param request: An HttpRequest
:param body: A dictionary containing the parsed and validated body
:type body: dict
"""
raise NotImplementedError()
# loginUser -- Synchronisation point for meld
@staticmethod
def loginUser(request, username=None, password=None, *args, **kwargs):
"""
:param request: An HttpRequest
:param username: (optional) The user name for login
:type username: string
:param password: (optional) The password for login in clear text
:type password: string
"""
raise NotImplementedError()
# logoutUser -- Synchronisation point for meld
@staticmethod
def logoutUser(request, *args, **kwargs):
"""
:param request: An HttpRequest
"""
raise NotImplementedError()
# deleteUser -- Synchronisation point for meld
@staticmethod
def deleteUser(request, username, *args, **kwargs):
"""
:param request: An HttpRequest
:param username: The name that needs to be deleted
:type username: string
"""
raise NotImplementedError()
# getUserByName -- Synchronisation point for meld
@staticmethod
def getUserByName(request, username, *args, **kwargs):
"""
:param request: An HttpRequest
:param username: The name that needs to be fetched. Use user1 for testing.
:type username: string
"""
raise NotImplementedError()
# updateUser -- Synchronisation point for meld
@staticmethod
def updateUser(request, body, username, *args, **kwargs):
"""
:param request: An HttpRequest
:param body: A dictionary containing the parsed and validated body
:type body: dict
:param username: name that need to be deleted
:type username: string
"""
raise NotImplementedError()
class MockedStubClass(AbstractStubClass):
"""
Provides a mocked implementation of the AbstractStubClass.
"""
GENERATOR = DataGenerator()
@staticmethod
def addPet(request, body, *args, **kwargs):
"""
:param request: An HttpRequest
:param body: A dictionary containing the parsed and validated body
:type body: dict
"""
response_schema = schemas.__UNSPECIFIED__
if "type" not in response_schema:
response_schema["type"] = "object"
if response_schema["type"] == "array" and "type" not in response_schema["items"]:
response_schema["items"]["type"] = "object"
return MockedStubClass.GENERATOR.random_value(response_schema)
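The three-line schema normalisation that `addPet` performs is repeated verbatim in every mocked method of this class; a hypothetical helper (`normalize_schema` is not part of the generated module) could factor it out:

```python
def normalize_schema(response_schema):
    """Default missing "type" keys to "object", including array items.

    Mirrors the checks duplicated across the mocked stub methods.
    """
    if "type" not in response_schema:
        response_schema["type"] = "object"
    if response_schema["type"] == "array" and "type" not in response_schema["items"]:
        response_schema["items"]["type"] = "object"
    return response_schema


print(normalize_schema({"type": "array", "items": {}}))
# prints {'type': 'array', 'items': {'type': 'object'}}
```

Each mocked method would then reduce to `return MockedStubClass.GENERATOR.random_value(normalize_schema(schema))`.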
@staticmethod
def updatePet(request, body, *args, **kwargs):
"""
:param request: An HttpRequest
:param body: A dictionary containing the parsed and validated body
:type body: dict
"""
response_schema = schemas.__UNSPECIFIED__
if "type" not in response_schema:
response_schema["type"] = "object"
if response_schema["type"] == "array" and "type" not in response_schema["items"]:
response_schema["items"]["type"] = "object"
return MockedStubClass.GENERATOR.random_value(response_schema)
@staticmethod
def findPetsByStatus(request, status=None, *args, **kwargs):
"""
:param request: An HttpRequest
:param status: (optional) Status values that need to be considered for filter
:type status: array
"""
response_schema = json.loads("""{
"items": {
"properties": {
"category": {
"properties": {
"id": {
"format": "int64",
"type": "integer"
},
"name": {
"type": "string"
}
},
"x-scope": [
"",
"#/definitions/Pet"
],
"xml": {
"name": "Category"
}
},
"id": {
"format": "int64",
"type": "integer"
},
"name": {
"example": "doggie",
"type": "string"
},
"photoUrls": {
"items": {
"type": "string"
},
"type": "array",
"xml": {
"name": "photoUrl",
"wrapped": true
}
},
"status": {
"description": "pet status in the store",
"enum": [
"available",
"pending",
"sold"
],
"type": "string"
},
"tags": {
"items": {
"properties": {
"id": {
"format": "int64",
"type": "integer"
},
"name": {
"type": "string"
}
},
"x-scope": [
"",
"#/definitions/Pet"
],
"xml": {
"name": "Tag"
}
},
"type": "array",
"xml": {
"name": "tag",
"wrapped": true
}
}
},
"required": [
"name",
"photoUrls"
],
"x-scope": [
""
],
"xml": {
"name": "Pet"
}
},
"type": "array"
}""")
if "type" not in response_schema:
response_schema["type"] = "object"
if response_schema["type"] == "array" and "type" not in response_schema["items"]:
response_schema["items"]["type"] = "object"
return MockedStubClass.GENERATOR.random_value(response_schema)
@staticmethod
def findPetsByTags(request, tags=None, *args, **kwargs):
"""
:param request: An HttpRequest
:param tags: (optional) Tags to filter by
:type tags: array
"""
response_schema = json.loads("""{
"items": {
"properties": {
"category": {
"properties": {
"id": {
"format": "int64",
"type": "integer"
},
"name": {
"type": "string"
}
},
"x-scope": [
"",
"#/definitions/Pet"
],
"xml": {
"name": "Category"
}
},
"id": {
"format": "int64",
"type": "integer"
},
"name": {
"example": "doggie",
"type": "string"
},
"photoUrls": {
"items": {
"type": "string"
},
"type": "array",
"xml": {
"name": "photoUrl",
"wrapped": true
}
},
"status": {
"description": "pet status in the store",
"enum": [
"available",
"pending",
"sold"
],
"type": "string"
},
"tags": {
"items": {
"properties": {
"id": {
"format": "int64",
"type": "integer"
},
"name": {
"type": "string"
}
},
"x-scope": [
"",
"#/definitions/Pet"
],
"xml": {
"name": "Tag"
}
},
"type": "array",
"xml": {
"name": "tag",
"wrapped": true
}
}
},
"required": [
"name",
"photoUrls"
],
"x-scope": [
""
],
"xml": {
"name": "Pet"
}
},
"type": "array"
}""")
if "type" not in response_schema:
response_schema["type"] = "object"
if response_schema["type"] == "array" and "type" not in response_schema["items"]:
response_schema["items"]["type"] = "object"
return MockedStubClass.GENERATOR.random_value(response_schema)
@staticmethod
def deletePet(request, petId, *args, **kwargs):
"""
:param request: An HttpRequest
:param petId: Pet id to delete
:type petId: integer
"""
response_schema = schemas.__UNSPECIFIED__
if "type" not in response_schema:
response_schema["type"] = "object"
if response_schema["type"] == "array" and "type" not in response_schema["items"]:
response_schema["items"]["type"] = "object"
return MockedStubClass.GENERATOR.random_value(response_schema)
@staticmethod
def getPetById(request, petId, *args, **kwargs):
"""
:param request: An HttpRequest
:param petId: ID of pet that needs to be fetched
:type petId: integer
"""
response_schema = schemas.Pet
if "type" not in response_schema:
response_schema["type"] = "object"
if response_schema["type"] == "array" and "type" not in response_schema["items"]:
response_schema["items"]["type"] = "object"
return MockedStubClass.GENERATOR.random_value(response_schema)
@staticmethod
def updatePetWithForm(request, form_data, petId, *args, **kwargs):
"""
:param request: An HttpRequest
:param form_data: A dictionary containing form fields and their values. In the case where the form fields refer to uploaded files, the values will be instances of `django.core.files.uploadedfile.UploadedFile`
:type form_data: dict
:param petId: ID of pet that needs to be updated
:type petId: string
"""
        response_schema = schemas.__UNSPECIFIED__
        if "type" not in response_schema:
            response_schema["type"] = "object"
        if response_schema["type"] == "array" and "type" not in response_schema["items"]:
            response_schema["items"]["type"] = "object"
        return MockedStubClass.GENERATOR.random_value(response_schema)

    @staticmethod
    def uploadFile(request, form_data, petId, *args, **kwargs):
        """
        :param request: An HttpRequest
        :param form_data: A dictionary containing form fields and their values. In the case where the form fields refer to uploaded files, the values will be instances of `django.core.files.uploadedfile.UploadedFile`
        :type form_data: dict
        :param petId: ID of pet to update
        :type petId: integer
        """
        response_schema = schemas.__UNSPECIFIED__
        if "type" not in response_schema:
            response_schema["type"] = "object"
        if response_schema["type"] == "array" and "type" not in response_schema["items"]:
            response_schema["items"]["type"] = "object"
        return MockedStubClass.GENERATOR.random_value(response_schema)

    @staticmethod
    def getInventory(request, *args, **kwargs):
        """
        :param request: An HttpRequest
        """
        response_schema = json.loads("""{
            "additionalProperties": {
                "format": "int32",
                "type": "integer"
            },
            "type": "object"
        }""")
        if "type" not in response_schema:
            response_schema["type"] = "object"
        if response_schema["type"] == "array" and "type" not in response_schema["items"]:
            response_schema["items"]["type"] = "object"
        return MockedStubClass.GENERATOR.random_value(response_schema)

    @staticmethod
    def placeOrder(request, body, *args, **kwargs):
        """
        :param request: An HttpRequest
        :param body: A dictionary containing the parsed and validated body
        :type body: dict
        """
        response_schema = schemas.Order
        if "type" not in response_schema:
            response_schema["type"] = "object"
        if response_schema["type"] == "array" and "type" not in response_schema["items"]:
            response_schema["items"]["type"] = "object"
        return MockedStubClass.GENERATOR.random_value(response_schema)

    @staticmethod
    def deleteOrder(request, orderId, *args, **kwargs):
        """
        :param request: An HttpRequest
        :param orderId: ID of the order that needs to be deleted
        :type orderId: string
        """
        response_schema = schemas.__UNSPECIFIED__
        if "type" not in response_schema:
            response_schema["type"] = "object"
        if response_schema["type"] == "array" and "type" not in response_schema["items"]:
            response_schema["items"]["type"] = "object"
        return MockedStubClass.GENERATOR.random_value(response_schema)

    @staticmethod
    def getOrderById(request, orderId, *args, **kwargs):
        """
        :param request: An HttpRequest
        :param orderId: ID of the order that needs to be fetched
        :type orderId: string
        """
        response_schema = schemas.Order
        if "type" not in response_schema:
            response_schema["type"] = "object"
        if response_schema["type"] == "array" and "type" not in response_schema["items"]:
            response_schema["items"]["type"] = "object"
        return MockedStubClass.GENERATOR.random_value(response_schema)

    @staticmethod
    def createUser(request, body, *args, **kwargs):
        """
        :param request: An HttpRequest
        :param body: A dictionary containing the parsed and validated body
        :type body: dict
        """
        response_schema = schemas.__UNSPECIFIED__
        if "type" not in response_schema:
            response_schema["type"] = "object"
        if response_schema["type"] == "array" and "type" not in response_schema["items"]:
            response_schema["items"]["type"] = "object"
        return MockedStubClass.GENERATOR.random_value(response_schema)

    @staticmethod
    def createUsersWithArrayInput(request, body, *args, **kwargs):
        """
        :param request: An HttpRequest
        :param body: A dictionary containing the parsed and validated body
        :type body: dict
        """
        response_schema = schemas.__UNSPECIFIED__
        if "type" not in response_schema:
            response_schema["type"] = "object"
        if response_schema["type"] == "array" and "type" not in response_schema["items"]:
            response_schema["items"]["type"] = "object"
        return MockedStubClass.GENERATOR.random_value(response_schema)

    @staticmethod
    def createUsersWithListInput(request, body, *args, **kwargs):
        """
        :param request: An HttpRequest
        :param body: A dictionary containing the parsed and validated body
        :type body: dict
        """
        response_schema = schemas.__UNSPECIFIED__
        if "type" not in response_schema:
            response_schema["type"] = "object"
        if response_schema["type"] == "array" and "type" not in response_schema["items"]:
            response_schema["items"]["type"] = "object"
        return MockedStubClass.GENERATOR.random_value(response_schema)

    @staticmethod
    def loginUser(request, username=None, password=None, *args, **kwargs):
        """
        :param request: An HttpRequest
        :param username: (optional) The user name for login
        :type username: string
        :param password: (optional) The password for login in clear text
        :type password: string
        """
        response_schema = json.loads("""{
            "type": "string"
        }""")
        if "type" not in response_schema:
            response_schema["type"] = "object"
        if response_schema["type"] == "array" and "type" not in response_schema["items"]:
            response_schema["items"]["type"] = "object"
        return MockedStubClass.GENERATOR.random_value(response_schema)

    @staticmethod
    def logoutUser(request, *args, **kwargs):
        """
        :param request: An HttpRequest
        """
        response_schema = schemas.__UNSPECIFIED__
        if "type" not in response_schema:
            response_schema["type"] = "object"
        if response_schema["type"] == "array" and "type" not in response_schema["items"]:
            response_schema["items"]["type"] = "object"
        return MockedStubClass.GENERATOR.random_value(response_schema)

    @staticmethod
    def deleteUser(request, username, *args, **kwargs):
        """
        :param request: An HttpRequest
        :param username: The name that needs to be deleted
        :type username: string
        """
        response_schema = schemas.__UNSPECIFIED__
        if "type" not in response_schema:
            response_schema["type"] = "object"
        if response_schema["type"] == "array" and "type" not in response_schema["items"]:
            response_schema["items"]["type"] = "object"
        return MockedStubClass.GENERATOR.random_value(response_schema)

    @staticmethod
    def getUserByName(request, username, *args, **kwargs):
        """
        :param request: An HttpRequest
        :param username: The name that needs to be fetched. Use user1 for testing.
        :type username: string
        """
        response_schema = schemas.User
        if "type" not in response_schema:
            response_schema["type"] = "object"
        if response_schema["type"] == "array" and "type" not in response_schema["items"]:
            response_schema["items"]["type"] = "object"
        return MockedStubClass.GENERATOR.random_value(response_schema)

    @staticmethod
    def updateUser(request, body, username, *args, **kwargs):
        """
        :param request: An HttpRequest
        :param body: A dictionary containing the parsed and validated body
        :type body: dict
        :param username: The name of the user that needs to be updated
        :type username: string
        """
        response_schema = schemas.__UNSPECIFIED__
        if "type" not in response_schema:
            response_schema["type"] = "object"
        if response_schema["type"] == "array" and "type" not in response_schema["items"]:
            response_schema["items"]["type"] = "object"
        return MockedStubClass.GENERATOR.random_value(response_schema)
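Every stub above repeats the same three-line schema normalization. A hypothetical helper (the name `normalize_response_schema` is not part of the generated code) could factor it out; note that it also guards against a missing `"items"` key, on which the inline version would raise a `KeyError` for an array schema:

```python
def normalize_response_schema(schema):
    """Default missing type information to "object", mirroring the inline stub logic."""
    # Default a missing top-level type to "object".
    if "type" not in schema:
        schema["type"] = "object"
    # For arrays, default the item type to "object" as well,
    # creating the "items" sub-schema if it is absent.
    if schema["type"] == "array":
        items = schema.setdefault("items", {})
        if "type" not in items:
            items["type"] = "object"
    return schema
```

Each stub body would then reduce to `return MockedStubClass.GENERATOR.random_value(normalize_response_schema(response_schema))`.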

# File: api/__init__.py (repo: stohrendorf/confckurator, license: MIT)
from .pack_api import get_pack_api_blueprint
from .template_api import get_template_api_blueprint
from .environment_api import get_environment_api_blueprint
from .instance_api import get_instance_api_blueprint

# File: flask_healthz/__init__.py (repo: retoo/flask-healthz, license: BSD-3-Clause)
from .blueprint import HealthError, healthz  # noqa: F401
from .ext import Healthz # noqa: F401

# File: module_question/api/serializers/general_serializers.py (repo: NicolasMuras/Lookdaluv, license: MIT)
from rest_framework import serializers
from module_question.models import QuestionModuleStatistics
class QuestionModuleStatisticsSerializer(serializers.ModelSerializer):
    class Meta:
        model = QuestionModuleStatistics
        exclude = ('state', 'created_date', 'modified_date', 'deleted_date')

    def to_representation(self, instance):
        return {
            'module': instance.module.__str__(),
            'completed': instance.completed,
            'max_step_reached': instance.max_step_reached,
            'value_generated': instance.value_generated,
            'trap_passed': instance.trap_passed,
        }
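Outside of Django, the effect of a custom `to_representation` can be sketched in plain Python: it maps a model instance to a dict with explicitly chosen keys, rendering the related `module` as a string. The `FakeStats` class below is a stand-in for the real model, not part of the project:

```python
class FakeStats:
    """Minimal stand-in for QuestionModuleStatistics (hypothetical)."""
    def __init__(self, module, completed, max_step_reached, value_generated, trap_passed):
        self.module = module
        self.completed = completed
        self.max_step_reached = max_step_reached
        self.value_generated = value_generated
        self.trap_passed = trap_passed

def to_representation(instance):
    # Mirrors the serializer: explicit field selection, module rendered as a string.
    return {
        'module': str(instance.module),
        'completed': instance.completed,
        'max_step_reached': instance.max_step_reached,
        'value_generated': instance.value_generated,
        'trap_passed': instance.trap_passed,
    }
```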

class QuestionModuleStatisticsMinimalSerializer(serializers.ModelSerializer):
    class Meta:
        model = QuestionModuleStatistics
        exclude = ('state', 'created_date', 'modified_date', 'deleted_date')

    def to_representation(self, instance):
        return {
            'completed': instance.completed,
            'max_step_reached': instance.max_step_reached,
            'value_generated': instance.value_generated,
            'trap_passed': instance.trap_passed,
        }

# File: connectfour/agents/agent_student.py (repo: rmit-huirong/AI1901-ConnectFour, license: MIT)
from connectfour.agents.agent import Agent
"""
Student Name:
Huirong Huang
Student ID:
s3615907
"""
class StudentAgent(Agent):
def __init__(self, name):
super().__init__(name)
self.MaxDepth = 1
def get_move(self, board):
"""
Args:
board: An instance of `Board` that is the current state of the board.
Returns:
A tuple of two integers, (row, col)
"""
valid_moves = board.valid_moves()
vals = []
moves = []
for move in valid_moves:
next_state = board.next_state(self.id, move[1])
moves.append(move)
vals.append(self.dfMiniMax(next_state, 1))
bestMove = moves[vals.index(max(vals))]
return bestMove
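`get_move` selects the move whose one-step lookahead value is largest via the parallel-lists idiom `moves[vals.index(max(vals))]`. The same argmax idiom in isolation (the function name `argmax_move` is illustrative, not part of the agent):

```python
def argmax_move(moves, vals):
    # Return the move aligned with the largest value; ties resolve
    # to the first maximum, since list.index returns the first match.
    return moves[vals.index(max(vals))]
```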
    def dfMiniMax(self, board, depth):
        # Goal: return the column that maximizes the scores of all possible next states
        if depth == self.MaxDepth:
            return self.evaluateBoardState(board)

        valid_moves = board.valid_moves()
        vals = []
        moves = []

        for move in valid_moves:
            if depth % 2 == 1:
                next_state = board.next_state(self.id % 2 + 1, move[1])
            else:
                next_state = board.next_state(self.id, move[1])
            moves.append(move)
            vals.append(self.dfMiniMax(next_state, depth + 1))

        if depth % 2 == 1:
            if len(vals) != 0:
                bestVal = min(vals)
            else:
                bestVal = 0
        else:
            if len(vals) != 0:
                bestVal = max(vals)
            else:
                bestVal = 0

        return bestVal
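The recursion in `dfMiniMax` is a plain depth-limited minimax without pruning: at odd depths the opponent moves, so values are minimized; at even depths the agent moves, so values are maximized. A minimal tree-based version of the same idea (hypothetical, not tied to the Board API; leaves are numeric heuristic values):

```python
def minimax(node, maximizing):
    # Leaves carry their heuristic value directly.
    if isinstance(node, (int, float)):
        return node
    # Children alternate between maximizing and minimizing players.
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)
```

With `MaxDepth = 1` the agent effectively scores only the opponent's immediate replies before picking its move.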
    def evaluateBoardState(self, board):
        """
        Your evaluation function should look at the current state and return a score for it.
        As an example, the random agent provided works as follows:
            If the opponent has won this game, return -1.
            If we have won the game, return 1.
            If neither of the players has won, return a random number.
        """

        """
        These are the variables and functions for board objects which may be helpful when creating your Agent.
        Look into board.py for more information/descriptions of each, or to look for any other definitions which may help you.

        Board Variables:
            board.width
            board.height
            board.last_move
            board.num_to_connect
            board.winning_zones
            board.score_array
            board.current_player_score

        Board Functions:
            get_cell_value(row, col)
            try_move(col)
            valid_move(row, col)
            valid_moves()
            terminal(self)
            legal_moves()
            next_state(turn)
            winner()
        """
        # the last move played on the board
        move = board.last_move
        # enemy agent's id
        enemy = self.id % 2 + 1
        value = self.evaluateRows(board, enemy) + self.evaluateCols(board, enemy) + self.evaluateBackwardDiagonals(board, enemy) + self.evaluateForwardDiagonals(board, enemy)
        return value
# evaluation of rows (-)
def evaluateRows(self, board, enemy):
myValue = 0
enemyValue = 0
# 0 <= x < 6
for x in range(0, board.DEFAULT_HEIGHT):
# 0 <= y < 4
for y in range(0, board.DEFAULT_WIDTH - board.num_to_connect + 1):
# create a list for storing temporary tokens for row
temp = []
for col in range(0, board.num_to_connect):
temp.append(board.get_cell_value(x, y + col))
# boolean value to check if there is any opponent token in the list
has_oppo = False
# boolean value to check if there is any enemy's opponent token in the list
enemy_has_oppo = False
for curr in temp:
if curr == enemy:
has_oppo = True
if curr == self.id:
enemy_has_oppo = True
# if there isn't opponent token and at least one my side token
if has_oppo is False and temp.__contains__(self.id):
# condition: [1,X,1,1] place "1" in X cell, must win in this move
# win -> [1,1,1,1]
if temp.count(self.id) == 4:
# print("win: [1,X,1,1]")
return 1000000
# if there are only three my side tokens
elif temp.count(self.id) == 3:
if y + board.num_to_connect < board.DEFAULT_WIDTH:
# condition: [_,1,X,1,_] place "1" in X cell, must win after next move
# -> [_,1,1,1,_]
# -> [_,1,1,1,2] or [2,1,1,1,_]
# win -> [1,1,1,1,2] or [2,1,1,1,1]
if x == board.last_move[0] and y + temp.index(self.id) + 1 == board.last_move[1] and board.get_cell_value(x, y) == 0 and board.get_cell_value(x, y + board.num_to_connect) == 0:
next_board1 = board.next_state(enemy, y)
next_board2 = board.next_state(enemy, y + board.num_to_connect)
if next_board1 != 0 and next_board2 != 0:
# print("winnable: [_,1,X,1,_]")
myValue += 10000
else:
myValue += 1000
# if there are only two my side tokens
elif temp.count(self.id) == 2:
myValue += 100
else:
myValue += 10
# if there is at least one enemy's opponent token
if enemy_has_oppo is True and temp.__contains__(enemy):
# if there are only three enemy's tokens
if temp.count(enemy) == 3:
# condition: [2,2,X,2] place "1" in X cell, or will lose after this move
# ok -> [2,2,1,2]
# lose -> [2,2,2,2]
if board.last_move[0] == x and board.last_move[1] == y + temp.index(self.id):
# print("lose: [2,2,X,2]")
myValue += 100000
# if there are only two enemy's tokens
elif temp.count(enemy) == 2:
if x == board.last_move[0] and y == board.last_move[1] and temp[temp.index(enemy) + 1] == enemy and temp.index(enemy) in range(1, board.num_to_connect - 2):
# condition: [_,X,2,2,_] place "1" in X cell, or will lose after next move
# ok -> [2,1,2,2,_] or [_,1,2,2,2]
# -----------
# -> [_,2,2,2,_]
# lose -> [2,2,2,2,_] or [_,2,2,2,2]
if y - 1 >= 0:
if board.get_cell_value(x, y - 1) == 0:
next_board1 = board.next_state(enemy, y - 1)
next_board2 = board.next_state(enemy, y + board.num_to_connect - 1)
if next_board1 != 0 and next_board2 != 0:
# print("losable: [_,X,2,2,_]")
myValue += 10000
if x == board.last_move[0] and y == board.last_move[1] - board.num_to_connect + 1 and temp[temp.index(enemy) + 1] == enemy and temp.index(enemy) in range(1, board.num_to_connect - 2):
# condition: [_,2,2,X,_] place "1" in X cell, or will lose after next move
# ok -> [2,2,2,1,_] or [_,2,2,1,2]
# -----------
# -> [_,2,2,2,_]
# lose -> [2,2,2,2,_] or [_,2,2,2,2]
if y + board.num_to_connect < board.DEFAULT_WIDTH:
if board.get_cell_value(x, y + board.num_to_connect) == 0:
next_board1 = board.next_state(enemy, y)
next_board2 = board.next_state(enemy, y + board.num_to_connect)
if next_board1 != 0 and next_board2 != 0:
# print("losable: [_,2,2,X,_]")
myValue += 10000
if y + board.num_to_connect < board.DEFAULT_WIDTH:
# condition: [_,2,X,2,_] place "1" in X cell, or will lose after next move
# ok -> [2,2,1,2,_] or [_,2,1,2,2]
# -----------
# -> [_,2,2,2,_]
# lose -> [2,2,2,2,_] or [_,2,2,2,2]
if x == board.last_move[0] and y + temp.index(self.id) == board.last_move[1] and board.get_cell_value(x, y) == 0 and board.get_cell_value(x, y + board.num_to_connect) == 0:
next_board1 = board.next_state(enemy, y)
next_board2 = board.next_state(enemy, y + board.num_to_connect)
if next_board1 != 0 and next_board2 != 0:
# print("losable: [_,2,X,2,_]")
myValue += 10000
# if there is not any enemy's opponent token and at least one enemy's token
if enemy_has_oppo is False and temp.__contains__(enemy):
# if there are only three enemy's tokens
if temp.count(enemy) == 3:
next_board = board.next_state(enemy, y + temp.index(0))
if next_board != 0:
# condition: [2,2,_,2] place "1" in X cell, must lose after this move
# [1,2,X,1]
# ---------
# -> [2,2,_,2]
# [1,2,1,1]
# ---------
# lose -> [2,2,2,2]
# [1,2,1,1]
if x == board.last_move[0] - 1:
# print("lose: [2,2,_,2]")
# print(" [1,2,X,1]")
enemyValue += 100000
# condition: [2,2,_,2] place "1" in X cell, may lose in the end
# [1,2,_,1]
# [1,1,X,2]
# ---------
# ok -> [2,2,_,2]
# [1,2,_,1]
# [1,1,1,2]
else:
# print("losable: [2,2,_,2]")
# print(" [1,2,_,1]")
# print(" [1,1,X,2]")
enemyValue += 1000
# if there is only two enemy's tokens
elif temp.count(enemy) == 2:
# print("other conditions")
enemyValue += 100
else:
enemyValue += 10
return myValue - enemyValue
# evaluation of columns (|)
def evaluateCols(self, board, enemy):
myValue = 0
enemyValue = 0
# 0 <= y < 7
for y in range(0, board.DEFAULT_WIDTH):
# 0 <= x < 3
for x in range(0, board.DEFAULT_HEIGHT - board.num_to_connect + 1):
# create a list for storing temporary tokens for col
temp = []
for row in range(0, board.num_to_connect):
temp.append(board.get_cell_value(x + row, y))
# boolean value to check if there is any opponent token in the list
has_oppo = False
# boolean value to check if there is any enemy's opponent token in the list
enemy_has_oppo = False
for curr in temp:
if curr == enemy:
has_oppo = True
if curr == self.id:
enemy_has_oppo = True
# if there isn't opponent token and at least one my side token
if has_oppo is False and temp.__contains__(self.id):
# condition: [X] place "1" in X cell, must win in this move
# [1]
# [1]
# [1]
# ---
# win -> [1]
# [1]
# [1]
# [1]
if temp.count(self.id) == 4:
# print("win: [X]")
# print(" [1]")
# print(" [1]")
# print(" [1]")
return 1000000
# if there are only three my side tokens
elif temp.count(self.id) == 3:
# condition: [_] place "1" in X cell, may win in the end
# [X]
# [1]
# [1]
# ---
# ok -> [_]
# [1]
# [1]
# [1]
if x - 1 == board.last_move[0] and y == board.last_move[1] and board.get_cell_value(x - 1, y) == 0:
myValue += 1000
# if there are only two my side tokens
elif temp.count(self.id) == 2:
myValue += 100
else:
myValue += 10
# if there is at least one enemy's opponent token
if enemy_has_oppo is True and temp.__contains__(enemy):
# if there are only three enemy's tokens
if temp.count(enemy) == 3:
# condition: [X] place "1" in X cell, or will lose after this move
# [2]
# [2]
# [2]
# ---
# lose -> [2]
# [2]
# [2]
# [2]
if board.last_move[0] == x and board.last_move[1] == y:
# print("losable: [X]")
# print(" [2]")
# print(" [2]")
# print(" [2]")
myValue += 100000
# if there is not any enemy's opponent token and at least one enemy's token
if enemy_has_oppo is False and temp.__contains__(enemy):
# print("enemy")
# if there are only three enemy's tokens
if temp.count(enemy) == 3:
next_board = board.next_state(enemy, y)
# condition: [_] place "1" in another cell, must lose after this move
# [2]
# [2]
# [2]
# ---
# lose -> [2]
# [2]
# [2]
# [2]
if next_board != 0:
# print("lose: [_]")
# print(" [2]")
# print(" [2]")
# print(" [2]")
enemyValue += 100000
# if there is only two enemy's tokens
elif temp.count(enemy) == 2:
# print("other conditions")
enemyValue += 100
else:
enemyValue += 10
return myValue - enemyValue
# evaluation of backward diagonals (/)
def evaluateBackwardDiagonals(self, board, enemy):
myValue = 0
enemyValue = 0
# 3 <= x < 6
for x in range(board.num_to_connect - 1, board.DEFAULT_HEIGHT):
# 0 <= y < 4
for y in range(0, board.DEFAULT_WIDTH - board.num_to_connect + 1):
# create a list for storing temporary tokens for backward diagonal
temp = []
for back_diag in range(0, board.num_to_connect):
temp.append(board.get_cell_value(x - back_diag, y + back_diag))
# boolean value to check if there is any opponent token in the list
has_oppo = False
# boolean value to check if there is any enemy's opponent token in the list
enemy_has_oppo = False
for curr in temp:
if curr == enemy:
has_oppo = True
if curr == self.id:
enemy_has_oppo = True
# if there isn't opponent token and at least one my side token
if has_oppo is False and temp.__contains__(self.id):
# condition: [_,_,_,X] place "1" in X cell, must win in this move
# [_,_,1,2]
# [_,1,2,1]
# [1,1,2,1]
if temp.count(self.id) == 4:
# print("win: [_,_,_,X]")
# print("win: [_,_,1,2]")
# print("win: [_,1,2,1]")
# print("win: [1,1,2,1]")
return 1000000
# if there are only three my side tokens
elif temp.count(self.id) == 3:
if x - board.num_to_connect >= 0 and y + board.num_to_connect < board.DEFAULT_WIDTH:
# condition: [_,_,_,_,_] place "1" in X cell, must win after next move
# [_,_,_,1,1]
# [_,_,X,1,2]
# [_,1,2,2,1]
# [_,2,1,1,2]
if x - temp.index(self.id) - 1 == board.last_move[0] and y + temp.index(self.id) + 1 == board.last_move[1] and board.get_cell_value(x, y) == 0 and board.get_cell_value(x - board.num_to_connect, y + board.num_to_connect) == 0:
next_board1 = board.next_state(enemy, y)
next_board2 = board.next_state(enemy, y + board.num_to_connect)
if next_board1 != 0 and next_board2 != 0:
# print("winnable: [_,_,_,_,_]")
# print(" [_,_,_,1,1]")
# print(" [_,_,X,1,2]")
# print(" [_,1,2,2,1]")
# print(" [_,2,1,1,2]")
myValue += 10000
else:
myValue += 5000
# if there are only two my side tokens
elif temp.count(self.id) == 2:
myValue += 500
else:
myValue += 50
# if there is at least one enemy's opponent token
if enemy_has_oppo is True and temp.__contains__(enemy):
# if there are only three enemy's tokens
if temp.count(enemy) == 3:
# condition: [_,_,_,2] place "1" in X cell, or will lose after this move
# [_,_,X,1]
# [_,2,1,1]
# [2,1,2,2]
if board.last_move[0] == x - temp.index(self.id) and board.last_move[1] == y + temp.index(self.id):
# print("lose: [_,_,_,2]")
# print(" [_,_,X,1]")
# print(" [_,2,1,1]")
# print(" [2,1,2,2]")
myValue += 100000
# if there are only two enemy's tokens
elif temp.count(enemy) == 2:
if board.last_move[0] == x and board.last_move[1] == y and temp[temp.index(enemy) + 1] == enemy and temp.index(enemy) in range(1, board.num_to_connect - 2):
# condition: [_,_,_,_,_] place "1" in X cell, or will lose after next move
# [_,_,_,2,1]
# [_,_,2,1,2]
# [_,X,1,2,1]
# [_,2,1,1,2]
if x + 1 < board.DEFAULT_HEIGHT and y - 1 >= 0:
if board.get_cell_value(x + 1, y - 1) == 0:
next_board1 = board.next_state(enemy, y - 1)
next_board2 = board.next_state(enemy, y + board.num_to_connect - 1)
if next_board1 != 0 and next_board2 != 0:
# print("losable: [_,_,_,_,_]")
# print(" [_,_,_,2,1]")
# print(" [_,_,2,1,2]")
# print(" [_,X,1,2,1]")
# print(" [_,2,1,1,2]")
myValue += 10000
if board.last_move[0] == x - board.num_to_connect + 1 and board.last_move[1] == y + board.num_to_connect - 1 and temp[temp.index(enemy) + 1] == enemy and temp.index(enemy) in range(1, board.num_to_connect - 2):
# condition: [_,_,_,_,_] place "1" in X cell, or will lose after next move
# [_,_,_,X,1]
# [_,_,2,1,2]
# [_,2,1,2,1]
# [_,2,1,1,2]
if x - board.num_to_connect >= 0 and y + board.num_to_connect < board.DEFAULT_WIDTH:
if board.get_cell_value(x - board.num_to_connect, y + board.num_to_connect) == 0:
next_board1 = board.next_state(enemy, y)
next_board2 = board.next_state(enemy, y + board.num_to_connect)
if next_board1 != 0 and next_board2 != 0:
# print("losable: [_,_,_,_,_]")
# print(" [_,_,_,X,1]")
# print(" [_,_,2,1,2]")
# print(" [_,2,1,2,1]")
# print(" [_,2,1,1,2]")
myValue += 10000
if x - board.num_to_connect >= 0 and y + board.num_to_connect < board.DEFAULT_WIDTH:
# condition: [_,_,_,_,_] place "1" in X cell, or will lose after next move
# [_,_,_,2,1]
# [_,_,X,1,2]
# [_,2,1,2,1]
# [_,2,1,1,2]
if x - temp.index(self.id) == board.last_move[0] and y + temp.index(self.id) == board.last_move[1] and board.get_cell_value(x, y) == 0 and board.get_cell_value(x - board.num_to_connect, y + board.num_to_connect) == 0:
next_board1 = board.next_state(enemy, y)
next_board2 = board.next_state(enemy, y + board.num_to_connect)
if next_board1 != 0 and next_board2 != 0:
# print("losable: [_,_,_,_,_]")
# print(" [_,_,_,2,1]")
# print(" [_,_,X,1,2]")
# print(" [_,2,1,2,1]")
# print(" [_,2,1,1,2]")
myValue += 10000
# if there is not any enemy's opponent token and at least one enemy's token
if enemy_has_oppo is False and temp.__contains__(enemy):
# if there are only three enemy's tokens
if temp.count(enemy) == 3:
next_board = board.next_state(enemy, y + temp.index(0))
if next_board != 0:
# condition: [_,_,_,2] place "1" in X cell, must lose after this move
# [_,_,_,1]
# [_,2,X,2]
# [2,2,1,1]
if x - temp.index(0) == board.last_move[0] - 1:
# print("lose: [_,_,_,2]")
# print(" [_,_,_,1]")
# print(" [_,2,X,2]")
# print(" [2,2,1,1]")
enemyValue += 100000
# condition: [_,_,_,2] place "1" in X cell, may lose in the end
# [_,_,_,1]
# [_,2,_,2]
# [2,2,X,1]
else:
# print("losable: [_,_,_,2]")
# print(" [_,_,_,1]")
# print(" [_,2,_,2]")
# print(" [2,2,X,1]")
enemyValue += 5000
# if there is only two enemy's tokens
elif temp.count(enemy) == 2:
# print("other conditions")
enemyValue += 500
else:
enemyValue += 50
return myValue - enemyValue
# evaluation of forward diagonals (\)
def evaluateForwardDiagonals(self, board, enemy):
myValue = 0
enemyValue = 0
# 0 <= x < 3
for x in range(0, board.DEFAULT_HEIGHT - board.num_to_connect + 1):
# 0 <= y < 4
for y in range(0, board.DEFAULT_WIDTH - board.num_to_connect + 1):
# create a list for storing temporary tokens for forward diagonal
temp = []
for for_diag in range(0, board.num_to_connect):
temp.append(board.get_cell_value(x + for_diag, y + for_diag))
# boolean value to check if there is any opponent token in the list
has_oppo = False
# boolean value to check if there is any enemy's opponent token in the list
enemy_has_oppo = False
for curr in temp:
if curr == enemy:
has_oppo = True
if curr == self.id:
enemy_has_oppo = True
# if there isn't opponent token and at least one my side token
if has_oppo is False and temp.__contains__(self.id):
# condition: [X,_,_,_] place "1" in X cell, must win in this move
# [2,1,_,_]
# [1,2,1,_]
# [1,2,1,1]
if temp.count(self.id) == 4:
# print("win: [X,_,_,_]")
# print("win: [2,1,_,_]")
# print("win: [1,2,1,_]")
# print("win: [1,2,1,1]")
return 1000000
# if there are only three my side tokens
elif temp.count(self.id) == 3:
if x + board.num_to_connect < board.DEFAULT_HEIGHT and y + board.num_to_connect < board.DEFAULT_WIDTH:
# condition: [_,_,_,_,_] place "1" in X cell, must win after next move
# [1,1,_,_,_]
# [2,1,X,_,_]
# [1,2,2,1,_]
# [2,1,1,2,_]
if x + temp.index(self.id) + 1 == board.last_move[0] and y + temp.index(self.id) + 1 == board.last_move[1] and board.get_cell_value(x, y) == 0 and board.get_cell_value(x + board.num_to_connect, y + board.num_to_connect) == 0:
next_board1 = board.next_state(enemy, y)
next_board2 = board.next_state(enemy, y + board.num_to_connect)
if next_board1 != 0 and next_board2 != 0:
# print("winnable: [_,_,_,_,_]")
# print(" [1,1,_,_,_]")
# print(" [2,1,X,_,_]")
# print(" [1,2,2,1,_]")
# print(" [2,1,1,2,_]")
myValue += 10000
else:
myValue += 5000
# if there are only two my side tokens
elif temp.count(self.id) == 2:
myValue += 500
else:
myValue += 50
# if there is at least one enemy's opponent token
if enemy_has_oppo is True and temp.__contains__(enemy):
# if there are only three enemy's tokens
if temp.count(enemy) == 3:
# condition: [2,_,_,_] place "1" in X cell, or will lose after this move
# [1,X,_,_]
# [1,1,2,_]
# [2,2,1,2]
if board.last_move[0] == x + temp.index(self.id) and board.last_move[1] == y + temp.index(self.id):
# print("lose: [2,_,_,_]")
# print(" [1,X,_,_]")
# print(" [1,1,2,_]")
# print(" [2,2,1,2]")
myValue += 100000
# if there are only two enemy's tokens
elif temp.count(enemy) == 2:
if board.last_move[0] == x and board.last_move[1] == y and temp[temp.index(enemy) + 1] == enemy and temp.index(enemy) in range(1, board.num_to_connect - 2):
# condition: [_,_,_,_,_] place "1" in X cell, or will lose after next move
# [1,2,_,_,_]
# [2,1,2,_,_]
# [1,2,1,X,_]
# [2,1,1,2,_]
if x + board.num_to_connect < board.DEFAULT_HEIGHT and y + board.num_to_connect < board.DEFAULT_WIDTH:
if board.get_cell_value(x + board.num_to_connect, y + board.num_to_connect):
next_board1 = board.next_state(enemy, y)
next_board2 = board.next_state(enemy, y + board.num_to_connect)
if next_board1 != 0 and next_board2 != 0:
# print("losable: [_,_,_,_,_]")
# print(" [1,2,_,_,_]")
# print(" [2,1,2,_,_]")
# print(" [1,2,1,X,_]")
# print(" [2,1,1,2,_]")
myValue += 10000
if board.last_move[0] == x and board.last_move[1] == y and temp[temp.index(enemy) + 1] == enemy and temp.index(enemy) in range(1, board.num_to_connect - 2):
# condition: [_,_,_,_,_] place "1" in X cell, or will lose after next move
# [1,X,_,_,_]
# [2,1,2,_,_]
# [1,2,1,2,_]
# [2,1,1,2,_]
if x - 1 >= 0 and y - 1 >= 0:
if board.get_cell_value(x - 1, y - 1) == 0:
next_board1 = board.next_state(enemy, y - 1)
next_board2 = board.next_state(enemy, y + board.num_to_connect - 1)
if next_board1 != 0 and next_board2 != 0:
# print("losable: [_,_,_,_,_]")
# print(" [1,X,_,_,_]")
# print(" [2,1,2,_,_]")
# print(" [1,2,1,2,_]")
# print(" [2,1,1,2,_]")
myValue += 10000
if x + board.num_to_connect < board.DEFAULT_HEIGHT and y + board.num_to_connect < board.DEFAULT_WIDTH:
# condition: [_,_,_,_,_] place "1" in X cell, or will lose after next move
# [1,2,_,_,_]
# [2,1,X,_,_]
# [1,2,1,2,_]
# [2,1,1,2,_]
if x + temp.index(self.id) == board.last_move[0] and y + temp.index(self.id) == board.last_move[1] and board.get_cell_value(x, y) == 0 and board.get_cell_value(x + board.num_to_connect, y + board.num_to_connect) == 0:
next_board1 = board.next_state(enemy, y)
next_board2 = board.next_state(enemy, y + board.num_to_connect)
if next_board1 != 0 and next_board2 != 0:
# print("losable: [_,_,_,_,_]")
# print(" [1,2,_,_,_]")
# print(" [2,1,X,_,_]")
# print(" [1,2,1,2,_]")
# print(" [2,1,1,2,_]")
myValue += 10000
# if there is not any enemy's opponent token and at least one enemy's token
if enemy_has_oppo is False and temp.__contains__(enemy):
# if there are only three enemy's tokens
if temp.count(enemy) == 3:
next_board = board.next_state(enemy, y + temp.index(0))
if next_board != 0:
# condition: [2,_,_,_] place "1" in X cell, must lose after this move
# [1,_,_,_]
# [2,X,2,_]
# [1,1,2,2]
if x + temp.index(0) == board.last_move[0] - 1:
# print("lose: [2,_,_,_]")
# print(" [1,_,_,_]")
# print(" [2,X,2,_]")
# print(" [1,1,2,2]")
enemyValue += 100000
# condition: [2,_,_,_] place "1" in X cell, may lose in the end
# [1,_,_,_]
# [2,_,2,_]
# [1,X,2,2]
else:
# print("losable: [2,_,_,_]")
# print(" [1,_,_,_]")
# print(" [2,_,2,_]")
# print(" [1,X,2,2]")
enemyValue += 5000
# if there is only two enemy's tokens
elif temp.count(enemy) == 2:
# print("other conditions")
enemyValue += 500
else:
enemyValue += 50
return myValue - enemyValue
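All four evaluate* methods share one pattern: slide a window of length `num_to_connect` along a row, column, or diagonal, collect the cell values into `temp`, and score the window by how many tokens of each player it holds (exponentially higher weights for more tokens, negative for open enemy windows). A compact, board-agnostic sketch of that window scoring, with weights chosen purely for illustration:

```python
def score_windows(line, me, enemy, n=4):
    """Score every length-n window of a 1-D line of cell values (0 = empty)."""
    score = 0
    for i in range(len(line) - n + 1):
        window = line[i:i + n]
        if enemy not in window and me in window:
            score += 10 ** window.count(me)      # more of my tokens -> better
        elif me not in window and enemy in window:
            score -= 10 ** window.count(enemy)   # open enemy windows are a threat
        # Mixed windows can never complete a connect-n, so they score 0.
    return score
```

The real methods refine this with `last_move`-dependent special cases (immediate wins, forced blocks, double threats), but the window sweep is the backbone.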
| 49.624339 | 253 | 0.376693 | 3,839 | 37,516 | 3.473561 | 0.054181 | 0.015148 | 0.046494 | 0.07904 | 0.860517 | 0.844244 | 0.827522 | 0.820472 | 0.807274 | 0.798125 | 0 | 0.058203 | 0.519592 | 37,516 | 755 | 254 | 49.690066 | 0.681685 | 0.273963 | 0 | 0.775862 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.027586 | false | 0 | 0.003448 | 0 | 0.075862 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
#!/usr/bin/env python2
# testing/rnn_rgp_test.py
# -*- coding: utf-8 -*-
"""
Created on Tue Aug 29 10:16:16 2017
@author: grigoral
"""
import unittest
import numpy as np
import GPy
import os
import copy
import autoreg
from autoreg.data_streamers import TrivialDataStreamer, RandomPermutationDataStreamer, StdMemoryDataStreamer
def generate_data( seq_num, seq_length, u_dim = 1, y_dim = 1):
"""
Generates data
"""
#np.random.seed()
U = []
Y = []
for i in range(seq_num):
uu = np.random.randn( seq_length, u_dim ) * 10
yy = np.random.randn( seq_length, y_dim ) * 100
U.append(uu)
Y.append(yy)
return U, Y
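For reference, the shapes this helper returns, restated as a self-contained snippet (the literal sizes are just example values):

```python
import numpy as np

def generate_data(seq_num, seq_length, u_dim=1, y_dim=1):
    # same contract as the helper above: two lists of (seq_length, dim) arrays
    U = [np.random.randn(seq_length, u_dim) * 10 for _ in range(seq_num)]
    Y = [np.random.randn(seq_length, y_dim) * 100 for _ in range(seq_num)]
    return U, Y

U, Y = generate_data(3, 20, u_dim=2, y_dim=3)
assert len(U) == len(Y) == 3
assert U[0].shape == (20, 2) and Y[0].shape == (20, 3)
```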
class Rnn_RGP_Test(unittest.TestCase):
"""
Test the DeepAutoreg_rnn model (svi, minibatch, back_cstr), using a plain RNN as the recognition model.
The test classes [ Rnn_RGP_Test, Lstm_RGP_Test, Gru_RGP_Test, Gru_bidirect_RGP_Test ]
perform exactly the same tests; only the back-constraint neural network differs between them.
"""
def setUp(self):
u_dim = 2
y_dim = 3
ts_length = 20
sequences_no = 3
#U, Y = generate_data( sequences_no, ts_length, u_dim = u_dim, y_dim = y_dim)
U_2, Y_2 = generate_data( sequences_no*2, ts_length, u_dim = u_dim, y_dim = y_dim)
Q = 3 # 200 # Number of inducing points; take a small number for speed
back_cstr = True
inference_method = 'svi'
minibatch_inference = True
# # 1 layer:
# win_out = 3
# win_in = 2
# wins = [0, win_out] # 0-th is output layer
# nDims = [y_dim,2]
# 2 layers:
win_out = 3
win_in = 2
wins = [0, win_out, win_out]
nDims = [y_dim, 2,3] #
rnn_hidden_dims = [9,] # rnn hidden dimension
rnn_type='rnn'
rnn_bidirectional=False
rnn_h0_init='zero'
#print("Input window: ", win_in)
#print("Output window: ", win_out)
data_streamer = RandomPermutationDataStreamer(Y_2, U_2)
minibatch_index, minibatch_indices, Y_mb, X_mb = data_streamer.next_minibatch()
m_1 = autoreg.DeepAutoreg_rnn(wins, Y_mb, U=X_mb, U_win=win_in,
num_inducing=Q, back_cstr=back_cstr, nDims=nDims,
rnn_hidden_dims=rnn_hidden_dims,
rnn_type=rnn_type,
rnn_bidirectional=rnn_bidirectional,
rnn_h0_init=rnn_h0_init,
inference_method=inference_method, # Inference method
minibatch_inference = minibatch_inference,
mb_inf_tot_data_size = sequences_no*2,
mb_inf_sample_idxes = minibatch_indices,
# # 1 layer:
# kernels=[GPy.kern.RBF(win_out*nDims[1],ARD=True,inv_l=True),
# GPy.kern.RBF( (win_in + win_out) * nDims[1], ARD=True,inv_l=True)] )
# 2 layers:
kernels=[GPy.kern.RBF(win_out*nDims[1],ARD=True,inv_l=True),
GPy.kern.RBF(win_out*nDims[1] + win_out*nDims[2],ARD=True,inv_l=True),
GPy.kern.RBF(win_out*nDims[2] + win_in*u_dim,ARD=True,inv_l=True)])
self.model_1 = m_1
self.model_1._trigger_params_changed()
self.mll_1_1 = float(self.model_1._log_marginal_likelihood)
#self.g_mll_1_1 = np.hstack( self.model_1[pp.replace(' ', '_')].gradient.flatten() for pp in self.model_1.parameter_names() if ('init_Xs' not in pp) and ('X_var' not in pp) ).copy()
self.g_mll_1_1 = self.model_1._log_likelihood_gradients().copy()
self.model_1.checkgrad(verbose=False)
# self.model_2 = copy.deepcopy(m_1)
self.model_1.set_DataStreamer(data_streamer)
self.model_1._trigger_params_changed()
self.model_1._next_minibatch()
self.model_1._trigger_params_changed()
self.mll_1_2 = float(self.model_1._log_marginal_likelihood)
#self.g_mll_1_2 = np.hstack( self.model_1[pp.replace(' ', '_')].gradient.flatten() for pp in self.model_1.parameter_names() if ('init_Xs' not in pp) and ('X_var' not in pp) ).copy()
self.g_mll_1_2 = self.model_1._log_likelihood_gradients().copy()
data_streamer_1 = StdMemoryDataStreamer(Y_2, U_2, sequences_no)
self.model_1.set_DataStreamer(data_streamer_1)
self.model_1._next_minibatch()
self.model_1._trigger_params_changed()
self.mll_2_1 = float(self.model_1._log_marginal_likelihood)
# exclude 'init_Xs' and 'X_var' from gradients
#self.g_mll_2_1 = np.hstack( self.model_1[pp.replace(' ', '_')].gradient.flatten() for pp in self.model_1.parameter_names() if ('init_Xs' not in pp) and ('X_var' not in pp) ).copy()
self.g_mll_2_1 = self.model_1._log_likelihood_gradients().copy()
#import pdb; pdb.set_trace()
self.model_1._next_minibatch()
self.model_1._trigger_params_changed()
self.mll_2_2 = float(self.model_1._log_marginal_likelihood)
# exclude 'init_Xs' and 'X_var' from gradients
#self.g_mll_2_2 = np.hstack( self.model_1[pp.replace(' ', '_')].gradient.flatten() for pp in self.model_1.parameter_names() if ('init_Xs' not in pp) and ('X_var' not in pp) ).copy()
self.g_mll_2_2 = self.model_1._log_likelihood_gradients().copy()
def test_perm_ds_two_minibatches(self,):
#import pdb; pdb.set_trace()
#np.testing.assert_almost_equal( self.mll_1_2, self.mll_1_1, decimal=9, err_msg="Likelihoods must be equal" )
np.testing.assert_equal( np.isclose(self.mll_1_2, self.mll_1_1, atol = 0, rtol = 1e-14), True, err_msg="Likelihoods must be equal" )
#np.testing.assert_array_equal( self.g_mll_1_2, self.g_mll_1_1, err_msg="Likelihood gradients must be equal" )
np.testing.assert_equal( np.all( np.isclose(self.g_mll_1_2, self.g_mll_1_1, atol = 0, rtol = 1e-11)), True, err_msg="Likelihood gradients must be equal" )
def test_perm_ds_sum_minibatches(self,):
#import pdb; pdb.set_trace()
#np.testing.assert_equal( self.mll_2_1 + self.mll_2_2, self.mll_1_1, err_msg="Likelihoods must be equal" ) #decimal=9
np.testing.assert_equal( np.isclose(float(self.mll_2_1) + float(self.mll_2_2), self.mll_1_1, atol = 0, rtol = 1e-14), True, err_msg="Likelihoods must be equal" )
#np.testing.assert_array_equal( self.g_mll_2_1 + self.g_mll_2_2, self.g_mll_1_1, err_msg="Likelihood gradients must be equal" )
np.testing.assert_equal( np.all( np.isclose(self.g_mll_2_1 + self.g_mll_2_2, self.g_mll_1_1, atol = 0, rtol = 1e-11)), True, err_msg="Likelihood gradients must be equal" )
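The assertions above pass `atol=0` to `np.isclose`, making the comparison purely relative; this behaves sensibly for log-likelihoods of any magnitude, unlike `assert_almost_equal`, which checks a fixed number of decimal places. A small illustration:

```python
import numpy as np

a = 1e6
b = a * (1 + 1e-15)  # relative error ~1e-15, well inside rtol=1e-14
assert np.isclose(a, b, atol=0, rtol=1e-14)
# an absolute difference of 1.0 is tiny here, but its relative error (~1e-6) fails the check
assert not np.isclose(a, a + 1.0, atol=0, rtol=1e-14)
```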
class Lstm_RGP_Test(unittest.TestCase):
"""
Test the DeepAutoreg_rnn model (svi, minibatch, back_cstr), using an LSTM as the recognition model.
The test classes [ Rnn_RGP_Test, Lstm_RGP_Test, Gru_RGP_Test, Gru_bidirect_RGP_Test ]
perform exactly the same tests; only the back-constraint neural network differs between them.
"""
def setUp(self):
u_dim = 2
y_dim = 3
ts_length = 20
sequences_no = 3
#U, Y = generate_data( sequences_no, ts_length, u_dim = u_dim, y_dim = y_dim)
U_2, Y_2 = generate_data( sequences_no*2, ts_length, u_dim = u_dim, y_dim = y_dim)
Q = 3 # 200 # Number of inducing points; take a small number for speed
back_cstr = True
inference_method = 'svi'
minibatch_inference = True
# # 1 layer:
# win_out = 3
# win_in = 2
# wins = [0, win_out] # 0-th is output layer
# nDims = [y_dim,2]
# 2 layers:
win_out = 3
win_in = 2
wins = [0, win_out, win_out]
nDims = [y_dim, 2,3] #
rnn_hidden_dims = [9,] # rnn hidden dimension
rnn_type='lstm'
rnn_bidirectional=False
rnn_h0_init='zero'
#print("Input window: ", win_in)
#print("Output window: ", win_out)
data_streamer = RandomPermutationDataStreamer(Y_2, U_2)
minibatch_index, minibatch_indices, Y_mb, X_mb = data_streamer.next_minibatch()
m_1 = autoreg.DeepAutoreg_rnn(wins, Y_mb, U=X_mb, U_win=win_in,
num_inducing=Q, back_cstr=back_cstr, nDims=nDims,
rnn_hidden_dims=rnn_hidden_dims,
rnn_type=rnn_type,
rnn_bidirectional=rnn_bidirectional,
rnn_h0_init=rnn_h0_init,
inference_method=inference_method, # Inference method
minibatch_inference = minibatch_inference,
mb_inf_tot_data_size = sequences_no*2,
mb_inf_sample_idxes = minibatch_indices,
# # 1 layer:
# kernels=[GPy.kern.RBF(win_out*nDims[1],ARD=True,inv_l=True),
# GPy.kern.RBF( (win_in + win_out) * nDims[1], ARD=True,inv_l=True)] )
# 2 layers:
kernels=[GPy.kern.RBF(win_out*nDims[1],ARD=True,inv_l=True),
GPy.kern.RBF(win_out*nDims[1] + win_out*nDims[2],ARD=True,inv_l=True),
GPy.kern.RBF(win_out*nDims[2] + win_in*u_dim,ARD=True,inv_l=True)])
self.model_1 = m_1
self.model_1._trigger_params_changed()
self.mll_1_1 = float(self.model_1._log_marginal_likelihood)
#self.g_mll_1_1 = np.hstack( self.model_1[pp.replace(' ', '_')].gradient.flatten() for pp in self.model_1.parameter_names() if ('init_Xs' not in pp) and ('X_var' not in pp) ).copy()
self.g_mll_1_1 = self.model_1._log_likelihood_gradients().copy()
self.model_1.checkgrad(verbose=False)
# self.model_2 = copy.deepcopy(m_1)
self.model_1.set_DataStreamer(data_streamer)
self.model_1._trigger_params_changed()
self.model_1._next_minibatch()
self.model_1._trigger_params_changed()
self.mll_1_2 = float(self.model_1._log_marginal_likelihood)
#self.g_mll_1_2 = np.hstack( self.model_1[pp.replace(' ', '_')].gradient.flatten() for pp in self.model_1.parameter_names() if ('init_Xs' not in pp) and ('X_var' not in pp) ).copy()
self.g_mll_1_2 = self.model_1._log_likelihood_gradients().copy()
data_streamer_1 = StdMemoryDataStreamer(Y_2, U_2, sequences_no)
self.model_1.set_DataStreamer(data_streamer_1)
self.model_1._next_minibatch()
self.model_1._trigger_params_changed()
self.mll_2_1 = float(self.model_1._log_marginal_likelihood)
# exclude 'init_Xs' and 'X_var' from gradients
#self.g_mll_2_1 = np.hstack( self.model_1[pp.replace(' ', '_')].gradient.flatten() for pp in self.model_1.parameter_names() if ('init_Xs' not in pp) and ('X_var' not in pp) ).copy()
self.g_mll_2_1 = self.model_1._log_likelihood_gradients().copy()
#import pdb; pdb.set_trace()
self.model_1._next_minibatch()
self.model_1._trigger_params_changed()
self.mll_2_2 = float(self.model_1._log_marginal_likelihood)
# exclude 'init_Xs' and 'X_var' from gradients
#self.g_mll_2_2 = np.hstack( self.model_1[pp.replace(' ', '_')].gradient.flatten() for pp in self.model_1.parameter_names() if ('init_Xs' not in pp) and ('X_var' not in pp) ).copy()
self.g_mll_2_2 = self.model_1._log_likelihood_gradients().copy()
def test_perm_ds_two_minibatches(self,):
#import pdb; pdb.set_trace()
#np.testing.assert_almost_equal( self.mll_1_2, self.mll_1_1, decimal=9, err_msg="Likelihoods must be equal" )
np.testing.assert_equal( np.isclose(self.mll_1_2, self.mll_1_1, atol = 0, rtol = 1e-14), True, err_msg="Likelihoods must be equal" )
#np.testing.assert_array_equal( self.g_mll_1_2, self.g_mll_1_1, err_msg="Likelihood gradients must be equal" )
np.testing.assert_equal( np.all( np.isclose(self.g_mll_1_2, self.g_mll_1_1, atol = 0, rtol = 1e-11)), True, err_msg="Likelihood gradients must be equal" )
def test_perm_ds_sum_minibatches(self,):
#import pdb; pdb.set_trace()
#np.testing.assert_equal( self.mll_2_1 + self.mll_2_2, self.mll_1_1, err_msg="Likelihoods must be equal" ) #decimal=9
np.testing.assert_equal( np.isclose(float(self.mll_2_1) + float(self.mll_2_2), self.mll_1_1, atol = 0, rtol = 1e-14), True, err_msg="Likelihoods must be equal" )
#np.testing.assert_array_equal( self.g_mll_2_1 + self.g_mll_2_2, self.g_mll_1_1, err_msg="Likelihood gradients must be equal" )
np.testing.assert_equal( np.all( np.isclose(self.g_mll_2_1 + self.g_mll_2_2, self.g_mll_1_1, atol = 0, rtol = 1e-11)), True, err_msg="Likelihood gradients must be equal" )
class Gru_RGP_Test(unittest.TestCase):
"""
Test the DeepAutoreg_rnn model (svi, minibatch, back_cstr), using a GRU as the recognition model.
The test classes [ Rnn_RGP_Test, Lstm_RGP_Test, Gru_RGP_Test, Gru_bidirect_RGP_Test ]
perform exactly the same tests; only the back-constraint neural network differs between them.
"""
def setUp(self):
u_dim = 2
y_dim = 3
ts_length = 20
sequences_no = 3
#U, Y = generate_data( sequences_no, ts_length, u_dim = u_dim, y_dim = y_dim)
U_2, Y_2 = generate_data( sequences_no*2, ts_length, u_dim = u_dim, y_dim = y_dim)
Q = 3 # 200 # Number of inducing points; take a small number for speed
back_cstr = True
inference_method = 'svi'
minibatch_inference = True
# # 1 layer:
# win_out = 3
# win_in = 2
# wins = [0, win_out] # 0-th is output layer
# nDims = [y_dim,2]
# 2 layers:
win_out = 3
win_in = 2
wins = [0, win_out, win_out]
nDims = [y_dim, 2,3] #
rnn_hidden_dims = [9,] # rnn hidden dimension
rnn_type='gru'
rnn_bidirectional=False
rnn_h0_init='zero'
#print("Input window: ", win_in)
#print("Output window: ", win_out)
data_streamer = RandomPermutationDataStreamer(Y_2, U_2)
minibatch_index, minibatch_indices, Y_mb, X_mb = data_streamer.next_minibatch()
m_1 = autoreg.DeepAutoreg_rnn(wins, Y_mb, U=X_mb, U_win=win_in,
num_inducing=Q, back_cstr=back_cstr, nDims=nDims,
rnn_hidden_dims=rnn_hidden_dims,
rnn_type=rnn_type,
rnn_bidirectional=rnn_bidirectional,
rnn_h0_init=rnn_h0_init,
inference_method=inference_method, # Inference method
minibatch_inference = minibatch_inference,
mb_inf_tot_data_size = sequences_no*2,
mb_inf_sample_idxes = minibatch_indices,
# # 1 layer:
# kernels=[GPy.kern.RBF(win_out*nDims[1],ARD=True,inv_l=True),
# GPy.kern.RBF( (win_in + win_out) * nDims[1], ARD=True,inv_l=True)] )
# 2 layers:
kernels=[GPy.kern.RBF(win_out*nDims[1],ARD=True,inv_l=True),
GPy.kern.RBF(win_out*nDims[1] + win_out*nDims[2],ARD=True,inv_l=True),
GPy.kern.RBF(win_out*nDims[2] + win_in*u_dim,ARD=True,inv_l=True)])
self.model_1 = m_1
self.model_1._trigger_params_changed()
self.mll_1_1 = float(self.model_1._log_marginal_likelihood)
#self.g_mll_1_1 = np.hstack( self.model_1[pp.replace(' ', '_')].gradient.flatten() for pp in self.model_1.parameter_names() if ('init_Xs' not in pp) and ('X_var' not in pp) ).copy()
self.g_mll_1_1 = self.model_1._log_likelihood_gradients().copy()
self.model_1.checkgrad(verbose=False)
# self.model_2 = copy.deepcopy(m_1)
self.model_1.set_DataStreamer(data_streamer)
self.model_1._trigger_params_changed()
self.model_1._next_minibatch()
self.model_1._trigger_params_changed()
self.mll_1_2 = float(self.model_1._log_marginal_likelihood)
#self.g_mll_1_2 = np.hstack( self.model_1[pp.replace(' ', '_')].gradient.flatten() for pp in self.model_1.parameter_names() if ('init_Xs' not in pp) and ('X_var' not in pp) ).copy()
self.g_mll_1_2 = self.model_1._log_likelihood_gradients().copy()
data_streamer_1 = StdMemoryDataStreamer(Y_2, U_2, sequences_no)
self.model_1.set_DataStreamer(data_streamer_1)
self.model_1._next_minibatch()
self.model_1._trigger_params_changed()
self.mll_2_1 = float(self.model_1._log_marginal_likelihood)
# exclude 'init_Xs' and 'X_var' from gradients
#self.g_mll_2_1 = np.hstack( self.model_1[pp.replace(' ', '_')].gradient.flatten() for pp in self.model_1.parameter_names() if ('init_Xs' not in pp) and ('X_var' not in pp) ).copy()
self.g_mll_2_1 = self.model_1._log_likelihood_gradients().copy()
#import pdb; pdb.set_trace()
self.model_1._next_minibatch()
self.model_1._trigger_params_changed()
self.mll_2_2 = float(self.model_1._log_marginal_likelihood)
# exclude 'init_Xs' and 'X_var' from gradients
#self.g_mll_2_2 = np.hstack( self.model_1[pp.replace(' ', '_')].gradient.flatten() for pp in self.model_1.parameter_names() if ('init_Xs' not in pp) and ('X_var' not in pp) ).copy()
self.g_mll_2_2 = self.model_1._log_likelihood_gradients().copy()
def test_perm_ds_two_minibatches(self,):
#import pdb; pdb.set_trace()
#np.testing.assert_almost_equal( self.mll_1_2, self.mll_1_1, decimal=9, err_msg="Likelihoods must be equal" )
np.testing.assert_equal( np.isclose(self.mll_1_2, self.mll_1_1, atol = 0, rtol = 1e-14), True, err_msg="Likelihoods must be equal" )
#np.testing.assert_array_equal( self.g_mll_1_2, self.g_mll_1_1, err_msg="Likelihood gradients must be equal" )
np.testing.assert_equal( np.all( np.isclose(self.g_mll_1_2, self.g_mll_1_1, atol = 0, rtol = 1e-11)), True, err_msg="Likelihood gradients must be equal" )
def test_perm_ds_sum_minibatches(self,):
#import pdb; pdb.set_trace()
#np.testing.assert_equal( self.mll_2_1 + self.mll_2_2, self.mll_1_1, err_msg="Likelihoods must be equal" ) #decimal=9
np.testing.assert_equal( np.isclose(float(self.mll_2_1) + float(self.mll_2_2), self.mll_1_1, atol = 0, rtol = 1e-14), True, err_msg="Likelihoods must be equal" )
#np.testing.assert_array_equal( self.g_mll_2_1 + self.g_mll_2_2, self.g_mll_1_1, err_msg="Likelihood gradients must be equal" )
np.testing.assert_equal( np.all( np.isclose(self.g_mll_2_1 + self.g_mll_2_2, self.g_mll_1_1, atol = 0, rtol = 1e-11)), True, err_msg="Likelihood gradients must be equal" )
class Gru_bidirect_RGP_Test(unittest.TestCase):
"""
Test the DeepAutoreg_rnn model (svi, minibatch, back_cstr), using a bidirectional GRU as the recognition model.
The test classes [ Rnn_RGP_Test, Lstm_RGP_Test, Gru_RGP_Test, Gru_bidirect_RGP_Test ]
perform exactly the same tests; only the back-constraint neural network differs between them.
"""
def setUp(self):
u_dim = 2
y_dim = 3
ts_length = 20
sequences_no = 3
#U, Y = generate_data( sequences_no, ts_length, u_dim = u_dim, y_dim = y_dim)
U_2, Y_2 = generate_data( sequences_no*2, ts_length, u_dim = u_dim, y_dim = y_dim)
Q = 3 # 200 # Number of inducing points; take a small number for speed
back_cstr = True
inference_method = 'svi'
minibatch_inference = True
# # 1 layer:
# win_out = 3
# win_in = 2
# wins = [0, win_out] # 0-th is output layer
# nDims = [y_dim,2]
# 2 layers:
win_out = 3
win_in = 2
wins = [0, win_out, win_out]
nDims = [y_dim, 2,3] #
rnn_hidden_dims = [9,] # rnn hidden dimension
rnn_type='gru'
rnn_bidirectional=True
rnn_h0_init='zero'
#print("Input window: ", win_in)
#print("Output window: ", win_out)
data_streamer = RandomPermutationDataStreamer(Y_2, U_2)
minibatch_index, minibatch_indices, Y_mb, X_mb = data_streamer.next_minibatch()
m_1 = autoreg.DeepAutoreg_rnn(wins, Y_mb, U=X_mb, U_win=win_in,
num_inducing=Q, back_cstr=back_cstr, nDims=nDims,
rnn_hidden_dims=rnn_hidden_dims,
rnn_type=rnn_type,
rnn_bidirectional=rnn_bidirectional,
rnn_h0_init=rnn_h0_init,
inference_method=inference_method, # Inference method
minibatch_inference = minibatch_inference,
mb_inf_tot_data_size = sequences_no*2,
mb_inf_sample_idxes = minibatch_indices,
# # 1 layer:
# kernels=[GPy.kern.RBF(win_out*nDims[1],ARD=True,inv_l=True),
# GPy.kern.RBF( (win_in + win_out) * nDims[1], ARD=True,inv_l=True)] )
# 2 layers:
kernels=[GPy.kern.RBF(win_out*nDims[1],ARD=True,inv_l=True),
GPy.kern.RBF(win_out*nDims[1] + win_out*nDims[2],ARD=True,inv_l=True),
GPy.kern.RBF(win_out*nDims[2] + win_in*u_dim,ARD=True,inv_l=True)])
self.model_1 = m_1
self.model_1._trigger_params_changed()
self.mll_1_1 = float(self.model_1._log_marginal_likelihood)
#self.g_mll_1_1 = np.hstack( self.model_1[pp.replace(' ', '_')].gradient.flatten() for pp in self.model_1.parameter_names() if ('init_Xs' not in pp) and ('X_var' not in pp) ).copy()
self.g_mll_1_1 = self.model_1._log_likelihood_gradients().copy()
self.model_1.checkgrad(verbose=False)
# self.model_2 = copy.deepcopy(m_1)
self.model_1.set_DataStreamer(data_streamer)
self.model_1._trigger_params_changed()
self.model_1._next_minibatch()
self.model_1._trigger_params_changed()
self.mll_1_2 = float(self.model_1._log_marginal_likelihood)
#self.g_mll_1_2 = np.hstack( self.model_1[pp.replace(' ', '_')].gradient.flatten() for pp in self.model_1.parameter_names() if ('init_Xs' not in pp) and ('X_var' not in pp) ).copy()
self.g_mll_1_2 = self.model_1._log_likelihood_gradients().copy()
data_streamer_1 = StdMemoryDataStreamer(Y_2, U_2, sequences_no)
self.model_1.set_DataStreamer(data_streamer_1)
self.model_1._next_minibatch()
self.model_1._trigger_params_changed()
self.mll_2_1 = float(self.model_1._log_marginal_likelihood)
# exclude 'init_Xs' and 'X_var' from gradients
#self.g_mll_2_1 = np.hstack( self.model_1[pp.replace(' ', '_')].gradient.flatten() for pp in self.model_1.parameter_names() if ('init_Xs' not in pp) and ('X_var' not in pp) ).copy()
self.g_mll_2_1 = self.model_1._log_likelihood_gradients().copy()
#import pdb; pdb.set_trace()
self.model_1._next_minibatch()
self.model_1._trigger_params_changed()
self.mll_2_2 = float(self.model_1._log_marginal_likelihood)
# exclude 'init_Xs' and 'X_var' from gradients
#self.g_mll_2_2 = np.hstack( self.model_1[pp.replace(' ', '_')].gradient.flatten() for pp in self.model_1.parameter_names() if ('init_Xs' not in pp) and ('X_var' not in pp) ).copy()
self.g_mll_2_2 = self.model_1._log_likelihood_gradients().copy()
def test_perm_ds_two_minibatches(self,):
#import pdb; pdb.set_trace()
#np.testing.assert_almost_equal( self.mll_1_2, self.mll_1_1, decimal=9, err_msg="Likelihoods must be equal" )
np.testing.assert_equal( np.isclose(self.mll_1_2, self.mll_1_1, atol = 0, rtol = 1e-14), True, err_msg="Likelihoods must be equal" )
#np.testing.assert_array_equal( self.g_mll_1_2, self.g_mll_1_1, err_msg="Likelihood gradients must be equal" )
np.testing.assert_equal( np.all( np.isclose(self.g_mll_1_2, self.g_mll_1_1, atol = 0, rtol = 1e-11)), True, err_msg="Likelihood gradients must be equal" )
def test_perm_ds_sum_minibatches(self,):
#import pdb; pdb.set_trace()
#np.testing.assert_equal( self.mll_2_1 + self.mll_2_2, self.mll_1_1, err_msg="Likelihoods must be equal" ) #decimal=9
np.testing.assert_equal( np.isclose(float(self.mll_2_1) + float(self.mll_2_2), self.mll_1_1, atol = 0, rtol = 1e-14), True, err_msg="Likelihoods must be equal" )
#np.testing.assert_array_equal( self.g_mll_2_1 + self.g_mll_2_2, self.g_mll_1_1, err_msg="Likelihood gradients must be equal" )
np.testing.assert_equal( np.all( np.isclose(self.g_mll_2_1 + self.g_mll_2_2, self.g_mll_1_1, atol = 0, rtol = 1e-11)), True, err_msg="Likelihood gradients must be equal" )
class Lstm_RGP_not_minibatch_Test(unittest.TestCase):
"""
Test the DeepAutoreg_rnn model (svi, minibatch=False, back_cstr), using an LSTM as the recognition model.
Unlike the four minibatch test classes above, this one runs full-batch inference and
checks gradients after a short BFGS optimization.
"""
def setUp(self):
u_dim = 2
y_dim = 3
ts_length = 20
sequences_no = 3
#U, Y = generate_data( sequences_no, ts_length, u_dim = u_dim, y_dim = y_dim)
U_2, Y_2 = generate_data( sequences_no*2, ts_length, u_dim = u_dim, y_dim = y_dim)
Q = 3 # 200 # Number of inducing points; take a small number for speed
back_cstr = True
inference_method = 'svi'
minibatch_inference = False
# # 1 layer:
# win_out = 3
# win_in = 2
# wins = [0, win_out] # 0-th is output layer
# nDims = [y_dim,2]
# 2 layers:
win_out = 3
win_in = 2
wins = [0, win_out, win_out]
nDims = [y_dim, 2,3] #
rnn_hidden_dims = [9,] # rnn hidden dimension
rnn_type='lstm'
rnn_bidirectional=False
rnn_h0_init='zero'
#print("Input window: ", win_in)
#print("Output window: ", win_out)
m_1 = autoreg.DeepAutoreg_rnn(wins, Y_2, U=U_2, U_win=win_in,
num_inducing=Q, back_cstr=back_cstr, nDims=nDims,
rnn_hidden_dims=rnn_hidden_dims,
rnn_type=rnn_type,
rnn_bidirectional=rnn_bidirectional,
rnn_h0_init=rnn_h0_init,
inference_method=inference_method, # Inference method
minibatch_inference = minibatch_inference,
# # 1 layer:
# kernels=[GPy.kern.RBF(win_out*nDims[1],ARD=True,inv_l=True),
# GPy.kern.RBF( (win_in + win_out) * nDims[1], ARD=True,inv_l=True)] )
# 2 layers:
kernels=[GPy.kern.RBF(win_out*nDims[1],ARD=True,inv_l=True),
GPy.kern.RBF(win_out*nDims[1] + win_out*nDims[2],ARD=True,inv_l=True),
GPy.kern.RBF(win_out*nDims[2] + win_in*u_dim,ARD=True,inv_l=True)])
self.model_1 = m_1
self.model_1._trigger_params_changed()
self.mll_1_1 = float(self.model_1._log_marginal_likelihood)
#self.g_mll_1_1 = np.hstack( self.model_1[pp.replace(' ', '_')].gradient.flatten() for pp in self.model_1.parameter_names() if ('init_Xs' not in pp) and ('X_var' not in pp) ).copy()
self.g_mll_1_1 = self.model_1._log_likelihood_gradients().copy()
self.model_1.checkgrad(verbose=False)
# self.model_2 = copy.deepcopy(m_1)
#self.model_1.optimize('bfgs',messages=1,max_iters=5)
def test_grad(self,):
#import pdb; pdb.set_trace()
self.model_1.optimize('bfgs',messages=0,max_iters=5)
self.model_1.checkgrad(verbose=False)
if __name__ == '__main__':
pass
# tt1 = Rnn_RGP_Test('test_perm_ds_two_minibatches')
# tt1.setUp()
# tt1.test_perm_ds_two_minibatches()
# #tt.test_gradients()
#
tt2 = Lstm_RGP_not_minibatch_Test('test_grad')
tt2.setUp()
tt2.test_grad()
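All of the minibatch tests above check the same invariant: with minibatch inference, evaluating two disjoint minibatches and summing must reproduce the full-data log marginal likelihood (and its gradients). A toy illustration of that decomposition property, with a made-up objective standing in for the RGP model:

```python
import numpy as np

def toy_objective(Y):
    # stand-in for an objective that decomposes as a sum over sequences
    return sum(float(-0.5 * np.sum(y ** 2)) for y in Y)

rng = np.random.RandomState(0)
Y = [rng.randn(20, 3) for _ in range(6)]

full = toy_objective(Y)                               # all six sequences at once
halves = toy_objective(Y[:3]) + toy_objective(Y[3:])  # two minibatches of three
assert np.isclose(halves, full, atol=0, rtol=1e-12)
```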
# losantrest/flow_version.py
"""
The MIT License (MIT)
Copyright (c) 2021 Losant IoT, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
import json
""" Module for Losant API FlowVersion wrapper class """
# pylint: disable=C0301
class FlowVersion(object):
""" Class containing all the actions for the Flow Version Resource """
def __init__(self, client):
self.client = client
def delete(self, **kwargs):
"""
Deletes a flow version
Authentication:
The client must be configured with a valid api
access token to call this action. The token
must include at least one of the following scopes:
all.Application, all.Organization, all.User, flowVersion.*, or flowVersion.delete.
Parameters:
* {string} applicationId - ID associated with the application
* {string} flowId - ID associated with the flow
* {string} flowVersionId - Version ID or version name associated with the flow version
* {string} losantdomain - Domain scope of request (rarely needed)
* {boolean} _actions - Return resource actions in response
* {boolean} _links - Return resource link in response
* {boolean} _embedded - Return embedded resources in response
Responses:
* 200 - If flow version was successfully deleted (https://api.losant.com/#/definitions/success)
Errors:
* 400 - Error if malformed request (https://api.losant.com/#/definitions/error)
* 404 - Error if flow version was not found (https://api.losant.com/#/definitions/error)
"""
query_params = {"_actions": "false", "_links": "true", "_embedded": "true"}
path_params = {}
headers = {}
body = None
if "applicationId" in kwargs:
path_params["applicationId"] = kwargs["applicationId"]
if "flowId" in kwargs:
path_params["flowId"] = kwargs["flowId"]
if "flowVersionId" in kwargs:
path_params["flowVersionId"] = kwargs["flowVersionId"]
if "losantdomain" in kwargs:
headers["losantdomain"] = kwargs["losantdomain"]
if "_actions" in kwargs:
query_params["_actions"] = kwargs["_actions"]
if "_links" in kwargs:
query_params["_links"] = kwargs["_links"]
if "_embedded" in kwargs:
query_params["_embedded"] = kwargs["_embedded"]
path = "/applications/{applicationId}/flows/{flowId}/versions/{flowVersionId}".format(**path_params)
return self.client.request("DELETE", path, params=query_params, headers=headers, body=body)
def errors(self, **kwargs):
"""
Get information about errors that occurred during runs of this workflow version
Authentication:
The client must be configured with a valid api
access token to call this action. The token
must include at least one of the following scopes:
all.Application, all.Application.read, all.Organization, all.Organization.read, all.User, all.User.read, flowVersion.*, or flowVersion.errors.
Parameters:
* {string} applicationId - ID associated with the application
* {string} flowId - ID associated with the flow
* {string} flowVersionId - Version ID or version name associated with the flow version
* {string} duration - Duration of time range in milliseconds
* {string} end - End of time range in milliseconds since epoch
* {string} limit - Maximum number of errors to return
* {string} sortDirection - Direction to sort the results by. Accepted values are: asc, desc
* {string} deviceId - For edge workflows, the Device ID to return workflow errors for. When not included, will be errors for all device IDs.
* {string} losantdomain - Domain scope of request (rarely needed)
* {boolean} _actions - Return resource actions in response
* {boolean} _links - Return resource link in response
* {boolean} _embedded - Return embedded resources in response
Responses:
* 200 - Workflow error information (https://api.losant.com/#/definitions/flowErrors)
Errors:
* 400 - Error if malformed request (https://api.losant.com/#/definitions/error)
* 404 - Error if flow version was not found (https://api.losant.com/#/definitions/error)
"""
query_params = {"_actions": "false", "_links": "true", "_embedded": "true"}
path_params = {}
headers = {}
body = None
if "applicationId" in kwargs:
path_params["applicationId"] = kwargs["applicationId"]
if "flowId" in kwargs:
path_params["flowId"] = kwargs["flowId"]
if "flowVersionId" in kwargs:
path_params["flowVersionId"] = kwargs["flowVersionId"]
if "duration" in kwargs:
query_params["duration"] = kwargs["duration"]
if "end" in kwargs:
query_params["end"] = kwargs["end"]
if "limit" in kwargs:
query_params["limit"] = kwargs["limit"]
if "sortDirection" in kwargs:
query_params["sortDirection"] = kwargs["sortDirection"]
if "deviceId" in kwargs:
query_params["deviceId"] = kwargs["deviceId"]
if "losantdomain" in kwargs:
headers["losantdomain"] = kwargs["losantdomain"]
if "_actions" in kwargs:
query_params["_actions"] = kwargs["_actions"]
if "_links" in kwargs:
query_params["_links"] = kwargs["_links"]
if "_embedded" in kwargs:
query_params["_embedded"] = kwargs["_embedded"]
path = "/applications/{applicationId}/flows/{flowId}/versions/{flowVersionId}/errors".format(**path_params)
return self.client.request("GET", path, params=query_params, headers=headers, body=body)
def get(self, **kwargs):
"""
Retrieves information on a flow version
Authentication:
The client must be configured with a valid api
access token to call this action. The token
must include at least one of the following scopes:
all.Application, all.Application.read, all.Organization, all.Organization.read, all.User, all.User.read, flowVersion.*, or flowVersion.get.
Parameters:
* {string} applicationId - ID associated with the application
* {string} flowId - ID associated with the flow
* {string} flowVersionId - Version ID or version name associated with the flow version
* {string} includeCustomNodes - If the result of the request should also include the details of any custom nodes referenced by the returned workflows
* {string} losantdomain - Domain scope of request (rarely needed)
* {boolean} _actions - Return resource actions in response
* {boolean} _links - Return resource link in response
* {boolean} _embedded - Return embedded resources in response
Responses:
* 200 - Flow version information (https://api.losant.com/#/definitions/flowVersion)
Errors:
* 400 - Error if malformed request (https://api.losant.com/#/definitions/error)
* 404 - Error if flow version was not found (https://api.losant.com/#/definitions/error)
"""
query_params = {"_actions": "false", "_links": "true", "_embedded": "true"}
path_params = {}
headers = {}
body = None
if "applicationId" in kwargs:
path_params["applicationId"] = kwargs["applicationId"]
if "flowId" in kwargs:
path_params["flowId"] = kwargs["flowId"]
if "flowVersionId" in kwargs:
path_params["flowVersionId"] = kwargs["flowVersionId"]
if "includeCustomNodes" in kwargs:
query_params["includeCustomNodes"] = kwargs["includeCustomNodes"]
if "losantdomain" in kwargs:
headers["losantdomain"] = kwargs["losantdomain"]
if "_actions" in kwargs:
query_params["_actions"] = kwargs["_actions"]
if "_links" in kwargs:
query_params["_links"] = kwargs["_links"]
if "_embedded" in kwargs:
query_params["_embedded"] = kwargs["_embedded"]
path = "/applications/{applicationId}/flows/{flowId}/versions/{flowVersionId}".format(**path_params)
return self.client.request("GET", path, params=query_params, headers=headers, body=body)
def get_log_entries(self, **kwargs):
"""
Retrieve the recent log entries about runs of this workflow version
Authentication:
The client must be configured with a valid api
access token to call this action. The token
must include at least one of the following scopes:
all.Application, all.Application.read, all.Organization, all.Organization.read, all.User, all.User.read, flowVersion.*, or flowVersion.log.
Parameters:
* {string} applicationId - ID associated with the application
* {string} flowId - ID associated with the flow
* {string} flowVersionId - Version ID or version name associated with the flow version
* {string} limit - Max log entries to return (ordered by time descending)
* {string} since - Look for log entries since this time (ms since epoch)
* {string} losantdomain - Domain scope of request (rarely needed)
* {boolean} _actions - Return resource actions in response
* {boolean} _links - Return resource links in response
* {boolean} _embedded - Return embedded resources in response
Responses:
* 200 - Recent log entries (https://api.losant.com/#/definitions/flowLog)
Errors:
* 400 - Error if malformed request (https://api.losant.com/#/definitions/error)
* 404 - Error if flow version was not found (https://api.losant.com/#/definitions/error)
"""
query_params = {"_actions": "false", "_links": "true", "_embedded": "true"}
path_params = {}
headers = {}
body = None
if "applicationId" in kwargs:
path_params["applicationId"] = kwargs["applicationId"]
if "flowId" in kwargs:
path_params["flowId"] = kwargs["flowId"]
if "flowVersionId" in kwargs:
path_params["flowVersionId"] = kwargs["flowVersionId"]
if "limit" in kwargs:
query_params["limit"] = kwargs["limit"]
if "since" in kwargs:
query_params["since"] = kwargs["since"]
if "losantdomain" in kwargs:
headers["losantdomain"] = kwargs["losantdomain"]
if "_actions" in kwargs:
query_params["_actions"] = kwargs["_actions"]
if "_links" in kwargs:
query_params["_links"] = kwargs["_links"]
if "_embedded" in kwargs:
query_params["_embedded"] = kwargs["_embedded"]
path = "/applications/{applicationId}/flows/{flowId}/versions/{flowVersionId}/logs".format(**path_params)
return self.client.request("GET", path, params=query_params, headers=headers, body=body)
def patch(self, **kwargs):
"""
Updates information about a flow version
Authentication:
The client must be configured with a valid api
access token to call this action. The token
must include at least one of the following scopes:
all.Application, all.Organization, all.User, flowVersion.*, or flowVersion.patch.
Parameters:
* {string} applicationId - ID associated with the application
* {string} flowId - ID associated with the flow
* {string} flowVersionId - Version ID or version name associated with the flow version
* {string} includeCustomNodes - Whether the result of the request should also include the details of any custom nodes referenced by the returned workflows
* {hash} flowVersion - Object containing new properties of the flow version (https://api.losant.com/#/definitions/flowVersionPatch)
* {string} losantdomain - Domain scope of request (rarely needed)
* {boolean} _actions - Return resource actions in response
* {boolean} _links - Return resource links in response
* {boolean} _embedded - Return embedded resources in response
Responses:
* 200 - Updated flow version information (https://api.losant.com/#/definitions/flowVersion)
Errors:
* 400 - Error if malformed request (https://api.losant.com/#/definitions/error)
* 404 - Error if flow version was not found (https://api.losant.com/#/definitions/error)
"""
query_params = {"_actions": "false", "_links": "true", "_embedded": "true"}
path_params = {}
headers = {}
body = None
if "applicationId" in kwargs:
path_params["applicationId"] = kwargs["applicationId"]
if "flowId" in kwargs:
path_params["flowId"] = kwargs["flowId"]
if "flowVersionId" in kwargs:
path_params["flowVersionId"] = kwargs["flowVersionId"]
if "includeCustomNodes" in kwargs:
query_params["includeCustomNodes"] = kwargs["includeCustomNodes"]
if "flowVersion" in kwargs:
body = kwargs["flowVersion"]
if "losantdomain" in kwargs:
headers["losantdomain"] = kwargs["losantdomain"]
if "_actions" in kwargs:
query_params["_actions"] = kwargs["_actions"]
if "_links" in kwargs:
query_params["_links"] = kwargs["_links"]
if "_embedded" in kwargs:
query_params["_embedded"] = kwargs["_embedded"]
path = "/applications/{applicationId}/flows/{flowId}/versions/{flowVersionId}".format(**path_params)
return self.client.request("PATCH", path, params=query_params, headers=headers, body=body)
def stats(self, **kwargs):
"""
Get statistics about workflow runs for this workflow version
Authentication:
The client must be configured with a valid api
access token to call this action. The token
must include at least one of the following scopes:
all.Application, all.Application.read, all.Organization, all.Organization.read, all.User, all.User.read, flowVersion.*, or flowVersion.stats.
Parameters:
* {string} applicationId - ID associated with the application
* {string} flowId - ID associated with the flow
* {string} flowVersionId - Version ID or version name associated with the flow version
* {string} duration - Duration of time range in milliseconds
* {string} end - End of time range in milliseconds since epoch
* {string} resolution - Resolution in milliseconds
* {string} deviceId - For edge workflows, the device ID to return workflow stats for. When not included, stats will be aggregated across all device IDs.
* {string} losantdomain - Domain scope of request (rarely needed)
* {boolean} _actions - Return resource actions in response
* {boolean} _links - Return resource links in response
* {boolean} _embedded - Return embedded resources in response
Responses:
* 200 - Statistics for workflow runs (https://api.losant.com/#/definitions/flowStats)
Errors:
* 400 - Error if malformed request (https://api.losant.com/#/definitions/error)
* 404 - Error if flow version was not found (https://api.losant.com/#/definitions/error)
"""
query_params = {"_actions": "false", "_links": "true", "_embedded": "true"}
path_params = {}
headers = {}
body = None
if "applicationId" in kwargs:
path_params["applicationId"] = kwargs["applicationId"]
if "flowId" in kwargs:
path_params["flowId"] = kwargs["flowId"]
if "flowVersionId" in kwargs:
path_params["flowVersionId"] = kwargs["flowVersionId"]
if "duration" in kwargs:
query_params["duration"] = kwargs["duration"]
if "end" in kwargs:
query_params["end"] = kwargs["end"]
if "resolution" in kwargs:
query_params["resolution"] = kwargs["resolution"]
if "deviceId" in kwargs:
query_params["deviceId"] = kwargs["deviceId"]
if "losantdomain" in kwargs:
headers["losantdomain"] = kwargs["losantdomain"]
if "_actions" in kwargs:
query_params["_actions"] = kwargs["_actions"]
if "_links" in kwargs:
query_params["_links"] = kwargs["_links"]
if "_embedded" in kwargs:
query_params["_embedded"] = kwargs["_embedded"]
path = "/applications/{applicationId}/flows/{flowId}/versions/{flowVersionId}/stats".format(**path_params)
return self.client.request("GET", path, params=query_params, headers=headers, body=body)
| 46.501299 | 158 | 0.644585 | 2,006 | 17,903 | 5.668495 | 0.127119 | 0.039398 | 0.035441 | 0.051798 | 0.822003 | 0.812154 | 0.804503 | 0.804503 | 0.804503 | 0.796412 | 0 | 0.004646 | 0.254538 | 17,903 | 384 | 159 | 46.622396 | 0.84737 | 0.497738 | 0 | 0.85443 | 0 | 0 | 0.286245 | 0.055328 | 0 | 0 | 0 | 0 | 0 | 1 | 0.044304 | false | 0 | 0.006329 | 0 | 0.094937 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b6dfd66310a4f6e1ef7dcbf04d85a78c27ad5af5 | 187 | py | Python | Beecrowd/Python/2756-Output-10.py | nazmul629/OJ-Problem_Solution | cf5e01ab8cf062441bfe901e12d98cbaa1d727f9 | [
"MIT"
] | null | null | null | Beecrowd/Python/2756-Output-10.py | nazmul629/OJ-Problem_Solution | cf5e01ab8cf062441bfe901e12d98cbaa1d727f9 | [
"MIT"
] | null | null | null | Beecrowd/Python/2756-Output-10.py | nazmul629/OJ-Problem_Solution | cf5e01ab8cf062441bfe901e12d98cbaa1d727f9 | [
"MIT"
] | null | null | null | print(" A")
print(" B B")
print(" C C")
print(" D D")
print(" E E")
print(" D D")
print(" C C")
print(" B B")
print(" A")
| 18.7 | 22 | 0.326203 | 25 | 187 | 2.44 | 0.24 | 0.196721 | 0.229508 | 0.393443 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.433155 | 187 | 9 | 23 | 20.777778 | 0.575472 | 0 | 0 | 0.888889 | 0 | 0 | 0.470588 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
8e142a227f4b3792729957040a5af59492249cb9 | 41,026 | py | Python | components/core/qcg/pilotjob/tests/test_slurmenv_api.py | LourensVeen/QCG-PilotJob | e78c35a9b16b1042a2d5b54352a2ca2e3a58c6b9 | [
"Apache-2.0"
] | null | null | null | components/core/qcg/pilotjob/tests/test_slurmenv_api.py | LourensVeen/QCG-PilotJob | e78c35a9b16b1042a2d5b54352a2ca2e3a58c6b9 | [
"Apache-2.0"
] | null | null | null | components/core/qcg/pilotjob/tests/test_slurmenv_api.py | LourensVeen/QCG-PilotJob | e78c35a9b16b1042a2d5b54352a2ca2e3a58c6b9 | [
"Apache-2.0"
] | null | null | null | import pytest
import tempfile
from os.path import join, abspath, exists
from shutil import rmtree
from pathlib import Path
from time import sleep
from qcg.pilotjob.slurmres import in_slurm_allocation, get_num_slurm_nodes
from qcg.pilotjob.tests.utils import get_slurm_resources_binded, set_pythonpath_to_qcg_module, find_single_aux_dir
from qcg.pilotjob.api.manager import LocalManager
from qcg.pilotjob.api.job import Jobs
from qcg.pilotjob.api.errors import ConnectionError
from qcg.pilotjob.api.jobinfo import JobInfo
from qcg.pilotjob.executionjob import ExecutionJob
from qcg.pilotjob.tests.utils import SHARED_PATH, submit_2_manager_and_wait_4_info
def test_slurmenv_api_resources():
if not in_slurm_allocation() or get_num_slurm_nodes() < 2:
pytest.skip('test not run in slurm allocation or allocation is smaller than 2 nodes')
resources, allocation = get_slurm_resources_binded()
set_pythonpath_to_qcg_module()
tmpdir = str(tempfile.mkdtemp(dir=SHARED_PATH))
m = None
try:
m = LocalManager(['--log', 'debug', '--wd', tmpdir, '--report-format', 'json'], {'wdir': str(tmpdir)})
api_res = m.resources()
assert all(('total_nodes' in api_res, 'total_cores' in api_res))
assert all((api_res['total_nodes'] == resources.total_nodes, api_res['total_cores'] == resources.total_cores))
aux_dir = find_single_aux_dir(str(tmpdir))
assert all((exists(join(tmpdir, '.qcgpjm-client', 'api.log')),
exists(join(aux_dir, 'service.log'))))
finally:
if m:
m.finish()
# stopManager uses the 'terminate' method on the service process, which is not the best option
# when using pytest and gathering code coverage
# m.stopManager()
m.cleanup()
rmtree(tmpdir)
def test_slurmenv_api_submit_simple():
if not in_slurm_allocation() or get_num_slurm_nodes() < 2:
pytest.skip('test not run in slurm allocation or allocation is smaller than 2 nodes')
resources, allocation = get_slurm_resources_binded()
set_pythonpath_to_qcg_module()
tmpdir = str(tempfile.mkdtemp(dir=SHARED_PATH))
m = None
try:
m = LocalManager(['--log', 'debug', '--wd', tmpdir, '--report-format', 'json'], {'wdir': str(tmpdir)})
jobs = Jobs().\
add_std({ 'name': 'host',
'execution': {
'exec': '/bin/hostname',
'args': [ '--fqdn' ],
'stdout': 'std.out',
'stderr': 'std.err'
}})
assert submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED')
finally:
if m:
m.finish()
# m.stopManager()
m.cleanup()
rmtree(tmpdir)
def test_slurmenv_api_submit_many_cores():
if not in_slurm_allocation() or get_num_slurm_nodes() < 2:
pytest.skip('test not run in slurm allocation or allocation is smaller than 2 nodes')
resources, allocation = get_slurm_resources_binded()
set_pythonpath_to_qcg_module()
tmpdir = str(tempfile.mkdtemp(dir=SHARED_PATH))
m = None
try:
m = LocalManager(['--log', 'debug', '--wd', tmpdir, '--report-format', 'json'], {'wdir': str(tmpdir)})
jobs = Jobs(). \
add_std({ 'name': 'host',
'execution': {
'exec': '/bin/hostname',
'args': [ '--fqdn' ],
'stdout': 'out',
},
'resources': { 'numCores': { 'exact': resources.total_cores } }
})
jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED')
# check that the job's working directory is inside the working directory of the service
assert tmpdir == jinfos['host'].wdir, str(jinfos['host'].wdir)
assert all((len(jinfos['host'].nodes) == resources.total_nodes,
jinfos['host'].total_cores == resources.total_cores)), str(jinfos['host'])
finally:
if m:
m.finish()
# m.stopManager()
m.cleanup()
rmtree(tmpdir)
def test_slurmenv_api_submit_resource_ranges():
if not in_slurm_allocation() or get_num_slurm_nodes() < 2:
pytest.skip('test not run in slurm allocation or allocation is smaller than 2 nodes')
resources, allocation = get_slurm_resources_binded()
set_pythonpath_to_qcg_module()
tmpdir = str(tempfile.mkdtemp(dir=SHARED_PATH))
m = None
try:
m = LocalManager(['--log', 'debug', '--wd', tmpdir, '--report-format', 'json'], {'wdir': str(tmpdir)})
jobs = Jobs(). \
add_std({ 'name': 'host',
'execution': {
'exec': '/bin/hostname',
'args': [ '--fqdn' ],
'stdout': 'out',
},
'resources': { 'numCores': { 'min': 1 } }
})
# job should fail because of the missing 'max' parameter
jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'FAILED')
jinfo = jinfos['host']
assert "Both core's range boundaries (min, max) must be defined" in jinfo.messages, str(jinfo)
jobs = Jobs(). \
add_std({ 'name': 'host2',
'execution': {
'exec': '/bin/hostname',
'args': [ '--fqdn' ],
'stdout': 'out',
},
'resources': {
'numNodes': { 'exact': 1 },
'numCores': { 'min': 1, 'max': resources.nodes[0].total + 1 } }
})
# job should run on a single node (the first free one) with all of its available cores
jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED')
jinfo = jinfos['host2']
assert all((len(jinfo.nodes) == 1, jinfo.total_cores == resources.nodes[0].total)), str(jinfo)
jobs = Jobs(). \
add_std({ 'name': 'host3',
'execution': {
'exec': '/bin/hostname',
'args': [ '--fqdn' ],
'stdout': 'out',
},
'resources': {
'numCores': { 'min': 1, 'max': resources.nodes[0].total + 1 } }
})
# job should run on at least two nodes with the given maximum total number of cores
jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED')
jinfo = jinfos['host3']
assert all((len(jinfo.nodes) == 2, jinfo.total_cores == resources.nodes[0].total + 1)), str(jinfo)
finally:
if m:
m.finish()
# m.stopManager()
m.cleanup()
rmtree(tmpdir)
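The test above exercises the rule that a cores range must name both boundaries. As a rough illustration (a hypothetical validator, not the actual QCG-PilotJob implementation), the pairing rule can be sketched as:

```python
# Hypothetical sketch of the 'numCores' pairing rule checked by the test above;
# this is NOT the QCG-PilotJob implementation, only assumed semantics.
def validate_num_cores(num_cores):
    """Return True if the spec names either 'exact' alone or both 'min' and 'max'."""
    if 'exact' in num_cores:
        # an exact count stands alone
        return 'min' not in num_cores and 'max' not in num_cores
    # range form: both boundaries must be defined
    return 'min' in num_cores and 'max' in num_cores

print(validate_num_cores({'min': 1}))            # False - missing 'max', job fails
print(validate_num_cores({'min': 1, 'max': 8}))  # True
print(validate_num_cores({'exact': 4}))          # True
```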
def test_slurmenv_api_submit_exceed_total_cores():
if not in_slurm_allocation() or get_num_slurm_nodes() < 2:
pytest.skip('test not run in slurm allocation or allocation is smaller than 2 nodes')
resources, allocation = get_slurm_resources_binded()
set_pythonpath_to_qcg_module()
tmpdir = str(tempfile.mkdtemp(dir=SHARED_PATH))
m = None
try:
m = LocalManager(['--log', 'debug', '--wd', tmpdir, '--report-format', 'json'], {'wdir': str(tmpdir)})
jobs = Jobs(). \
add_std({ 'name': 'date',
'execution': { 'exec': '/bin/date' },
'resources': {
'numCores': { 'exact': resources.total_cores + 1 }
}})
with pytest.raises(ConnectionError, match=r".*Not enough resources.*"):
m.submit(jobs)
assert len(m.list()) == 0
jobs = Jobs(). \
add_std({ 'name': 'date',
'execution': { 'exec': '/bin/date' },
'resources': {
'numNodes': { 'exact': resources.total_nodes + 1 }
}})
with pytest.raises(ConnectionError, match=r".*Not enough resources.*"):
m.submit(jobs)
assert len(m.list()) == 0
jobs = Jobs(). \
add_std({ 'name': 'date',
'execution': {
'exec': '/bin/date',
'stdout': 'std.out',
},
'resources': { 'numCores': { 'exact': resources.total_cores } }
})
jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED')
assert jinfos['date'].total_cores == resources.total_cores
finally:
if m:
m.finish()
# m.stopManager()
m.cleanup()
rmtree(tmpdir)
def test_slurmenv_api_std_streams():
if not in_slurm_allocation() or get_num_slurm_nodes() < 2:
pytest.skip('test not run in slurm allocation or allocation is smaller than 2 nodes')
resources, allocation = get_slurm_resources_binded()
set_pythonpath_to_qcg_module()
tmpdir = str(tempfile.mkdtemp(dir=SHARED_PATH))
m = None
try:
m = LocalManager(['--log', 'debug', '--wd', tmpdir, '--report-format', 'json'], {'wdir': str(tmpdir)})
jobs = Jobs(). \
add_std({ 'name': 'host',
'execution': {
'exec': 'cat',
'stdin': '/etc/system-release',
'stdout': 'out',
'stderr': 'err'
}})
assert submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED')
assert all((exists(join(tmpdir, 'out')), exists(join(tmpdir, 'err'))))
with open(join(tmpdir, 'out'), 'rt') as out_f:
out = out_f.read()
with open(join('/etc/system-release'), 'rt') as sr_f:
system_release = sr_f.read()
assert system_release in out
finally:
if m:
m.finish()
# m.stopManager()
m.cleanup()
rmtree(tmpdir)
def test_slurmenv_api_std_streams_many_cores():
if not in_slurm_allocation() or get_num_slurm_nodes() < 2:
pytest.skip('test not run in slurm allocation or allocation is smaller than 2 nodes')
resources, allocation = get_slurm_resources_binded()
set_pythonpath_to_qcg_module()
tmpdir = str(tempfile.mkdtemp(dir=SHARED_PATH))
m = None
try:
m = LocalManager(['--log', 'debug', '--wd', tmpdir, '--report-format', 'json'], {'wdir': str(tmpdir)})
jobs = Jobs(). \
add_std({ 'name': 'host',
'execution': {
'exec': 'cat',
'stdin': '/etc/system-release',
'stdout': 'out',
'stderr': 'err'
},
'resources': {
'numCores': { 'exact': 2 }
}
})
assert submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED')
assert all((exists(join(tmpdir, 'out')), exists(join(tmpdir, 'err'))))
with open(join(tmpdir, 'out'), 'rt') as out_f:
out = out_f.read()
with open(join('/etc/system-release'), 'rt') as sr_f:
system_release = sr_f.read()
assert system_release in out
finally:
if m:
m.finish()
# m.stopManager()
m.cleanup()
rmtree(tmpdir)
def test_slurmenv_api_iteration_simple():
if not in_slurm_allocation() or get_num_slurm_nodes() < 2:
pytest.skip('test not run in slurm allocation or allocation is smaller than 2 nodes')
resources, allocation = get_slurm_resources_binded()
set_pythonpath_to_qcg_module()
tmpdir = str(tempfile.mkdtemp(dir=SHARED_PATH))
m = None
try:
m = LocalManager(['--log', 'debug', '--wd', tmpdir, '--report-format', 'json'], {'wdir': str(tmpdir)})
its = 2
jobs = Jobs(). \
add_std({ 'name': 'host',
'iteration': { 'stop': its },
'execution': { 'exec': 'hostname', 'args': [ '--fqdn' ], 'stdout': 'out' },
'resources': { 'numCores': { 'exact': 1 } }
})
jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED')
assert jinfos
jinfo = jinfos['host']
print('jinfo: {}'.format(jinfo))
assert all((jinfo.iterations, jinfo.iterations.get('start', -1) == 0,
jinfo.iterations.get('stop', 0) == its, jinfo.iterations.get('total', 0) == its,
jinfo.iterations.get('finished', 0) == its, jinfo.iterations.get('failed', -1) == 0))
its = 2
jobs = Jobs(). \
add_std({ 'name': 'host2',
'iteration': { 'stop': its },
'execution': { 'exec': 'hostname', 'args': [ '--fqdn' ], 'stdout': 'out' },
'resources': { 'numCores': { 'exact': 1 } }
})
jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED', withChilds=True)
assert jinfos
jinfo = jinfos['host2']
print('jinfo: {}'.format(jinfo))
assert all((jinfo.iterations, jinfo.iterations.get('start', -1) == 0,
jinfo.iterations.get('stop', 0) == its, jinfo.iterations.get('total', 0) == its,
jinfo.iterations.get('finished', 0) == its, jinfo.iterations.get('failed', -1) == 0))
assert len(jinfo.childs) == its
for iteration in range(its):
job_it = jinfo.childs[iteration]
assert all((job_it.iteration == iteration, job_it.name == '{}:{}'.format('host2', iteration),
job_it.wdir == tmpdir, job_it.total_cores == 1))
finally:
if m:
m.finish()
# m.stopManager()
m.cleanup()
rmtree(tmpdir)
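The iteration tests below rely on the 'split-into' iteration scheduler. Under the semantics assumed in their comments (the allocation's cores are divided into equal partitions, one iteration per partition), the per-iteration core count works out as sketched here; this helper is purely illustrative and not part of QCG-PilotJob:

```python
# Illustrative arithmetic for the 'split-into' scheduler (assumed semantics,
# not QCG-PilotJob code): the allocation's cores are divided into equal
# partitions and each iteration runs inside one partition.
def split_into_cores(total_cores, parts):
    """Cores available to each iteration when total_cores is split into parts."""
    return total_cores // parts

# e.g. an 8-core allocation split into 2 partitions gives 4 cores per iteration
print(split_into_cores(8, 2))  # 4
print(split_into_cores(8, 4))  # 2
```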
def test_slurmenv_api_iteration_core_scheduling():
if not in_slurm_allocation() or get_num_slurm_nodes() < 2:
pytest.skip('test not run in slurm allocation or allocation is smaller than 2 nodes')
resources, allocation = get_slurm_resources_binded()
set_pythonpath_to_qcg_module()
tmpdir = str(tempfile.mkdtemp(dir=SHARED_PATH))
m = None
try:
m = LocalManager(['--log', 'debug', '--wd', tmpdir, '--report-format', 'json'], {'wdir': str(tmpdir)})
# in this case 'split-into' defaults to the number of iterations,
# so the total available resources should be split into two partitions and each
# iteration should run on its own partition
jname = 'host'
its = 2
jobs = Jobs(). \
add_std({ 'name': jname,
'iteration': { 'stop': its },
'execution': { 'exec': 'hostname', 'args': [ '--fqdn' ], 'stdout': 'out' },
'resources': { 'numCores': { 'min': 1,
'scheduler': { 'name': 'split-into' } } }
})
jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED', withChilds=True)
assert jinfos
jinfo = jinfos[jname]
assert all((jinfo.iterations, jinfo.iterations.get('start', -1) == 0,
jinfo.iterations.get('stop', 0) == its, jinfo.iterations.get('total', 0) == its,
jinfo.iterations.get('finished', 0) == its, jinfo.iterations.get('failed', -1) == 0)), str(jinfo)
assert len(jinfo.childs) == its
for iteration in range(its):
job_it = jinfo.childs[iteration]
print('job iteration {}: {}'.format(iteration, str(job_it)))
assert all((job_it.iteration == iteration, job_it.name == '{}:{}'.format(jname, iteration),
job_it.total_cores >= 1, job_it.total_cores < resources.total_cores)), str(job_it)
# all iterations have been scheduled across all resources
assert sum([ child.total_cores for child in jinfo.childs ]) == resources.total_cores
assert all(child.total_cores == resources.total_cores / its for child in jinfo.childs)
# we explicitly set the 'split-into' parameter to 2; the behavior should be the same as in the
# previous example
jname = 'host2'
its = 2
jobs = Jobs(). \
add_std({ 'name': jname,
'iteration': { 'stop': its },
'execution': { 'exec': 'hostname', 'args': [ '--fqdn' ], 'stdout': 'out' },
'resources': { 'numCores': { 'min': 1,
'scheduler': { 'name': 'split-into', 'params': { 'parts': 2 } } } }
})
jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED', withChilds=True)
assert jinfos
jinfo = jinfos[jname]
assert all((jinfo.iterations, jinfo.iterations.get('start', -1) == 0,
jinfo.iterations.get('stop', 0) == its, jinfo.iterations.get('total', 0) == its,
jinfo.iterations.get('finished', 0) == its, jinfo.iterations.get('failed', -1) == 0)), str(jinfo)
assert len(jinfo.childs) == its
for iteration in range(its):
job_it = jinfo.childs[iteration]
print('job iteration {}: {}'.format(iteration, str(job_it)))
assert all((job_it.iteration == iteration, job_it.name == '{}:{}'.format(jname, iteration),
job_it.total_cores >= 1, job_it.total_cores < resources.total_cores)), str(job_it)
# all iterations have been scheduled across all resources
assert sum([ child.total_cores for child in jinfo.childs ]) == resources.total_cores
assert all(child.total_cores == resources.total_cores / 2 for child in jinfo.childs)
# we explicitly set the 'split-into' parameter to 4, so the two iterations should be scheduled
# on half of the available resources
jname = 'host3'
its = 2
jobs = Jobs(). \
add_std({ 'name': jname,
'iteration': { 'stop': its },
'execution': { 'exec': 'hostname', 'args': [ '--fqdn' ], 'stdout': 'out' },
'resources': { 'numCores': { 'min': 1,
'scheduler': { 'name': 'split-into', 'params': { 'parts': 4 } } } }
})
jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED', withChilds=True)
assert jinfos
jinfo = jinfos[jname]
assert all((jinfo.iterations, jinfo.iterations.get('start', -1) == 0,
jinfo.iterations.get('stop', 0) == its, jinfo.iterations.get('total', 0) == its,
jinfo.iterations.get('finished', 0) == its, jinfo.iterations.get('failed', -1) == 0)), str(jinfo)
assert len(jinfo.childs) == its
for iteration in range(its):
job_it = jinfo.childs[iteration]
print('job iteration {}: {}'.format(iteration, str(job_it)))
assert all((job_it.iteration == iteration, job_it.name == '{}:{}'.format(jname, iteration),
job_it.total_cores >= 1, job_it.total_cores < resources.total_cores)), str(job_it)
# all iterations have been scheduled across all resources
assert sum([ child.total_cores for child in jinfo.childs ]) == resources.total_cores / 2
assert all(child.total_cores == resources.total_cores / 4 for child in jinfo.childs)
# we explicitly set the 'split-into' parameter to 2, but the number of iterations is larger than
# the number of partitions available at any one time, so they should be executed serially (in parts)
jname = 'host4'
its = 10
jobs = Jobs(). \
add_std({ 'name': jname,
'iteration': { 'stop': its },
'execution': { 'exec': 'hostname', 'args': [ '--fqdn' ], 'stdout': 'out' },
'resources': { 'numCores': { 'min': 1,
'scheduler': { 'name': 'split-into', 'params': { 'parts': 2 } } } }
})
jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED', withChilds=True)
assert jinfos
jinfo = jinfos[jname]
assert all((jinfo.iterations, jinfo.iterations.get('start', -1) == 0,
jinfo.iterations.get('stop', 0) == its, jinfo.iterations.get('total', 0) == its,
jinfo.iterations.get('finished', 0) == its, jinfo.iterations.get('failed', -1) == 0)), str(jinfo)
assert len(jinfo.childs) == its
for iteration in range(its):
job_it = jinfo.childs[iteration]
print('job iteration {}: {}'.format(iteration, str(job_it)))
assert all((job_it.iteration == iteration, job_it.name == '{}:{}'.format(jname, iteration),
job_it.total_cores >= 1, job_it.total_cores < resources.total_cores)), str(job_it)
assert all(child.total_cores == resources.total_cores / 2 for child in jinfo.childs)
# the 'maximum-iters' scheduler tries to launch as many iterations as possible at the same time
# on all available resources
jname = 'host5'
its = 2
jobs = Jobs(). \
add_std({ 'name': jname,
'iteration': { 'stop': its },
'execution': { 'exec': 'sleep', 'args': [ '2s' ], 'stdout': 'out' },
'resources': { 'numCores': { 'min': 1,
'scheduler': { 'name': 'maximum-iters' } } }
})
jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED', withChilds=True)
assert jinfos
jinfo = jinfos[jname]
assert all((jinfo.iterations, jinfo.iterations.get('start', -1) == 0,
jinfo.iterations.get('stop', 0) == its, jinfo.iterations.get('total', 0) == its,
jinfo.iterations.get('finished', 0) == its, jinfo.iterations.get('failed', -1) == 0)), str(jinfo)
assert len(jinfo.childs) == its
for iteration in range(its):
job_it = jinfo.childs[iteration]
print('job iteration {}: {}'.format(iteration, str(job_it)))
assert all((job_it.iteration == iteration, job_it.name == '{}:{}'.format(jname, iteration),
job_it.total_cores >= 1, job_it.total_cores < resources.total_cores)), str(job_it)
assert sum([ child.total_cores for child in jinfo.childs ]) == resources.total_cores
# the 'maximum-iters' scheduler tries to launch as many iterations as possible at the same time
# on all available resources
jname = 'host6'
its = resources.total_cores
jobs = Jobs(). \
add_std({ 'name': jname,
'iteration': { 'stop': its },
'execution': { 'exec': 'sleep', 'args': [ '2s' ], 'stdout': 'out' },
'resources': { 'numCores': { 'min': 1,
'scheduler': { 'name': 'maximum-iters' } } }
})
jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED', withChilds=True)
assert jinfos
jinfo = jinfos[jname]
assert all((jinfo.iterations, jinfo.iterations.get('start', -1) == 0,
jinfo.iterations.get('stop', 0) == its, jinfo.iterations.get('total', 0) == its,
jinfo.iterations.get('finished', 0) == its, jinfo.iterations.get('failed', -1) == 0)), str(jinfo)
assert len(jinfo.childs) == its
for iteration in range(its):
job_it = jinfo.childs[iteration]
print('job iteration {}: {}'.format(iteration, str(job_it)))
assert all((job_it.iteration == iteration, job_it.name == '{}:{}'.format(jname, iteration),
job_it.total_cores >= 1, job_it.total_cores < resources.total_cores)), str(job_it)
assert sum([ child.total_cores for child in jinfo.childs ]) == resources.total_cores
# when the number of iterations exceeds the amount of available resources, the 'maximum-iters' scheduler
# splits the iterations into 'steps', minimizing their number, and allocates as many resources as
# possible to each iteration inside a 'step'
jname = 'host7'
its = resources.total_cores
jobs = Jobs(). \
add_std({ 'name': jname,
'iteration': { 'stop': its },
'execution': { 'exec': 'sleep', 'args': [ '2s' ], 'stdout': 'out' },
'resources': { 'numCores': { 'min': 1,
'scheduler': { 'name': 'maximum-iters' } } }
})
jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED', withChilds=True)
assert jinfos
jinfo = jinfos[jname]
assert all((jinfo.iterations, jinfo.iterations.get('start', -1) == 0,
jinfo.iterations.get('stop', 0) == its, jinfo.iterations.get('total', 0) == its,
jinfo.iterations.get('finished', 0) == its, jinfo.iterations.get('failed', -1) == 0)), str(jinfo)
assert len(jinfo.childs) == its
for iteration in range(its):
job_it = jinfo.childs[iteration]
print('job iteration {}: {}'.format(iteration, str(job_it)))
assert all((job_it.iteration == iteration, job_it.name == '{}:{}'.format(jname, iteration),
job_it.total_cores >= 1, job_it.total_cores < resources.total_cores)), str(job_it)
assert all(child.total_cores == 1 for child in jinfo.childs)
assert sum([ child.total_cores for child in jinfo.childs ]) == resources.total_cores
# when the number of iterations exceeds the amount of available resources, the 'maximum-iters' scheduler
# splits the iterations into 'steps', minimizing their number, and allocates as many resources as
# possible to each iteration inside a 'step'
jname = 'host8'
its = resources.total_cores * 2
jobs = Jobs(). \
add_std({ 'name': jname,
'iteration': { 'stop': its },
'execution': { 'exec': 'sleep', 'args': [ '2s' ], 'stdout': 'out' },
'resources': { 'numCores': { 'min': 1,
'scheduler': { 'name': 'maximum-iters' } } }
})
jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED', withChilds=True)
assert jinfos
jinfo = jinfos[jname]
assert all((jinfo.iterations, jinfo.iterations.get('start', -1) == 0,
jinfo.iterations.get('stop', 0) == its, jinfo.iterations.get('total', 0) == its,
jinfo.iterations.get('finished', 0) == its, jinfo.iterations.get('failed', -1) == 0)), str(jinfo)
assert len(jinfo.childs) == its
for iteration in range(its):
job_it = jinfo.childs[iteration]
print('job iteration {}: {}'.format(iteration, str(job_it)))
assert all((job_it.iteration == iteration, job_it.name == '{}:{}'.format(jname, iteration),
job_it.total_cores >= 1, job_it.total_cores < resources.total_cores)), str(job_it)
assert all(child.total_cores == 1 for child in jinfo.childs)
assert sum([ child.total_cores for child in jinfo.childs ]) == resources.total_cores * 2
# when the number of iterations exceeds the amount of available resources, the 'maximum-iters' scheduler
# splits the iterations into 'steps', minimizing their number, and allocates as many resources as
# possible to each iteration inside a 'step'
jname = 'host9'
its = resources.total_cores + 1
jobs = Jobs(). \
add_std({ 'name': jname,
'iteration': { 'stop': its },
'execution': { 'exec': 'sleep', 'args': [ '2s' ], 'stdout': 'out' },
'resources': { 'numCores': { 'min': 1,
'scheduler': { 'name': 'maximum-iters' } } }
})
jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED', withChilds=True)
assert jinfos
jinfo = jinfos[jname]
assert all((jinfo.iterations, jinfo.iterations.get('start', -1) == 0,
jinfo.iterations.get('stop', 0) == its, jinfo.iterations.get('total', 0) == its,
jinfo.iterations.get('finished', 0) == its, jinfo.iterations.get('failed', -1) == 0)), str(jinfo)
assert len(jinfo.childs) == its
for iteration in range(its):
job_it = jinfo.childs[iteration]
print('job iteration {}: {}'.format(iteration, str(job_it)))
assert all((job_it.iteration == iteration, job_it.name == '{}:{}'.format(jname, iteration),
job_it.total_cores >= 1)), str(job_it)
assert (child.total_cores == 1 for child in jinfo.childs)
# because all iterations will be splited in two 'steps' and in each step the iterations that has been assigned
# for the step should usage maximum available resources
assert sum([ child.total_cores for child in jinfo.childs ]) == resources.total_cores * 2
# in this case where two iterations can't fit at once on resources, all the iterations should be scheduled
# serially on all available resources
jname = 'host10'
its = resources.total_nodes
jobs = Jobs(). \
add_std({ 'name': jname,
'iteration': { 'stop': its },
'execution': { 'exec': 'sleep', 'args': [ '2s' ], 'stdout': 'out' },
'resources': { 'numCores': { 'min': resources.total_cores - 1,
'scheduler': { 'name': 'maximum-iters' } } }
})
jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED', withChilds=True)
assert jinfos
jinfo = jinfos[jname]
assert all((jinfo.iterations, jinfo.iterations.get('start', -1) == 0,
jinfo.iterations.get('stop', 0) == its, jinfo.iterations.get('total', 0) == its,
jinfo.iterations.get('finished', 0) == its, jinfo.iterations.get('failed', -1) == 0)), str(jinfo)
assert len(jinfo.childs) == its
for iteration in range(its):
job_it = jinfo.childs[iteration]
print('job iteration {}: {}'.format(iteration, str(job_it)))
assert all((job_it.iteration == iteration, job_it.name == '{}:{}'.format(jname, iteration),
job_it.total_cores == resources.total_cores, len(job_it.nodes) == resources.total_nodes)),\
str(job_it)
finally:
if m:
m.finish()
# m.stopManager()
m.cleanup()
rmtree(tmpdir)
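The step-splitting behavior of 'maximum-iters' exercised above can be illustrated with a small standalone sketch. Note this is a simplification for illustration only: `split_into_steps` is a made-up helper, not part of the qcg-pilotjob API — iterations are grouped into the fewest possible 'steps', and within each step all available cores are spread over the iterations assigned to it.

```python
# NOTE: illustration only; this helper is NOT the qcg-pilotjob scheduler.
def split_into_steps(total_iters, total_cores, min_cores=1):
    """Split iterations into the fewest 'steps'; in each step all
    available cores are spread over the iterations assigned to it."""
    max_per_step = total_cores // min_cores      # iterations that fit at once
    steps = []
    remaining = total_iters
    while remaining > 0:
        in_step = min(remaining, max_per_step)
        # spread all cores over the iterations of this step
        base, extra = divmod(total_cores, in_step)
        steps.append([base + (1 if i < extra else 0) for i in range(in_step)])
        remaining -= in_step
    return steps

# 'host9' case above: total_cores + 1 iterations on total_cores cores,
# e.g. 9 iterations on 8 cores -> two steps, each using all 8 cores
steps = split_into_steps(9, 8)
```

This mirrors why the tests assert `sum(child.total_cores ...) == resources.total_cores * 2` for the `host8`/`host9` jobs: every step consumes the full allocation, and two steps are needed.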
def test_slurmenv_api_iteration_node_scheduling():
    if not in_slurm_allocation() or get_num_slurm_nodes() < 2:
        pytest.skip('test not run in slurm allocation or allocation is smaller than 2 nodes')

    # TODO: it's hard to write comprehensive iteration node scheduling tests on only two nodes
    # (in slurm's development docker)
    resources, allocation = get_slurm_resources_binded()
    set_pythonpath_to_qcg_module()
    tmpdir = str(tempfile.mkdtemp(dir=SHARED_PATH))

    m = None
    try:
        m = LocalManager(['--log', 'debug', '--wd', tmpdir, '--report-format', 'json'], {'wdir': str(tmpdir)})

        # in this case 'split-into' defaults to the number of iterations, so the total available
        # resources should be split into two partitions and each iteration should run in its own
        # partition
        jname = 'host'
        its = 2
        jobs = Jobs(). \
            add_std({ 'name': jname,
                      'iteration': { 'stop': its },
                      'execution': { 'exec': 'sleep', 'args': [ '2s' ], 'stdout': 'out_${it}', 'stderr': 'err_${it}' },
                      'resources': { 'numCores': { 'exact': resources.nodes[0].total },
                                     'numNodes': { 'min': 1,
                                                   'scheduler': { 'name': 'split-into' } } }
                      })
        jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED', withChilds=True)
        assert jinfos
        jinfo = jinfos[jname]

        assert all((jinfo.iterations, jinfo.iterations.get('start', -1) == 0,
                    jinfo.iterations.get('stop', 0) == its, jinfo.iterations.get('total', 0) == its,
                    jinfo.iterations.get('finished', 0) == its, jinfo.iterations.get('failed', -1) == 0)), str(jinfo)
        assert len(jinfo.childs) == its

        for iteration in range(its):
            job_it = jinfo.childs[iteration]
            assert all((job_it.iteration == iteration, job_it.name == '{}:{}'.format(jname, iteration),
                        job_it.total_cores == resources.nodes[0].total, len(job_it.nodes) == 1)), str(job_it)

        # all iterations have been scheduled across all nodes
        assert sum([len(child.nodes) for child in jinfo.childs]) == resources.total_nodes
        # the iterations should execute on different nodes
        assert list(jinfo.childs[0].nodes)[0] != list(jinfo.childs[1].nodes)[0]

        # we explicitly set the 'split-into' parameter to 2; the behavior should be the same as in
        # the previous example
        jname = 'host2'
        its = 2
        jobs = Jobs(). \
            add_std({ 'name': jname,
                      'iteration': { 'stop': its },
                      'execution': { 'exec': 'sleep', 'args': [ '2s' ], 'stdout': 'out' },
                      'resources': { 'numCores': { 'exact': resources.nodes[0].total },
                                     'numNodes': { 'min': 1,
                                                   'scheduler': { 'name': 'split-into', 'params': { 'parts': 2 } } } }
                      })
        jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED', withChilds=True)
        assert jinfos
        jinfo = jinfos[jname]

        assert all((jinfo.iterations, jinfo.iterations.get('start', -1) == 0,
                    jinfo.iterations.get('stop', 0) == its, jinfo.iterations.get('total', 0) == its,
                    jinfo.iterations.get('finished', 0) == its, jinfo.iterations.get('failed', -1) == 0)), str(jinfo)
        assert len(jinfo.childs) == its

        for iteration in range(its):
            job_it = jinfo.childs[iteration]
            assert all((job_it.iteration == iteration, job_it.name == '{}:{}'.format(jname, iteration),
                        job_it.total_cores == resources.nodes[0].total, len(job_it.nodes) == 1)), str(job_it)

        # all iterations have been scheduled across all nodes
        assert sum([len(child.nodes) for child in jinfo.childs]) == resources.total_nodes
        # the iterations should execute on different nodes
        assert list(jinfo.childs[0].nodes)[0] != list(jinfo.childs[1].nodes)[0]

        # the 'maximum-iters' scheduler tries to launch as many iterations as possible at the same
        # time on all available resources
        jname = 'host3'
        its = 4
        jobs = Jobs(). \
            add_std({ 'name': jname,
                      'iteration': { 'stop': its },
                      'execution': { 'exec': 'sleep', 'args': [ '2s' ], 'stdout': 'out' },
                      'resources': { 'numCores': { 'exact': resources.nodes[0].total },
                                     'numNodes': { 'min': 1,
                                                   'scheduler': { 'name': 'maximum-iters' } } }
                      })
        jinfos = submit_2_manager_and_wait_4_info(m, jobs, 'SUCCEED', withChilds=True)
        assert jinfos
        jinfo = jinfos[jname]

        assert all((jinfo.iterations, jinfo.iterations.get('start', -1) == 0,
                    jinfo.iterations.get('stop', 0) == its, jinfo.iterations.get('total', 0) == its,
                    jinfo.iterations.get('finished', 0) == its, jinfo.iterations.get('failed', -1) == 0)), str(jinfo)
        assert len(jinfo.childs) == its

        for iteration in range(its):
            job_it = jinfo.childs[iteration]
            print('job iteration {}: {}'.format(iteration, str(job_it)))
            assert all((job_it.iteration == iteration, job_it.name == '{}:{}'.format(jname, iteration),
                        job_it.total_cores == resources.nodes[0].total, len(job_it.nodes) == 1)), str(job_it)

        assert sum([len(child.nodes) for child in jinfo.childs]) == its
    finally:
        if m:
            m.finish()
            # m.stopManager()
            m.cleanup()

        rmtree(tmpdir)
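The 'split-into' behavior used above can likewise be sketched with a tiny standalone helper. Note this is an illustration under assumptions: `split_into` is a made-up name, not the qcg-pilotjob implementation — the available nodes are divided into `parts` equal partitions, and each iteration is placed in its own partition.

```python
# NOTE: illustration only; this helper is NOT the qcg-pilotjob scheduler.
def split_into(nodes, parts):
    """Split the available nodes into `parts` equal, consecutive partitions."""
    size = len(nodes) // parts
    return [nodes[i * size:(i + 1) * size] for i in range(parts)]

# two iterations on a two-node allocation -> one node per partition,
# so each iteration executes on a different node, as the tests expect
partitions = split_into(['node0', 'node1'], parts=2)
```

With `parts` defaulting to the iteration count (the 'host' job) or set explicitly to 2 (the 'host2' job), both cases reduce to the same two-partition layout.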
def test_slurmenv_api_cancel_nl():
    if not in_slurm_allocation() or get_num_slurm_nodes() < 2:
        pytest.skip('test not run in slurm allocation or allocation is smaller than 2 nodes')

    resources, allocation = get_slurm_resources_binded()
    set_pythonpath_to_qcg_module()
    tmpdir = str(tempfile.mkdtemp(dir=SHARED_PATH))
    print(f'tmpdir: {tmpdir}')

    try:
        m = LocalManager(['--log', 'debug', '--wd', tmpdir, '--report-format', 'json'], {'wdir': str(tmpdir)})

        iters = 10
        ids = m.submit(Jobs().
                       add(exec='/bin/sleep', args=['5s'], iteration=iters, stdout='sleep.out.${it}',
                           stderr='sleep.err.${it}', numCores=1)
                       )
        jid = ids[0]
        assert len(m.list()) == 1

        list_jid = list(m.list().keys())[0]
        assert list_jid == jid

        # wait for the job to start executing
        sleep(2)

        m.cancel([jid])
        m.wait4(m.list())

        jinfos = m.info_parsed(ids, withChilds=True)
        assert all((len(jinfos) == 1, jid in jinfos, jinfos[jid].status == 'CANCELED'))

        # the canceled iterations are counted in the 'failed' entry of the job statistics;
        # the cancel status is presented in the 'childs/state' entry
        assert all((jinfos[jid].iterations, jinfos[jid].iterations.get('start', -1) == 0,
                    jinfos[jid].iterations.get('stop', 0) == iters, jinfos[jid].iterations.get('total', 0) == iters,
                    jinfos[jid].iterations.get('finished', 0) == iters, jinfos[jid].iterations.get('failed', -1) == iters))
        assert len(jinfos[jid].childs) == iters

        for iteration in range(iters):
            job_it = jinfos[jid].childs[iteration]
            assert all((job_it.iteration == iteration, job_it.name == '{}:{}'.format(jid, iteration),
                        job_it.status == 'CANCELED')), str(job_it)

        m.remove(jid)
    finally:
        m.finish()
        m.cleanup()
        rmtree(tmpdir)
def test_slurmenv_api_cancel_kill_nl():
    if not in_slurm_allocation() or get_num_slurm_nodes() < 2:
        pytest.skip('test not run in slurm allocation or allocation is smaller than 2 nodes')

    resources, allocation = get_slurm_resources_binded()
    set_pythonpath_to_qcg_module()
    tmpdir = str(tempfile.mkdtemp(dir=SHARED_PATH))
    print(f'tmpdir: {tmpdir}')

    try:
        m = LocalManager(['--log', 'debug', '--wd', tmpdir, '--report-format', 'json'], {'wdir': str(tmpdir)})

        iters = 10
        ids = m.submit(Jobs().
                       add(script='trap "" SIGTERM; sleep 30s', iteration=iters, stdout='sleep.out.${it}',
                           stderr='sleep.err.${it}', numCores=1)
                       )
        jid = ids[0]
        assert len(m.list()) == 1

        list_jid = list(m.list().keys())[0]
        assert list_jid == jid

        # wait for the job to start executing
        sleep(2)

        m.cancel([jid])

        # wait for the SIGTERM cancel attempt (the script traps SIGTERM, so the job survives it)
        sleep(2)
        jinfos = m.info_parsed(ids)
        assert all((len(jinfos) == 1, jid in jinfos, jinfos[jid].status == 'QUEUED'))

        # wait for the SIGKILL job cancel (~ExecutionJob.SIG_KILL_TIMEOUT)
        sleep(ExecutionJob.SIG_KILL_TIMEOUT)
        jinfos = m.info_parsed(ids, withChilds=True)
        assert all((len(jinfos) == 1, jid in jinfos, jinfos[jid].status == 'CANCELED'))

        # the canceled iterations are counted in the 'failed' entry of the job statistics;
        # the cancel status is presented in the 'childs/state' entry
        assert all((jinfos[jid].iterations, jinfos[jid].iterations.get('start', -1) == 0,
                    jinfos[jid].iterations.get('stop', 0) == iters, jinfos[jid].iterations.get('total', 0) == iters,
                    jinfos[jid].iterations.get('finished', 0) == iters, jinfos[jid].iterations.get('failed', -1) == iters))
        assert len(jinfos[jid].childs) == iters

        for iteration in range(iters):
            job_it = jinfos[jid].childs[iteration]
            assert all((job_it.iteration == iteration, job_it.name == '{}:{}'.format(jid, iteration),
                        job_it.status == 'CANCELED')), str(job_it)

        m.remove(jid)
    finally:
        m.finish()
        m.cleanup()
        # rmtree(tmpdir)
| 46.462061 | 123 | 0.546141 | 4,641 | 41,026 | 4.694462 | 0.063779 | 0.023638 | 0.061964 | 0.039244 | 0.901776 | 0.890531 | 0.884105 | 0.874834 | 0.87286 | 0.867031 | 0 | 0.011655 | 0.311924 | 41,026 | 882 | 124 | 46.514739 | 0.76014 | 0.093989 | 0 | 0.866667 | 0 | 0 | 0.134821 | 0 | 0 | 0 | 0 | 0.001134 | 0.161481 | 1 | 0.017778 | false | 0 | 0.020741 | 0 | 0.038519 | 0.022222 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
8e2536d12b670785253e2e5db899171bc928a8ec | 48 | py | Python | applied/tasks/relex/models/__init__.py | ndoll1998/AppliedTransformers | 76cbdef6fdd765b2178af71038a61e3e71e0cec9 | [
"MIT"
] | 3 | 2020-09-02T03:51:49.000Z | 2020-09-18T14:09:48.000Z | applied/tasks/relex/models/__init__.py | ndoll1998/AppliedTransformers | 76cbdef6fdd765b2178af71038a61e3e71e0cec9 | [
"MIT"
] | null | null | null | applied/tasks/relex/models/__init__.py | ndoll1998/AppliedTransformers | 76cbdef6fdd765b2178af71038a61e3e71e0cec9 | [
"MIT"
] | 2 | 2021-01-30T12:37:43.000Z | 2021-05-19T06:29:31.000Z | from .matchingTheBlanks import MatchingTheBlanks | 48 | 48 | 0.916667 | 4 | 48 | 11 | 0.75 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 48 | 1 | 48 | 48 | 0.977778 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
8e4bed67f87679b4b5032080c8e7c20aae1e5cce | 114 | py | Python | dosing_rl_gym/envs/__init__.py | strongio/dosing-rl-gym | e9f0553080830dc621e97e0652c68b86788b7296 | [
"MIT"
] | 6 | 2020-01-30T11:31:53.000Z | 2021-12-02T10:35:27.000Z | dosing_rl_gym/envs/__init__.py | strongio/dosing-rl-gym | e9f0553080830dc621e97e0652c68b86788b7296 | [
"MIT"
] | null | null | null | dosing_rl_gym/envs/__init__.py | strongio/dosing-rl-gym | e9f0553080830dc621e97e0652c68b86788b7296 | [
"MIT"
] | 3 | 2019-11-13T15:56:14.000Z | 2021-04-12T07:20:23.000Z | from dosing_rl_gym.envs.diabetic_env import Diabetic0Env
from dosing_rl_gym.envs.diabetic_env import Diabetic1Env
| 38 | 56 | 0.894737 | 18 | 114 | 5.333333 | 0.555556 | 0.208333 | 0.25 | 0.3125 | 0.75 | 0.75 | 0.75 | 0.75 | 0 | 0 | 0 | 0.018868 | 0.070175 | 114 | 2 | 57 | 57 | 0.886792 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 10 |
f3d6bafb01d9bc1b377d866b7e512b4a6f375db5 | 7,875 | py | Python | api/customer.py | ThinkmanWang/NotesServer | 86a1f7f56b30f94aaccd3d70941e3873cc1713e2 | [
"Apache-2.0"
] | null | null | null | api/customer.py | ThinkmanWang/NotesServer | 86a1f7f56b30f94aaccd3d70941e3873cc1713e2 | [
"Apache-2.0"
] | 1 | 2021-06-01T21:40:51.000Z | 2021-06-01T21:40:51.000Z | api/customer.py | ThinkmanWang/NotesServer | 86a1f7f56b30f94aaccd3d70941e3873cc1713e2 | [
"Apache-2.0"
] | null | null | null |
import sys
import os

sys.path.append(os.path.join(os.path.dirname(__file__), '..', 'models'))
sys.path.append(os.path.join(os.path.dirname(__file__), '..', 'utils'))

from imp import reload

import MySQLdb
import json
import hashlib
import time
import uuid

from flask import Flask, render_template, request, redirect, url_for, send_from_directory
from flask import Blueprint
from werkzeug.utils import secure_filename

from utils.mysql_python import MysqlPython
from utils.object2json import obj2json
from utils.user_db_utils import *
from utils.customer_db_utils import *
from models.RetModel import RetModel
from models.Customer import Customer
from user import *
from error_code import *

customer_api = Blueprint('customer_api', __name__)


# For Customer
@customer_api.route("/api/get_customer_list", methods=['POST', 'GET'])
def get_customer_list():
    if request.method == 'GET':
        return obj2json(RetModel(1, dict_err_code[1], {}))

    if request.form.get('uid', None) is None or request.form.get('token', None) is None:
        return obj2json(RetModel(21, dict_err_code[21]))

    if not verify_user_token(request.form['uid'], request.form['token']):
        return obj2json(RetModel(21, dict_err_code[21], {}))

    if request.form.get('member_uid', None) is not None:
        lstCustomer = select_customer_list(request.form['member_uid'], request.form.get('type', '0'))
        szRet = obj2json(RetModel(0, dict_err_code[0], lstCustomer))
        return szRet
    else:
        lstCustomer = select_customer_list(request.form['uid'], request.form.get('type', '0'))
        szRet = obj2json(RetModel(0, dict_err_code[0], lstCustomer))
        return szRet


@customer_api.route("/api/get_customer", methods=['POST', 'GET'])
def get_customer():
    if request.method == 'GET':
        return obj2json(RetModel(1, dict_err_code[1], {}))

    if request.form.get('uid', None) is None or request.form.get('token', None) is None:
        return obj2json(RetModel(21, dict_err_code[21]))

    if not verify_user_token(request.form['uid'], request.form['token']):
        return obj2json(RetModel(21, dict_err_code[21], {}))

    if request.form.get('id', None) is None:
        return obj2json(RetModel(31, dict_err_code[31], {}))

    customer = select_customer(request.form['uid'], request.form['id'])
    if customer is None:
        szRet = obj2json(RetModel(30, dict_err_code[30], {}))
    else:
        szRet = obj2json(RetModel(0, dict_err_code[0], customer))
    return szRet


@customer_api.route("/api/add_customer", methods=['POST', 'GET'])
def add_customer():
    if request.method == 'GET':
        return obj2json(RetModel(1, dict_err_code[1], {}))

    if request.form.get('uid', None) is None or request.form.get('token', None) is None:
        return obj2json(RetModel(21, dict_err_code[21]))

    if not verify_user_token(request.form['uid'], request.form['token']):
        return obj2json(RetModel(21, dict_err_code[21], {}))

    if request.form.get('id', None) is None:
        return obj2json(RetModel(31, dict_err_code[31], {}))

    if request.form.get('name', None) is None:
        return obj2json(RetModel(32, dict_err_code[32], {}))

    if request.form.get('address', None) is None:
        return obj2json(RetModel(33, dict_err_code[33], {}))

    if request.form.get('longitude', None) is None:
        return obj2json(RetModel(34, dict_err_code[34], {}))

    if request.form.get('latitude', None) is None:
        return obj2json(RetModel(35, dict_err_code[35], {}))

    customer = Customer()
    customer.id = request.form['id']
    customer.uid = request.form['uid']
    customer.name = request.form['name']
    customer.group_name = request.form.get('group_name', '')
    customer.spell = request.form.get('spell', '')
    customer.address = request.form['address']
    customer.longitude = request.form['longitude']
    customer.latitude = request.form['latitude']
    customer.boss = request.form.get('boss', '')
    customer.phone = request.form.get('phone', '')
    customer.email = request.form.get('email', '')
    customer.description = request.form.get('description', '')
    customer.update_date = request.form.get('update_date', int(time.time()))

    if insert_customer(request.form['uid'], customer):
        szRet = obj2json(RetModel(0, dict_err_code[0], {}))
    else:
        szRet = obj2json(RetModel(1000, dict_err_code[1000], {}))
    return szRet


@customer_api.route("/api/update_customer", methods=['POST', 'GET'])
def update_customer():
    if request.method == 'GET':
        return obj2json(RetModel(1, dict_err_code[1], {}))

    if request.form.get('uid', None) is None or request.form.get('token', None) is None:
        return obj2json(RetModel(21, dict_err_code[21]))

    if not verify_user_token(request.form['uid'], request.form['token']):
        return obj2json(RetModel(21, dict_err_code[21], {}))

    if request.form.get('id', None) is None:
        return obj2json(RetModel(31, dict_err_code[31], {}))

    if request.form.get('name', None) is None:
        return obj2json(RetModel(32, dict_err_code[32], {}))

    if request.form.get('address', None) is None:
        return obj2json(RetModel(33, dict_err_code[33], {}))

    if request.form.get('longitude', None) is None:
        return obj2json(RetModel(34, dict_err_code[34], {}))

    if request.form.get('latitude', None) is None:
        return obj2json(RetModel(35, dict_err_code[35], {}))

    customer = Customer()
    customer.id = request.form['id']
    customer.uid = request.form['uid']
    customer.name = request.form['name']
    customer.group_name = request.form.get('group_name', '')
    customer.spell = request.form.get('spell', '')
    customer.address = request.form['address']
    customer.longitude = request.form['longitude']
    customer.latitude = request.form['latitude']
    customer.boss = request.form.get('boss', '')
    customer.phone = request.form.get('phone', '')
    customer.email = request.form.get('email', '')
    customer.description = request.form.get('description', '')
    customer.update_date = request.form.get('update_date', int(time.time()))

    if not if_customer_exists(customer):
        szRet = obj2json(RetModel(30, dict_err_code[30], {}))
    else:
        if update_customer_info(request.form['uid'], customer):
            szRet = obj2json(RetModel(0, dict_err_code[0], {}))
        else:
            szRet = obj2json(RetModel(1000, dict_err_code[1000], {}))
    return szRet


@customer_api.route("/api/delete_customer", methods=['POST', 'GET'])
def delete_customer():
    if request.method == 'GET':
        return obj2json(RetModel(1, dict_err_code[1], {}))

    if request.form.get('uid', None) is None or request.form.get('token', None) is None:
        return obj2json(RetModel(21, dict_err_code[21], {}))

    if not verify_user_token(request.form['uid'], request.form['token']):
        return obj2json(RetModel(21, dict_err_code[21], {}))

    # not implemented: always responds with error code 1024
    szRet = obj2json(RetModel(1024, dict_err_code[1024], {}))
    return szRet
| 39.572864 | 102 | 0.62781 | 996 | 7,875 | 4.809237 | 0.100402 | 0.151566 | 0.111065 | 0.056785 | 0.855115 | 0.834238 | 0.788727 | 0.788727 | 0.781628 | 0.764927 | 0 | 0.02884 | 0.216254 | 7,875 | 198 | 103 | 39.772727 | 0.747246 | 0.001524 | 0 | 0.722973 | 0 | 0 | 0.070739 | 0.002871 | 0 | 0 | 0 | 0 | 0 | 1 | 0.033784 | false | 0 | 0.148649 | 0 | 0.398649 | 0.013514 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
6d0d98865c09bb327eb8efab0bcd99cd0fbbe2e7 | 68,573 | py | Python | benchmarks/SimResults/_bigLittle_hrrs_spec_tugberk_ml/SystemIPC_2/EightThreads_calculix/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | benchmarks/SimResults/_bigLittle_hrrs_spec_tugberk_ml/SystemIPC_2/EightThreads_calculix/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | benchmarks/SimResults/_bigLittle_hrrs_spec_tugberk_ml/SystemIPC_2/EightThreads_calculix/power.py | TugberkArkose/MLScheduler | e493b6cbf7b9d29a2c9300d7dd6f0c2f102e4061 | [
"Unlicense"
] | null | null | null | power = {'BUSES': {'Area': 1.33155,
'Bus/Area': 1.33155,
'Bus/Gate Leakage': 0.00662954,
'Bus/Peak Dynamic': 0.0,
'Bus/Runtime Dynamic': 0.0,
'Bus/Subthreshold Leakage': 0.0691322,
'Bus/Subthreshold Leakage with power gating': 0.0259246,
'Gate Leakage': 0.00662954,
'Peak Dynamic': 0.0,
'Runtime Dynamic': 0.0,
'Subthreshold Leakage': 0.0691322,
'Subthreshold Leakage with power gating': 0.0259246},
'Core': [{'Area': 32.6082,
'Execution Unit/Area': 8.2042,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.437307,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.546169,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 2.35921,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.122718,
'Execution Unit/Instruction Scheduler/Area': 2.17927,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.328073,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.00115349,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.20978,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.745379,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.017004,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00962066,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00730101,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 1.00996,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00529112,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 2.07911,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 1.29073,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0800117,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0455351,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 4.84781,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.841232,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.000856399,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.55892,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.740268,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.0178624,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00897339,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 2.77637,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.114878,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.0641291,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.375075,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 9.86321,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.445706,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0270206,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.35921,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.199834,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.804916,
'Execution Unit/Register Files/Runtime Dynamic': 0.226854,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0442632,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00607074,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.987804,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 1.95506,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.0920413,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0345155,
'Execution Unit/Runtime Dynamic': 5.90983,
'Execution Unit/Subthreshold Leakage': 1.83518,
'Execution Unit/Subthreshold Leakage with power gating': 0.709678,
'Gate Leakage': 0.372997,
'Instruction Fetch Unit/Area': 5.86007,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.00166967,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.00166967,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.00144369,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000553083,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00287063,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00765365,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.0163871,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0590479,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.192105,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.43323,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.496045,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.652476,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.96874,
'Instruction Fetch Unit/Runtime Dynamic': 1.36467,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932587,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.408542,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.130006,
'L2/Runtime Dynamic': 0.0121988,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80969,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 4.79701,
'Load Store Unit/Data Cache/Runtime Dynamic': 1.71602,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0351387,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.115171,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.115171,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 5.34308,
'Load Store Unit/Runtime Dynamic': 2.39917,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.283991,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.567983,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591622,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283406,
'Memory Management Unit/Area': 0.434579,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.100789,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.102727,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00813591,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.399995,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0813638,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.732739,
'Memory Management Unit/Runtime Dynamic': 0.184091,
'Memory Management Unit/Subthreshold Leakage': 0.0769113,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0399462,
'Peak Dynamic': 29.5995,
'Renaming Unit/Area': 0.369768,
'Renaming Unit/FP Front End RAT/Area': 0.168486,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00489731,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 3.33511,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 1.55497,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0437281,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.024925,
'Renaming Unit/Free List/Area': 0.0414755,
'Renaming Unit/Free List/Gate Leakage': 4.15911e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0401324,
'Renaming Unit/Free List/Runtime Dynamic': 0.0568259,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000670426,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000377987,
'Renaming Unit/Gate Leakage': 0.00863632,
'Renaming Unit/Int Front End RAT/Area': 0.114751,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.00038343,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.86945,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.35097,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00611897,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00348781,
'Renaming Unit/Peak Dynamic': 4.56169,
'Renaming Unit/Runtime Dynamic': 1.96276,
'Renaming Unit/Subthreshold Leakage': 0.070483,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0362779,
'Runtime Dynamic': 11.8327,
'Subthreshold Leakage': 6.21877,
'Subthreshold Leakage with power gating': 2.58311},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.24865,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.397989,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.3425,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.336942,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.543475,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.274328,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.15475,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.179539,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 6.53994,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.253627,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0141329,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.1953,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.104521,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.448927,
'Execution Unit/Register Files/Runtime Dynamic': 0.118654,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.473619,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.914929,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 2.9917,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.000740569,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.000740569,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000643871,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000248617,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00150146,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00362646,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00714208,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.100479,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.39132,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.242371,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.341272,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.92002,
'Instruction Fetch Unit/Runtime Dynamic': 0.69489,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0648988,
'L2/Runtime Dynamic': 0.00292717,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.90684,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.804424,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0540193,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0540193,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.16193,
'Load Store Unit/Runtime Dynamic': 1.12485,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.133202,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.266405,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.047274,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0482414,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.397389,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0397543,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.634706,
'Memory Management Unit/Runtime Dynamic': 0.0879957,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 22.911,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.667177,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0233213,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.155251,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.845749,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 5.74811,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.24413,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.394439,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.3179,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.334409,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.539389,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.272265,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.14606,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.180414,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 6.49841,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.248979,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0140266,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.192847,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.103735,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.441826,
'Execution Unit/Register Files/Runtime Dynamic': 0.117762,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.467321,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.907757,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 2.9714,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.000748629,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.000748629,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000650741,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000251194,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00149017,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.00363817,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00722474,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.0997235,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.34327,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.242952,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.338706,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.86964,
'Instruction Fetch Unit/Runtime Dynamic': 0.692245,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0647766,
'L2/Runtime Dynamic': 0.00269142,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.92854,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.81425,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0547214,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0547215,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.18694,
'Load Store Unit/Runtime Dynamic': 1.13884,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.134934,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.269868,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0478884,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0488539,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.394401,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0398502,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.632774,
'Memory Management Unit/Runtime Dynamic': 0.0887042,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 22.842,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.65495,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0230582,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.154315,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.832323,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 5.7262,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328},
{'Area': 32.0201,
'Execution Unit/Area': 7.68434,
'Execution Unit/Complex ALUs/Area': 0.235435,
'Execution Unit/Complex ALUs/Gate Leakage': 0.0132646,
'Execution Unit/Complex ALUs/Peak Dynamic': 0.25018,
'Execution Unit/Complex ALUs/Runtime Dynamic': 0.399191,
'Execution Unit/Complex ALUs/Subthreshold Leakage': 0.20111,
'Execution Unit/Complex ALUs/Subthreshold Leakage with power gating': 0.0754163,
'Execution Unit/Floating Point Units/Area': 4.6585,
'Execution Unit/Floating Point Units/Gate Leakage': 0.0656156,
'Execution Unit/Floating Point Units/Peak Dynamic': 1.35064,
'Execution Unit/Floating Point Units/Runtime Dynamic': 0.304033,
'Execution Unit/Floating Point Units/Subthreshold Leakage': 0.994829,
'Execution Unit/Floating Point Units/Subthreshold Leakage with power gating': 0.373061,
'Execution Unit/Gate Leakage': 0.120359,
'Execution Unit/Instruction Scheduler/Area': 1.66526,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Area': 0.275653,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Gate Leakage': 0.000977433,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Peak Dynamic': 1.04181,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Runtime Dynamic': 0.338243,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage': 0.0143453,
'Execution Unit/Instruction Scheduler/FP Instruction Window/Subthreshold Leakage with power gating': 0.00810519,
'Execution Unit/Instruction Scheduler/Gate Leakage': 0.00568913,
'Execution Unit/Instruction Scheduler/Instruction Window/Area': 0.805223,
'Execution Unit/Instruction Scheduler/Instruction Window/Gate Leakage': 0.00414562,
'Execution Unit/Instruction Scheduler/Instruction Window/Peak Dynamic': 1.6763,
'Execution Unit/Instruction Scheduler/Instruction Window/Runtime Dynamic': 0.545573,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage': 0.0625755,
'Execution Unit/Instruction Scheduler/Instruction Window/Subthreshold Leakage with power gating': 0.0355964,
'Execution Unit/Instruction Scheduler/Peak Dynamic': 3.82262,
'Execution Unit/Instruction Scheduler/ROB/Area': 0.584388,
'Execution Unit/Instruction Scheduler/ROB/Gate Leakage': 0.00056608,
'Execution Unit/Instruction Scheduler/ROB/Peak Dynamic': 1.10451,
'Execution Unit/Instruction Scheduler/ROB/Runtime Dynamic': 0.275387,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage': 0.00906853,
'Execution Unit/Instruction Scheduler/ROB/Subthreshold Leakage with power gating': 0.00364446,
'Execution Unit/Instruction Scheduler/Runtime Dynamic': 1.1592,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage': 0.0859892,
'Execution Unit/Instruction Scheduler/Subthreshold Leakage with power gating': 0.047346,
'Execution Unit/Integer ALUs/Area': 0.47087,
'Execution Unit/Integer ALUs/Gate Leakage': 0.0265291,
'Execution Unit/Integer ALUs/Peak Dynamic': 0.179778,
'Execution Unit/Integer ALUs/Runtime Dynamic': 0.101344,
'Execution Unit/Integer ALUs/Subthreshold Leakage': 0.40222,
'Execution Unit/Integer ALUs/Subthreshold Leakage with power gating': 0.150833,
'Execution Unit/Peak Dynamic': 6.55482,
'Execution Unit/Register Files/Area': 0.570804,
'Execution Unit/Register Files/Floating Point RF/Area': 0.208131,
'Execution Unit/Register Files/Floating Point RF/Gate Leakage': 0.000232788,
'Execution Unit/Register Files/Floating Point RF/Peak Dynamic': 0.255164,
'Execution Unit/Register Files/Floating Point RF/Runtime Dynamic': 0.0141874,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage': 0.00399698,
'Execution Unit/Register Files/Floating Point RF/Subthreshold Leakage with power gating': 0.00176968,
'Execution Unit/Register Files/Gate Leakage': 0.000622708,
'Execution Unit/Register Files/Integer RF/Area': 0.362673,
'Execution Unit/Register Files/Integer RF/Gate Leakage': 0.00038992,
'Execution Unit/Register Files/Integer RF/Peak Dynamic': 0.196273,
'Execution Unit/Register Files/Integer RF/Runtime Dynamic': 0.104925,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage': 0.00614175,
'Execution Unit/Register Files/Integer RF/Subthreshold Leakage with power gating': 0.00246675,
'Execution Unit/Register Files/Peak Dynamic': 0.451437,
'Execution Unit/Register Files/Runtime Dynamic': 0.119112,
'Execution Unit/Register Files/Subthreshold Leakage': 0.0101387,
'Execution Unit/Register Files/Subthreshold Leakage with power gating': 0.00423643,
'Execution Unit/Results Broadcast Bus/Area Overhead': 0.0390912,
'Execution Unit/Results Broadcast Bus/Gate Leakage': 0.00537402,
'Execution Unit/Results Broadcast Bus/Peak Dynamic': 0.47605,
'Execution Unit/Results Broadcast Bus/Runtime Dynamic': 0.919194,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage': 0.081478,
'Execution Unit/Results Broadcast Bus/Subthreshold Leakage with power gating': 0.0305543,
'Execution Unit/Runtime Dynamic': 3.00208,
'Execution Unit/Subthreshold Leakage': 1.79543,
'Execution Unit/Subthreshold Leakage with power gating': 0.688821,
'Gate Leakage': 0.368936,
'Instruction Fetch Unit/Area': 5.85939,
'Instruction Fetch Unit/Branch Predictor/Area': 0.138516,
'Instruction Fetch Unit/Branch Predictor/Chooser/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Chooser/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Chooser/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Chooser/Runtime Dynamic': 0.000737641,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Chooser/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/Gate Leakage': 0.000757657,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Area': 0.0435221,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Gate Leakage': 0.000278362,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Peak Dynamic': 0.0168831,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Runtime Dynamic': 0.000737641,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage': 0.00759719,
'Instruction Fetch Unit/Branch Predictor/Global Predictor/Subthreshold Leakage with power gating': 0.0039236,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Area': 0.0257064,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Gate Leakage': 0.000154548,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Peak Dynamic': 0.0142575,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Runtime Dynamic': 0.000641271,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage': 0.00384344,
'Instruction Fetch Unit/Branch Predictor/L1_Local Predictor/Subthreshold Leakage with power gating': 0.00198631,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Area': 0.0151917,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Gate Leakage': 8.00196e-05,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Peak Dynamic': 0.00527447,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Runtime Dynamic': 0.000247583,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage': 0.00181347,
'Instruction Fetch Unit/Branch Predictor/L2_Local Predictor/Subthreshold Leakage with power gating': 0.000957045,
'Instruction Fetch Unit/Branch Predictor/Peak Dynamic': 0.0597838,
'Instruction Fetch Unit/Branch Predictor/RAS/Area': 0.0105732,
'Instruction Fetch Unit/Branch Predictor/RAS/Gate Leakage': 4.63858e-05,
'Instruction Fetch Unit/Branch Predictor/RAS/Peak Dynamic': 0.0117602,
'Instruction Fetch Unit/Branch Predictor/RAS/Runtime Dynamic': 0.00150725,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage': 0.000932505,
'Instruction Fetch Unit/Branch Predictor/RAS/Subthreshold Leakage with power gating': 0.000494733,
'Instruction Fetch Unit/Branch Predictor/Runtime Dynamic': 0.0036238,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage': 0.0199703,
'Instruction Fetch Unit/Branch Predictor/Subthreshold Leakage with power gating': 0.0103282,
'Instruction Fetch Unit/Branch Target Buffer/Area': 0.64954,
'Instruction Fetch Unit/Branch Target Buffer/Gate Leakage': 0.00272758,
'Instruction Fetch Unit/Branch Target Buffer/Peak Dynamic': 0.177867,
'Instruction Fetch Unit/Branch Target Buffer/Runtime Dynamic': 0.00711579,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage': 0.0811682,
'Instruction Fetch Unit/Branch Target Buffer/Subthreshold Leakage with power gating': 0.0435357,
'Instruction Fetch Unit/Gate Leakage': 0.0589979,
'Instruction Fetch Unit/Instruction Buffer/Area': 0.0226323,
'Instruction Fetch Unit/Instruction Buffer/Gate Leakage': 6.83558e-05,
'Instruction Fetch Unit/Instruction Buffer/Peak Dynamic': 0.606827,
'Instruction Fetch Unit/Instruction Buffer/Runtime Dynamic': 0.100867,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage': 0.00151885,
'Instruction Fetch Unit/Instruction Buffer/Subthreshold Leakage with power gating': 0.000701682,
'Instruction Fetch Unit/Instruction Cache/Area': 3.14635,
'Instruction Fetch Unit/Instruction Cache/Gate Leakage': 0.029931,
'Instruction Fetch Unit/Instruction Cache/Peak Dynamic': 6.41599,
'Instruction Fetch Unit/Instruction Cache/Runtime Dynamic': 0.242911,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage': 0.367022,
'Instruction Fetch Unit/Instruction Cache/Subthreshold Leakage with power gating': 0.180386,
'Instruction Fetch Unit/Instruction Decoder/Area': 1.85799,
'Instruction Fetch Unit/Instruction Decoder/Gate Leakage': 0.0222493,
'Instruction Fetch Unit/Instruction Decoder/Peak Dynamic': 1.37404,
'Instruction Fetch Unit/Instruction Decoder/Runtime Dynamic': 0.342589,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage': 0.442943,
'Instruction Fetch Unit/Instruction Decoder/Subthreshold Leakage with power gating': 0.166104,
'Instruction Fetch Unit/Peak Dynamic': 8.94589,
'Instruction Fetch Unit/Runtime Dynamic': 0.697106,
'Instruction Fetch Unit/Subthreshold Leakage': 0.932286,
'Instruction Fetch Unit/Subthreshold Leakage with power gating': 0.40843,
'L2/Area': 4.53318,
'L2/Gate Leakage': 0.015464,
'L2/Peak Dynamic': 0.0649039,
'L2/Runtime Dynamic': 0.00266718,
'L2/Subthreshold Leakage': 0.834142,
'L2/Subthreshold Leakage with power gating': 0.401066,
'Load Store Unit/Area': 8.80901,
'Load Store Unit/Data Cache/Area': 6.84535,
'Load Store Unit/Data Cache/Gate Leakage': 0.0279261,
'Load Store Unit/Data Cache/Peak Dynamic': 2.91102,
'Load Store Unit/Data Cache/Runtime Dynamic': 0.805993,
'Load Store Unit/Data Cache/Subthreshold Leakage': 0.527675,
'Load Store Unit/Data Cache/Subthreshold Leakage with power gating': 0.25085,
'Load Store Unit/Gate Leakage': 0.0350888,
'Load Store Unit/LoadQ/Area': 0.0836782,
'Load Store Unit/LoadQ/Gate Leakage': 0.00059896,
'Load Store Unit/LoadQ/Peak Dynamic': 0.0541545,
'Load Store Unit/LoadQ/Runtime Dynamic': 0.0541546,
'Load Store Unit/LoadQ/Subthreshold Leakage': 0.00941961,
'Load Store Unit/LoadQ/Subthreshold Leakage with power gating': 0.00536918,
'Load Store Unit/Peak Dynamic': 3.16675,
'Load Store Unit/Runtime Dynamic': 1.12722,
'Load Store Unit/StoreQ/Area': 0.322079,
'Load Store Unit/StoreQ/Gate Leakage': 0.00329971,
'Load Store Unit/StoreQ/Peak Dynamic': 0.133536,
'Load Store Unit/StoreQ/Runtime Dynamic': 0.267072,
'Load Store Unit/StoreQ/Subthreshold Leakage': 0.0345621,
'Load Store Unit/StoreQ/Subthreshold Leakage with power gating': 0.0197004,
'Load Store Unit/Subthreshold Leakage': 0.591321,
'Load Store Unit/Subthreshold Leakage with power gating': 0.283293,
'Memory Management Unit/Area': 0.4339,
'Memory Management Unit/Dtlb/Area': 0.0879726,
'Memory Management Unit/Dtlb/Gate Leakage': 0.00088729,
'Memory Management Unit/Dtlb/Peak Dynamic': 0.0473923,
'Memory Management Unit/Dtlb/Runtime Dynamic': 0.0483599,
'Memory Management Unit/Dtlb/Subthreshold Leakage': 0.0155699,
'Memory Management Unit/Dtlb/Subthreshold Leakage with power gating': 0.00887485,
'Memory Management Unit/Gate Leakage': 0.00808595,
'Memory Management Unit/Itlb/Area': 0.301552,
'Memory Management Unit/Itlb/Gate Leakage': 0.00393464,
'Memory Management Unit/Itlb/Peak Dynamic': 0.398923,
'Memory Management Unit/Itlb/Runtime Dynamic': 0.0398431,
'Memory Management Unit/Itlb/Subthreshold Leakage': 0.0413758,
'Memory Management Unit/Itlb/Subthreshold Leakage with power gating': 0.0235842,
'Memory Management Unit/Peak Dynamic': 0.636443,
'Memory Management Unit/Runtime Dynamic': 0.0882029,
'Memory Management Unit/Subthreshold Leakage': 0.0766103,
'Memory Management Unit/Subthreshold Leakage with power gating': 0.0398333,
'Peak Dynamic': 22.9583,
'Renaming Unit/Area': 0.303608,
'Renaming Unit/FP Front End RAT/Area': 0.131045,
'Renaming Unit/FP Front End RAT/Gate Leakage': 0.00351123,
'Renaming Unit/FP Front End RAT/Peak Dynamic': 2.51468,
'Renaming Unit/FP Front End RAT/Runtime Dynamic': 0.671221,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage': 0.0308571,
'Renaming Unit/FP Front End RAT/Subthreshold Leakage with power gating': 0.0175885,
'Renaming Unit/Free List/Area': 0.0340654,
'Renaming Unit/Free List/Gate Leakage': 2.5481e-05,
'Renaming Unit/Free List/Peak Dynamic': 0.0306032,
'Renaming Unit/Free List/Runtime Dynamic': 0.0234292,
'Renaming Unit/Free List/Subthreshold Leakage': 0.000370144,
'Renaming Unit/Free List/Subthreshold Leakage with power gating': 0.000201064,
'Renaming Unit/Gate Leakage': 0.00708398,
'Renaming Unit/Int Front End RAT/Area': 0.0941223,
'Renaming Unit/Int Front End RAT/Gate Leakage': 0.000283242,
'Renaming Unit/Int Front End RAT/Peak Dynamic': 0.731965,
'Renaming Unit/Int Front End RAT/Runtime Dynamic': 0.15581,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage': 0.00435488,
'Renaming Unit/Int Front End RAT/Subthreshold Leakage with power gating': 0.00248228,
'Renaming Unit/Peak Dynamic': 3.58947,
'Renaming Unit/Runtime Dynamic': 0.850461,
'Renaming Unit/Subthreshold Leakage': 0.0552466,
'Renaming Unit/Subthreshold Leakage with power gating': 0.0276461,
'Runtime Dynamic': 5.76773,
'Subthreshold Leakage': 6.16288,
'Subthreshold Leakage with power gating': 2.55328}],
'DRAM': {'Area': 0,
'Gate Leakage': 0,
'Peak Dynamic': 1.0350161315414401,
'Runtime Dynamic': 1.0350161315414401,
'Subthreshold Leakage': 4.252,
'Subthreshold Leakage with power gating': 4.252},
'L3': [{'Area': 61.9075,
'Gate Leakage': 0.0484137,
'Peak Dynamic': 0.572695,
'Runtime Dynamic': 0.330595,
'Subthreshold Leakage': 6.80085,
'Subthreshold Leakage with power gating': 3.32364}],
'Processor': {'Area': 191.908,
'Gate Leakage': 1.53485,
'Peak Dynamic': 98.8834,
'Peak Power': 131.996,
'Runtime Dynamic': 29.4054,
'Subthreshold Leakage': 31.5774,
'Subthreshold Leakage with power gating': 13.9484,
'Total Cores/Area': 128.669,
'Total Cores/Gate Leakage': 1.4798,
'Total Cores/Peak Dynamic': 98.3107,
'Total Cores/Runtime Dynamic': 29.0748,
'Total Cores/Subthreshold Leakage': 24.7074,
'Total Cores/Subthreshold Leakage with power gating': 10.2429,
'Total L3s/Area': 61.9075,
'Total L3s/Gate Leakage': 0.0484137,
'Total L3s/Peak Dynamic': 0.572695,
'Total L3s/Runtime Dynamic': 0.330595,
'Total L3s/Subthreshold Leakage': 6.80085,
'Total L3s/Subthreshold Leakage with power gating': 3.32364,
'Total Leakage': 33.1122,
'Total NoCs/Area': 1.33155,
'Total NoCs/Gate Leakage': 0.00662954,
'Total NoCs/Peak Dynamic': 0.0,
'Total NoCs/Runtime Dynamic': 0.0,
'Total NoCs/Subthreshold Leakage': 0.0691322,
'Total NoCs/Subthreshold Leakage with power gating': 0.0259246}}
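The slash-separated keys above ('Total Cores/Runtime Dynamic', 'Execution Unit/Integer ALUs/Area', and so on) encode a component hierarchy flattened into a single dict. A minimal sketch of unflattening such keys into nested dicts, using a small sample drawn from the 'Processor' values above rather than the full output:

```python
def unflatten(flat):
    """Turn {'A/B/C': v} pairs into nested dicts {'A': {'B': {'C': v}}}."""
    tree = {}
    for key, value in flat.items():
        node = tree
        parts = key.split('/')
        # Walk (or create) intermediate component nodes, then set the leaf.
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return tree

sample = {
    'Processor/Peak Dynamic': 98.8834,
    'Processor/Runtime Dynamic': 29.4054,
    'Processor/Total Cores/Runtime Dynamic': 29.0748,
}
nested = unflatten(sample)
```

This makes per-component aggregation (e.g. summing 'Runtime Dynamic' over a subtree) a straightforward recursive walk.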
6d251a5bb21c563c01c2684a367b5b479f5176a9 | 116 | py | Python | det3d/version.py | shovington/Det3D | 5de5bff96d64da1363e0caf0e273407da231e859 | ["Apache-2.0"]
# GENERATED VERSION FILE
# TIME: Wed Oct 28 08:52:06 2020
__version__ = '1.0.rc0+ae5a3ae'
short_version = '1.0.rc0'
edbbda057017bda93d9e8d89fa65b183d06ec920 | 93,406 | py | Python | com/vmware/content/library/item_client.py | vishal-12/vsphere-automation-sdk-python | 9cf363971db77ea5a12928eecd5cf5170a7fcd8a | ["MIT"]
# -*- coding: utf-8 -*-
#---------------------------------------------------------------------------
# Copyright 2019 VMware, Inc. All rights reserved.
# AUTO GENERATED FILE -- DO NOT MODIFY!
#
# vAPI stub file for package com.vmware.content.library.item.
#---------------------------------------------------------------------------
"""
The Content Library Item module provides classes and classes for managing files
in a library item.
"""
__author__ = 'VMware, Inc.'
__docformat__ = 'restructuredtext en'
import sys
from vmware.vapi.bindings import type
from vmware.vapi.bindings.converter import TypeConverter
from vmware.vapi.bindings.enum import Enum
from vmware.vapi.bindings.error import VapiError
from vmware.vapi.bindings.struct import VapiStruct
from vmware.vapi.bindings.stub import (
ApiInterfaceStub, StubFactoryBase, VapiInterface)
from vmware.vapi.bindings.common import raise_core_exception
from vmware.vapi.data.validator import (UnionValidator, HasFieldsOfValidator)
from vmware.vapi.exception import CoreException
from vmware.vapi.lib.constants import TaskType
from vmware.vapi.lib.rest import OperationRestMetadata
class TransferStatus(Enum):
"""
The ``TransferStatus`` class defines the transfer state of a file.
.. note::
This class represents an enumerated type in the interface language
definition. The class contains class attributes which represent the
values in the current version of the enumerated type. Newer versions of
the enumerated type may contain new values. To use new values of the
enumerated type in communication with a server that supports the newer
version of the API, you instantiate this class. See :ref:`enumerated
type description page <enumeration_description>`.
"""
WAITING_FOR_TRANSFER = None
"""
Indicates that a file has been defined for a library item and its content
needs to be uploaded.
"""
TRANSFERRING = None
"""
Indicates that data is being transferred to the file.
"""
READY = None
"""
Indicates that the file has been fully transferred and is ready to be used.
"""
VALIDATING = None
"""
Indicates that the file is being validated (checksum, type adapters).
"""
ERROR = None
"""
Indicates that there was an error transferring or validating the file.
"""
def __init__(self, string):
"""
:type string: :class:`str`
:param string: String value for the :class:`TransferStatus` instance.
"""
Enum.__init__(string)
TransferStatus._set_values([
TransferStatus('WAITING_FOR_TRANSFER'),
TransferStatus('TRANSFERRING'),
TransferStatus('READY'),
TransferStatus('VALIDATING'),
TransferStatus('ERROR'),
])
TransferStatus._set_binding_type(type.EnumType(
'com.vmware.content.library.item.transfer_status',
TransferStatus))
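# The docstring above describes the vAPI convention for forward-compatible
# enumerated types: known values are class attributes, and a value introduced
# by a newer server can still be represented by instantiating the class. A
# minimal self-contained sketch of that pattern (plain Python, not the
# vmware.vapi.bindings implementation, and `Status`/`PAUSED` are hypothetical):

```python
class ExtensibleEnum(str):
    """String-like enum: known values become class attributes, but any
    string can be wrapped, so newer server-side values never break clients."""
    @classmethod
    def _set_values(cls, values):
        for v in values:
            setattr(cls, str(v), v)

class Status(ExtensibleEnum):
    pass

Status._set_values([Status('READY'), Status('ERROR')])

known = Status.READY       # a value defined in this client's version
future = Status('PAUSED')  # a value a hypothetical newer API might return
```

Because instances compare equal to their string value, a client can match on the values it knows and pass unknown ones through unchanged.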
class DownloadSessionModel(VapiStruct):
"""
The ``DownloadSessionModel`` class provides information on an active
:class:`DownloadSession` resource.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
def __init__(self,
id=None,
library_item_id=None,
library_item_content_version=None,
error_message=None,
client_progress=None,
state=None,
expiration_time=None,
):
"""
:type id: :class:`str`
:param id: The identifier of this download session.
When clients pass a value of this class as a parameter, the
attribute must be an identifier for the resource type:
``com.vmware.content.library.item.DownloadSession``. When methods
return a value of this class as a return value, the attribute will
be an identifier for the resource type:
``com.vmware.content.library.item.DownloadSession``.
This attribute is not used for the ``create`` method. It will not
be present in the return value of the ``get`` or ``list`` methods.
It is not used for the ``update`` method.
:type library_item_id: :class:`str`
:param library_item_id: The identifier of the library item whose content is being
downloaded.
When clients pass a value of this class as a parameter, the
attribute must be an identifier for the resource type:
``com.vmware.content.library.Item``. When methods return a value of
this class as a return value, the attribute will be an identifier
for the resource type: ``com.vmware.content.library.Item``.
This attribute must be provided for the ``create`` method. It will
always be present in the return value of the ``get`` or ``list``
methods. It is not used for the ``update`` method.
:type library_item_content_version: :class:`str`
:param library_item_content_version: The content version of the library item whose content is being
downloaded. This value is the
:attr:`com.vmware.content.library_client.ItemModel.content_version`
at the time when the session is created for the library item.
This attribute is not used for the ``create`` method. It will
always be present in the return value of the ``get`` or ``list``
methods. It is not used for the ``update`` method.
:type error_message: :class:`com.vmware.vapi.std_client.LocalizableMessage`
:param error_message: If the session is in the :attr:`DownloadSessionModel.State.ERROR`
status this property will have more details about the error.
This attribute is not used for the ``create`` method. It is
optional in the return value of the ``get`` or ``list`` methods. It
is not used for the ``update`` method.
:type client_progress: :class:`long`
:param client_progress: The progress that has been made with the download. This property is
to be updated by the client during the download process to indicate
the progress of its work in completing the download. The initial
progress is 0 until updated by the client. The maximum value is
100, which indicates that the download is complete.
This attribute is not used for the ``create`` method. It will
always be present in the return value of the ``get`` or ``list``
methods. It is optional for the ``update`` method.
:type state: :class:`DownloadSessionModel.State`
:param state: The current state (ACTIVE, CANCELED, ERROR) of the download
session.
This attribute is not used for the ``create`` method. It will
always be present in the return value of the ``get`` or ``list``
methods. It is not used for the ``update`` method.
:type expiration_time: :class:`datetime.datetime`
:param expiration_time: Indicates the time after which the session will expire. The session
is guaranteed not to expire before this time.
This attribute is not used for the ``create`` method. It will
always be present in the return value of the ``get`` or ``list``
methods. It is not used for the ``update`` method.
"""
self.id = id
self.library_item_id = library_item_id
self.library_item_content_version = library_item_content_version
self.error_message = error_message
self.client_progress = client_progress
self.state = state
self.expiration_time = expiration_time
VapiStruct.__init__(self)
class State(Enum):
"""
The state of the download session.
.. note::
This class represents an enumerated type in the interface language
definition. The class contains class attributes which represent the
values in the current version of the enumerated type. Newer versions of
the enumerated type may contain new values. To use new values of the
enumerated type in communication with a server that supports the newer
version of the API, you instantiate this class. See :ref:`enumerated
type description page <enumeration_description>`.
"""
ACTIVE = None
"""
The session is active. Individual files may be in the process of being
transferred and may become ready for download at different times.
"""
CANCELED = None
"""
The session has been canceled. On-going downloads may fail. The session
will stay in this state until it is either deleted by the user or
automatically cleaned up by the Content Library Service.
"""
ERROR = None
"""
Indicates there was an error during the session lifecycle.
"""
def __init__(self, string):
"""
:type string: :class:`str`
:param string: String value for the :class:`State` instance.
"""
Enum.__init__(string)
State._set_values([
State('ACTIVE'),
State('CANCELED'),
State('ERROR'),
])
State._set_binding_type(type.EnumType(
'com.vmware.content.library.item.download_session_model.state',
State))
DownloadSessionModel._set_binding_type(type.StructType(
'com.vmware.content.library.item.download_session_model', {
'id': type.OptionalType(type.IdType()),
'library_item_id': type.OptionalType(type.IdType()),
'library_item_content_version': type.OptionalType(type.StringType()),
'error_message': type.OptionalType(type.ReferenceType('com.vmware.vapi.std_client', 'LocalizableMessage')),
'client_progress': type.OptionalType(type.IntegerType()),
'state': type.OptionalType(type.ReferenceType(__name__, 'DownloadSessionModel.State')),
'expiration_time': type.OptionalType(type.DateTimeType()),
},
DownloadSessionModel,
True,
["id"]))
class TransferEndpoint(VapiStruct):
"""
The ``TransferEndpoint`` class encapsulates a URI along with extra
information about it.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
def __init__(self,
uri=None,
ssl_certificate_thumbprint=None,
):
"""
:type uri: :class:`str`
:param uri: Transfer endpoint URI. The supported URI schemes are: ``http``,
``https``, ``file``, and ``ds``.
An endpoint URI with the ``ds`` scheme specifies the location of
the file on the datastore. The format of the datastore URI is:
* ds:///vmfs/volumes/uuid/path
Some examples of valid file URI formats are:
* file:///path
* file:///C:/path
* file://unc-server/path
When the transfer endpoint is a file or datastore location, the
server can import the file directly from the storage backing
without the overhead of streaming over HTTP.
:type ssl_certificate_thumbprint: :class:`str` or ``None``
:param ssl_certificate_thumbprint: Thumbprint of the expected SSL certificate for this endpoint. Only
used for HTTPS connections. The thumbprint is the SHA-1 hash of the
DER encoding of the remote endpoint's SSL certificate. If set, the
remote endpoint's SSL certificate is only accepted if it matches
this thumbprint, and no other certificate validation is performed.
If not specified, standard certificate validation is performed.
"""
self.uri = uri
self.ssl_certificate_thumbprint = ssl_certificate_thumbprint
VapiStruct.__init__(self)
TransferEndpoint._set_binding_type(type.StructType(
'com.vmware.content.library.item.transfer_endpoint', {
'uri': type.URIType(),
'ssl_certificate_thumbprint': type.OptionalType(type.StringType()),
},
TransferEndpoint,
False,
None))
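# The TransferEndpoint docstring above names four supported URI schemes
# (http, https, file, ds) and says the SSL thumbprint applies only to HTTPS
# connections. A small client-side validation sketch of those rules
# (`check_endpoint` is a hypothetical helper, not part of the SDK):

```python
from urllib.parse import urlparse

SUPPORTED_SCHEMES = {'http', 'https', 'file', 'ds'}

def check_endpoint(uri, ssl_thumbprint=None):
    """Validate a transfer-endpoint URI against the documented rules:
    only the four supported schemes are accepted, and an SSL certificate
    thumbprint is only meaningful for https endpoints."""
    scheme = urlparse(uri).scheme
    if scheme not in SUPPORTED_SCHEMES:
        raise ValueError('unsupported scheme: %r' % scheme)
    if ssl_thumbprint is not None and scheme != 'https':
        raise ValueError('thumbprint only applies to https endpoints')
    return scheme
```

This accepts the documented forms such as ``ds:///vmfs/volumes/uuid/path`` and ``file:///C:/path`` while rejecting anything outside the four schemes.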
class UpdateSessionModel(VapiStruct):
"""
The ``UpdateSessionModel`` class provides information on an active
:class:`UpdateSession` resource.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
_validator_list = [
UnionValidator(
'state',
{
'ACTIVE' : [('preview_info', False)],
'DONE' : [],
'ERROR' : [],
'CANCELED' : [],
}
),
]
def __init__(self,
id=None,
library_item_id=None,
library_item_content_version=None,
error_message=None,
client_progress=None,
state=None,
expiration_time=None,
preview_info=None,
warning_behavior=None,
):
"""
:type id: :class:`str`
:param id: The identifier of this update session.
When clients pass a value of this class as a parameter, the
attribute must be an identifier for the resource type:
``com.vmware.content.library.item.UpdateSession``. When methods
return a value of this class as a return value, the attribute will
be an identifier for the resource type:
``com.vmware.content.library.item.UpdateSession``.
This attribute is not used for the ``create`` method. It will not
be present in the return value of the ``get`` or ``list`` methods.
It is not used for the ``update`` method.
:type library_item_id: :class:`str`
:param library_item_id: The identifier of the library item to which content will be
uploaded or removed.
When clients pass a value of this class as a parameter, the
attribute must be an identifier for the resource type:
``com.vmware.content.library.Item``. When methods return a value of
this class as a return value, the attribute will be an identifier
for the resource type: ``com.vmware.content.library.Item``.
This attribute must be provided for the ``create`` method. It will
always be present in the return value of the ``get`` or ``list``
methods. It is not used for the ``update`` method.
:type library_item_content_version: :class:`str`
:param library_item_content_version: The content version of the library item whose content is being
modified. This value is the
:attr:`com.vmware.content.library_client.ItemModel.content_version`
at the time when the session is created for the library item.
This attribute is not used for the ``create`` method. It will
always be present in the return value of the ``get`` or ``list``
methods. It is not used for the ``update`` method.
:type error_message: :class:`com.vmware.vapi.std_client.LocalizableMessage`
:param error_message: If the session is in the :attr:`UpdateSessionModel.State.ERROR`
status this property will have more details about the error.
This attribute is not used for the ``create`` method. It is
optional in the return value of the ``get`` or ``list`` methods. It
is not used for the ``update`` method.
:type client_progress: :class:`long`
:param client_progress: The progress that has been made with the upload. This property is
to be updated by the client during the upload process to indicate
the progress of its work in completing the upload. The initial
progress is 0 until updated by the client. The maximum value is
100, which indicates that the update is complete.
This attribute is not used for the ``create`` method. It will
always be present in the return value of the ``get`` or ``list``
methods. It is not used for the ``update`` method.
:type state: :class:`UpdateSessionModel.State`
:param state: The current state (ACTIVE, DONE, ERROR, CANCELED) of the update
session. This attribute was added in vSphere API 6.8.
This attribute is not used for the ``create`` method. It will
always be present in the return value of the ``get`` or ``list``
methods. It is not used for the ``update`` method.
:type expiration_time: :class:`datetime.datetime`
:param expiration_time: Indicates the time after which the session will expire. The session
is guaranteed not to expire earlier than this time.
This attribute is not used for the ``create`` method. It will
always be present in the return value of the ``get`` or ``list``
methods. It is not used for the ``update`` method.
:type preview_info: :class:`com.vmware.content.library.item.updatesession_client.PreviewInfo`
:param preview_info: A preview of the files currently being uploaded in the session.
This property will be set only when the session is in the
:attr:`UpdateSessionModel.State.ACTIVE` state. This attribute was added
in vSphere API 6.8.
This attribute is optional and it is only relevant when the value
of ``state`` is :attr:`UpdateSessionModel.State.ACTIVE`.
:type warning_behavior: :class:`list` of :class:`com.vmware.content.library.item.updatesession_client.WarningBehavior`
:param warning_behavior: Indicates the update session behavior if warnings are raised in the
session preview. Any warning which is raised by session preview but
not ignored by the client will, by default, fail the update
session. This attribute was added in vSphere API 6.8.
This attribute is optional for the ``create`` method. It is
optional in the return value of the ``get`` or ``list`` methods. It
is optional for the ``update`` method.
"""
self.id = id
self.library_item_id = library_item_id
self.library_item_content_version = library_item_content_version
self.error_message = error_message
self.client_progress = client_progress
self.state = state
self.expiration_time = expiration_time
self.preview_info = preview_info
self.warning_behavior = warning_behavior
VapiStruct.__init__(self)
class State(Enum):
"""
The state of an update session.
.. note::
This class represents an enumerated type in the interface language
definition. The class contains class attributes which represent the
values in the current version of the enumerated type. Newer versions of
the enumerated type may contain new values. To use new values of the
enumerated type in communication with a server that supports the newer
version of the API, you instantiate this class. See :ref:`enumerated
type description page <enumeration_description>`.
"""
ACTIVE = None
"""
The session is currently active. This is the initial state when the session
is created. Files may be uploaded by the client or pulled by the Content
Library Service at this stage.
"""
DONE = None
"""
The session is done and all its effects are now visible.
"""
ERROR = None
"""
There was an error during the session.
"""
CANCELED = None
"""
The session has been canceled.
"""
def __init__(self, string):
"""
:type string: :class:`str`
:param string: String value for the :class:`State` instance.
"""
Enum.__init__(string)
State._set_values([
State('ACTIVE'),
State('DONE'),
State('ERROR'),
State('CANCELED'),
])
State._set_binding_type(type.EnumType(
'com.vmware.content.library.item.update_session_model.state',
State))
UpdateSessionModel._set_binding_type(type.StructType(
'com.vmware.content.library.item.update_session_model', {
'id': type.OptionalType(type.IdType()),
'library_item_id': type.OptionalType(type.IdType()),
'library_item_content_version': type.OptionalType(type.StringType()),
'error_message': type.OptionalType(type.ReferenceType('com.vmware.vapi.std_client', 'LocalizableMessage')),
'client_progress': type.OptionalType(type.IntegerType()),
'state': type.OptionalType(type.ReferenceType(__name__, 'UpdateSessionModel.State')),
'expiration_time': type.OptionalType(type.DateTimeType()),
'preview_info': type.OptionalType(type.ReferenceType('com.vmware.content.library.item.updatesession_client', 'PreviewInfo')),
'warning_behavior': type.OptionalType(type.ListType(type.ReferenceType('com.vmware.content.library.item.updatesession_client', 'WarningBehavior'))),
},
UpdateSessionModel,
True,
["id"]))
class DownloadSession(VapiInterface):
"""
The ``DownloadSession`` class manipulates download sessions, which are used
to download content from the Content Library Service.
A download session is an object that tracks the download of content (that
is, downloading content from the Content Library Service) and acts as a
lease to keep the download links available.
The :class:`com.vmware.content.library.item.downloadsession_client.File`
class provides access to the download links.
"""
RESOURCE_TYPE = "com.vmware.content.library.item.DownloadSession"
"""
Resource type for a download session.
"""
_VAPI_SERVICE_ID = 'com.vmware.content.library.item.download_session'
"""
Identifier of the service in canonical form.
"""
def __init__(self, config):
"""
:type config: :class:`vmware.vapi.bindings.stub.StubConfiguration`
:param config: Configuration to be used for creating the stub.
"""
VapiInterface.__init__(self, config, _DownloadSessionStub)
def create(self,
create_spec,
client_token=None,
):
"""
Creates a new download session.
:type client_token: :class:`str` or ``None``
:param client_token: A unique token generated by the client for each creation request.
The token should be a universally unique identifier (UUID), for
example: ``b8a2a2e3-2314-43cd-a871-6ede0f429751``. This token can
be used to guarantee idempotent creation.
If not specified, creation is not idempotent.
:type create_spec: :class:`DownloadSessionModel`
:param create_spec: Specification for the new download session to be created.
:rtype: :class:`str`
:return: Identifier of the new download session being created.
The return value will be an identifier for the resource type:
``com.vmware.content.library.item.DownloadSession``.
:raise: :class:`com.vmware.vapi.std.errors_client.InvalidArgument`
if the session specification is not valid.
:raise: :class:`com.vmware.vapi.std.errors_client.InvalidArgument`
if the ``client_token`` does not conform to the UUID format.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if the library item targeted by the download does not exist.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* The resource ``com.vmware.content.library.Item`` referenced by
the attribute :attr:`DownloadSessionModel.library_item_id` requires
``ContentLibrary.DownloadSession``.
"""
return self._invoke('create',
{
'client_token': client_token,
'create_spec': create_spec,
})
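Since ``client_token`` must be a UUID and a malformed token is rejected with ``InvalidArgument``, a client can generate and validate one locally before calling ``create``. A minimal sketch — ``make_client_token``, ``UUID_RE``, and the stub wiring in the trailing comment are illustrative, not part of this module:

```python
import re
import uuid

def make_client_token() -> str:
    """Generate an RFC 4122 UUID string for use as an idempotency token."""
    return str(uuid.uuid4())

# The server raises InvalidArgument for tokens that do not conform to the
# UUID format, so a local sanity check can fail fast before any request.
UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")

# In real use (requires a live vCenter and a configured stub, both assumed):
# session_id = DownloadSession(stub_config).create(
#     create_spec, client_token=make_client_token())
```

Reusing the same token across retried ``create`` calls is what makes the creation idempotent; generating a fresh token per retry defeats the purpose.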
def get(self,
download_session_id,
):
"""
Gets the download session with the specified identifier, including the
most up-to-date status information for the session.
:type download_session_id: :class:`str`
:param download_session_id: Identifier of the download session to retrieve.
The parameter must be an identifier for the resource type:
``com.vmware.content.library.item.DownloadSession``.
:rtype: :class:`DownloadSessionModel`
:return: The :class:`DownloadSessionModel` instance with the given
``download_session_id``.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if no download session with the given ``download_session_id``
exists.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* Method execution requires ``System.Anonymous``.
"""
return self._invoke('get',
{
'download_session_id': download_session_id,
})
def list(self,
library_item_id=None,
):
"""
Lists the identifiers of the download sessions created by the calling
user. The list may optionally be filtered by library item.
:type library_item_id: :class:`str` or ``None``
:param library_item_id: Library item identifier on which to filter results.
The parameter must be an identifier for the resource type:
``com.vmware.content.library.Item``.
If not specified, all download session identifiers are listed.
:rtype: :class:`list` of :class:`str`
:return: The :class:`list` of identifiers of all download sessions created
by the calling user.
The return value will contain identifiers for the resource type:
``com.vmware.content.library.item.DownloadSession``.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if a library item identifier is given for an item which does not
exist.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* The resource ``com.vmware.content.library.item.DownloadSession``
referenced by the parameter ``library_item_id`` requires
``ContentLibrary.DownloadSession``.
"""
return self._invoke('list',
{
'library_item_id': library_item_id,
})
def keep_alive(self,
download_session_id,
progress=None,
):
"""
Keeps a download session alive. This operation is allowed only if the
session is in the :attr:`DownloadSessionModel.State.ACTIVE` state.
If there is no activity for a download session for a certain period of
time, the download session will expire. The download session expiration
timeout is configurable in the Content Library Service system
configuration. The default is five minutes. Invoking this method
enables a client to specifically extend the lifetime of an active
download session.
:type download_session_id: :class:`str`
:param download_session_id: Identifier of the download session whose lifetime should be
extended.
The parameter must be an identifier for the resource type:
``com.vmware.content.library.item.DownloadSession``.
:type progress: :class:`long` or ``None``
:param progress: Optional update to the progress property of the session. If
specified, the new progress should be greater than the current
progress. See :attr:`DownloadSessionModel.client_progress`.
If not specified, the progress is not updated.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if no download session with the given identifier exists.
:raise: :class:`com.vmware.vapi.std.errors_client.NotAllowedInCurrentState`
if the download session is not in the
:attr:`DownloadSessionModel.State.ACTIVE` state.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* Method execution requires ``System.Anonymous``.
"""
return self._invoke('keep_alive',
{
'download_session_id': download_session_id,
'progress': progress,
})
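Because the server expects each reported progress value to be strictly greater than the current one, a client-side tracker that clamps to 0-100 and suppresses non-increasing values keeps ``keep_alive`` calls valid. A sketch under those assumptions (``ProgressTracker`` is an illustrative helper, not part of this module):

```python
from typing import Optional

class ProgressTracker:
    """Tracks 0-100 progress and only reports strictly increasing values,
    matching the keep_alive requirement that new progress be greater than
    the current progress."""

    def __init__(self) -> None:
        self.current = 0

    def advance(self, bytes_done: int, bytes_total: int) -> Optional[int]:
        """Return the new percentage if it increased, else None."""
        pct = min(100, bytes_done * 100 // max(bytes_total, 1))
        if pct > self.current:
            self.current = pct
            return pct
        return None

# In real use, only a non-None result is forwarded to the service:
# pct = tracker.advance(done, total)
# if pct is not None:
#     download_session.keep_alive(session_id, progress=pct)
```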
def cancel(self,
download_session_id,
):
"""
Cancels the download session. This method will abort any ongoing
transfers and invalidate transfer urls that the client may be
downloading from.
:type download_session_id: :class:`str`
:param download_session_id: Identifier of the download session that should be canceled.
The parameter must be an identifier for the resource type:
``com.vmware.content.library.item.DownloadSession``.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if no download session with the given identifier exists.
:raise: :class:`com.vmware.vapi.std.errors_client.NotAllowedInCurrentState`
if the download session is not in the
:attr:`DownloadSessionModel.State.ACTIVE` state.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* Method execution requires ``System.Anonymous``.
"""
return self._invoke('cancel',
{
'download_session_id': download_session_id,
})
def delete(self,
download_session_id,
):
"""
Deletes a download session. This removes the session and all
information associated with it.
Removing a download session leaves any current transfers for that
session in an indeterminate state (there is no guarantee that the
transfers will be able to complete). However, there will no longer be a
means of inspecting the status of those downloads except by seeing the
effect on the library item.
Download sessions for which there is no download activity or which are
complete will automatically be expired and then deleted after a period
of time.
:type download_session_id: :class:`str`
:param download_session_id: Identifier of the download session to be deleted.
The parameter must be an identifier for the resource type:
``com.vmware.content.library.item.DownloadSession``.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if the download session does not exist.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* Method execution requires ``System.Anonymous``.
"""
return self._invoke('delete',
{
'download_session_id': download_session_id,
})
def fail(self,
download_session_id,
client_error_message,
):
"""
Terminates the download session with a client-specified error message.
This is useful for transmitting client-side failures (for example, not
being able to download a file) to the server side.
:type download_session_id: :class:`str`
:param download_session_id: Identifier of the download session to fail.
The parameter must be an identifier for the resource type:
``com.vmware.content.library.item.DownloadSession``.
:type client_error_message: :class:`str`
:param client_error_message: Client-side error message. This can be useful in providing some
extra details about the client-side failure. Note that the message
won't be translated to the user's locale.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if the download session does not exist.
:raise: :class:`com.vmware.vapi.std.errors_client.NotAllowedInCurrentState`
if the download session is not in the
:attr:`DownloadSessionModel.State.ACTIVE` state.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* Method execution requires ``System.Anonymous``.
"""
return self._invoke('fail',
{
'download_session_id': download_session_id,
'client_error_message': client_error_message,
})
class File(VapiInterface):
"""
The ``File`` class can be used to query for information on the files within
a library item. Files are objects which are added to a library item through
the :class:`UpdateSession` and
:class:`com.vmware.content.library.item.updatesession_client.File` classes.
"""
_VAPI_SERVICE_ID = 'com.vmware.content.library.item.file'
"""
Identifier of the service in canonical form.
"""
def __init__(self, config):
"""
:type config: :class:`vmware.vapi.bindings.stub.StubConfiguration`
:param config: Configuration to be used for creating the stub.
"""
VapiInterface.__init__(self, config, _FileStub)
class ChecksumAlgorithm(Enum):
"""
The ``File.ChecksumAlgorithm`` class defines the valid checksum algorithms.
.. note::
This class represents an enumerated type in the interface language
definition. The class contains class attributes which represent the
values in the current version of the enumerated type. Newer versions of
the enumerated type may contain new values. To use new values of the
enumerated type in communication with a server that supports the newer
version of the API, you instantiate this class. See :ref:`enumerated
type description page <enumeration_description>`.
"""
SHA1 = None
"""
Checksum algorithm: SHA-1
"""
MD5 = None
"""
Checksum algorithm: MD5
"""
SHA256 = None
"""
Checksum algorithm: SHA-256. This class attribute was added in vSphere API
6.8.
"""
SHA512 = None
"""
Checksum algorithm: SHA-512. This class attribute was added in vSphere API
6.8.
"""
def __init__(self, string):
"""
:type string: :class:`str`
:param string: String value for the :class:`ChecksumAlgorithm` instance.
"""
Enum.__init__(string)
ChecksumAlgorithm._set_values([
ChecksumAlgorithm('SHA1'),
ChecksumAlgorithm('MD5'),
ChecksumAlgorithm('SHA256'),
ChecksumAlgorithm('SHA512'),
])
ChecksumAlgorithm._set_binding_type(type.EnumType(
'com.vmware.content.library.item.file.checksum_algorithm',
ChecksumAlgorithm))
class ChecksumInfo(VapiStruct):
"""
Provides checksums for a :class:`File.Info` object.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
def __init__(self,
algorithm=None,
checksum=None,
):
"""
:type algorithm: :class:`File.ChecksumAlgorithm` or ``None``
:param algorithm: The checksum algorithm (SHA1, MD5, SHA256, SHA512) used to
calculate the checksum.
If not specified, the default checksum algorithm is
:attr:`File.ChecksumAlgorithm.SHA1`.
:type checksum: :class:`str`
:param checksum: The checksum value calculated with
:attr:`File.ChecksumInfo.algorithm`.
"""
self.algorithm = algorithm
self.checksum = checksum
VapiStruct.__init__(self)
ChecksumInfo._set_binding_type(type.StructType(
'com.vmware.content.library.item.file.checksum_info', {
'algorithm': type.OptionalType(type.ReferenceType(__name__, 'File.ChecksumAlgorithm')),
'checksum': type.StringType(),
},
ChecksumInfo,
False,
None))
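A downloaded file can be verified against a ``ChecksumInfo`` locally with ``hashlib``, since all four ``ChecksumAlgorithm`` values map directly onto standard-library constructors. A sketch (obtaining the file bytes and the ``ChecksumInfo`` is assumed to have happened already; ``verify_checksum`` is an illustrative helper):

```python
import hashlib

# Map ChecksumAlgorithm string values to hashlib constructors.
_ALGORITHMS = {
    "SHA1": hashlib.sha1,
    "MD5": hashlib.md5,
    "SHA256": hashlib.sha256,
    "SHA512": hashlib.sha512,
}

def verify_checksum(data: bytes, algorithm: str, expected: str) -> bool:
    """Return True if ``data`` hashes to ``expected`` under ``algorithm``."""
    return _ALGORITHMS[algorithm](data).hexdigest() == expected.lower()
```

For large transfers, feeding the hash object in chunks with ``update()`` avoids holding the whole file in memory; the one-shot form above is kept for brevity.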
class Info(VapiStruct):
"""
The ``File.Info`` class provides information about a file in Content
Library Service storage.
A file is an actual stored object for a library item. An item will have
zero files initially, but one or more can be uploaded to the item.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
def __init__(self,
checksum_info=None,
name=None,
size=None,
cached=None,
version=None,
):
"""
:type checksum_info: :class:`File.ChecksumInfo` or ``None``
:param checksum_info: A checksum for validating the content of the file.
This value can be used to verify that a transfer was completed
without errors.
A checksum cannot always be calculated, and the value will be None
if the file does not have content.
:type name: :class:`str`
:param name: The name of the file.
This value will be unique within the library item for each file. It
cannot be an empty string.
:type size: :class:`long`
:param size: The file size, in bytes. The file size is the storage used and not
the uploaded or provisioned size. For example, when uploading a
disk to a datastore, the amount of storage that the disk consumes
may be different from the disk file size. When the file is not
cached, the size is 0.
:type cached: :class:`bool`
:param cached: Indicates whether the file is on disk or not.
:type version: :class:`str`
:param version: The version of this file; incremented when a new copy of the file
is uploaded.
"""
self.checksum_info = checksum_info
self.name = name
self.size = size
self.cached = cached
self.version = version
VapiStruct.__init__(self)
Info._set_binding_type(type.StructType(
'com.vmware.content.library.item.file.info', {
'checksum_info': type.OptionalType(type.ReferenceType(__name__, 'File.ChecksumInfo')),
'name': type.StringType(),
'size': type.IntegerType(),
'cached': type.BooleanType(),
'version': type.StringType(),
},
Info,
False,
None))
def get(self,
library_item_id,
name,
):
"""
Retrieves the information for a single file in a library item by its
name.
:type library_item_id: :class:`str`
:param library_item_id: Identifier of the library item whose file information should be
returned.
The parameter must be an identifier for the resource type:
``com.vmware.content.library.Item``.
:type name: :class:`str`
:param name: Name of the file in the library item whose information should be
returned.
:rtype: :class:`File.Info`
:return: The :class:`File.Info` object with information on the specified
file.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if ``library_item_id`` refers to a library item that does not
exist.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if ``name`` refers to a file that does not exist in the library
item.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* The resource ``com.vmware.content.library.Item`` referenced by
the parameter ``library_item_id`` requires ``System.Read``.
"""
return self._invoke('get',
{
'library_item_id': library_item_id,
'name': name,
})
def list(self,
library_item_id,
):
"""
Lists all of the files that are stored within a given library item.
:type library_item_id: :class:`str`
:param library_item_id: Identifier of the library item whose files should be listed.
The parameter must be an identifier for the resource type:
``com.vmware.content.library.Item``.
:rtype: :class:`list` of :class:`File.Info`
:return: The :class:`list` of all of the files that are stored within the
given library item.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if ``library_item_id`` refers to a library item that does not
exist.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* The resource ``com.vmware.content.library.Item`` referenced by
the parameter ``library_item_id`` requires ``System.Read``.
"""
return self._invoke('list',
{
'library_item_id': library_item_id,
})
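Since ``File.Info.size`` reflects storage used (and is 0 when the file is not cached), a client can summarize an item's on-disk footprint from ``list`` results. A sketch using a hypothetical ``FileRecord`` stand-in for the relevant ``File.Info`` fields, so it runs without the SDK:

```python
from typing import Iterable, List, NamedTuple

class FileRecord(NamedTuple):
    """Local stand-in for the File.Info fields used below (hypothetical)."""
    name: str
    size: int
    cached: bool

def cached_bytes(files: Iterable[FileRecord]) -> int:
    """Sum the storage used; uncached files report size 0 by definition."""
    return sum(f.size for f in files if f.cached)

def uncached_names(files: Iterable[FileRecord]) -> List[str]:
    """Names of files not yet present on disk."""
    return [f.name for f in files if not f.cached]
```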
class Storage(VapiInterface):
"""
``Storage`` is a resource that represents a specific instance of a file
stored on a storage backing. Unlike :class:`File`, which is abstract,
``Storage`` represents concrete files on the various storage backings. A file
is only represented once in :class:`File`, but will be represented multiple
times (once for each storage backing) in ``Storage``. The ``Storage`` class
provides information on the storage backing and the specific location of
the file in that backing to privileged users who want direct access to the
file on the storage medium.
"""
_VAPI_SERVICE_ID = 'com.vmware.content.library.item.storage'
"""
Identifier of the service in canonical form.
"""
def __init__(self, config):
"""
:type config: :class:`vmware.vapi.bindings.stub.StubConfiguration`
:param config: Configuration to be used for creating the stub.
"""
VapiInterface.__init__(self, config, _StorageStub)
class Info(VapiStruct):
"""
The ``Storage.Info`` class is the expanded form of :class:`File.Info` that
includes details about the storage backing for a file in a library item.
.. tip::
The arguments are used to initialize data attributes with the same
names.
"""
def __init__(self,
storage_backing=None,
storage_uris=None,
checksum_info=None,
name=None,
size=None,
cached=None,
version=None,
):
"""
:type storage_backing: :class:`com.vmware.content.library_client.StorageBacking`
:param storage_backing: The storage backing on which this object resides. This might not be
the same as the default storage backing associated with the
library.
:type storage_uris: :class:`list` of :class:`str`
:param storage_uris: URIs that identify the file on the storage backing.
These URIs may be specific to the backing and may need
interpretation by the client. A client that understands a URI
scheme in this list may use that URI to directly access the file on
the storage backing. This can provide high-performance support for
file manipulation.
:type checksum_info: :class:`File.ChecksumInfo` or ``None``
:param checksum_info: A checksum for validating the content of the file.
This value can be used to verify that a transfer was completed
without errors.
A checksum cannot always be calculated, and the value will be None
if the file does not have content.
:type name: :class:`str`
:param name: The name of the file.
This value will be unique within the library item for each file. It
cannot be an empty string.
:type size: :class:`long`
:param size: The file size, in bytes. The file size is the storage used and not
the uploaded or provisioned size. For example, when uploading a
disk to a datastore, the amount of storage that the disk consumes
may be different from the disk file size. When the file is not
cached, the size is 0.
:type cached: :class:`bool`
:param cached: Indicates whether the file is on disk or not.
:type version: :class:`str`
:param version: The version of this file; incremented when a new copy of the file
is uploaded.
"""
self.storage_backing = storage_backing
self.storage_uris = storage_uris
self.checksum_info = checksum_info
self.name = name
self.size = size
self.cached = cached
self.version = version
VapiStruct.__init__(self)
Info._set_binding_type(type.StructType(
'com.vmware.content.library.item.storage.info', {
'storage_backing': type.ReferenceType('com.vmware.content.library_client', 'StorageBacking'),
'storage_uris': type.ListType(type.URIType()),
'checksum_info': type.OptionalType(type.ReferenceType(__name__, 'File.ChecksumInfo')),
'name': type.StringType(),
'size': type.IntegerType(),
'cached': type.BooleanType(),
'version': type.StringType(),
},
Info,
False,
None))
def get(self,
library_item_id,
file_name,
):
"""
Retrieves the storage information for a specific file in a library
item.
:type library_item_id: :class:`str`
:param library_item_id: Identifier of the library item whose storage information should be
retrieved.
The parameter must be an identifier for the resource type:
``com.vmware.content.library.Item``.
:type file_name: :class:`str`
:param file_name: Name of the file for which the storage information should be
listed.
:rtype: :class:`list` of :class:`Storage.Info`
:return: The :class:`list` of all the storage items for the given file
within the given library item.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if the specified library item does not exist.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if the specified file does not exist in the given library item.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* The resource ``com.vmware.content.library.Item`` referenced by
the parameter ``library_item_id`` requires
``ContentLibrary.ReadStorage``.
"""
return self._invoke('get',
{
'library_item_id': library_item_id,
'file_name': file_name,
})
def list(self,
library_item_id,
):
"""
Lists all storage items for a given library item.
:type library_item_id: :class:`str`
:param library_item_id: Identifier of the library item whose storage information should be
listed.
The parameter must be an identifier for the resource type:
``com.vmware.content.library.Item``.
:rtype: :class:`list` of :class:`Storage.Info`
:return: The :class:`list` of all storage items for a given library item.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if the specified library item does not exist.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* The resource ``com.vmware.content.library.Item`` referenced by
the parameter ``library_item_id`` requires
``ContentLibrary.ReadStorage``.
"""
return self._invoke('list',
{
'library_item_id': library_item_id,
})
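Because ``Storage.Info.storage_uris`` may mix backing-specific schemes that need client interpretation, filtering by scheme before attempting direct access is a reasonable pattern. A sketch (the ``ds:`` example URI in the test is illustrative, not a documented format):

```python
from typing import Iterable, List
from urllib.parse import urlparse

def uris_for_scheme(storage_uris: Iterable[str], scheme: str) -> List[str]:
    """Select the storage URIs whose scheme the client knows how to handle.

    A client should only act on URI schemes it understands; anything
    else is left for other access paths.
    """
    return [u for u in storage_uris if urlparse(u).scheme == scheme]
```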
class UpdateSession(VapiInterface):
"""
The ``UpdateSession`` class manipulates sessions that are used to upload
content into the Content Library Service, and/or to remove files from a
library item.
An update session is a resource which tracks changes to content. An update
session is created with a set of files that are intended to be uploaded to
a specific :class:`com.vmware.content.library_client.ItemModel`, or removed
from an item. The session object can be used to track the uploads and
inspect the changes that are being made to the item by that upload. It can
also serve as a channel to check on the result of the upload, and status
messages such as errors and warnings for the upload.
Modifications are not visible to other clients unless the session is
completed and all necessary files have been received.
The management of the files within the session is done through the
:class:`com.vmware.content.library.item.updatesession_client.File` class.
"""
RESOURCE_TYPE = "com.vmware.content.library.item.UpdateSession"
"""
Resource type for an update session.
"""
_VAPI_SERVICE_ID = 'com.vmware.content.library.item.update_session'
"""
Identifier of the service in canonical form.
"""
def __init__(self, config):
"""
:type config: :class:`vmware.vapi.bindings.stub.StubConfiguration`
:param config: Configuration to be used for creating the stub.
"""
VapiInterface.__init__(self, config, _UpdateSessionStub)
def create(self,
create_spec,
client_token=None,
):
"""
Creates a new update session. An update session is used to make
modifications to a library item. Modifications are not visible to other
clients unless the session is completed and all necessary files have
been received.
The Content Library Service allows only a single update session to be
active for a specific library item.
:type client_token: :class:`str` or ``None``
:param client_token: Unique token generated by the client for each creation request. The
token should be a universally unique identifier (UUID), for
example: ``b8a2a2e3-2314-43cd-a871-6ede0f429751``. This token can
be used to guarantee idempotent creation.
If not specified, creation is not idempotent.
:type create_spec: :class:`UpdateSessionModel`
:param create_spec: Specification for the new update session to be created.
:rtype: :class:`str`
:return: Identifier of the new update session being created.
The return value will be an identifier for the resource type:
``com.vmware.content.library.item.UpdateSession``.
:raise: :class:`com.vmware.vapi.std.errors_client.InvalidArgument`
if the session specification is not valid.
:raise: :class:`com.vmware.vapi.std.errors_client.InvalidArgument`
if the ``client_token`` does not conform to the UUID format.
:raise: :class:`com.vmware.vapi.std.errors_client.InvalidElementType`
if the update session is being created on a subscribed library
item.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if the item targeted for update does not exist.
:raise: :class:`com.vmware.vapi.std.errors_client.ResourceBusy`
if there is another update session on the same library item.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* The resource ``com.vmware.content.library.Item`` referenced by
the attribute :attr:`UpdateSessionModel.library_item_id` requires
``ContentLibrary.UpdateSession``.
"""
return self._invoke('create',
{
'client_token': client_token,
'create_spec': create_spec,
})
def get(self,
update_session_id,
):
"""
Gets the update session with the specified identifier, including the
most up-to-date status information for the session.
:type update_session_id: :class:`str`
:param update_session_id: Identifier of the update session to retrieve.
The parameter must be an identifier for the resource type:
``com.vmware.content.library.item.UpdateSession``.
:rtype: :class:`UpdateSessionModel`
:return: The :class:`UpdateSessionModel` instance with the given
``update_session_id``.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if no update session with the given identifier exists.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* Method execution requires ``System.Anonymous``.
"""
return self._invoke('get',
{
'update_session_id': update_session_id,
})
def list(self,
library_item_id=None,
):
"""
Lists the identifiers of the update sessions created by the calling
user. The list may optionally be filtered by library item.
:type library_item_id: :class:`str` or ``None``
:param library_item_id: Optional library item identifier on which to filter results.
The parameter must be an identifier for the resource type:
``com.vmware.content.library.Item``.
If not specified, the results are not filtered.
:rtype: :class:`list` of :class:`str`
:return: The :class:`list` of identifiers of all update sessions created by
the calling user.
The return value will contain identifiers for the resource type:
``com.vmware.content.library.item.UpdateSession``.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if a library item identifier is given for an item which does not
exist.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* The resource ``com.vmware.content.library.item.UpdateSession``
referenced by the parameter ``library_item_id`` requires
``ContentLibrary.UpdateSession``.
"""
return self._invoke('list',
{
'library_item_id': library_item_id,
})
def complete(self,
update_session_id,
):
"""
Completes the update session. This indicates that the client has
finished making all the changes required to the underlying library
item. If the client is pushing the content to the server, the library
item will be updated once this call returns. If the server is pulling
the content, the call may return before the changes become visible. In
that case, the client can track the session to know when the server is
done.
This method requires the session to be in the
:attr:`UpdateSessionModel.State.ACTIVE` state.
Depending on the type of the library item associated with this session,
a type adapter may be invoked to verify the validity of the files
uploaded. The user can explicitly validate the session before
completing the session by using the
:func:`com.vmware.content.library.item.updatesession_client.File.validate`
method.
Modifications are not visible to other clients unless the session is
completed and all necessary files have been received.
:type update_session_id: :class:`str`
:param update_session_id: Identifier of the update session that should be completed.
The parameter must be an identifier for the resource type:
``com.vmware.content.library.item.UpdateSession``.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if no update session with the given identifier exists.
:raise: :class:`com.vmware.vapi.std.errors_client.NotAllowedInCurrentState`
if the update session is not in the
:attr:`UpdateSessionModel.State.ACTIVE` state, or if some of the
files that will be uploaded by the client aren't received
correctly.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* Method execution requires ``System.Anonymous``.
"""
return self._invoke('complete',
{
'update_session_id': update_session_id,
})
def keep_alive(self,
update_session_id,
client_progress=None,
):
"""
Keeps an update session alive.
If there is no activity for an update session after a period of time,
the update session will expire, then be deleted. The update session
expiration timeout is configurable in the Content Library Service
system configuration. The default is five minutes. Invoking this method
enables a client to specifically extend the lifetime of the update
session.
:type update_session_id: :class:`str`
:param update_session_id: Identifier of the update session whose lifetime should be extended.
The parameter must be an identifier for the resource type:
``com.vmware.content.library.item.UpdateSession``.
:type client_progress: :class:`long` or ``None``
        :param client_progress: Optional update to the progress property of the session. If
            specified, the new progress should be greater than the current
            progress. See :attr:`UpdateSessionModel.client_progress`.
            If not specified, the progress is not updated.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if no update session with the given identifier exists.
:raise: :class:`com.vmware.vapi.std.errors_client.NotAllowedInCurrentState`
if the update session is not in the
:attr:`UpdateSessionModel.State.ACTIVE` state.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* Method execution requires ``System.Anonymous``.
"""
return self._invoke('keep_alive',
{
'update_session_id': update_session_id,
'client_progress': client_progress,
})
def cancel(self,
update_session_id,
):
"""
Cancels the update session and sets its state to
:attr:`UpdateSessionModel.State.CANCELED`. This method will free up any
temporary resources currently associated with the session.
        This method is not allowed if the session has already been completed.
        Canceling an update session cancels any in-progress transfers
        (either uploaded by the client or pulled by the server). Any content
        that has already been received will be scheduled for deletion.
:type update_session_id: :class:`str`
:param update_session_id: Identifier of the update session that should be canceled.
The parameter must be an identifier for the resource type:
``com.vmware.content.library.item.UpdateSession``.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if no update session with the given identifier exists.
:raise: :class:`com.vmware.vapi.std.errors_client.NotAllowedInCurrentState`
if the update session is not in the
:attr:`UpdateSessionModel.State.ACTIVE` state.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* Method execution requires ``System.Anonymous``.
"""
return self._invoke('cancel',
{
'update_session_id': update_session_id,
})
def fail(self,
update_session_id,
client_error_message,
):
"""
        Terminates the update session with a client-specified error message.
This is useful in transmitting client side failures (for example, not
being able to access a file) to the server side.
:type update_session_id: :class:`str`
:param update_session_id: Identifier of the update session to fail.
The parameter must be an identifier for the resource type:
``com.vmware.content.library.item.UpdateSession``.
:type client_error_message: :class:`str`
:param client_error_message: Client side error message. This can be useful in providing some
extra details about the client side failure. Note that the message
won't be translated to the user's locale.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if the update session does not exist.
:raise: :class:`com.vmware.vapi.std.errors_client.NotAllowedInCurrentState`
if the update session is not in the
:attr:`UpdateSessionModel.State.ACTIVE` state.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* Method execution requires ``System.Anonymous``.
"""
return self._invoke('fail',
{
'update_session_id': update_session_id,
'client_error_message': client_error_message,
})
def delete(self,
update_session_id,
):
"""
Deletes an update session. This removes the session and all information
associated with it.
Removing an update session leaves any current transfers for that
session in an indeterminate state (there is no guarantee that the
server will terminate the transfers, or that the transfers can be
        completed). However, there will no longer be a means of inspecting the
status of those uploads except by seeing the effect on the library
item.
Update sessions for which there is no upload activity or which are
complete will automatically be deleted after a period of time.
:type update_session_id: :class:`str`
        :param update_session_id: Identifier of the update session to delete.
The parameter must be an identifier for the resource type:
``com.vmware.content.library.item.UpdateSession``.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if the update session does not exist.
:raise: :class:`com.vmware.vapi.std.errors_client.NotAllowedInCurrentState`
if the update session is in the
:attr:`UpdateSessionModel.State.ACTIVE` state.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* Method execution requires ``System.Anonymous``.
"""
return self._invoke('delete',
{
'update_session_id': update_session_id,
})
def update(self,
update_session_id,
update_spec,
):
"""
Updates the properties of an update session.
This is an incremental update to the update session. Any attribute in
the :class:`UpdateSessionModel` class that is None will not be
modified.
This method will only update the property
:attr:`UpdateSessionModel.warning_behavior` of the update session. This
will not, for example, update the
:attr:`UpdateSessionModel.library_item_id` or
:attr:`UpdateSessionModel.state` of an update session.
This method requires the session to be in the
        :attr:`UpdateSessionModel.State.ACTIVE` state. This method was added
in vSphere API 6.8.
:type update_session_id: :class:`str`
        :param update_session_id: Identifier of the update session that should be updated.
The parameter must be an identifier for the resource type:
``com.vmware.content.library.item.UpdateSession``.
:type update_spec: :class:`UpdateSessionModel`
:param update_spec: Specification for the new property values to be set on the update
session.
:raise: :class:`com.vmware.vapi.std.errors_client.NotFound`
if the update session does not exist.
:raise: :class:`com.vmware.vapi.std.errors_client.NotAllowedInCurrentState`
if the update session is not in the
:attr:`UpdateSessionModel.State.ACTIVE` state.
:raise: :class:`com.vmware.vapi.std.errors_client.InvalidArgument`
if the update session specification is not valid.
:raise: :class:`com.vmware.vapi.std.errors_client.Unauthorized`
if you do not have all of the privileges described as follows:
* Method execution requires ``System.Anonymous``.
"""
return self._invoke('update',
{
'update_session_id': update_session_id,
'update_spec': update_spec,
})
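# A minimal sketch of the update-session lifecycle described in the
# docstrings above. The wiring (``update_session_svc`` and ``item_id``) is
# illustrative, not part of this module:
#
#     session_id = update_session_svc.create(
#         create_spec=UpdateSessionModel(library_item_id=item_id))
#     try:
#         # ...upload files via the updatesession File service...
#         update_session_svc.keep_alive(session_id)  # extend the expiration timeout
#         update_session_svc.complete(session_id)    # make the changes visible
#     except Exception as exc:
#         update_session_svc.fail(session_id, client_error_message=str(exc))
#         update_session_svc.delete(session_id)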
class _DownloadSessionStub(ApiInterfaceStub):
def __init__(self, config):
# properties for create operation
create_input_type = type.StructType('operation-input', {
'client_token': type.OptionalType(type.StringType()),
'create_spec': type.ReferenceType(__name__, 'DownloadSessionModel'),
})
create_error_dict = {
'com.vmware.vapi.std.errors.invalid_argument':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'InvalidArgument'),
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
}
create_input_value_validator_list = [
]
create_output_validator_list = [
]
create_rest_metadata = None
# properties for get operation
get_input_type = type.StructType('operation-input', {
'download_session_id': type.IdType(resource_types='com.vmware.content.library.item.DownloadSession'),
})
get_error_dict = {
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
}
get_input_value_validator_list = [
]
get_output_validator_list = [
]
get_rest_metadata = None
# properties for list operation
list_input_type = type.StructType('operation-input', {
'library_item_id': type.OptionalType(type.IdType()),
})
list_error_dict = {
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
}
list_input_value_validator_list = [
]
list_output_validator_list = [
]
list_rest_metadata = None
# properties for keep_alive operation
keep_alive_input_type = type.StructType('operation-input', {
'download_session_id': type.IdType(resource_types='com.vmware.content.library.item.DownloadSession'),
'progress': type.OptionalType(type.IntegerType()),
})
keep_alive_error_dict = {
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
'com.vmware.vapi.std.errors.not_allowed_in_current_state':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotAllowedInCurrentState'),
}
keep_alive_input_value_validator_list = [
]
keep_alive_output_validator_list = [
]
keep_alive_rest_metadata = None
# properties for cancel operation
cancel_input_type = type.StructType('operation-input', {
'download_session_id': type.IdType(resource_types='com.vmware.content.library.item.DownloadSession'),
})
cancel_error_dict = {
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
'com.vmware.vapi.std.errors.not_allowed_in_current_state':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotAllowedInCurrentState'),
}
cancel_input_value_validator_list = [
]
cancel_output_validator_list = [
]
cancel_rest_metadata = None
# properties for delete operation
delete_input_type = type.StructType('operation-input', {
'download_session_id': type.IdType(resource_types='com.vmware.content.library.item.DownloadSession'),
})
delete_error_dict = {
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
}
delete_input_value_validator_list = [
]
delete_output_validator_list = [
]
delete_rest_metadata = None
# properties for fail operation
fail_input_type = type.StructType('operation-input', {
'download_session_id': type.IdType(resource_types='com.vmware.content.library.item.DownloadSession'),
'client_error_message': type.StringType(),
})
fail_error_dict = {
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
'com.vmware.vapi.std.errors.not_allowed_in_current_state':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotAllowedInCurrentState'),
}
fail_input_value_validator_list = [
]
fail_output_validator_list = [
]
fail_rest_metadata = None
operations = {
'create': {
'input_type': create_input_type,
'output_type': type.IdType(resource_types='com.vmware.content.library.item.DownloadSession'),
'errors': create_error_dict,
'input_value_validator_list': create_input_value_validator_list,
'output_validator_list': create_output_validator_list,
'task_type': TaskType.NONE,
},
'get': {
'input_type': get_input_type,
'output_type': type.ReferenceType(__name__, 'DownloadSessionModel'),
'errors': get_error_dict,
'input_value_validator_list': get_input_value_validator_list,
'output_validator_list': get_output_validator_list,
'task_type': TaskType.NONE,
},
'list': {
'input_type': list_input_type,
'output_type': type.ListType(type.IdType()),
'errors': list_error_dict,
'input_value_validator_list': list_input_value_validator_list,
'output_validator_list': list_output_validator_list,
'task_type': TaskType.NONE,
},
'keep_alive': {
'input_type': keep_alive_input_type,
'output_type': type.VoidType(),
'errors': keep_alive_error_dict,
'input_value_validator_list': keep_alive_input_value_validator_list,
'output_validator_list': keep_alive_output_validator_list,
'task_type': TaskType.NONE,
},
'cancel': {
'input_type': cancel_input_type,
'output_type': type.VoidType(),
'errors': cancel_error_dict,
'input_value_validator_list': cancel_input_value_validator_list,
'output_validator_list': cancel_output_validator_list,
'task_type': TaskType.NONE,
},
'delete': {
'input_type': delete_input_type,
'output_type': type.VoidType(),
'errors': delete_error_dict,
'input_value_validator_list': delete_input_value_validator_list,
'output_validator_list': delete_output_validator_list,
'task_type': TaskType.NONE,
},
'fail': {
'input_type': fail_input_type,
'output_type': type.VoidType(),
'errors': fail_error_dict,
'input_value_validator_list': fail_input_value_validator_list,
'output_validator_list': fail_output_validator_list,
'task_type': TaskType.NONE,
},
}
rest_metadata = {
'create': create_rest_metadata,
'get': get_rest_metadata,
'list': list_rest_metadata,
'keep_alive': keep_alive_rest_metadata,
'cancel': cancel_rest_metadata,
'delete': delete_rest_metadata,
'fail': fail_rest_metadata,
}
ApiInterfaceStub.__init__(
self, iface_name='com.vmware.content.library.item.download_session',
config=config, operations=operations, rest_metadata=rest_metadata,
is_vapi_rest=True)
class _FileStub(ApiInterfaceStub):
def __init__(self, config):
# properties for get operation
get_input_type = type.StructType('operation-input', {
'library_item_id': type.IdType(resource_types='com.vmware.content.library.Item'),
'name': type.StringType(),
})
get_error_dict = {
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
}
get_input_value_validator_list = [
]
get_output_validator_list = [
]
get_rest_metadata = None
# properties for list operation
list_input_type = type.StructType('operation-input', {
'library_item_id': type.IdType(resource_types='com.vmware.content.library.Item'),
})
list_error_dict = {
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
}
list_input_value_validator_list = [
]
list_output_validator_list = [
]
list_rest_metadata = None
operations = {
'get': {
'input_type': get_input_type,
'output_type': type.ReferenceType(__name__, 'File.Info'),
'errors': get_error_dict,
'input_value_validator_list': get_input_value_validator_list,
'output_validator_list': get_output_validator_list,
'task_type': TaskType.NONE,
},
'list': {
'input_type': list_input_type,
'output_type': type.ListType(type.ReferenceType(__name__, 'File.Info')),
'errors': list_error_dict,
'input_value_validator_list': list_input_value_validator_list,
'output_validator_list': list_output_validator_list,
'task_type': TaskType.NONE,
},
}
rest_metadata = {
'get': get_rest_metadata,
'list': list_rest_metadata,
}
ApiInterfaceStub.__init__(
self, iface_name='com.vmware.content.library.item.file',
config=config, operations=operations, rest_metadata=rest_metadata,
is_vapi_rest=True)
class _StorageStub(ApiInterfaceStub):
def __init__(self, config):
# properties for get operation
get_input_type = type.StructType('operation-input', {
'library_item_id': type.IdType(resource_types='com.vmware.content.library.Item'),
'file_name': type.StringType(),
})
get_error_dict = {
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
}
get_input_value_validator_list = [
]
get_output_validator_list = [
]
get_rest_metadata = None
# properties for list operation
list_input_type = type.StructType('operation-input', {
'library_item_id': type.IdType(resource_types='com.vmware.content.library.Item'),
})
list_error_dict = {
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
}
list_input_value_validator_list = [
]
list_output_validator_list = [
]
list_rest_metadata = None
operations = {
'get': {
'input_type': get_input_type,
'output_type': type.ListType(type.ReferenceType(__name__, 'Storage.Info')),
'errors': get_error_dict,
'input_value_validator_list': get_input_value_validator_list,
'output_validator_list': get_output_validator_list,
'task_type': TaskType.NONE,
},
'list': {
'input_type': list_input_type,
'output_type': type.ListType(type.ReferenceType(__name__, 'Storage.Info')),
'errors': list_error_dict,
'input_value_validator_list': list_input_value_validator_list,
'output_validator_list': list_output_validator_list,
'task_type': TaskType.NONE,
},
}
rest_metadata = {
'get': get_rest_metadata,
'list': list_rest_metadata,
}
ApiInterfaceStub.__init__(
self, iface_name='com.vmware.content.library.item.storage',
config=config, operations=operations, rest_metadata=rest_metadata,
is_vapi_rest=True)
class _UpdateSessionStub(ApiInterfaceStub):
def __init__(self, config):
# properties for create operation
create_input_type = type.StructType('operation-input', {
'client_token': type.OptionalType(type.StringType()),
'create_spec': type.ReferenceType(__name__, 'UpdateSessionModel'),
})
create_error_dict = {
'com.vmware.vapi.std.errors.invalid_argument':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'InvalidArgument'),
'com.vmware.vapi.std.errors.invalid_element_type':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'InvalidElementType'),
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
'com.vmware.vapi.std.errors.resource_busy':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'ResourceBusy'),
}
create_input_value_validator_list = [
]
create_output_validator_list = [
]
create_rest_metadata = None
# properties for get operation
get_input_type = type.StructType('operation-input', {
'update_session_id': type.IdType(resource_types='com.vmware.content.library.item.UpdateSession'),
})
get_error_dict = {
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
}
get_input_value_validator_list = [
]
get_output_validator_list = [
]
get_rest_metadata = None
# properties for list operation
list_input_type = type.StructType('operation-input', {
'library_item_id': type.OptionalType(type.IdType()),
})
list_error_dict = {
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
}
list_input_value_validator_list = [
]
list_output_validator_list = [
]
list_rest_metadata = None
# properties for complete operation
complete_input_type = type.StructType('operation-input', {
'update_session_id': type.IdType(resource_types='com.vmware.content.library.item.UpdateSession'),
})
complete_error_dict = {
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
'com.vmware.vapi.std.errors.not_allowed_in_current_state':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotAllowedInCurrentState'),
}
complete_input_value_validator_list = [
]
complete_output_validator_list = [
]
complete_rest_metadata = None
# properties for keep_alive operation
keep_alive_input_type = type.StructType('operation-input', {
'update_session_id': type.IdType(resource_types='com.vmware.content.library.item.UpdateSession'),
'client_progress': type.OptionalType(type.IntegerType()),
})
keep_alive_error_dict = {
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
'com.vmware.vapi.std.errors.not_allowed_in_current_state':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotAllowedInCurrentState'),
}
keep_alive_input_value_validator_list = [
]
keep_alive_output_validator_list = [
]
keep_alive_rest_metadata = None
# properties for cancel operation
cancel_input_type = type.StructType('operation-input', {
'update_session_id': type.IdType(resource_types='com.vmware.content.library.item.UpdateSession'),
})
cancel_error_dict = {
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
'com.vmware.vapi.std.errors.not_allowed_in_current_state':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotAllowedInCurrentState'),
}
cancel_input_value_validator_list = [
]
cancel_output_validator_list = [
]
cancel_rest_metadata = None
# properties for fail operation
fail_input_type = type.StructType('operation-input', {
'update_session_id': type.IdType(resource_types='com.vmware.content.library.item.UpdateSession'),
'client_error_message': type.StringType(),
})
fail_error_dict = {
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
'com.vmware.vapi.std.errors.not_allowed_in_current_state':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotAllowedInCurrentState'),
}
fail_input_value_validator_list = [
]
fail_output_validator_list = [
]
fail_rest_metadata = None
# properties for delete operation
delete_input_type = type.StructType('operation-input', {
'update_session_id': type.IdType(resource_types='com.vmware.content.library.item.UpdateSession'),
})
delete_error_dict = {
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
'com.vmware.vapi.std.errors.not_allowed_in_current_state':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotAllowedInCurrentState'),
}
delete_input_value_validator_list = [
]
delete_output_validator_list = [
]
delete_rest_metadata = None
# properties for update operation
update_input_type = type.StructType('operation-input', {
'update_session_id': type.IdType(resource_types='com.vmware.content.library.item.UpdateSession'),
'update_spec': type.ReferenceType(__name__, 'UpdateSessionModel'),
})
update_error_dict = {
'com.vmware.vapi.std.errors.not_found':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotFound'),
'com.vmware.vapi.std.errors.not_allowed_in_current_state':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'NotAllowedInCurrentState'),
'com.vmware.vapi.std.errors.invalid_argument':
type.ReferenceType('com.vmware.vapi.std.errors_client', 'InvalidArgument'),
}
update_input_value_validator_list = [
]
update_output_validator_list = [
]
update_rest_metadata = None
operations = {
'create': {
'input_type': create_input_type,
'output_type': type.IdType(resource_types='com.vmware.content.library.item.UpdateSession'),
'errors': create_error_dict,
'input_value_validator_list': create_input_value_validator_list,
'output_validator_list': create_output_validator_list,
'task_type': TaskType.NONE,
},
'get': {
'input_type': get_input_type,
'output_type': type.ReferenceType(__name__, 'UpdateSessionModel'),
'errors': get_error_dict,
'input_value_validator_list': get_input_value_validator_list,
'output_validator_list': get_output_validator_list,
'task_type': TaskType.NONE,
},
'list': {
'input_type': list_input_type,
'output_type': type.ListType(type.IdType()),
'errors': list_error_dict,
'input_value_validator_list': list_input_value_validator_list,
'output_validator_list': list_output_validator_list,
'task_type': TaskType.NONE,
},
'complete': {
'input_type': complete_input_type,
'output_type': type.VoidType(),
'errors': complete_error_dict,
'input_value_validator_list': complete_input_value_validator_list,
'output_validator_list': complete_output_validator_list,
'task_type': TaskType.NONE,
},
'keep_alive': {
'input_type': keep_alive_input_type,
'output_type': type.VoidType(),
'errors': keep_alive_error_dict,
'input_value_validator_list': keep_alive_input_value_validator_list,
'output_validator_list': keep_alive_output_validator_list,
'task_type': TaskType.NONE,
},
'cancel': {
'input_type': cancel_input_type,
'output_type': type.VoidType(),
'errors': cancel_error_dict,
'input_value_validator_list': cancel_input_value_validator_list,
'output_validator_list': cancel_output_validator_list,
'task_type': TaskType.NONE,
},
'fail': {
'input_type': fail_input_type,
'output_type': type.VoidType(),
'errors': fail_error_dict,
'input_value_validator_list': fail_input_value_validator_list,
'output_validator_list': fail_output_validator_list,
'task_type': TaskType.NONE,
},
'delete': {
'input_type': delete_input_type,
'output_type': type.VoidType(),
'errors': delete_error_dict,
'input_value_validator_list': delete_input_value_validator_list,
'output_validator_list': delete_output_validator_list,
'task_type': TaskType.NONE,
},
'update': {
'input_type': update_input_type,
'output_type': type.VoidType(),
'errors': update_error_dict,
'input_value_validator_list': update_input_value_validator_list,
'output_validator_list': update_output_validator_list,
'task_type': TaskType.NONE,
},
}
rest_metadata = {
'create': create_rest_metadata,
'get': get_rest_metadata,
'list': list_rest_metadata,
'complete': complete_rest_metadata,
'keep_alive': keep_alive_rest_metadata,
'cancel': cancel_rest_metadata,
'fail': fail_rest_metadata,
'delete': delete_rest_metadata,
'update': update_rest_metadata,
}
ApiInterfaceStub.__init__(
self, iface_name='com.vmware.content.library.item.update_session',
config=config, operations=operations, rest_metadata=rest_metadata,
is_vapi_rest=True)
class StubFactory(StubFactoryBase):
_attrs = {
'DownloadSession': DownloadSession,
'File': File,
'Storage': Storage,
'UpdateSession': UpdateSession,
'downloadsession': 'com.vmware.content.library.item.downloadsession_client.StubFactory',
'updatesession': 'com.vmware.content.library.item.updatesession_client.StubFactory',
}
# janeladetalhadafinancas.py (vinerodrigues/sistema-loja_main, MIT license)
from tkinter import *
import tkinter as tk
from functools import partial
from datetime import datetime
import time
import main_menu
import dbmfinancas
class abrir_janela_detalhada(object):
def __init__(self, i):
self.carregar_scrollbars(i)
self.listar_financas(i)
def listar_financas(self, i):
        # Each record in dbmfinancas.financas maps a date key to a string of
        # fields separated by sentinel characters: '¹' ends the product name,
        # '²' ends the value (whose last character is a '+'/'-' sign) and
        # '¢' ends the comment; '§' appears to separate entries.
        a = dbmfinancas.financas.keys()
        for j in a:
            aux = ''
            nome = ''
            cont = 0
            sinal = ''
            x = j.decode()
            y = dbmfinancas.financas[x]
            y = y.decode()
for k in y:
                cont += 1
if k == '¹':
nome = aux
aux = ''
elif k == '²':
valor = aux
sinal = valor[len(valor)-1:len(valor):1 ]
valor = valor[0:len(valor) -1:1]
aux = ''
elif k == '¢':
comentario = aux
aux = ''
                    # The two branches differed only in color and label text;
                    # build the row once with those as variables.
                    if sinal == "+":
                        fg = 'green'
                        descricao = "Valor recebido no produto: "
                    else:
                        fg = 'red'
                        descricao = "Valor gasto no produto: "
                    self.frame_tabela_1 = Frame(self.frame_auxiliar_scrollbar, width = 10, height = 5, relief = RIDGE, borderwidth = '3', bg = 'black')
                    self.frame_tabela_1.pack(pady = 10)
                    self.label_teste = Label(self.frame_tabela_1, text="Nome do produto: " + nome, bg = 'black', fg = fg, font = ('Franklin Gothic Medium', 15), relief = RIDGE, borderwidth = '1', width = 51, height = 1)
                    self.label_teste.pack()
                    self.label_teste = Label(self.frame_tabela_1, text=descricao + valor + ",00 R$", bg = 'black', fg = fg, font = ('Franklin Gothic Medium', 15), relief = RIDGE, borderwidth = '1', width = 51, height = 1, justify = 'left', anchor = 'w')
                    self.label_teste.pack()
                    self.label_teste = Label(self.frame_tabela_1, text="Data: " + x, bg = 'black', fg = fg, font = ('Franklin Gothic Medium', 15), relief = RIDGE, borderwidth = '1', width = 51, height = 1, justify = 'left', anchor = 'w')
                    self.label_teste.pack()
                    self.label_teste = Label(self.frame_tabela_1, text="Comentarios: " + comentario, bg = 'black', fg = fg, font = ('Franklin Gothic Medium', 15), relief = RIDGE, borderwidth = '1', width = 51, height = 1, justify = 'left', anchor = 'w')
                    self.label_teste.pack()
                    aux = ''
                elif k == '§':
                    aux = ''
                else:
                    aux = aux + str(k)
def carregar_scrollbars(self, i):
self.my_canvas = Canvas(i, width = 480)
self.my_canvas.pack(side= LEFT, fill = BOTH)
self.my_scrollsbars = Scrollbar(i, orient = VERTICAL, command = self.my_canvas.yview)
self.my_scrollsbars.pack(side = LEFT, fill= Y)
self.my_canvas.configure( yscrollcommand = self.my_scrollsbars.set, bg = 'black')
self.my_canvas.bind('<Configure>', lambda e: self.my_canvas.configure(scrollregion = self.my_canvas.bbox("all") ))
self.frame_scrollbar = Frame(self.my_canvas)
self.my_canvas.create_window((0,0),window=self.frame_scrollbar, anchor = "nw")
        self.frame_auxiliar_scrollbar = Frame(self.frame_scrollbar, bg = 'black')  # special auxiliary frame for destroying and rebuilding the buttons
self.frame_auxiliar_scrollbar.pack()
class abrir_janela_detalhada_diaria(object):
def __init__(self, i, data):
self.data = data
self.carregar_scrollbars(i)
self.listar_financas(i)
def listar_financas(self, i):
        a = dbmfinancas.financas.keys()
        for j in a:
            aux = ''
            nome = ''
            cont = 0
            sinal = ''
            x = j.decode()
            # Only show the entries recorded on the selected date.
            if x == self.data:
                y = dbmfinancas.financas[x]
                y = y.decode()
for k in y:
##print(k)
cont += 1
##print("Imprimindo o K", k)
if k == '¹':
nome = aux
# #print("Nome: ",nome)
aux = ''
elif k == '²':
valor = aux
sinal = valor[len(valor)-1:len(valor):1 ]
valor = valor[0:len(valor) -1:1]
aux = ''
# #print("valor: ",valor)
elif k == '¢':
comentario = aux
# #print("Comentario: ",comentario)
aux = ''
if sinal == "+":
fg = 'green'
self.frame_tabela_1 = Frame(self.frame_auxiliar_scrollbar, width = 10, height = 5, relief = RIDGE, borderwidth = '3', bg = 'black')
self.frame_tabela_1.pack(pady = 10)
self.label_teste = Label(self.frame_tabela_1, text="Nome do produto: "+ nome, bg = 'black', fg = fg, font = ('Franklin Gothic Medium', 15), relief = RIDGE, borderwidth = '1', width = 51, height = 1 )
self.label_teste.pack()
self.label_teste = Label(self.frame_tabela_1, text="Valor recebido no produto: "+ valor+",00 R$" , bg = 'black', fg = fg, font = ('Franklin Gothic Medium', 15), relief = RIDGE, borderwidth = '1', width = 51, height = 1, justify = 'left', anchor = 'w', )
self.label_teste.pack()
self.label_teste = Label(self.frame_tabela_1, text="Data: "+x , bg = 'black', fg = fg, font = ('Franklin Gothic Medium', 15), relief = RIDGE, borderwidth = '1', width = 51, height = 1, justify = 'left', anchor = 'w', )
self.label_teste.pack()
self.label_teste = Label(self.frame_tabela_1, text="Comentarios: "+ comentario, bg = 'black', fg = fg, font = ('Franklin Gothic Medium', 15), relief = RIDGE, borderwidth = '1', width = 51, height = 1, justify = 'left', anchor = 'w', )
self.label_teste.pack()
aux = ''
else:
fg = 'red'
self.frame_tabela_1 = Frame(self.frame_auxiliar_scrollbar, width = 10, height = 5, relief = RIDGE, borderwidth = '3', bg = 'black')
self.frame_tabela_1.pack(pady = 10)
self.label_teste = Label(self.frame_tabela_1, text="Nome do produto: "+ nome, bg = 'black', fg = fg, font = ('Franklin Gothic Medium', 15), relief = RIDGE, borderwidth = '1', width = 51, height = 1 )
self.label_teste.pack()
self.label_teste = Label(self.frame_tabela_1, text="Valor gasto no produto: "+ valor+",00 R$" , bg = 'black', fg = fg, font = ('Franklin Gothic Medium', 15), relief = RIDGE, borderwidth = '1', width = 51, height = 1, justify = 'left', anchor = 'w', )
self.label_teste.pack()
self.label_teste = Label(self.frame_tabela_1, text="Data: "+x , bg = 'black', fg = fg, font = ('Franklin Gothic Medium', 15), relief = RIDGE, borderwidth = '1', width = 51, height = 1, justify = 'left', anchor = 'w', )
self.label_teste.pack()
self.label_teste = Label(self.frame_tabela_1, text="Comentarios: "+ comentario, bg = 'black', fg = fg, font = ('Franklin Gothic Medium', 15), relief = RIDGE, borderwidth = '1', width = 51, height = 1, justify = 'left', anchor = 'w', )
self.label_teste.pack()
aux = ''
elif k == '§':
aux = ''
pass
else:
aux = aux + str(k)
def carregar_scrollbars(self, i):
self.my_canvas = Canvas(i, width = 480)
self.my_canvas.pack(side= LEFT, fill = BOTH)
self.my_scrollsbars = Scrollbar(i, orient = VERTICAL, command = self.my_canvas.yview)
self.my_scrollsbars.pack(side = LEFT, fill= Y)
self.my_canvas.configure( yscrollcommand = self.my_scrollsbars.set, bg = 'black')
self.my_canvas.bind('<Configure>', lambda e: self.my_canvas.configure(scrollregion = self.my_canvas.bbox("all") ))
self.frame_scrollbar = Frame(self.my_canvas)
self.my_canvas.create_window((0,0),window=self.frame_scrollbar, anchor = "nw")
self.frame_auxiliar_scrollbar = Frame(self.frame_scrollbar, bg = 'black')#FRAME ESPECIAL AUXILIAR PARA A EXCLUSÃO E CONSTRUÇÃO DOS BOTÕES
self.frame_auxiliar_scrollbar.pack()
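# The field-delimited record format decoded by the listar_financas methods can
# be exercised in isolation.  A minimal sketch (the function name
# `parse_financas` is hypothetical, not part of the app): fields end at '¹'
# (name), '²' (value with a trailing +/- sign) and '¢' (comment); '§'
# separates records.

```python
def parse_financas(raw):
    """Parse a '¹'/'²'/'¢'/'§'-delimited string into (name, value, sign, comment) tuples."""
    registros = []
    aux = nome = valor = sinal = comentario = ''
    for k in raw:
        if k == '¹':          # end of the name field
            nome, aux = aux, ''
        elif k == '²':        # end of the value field; split off the trailing sign
            valor, sinal, aux = aux[:-1], aux[-1:], ''
        elif k == '¢':        # end of the comment field: record complete
            comentario, aux = aux, ''
            registros.append((nome, valor, sinal, comentario))
        elif k == '§':        # record separator
            aux = ''
        else:
            aux += k
    return registros
```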
class abrir_janela_detalhada_mensal(object):

    def __init__(self, i, data):
        self.data = data
        self.carregar_scrollbars(i)
        self.listar_financas(i)

    def listar_financas(self, i):
        a = dbmfinancas.financas.keys()
        aux = ''
        nome = ''
        cont = 0
        sinal = ''
        for j in a:
            aux = ''
            nome = ''
            cont = 0
            sinal = ''
            x = j.decode()
            do_mes = x[2::]  # drop the leading day component of the stored key
            mes_requerido = self.data[2::]  # keep only the month/year of the requested date
            if do_mes == mes_requerido:
                y = dbmfinancas.financas[x]
                y = y.decode()
                for k in y:
                    cont += 1
                    if k == '¹':  # end of the product-name field
                        nome = aux
                        aux = ''
                    elif k == '²':  # end of the value field; the last character is the +/- sign
                        valor = aux
                        sinal = valor[len(valor) - 1:len(valor):1]
                        valor = valor[0:len(valor) - 1:1]
                        aux = ''
                    elif k == '¢':  # end of the comment field: render this record
                        comentario = aux
                        aux = ''
                        if sinal == "+":
                            fg = 'green'
                            texto_valor = "Valor recebido no produto: "
                        else:
                            fg = 'red'
                            texto_valor = "Valor gasto no produto: "
                        self.frame_tabela_1 = Frame(self.frame_auxiliar_scrollbar, width = 10, height = 5, relief = RIDGE, borderwidth = '3', bg = 'black')
                        self.frame_tabela_1.pack(pady = 10)
                        self.label_teste = Label(self.frame_tabela_1, text="Nome do produto: " + nome, bg = 'black', fg = fg, font = ('Franklin Gothic Medium', 15), relief = RIDGE, borderwidth = '1', width = 51, height = 1)
                        self.label_teste.pack()
                        self.label_teste = Label(self.frame_tabela_1, text=texto_valor + valor + ",00 R$", bg = 'black', fg = fg, font = ('Franklin Gothic Medium', 15), relief = RIDGE, borderwidth = '1', width = 51, height = 1, justify = 'left', anchor = 'w')
                        self.label_teste.pack()
                        self.label_teste = Label(self.frame_tabela_1, text="Data: " + x, bg = 'black', fg = fg, font = ('Franklin Gothic Medium', 15), relief = RIDGE, borderwidth = '1', width = 51, height = 1, justify = 'left', anchor = 'w')
                        self.label_teste.pack()
                        self.label_teste = Label(self.frame_tabela_1, text="Comentarios: " + comentario, bg = 'black', fg = fg, font = ('Franklin Gothic Medium', 15), relief = RIDGE, borderwidth = '1', width = 51, height = 1, justify = 'left', anchor = 'w')
                        self.label_teste.pack()
                    elif k == '§':  # record separator
                        aux = ''
                    else:
                        aux = aux + str(k)

    def carregar_scrollbars(self, i):
        self.my_canvas = Canvas(i, width = 480)
        self.my_canvas.pack(side = LEFT, fill = BOTH)
        self.my_scrollsbars = Scrollbar(i, orient = VERTICAL, command = self.my_canvas.yview)
        self.my_scrollsbars.pack(side = LEFT, fill = Y)
        self.my_canvas.configure(yscrollcommand = self.my_scrollsbars.set, bg = 'black')
        self.my_canvas.bind('<Configure>', lambda e: self.my_canvas.configure(scrollregion = self.my_canvas.bbox("all")))
        self.frame_scrollbar = Frame(self.my_canvas)
        self.my_canvas.create_window((0, 0), window = self.frame_scrollbar, anchor = "nw")
        self.frame_auxiliar_scrollbar = Frame(self.frame_scrollbar, bg = 'black')  # special auxiliary frame for destroying and rebuilding the buttons
        self.frame_auxiliar_scrollbar.pack()
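# Sketch of the month filter used by the monthly listing: assuming the first
# two characters of a stored key encode the day, slicing them off leaves the
# month/year part that is compared against the requested period.  The helper
# name `mesmo_mes` is hypothetical, added only for illustration.

```python
def mesmo_mes(chave, data_requerida):
    """True when the key and the requested date share the same month/year suffix."""
    return chave[2:] == data_requerida[2:]
```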
def abrir_janela_detalhada_fc():
    janela_detalhada = tk.Tk()
    abrir_janela_detalhada(janela_detalhada)
    janela_detalhada.title("Finanças Completa")
    width = 500
    height = 800
    x = 850
    y = 0
    # apply the window size and position via geometry
    janela_detalhada.geometry(f'{width}x{height}+{x}+{y}')
    janela_detalhada.wm_iconbitmap('imagens/lou.ico')
    janela_detalhada.mainloop()


def abrir_janela_detalhada_fc_diaria(data):
    janela_detalhada_diaria = tk.Tk()
    abrir_janela_detalhada_diaria(janela_detalhada_diaria, data)
    janela_detalhada_diaria.title("Finanças Completa")
    width = 500
    height = 800
    x = 850
    y = 0
    # apply the window size and position via geometry
    janela_detalhada_diaria.geometry(f'{width}x{height}+{x}+{y}')
    janela_detalhada_diaria.wm_iconbitmap('imagens/lou.ico')
    janela_detalhada_diaria.mainloop()


def abrir_janela_detalhada_fc_mensal(data):
    janela_detalhada_mensal = tk.Tk()
    abrir_janela_detalhada_mensal(janela_detalhada_mensal, data)
    janela_detalhada_mensal.title("Finanças Mensal")
    width = 500
    height = 800
    x = 850
    y = 0
    # apply the window size and position via geometry
    janela_detalhada_mensal.geometry(f'{width}x{height}+{x}+{y}')
    janela_detalhada_mensal.wm_iconbitmap('imagens/lou.ico')
    janela_detalhada_mensal.mainloop() | 40.941645 | 260 | 0.626628 | 2,116 | 15,435 | 4.438563 | 0.074669 | 0.054621 | 0.07155 | 0.061329 | 0.953684 | 0.928237 | 0.92121 | 0.908433 | 0.908433 | 0.908433 | 0 | 0.026882 | 0.214318 | 15,435 | 377 | 261 | 40.941645 | 0.747093 | 0.094655 | 0 | 0.864151 | 0 | 0 | 0.107091 | 0.005189 | 0 | 0 | 0 | 0 | 0 | 1 | 0.045283 | false | 0.011321 | 0.026415 | 0 | 0.083019 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7
b621d87de118acdceea9139bf8b1f24158f13dd6 | 1,268 | py | Python | serial_scripts/perf/spirent_ixia/test_ixia_perf.py | vkolli/contrail-test-perf | db04b8924a2c330baabe3059788b149d957a7d67 | [
"Apache-2.0"
] | 1 | 2017-06-13T04:42:34.000Z | 2017-06-13T04:42:34.000Z | serial_scripts/perf/spirent_ixia/test_ixia_perf.py | vkolli/contrail-test-perf | db04b8924a2c330baabe3059788b149d957a7d67 | [
"Apache-2.0"
] | null | null | null | serial_scripts/perf/spirent_ixia/test_ixia_perf.py | vkolli/contrail-test-perf | db04b8924a2c330baabe3059788b149d957a7d67 | [
"Apache-2.0"
] | null | null | null | from base import PerfBaseIxia
import time
from tcutils.wrappers import preposttest_wrapper
import test
class PerfIxiaTest(PerfBaseIxia):
@classmethod
def setUpClass(cls):
super(PerfIxiaTest, cls).setUpClass()
@preposttest_wrapper
def test_ixia_pps_tcp_v4_2_3si(self):
return self.run_ixia_perf_tests_pps('THROUGHPUT','TCP','v4',2,3)
@preposttest_wrapper
def test_ixia_pps_tcp_v4_2_2si(self):
return self.run_ixia_perf_tests_pps('THROUGHPUT','TCP','v4',2,2)
@preposttest_wrapper
def test_ixia_pps_tcp_v4_2_4si(self):
return self.run_ixia_perf_tests_pps('THROUGHPUT','TCP','v4',2,4)
@preposttest_wrapper
def test_ixia_pps_tcp_v4_2_1si(self):
return self.run_ixia_perf_tests_pps('THROUGHPUT','TCP','v4',2,1)
@preposttest_wrapper
def test_ixia_pps_tcp_v4_4_1si(self):
return self.run_ixia_perf_tests_pps('THROUGHPUT','TCP','v4',4,1)
@preposttest_wrapper
def test_ixia_perf_tcp_vm_to_vm_compute_8_1si(self):
return self.run_ixia_perf_tests_pps('THROUGHPUT','TCP','v4',8,1)
@preposttest_wrapper
def test_ixia_perf_tcp_vm_to_vm_compute_4_2si(self):
return self.run_ixia_perf_tests_pps('THROUGHPUT','TCP','v4',4,2)
#end PerfIxiaTest
| 27.565217 | 72 | 0.737382 | 194 | 1,268 | 4.386598 | 0.206186 | 0.070505 | 0.056404 | 0.20564 | 0.760282 | 0.760282 | 0.759107 | 0.759107 | 0.715629 | 0.537015 | 0 | 0.037418 | 0.15694 | 1,268 | 45 | 73 | 28.177778 | 0.758653 | 0.012618 | 0 | 0.241379 | 0 | 0 | 0.08427 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.275862 | false | 0 | 0.137931 | 0.241379 | 0.689655 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 8 |
fcc753e39a50a1b57b06944e9ffd976a59eb9178 | 2,150 | py | Python | hidrocomp/eflow/exceptions.py | clebsonpy/HydroComp | 9d17fa533e8a15c760030df5246ff531ddb4cb22 | [
"MIT"
] | 4 | 2020-05-14T20:03:49.000Z | 2020-05-22T19:56:43.000Z | hidrocomp/eflow/exceptions.py | clebsonpy/HydroComp | 9d17fa533e8a15c760030df5246ff531ddb4cb22 | [
"MIT"
] | 19 | 2019-06-27T18:12:27.000Z | 2020-04-28T13:28:03.000Z | hidrocomp/eflow/exceptions.py | clebsonpy/HydroComp | 9d17fa533e8a15c760030df5246ff531ddb4cb22 | [
"MIT"
] | null | null | null | class NotStation(Exception):
def __init__(self, message, line=0):
self.message = message
self.line = line
def __str__(self):
return "FitError: {}".format(self.message) + (" the line {}!".format(self.line) if self.line > 0 else "!")
class FitNotExist(Exception):
def __init__(self, message, line=0):
self.message = message
self.line = line
def __str__(self):
return "FitNotExist: {}".format(self.message) + (" the line {}!".format(self.line) if self.line > 0 else "!")
class NotStatistic(Exception):
def __init__(self, message, line=0):
self.message = message
self.line = line
def __str__(self):
return "NotStatistic: {}".format(self.message) + (" the line {}!".format(self.line) if self.line > 0 else "!")
class NotRva(Exception):
def __init__(self, message, line=0):
self.message = message
self.line = line
def __str__(self):
return "NotRva: {}".format(self.message) + (" the line {}!".format(self.line) if self.line > 0 else "!")
class NotTypePandas(Exception):
def __init__(self, message, line=0):
self.message = message
self.line = line
def __str__(self):
return "NotRva: {}".format(self.message) + (" the line {}!".format(self.line) if self.line > 0 else "!")
class ObjectError(Exception):
def __int__(self, message, line=0):
self.message = message
self.line = line
def __str__(self):
return "ObjectErro: {}".format(self.message) + (" the line {}!".format(self.line) if self.line > 0 else "!")
class VariableError(Exception):
def __int__(self, message, line=0):
self.message = message
self.line = line
def __str__(self):
return "VariableError: {}".format(self.message) + (" the line {}!".format(self.line) if self.line > 0 else "!")
class StatusError(Exception):
def __int__(self, message, line=0):
self.message = message
self.line = line
def __str__(self):
return "StatusError: {}".format(self.message) + (" the line {}!".format(self.line) if self.line > 0 else "!") | 30.714286 | 119 | 0.610698 | 264 | 2,150 | 4.731061 | 0.094697 | 0.211369 | 0.096077 | 0.102482 | 0.874299 | 0.874299 | 0.874299 | 0.874299 | 0.874299 | 0.874299 | 0 | 0.009703 | 0.233023 | 2,150 | 70 | 120 | 30.714286 | 0.747726 | 0 | 0 | 0.708333 | 0 | 0 | 0.102743 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0 | 0.166667 | 0.666667 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 11 |
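# Usage sketch for the exception classes above; the class is re-defined
# minimally here so the example is self-contained (same shape as in the file).

```python
class NotStatistic(Exception):
    def __init__(self, message, line=0):
        self.message = message
        self.line = line

    def __str__(self):
        return "NotStatistic: {}".format(self.message) + (" the line {}!".format(self.line) if self.line > 0 else "!")

# A caller raises with a message and an optional line number; the formatted
# string only mentions the line when it is positive.
try:
    raise NotStatistic("magnitude not calculated", line=42)
except NotStatistic as exc:
    assert str(exc) == "NotStatistic: magnitude not calculated the line 42!"
```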
fcdc156aa21b00d9af8f7afb0be0e13df2634ce5 | 6,727 | py | Python | segmentation_models_pytorch/utils/metrics.py | Olimon660/segmentation_models.pytorch | 28f9d56cc5bb61b33432b6fd038d13161da9ea6b | [
"MIT"
] | null | null | null | segmentation_models_pytorch/utils/metrics.py | Olimon660/segmentation_models.pytorch | 28f9d56cc5bb61b33432b6fd038d13161da9ea6b | [
"MIT"
] | null | null | null | segmentation_models_pytorch/utils/metrics.py | Olimon660/segmentation_models.pytorch | 28f9d56cc5bb61b33432b6fd038d13161da9ea6b | [
"MIT"
] | null | null | null | from . import base
from . import functional as F
from .base import Activation
class IoU(base.Metric):
__name__ = 'iou_score'
def __init__(self, eps=1e-7, threshold=0.5, activation=None, ignore_channels=None, **kwargs):
super().__init__(**kwargs)
self.eps = eps
self.threshold = threshold
self.activation = Activation(activation)
self.ignore_channels = ignore_channels
def forward(self, y_pr, y_gt):
y_pr = self.activation(y_pr)
return F.iou(
y_pr, y_gt,
eps=self.eps,
threshold=self.threshold,
ignore_channels=self.ignore_channels,
)
class mIoU(base.Metric):
__name__ = 'miou_score'
def __init__(self, eps=1e-7, threshold=0.5, activation=None, ignore_channels=None, **kwargs):
super().__init__(**kwargs)
self.eps = eps
self.threshold = threshold
self.activation = Activation(activation)
self.ignore_channels = ignore_channels
def forward(self, y_pr, y_gt):
y_pr = self.activation(y_pr)
return F.miou(
y_pr, y_gt,
eps=self.eps,
threshold=self.threshold,
ignore_channels=self.ignore_channels,
)
class cemIoU(base.Metric):
__name__ = 'cemiou'
def __init__(self, eps=1e-7, threshold=0.5, activation=None, ignore_channels=None, **kwargs):
super().__init__(**kwargs)
self.eps = eps
self.threshold = threshold
self.activation = Activation(activation)
self.ignore_channels = ignore_channels
def forward(self, y_pr, y_gt):
y_pr = self.activation(y_pr)
return F.cemiou(
y_pr, y_gt,
eps=self.eps,
threshold=self.threshold,
ignore_channels=self.ignore_channels,
)
class classIoU1(base.Metric):
__name__ = 'class_iou1'
def __init__(self, eps=1e-7, threshold=0.5, activation=None, ignore_channels=None, **kwargs):
super().__init__(**kwargs)
self.eps = eps
self.threshold = threshold
self.activation = Activation(activation)
self.ignore_channels = ignore_channels
self.class_idx = 0
def forward(self, y_pr, y_gt):
y_pr = self.activation(y_pr)
return F.class_iou(
y_pr, y_gt,
eps=self.eps,
threshold=self.threshold,
ignore_channels=self.ignore_channels,
class_idx=self.class_idx
)
class classIoU2(base.Metric):
__name__ = 'class_iou2'
def __init__(self, eps=1e-7, threshold=0.5, activation=None, ignore_channels=None, **kwargs):
super().__init__(**kwargs)
self.eps = eps
self.threshold = threshold
self.activation = Activation(activation)
self.ignore_channels = ignore_channels
self.class_idx = 1
def forward(self, y_pr, y_gt):
y_pr = self.activation(y_pr)
return F.class_iou(
y_pr, y_gt,
eps=self.eps,
threshold=self.threshold,
ignore_channels=self.ignore_channels,
class_idx=self.class_idx
)
class ceIoU1(base.Metric):
__name__ = 'ce_iou1'
def __init__(self, eps=1e-7, threshold=0.5, activation=None, ignore_channels=None, **kwargs):
super().__init__(**kwargs)
self.eps = eps
self.threshold = threshold
self.activation = Activation(activation)
self.ignore_channels = ignore_channels
self.class_idx = 0
def forward(self, y_pr, y_gt):
y_pr = self.activation(y_pr)
return F.ce_iou(
y_pr, y_gt,
eps=self.eps,
threshold=self.threshold,
ignore_channels=self.ignore_channels,
class_idx=self.class_idx
)
class ceIoU2(base.Metric):
__name__ = 'ce_iou2'
def __init__(self, eps=1e-7, threshold=0.5, activation=None, ignore_channels=None, **kwargs):
super().__init__(**kwargs)
self.eps = eps
self.threshold = threshold
self.activation = Activation(activation)
self.ignore_channels = ignore_channels
self.class_idx = 1
def forward(self, y_pr, y_gt):
y_pr = self.activation(y_pr)
return F.ce_iou(
y_pr, y_gt,
eps=self.eps,
threshold=self.threshold,
ignore_channels=self.ignore_channels,
class_idx=self.class_idx
)
class Fscore(base.Metric):
def __init__(self, beta=1, eps=1e-7, threshold=0.5, activation=None, ignore_channels=None, **kwargs):
super().__init__(**kwargs)
self.eps = eps
self.beta = beta
self.threshold = threshold
self.activation = Activation(activation)
self.ignore_channels = ignore_channels
def forward(self, y_pr, y_gt):
y_pr = self.activation(y_pr)
return F.f_score(
y_pr, y_gt,
eps=self.eps,
beta=self.beta,
threshold=self.threshold,
ignore_channels=self.ignore_channels,
)
class Accuracy(base.Metric):
def __init__(self, threshold=0.5, activation=None, ignore_channels=None, **kwargs):
super().__init__(**kwargs)
self.threshold = threshold
self.activation = Activation(activation)
self.ignore_channels = ignore_channels
def forward(self, y_pr, y_gt):
y_pr = self.activation(y_pr)
return F.accuracy(
y_pr, y_gt,
threshold=self.threshold,
ignore_channels=self.ignore_channels,
)
class Recall(base.Metric):
def __init__(self, eps=1e-7, threshold=0.5, activation=None, ignore_channels=None, **kwargs):
super().__init__(**kwargs)
self.eps = eps
self.threshold = threshold
self.activation = Activation(activation)
self.ignore_channels = ignore_channels
def forward(self, y_pr, y_gt):
y_pr = self.activation(y_pr)
return F.recall(
y_pr, y_gt,
eps=self.eps,
threshold=self.threshold,
ignore_channels=self.ignore_channels,
)
class Precision(base.Metric):
def __init__(self, eps=1e-7, threshold=0.5, activation=None, ignore_channels=None, **kwargs):
super().__init__(**kwargs)
self.eps = eps
self.threshold = threshold
self.activation = Activation(activation)
self.ignore_channels = ignore_channels
def forward(self, y_pr, y_gt):
y_pr = self.activation(y_pr)
return F.precision(
y_pr, y_gt,
eps=self.eps,
threshold=self.threshold,
ignore_channels=self.ignore_channels,
)
| 29.504386 | 105 | 0.609187 | 820 | 6,727 | 4.680488 | 0.057317 | 0.200625 | 0.103179 | 0.034393 | 0.913236 | 0.902293 | 0.902293 | 0.898124 | 0.898124 | 0.867379 | 0 | 0.011466 | 0.286904 | 6,727 | 227 | 106 | 29.634361 | 0.788618 | 0 | 0 | 0.765027 | 0 | 0 | 0.008771 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.120219 | false | 0 | 0.016393 | 0 | 0.295082 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
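# The metric classes above all delegate to functions in `functional`.  For
# hard binary masks the IoU they wrap reduces to |A∩B| / (|A∪B| + eps); a
# pure-Python sketch of that formula (no torch, lists of 0/1 values, purely
# illustrative):

```python
def iou(pred, target, eps=1e-7):
    """Intersection over union of two binary masks given as 0/1 sequences."""
    intersection = sum(p * t for p, t in zip(pred, target))
    union = sum(p + t - p * t for p, t in zip(pred, target))
    return intersection / (union + eps)
```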
1e211fcd022de78c734161f4bfdd5f28880b7545 | 24,450 | py | Python | tests/test_api.py | otto-torino/paython | 2f364b9845180b774372fad3b255f97d77ba69d9 | [
"MIT"
] | 3 | 2020-10-20T10:51:04.000Z | 2021-01-21T22:41:03.000Z | tests/test_api.py | otto-torino/paython | 2f364b9845180b774372fad3b255f97d77ba69d9 | [
"MIT"
] | null | null | null | tests/test_api.py | otto-torino/paython | 2f364b9845180b774372fad3b255f97d77ba69d9 | [
"MIT"
] | null | null | null | import json
from pathlib import Path
import respx
from cryptography.hazmat.primitives import serialization
from httpx import Headers
from pytest import fixture, mark
import satispaython
from satispaython import AsyncSatispayClient
import pytest
@fixture(scope='module')
def public_key(rsa_key):
    key_encoding = serialization.Encoding.PEM
    key_format = serialization.PublicFormat.SubjectPublicKeyInfo
    public_pem = rsa_key.public_key().public_bytes(key_encoding, key_format)
    return public_pem.decode()


@fixture(scope='module')
def key_id():
    path = Path(__file__).resolve().parent / 'data/key_id.txt'
    with open(path, 'r') as file:
        return file.read().strip()


@fixture(scope='module')
def payment_id():
    return '2936affa-ab4c-4daa-9bec-7cafbce4caa1'


@fixture()
def test_authentication_signature():
    path = Path(__file__).resolve().parent / 'data/test_authentication_signature.txt'
    with open(path, 'r') as file:
        return file.read().strip()


@fixture()
def create_payment_staging_signature():
    path = Path(__file__).resolve().parent / 'data/create_payment_staging_signature.txt'
    with open(path, 'r') as file:
        return file.read().strip()


@fixture()
def create_payment_production_signature():
    path = Path(__file__).resolve().parent / 'data/create_payment_production_signature.txt'
    with open(path, 'r') as file:
        return file.read().strip()


@fixture()
def create_payment_staging_no_optionals_signature():
    path = Path(__file__).resolve().parent / 'data/create_payment_staging_no_optionals_signature.txt'
    with open(path, 'r') as file:
        return file.read().strip()


@fixture()
def create_payment_production_no_optionals_signature():
    path = Path(__file__).resolve().parent / 'data/create_payment_production_no_optionals_signature.txt'
    with open(path, 'r') as file:
        return file.read().strip()


@fixture()
def get_payment_details_staging_signature():
    path = Path(__file__).resolve().parent / 'data/get_payment_details_staging_signature.txt'
    with open(path, 'r') as file:
        return file.read().strip()


@fixture()
def get_payment_details_production_signature():
    path = Path(__file__).resolve().parent / 'data/get_payment_details_production_signature.txt'
    with open(path, 'r') as file:
        return file.read().strip()
class TestObtainKeyID:

    @respx.mock
    def test_staging(self, rsa_key, public_key):
        route = respx.post('https://staging.authservices.satispay.com/g_business/v1/authentication_keys')
        satispaython.obtain_key_id('623ECX', rsa_key, True)
        assert route.called
        assert route.call_count == 1
        request = route.calls.last.request
        assert request.method == 'POST'
        assert json.loads(request.content.decode()) == {'public_key': public_key, 'token': '623ECX'}
        assert request.headers['Accept'] == 'application/json'
        assert request.headers['Content-Type'] == 'application/json'

    @respx.mock
    def test_production(self, rsa_key, public_key):
        route = respx.post('https://authservices.satispay.com/g_business/v1/authentication_keys')
        satispaython.obtain_key_id('623ECX', rsa_key)
        assert route.called
        assert route.call_count == 1
        request = route.calls.last.request
        assert request.method == 'POST'
        assert json.loads(request.content.decode()) == {'public_key': public_key, 'token': '623ECX'}
        assert request.headers['Accept'] == 'application/json'
        assert request.headers['Content-Type'] == 'application/json'
class TestTestAuthentication:

    @respx.mock
    @mark.freeze_time('Mon, 18 Mar 2019 15:10:24 +0000')
    def test_test_authentication(self, key_id, rsa_key, test_authentication_signature):
        route = respx.post('https://staging.authservices.satispay.com/wally-services/protocol/tests/signature')
        satispaython.test_authentication(key_id, rsa_key)
        assert route.called
        assert route.call_count == 1
        request = route.calls.last.request
        assert request.method == 'POST'
        assert request.content == b''
        assert request.headers['Accept'] == 'application/json'
        assert request.headers['Content-Type'] == 'application/json'
        assert request.headers['Host'] == 'staging.authservices.satispay.com'
        assert request.headers['Date'] == 'Mon, 18 Mar 2019 15:10:24 +0000'
        assert request.headers['Digest'] == 'SHA-256=47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU='
        assert request.headers['Authorization'] == f'Signature keyId="{key_id}", ' \
                                                   f'algorithm="rsa-sha256", ' \
                                                   f'headers="(request-target) host date digest", ' \
                                                   f'signature="{test_authentication_signature}"'
class TestCreatePaymet:

    @respx.mock
    @mark.freeze_time('Mon, 18 Mar 2019 15:10:24 +0000')
    def test_staging(self, key_id, rsa_key, create_payment_staging_signature):
        route = respx.post('https://staging.authservices.satispay.com/g_business/v1/payments')
        body_params = {
            'callback_url': 'https://test.test?payment_id={uuid}',
            'expiration_date': '2019-03-18T16:10:24.000Z',
            'external_code': 'test_code',
            'metadata': {'metadata': 'test'}
        }
        headers = Headers({'Idempotency-Key': 'test_idempotency_key'})
        satispaython.create_payment(key_id, rsa_key, 100, 'EUR', body_params, headers, True)
        assert route.called
        assert route.call_count == 1
        request = route.calls.last.request
        assert request.method == 'POST'
        assert json.loads(request.content.decode()) == {
            'flow': 'MATCH_CODE',
            'amount_unit': 100,
            'currency': 'EUR',
            'callback_url': 'https://test.test?payment_id={uuid}',
            'expiration_date': '2019-03-18T16:10:24.000Z',
            'external_code': 'test_code',
            'metadata': {'metadata': 'test'}
        }
        assert request.headers['Idempotency-Key'] == 'test_idempotency_key'
        assert request.headers['Accept'] == 'application/json'
        assert request.headers['Content-Type'] == 'application/json'
        assert request.headers['Host'] == 'staging.authservices.satispay.com'
        assert request.headers['Date'] == 'Mon, 18 Mar 2019 15:10:24 +0000'
        assert request.headers['Digest'] == 'SHA-256=dOjZtX6Has9wFZQDmriLhIfThHD11nuxFZNIjp7FwR0='
        assert request.headers['Authorization'] == f'Signature keyId="{key_id}", ' \
                                                   f'algorithm="rsa-sha256", ' \
                                                   f'headers="(request-target) host date digest", ' \
                                                   f'signature="{create_payment_staging_signature}"'

    @respx.mock
    @mark.freeze_time('Mon, 18 Mar 2019 15:10:24 +0000')
    def test_production(self, key_id, rsa_key, create_payment_production_signature):
        route = respx.post('https://authservices.satispay.com/g_business/v1/payments')
        body_params = {
            'callback_url': 'https://test.test?payment_id={uuid}',
            'expiration_date': '2019-03-18T16:10:24.000Z',
            'external_code': 'test_code',
            'metadata': {'metadata': 'test'}
        }
        headers = Headers({'Idempotency-Key': 'test_idempotency_key'})
        satispaython.create_payment(key_id, rsa_key, 100, 'EUR', body_params, headers)
        assert route.called
        assert route.call_count == 1
        request = route.calls.last.request
        assert request.method == 'POST'
        assert json.loads(request.content.decode()) == {
            'flow': 'MATCH_CODE',
            'amount_unit': 100,
            'currency': 'EUR',
            'callback_url': 'https://test.test?payment_id={uuid}',
            'expiration_date': '2019-03-18T16:10:24.000Z',
            'external_code': 'test_code',
            'metadata': {'metadata': 'test'}
        }
        assert request.headers['Idempotency-Key'] == 'test_idempotency_key'
        assert request.headers['Accept'] == 'application/json'
        assert request.headers['Content-Type'] == 'application/json'
        assert request.headers['Host'] == 'authservices.satispay.com'
        assert request.headers['Date'] == 'Mon, 18 Mar 2019 15:10:24 +0000'
        assert request.headers['Digest'] == 'SHA-256=dOjZtX6Has9wFZQDmriLhIfThHD11nuxFZNIjp7FwR0='
        assert request.headers['Authorization'] == f'Signature keyId="{key_id}", ' \
                                                   f'algorithm="rsa-sha256", ' \
                                                   f'headers="(request-target) host date digest", ' \
                                                   f'signature="{create_payment_production_signature}"'

    @pytest.mark.asyncio
    @respx.mock
    @mark.freeze_time('Mon, 18 Mar 2019 15:10:24 +0000')
    async def test_staging_async(self, key_id, rsa_key, create_payment_staging_signature):
        route = respx.post('https://staging.authservices.satispay.com/g_business/v1/payments')
        body_params = {
            'callback_url': 'https://test.test?payment_id={uuid}',
            'expiration_date': '2019-03-18T16:10:24.000Z',
            'external_code': 'test_code',
            'metadata': {'metadata': 'test'}
        }
        headers = {'Idempotency-Key': 'test_idempotency_key'}
        async with AsyncSatispayClient(key_id, rsa_key, True) as client:
            await client.create_payment(100, 'EUR', body_params, headers)
        assert route.called
        assert route.call_count == 1
        request = route.calls.last.request
        assert request.method == 'POST'
        assert json.loads(request.content.decode()) == {
            'flow': 'MATCH_CODE',
            'amount_unit': 100,
            'currency': 'EUR',
            'callback_url': 'https://test.test?payment_id={uuid}',
            'expiration_date': '2019-03-18T16:10:24.000Z',
            'external_code': 'test_code',
            'metadata': {'metadata': 'test'}
        }
        assert request.headers['Idempotency-Key'] == 'test_idempotency_key'
        assert request.headers['Accept'] == 'application/json'
        assert request.headers['Content-Type'] == 'application/json'
        assert request.headers['Host'] == 'staging.authservices.satispay.com'
        assert request.headers['Date'] == 'Mon, 18 Mar 2019 15:10:24 +0000'
        assert request.headers['Digest'] == 'SHA-256=dOjZtX6Has9wFZQDmriLhIfThHD11nuxFZNIjp7FwR0='
        assert request.headers['Authorization'] == f'Signature keyId="{key_id}", ' \
                                                   f'algorithm="rsa-sha256", ' \
                                                   f'headers="(request-target) host date digest", ' \
                                                   f'signature="{create_payment_staging_signature}"'

    @pytest.mark.asyncio
    @respx.mock
    @mark.freeze_time('Mon, 18 Mar 2019 15:10:24 +0000')
    async def test_production_async(self, key_id, rsa_key, create_payment_production_signature):
        route = respx.post('https://authservices.satispay.com/g_business/v1/payments')
        body_params = {
            'callback_url': 'https://test.test?payment_id={uuid}',
            'expiration_date': '2019-03-18T16:10:24.000Z',
            'external_code': 'test_code',
            'metadata': {'metadata': 'test'}
        }
        headers = {'Idempotency-Key': 'test_idempotency_key'}
        async with AsyncSatispayClient(key_id, rsa_key) as client:
            await client.create_payment(100, 'EUR', body_params, headers)
        assert route.called
        assert route.call_count == 1
        request = route.calls.last.request
        assert request.method == 'POST'
        assert json.loads(request.content.decode()) == {
            'flow': 'MATCH_CODE',
            'amount_unit': 100,
            'currency': 'EUR',
            'callback_url': 'https://test.test?payment_id={uuid}',
            'expiration_date': '2019-03-18T16:10:24.000Z',
            'external_code': 'test_code',
            'metadata': {'metadata': 'test'}
        }
        assert request.headers['Idempotency-Key'] == 'test_idempotency_key'
        assert request.headers['Accept'] == 'application/json'
        assert request.headers['Content-Type'] == 'application/json'
        assert request.headers['Host'] == 'authservices.satispay.com'
        assert request.headers['Date'] == 'Mon, 18 Mar 2019 15:10:24 +0000'
        assert request.headers['Digest'] == 'SHA-256=dOjZtX6Has9wFZQDmriLhIfThHD11nuxFZNIjp7FwR0='
        assert request.headers['Authorization'] == f'Signature keyId="{key_id}", ' \
                                                   f'algorithm="rsa-sha256", ' \
                                                   f'headers="(request-target) host date digest", ' \
                                                   f'signature="{create_payment_production_signature}"'
class TestWithNoHeadersAndBody:
@respx.mock
@mark.freeze_time('Mon, 18 Mar 2019 15:10:24 +0000')
def test_staging(self, key_id, rsa_key, create_payment_staging_no_optionals_signature):
route = respx.post('https://staging.authservices.satispay.com/g_business/v1/payments')
satispaython.create_payment(key_id, rsa_key, 100, 'EUR', staging=True)
assert route.called
assert route.call_count == 1
request = route.calls.last.request
assert request.method == 'POST'
assert json.loads(request.content.decode()) == {
'flow': 'MATCH_CODE',
'amount_unit': 100,
'currency': 'EUR',
}
assert request.headers['Accept'] == 'application/json'
assert request.headers['Content-Type'] == 'application/json'
assert request.headers['Host'] == 'staging.authservices.satispay.com'
assert request.headers['Date'] == 'Mon, 18 Mar 2019 15:10:24 +0000'
assert request.headers['Digest'] == 'SHA-256=a5UF/fcWo+KdzPGADk9XDV/CwKsGyrNLNKGind53oVM='
assert request.headers['Authorization'] == f'Signature keyId="{key_id}", ' \
f'algorithm="rsa-sha256", ' \
f'headers="(request-target) host date digest", ' \
f'signature="{create_payment_staging_no_optionals_signature}"'
@respx.mock
@mark.freeze_time('Mon, 18 Mar 2019 15:10:24 +0000')
def test_production(self, key_id, rsa_key, create_payment_production_no_optionals_signature):
route = respx.post('https://authservices.satispay.com/g_business/v1/payments')
satispaython.create_payment(key_id, rsa_key, 100, 'EUR')
assert route.called
assert route.call_count == 1
request = route.calls.last.request
assert request.method == 'POST'
assert json.loads(request.content.decode()) == {
'flow': 'MATCH_CODE',
'amount_unit': 100,
'currency': 'EUR',
}
assert request.headers['Accept'] == 'application/json'
assert request.headers['Content-Type'] == 'application/json'
assert request.headers['Host'] == 'authservices.satispay.com'
assert request.headers['Date'] == 'Mon, 18 Mar 2019 15:10:24 +0000'
assert request.headers['Digest'] == 'SHA-256=a5UF/fcWo+KdzPGADk9XDV/CwKsGyrNLNKGind53oVM='
assert request.headers['Authorization'] == f'Signature keyId="{key_id}", ' \
f'algorithm="rsa-sha256", ' \
f'headers="(request-target) host date digest", ' \
f'signature="{create_payment_production_no_optionals_signature}"'
@pytest.mark.asyncio
@respx.mock
@mark.freeze_time('Mon, 18 Mar 2019 15:10:24 +0000')
async def test_staging_async(self, key_id, rsa_key, create_payment_staging_no_optionals_signature):
route = respx.post('https://staging.authservices.satispay.com/g_business/v1/payments')
async with AsyncSatispayClient(key_id, rsa_key, True) as client:
await client.create_payment(100, 'EUR')
assert route.called
assert route.call_count == 1
request = route.calls.last.request
assert request.method == 'POST'
assert json.loads(request.content.decode()) == {
'flow': 'MATCH_CODE',
'amount_unit': 100,
'currency': 'EUR',
}
assert request.headers['Accept'] == 'application/json'
assert request.headers['Content-Type'] == 'application/json'
assert request.headers['Host'] == 'staging.authservices.satispay.com'
assert request.headers['Date'] == 'Mon, 18 Mar 2019 15:10:24 +0000'
assert request.headers['Digest'] == 'SHA-256=a5UF/fcWo+KdzPGADk9XDV/CwKsGyrNLNKGind53oVM='
assert request.headers['Authorization'] == f'Signature keyId="{key_id}", ' \
f'algorithm="rsa-sha256", ' \
f'headers="(request-target) host date digest", ' \
f'signature="{create_payment_staging_no_optionals_signature}"'
@pytest.mark.asyncio
@respx.mock
@mark.freeze_time('Mon, 18 Mar 2019 15:10:24 +0000')
async def test_production_async(self, key_id, rsa_key, create_payment_production_no_optionals_signature):
route = respx.post('https://authservices.satispay.com/g_business/v1/payments')
async with AsyncSatispayClient(key_id, rsa_key) as client:
await client.create_payment(100, 'EUR')
assert route.called
assert route.call_count == 1
request = route.calls.last.request
assert request.method == 'POST'
assert json.loads(request.content.decode()) == {
'flow': 'MATCH_CODE',
'amount_unit': 100,
'currency': 'EUR',
}
assert request.headers['Accept'] == 'application/json'
assert request.headers['Content-Type'] == 'application/json'
assert request.headers['Host'] == 'authservices.satispay.com'
assert request.headers['Date'] == 'Mon, 18 Mar 2019 15:10:24 +0000'
assert request.headers['Digest'] == 'SHA-256=a5UF/fcWo+KdzPGADk9XDV/CwKsGyrNLNKGind53oVM='
assert request.headers['Authorization'] == f'Signature keyId="{key_id}", ' \
f'algorithm="rsa-sha256", ' \
f'headers="(request-target) host date digest", ' \
f'signature="{create_payment_production_no_optionals_signature}"'
class TestGetPaymentDetails:
@respx.mock
@mark.freeze_time('Mon, 18 Mar 2019 15:10:24 +0000')
def test_staging(self, key_id, rsa_key, payment_id, get_payment_details_staging_signature):
route = respx.get(f'https://staging.authservices.satispay.com/g_business/v1/payments/{payment_id}')
satispaython.get_payment_details(key_id, rsa_key, payment_id, staging=True)
assert route.called
assert route.call_count == 1
request = route.calls.last.request
assert request.method == 'GET'
assert request.content is b''
assert request.headers['Accept'] == 'application/json'
assert request.headers['Host'] == 'staging.authservices.satispay.com'
assert request.headers['Date'] == 'Mon, 18 Mar 2019 15:10:24 +0000'
assert request.headers['Digest'] == 'SHA-256=47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU='
assert request.headers['Authorization'] == f'Signature keyId="{key_id}", ' \
f'algorithm="rsa-sha256", ' \
f'headers="(request-target) host date digest", ' \
f'signature="{get_payment_details_staging_signature}"'
@respx.mock
@mark.freeze_time('Mon, 18 Mar 2019 15:10:24 +0000')
def test_production(self, key_id, rsa_key, payment_id, get_payment_details_production_signature):
route = respx.get(f'https://authservices.satispay.com/g_business/v1/payments/{payment_id}')
satispaython.get_payment_details(key_id, rsa_key, payment_id)
assert route.called
assert route.call_count == 1
request = route.calls.last.request
assert request.method == 'GET'
assert request.content is b''
assert request.headers['Accept'] == 'application/json'
assert request.headers['Host'] == 'authservices.satispay.com'
assert request.headers['Date'] == 'Mon, 18 Mar 2019 15:10:24 +0000'
assert request.headers['Digest'] == 'SHA-256=47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU='
assert request.headers['Authorization'] == f'Signature keyId="{key_id}", ' \
f'algorithm="rsa-sha256", ' \
f'headers="(request-target) host date digest", ' \
f'signature="{get_payment_details_production_signature}"'
@pytest.mark.asyncio
@respx.mock
@mark.freeze_time('Mon, 18 Mar 2019 15:10:24 +0000')
async def test_staging_async(self, key_id, rsa_key, payment_id, get_payment_details_staging_signature):
route = respx.get(f'https://staging.authservices.satispay.com/g_business/v1/payments/{payment_id}')
async with AsyncSatispayClient(key_id, rsa_key, True) as client:
await client.get_payment_details(payment_id)
assert route.called
assert route.call_count == 1
request = route.calls.last.request
assert request.method == 'GET'
assert request.content is b''
assert request.headers['Accept'] == 'application/json'
assert request.headers['Host'] == 'staging.authservices.satispay.com'
assert request.headers['Date'] == 'Mon, 18 Mar 2019 15:10:24 +0000'
assert request.headers['Digest'] == 'SHA-256=47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU='
assert request.headers['Authorization'] == f'Signature keyId="{key_id}", ' \
f'algorithm="rsa-sha256", ' \
f'headers="(request-target) host date digest", ' \
f'signature="{get_payment_details_staging_signature}"'
@pytest.mark.asyncio
@respx.mock
@mark.freeze_time('Mon, 18 Mar 2019 15:10:24 +0000')
async def test_production_async(self, key_id, rsa_key, payment_id, get_payment_details_production_signature):
route = respx.get(f'https://authservices.satispay.com/g_business/v1/payments/{payment_id}')
async with AsyncSatispayClient(key_id, rsa_key) as client:
await client.get_payment_details(payment_id)
assert route.called
assert route.call_count == 1
request = route.calls.last.request
assert request.method == 'GET'
assert request.content is b''
assert request.headers['Accept'] == 'application/json'
assert request.headers['Host'] == 'authservices.satispay.com'
assert request.headers['Date'] == 'Mon, 18 Mar 2019 15:10:24 +0000'
assert request.headers['Digest'] == 'SHA-256=47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU='
assert request.headers['Authorization'] == f'Signature keyId="{key_id}", ' \
f'algorithm="rsa-sha256", ' \
f'headers="(request-target) host date digest", ' \
f'signature="{get_payment_details_production_signature}"'
| 52.24359 | 120 | 0.600491 | 2,633 | 24,450 | 5.404482 | 0.058868 | 0.093183 | 0.115249 | 0.021926 | 0.939775 | 0.938721 | 0.936683 | 0.934013 | 0.932818 | 0.926072 | 0 | 0.044895 | 0.272106 | 24,450 | 467 | 121 | 52.35546 | 0.754678 | 0 | 0 | 0.816667 | 0 | 0 | 0.324131 | 0.11955 | 0 | 0 | 0 | 0 | 0.338095 | 1 | 0.045238 | false | 0 | 0.021429 | 0.002381 | 0.102381 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1e568ecc3be36534a3e7568d6b2283cc463e82c2 | 1,024 | py | Python | scripts/plot_results.py | heheqianqian/DeepQuaternionNetworks | 199d261f080896c9408e771f980b8a98e159f847 | [
"MIT"
] | null | null | null | scripts/plot_results.py | heheqianqian/DeepQuaternionNetworks | 199d261f080896c9408e771f980b8a98e159f847 | [
"MIT"
] | 1 | 2020-01-03T17:03:45.000Z | 2020-01-04T00:02:46.000Z | scripts/plot_results.py | heheqianqian/DeepQuaternionNetworks | 199d261f080896c9408e771f980b8a98e159f847 | [
"MIT"
] | null | null | null | import numpy as np
import matplotlib.pyplot as plt

# Training losses for the real-, complex- and quaternion-valued networks
r = np.genfromtxt('C:/Users/Administrator/PycharmProjects/DeepQuaternionNetworks/scripts/real_seg_train_loss.txt')
c = np.genfromtxt('C:/Users/Administrator/PycharmProjects/DeepQuaternionNetworks/scripts/complex_seg_train_loss.txt')
q = np.genfromtxt('C:/Users/Administrator/PycharmProjects/DeepQuaternionNetworks/scripts/quaternion_seg_train_loss.txt')
plt.plot(r, c='g')
plt.plot(c, c='b')
plt.plot(q, c='r')

# Validation losses, drawn dashed in the same colours
r = np.genfromtxt('C:/Users/Administrator/PycharmProjects/DeepQuaternionNetworks/scripts/real_seg_val_loss.txt')
c = np.genfromtxt('C:/Users/Administrator/PycharmProjects/DeepQuaternionNetworks/scripts/complex_seg_val_loss.txt')
q = np.genfromtxt('C:/Users/Administrator/PycharmProjects/DeepQuaternionNetworks/scripts/quaternion_seg_val_loss.txt')

# Report the best (minimum) validation loss for each model
print(min(r))
print(min(c))
print(min(q))
plt.plot(r, '--', c='g')
plt.plot(c, '--', c='b')
plt.plot(q, '--', c='r')

plt.title("Kitti Segmentation Loss Plot")
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.show() | 39.384615 | 120 | 0.77832 | 146 | 1,024 | 5.335616 | 0.246575 | 0.092426 | 0.100128 | 0.138639 | 0.775353 | 0.775353 | 0.775353 | 0.775353 | 0.775353 | 0.775353 | 0 | 0 | 0.053711 | 1,024 | 26 | 121 | 39.384615 | 0.803922 | 0 | 0 | 0 | 0 | 0 | 0.604878 | 0.556098 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.095238 | 0 | 0.095238 | 0.142857 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1ea0fb8199ca12d0b519989ce49739ed81bd7baf | 138 | py | Python | edsnlp/pipelines/misc/measures/__init__.py | MohamedBsh/edsnlp | a58b31d62e14b029ed390364a7e15d99c1decd16 | [
"BSD-3-Clause"
] | 32 | 2022-03-08T16:45:09.000Z | 2022-03-31T15:21:00.000Z | edsnlp/pipelines/misc/measures/__init__.py | MohamedBsh/edsnlp | a58b31d62e14b029ed390364a7e15d99c1decd16 | [
"BSD-3-Clause"
] | 19 | 2022-03-09T11:44:43.000Z | 2022-03-31T14:32:06.000Z | edsnlp/pipelines/misc/measures/__init__.py | MohamedBsh/edsnlp | a58b31d62e14b029ed390364a7e15d99c1decd16 | [
"BSD-3-Clause"
] | 1 | 2022-03-11T16:14:21.000Z | 2022-03-11T16:14:21.000Z | from edsnlp.pipelines.misc.measures.measures import Measures
from edsnlp.pipelines.misc.measures.patterns import *
from . import factory
| 27.6 | 60 | 0.833333 | 18 | 138 | 6.388889 | 0.444444 | 0.173913 | 0.330435 | 0.4 | 0.53913 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.094203 | 138 | 4 | 61 | 34.5 | 0.92 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
1eb2a98fc5930a2220de5d192a404b7b640adea8 | 12,581 | py | Python | tests/v3_validation/cattlevalidationtest/core/test_storage_nfs_driver.py | bmdepesa/validation-tests | 23e7ab95ce76744483a0657f790b42a88a93436d | [
"Apache-2.0"
] | 7 | 2015-11-18T17:43:08.000Z | 2021-07-14T09:48:18.000Z | tests/v3_validation/cattlevalidationtest/core/test_storage_nfs_driver.py | bmdepesa/validation-tests | 23e7ab95ce76744483a0657f790b42a88a93436d | [
"Apache-2.0"
] | 175 | 2015-07-09T18:41:24.000Z | 2021-06-10T21:23:27.000Z | tests/v3_validation/cattlevalidationtest/core/test_storage_nfs_driver.py | bmdepesa/validation-tests | 23e7ab95ce76744483a0657f790b42a88a93436d | [
"Apache-2.0"
] | 25 | 2015-08-08T04:54:24.000Z | 2021-05-25T21:10:37.000Z | from common_fixtures import * # NOQA
from cattle import ApiError

if_test_rancher_nfs = pytest.mark.skipif(
    not os.environ.get('TEST_NFS'),
    reason='Rancher NFS test not enabled')

volume_driver = "rancher-nfs"


@if_test_rancher_nfs
def test_nfs_services_with_shared_vol(client):
    assert check_for_nfs_driver(client)
    services_with_shared_vol(client, volume_driver=volume_driver)


@if_test_rancher_nfs
def test_nfs_services_with_shared_vol_scaleup(client):
    assert check_for_nfs_driver(client)
    services_with_shared_vol_scaleup(client, volume_driver=volume_driver)


@if_test_rancher_nfs
def test_nfs_multiple_services_with_same_shared_vol(client):
    assert check_for_nfs_driver(client)
    multiple_services_with_same_shared_vol(client, volume_driver=volume_driver)


@if_test_rancher_nfs
def test_nfs_delete_volume(client):
    assert check_for_nfs_driver(client)
    delete_volume_after_service_deletes(client, volume_driver=volume_driver)


def services_with_shared_vol(client, volume_driver):
    # Create an environment with a service that has a shared volume from
    # volume_driver
    volume_name = random_str()
    path = "/myvol"
    port = "1000"
    launch_config = {"image": SSH_IMAGE_UUID,
                     "volumeDriver": volume_driver,
                     "dataVolumes": [volume_name + ":" + path],
                     "ports": [port + ":22/tcp"],
                     "labels":
                         {"io.rancher.scheduler.affinity:container_label_ne":
                             "io.rancher.stack_service.name" +
                             "=${stack_name}/${service_name}"}
                     }
    service, env = create_env_and_svc(client, launch_config, 2)
    service = service.activate()
    service = client.wait_success(service, 120)
    assert service.state == "active"
    container_list = get_service_container_list(client, service)
    assert len(container_list) == service.scale
    assert container_list[0].dockerHostIp != container_list[1].dockerHostIp
    volumes = client.list_volume(removed_null=True,
                                 name=volume_name)
    print(volumes)
    assert len(volumes) == 1
    assert volumes[0].state == "active"
    filename = "test"
    content = random_str()
    write_data(container_list[0], int(port), path, filename, content)
    file_content = \
        read_data(container_list[1], int(port), path, filename)
    assert file_content == content
    delete_all(client, [env])
    delete_volume(client, volumes[0])


def services_with_shared_vol_scaleup(client, volume_driver):
    # Create an environment with a service that has a shared volume from
    # volume_driver
    volume_name = random_str()
    path = "/myvol"
    port = "1001"
    launch_config = {"image": SSH_IMAGE_UUID,
                     "volumeDriver": volume_driver,
                     "dataVolumes": [volume_name + ":" + path],
                     "ports": [port + ":22/tcp"],
                     "labels":
                         {"io.rancher.scheduler.affinity:container_label_ne":
                             "io.rancher.stack_service.name" +
                             "=${stack_name}/${service_name}"}
                     }
    service, env = create_env_and_svc(client, launch_config, 2)
    service = service.activate()
    service = client.wait_success(service, 120)
    assert service.state == "active"
    container_list = get_service_container_list(client, service)
    assert len(container_list) == service.scale
    volumes = client.list_volume(removed_null=True,
                                 name=volume_name)
    print(volumes)
    assert len(volumes) == 1
    assert volumes[0].state == "active"
    assert container_list[0].dockerHostIp != container_list[1].dockerHostIp
    filename = "test"
    content = random_str()
    write_data(container_list[0], int(port), path, filename, content)
    file_content = \
        read_data(container_list[1], int(port), path, filename)
    assert file_content == content

    # Scale the service
    final_scale = 3
    service = client.update(service, name=service.name, scale=final_scale)
    service = client.wait_success(service, 120)
    assert service.state == "active"
    assert service.scale == final_scale

    # After scaling up, make sure all containers share the same volume by
    # checking that every container can read the contents of the file
    # that was created before the service was scaled
    container_list = get_service_container_list(client, service)
    assert len(container_list) == service.scale
    for container in container_list:
        file_content = \
            read_data(container, int(port), path, filename)
        assert file_content == content
    filename = "test1"
    content = random_str()
    write_data(container_list[2], int(port), path, filename, content)
    for container in container_list:
        file_content = \
            read_data(container, int(port), path, filename)
        assert file_content == content
    delete_all(client, [env])
    delete_volume(client, volumes[0])


def multiple_services_with_same_shared_vol(client, volume_driver):
    # Create an environment with a service that has a shared volume from
    # volume_driver
    volume_name = random_str()
    path = "/myvol"
    port = "1002"
    launch_config = {"image": SSH_IMAGE_UUID,
                     "volumeDriver": volume_driver,
                     "dataVolumes": [volume_name + ":" + path],
                     "ports": [port + ":22/tcp"],
                     "labels":
                         {"io.rancher.scheduler.affinity:container_label_ne":
                             "io.rancher.stack_service.name" +
                             "=${stack_name}/${service_name}"}
                     }
    service, env = create_env_and_svc(client, launch_config, 2)
    service = service.activate()
    service = client.wait_success(service, 120)
    assert service.state == "active"
    container_list = get_service_container_list(client, service)
    assert len(container_list) == service.scale
    volumes = client.list_volume(removed_null=True,
                                 name=volume_name)
    print(volumes)
    assert len(volumes) == 1
    assert volumes[0].state == "active"
    filename = "test"
    content = random_str()
    write_data(container_list[0], int(port), path, filename, content)
    file_content = \
        read_data(container_list[1], int(port), path, filename)
    assert file_content == content

    # Create another service using the same volume
    port = "1003"
    path = "/myvoltest"
    launch_config = {"image": SSH_IMAGE_UUID,
                     "volumeDriver": volume_driver,
                     "dataVolumes": [volume_name + ":" + path],
                     "ports": [port + ":22/tcp"],
                     "labels":
                         {"io.rancher.scheduler.affinity:container_label_ne":
                             "io.rancher.stack_service.name" +
                             "=${stack_name}/${service_name}"}
                     }
    service1, env1 = create_env_and_svc(client, launch_config, 2)
    service1 = service1.activate()
    service1 = client.wait_success(service1, 120)
    assert service1.state == "active"
    container_list = get_service_container_list(client, service1)
    assert len(container_list) == service1.scale

    # Make sure all containers of this service share the same volume as the
    # first service created with this volume name by checking that every
    # container of this service can read the contents of the file
    # that was created from a container in the first service
    for container in container_list:
        file_content = \
            read_data(container, int(port), path, filename)
        assert file_content == content
    delete_all(client, [env, env1])
    delete_volume(client, volumes[0])


def delete_volume_after_service_deletes(client, volume_driver):
    # Create an environment with a service that has a shared volume from
    # volume_driver
    volume_name = random_str()
    path = "/myvol"
    port = "1004"
    launch_config = {"image": SSH_IMAGE_UUID,
                     "volumeDriver": volume_driver,
                     "dataVolumes": [volume_name + ":" + path],
                     "ports": [port + ":22/tcp"],
                     "labels":
                         {"io.rancher.scheduler.affinity:container_label_ne":
                             "io.rancher.stack_service.name" +
                             "=${stack_name}/${service_name}"}
                     }
    service, env = create_env_and_svc(client, launch_config, 2)
    service = service.activate()
    service = client.wait_success(service, 120)
    assert service.state == "active"
    container_list = get_service_container_list(client, service)
    assert len(container_list) == service.scale
    volumes = client.list_volume(removed_null=True,
                                 name=volume_name)
    assert len(volumes) == 1
    volume = volumes[0]
    assert volume.state == "active"
    filename = "test"
    content = random_str()
    write_data(container_list[0], int(port), path, filename, content)
    file_content = \
        read_data(container_list[1], int(port), path, filename)
    assert file_content == content

    # Create another service using the same volume
    port = "1005"
    path = "/myvoltest"
    launch_config = {"image": SSH_IMAGE_UUID,
                     "volumeDriver": volume_driver,
                     "dataVolumes": [volume_name + ":" + path],
                     "ports": [port + ":22/tcp"],
                     "labels":
                         {"io.rancher.scheduler.affinity:container_label_ne":
                             "io.rancher.stack_service.name" +
                             "=${stack_name}/${service_name}"}
                     }
    service1, env1 = create_env_and_svc(client, launch_config, 2)
    service1 = service1.activate()
    service1 = client.wait_success(service1, 120)
    assert service1.state == "active"
    container_list = get_service_container_list(client, service1)
    assert len(container_list) == service1.scale

    # Make sure all containers share the same volume as the first service
    # created with this volume name by checking that every container of this
    # service can read the contents of the file that was created earlier
    for container in container_list:
        file_content = \
            read_data(container, int(port), path, filename)
        assert file_content == content

    # After deleting one of the services that use the volume, the volume state
    # should still be active and we should not be allowed to delete the volume
    delete_all(client, [service])
    container_list = get_service_container_list(client, service)
    for container in container_list:
        wait_for_condition(
            client, container,
            lambda x: x.state == 'purged',
            lambda x: 'State is: ' + x.state)
    volume = client.reload(volume)
    assert volume.state == "active"
    with pytest.raises(ApiError) as e:
        volume = client.wait_success(client.delete(volume))
    assert e.value.error.status == 405
    assert e.value.error.code == 'Method not allowed'
    volume = client.reload(volume)
    assert volume.state == "active"

    # After deleting all the services that use the volume, the volume state
    # should be detached and we should be allowed to delete the volume
    delete_all(client, [service1])
    container_list = get_service_container_list(client, service1)
    for container in container_list:
        wait_for_condition(
            client, container,
            lambda x: x.state == 'purged',
            lambda x: 'State is: ' + x.state)
    delete_volume(client, volume)


def delete_volume(client, volume):
    volume = wait_for_condition(
        client, volume,
        lambda x: x.state == 'detached',
        lambda x: 'State is: ' + x.state,
        timeout=600)
    assert volume.state == "detached"
    volume = client.wait_success(client.delete(volume))
    assert volume.state == "removed"
    volume = client.wait_success(volume.purge())
    assert volume.state == "purged"


def check_for_nfs_driver(client):
    nfs_driver = False
    env = client.list_stack(name="nfs")
    if len(env) == 1:
        service = get_service_by_name(client, env[0],
                                      "nfs-driver")
        if service.state == "active":
            nfs_driver = True
    return nfs_driver
| 34.947222 | 79 | 0.631667 | 1,458 | 12,581 | 5.217421 | 0.109739 | 0.078612 | 0.018798 | 0.03247 | 0.870777 | 0.859077 | 0.850795 | 0.831077 | 0.779677 | 0.721178 | 0 | 0.013086 | 0.271123 | 12,581 | 359 | 80 | 35.044568 | 0.816467 | 0.105477 | 0 | 0.751969 | 0 | 0 | 0.110992 | 0.057189 | 0 | 0 | 0 | 0 | 0.173228 | 0 | null | null | 0 | 0.007874 | null | null | 0.011811 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
1ecd9e8848d90621cdb541b21db3f2cf43d9f02f | 113 | py | Python | autharch_sharc/editor/models/__init__.py | kingsdigitallab/autharch_sharc | 92de5fbec8cc72ce48a9e25eb634d40ac2cc83ca | [
"MIT"
] | null | null | null | autharch_sharc/editor/models/__init__.py | kingsdigitallab/autharch_sharc | 92de5fbec8cc72ce48a9e25eb634d40ac2cc83ca | [
"MIT"
] | null | null | null | autharch_sharc/editor/models/__init__.py | kingsdigitallab/autharch_sharc | 92de5fbec8cc72ce48a9e25eb634d40ac2cc83ca | [
"MIT"
] | null | null | null | from autharch_sharc.editor.models.iiif import * # noqa
from autharch_sharc.editor.models.pages import * # noqa
| 37.666667 | 56 | 0.787611 | 16 | 113 | 5.4375 | 0.5625 | 0.275862 | 0.390805 | 0.528736 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.123894 | 113 | 2 | 57 | 56.5 | 0.878788 | 0.079646 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
94afba4563819fd21bbae84daa6c8060d4609aae | 134 | py | Python | exercises/practice/two_product_production_decision/two_product_production_decision.py | exercism-bot/z3 | 5e32374acd1fa31f15919aa09880f04f1f17f975 | [
"MIT"
] | 6 | 2021-02-16T18:12:57.000Z | 2021-03-18T16:44:26.000Z | exercises/practice/two_product_production_decision/two_product_production_decision.py | exercism-bot/z3 | 5e32374acd1fa31f15919aa09880f04f1f17f975 | [
"MIT"
] | 38 | 2021-02-16T15:17:49.000Z | 2021-08-24T07:28:39.000Z | exercises/practice/two_product_production_decision/two_product_production_decision.py | exercism-bot/z3 | 5e32374acd1fa31f15919aa09880f04f1f17f975 | [
"MIT"
] | 7 | 2021-02-17T14:04:33.000Z | 2021-06-01T08:16:50.000Z | from z3 import *
def find_production_and_profit(a_hours, b_hours, total_hours, prices):
    # TODO: Write your code here
pass | 19.142857 | 70 | 0.723881 | 21 | 134 | 4.333333 | 0.904762 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.009434 | 0.208955 | 134 | 7 | 71 | 19.142857 | 0.849057 | 0.19403 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.142857 | 0 | 1 | 0.333333 | false | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 7 |
bf8bc9d3549e09c2c5006809914e3d879d72e5ff | 164 | py | Python | boa3_test/test_sc/interop_test/binary/SerializeDict.py | DanPopa46/neo3-boa | e4ef340744b5bd25ade26f847eac50789b97f3e9 | [
"Apache-2.0"
] | null | null | null | boa3_test/test_sc/interop_test/binary/SerializeDict.py | DanPopa46/neo3-boa | e4ef340744b5bd25ade26f847eac50789b97f3e9 | [
"Apache-2.0"
] | null | null | null | boa3_test/test_sc/interop_test/binary/SerializeDict.py | DanPopa46/neo3-boa | e4ef340744b5bd25ade26f847eac50789b97f3e9 | [
"Apache-2.0"
] | null | null | null | from boa3.builtin import public
from boa3.builtin.interop.binary import serialize


@public
def serialize_dict() -> bytes:
    return serialize({1: 1, 2: 1, 3: 2})
| 20.5 | 49 | 0.72561 | 25 | 164 | 4.72 | 0.6 | 0.135593 | 0.254237 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.057971 | 0.158537 | 164 | 7 | 50 | 23.428571 | 0.797101 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.2 | true | 0 | 0.4 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
bfa5aeada64106bb8c93636454276aa96cfff673 | 83 | py | Python | ACM-Solution/ANAGRAM.py | wasi0013/Python-CodeBase | 4a7a36395162f68f84ded9085fa34cc7c9b19233 | [
"MIT"
] | 2 | 2016-04-26T15:40:40.000Z | 2018-07-18T10:16:42.000Z | ACM-Solution/ANAGRAM.py | wasi0013/Python-CodeBase | 4a7a36395162f68f84ded9085fa34cc7c9b19233 | [
"MIT"
] | 1 | 2016-04-26T15:44:15.000Z | 2016-04-29T14:44:40.000Z | ACM-Solution/ANAGRAM.py | wasi0013/Python-CodeBase | 4a7a36395162f68f84ded9085fa34cc7c9b19233 | [
"MIT"
] | 1 | 2018-10-02T16:12:19.000Z | 2018-10-02T16:12:19.000Z | exec('a,b=input().split();print("YNEOS"[sorted(a)!=sorted(b)::2]);'*int(input()))
| 41.5 | 82 | 0.590361 | 14 | 83 | 3.5 | 0.714286 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.012346 | 0.024096 | 83 | 1 | 83 | 83 | 0.592593 | 0 | 0 | 0 | 0 | 1 | 0.731707 | 0.731707 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
449ac2c8d6f1c92e64c6d362bff9a1d2d8200125 | 129 | py | Python | language-examples/python-hello-world/extra.py | Richargh/semantic-parser-kt-krdl-sandbox | e7227b22139fb61c545ae4b87827bce7f6796e58 | [
"MIT"
] | null | null | null | language-examples/python-hello-world/extra.py | Richargh/semantic-parser-kt-krdl-sandbox | e7227b22139fb61c545ae4b87827bce7f6796e58 | [
"MIT"
] | null | null | null | language-examples/python-hello-world/extra.py | Richargh/semantic-parser-kt-krdl-sandbox | e7227b22139fb61c545ae4b87827bce7f6796e58 | [
"MIT"
] | null | null | null | class Extra:
    def __init__(self, name):
        self.names = []

    def add_name(self, name):
self.names.append(name) | 21.5 | 31 | 0.589147 | 17 | 129 | 4.176471 | 0.529412 | 0.338028 | 0.338028 | 0.478873 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.27907 | 129 | 6 | 31 | 21.5 | 0.763441 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0 | 0.6 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
44c3fcc98e81dbe53a883960e06e0b20b33836d0 | 545 | py | Python | ex7/drawengine.py | AbdManian/pythonclass-exercises | 8159b675aa71b4b8580430077d831914715ee409 | [
"MIT"
] | null | null | null | ex7/drawengine.py | AbdManian/pythonclass-exercises | 8159b675aa71b4b8580430077d831914715ee409 | [
"MIT"
] | null | null | null | ex7/drawengine.py | AbdManian/pythonclass-exercises | 8159b675aa71b4b8580430077d831914715ee409 | [
"MIT"
] | null | null | null | import turtle
# fill_box(x, y, dx, dy, [color=None])
# box((x1,y1), (x2,y2), [color=None], [width=None])
# box(x1,y1,x2,y2, [color=None], [width=None])
# fill_box_array(x, y, dx, dy, cnt, [add_x=0], [add_y=0], [color=None])
# line(x1, y1, x2, y2, [color=None], [width=None])
# multiline((x1,y1), (x2,y2), ...., (xn,yn) , params)
# set_color([color='black'])
# circle(x, y, r, [color=None], [width=None], [fill_color=None], [fill=False])
# circle_array(x, y, r, cnt, [dx=0], [dy=0], [color=None], [width=None], [fill_color=None], [fill=False]):
| 45.416667 | 106 | 0.601835 | 97 | 545 | 3.28866 | 0.298969 | 0.253919 | 0.219436 | 0.282132 | 0.539185 | 0.526646 | 0.526646 | 0.526646 | 0.445141 | 0.194357 | 0 | 0.041408 | 0.113761 | 545 | 11 | 107 | 49.545455 | 0.619048 | 0.937615 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
783013638e7e7947658105047ba5bfedb0a79a6a | 19 | py | Python | WEEKS/CD_Sata-Structures/general/practice/volleyballPositions/solution.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | WEEKS/CD_Sata-Structures/general/practice/volleyballPositions/solution.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | WEEKS/CD_Sata-Structures/general/practice/volleyballPositions/solution.py | webdevhub42/Lambda | b04b84fb5b82fe7c8b12680149e25ae0d27a0960 | [
"MIT"
] | null | null | null | print(1000000 % 6)
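The one-liner above works because volleyball rotations cycle through 6 positions, so only the remainder of the rotation count modulo 6 matters. A hedged generalization (the function name and `start` parameter are illustrative, not part of the original exercise):

```python
def position_after(n, start=0):
    """Position index (0-5) after n rotations, starting from `start`."""
    # Rotations repeat with period 6, so reduce n modulo 6.
    return (start + n) % 6

# position_after(1_000_000) reproduces print(1000000 % 6).
```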
| 9.5 | 18 | 0.684211 | 3 | 19 | 4.333333 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 0.157895 | 19 | 1 | 19 | 19 | 0.3125 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
785e791edf87e33231d65f6588a82a8b3edc0081 | 152,660 | py | Python | operators/konveyor-operator/python/pulumi_pulumi_kubernetes_crds_operators_konveyor_operator/velero/v1/outputs.py | pulumi/pulumi-kubernetes-crds | 372c4c0182f6b899af82d6edaad521aa14f22150 | [
"Apache-2.0"
] | null | null | null | operators/konveyor-operator/python/pulumi_pulumi_kubernetes_crds_operators_konveyor_operator/velero/v1/outputs.py | pulumi/pulumi-kubernetes-crds | 372c4c0182f6b899af82d6edaad521aa14f22150 | [
"Apache-2.0"
] | 2 | 2020-09-18T17:12:23.000Z | 2020-12-30T19:40:56.000Z | operators/konveyor-operator/python/pulumi_pulumi_kubernetes_crds_operators_konveyor_operator/velero/v1/outputs.py | pulumi/pulumi-kubernetes-crds | 372c4c0182f6b899af82d6edaad521aa14f22150 | [
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by crd2pulumi. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Mapping, Optional, Sequence, Union
from ... import _utilities, _tables
from . import outputs
__all__ = [
'BackupSpec',
'BackupSpecHooks',
'BackupSpecHooksResources',
'BackupSpecHooksResourcesLabelSelector',
'BackupSpecHooksResourcesLabelSelectorMatchExpressions',
'BackupSpecHooksResourcesPost',
'BackupSpecHooksResourcesPostExec',
'BackupSpecHooksResourcesPre',
'BackupSpecHooksResourcesPreExec',
'BackupSpecLabelSelector',
'BackupSpecLabelSelectorMatchExpressions',
'BackupStatus',
'BackupStatusProgress',
'BackupStorageLocationSpec',
'BackupStorageLocationSpecObjectStorage',
'BackupStorageLocationStatus',
'DeleteBackupRequestSpec',
'DeleteBackupRequestStatus',
'DownloadRequestSpec',
'DownloadRequestSpecTarget',
'DownloadRequestStatus',
'PodVolumeBackupSpec',
'PodVolumeBackupSpecPod',
'PodVolumeBackupStatus',
'PodVolumeBackupStatusProgress',
'PodVolumeRestoreSpec',
'PodVolumeRestoreSpecPod',
'PodVolumeRestoreStatus',
'PodVolumeRestoreStatusProgress',
'ResticRepositorySpec',
'ResticRepositoryStatus',
'RestoreSpec',
'RestoreSpecLabelSelector',
'RestoreSpecLabelSelectorMatchExpressions',
'RestoreStatus',
'RestoreStatusPodVolumeRestoreErrors',
'RestoreStatusPodVolumeRestoreVerifyErrors',
'ScheduleSpec',
'ScheduleSpecTemplate',
'ScheduleSpecTemplateHooks',
'ScheduleSpecTemplateHooksResources',
'ScheduleSpecTemplateHooksResourcesLabelSelector',
'ScheduleSpecTemplateHooksResourcesLabelSelectorMatchExpressions',
'ScheduleSpecTemplateHooksResourcesPost',
'ScheduleSpecTemplateHooksResourcesPostExec',
'ScheduleSpecTemplateHooksResourcesPre',
'ScheduleSpecTemplateHooksResourcesPreExec',
'ScheduleSpecTemplateLabelSelector',
'ScheduleSpecTemplateLabelSelectorMatchExpressions',
'ScheduleStatus',
'ServerStatusRequestStatus',
'ServerStatusRequestStatusPlugins',
'VolumeSnapshotLocationSpec',
'VolumeSnapshotLocationStatus',
]
@pulumi.output_type
class BackupSpec(dict):
"""
BackupSpec defines the specification for a Velero backup.
"""
def __init__(__self__, *,
excluded_namespaces: Optional[Sequence[str]] = None,
excluded_resources: Optional[Sequence[str]] = None,
hooks: Optional['outputs.BackupSpecHooks'] = None,
include_cluster_resources: Optional[bool] = None,
included_namespaces: Optional[Sequence[str]] = None,
included_resources: Optional[Sequence[str]] = None,
label_selector: Optional['outputs.BackupSpecLabelSelector'] = None,
snapshot_volumes: Optional[bool] = None,
storage_location: Optional[str] = None,
ttl: Optional[str] = None,
volume_snapshot_locations: Optional[Sequence[str]] = None):
"""
BackupSpec defines the specification for a Velero backup.
:param Sequence[str] excluded_namespaces: ExcludedNamespaces contains a list of namespaces that are not included in the backup.
:param Sequence[str] excluded_resources: ExcludedResources is a slice of resource names that are not included in the backup.
:param 'BackupSpecHooksArgs' hooks: Hooks represent custom behaviors that should be executed at different phases of the backup.
:param bool include_cluster_resources: IncludeClusterResources specifies whether cluster-scoped resources should be included for consideration in the backup.
:param Sequence[str] included_namespaces: IncludedNamespaces is a slice of namespace names to include objects from. If empty, all namespaces are included.
:param Sequence[str] included_resources: IncludedResources is a slice of resource names to include in the backup. If empty, all resources are included.
:param 'BackupSpecLabelSelectorArgs' label_selector: LabelSelector is a metav1.LabelSelector to filter with when adding individual objects to the backup. If empty or nil, all objects are included. Optional.
:param bool snapshot_volumes: SnapshotVolumes specifies whether to take cloud snapshots of any PV's referenced in the set of objects included in the Backup.
:param str storage_location: StorageLocation is a string containing the name of a BackupStorageLocation where the backup should be stored.
:param str ttl: TTL is a time.Duration-parseable string describing how long the Backup should be retained for.
:param Sequence[str] volume_snapshot_locations: VolumeSnapshotLocations is a list containing names of VolumeSnapshotLocations associated with this backup.
"""
if excluded_namespaces is not None:
pulumi.set(__self__, "excluded_namespaces", excluded_namespaces)
if excluded_resources is not None:
pulumi.set(__self__, "excluded_resources", excluded_resources)
if hooks is not None:
pulumi.set(__self__, "hooks", hooks)
if include_cluster_resources is not None:
pulumi.set(__self__, "include_cluster_resources", include_cluster_resources)
if included_namespaces is not None:
pulumi.set(__self__, "included_namespaces", included_namespaces)
if included_resources is not None:
pulumi.set(__self__, "included_resources", included_resources)
if label_selector is not None:
pulumi.set(__self__, "label_selector", label_selector)
if snapshot_volumes is not None:
pulumi.set(__self__, "snapshot_volumes", snapshot_volumes)
if storage_location is not None:
pulumi.set(__self__, "storage_location", storage_location)
if ttl is not None:
pulumi.set(__self__, "ttl", ttl)
if volume_snapshot_locations is not None:
pulumi.set(__self__, "volume_snapshot_locations", volume_snapshot_locations)
@property
@pulumi.getter(name="excludedNamespaces")
def excluded_namespaces(self) -> Optional[Sequence[str]]:
"""
ExcludedNamespaces contains a list of namespaces that are not included in the backup.
"""
return pulumi.get(self, "excluded_namespaces")
@property
@pulumi.getter(name="excludedResources")
def excluded_resources(self) -> Optional[Sequence[str]]:
"""
ExcludedResources is a slice of resource names that are not included in the backup.
"""
return pulumi.get(self, "excluded_resources")
@property
@pulumi.getter
def hooks(self) -> Optional['outputs.BackupSpecHooks']:
"""
Hooks represent custom behaviors that should be executed at different phases of the backup.
"""
return pulumi.get(self, "hooks")
@property
@pulumi.getter(name="includeClusterResources")
def include_cluster_resources(self) -> Optional[bool]:
"""
IncludeClusterResources specifies whether cluster-scoped resources should be included for consideration in the backup.
"""
return pulumi.get(self, "include_cluster_resources")
@property
@pulumi.getter(name="includedNamespaces")
def included_namespaces(self) -> Optional[Sequence[str]]:
"""
IncludedNamespaces is a slice of namespace names to include objects from. If empty, all namespaces are included.
"""
return pulumi.get(self, "included_namespaces")
@property
@pulumi.getter(name="includedResources")
def included_resources(self) -> Optional[Sequence[str]]:
"""
IncludedResources is a slice of resource names to include in the backup. If empty, all resources are included.
"""
return pulumi.get(self, "included_resources")
@property
@pulumi.getter(name="labelSelector")
def label_selector(self) -> Optional['outputs.BackupSpecLabelSelector']:
"""
LabelSelector is a metav1.LabelSelector to filter with when adding individual objects to the backup. If empty or nil, all objects are included. Optional.
"""
return pulumi.get(self, "label_selector")
@property
@pulumi.getter(name="snapshotVolumes")
def snapshot_volumes(self) -> Optional[bool]:
"""
SnapshotVolumes specifies whether to take cloud snapshots of any PV's referenced in the set of objects included in the Backup.
"""
return pulumi.get(self, "snapshot_volumes")
@property
@pulumi.getter(name="storageLocation")
def storage_location(self) -> Optional[str]:
"""
StorageLocation is a string containing the name of a BackupStorageLocation where the backup should be stored.
"""
return pulumi.get(self, "storage_location")
@property
@pulumi.getter
def ttl(self) -> Optional[str]:
"""
TTL is a time.Duration-parseable string describing how long the Backup should be retained for.
"""
return pulumi.get(self, "ttl")
@property
@pulumi.getter(name="volumeSnapshotLocations")
def volume_snapshot_locations(self) -> Optional[Sequence[str]]:
"""
VolumeSnapshotLocations is a list containing names of VolumeSnapshotLocations associated with this backup.
"""
return pulumi.get(self, "volume_snapshot_locations")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
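Every generated output type in this file follows the same dict-backed pattern: store only the fields that were actually set, and translate camelCase wire names to snake_case attributes through a lookup table. A minimal standalone mimic of that pattern (illustrative only; this file is generated by crd2pulumi and should not be edited by hand, and `MiniOutput` and its field names are invented for the sketch):

```python
class MiniOutput(dict):
    """Toy version of the generated dict-backed output types."""

    _TABLE = {"storageLocation": "storage_location", "ttl": "ttl"}

    def __init__(self, *, storage_location=None, ttl=None):
        super().__init__()
        # Only non-None values are recorded, matching the `if x is not None`
        # guards in the generated __init__ bodies above.
        if storage_location is not None:
            self["storage_location"] = storage_location
        if ttl is not None:
            self["ttl"] = ttl

    def _translate_property(self, prop):
        # Unknown keys fall through unchanged, like CAMEL_TO_SNAKE_CASE_TABLE.
        return self._TABLE.get(prop) or prop
```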
@pulumi.output_type
class BackupSpecHooks(dict):
"""
Hooks represent custom behaviors that should be executed at different phases of the backup.
"""
def __init__(__self__, *,
resources: Optional[Sequence['outputs.BackupSpecHooksResources']] = None):
"""
Hooks represent custom behaviors that should be executed at different phases of the backup.
:param Sequence['BackupSpecHooksResourcesArgs'] resources: Resources are hooks that should be executed when backing up individual instances of a resource.
"""
if resources is not None:
pulumi.set(__self__, "resources", resources)
@property
@pulumi.getter
def resources(self) -> Optional[Sequence['outputs.BackupSpecHooksResources']]:
"""
Resources are hooks that should be executed when backing up individual instances of a resource.
"""
return pulumi.get(self, "resources")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class BackupSpecHooksResources(dict):
"""
BackupResourceHookSpec defines one or more BackupResourceHooks that should be executed based on the rules defined for namespaces, resources, and label selector.
"""
def __init__(__self__, *,
name: str,
excluded_namespaces: Optional[Sequence[str]] = None,
excluded_resources: Optional[Sequence[str]] = None,
included_namespaces: Optional[Sequence[str]] = None,
included_resources: Optional[Sequence[str]] = None,
label_selector: Optional['outputs.BackupSpecHooksResourcesLabelSelector'] = None,
post: Optional[Sequence['outputs.BackupSpecHooksResourcesPost']] = None,
pre: Optional[Sequence['outputs.BackupSpecHooksResourcesPre']] = None):
"""
BackupResourceHookSpec defines one or more BackupResourceHooks that should be executed based on the rules defined for namespaces, resources, and label selector.
:param str name: Name is the name of this hook.
:param Sequence[str] excluded_namespaces: ExcludedNamespaces specifies the namespaces to which this hook spec does not apply.
:param Sequence[str] excluded_resources: ExcludedResources specifies the resources to which this hook spec does not apply.
:param Sequence[str] included_namespaces: IncludedNamespaces specifies the namespaces to which this hook spec applies. If empty, it applies to all namespaces.
:param Sequence[str] included_resources: IncludedResources specifies the resources to which this hook spec applies. If empty, it applies to all resources.
:param 'BackupSpecHooksResourcesLabelSelectorArgs' label_selector: LabelSelector, if specified, filters the resources to which this hook spec applies.
:param Sequence['BackupSpecHooksResourcesPostArgs'] post: PostHooks is a list of BackupResourceHooks to execute after storing the item in the backup. These are executed after all "additional items" from item actions are processed.
:param Sequence['BackupSpecHooksResourcesPreArgs'] pre: PreHooks is a list of BackupResourceHooks to execute prior to storing the item in the backup. These are executed before any "additional items" from item actions are processed.
"""
pulumi.set(__self__, "name", name)
if excluded_namespaces is not None:
pulumi.set(__self__, "excluded_namespaces", excluded_namespaces)
if excluded_resources is not None:
pulumi.set(__self__, "excluded_resources", excluded_resources)
if included_namespaces is not None:
pulumi.set(__self__, "included_namespaces", included_namespaces)
if included_resources is not None:
pulumi.set(__self__, "included_resources", included_resources)
if label_selector is not None:
pulumi.set(__self__, "label_selector", label_selector)
if post is not None:
pulumi.set(__self__, "post", post)
if pre is not None:
pulumi.set(__self__, "pre", pre)
@property
@pulumi.getter
def name(self) -> str:
"""
Name is the name of this hook.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="excludedNamespaces")
def excluded_namespaces(self) -> Optional[Sequence[str]]:
"""
ExcludedNamespaces specifies the namespaces to which this hook spec does not apply.
"""
return pulumi.get(self, "excluded_namespaces")
@property
@pulumi.getter(name="excludedResources")
def excluded_resources(self) -> Optional[Sequence[str]]:
"""
ExcludedResources specifies the resources to which this hook spec does not apply.
"""
return pulumi.get(self, "excluded_resources")
@property
@pulumi.getter(name="includedNamespaces")
def included_namespaces(self) -> Optional[Sequence[str]]:
"""
IncludedNamespaces specifies the namespaces to which this hook spec applies. If empty, it applies to all namespaces.
"""
return pulumi.get(self, "included_namespaces")
@property
@pulumi.getter(name="includedResources")
def included_resources(self) -> Optional[Sequence[str]]:
"""
IncludedResources specifies the resources to which this hook spec applies. If empty, it applies to all resources.
"""
return pulumi.get(self, "included_resources")
@property
@pulumi.getter(name="labelSelector")
def label_selector(self) -> Optional['outputs.BackupSpecHooksResourcesLabelSelector']:
"""
LabelSelector, if specified, filters the resources to which this hook spec applies.
"""
return pulumi.get(self, "label_selector")
@property
@pulumi.getter
def post(self) -> Optional[Sequence['outputs.BackupSpecHooksResourcesPost']]:
"""
PostHooks is a list of BackupResourceHooks to execute after storing the item in the backup. These are executed after all "additional items" from item actions are processed.
"""
return pulumi.get(self, "post")
@property
@pulumi.getter
def pre(self) -> Optional[Sequence['outputs.BackupSpecHooksResourcesPre']]:
"""
PreHooks is a list of BackupResourceHooks to execute prior to storing the item in the backup. These are executed before any "additional items" from item actions are processed.
"""
return pulumi.get(self, "pre")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class BackupSpecHooksResourcesLabelSelector(dict):
"""
LabelSelector, if specified, filters the resources to which this hook spec applies.
"""
def __init__(__self__, *,
match_expressions: Optional[Sequence['outputs.BackupSpecHooksResourcesLabelSelectorMatchExpressions']] = None,
match_labels: Optional[Mapping[str, str]] = None):
"""
LabelSelector, if specified, filters the resources to which this hook spec applies.
:param Sequence['BackupSpecHooksResourcesLabelSelectorMatchExpressionsArgs'] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.
:param Mapping[str, str] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
if match_expressions is not None:
pulumi.set(__self__, "match_expressions", match_expressions)
if match_labels is not None:
pulumi.set(__self__, "match_labels", match_labels)
@property
@pulumi.getter(name="matchExpressions")
def match_expressions(self) -> Optional[Sequence['outputs.BackupSpecHooksResourcesLabelSelectorMatchExpressions']]:
"""
matchExpressions is a list of label selector requirements. The requirements are ANDed.
"""
return pulumi.get(self, "match_expressions")
@property
@pulumi.getter(name="matchLabels")
def match_labels(self) -> Optional[Mapping[str, str]]:
"""
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
return pulumi.get(self, "match_labels")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class BackupSpecHooksResourcesLabelSelectorMatchExpressions(dict):
"""
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
"""
def __init__(__self__, *,
key: str,
operator: str,
values: Optional[Sequence[str]] = None):
"""
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
:param str key: key is the label key that the selector applies to.
:param str operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
:param Sequence[str] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
"""
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "operator", operator)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def key(self) -> str:
"""
key is the label key that the selector applies to.
"""
return pulumi.get(self, "key")
@property
@pulumi.getter
def operator(self) -> str:
"""
operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
"""
return pulumi.get(self, "operator")
@property
@pulumi.getter
def values(self) -> Optional[Sequence[str]]:
"""
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
"""
return pulumi.get(self, "values")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
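The docstrings above describe standard Kubernetes label-selector semantics: every `matchLabels` entry and every `matchExpressions` requirement is ANDed, with the operators In, NotIn, Exists, and DoesNotExist. A hedged standalone evaluator of those rules (not part of the generated code; the dict-shaped `expr` input is an assumption for the sketch):

```python
def selector_matches(labels, match_labels=None, match_expressions=None):
    """Return True if `labels` satisfies all selector requirements (ANDed)."""
    # matchLabels: each {key: value} pair must match exactly.
    for key, value in (match_labels or {}).items():
        if labels.get(key) != value:
            return False
    # matchExpressions: each requirement must hold.
    for expr in match_expressions or []:
        key, op = expr["key"], expr["operator"]
        values = expr.get("values") or []
        if op == "In" and labels.get(key) not in values:
            return False
        if op == "NotIn" and labels.get(key) in values:
            return False
        if op == "Exists" and key not in labels:
            return False
        if op == "DoesNotExist" and key in labels:
            return False
    return True
```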
@pulumi.output_type
class BackupSpecHooksResourcesPost(dict):
"""
BackupResourceHook defines a hook for a resource.
"""
def __init__(__self__, *,
exec_: 'outputs.BackupSpecHooksResourcesPostExec'):
"""
BackupResourceHook defines a hook for a resource.
:param 'BackupSpecHooksResourcesPostExecArgs' exec_: Exec defines an exec hook.
"""
pulumi.set(__self__, "exec_", exec_)
@property
@pulumi.getter(name="exec")
def exec_(self) -> 'outputs.BackupSpecHooksResourcesPostExec':
"""
Exec defines an exec hook.
"""
return pulumi.get(self, "exec_")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class BackupSpecHooksResourcesPostExec(dict):
"""
Exec defines an exec hook.
"""
def __init__(__self__, *,
command: Sequence[str],
container: Optional[str] = None,
on_error: Optional[str] = None,
timeout: Optional[str] = None):
"""
Exec defines an exec hook.
:param Sequence[str] command: Command is the command and arguments to execute.
:param str container: Container is the container in the pod where the command should be executed. If not specified, the pod's first container is used.
:param str on_error: OnError specifies how Velero should behave if it encounters an error executing this hook.
:param str timeout: Timeout defines the maximum amount of time Velero should wait for the hook to complete before considering the execution a failure.
"""
pulumi.set(__self__, "command", command)
if container is not None:
pulumi.set(__self__, "container", container)
if on_error is not None:
pulumi.set(__self__, "on_error", on_error)
if timeout is not None:
pulumi.set(__self__, "timeout", timeout)
@property
@pulumi.getter
def command(self) -> Sequence[str]:
"""
Command is the command and arguments to execute.
"""
return pulumi.get(self, "command")
@property
@pulumi.getter
def container(self) -> Optional[str]:
"""
Container is the container in the pod where the command should be executed. If not specified, the pod's first container is used.
"""
return pulumi.get(self, "container")
@property
@pulumi.getter(name="onError")
def on_error(self) -> Optional[str]:
"""
OnError specifies how Velero should behave if it encounters an error executing this hook.
"""
return pulumi.get(self, "on_error")
@property
@pulumi.getter
def timeout(self) -> Optional[str]:
"""
Timeout defines the maximum amount of time Velero should wait for the hook to complete before considering the execution a failure.
"""
return pulumi.get(self, "timeout")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class BackupSpecHooksResourcesPre(dict):
"""
BackupResourceHook defines a hook for a resource.
"""
def __init__(__self__, *,
exec_: 'outputs.BackupSpecHooksResourcesPreExec'):
"""
BackupResourceHook defines a hook for a resource.
:param 'BackupSpecHooksResourcesPreExecArgs' exec_: Exec defines an exec hook.
"""
pulumi.set(__self__, "exec_", exec_)
@property
@pulumi.getter(name="exec")
def exec_(self) -> 'outputs.BackupSpecHooksResourcesPreExec':
"""
Exec defines an exec hook.
"""
return pulumi.get(self, "exec_")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class BackupSpecHooksResourcesPreExec(dict):
"""
Exec defines an exec hook.
"""
def __init__(__self__, *,
command: Sequence[str],
container: Optional[str] = None,
on_error: Optional[str] = None,
timeout: Optional[str] = None):
"""
Exec defines an exec hook.
:param Sequence[str] command: Command is the command and arguments to execute.
:param str container: Container is the container in the pod where the command should be executed. If not specified, the pod's first container is used.
:param str on_error: OnError specifies how Velero should behave if it encounters an error executing this hook.
:param str timeout: Timeout defines the maximum amount of time Velero should wait for the hook to complete before considering the execution a failure.
"""
pulumi.set(__self__, "command", command)
if container is not None:
pulumi.set(__self__, "container", container)
if on_error is not None:
pulumi.set(__self__, "on_error", on_error)
if timeout is not None:
pulumi.set(__self__, "timeout", timeout)
@property
@pulumi.getter
def command(self) -> Sequence[str]:
"""
Command is the command and arguments to execute.
"""
return pulumi.get(self, "command")
@property
@pulumi.getter
def container(self) -> Optional[str]:
"""
Container is the container in the pod where the command should be executed. If not specified, the pod's first container is used.
"""
return pulumi.get(self, "container")
@property
@pulumi.getter(name="onError")
def on_error(self) -> Optional[str]:
"""
OnError specifies how Velero should behave if it encounters an error executing this hook.
"""
return pulumi.get(self, "on_error")
@property
@pulumi.getter
def timeout(self) -> Optional[str]:
"""
Timeout defines the maximum amount of time Velero should wait for the hook to complete before considering the execution a failure.
"""
return pulumi.get(self, "timeout")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class BackupSpecLabelSelector(dict):
"""
LabelSelector is a metav1.LabelSelector to filter with when adding individual objects to the backup. If empty or nil, all objects are included. Optional.
"""
def __init__(__self__, *,
match_expressions: Optional[Sequence['outputs.BackupSpecLabelSelectorMatchExpressions']] = None,
match_labels: Optional[Mapping[str, str]] = None):
"""
LabelSelector is a metav1.LabelSelector to filter with when adding individual objects to the backup. If empty or nil, all objects are included. Optional.
:param Sequence['BackupSpecLabelSelectorMatchExpressionsArgs'] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.
:param Mapping[str, str] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
if match_expressions is not None:
pulumi.set(__self__, "match_expressions", match_expressions)
if match_labels is not None:
pulumi.set(__self__, "match_labels", match_labels)
@property
@pulumi.getter(name="matchExpressions")
def match_expressions(self) -> Optional[Sequence['outputs.BackupSpecLabelSelectorMatchExpressions']]:
"""
matchExpressions is a list of label selector requirements. The requirements are ANDed.
"""
return pulumi.get(self, "match_expressions")
@property
@pulumi.getter(name="matchLabels")
def match_labels(self) -> Optional[Mapping[str, str]]:
"""
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
return pulumi.get(self, "match_labels")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class BackupSpecLabelSelectorMatchExpressions(dict):
"""
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
"""
def __init__(__self__, *,
key: str,
operator: str,
values: Optional[Sequence[str]] = None):
"""
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
:param str key: key is the label key that the selector applies to.
:param str operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
:param Sequence[str] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
"""
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "operator", operator)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def key(self) -> str:
"""
key is the label key that the selector applies to.
"""
return pulumi.get(self, "key")
@property
@pulumi.getter
def operator(self) -> str:
"""
operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
"""
return pulumi.get(self, "operator")
@property
@pulumi.getter
def values(self) -> Optional[Sequence[str]]:
"""
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
"""
return pulumi.get(self, "values")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class BackupStatus(dict):
"""
BackupStatus captures the current status of a Velero backup.
"""
def __init__(__self__, *,
completion_timestamp: Optional[str] = None,
errors: Optional[int] = None,
expiration: Optional[str] = None,
format_version: Optional[str] = None,
phase: Optional[str] = None,
progress: Optional['outputs.BackupStatusProgress'] = None,
start_timestamp: Optional[str] = None,
validation_errors: Optional[Sequence[str]] = None,
version: Optional[int] = None,
volume_snapshots_attempted: Optional[int] = None,
volume_snapshots_completed: Optional[int] = None,
warnings: Optional[int] = None):
"""
BackupStatus captures the current status of a Velero backup.
:param str completion_timestamp: CompletionTimestamp records the time a backup was completed. Completion time is recorded even on failed backups. Completion time is recorded before uploading the backup object. The server's time is used for CompletionTimestamps
:param int errors: Errors is a count of all error messages that were generated during execution of the backup. The actual errors are in the backup's log file in object storage.
:param str expiration: Expiration is when this Backup is eligible for garbage-collection.
:param str format_version: FormatVersion is the backup format version, including major, minor, and patch version.
:param str phase: Phase is the current state of the Backup.
        :param 'BackupStatusProgressArgs' progress: Progress contains information about the backup's execution progress. Note that this information is best-effort only -- if Velero fails to update it during a backup for any reason, it may be inaccurate/stale.
        :param str start_timestamp: StartTimestamp records the time a backup was started. Separate from CreationTimestamp, since that value changes on restores. The server's time is used for StartTimestamps
        :param Sequence[str] validation_errors: ValidationErrors is a slice of all validation errors (if applicable).
        :param int version: Version is the backup format major version. Deprecated: Please see FormatVersion
        :param int volume_snapshots_attempted: VolumeSnapshotsAttempted is the total number of attempted volume snapshots for this backup.
        :param int volume_snapshots_completed: VolumeSnapshotsCompleted is the total number of successfully completed volume snapshots for this backup.
        :param int warnings: Warnings is a count of all warning messages that were generated during execution of the backup. The actual warnings are in the backup's log file in object storage.
        """
        if completion_timestamp is not None:
            pulumi.set(__self__, "completion_timestamp", completion_timestamp)
        if errors is not None:
            pulumi.set(__self__, "errors", errors)
        if expiration is not None:
            pulumi.set(__self__, "expiration", expiration)
        if format_version is not None:
            pulumi.set(__self__, "format_version", format_version)
        if phase is not None:
            pulumi.set(__self__, "phase", phase)
        if progress is not None:
            pulumi.set(__self__, "progress", progress)
        if start_timestamp is not None:
            pulumi.set(__self__, "start_timestamp", start_timestamp)
        if validation_errors is not None:
            pulumi.set(__self__, "validation_errors", validation_errors)
        if version is not None:
            pulumi.set(__self__, "version", version)
        if volume_snapshots_attempted is not None:
            pulumi.set(__self__, "volume_snapshots_attempted", volume_snapshots_attempted)
        if volume_snapshots_completed is not None:
            pulumi.set(__self__, "volume_snapshots_completed", volume_snapshots_completed)
        if warnings is not None:
            pulumi.set(__self__, "warnings", warnings)

    @property
    @pulumi.getter(name="completionTimestamp")
    def completion_timestamp(self) -> Optional[str]:
        """
        CompletionTimestamp records the time a backup was completed. Completion time is recorded even on failed backups. Completion time is recorded before uploading the backup object. The server's time is used for CompletionTimestamps
        """
        return pulumi.get(self, "completion_timestamp")

    @property
    @pulumi.getter
    def errors(self) -> Optional[int]:
        """
        Errors is a count of all error messages that were generated during execution of the backup. The actual errors are in the backup's log file in object storage.
        """
        return pulumi.get(self, "errors")

    @property
    @pulumi.getter
    def expiration(self) -> Optional[str]:
        """
        Expiration is when this Backup is eligible for garbage-collection.
        """
        return pulumi.get(self, "expiration")

    @property
    @pulumi.getter(name="formatVersion")
    def format_version(self) -> Optional[str]:
        """
        FormatVersion is the backup format version, including major, minor, and patch version.
        """
        return pulumi.get(self, "format_version")

    @property
    @pulumi.getter
    def phase(self) -> Optional[str]:
        """
        Phase is the current state of the Backup.
        """
        return pulumi.get(self, "phase")

    @property
    @pulumi.getter
    def progress(self) -> Optional['outputs.BackupStatusProgress']:
        """
        Progress contains information about the backup's execution progress. Note that this information is best-effort only -- if Velero fails to update it during a backup for any reason, it may be inaccurate/stale.
        """
        return pulumi.get(self, "progress")

    @property
    @pulumi.getter(name="startTimestamp")
    def start_timestamp(self) -> Optional[str]:
        """
        StartTimestamp records the time a backup was started. Separate from CreationTimestamp, since that value changes on restores. The server's time is used for StartTimestamps
        """
        return pulumi.get(self, "start_timestamp")

    @property
    @pulumi.getter(name="validationErrors")
    def validation_errors(self) -> Optional[Sequence[str]]:
        """
        ValidationErrors is a slice of all validation errors (if applicable).
        """
        return pulumi.get(self, "validation_errors")

    @property
    @pulumi.getter
    def version(self) -> Optional[int]:
        """
        Version is the backup format major version. Deprecated: Please see FormatVersion
        """
        return pulumi.get(self, "version")

    @property
    @pulumi.getter(name="volumeSnapshotsAttempted")
    def volume_snapshots_attempted(self) -> Optional[int]:
        """
        VolumeSnapshotsAttempted is the total number of attempted volume snapshots for this backup.
        """
        return pulumi.get(self, "volume_snapshots_attempted")

    @property
    @pulumi.getter(name="volumeSnapshotsCompleted")
    def volume_snapshots_completed(self) -> Optional[int]:
        """
        VolumeSnapshotsCompleted is the total number of successfully completed volume snapshots for this backup.
        """
        return pulumi.get(self, "volume_snapshots_completed")

    @property
    @pulumi.getter
    def warnings(self) -> Optional[int]:
        """
        Warnings is a count of all warning messages that were generated during execution of the backup. The actual warnings are in the backup's log file in object storage.
        """
        return pulumi.get(self, "warnings")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class BackupStatusProgress(dict):
    """
    Progress contains information about the backup's execution progress. Note that this information is best-effort only -- if Velero fails to update it during a backup for any reason, it may be inaccurate/stale.
    """
    def __init__(__self__, *,
                 items_backed_up: Optional[int] = None,
                 total_items: Optional[int] = None):
        """
        Progress contains information about the backup's execution progress. Note that this information is best-effort only -- if Velero fails to update it during a backup for any reason, it may be inaccurate/stale.
        :param int items_backed_up: ItemsBackedUp is the number of items that have actually been written to the backup tarball so far.
        :param int total_items: TotalItems is the total number of items to be backed up. This number may change throughout the execution of the backup due to plugins that return additional related items to back up, the velero.io/exclude-from-backup label, and various other filters that happen as items are processed.
        """
        if items_backed_up is not None:
            pulumi.set(__self__, "items_backed_up", items_backed_up)
        if total_items is not None:
            pulumi.set(__self__, "total_items", total_items)

    @property
    @pulumi.getter(name="itemsBackedUp")
    def items_backed_up(self) -> Optional[int]:
        """
        ItemsBackedUp is the number of items that have actually been written to the backup tarball so far.
        """
        return pulumi.get(self, "items_backed_up")

    @property
    @pulumi.getter(name="totalItems")
    def total_items(self) -> Optional[int]:
        """
        TotalItems is the total number of items to be backed up. This number may change throughout the execution of the backup due to plugins that return additional related items to back up, the velero.io/exclude-from-backup label, and various other filters that happen as items are processed.
        """
        return pulumi.get(self, "total_items")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class BackupStorageLocationSpec(dict):
    """
    BackupStorageLocationSpec defines the specification for a Velero BackupStorageLocation.
    """
    def __init__(__self__, *,
                 object_storage: 'outputs.BackupStorageLocationSpecObjectStorage',
                 provider: str,
                 access_mode: Optional[str] = None,
                 backup_sync_period: Optional[str] = None,
                 config: Optional[Mapping[str, str]] = None):
        """
        BackupStorageLocationSpec defines the specification for a Velero BackupStorageLocation.
        :param 'BackupStorageLocationSpecObjectStorageArgs' object_storage: ObjectStorageLocation specifies the settings necessary to connect to a provider's object storage.
        :param str provider: Provider is the provider of the backup storage.
        :param str access_mode: AccessMode defines the permissions for the backup storage location.
        :param str backup_sync_period: BackupSyncPeriod defines how frequently to sync backup API objects from object storage. A value of 0 disables sync.
        :param Mapping[str, str] config: Config is for provider-specific configuration fields.
        """
        pulumi.set(__self__, "object_storage", object_storage)
        pulumi.set(__self__, "provider", provider)
        if access_mode is not None:
            pulumi.set(__self__, "access_mode", access_mode)
        if backup_sync_period is not None:
            pulumi.set(__self__, "backup_sync_period", backup_sync_period)
        if config is not None:
            pulumi.set(__self__, "config", config)

    @property
    @pulumi.getter(name="objectStorage")
    def object_storage(self) -> 'outputs.BackupStorageLocationSpecObjectStorage':
        """
        ObjectStorageLocation specifies the settings necessary to connect to a provider's object storage.
        """
        return pulumi.get(self, "object_storage")

    @property
    @pulumi.getter
    def provider(self) -> str:
        """
        Provider is the provider of the backup storage.
        """
        return pulumi.get(self, "provider")

    @property
    @pulumi.getter(name="accessMode")
    def access_mode(self) -> Optional[str]:
        """
        AccessMode defines the permissions for the backup storage location.
        """
        return pulumi.get(self, "access_mode")

    @property
    @pulumi.getter(name="backupSyncPeriod")
    def backup_sync_period(self) -> Optional[str]:
        """
        BackupSyncPeriod defines how frequently to sync backup API objects from object storage. A value of 0 disables sync.
        """
        return pulumi.get(self, "backup_sync_period")

    @property
    @pulumi.getter
    def config(self) -> Optional[Mapping[str, str]]:
        """
        Config is for provider-specific configuration fields.
        """
        return pulumi.get(self, "config")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class BackupStorageLocationSpecObjectStorage(dict):
    """
    ObjectStorageLocation specifies the settings necessary to connect to a provider's object storage.
    """
    def __init__(__self__, *,
                 bucket: str,
                 ca_cert: Optional[str] = None,
                 prefix: Optional[str] = None):
        """
        ObjectStorageLocation specifies the settings necessary to connect to a provider's object storage.
        :param str bucket: Bucket is the bucket to use for object storage.
        :param str ca_cert: CACert defines a CA bundle to use when verifying TLS connections to the provider.
        :param str prefix: Prefix is the path inside a bucket to use for Velero storage. Optional.
        """
        pulumi.set(__self__, "bucket", bucket)
        if ca_cert is not None:
            pulumi.set(__self__, "ca_cert", ca_cert)
        if prefix is not None:
            pulumi.set(__self__, "prefix", prefix)

    @property
    @pulumi.getter
    def bucket(self) -> str:
        """
        Bucket is the bucket to use for object storage.
        """
        return pulumi.get(self, "bucket")

    @property
    @pulumi.getter(name="caCert")
    def ca_cert(self) -> Optional[str]:
        """
        CACert defines a CA bundle to use when verifying TLS connections to the provider.
        """
        return pulumi.get(self, "ca_cert")

    @property
    @pulumi.getter
    def prefix(self) -> Optional[str]:
        """
        Prefix is the path inside a bucket to use for Velero storage. Optional.
        """
        return pulumi.get(self, "prefix")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class BackupStorageLocationStatus(dict):
    """
    BackupStorageLocationStatus describes the current status of a Velero BackupStorageLocation.
    """
    def __init__(__self__, *,
                 access_mode: Optional[str] = None,
                 last_synced_revision: Optional[str] = None,
                 last_synced_time: Optional[str] = None,
                 phase: Optional[str] = None):
        """
        BackupStorageLocationStatus describes the current status of a Velero BackupStorageLocation.
        :param str access_mode: AccessMode is an unused field.
               Deprecated: there is now an AccessMode field on the Spec and this field will be removed entirely as of v2.0.
        :param str last_synced_revision: LastSyncedRevision is the value of the `metadata/revision` file in the backup storage location the last time the BSL's contents were synced into the cluster.
               Deprecated: this field is no longer updated or used for detecting changes to the location's contents and will be removed entirely in v2.0.
        :param str last_synced_time: LastSyncedTime is the last time the contents of the location were synced into the cluster.
        :param str phase: Phase is the current state of the BackupStorageLocation.
        """
        if access_mode is not None:
            pulumi.set(__self__, "access_mode", access_mode)
        if last_synced_revision is not None:
            pulumi.set(__self__, "last_synced_revision", last_synced_revision)
        if last_synced_time is not None:
            pulumi.set(__self__, "last_synced_time", last_synced_time)
        if phase is not None:
            pulumi.set(__self__, "phase", phase)

    @property
    @pulumi.getter(name="accessMode")
    def access_mode(self) -> Optional[str]:
        """
        AccessMode is an unused field.
        Deprecated: there is now an AccessMode field on the Spec and this field will be removed entirely as of v2.0.
        """
        return pulumi.get(self, "access_mode")

    @property
    @pulumi.getter(name="lastSyncedRevision")
    def last_synced_revision(self) -> Optional[str]:
        """
        LastSyncedRevision is the value of the `metadata/revision` file in the backup storage location the last time the BSL's contents were synced into the cluster.
        Deprecated: this field is no longer updated or used for detecting changes to the location's contents and will be removed entirely in v2.0.
        """
        return pulumi.get(self, "last_synced_revision")

    @property
    @pulumi.getter(name="lastSyncedTime")
    def last_synced_time(self) -> Optional[str]:
        """
        LastSyncedTime is the last time the contents of the location were synced into the cluster.
        """
        return pulumi.get(self, "last_synced_time")

    @property
    @pulumi.getter
    def phase(self) -> Optional[str]:
        """
        Phase is the current state of the BackupStorageLocation.
        """
        return pulumi.get(self, "phase")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class DeleteBackupRequestSpec(dict):
    """
    DeleteBackupRequestSpec is the specification for which backups to delete.
    """
    def __init__(__self__, *,
                 backup_name: str):
        """
        DeleteBackupRequestSpec is the specification for which backups to delete.
        :param str backup_name: BackupName is the name of the backup to delete.
        """
        pulumi.set(__self__, "backup_name", backup_name)

    @property
    @pulumi.getter(name="backupName")
    def backup_name(self) -> str:
        """
        BackupName is the name of the backup to delete.
        """
        return pulumi.get(self, "backup_name")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class DeleteBackupRequestStatus(dict):
    """
    DeleteBackupRequestStatus is the current status of a DeleteBackupRequest.
    """
    def __init__(__self__, *,
                 errors: Optional[Sequence[str]] = None,
                 phase: Optional[str] = None):
        """
        DeleteBackupRequestStatus is the current status of a DeleteBackupRequest.
        :param Sequence[str] errors: Errors contains any errors that were encountered during the deletion process.
        :param str phase: Phase is the current state of the DeleteBackupRequest.
        """
        if errors is not None:
            pulumi.set(__self__, "errors", errors)
        if phase is not None:
            pulumi.set(__self__, "phase", phase)

    @property
    @pulumi.getter
    def errors(self) -> Optional[Sequence[str]]:
        """
        Errors contains any errors that were encountered during the deletion process.
        """
        return pulumi.get(self, "errors")

    @property
    @pulumi.getter
    def phase(self) -> Optional[str]:
        """
        Phase is the current state of the DeleteBackupRequest.
        """
        return pulumi.get(self, "phase")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class DownloadRequestSpec(dict):
    """
    DownloadRequestSpec is the specification for a download request.
    """
    def __init__(__self__, *,
                 target: 'outputs.DownloadRequestSpecTarget'):
        """
        DownloadRequestSpec is the specification for a download request.
        :param 'DownloadRequestSpecTargetArgs' target: Target is what to download (e.g. logs for a backup).
        """
        pulumi.set(__self__, "target", target)

    @property
    @pulumi.getter
    def target(self) -> 'outputs.DownloadRequestSpecTarget':
        """
        Target is what to download (e.g. logs for a backup).
        """
        return pulumi.get(self, "target")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class DownloadRequestSpecTarget(dict):
    """
    Target is what to download (e.g. logs for a backup).
    """
    def __init__(__self__, *,
                 kind: str,
                 name: str):
        """
        Target is what to download (e.g. logs for a backup).
        :param str kind: Kind is the type of file to download.
        :param str name: Name is the name of the kubernetes resource with which the file is associated.
        """
        pulumi.set(__self__, "kind", kind)
        pulumi.set(__self__, "name", name)

    @property
    @pulumi.getter
    def kind(self) -> str:
        """
        Kind is the type of file to download.
        """
        return pulumi.get(self, "kind")

    @property
    @pulumi.getter
    def name(self) -> str:
        """
        Name is the name of the kubernetes resource with which the file is associated.
        """
        return pulumi.get(self, "name")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class DownloadRequestStatus(dict):
    """
    DownloadRequestStatus is the current status of a DownloadRequest.
    """
    def __init__(__self__, *,
                 download_url: Optional[str] = None,
                 expiration: Optional[str] = None,
                 phase: Optional[str] = None):
        """
        DownloadRequestStatus is the current status of a DownloadRequest.
        :param str download_url: DownloadURL contains the pre-signed URL for the target file.
        :param str expiration: Expiration is when this DownloadRequest expires and can be deleted by the system.
        :param str phase: Phase is the current state of the DownloadRequest.
        """
        if download_url is not None:
            pulumi.set(__self__, "download_url", download_url)
        if expiration is not None:
            pulumi.set(__self__, "expiration", expiration)
        if phase is not None:
            pulumi.set(__self__, "phase", phase)

    @property
    @pulumi.getter(name="downloadURL")
    def download_url(self) -> Optional[str]:
        """
        DownloadURL contains the pre-signed URL for the target file.
        """
        return pulumi.get(self, "download_url")

    @property
    @pulumi.getter
    def expiration(self) -> Optional[str]:
        """
        Expiration is when this DownloadRequest expires and can be deleted by the system.
        """
        return pulumi.get(self, "expiration")

    @property
    @pulumi.getter
    def phase(self) -> Optional[str]:
        """
        Phase is the current state of the DownloadRequest.
        """
        return pulumi.get(self, "phase")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class PodVolumeBackupSpec(dict):
    """
    PodVolumeBackupSpec is the specification for a PodVolumeBackup.
    """
    def __init__(__self__, *,
                 backup_storage_location: str,
                 node: str,
                 pod: 'outputs.PodVolumeBackupSpecPod',
                 repo_identifier: str,
                 volume: str,
                 tags: Optional[Mapping[str, str]] = None):
        """
        PodVolumeBackupSpec is the specification for a PodVolumeBackup.
        :param str backup_storage_location: BackupStorageLocation is the name of the backup storage location where the restic repository is stored.
        :param str node: Node is the name of the node that the Pod is running on.
        :param 'PodVolumeBackupSpecPodArgs' pod: Pod is a reference to the pod containing the volume to be backed up.
        :param str repo_identifier: RepoIdentifier is the restic repository identifier.
        :param str volume: Volume is the name of the volume within the Pod to be backed up.
        :param Mapping[str, str] tags: Tags are a map of key-value pairs that should be applied to the volume backup as tags.
        """
        pulumi.set(__self__, "backup_storage_location", backup_storage_location)
        pulumi.set(__self__, "node", node)
        pulumi.set(__self__, "pod", pod)
        pulumi.set(__self__, "repo_identifier", repo_identifier)
        pulumi.set(__self__, "volume", volume)
        if tags is not None:
            pulumi.set(__self__, "tags", tags)

    @property
    @pulumi.getter(name="backupStorageLocation")
    def backup_storage_location(self) -> str:
        """
        BackupStorageLocation is the name of the backup storage location where the restic repository is stored.
        """
        return pulumi.get(self, "backup_storage_location")

    @property
    @pulumi.getter
    def node(self) -> str:
        """
        Node is the name of the node that the Pod is running on.
        """
        return pulumi.get(self, "node")

    @property
    @pulumi.getter
    def pod(self) -> 'outputs.PodVolumeBackupSpecPod':
        """
        Pod is a reference to the pod containing the volume to be backed up.
        """
        return pulumi.get(self, "pod")

    @property
    @pulumi.getter(name="repoIdentifier")
    def repo_identifier(self) -> str:
        """
        RepoIdentifier is the restic repository identifier.
        """
        return pulumi.get(self, "repo_identifier")

    @property
    @pulumi.getter
    def volume(self) -> str:
        """
        Volume is the name of the volume within the Pod to be backed up.
        """
        return pulumi.get(self, "volume")

    @property
    @pulumi.getter
    def tags(self) -> Optional[Mapping[str, str]]:
        """
        Tags are a map of key-value pairs that should be applied to the volume backup as tags.
        """
        return pulumi.get(self, "tags")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class PodVolumeBackupSpecPod(dict):
    """
    Pod is a reference to the pod containing the volume to be backed up.
    """
    def __init__(__self__, *,
                 api_version: Optional[str] = None,
                 field_path: Optional[str] = None,
                 kind: Optional[str] = None,
                 name: Optional[str] = None,
                 namespace: Optional[str] = None,
                 resource_version: Optional[str] = None,
                 uid: Optional[str] = None):
        """
        Pod is a reference to the pod containing the volume to be backed up.
        :param str api_version: API version of the referent.
        :param str field_path: If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
        :param str kind: Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
        :param str name: Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
        :param str namespace: Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
        :param str resource_version: Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
        :param str uid: UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
        """
        if api_version is not None:
            pulumi.set(__self__, "api_version", api_version)
        if field_path is not None:
            pulumi.set(__self__, "field_path", field_path)
        if kind is not None:
            pulumi.set(__self__, "kind", kind)
        if name is not None:
            pulumi.set(__self__, "name", name)
        if namespace is not None:
            pulumi.set(__self__, "namespace", namespace)
        if resource_version is not None:
            pulumi.set(__self__, "resource_version", resource_version)
        if uid is not None:
            pulumi.set(__self__, "uid", uid)

    @property
    @pulumi.getter(name="apiVersion")
    def api_version(self) -> Optional[str]:
        """
        API version of the referent.
        """
        return pulumi.get(self, "api_version")

    @property
    @pulumi.getter(name="fieldPath")
    def field_path(self) -> Optional[str]:
        """
        If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
        """
        return pulumi.get(self, "field_path")

    @property
    @pulumi.getter
    def kind(self) -> Optional[str]:
        """
        Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
        """
        return pulumi.get(self, "kind")

    @property
    @pulumi.getter
    def name(self) -> Optional[str]:
        """
        Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
        """
        return pulumi.get(self, "name")

    @property
    @pulumi.getter
    def namespace(self) -> Optional[str]:
        """
        Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
        """
        return pulumi.get(self, "namespace")

    @property
    @pulumi.getter(name="resourceVersion")
    def resource_version(self) -> Optional[str]:
        """
        Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
        """
        return pulumi.get(self, "resource_version")

    @property
    @pulumi.getter
    def uid(self) -> Optional[str]:
        """
        UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
        """
        return pulumi.get(self, "uid")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class PodVolumeBackupStatus(dict):
    """
    PodVolumeBackupStatus is the current status of a PodVolumeBackup.
    """
    def __init__(__self__, *,
                 completion_timestamp: Optional[str] = None,
                 message: Optional[str] = None,
                 path: Optional[str] = None,
                 phase: Optional[str] = None,
                 progress: Optional['outputs.PodVolumeBackupStatusProgress'] = None,
                 snapshot_id: Optional[str] = None,
                 start_timestamp: Optional[str] = None):
        """
        PodVolumeBackupStatus is the current status of a PodVolumeBackup.
        :param str completion_timestamp: CompletionTimestamp records the time a backup was completed. Completion time is recorded even on failed backups. Completion time is recorded before uploading the backup object. The server's time is used for CompletionTimestamps
        :param str message: Message is a message about the pod volume backup's status.
        :param str path: Path is the full path within the controller pod being backed up.
        :param 'PodVolumeBackupStatusProgressArgs' progress: Progress holds the total number of bytes of the volume and the current number of backed up bytes. This can be used to display progress information about the backup operation.
        :param str phase: Phase is the current state of the PodVolumeBackup.
        :param str snapshot_id: SnapshotID is the identifier for the snapshot of the pod volume.
        :param str start_timestamp: StartTimestamp records the time a backup was started. Separate from CreationTimestamp, since that value changes on restores. The server's time is used for StartTimestamps
        """
        if completion_timestamp is not None:
            pulumi.set(__self__, "completion_timestamp", completion_timestamp)
        if message is not None:
            pulumi.set(__self__, "message", message)
        if path is not None:
            pulumi.set(__self__, "path", path)
        if phase is not None:
            pulumi.set(__self__, "phase", phase)
        if progress is not None:
            pulumi.set(__self__, "progress", progress)
        if snapshot_id is not None:
            pulumi.set(__self__, "snapshot_id", snapshot_id)
        if start_timestamp is not None:
            pulumi.set(__self__, "start_timestamp", start_timestamp)

    @property
    @pulumi.getter(name="completionTimestamp")
    def completion_timestamp(self) -> Optional[str]:
        """
        CompletionTimestamp records the time a backup was completed. Completion time is recorded even on failed backups. Completion time is recorded before uploading the backup object. The server's time is used for CompletionTimestamps
        """
        return pulumi.get(self, "completion_timestamp")

    @property
    @pulumi.getter
    def message(self) -> Optional[str]:
        """
        Message is a message about the pod volume backup's status.
        """
        return pulumi.get(self, "message")

    @property
    @pulumi.getter
    def path(self) -> Optional[str]:
        """
        Path is the full path within the controller pod being backed up.
        """
        return pulumi.get(self, "path")

    @property
    @pulumi.getter
    def phase(self) -> Optional[str]:
        """
        Phase is the current state of the PodVolumeBackup.
        """
        return pulumi.get(self, "phase")

    @property
    @pulumi.getter
    def progress(self) -> Optional['outputs.PodVolumeBackupStatusProgress']:
        """
        Progress holds the total number of bytes of the volume and the current number of backed up bytes. This can be used to display progress information about the backup operation.
        """
        return pulumi.get(self, "progress")

    @property
    @pulumi.getter(name="snapshotID")
    def snapshot_id(self) -> Optional[str]:
        """
        SnapshotID is the identifier for the snapshot of the pod volume.
        """
        return pulumi.get(self, "snapshot_id")

    @property
    @pulumi.getter(name="startTimestamp")
    def start_timestamp(self) -> Optional[str]:
        """
        StartTimestamp records the time a backup was started. Separate from CreationTimestamp, since that value changes on restores. The server's time is used for StartTimestamps
        """
        return pulumi.get(self, "start_timestamp")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class PodVolumeBackupStatusProgress(dict):
    """
    Progress holds the total number of bytes of the volume and the current number of backed up bytes. This can be used to display progress information about the backup operation.
    """
    def __init__(__self__, *,
                 bytes_done: Optional[int] = None,
                 total_bytes: Optional[int] = None):
        """
        Progress holds the total number of bytes of the volume and the current number of backed up bytes. This can be used to display progress information about the backup operation.
        :param int bytes_done: BytesDone is the current number of backed up bytes for the volume.
        :param int total_bytes: TotalBytes is the total number of bytes of the volume.
        """
        if bytes_done is not None:
            pulumi.set(__self__, "bytes_done", bytes_done)
        if total_bytes is not None:
            pulumi.set(__self__, "total_bytes", total_bytes)

    @property
    @pulumi.getter(name="bytesDone")
    def bytes_done(self) -> Optional[int]:
        return pulumi.get(self, "bytes_done")

    @property
    @pulumi.getter(name="totalBytes")
    def total_bytes(self) -> Optional[int]:
        return pulumi.get(self, "total_bytes")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class PodVolumeRestoreSpec(dict):
    """
    PodVolumeRestoreSpec is the specification for a PodVolumeRestore.
    """
    def __init__(__self__, *,
                 backup_storage_location: str,
                 pod: 'outputs.PodVolumeRestoreSpecPod',
                 repo_identifier: str,
                 snapshot_id: str,
                 volume: str):
        """
        PodVolumeRestoreSpec is the specification for a PodVolumeRestore.
        :param str backup_storage_location: BackupStorageLocation is the name of the backup storage location where the restic repository is stored.
        :param 'PodVolumeRestoreSpecPodArgs' pod: Pod is a reference to the pod containing the volume to be restored.
        :param str repo_identifier: RepoIdentifier is the restic repository identifier.
        :param str snapshot_id: SnapshotID is the ID of the volume snapshot to be restored.
        :param str volume: Volume is the name of the volume within the Pod to be restored.
        """
        pulumi.set(__self__, "backup_storage_location", backup_storage_location)
        pulumi.set(__self__, "pod", pod)
        pulumi.set(__self__, "repo_identifier", repo_identifier)
        pulumi.set(__self__, "snapshot_id", snapshot_id)
        pulumi.set(__self__, "volume", volume)

    @property
    @pulumi.getter(name="backupStorageLocation")
    def backup_storage_location(self) -> str:
        """
        BackupStorageLocation is the name of the backup storage location where the restic repository is stored.
        """
        return pulumi.get(self, "backup_storage_location")

    @property
    @pulumi.getter
    def pod(self) -> 'outputs.PodVolumeRestoreSpecPod':
        """
        Pod is a reference to the pod containing the volume to be restored.
        """
        return pulumi.get(self, "pod")

    @property
    @pulumi.getter(name="repoIdentifier")
    def repo_identifier(self) -> str:
        """
        RepoIdentifier is the restic repository identifier.
        """
        return pulumi.get(self, "repo_identifier")

    @property
    @pulumi.getter(name="snapshotID")
    def snapshot_id(self) -> str:
        """
        SnapshotID is the ID of the volume snapshot to be restored.
        """
        return pulumi.get(self, "snapshot_id")

    @property
    @pulumi.getter
    def volume(self) -> str:
        """
        Volume is the name of the volume within the Pod to be restored.
        """
        return pulumi.get(self, "volume")

    def _translate_property(self, prop):
        return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class PodVolumeRestoreSpecPod(dict):
    """
    Pod is a reference to the pod containing the volume to be restored.
    """
    def __init__(__self__, *,
                 api_version: Optional[str] = None,
                 field_path: Optional[str] = None,
                 kind: Optional[str] = None,
                 name: Optional[str] = None,
                 namespace: Optional[str] = None,
                 resource_version: Optional[str] = None,
                 uid: Optional[str] = None):
        """
        Pod is a reference to the pod containing the volume to be restored.
        :param str api_version: API version of the referent.
        :param str field_path: If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
        :param str kind: Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
        :param str name: Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
        :param str namespace: Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
        :param str resource_version: Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
        :param str uid: UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
        """
        if api_version is not None:
            pulumi.set(__self__, "api_version", api_version)
        if field_path is not None:
            pulumi.set(__self__, "field_path", field_path)
        if kind is not None:
            pulumi.set(__self__, "kind", kind)
        if name is not None:
            pulumi.set(__self__, "name", name)
        if namespace is not None:
            pulumi.set(__self__, "namespace", namespace)
        if resource_version is not None:
            pulumi.set(__self__, "resource_version", resource_version)
        if uid is not None:
            pulumi.set(__self__, "uid", uid)

    @property
    @pulumi.getter(name="apiVersion")
    def api_version(self) -> Optional[str]:
        """
        API version of the referent.
        """
        return pulumi.get(self, "api_version")

    @property
    @pulumi.getter(name="fieldPath")
    def field_path(self) -> Optional[str]:
        """
        If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
        """
        return pulumi.get(self, "field_path")

    @property
    @pulumi.getter
    def kind(self) -> Optional[str]:
        """
        Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
        """
@property
@pulumi.getter
def name(self) -> Optional[str]:
"""
Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def namespace(self) -> Optional[str]:
"""
Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
"""
return pulumi.get(self, "namespace")
@property
@pulumi.getter(name="resourceVersion")
def resource_version(self) -> Optional[str]:
"""
Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
"""
return pulumi.get(self, "resource_version")
@property
@pulumi.getter
def uid(self) -> Optional[str]:
"""
UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
"""
return pulumi.get(self, "uid")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
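The `_translate_property` helper above maps the camelCase wire names of the CRD to the snake_case attribute names of these classes. A minimal sketch of that lookup, using a hypothetical three-entry table (the real table lives in `_tables.CAMEL_TO_SNAKE_CASE_TABLE`):

```python
# Hypothetical mini version of the translation table; the generated
# module keeps the full table in _tables.CAMEL_TO_SNAKE_CASE_TABLE.
CAMEL_TO_SNAKE = {
    "apiVersion": "api_version",
    "fieldPath": "field_path",
    "resourceVersion": "resource_version",
}

def translate_property(prop: str) -> str:
    # Mirrors _translate_property: fall back to the name unchanged
    # when it has no camelCase mapping (e.g. "kind", "name").
    return CAMEL_TO_SNAKE.get(prop) or prop

print(translate_property("apiVersion"))  # api_version
print(translate_property("kind"))        # kind
```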
@pulumi.output_type
class PodVolumeRestoreStatus(dict):
"""
PodVolumeRestoreStatus is the current status of a PodVolumeRestore.
"""
def __init__(__self__, *,
completion_timestamp: Optional[str] = None,
errors: Optional[int] = None,
message: Optional[str] = None,
phase: Optional[str] = None,
progress: Optional['outputs.PodVolumeRestoreStatusProgress'] = None,
restic_pod: Optional[str] = None,
start_timestamp: Optional[str] = None,
verify_errors: Optional[int] = None):
"""
PodVolumeRestoreStatus is the current status of a PodVolumeRestore.
:param str completion_timestamp: CompletionTimestamp records the time a restore was completed. Completion time is recorded even on failed restores. The server's time is used for CompletionTimestamps
:param int errors: Errors is a count of all error messages that were generated during execution of the pod volume restore. The actual errors are in the restic log
:param str message: Message is a message about the pod volume restore's status.
:param str phase: Phase is the current state of the PodVolumeRestore.
:param 'PodVolumeRestoreStatusProgressArgs' progress: Progress holds the total number of bytes of the snapshot and the current number of restored bytes. This can be used to display progress information about the restore operation.
:param str restic_pod: ResticPod is the name of the restic pod which processed the restore. Any errors referenced in Errors or VerifyErrors will be logged in this pod's log.
:param str start_timestamp: StartTimestamp records the time a restore was started. The server's time is used for StartTimestamps
:param int verify_errors: VerifyErrors is a count of all verification-related error messages that were generated during execution of the pod volume restore. The actual errors are in the restic log
"""
if completion_timestamp is not None:
pulumi.set(__self__, "completion_timestamp", completion_timestamp)
if errors is not None:
pulumi.set(__self__, "errors", errors)
if message is not None:
pulumi.set(__self__, "message", message)
if phase is not None:
pulumi.set(__self__, "phase", phase)
if progress is not None:
pulumi.set(__self__, "progress", progress)
if restic_pod is not None:
pulumi.set(__self__, "restic_pod", restic_pod)
if start_timestamp is not None:
pulumi.set(__self__, "start_timestamp", start_timestamp)
if verify_errors is not None:
pulumi.set(__self__, "verify_errors", verify_errors)
@property
@pulumi.getter(name="completionTimestamp")
def completion_timestamp(self) -> Optional[str]:
"""
CompletionTimestamp records the time a restore was completed. Completion time is recorded even on failed restores. The server's time is used for CompletionTimestamps
"""
return pulumi.get(self, "completion_timestamp")
@property
@pulumi.getter
def errors(self) -> Optional[int]:
"""
Errors is a count of all error messages that were generated during execution of the pod volume restore. The actual errors are in the restic log
"""
return pulumi.get(self, "errors")
@property
@pulumi.getter
def message(self) -> Optional[str]:
"""
Message is a message about the pod volume restore's status.
"""
return pulumi.get(self, "message")
@property
@pulumi.getter
def phase(self) -> Optional[str]:
"""
Phase is the current state of the PodVolumeRestore.
"""
return pulumi.get(self, "phase")
@property
@pulumi.getter
def progress(self) -> Optional['outputs.PodVolumeRestoreStatusProgress']:
"""
Progress holds the total number of bytes of the snapshot and the current number of restored bytes. This can be used to display progress information about the restore operation.
"""
return pulumi.get(self, "progress")
@property
@pulumi.getter(name="resticPod")
def restic_pod(self) -> Optional[str]:
"""
ResticPod is the name of the restic pod which processed the restore. Any errors referenced in Errors or VerifyErrors will be logged in this pod's log.
"""
return pulumi.get(self, "restic_pod")
@property
@pulumi.getter(name="startTimestamp")
def start_timestamp(self) -> Optional[str]:
"""
StartTimestamp records the time a restore was started. The server's time is used for StartTimestamps
"""
return pulumi.get(self, "start_timestamp")
@property
@pulumi.getter(name="verifyErrors")
def verify_errors(self) -> Optional[int]:
"""
VerifyErrors is a count of all verification-related error messages that were generated during execution of the pod volume restore. The actual errors are in the restic log
"""
return pulumi.get(self, "verify_errors")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class PodVolumeRestoreStatusProgress(dict):
"""
Progress holds the total number of bytes of the snapshot and the current number of restored bytes. This can be used to display progress information about the restore operation.
"""
def __init__(__self__, *,
bytes_done: Optional[int] = None,
total_bytes: Optional[int] = None):
"""
Progress holds the total number of bytes of the snapshot and the current number of restored bytes. This can be used to display progress information about the restore operation.
"""
if bytes_done is not None:
pulumi.set(__self__, "bytes_done", bytes_done)
if total_bytes is not None:
pulumi.set(__self__, "total_bytes", total_bytes)
@property
@pulumi.getter(name="bytesDone")
def bytes_done(self) -> Optional[int]:
return pulumi.get(self, "bytes_done")
@property
@pulumi.getter(name="totalBytes")
def total_bytes(self) -> Optional[int]:
return pulumi.get(self, "total_bytes")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
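`PodVolumeRestoreStatusProgress` exposes `bytes_done` and `total_bytes` so callers can render progress. A sketch of how a consumer might turn those two optional counters into a percentage (plain ints stand in for the output type; the guard reflects that both fields may be absent):

```python
def restore_percent(bytes_done, total_bytes):
    # Both fields are Optional[int]; treat a missing or zero total as
    # "progress unknown" rather than dividing by zero.
    if not total_bytes:
        return None
    return round(100.0 * (bytes_done or 0) / total_bytes, 1)

print(restore_percent(512, 2048))  # 25.0
print(restore_percent(100, None))  # None
```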
@pulumi.output_type
class ResticRepositorySpec(dict):
"""
ResticRepositorySpec is the specification for a ResticRepository.
"""
def __init__(__self__, *,
backup_storage_location: str,
maintenance_frequency: str,
restic_identifier: str,
volume_namespace: str):
"""
ResticRepositorySpec is the specification for a ResticRepository.
:param str backup_storage_location: BackupStorageLocation is the name of the BackupStorageLocation that should contain this repository.
:param str maintenance_frequency: MaintenanceFrequency is how often maintenance should be run.
:param str restic_identifier: ResticIdentifier is the full restic-compatible string for identifying this repository.
:param str volume_namespace: VolumeNamespace is the namespace this restic repository contains pod volume backups for.
"""
pulumi.set(__self__, "backup_storage_location", backup_storage_location)
pulumi.set(__self__, "maintenance_frequency", maintenance_frequency)
pulumi.set(__self__, "restic_identifier", restic_identifier)
pulumi.set(__self__, "volume_namespace", volume_namespace)
@property
@pulumi.getter(name="backupStorageLocation")
def backup_storage_location(self) -> str:
"""
BackupStorageLocation is the name of the BackupStorageLocation that should contain this repository.
"""
return pulumi.get(self, "backup_storage_location")
@property
@pulumi.getter(name="maintenanceFrequency")
def maintenance_frequency(self) -> str:
"""
MaintenanceFrequency is how often maintenance should be run.
"""
return pulumi.get(self, "maintenance_frequency")
@property
@pulumi.getter(name="resticIdentifier")
def restic_identifier(self) -> str:
"""
ResticIdentifier is the full restic-compatible string for identifying this repository.
"""
return pulumi.get(self, "restic_identifier")
@property
@pulumi.getter(name="volumeNamespace")
def volume_namespace(self) -> str:
"""
VolumeNamespace is the namespace this restic repository contains pod volume backups for.
"""
return pulumi.get(self, "volume_namespace")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ResticRepositoryStatus(dict):
"""
ResticRepositoryStatus is the current status of a ResticRepository.
"""
def __init__(__self__, *,
last_maintenance_time: Optional[str] = None,
message: Optional[str] = None,
phase: Optional[str] = None):
"""
ResticRepositoryStatus is the current status of a ResticRepository.
:param str last_maintenance_time: LastMaintenanceTime is the last time maintenance was run.
:param str message: Message is a message about the current status of the ResticRepository.
:param str phase: Phase is the current state of the ResticRepository.
"""
if last_maintenance_time is not None:
pulumi.set(__self__, "last_maintenance_time", last_maintenance_time)
if message is not None:
pulumi.set(__self__, "message", message)
if phase is not None:
pulumi.set(__self__, "phase", phase)
@property
@pulumi.getter(name="lastMaintenanceTime")
def last_maintenance_time(self) -> Optional[str]:
"""
LastMaintenanceTime is the last time maintenance was run.
"""
return pulumi.get(self, "last_maintenance_time")
@property
@pulumi.getter
def message(self) -> Optional[str]:
"""
Message is a message about the current status of the ResticRepository.
"""
return pulumi.get(self, "message")
@property
@pulumi.getter
def phase(self) -> Optional[str]:
"""
Phase is the current state of the ResticRepository.
"""
return pulumi.get(self, "phase")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class RestoreSpec(dict):
"""
RestoreSpec defines the specification for a Velero restore.
"""
def __init__(__self__, *,
backup_name: str,
excluded_namespaces: Optional[Sequence[str]] = None,
excluded_resources: Optional[Sequence[str]] = None,
include_cluster_resources: Optional[bool] = None,
included_namespaces: Optional[Sequence[str]] = None,
included_resources: Optional[Sequence[str]] = None,
label_selector: Optional['outputs.RestoreSpecLabelSelector'] = None,
namespace_mapping: Optional[Mapping[str, str]] = None,
restore_pvs: Optional[bool] = None,
schedule_name: Optional[str] = None):
"""
RestoreSpec defines the specification for a Velero restore.
:param str backup_name: BackupName is the unique name of the Velero backup to restore from.
:param Sequence[str] excluded_namespaces: ExcludedNamespaces contains a list of namespaces that are not included in the restore.
:param Sequence[str] excluded_resources: ExcludedResources is a slice of resource names that are not included in the restore.
:param bool include_cluster_resources: IncludeClusterResources specifies whether cluster-scoped resources should be included for consideration in the restore. If null, defaults to true.
:param Sequence[str] included_namespaces: IncludedNamespaces is a slice of namespace names to include objects from. If empty, all namespaces are included.
:param Sequence[str] included_resources: IncludedResources is a slice of resource names to include in the restore. If empty, all resources in the backup are included.
:param 'RestoreSpecLabelSelectorArgs' label_selector: LabelSelector is a metav1.LabelSelector to filter with when restoring individual objects from the backup. If empty or nil, all objects are included. Optional.
:param Mapping[str, str] namespace_mapping: NamespaceMapping is a map of source namespace names to target namespace names to restore into. Any source namespaces not included in the map will be restored into namespaces of the same name.
:param bool restore_pvs: RestorePVs specifies whether to restore all included PVs from snapshot (via the cloudprovider).
:param str schedule_name: ScheduleName is the unique name of the Velero schedule to restore from. If specified, and BackupName is empty, Velero will restore from the most recent successful backup created from this schedule.
"""
pulumi.set(__self__, "backup_name", backup_name)
if excluded_namespaces is not None:
pulumi.set(__self__, "excluded_namespaces", excluded_namespaces)
if excluded_resources is not None:
pulumi.set(__self__, "excluded_resources", excluded_resources)
if include_cluster_resources is not None:
pulumi.set(__self__, "include_cluster_resources", include_cluster_resources)
if included_namespaces is not None:
pulumi.set(__self__, "included_namespaces", included_namespaces)
if included_resources is not None:
pulumi.set(__self__, "included_resources", included_resources)
if label_selector is not None:
pulumi.set(__self__, "label_selector", label_selector)
if namespace_mapping is not None:
pulumi.set(__self__, "namespace_mapping", namespace_mapping)
if restore_pvs is not None:
pulumi.set(__self__, "restore_pvs", restore_pvs)
if schedule_name is not None:
pulumi.set(__self__, "schedule_name", schedule_name)
@property
@pulumi.getter(name="backupName")
def backup_name(self) -> str:
"""
BackupName is the unique name of the Velero backup to restore from.
"""
return pulumi.get(self, "backup_name")
@property
@pulumi.getter(name="excludedNamespaces")
def excluded_namespaces(self) -> Optional[Sequence[str]]:
"""
ExcludedNamespaces contains a list of namespaces that are not included in the restore.
"""
return pulumi.get(self, "excluded_namespaces")
@property
@pulumi.getter(name="excludedResources")
def excluded_resources(self) -> Optional[Sequence[str]]:
"""
ExcludedResources is a slice of resource names that are not included in the restore.
"""
return pulumi.get(self, "excluded_resources")
@property
@pulumi.getter(name="includeClusterResources")
def include_cluster_resources(self) -> Optional[bool]:
"""
IncludeClusterResources specifies whether cluster-scoped resources should be included for consideration in the restore. If null, defaults to true.
"""
return pulumi.get(self, "include_cluster_resources")
@property
@pulumi.getter(name="includedNamespaces")
def included_namespaces(self) -> Optional[Sequence[str]]:
"""
IncludedNamespaces is a slice of namespace names to include objects from. If empty, all namespaces are included.
"""
return pulumi.get(self, "included_namespaces")
@property
@pulumi.getter(name="includedResources")
def included_resources(self) -> Optional[Sequence[str]]:
"""
IncludedResources is a slice of resource names to include in the restore. If empty, all resources in the backup are included.
"""
return pulumi.get(self, "included_resources")
@property
@pulumi.getter(name="labelSelector")
def label_selector(self) -> Optional['outputs.RestoreSpecLabelSelector']:
"""
LabelSelector is a metav1.LabelSelector to filter with when restoring individual objects from the backup. If empty or nil, all objects are included. Optional.
"""
return pulumi.get(self, "label_selector")
@property
@pulumi.getter(name="namespaceMapping")
def namespace_mapping(self) -> Optional[Mapping[str, str]]:
"""
NamespaceMapping is a map of source namespace names to target namespace names to restore into. Any source namespaces not included in the map will be restored into namespaces of the same name.
"""
return pulumi.get(self, "namespace_mapping")
@property
@pulumi.getter(name="restorePVs")
def restore_pvs(self) -> Optional[bool]:
"""
RestorePVs specifies whether to restore all included PVs from snapshot (via the cloudprovider).
"""
return pulumi.get(self, "restore_pvs")
@property
@pulumi.getter(name="scheduleName")
def schedule_name(self) -> Optional[str]:
"""
ScheduleName is the unique name of the Velero schedule to restore from. If specified, and BackupName is empty, Velero will restore from the most recent successful backup created from this schedule.
"""
return pulumi.get(self, "schedule_name")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
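On the wire, a `RestoreSpec` is a camelCase JSON object. A sketch of that shape as a plain dict (the backup name and namespaces are made-up placeholders), plus the `NamespaceMapping` rule from the docstring above: source namespaces absent from the map restore under their original name.

```python
# camelCase wire form of a RestoreSpec as it would appear in the CRD
# JSON; values here are illustrative placeholders.
restore_spec = {
    "backupName": "nightly-backup",
    "includedNamespaces": ["web", "db"],
    "namespaceMapping": {"web": "web-restored"},
    "restorePVs": True,
}

def target_namespace(source, mapping):
    # Per NamespaceMapping: unmapped source namespaces keep their name.
    return mapping.get(source, source)

print(target_namespace("web", restore_spec["namespaceMapping"]))  # web-restored
print(target_namespace("db", restore_spec["namespaceMapping"]))   # db
```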
@pulumi.output_type
class RestoreSpecLabelSelector(dict):
"""
LabelSelector is a metav1.LabelSelector to filter with when restoring individual objects from the backup. If empty or nil, all objects are included. Optional.
"""
def __init__(__self__, *,
match_expressions: Optional[Sequence['outputs.RestoreSpecLabelSelectorMatchExpressions']] = None,
match_labels: Optional[Mapping[str, str]] = None):
"""
LabelSelector is a metav1.LabelSelector to filter with when restoring individual objects from the backup. If empty or nil, all objects are included. Optional.
:param Sequence['RestoreSpecLabelSelectorMatchExpressionsArgs'] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.
:param Mapping[str, str] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
if match_expressions is not None:
pulumi.set(__self__, "match_expressions", match_expressions)
if match_labels is not None:
pulumi.set(__self__, "match_labels", match_labels)
@property
@pulumi.getter(name="matchExpressions")
def match_expressions(self) -> Optional[Sequence['outputs.RestoreSpecLabelSelectorMatchExpressions']]:
"""
matchExpressions is a list of label selector requirements. The requirements are ANDed.
"""
return pulumi.get(self, "match_expressions")
@property
@pulumi.getter(name="matchLabels")
def match_labels(self) -> Optional[Mapping[str, str]]:
"""
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
return pulumi.get(self, "match_labels")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class RestoreSpecLabelSelectorMatchExpressions(dict):
"""
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
"""
def __init__(__self__, *,
key: str,
operator: str,
values: Optional[Sequence[str]] = None):
"""
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
:param str key: key is the label key that the selector applies to.
:param str operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
:param Sequence[str] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
"""
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "operator", operator)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def key(self) -> str:
"""
key is the label key that the selector applies to.
"""
return pulumi.get(self, "key")
@property
@pulumi.getter
def operator(self) -> str:
"""
operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
"""
return pulumi.get(self, "operator")
@property
@pulumi.getter
def values(self) -> Optional[Sequence[str]]:
"""
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
"""
return pulumi.get(self, "values")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
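The label-selector docstrings above describe the matching semantics: `matchLabels` entries and every `matchExpressions` requirement are ANDed. A sketch of that evaluation over plain dicts (not part of the generated SDK, which only carries the data):

```python
def matches(labels, match_labels=None, match_expressions=None):
    # matchLabels entries and all matchExpressions requirements are ANDed.
    for key, value in (match_labels or {}).items():
        if labels.get(key) != value:
            return False
    for req in match_expressions or ():
        key, op = req["key"], req["operator"]
        values = req.get("values", [])
        # Valid operators are In, NotIn, Exists and DoesNotExist.
        if op == "In" and labels.get(key) not in values:
            return False
        if op == "NotIn" and labels.get(key) in values:
            return False
        if op == "Exists" and key not in labels:
            return False
        if op == "DoesNotExist" and key in labels:
            return False
    return True

pod_labels = {"app": "web", "tier": "frontend"}
print(matches(pod_labels, match_labels={"app": "web"}))  # True
print(matches(pod_labels, match_expressions=[
    {"key": "tier", "operator": "In", "values": ["backend"]}]))  # False
```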
@pulumi.output_type
class RestoreStatus(dict):
"""
RestoreStatus captures the current status of a Velero restore
"""
def __init__(__self__, *,
errors: Optional[int] = None,
failure_reason: Optional[str] = None,
phase: Optional[str] = None,
pod_volume_restore_errors: Optional[Sequence['outputs.RestoreStatusPodVolumeRestoreErrors']] = None,
pod_volume_restore_verify_errors: Optional[Sequence['outputs.RestoreStatusPodVolumeRestoreVerifyErrors']] = None,
validation_errors: Optional[Sequence[str]] = None,
warnings: Optional[int] = None):
"""
RestoreStatus captures the current status of a Velero restore
:param int errors: Errors is a count of all error messages that were generated during execution of the restore. The actual errors are stored in object storage.
:param str failure_reason: FailureReason is an error that caused the entire restore to fail.
:param str phase: Phase is the current state of the Restore
:param Sequence['RestoreStatusPodVolumeRestoreErrorsArgs'] pod_volume_restore_errors: PodVolumeRestoreErrors is a slice of all PodVolumeRestores with errors (errors encountered by restic when restoring a pod) (if applicable)
:param Sequence['RestoreStatusPodVolumeRestoreVerifyErrorsArgs'] pod_volume_restore_verify_errors: PodVolumeRestoreVerifyErrors is a slice of all PodVolumeRestore errors from restore verification (errors encountered by restic when verifying a pod restore) (if applicable)
:param Sequence[str] validation_errors: ValidationErrors is a slice of all validation errors (if applicable)
:param int warnings: Warnings is a count of all warning messages that were generated during execution of the restore. The actual warnings are stored in object storage.
"""
if errors is not None:
pulumi.set(__self__, "errors", errors)
if failure_reason is not None:
pulumi.set(__self__, "failure_reason", failure_reason)
if phase is not None:
pulumi.set(__self__, "phase", phase)
if pod_volume_restore_errors is not None:
pulumi.set(__self__, "pod_volume_restore_errors", pod_volume_restore_errors)
if pod_volume_restore_verify_errors is not None:
pulumi.set(__self__, "pod_volume_restore_verify_errors", pod_volume_restore_verify_errors)
if validation_errors is not None:
pulumi.set(__self__, "validation_errors", validation_errors)
if warnings is not None:
pulumi.set(__self__, "warnings", warnings)
@property
@pulumi.getter
def errors(self) -> Optional[int]:
"""
Errors is a count of all error messages that were generated during execution of the restore. The actual errors are stored in object storage.
"""
return pulumi.get(self, "errors")
@property
@pulumi.getter(name="failureReason")
def failure_reason(self) -> Optional[str]:
"""
FailureReason is an error that caused the entire restore to fail.
"""
return pulumi.get(self, "failure_reason")
@property
@pulumi.getter
def phase(self) -> Optional[str]:
"""
Phase is the current state of the Restore
"""
return pulumi.get(self, "phase")
@property
@pulumi.getter(name="podVolumeRestoreErrors")
def pod_volume_restore_errors(self) -> Optional[Sequence['outputs.RestoreStatusPodVolumeRestoreErrors']]:
"""
PodVolumeRestoreErrors is a slice of all PodVolumeRestores with errors (errors encountered by restic when restoring a pod) (if applicable)
"""
return pulumi.get(self, "pod_volume_restore_errors")
@property
@pulumi.getter(name="podVolumeRestoreVerifyErrors")
def pod_volume_restore_verify_errors(self) -> Optional[Sequence['outputs.RestoreStatusPodVolumeRestoreVerifyErrors']]:
"""
PodVolumeRestoreVerifyErrors is a slice of all PodVolumeRestore errors from restore verification (errors encountered by restic when verifying a pod restore) (if applicable)
"""
return pulumi.get(self, "pod_volume_restore_verify_errors")
@property
@pulumi.getter(name="validationErrors")
def validation_errors(self) -> Optional[Sequence[str]]:
"""
ValidationErrors is a slice of all validation errors (if applicable)
"""
return pulumi.get(self, "validation_errors")
@property
@pulumi.getter
def warnings(self) -> Optional[int]:
"""
Warnings is a count of all warning messages that were generated during execution of the restore. The actual warnings are stored in object storage.
"""
return pulumi.get(self, "warnings")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class RestoreStatusPodVolumeRestoreErrors(dict):
"""
ObjectReference contains enough information to let you inspect or modify the referred object.
"""
def __init__(__self__, *,
api_version: Optional[str] = None,
field_path: Optional[str] = None,
kind: Optional[str] = None,
name: Optional[str] = None,
namespace: Optional[str] = None,
resource_version: Optional[str] = None,
uid: Optional[str] = None):
"""
ObjectReference contains enough information to let you inspect or modify the referred object.
:param str api_version: API version of the referent.
:param str field_path: If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
:param str kind: Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
:param str name: Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
:param str namespace: Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
:param str resource_version: Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
:param str uid: UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
"""
if api_version is not None:
pulumi.set(__self__, "api_version", api_version)
if field_path is not None:
pulumi.set(__self__, "field_path", field_path)
if kind is not None:
pulumi.set(__self__, "kind", kind)
if name is not None:
pulumi.set(__self__, "name", name)
if namespace is not None:
pulumi.set(__self__, "namespace", namespace)
if resource_version is not None:
pulumi.set(__self__, "resource_version", resource_version)
if uid is not None:
pulumi.set(__self__, "uid", uid)
@property
@pulumi.getter(name="apiVersion")
def api_version(self) -> Optional[str]:
"""
API version of the referent.
"""
return pulumi.get(self, "api_version")
@property
@pulumi.getter(name="fieldPath")
def field_path(self) -> Optional[str]:
"""
If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
"""
return pulumi.get(self, "field_path")
@property
@pulumi.getter
def kind(self) -> Optional[str]:
"""
Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
"""
return pulumi.get(self, "kind")
@property
@pulumi.getter
def name(self) -> Optional[str]:
"""
Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def namespace(self) -> Optional[str]:
"""
Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
"""
return pulumi.get(self, "namespace")
@property
@pulumi.getter(name="resourceVersion")
def resource_version(self) -> Optional[str]:
"""
Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
"""
return pulumi.get(self, "resource_version")
@property
@pulumi.getter
def uid(self) -> Optional[str]:
"""
UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
"""
return pulumi.get(self, "uid")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class RestoreStatusPodVolumeRestoreVerifyErrors(dict):
"""
ObjectReference contains enough information to let you inspect or modify the referred object.
"""
def __init__(__self__, *,
api_version: Optional[str] = None,
field_path: Optional[str] = None,
kind: Optional[str] = None,
name: Optional[str] = None,
namespace: Optional[str] = None,
resource_version: Optional[str] = None,
uid: Optional[str] = None):
"""
ObjectReference contains enough information to let you inspect or modify the referred object.
:param str api_version: API version of the referent.
:param str field_path: If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
:param str kind: Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
:param str name: Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
:param str namespace: Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
:param str resource_version: Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
:param str uid: UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
"""
if api_version is not None:
pulumi.set(__self__, "api_version", api_version)
if field_path is not None:
pulumi.set(__self__, "field_path", field_path)
if kind is not None:
pulumi.set(__self__, "kind", kind)
if name is not None:
pulumi.set(__self__, "name", name)
if namespace is not None:
pulumi.set(__self__, "namespace", namespace)
if resource_version is not None:
pulumi.set(__self__, "resource_version", resource_version)
if uid is not None:
pulumi.set(__self__, "uid", uid)
@property
@pulumi.getter(name="apiVersion")
def api_version(self) -> Optional[str]:
"""
API version of the referent.
"""
return pulumi.get(self, "api_version")
@property
@pulumi.getter(name="fieldPath")
def field_path(self) -> Optional[str]:
"""
If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or, if no container name is specified, "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.
"""
return pulumi.get(self, "field_path")
@property
@pulumi.getter
def kind(self) -> Optional[str]:
"""
Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
"""
return pulumi.get(self, "kind")
@property
@pulumi.getter
def name(self) -> Optional[str]:
"""
Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
"""
return pulumi.get(self, "name")
@property
@pulumi.getter
def namespace(self) -> Optional[str]:
"""
Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
"""
return pulumi.get(self, "namespace")
@property
@pulumi.getter(name="resourceVersion")
def resource_version(self) -> Optional[str]:
"""
Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
"""
return pulumi.get(self, "resource_version")
@property
@pulumi.getter
def uid(self) -> Optional[str]:
"""
UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
"""
return pulumi.get(self, "uid")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class ScheduleSpec(dict):
"""
ScheduleSpec defines the specification for a Velero schedule
"""
def __init__(__self__, *,
schedule: str,
template: 'outputs.ScheduleSpecTemplate'):
"""
ScheduleSpec defines the specification for a Velero schedule
:param str schedule: Schedule is a Cron expression defining when to run the Backup.
:param 'ScheduleSpecTemplateArgs' template: Template is the definition of the Backup to be run on the provided schedule
"""
pulumi.set(__self__, "schedule", schedule)
pulumi.set(__self__, "template", template)
@property
@pulumi.getter
def schedule(self) -> str:
"""
Schedule is a Cron expression defining when to run the Backup.
"""
return pulumi.get(self, "schedule")
@property
@pulumi.getter
def template(self) -> 'outputs.ScheduleSpecTemplate':
"""
Template is the definition of the Backup to be run on the provided schedule
"""
return pulumi.get(self, "template")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class ScheduleSpecTemplate(dict):
"""
Template is the definition of the Backup to be run on the provided schedule
"""
def __init__(__self__, *,
excluded_namespaces: Optional[Sequence[str]] = None,
excluded_resources: Optional[Sequence[str]] = None,
hooks: Optional['outputs.ScheduleSpecTemplateHooks'] = None,
include_cluster_resources: Optional[bool] = None,
included_namespaces: Optional[Sequence[str]] = None,
included_resources: Optional[Sequence[str]] = None,
label_selector: Optional['outputs.ScheduleSpecTemplateLabelSelector'] = None,
snapshot_volumes: Optional[bool] = None,
storage_location: Optional[str] = None,
ttl: Optional[str] = None,
volume_snapshot_locations: Optional[Sequence[str]] = None):
"""
Template is the definition of the Backup to be run on the provided schedule
:param Sequence[str] excluded_namespaces: ExcludedNamespaces contains a list of namespaces that are not included in the backup.
:param Sequence[str] excluded_resources: ExcludedResources is a slice of resource names that are not included in the backup.
:param 'ScheduleSpecTemplateHooksArgs' hooks: Hooks represent custom behaviors that should be executed at different phases of the backup.
:param bool include_cluster_resources: IncludeClusterResources specifies whether cluster-scoped resources should be included for consideration in the backup.
:param Sequence[str] included_namespaces: IncludedNamespaces is a slice of namespace names to include objects from. If empty, all namespaces are included.
:param Sequence[str] included_resources: IncludedResources is a slice of resource names to include in the backup. If empty, all resources are included.
:param 'ScheduleSpecTemplateLabelSelectorArgs' label_selector: LabelSelector is a metav1.LabelSelector to filter with when adding individual objects to the backup. If empty or nil, all objects are included. Optional.
:param bool snapshot_volumes: SnapshotVolumes specifies whether to take cloud snapshots of any PVs referenced in the set of objects included in the Backup.
:param str storage_location: StorageLocation is a string containing the name of a BackupStorageLocation where the backup should be stored.
:param str ttl: TTL is a time.Duration-parseable string describing how long the Backup should be retained for.
:param Sequence[str] volume_snapshot_locations: VolumeSnapshotLocations is a list containing names of VolumeSnapshotLocations associated with this backup.
"""
if excluded_namespaces is not None:
pulumi.set(__self__, "excluded_namespaces", excluded_namespaces)
if excluded_resources is not None:
pulumi.set(__self__, "excluded_resources", excluded_resources)
if hooks is not None:
pulumi.set(__self__, "hooks", hooks)
if include_cluster_resources is not None:
pulumi.set(__self__, "include_cluster_resources", include_cluster_resources)
if included_namespaces is not None:
pulumi.set(__self__, "included_namespaces", included_namespaces)
if included_resources is not None:
pulumi.set(__self__, "included_resources", included_resources)
if label_selector is not None:
pulumi.set(__self__, "label_selector", label_selector)
if snapshot_volumes is not None:
pulumi.set(__self__, "snapshot_volumes", snapshot_volumes)
if storage_location is not None:
pulumi.set(__self__, "storage_location", storage_location)
if ttl is not None:
pulumi.set(__self__, "ttl", ttl)
if volume_snapshot_locations is not None:
pulumi.set(__self__, "volume_snapshot_locations", volume_snapshot_locations)
@property
@pulumi.getter(name="excludedNamespaces")
def excluded_namespaces(self) -> Optional[Sequence[str]]:
"""
ExcludedNamespaces contains a list of namespaces that are not included in the backup.
"""
return pulumi.get(self, "excluded_namespaces")
@property
@pulumi.getter(name="excludedResources")
def excluded_resources(self) -> Optional[Sequence[str]]:
"""
ExcludedResources is a slice of resource names that are not included in the backup.
"""
return pulumi.get(self, "excluded_resources")
@property
@pulumi.getter
def hooks(self) -> Optional['outputs.ScheduleSpecTemplateHooks']:
"""
Hooks represent custom behaviors that should be executed at different phases of the backup.
"""
return pulumi.get(self, "hooks")
@property
@pulumi.getter(name="includeClusterResources")
def include_cluster_resources(self) -> Optional[bool]:
"""
IncludeClusterResources specifies whether cluster-scoped resources should be included for consideration in the backup.
"""
return pulumi.get(self, "include_cluster_resources")
@property
@pulumi.getter(name="includedNamespaces")
def included_namespaces(self) -> Optional[Sequence[str]]:
"""
IncludedNamespaces is a slice of namespace names to include objects from. If empty, all namespaces are included.
"""
return pulumi.get(self, "included_namespaces")
@property
@pulumi.getter(name="includedResources")
def included_resources(self) -> Optional[Sequence[str]]:
"""
IncludedResources is a slice of resource names to include in the backup. If empty, all resources are included.
"""
return pulumi.get(self, "included_resources")
@property
@pulumi.getter(name="labelSelector")
def label_selector(self) -> Optional['outputs.ScheduleSpecTemplateLabelSelector']:
"""
LabelSelector is a metav1.LabelSelector to filter with when adding individual objects to the backup. If empty or nil, all objects are included. Optional.
"""
return pulumi.get(self, "label_selector")
@property
@pulumi.getter(name="snapshotVolumes")
def snapshot_volumes(self) -> Optional[bool]:
"""
SnapshotVolumes specifies whether to take cloud snapshots of any PVs referenced in the set of objects included in the Backup.
"""
return pulumi.get(self, "snapshot_volumes")
@property
@pulumi.getter(name="storageLocation")
def storage_location(self) -> Optional[str]:
"""
StorageLocation is a string containing the name of a BackupStorageLocation where the backup should be stored.
"""
return pulumi.get(self, "storage_location")
@property
@pulumi.getter
def ttl(self) -> Optional[str]:
"""
TTL is a time.Duration-parseable string describing how long the Backup should be retained for.
"""
return pulumi.get(self, "ttl")
@property
@pulumi.getter(name="volumeSnapshotLocations")
def volume_snapshot_locations(self) -> Optional[Sequence[str]]:
"""
VolumeSnapshotLocations is a list containing names of VolumeSnapshotLocations associated with this backup.
"""
return pulumi.get(self, "volume_snapshot_locations")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class ScheduleSpecTemplateHooks(dict):
"""
Hooks represent custom behaviors that should be executed at different phases of the backup.
"""
def __init__(__self__, *,
resources: Optional[Sequence['outputs.ScheduleSpecTemplateHooksResources']] = None):
"""
Hooks represent custom behaviors that should be executed at different phases of the backup.
:param Sequence['ScheduleSpecTemplateHooksResourcesArgs'] resources: Resources are hooks that should be executed when backing up individual instances of a resource.
"""
if resources is not None:
pulumi.set(__self__, "resources", resources)
@property
@pulumi.getter
def resources(self) -> Optional[Sequence['outputs.ScheduleSpecTemplateHooksResources']]:
"""
Resources are hooks that should be executed when backing up individual instances of a resource.
"""
return pulumi.get(self, "resources")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class ScheduleSpecTemplateHooksResources(dict):
"""
BackupResourceHookSpec defines one or more BackupResourceHooks that should be executed based on the rules defined for namespaces, resources, and label selector.
"""
def __init__(__self__, *,
name: str,
excluded_namespaces: Optional[Sequence[str]] = None,
excluded_resources: Optional[Sequence[str]] = None,
included_namespaces: Optional[Sequence[str]] = None,
included_resources: Optional[Sequence[str]] = None,
label_selector: Optional['outputs.ScheduleSpecTemplateHooksResourcesLabelSelector'] = None,
post: Optional[Sequence['outputs.ScheduleSpecTemplateHooksResourcesPost']] = None,
pre: Optional[Sequence['outputs.ScheduleSpecTemplateHooksResourcesPre']] = None):
"""
BackupResourceHookSpec defines one or more BackupResourceHooks that should be executed based on the rules defined for namespaces, resources, and label selector.
:param str name: Name is the name of this hook.
:param Sequence[str] excluded_namespaces: ExcludedNamespaces specifies the namespaces to which this hook spec does not apply.
:param Sequence[str] excluded_resources: ExcludedResources specifies the resources to which this hook spec does not apply.
:param Sequence[str] included_namespaces: IncludedNamespaces specifies the namespaces to which this hook spec applies. If empty, it applies to all namespaces.
:param Sequence[str] included_resources: IncludedResources specifies the resources to which this hook spec applies. If empty, it applies to all resources.
:param 'ScheduleSpecTemplateHooksResourcesLabelSelectorArgs' label_selector: LabelSelector, if specified, filters the resources to which this hook spec applies.
:param Sequence['ScheduleSpecTemplateHooksResourcesPostArgs'] post: PostHooks is a list of BackupResourceHooks to execute after storing the item in the backup. These are executed after all "additional items" from item actions are processed.
:param Sequence['ScheduleSpecTemplateHooksResourcesPreArgs'] pre: PreHooks is a list of BackupResourceHooks to execute prior to storing the item in the backup. These are executed before any "additional items" from item actions are processed.
"""
pulumi.set(__self__, "name", name)
if excluded_namespaces is not None:
pulumi.set(__self__, "excluded_namespaces", excluded_namespaces)
if excluded_resources is not None:
pulumi.set(__self__, "excluded_resources", excluded_resources)
if included_namespaces is not None:
pulumi.set(__self__, "included_namespaces", included_namespaces)
if included_resources is not None:
pulumi.set(__self__, "included_resources", included_resources)
if label_selector is not None:
pulumi.set(__self__, "label_selector", label_selector)
if post is not None:
pulumi.set(__self__, "post", post)
if pre is not None:
pulumi.set(__self__, "pre", pre)
@property
@pulumi.getter
def name(self) -> str:
"""
Name is the name of this hook.
"""
return pulumi.get(self, "name")
@property
@pulumi.getter(name="excludedNamespaces")
def excluded_namespaces(self) -> Optional[Sequence[str]]:
"""
ExcludedNamespaces specifies the namespaces to which this hook spec does not apply.
"""
return pulumi.get(self, "excluded_namespaces")
@property
@pulumi.getter(name="excludedResources")
def excluded_resources(self) -> Optional[Sequence[str]]:
"""
ExcludedResources specifies the resources to which this hook spec does not apply.
"""
return pulumi.get(self, "excluded_resources")
@property
@pulumi.getter(name="includedNamespaces")
def included_namespaces(self) -> Optional[Sequence[str]]:
"""
IncludedNamespaces specifies the namespaces to which this hook spec applies. If empty, it applies to all namespaces.
"""
return pulumi.get(self, "included_namespaces")
@property
@pulumi.getter(name="includedResources")
def included_resources(self) -> Optional[Sequence[str]]:
"""
IncludedResources specifies the resources to which this hook spec applies. If empty, it applies to all resources.
"""
return pulumi.get(self, "included_resources")
@property
@pulumi.getter(name="labelSelector")
def label_selector(self) -> Optional['outputs.ScheduleSpecTemplateHooksResourcesLabelSelector']:
"""
LabelSelector, if specified, filters the resources to which this hook spec applies.
"""
return pulumi.get(self, "label_selector")
@property
@pulumi.getter
def post(self) -> Optional[Sequence['outputs.ScheduleSpecTemplateHooksResourcesPost']]:
"""
PostHooks is a list of BackupResourceHooks to execute after storing the item in the backup. These are executed after all "additional items" from item actions are processed.
"""
return pulumi.get(self, "post")
@property
@pulumi.getter
def pre(self) -> Optional[Sequence['outputs.ScheduleSpecTemplateHooksResourcesPre']]:
"""
PreHooks is a list of BackupResourceHooks to execute prior to storing the item in the backup. These are executed before any "additional items" from item actions are processed.
"""
return pulumi.get(self, "pre")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class ScheduleSpecTemplateHooksResourcesLabelSelector(dict):
"""
LabelSelector, if specified, filters the resources to which this hook spec applies.
"""
def __init__(__self__, *,
match_expressions: Optional[Sequence['outputs.ScheduleSpecTemplateHooksResourcesLabelSelectorMatchExpressions']] = None,
match_labels: Optional[Mapping[str, str]] = None):
"""
LabelSelector, if specified, filters the resources to which this hook spec applies.
:param Sequence['ScheduleSpecTemplateHooksResourcesLabelSelectorMatchExpressionsArgs'] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.
:param Mapping[str, str] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
if match_expressions is not None:
pulumi.set(__self__, "match_expressions", match_expressions)
if match_labels is not None:
pulumi.set(__self__, "match_labels", match_labels)
@property
@pulumi.getter(name="matchExpressions")
def match_expressions(self) -> Optional[Sequence['outputs.ScheduleSpecTemplateHooksResourcesLabelSelectorMatchExpressions']]:
"""
matchExpressions is a list of label selector requirements. The requirements are ANDed.
"""
return pulumi.get(self, "match_expressions")
@property
@pulumi.getter(name="matchLabels")
def match_labels(self) -> Optional[Mapping[str, str]]:
"""
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
return pulumi.get(self, "match_labels")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class ScheduleSpecTemplateHooksResourcesLabelSelectorMatchExpressions(dict):
"""
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
"""
def __init__(__self__, *,
key: str,
operator: str,
values: Optional[Sequence[str]] = None):
"""
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
:param str key: key is the label key that the selector applies to.
:param str operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
:param Sequence[str] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
"""
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "operator", operator)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def key(self) -> str:
"""
key is the label key that the selector applies to.
"""
return pulumi.get(self, "key")
@property
@pulumi.getter
def operator(self) -> str:
"""
operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
"""
return pulumi.get(self, "operator")
@property
@pulumi.getter
def values(self) -> Optional[Sequence[str]]:
"""
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
"""
return pulumi.get(self, "values")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class ScheduleSpecTemplateHooksResourcesPost(dict):
"""
BackupResourceHook defines a hook for a resource.
"""
def __init__(__self__, *,
exec_: 'outputs.ScheduleSpecTemplateHooksResourcesPostExec'):
"""
BackupResourceHook defines a hook for a resource.
:param 'ScheduleSpecTemplateHooksResourcesPostExecArgs' exec_: Exec defines an exec hook.
"""
pulumi.set(__self__, "exec_", exec_)
@property
@pulumi.getter(name="exec")
def exec_(self) -> 'outputs.ScheduleSpecTemplateHooksResourcesPostExec':
"""
Exec defines an exec hook.
"""
return pulumi.get(self, "exec_")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class ScheduleSpecTemplateHooksResourcesPostExec(dict):
"""
Exec defines an exec hook.
"""
def __init__(__self__, *,
command: Sequence[str],
container: Optional[str] = None,
on_error: Optional[str] = None,
timeout: Optional[str] = None):
"""
Exec defines an exec hook.
:param Sequence[str] command: Command is the command and arguments to execute.
:param str container: Container is the container in the pod where the command should be executed. If not specified, the pod's first container is used.
:param str on_error: OnError specifies how Velero should behave if it encounters an error executing this hook.
:param str timeout: Timeout defines the maximum amount of time Velero should wait for the hook to complete before considering the execution a failure.
"""
pulumi.set(__self__, "command", command)
if container is not None:
pulumi.set(__self__, "container", container)
if on_error is not None:
pulumi.set(__self__, "on_error", on_error)
if timeout is not None:
pulumi.set(__self__, "timeout", timeout)
@property
@pulumi.getter
def command(self) -> Sequence[str]:
"""
Command is the command and arguments to execute.
"""
return pulumi.get(self, "command")
@property
@pulumi.getter
def container(self) -> Optional[str]:
"""
Container is the container in the pod where the command should be executed. If not specified, the pod's first container is used.
"""
return pulumi.get(self, "container")
@property
@pulumi.getter(name="onError")
def on_error(self) -> Optional[str]:
"""
OnError specifies how Velero should behave if it encounters an error executing this hook.
"""
return pulumi.get(self, "on_error")
@property
@pulumi.getter
def timeout(self) -> Optional[str]:
"""
Timeout defines the maximum amount of time Velero should wait for the hook to complete before considering the execution a failure.
"""
return pulumi.get(self, "timeout")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class ScheduleSpecTemplateHooksResourcesPre(dict):
"""
BackupResourceHook defines a hook for a resource.
"""
def __init__(__self__, *,
exec_: 'outputs.ScheduleSpecTemplateHooksResourcesPreExec'):
"""
BackupResourceHook defines a hook for a resource.
:param 'ScheduleSpecTemplateHooksResourcesPreExecArgs' exec_: Exec defines an exec hook.
"""
pulumi.set(__self__, "exec_", exec_)
@property
@pulumi.getter(name="exec")
def exec_(self) -> 'outputs.ScheduleSpecTemplateHooksResourcesPreExec':
"""
Exec defines an exec hook.
"""
return pulumi.get(self, "exec_")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class ScheduleSpecTemplateHooksResourcesPreExec(dict):
"""
Exec defines an exec hook.
"""
def __init__(__self__, *,
command: Sequence[str],
container: Optional[str] = None,
on_error: Optional[str] = None,
timeout: Optional[str] = None):
"""
Exec defines an exec hook.
:param Sequence[str] command: Command is the command and arguments to execute.
:param str container: Container is the container in the pod where the command should be executed. If not specified, the pod's first container is used.
:param str on_error: OnError specifies how Velero should behave if it encounters an error executing this hook.
:param str timeout: Timeout defines the maximum amount of time Velero should wait for the hook to complete before considering the execution a failure.
"""
pulumi.set(__self__, "command", command)
if container is not None:
pulumi.set(__self__, "container", container)
if on_error is not None:
pulumi.set(__self__, "on_error", on_error)
if timeout is not None:
pulumi.set(__self__, "timeout", timeout)
@property
@pulumi.getter
def command(self) -> Sequence[str]:
"""
Command is the command and arguments to execute.
"""
return pulumi.get(self, "command")
@property
@pulumi.getter
def container(self) -> Optional[str]:
"""
Container is the container in the pod where the command should be executed. If not specified, the pod's first container is used.
"""
return pulumi.get(self, "container")
@property
@pulumi.getter(name="onError")
def on_error(self) -> Optional[str]:
"""
OnError specifies how Velero should behave if it encounters an error executing this hook.
"""
return pulumi.get(self, "on_error")
@property
@pulumi.getter
def timeout(self) -> Optional[str]:
"""
Timeout defines the maximum amount of time Velero should wait for the hook to complete before considering the execution a failure.
"""
return pulumi.get(self, "timeout")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class ScheduleSpecTemplateLabelSelector(dict):
"""
LabelSelector is a metav1.LabelSelector to filter with when adding individual objects to the backup. If empty or nil, all objects are included. Optional.
"""
def __init__(__self__, *,
match_expressions: Optional[Sequence['outputs.ScheduleSpecTemplateLabelSelectorMatchExpressions']] = None,
match_labels: Optional[Mapping[str, str]] = None):
"""
LabelSelector is a metav1.LabelSelector to filter with when adding individual objects to the backup. If empty or nil, all objects are included. Optional.
:param Sequence['ScheduleSpecTemplateLabelSelectorMatchExpressionsArgs'] match_expressions: matchExpressions is a list of label selector requirements. The requirements are ANDed.
:param Mapping[str, str] match_labels: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
if match_expressions is not None:
pulumi.set(__self__, "match_expressions", match_expressions)
if match_labels is not None:
pulumi.set(__self__, "match_labels", match_labels)
@property
@pulumi.getter(name="matchExpressions")
def match_expressions(self) -> Optional[Sequence['outputs.ScheduleSpecTemplateLabelSelectorMatchExpressions']]:
"""
matchExpressions is a list of label selector requirements. The requirements are ANDed.
"""
return pulumi.get(self, "match_expressions")
@property
@pulumi.getter(name="matchLabels")
def match_labels(self) -> Optional[Mapping[str, str]]:
"""
matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
"""
return pulumi.get(self, "match_labels")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class ScheduleSpecTemplateLabelSelectorMatchExpressions(dict):
"""
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
"""
def __init__(__self__, *,
key: str,
operator: str,
values: Optional[Sequence[str]] = None):
"""
A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
:param str key: key is the label key that the selector applies to.
:param str operator: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
:param Sequence[str] values: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
"""
pulumi.set(__self__, "key", key)
pulumi.set(__self__, "operator", operator)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def key(self) -> str:
"""
key is the label key that the selector applies to.
"""
return pulumi.get(self, "key")
@property
@pulumi.getter
def operator(self) -> str:
"""
operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
"""
return pulumi.get(self, "operator")
@property
@pulumi.getter
def values(self) -> Optional[Sequence[str]]:
"""
values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
"""
return pulumi.get(self, "values")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class ScheduleStatus(dict):
"""
ScheduleStatus captures the current state of a Velero schedule
"""
def __init__(__self__, *,
last_backup: Optional[str] = None,
phase: Optional[str] = None,
validation_errors: Optional[Sequence[str]] = None):
"""
ScheduleStatus captures the current state of a Velero schedule
:param str last_backup: LastBackup is the last time a Backup was run for this Schedule
:param str phase: Phase is the current phase of the Schedule
:param Sequence[str] validation_errors: ValidationErrors is a slice of all validation errors (if applicable)
"""
if last_backup is not None:
pulumi.set(__self__, "last_backup", last_backup)
if phase is not None:
pulumi.set(__self__, "phase", phase)
if validation_errors is not None:
pulumi.set(__self__, "validation_errors", validation_errors)
@property
@pulumi.getter(name="lastBackup")
def last_backup(self) -> Optional[str]:
"""
LastBackup is the last time a Backup was run for this Schedule
"""
return pulumi.get(self, "last_backup")
@property
@pulumi.getter
def phase(self) -> Optional[str]:
"""
Phase is the current phase of the Schedule
"""
return pulumi.get(self, "phase")
@property
@pulumi.getter(name="validationErrors")
def validation_errors(self) -> Optional[Sequence[str]]:
"""
ValidationErrors is a slice of all validation errors (if applicable)
"""
return pulumi.get(self, "validation_errors")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop


@pulumi.output_type
class ServerStatusRequestStatus(dict):
"""
ServerStatusRequestStatus is the current status of a ServerStatusRequest.
"""
def __init__(__self__, *,
phase: Optional[str] = None,
plugins: Optional[Sequence['outputs.ServerStatusRequestStatusPlugins']] = None,
processed_timestamp: Optional[str] = None,
server_version: Optional[str] = None):
"""
ServerStatusRequestStatus is the current status of a ServerStatusRequest.
:param str phase: Phase is the current lifecycle phase of the ServerStatusRequest.
:param Sequence['ServerStatusRequestStatusPluginsArgs'] plugins: Plugins list information about the plugins running on the Velero server
:param str processed_timestamp: ProcessedTimestamp is when the ServerStatusRequest was processed by the ServerStatusRequestController.
:param str server_version: ServerVersion is the Velero server version.
"""
if phase is not None:
pulumi.set(__self__, "phase", phase)
if plugins is not None:
pulumi.set(__self__, "plugins", plugins)
if processed_timestamp is not None:
pulumi.set(__self__, "processed_timestamp", processed_timestamp)
if server_version is not None:
pulumi.set(__self__, "server_version", server_version)
@property
@pulumi.getter
def phase(self) -> Optional[str]:
"""
Phase is the current lifecycle phase of the ServerStatusRequest.
"""
return pulumi.get(self, "phase")
@property
@pulumi.getter
def plugins(self) -> Optional[Sequence['outputs.ServerStatusRequestStatusPlugins']]:
"""
Plugins lists information about the plugins running on the Velero server
"""
return pulumi.get(self, "plugins")
@property
@pulumi.getter(name="processedTimestamp")
def processed_timestamp(self) -> Optional[str]:
"""
ProcessedTimestamp is when the ServerStatusRequest was processed by the ServerStatusRequestController.
"""
return pulumi.get(self, "processed_timestamp")
@property
@pulumi.getter(name="serverVersion")
def server_version(self) -> Optional[str]:
"""
ServerVersion is the Velero server version.
"""
return pulumi.get(self, "server_version")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class ServerStatusRequestStatusPlugins(dict):
"""
PluginInfo contains attributes of a Velero plugin
"""
def __init__(__self__, *,
kind: str,
name: str):
"""
PluginInfo contains attributes of a Velero plugin
"""
pulumi.set(__self__, "kind", kind)
pulumi.set(__self__, "name", name)
@property
@pulumi.getter
def kind(self) -> str:
return pulumi.get(self, "kind")
@property
@pulumi.getter
def name(self) -> str:
return pulumi.get(self, "name")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class VolumeSnapshotLocationSpec(dict):
"""
VolumeSnapshotLocationSpec defines the specification for a Velero VolumeSnapshotLocation.
"""
def __init__(__self__, *,
provider: str,
config: Optional[Mapping[str, str]] = None):
"""
VolumeSnapshotLocationSpec defines the specification for a Velero VolumeSnapshotLocation.
:param str provider: Provider is the provider of the volume storage.
:param Mapping[str, str] config: Config is for provider-specific configuration fields.
"""
pulumi.set(__self__, "provider", provider)
if config is not None:
pulumi.set(__self__, "config", config)
@property
@pulumi.getter
def provider(self) -> str:
"""
Provider is the provider of the volume storage.
"""
return pulumi.get(self, "provider")
@property
@pulumi.getter
def config(self) -> Optional[Mapping[str, str]]:
"""
Config is for provider-specific configuration fields.
"""
return pulumi.get(self, "config")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
@pulumi.output_type
class VolumeSnapshotLocationStatus(dict):
"""
VolumeSnapshotLocationStatus describes the current status of a Velero VolumeSnapshotLocation.
"""
def __init__(__self__, *,
phase: Optional[str] = None):
"""
VolumeSnapshotLocationStatus describes the current status of a Velero VolumeSnapshotLocation.
:param str phase: VolumeSnapshotLocationPhase is the lifecycle phase of a Velero VolumeSnapshotLocation.
"""
if phase is not None:
pulumi.set(__self__, "phase", phase)
@property
@pulumi.getter
def phase(self) -> Optional[str]:
"""
VolumeSnapshotLocationPhase is the lifecycle phase of a Velero VolumeSnapshotLocation.
"""
return pulumi.get(self, "phase")
def _translate_property(self, prop):
return _tables.CAMEL_TO_SNAKE_CASE_TABLE.get(prop) or prop
import numpy as np
import gpflow
import tensorflow as tf
from BoManifolds.Riemannian_utils.SPD_utils_tf import vector_to_symmetric_matrix_tf, affine_invariant_distance_tf
'''
Authors: Noemie Jaquier and Leonel Rozo, 2019
License: MIT
Contact: noemie.jaquier@idiap.ch, leonel.rozo@de.bosch.com
'''
# The SPD kernels are implemented here for GPflow version = 0.5 (used by GPflowOpt) and version >=1.2.0.
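Each kernel class below recovers the SPD matrix size n from the Mandel-vector length d = n(n + 1)/2 by inverting the quadratic, n = (-1 + sqrt(1 + 8d)) / 2. A minimal sketch of that inversion (the helper name `spd_matrix_dim` is illustrative, not part of this file):

```python
import math

def spd_matrix_dim(input_dim):
    """Recover the SPD matrix size n from the Mandel vector length d = n * (n + 1) / 2."""
    return int((-1.0 + math.sqrt(1.0 + 8.0 * input_dim)) / 2.0)

# e.g. a 3x3 symmetric matrix has 6 independent entries in Mandel notation
```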
if gpflow.__version__ == '0.5':
class SpdSteinGaussianKernel(gpflow.kernels.Kern):
"""
Instances of this class represent a Stein covariance matrix between input points on the SPD manifold
using the symmetric Stein divergence.
Attributes
----------
self.matrix_dim: SPD matrix dimension, computed from the dimension of the inputs (given with Mandel notation)
self.continuous_param_space_limit: lower limit of the continuous parameter space where the inverse square
lengthscale parameter beta results in PD kernels
self.beta_shifted: equal to beta-continuous_param_space_limit and used to optimize beta in the space of
beta values in the continuous parameter space resulting in PD kernels
self.variance: variance parameter of the kernel
Methods
-------
K(point1_in_SPD, point2_in_SPD):
Kdiag(point1_in_SPD):
update_beta(new_beta_value):
get_beta():
Static methods
--------------
"""
def __init__(self, input_dim, active_dims, beta, variance=1.0):
"""
Initialisation.
Parameters
----------
:param input_dim: input dimension (in Mandel notation form)
:param active_dims: dimensions of the input used for kernel computation (in Mandel notation form),
defined as range(input_dim) if all the input dimensions are considered.
Optional parameters
-------------------
:param beta: value of beta
:param variance: value of the variance
"""
super().__init__(input_dim=input_dim, active_dims=active_dims)
# Matrix dimension from input vector dimension
self.matrix_dim = int((-1.0 + (1.0 + 8.0 * input_dim) ** 0.5) / 2.0)
# Lower limit of the continuous space where beta can be optimized continuously
# beta \in [j/2: 1 <= j <= n-1] U ]0.5(n-1), +inf[
self.continuous_param_space_limit = 0.5 * (self.matrix_dim - 1)
# Parameter initialization
# Beta shifted is used to optimize beta in the continuous part of its space
# The values of beta in the discrete part of the space have to be tested separately, i.e. by comparing the
# log marginal likelihood obtained with the discrete value to the one obtained by optimizing in the
# continuous part of the space. Therefore, do not optimize the kernel for initial values of the parameter
# beta smaller than self.continuous_param_space_limit.
self.beta_shifted = gpflow.param.Param(beta - self.continuous_param_space_limit, transform=gpflow.transforms.positive)
self.variance = gpflow.param.Param(variance, transform=gpflow.transforms.positive)
def K(self, X, X2=None):
"""
Computes the Stein kernel matrix between inputs X (and X2) belonging to a SPD manifold.
Parameters
----------
:param X: input points on the SPD manifold (Mandel notation)
Optional parameters
-------------------
:param X2: input points on the SPD manifold (Mandel notation)
Returns
-------
:return: kernel matrix of X or between X and X2
"""
# Transform input vector to matrices
X = vector_to_symmetric_matrix_tf(X, self.matrix_dim)
if X2 is None:
X2 = X
else:
X2 = vector_to_symmetric_matrix_tf(X2, self.matrix_dim)
# Compute beta value from beta_shifted
beta = self.beta_shifted + self.continuous_param_space_limit
# Compute the kernel
X = tf.expand_dims(X, 1)
X2 = tf.expand_dims(X2, 0)
X = tf.tile(X, [1, tf.shape(X2)[1], 1, 1])
X2 = tf.tile(X2, [tf.shape(X)[0], 1, 1, 1])
mult_XX2 = tf.matmul(X, X2)
add_halfXX2 = 0.5 * tf.add(X, X2)
detmult_XX2 = tf.linalg.det(mult_XX2)
detadd_halfXX2 = tf.linalg.det(add_halfXX2)
# Stein kernel k = exp(-beta * S) with the symmetric Stein divergence
# S(X, X2) = log det((X + X2) / 2) - 0.5 * log det(X X2),
# i.e. k = det(X X2)^(beta / 2) / det((X + X2) / 2)^beta
dist = tf.divide(tf.math.pow(detmult_XX2, 0.5 * beta), tf.math.pow(detadd_halfXX2, beta))
return tf.multiply(self.variance, dist)
def Kdiag(self, X):
"""
Computes the diagonal of the Stein kernel matrix of inputs X belonging to the SPD manifold.
Parameters
----------
:param X: input points on the SPD manifold
Optional parameters
-------------------
Returns
-------
:return: diagonal of the kernel matrix of X
"""
return tf.linalg.tensor_diag_part(self.K(X))
def update_beta(self, beta):
"""
Update the parameter beta of the class.
Parameters
----------
:param beta: new value of beta
Optional parameters
-------------------
Returns
-------
:return:
"""
self.beta_shifted = (beta - self.continuous_param_space_limit)
def get_beta(self):
"""
Return the parameter beta of the class.
Parameters
----------
Optional parameters
-------------------
Returns
-------
:return: value of beta
"""
return self.beta_shifted.value + self.continuous_param_space_limit
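As a hedged NumPy cross-check of the divergence underlying the Stein kernel (the helper name `stein_divergence_np` is illustrative), the symmetric Stein divergence can be evaluated directly from log-determinants:

```python
import numpy as np

def stein_divergence_np(x, y):
    """Symmetric Stein divergence S(X, Y) = log det((X + Y) / 2) - 0.5 * log det(X Y)."""
    _, logdet_mid = np.linalg.slogdet(0.5 * (x + y))
    _, logdet_x = np.linalg.slogdet(x)
    _, logdet_y = np.linalg.slogdet(y)
    return logdet_mid - 0.5 * (logdet_x + logdet_y)

# The divergence is symmetric in its arguments and vanishes iff X == Y.
```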
class SpdAffineInvariantGaussianKernel(gpflow.kernels.Kern):
"""
Instances of this class represent a Gaussian (RBF) covariance matrix between input points on the SPD manifold
using the affine-invariant distance.
Attributes
----------
self.matrix_dim: SPD matrix dimension, computed from the dimension of the inputs (given with Mandel notation)
self.beta_min: minimum value of the inverse square lengthscale parameter beta
self.beta_shifted: equal to beta-beta_min and used to optimize beta in the space of beta values resulting in
PD kernels
self.variance: variance parameter of the kernel
Methods
-------
K(point1_in_SPD, point2_in_SPD):
Kdiag(point1_in_SPD):
update_beta(new_beta_value):
get_beta():
Static methods
--------------
"""
def __init__(self, input_dim, active_dims, beta_min, beta=1., variance=1.):
"""
Initialisation.
Parameters
----------
:param input_dim: input dimension (in Mandel notation form)
:param active_dims: dimensions of the input used for kernel computation (in Mandel notation form),
defined as range(input_dim) if all the input dimensions are considered.
:param beta_min: minimum value of the inverse square lengthscale parameter beta
Optional parameters
-------------------
:param beta: value of beta
:param variance: value of the variance
"""
super().__init__(input_dim=input_dim, active_dims=active_dims)
# Matrix dimension from input vector dimension
self.matrix_dim = int((-1.0 + (1.0 + 8.0 * input_dim) ** 0.5) / 2.0)
# Parameter initialization
# Beta shifted is used to optimize beta in the space of beta values resulting in PD kernels
self.beta_shifted = gpflow.param.Param(beta, transform=gpflow.transforms.positive)
self.variance = gpflow.param.Param(variance, transform=gpflow.transforms.positive)
self.beta_min_value = beta_min
def K(self, X, X2=None):
"""
Computes the Gaussian kernel matrix between inputs X (and X2) belonging to a SPD manifold.
Parameters
----------
:param X: input points on the SPD manifold (Mandel notation)
Optional parameters
-------------------
:param X2: input points on the SPD manifold (Mandel notation)
Returns
-------
:return: kernel matrix of X or between X and X2
"""
# Transform input vector to matrices
X = vector_to_symmetric_matrix_tf(X, self.matrix_dim)
if X2 is None:
X2 = X
else:
X2 = vector_to_symmetric_matrix_tf(X2, self.matrix_dim)
# Compute beta value from beta_shifted
beta = self.beta_shifted + self.beta_min_value
# Compute the kernel
aff_inv_dist = affine_invariant_distance_tf(X, X2, full_dist_mat=True)
aff_inv_dist2 = tf.square(aff_inv_dist)
aff_inv_dist2 = tf.multiply(aff_inv_dist2, beta)
return tf.multiply(self.variance, tf.exp(-aff_inv_dist2))
def Kdiag(self, X):
"""
Computes the diagonal of the Gaussian kernel matrix of inputs X belonging to the SPD manifold.
Parameters
----------
:param X: input points on the SPD manifold
Optional parameters
-------------------
Returns
-------
:return: diagonal of the kernel matrix of X
"""
return tf.fill(tf.stack([tf.shape(X)[0]]), tf.squeeze(self.variance))
def update_beta(self, beta):
"""
Update the parameter beta of the class.
Parameters
----------
:param beta: new value of beta
Optional parameters
-------------------
Returns
-------
:return:
"""
self.beta_shifted = beta - self.beta_min_value
def get_beta(self):
"""
Return the parameter beta of the class.
Parameters
----------
Optional parameters
-------------------
Returns
-------
:return: value of beta
"""
return self.beta_shifted.value + self.beta_min_value
# Checks for affine-invariant kernel
# X = tf.convert_to_tensor(y_man_mat)
# X2 = tf.convert_to_tensor(y_man_mat_test)
#
# aff_inv_dist = affine_invariant_distance_tf(X, X2, full_dist_mat=True)
#
# with tf.Session() as sess:
# x_np = sess.run(X)
# x2_np = sess.run(X2)
# affinv_np = sess.run(aff_inv_dist)
#
# affinv_check = np.zeros((y_man_mat.shape[0], y_man_mat_test.shape[0]))
# for m in range(y_man_mat.shape[0]):
# for n in range(y_man_mat_test.shape[0]):
# affinv_check[m, n] = aff_invariant_distance(y_man_mat[m], y_man_mat_test[n])
#
# kernel_check = np.exp(- affinv_check**2 / 1.0)
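The commented check above can also be written as a runnable NumPy/SciPy reference (names are illustrative): the affine-invariant distance is the Frobenius norm of logm(X^{-1/2} X2 X^{-1/2}), which can be computed from the generalized eigenvalues of the pair (X2, X):

```python
import numpy as np
from scipy.linalg import eigvalsh

def affine_invariant_distance_np(x, y):
    """d(X, Y) = || logm(X^{-1/2} Y X^{-1/2}) ||_F for SPD matrices X, Y."""
    # Generalized eigenvalues of (Y, X) are the eigenvalues of X^{-1} Y (all > 0 for SPD pairs)
    eigs = eigvalsh(y, x)
    return np.sqrt(np.sum(np.log(eigs) ** 2))
```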
class SpdAffineInvariantLaplaceKernel(gpflow.kernels.Kern):
"""
Instances of this class represent a Laplace covariance matrix between input points on the SPD manifold
using the affine-invariant distance.
Attributes
----------
self.matrix_dim: SPD matrix dimension, computed from the dimension of the inputs (given with Mandel notation)
self.beta_min: minimum value of the inverse square lengthscale parameter beta
self.beta_shifted: equal to beta-beta_min and used to optimize beta in the space of beta values resulting in
PD kernels
self.variance: variance parameter of the kernel
Methods
-------
K(point1_in_SPD, point2_in_SPD):
Kdiag(point1_in_SPD):
update_beta(new_beta_value):
get_beta():
Static methods
--------------
"""
def __init__(self, input_dim, active_dims, beta_min=0., beta=1., variance=1.):
"""
Initialisation.
Parameters
----------
:param input_dim: input dimension (in Mandel notation form)
:param active_dims: dimensions of the input used for kernel computation (in Mandel notation form),
defined as range(input_dim) if all the input dimensions are considered.
:param beta_min: minimum value of the inverse square lengthscale parameter beta
Optional parameters
-------------------
:param beta: value of beta
:param variance: value of the variance
"""
super().__init__(input_dim=input_dim, active_dims=active_dims)
# Matrix dimension from input vector dimension
self.matrix_dim = int((-1.0 + (1.0 + 8.0 * input_dim) ** 0.5) / 2.0)
# Parameter initialization
# Beta shifted is used to optimize beta in the space of beta values resulting in PD kernels
self.beta_shifted = gpflow.param.Param(beta, transform=gpflow.transforms.positive)
self.variance = gpflow.param.Param(variance, transform=gpflow.transforms.positive)
self.beta_min_value = beta_min
def K(self, X, X2=None):
"""
Computes the Laplace kernel matrix between inputs X (and X2) belonging to a SPD manifold.
Parameters
----------
:param X: input points on the SPD manifold (Mandel notation)
Optional parameters
-------------------
:param X2: input points on the SPD manifold (Mandel notation)
Returns
-------
:return: kernel matrix of X or between X and X2
"""
# Transform input vector to matrices
X = vector_to_symmetric_matrix_tf(X, self.matrix_dim)
if X2 is None:
X2 = X
else:
X2 = vector_to_symmetric_matrix_tf(X2, self.matrix_dim)
# Compute beta value from beta_shifted
beta = self.beta_shifted + self.beta_min_value
# Compute the kernel
aff_inv_dist = affine_invariant_distance_tf(X, X2, full_dist_mat=True)
aff_inv_dist = tf.multiply(aff_inv_dist, beta)
return tf.multiply(self.variance, tf.exp(-aff_inv_dist))
def Kdiag(self, X):
"""
Computes the diagonal of the Laplace kernel matrix of inputs X belonging to the SPD manifold.
Parameters
----------
:param X: input points on the SPD manifold
Optional parameters
-------------------
Returns
-------
:return: diagonal of the kernel matrix of X
"""
return tf.fill(tf.stack([tf.shape(X)[0]]), tf.squeeze(self.variance))
def update_beta(self, beta):
"""
Update the parameter beta of the class.
Parameters
----------
:param beta: new value of beta
Optional parameters
-------------------
Returns
-------
:return:
"""
self.beta_shifted = beta - self.beta_min_value
def get_beta(self):
"""
Return the parameter beta of the class.
Parameters
----------
Optional parameters
-------------------
Returns
-------
:return: value of beta
"""
return self.beta_shifted.value + self.beta_min_value
class SpdFrobeniusGaussianKernel(gpflow.kernels.Kern):
"""
Instances of this class represent a Gaussian covariance matrix between input points on the SPD manifold
using the Frobenius distance.
Attributes
----------
self.matrix_dim: SPD matrix dimension, computed from the dimension of the inputs (given with Mandel notation)
self.beta_param: inverse square lengthscale parameter beta
self.variance: variance parameter of the kernel
Methods
-------
K(point1_in_SPD, point2_in_SPD):
Kdiag(point1_in_SPD):
Static methods
--------------
"""
def __init__(self, input_dim, active_dims, beta=1., variance=1.):
"""
Initialisation.
Parameters
----------
:param input_dim: input dimension (in Mandel notation form)
:param active_dims: dimensions of the input used for kernel computation (in Mandel notation form),
defined as range(input_dim) if all the input dimensions are considered.
Optional parameters
-------------------
:param beta: value of beta
:param variance: value of the variance
"""
super().__init__(input_dim=input_dim, active_dims=active_dims)
# Matrix dimension from input vector dimension
self.matrix_dim = int((-1.0 + (1.0 + 8.0 * input_dim) ** 0.5) / 2.0)
# Parameter initialization
self.beta_param = gpflow.param.Param(beta, transform=gpflow.transforms.positive)
self.variance = gpflow.param.Param(variance, transform=gpflow.transforms.positive)
def K(self, X, X2=None):
"""
Computes the Frobenius kernel matrix between inputs X (and X2) belonging to a SPD manifold.
Parameters
----------
:param X: input points on the SPD manifold (Mandel notation)
Optional parameters
-------------------
:param X2: input points on the SPD manifold (Mandel notation)
Returns
-------
:return: kernel matrix of X or between X and X2
"""
# Transform input vector to matrices
X = vector_to_symmetric_matrix_tf(X, self.matrix_dim)
if X2 is None:
X2 = X
else:
X2 = vector_to_symmetric_matrix_tf(X2, self.matrix_dim)
# Compute the kernel
X = tf.expand_dims(X, 1)
X2 = tf.expand_dims(X2, 0)
X = tf.tile(X, [1, tf.shape(X2)[1], 1, 1])
X2 = tf.tile(X2, [tf.shape(X)[0], 1, 1, 1])
diff_XX2 = tf.subtract(X, X2)
frob_dist = tf.norm(diff_XX2, axis=(-2, -1))
frob_dist2 = tf.square(frob_dist)
frob_dist2 = tf.multiply(frob_dist2, self.beta_param)
return tf.multiply(self.variance, tf.exp(-frob_dist2))
def Kdiag(self, X):
"""
Computes the diagonal of the Gaussian kernel matrix of inputs X belonging to the SPD manifold.
Parameters
----------
:param X: input points on the SPD manifold
Optional parameters
-------------------
Returns
-------
:return: diagonal of the kernel matrix of X
"""
return tf.fill(tf.stack([tf.shape(X)[0]]), tf.squeeze(self.variance))
# Numpy check
# frobdist = np.zeros((y_man_mat.shape[0], y_man_mat_test.shape[0]))
# for m in range(y_man_mat.shape[0]):
# for n in range(y_man_mat_test.shape[0]):
# frobdist[m, n] = np.linalg.norm(y_man_mat[m] - y_man_mat_test[n])
#
# kernel_check = np.exp(- frobdist**2 / 1.0)
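The Frobenius check sketched in the comments above, as a runnable helper (the name `frobenius_gaussian_kernel_np` is illustrative):

```python
import numpy as np

def frobenius_gaussian_kernel_np(x, y, beta=1.0, variance=1.0):
    """k(X, Y) = variance * exp(-beta * ||X - Y||_F^2): a Euclidean RBF on matrix entries."""
    return variance * np.exp(-beta * np.linalg.norm(x - y) ** 2)
```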
class SpdLogEuclideanGaussianKernel(gpflow.kernels.Kern):
"""
Instances of this class represent a Gaussian covariance matrix between input points on the SPD manifold
using the log-Euclidean distance.
Attributes
----------
self.matrix_dim: SPD matrix dimension, computed from the dimension of the inputs (given with Mandel notation)
self.beta_param: inverse square lengthscale parameter beta
self.variance: variance parameter of the kernel
Methods
-------
K(point1_in_SPD, point2_in_SPD):
Kdiag(point1_in_SPD):
Static methods
--------------
"""
def __init__(self, input_dim, active_dims, beta=1., variance=1.):
"""
Initialisation.
Parameters
----------
:param input_dim: input dimension (in Mandel notation form)
:param active_dims: dimensions of the input used for kernel computation (in Mandel notation form),
defined as range(input_dim) if all the input dimensions are considered.
Optional parameters
-------------------
:param beta: value of beta
:param variance: value of the variance
"""
super().__init__(input_dim=input_dim, active_dims=active_dims)
# Matrix dimension from input vector dimension
self.matrix_dim = int((-1.0 + (1.0 + 8.0 * input_dim) ** 0.5) / 2.0)
# Parameter initialization
self.beta_param = gpflow.param.Param(beta, transform=gpflow.transforms.positive)
self.variance = gpflow.param.Param(variance, transform=gpflow.transforms.positive)
def K(self, X, X2=None):
"""
Computes the Log-Euclidean kernel matrix between inputs X (and X2) belonging to a SPD manifold.
Parameters
----------
:param X: input points on the SPD manifold (Mandel notation)
Optional parameters
-------------------
:param X2: input points on the SPD manifold (Mandel notation)
Returns
-------
:return: kernel matrix of X or between X and X2
"""
# Transform input vector to matrices
X = vector_to_symmetric_matrix_tf(X, self.matrix_dim)
if X2 is None:
X2 = X
else:
X2 = vector_to_symmetric_matrix_tf(X2, self.matrix_dim)
# Compute the kernel
X = tf.expand_dims(X, 1)
X2 = tf.expand_dims(X2, 0)
X = tf.tile(X, [1, tf.shape(X2)[1], 1, 1])
X2 = tf.tile(X2, [tf.shape(X)[0], 1, 1, 1])
# Matrix logarithm in TF requires a complex dtype; use complex128 to preserve float64 precision
logm_X = tf.linalg.logm(tf.cast(X, dtype=tf.complex128))
logm_X2 = tf.linalg.logm(tf.cast(X2, dtype=tf.complex128))
diff_XX2 = tf.cast(tf.subtract(logm_X, logm_X2), dtype=tf.float64)
logeucl_dist = tf.norm(diff_XX2, axis=(-2, -1))
logeucl_dist2 = tf.square(logeucl_dist)
logeucl_dist2 = tf.multiply(logeucl_dist2, self.beta_param)
return tf.multiply(self.variance, tf.exp(-logeucl_dist2))
def Kdiag(self, X):
"""
Computes the diagonal of the Log-Euclidean kernel matrix of inputs X belonging to the SPD manifold.
Parameters
----------
:param X: input points on the SPD manifold
Optional parameters
-------------------
Returns
-------
:return: diagonal of the kernel matrix of X
"""
return tf.fill(tf.stack([tf.shape(X)[0]]), tf.squeeze(self.variance))
# Numpy/scipy check
# logeucldist = np.zeros((y_man_mat.shape[0], y_man_mat_test.shape[0]))
# for m in range(y_man_mat.shape[0]):
# for n in range(y_man_mat_test.shape[0]):
# logeucldist[m, n] = np.linalg.norm(sc.linalg.logm(y_man_mat[m]) - sc.linalg.logm(y_man_mat_test[n]))
#
# kernel_check = np.exp(- logeucldist**2 / 1.0)
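The numpy/scipy check above, as a runnable helper (the name `log_euclidean_distance_np` is illustrative):

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_distance_np(x, y):
    """d(X, Y) = || logm(X) - logm(Y) ||_F; real-valued for SPD inputs."""
    return np.linalg.norm(logm(x) - logm(y))
```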
else:
class SpdSteinGaussianKernel(gpflow.kernels.Kernel):
"""
Instances of this class represent a Stein covariance matrix between input points on the SPD manifold
using the symmetric Stein divergence.
Attributes
----------
self.matrix_dim: SPD matrix dimension, computed from the dimension of the inputs (given with Mandel notation)
self.continuous_param_space_limit: lower limit of the continuous parameter space where the inverse square
lengthscale parameter beta results in PD kernels
self.beta_shifted: equal to beta-continuous_param_space_limit and used to optimize beta in the space of
beta values in the continuous parameter space resulting in PD kernels
self.variance: variance parameter of the kernel
Methods
-------
K(point1_in_SPD, point2_in_SPD):
update_beta(new_beta_value):
get_beta():
Static methods
--------------
"""
def __init__(self, input_dim, active_dims, beta, variance=1.0):
"""
Initialisation.
Parameters
----------
:param input_dim: input dimension (in Mandel notation form)
:param active_dims: dimensions of the input used for kernel computation (in Mandel notation form),
defined as range(input_dim) if all the input dimensions are considered.
Optional parameters
-------------------
:param beta: value of beta
:param variance: value of the variance
"""
super().__init__(input_dim=input_dim, active_dims=active_dims)
# Matrix dimension from input vector dimension
self.matrix_dim = int((-1.0 + (1.0 + 8.0 * input_dim) ** 0.5) / 2.0)
# Lower limit of the continuous space where beta can be optimized continuously
# beta \in [j/2: 1 <= j <= n-1] U ]0.5(n-1), +inf[
self.continuous_param_space_limit = 0.5 * (self.matrix_dim - 1)
# Parameter initialization
# Beta shifted is used to optimize beta in the continuous part of its space
# The values of beta in the discrete part of the space have to be tested separately, i.e. by comparing the
# log marginal likelihood obtained with the discrete value to the one obtained by optimizing in the continuous
# part of the space. Therefore, do not optimize the kernel for initial values of the parameter beta smaller than
# self.continuous_param_space_limit.
self.beta_shifted = gpflow.Param(beta - self.continuous_param_space_limit, transform=gpflow.transforms.positive)
self.variance = gpflow.Param(variance, transform=gpflow.transforms.positive)
@gpflow.params_as_tensors
def K(self, X, X2=None):
"""
Computes the Stein kernel matrix between inputs X (and X2) belonging to a SPD manifold.
Parameters
----------
:param X: input points on the SPD manifold (Mandel notation)
Optional parameters
-------------------
:param X2: input points on the SPD manifold (Mandel notation)
Returns
-------
:return: kernel matrix of X or between X and X2
"""
# Transform input vector to matrices
X = vector_to_symmetric_matrix_tf(X, self.matrix_dim)
if X2 is None:
X2 = X
else:
X2 = vector_to_symmetric_matrix_tf(X2, self.matrix_dim)
# Compute beta value from beta_shifted
beta = self.beta_shifted + self.continuous_param_space_limit
# Compute the kernel
X = tf.expand_dims(X, 1)
X2 = tf.expand_dims(X2, 0)
X = tf.tile(X, [1, tf.shape(X2)[1], 1, 1])
X2 = tf.tile(X2, [tf.shape(X)[0], 1, 1, 1])
mult_XX2 = tf.matmul(X, X2)
add_halfXX2 = 0.5 * tf.add(X, X2)
detmult_XX2 = tf.linalg.det(mult_XX2)
detadd_halfXX2 = tf.linalg.det(add_halfXX2)
# Stein kernel k = exp(-beta * S) with the symmetric Stein divergence
# S(X, X2) = log det((X + X2) / 2) - 0.5 * log det(X X2),
# i.e. k = det(X X2)^(beta / 2) / det((X + X2) / 2)^beta
dist = tf.divide(tf.math.pow(detmult_XX2, 0.5 * beta), tf.math.pow(detadd_halfXX2, beta))
return tf.multiply(self.variance, dist)
def update_beta(self, beta):
"""
Update the parameter beta of the class.
Parameters
----------
:param beta: new value of beta
Optional parameters
-------------------
Returns
-------
:return:
"""
self.beta_shifted.assign(beta-self.continuous_param_space_limit)
def get_beta(self):
"""
Return the parameter beta of the class.
Parameters
----------
Optional parameters
-------------------
Returns
-------
:return: value of beta
"""
return self.read_trainables()['GPR/kern/beta_shifted'] + self.continuous_param_space_limit
class SpdAffineInvariantGaussianKernel(gpflow.kernels.Kernel):
"""
Instances of this class represent a Gaussian (RBF) covariance matrix between input points on the SPD manifold
using the affine-invariant distance.
Attributes
----------
self.matrix_dim: SPD matrix dimension, computed from the dimension of the inputs (given with Mandel notation)
self.beta_min: minimum value of the inverse square lengthscale parameter beta
self.beta_shifted: equal to beta-beta_min and used to optimize beta in the space of beta values resulting in
PD kernels
self.variance: variance parameter of the kernel
Methods
-------
K(point1_in_SPD, point2_in_SPD):
update_beta(new_beta_value):
get_beta():
Static methods
--------------
"""
def __init__(self, input_dim, active_dims, beta_min, beta=1., variance=1.):
"""
Initialisation.
Parameters
----------
:param input_dim: input dimension (in Mandel notation form)
:param active_dims: dimensions of the input used for kernel computation (in Mandel notation form),
defined as range(input_dim) if all the input dimensions are considered.
:param beta_min: minimum value of the inverse square lengthscale parameter beta
Optional parameters
-------------------
:param beta: value of beta
:param variance: value of the variance
"""
super().__init__(input_dim=input_dim, active_dims=active_dims)
# Matrix dimension from input vector dimension
self.matrix_dim = int((-1.0 + (1.0 + 8.0 * input_dim) ** 0.5) / 2.0)
# Parameter initialization
# Beta shifted is used to optimize beta in the space of beta values resulting in PD kernels
self.beta_shifted = gpflow.Param(beta, transform=gpflow.transforms.positive)
self.variance = gpflow.Param(variance, transform=gpflow.transforms.positive)
self.beta_min_value = beta_min
@gpflow.params_as_tensors
def K(self, X, X2=None):
"""
Computes the Gaussian kernel matrix between inputs X (and X2) belonging to a SPD manifold.
Parameters
----------
:param X: input points on the SPD manifold (Mandel notation)
Optional parameters
-------------------
:param X2: input points on the SPD manifold (Mandel notation)
Returns
-------
:return: kernel matrix of X or between X and X2
"""
# Transform input vector to matrices
X = vector_to_symmetric_matrix_tf(X, self.matrix_dim)
if X2 is None:
X2 = X
else:
X2 = vector_to_symmetric_matrix_tf(X2, self.matrix_dim)
# Compute beta value from beta_shifted
beta = self.beta_shifted + self.beta_min_value
# Compute the kernel
aff_inv_dist = affine_invariant_distance_tf(X, X2, full_dist_mat=True)
aff_inv_dist2 = tf.square(aff_inv_dist)
aff_inv_dist2 = tf.multiply(aff_inv_dist2, beta)
return tf.multiply(self.variance, tf.exp(-aff_inv_dist2))
def update_beta(self, beta):
"""
Update the parameter beta of the class.
Parameters
----------
:param beta: new value of beta
Optional parameters
-------------------
Returns
-------
:return:
"""
self.beta_shifted.assign(beta-self.beta_min_value)
def get_beta(self):
"""
Return the parameter beta of the class.
Parameters
----------
Optional parameters
-------------------
Returns
-------
:return: value of beta
"""
return self.read_trainables()['GPR/kern/beta_shifted'] + self.beta_min_value
# Checks for affine-invariant kernel
# X = tf.convert_to_tensor(y_man_mat)
# X2 = tf.convert_to_tensor(y_man_mat_test)
#
# aff_inv_dist = affine_invariant_distance_tf(X, X2, full_dist_mat=True)
#
# with tf.Session() as sess:
# x_np = sess.run(X)
# x2_np = sess.run(X2)
# affinv_np = sess.run(aff_inv_dist)
#
# affinv_check = np.zeros((y_man_mat.shape[0], y_man_mat_test.shape[0]))
# for m in range(y_man_mat.shape[0]):
# for n in range(y_man_mat_test.shape[0]):
# affinv_check[m, n] = aff_invariant_distance(y_man_mat[m], y_man_mat_test[n])
#
# kernel_check = np.exp(- affinv_check**2 / 1.0)
class SpdAffineInvariantLaplaceKernel(gpflow.kernels.Kernel):
"""
Instances of this class represent a Laplace covariance matrix between input points on the SPD manifold
using the affine-invariant distance.
Attributes
----------
self.matrix_dim: SPD matrix dimension, computed from the dimension of the inputs (given with Mandel notation)
self.beta_min: minimum value of the inverse square lengthscale parameter beta
self.beta_shifted: equal to beta-beta_min and used to optimize beta in the space of beta values resulting in
PD kernels
self.variance: variance parameter of the kernel
Methods
-------
K(point1_in_SPD, point2_in_SPD):
update_beta(new_beta_value):
get_beta():
Static methods
--------------
"""
def __init__(self, input_dim, active_dims, beta_min, beta=1., variance=1.):
"""
Initialisation.
Parameters
----------
:param input_dim: input dimension (in Mandel notation form)
:param active_dims: dimensions of the input used for kernel computation (in Mandel notation form),
defined as range(input_dim) if all the input dimensions are considered.
:param beta_min: minimum value of the inverse square lengthscale parameter beta
Optional parameters
-------------------
:param beta: value of beta
:param variance: value of the variance
"""
super().__init__(input_dim=input_dim, active_dims=active_dims)
# Matrix dimension from input vector dimension
self.matrix_dim = int((-1.0 + (1.0 + 8.0 * input_dim) ** 0.5) / 2.0)
# Parameter initialization
# Beta shifted is used to optimize beta in the space of beta values resulting in PD kernels
self.beta_shifted = gpflow.Param(beta, transform=gpflow.transforms.positive)
self.variance = gpflow.Param(variance, transform=gpflow.transforms.positive)
self.beta_min_value = beta_min
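The quadratic-formula line above inverts the Mandel vector length n = d(d + 1)/2 back to the SPD matrix dimension d. A quick self-contained check of that inversion (plain Python, no gpflow needed):

```python
# Solving n = d * (d + 1) / 2 for d gives d = (-1 + sqrt(1 + 8n)) / 2,
# which is the formula used to set self.matrix_dim above.
def matrix_dim_from_input_dim(input_dim):
    return int((-1.0 + (1.0 + 8.0 * input_dim) ** 0.5) / 2.0)

for d in range(1, 10):
    n = d * (d + 1) // 2
    assert matrix_dim_from_input_dim(n) == d
```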
@gpflow.params_as_tensors
def K(self, X, X2=None):
"""
Computes the Laplace kernel matrix between inputs X (and X2) belonging to an SPD manifold.
Parameters
----------
:param X: input points on the SPD manifold (Mandel notation)
Optional parameters
-------------------
:param X2: input points on the SPD manifold (Mandel notation)
Returns
-------
:return: kernel matrix of X or between X and X2
"""
# Transform input vector to matrices
X = vector_to_symmetric_matrix_tf(X, self.matrix_dim)
if X2 is None:
X2 = X
else:
X2 = vector_to_symmetric_matrix_tf(X2, self.matrix_dim)
# Compute beta value from beta_shifted
beta = self.beta_shifted + self.beta_min_value
# Compute the kernel
aff_inv_dist = affine_invariant_distance_tf(X, X2, full_dist_mat=True)
aff_inv_dist = tf.multiply(aff_inv_dist, beta)
return tf.multiply(self.variance, tf.exp(-aff_inv_dist))
def update_beta(self, beta):
"""
Update the parameter beta of the class.
Parameters
----------
:param beta: new value of beta
"""
self.beta_shifted.assign(beta - self.beta_min_value)
def get_beta(self):
"""
Return the parameter beta of the class.
Returns
-------
:return: value of beta
"""
return self.read_trainables()['GPR/kern/beta_shifted'] + self.beta_min_value
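The TF helper `affine_invariant_distance_tf` is defined elsewhere in this module; as a reference only, here is a minimal NumPy sketch of what `K` computes for a single pair of SPD matrices, assuming the standard affine-invariant metric d(X, Y) = ||logm(X^{-1/2} Y X^{-1/2})||_F (the eigendecomposition route avoids complex intermediates for SPD inputs):

```python
import numpy as np

def _spd_pow(a, p):
    # Power of a symmetric positive-definite matrix via its eigendecomposition.
    w, v = np.linalg.eigh(a)
    return (v * w ** p) @ v.T

def affine_invariant_distance(x, y):
    # d(X, Y) = || logm(X^{-1/2} Y X^{-1/2}) ||_F
    x_inv_sqrt = _spd_pow(x, -0.5)
    m = x_inv_sqrt @ y @ x_inv_sqrt            # SPD whenever x and y are SPD
    return np.sqrt(np.sum(np.log(np.linalg.eigvalsh(m)) ** 2))

def laplace_kernel(x, y, beta=1.0, variance=1.0):
    # variance * exp(-beta * d(X, Y)), mirroring K() above
    return variance * np.exp(-beta * affine_invariant_distance(x, y))

x = np.array([[2.0, 0.5], [0.5, 1.0]])
assert np.isclose(affine_invariant_distance(x, x), 0.0)
```

For instance d(I, e·I) = sqrt(d) in dimension d, since logm(e·I) = I.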
class SpdFrobeniusGaussianKernel(gpflow.kernels.Kernel):
"""
Instances of this class represent a Gaussian covariance matrix between input points on the SPD manifold
using the Frobenius distance.
Attributes
----------
self.matrix_dim: SPD matrix dimension, computed from the dimension of the inputs (given with Mandel notation)
self.beta_param: inverse square lengthscale parameter beta
self.variance: variance parameter of the kernel
Methods
-------
K(point1_in_SPD, point2_in_SPD):
"""
def __init__(self, input_dim, active_dims, beta=1., variance=1.):
"""
Initialisation.
Parameters
----------
:param input_dim: input dimension (in Mandel notation form)
:param active_dims: dimensions of the input used for kernel computation (in Mandel notation form),
defined as range(input_dim) if all the input dimensions are considered.
Optional parameters
-------------------
:param beta: value of beta
:param variance: value of the variance
"""
super().__init__(input_dim=input_dim, active_dims=active_dims)
# Matrix dimension from input vector dimension
self.matrix_dim = int((-1.0 + (1.0 + 8.0 * input_dim) ** 0.5) / 2.0)
# Parameter initialization
self.beta_param = gpflow.Param(beta, transform=gpflow.transforms.positive)
self.variance = gpflow.Param(variance, transform=gpflow.transforms.positive)
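The inputs are expected in Mandel (vectorized) notation, converted back by `vector_to_symmetric_matrix_tf` (defined elsewhere in this module). The convention can be sketched in NumPy in the forward direction; the exact entry ordering used by the TF helper may differ, but the key property is the sqrt(2) scaling of off-diagonal entries, which makes the vector 2-norm equal the matrix Frobenius norm:

```python
import numpy as np

def symmetric_matrix_to_mandel(a):
    # Illustrative ordering: diagonal first, then upper-triangular entries
    # scaled by sqrt(2) so that ||v||_2 == ||A||_F.
    d = a.shape[0]
    iu = np.triu_indices(d, k=1)
    return np.concatenate([np.diag(a), np.sqrt(2.0) * a[iu]])

a = np.array([[2.0, 0.5], [0.5, 1.0]])
v = symmetric_matrix_to_mandel(a)
assert v.shape == (3,)                       # n = d * (d + 1) / 2
assert np.isclose(np.linalg.norm(v), np.linalg.norm(a, 'fro'))
```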
@gpflow.params_as_tensors
def K(self, X, X2=None):
"""
Computes the Frobenius kernel matrix between inputs X (and X2) belonging to an SPD manifold.
Parameters
----------
:param X: input points on the SPD manifold (Mandel notation)
Optional parameters
-------------------
:param X2: input points on the SPD manifold (Mandel notation)
Returns
-------
:return: kernel matrix of X or between X and X2
"""
# Transform input vector to matrices
X = vector_to_symmetric_matrix_tf(X, self.matrix_dim)
if X2 is None:
X2 = X
else:
X2 = vector_to_symmetric_matrix_tf(X2, self.matrix_dim)
# Compute the kernel
X = tf.expand_dims(X, 1)
X2 = tf.expand_dims(X2, 0)
X = tf.tile(X, [1, tf.shape(X2)[1], 1, 1])
X2 = tf.tile(X2, [tf.shape(X)[0], 1, 1, 1])
diff_XX2 = tf.subtract(X, X2)
frob_dist = tf.norm(diff_XX2, axis=(-2, -1))
frob_dist2 = tf.square(frob_dist)
frob_dist2 = tf.multiply(frob_dist2, self.beta_param)
return tf.multiply(self.variance, tf.exp(-frob_dist2))
# Numpy check
# frobdist = np.zeros((y_man_mat.shape[0], y_man_mat_test.shape[0]))
# for m in range(y_man_mat.shape[0]):
# for n in range(y_man_mat_test.shape[0]):
# frobdist[m, n] = np.linalg.norm(y_man_mat[m] - y_man_mat_test[n])
#
# kernel_check = np.exp(- frobdist**2 / 1.0)
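The commented check above can be made runnable for a single pair of matrices; this mirrors the `K()` computation of `SpdFrobeniusGaussianKernel`:

```python
import numpy as np

def frobenius_gaussian_kernel(x, y, beta=1.0, variance=1.0):
    # variance * exp(-beta * ||X - Y||_F ** 2), mirroring K() above
    return variance * np.exp(-beta * np.linalg.norm(x - y, 'fro') ** 2)

x = np.eye(2)
y = np.array([[1.0, 0.0], [0.0, 2.0]])
assert np.isclose(frobenius_gaussian_kernel(x, x), 1.0)
assert np.isclose(frobenius_gaussian_kernel(x, y), np.exp(-1.0))  # ||x - y||_F = 1
```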
class SpdLogEuclideanGaussianKernel(gpflow.kernels.Kernel):
"""
Instances of this class represent a Gaussian covariance matrix between input points on the SPD manifold
using the log-Euclidean distance.
Attributes
----------
self.matrix_dim: SPD matrix dimension, computed from the dimension of the inputs (given with Mandel notation)
self.beta_param: inverse square lengthscale parameter beta
self.variance: variance parameter of the kernel
Methods
-------
K(point1_in_SPD, point2_in_SPD):
"""
def __init__(self, input_dim, active_dims, beta=1., variance=1.):
"""
Initialisation.
Parameters
----------
:param input_dim: input dimension (in Mandel notation form)
:param active_dims: dimensions of the input used for kernel computation (in Mandel notation form),
defined as range(input_dim) if all the input dimensions are considered.
Optional parameters
-------------------
:param beta: value of beta
:param variance: value of the variance
"""
super().__init__(input_dim=input_dim, active_dims=active_dims)
# Matrix dimension from input vector dimension
self.matrix_dim = int((-1.0 + (1.0 + 8.0 * input_dim) ** 0.5) / 2.0)
# Parameter initialization
self.beta_param = gpflow.Param(beta, transform=gpflow.transforms.positive)
self.variance = gpflow.Param(variance, transform=gpflow.transforms.positive)
@gpflow.params_as_tensors
def K(self, X, X2=None):
"""
Computes the Log-Euclidean kernel matrix between inputs X (and X2) belonging to an SPD manifold.
Parameters
----------
:param X: input points on the SPD manifold (Mandel notation)
Optional parameters
-------------------
:param X2: input points on the SPD manifold (Mandel notation)
Returns
-------
:return: kernel matrix of X or between X and X2
"""
# Transform input vector to matrices
X = vector_to_symmetric_matrix_tf(X, self.matrix_dim)
if X2 is None:
X2 = X
else:
X2 = vector_to_symmetric_matrix_tf(X2, self.matrix_dim)
# Compute the kernel
X = tf.expand_dims(X, 1)
X2 = tf.expand_dims(X2, 0)
X = tf.tile(X, [1, tf.shape(X2)[1], 1, 1])
X2 = tf.tile(X2, [tf.shape(X)[0], 1, 1, 1])
X_log = tf.linalg.logm(tf.cast(X, dtype=tf.complex64))
X2_log = tf.linalg.logm(tf.cast(X2, dtype=tf.complex64))
diff_XX2 = tf.cast(tf.subtract(X_log, X2_log), dtype=tf.float64)
logeucl_dist = tf.norm(diff_XX2, axis=(-2, -1))
logeucl_dist2 = tf.square(logeucl_dist)
logeucl_dist2 = tf.multiply(logeucl_dist2, self.beta_param)
return tf.multiply(self.variance, tf.exp(-logeucl_dist2))
# Numpy/scipy check
# logeucldist = np.zeros((y_man_mat.shape[0], y_man_mat_test.shape[0]))
# for m in range(y_man_mat.shape[0]):
# for n in range(y_man_mat_test.shape[0]):
# logeucldist[m, n] = np.linalg.norm(sc.linalg.logm(y_man_mat[m]) - sc.linalg.logm(y_man_mat_test[n]))
#
# kernel_check = np.exp(- logeucldist**2 / 1.0)
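The commented NumPy/SciPy check above can be made runnable without SciPy by taking the matrix logarithm of an SPD matrix through its eigendecomposition; this sketch follows the same d(X, Y) = ||logm(X) - logm(Y)||_F definition used by `K()`:

```python
import numpy as np

def spd_logm(a):
    # Matrix logarithm of a symmetric positive-definite matrix via eigh.
    w, v = np.linalg.eigh(a)
    return (v * np.log(w)) @ v.T

def log_euclidean_distance(x, y):
    # d(X, Y) = || logm(X) - logm(Y) ||_F
    return np.linalg.norm(spd_logm(x) - spd_logm(y), 'fro')

x = np.array([[2.0, 0.5], [0.5, 1.0]])
assert np.isclose(log_euclidean_distance(x, x), 0.0)
# k(X, Y) = variance * exp(-beta * d(X, Y) ** 2), mirroring K() above
assert np.isclose(np.exp(-log_euclidean_distance(x, x) ** 2), 1.0)
```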
# neuro-sdk/tests/test_storage.py (from the neuro-inc/neuro-cli repository, Apache-2.0)
import asyncio
import errno
import json
import os
from filecmp import dircmp
from pathlib import Path
from shutil import copytree
from typing import Any, AsyncIterator, Callable, List, Tuple
from unittest import mock
import pytest
from aiohttp import web
from yarl import URL
from neuro_sdk import (
Action,
Client,
DiskUsageInfo,
FileStatus,
FileStatusType,
IllegalArgumentError,
StorageProgressComplete,
StorageProgressDelete,
StorageProgressStart,
StorageProgressStep,
)
from neuro_sdk._storage import _parse_content_range
from tests import _RawTestServerFactory, _TestServerFactory
_MakeClient = Callable[..., Client]
FOLDER = Path(__file__).parent
DATA_FOLDER = FOLDER / "data"
def calc_diff(dcmp: "dircmp[str]", *, pre: str = "") -> List[Tuple[str, str]]:
ret = []
for name in dcmp.diff_files:
ret.append((pre + name, pre + name))
for name in dcmp.left_only:
ret.append((pre + name, ""))
for name in dcmp.right_only:
ret.append(("", pre + name))
for name, sub_dcmp in dcmp.subdirs.items():
ret.extend(calc_diff(sub_dcmp, pre=name + "/"))
return ret
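`calc_diff` flattens a `dircmp` result into (left, right) name pairs, using an empty string for a missing side: `diff_files` yields `(name, name)`, `left_only` yields `(name, "")`, and `right_only` yields `("", name)`. A self-contained look at the `dircmp` attributes it walks (note `dircmp` compares shallowly, so the differing files here have different sizes):

```python
import tempfile
from filecmp import dircmp
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    left = Path(tmp) / "left"
    right = Path(tmp) / "right"
    left.mkdir()
    right.mkdir()
    (left / "same.txt").write_text("x")
    (right / "same.txt").write_text("x")
    (left / "changed.txt").write_text("aaa")   # differs in size from the right copy
    (right / "changed.txt").write_text("b")
    (left / "only_left.txt").write_text("l")

    d = dircmp(str(left), str(right))
    assert d.diff_files == ["changed.txt"]     # -> ("changed.txt", "changed.txt")
    assert d.left_only == ["only_left.txt"]    # -> ("only_left.txt", "")
    assert d.right_only == []
```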
@pytest.fixture
def small_block_size(monkeypatch: Any) -> None:
import neuro_sdk._storage
monkeypatch.setattr(neuro_sdk._storage, "READ_SIZE", 300)
@pytest.fixture
def storage_path(tmp_path: Path) -> Path:
ret = tmp_path / "storage"
ret.mkdir()
return ret
@pytest.fixture
async def storage_server(
aiohttp_raw_server: _RawTestServerFactory, storage_path: Path
) -> Any:
PREFIX = "/storage/user"
PREFIX_LEN = len(PREFIX)
async def handler(request: web.Request) -> web.StreamResponse:
assert "b3" in request.headers
op = request.query["op"]
path = request.path
assert path.startswith(PREFIX)
path = path[PREFIX_LEN:]
if path.startswith("/"):
path = path[1:]
local_path = storage_path / path
if op == "CREATE":
content = await request.read()
local_path.write_bytes(content)
return web.Response(status=201)
elif op == "WRITE":
rng = _parse_content_range(request.headers.get("Content-Range"))
content = await request.read()
assert rng.stop - rng.start == len(content)
with open(local_path, "r+b") as f:
f.seek(rng.start)
f.write(content)
return web.Response(status=200)
elif op == "OPEN":
rng = request.http_range
content = local_path.read_bytes()
response = web.StreamResponse()
start, stop, _ = rng.indices(len(content))
if not (rng.start is rng.stop is None):
if start >= stop:
raise RuntimeError
response.set_status(web.HTTPPartialContent.status_code)
response.headers[
"Content-Range"
] = f"bytes {start}-{stop-1}/{len(content)}"
response.content_length = stop - start
await response.prepare(request)
chunk_size = 200
if stop - start > chunk_size:
await response.write(content[start : start + chunk_size])
raise RuntimeError
else:
await response.write(content[start:stop])
await response.write_eof()
return response
elif op == "GETFILESTATUS":
if not local_path.exists():
raise web.HTTPNotFound()
stat = local_path.lstat()
status = {
"path": local_path.name,
"length": stat.st_size,
"modificationTime": stat.st_mtime,
"permission": "write",
}
if local_path.is_symlink():
status["type"] = "SYMLINK"
status["target"] = os.readlink(local_path)
elif local_path.is_file():
status["type"] = "FILE"
elif local_path.is_dir():
status["type"] = "DIRECTORY"
else:
status["type"] = "UNKNOWN"
return web.json_response({"FileStatus": status})
elif op == "MKDIRS":
try:
local_path.mkdir(parents=True, exist_ok=True)
except FileExistsError:
raise web.HTTPBadRequest(
text=json.dumps({"error": "File exists", "errno": "EEXIST"}),
content_type="application/json",
)
return web.Response(status=201)
elif op == "LISTSTATUS":
if not local_path.exists():
raise web.HTTPNotFound()
ret = []
for child in local_path.iterdir():
stat = child.lstat()
status = {
"path": child.name,
"length": stat.st_size,
"modificationTime": stat.st_mtime,
"permission": "write",
}
if child.is_symlink():
status["type"] = "SYMLINK"
status["target"] = os.readlink(local_path)
elif child.is_file():
status["type"] = "FILE"
elif child.is_dir():
status["type"] = "DIRECTORY"
else:
status["type"] = "UNKNOWN"
ret.append(status)
return await make_listiter_response(request, ret)
else:
raise web.HTTPInternalServerError(text=f"Unsupported operation {op}")
return await aiohttp_raw_server(handler)
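The WRITE branch above depends on `_parse_content_range` from `neuro_sdk._storage`, which is private; judging from its use (`rng.start`, `rng.stop`, with `rng.stop - rng.start == len(content)`), it turns a `Content-Range: bytes <first>-<last>/<size>` header into a half-open range, where the HTTP last-byte-pos is inclusive on the wire. A hypothetical equivalent:

```python
import re

def parse_content_range(value):
    # "bytes 100-299/1024" -> range(100, 300); last-byte-pos is inclusive in HTTP.
    m = re.fullmatch(r"bytes (\d+)-(\d+)/(\d+|\*)", value or "")
    if m is None:
        raise ValueError(f"Invalid Content-Range: {value!r}")
    first, last = int(m.group(1)), int(m.group(2))
    return range(first, last + 1)

rng = parse_content_range("bytes 100-299/1024")
assert (rng.start, rng.stop) == (100, 300)
assert rng.stop - rng.start == 200   # the length check made by the WRITE handler
```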
async def test_storage_ls_legacy(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
JSON = {
"FileStatuses": {
"FileStatus": [
{
"path": "foo",
"length": 1024,
"type": "FILE",
"modificationTime": 0,
"permission": "read",
},
{
"path": "bar",
"length": 4 * 1024,
"type": "DIRECTORY",
"modificationTime": 0,
"permission": "read",
},
{
"path": "baz",
"length": 1,
"type": "SYMLINK",
"modificationTime": 0,
"permission": "read",
"target": "foo",
},
{
"path": "spam",
"length": 1,
"type": "SPAM",
"modificationTime": 0,
"permission": "read",
},
]
}
}
async def handler(request: web.Request) -> web.Response:
assert "b3" in request.headers
assert request.path == "/storage/user/folder"
assert request.query == {"op": "LISTSTATUS"}
return web.json_response(JSON)
app = web.Application()
app.router.add_get("/storage/user/folder", handler)
srv = await aiohttp_server(app)
expected = [
FileStatus(
path="foo",
size=1024,
type=FileStatusType.FILE,
modification_time=0,
permission=Action.READ,
uri=URL("storage://default/user/folder/foo"),
),
FileStatus(
path="bar",
size=4 * 1024,
type=FileStatusType.DIRECTORY,
modification_time=0,
permission=Action.READ,
uri=URL("storage://default/user/folder/bar"),
),
FileStatus(
path="baz",
size=1,
type=FileStatusType.SYMLINK,
modification_time=0,
permission=Action.READ,
target="foo",
uri=URL("storage://default/user/folder/baz"),
),
FileStatus(
path="spam",
size=1,
type=FileStatusType.UNKNOWN,
modification_time=0,
permission=Action.READ,
uri=URL("storage://default/user/folder/spam"),
),
]
async with make_client(srv.make_url("/")) as client:
async with client.storage.list(URL("storage:folder")) as it:
ret = [file async for file in it]
assert ret == expected
async def make_listiter_response(
request: web.Request, file_statuses: List[Any]
) -> web.StreamResponse:
assert request.query == {"op": "LISTSTATUS"}
assert request.headers["Accept"] == "application/x-ndjson"
resp = web.StreamResponse()
resp.headers["Content-Type"] = "application/x-ndjson"
await resp.prepare(request)
for item in file_statuses:
await resp.write(json.dumps({"FileStatus": item}).encode() + b"\n")
return resp
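`make_listiter_response` streams newline-delimited JSON (`application/x-ndjson`): one `{"FileStatus": ...}` object per line. The framing can be shown without a server:

```python
import json

statuses = [{"path": "foo", "type": "FILE"}, {"path": "bar", "type": "DIRECTORY"}]

# Encode: one JSON document per line, as the handler writes to the stream.
body = b"".join(json.dumps({"FileStatus": s}).encode() + b"\n" for s in statuses)

# Decode: clients split on newlines and parse each line independently.
decoded = [json.loads(line)["FileStatus"] for line in body.splitlines() if line]
assert decoded == statuses
```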
async def test_storage_ls(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
file_statuses = [
{
"path": "foo",
"length": 1024,
"type": "FILE",
"modificationTime": 0,
"permission": "read",
},
{
"path": "bar",
"length": 4 * 1024,
"type": "DIRECTORY",
"modificationTime": 0,
"permission": "read",
},
{
"path": "baz",
"length": 1,
"type": "SYMLINK",
"modificationTime": 0,
"permission": "read",
"target": "foo",
},
{
"path": "spam",
"length": 1,
"type": "SPAM",
"modificationTime": 0,
"permission": "read",
},
]
async def handler(request: web.Request) -> web.StreamResponse:
assert "b3" in request.headers
assert request.path == "/storage/user/folder"
assert request.query == {"op": "LISTSTATUS"}
return await make_listiter_response(request, file_statuses)
app = web.Application()
app.router.add_get("/storage/user/folder", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
async with client.storage.list(URL("storage:folder")) as it:
ret = [file async for file in it]
assert ret == [
FileStatus(
path="foo",
size=1024,
type=FileStatusType.FILE,
modification_time=0,
permission=Action.READ,
uri=URL("storage://default/user/folder/foo"),
),
FileStatus(
path="bar",
size=4 * 1024,
type=FileStatusType.DIRECTORY,
modification_time=0,
permission=Action.READ,
uri=URL("storage://default/user/folder/bar"),
),
FileStatus(
path="baz",
size=1,
type=FileStatusType.SYMLINK,
modification_time=0,
permission=Action.READ,
target="foo",
uri=URL("storage://default/user/folder/baz"),
),
FileStatus(
path="spam",
size=1,
type=FileStatusType.UNKNOWN,
modification_time=0,
permission=Action.READ,
uri=URL("storage://default/user/folder/spam"),
),
]
async def test_storage_disk_usage(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.StreamResponse:
assert "b3" in request.headers
assert request.path == "/storage/user"
assert request.query == {"op": "GETDISKUSAGE"}
return web.json_response({"total": 100, "used": 20, "free": 80})
app = web.Application()
app.router.add_get("/storage/user", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
res = await client.storage.disk_usage()
assert res == DiskUsageInfo(total=100, used=20, free=80, cluster_name="default")
async def test_storage_disk_usage_another_cluster(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.StreamResponse:
assert "b3" in request.headers
assert request.path == "/storage2/user"
assert request.query == {"op": "GETDISKUSAGE"}
return web.json_response({"total": 100, "used": 20, "free": 80})
app = web.Application()
app.router.add_get("/storage2/user", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
res = await client.storage.disk_usage(cluster_name="another")
assert res == DiskUsageInfo(total=100, used=20, free=80, cluster_name="another")
async def test_storage_disk_usage_another_org(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.StreamResponse:
assert "b3" in request.headers
assert request.path == "/storage/org/user"
assert request.query == {"op": "GETDISKUSAGE"}
return web.json_response({"total": 100, "used": 20, "free": 80})
app = web.Application()
app.router.add_get("/storage/org/user", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
res = await client.storage.disk_usage(org_name="org")
assert res == DiskUsageInfo(
total=100, used=20, free=80, cluster_name="default", org_name="org"
)
async def test_storage_disk_usage_path(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.StreamResponse:
assert "b3" in request.headers
assert request.path == "/storage/user/dir"
assert request.query == {"op": "GETDISKUSAGE"}
return web.json_response({"total": 100, "used": 20, "free": 80})
app = web.Application()
app.router.add_get("/storage/user/dir", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
res = await client.storage.disk_usage(uri=URL("storage:dir"))
assert res == DiskUsageInfo(
total=100, used=20, free=80, cluster_name="default", uri=URL("storage:dir")
)
async def test_storage_ls_another_cluster(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
file_statuses = [
{
"path": "foo",
"length": 1024,
"type": "FILE",
"modificationTime": 0,
"permission": "read",
},
{
"path": "bar",
"length": 4 * 1024,
"type": "DIRECTORY",
"modificationTime": 0,
"permission": "read",
},
{
"path": "baz",
"length": 1,
"type": "SYMLINK",
"modificationTime": 0,
"permission": "read",
"target": "foo",
},
{
"path": "spam",
"length": 1,
"type": "SPAM",
"modificationTime": 0,
"permission": "read",
},
]
async def handler(request: web.Request) -> web.StreamResponse:
assert "b3" in request.headers
assert request.path == "/storage2/user/folder"
assert request.query == {"op": "LISTSTATUS"}
return await make_listiter_response(request, file_statuses)
app = web.Application()
app.router.add_get("/storage2/user/folder", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
async with client.storage.list(URL("storage://another/user/folder")) as it:
ret = [file async for file in it]
assert ret == [
FileStatus(
path="foo",
size=1024,
type=FileStatusType.FILE,
modification_time=0,
permission=Action.READ,
uri=URL("storage://another/user/folder/foo"),
),
FileStatus(
path="bar",
size=4 * 1024,
type=FileStatusType.DIRECTORY,
modification_time=0,
permission=Action.READ,
uri=URL("storage://another/user/folder/bar"),
),
FileStatus(
path="baz",
size=1,
type=FileStatusType.SYMLINK,
modification_time=0,
permission=Action.READ,
target="foo",
uri=URL("storage://another/user/folder/baz"),
),
FileStatus(
path="spam",
size=1,
type=FileStatusType.UNKNOWN,
modification_time=0,
permission=Action.READ,
uri=URL("storage://another/user/folder/spam"),
),
]
async def test_storage_ls_error_in_server_response(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
error_result = {"error": "Server is too busy", "errno": "EBUSY"}
async def handler(request: web.Request) -> web.StreamResponse:
assert "b3" in request.headers
assert request.path == "/storage/user/folder"
assert request.query == {"op": "LISTSTATUS"}
resp = web.StreamResponse()
resp.headers["Content-Type"] = "application/x-ndjson"
await resp.prepare(request)
await resp.write(json.dumps(error_result).encode() + b"\n")
return resp
app = web.Application()
app.router.add_get("/storage/user/folder", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
with pytest.raises(OSError) as err:
async with client.storage.list(URL("storage:folder")) as it:
async for _ in it:
pass
assert err.value.strerror == "Server is too busy"
assert err.value.errno == errno.EBUSY
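The test above asserts that the SDK turns the payload's symbolic errno name into a numeric `OSError.errno`. The exact mapping lives inside the SDK, but the convention being asserted can be sketched (the payload dict here is illustrative):

```python
import errno

# Hypothetical sketch: resolve the symbolic errno name from the error payload
# and raise an OSError carrying both the numeric code and the message.
payload = {"error": "Server is too busy", "errno": "EBUSY"}
code = getattr(errno, payload["errno"])
err = OSError(code, payload["error"])
assert err.errno == errno.EBUSY
assert err.strerror == payload["error"]
```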
async def test_storage_glob(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler_home(request: web.Request) -> web.StreamResponse:
assert "b3" in request.headers
assert request.path == "/storage/user"
assert request.query == {"op": "LISTSTATUS"}
return await make_listiter_response(
request,
[
{
"path": "folder",
"length": 0,
"type": "DIRECTORY",
"modificationTime": 0,
"permission": "read",
}
],
)
async def handler_folder(request: web.Request) -> web.StreamResponse:
assert "b3" in request.headers
assert request.path.rstrip("/") == "/storage/user/folder"
assert request.query["op"] in ("GETFILESTATUS", "LISTSTATUS")
if request.query["op"] == "GETFILESTATUS":
return web.json_response(
{
"FileStatus": {
"path": "/user/folder",
"type": "DIRECTORY",
"length": 0,
"modificationTime": 0,
"permission": "read",
}
}
)
elif request.query["op"] == "LISTSTATUS":
return await make_listiter_response(
request,
[
{
"path": "foo",
"length": 1024,
"type": "FILE",
"modificationTime": 0,
"permission": "read",
},
{
"path": "bar",
"length": 0,
"type": "DIRECTORY",
"modificationTime": 0,
"permission": "read",
},
],
)
else:
raise web.HTTPInternalServerError
async def handler_foo(request: web.Request) -> web.Response:
assert "b3" in request.headers
assert request.path == "/storage/user/folder/foo"
assert request.query == {"op": "GETFILESTATUS"}
return web.json_response(
{
"FileStatus": {
"path": "/user/folder/foo",
"length": 1024,
"type": "FILE",
"modificationTime": 0,
"permission": "read",
}
}
)
async def handler_bar(request: web.Request) -> web.StreamResponse:
assert request.path.rstrip("/") == "/storage/user/folder/bar"
if request.query["op"] == "GETFILESTATUS":
return web.json_response(
{
"FileStatus": {
"path": "/user/folder/bar",
"length": 0,
"type": "DIRECTORY",
"modificationTime": 0,
"permission": "read",
}
}
)
elif request.query["op"] == "LISTSTATUS":
return await make_listiter_response(
request,
[
{
"path": "baz",
"length": 0,
"type": "FILE",
"modificationTime": 0,
"permission": "read",
}
],
)
else:
raise web.HTTPInternalServerError
app = web.Application()
app.router.add_get("/storage/user", handler_home)
app.router.add_get("/storage/user/", handler_home)
app.router.add_get("/storage/user/folder", handler_folder)
app.router.add_get("/storage/user/folder/", handler_folder)
app.router.add_get("/storage/user/folder/foo", handler_foo)
app.router.add_get("/storage/user/folder/foo/", handler_foo)
app.router.add_get("/storage/user/folder/bar", handler_bar)
app.router.add_get("/storage/user/folder/bar/", handler_bar)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
async def glob(pattern: str) -> List[URL]:
async with client.storage.glob(URL(pattern)) as it:
return [uri async for uri in it]
assert await glob("storage:folder") == [URL("storage:folder")]
assert await glob("storage:folder/") == [URL("storage:folder/")]
assert await glob("storage:folder/*") == [
URL("storage:folder/foo"),
URL("storage:folder/bar"),
]
assert await glob("storage:folder/foo") == [URL("storage:folder/foo")]
assert await glob("storage:folder/[a-d]*") == [URL("storage:folder/bar")]
assert await glob("storage:folder/*/") == [URL("storage:folder/bar/")]
assert await glob("storage:*") == [URL("storage:folder")]
assert await glob("storage:**") == [
URL("storage:"),
URL("storage:folder"),
URL("storage:folder/foo"),
URL("storage:folder/bar"),
URL("storage:folder/bar/baz"),
]
assert await glob("storage:*/foo") == [URL("storage:folder/foo")]
assert await glob("storage:*/f*") == [URL("storage:folder/foo")]
assert await glob("storage:**/foo") == [URL("storage:folder/foo")]
assert await glob("storage:**/f*") == [
URL("storage:folder"),
URL("storage:folder/foo"),
]
assert await glob("storage:**/f*/") == [URL("storage:folder/")]
assert await glob("storage:**/b*") == [
URL("storage:folder/bar"),
URL("storage:folder/bar/baz"),
]
assert await glob("storage:**/b*/") == [URL("storage:folder/bar/")]
async def test_storage_rm_file(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
remove_listing = {"path": "/user/file", "is_dir": False}
async def delete_handler(request: web.Request) -> web.StreamResponse:
assert request.path == "/storage/user/file"
assert request.query == {"op": "DELETE", "recursive": "false"}
assert request.headers["Accept"] == "application/x-ndjson"
resp = web.StreamResponse()
resp.headers["Content-Type"] = "application/x-ndjson"
await resp.prepare(request)
await resp.write(json.dumps(remove_listing).encode() + b"\n")
return resp
app = web.Application()
app.router.add_delete("/storage/user/file", delete_handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
await client.storage.rm(URL("storage:file"))
async def test_storage_rm_file_another_cluster(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
remove_listing = {"path": "/user/file", "is_dir": False}
async def delete_handler(request: web.Request) -> web.StreamResponse:
assert request.path == "/storage2/user/file"
assert request.query == {"op": "DELETE", "recursive": "false"}
assert request.headers["Accept"] == "application/x-ndjson"
resp = web.StreamResponse()
resp.headers["Content-Type"] = "application/x-ndjson"
await resp.prepare(request)
await resp.write(json.dumps(remove_listing).encode() + b"\n")
return resp
app = web.Application()
app.router.add_delete("/storage2/user/file", delete_handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
await client.storage.rm(URL("storage://another/user/file"))
async def test_storage_rm_file_progress(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
remove_listing = {"path": "/user/file", "is_dir": False}
async def delete_handler(request: web.Request) -> web.StreamResponse:
assert request.path == "/storage/user/file"
assert request.query == {"op": "DELETE", "recursive": "false"}
assert request.headers["Accept"] == "application/x-ndjson"
resp = web.StreamResponse()
resp.headers["Content-Type"] = "application/x-ndjson"
await resp.prepare(request)
await resp.write(json.dumps(remove_listing).encode() + b"\n")
return resp
app = web.Application()
app.router.add_delete("/storage/user/file", delete_handler)
srv = await aiohttp_server(app)
progress = mock.Mock()
async with make_client(srv.make_url("/")) as client:
await client.storage.rm(URL("storage:file"), progress=progress)
progress.delete.assert_called_with(
StorageProgressDelete(
uri=URL("storage://default/user/file"),
is_dir=False,
)
)
async def test_storage_rm_file_progress_another_cluster(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
remove_listing = {"path": "/user/file", "is_dir": False}
async def delete_handler(request: web.Request) -> web.StreamResponse:
assert request.path == "/storage2/user/file"
assert request.query == {"op": "DELETE", "recursive": "false"}
assert request.headers["Accept"] == "application/x-ndjson"
resp = web.StreamResponse()
resp.headers["Content-Type"] = "application/x-ndjson"
await resp.prepare(request)
await resp.write(json.dumps(remove_listing).encode() + b"\n")
return resp
app = web.Application()
app.router.add_delete("/storage2/user/file", delete_handler)
srv = await aiohttp_server(app)
progress = mock.Mock()
async with make_client(srv.make_url("/")) as client:
await client.storage.rm(URL("storage://another/user/file"), progress=progress)
progress.delete.assert_called_with(
StorageProgressDelete(
uri=URL("storage://another/user/file"),
is_dir=False,
)
)
async def test_storage_rm_directory(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def delete_handler(request: web.Request) -> web.Response:
assert request.path == "/storage/user/folder"
assert request.query == {"op": "DELETE", "recursive": "false"}
return web.json_response(
{"error": "Target is a directory", "errno": "EISDIR"},
status=web.HTTPBadRequest.status_code,
)
app = web.Application()
app.router.add_delete("/storage/user/folder", delete_handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
with pytest.raises(IsADirectoryError, match="Target is a directory") as cm:
await client.storage.rm(URL("storage:folder"))
assert cm.value.errno == errno.EISDIR
async def test_storage_rm_recursive(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
remove_listing = {
"path": "/user/folder",
"is_dir": True,
}
async def delete_handler(request: web.Request) -> web.StreamResponse:
assert request.path == "/storage/user/folder"
assert request.query == {"op": "DELETE", "recursive": "true"}
assert request.headers["Accept"] == "application/x-ndjson"
resp = web.StreamResponse()
resp.headers["Content-Type"] = "application/x-ndjson"
await resp.prepare(request)
await resp.write(json.dumps(remove_listing).encode() + b"\n")
return resp
app = web.Application()
app.router.add_delete("/storage/user/folder", delete_handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
await client.storage.rm(URL("storage:folder"), recursive=True)
async def test_storage_rm_oserror_in_the_response_stream(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
error_result = {"error": "Server is too busy", "errno": "EBUSY"}
async def delete_handler(request: web.Request) -> web.StreamResponse:
assert request.path == "/storage/user/file"
assert request.query == {"op": "DELETE", "recursive": "false"}
assert request.headers["Accept"] == "application/x-ndjson"
resp = web.StreamResponse()
resp.headers["Content-Type"] = "application/x-ndjson"
await resp.prepare(request)
await resp.write(json.dumps(error_result).encode() + b"\n")
return resp
app = web.Application()
app.router.add_delete("/storage/user/file", delete_handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
with pytest.raises(OSError) as err:
await client.storage.rm(URL("storage:file"))
assert err.value.strerror == "Server is too busy"
assert err.value.errno == errno.EBUSY
async def test_storage_rm_generic_error_in_the_response_stream(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
error_result = {"error": "Server failed", "errno": None}
async def delete_handler(request: web.Request) -> web.StreamResponse:
assert request.path == "/storage/user/file"
assert request.query == {"op": "DELETE", "recursive": "false"}
assert request.headers["Accept"] == "application/x-ndjson"
resp = web.StreamResponse()
resp.headers["Content-Type"] = "application/x-ndjson"
await resp.prepare(request)
await resp.write(json.dumps(error_result).encode() + b"\n")
return resp
app = web.Application()
app.router.add_delete("/storage/user/file", delete_handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
with pytest.raises(Exception) as err:
await client.storage.rm(URL("storage:file"))
assert err.value.args[0] == "Server failed"
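The two tests above pin down how error records in the ndjson DELETE stream surface as exceptions: a named `errno` becomes an `OSError` with `.errno`/`.strerror` set, a `null` one becomes a plain `Exception`. A minimal sketch of that mapping (hypothetical helper, not the SDK's actual code):

```python
import errno
import json


def raise_for_ndjson_line(line: bytes) -> None:
    # Parse one ndjson record; only records carrying an "error" key raise.
    payload = json.loads(line)
    if "error" not in payload:
        return  # regular progress record, nothing to raise
    code_name = payload.get("errno")
    if code_name is not None:
        # "EBUSY" -> errno.EBUSY; OSError(errno, strerror) fills .errno/.strerror
        raise OSError(getattr(errno, code_name), payload["error"])
    raise Exception(payload["error"])
```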
async def test_storage_mv(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.Response:
assert request.path == "/storage/user/folder"
assert request.query == {"op": "RENAME", "destination": "/user/other"}
return web.Response(status=204)
app = web.Application()
app.router.add_post("/storage/user/folder", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
await client.storage.mv(URL("storage:folder"), URL("storage:other"))
async def test_storage_mv_another_cluster(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.Response:
assert request.path == "/storage2/user/folder"
assert request.query == {"op": "RENAME", "destination": "/user/other"}
return web.Response(status=204)
app = web.Application()
app.router.add_post("/storage2/user/folder", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
await client.storage.mv(
URL("storage://another/user/folder"), URL("storage://another/user/other")
)
async def test_storage_mv_different_clusters(make_client: _MakeClient) -> None:
async with make_client("https://example.com") as client:
with pytest.raises(ValueError, match="Cannot move cross-cluster"):
await client.storage.mv(
URL("storage:folder"), URL("storage://another/user/other")
)
with pytest.raises(ValueError, match="Cannot move cross-cluster"):
await client.storage.mv(
URL("storage://another/user/folder"), URL("storage:other")
)
async def test_storage_mv_unknown_cluster(make_client: _MakeClient) -> None:
async with make_client("https://example.com") as client:
with pytest.raises(
RuntimeError,
match="Cluster unknown doesn't exist in a list of available clusters",
):
await client.storage.mv(
URL("storage://unknown/user/folder"),
URL("storage://unknown/user/other"),
)
async def test_storage_mkdir_parents_exist_ok(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.Response:
assert request.path == "/storage/user/folder/sub"
assert request.query == {"op": "MKDIRS"}
return web.Response(status=204)
app = web.Application()
app.router.add_put("/storage/user/folder/sub", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
await client.storage.mkdir(
URL("storage:folder/sub"), parents=True, exist_ok=True
)
async def test_storage_mkdir_parents_exist_ok_another_cluster(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.Response:
assert request.path == "/storage2/user/folder/sub"
assert request.query == {"op": "MKDIRS"}
return web.Response(status=204)
app = web.Application()
app.router.add_put("/storage2/user/folder/sub", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
await client.storage.mkdir(
URL("storage://another/user/folder/sub"), parents=True, exist_ok=True
)
async def test_storage_mkdir_parents(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def get_handler(request: web.Request) -> web.Response:
assert request.path == "/storage/user/folder/sub"
assert request.query == {"op": "GETFILESTATUS"}
return web.Response(status=404)
async def put_handler(request: web.Request) -> web.Response:
assert request.path == "/storage/user/folder/sub"
assert request.query == {"op": "MKDIRS"}
return web.Response(status=204)
app = web.Application()
app.router.add_get("/storage/user/folder/sub", get_handler)
app.router.add_put("/storage/user/folder/sub", put_handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
await client.storage.mkdir(URL("storage:folder/sub"), parents=True)
async def test_storage_mkdir_exist_ok(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def get_handler(request: web.Request) -> web.Response:
assert request.path == "/storage/user/folder"
assert request.query == {"op": "GETFILESTATUS"}
return web.json_response(
{
"FileStatus": {
"path": "/user/folder",
"type": "DIRECTORY",
"length": 1234,
"modificationTime": 3456,
"permission": "read",
}
}
)
async def put_handler(request: web.Request) -> web.Response:
assert request.path == "/storage/user/folder/sub"
assert request.query == {"op": "MKDIRS"}
return web.Response(status=204)
app = web.Application()
app.router.add_get("/storage/user/folder", get_handler)
app.router.add_put("/storage/user/folder/sub", put_handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
await client.storage.mkdir(URL("storage:folder/sub"), exist_ok=True)
async def test_storage_mkdir(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def get_handler(request: web.Request) -> web.Response:
assert request.path == "/storage/user/folder/sub"
assert request.query == {"op": "GETFILESTATUS"}
return web.Response(status=404)
async def parent_get_handler(request: web.Request) -> web.Response:
assert request.path == "/storage/user/folder"
assert request.query == {"op": "GETFILESTATUS"}
return web.json_response(
{
"FileStatus": {
"path": "/user/folder",
"type": "DIRECTORY",
"length": 1234,
"modificationTime": 3456,
"permission": "read",
}
}
)
async def put_handler(request: web.Request) -> web.Response:
assert request.path == "/storage/user/folder/sub"
assert request.query == {"op": "MKDIRS"}
return web.Response(status=204)
app = web.Application()
app.router.add_get("/storage/user/folder/sub", get_handler)
app.router.add_get("/storage/user/folder", parent_get_handler)
app.router.add_put("/storage/user/folder/sub", put_handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
await client.storage.mkdir(URL("storage:folder/sub"))
async def test_storage_create(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.Response:
assert request.path == "/storage/user/file"
assert request.query == {"op": "CREATE"}
content = await request.read()
assert content == b"01234"
return web.Response(status=201)
app = web.Application()
app.router.add_put("/storage/user/file", handler)
srv = await aiohttp_server(app)
async def gen() -> AsyncIterator[bytes]:
for i in range(5):
yield str(i).encode("ascii")
async with make_client(srv.make_url("/")) as client:
await client.storage.create(URL("storage:file"), gen())
async def test_storage_create_another_cluster(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.Response:
assert request.path == "/storage2/user/file"
assert request.query == {"op": "CREATE"}
content = await request.read()
assert content == b"01234"
return web.Response(status=201)
app = web.Application()
app.router.add_put("/storage2/user/file", handler)
srv = await aiohttp_server(app)
async def gen() -> AsyncIterator[bytes]:
for i in range(5):
yield str(i).encode("ascii")
async with make_client(srv.make_url("/")) as client:
await client.storage.create(URL("storage://another/user/file"), gen())
async def test_storage_write(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.Response:
assert request.path == "/storage/user/file"
assert request.query == {"op": "WRITE"}
rng = _parse_content_range(request.headers.get("Content-Range"))
assert rng == slice(4, 9)
content = await request.read()
assert content == b"01234"
return web.Response(status=200)
app = web.Application()
app.router.add_patch("/storage/user/file", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
await client.storage.write(URL("storage:file"), b"01234", 4)
async def test_storage_write_another_cluster(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.Response:
assert request.path == "/storage2/user/file"
assert request.query == {"op": "WRITE"}
rng = _parse_content_range(request.headers.get("Content-Range"))
assert rng == slice(4, 9)
content = await request.read()
assert content == b"01234"
return web.Response(status=200)
app = web.Application()
app.router.add_patch("/storage2/user/file", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
await client.storage.write(URL("storage://another/user/file"), b"01234", 4)
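The write tests round-trip a `Content-Range` header through a `_parse_content_range` helper, expecting `slice(4, 9)` for a five-byte write at offset 4. A sketch of what such a parser could look like (assumed behaviour only; the real helper may differ):

```python
import re
from typing import Optional


def parse_content_range(value: Optional[str]) -> slice:
    # "Content-Range: bytes <first>-<last>/<total or *>"; HTTP ranges are
    # inclusive, Python slices are half-open, hence last + 1.
    if value is None:
        raise ValueError("Missing Content-Range header")
    m = re.fullmatch(r"bytes (\d+)-(\d+)/(?:\d+|\*)", value)
    if m is None:
        raise ValueError(f"Invalid Content-Range: {value!r}")
    first, last = int(m.group(1)), int(m.group(2))
    return slice(first, last + 1)
```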
async def test_storage_stats(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.Response:
assert request.path == "/storage/user/folder"
assert request.query == {"op": "GETFILESTATUS"}
return web.json_response(
{
"FileStatus": {
"path": "/user/folder",
"type": "DIRECTORY",
"length": 1234,
"modificationTime": 3456,
"permission": "read",
}
}
)
app = web.Application()
app.router.add_get("/storage/user/folder", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
stats = await client.storage.stat(URL("storage:folder"))
assert stats == FileStatus(
path="/user/folder",
type=FileStatusType.DIRECTORY,
size=1234,
modification_time=3456,
permission=Action.READ,
uri=URL("storage://default/user/folder"),
)
async def test_storage_stats_another_cluster(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.Response:
assert request.path == "/storage2/user/folder"
assert request.query == {"op": "GETFILESTATUS"}
return web.json_response(
{
"FileStatus": {
"path": "/user/folder",
"type": "DIRECTORY",
"length": 1234,
"modificationTime": 3456,
"permission": "read",
}
}
)
app = web.Application()
app.router.add_get("/storage2/user/folder", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
stats = await client.storage.stat(URL("storage://another/user/folder"))
assert stats == FileStatus(
path="/user/folder",
type=FileStatusType.DIRECTORY,
size=1234,
modification_time=3456,
permission=Action.READ,
uri=URL("storage://another/user/folder"),
)
async def test_storage_stats_symlink(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.Response:
assert request.path == "/storage/user/link"
assert request.query == {"op": "GETFILESTATUS"}
return web.json_response(
{
"FileStatus": {
"path": "/user/link",
"type": "SYMLINK",
"length": 1234,
"modificationTime": 3456,
"permission": "read",
"target": "folder/subfolder/file",
}
}
)
app = web.Application()
app.router.add_get("/storage/user/link", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
stats = await client.storage.stat(URL("storage:link"))
assert stats == FileStatus(
path="/user/link",
type=FileStatusType.SYMLINK,
size=1234,
modification_time=3456,
permission=Action.READ,
target="folder/subfolder/file",
uri=URL("storage://default/user/link"),
)
async def test_storage_open(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.StreamResponse:
assert request.path == "/storage/user/file"
if request.query["op"] == "OPEN":
resp = web.StreamResponse()
await resp.prepare(request)
for i in range(5):
await resp.write(str(i).encode("ascii"))
return resp
else:
raise AssertionError(f"Unknown operation {request.query['op']}")
app = web.Application()
app.router.add_get("/storage/user/file", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
buf = bytearray()
async with client.storage.open(URL("storage:file")) as it:
async for chunk in it:
buf.extend(chunk)
assert buf == b"01234"
async def test_storage_open_another_cluster(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.StreamResponse:
assert request.path == "/storage2/user/file"
if request.query["op"] == "OPEN":
resp = web.StreamResponse()
await resp.prepare(request)
for i in range(5):
await resp.write(str(i).encode("ascii"))
return resp
else:
raise AssertionError(f"Unknown operation {request.query['op']}")
app = web.Application()
app.router.add_get("/storage2/user/file", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
buf = bytearray()
async with client.storage.open(URL("storage://another/user/file")) as it:
async for chunk in it:
buf.extend(chunk)
assert buf == b"01234"
async def test_storage_open_partial_read(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.StreamResponse:
assert request.path == "/storage/user/file"
if request.query["op"] == "OPEN":
rng = request.http_range
data = b"ababahalamaha"
start, stop, _ = rng.indices(len(data))
return web.Response(
status=web.HTTPPartialContent.status_code,
headers={"Content-Range": f"bytes {start}-{stop-1}/{len(data)}"},
body=data[start:stop],
)
else:
raise AssertionError(f"Unknown operation {request.query['op']}")
app = web.Application()
app.router.add_get("/storage/user/file", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
buf = bytearray()
async with client.storage.open(URL("storage:file"), 5) as it:
async for chunk in it:
buf.extend(chunk)
assert buf == b"halamaha"
buf = bytearray()
async with client.storage.open(URL("storage:file"), 5, 4) as it:
async for chunk in it:
buf.extend(chunk)
assert buf == b"hala"
buf = bytearray()
async with client.storage.open(URL("storage:file"), 5, 20) as it:
async for chunk in it:
buf.extend(chunk)
assert buf == b"halamaha"
buf = bytearray()
async with client.storage.open(URL("storage:file"), 5, 0) as it:
async for chunk in it:
buf.extend(chunk)
assert buf == b""
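The four reads above fix the offset/size semantics of partial `open`: the requested window is clamped to the data length, and a zero size yields nothing. The server-side slicing the handler performs via `rng.indices` condenses to:

```python
from typing import Optional


def byte_range(data: bytes, offset: int, size: Optional[int] = None) -> bytes:
    # Clamp the window [offset, offset + size) to the data length;
    # size=None means "read to the end of the file".
    stop = len(data) if size is None else min(offset + size, len(data))
    return data[offset:stop]
```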
async def test_storage_open_unsupported_partial_read(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.StreamResponse:
assert request.path == "/storage/user/file"
if request.query["op"] == "OPEN":
resp = web.StreamResponse()
await resp.prepare(request)
for i in range(5):
await resp.write(str(i).encode("ascii"))
return resp
else:
raise AssertionError(f"Unknown operation {request.query['op']}")
app = web.Application()
app.router.add_get("/storage/user/file", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
buf = bytearray()
async with client.storage.open(URL("storage:file"), 0) as it:
async for chunk in it:
buf.extend(chunk)
assert buf == b"01234"
with pytest.raises(RuntimeError):
async with client.storage.open(URL("storage:file"), 5) as it:
async for chunk in it:
pass
async def test_storage_open_directory(
aiohttp_server: _TestServerFactory, make_client: _MakeClient
) -> None:
async def handler(request: web.Request) -> web.Response:
assert request.path == "/storage/user/folder"
assert request.query == {"op": "GETFILESTATUS"}
return web.json_response(
{
"FileStatus": {
"path": "/user/folder",
"type": "DIRECTORY",
"length": 5,
"modificationTime": 3456,
"permission": "read",
}
}
)
app = web.Application()
app.router.add_get("/storage/user/folder", handler)
srv = await aiohttp_server(app)
async with make_client(srv.make_url("/")) as client:
buf = bytearray()
with pytest.raises((IsADirectoryError, IllegalArgumentError)):
async with client.storage.open(URL("storage:folder")) as it:
async for chunk in it:
buf.extend(chunk)
assert not buf
# test normalizers
# high level API
async def test_storage_upload_file_does_not_exist(make_client: _MakeClient) -> None:
async with make_client("https://example.com") as client:
with pytest.raises(FileNotFoundError):
await client.storage.upload_file(
URL("file:///not-exists-file"), URL("storage://host/path/to/file.txt")
)
async def test_storage_upload_dir_doesnt_exist(make_client: _MakeClient) -> None:
async with make_client("https://example.com") as client:
with pytest.raises(IsADirectoryError):
await client.storage.upload_file(
URL(FOLDER.as_uri()), URL("storage://host/path/to")
)
async def test_storage_upload_not_a_file(
storage_server: Any,
make_client: _MakeClient,
storage_path: Path,
small_block_size: None,
) -> None:
file_path = Path(os.devnull).absolute()
target_path = storage_path / "file.txt"
progress = mock.Mock()
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_file(
URL(file_path.as_uri()), URL("storage:file.txt"), progress=progress
)
uploaded = target_path.read_bytes()
assert uploaded == b""
src = URL(file_path.as_uri())
dst = URL("storage://default/user/file.txt")
progress.start.assert_called_with(StorageProgressStart(src, dst, 0))
progress.step.assert_not_called()
progress.complete.assert_called_with(StorageProgressComplete(src, dst, 0))
async def test_storage_upload_regular_file_to_existing_file_target(
storage_server: Any,
make_client: _MakeClient,
storage_path: Path,
small_block_size: None,
) -> None:
file_path = DATA_FOLDER / "file.txt"
file_size = file_path.stat().st_size
target_path = storage_path / "file.txt"
progress = mock.Mock()
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_file(
URL(file_path.as_uri()), URL("storage:file.txt"), progress=progress
)
expected = file_path.read_bytes()
uploaded = target_path.read_bytes()
assert uploaded == expected
src = URL(file_path.as_uri())
dst = URL("storage://default/user/file.txt")
progress.start.assert_called_with(StorageProgressStart(src, dst, file_size))
progress.step.assert_called_with(
StorageProgressStep(src, dst, file_size, file_size)
)
progress.complete.assert_called_with(StorageProgressComplete(src, dst, file_size))
async def test_storage_upload_regular_file_to_existing_dir(
storage_server: Any,
make_client: _MakeClient,
storage_path: Path,
small_block_size: None,
) -> None:
file_path = DATA_FOLDER / "file.txt"
folder = storage_path / "folder"
folder.mkdir()
async with make_client(storage_server.make_url("/")) as client:
with pytest.raises(IsADirectoryError):
await client.storage.upload_file(
URL(file_path.as_uri()), URL("storage:folder")
)
async def test_storage_upload_regular_file_to_existing_file(
storage_server: Any,
make_client: _MakeClient,
storage_path: Path,
small_block_size: None,
) -> None:
file_path = DATA_FOLDER / "file.txt"
folder = storage_path / "folder"
folder.mkdir()
target_path = folder / "file.txt"
target_path.write_bytes(b"existing file")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_file(
URL(file_path.as_uri()), URL("storage:folder/file.txt")
)
expected = file_path.read_bytes()
uploaded = target_path.read_bytes()
assert uploaded == expected
async def test_storage_upload_regular_file_to_existing_dir_with_trailing_slash(
storage_server: Any,
make_client: _MakeClient,
storage_path: Path,
small_block_size: None,
) -> None:
file_path = DATA_FOLDER / "file.txt"
folder = storage_path / "folder"
folder.mkdir()
async with make_client(storage_server.make_url("/")) as client:
with pytest.raises(IsADirectoryError):
await client.storage.upload_file(
URL(file_path.as_uri()), URL("storage:folder/")
)
async def test_storage_upload_regular_file_to_existing_non_dir(
storage_server: Any,
make_client: _MakeClient,
storage_path: Path,
small_block_size: None,
) -> None:
file_path = DATA_FOLDER / "file.txt"
path = storage_path / "file"
path.write_bytes(b"dummy")
async with make_client(storage_server.make_url("/")) as client:
with pytest.raises(NotADirectoryError):
await client.storage.upload_file(
URL(file_path.as_uri()), URL("storage:file/subfile.txt")
)
async def test_storage_upload_regular_file_to_not_existing(
storage_server: Any, make_client: _MakeClient, small_block_size: None
) -> None:
file_path = DATA_FOLDER / "file.txt"
async with make_client(storage_server.make_url("/")) as client:
with pytest.raises(NotADirectoryError):
await client.storage.upload_file(
URL(file_path.as_uri()), URL("storage:absent-dir/absent-file.txt")
)
async def test_storage_upload_recursive_src_doesnt_exist(
make_client: _MakeClient,
) -> None:
async with make_client("https://example.com") as client:
with pytest.raises(FileNotFoundError):
await client.storage.upload_dir(
URL("file:does_not_exist"), URL("storage://host/path/to")
)
async def test_storage_upload_recursive_src_is_a_file(make_client: _MakeClient) -> None:
file_path = DATA_FOLDER / "file.txt"
async with make_client("https://example.com") as client:
with pytest.raises(NotADirectoryError):
await client.storage.upload_dir(
URL(file_path.as_uri()), URL("storage://host/path/to")
)
async def test_storage_upload_recursive_target_is_a_file(
storage_server: Any, make_client: _MakeClient, storage_path: Path
) -> None:
target_file = storage_path / "file.txt"
target_file.write_bytes(b"dummy")
async with make_client(storage_server.make_url("/")) as client:
with pytest.raises(NotADirectoryError):
await client.storage.upload_dir(
URL(DATA_FOLDER.as_uri()), URL("storage:file.txt")
)
async def test_storage_upload_empty_dir(
storage_server: Any, make_client: _MakeClient, tmp_path: Path, storage_path: Path
) -> None:
target_dir = storage_path / "folder"
assert not target_dir.exists()
src_dir = tmp_path / "empty"
src_dir.mkdir()
assert list(src_dir.iterdir()) == []
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_dir(URL(src_dir.as_uri()), URL("storage:folder"))
assert list(target_dir.iterdir()) == []
async def test_storage_upload_recursive_ok(
storage_server: Any, make_client: _MakeClient, storage_path: Path
) -> None:
target_dir = storage_path / "folder"
target_dir.mkdir()
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_dir(
URL(DATA_FOLDER.as_uri()) / "nested", URL("storage:folder")
)
diff = dircmp(DATA_FOLDER / "nested", target_dir)
assert not calc_diff(diff)
async def test_storage_upload_recursive_slash_ending(
storage_server: Any, make_client: _MakeClient, storage_path: Path
) -> None:
target_dir = storage_path / "folder"
target_dir.mkdir()
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_dir(
URL(DATA_FOLDER.as_uri()) / "nested", URL("storage:folder/")
)
diff = dircmp(DATA_FOLDER / "nested", target_dir)
assert not calc_diff(diff)
async def test_storage_download_regular_file_to_absent_file(
storage_server: Any, make_client: _MakeClient, tmp_path: Path, storage_path: Path
) -> None:
src_file = DATA_FOLDER / "file.txt"
storage_file = storage_path / "file.txt"
storage_file.write_bytes(src_file.read_bytes())
local_dir = tmp_path / "local"
local_dir.mkdir()
local_file = local_dir / "file.txt"
progress = mock.Mock()
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_file(
URL("storage:file.txt"), URL(local_file.as_uri()), progress=progress
)
expected = src_file.read_bytes()
downloaded = local_file.read_bytes()
assert downloaded == expected
src = URL("storage://default/user/file.txt")
dst = URL(local_file.as_uri())
file_size = src_file.stat().st_size
progress.start.assert_called_with(StorageProgressStart(src, dst, file_size))
progress.step.assert_called_with(
StorageProgressStep(src, dst, file_size, file_size)
)
progress.complete.assert_called_with(StorageProgressComplete(src, dst, file_size))
async def test_storage_download_regular_file_to_existing_file(
storage_server: Any, make_client: _MakeClient, tmp_path: Path, storage_path: Path
) -> None:
src_file = DATA_FOLDER / "file.txt"
storage_file = storage_path / "file.txt"
storage_file.write_bytes(src_file.read_bytes())
local_dir = tmp_path / "local"
local_dir.mkdir()
local_file = local_dir / "file.txt"
local_file.write_bytes(b"Previous data")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_file(
URL("storage:file.txt"), URL(local_file.as_uri())
)
expected = src_file.read_bytes()
downloaded = local_file.read_bytes()
assert downloaded == expected
async def test_storage_download_regular_file_to_dir(
storage_server: Any, make_client: _MakeClient, tmp_path: Path, storage_path: Path
) -> None:
src_file = DATA_FOLDER / "file.txt"
storage_file = storage_path / "file.txt"
storage_file.write_bytes(src_file.read_bytes())
local_dir = tmp_path / "local"
local_dir.mkdir()
async with make_client(storage_server.make_url("/")) as client:
with pytest.raises((IsADirectoryError, PermissionError)):
await client.storage.download_file(
URL("storage:file.txt"), URL(local_dir.as_uri())
)
async def test_storage_download_regular_file_to_dir_slash_ended(
storage_server: Any, make_client: _MakeClient, tmp_path: Path, storage_path: Path
) -> None:
src_file = DATA_FOLDER / "file.txt"
storage_file = storage_path / "file.txt"
storage_file.write_bytes(src_file.read_bytes())
local_dir = tmp_path / "local"
local_dir.mkdir()
async with make_client(storage_server.make_url("/")) as client:
with pytest.raises((IsADirectoryError, PermissionError)):
await client.storage.download_file(
URL("storage:file.txt"), URL(local_dir.as_uri() + "/")
)
async def test_storage_download_regular_file_to_non_file(
storage_server: Any, make_client: _MakeClient, tmp_path: Path, storage_path: Path
) -> None:
src_file = DATA_FOLDER / "file.txt"
storage_file = storage_path / "file.txt"
storage_file.write_bytes(src_file.read_bytes())
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_file(
URL("storage:file.txt"), URL(Path(os.devnull).absolute().as_uri())
)
async def test_storage_download_empty_dir(
storage_server: Any, make_client: _MakeClient, tmp_path: Path, storage_path: Path
) -> None:
storage_dir = storage_path / "folder"
storage_dir.mkdir()
assert list(storage_dir.iterdir()) == []
target_dir = tmp_path / "empty"
assert not target_dir.exists()
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_dir(
URL("storage:folder"), URL(target_dir.as_uri())
)
assert list(target_dir.iterdir()) == []
async def test_storage_download_dir(
storage_server: Any, make_client: _MakeClient, tmp_path: Path, storage_path: Path
) -> None:
storage_dir = storage_path / "folder"
copytree(DATA_FOLDER / "nested", storage_dir)
local_dir = tmp_path / "local"
local_dir.mkdir()
target_dir = local_dir / "nested"
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_dir(
URL("storage:folder"), URL(target_dir.as_uri())
)
diff = dircmp(DATA_FOLDER / "nested", target_dir)
assert not calc_diff(diff)
async def test_storage_download_dir_slash_ending(
storage_server: Any, make_client: _MakeClient, tmp_path: Path, storage_path: Path
) -> None:
storage_dir = storage_path / "folder"
copytree(DATA_FOLDER / "nested", storage_dir / "nested")
local_dir = tmp_path / "local"
local_dir.mkdir()
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_dir(
URL("storage:folder"), URL(local_dir.as_uri() + "/")
)
diff = dircmp(DATA_FOLDER / "nested", local_dir / "nested")
assert not calc_diff(diff)
@pytest.fixture
def zero_time_threshold(monkeypatch: Any) -> None:
import neuro_sdk._storage
monkeypatch.setattr(neuro_sdk._storage, "TIME_THRESHOLD", 0.0)
async def test_storage_upload_file_update(
storage_server: Any,
make_client: _MakeClient,
tmp_path: Path,
storage_path: Path,
zero_time_threshold: None,
small_block_size: None,
) -> None:
storage_file = storage_path / "file.txt"
local_file = tmp_path / "file.txt"
src = URL(local_file.as_uri())
dst = URL("storage:file.txt")
# No destination file
assert not storage_file.exists()
local_file.write_bytes(b"old content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_file(src, dst, update=True)
assert storage_file.read_bytes() == b"old content"
# Source file is newer
local_file.write_bytes(b"new content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_file(src, dst, update=True)
assert storage_file.read_bytes() == b"new content"
    # Destination file is newer
await asyncio.sleep(5)
storage_file.write_bytes(b"old")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_file(src, dst, update=True)
assert storage_file.read_bytes() == b"old"
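With `TIME_THRESHOLD` zeroed by the fixture, the `update=True` cases above reduce to a pure mtime comparison: upload when the destination is missing or older than the source, skip otherwise. As a decision sketch (hypothetical helper, not the SDK code):

```python
from typing import Optional


def should_upload_update(src_mtime: float, dst_mtime: Optional[float],
                         threshold: float = 0.0) -> bool:
    # dst_mtime is None when the destination does not exist yet.
    if dst_mtime is None:
        return True
    # Re-upload only when the source is strictly newer, with a tolerance
    # for clock skew between the client and the storage server.
    return src_mtime > dst_mtime + threshold
```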
async def test_storage_upload_file_continue(
storage_server: Any,
make_client: _MakeClient,
tmp_path: Path,
storage_path: Path,
zero_time_threshold: None,
small_block_size: None,
) -> None:
storage_file = storage_path / "file.txt"
local_file = tmp_path / "file.txt"
src = URL(local_file.as_uri())
dst = URL("storage:file.txt")
# No destination file
assert not storage_file.exists()
local_file.write_bytes(b"content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_file(src, dst, continue_=True)
assert storage_file.read_bytes() == b"content"
# Source file is newer
local_file.write_bytes(b"new content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_file(src, dst, continue_=True)
assert storage_file.read_bytes() == b"new content"
# Destination file is newer, same size
await asyncio.sleep(5)
storage_file.write_bytes(b"old content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_file(src, dst, continue_=True)
assert storage_file.read_bytes() == b"old content"
# Destination file is shorter
storage_file.write_bytes(b"old")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_file(src, dst, continue_=True)
assert storage_file.read_bytes() == b"old content"
# Destination file is longer
storage_file.write_bytes(b"old long content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_file(src, dst, continue_=True)
assert storage_file.read_bytes() == b"new content"
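The `continue_=True` cases enumerate three outcomes: a destination of equal size that is not older is left alone, a shorter one gets only the missing tail appended, and a longer one triggers a full re-upload. In sketch form (hypothetical, assuming the destination is not older than the source):

```python
def resume_result(src: bytes, dst: bytes) -> bytes:
    # Same length: assume the upload already completed, keep dst as is.
    if len(dst) == len(src):
        return dst
    # Shorter: treat dst as a partial upload and append the remaining bytes.
    if len(dst) < len(src):
        return dst + src[len(dst):]
    # Longer: dst cannot be a prefix of src, start over with a full upload.
    return src
```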
async def test_storage_upload_dir_update(
storage_server: Any,
make_client: _MakeClient,
tmp_path: Path,
storage_path: Path,
zero_time_threshold: None,
) -> None:
storage_file = storage_path / "folder" / "nested" / "file.txt"
local_dir = tmp_path / "folder"
local_file = local_dir / "nested" / "file.txt"
local_file.parent.mkdir(parents=True)
src = URL(local_dir.as_uri())
dst = URL("storage:folder")
# No destination file
assert not storage_file.exists()
local_file.write_bytes(b"old content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_dir(src, dst, update=True)
assert storage_file.read_bytes() == b"old content"
# Source file is newer
local_file.write_bytes(b"new content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_dir(src, dst, update=True)
assert storage_file.read_bytes() == b"new content"
    # Destination file is newer
await asyncio.sleep(5)
storage_file.write_bytes(b"old")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_dir(src, dst, update=True)
assert storage_file.read_bytes() == b"old"
async def test_storage_upload_dir_continue(
storage_server: Any,
make_client: _MakeClient,
tmp_path: Path,
storage_path: Path,
zero_time_threshold: None,
small_block_size: None,
) -> None:
storage_file = storage_path / "folder" / "nested" / "file.txt"
local_dir = tmp_path / "folder"
local_file = local_dir / "nested" / "file.txt"
local_file.parent.mkdir(parents=True)
src = URL(local_dir.as_uri())
dst = URL("storage:folder")
# No destination file
assert not storage_file.exists()
local_file.write_bytes(b"content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_dir(src, dst, continue_=True)
assert storage_file.read_bytes() == b"content"
# Source file is newer
local_file.write_bytes(b"new content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_dir(src, dst, continue_=True)
assert storage_file.read_bytes() == b"new content"
# Destination file is newer, same size
await asyncio.sleep(5)
storage_file.write_bytes(b"old content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_dir(src, dst, continue_=True)
assert storage_file.read_bytes() == b"old content"
# Destination file is shorter
storage_file.write_bytes(b"old")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_dir(src, dst, continue_=True)
assert storage_file.read_bytes() == b"old content"
# Destination file is longer
storage_file.write_bytes(b"old long content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_dir(src, dst, continue_=True)
assert storage_file.read_bytes() == b"new content"
async def test_storage_download_file_update(
storage_server: Any,
make_client: _MakeClient,
tmp_path: Path,
storage_path: Path,
zero_time_threshold: None,
) -> None:
storage_file = storage_path / "file.txt"
local_file = tmp_path / "file.txt"
src = URL("storage:file.txt")
dst = URL(local_file.as_uri())
# No destination file
assert not local_file.exists()
storage_file.write_bytes(b"old content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_file(src, dst, update=True)
assert local_file.read_bytes() == b"old content"
# Source file is newer
storage_file.write_bytes(b"new content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_file(src, dst, update=True)
assert local_file.read_bytes() == b"new content"
# Destination file is newer
await asyncio.sleep(2)
local_file.write_bytes(b"old")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_file(src, dst, update=True)
assert local_file.read_bytes() == b"old"
async def test_storage_download_file_continue(
storage_server: Any,
make_client: _MakeClient,
tmp_path: Path,
storage_path: Path,
zero_time_threshold: None,
small_block_size: None,
) -> None:
storage_file = storage_path / "file.txt"
local_file = tmp_path / "file.txt"
src = URL("storage:file.txt")
dst = URL(local_file.as_uri())
# No destination file
assert not local_file.exists()
storage_file.write_bytes(b"content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_file(src, dst, continue_=True)
assert local_file.read_bytes() == b"content"
# Source file is newer
storage_file.write_bytes(b"new content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_file(src, dst, continue_=True)
assert local_file.read_bytes() == b"new content"
# Destination file is newer, same size
await asyncio.sleep(2)
local_file.write_bytes(b"old content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_file(src, dst, continue_=True)
assert local_file.read_bytes() == b"old content"
# Destination file is shorter
local_file.write_bytes(b"old")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_file(src, dst, continue_=True)
assert local_file.read_bytes() == b"old content"
# Destination file is longer
local_file.write_bytes(b"old long content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_file(src, dst, continue_=True)
assert local_file.read_bytes() == b"new content"
async def test_storage_download_dir_update(
storage_server: Any,
make_client: _MakeClient,
tmp_path: Path,
storage_path: Path,
zero_time_threshold: None,
) -> None:
storage_file = storage_path / "folder" / "nested" / "file.txt"
local_dir = tmp_path / "folder"
local_file = local_dir / "nested" / "file.txt"
storage_file.parent.mkdir(parents=True)
src = URL("storage:folder")
dst = URL(local_dir.as_uri())
# No destination file
assert not local_file.exists()
storage_file.write_bytes(b"old content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_dir(src, dst, update=True)
assert local_file.read_bytes() == b"old content"
# Source file is newer
storage_file.write_bytes(b"new content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_dir(src, dst, update=True)
assert local_file.read_bytes() == b"new content"
# Destination file is newer
await asyncio.sleep(2)
local_file.write_bytes(b"old")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_dir(src, dst, update=True)
assert local_file.read_bytes() == b"old"
async def test_storage_download_dir_continue(
storage_server: Any,
make_client: _MakeClient,
tmp_path: Path,
storage_path: Path,
zero_time_threshold: None,
) -> None:
storage_file = storage_path / "folder" / "nested" / "file.txt"
local_dir = tmp_path / "folder"
local_file = local_dir / "nested" / "file.txt"
storage_file.parent.mkdir(parents=True)
src = URL("storage:folder")
dst = URL(local_dir.as_uri())
# No destination file
assert not local_file.exists()
storage_file.write_bytes(b"content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_dir(src, dst, continue_=True)
assert local_file.read_bytes() == b"content"
# Source file is newer
storage_file.write_bytes(b"new content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_dir(src, dst, continue_=True)
assert local_file.read_bytes() == b"new content"
# Destination file is newer, same size
await asyncio.sleep(2)
local_file.write_bytes(b"old content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_dir(src, dst, continue_=True)
assert local_file.read_bytes() == b"old content"
# Destination file is shorter
local_file.write_bytes(b"old")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_dir(src, dst, continue_=True)
assert local_file.read_bytes() == b"old content"
# Destination file is longer
local_file.write_bytes(b"old long content")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.download_dir(src, dst, continue_=True)
assert local_file.read_bytes() == b"new content"
async def test_storage_upload_dir_with_ignore_file_names(
storage_server: Any, make_client: _MakeClient, tmp_path: Path, storage_path: Path
) -> None:
local_dir = tmp_path / "folder"
local_dir2 = local_dir / "nested"
local_dir2.mkdir(parents=True)
for name in "one", "two", "three":
(local_dir / name).write_bytes(b"")
(local_dir2 / name).write_bytes(b"")
(local_dir / ".neuroignore").write_text("one")
(local_dir2 / ".gitignore").write_text("two")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_dir(
URL(local_dir.as_uri()),
URL("storage:folder"),
ignore_file_names={".neuroignore", ".gitignore"},
)
names = sorted(os.listdir(storage_path / "folder"))
assert names == [".neuroignore", "nested", "three", "two"]
names = sorted(os.listdir(storage_path / "folder" / "nested"))
assert names == [".gitignore", "three"]
async def test_storage_upload_dir_with_parent_ignore_file_names(
storage_server: Any, make_client: _MakeClient, tmp_path: Path, storage_path: Path
) -> None:
parent_dir = tmp_path / "parent"
local_dir = parent_dir / "folder"
local_dir2 = local_dir / "nested"
local_dir2.mkdir(parents=True)
for name in "one", "two", "three":
(local_dir / name).write_bytes(b"")
(local_dir2 / name).write_bytes(b"")
(tmp_path / ".neuroignore").write_text("one")
(parent_dir / ".gitignore").write_text("*/two")
async with make_client(storage_server.make_url("/")) as client:
await client.storage.upload_dir(
URL(local_dir.as_uri()),
URL("storage:folder"),
ignore_file_names={".neuroignore", ".gitignore"},
)
names = sorted(os.listdir(storage_path / "folder"))
assert names == ["nested", "three"]
names = sorted(os.listdir(storage_path / "folder" / "nested"))
assert names == ["three", "two"]
# ==== pytorch-wopred/models_pytorch.py — EIHW/CAANet_DCASE_ASC (MIT) ====
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import numpy as np
def move_data_to_gpu(x, cuda):
if 'float' in str(x.dtype):
x = torch.Tensor(x)
elif 'int' in str(x.dtype):
x = torch.LongTensor(x)
else:
raise TypeError("Unsupported dtype: {}".format(x.dtype))
if cuda:
x = x.cuda()
x = Variable(x)
return x
def init_layer(layer):
"""Initialize a Linear or Convolutional layer.
Ref: He, Kaiming, et al. "Delving deep into rectifiers: Surpassing
human-level performance on imagenet classification." Proceedings of the
IEEE international conference on computer vision. 2015.
"""
if layer.weight.ndimension() == 4:
(n_out, n_in, height, width) = layer.weight.size()
n = n_in * height * width
elif layer.weight.ndimension() == 2:
    (n_out, n) = layer.weight.size()
else:
    raise ValueError("init_layer expects a 2D or 4D weight tensor")
std = math.sqrt(2. / n)
scale = std * math.sqrt(3.)
layer.weight.data.uniform_(-scale, scale)
if layer.bias is not None:
layer.bias.data.fill_(0.)
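As a quick sanity check (a sketch, not part of the original model code, with an illustrative fan-in), the uniform bounds used by `init_layer` reproduce the He-initialisation standard deviation: a uniform distribution on [-scale, scale] has std = scale / sqrt(3), so scale = sqrt(3) * sqrt(2 / n) yields std = sqrt(2 / n).

```python
import math

# Fan-in of a hypothetical 5x5 conv with 256 input channels (illustrative only).
n = 256 * 5 * 5
std = math.sqrt(2. / n)          # target std from He et al. (2015)
scale = std * math.sqrt(3.)      # uniform bound used by init_layer
# A uniform distribution on [-scale, scale] has std = scale / sqrt(3).
recovered_std = scale / math.sqrt(3.)
assert abs(recovered_std - std) < 1e-12
```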
def init_bn(bn):
"""Initialize a Batchnorm layer. """
bn.bias.data.fill_(0.)
bn.weight.data.fill_(1.)
####################################################################################################
class EmbeddingLayers_Nopooling(nn.Module):
def __init__(self, cond_layer=1):
super(EmbeddingLayers_Nopooling, self).__init__()
self.conv1 = nn.Conv2d(in_channels=1, out_channels=64,
kernel_size=(5, 5), stride=(1, 1),
padding=(2, 2), bias=False)
self.conv2 = nn.Conv2d(in_channels=64, out_channels=128,
kernel_size=(5, 5), stride=(1, 1),
padding=(2, 2), bias=False)
self.conv3 = nn.Conv2d(in_channels=128, out_channels=256,
kernel_size=(5, 5), stride=(1, 1),
padding=(2, 2), bias=False)
self.conv4 = nn.Conv2d(in_channels=256, out_channels=512,
kernel_size=(5, 5), stride=(1, 1),
padding=(2, 2), bias=False)
self.condlayer = cond_layer
if cond_layer==1:
self.cond = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=(1, 1))
elif cond_layer==2:
self.cond2 = nn.Conv2d(in_channels=3, out_channels=128, kernel_size=(1, 1))
elif cond_layer==3:
self.cond3 = nn.Conv2d(in_channels=3, out_channels=256, kernel_size=(1, 1))
elif cond_layer==4:
self.cond4 = nn.Conv2d(in_channels=3, out_channels=512, kernel_size=(1, 1))
self.bn1 = nn.BatchNorm2d(64)
self.bn2 = nn.BatchNorm2d(128)
self.bn3 = nn.BatchNorm2d(256)
self.bn4 = nn.BatchNorm2d(512)
self.init_weights()
def init_weights(self):
init_layer(self.conv1)
init_layer(self.conv2)
init_layer(self.conv3)
init_layer(self.conv4)
if self.condlayer==1:
init_layer(self.cond)
elif self.condlayer==2:
init_layer(self.cond2)
elif self.condlayer==3:
init_layer(self.cond3)
elif self.condlayer==4:
init_layer(self.cond4)
init_bn(self.bn1)
init_bn(self.bn2)
init_bn(self.bn3)
init_bn(self.bn4)
def forward(self, input, device, return_layers=False):
(_, seq_len, mel_bins) = input.shape
x = input.view(-1, 1, seq_len, mel_bins)
"""(samples_num, feature_maps, time_steps, freq_num)"""
device = torch.unsqueeze(torch.unsqueeze(device, 2), 3)
device = device.expand(-1, -1, seq_len, mel_bins)
if self.condlayer==1:
x = F.relu(self.bn1(torch.add(self.conv1(x), self.cond(device))))
x = F.relu(self.bn2(self.conv2(x)))
x = F.relu(self.bn3(self.conv3(x)))
emb = F.relu(self.bn4(self.conv4(x)))
elif self.condlayer==2:
x = F.relu(self.bn1(self.conv1(x)))
x = F.relu(self.bn2(torch.add(self.conv2(x), self.cond2(device))))
x = F.relu(self.bn3(self.conv3(x)))
emb = F.relu(self.bn4(self.conv4(x)))
elif self.condlayer==3:
x = F.relu(self.bn1(self.conv1(x)))
x = F.relu(self.bn2(self.conv2(x)))
x = F.relu(self.bn3(torch.add(self.conv3(x), self.cond3(device))))
emb = F.relu(self.bn4(self.conv4(x)))
elif self.condlayer==4:
x = F.relu(self.bn1(self.conv1(x)))
x = F.relu(self.bn2(self.conv2(x)))
x = F.relu(self.bn3(self.conv3(x)))
emb = F.relu(self.bn4(torch.add(self.conv4(x), self.cond4(device))))
if return_layers is False:
    return emb
else:
    # mirror EmbeddingLayers.forward, which returns a list of layer outputs
    return [emb, emb]
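The device conditioning above can be pictured with a toy, framework-free example (pure Python, hypothetical sizes): the 1x1 `cond` conv turns the device one-hot into one bias value per feature map, which is then broadcast over every time/frequency position and added to the conv activations.

```python
# Toy sketch of the conditioning add (hypothetical numbers):
seq_len, mel_bins = 2, 3
cond_bias = 0.5                                   # one channel of cond(device)
feature_map = [[1.0] * mel_bins for _ in range(seq_len)]
# Broadcast the per-map bias over every (time, frequency) position.
conditioned = [[v + cond_bias for v in row] for row in feature_map]
assert conditioned == [[1.5, 1.5, 1.5], [1.5, 1.5, 1.5]]
```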
class CnnNoPooling_Max(nn.Module):
def __init__(self, classes_num, cond_layer):
super(CnnNoPooling_Max, self).__init__()
self.emb = EmbeddingLayers_Nopooling(cond_layer)
self.fc_final = nn.Linear(512, classes_num)
self.init_weights()
def init_weights(self):
init_layer(self.fc_final)
def forward(self, input, device):
"""(samples_num, feature_maps, time_steps, freq_num)"""
x = self.emb(input, device)
x = F.max_pool2d(x, kernel_size=x.shape[2:])
x = x.view(x.shape[0:2])
output = F.log_softmax(self.fc_final(x), dim=-1)
return output
class CnnNoPooling_Avg(nn.Module):
def __init__(self, classes_num, cond_layer):
super(CnnNoPooling_Avg, self).__init__()
self.emb = EmbeddingLayers_Nopooling(cond_layer)
self.fc_final = nn.Linear(512, classes_num)
self.init_weights()
def init_weights(self):
init_layer(self.fc_final)
def forward(self, input, device):
"""(samples_num, feature_maps, time_steps, freq_num)"""
x = self.emb(input, device)
x = F.avg_pool2d(x, kernel_size=x.shape[2:])
x = x.view(x.shape[0:2])
output = F.log_softmax(self.fc_final(x), dim=-1)
return output
class CnnNoPooling_roi(nn.Module):
def __init__(self, classes_num, cond_layer):
super(CnnNoPooling_roi, self).__init__()
self.emb = EmbeddingLayers_Nopooling(cond_layer)
self.fc_final = nn.Linear(40960, classes_num)
self.init_weights()
def init_weights(self):
init_layer(self.fc_final)
def forward(self, input, device):
"""(samples_num, feature_maps, time_steps, freq_num)"""
x = self.emb(input, device)
x = F.max_pool2d(x, kernel_size= (16, 16), stride=(16, 16))
x = x.view(x.size(0), x.size(1) * x.size(2) * x.size(3))
output = F.log_softmax(self.fc_final(x), dim=-1)
return output
class CnnNoPooling_roi_attention(nn.Module):
def __init__(self, classes_num, cond_layer):
super(CnnNoPooling_roi_attention, self).__init__()
self.emb = EmbeddingLayers_Nopooling(cond_layer)
self.attention = Attention2d(
512,
classes_num,
att_activation='sigmoid',
cla_activation='log_softmax')
def init_weights(self):
pass
def forward(self, input, device):
"""(samples_num, feature_maps, time_steps, freq_num)"""
x = self.emb(input, device)
x = F.max_pool2d(x, kernel_size= (16, 16), stride=(16, 16))
output = self.attention(x)
return output
class CnnNoPooling_Attention(nn.Module):
def __init__(self, classes_num, cond_layer):
super(CnnNoPooling_Attention, self).__init__()
self.emb = EmbeddingLayers_Nopooling(cond_layer)
self.attention = Attention2d(
512,
classes_num,
att_activation='sigmoid',
cla_activation='log_softmax')
def init_weights(self):
pass
def forward(self, input, device):
"""(samples_num, feature_maps, time_steps, freq_num)"""
x = self.emb(input, device)
output = self.attention(x)
return output
#####################################################################################################
class EmbeddingLayers_atrous(nn.Module):
def __init__(self, cond_layer=4):
super(EmbeddingLayers_atrous, self).__init__()
self.conv1 = nn.Conv2d(in_channels=1, out_channels=64,
kernel_size=(5, 5), stride=(1, 1), dilation=1,
padding=(2, 2), bias=False)
self.conv2 = nn.Conv2d(in_channels=64, out_channels=128,
kernel_size=(5, 5), stride=(1, 1), dilation=2,
padding=(4, 4), bias=False)
self.conv3 = nn.Conv2d(in_channels=128, out_channels=256,
kernel_size=(5, 5), stride=(1, 1), dilation=4,
padding=(8, 8), bias=False)
self.conv4 = nn.Conv2d(in_channels=256, out_channels=512,
kernel_size=(5, 5), stride=(1, 1), dilation=8,
padding=(16, 16), bias=False)
self.condlayer = cond_layer
if cond_layer==1:
self.cond = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=(1, 1))
elif cond_layer==2:
self.cond2 = nn.Conv2d(in_channels=3, out_channels=128, kernel_size=(1, 1))
elif cond_layer==3:
self.cond3 = nn.Conv2d(in_channels=3, out_channels=256, kernel_size=(1, 1))
elif cond_layer==4:
self.cond4 = nn.Conv2d(in_channels=3, out_channels=512, kernel_size=(1, 1))
self.bn1 = nn.BatchNorm2d(64)
self.bn2 = nn.BatchNorm2d(128)
self.bn3 = nn.BatchNorm2d(256)
self.bn4 = nn.BatchNorm2d(512)
self.init_weights()
def init_weights(self):
init_layer(self.conv1)
init_layer(self.conv2)
init_layer(self.conv3)
init_layer(self.conv4)
if self.condlayer==1:
init_layer(self.cond)
elif self.condlayer==2:
init_layer(self.cond2)
elif self.condlayer==3:
init_layer(self.cond3)
elif self.condlayer==4:
init_layer(self.cond4)
init_bn(self.bn1)
init_bn(self.bn2)
init_bn(self.bn3)
init_bn(self.bn4)
def forward(self, input, device, return_layers=False):
(_, seq_len, mel_bins) = input.shape
x = input.view(-1, 1, seq_len, mel_bins)
"""(samples_num, feature_maps, time_steps, freq_num)"""
device = torch.unsqueeze(torch.unsqueeze(device, 2), 3)
device = device.expand(-1, -1, seq_len, mel_bins)
if self.condlayer==1:
x = F.relu(self.bn1(torch.add(self.conv1(x), self.cond(device))))
x = F.relu(self.bn2(self.conv2(x)))
x = F.relu(self.bn3(self.conv3(x)))
x = F.relu(self.bn4(self.conv4(x)))
elif self.condlayer==2:
x = F.relu(self.bn1(self.conv1(x)))
x = F.relu(self.bn2(torch.add(self.conv2(x), self.cond2(device))))
x = F.relu(self.bn3(self.conv3(x)))
x = F.relu(self.bn4(self.conv4(x)))
elif self.condlayer==3:
x = F.relu(self.bn1(self.conv1(x)))
x = F.relu(self.bn2(self.conv2(x)))
x = F.relu(self.bn3(torch.add(self.conv3(x), self.cond3(device))))
x = F.relu(self.bn4(self.conv4(x)))
elif self.condlayer==4:
x = F.relu(self.bn1(self.conv1(x)))
x = F.relu(self.bn2(self.conv2(x)))
x = F.relu(self.bn3(self.conv3(x)))
x = F.relu(self.bn4(torch.add(self.conv4(x), self.cond4(device))))
return x
class CnnAtrous_Max(nn.Module):
def __init__(self, classes_num, cond_layer):
super(CnnAtrous_Max, self).__init__()
self.emb = EmbeddingLayers_atrous(cond_layer)
self.fc_final = nn.Linear(512, classes_num)
self.init_weights()
def init_weights(self):
init_layer(self.fc_final)
def forward(self, input, device):
"""(samples_num, feature_maps, time_steps, freq_num)"""
x = self.emb(input, device)
x = F.max_pool2d(x, kernel_size=x.shape[2:])
x = x.view(x.shape[0:2])
x = F.log_softmax(self.fc_final(x), dim=-1)
return x
class CnnAtrous_Avg(nn.Module):
def __init__(self, classes_num, cond_layer):
super(CnnAtrous_Avg, self).__init__()
self.emb = EmbeddingLayers_atrous(cond_layer)
self.fc_final = nn.Linear(512, classes_num)
self.init_weights()
def init_weights(self):
init_layer(self.fc_final)
def forward(self, input, device):
"""(samples_num, feature_maps, time_steps, freq_num)"""
x = self.emb(input, device)
x = F.avg_pool2d(x, kernel_size=x.shape[2:])
x = x.view(x.shape[0:2])
output = F.log_softmax(self.fc_final(x), dim=-1)
return output
class CnnAtrous_roi(nn.Module):
def __init__(self, classes_num, cond_layer):
super(CnnAtrous_roi, self).__init__()
self.emb = EmbeddingLayers_atrous(cond_layer)
self.fc_final = nn.Linear(40960, classes_num)
self.init_weights()
def init_weights(self):
init_layer(self.fc_final)
def forward(self, input, device):
"""(samples_num, feature_maps, time_steps, freq_num)"""
x = self.emb(input, device)
x = F.max_pool2d(x, kernel_size= (16, 16), stride=(16, 16))
x = x.view(x.size(0), x.size(1) * x.size(2) * x.size(3))
output = F.log_softmax(self.fc_final(x), dim=-1)
return output
class CnnAtrous_roi_attention(nn.Module):
def __init__(self, classes_num, cond_layer):
super(CnnAtrous_roi_attention, self).__init__()
self.emb = EmbeddingLayers_atrous(cond_layer)
self.attention = Attention2d(
512,
classes_num,
att_activation='sigmoid',
cla_activation='log_softmax')
def init_weights(self):
pass
def forward(self, input, device):
"""(samples_num, feature_maps, time_steps, freq_num)"""
x = self.emb(input, device)
x = F.max_pool2d(x, kernel_size= (16, 16), stride=(16, 16))
output = self.attention(x)
return output
class CnnAtrous_Attention(nn.Module):
def __init__(self, classes_num, cond_layer):
super(CnnAtrous_Attention, self).__init__()
self.emb = EmbeddingLayers_atrous(cond_layer)
self.attention = Attention2d(
512,
classes_num,
att_activation='sigmoid',
cla_activation='log_softmax')
def init_weights(self):
pass
def forward(self, input, device):
"""(samples_num, feature_maps, time_steps, freq_num)"""
x = self.emb(input, device)
output = self.attention(x)
return output
#####################################################################################################
class EmbeddingLayers(nn.Module):
def __init__(self, cond_layer=3):
super(EmbeddingLayers, self).__init__()
self.conv1 = nn.Conv2d(in_channels=1, out_channels=64,
kernel_size=(5, 5), stride=(1, 1),
padding=(2, 2), bias=False)
self.conv2 = nn.Conv2d(in_channels=64, out_channels=128,
kernel_size=(5, 5), stride=(1, 1),
padding=(2, 2), bias=False)
self.conv3 = nn.Conv2d(in_channels=128, out_channels=256,
kernel_size=(5, 5), stride=(1, 1),
padding=(2, 2), bias=False)
self.conv4 = nn.Conv2d(in_channels=256, out_channels=512,
kernel_size=(5, 5), stride=(1, 1),
padding=(2, 2), bias=False)
self.condlayer = cond_layer
if cond_layer==1:
self.cond = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=(1, 1))
elif cond_layer==2:
self.cond2 = nn.Conv2d(in_channels=3, out_channels=128, kernel_size=(1, 1))
elif cond_layer==3:
self.cond3 = nn.Conv2d(in_channels=3, out_channels=256, kernel_size=(1, 1))
elif cond_layer==4:
self.cond4 = nn.Conv2d(in_channels=3, out_channels=512, kernel_size=(1, 1))
self.bn1 = nn.BatchNorm2d(64)
self.bn2 = nn.BatchNorm2d(128)
self.bn3 = nn.BatchNorm2d(256)
self.bn4 = nn.BatchNorm2d(512)
self.init_weights()
def init_weights(self):
init_layer(self.conv1)
init_layer(self.conv2)
init_layer(self.conv3)
init_layer(self.conv4)
if self.condlayer==1:
init_layer(self.cond)
elif self.condlayer==2:
init_layer(self.cond2)
elif self.condlayer==3:
init_layer(self.cond3)
elif self.condlayer==4:
init_layer(self.cond4)
init_bn(self.bn1)
init_bn(self.bn2)
init_bn(self.bn3)
init_bn(self.bn4)
def forward(self, input, device, return_layers=False):
(batch_size, seq_len, mel_bins) = input.shape
x = input.view(-1, 1, seq_len, mel_bins)
"""(samples_num, feature_maps, time_steps, freq_num)"""
if self.condlayer==1:
device1 = torch.unsqueeze(torch.unsqueeze(device, 2), 3)
device1 = device1.expand(-1, -1, seq_len, mel_bins)
x = F.relu(self.bn1(torch.add(self.conv1(x), self.cond(device1)))) # 1*1
x = F.max_pool2d(x, kernel_size=(2, 2))
x = F.relu(self.bn2(self.conv2(x)))
x = F.max_pool2d(x, kernel_size=(2, 2))
x = F.relu(self.bn3(self.conv3(x)))
x = F.max_pool2d(x, kernel_size=(2, 2))
x = F.relu(self.bn4(self.conv4(x)))
x = F.max_pool2d(x, kernel_size=(2, 2))
elif self.condlayer==2:
device2 = torch.unsqueeze(torch.unsqueeze(device, 2), 3)
device2 = device2.expand(-1, -1, seq_len // 2, mel_bins // 2)  # integer sizes after one 2x2 max-pool
x = F.relu(self.bn1(self.conv1(x)))
x = F.max_pool2d(x, kernel_size=(2, 2))
x = F.relu(self.bn2(torch.add(self.conv2(x), self.cond2(device2))))
x = F.max_pool2d(x, kernel_size=(2, 2))
x = F.relu(self.bn3(self.conv3(x)))
x = F.max_pool2d(x, kernel_size=(2, 2))
x = F.relu(self.bn4(self.conv4(x)))
x = F.max_pool2d(x, kernel_size=(2, 2))
elif self.condlayer==3:
device3 = torch.unsqueeze(torch.unsqueeze(device, 2), 3)
device3 = device3.expand(-1, -1, seq_len // 4, mel_bins // 4)  # integer sizes after two 2x2 max-pools
x = F.relu(self.bn1(self.conv1(x)))
x = F.max_pool2d(x, kernel_size=(2, 2))
x = F.relu(self.bn2(self.conv2(x)))
x = F.max_pool2d(x, kernel_size=(2, 2))
x = F.relu(self.bn3(torch.add(self.conv3(x), self.cond3(device3))))
x = F.max_pool2d(x, kernel_size=(2, 2))
x = F.relu(self.bn4(self.conv4(x)))
x = F.max_pool2d(x, kernel_size=(2, 2))
elif self.condlayer==4:
device4 = torch.unsqueeze(torch.unsqueeze(device, 2), 3)
device4 = device4.expand(-1, -1, seq_len // 8, mel_bins // 8)  # integer sizes after three 2x2 max-pools
x = F.relu(self.bn1(self.conv1(x)))
x = F.max_pool2d(x, kernel_size=(2, 2))
x = F.relu(self.bn2(self.conv2(x)))
x = F.max_pool2d(x, kernel_size=(2, 2))
x = F.relu(self.bn3(self.conv3(x)))
x = F.max_pool2d(x, kernel_size=(2, 2))
x = F.relu(self.bn4(torch.add(self.conv4(x), self.cond4(device4))))
x = F.max_pool2d(x, kernel_size=(2, 2))
if return_layers is False:
return x
else:
return [x, x]
class DecisionLevelMaxPooling(nn.Module):
def __init__(self, classes_num, cond_layer):
super(DecisionLevelMaxPooling, self).__init__()
self.emb = EmbeddingLayers(cond_layer)
self.fc_final = nn.Linear(512, classes_num)
self.init_weights()
def init_weights(self):
init_layer(self.fc_final)
def forward(self, input, device):
"""input: (samples_num, channel, time_steps, freq_bins)
"""
# (samples_num, channel, time_steps, freq_bins)
x = self.emb(input, device)
# (samples_num, 512, hidden_units)
output = F.max_pool2d(x, kernel_size=x.shape[2:])
output = output.view(output.shape[0:2])
output = F.log_softmax(self.fc_final(output), dim=-1)
return output
class DecisionLevelAvgPooling(nn.Module):
def __init__(self, classes_num, cond_layer):
super(DecisionLevelAvgPooling, self).__init__()
self.emb = EmbeddingLayers(cond_layer)
self.fc_final = nn.Linear(512, classes_num)
self.init_weights()
def init_weights(self):
init_layer(self.fc_final)
def forward(self, input, device):
"""input: (samples_num, channel, time_steps, freq_bins)
"""
# (samples_num, channel, time_steps, freq_bins)
x = self.emb(input, device)
# (samples_num, 512, hidden_units)
x = F.avg_pool2d(x, kernel_size=x.shape[2:])
x = x.view(x.shape[0:2])
output = F.log_softmax(self.fc_final(x), dim=-1)
return output
class DecisionLevelFlatten(nn.Module):
def __init__(self, classes_num, cond_layer):
super(DecisionLevelFlatten, self).__init__()
self.emb = EmbeddingLayers(cond_layer)
self.fc_final = nn.Linear(40960, classes_num)
self.init_weights()
def init_weights(self):
init_layer(self.fc_final)
def forward(self, input, device):
"""input: (samples_num, channel, time_steps, freq_bins)
"""
# (samples_num, channel, time_steps, freq_bins)
x = self.emb(input, device)
# (samples_num, 512, hidden_units)
x = x.view(x.size(0), x.size(1) * x.size(2) * x.size(3))
output = F.log_softmax(self.fc_final(x), dim=-1)
return output
class Attention2d(nn.Module):
def __init__(self, n_in, n_out, att_activation, cla_activation):
super(Attention2d, self).__init__()
self.att_activation = att_activation
self.cla_activation = cla_activation
self.att = nn.Conv2d(
in_channels=n_in, out_channels=n_out, kernel_size=(
1, 1), stride=(
1, 1), padding=(
0, 0), bias=True)
self.cla = nn.Conv2d(
in_channels=n_in, out_channels=n_out, kernel_size=(
1, 1), stride=(
1, 1), padding=(
0, 0), bias=True)
self.init_weights()
def init_weights(self):
init_layer(self.att)
init_layer(self.cla)
self.att.weight.data.fill_(0.)
def activate(self, x, activation):
if activation == 'linear':
return x
elif activation == 'relu':
return F.relu(x)
elif activation == 'sigmoid':
return torch.sigmoid(x) + 0.1  # F.sigmoid is deprecated; torch.sigmoid is the supported call
elif activation == 'log_softmax':
    return F.log_softmax(x, dim=1)
else:
    raise ValueError("Unknown activation: {}".format(activation))
def forward(self, x):
"""input: (samples_num, channel, time_steps, freq_bins)
"""
att = self.att(x)
att = self.activate(att, self.att_activation)
cla = self.cla(x)
cla = self.activate(cla, self.cla_activation)
# (samples_num, channel, time_steps * freq_bins)
att = att.view(att.size(0), att.size(1), att.size(2) * att.size(3))
cla = cla.view(cla.size(0), cla.size(1), cla.size(2) * cla.size(3))
epsilon = 0.1 # 1e-7
att = torch.clamp(att, epsilon, 1. - epsilon)
norm_att = att / torch.sum(att, dim=2)[:, :, None]
x = torch.sum(norm_att * cla, dim=2)
Return_heatmap = False
if Return_heatmap:
return x, norm_att
else:
return x
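Numerically, the pooling in `Attention2d.forward` reduces to a clamped, normalised attention map weighting the per-position class scores. A minimal pure-Python sketch with made-up numbers (one class, three positions):

```python
att = [0.05, 0.8, 0.4]      # raw attention over 3 positions (hypothetical)
cla = [0.2, 0.9, 0.5]       # per-position class scores (hypothetical)
eps = 0.1
att = [min(max(a, eps), 1. - eps) for a in att]     # torch.clamp equivalent
total = sum(att)
norm_att = [a / total for a in att]                 # normalise over positions
pooled = sum(a * c for a, c in zip(norm_att, cla))  # attention-weighted sum
assert abs(sum(norm_att) - 1.0) < 1e-12
```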
class DecisionLevelSingleAttention(nn.Module):
def __init__(self, classes_num, cond_layer):
super(DecisionLevelSingleAttention, self).__init__()
self.emb = EmbeddingLayers(cond_layer)
self.attention = Attention2d(
512,
classes_num,
att_activation='sigmoid',
cla_activation='log_softmax')
def init_weights(self):
pass
def forward(self, input, device):
"""input: (samples_num, freq_bins, time_steps, 1)
"""
# (samples_num, hidden_units, time_steps, 1)
b1 = self.emb(input, device)
# (samples_num, classes_num, time_steps, 1)
output = self.attention(b1)
return output
# ==== wode.py — huhu0923/hu (Apache-2.0) ====
print("123")
print("123")
# ==== Basic/Addition.py — JHP4911/Quantum-Computing-UK (CC0-1.0) ====
print('\n Quantum Full Adder')
print('---------------------')
from qiskit import QuantumRegister, ClassicalRegister
from qiskit import QuantumCircuit, execute, IBMQ
IBMQ.enable_account('INSERT API TOKEN HERE')
provider = IBMQ.get_provider(hub='ibm-q')
######## A ###########################
q = QuantumRegister(5,'q')
c = ClassicalRegister(2,'c')
circuit = QuantumCircuit(q,c)
circuit.x(q[0])
circuit.cx(q[0],q[3])
circuit.cx(q[1],q[3])
circuit.cx(q[2],q[3])
circuit.ccx(q[0],q[1],q[4])
circuit.ccx(q[0],q[2],q[4])
circuit.ccx(q[1],q[2],q[4])
circuit.measure(q[3],c[0])
circuit.measure(q[4],c[1])
########################################
backend = provider.get_backend('ibmq_qasm_simulator')
job = execute(circuit, backend, shots=1)
print('\nExecuting...\n')
print('\nA\n')
result = job.result()
counts = result.get_counts(circuit)
print('RESULT: ',counts,'\n')
######## B ###########################
q = QuantumRegister(5,'q')
c = ClassicalRegister(2,'c')
circuit = QuantumCircuit(q,c)
circuit.x(q[1])
circuit.cx(q[0],q[3])
circuit.cx(q[1],q[3])
circuit.cx(q[2],q[3])
circuit.ccx(q[0],q[1],q[4])
circuit.ccx(q[0],q[2],q[4])
circuit.ccx(q[1],q[2],q[4])
circuit.measure(q[3],c[0])
circuit.measure(q[4],c[1])
######################################
job = execute(circuit, backend, shots=1)
print('\nB\n')
result = job.result()
counts = result.get_counts(circuit)
print('RESULT: ',counts,'\n')
######## A + B ###########################
q = QuantumRegister(5,'q')
c = ClassicalRegister(2,'c')
circuit = QuantumCircuit(q,c)
circuit.x(q[0])
circuit.x(q[1])
circuit.cx(q[0],q[3])
circuit.cx(q[1],q[3])
circuit.cx(q[2],q[3])
circuit.ccx(q[0],q[1],q[4])
circuit.ccx(q[0],q[2],q[4])
circuit.ccx(q[1],q[2],q[4])
circuit.measure(q[3],c[0])
circuit.measure(q[4],c[1])
######################################
job = execute(circuit, backend, shots=1)
print('\nA + B\n')
result = job.result()
counts = result.get_counts(circuit)
print('RESULT: ',counts,'\n')
######## Cin ###########################
q = QuantumRegister(5,'q')
c = ClassicalRegister(2,'c')
circuit = QuantumCircuit(q,c)
circuit.x(q[2])
circuit.cx(q[0],q[3])
circuit.cx(q[1],q[3])
circuit.cx(q[2],q[3])
circuit.ccx(q[0],q[1],q[4])
circuit.ccx(q[0],q[2],q[4])
circuit.ccx(q[1],q[2],q[4])
circuit.measure(q[3],c[0])
circuit.measure(q[4],c[1])
######################################
job = execute(circuit, backend, shots=1)
print('\nCin\n')
result = job.result()
counts = result.get_counts(circuit)
print('RESULT: ',counts,'\n')
######## Cin + A ###########################
q = QuantumRegister(5,'q')
c = ClassicalRegister(2,'c')
circuit = QuantumCircuit(q,c)
circuit.x(q[2])
circuit.x(q[0])
circuit.cx(q[0],q[3])
circuit.cx(q[1],q[3])
circuit.cx(q[2],q[3])
circuit.ccx(q[0],q[1],q[4])
circuit.ccx(q[0],q[2],q[4])
circuit.ccx(q[1],q[2],q[4])
circuit.measure(q[3],c[0])
circuit.measure(q[4],c[1])
######################################
job = execute(circuit, backend, shots=1)
print('\nCin + A\n')
result = job.result()
counts = result.get_counts(circuit)
print('RESULT: ',counts,'\n')
######## Cin + B ###########################
q = QuantumRegister(5,'q')
c = ClassicalRegister(2,'c')
circuit = QuantumCircuit(q,c)
circuit.x(q[2])
circuit.x(q[1])
circuit.cx(q[0],q[3])
circuit.cx(q[1],q[3])
circuit.cx(q[2],q[3])
circuit.ccx(q[0],q[1],q[4])
circuit.ccx(q[0],q[2],q[4])
circuit.ccx(q[1],q[2],q[4])
circuit.measure(q[3],c[0])
circuit.measure(q[4],c[1])
######################################
job = execute(circuit, backend, shots=1)
print('\nCin + B\n')
result = job.result()
counts = result.get_counts(circuit)
print('RESULT: ',counts,'\n')
######## Cin + A + B ###########################
q = QuantumRegister(5,'q')
c = ClassicalRegister(2,'c')
circuit = QuantumCircuit(q,c)
circuit.x(q[2])
circuit.x(q[1])
circuit.x(q[0])
circuit.cx(q[0],q[3])
circuit.cx(q[1],q[3])
circuit.cx(q[2],q[3])
circuit.ccx(q[0],q[1],q[4])
circuit.ccx(q[0],q[2],q[4])
circuit.ccx(q[1],q[2],q[4])
circuit.measure(q[3],c[0])
circuit.measure(q[4],c[1])
######################################
job = execute(circuit, backend, shots=1)
print('\nCin + A + B\n')
result = job.result()
counts = result.get_counts(circuit)
print('RESULT: ',counts,'\n')
print('Press any key to close')
input() | 28.449664 | 53 | 0.576551 | 726 | 4,239 | 3.349862 | 0.077135 | 0.023026 | 0.086349 | 0.063322 | 0.875 | 0.875 | 0.875 | 0.860609 | 0.860609 | 0.860609 | 0 | 0.042271 | 0.073602 | 4,239 | 149 | 54 | 28.449664 | 0.577031 | 0.011088 | 0 | 0.866667 | 0 | 0 | 0.073953 | 0.00571 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.022222 | 0 | 0.022222 | 0.133333 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1595bb4850dff323122b0e88bc867bf50bf0ef17 | 2,552 | py | Python | grafics/kamisado/colorchanger_script.py | rosaUNTIER/webGames | a1c3c05b67557a56a855f8465884977ddaccf90e | [
"MIT"
] | 2 | 2018-03-09T14:18:27.000Z | 2019-02-04T23:49:58.000Z | grafics/kamisado/colorchanger_script.py | rosaUNTIER/webKamisado | a1c3c05b67557a56a855f8465884977ddaccf90e | [
"MIT"
] | 1 | 2018-03-08T18:19:19.000Z | 2018-03-09T10:56:14.000Z | grafics/kamisado/colorchanger_script.py | rosaUNTIER/webKamisado | a1c3c05b67557a56a855f8465884977ddaccf90e | [
"MIT"
] | null | null | null | import bpy

ob = bpy.data.objects['farbe']

# Each repeated block below was identical except for the material name,
# so the get/assign/render sequence is collapsed into one loop.
for name in ['yellow', 'orange', 'blue', 'violett', 'rosa', 'red', 'green', 'brown']:
    # Get material
    mat = bpy.data.materials.get(name)
    # Assign it to the object's 1st material slot
    if ob.data.materials:
        ob.data.materials[0] = mat
    # Render results
    bpy.context.scene.render.filepath = '//' + name
    bpy.ops.render.render(animation=True)
| 23.2 | 47 | 0.601097 | 323 | 2,552 | 4.749226 | 0.105263 | 0.20339 | 0.156454 | 0.099087 | 0.928292 | 0.895046 | 0.854628 | 0.833116 | 0.833116 | 0.833116 | 0 | 0.007343 | 0.14616 | 2,552 | 109 | 48 | 23.412844 | 0.69665 | 0.362069 | 0 | 0.571429 | 0 | 0 | 0.063562 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.02381 | 0 | 0.02381 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
15a870fa85945edd7f40afad6fa762b96ccc165e | 40 | py | Python | squares/__init__.py | Vivokas20/SKEL | d8766ceaa8aa766ea3580bbb61b747572ebfe77c | [
"Apache-2.0"
] | 1 | 2022-01-20T14:57:30.000Z | 2022-01-20T14:57:30.000Z | squares/__init__.py | Vivokas20/SKEL | d8766ceaa8aa766ea3580bbb61b747572ebfe77c | [
"Apache-2.0"
] | null | null | null | squares/__init__.py | Vivokas20/SKEL | d8766ceaa8aa766ea3580bbb61b747572ebfe77c | [
"Apache-2.0"
] | null | null | null | from . import config
from . import util
| 13.333333 | 20 | 0.75 | 6 | 40 | 5 | 0.666667 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.2 | 40 | 2 | 21 | 20 | 0.9375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
ec630a4b9dc121a97b5b73fd5bb72f9e8f3bc542 | 38,569 | py | Python | experiment/data_collection_probing.py | DeerKK/Deformable-Modeling | 97b14be152e78f44dd6e783059bc5380a3a74a68 | [
"MIT"
] | 4 | 2020-11-16T16:06:08.000Z | 2022-03-30T03:53:54.000Z | experiment/data_collection_probing.py | DeerKK/Deformable-Modeling | 97b14be152e78f44dd6e783059bc5380a3a74a68 | [
"MIT"
] | null | null | null | experiment/data_collection_probing.py | DeerKK/Deformable-Modeling | 97b14be152e78f44dd6e783059bc5380a3a74a68 | [
"MIT"
] | null | null | null | import time
import numpy as np
import cv2
from copy import deepcopy
from klampt import *
from klampt.math import vectorops,so3,se3
from klampt.io import loader
from klampt.model import ik
from klampt import vis
from klampt.model import collide
import math
import random
from robot_api.RobotController import UR5WithGripperController
import matplotlib.pyplot as plt
from scipy import signal
from utils.collision_detecting import check_collision_single,check_collision_linear
import os
###
def run_poking(config):
"""
This is the entry point of the poking API.
"""
# init params
tableHeight = config.tableHeight
probeLength = config.probeLength
forceLimit = config.forceLimit
dt=config.dt #250Hz
moveStep=0.002*dt #2mm /s
shortServoTime=config.shortServoTime
longServoTime=config.longServoTime
IKErrorTolerence=config.IKErrorTolerence
maxDev=config.maxDev
EEZLimit=config.EEZLimit
intermediateConfig = config.intermediateConfig
probe_transform = config.probe_transform
point_probe = np.array([[0,0,0,1],
[1-probeLength,0,0,1],
[0,-1,0,1]]) # the points expressed in the probe coordinate frame.
point_probe_to_local = np.dot(probe_transform, point_probe.T)
point_probe_to_local = point_probe_to_local[0:3,:].T
point_probe_to_local = point_probe_to_local.tolist()
print("[*]Debug: probe coodinate transform to EE:")
print(point_probe_to_local)
# init robot
world = WorldModel()
res = world.readFile(config.robot_model_path)
robot = world.robot(0)
ee_link=config.ee_link_number #UR5 model is 7.
link=robot.link(ee_link)
CONTROLLER = config.mode
collider = collide.WorldCollider(world)
print '---------------------model loaded -----------------------------'
# visualization
vis.add("world",world)
# create folder
data_folder = config.exp_path+'exp_'+str(config.exp_number)+'/'+config.probe_type
if not os.path.exists(data_folder):
os.mkdir(data_folder)
# begin loop
if config.probe_type == 'point':
run_poking_point_probe(config,tableHeight,probeLength,forceLimit,dt,moveStep,shortServoTime,longServoTime,
IKErrorTolerence,maxDev,EEZLimit,probe_transform,point_probe_to_local,world,res,robot,link,CONTROLLER,collider,intermediateConfig)
elif config.probe_type == 'line':
run_poking_line_probe(config,tableHeight,probeLength,forceLimit,dt,moveStep,shortServoTime,longServoTime,
IKErrorTolerence,maxDev,EEZLimit,probe_transform,point_probe_to_local,world,res,robot,link,CONTROLLER,collider,intermediateConfig)
elif config.probe_type == 'ellipse':
run_poking_ellipse_probe(config,tableHeight,probeLength,forceLimit,dt,moveStep,shortServoTime,longServoTime,
IKErrorTolerence,maxDev,EEZLimit,probe_transform,point_probe_to_local,world,res,robot,link,CONTROLLER,collider,intermediateConfig)
else:
print('[!]Probe type does not exist')
def run_poking_point_probe(config,tableHeight,probeLength,forceLimit,dt,moveStep,shortServoTime,longServoTime,
IKErrorTolerence,maxDev,EEZLimit,probe_transform,point_probe_to_local,world,res,
robot,link,CONTROLLER,collider,intermediateConfig):
"""
This is the main function for poking the object with the point probe.
"""
# Read In the pcd
points, normals = load_pcd(config.exp_path+'exp_'+str(config.exp_number)+'/probePcd.txt')
# control interface
if CONTROLLER == 'physical':
robotControlApi = UR5WithGripperController(host=config.robot_host,gripper=False)
robotControlApi.start()
time.sleep(2)
print '---------------------robot started -----------------------------'
constantVServo(robotControlApi,4,intermediateConfig,dt)#controller format
# in simulation, set
robot.setConfig(controller_2_klampt(robot,intermediateConfig))
print '---------------------at home configuration -----------------------------'
if CONTROLLER == 'debugging':
differences=[]
print('[*]Debug: Poking process start!')
for i in range(len(points)):
print('point %d, pos: %s, normals: %s'%(i,points[i],normals[i]))
goalPosition=deepcopy(points[i])
approachVector=vectorops.unit(vectorops.mul(normals[i],-1.0)) #get unit vector in the direction '- normals'
## perform IK
local_NY_UnitV=vectorops.unit(vectorops.cross([0,1,0],approachVector))
pt1=goalPosition
pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength)) # use 1m in normals direction.
pt3=vectorops.add(pt1,local_NY_UnitV)
[robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],maxDev,
IKErrorTolerence,EEZLimit,collider,use_collision_detect=True)
differences.append(difference)
print('difference: %f'%difference)
### now start collecting data..
travel = 0.0
stepVector = vectorops.mul(approachVector,moveStep)
while travel<0.0001: #just try 0.1mm?
pt1=vectorops.add(pt1,stepVector)
pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength))
pt3=vectorops.add(pt1,local_NY_UnitV)
[robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],maxDev,
IKErrorTolerence,EEZLimit,collider,use_const=False)
travel = travel + moveStep
### move the probe away, note: a bit different to physical mode
pt1=vectorops.add(points[i],vectorops.mul(approachVector,-0.05)) ## move the probe 5 cm from the object surface
pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength))
pt3=vectorops.add(pt1,local_NY_UnitV)
[robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],maxDev,
IKErrorTolerence,EEZLimit,collider)
### move back to intermediate config
robot.setConfig(controller_2_klampt(robot,intermediateConfig))
print('[*]Debug: Poking process done, with max difference:%f'%max(differences))
vis.show()
while vis.shown():
time.sleep(1.0)
elif CONTROLLER == 'physical':
input('There are %d poking points, go?'%len(points))
point_list = range(112,116) # !delete 85, 90
#point_list = random.sample(range(97),97)
#point_list = [40,37,67,68]
for i in point_list:
print('point %d, pos: %s, normals: %s'%(i,points[i],normals[i]))
travel = -0.01
#init record file
forceData=open(config.exp_path+'exp_'+str(config.exp_number)+'/point/force_'+str(i)+'.txt','w')
robotCurrentConfig=robotControlApi.getConfig()
robot.setConfig(controller_2_klampt(robot,robotCurrentConfig))
#calculate start position
goalPosition=deepcopy(points[i])
approachVector=vectorops.unit(vectorops.mul(normals[i],-1.0))
#### Make sure no contact, backup 0.01m
local_NY_UnitV=vectorops.unit(vectorops.cross([0,1,0],approachVector))
pt1=vectorops.add(goalPosition,vectorops.mul(approachVector,travel))
pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength))
pt3=vectorops.add(pt1,local_NY_UnitV)
[robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],
maxDev,IKErrorTolerence,EEZLimit,collider,robotControlApi,longServoTime-1,dt) #TODO:
time.sleep(0.2)
# Zero the sensor before the straight-line push; note that the force is recorded in the global frame.
counter = 0.0
totalF = [0,0,0]
startTime=time.time()
while (time.time()-startTime) < 1: # use 1s to cal the Force
totalF = vectorops.add(totalF,robotControlApi.getWrench()[0:3])
counter = counter + 1.0
time.sleep(dt)
forceBias = vectorops.mul(totalF,1.0/float(counter)) # while the probe is not touching the object, F_avg = sum(F)/n
### now start collecting data..
wrench = robotControlApi.getWrench()
Force = vectorops.sub(wrench[0:3],forceBias)
Force_normal = math.fabs(vectorops.dot(Force,approachVector)) #|F||n|cos(theta) = F dot n, set it >= 0
forceHistory = [Force]
force_normalHistory = [Force_normal]
displacementHistory = [travel]
stepVector = vectorops.mul(approachVector,moveStep)
while Force_normal < forceLimit:
robotCurrentConfig=robotControlApi.getConfig()
robot.setConfig(controller_2_klampt(robot,robotCurrentConfig))
pt1=vectorops.add(pt1,stepVector)
pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength))
pt3=vectorops.add(pt1,local_NY_UnitV)
[robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],
maxDev,IKErrorTolerence,EEZLimit,collider,robotControlApi,longServoTime,dt,use_const=False)
time.sleep(dt)
Force = vectorops.sub(robotControlApi.getWrench()[0:3],forceBias)
Force_normal = math.fabs(vectorops.dot(Force,approachVector))
travel = travel + moveStep
forceHistory.append([Force[0],Force[1],Force[2]])
force_normalHistory.append(Force_normal)
displacementHistory.append(travel)
#record all the data in 2 files: one (N*2) contains all the force data collected at various locations,
#the other specifies the number of datapoints at each probed point
for (f,fn,d) in zip(forceHistory,force_normalHistory,displacementHistory):
forceData.write(str(f[0])+' '+str(f[1])+' '+str(f[2])+' '+str(fn)+' '+str(d)+'\n')
forceData.close()
### move the probe away
robotCurrentConfig=robotControlApi.getConfig()
robot.setConfig(controller_2_klampt(robot,robotCurrentConfig))
pt1=vectorops.add(points[i],vectorops.mul(approachVector,-0.10)) ## move the probe 10 cm from the object surface
pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength))
pt3=vectorops.add(pt1,local_NY_UnitV)
[robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],
maxDev,IKErrorTolerence,EEZLimit,collider,robotControlApi,shortServoTime,dt)
#constantVServo(robotControlApi,longServoTime,intermediateConfig,dt)
print'----------------------- pt '+str(i)+' completed -------------------------------'
#### move back to intermediate config
constantVServo(robotControlApi,shortServoTime,intermediateConfig,dt)
robotControlApi.stop()
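The stopping condition in the probing loop above is the magnitude of the force component along the approach vector, i.e. `Force_normal = |F . n|` computed via `vectorops.dot` and `math.fabs`. A minimal sketch of that projection, without the Klampt dependency (the sample readings are made up, not from the experiment):

```python
import math

def normal_force(force, approach_vector):
    # |F . n|: magnitude of the force component along the (unit) approach vector
    return math.fabs(sum(f * n for f, n in zip(force, approach_vector)))

# hypothetical wrench reading: 2 N straight down against an upward approach vector
print(normal_force([0.0, 0.0, -2.0], [0.0, 0.0, 1.0]))  # 2.0
```

A purely tangential force projects to zero, which is why the loop keeps advancing along the normal until this value reaches `forceLimit`.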
def run_poking_line_probe(config,tableHeight,probeLength,forceLimit,dt,moveStep,shortServoTime,longServoTime,
IKErrorTolerence,maxDev,EEZLimit,probe_transform,point_probe_to_local,world,res,
robot,link,CONTROLLER,collider,intermediateConfig):
"""
This is the main function for poking the object with the line probe.
"""
# reconstruct probepcd.txt
if input('[*]Reconstruct probe pcd?') == 1:
theta_list_num = input('---need theta list number: ')
reconstruct_pcd(config.exp_path+'exp_'+str(config.exp_number)+'/probePcd.txt',
config.exp_path+'exp_'+str(config.exp_number)+'/probePcd_theta.txt',
theta_list_num)
print('---New probe list done')
# Read In the pcd
points, normals, theta_list, theta, pti = load_pcd(config.exp_path+'exp_'+str(config.exp_number)+'/probePcd_theta.txt', pcdtype='xyzrgbntheta')
# control interface
if CONTROLLER == 'physical':
robotControlApi = UR5WithGripperController(host=config.robot_host,gripper=False)
robotControlApi.start()
time.sleep(2)
print '---------------------robot started -----------------------------'
constantVServo(robotControlApi,4,intermediateConfig,dt)#controller format
# set in simul model
robot.setConfig(controller_2_klampt(robot,intermediateConfig))
print '---------------------at home configuration -----------------------------'
if CONTROLLER == 'debugging':
differences=[]
print('[*]Debug: Poking process start')
i = 0 # use this to catch points
pti_ = pti[i]
while(i < len(points)):
robotCurrentConfig=intermediateConfig
goalPosition=deepcopy(points[i])
approachVector=vectorops.unit(vectorops.mul(normals[i],-1.0)) #get unit vector in the direction '- normals'
_pti = pti_
if pti[i] == _pti:
print('point %d, pos: %s, normals: %s, theta: %s, -> %f'%(i,points[i],normals[i],theta_list[i],theta[i]))
## perform IK
local_NY_UnitV = vectorops.unit(back_2_line(approachVector,theta_list[i])) # the probe's line direction
pt1=goalPosition
pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength)) # use 1m in normals direction.
pt3=vectorops.add(pt1,local_NY_UnitV)
[robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],
maxDev,IKErrorTolerence,EEZLimit,collider,use_collision_detect=False,use_ik_detect=True)
differences.append(difference)
print('difference: %f'%difference)
### now start collecting data..
travel = 0.0
stepVector = vectorops.mul(approachVector,moveStep)
while travel<0.0001:
robotCurrentConfig=klampt_2_controller(robot.getConfig())
pt1=vectorops.add(pt1,stepVector)
pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength))
pt3=vectorops.add(pt1,local_NY_UnitV)
[robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],
maxDev,IKErrorTolerence,EEZLimit,collider,use_collision_detect=False)
travel = travel + moveStep
### move the probe away, note: a bit different to physical mode
robotCurrentConfig=klampt_2_controller(robot.getConfig())
pt1=vectorops.add(points[i],vectorops.mul(approachVector,-0.05)) ## move the probe 5 cm from the object surface
pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength))
pt3=vectorops.add(pt1,local_NY_UnitV)
[robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],
maxDev,IKErrorTolerence,EEZLimit,collider,use_collision_detect=False)
### move back to intermediate config
robot.setConfig(controller_2_klampt(robot,intermediateConfig))
i = i + 1 # important
else:
pti_ = pti[i]
print('[*]Debug: Poking process done, with max difference:%f'%max(differences))
vis.show()
while vis.shown():
time.sleep(1.0)
elif CONTROLLER == 'physical':
exe_number = input('There are %d poking points, go?'%len(points))
start_i = 0 #72,93,94,97,99,100,101,108,116,125,128,147,148,150,151,152,189,194~197,207~210 !40,37,67,68 -> 111 112 113 120 121 122 201 206
end_i = 1 #len(points)
i = start_i # use this to catch points, set manully! # TODO:
pti_ = pti[i]
probe_list = random.sample(range(282),282) #18,15
finish_list= range(16)+range(17,21)+range(25,44)+range(45,51)+range(52,57)+[58,59,60,63,67,68,71,73,75,76,78]+range(81,96)\
+[97,99,101,103,104,108,110,112,114,115,117,118,120,124,125,128,129,130]+range(132,138)+[141,142,144,147,149,150,151]+range(153,156)\
+[159,160,161,167,168,170,172,175,176,177,178]+range(180,186)\
+[189,192,195,196,199,200,201,203]+range(204,210)+range(211,217)+[219]+range(221,229)+range(230,236)+range(237,241)\
+range(244,250)+[251]+range(254,261)+[262]+range(264,276)+range(277,282)
probe_list = [x for x in probe_list if x not in finish_list] +[227]
probe_list = [95]
for i in probe_list:
print('point %d, pos: %s, normals: %s, theta: %s, -> %f'%(i,points[i],normals[i],theta_list[i],theta[i]))
robotCurrentConfig=robotControlApi.getConfig()
robot.setConfig(controller_2_klampt(robot,robotCurrentConfig))
# calculate start position
goalPosition=deepcopy(points[i])
approachVector=vectorops.unit(vectorops.mul(normals[i],-1.0))
# init record file
forceData=open(config.exp_path+'exp_'+str(config.exp_number)+'/line/force_'+str(i)+'.txt','w')
torqueData=open(config.exp_path+'exp_'+str(config.exp_number)+'/line/torque_'+str(i)+'.txt','w')
travel = -0.025
## perform IK
local_NY_UnitV = vectorops.unit(back_2_line(approachVector,theta_list[i])) # the probe's line direction
#### Make sure no contact, backup 0.01m
pt1=vectorops.add(goalPosition,vectorops.mul(approachVector,travel))
pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength)) # use 1m in normals direction.
pt3=vectorops.add(pt1,local_NY_UnitV)
[robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],
maxDev,IKErrorTolerence,EEZLimit,collider,robotControlApi,shortServoTime,dt) #TODO:
time.sleep(0.2)
# Zero the sensor before the straight-line push; note that the force is recorded in the global frame.
counter = 0.0
totalF = [0,0,0]
totalTorque = [0,0,0]
startTime=time.time()
while (time.time()-startTime) < 1: # use 1s to cal the Force
totalF = vectorops.add(totalF,robotControlApi.getWrench()[0:3])
totalTorque = vectorops.add(totalTorque,robotControlApi.getWrench()[3:6])
counter = counter + 1.0
time.sleep(dt)
forceBias = vectorops.mul(totalF,1.0/float(counter)) # while the probe is not touching the object, F_avg = sum(F)/n
torqueBias = vectorops.mul(totalTorque,1.0/float(counter))
### now start collecting data..
wrench = robotControlApi.getWrench()
Force = vectorops.sub(wrench[0:3],forceBias)
Torque = vectorops.sub(wrench[3:6],torqueBias)
Force_normal = math.fabs(vectorops.dot(Force,approachVector)) #|F||n|cos(theta) = F dot n, set it >= 0
local_Z_UnitV = vectorops.cross(normals[i],local_NY_UnitV)
Torque_normal = vectorops.dot(Torque,local_Z_UnitV) #TODO:
forceHistory = [Force]
force_normalHistory = [Force_normal]
torqueHistory = [Torque]
torque_normalHistory = [Torque_normal]
displacementHistory = [travel]
stepVector = vectorops.mul(approachVector,moveStep)
while Force_normal < forceLimit:
robotCurrentConfig=robotControlApi.getConfig()
robot.setConfig(controller_2_klampt(robot,robotCurrentConfig))
pt1=vectorops.add(pt1,stepVector)
pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength))
pt3=vectorops.add(pt1,local_NY_UnitV)
[robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],
maxDev,IKErrorTolerence,EEZLimit,collider,robotControlApi,longServoTime,dt,use_const=False)
time.sleep(dt)
Force = vectorops.sub(robotControlApi.getWrench()[0:3],forceBias)
Torque = vectorops.sub(robotControlApi.getWrench()[3:6],torqueBias)
Force_normal = math.fabs(vectorops.dot(Force,approachVector))
local_Z_UnitV = vectorops.cross(normals[i],local_NY_UnitV)
Torque_normal = vectorops.dot(Torque,local_Z_UnitV)
travel = travel + moveStep
forceHistory.append([Force[0],Force[1],Force[2]])
force_normalHistory.append(Force_normal)
torqueHistory.append([Torque[0],Torque[1],Torque[2]])
torque_normalHistory.append(Torque_normal)
displacementHistory.append(travel)
#record all the data in 2 files: one (N*2) contains all the force data collected at various locations,
#the other specifies the number of datapoints at each probed point
for (f,fn,d) in zip(forceHistory,force_normalHistory,displacementHistory):
forceData.write(str(f[0])+' '+str(f[1])+' '+str(f[2])+' '+str(fn)+' '+str(d)+'\n')
for (t,tn,d) in zip(torqueHistory,torque_normalHistory,displacementHistory):
torqueData.write(str(t[0])+' '+str(t[1])+' '+str(t[2])+' '+str(tn)+' '+str(d)+'\n')
### move the probe away, sometimes z up 5cm is better than normal direction up 5cm...
pt1=vectorops.add(pt1,[0,0,0.05]) ## move the probe 5 cm up the z-axis, then find another point
pt2=vectorops.add(pt2,[0,0,0.05])
pt3=vectorops.add(pt3,[0,0,0.05])
[robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],
maxDev,IKErrorTolerence,EEZLimit,collider,robotControlApi,shortServoTime-1,dt)
constantVServo(robotControlApi,longServoTime,intermediateConfig,dt)#TODO:
# close record file for point i
forceData.close()
torqueData.close()
print'----------------------- pt '+str(i)+' completed -------------------------------'
'''
while(i < end_i):
robotCurrentConfig=robotControlApi.getConfig()
robot.setConfig(controller_2_klampt(robot,robotCurrentConfig))
# calculate start position
goalPosition=deepcopy(points[i])
approachVector=vectorops.unit(vectorops.mul(normals[i],-1.0))
# init record file
forceData=open(config.exp_path+'exp_'+str(config.exp_number)+'/line/force_'+str(i)+'.txt','w')
torqueData=open(config.exp_path+'exp_'+str(config.exp_number)+'/line/torque_'+str(i)+'.txt','w')
_pti = pti_
if pti[i] == _pti:
print('point %d, pos: %s, normals: %s, theta: %s, -> %f'%(i,points[i],normals[i],theta_list[i],theta[i]))
travel = -0.015
## perform IK
local_NY_UnitV = vectorops.unit(back_2_line(approachVector,theta_list[i])) # the probe's line direction
#### Make sure no contact, backup 0.01m
pt1=vectorops.add(goalPosition,vectorops.mul(approachVector,travel))
pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength)) # use 1m in normals direction.
pt3=vectorops.add(pt1,local_NY_UnitV)
[robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],
maxDev,IKErrorTolerence,EEZLimit,collider,robotControlApi,longServoTime-1,dt) #TODO:
time.sleep(0.2)
# Zero the sensor before straight line push, Note that the force is recorded in the global frame..
counter = 0.0
totalF = [0,0,0]
totalTorque = [0,0,0]
startTime=time.time()
while (time.time()-startTime) < 1: # use 1s to cal the Force
totalF = vectorops.add(totalF,robotControlApi.getWrench()[0:3])
totalTorque = vectorops.add(totalTorque,robotControlApi.getWrench()[3:6])
counter = counter + 1.0
time.sleep(dt)
forceBias = vectorops.mul(totalF,1.0/float(counter)) # when probe no touch the obj, F_avr = sum(F)/n
torqueBias = vectorops.mul(totalTorque,1.0/float(counter))
### now start collecting data..
wrench = robotControlApi.getWrench()
Force = vectorops.sub(wrench[0:3],forceBias)
Torque = vectorops.sub(wrench[3:6],torqueBias)
Force_normal = math.fabs(vectorops.dot(Force,approachVector)) #|F||n|cos(theta) = F dot n, set it >= 0
local_Z_UnitV = vectorops.cross(normals[i],local_NY_UnitV)
Torque_normal = vectorops.dot(Torque,local_Z_UnitV) #TODO:
forceHistory = [Force]
force_normalHistory = [Force_normal]
torqueHistory = [Torque]
torque_normalHistory = [Torque_normal]
displacementHistory = [travel]
stepVector = vectorops.mul(approachVector,moveStep)
while Force_normal < forceLimit:
robotCurrentConfig=robotControlApi.getConfig()
robot.setConfig(controller_2_klampt(robot,robotCurrentConfig))
pt1=vectorops.add(pt1,stepVector)
pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength))
pt3=vectorops.add(pt1,local_NY_UnitV)
[robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],
maxDev,IKErrorTolerence,EEZLimit,collider,robotControlApi,longServoTime,dt,use_const=False)
time.sleep(dt)
Force = vectorops.sub(robotControlApi.getWrench()[0:3],forceBias)
Torque = vectorops.sub(robotControlApi.getWrench()[3:6],torqueBias)
Force_normal = math.fabs(vectorops.dot(Force,approachVector))
local_Z_UnitV = vectorops.cross(normals[i],local_NY_UnitV)
Torque_normal = vectorops.dot(Torque,local_Z_UnitV)
travel = travel + moveStep
forceHistory.append([Force[0],Force[1],Force[2]])
force_normalHistory.append(Force_normal)
torqueHistory.append([Torque[0],Torque[1],Torque[2]])
torque_normalHistory.append(Torque_normal)
displacementHistory.append(travel)
#record all the data in 2 files: one (N*2) contains all the force data collected at various locations,
#the other specifies the number of datapoints at each probed point
for (f,fn,d) in zip(forceHistory,force_normalHistory,displacementHistory):
forceData.write(str(f[0])+' '+str(f[1])+' '+str(f[2])+' '+str(fn)+' '+str(d)+'\n')
for (t,tn,d) in zip(torqueHistory,torque_normalHistory,displacementHistory):
torqueData.write(str(t[0])+' '+str(t[1])+' '+str(t[2])+' '+str(tn)+' '+str(d)+'\n')
### move the probe away, sometimes z up 5cm is better than normal direction up 5cm...
pt1=vectorops.add(pt1,[0,0,0.05]) ## move the probe 5 cm up the z-axis, then find another point
pt2=vectorops.add(pt2,[0,0,0.05])
pt3=vectorops.add(pt3,[0,0,0.05])
[robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],
maxDev,IKErrorTolerence,EEZLimit,collider,robotControlApi,shortServoTime-0.5,dt)
#constantVServo(robotControlApi,longServoTime,intermediateConfig,dt)#TODO:
i = i + 1
# close record file for point i
forceData.close()
torqueData.close()
print'----------------------- pt '+str(i)+' completed -------------------------------'
else:
# up 10cm is faster but not good.
# since the points are close, no need to go back home
constantVServo(robotControlApi,longServoTime,intermediateConfig,dt)
pti_ = pti[i]
'''
#### move back to intermediate config
constantVServo(robotControlApi,shortServoTime,intermediateConfig,dt)
# finish all points
robotControlApi.stop()
def run_poking_ellipse_probe(config,tableHeight,probeLength,forceLimit,dt,moveStep,shortServoTime,longServoTime,
                             IKErrorTolerence,maxDev,EEZLimit,probe_transform,point_probe_to_local,world,res,
                             robot,link,CONTROLLER,collider,intermediateConfig):
    """
    this is the main function of poking object. - ellipse probe
    """
    ########################## Read In the pcd ######################################
    points, normals = load_pcd(config.exp_path+'exp_'+str(config.exp_number)+'/probePcd.txt')

    # control interface
    if CONTROLLER == 'physical':
        robotControlApi = UR5WithGripperController(host=config.robot_host,gripper=False)
        robotControlApi.start()
        time.sleep(2)
        print '---------------------robot started -----------------------------'

    ## Record some home configuration
    intermediateConfig = config.intermediateConfig
    if CONTROLLER == "physical":
        constantVServo(robotControlApi,4,intermediateConfig,dt)#controller format
    robot.setConfig(controller_2_klampt(robot,intermediateConfig))
    print '---------------------at home configuration -----------------------------'

    if CONTROLLER == 'debugging':
        differences=[]
        print('[*]Debug: Poking process start!')
        for i in range(len(points)):
            print('point %d, pos: %s, normals: %s'%(i,points[i],normals[i]))
            #robotCurrentConfig=intermediateConfig # TODO: compare to the intermediateConfig, I comment it
            goalPosition=deepcopy(points[i])
            approachVector=vectorops.unit(vectorops.mul(normals[i],-1.0)) #get unit vector in the direction '- normals'

            ## perform IK
            local_NY_UnitV=vectorops.unit(vectorops.cross([0,1,0],approachVector))
            pt1=goalPosition
            pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength)) # use 1m in normals direction.
            pt3=vectorops.add(pt1,local_NY_UnitV)
            [robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],maxDev,
                                            IKErrorTolerence,EEZLimit,collider,use_collision_detect=True,use_ik_detect=True)
            differences.append(difference)
            print('difference: %f'%difference)

            ### now start collecting data..
            travel = 0.0
            stepVector = vectorops.mul(approachVector,moveStep)
            while travel<0.0001: #just try 0.1mm?
                pt1=vectorops.add(pt1,stepVector)
                pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength))
                pt3=vectorops.add(pt1,local_NY_UnitV)
                [robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],maxDev,
                                                IKErrorTolerence,EEZLimit,collider,use_const=False)
                travel = travel + moveStep

            ### move the probe away, note: a bit different to physical mode
            pt1=vectorops.add(points[i],vectorops.mul(approachVector,-0.05)) ## move the probe 5 cm from the object surface
            pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength))
            pt3=vectorops.add(pt1,local_NY_UnitV)
            [robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],maxDev,
                                            IKErrorTolerence,EEZLimit,collider)

        ### move back to intermediate config
        robot.setConfig(controller_2_klampt(robot,intermediateConfig))
        print('[*]Debug: Poking process done, with max difference:%f'%max(differences))

        vis.show()
        while vis.shown():
            time.sleep(1.0)

    elif CONTROLLER == 'physical':
        ######################################## Ready to Take Measurements ################################################
        input('[!]Warning: There are %d poking points, robot will act!:'%len(points))
        point_list = range(65,120) #64
        #point_list = random.sample(range(0,94),94)
        #finish_list = [0,1,4,9,10,11,12,13,14,15,17,18,19,20,25,26,28,30,33,34,35,37,42,43,44,46,47,50,51,53,54,57,58,59,64,69,72,73,74,75,76,77,78,79,81,83,86,95]
        #point_list = [x for x in point_list if x not in finish_list]
        point_list = [64]
        for i in point_list:
            print('point %d, pos: %s, normals: %s'%(i,points[i],normals[i]))
            #init record file
            forceData=open(config.exp_path+'exp_'+str(config.exp_number)+'/ellipse/force_'+str(i)+'.txt','w')
            torqueData=open(config.exp_path+'exp_'+str(config.exp_number)+'/ellipse/torque_'+str(i)+'.txt','w')

            #init the backward distance
            travel = -0.018
            robotCurrentConfig=robotControlApi.getConfig()
            robot.setConfig(controller_2_klampt(robot,robotCurrentConfig))

            #calculate start position
            goalPosition=deepcopy(points[i])
            approachVector=vectorops.unit(vectorops.mul(normals[i],-1.0))

            #### Make sure no contact, back up 0.018m
            local_NY_UnitV=vectorops.unit(vectorops.cross([0,1,0],approachVector))
            pt1=vectorops.add(goalPosition,vectorops.mul(approachVector,travel))
            pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength))
            pt3=vectorops.add(pt1,local_NY_UnitV)
            [robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],
                                            maxDev,IKErrorTolerence,EEZLimit,collider,robotControlApi,longServoTime-1,dt,
                                            use_ik_detect=False,use_collision_detect=False)
            time.sleep(0.2)

            ## Zero the sensor before straight line push
            #
            # Note that the force is recorded in the global frame..
            # And the global frame has x and y axis flipped w.r.t the URDF....
            counter = 0.0
            totalF = [0,0,0]
            totalTorque = [0,0,0]
            startTime=time.time()
            while (time.time()-startTime) < 1: # use 1s to calculate the average force
                totalF = vectorops.add(totalF,robotControlApi.getWrench()[0:3])
                totalTorque = vectorops.add(totalTorque,robotControlApi.getWrench()[3:6])
                counter = counter + 1.0
                time.sleep(dt)
            forceBias = vectorops.mul(totalF,1.0/float(counter)) # while the probe is not touching the object, F_avg = sum(F)/n
            torqueBias = vectorops.mul(totalTorque,1.0/float(counter))

            ### now start collecting data..
            # Force direction x, y inverse, refer to correct force.py
            wrench = robotControlApi.getWrench()
            Force = fix_direction(vectorops.sub(wrench[0:3],forceBias))
            Force_normal = math.fabs(vectorops.dot(Force,approachVector)) #|F||n|cos(theta) = F dot n, set it >= 0
            Torque = vectorops.sub(wrench[3:6],torqueBias)
            Torque = fix_direction(Torque)

            forceHistory = [Force]
            torqueHistory = [Torque]
            force_normalHistory = [Force_normal]
            displacementHistory = [travel]
            stepVector = vectorops.mul(approachVector,moveStep)

            while Force_normal < forceLimit:
                robotCurrentConfig=robotControlApi.getConfig()
                robot.setConfig(controller_2_klampt(robot,robotCurrentConfig))
                pt1=vectorops.add(pt1,stepVector)
                pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength))
                pt3=vectorops.add(pt1,local_NY_UnitV)
                [robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],
                                                maxDev,IKErrorTolerence,EEZLimit,collider,robotControlApi,longServoTime,dt,
                                                use_const=False,use_ik_detect=False)
                time.sleep(dt)
                Force = fix_direction(vectorops.sub(robotControlApi.getWrench()[0:3],forceBias))
                Force_normal = math.fabs(vectorops.dot(Force,approachVector))
                Torque = vectorops.sub(robotControlApi.getWrench()[3:6],torqueBias)
                travel = travel + moveStep
                forceHistory.append([Force[0],Force[1],Force[2]])
                torqueHistory.append([Torque[0],Torque[1],Torque[2]])
                force_normalHistory.append(Force_normal)
                displacementHistory.append(travel)

            # record all the data in 2 files: each force row is "Fx Fy Fz F_normal displacement",
            # and each torque row is "Tx Ty Tz displacement"
            for (f,fn,d) in zip(forceHistory,force_normalHistory,displacementHistory):
                forceData.write(str(f[0])+' '+str(f[1])+' '+str(f[2])+' '+str(fn)+' '+str(d)+'\n')
            for (t,d) in zip(torqueHistory,displacementHistory):
                torqueData.write(str(t[0])+' '+str(t[1])+' '+str(t[2])+' '+str(d)+'\n')
            forceData.close()
            torqueData.close()

            ### move the probe away
            robotCurrentConfig=robotControlApi.getConfig()
            robot.setConfig(controller_2_klampt(robot,robotCurrentConfig))
            pt1=vectorops.add(points[i],vectorops.mul(approachVector,-0.10)) ## move the probe 10 cm from the object surface
            pt2=vectorops.add(pt1,vectorops.mul(approachVector,1.0-probeLength))
            pt3=vectorops.add(pt1,local_NY_UnitV)
            [robot,difference] = robot_move(CONTROLLER,world,robot,link,point_probe_to_local,[pt1,pt2,pt3],
                                            maxDev,IKErrorTolerence,EEZLimit,collider,robotControlApi,shortServoTime,dt,
                                            use_ik_detect=False)
            #constantVServo(robotControlApi,longServoTime,intermediateConfig,dt)
            print '----------------------- pt '+str(i)+' completed -------------------------------'

        #### move back to intermediate config
        constantVServo(robotControlApi,shortServoTime,intermediateConfig,dt)
        robotControlApi.stop()
def controller_2_klampt(robot,controllerQ):
    qOrig=robot.getConfig()
    q=[v for v in qOrig]
    for i in range(6):
        q[i+1]=controllerQ[i]
    return q

def klampt_2_controller(robotQ):
    temp=robotQ[1:7]
    temp.append(0)
    return temp
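As a quick sanity check of the index shift the two converters implement, the following standalone sketch repeats the same logic on plain Python lists (the `_demo` names and the stub configs are illustrative, not part of the Klampt API):

```python
# Standalone copies of the conversion logic, operating on plain lists.
def controller_2_klampt_demo(klampt_q, controller_q):
    q = list(klampt_q)
    for i in range(6):
        q[i + 1] = controller_q[i]   # controller joints fill Klampt slots 1..6
    return q

def klampt_2_controller_demo(robot_q):
    temp = list(robot_q[1:7])        # drop the fixed base slot
    temp.append(0)                   # trailing 0 expected by the controller format
    return temp

base = [0.0] * 8                     # stand-in for a Klampt config vector
ctrl = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
q = controller_2_klampt_demo(base, ctrl)
assert klampt_2_controller_demo(q)[:6] == ctrl   # joints survive the round trip
```

The round trip shows why both helpers exist: Klampt prepends a fixed base link at index 0, while the UR5 controller expects exactly six joint values plus a trailing gripper placeholder.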
def constantVServo(controller,servoTime,target,dt):
    currentTime=0.0
    goalConfig=deepcopy(target)
    currentConfig=controller.getConfig()
    difference=vectorops.sub(goalConfig,currentConfig)
    while currentTime < servoTime:
        setConfig=vectorops.madd(currentConfig,difference,currentTime/servoTime)
        controller.setConfig(setConfig)
        time.sleep(dt)
        currentTime=currentTime+dt
        #print currentTime
    return 0
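`constantVServo` streams linearly interpolated setpoints from the current configuration toward the target over `servoTime` seconds, one per `dt`. A minimal sketch of just that interpolation, with a list-collecting stub in place of the UR5 controller and no sleeping (everything here is illustrative, not the real controller interface):

```python
def constant_v_servo_demo(send_config, current, target, servo_time, dt):
    # Same interpolation as constantVServo: alpha ramps from 0 toward 1,
    # one setpoint per dt; like the original, the loop stops before alpha == 1.
    difference = [t - c for c, t in zip(current, target)]
    t = 0.0
    while t < servo_time:
        alpha = t / servo_time
        send_config([c + d * alpha for c, d in zip(current, difference)])
        t += dt
    return 0

sent = []
constant_v_servo_demo(sent.append, [0.0, 0.0], [1.0, 2.0], servo_time=1.0, dt=0.25)
assert len(sent) == 4             # setpoints at alpha = 0, 0.25, 0.5, 0.75
assert sent[-1] == [0.75, 1.5]    # last setpoint undershoots the target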
def fix_direction(Force):
    # currently an identity pass-through; the x/y sign flip referred to by the
    # "Force direction x, y inverse" comments is disabled here
    Force[0] = Force[0]
    Force[1] = Force[1]
    return Force
def robot_move(mode,world,robot,link,point_ee,point_world,maxDev,IKErrorTolerence,
               EEZLimit,collider,robotControlApi=None,ServoTime=9999.0,dt=1.0,
               use_const = True,vis=vis,use_collision_detect = False,use_ik_detect = False):
    robotCurrentConfig=klampt_2_controller(robot.getConfig())
    goal=ik.objective(link,local=point_ee,world=point_world)
    res=ik.solve_nearby(goal,maxDeviation=maxDev,tol=0.00001)
    #res=ik.solve_global(goal,tol=0.00001)
    if res:
        # collision detect
        if check_collision_linear(robot,collider,controller_2_klampt(robot,robotCurrentConfig),robot.getConfig(),10):
            print "[!]Warning: collision detected!"
            if use_collision_detect == True:
                vis.show()
                if input('continue?') != 1:
                    exit()
                else:
                    pass
        # cal difference
        diff=np.max(np.absolute((np.array(vectorops.sub(robotCurrentConfig[0:5],klampt_2_controller(robot.getConfig())[0:5])))))
        EEZPos=link.getTransform()[1]
        if diff<IKErrorTolerence and EEZPos>EEZLimit: #126 degrees
            if mode == 'debugging':
                pass
            elif mode == 'physical':
                if use_const:
                    constantVServo(robotControlApi,ServoTime,klampt_2_controller(robot.getConfig()),dt)
                else:
                    robotControlApi.setConfig(klampt_2_controller(robot.getConfig()))
        else:
            print "[!]IK too far away"
            if use_ik_detect == True:
                if input('continue?') != 1:
                    exit()
    else:
        diff = 9999.0
        print "[!]IK failure"
        if use_ik_detect == True:
            vis.show()
            if input('continue?') != 1:
                exit()
    return robot, diff
def load_pcd(path, pcdtype='xyzrgbn'):
    points=[]
    normals=[]
    normal_theta=[]
    theta=[]
    pt_index=[]
    dataFile=open(path,'r')
    for line in dataFile:
        line=line.rstrip()
        l=[num for num in line.split(' ')]
        l2=[float(num) for num in l]
        points.append(l2[0:3])
        normals.append(l2[6:9])
        if pcdtype == 'xyzrgbntheta':
            normal_theta.append(l2[10:13])
            theta.append(l2[13])
            pt_index.append(l2[14])
    dataFile.close()
    print '---------------------pcd loaded -----------------------------'
    if pcdtype == 'xyzrgbn':
        return points, normals
    elif pcdtype == 'xyzrgbntheta':
        return points, normals, normal_theta, theta, pt_index
def reconstruct_pcd(oripath,newpath,theta_list_num):
    oriFile=open(oripath,'r')
    newFile=open(newpath,'w')
    pt_index=0
    for line in oriFile:
        line = line.rstrip()
        l=[num for num in line.split(' ')]
        tmp_list = random.sample(range(100+1),theta_list_num) #TODO:
        theta_list = [(math.pi*tmp/100 - math.pi*(0.0/4.0)) for tmp in tmp_list]
        for theta in theta_list:
            normal_theta = [math.cos(theta),math.sin(theta),0] # means the line probe's line direction
            newFile.write(str(l[0])+' '+str(l[1])+' '+str(l[2])+' '+str(l[3])+' '+str(l[4])+' '+
                          str(l[5])+' '+str(l[6])+' '+str(l[7])+' '+str(l[8])+' '+str(l[9])+' '+
                          str(normal_theta[0])+' '+str(normal_theta[1])+' '+str(normal_theta[2])+' '+
                          str(theta)+' '+str(pt_index)+'\n')
        pt_index = pt_index + 1
    oriFile.close()
    newFile.close()
def back_2_line(normal, projection):
    projection[2] = -(normal[0]*projection[0]+normal[1]*projection[1])/normal[2]
    return projection
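`back_2_line` solves for the z-component that places the point in the plane through the origin with the given normal, i.e. it enforces dot(normal, projection) = 0. A standalone check with hypothetical inputs (the `_demo` name is not part of the original module):

```python
def back_2_line_demo(normal, projection):
    p = list(projection)
    # choose z so that normal . p == 0; requires normal[2] != 0
    p[2] = -(normal[0] * p[0] + normal[1] * p[1]) / normal[2]
    return p

n = [1.0, 2.0, 2.0]
p = back_2_line_demo(n, [3.0, 1.0, 99.0])            # initial z is irrelevant
assert abs(sum(a * b for a, b in zip(n, p))) < 1e-9  # point now lies in the plane
```

Note the division means the helper is only valid for normals with a nonzero z-component.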
# ---- music.py (josemariasosa/music-theory, license OLDAP-2.7) ----
] | null | null | null | music = {
""
}
# https://www.pianoscales.org/major-harmonizing.html
[
{
"root": "a",
"major": ["a", "b", "c#", "d", "e", "f#", "g#", "a"],
"minor": {
"natural": ["a", "b", "c", "d", "e", "f", "g", "a"],
"harmonic": ["a", "b", "c", "d", "e", "f", "g#", "a"],
"melodic": ["a", "b", "c", "d", "e", "f#", "g#", "a"]
},
"harmonization": {
"major": [
{
"mode": None,
"degree": "I",
"name": "A",
"full_name": "a major",
"notes": {
"triad": ["a", "c#", "e"]
}
},
{
"mode": None,
"degree": "ii",
"name": "Bmin",
"full_name": "b minor",
"notes": {
"triad": ["b", "d", "f#"]
}
},
{
"mode": None,
"degree": "iii",
"name": "C#min",
"full_name": "c# minor",
"notes": {
"triad": ["c#", "e", "g#"]
}
},
{
"mode": None,
"degree": "IV",
"name": "D",
"full_name": "d major",
"notes": {
"triad": ["d", "f#", "a"]
}
},
{
"mode": None,
"degree": "V",
"name": "E",
"full_name": "e major",
"notes": {
"triad": ["e", "g#", "b"]
}
},
{
"mode": None,
"degree": "vi",
"name": "F#min",
"full_name": "f# minor",
"notes": {
"triad": ["f#", "a", "c#"]
}
},
{
"mode": None,
"degree": "vii°",
"name": "G#dim",
"full_name": "g# diminished",
"notes": {
"triad": ["g#", "b", "d"]
}
}
],
"minor": {
"natural": [
{
"mode": None,
"degree": "i",
"name": "Amin",
"full_name": "a minor",
"notes": {
"triad": ["a", "c", "e"]
}
},
{
"mode": None,
"degree": "ii°",
"name": "Bdim",
"full_name": "b diminished",
"notes": {
"triad": ["b", "d", "f"]
}
},
{
"mode": None,
"degree": "III",
"name": "C",
"full_name": "c major",
"notes": {
"triad": ["c", "e", "g"]
}
},
{
"mode": None,
"degree": "iv",
"name": "Dmin",
"full_name": "d minor",
"notes": {
"triad": ["d", "f", "a"]
}
},
{
"mode": None,
"degree": "v",
"name": "Emin",
"full_name": "e minor",
"notes": {
"triad": ["e", "g", "b"]
}
},
{
"mode": None,
"degree": "VI",
"name": "F",
"full_name": "f major",
"notes": {
"triad": ["f", "a", "c"]
}
},
{
"mode": None,
"degree": "VII",
"name": "G",
"full_name": "g major",
"notes": {
"triad": ["g", "b", "d"]
}
}
]
}
}
},
{
"root": "a#/bb",
"major": ["a#", "c", "d", "d#", "f", "g", "a", "a#"],
"minor": {
"natural": ["a#", "b#", "c#", "d#", "e#", "f#", "g#", "a#"],
"harmonic": ["a#", "b#", "c#", "d#", "e#", "f#", "a", "a#"],
"melodic": ["a#", "b#", "c#", "d#", "e#", "g", "a", "a#"]
},
"harmonization": {
"major": [
{
"mode": None,
"degree": "I",
"name": "A#",
"full_name": "a# major",
"notes": {
"triad": ["a#", "d", "f"]
}
},
{
"mode": None,
"degree": "ii",
"name": "Cmin",
"full_name": "c minor",
"notes": {
"triad": ["c", "d#", "g"]
}
},
{
"mode": None,
"degree": "iii",
"name": "Dmin",
"full_name": "d minor",
"notes": {
"triad": ["d", "f", "a"]
}
},
{
"mode": None,
"degree": "IV",
"name": "D#",
"full_name": "d# major",
"notes": {
"triad": ["d#", "g", "a#"]
}
},
{
"mode": None,
"degree": "V",
"name": "F",
"full_name": "f major",
"notes": {
"triad": ["f", "a", "c"]
}
},
{
"mode": None,
"degree": "vi",
"name": "Gmin",
"full_name": "g minor",
"notes": {
"triad": ["g", "a#", "d"]
}
},
{
"mode": None,
"degree": "vii°",
"name": "Adim",
"full_name": "a diminished",
"notes": {
"triad": ["a", "c", "d#"]
}
}
],
"minor": {
"natural": [
{
"mode": None,
"degree": "i",
"name": "A#min",
"full_name": "a# minor",
"notes": {
"triad": ["a#", "c#", "e#"]
}
},
{
"mode": None,
"degree": "ii°",
"name": "B#dim",
"full_name": "b# diminished",
"notes": {
"triad": ["b#", "d#", "f#"]
}
},
{
"mode": None,
"degree": "III",
"name": "C#",
"full_name": "c# major",
"notes": {
"triad": ["c#", "e#", "g#"]
}
},
{
"mode": None,
"degree": "iv",
"name": "D#min",
"full_name": "d# minor",
"notes": {
"triad": ["d#", "f#", "a#"]
}
},
{
"mode": None,
"degree": "v",
"name": "E#min",
"full_name": "e# minor",
"notes": {
"triad": ["e#", "g#", "b#"]
}
},
{
"mode": None,
"degree": "VI",
"name": "F#",
"full_name": "f# major",
"notes": {
"triad": ["f#", "a#", "c#"]
}
},
{
"mode": None,
"degree": "VII",
"name": "G#",
"full_name": "g# major",
"notes": {
"triad": ["g#", "b#", "d#"]
}
}
]
}
}
},
{
"root": "b",
"major": ["b", "c#", "d#", "e", "f#", "g#", "a#", "b"],
"minor": {
"natural": ["b", "c#", "d", "e", "f#", "g", "a", "b"],
"harmonic": ["b", "c#", "d", "e", "f#", "g", "a#", "b"],
"melodic": ["b", "c#", "d", "e", "f#", "g#", "a#", "b"]
},
"harmonization": {
"major": [
{
"mode": None,
"degree": "I",
"name": "B",
"full_name": "b major",
"notes": {
"triad": ["b", "d#", "f#"]
}
},
{
"mode": None,
"degree": "ii",
"name": "C#min",
"full_name": "c# minor",
"notes": {
"triad": ["c#", "e", "g#"]
}
},
{
"mode": None,
"degree": "iii",
"name": "D#min",
"full_name": "d# minor",
"notes": {
"triad": ["d#", "f#", "a#"]
}
},
{
"mode": None,
"degree": "IV",
"name": "E",
"full_name": "e major",
"notes": {
"triad": ["e", "g#", "b"]
}
},
{
"mode": None,
"degree": "V",
"name": "F#",
"full_name": "f# major",
"notes": {
"triad": ["f#", "a#", "c#"]
}
},
{
"mode": None,
"degree": "vi",
"name": "G#min",
"full_name": "g# minor",
"notes": {
"triad": ["g#", "b", "d#"]
}
},
{
"mode": None,
"degree": "vii°",
"name": "A#dim",
"full_name": "a# diminished",
"notes": {
"triad": ["a#", "c#", "e"]
}
}
],
"minor": {
"natural": [
{
"mode": None,
"degree": "i",
"name": "Bmin",
"full_name": "b minor",
"notes": {
"triad": ["b", "d", "f#"]
}
},
{
"mode": None,
"degree": "ii°",
"name": "C#dim",
"full_name": "c# diminished",
"notes": {
"triad": ["c#", "e", "g"]
}
},
{
"mode": None,
"degree": "III",
"name": "D",
"full_name": "d major",
"notes": {
"triad": ["d", "f#", "a"]
}
},
{
"mode": None,
"degree": "iv",
"name": "Emin",
"full_name": "e minor",
"notes": {
"triad": ["e", "g", "b"]
}
},
{
"mode": None,
"degree": "v",
"name": "F#min",
"full_name": "f# minor",
"notes": {
"triad": ["f#", "a", "c#"]
}
},
{
"mode": None,
"degree": "VI",
"name": "G",
"full_name": "g major",
"notes": {
"triad": ["g", "b", "d"]
}
},
{
"mode": None,
"degree": "VII",
"name": "A",
"full_name": "a major",
"notes": {
"triad": ["a", "c#", "e"]
}
}
]
}
}
}
]
# ---- third_party/python-peachpy/test/x86_64/encoding/test_fma.py (gautamkmr/caffe2, license MIT) ----
# Reference opcodes are generated by:
# GNU assembler (GNU Binutils) 2.28.51.20170402
from peachpy.x86_64 import *
import unittest
class TestVFMADD132SS(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0x99, 0x74, 0x24, 0xE0]), VFMADD132SS(xmm30(k2.z), xmm4, dword[r12 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0x99, 0xCB]), VFMADD132SS(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0x99, 0x4C, 0xCC, 0x9D]), VFMADD132SS(xmm1, xmm14, dword[r12 + rcx*8 - 99]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x9A, 0x99, 0xF3]), VFMADD132SS(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFMADD213SS(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xA9, 0x74, 0x24, 0xE0]), VFMADD213SS(xmm30(k2.z), xmm4, dword[r12 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xA9, 0xCB]), VFMADD213SS(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xA9, 0x4C, 0xCC, 0x9D]), VFMADD213SS(xmm1, xmm14, dword[r12 + rcx*8 - 99]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x9A, 0xA9, 0xF3]), VFMADD213SS(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFMADD231SS(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xB9, 0x74, 0x24, 0xE0]), VFMADD231SS(xmm30(k2.z), xmm4, dword[r12 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xB9, 0xCB]), VFMADD231SS(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xB9, 0x4C, 0xCC, 0x9D]), VFMADD231SS(xmm1, xmm14, dword[r12 + rcx*8 - 99]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x9A, 0xB9, 0xF3]), VFMADD231SS(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFMADDSS(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x6A, 0xC9, 0x30]), VFMADDSS(xmm1, xmm14, xmm3, xmm9).encode())
        self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x6A, 0x4C, 0xCC, 0x9D, 0x30]), VFMADDSS(xmm1, xmm14, xmm3, dword[r12 + rcx*8 - 99]).encode())
        self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x6A, 0x4C, 0xCC, 0x9D, 0x90]), VFMADDSS(xmm1, xmm14, dword[r12 + rcx*8 - 99], xmm9).encode())
class TestVFMSUB132SS(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0x9B, 0x74, 0x24, 0xE0]), VFMSUB132SS(xmm30(k2.z), xmm4, dword[r12 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0x9B, 0xCB]), VFMSUB132SS(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0x9B, 0x4C, 0xCC, 0x9D]), VFMSUB132SS(xmm1, xmm14, dword[r12 + rcx*8 - 99]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x9A, 0x9B, 0xF3]), VFMSUB132SS(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFMSUB213SS(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xAB, 0x74, 0x24, 0xE0]), VFMSUB213SS(xmm30(k2.z), xmm4, dword[r12 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xAB, 0xCB]), VFMSUB213SS(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xAB, 0x4C, 0xCC, 0x9D]), VFMSUB213SS(xmm1, xmm14, dword[r12 + rcx*8 - 99]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x9A, 0xAB, 0xF3]), VFMSUB213SS(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFMSUB231SS(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xBB, 0x74, 0x24, 0xE0]), VFMSUB231SS(xmm30(k2.z), xmm4, dword[r12 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xBB, 0xCB]), VFMSUB231SS(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xBB, 0x4C, 0xCC, 0x9D]), VFMSUB231SS(xmm1, xmm14, dword[r12 + rcx*8 - 99]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x9A, 0xBB, 0xF3]), VFMSUB231SS(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFMSUBSS(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x6E, 0xC9, 0x30]), VFMSUBSS(xmm1, xmm14, xmm3, xmm9).encode())
        self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x6E, 0x4C, 0xCC, 0x9D, 0x30]), VFMSUBSS(xmm1, xmm14, xmm3, dword[r12 + rcx*8 - 99]).encode())
        self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x6E, 0x4C, 0xCC, 0x9D, 0x90]), VFMSUBSS(xmm1, xmm14, dword[r12 + rcx*8 - 99], xmm9).encode())
class TestVFNMADD132SS(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0x9D, 0x74, 0x24, 0xE0]), VFNMADD132SS(xmm30(k2.z), xmm4, dword[r12 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0x9D, 0xCB]), VFNMADD132SS(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0x9D, 0x4C, 0xCC, 0x9D]), VFNMADD132SS(xmm1, xmm14, dword[r12 + rcx*8 - 99]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x9A, 0x9D, 0xF3]), VFNMADD132SS(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFNMADD213SS(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xAD, 0x74, 0x24, 0xE0]), VFNMADD213SS(xmm30(k2.z), xmm4, dword[r12 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xAD, 0xCB]), VFNMADD213SS(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xAD, 0x4C, 0xCC, 0x9D]), VFNMADD213SS(xmm1, xmm14, dword[r12 + rcx*8 - 99]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x9A, 0xAD, 0xF3]), VFNMADD213SS(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFNMADD231SS(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xBD, 0x74, 0x24, 0xE0]), VFNMADD231SS(xmm30(k2.z), xmm4, dword[r12 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xBD, 0xCB]), VFNMADD231SS(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xBD, 0x4C, 0xCC, 0x9D]), VFNMADD231SS(xmm1, xmm14, dword[r12 + rcx*8 - 99]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x9A, 0xBD, 0xF3]), VFNMADD231SS(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFNMADDSS(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x7A, 0xC9, 0x30]), VFNMADDSS(xmm1, xmm14, xmm3, xmm9).encode())
        self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x7A, 0x4C, 0xCC, 0x9D, 0x30]), VFNMADDSS(xmm1, xmm14, xmm3, dword[r12 + rcx*8 - 99]).encode())
        self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x7A, 0x4C, 0xCC, 0x9D, 0x90]), VFNMADDSS(xmm1, xmm14, dword[r12 + rcx*8 - 99], xmm9).encode())
class TestVFNMSUB132SS(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0x9F, 0x74, 0x24, 0xE0]), VFNMSUB132SS(xmm30(k2.z), xmm4, dword[r12 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0x9F, 0xCB]), VFNMSUB132SS(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0x9F, 0x4C, 0xCC, 0x9D]), VFNMSUB132SS(xmm1, xmm14, dword[r12 + rcx*8 - 99]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x9A, 0x9F, 0xF3]), VFNMSUB132SS(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFNMSUB213SS(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xAF, 0x74, 0x24, 0xE0]), VFNMSUB213SS(xmm30(k2.z), xmm4, dword[r12 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xAF, 0xCB]), VFNMSUB213SS(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xAF, 0x4C, 0xCC, 0x9D]), VFNMSUB213SS(xmm1, xmm14, dword[r12 + rcx*8 - 99]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x9A, 0xAF, 0xF3]), VFNMSUB213SS(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFNMSUB231SS(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xBF, 0x74, 0x24, 0xE0]), VFNMSUB231SS(xmm30(k2.z), xmm4, dword[r12 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xBF, 0xCB]), VFNMSUB231SS(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xBF, 0x4C, 0xCC, 0x9D]), VFNMSUB231SS(xmm1, xmm14, dword[r12 + rcx*8 - 99]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x9A, 0xBF, 0xF3]), VFNMSUB231SS(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFNMSUBSS(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x7E, 0xC9, 0x30]), VFNMSUBSS(xmm1, xmm14, xmm3, xmm9).encode())
        self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x7E, 0x4C, 0xCC, 0x9D, 0x30]), VFNMSUBSS(xmm1, xmm14, xmm3, dword[r12 + rcx*8 - 99]).encode())
        self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x7E, 0x4C, 0xCC, 0x9D, 0x90]), VFNMSUBSS(xmm1, xmm14, dword[r12 + rcx*8 - 99], xmm9).encode())
class TestVFMADD132SD(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0x99, 0x73, 0xF0]), VFMADD132SD(xmm30(k2.z), xmm4, qword[r11 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0x99, 0xCB]), VFMADD132SD(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0x99, 0x4C, 0xD3, 0xA8]), VFMADD132SD(xmm1, xmm14, qword[r11 + rdx*8 - 88]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x9A, 0x99, 0xF3]), VFMADD132SD(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFMADD213SD(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xA9, 0x73, 0xF0]), VFMADD213SD(xmm30(k2.z), xmm4, qword[r11 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xA9, 0xCB]), VFMADD213SD(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xA9, 0x4C, 0xD3, 0xA8]), VFMADD213SD(xmm1, xmm14, qword[r11 + rdx*8 - 88]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x9A, 0xA9, 0xF3]), VFMADD213SD(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFMADD231SD(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xB9, 0x73, 0xF0]), VFMADD231SD(xmm30(k2.z), xmm4, qword[r11 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xB9, 0xCB]), VFMADD231SD(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xB9, 0x4C, 0xD3, 0xA8]), VFMADD231SD(xmm1, xmm14, qword[r11 + rdx*8 - 88]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x9A, 0xB9, 0xF3]), VFMADD231SD(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFMADDSD(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x6B, 0xC9, 0x30]), VFMADDSD(xmm1, xmm14, xmm3, xmm9).encode())
        self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x6B, 0x4C, 0xD3, 0xA8, 0x30]), VFMADDSD(xmm1, xmm14, xmm3, qword[r11 + rdx*8 - 88]).encode())
        self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x6B, 0x4C, 0xD3, 0xA8, 0x90]), VFMADDSD(xmm1, xmm14, qword[r11 + rdx*8 - 88], xmm9).encode())
class TestVFMSUB132SD(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0x9B, 0x73, 0xF0]), VFMSUB132SD(xmm30(k2.z), xmm4, qword[r11 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0x9B, 0xCB]), VFMSUB132SD(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0x9B, 0x4C, 0xD3, 0xA8]), VFMSUB132SD(xmm1, xmm14, qword[r11 + rdx*8 - 88]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x9A, 0x9B, 0xF3]), VFMSUB132SD(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFMSUB213SD(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xAB, 0x73, 0xF0]), VFMSUB213SD(xmm30(k2.z), xmm4, qword[r11 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xAB, 0xCB]), VFMSUB213SD(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xAB, 0x4C, 0xD3, 0xA8]), VFMSUB213SD(xmm1, xmm14, qword[r11 + rdx*8 - 88]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x9A, 0xAB, 0xF3]), VFMSUB213SD(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFMSUB231SD(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xBB, 0x73, 0xF0]), VFMSUB231SD(xmm30(k2.z), xmm4, qword[r11 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xBB, 0xCB]), VFMSUB231SD(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xBB, 0x4C, 0xD3, 0xA8]), VFMSUB231SD(xmm1, xmm14, qword[r11 + rdx*8 - 88]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x9A, 0xBB, 0xF3]), VFMSUB231SD(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFMSUBSD(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x6F, 0xC9, 0x30]), VFMSUBSD(xmm1, xmm14, xmm3, xmm9).encode())
        self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x6F, 0x4C, 0xD3, 0xA8, 0x30]), VFMSUBSD(xmm1, xmm14, xmm3, qword[r11 + rdx*8 - 88]).encode())
        self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x6F, 0x4C, 0xD3, 0xA8, 0x90]), VFMSUBSD(xmm1, xmm14, qword[r11 + rdx*8 - 88], xmm9).encode())
class TestVFNMADD132SD(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0x9D, 0x73, 0xF0]), VFNMADD132SD(xmm30(k2.z), xmm4, qword[r11 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0x9D, 0xCB]), VFNMADD132SD(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0x9D, 0x4C, 0xD3, 0xA8]), VFNMADD132SD(xmm1, xmm14, qword[r11 + rdx*8 - 88]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x9A, 0x9D, 0xF3]), VFNMADD132SD(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFNMADD213SD(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xAD, 0x73, 0xF0]), VFNMADD213SD(xmm30(k2.z), xmm4, qword[r11 - 128]).encode())
        self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xAD, 0xCB]), VFNMADD213SD(xmm1, xmm14, xmm3).encode())
        self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xAD, 0x4C, 0xD3, 0xA8]), VFNMADD213SD(xmm1, xmm14, qword[r11 + rdx*8 - 88]).encode())
        self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x9A, 0xAD, 0xF3]), VFNMADD213SD(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFNMADD231SD(unittest.TestCase):
    def runTest(self):
        self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xBD, 0x73, 0xF0]), VFNMADD231SD(xmm30(k2.z), xmm4, qword[r11 - 128]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xBD, 0xCB]), VFNMADD231SD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xBD, 0x4C, 0xD3, 0xA8]), VFNMADD231SD(xmm1, xmm14, qword[r11 + rdx*8 - 88]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x9A, 0xBD, 0xF3]), VFNMADD231SD(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFNMADDSD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x7B, 0xC9, 0x30]), VFNMADDSD(xmm1, xmm14, xmm3, xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x7B, 0x4C, 0xD3, 0xA8, 0x30]), VFNMADDSD(xmm1, xmm14, xmm3, qword[r11 + rdx*8 - 88]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x7B, 0x4C, 0xD3, 0xA8, 0x90]), VFNMADDSD(xmm1, xmm14, qword[r11 + rdx*8 - 88], xmm9).encode())
class TestVFNMSUB132SD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0x9F, 0x73, 0xF0]), VFNMSUB132SD(xmm30(k2.z), xmm4, qword[r11 - 128]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0x9F, 0xCB]), VFNMSUB132SD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0x9F, 0x4C, 0xD3, 0xA8]), VFNMSUB132SD(xmm1, xmm14, qword[r11 + rdx*8 - 88]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x9A, 0x9F, 0xF3]), VFNMSUB132SD(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFNMSUB213SD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xAF, 0x73, 0xF0]), VFNMSUB213SD(xmm30(k2.z), xmm4, qword[r11 - 128]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xAF, 0xCB]), VFNMSUB213SD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xAF, 0x4C, 0xD3, 0xA8]), VFNMSUB213SD(xmm1, xmm14, qword[r11 + rdx*8 - 88]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x9A, 0xAF, 0xF3]), VFNMSUB213SD(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFNMSUB231SD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xBF, 0x73, 0xF0]), VFNMSUB231SD(xmm30(k2.z), xmm4, qword[r11 - 128]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xBF, 0xCB]), VFNMSUB231SD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xBF, 0x4C, 0xD3, 0xA8]), VFNMSUB231SD(xmm1, xmm14, qword[r11 + rdx*8 - 88]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x9A, 0xBF, 0xF3]), VFNMSUB231SD(xmm30(k2.z), xmm4, xmm19, {rn_sae}).encode())
class TestVFNMSUBSD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x7F, 0xC9, 0x30]), VFNMSUBSD(xmm1, xmm14, xmm3, xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x7F, 0x4C, 0xD3, 0xA8, 0x30]), VFNMSUBSD(xmm1, xmm14, xmm3, qword[r11 + rdx*8 - 88]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x7F, 0x4C, 0xD3, 0xA8, 0x90]), VFNMSUBSD(xmm1, xmm14, qword[r11 + rdx*8 - 88], xmm9).encode())
class TestVFMADD132PS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0x98, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMADD132PS(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x8A, 0x98, 0xF3]), VFMADD132PS(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0x55, 0xAD, 0x98, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADD132PS(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0x55, 0xAD, 0x98, 0xDC]), VFMADD132PS(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0xC6, 0x98, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADD132PS(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0x98, 0xCB]), VFMADD132PS(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0x98, 0x4C, 0xC2, 0xB3]), VFMADD132PS(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x05, 0x98, 0xD4]), VFMADD132PS(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x05, 0x98, 0x54, 0xD9, 0xBE]), VFMADD132PS(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0x96, 0x98, 0xC9]), VFMADD132PS(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMADD213PS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xA8, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMADD213PS(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x8A, 0xA8, 0xF3]), VFMADD213PS(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0x55, 0xAD, 0xA8, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADD213PS(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0x55, 0xAD, 0xA8, 0xDC]), VFMADD213PS(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0xC6, 0xA8, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADD213PS(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xA8, 0xCB]), VFMADD213PS(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xA8, 0x4C, 0xC2, 0xB3]), VFMADD213PS(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x05, 0xA8, 0xD4]), VFMADD213PS(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x05, 0xA8, 0x54, 0xD9, 0xBE]), VFMADD213PS(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0x96, 0xA8, 0xC9]), VFMADD213PS(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMADD231PS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xB8, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMADD231PS(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x8A, 0xB8, 0xF3]), VFMADD231PS(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0x55, 0xAD, 0xB8, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADD231PS(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0x55, 0xAD, 0xB8, 0xDC]), VFMADD231PS(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0xC6, 0xB8, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADD231PS(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xB8, 0xCB]), VFMADD231PS(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xB8, 0x4C, 0xC2, 0xB3]), VFMADD231PS(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x05, 0xB8, 0xD4]), VFMADD231PS(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x05, 0xB8, 0x54, 0xD9, 0xBE]), VFMADD231PS(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0x96, 0xB8, 0xC9]), VFMADD231PS(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMADDPS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x68, 0xC9, 0x30]), VFMADDPS(xmm1, xmm14, xmm3, xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x68, 0x4C, 0xC2, 0xB3, 0x30]), VFMADDPS(xmm1, xmm14, xmm3, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x68, 0x4C, 0xC2, 0xB3, 0x90]), VFMADDPS(xmm1, xmm14, oword[r10 + rax*8 - 77], xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x68, 0xD2, 0x40]), VFMADDPS(ymm2, ymm15, ymm4, ymm10).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x68, 0x54, 0xD9, 0xBE, 0x40]), VFMADDPS(ymm2, ymm15, ymm4, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x05, 0x68, 0x54, 0xD9, 0xBE, 0xA0]), VFMADDPS(ymm2, ymm15, hword[r9 + rbx*8 - 66], ymm10).encode())
class TestVFMSUB132PS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0x9A, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMSUB132PS(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x8A, 0x9A, 0xF3]), VFMSUB132PS(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0x55, 0xAD, 0x9A, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUB132PS(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0x55, 0xAD, 0x9A, 0xDC]), VFMSUB132PS(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0xC6, 0x9A, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUB132PS(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0x9A, 0xCB]), VFMSUB132PS(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0x9A, 0x4C, 0xC2, 0xB3]), VFMSUB132PS(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x05, 0x9A, 0xD4]), VFMSUB132PS(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x05, 0x9A, 0x54, 0xD9, 0xBE]), VFMSUB132PS(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0x96, 0x9A, 0xC9]), VFMSUB132PS(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMSUB213PS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xAA, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMSUB213PS(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x8A, 0xAA, 0xF3]), VFMSUB213PS(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0x55, 0xAD, 0xAA, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUB213PS(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0x55, 0xAD, 0xAA, 0xDC]), VFMSUB213PS(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0xC6, 0xAA, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUB213PS(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xAA, 0xCB]), VFMSUB213PS(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xAA, 0x4C, 0xC2, 0xB3]), VFMSUB213PS(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x05, 0xAA, 0xD4]), VFMSUB213PS(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x05, 0xAA, 0x54, 0xD9, 0xBE]), VFMSUB213PS(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0x96, 0xAA, 0xC9]), VFMSUB213PS(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMSUB231PS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xBA, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMSUB231PS(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x8A, 0xBA, 0xF3]), VFMSUB231PS(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0x55, 0xAD, 0xBA, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUB231PS(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0x55, 0xAD, 0xBA, 0xDC]), VFMSUB231PS(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0xC6, 0xBA, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUB231PS(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xBA, 0xCB]), VFMSUB231PS(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xBA, 0x4C, 0xC2, 0xB3]), VFMSUB231PS(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x05, 0xBA, 0xD4]), VFMSUB231PS(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x05, 0xBA, 0x54, 0xD9, 0xBE]), VFMSUB231PS(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0x96, 0xBA, 0xC9]), VFMSUB231PS(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMSUBPS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x6C, 0xC9, 0x30]), VFMSUBPS(xmm1, xmm14, xmm3, xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x6C, 0x4C, 0xC2, 0xB3, 0x30]), VFMSUBPS(xmm1, xmm14, xmm3, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x6C, 0x4C, 0xC2, 0xB3, 0x90]), VFMSUBPS(xmm1, xmm14, oword[r10 + rax*8 - 77], xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x6C, 0xD2, 0x40]), VFMSUBPS(ymm2, ymm15, ymm4, ymm10).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x6C, 0x54, 0xD9, 0xBE, 0x40]), VFMSUBPS(ymm2, ymm15, ymm4, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x05, 0x6C, 0x54, 0xD9, 0xBE, 0xA0]), VFMSUBPS(ymm2, ymm15, hword[r9 + rbx*8 - 66], ymm10).encode())
class TestVFNMADD132PS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0x9C, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFNMADD132PS(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x8A, 0x9C, 0xF3]), VFNMADD132PS(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0x55, 0xAD, 0x9C, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMADD132PS(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0x55, 0xAD, 0x9C, 0xDC]), VFNMADD132PS(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0xC6, 0x9C, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMADD132PS(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0x9C, 0xCB]), VFNMADD132PS(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0x9C, 0x4C, 0xC2, 0xB3]), VFNMADD132PS(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x05, 0x9C, 0xD4]), VFNMADD132PS(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x05, 0x9C, 0x54, 0xD9, 0xBE]), VFNMADD132PS(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0x96, 0x9C, 0xC9]), VFNMADD132PS(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFNMADD213PS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xAC, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFNMADD213PS(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x8A, 0xAC, 0xF3]), VFNMADD213PS(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0x55, 0xAD, 0xAC, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMADD213PS(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0x55, 0xAD, 0xAC, 0xDC]), VFNMADD213PS(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0xC6, 0xAC, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMADD213PS(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xAC, 0xCB]), VFNMADD213PS(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xAC, 0x4C, 0xC2, 0xB3]), VFNMADD213PS(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x05, 0xAC, 0xD4]), VFNMADD213PS(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x05, 0xAC, 0x54, 0xD9, 0xBE]), VFNMADD213PS(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0x96, 0xAC, 0xC9]), VFNMADD213PS(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFNMADD231PS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xBC, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFNMADD231PS(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x8A, 0xBC, 0xF3]), VFNMADD231PS(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0x55, 0xAD, 0xBC, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMADD231PS(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0x55, 0xAD, 0xBC, 0xDC]), VFNMADD231PS(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0xC6, 0xBC, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMADD231PS(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xBC, 0xCB]), VFNMADD231PS(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xBC, 0x4C, 0xC2, 0xB3]), VFNMADD231PS(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x05, 0xBC, 0xD4]), VFNMADD231PS(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x05, 0xBC, 0x54, 0xD9, 0xBE]), VFNMADD231PS(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0x96, 0xBC, 0xC9]), VFNMADD231PS(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFNMADDPS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x78, 0xC9, 0x30]), VFNMADDPS(xmm1, xmm14, xmm3, xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x78, 0x4C, 0xC2, 0xB3, 0x30]), VFNMADDPS(xmm1, xmm14, xmm3, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x78, 0x4C, 0xC2, 0xB3, 0x90]), VFNMADDPS(xmm1, xmm14, oword[r10 + rax*8 - 77], xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x78, 0xD2, 0x40]), VFNMADDPS(ymm2, ymm15, ymm4, ymm10).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x78, 0x54, 0xD9, 0xBE, 0x40]), VFNMADDPS(ymm2, ymm15, ymm4, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x05, 0x78, 0x54, 0xD9, 0xBE, 0xA0]), VFNMADDPS(ymm2, ymm15, hword[r9 + rbx*8 - 66], ymm10).encode())
class TestVFNMSUB132PS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0x9E, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFNMSUB132PS(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x8A, 0x9E, 0xF3]), VFNMSUB132PS(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0x55, 0xAD, 0x9E, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMSUB132PS(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0x55, 0xAD, 0x9E, 0xDC]), VFNMSUB132PS(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0xC6, 0x9E, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMSUB132PS(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0x9E, 0xCB]), VFNMSUB132PS(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0x9E, 0x4C, 0xC2, 0xB3]), VFNMSUB132PS(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x05, 0x9E, 0xD4]), VFNMSUB132PS(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x05, 0x9E, 0x54, 0xD9, 0xBE]), VFNMSUB132PS(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0x96, 0x9E, 0xC9]), VFNMSUB132PS(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFNMSUB213PS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xAE, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFNMSUB213PS(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x8A, 0xAE, 0xF3]), VFNMSUB213PS(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0x55, 0xAD, 0xAE, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMSUB213PS(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0x55, 0xAD, 0xAE, 0xDC]), VFNMSUB213PS(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0xC6, 0xAE, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMSUB213PS(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xAE, 0xCB]), VFNMSUB213PS(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xAE, 0x4C, 0xC2, 0xB3]), VFNMSUB213PS(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x05, 0xAE, 0xD4]), VFNMSUB213PS(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x05, 0xAE, 0x54, 0xD9, 0xBE]), VFNMSUB213PS(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0x96, 0xAE, 0xC9]), VFNMSUB213PS(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFNMSUB231PS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xBE, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFNMSUB231PS(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x8A, 0xBE, 0xF3]), VFNMSUB231PS(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0x55, 0xAD, 0xBE, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMSUB231PS(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0x55, 0xAD, 0xBE, 0xDC]), VFNMSUB231PS(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0xC6, 0xBE, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMSUB231PS(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xBE, 0xCB]), VFNMSUB231PS(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xBE, 0x4C, 0xC2, 0xB3]), VFNMSUB231PS(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x05, 0xBE, 0xD4]), VFNMSUB231PS(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x05, 0xBE, 0x54, 0xD9, 0xBE]), VFNMSUB231PS(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0x96, 0xBE, 0xC9]), VFNMSUB231PS(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFNMSUBPS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x7C, 0xC9, 0x30]), VFNMSUBPS(xmm1, xmm14, xmm3, xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x7C, 0x4C, 0xC2, 0xB3, 0x30]), VFNMSUBPS(xmm1, xmm14, xmm3, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x7C, 0x4C, 0xC2, 0xB3, 0x90]), VFNMSUBPS(xmm1, xmm14, oword[r10 + rax*8 - 77], xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x7C, 0xD2, 0x40]), VFNMSUBPS(ymm2, ymm15, ymm4, ymm10).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x7C, 0x54, 0xD9, 0xBE, 0x40]), VFNMSUBPS(ymm2, ymm15, ymm4, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x05, 0x7C, 0x54, 0xD9, 0xBE, 0xA0]), VFNMSUBPS(ymm2, ymm15, hword[r9 + rbx*8 - 66], ymm10).encode())
class TestVFMADD132PD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0x98, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMADD132PD(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x8A, 0x98, 0xF3]), VFMADD132PD(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0xD5, 0xAD, 0x98, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADD132PD(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0xD5, 0xAD, 0x98, 0xDC]), VFMADD132PD(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0xC6, 0x98, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADD132PD(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0x98, 0xCB]), VFMADD132PD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0x98, 0x4C, 0xC2, 0xB3]), VFMADD132PD(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x85, 0x98, 0xD4]), VFMADD132PD(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x85, 0x98, 0x54, 0xD9, 0xBE]), VFMADD132PD(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0x96, 0x98, 0xC9]), VFMADD132PD(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMADD213PD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xA8, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMADD213PD(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x8A, 0xA8, 0xF3]), VFMADD213PD(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0xD5, 0xAD, 0xA8, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADD213PD(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0xD5, 0xAD, 0xA8, 0xDC]), VFMADD213PD(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0xC6, 0xA8, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADD213PD(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xA8, 0xCB]), VFMADD213PD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xA8, 0x4C, 0xC2, 0xB3]), VFMADD213PD(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x85, 0xA8, 0xD4]), VFMADD213PD(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x85, 0xA8, 0x54, 0xD9, 0xBE]), VFMADD213PD(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0x96, 0xA8, 0xC9]), VFMADD213PD(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMADD231PD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xB8, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMADD231PD(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x8A, 0xB8, 0xF3]), VFMADD231PD(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0xD5, 0xAD, 0xB8, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADD231PD(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0xD5, 0xAD, 0xB8, 0xDC]), VFMADD231PD(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0xC6, 0xB8, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADD231PD(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xB8, 0xCB]), VFMADD231PD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xB8, 0x4C, 0xC2, 0xB3]), VFMADD231PD(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x85, 0xB8, 0xD4]), VFMADD231PD(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x85, 0xB8, 0x54, 0xD9, 0xBE]), VFMADD231PD(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0x96, 0xB8, 0xC9]), VFMADD231PD(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMADDPD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x69, 0xC9, 0x30]), VFMADDPD(xmm1, xmm14, xmm3, xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x69, 0x4C, 0xC2, 0xB3, 0x30]), VFMADDPD(xmm1, xmm14, xmm3, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x69, 0x4C, 0xC2, 0xB3, 0x90]), VFMADDPD(xmm1, xmm14, oword[r10 + rax*8 - 77], xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x69, 0xD2, 0x40]), VFMADDPD(ymm2, ymm15, ymm4, ymm10).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x69, 0x54, 0xD9, 0xBE, 0x40]), VFMADDPD(ymm2, ymm15, ymm4, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x05, 0x69, 0x54, 0xD9, 0xBE, 0xA0]), VFMADDPD(ymm2, ymm15, hword[r9 + rbx*8 - 66], ymm10).encode())
class TestVFMSUB132PD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0x9A, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMSUB132PD(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x8A, 0x9A, 0xF3]), VFMSUB132PD(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0xD5, 0xAD, 0x9A, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUB132PD(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0xD5, 0xAD, 0x9A, 0xDC]), VFMSUB132PD(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0xC6, 0x9A, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUB132PD(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0x9A, 0xCB]), VFMSUB132PD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0x9A, 0x4C, 0xC2, 0xB3]), VFMSUB132PD(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x85, 0x9A, 0xD4]), VFMSUB132PD(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x85, 0x9A, 0x54, 0xD9, 0xBE]), VFMSUB132PD(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0x96, 0x9A, 0xC9]), VFMSUB132PD(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMSUB213PD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xAA, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMSUB213PD(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x8A, 0xAA, 0xF3]), VFMSUB213PD(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0xD5, 0xAD, 0xAA, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUB213PD(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0xD5, 0xAD, 0xAA, 0xDC]), VFMSUB213PD(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0xC6, 0xAA, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUB213PD(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xAA, 0xCB]), VFMSUB213PD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xAA, 0x4C, 0xC2, 0xB3]), VFMSUB213PD(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x85, 0xAA, 0xD4]), VFMSUB213PD(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x85, 0xAA, 0x54, 0xD9, 0xBE]), VFMSUB213PD(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0x96, 0xAA, 0xC9]), VFMSUB213PD(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMSUB231PD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xBA, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMSUB231PD(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x8A, 0xBA, 0xF3]), VFMSUB231PD(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0xD5, 0xAD, 0xBA, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUB231PD(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0xD5, 0xAD, 0xBA, 0xDC]), VFMSUB231PD(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0xC6, 0xBA, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUB231PD(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xBA, 0xCB]), VFMSUB231PD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xBA, 0x4C, 0xC2, 0xB3]), VFMSUB231PD(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x85, 0xBA, 0xD4]), VFMSUB231PD(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x85, 0xBA, 0x54, 0xD9, 0xBE]), VFMSUB231PD(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0x96, 0xBA, 0xC9]), VFMSUB231PD(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMSUBPD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x6D, 0xC9, 0x30]), VFMSUBPD(xmm1, xmm14, xmm3, xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x6D, 0x4C, 0xC2, 0xB3, 0x30]), VFMSUBPD(xmm1, xmm14, xmm3, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x6D, 0x4C, 0xC2, 0xB3, 0x90]), VFMSUBPD(xmm1, xmm14, oword[r10 + rax*8 - 77], xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x6D, 0xD2, 0x40]), VFMSUBPD(ymm2, ymm15, ymm4, ymm10).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x6D, 0x54, 0xD9, 0xBE, 0x40]), VFMSUBPD(ymm2, ymm15, ymm4, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x05, 0x6D, 0x54, 0xD9, 0xBE, 0xA0]), VFMSUBPD(ymm2, ymm15, hword[r9 + rbx*8 - 66], ymm10).encode())
class TestVFNMADD132PD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0x9C, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFNMADD132PD(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x8A, 0x9C, 0xF3]), VFNMADD132PD(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0xD5, 0xAD, 0x9C, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMADD132PD(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0xD5, 0xAD, 0x9C, 0xDC]), VFNMADD132PD(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0xC6, 0x9C, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMADD132PD(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0x9C, 0xCB]), VFNMADD132PD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0x9C, 0x4C, 0xC2, 0xB3]), VFNMADD132PD(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x85, 0x9C, 0xD4]), VFNMADD132PD(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x85, 0x9C, 0x54, 0xD9, 0xBE]), VFNMADD132PD(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0x96, 0x9C, 0xC9]), VFNMADD132PD(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFNMADD213PD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xAC, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFNMADD213PD(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x8A, 0xAC, 0xF3]), VFNMADD213PD(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0xD5, 0xAD, 0xAC, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMADD213PD(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0xD5, 0xAD, 0xAC, 0xDC]), VFNMADD213PD(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0xC6, 0xAC, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMADD213PD(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xAC, 0xCB]), VFNMADD213PD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xAC, 0x4C, 0xC2, 0xB3]), VFNMADD213PD(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x85, 0xAC, 0xD4]), VFNMADD213PD(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x85, 0xAC, 0x54, 0xD9, 0xBE]), VFNMADD213PD(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0x96, 0xAC, 0xC9]), VFNMADD213PD(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFNMADD231PD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xBC, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFNMADD231PD(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x8A, 0xBC, 0xF3]), VFNMADD231PD(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0xD5, 0xAD, 0xBC, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMADD231PD(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0xD5, 0xAD, 0xBC, 0xDC]), VFNMADD231PD(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0xC6, 0xBC, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMADD231PD(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xBC, 0xCB]), VFNMADD231PD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xBC, 0x4C, 0xC2, 0xB3]), VFNMADD231PD(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x85, 0xBC, 0xD4]), VFNMADD231PD(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x85, 0xBC, 0x54, 0xD9, 0xBE]), VFNMADD231PD(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0x96, 0xBC, 0xC9]), VFNMADD231PD(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFNMADDPD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x79, 0xC9, 0x30]), VFNMADDPD(xmm1, xmm14, xmm3, xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x79, 0x4C, 0xC2, 0xB3, 0x30]), VFNMADDPD(xmm1, xmm14, xmm3, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x79, 0x4C, 0xC2, 0xB3, 0x90]), VFNMADDPD(xmm1, xmm14, oword[r10 + rax*8 - 77], xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x79, 0xD2, 0x40]), VFNMADDPD(ymm2, ymm15, ymm4, ymm10).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x79, 0x54, 0xD9, 0xBE, 0x40]), VFNMADDPD(ymm2, ymm15, ymm4, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x05, 0x79, 0x54, 0xD9, 0xBE, 0xA0]), VFNMADDPD(ymm2, ymm15, hword[r9 + rbx*8 - 66], ymm10).encode())
class TestVFNMSUB132PD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0x9E, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFNMSUB132PD(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x8A, 0x9E, 0xF3]), VFNMSUB132PD(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0xD5, 0xAD, 0x9E, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMSUB132PD(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0xD5, 0xAD, 0x9E, 0xDC]), VFNMSUB132PD(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0xC6, 0x9E, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMSUB132PD(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0x9E, 0xCB]), VFNMSUB132PD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0x9E, 0x4C, 0xC2, 0xB3]), VFNMSUB132PD(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x85, 0x9E, 0xD4]), VFNMSUB132PD(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x85, 0x9E, 0x54, 0xD9, 0xBE]), VFNMSUB132PD(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0x96, 0x9E, 0xC9]), VFNMSUB132PD(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFNMSUB213PD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xAE, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFNMSUB213PD(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x8A, 0xAE, 0xF3]), VFNMSUB213PD(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0xD5, 0xAD, 0xAE, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMSUB213PD(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0xD5, 0xAD, 0xAE, 0xDC]), VFNMSUB213PD(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0xC6, 0xAE, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMSUB213PD(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xAE, 0xCB]), VFNMSUB213PD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xAE, 0x4C, 0xC2, 0xB3]), VFNMSUB213PD(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x85, 0xAE, 0xD4]), VFNMSUB213PD(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x85, 0xAE, 0x54, 0xD9, 0xBE]), VFNMSUB213PD(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0x96, 0xAE, 0xC9]), VFNMSUB213PD(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFNMSUB231PD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xBE, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFNMSUB231PD(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x8A, 0xBE, 0xF3]), VFNMSUB231PD(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0xD5, 0xAD, 0xBE, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMSUB231PD(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0xD5, 0xAD, 0xBE, 0xDC]), VFNMSUB231PD(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0xC6, 0xBE, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFNMSUB231PD(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xBE, 0xCB]), VFNMSUB231PD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xBE, 0x4C, 0xC2, 0xB3]), VFNMSUB231PD(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x85, 0xBE, 0xD4]), VFNMSUB231PD(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x85, 0xBE, 0x54, 0xD9, 0xBE]), VFNMSUB231PD(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0x96, 0xBE, 0xC9]), VFNMSUB231PD(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFNMSUBPD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x7D, 0xC9, 0x30]), VFNMSUBPD(xmm1, xmm14, xmm3, xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x7D, 0x4C, 0xC2, 0xB3, 0x30]), VFNMSUBPD(xmm1, xmm14, xmm3, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x7D, 0x4C, 0xC2, 0xB3, 0x90]), VFNMSUBPD(xmm1, xmm14, oword[r10 + rax*8 - 77], xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x7D, 0xD2, 0x40]), VFNMSUBPD(ymm2, ymm15, ymm4, ymm10).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x7D, 0x54, 0xD9, 0xBE, 0x40]), VFNMSUBPD(ymm2, ymm15, ymm4, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x05, 0x7D, 0x54, 0xD9, 0xBE, 0xA0]), VFNMSUBPD(ymm2, ymm15, hword[r9 + rbx*8 - 66], ymm10).encode())
class TestVFMADDSUB132PS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0x96, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMADDSUB132PS(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x8A, 0x96, 0xF3]), VFMADDSUB132PS(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0x55, 0xAD, 0x96, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADDSUB132PS(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0x55, 0xAD, 0x96, 0xDC]), VFMADDSUB132PS(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0xC6, 0x96, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADDSUB132PS(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0x96, 0xCB]), VFMADDSUB132PS(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0x96, 0x4C, 0xC2, 0xB3]), VFMADDSUB132PS(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x05, 0x96, 0xD4]), VFMADDSUB132PS(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x05, 0x96, 0x54, 0xD9, 0xBE]), VFMADDSUB132PS(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0x96, 0x96, 0xC9]), VFMADDSUB132PS(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMADDSUB213PS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xA6, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMADDSUB213PS(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x8A, 0xA6, 0xF3]), VFMADDSUB213PS(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0x55, 0xAD, 0xA6, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADDSUB213PS(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0x55, 0xAD, 0xA6, 0xDC]), VFMADDSUB213PS(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0xC6, 0xA6, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADDSUB213PS(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xA6, 0xCB]), VFMADDSUB213PS(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xA6, 0x4C, 0xC2, 0xB3]), VFMADDSUB213PS(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x05, 0xA6, 0xD4]), VFMADDSUB213PS(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x05, 0xA6, 0x54, 0xD9, 0xBE]), VFMADDSUB213PS(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0x96, 0xA6, 0xC9]), VFMADDSUB213PS(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMADDSUB231PS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xB6, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMADDSUB231PS(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x8A, 0xB6, 0xF3]), VFMADDSUB231PS(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0x55, 0xAD, 0xB6, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADDSUB231PS(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0x55, 0xAD, 0xB6, 0xDC]), VFMADDSUB231PS(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0xC6, 0xB6, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADDSUB231PS(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xB6, 0xCB]), VFMADDSUB231PS(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xB6, 0x4C, 0xC2, 0xB3]), VFMADDSUB231PS(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x05, 0xB6, 0xD4]), VFMADDSUB231PS(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x05, 0xB6, 0x54, 0xD9, 0xBE]), VFMADDSUB231PS(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0x96, 0xB6, 0xC9]), VFMADDSUB231PS(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMADDSUBPS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x5C, 0xC9, 0x30]), VFMADDSUBPS(xmm1, xmm14, xmm3, xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x5C, 0x4C, 0xC2, 0xB3, 0x30]), VFMADDSUBPS(xmm1, xmm14, xmm3, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x5C, 0x4C, 0xC2, 0xB3, 0x90]), VFMADDSUBPS(xmm1, xmm14, oword[r10 + rax*8 - 77], xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x5C, 0xD2, 0x40]), VFMADDSUBPS(ymm2, ymm15, ymm4, ymm10).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x5C, 0x54, 0xD9, 0xBE, 0x40]), VFMADDSUBPS(ymm2, ymm15, ymm4, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x05, 0x5C, 0x54, 0xD9, 0xBE, 0xA0]), VFMADDSUBPS(ymm2, ymm15, hword[r9 + rbx*8 - 66], ymm10).encode())
class TestVFMSUBADD132PS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0x97, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMSUBADD132PS(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x8A, 0x97, 0xF3]), VFMSUBADD132PS(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0x55, 0xAD, 0x97, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUBADD132PS(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0x55, 0xAD, 0x97, 0xDC]), VFMSUBADD132PS(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0xC6, 0x97, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUBADD132PS(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0x97, 0xCB]), VFMSUBADD132PS(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0x97, 0x4C, 0xC2, 0xB3]), VFMSUBADD132PS(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x05, 0x97, 0xD4]), VFMSUBADD132PS(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x05, 0x97, 0x54, 0xD9, 0xBE]), VFMSUBADD132PS(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0x96, 0x97, 0xC9]), VFMSUBADD132PS(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMSUBADD213PS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xA7, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMSUBADD213PS(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x8A, 0xA7, 0xF3]), VFMSUBADD213PS(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0x55, 0xAD, 0xA7, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUBADD213PS(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0x55, 0xAD, 0xA7, 0xDC]), VFMSUBADD213PS(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0xC6, 0xA7, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUBADD213PS(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xA7, 0xCB]), VFMSUBADD213PS(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xA7, 0x4C, 0xC2, 0xB3]), VFMSUBADD213PS(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x05, 0xA7, 0xD4]), VFMSUBADD213PS(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x05, 0xA7, 0x54, 0xD9, 0xBE]), VFMSUBADD213PS(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0x96, 0xA7, 0xC9]), VFMSUBADD213PS(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMSUBADD231PS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0x5D, 0x8A, 0xB7, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMSUBADD231PS(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0x5D, 0x8A, 0xB7, 0xF3]), VFMSUBADD231PS(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0x55, 0xAD, 0xB7, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUBADD231PS(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0x55, 0xAD, 0xB7, 0xDC]), VFMSUBADD231PS(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0xC6, 0xB7, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUBADD231PS(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x09, 0xB7, 0xCB]), VFMSUBADD231PS(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x09, 0xB7, 0x4C, 0xC2, 0xB3]), VFMSUBADD231PS(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x05, 0xB7, 0xD4]), VFMSUBADD231PS(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x05, 0xB7, 0x54, 0xD9, 0xBE]), VFMSUBADD231PS(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0x2D, 0x96, 0xB7, 0xC9]), VFMSUBADD231PS(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMSUBADDPS(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x5E, 0xC9, 0x30]), VFMSUBADDPS(xmm1, xmm14, xmm3, xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x5E, 0x4C, 0xC2, 0xB3, 0x30]), VFMSUBADDPS(xmm1, xmm14, xmm3, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x5E, 0x4C, 0xC2, 0xB3, 0x90]), VFMSUBADDPS(xmm1, xmm14, oword[r10 + rax*8 - 77], xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x5E, 0xD2, 0x40]), VFMSUBADDPS(ymm2, ymm15, ymm4, ymm10).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x5E, 0x54, 0xD9, 0xBE, 0x40]), VFMSUBADDPS(ymm2, ymm15, ymm4, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x05, 0x5E, 0x54, 0xD9, 0xBE, 0xA0]), VFMSUBADDPS(ymm2, ymm15, hword[r9 + rbx*8 - 66], ymm10).encode())
class TestVFMADDSUB132PD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0x96, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMADDSUB132PD(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x8A, 0x96, 0xF3]), VFMADDSUB132PD(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0xD5, 0xAD, 0x96, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADDSUB132PD(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0xD5, 0xAD, 0x96, 0xDC]), VFMADDSUB132PD(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0xC6, 0x96, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADDSUB132PD(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0x96, 0xCB]), VFMADDSUB132PD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0x96, 0x4C, 0xC2, 0xB3]), VFMADDSUB132PD(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x85, 0x96, 0xD4]), VFMADDSUB132PD(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x85, 0x96, 0x54, 0xD9, 0xBE]), VFMADDSUB132PD(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0x96, 0x96, 0xC9]), VFMADDSUB132PD(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMADDSUB213PD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xA6, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMADDSUB213PD(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x8A, 0xA6, 0xF3]), VFMADDSUB213PD(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0xD5, 0xAD, 0xA6, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADDSUB213PD(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0xD5, 0xAD, 0xA6, 0xDC]), VFMADDSUB213PD(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0xC6, 0xA6, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADDSUB213PD(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xA6, 0xCB]), VFMADDSUB213PD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xA6, 0x4C, 0xC2, 0xB3]), VFMADDSUB213PD(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x85, 0xA6, 0xD4]), VFMADDSUB213PD(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x85, 0xA6, 0x54, 0xD9, 0xBE]), VFMADDSUB213PD(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0x96, 0xA6, 0xC9]), VFMADDSUB213PD(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMADDSUB231PD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xB6, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMADDSUB231PD(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x8A, 0xB6, 0xF3]), VFMADDSUB231PD(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0xD5, 0xAD, 0xB6, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADDSUB231PD(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0xD5, 0xAD, 0xB6, 0xDC]), VFMADDSUB231PD(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0xC6, 0xB6, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMADDSUB231PD(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xB6, 0xCB]), VFMADDSUB231PD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xB6, 0x4C, 0xC2, 0xB3]), VFMADDSUB231PD(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x85, 0xB6, 0xD4]), VFMADDSUB231PD(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x85, 0xB6, 0x54, 0xD9, 0xBE]), VFMADDSUB231PD(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0x96, 0xB6, 0xC9]), VFMADDSUB231PD(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMADDSUBPD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x5D, 0xC9, 0x30]), VFMADDSUBPD(xmm1, xmm14, xmm3, xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x5D, 0x4C, 0xC2, 0xB3, 0x30]), VFMADDSUBPD(xmm1, xmm14, xmm3, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x5D, 0x4C, 0xC2, 0xB3, 0x90]), VFMADDSUBPD(xmm1, xmm14, oword[r10 + rax*8 - 77], xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x5D, 0xD2, 0x40]), VFMADDSUBPD(ymm2, ymm15, ymm4, ymm10).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x5D, 0x54, 0xD9, 0xBE, 0x40]), VFMADDSUBPD(ymm2, ymm15, ymm4, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x05, 0x5D, 0x54, 0xD9, 0xBE, 0xA0]), VFMADDSUBPD(ymm2, ymm15, hword[r9 + rbx*8 - 66], ymm10).encode())
class TestVFMSUBADD132PD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0x97, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMSUBADD132PD(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x8A, 0x97, 0xF3]), VFMSUBADD132PD(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0xD5, 0xAD, 0x97, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUBADD132PD(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0xD5, 0xAD, 0x97, 0xDC]), VFMSUBADD132PD(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0xC6, 0x97, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUBADD132PD(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0x97, 0xCB]), VFMSUBADD132PD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0x97, 0x4C, 0xC2, 0xB3]), VFMSUBADD132PD(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x85, 0x97, 0xD4]), VFMSUBADD132PD(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x85, 0x97, 0x54, 0xD9, 0xBE]), VFMSUBADD132PD(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0x96, 0x97, 0xC9]), VFMSUBADD132PD(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMSUBADD213PD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xA7, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMSUBADD213PD(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x8A, 0xA7, 0xF3]), VFMSUBADD213PD(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0xD5, 0xAD, 0xA7, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUBADD213PD(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0xD5, 0xAD, 0xA7, 0xDC]), VFMSUBADD213PD(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0xC6, 0xA7, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUBADD213PD(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xA7, 0xCB]), VFMSUBADD213PD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xA7, 0x4C, 0xC2, 0xB3]), VFMSUBADD213PD(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x85, 0xA7, 0xD4]), VFMSUBADD213PD(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x85, 0xA7, 0x54, 0xD9, 0xBE]), VFMSUBADD213PD(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0x96, 0xA7, 0xC9]), VFMSUBADD213PD(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMSUBADD231PD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0x62, 0x42, 0xDD, 0x8A, 0xB7, 0xB4, 0xC2, 0xB3, 0xFF, 0xFF, 0xFF]), VFMSUBADD231PD(xmm30(k2.z), xmm4, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0x62, 0x22, 0xDD, 0x8A, 0xB7, 0xF3]), VFMSUBADD231PD(xmm30(k2.z), xmm4, xmm19).encode())
self.assertEqual(bytearray([0x62, 0xC2, 0xD5, 0xAD, 0xB7, 0x9C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUBADD231PD(ymm19(k5.z), ymm5, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0xA2, 0xD5, 0xAD, 0xB7, 0xDC]), VFMSUBADD231PD(ymm19(k5.z), ymm5, ymm20).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0xC6, 0xB7, 0x8C, 0xD9, 0xBE, 0xFF, 0xFF, 0xFF]), VFMSUBADD231PD(zmm9(k6.z), zmm26, zword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x89, 0xB7, 0xCB]), VFMSUBADD231PD(xmm1, xmm14, xmm3).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x89, 0xB7, 0x4C, 0xC2, 0xB3]), VFMSUBADD231PD(xmm1, xmm14, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xE2, 0x85, 0xB7, 0xD4]), VFMSUBADD231PD(ymm2, ymm15, ymm4).encode())
self.assertEqual(bytearray([0xC4, 0xC2, 0x85, 0xB7, 0x54, 0xD9, 0xBE]), VFMSUBADD231PD(ymm2, ymm15, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0x62, 0x52, 0xAD, 0x96, 0xB7, 0xC9]), VFMSUBADD231PD(zmm9(k6.z), zmm26, zmm9, {rn_sae}).encode())
class TestVFMSUBADDPD(unittest.TestCase):
def runTest(self):
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x5F, 0xC9, 0x30]), VFMSUBADDPD(xmm1, xmm14, xmm3, xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x89, 0x5F, 0x4C, 0xC2, 0xB3, 0x30]), VFMSUBADDPD(xmm1, xmm14, xmm3, oword[r10 + rax*8 - 77]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x09, 0x5F, 0x4C, 0xC2, 0xB3, 0x90]), VFMSUBADDPD(xmm1, xmm14, oword[r10 + rax*8 - 77], xmm9).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x5F, 0xD2, 0x40]), VFMSUBADDPD(ymm2, ymm15, ymm4, ymm10).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x85, 0x5F, 0x54, 0xD9, 0xBE, 0x40]), VFMSUBADDPD(ymm2, ymm15, ymm4, hword[r9 + rbx*8 - 66]).encode())
self.assertEqual(bytearray([0xC4, 0xC3, 0x05, 0x5F, 0x54, 0xD9, 0xBE, 0xA0]), VFMSUBADDPD(ymm2, ymm15, hword[r9 + rbx*8 - 66], ymm10).encode())
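The register-register encodings above end in a ModRM byte. A minimal sketch of how its bit fields unpack (field layout per the x86 instruction format; the helper name is illustrative, not part of PeachPy):

```python
# Hedged sketch: decoding the ModRM byte of the register-register
# encodings above. For VFMSUBADD231PD(ymm2, ymm15, ymm4) the final byte
# is 0xD4; mod=0b11 means register-direct, reg/rm carry the low three
# bits of the destination and source register numbers.
def modrm_fields(b):
    """Split a ModRM byte into its (mod, reg, rm) bit fields."""
    return b >> 6, (b >> 3) & 0b111, b & 0b111

mod, reg, rm = modrm_fields(0xD4)
print(mod, reg, rm)  # 3 2 4 -> register-direct, ymm2 dest, ymm4 src
```

The same split applied to 0xC9 (the zmm9/zmm9 case above) gives mod=3, reg=1, rm=1 — register number 9 keeps only its low three bits here, with the high bit carried in the VEX/EVEX prefix.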
| 92.808173 | 172 | 0.669842 | 10,888 | 81,764 | 5.024339 | 0.030125 | 0.151357 | 0.242172 | 0.258843 | 0.838296 | 0.780386 | 0.764153 | 0.761228 | 0.704195 | 0.668056 | 0 | 0.194127 | 0.149161 | 81,764 | 880 | 173 | 92.913636 | 0.592226 | 0.001822 | 0 | 0.112045 | 1 | 0 | 0 | 0 | 0 | 0 | 0.196841 | 0 | 0.773109 | 1 | 0.112045 | false | 0 | 0.002801 | 0 | 0.226891 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
01c6518e14890a770d9c5293e77198c20983581e | 4,087 | py | Python | tests/generator/test_generator_types.py | leonmende/chia-blockchain | add5b3bbc9ec247e926b01e6b3afe64ba0544bdc | [
"Apache-2.0"
] | 1 | 2021-04-15T09:43:32.000Z | 2021-04-15T09:43:32.000Z | tests/generator/test_generator_types.py | Mateus-dang/chia-blockchain | 2d2693496591b0b786461d16929b99a980d2528f | [
"Apache-2.0"
] | null | null | null | tests/generator/test_generator_types.py | Mateus-dang/chia-blockchain | 2d2693496591b0b786461d16929b99a980d2528f | [
"Apache-2.0"
] | null | null | null | from typing import Dict
from unittest import TestCase
from chia.types.blockchain_format.program import Program, SerializedProgram
from chia.types.generator_types import GeneratorBlockCacheInterface
from chia.full_node.generator import create_block_generator, make_generator_args
from chia.util.byte_types import hexstr_to_bytes
from chia.util.ints import uint32
gen0 = SerializedProgram.from_bytes(
hexstr_to_bytes(
"ff01ffffffa00000000000000000000000000000000000000000000000000000000000000000ff830186a080ffffff02ffff01ff02ffff01ff02ffff03ff0bffff01ff02ffff03ffff09ff05ffff1dff0bffff1effff0bff0bffff02ff06ffff04ff02ffff04ff17ff8080808080808080ffff01ff02ff17ff2f80ffff01ff088080ff0180ffff01ff04ffff04ff04ffff04ff05ffff04ffff02ff06ffff04ff02ffff04ff17ff80808080ff80808080ffff02ff17ff2f808080ff0180ffff04ffff01ff32ff02ffff03ffff07ff0580ffff01ff0bffff0102ffff02ff06ffff04ff02ffff04ff09ff80808080ffff02ff06ffff04ff02ffff04ff0dff8080808080ffff01ff0bffff0101ff058080ff0180ff018080ffff04ffff01b081963921826355dcb6c355ccf9c2637c18adf7d38ee44d803ea9ca41587e48c913d8d46896eb830aeadfc13144a8eac3ff018080ffff80ffff01ffff33ffa06b7a83babea1eec790c947db4464ab657dbe9b887fe9acc247062847b8c2a8a9ff830186a08080ff8080808080" # noqa
)
)
gen1 = SerializedProgram.from_bytes(
hexstr_to_bytes(
"ff01ffffffa00000000000000000000000000000000000000000000000000000000000000000ff830186a080ffffff02ffff01ff02ffff01ff02ffff03ff0bffff01ff02ffff03ffff09ff05ffff1dff0bffff1effff0bff0bffff02ff06ffff04ff02ffff04ff17ff8080808080808080ffff01ff02ff17ff2f80ffff01ff088080ff0180ffff01ff04ffff04ff04ffff04ff05ffff04ffff02ff06ffff04ff02ffff04ff17ff80808080ff80808080ffff02ff17ff2f808080ff0180ffff04ffff01ff32ff02ffff03ffff07ff0580ffff01ff0bffff0102ffff02ff06ffff04ff02ffff04ff09ff80808080ffff02ff06ffff04ff02ffff04ff0dff8080808080ffff01ff0bffff0101ff058080ff0180ff018080ffff04ffff01b081963921826355dcb6c355ccf9c2637c18adf7d38ee44d803ea9ca41587e48c913d8d46896eb830aeadfc13144a8eac3ff018080ffff80ffff01ffff33ffa06b7a83babea1eec790c947db4464ab657dbe9b887fe9acc247062847b8c2a8a9ff830186a08080ff8080808080" # noqa
)
)
gen2 = SerializedProgram.from_bytes(
hexstr_to_bytes(
"ff01ffffffa00000000000000000000000000000000000000000000000000000000000000000ff830186a080ffffff02ffff01ff02ffff01ff02ffff03ff0bffff01ff02ffff03ffff09ff05ffff1dff0bffff1effff0bff0bffff02ff06ffff04ff02ffff04ff17ff8080808080808080ffff01ff02ff17ff2f80ffff01ff088080ff0180ffff01ff04ffff04ff04ffff04ff05ffff04ffff02ff06ffff04ff02ffff04ff17ff80808080ff80808080ffff02ff17ff2f808080ff0180ffff04ffff01ff32ff02ffff03ffff07ff0580ffff01ff0bffff0102ffff02ff06ffff04ff02ffff04ff09ff80808080ffff02ff06ffff04ff02ffff04ff0dff8080808080ffff01ff0bffff0101ff058080ff0180ff018080ffff04ffff01b081963921826355dcb6c355ccf9c2637c18adf7d38ee44d803ea9ca41587e48c913d8d46896eb830aeadfc13144a8eac3ff018080ffff80ffff01ffff33ffa06b7a83babea1eec790c947db4464ab657dbe9b887fe9acc247062847b8c2a8a9ff830186a08080ff8080808080" # noqa
)
)
class BlockDict(GeneratorBlockCacheInterface):
def __init__(self, d: Dict[uint32, SerializedProgram]):
self.d = d
def get_generator_for_block_height(self, index: uint32) -> SerializedProgram:
return self.d[index]
class TestGeneratorTypes(TestCase):
def test_make_generator(self):
block_dict = BlockDict({1: gen1})
gen = create_block_generator(gen2, [1], block_dict)
print(gen)
def test_make_generator_args(self):
generator_ref_list = [gen1]
gen_args = make_generator_args(generator_ref_list)
gen_args_as_program = Program.from_bytes(bytes(gen_args))
d = gen_args_as_program.first()
# First argument: clvm deserializer
b = hexstr_to_bytes("ff8568656c6c6fff86667269656e6480") # ("hello" "friend")
cost, output = d.run_with_cost([b])
# print(cost, output)
out = Program.to(output)
assert out == Program.from_bytes(b)
# Second Argument
arg2 = gen_args_as_program.rest().first().first()
print(arg2)
assert bytes(arg2) == bytes(gen1)
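The `("hello" "friend")` blob used in `test_make_generator_args` is a serialized CLVM list. A hedged sketch of the serialization rules it exercises — the byte tags below are inferred from this one example (0xff = cons, 0x80 = nil, 0x80|len prefixes a short atom) and deliberately omit the longer length encodings of the real deserializer:

```python
# Minimal CLVM-serialization parser, enough for the test's
# "ff8568656c6c6fff86667269656e6480" blob; not chia's deserializer.
def parse(buf, pos=0):
    """Parse one serialized object at buf[pos]; return (obj, next_pos).

    Cons cells become 2-tuples, atoms become bytes, nil becomes b"".
    """
    tag = buf[pos]
    if tag == 0xFF:                      # cons cell: two objects follow
        left, pos = parse(buf, pos + 1)
        right, pos = parse(buf, pos)
        return (left, right), pos
    if tag == 0x80:                      # nil / empty atom
        return b"", pos + 1
    if tag < 0x80:                       # one-byte atom: the byte itself
        return bytes([tag]), pos + 1
    if tag < 0xC0:                       # short atom: low 6 bits = length
        n = tag & 0x3F
        return buf[pos + 1: pos + 1 + n], pos + 1 + n
    raise ValueError("longer length encodings not handled in this sketch")

def to_list(obj):
    """Flatten a proper cons list into a Python list of atoms."""
    out = []
    while obj != b"":
        head, obj = obj
        out.append(head)
    return out

blob = bytes.fromhex("ff8568656c6c6fff86667269656e6480")  # ("hello" "friend")
tree, _ = parse(blob)
print(to_list(tree))  # [b'hello', b'friend']
```

So 0x85 announces the five-byte atom `hello`, 0x86 the six-byte atom `friend`, and the trailing 0x80 terminates the list — matching the comment in the test.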
| 64.873016 | 804 | 0.870321 | 210 | 4,087 | 16.657143 | 0.328571 | 0.011435 | 0.018582 | 0.027444 | 0.710978 | 0.710978 | 0.710978 | 0.710978 | 0 | 0 | 0 | 0.360356 | 0.09151 | 4,087 | 62 | 805 | 65.919355 | 0.58174 | 0.025202 | 0 | 0.136364 | 0 | 0 | 0.601107 | 0.601107 | 0 | 1 | 0 | 0 | 0.045455 | 1 | 0.090909 | false | 0 | 0.159091 | 0.022727 | 0.318182 | 0.045455 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
bf04e1d84cb0c9289fd5f53d9077af3313d5fbdc | 23,778 | py | Python | dlkit/abstract_osid/grading/search_orders.py | UOC/dlkit | a9d265db67e81b9e0f405457464e762e2c03f769 | [
"MIT"
] | 2 | 2018-02-23T12:16:11.000Z | 2020-10-08T17:54:24.000Z | dlkit/abstract_osid/grading/search_orders.py | UOC/dlkit | a9d265db67e81b9e0f405457464e762e2c03f769 | [
"MIT"
] | 87 | 2017-04-21T18:57:15.000Z | 2021-12-13T19:43:57.000Z | dlkit/abstract_osid/grading/search_orders.py | UOC/dlkit | a9d265db67e81b9e0f405457464e762e2c03f769 | [
"MIT"
] | 1 | 2018-03-01T16:44:25.000Z | 2018-03-01T16:44:25.000Z | """Implementations of grading abstract base class search_orders."""
# pylint: disable=invalid-name
# Method names comply with OSID specification.
# pylint: disable=no-init
# Abstract classes do not define __init__.
# pylint: disable=too-few-public-methods
# Some interfaces are specified as 'markers' and include no methods.
# pylint: disable=too-many-public-methods
# Number of methods are defined in specification
# pylint: disable=too-many-ancestors
# Inheritance defined in specification
# pylint: disable=too-many-arguments
# Argument signature defined in specification.
# pylint: disable=duplicate-code
# All apparent duplicates have been inspected. They aren't.
import abc
class GradeSearchOrder:
"""An interface for specifying the ordering of search results."""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def order_by_grade_system(self, style):
"""Specified a preference for ordering results by the grade system.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def supports_grade_system_search_order(self):
"""Tests if a ``GradeSystemSearchOrder`` interface is available for grade systems.
:return: ``true`` if a grade system search order is available, ``false`` otherwise
:rtype: ``boolean``
*compliance: mandatory -- This method must be implemented.*
"""
return # boolean
@abc.abstractmethod
def get_grade_system_search_order(self):
"""Gets the search order for a grade system.
:return: the grade system search order
:rtype: ``osid.grading.GradeSystemSearchOrder``
:raise: ``Unimplemented`` -- ``supports_grade_system_search_order()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_grade_system_search_order()`` is ``true``.*
"""
return # osid.grading.GradeSystemSearchOrder
grade_system_search_order = property(fget=get_grade_system_search_order)
@abc.abstractmethod
def order_by_input_score_start_range(self, style):
"""Specified a preference for ordering results by start of the input score range.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def order_by_input_score_end_range(self, style):
"""Specified a preference for ordering results by end of the input score range.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def order_by_output_score(self, style):
"""Specified a preference for ordering results by the output score.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def get_grade_search_order_record(self, grade_record_type):
"""Gets the grade search order record corresponding to the given grade record ``Type``.
Multiple retrievals return the same underlying object.
:param grade_record_type: a grade record type
:type grade_record_type: ``osid.type.Type``
:return: the grade search order record
:rtype: ``osid.grading.records.GradeSearchOrderRecord``
:raise: ``NullArgument`` -- ``grade_record_type`` is ``null``
:raise: ``OperationFailed`` -- unable to complete request
:raise: ``Unsupported`` -- ``has_record_type(grade_record_type)`` is ``false``
*compliance: mandatory -- This method must be implemented.*
"""
return # osid.grading.records.GradeSearchOrderRecord
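The interfaces in this module repeatedly pair a getter with a class-level `property(fget=...)` alias, as in `grade_system_search_order` above. A small self-contained sketch of that pattern (names illustrative, not dlkit's):

```python
# Read-only property pattern: property(fget=...) with no fset exposes a
# getter as an attribute and rejects assignment.
class Example:
    def get_value(self):
        return 42
    value = property(fget=get_value)

e = Example()
print(e.value)        # 42, dispatched through get_value
try:
    e.value = 1       # no fset defined -> AttributeError
except AttributeError as exc:
    print("read-only:", exc)
```

Because only `fget` is supplied, consumers get attribute-style access (`order.grade_system_search_order`) while the OSID-mandated method form (`get_grade_system_search_order()`) remains available.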
class GradeSystemSearchOrder:
"""An interface for specifying the ordering of search results."""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def order_by_based_on_grades(self, style):
"""Orders the results by systems based on grades.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def order_by_lowest_numeric_score(self, style):
"""Orders the results by lowest score.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def order_by_numeric_score_increment(self, style):
"""Orders the results by score increment.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def order_by_highest_numeric_score(self, style):
"""Orders the results by highest score.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def get_grade_system_search_order_record(self, grade_system_record_type):
"""Gets the grade system search order record corresponding to the given grade entry record ``Type``.
Multiple retrievals return the same underlying object.
:param grade_system_record_type: a grade system record type
:type grade_system_record_type: ``osid.type.Type``
:return: the grade system search order record
:rtype: ``osid.grading.records.GradeSystemSearchOrderRecord``
:raise: ``NullArgument`` -- ``grade_system_record_type`` is ``null``
:raise: ``OperationFailed`` -- unable to complete request
:raise: ``Unsupported`` -- ``has_record_type(grade_system_record_type)`` is ``false``
*compliance: mandatory -- This method must be implemented.*
"""
return # osid.grading.records.GradeSystemSearchOrderRecord
class GradeEntrySearchOrder:
"""An interface for specifying the ordering of search results."""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def order_by_gradebook_column(self, style):
"""Specified a preference for ordering results by the gradebook column.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def supports_gradebook_column_search_order(self):
"""Tests if a ``GradebookColumnSearchOrder`` is available.
:return: ``true`` if a gradebook column search order is available, ``false`` otherwise
:rtype: ``boolean``
*compliance: mandatory -- This method must be implemented.*
"""
return # boolean
@abc.abstractmethod
def get_gradebook_column_search_order(self):
"""Gets the search order for a gradebook column.
:return: the gradebook column search order
:rtype: ``osid.grading.GradebookColumnSearchOrder``
:raise: ``Unimplemented`` -- ``supports_gradebook_column_search_order()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_gradebook_column_search_order()`` is ``true``.*
"""
return # osid.grading.GradebookColumnSearchOrder
gradebook_column_search_order = property(fget=get_gradebook_column_search_order)
@abc.abstractmethod
def order_by_key_resource(self, style):
"""Specified a preference for ordering results by the key resource.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def supports_key_resource_search_order(self):
"""Tests if a ``ResourceSearchOrder`` is available.
:return: ``true`` if a key resource search order is available, ``false`` otherwise
:rtype: ``boolean``
*compliance: mandatory -- This method must be implemented.*
"""
return # boolean
@abc.abstractmethod
def get_key_resource_search_order(self):
"""Gets the search order for a resource.
:return: the key resource search order
:rtype: ``osid.resource.ResourceSearchOrder``
:raise: ``Unimplemented`` -- ``supports_key_resource_search_order()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_key_resource_search_order()`` is ``true``.*
"""
return # osid.resource.ResourceSearchOrder
key_resource_search_order = property(fget=get_key_resource_search_order)
@abc.abstractmethod
def order_by_derived(self, style):
"""Specified a preference for ordering results by the derived entries.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def order_by_ignored_for_calculations(self, style):
"""Specified a preference for ordering results by the ignore for calculations flag.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def order_by_grade(self, style):
"""Specified a preference for ordering results by the grade or score.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def supports_grade_search_order(self):
"""Tests if a ``GradeSearchOrder`` is available.
:return: ``true`` if a grade search order is available, ``false`` otherwise
:rtype: ``boolean``
*compliance: mandatory -- This method must be implemented.*
"""
return # boolean
@abc.abstractmethod
def get_grade_search_order(self):
"""Gets the search order for a grade.
:return: the grade search order
:rtype: ``osid.grading.GradeSearchOrder``
:raise: ``Unimplemented`` -- ``supports_grade_search_order()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_grade_search_order()`` is ``true``.*
"""
return # osid.grading.GradeSearchOrder
grade_search_order = property(fget=get_grade_search_order)
@abc.abstractmethod
def order_by_time_graded(self, style):
"""Specified a preference for ordering results by the time graded.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def order_by_grader(self, style):
"""Specified a preference for ordering results by the grader.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def supports_grader_search_order(self):
"""Tests if a ``ResourceSearchOrder`` is available for grader resources.
:return: ``true`` if a resource search order is available, ``false`` otherwise
:rtype: ``boolean``
*compliance: mandatory -- This method must be implemented.*
"""
return # boolean
@abc.abstractmethod
def get_grader_search_order(self):
"""Gets the search order for a grader.
:return: the resource search order
:rtype: ``osid.resource.ResourceSearchOrder``
:raise: ``Unimplemented`` -- ``supports_grader_search_order()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_grader_search_order()`` is ``true``.*
"""
return # osid.resource.ResourceSearchOrder
grader_search_order = property(fget=get_grader_search_order)
@abc.abstractmethod
def order_by_grading_agent(self, style):
"""Specified a preference for ordering results by the grading agent.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def supports_grading_agent_search_order(self):
"""Tests if an ``AgentSearchOrder`` is available fo grading agents.
:return: ``true`` if an agent search order is available, ``false`` otherwise
:rtype: ``boolean``
*compliance: mandatory -- This method must be implemented.*
"""
return # boolean
@abc.abstractmethod
def get_grading_agent_search_order(self):
"""Gets the search order for a grading agent.
:return: the agent search order
:rtype: ``osid.authentication.AgentSearchOrder``
:raise: ``Unimplemented`` -- ``supports_grading_agent_search_order()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_grading_agent_search_order()`` is ``true``.*
"""
return # osid.authentication.AgentSearchOrder
grading_agent_search_order = property(fget=get_grading_agent_search_order)
@abc.abstractmethod
def get_grade_entry_search_order_record(self, grade_entry_record_type):
"""Gets the grade entry search order record corresponding to the given grade entry record ``Type``.
Multiple retrievals return the same underlying object.
:param grade_entry_record_type: a grade entry record type
:type grade_entry_record_type: ``osid.type.Type``
:return: the grade entry search order record
:rtype: ``osid.grading.records.GradeEntrySearchOrderRecord``
:raise: ``NullArgument`` -- ``grade_entry_record_type`` is ``null``
:raise: ``OperationFailed`` -- unable to complete request
:raise: ``Unsupported`` -- ``has_record_type(grade_entry_record_type)`` is ``false``
*compliance: mandatory -- This method must be implemented.*
"""
return # osid.grading.records.GradeEntrySearchOrderRecord
class GradebookColumnSearchOrder:
"""An interface for specifying the ordering of search results."""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def order_by_grade_system(self, style):
"""Specified a preference for ordering results by the grade system.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def supports_grade_system_search_order(self):
"""Tests if a ``GradeSystemSearchOrder`` is available for grade systems.
:return: ``true`` if a grade system search order is available, ``false`` otherwise
:rtype: ``boolean``
*compliance: mandatory -- This method must be implemented.*
"""
return # boolean
@abc.abstractmethod
def get_gradebook_column_summary_search_order(self):
"""Gets the search order for a grade system.
:return: the grade system search order
:rtype: ``osid.grading.GradeSystemSearchOrder``
:raise: ``Unimplemented`` -- ``supports_grade_system_search_order()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_grade_system_search_order()`` is ``true``.*
"""
return # osid.grading.GradeSystemSearchOrder
gradebook_column_summary_search_order = property(fget=get_gradebook_column_summary_search_order)
@abc.abstractmethod
def supports_gradebook_column_summary_search_order(self):
"""Tests if a ``GradebookColumnSummarySearchOrder`` is available for gradebook column summaries.
:return: ``true`` if a gradebook column summary search order is available, ``false`` otherwise
:rtype: ``boolean``
*compliance: mandatory -- This method must be implemented.*
"""
return # boolean
@abc.abstractmethod
def get_grade_system_search_order(self):
"""Gets the search order for a gradebook column summary search order.
:return: the gradebook column summary search order
:rtype: ``osid.grading.GradebookColumnSummarySearchOrder``
:raise: ``Unimplemented`` -- ``supports_gradebook_column_summary_search_order()`` is ``false``
*compliance: optional -- This method must be implemented if
``supports_gradebook_column_summary_search_order()`` is
``true``.*
"""
return # osid.grading.GradebookColumnSummarySearchOrder
grade_system_search_order = property(fget=get_grade_system_search_order)
@abc.abstractmethod
def get_gradebook_column_search_order_record(self, gradebook_column_record_type):
"""Gets the gradebook column search order record corresponding to the given gradebook column record ``Type``.
Multiple retrievals return the same underlying object.
:param gradebook_column_record_type: a gradebook column record type
:type gradebook_column_record_type: ``osid.type.Type``
:return: the gradebook column search order record
:rtype: ``osid.grading.records.GradebookColumnSearchOrderRecord``
:raise: ``NullArgument`` -- ``gradebook_column_record_type`` is ``null``
:raise: ``OperationFailed`` -- unable to complete request
:raise: ``Unsupported`` -- ``has_record_type(gradebook_column_record_type)`` is ``false``
*compliance: mandatory -- This method must be implemented.*
"""
return # osid.grading.records.GradebookColumnSearchOrderRecord
class GradebookColumnSummarySearchOrder:
"""An interface for specifying the ordering of search results."""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def order_by_mean(self, style):
"""Specified a preference for ordering results by the mean.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def order_by_median(self, style):
"""Specified a preference for ordering results by the median.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def order_by_mode(self, style):
"""Specified a preference for ordering results by the mode.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def order_by_rms(self, style):
"""Specified a preference for ordering results by the root mean square.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def order_by_standard_deviation(self, style):
"""Specified a preference for ordering results by the standard deviation.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def order_by_sum(self, style):
"""Specified a preference for ordering results by the sum.
:param style: search order style
:type style: ``osid.SearchOrderStyle``
:raise: ``NullArgument`` -- ``style`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
pass
@abc.abstractmethod
def get_gradebook_column_summary_search_order_record(self, gradebook_column_summary_record_type):
"""Gets the gradebook column summary search order record corresponding to the given gradebook column summary record ``Type``.
Multiple retrievals return the same underlying object.
:param gradebook_column_summary_record_type: a gradebook column summary record type
:type gradebook_column_summary_record_type: ``osid.type.Type``
:return: the gradebook column summary search order record
:rtype: ``osid.grading.records.GradebookColumnSummarySearchOrderRecord``
:raise: ``NullArgument`` -- ``gradebook_column_summary_record_type`` is ``null``
:raise: ``OperationFailed`` -- unable to complete request
:raise: ``Unsupported`` -- ``has_record_type(gradebook_column_summary_record_type)`` is ``false``
*compliance: mandatory -- This method must be implemented.*
"""
return # osid.grading.records.GradebookColumnSummarySearchOrderRecord
class GradebookSearchOrder:
"""An interface for specifying the ordering of search results."""
__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def get_gradebook_search_order_record(self, gradebook_record_type):
"""Gets the gradebook search order record corresponding to the given gradebook record ``Type``.
Multiple retrievals return the same underlying object.
:param gradebook_record_type: a gradebook record type
:type gradebook_record_type: ``osid.type.Type``
:return: the gradebook search order record
:rtype: ``osid.grading.records.GradebookSearchOrderRecord``
:raise: ``NullArgument`` -- ``gradebook_record_type`` is ``null``
:raise: ``OperationFailed`` -- unable to complete request
:raise: ``Unsupported`` -- ``has_record_type(gradebook_record_type)`` is ``false``
*compliance: mandatory -- This method must be implemented.*
"""
return # osid.grading.records.GradebookSearchOrderRecord
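A hedged sketch of what a concrete implementation of the `GradebookSearchOrder` contract above might look like; the record store is a toy dict and the error names merely mirror the spec's `NullArgument`/`Unsupported` conditions. Note also that the class-body assignment ``__metaclass__ = abc.ABCMeta`` used throughout this module is a Python 2 idiom and is inert under Python 3; the ``metaclass=`` keyword below is what actually makes `@abc.abstractmethod` enforce instantiation:

```python
import abc

# Abstract side of the contract, restated with a Python 3 metaclass so
# instantiation of the abstract class is actually rejected.
class AbstractGradebookSearchOrder(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def get_gradebook_search_order_record(self, gradebook_record_type):
        """Gets the gradebook search order record for the given ``Type``."""

class DictGradebookSearchOrder(AbstractGradebookSearchOrder):
    """Toy concrete implementation backed by a dict of records."""
    def __init__(self, records):
        self._records = records
    def get_gradebook_search_order_record(self, gradebook_record_type):
        if gradebook_record_type is None:
            raise ValueError("NullArgument: gradebook_record_type is null")
        try:
            return self._records[gradebook_record_type]
        except KeyError:
            raise TypeError("Unsupported: has_record_type(...) is false")

try:
    AbstractGradebookSearchOrder()   # abstract method unimplemented
except TypeError as exc:
    print("cannot instantiate:", exc)

order = DictGradebookSearchOrder({"demo-type": object()})
print(order.get_gradebook_search_order_record("demo-type") is not None)
```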
| 34.311688 | 133 | 0.659349 | 2,557 | 23,778 | 5.973797 | 0.071177 | 0.082095 | 0.05892 | 0.047136 | 0.855254 | 0.80635 | 0.758363 | 0.705401 | 0.67437 | 0.644321 | 0 | 0 | 0.235554 | 23,778 | 692 | 134 | 34.361272 | 0.840348 | 0.664059 | 0 | 0.666667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.288462 | false | 0.147436 | 0.00641 | 0 | 0.564103 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 8 |
bf11f57fae8d6ef0ab0c21759c9895decdb35a01 | 25,909 | py | Python | experiments/EntEval/enteval/tools/validation.py | diegoolano/biomedical_interpretable_entity_representations | 3c35f02ee8dd7ee0f2a23b0014e4b112beab6461 | [
"MIT"
] | 2 | 2021-09-24T08:54:33.000Z | 2021-11-15T05:15:52.000Z | experiments/EntEval/enteval/tools/validation.py | diegoolano/biomedical_interpretable_entity_representations | 3c35f02ee8dd7ee0f2a23b0014e4b112beab6461 | [
"MIT"
] | null | null | null | experiments/EntEval/enteval/tools/validation.py | diegoolano/biomedical_interpretable_entity_representations | 3c35f02ee8dd7ee0f2a23b0014e4b112beab6461 | [
"MIT"
] | 2 | 2021-07-05T20:19:01.000Z | 2021-08-01T01:01:41.000Z | # Copyright (c) 2017-present, Facebook, Inc.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.
#
"""
Validation and classification
(train) : inner-kfold classifier
(train, test) : kfold classifier
(train, dev, test) : split classifier
"""
from __future__ import absolute_import, division, unicode_literals
import logging
import numpy as np
from enteval.tools.classifier import MLP, MaskMLP, MLPLayerWeighst
import enteval.tools.multiclassclassifier as multiclassclassifier
import sklearn
assert(sklearn.__version__ >= "0.18.0"), \
"need to update sklearn to version >= 0.18.0"
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
import torch
import time
def get_classif_name(classifier_config, usepytorch):
if not usepytorch:
modelname = 'sklearn-LogReg'
else:
nhid = classifier_config['nhid']
optim = 'adam' if 'optim' not in classifier_config else classifier_config['optim']
bs = 64 if 'batch_size' not in classifier_config else classifier_config['batch_size']
modelname = 'pytorch-MLP-nhid%s-%s-bs%s' % (nhid, optim, bs)
return modelname
# Pytorch version
class InnerKFoldClassifier(object):
"""
(train) split classifier : InnerKfold.
"""
def __init__(self, X, y, config):
self.X = X
self.y = y
self.featdim = X.shape[1]
self.nclasses = config['nclasses']
self.seed = config['seed']
self.devresults = []
self.testresults = []
self.usepytorch = config['usepytorch']
self.classifier_config = config['classifier']
self.modelname = get_classif_name(self.classifier_config, self.usepytorch)
self.k = 5 if 'kfold' not in config else config['kfold']
def run(self):
logging.info('Training {0} with (inner) {1}-fold cross-validation'
.format(self.modelname, self.k))
regs = [10**t for t in range(-5, -1)] if self.usepytorch else \
[2**t for t in range(-2, 4, 1)]
skf = StratifiedKFold(n_splits=self.k, shuffle=True, random_state=1111)
innerskf = StratifiedKFold(n_splits=self.k, shuffle=True,
random_state=1111)
count = 0
for train_idx, test_idx in skf.split(self.X, self.y):
count += 1
X_train, X_test = self.X[train_idx], self.X[test_idx]
y_train, y_test = self.y[train_idx], self.y[test_idx]
scores = []
for reg in regs:
regscores = []
for inner_train_idx, inner_test_idx in innerskf.split(X_train, y_train):
X_in_train, X_in_test = X_train[inner_train_idx], X_train[inner_test_idx]
y_in_train, y_in_test = y_train[inner_train_idx], y_train[inner_test_idx]
if self.usepytorch:
clf = MLP(self.classifier_config, inputdim=self.featdim,
nclasses=self.nclasses, l2reg=reg,
seed=self.seed)
clf.fit(X_in_train, y_in_train,
validation_data=(X_in_test, y_in_test))
else:
clf = LogisticRegression(C=reg, random_state=self.seed)
clf.fit(X_in_train, y_in_train)
regscores.append(clf.score(X_in_test, y_in_test))
scores.append(round(100*np.mean(regscores), 2))
optreg = regs[np.argmax(scores)]
logging.info('Best param found at split {0}: l2reg = {1} \
with score {2}'.format(count, optreg, np.max(scores)))
self.devresults.append(np.max(scores))
if self.usepytorch:
clf = MLP(self.classifier_config, inputdim=self.featdim,
nclasses=self.nclasses, l2reg=optreg,
seed=self.seed)
clf.fit(X_train, y_train, validation_split=0.05)
else:
clf = LogisticRegression(C=optreg, random_state=self.seed)
clf.fit(X_train, y_train)
self.testresults.append(round(100*clf.score(X_test, y_test), 2))
devaccuracy = round(np.mean(self.devresults), 2)
testaccuracy = round(np.mean(self.testresults), 2)
return devaccuracy, testaccuracy
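The control flow of `InnerKFoldClassifier.run` — an outer k-fold, and for each outer training split an inner k-fold that selects the best regularizer — can be sketched in plain Python, replacing sklearn's `StratifiedKFold` with naive contiguous folds and the classifier with an injected `score` callable. All names here are illustrative:

```python
# Nested (inner) k-fold model selection, stripped to its index logic.
def kfold_indices(n, k):
    """Yield (train_idx, test_idx) index lists for k contiguous folds."""
    fold = n // k
    for i in range(k):
        test = list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
        train = [j for j in range(n) if j not in test]
        yield train, test

def inner_kfold_select(n, k, regs, score):
    """Per outer fold, pick the reg with best mean inner-fold score."""
    results = []
    for outer_train, outer_test in kfold_indices(n, k):
        best = max(
            regs,
            key=lambda r: sum(score(r, tr, te)
                              for tr, te in kfold_indices(len(outer_train), k)) / k,
        )
        results.append((best, outer_test))
    return results

# Toy score that always favors the smallest regularizer.
picks = inner_kfold_select(n=20, k=5, regs=[1e-5, 1e-4, 1e-3],
                           score=lambda r, tr, te: -r)
print([r for r, _ in picks])  # [1e-05, 1e-05, 1e-05, 1e-05, 1e-05]
```

As in the class above, the regularizer is re-chosen independently for each outer fold, and the final test score would then come from refitting on the full outer training split with that `best` value.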
class KFoldClassifier(object):
"""
(train, test) split classifier : cross-validation on train.
"""
def __init__(self, train, test, config):
self.train = train
self.test = test
self.featdim = self.train['X'].shape[1]
self.nclasses = config['nclasses']
self.seed = config['seed']
self.usepytorch = config['usepytorch']
self.classifier_config = config['classifier']
self.modelname = get_classif_name(self.classifier_config, self.usepytorch)
self.k = 5 if 'kfold' not in config else config['kfold']
def run(self):
# cross-validation
logging.info('Training {0} with {1}-fold cross-validation'
.format(self.modelname, self.k))
regs = [10**t for t in range(-5, -1)] if self.usepytorch else \
[2**t for t in range(-1, 6, 1)]
skf = StratifiedKFold(n_splits=self.k, shuffle=True,
random_state=self.seed)
scores = []
for reg in regs:
scanscores = []
for train_idx, test_idx in skf.split(self.train['X'],
self.train['y']):
# Split data
X_train, y_train = self.train['X'][train_idx], self.train['y'][train_idx]
X_test, y_test = self.train['X'][test_idx], self.train['y'][test_idx]
# Train classifier
if self.usepytorch:
clf = MLP(self.classifier_config, inputdim=self.featdim,
nclasses=self.nclasses, l2reg=reg,
seed=self.seed)
clf.fit(X_train, y_train, validation_data=(X_test, y_test))
else:
clf = LogisticRegression(C=reg, random_state=self.seed)
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
scanscores.append(score)
# Append mean score
scores.append(round(100*np.mean(scanscores), 2))
# evaluation
logging.info([('reg:' + str(regs[idx]), scores[idx])
for idx in range(len(scores))])
optreg = regs[np.argmax(scores)]
devaccuracy = np.max(scores)
logging.info('Cross-validation: best param found is reg = {0} '
'with score {1}'.format(optreg, devaccuracy))
logging.info('Evaluating...')
if self.usepytorch:
clf = MLP(self.classifier_config, inputdim=self.featdim,
nclasses=self.nclasses, l2reg=optreg,
seed=self.seed)
clf.fit(self.train['X'], self.train['y'], validation_split=0.05)
else:
clf = LogisticRegression(C=optreg, random_state=self.seed)
clf.fit(self.train['X'], self.train['y'])
yhat = clf.predict(self.test['X'])
testaccuracy = clf.score(self.test['X'], self.test['y'])
testaccuracy = round(100*testaccuracy, 2)
return devaccuracy, testaccuracy, yhat
class SplitClassifier(object):
"""
(train, valid, test) split classifier.
"""
def __init__(self, X, y, config):
self.X = X
self.y = y
self.nclasses = config['nclasses']
self.featdim = self.X['train'].shape[1]
self.seed = config['seed']
self.usepytorch = config['usepytorch']
self.classifier_config = config['classifier']
self.cudaEfficient = False if 'cudaEfficient' not in config else \
config['cudaEfficient']
self.modelname = get_classif_name(self.classifier_config, self.usepytorch)
self.noreg = False if 'noreg' not in config else config['noreg']
self.hardmask = None if 'hardmask' not in self.classifier_config else self.classifier_config['hardmask']
self.file_header = 'model' if 'file_header' not in self.classifier_config\
else self.classifier_config['file_header']
self.config = config
def run(self, return_score=False):
logging.info('Training {0} with standard validation..'
.format(self.modelname))
regs = [10**t for t in range(-5, -1)] if self.usepytorch else \
[2**t for t in range(-2, 4, 1)]
if self.noreg:
regs = [1e-9 if self.usepytorch else 1e9]
scores = []
for reg in regs:
if self.usepytorch:
clf = MLP(self.classifier_config, inputdim=self.featdim,
nclasses=self.nclasses, l2reg=reg,
seed=self.seed, cudaEfficient=self.cudaEfficient)
# TODO: Find a hack for reducing the number of epochs in SNLI
clf.fit(self.X['train'], self.y['train'],
validation_data=(self.X['valid'], self.y['valid']))
else:
clf = LogisticRegression(C=reg, random_state=self.seed)
clf.fit(self.X['train'], self.y['train'])
scores.append(round(100*clf.score(self.X['valid'],
self.y['valid']), 2))
logging.info([('reg:'+str(regs[idx]), scores[idx])
for idx in range(len(scores))])
optreg = regs[np.argmax(scores)]
devaccuracy = np.max(scores)
logging.info('Validation: best param found is reg = {0} '
'with score {1}'.format(optreg, devaccuracy))
clf = LogisticRegression(C=optreg, random_state=self.seed)
logging.info('Evaluating...')
if self.usepytorch:
clf = MLP(self.classifier_config, inputdim=self.featdim,
nclasses=self.nclasses, l2reg=optreg,
seed=self.seed, cudaEfficient=self.cudaEfficient)
# TODO: Find a hack for reducing the number of epochs in SNLI
clf.fit(self.X['train'], self.y['train'],
validation_data=(self.X['valid'], self.y['valid']))
else:
clf = LogisticRegression(C=optreg, random_state=self.seed)
clf.fit(self.X['train'], self.y['train'])
#######
print('==> Saving a trained model: ')
#save_to = '/scratch/cluster/yasumasa/entity/data/EntEval/Conllyago_best_model2.pkl'
#save_to = '/scratch/cluster/yasumasa/entity/data/EntEval/models/' + self.file_header + time.strftime("_%Y-%m-%d_%H:%M:%S", time.localtime()) + '.model'
save_to = self.config['saveout'] if 'saveout' in self.config else '/dccstor/redrug_ier/diego/redrug-ier/experiments/ehr_baselines/model_out/wlned_baseline_' + time.strftime("_%Y-%m-%d_%H:%M:%S", time.localtime()) + '.model'
print('==> Saving a trained model: ', save_to)
torch.save(clf.model.state_dict(), save_to)
#######
logging.info("start predicting on test")
_devaccuracy = clf.score(self.X['valid'], self.y['valid'], test=True,
return_score=return_score)
testaccuracy = clf.score(self.X['test'], self.y['test'], test=True,
return_score=return_score)
if not return_score:
testaccuracy = round(100*testaccuracy, 2)
return devaccuracy, testaccuracy, _devaccuracy
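The selection protocol in `run` above is: score one model per candidate `reg` on the validation split, take the argmax, refit with that `reg`, and only then touch the test split. Factored out as a helper, with `valid_score`/`test_score` as hypothetical callables standing in for the `clf.fit(...)`/`clf.score(...)` pairs used above:

```python
import numpy as np

def select_and_evaluate(regs, valid_score, test_score):
    # One validation score per reg, rounded as the class above does.
    scores = [round(100 * valid_score(reg), 2) for reg in regs]
    optreg = regs[int(np.argmax(scores))]
    devaccuracy = max(scores)
    # The test split is consulted exactly once, with the chosen reg.
    testaccuracy = round(100 * test_score(optreg), 2)
    return devaccuracy, testaccuracy, optreg
```

Keeping test evaluation outside the sweep is what makes `devaccuracy` a model-selection score and `testaccuracy` an unbiased estimate.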
class SplitClassifierWithLayerWeights(object):
"""
(train, valid, test) split classifier.
"""
def __init__(self, X, y, config):
self.X = X
self.y = y
self.nclasses = config['nclasses']
self.n_layers = self.X['train'].shape[1]
self.featdim = self.X['train'].shape[2] # (n_examples, n_layers, dim)
self.seed = config['seed']
self.usepytorch = config['usepytorch']
self.classifier_config = config['classifier']
self.cudaEfficient = False if 'cudaEfficient' not in config else \
config['cudaEfficient']
self.modelname = get_classif_name(self.classifier_config, self.usepytorch)
self.noreg = False if 'noreg' not in config else config['noreg']
self.hardmask = None if 'hardmask' not in self.classifier_config else self.classifier_config['hardmask']
self.file_header = 'model' if 'file_header' not in self.classifier_config\
else self.classifier_config['file_header']
self.config = config
def run(self, return_score=False):
logging.info('Training {0} with standard validation..'
.format(self.modelname))
regs = [10**t for t in range(-5, -1)] if self.usepytorch else \
[2**t for t in range(-2, 4, 1)]
if self.noreg:
regs = [1e-9 if self.usepytorch else 1e9]
scores = []
for reg in regs:
if self.usepytorch:
clf = MLPLayerWeighst(
self.classifier_config,
inputdim=self.featdim,
nclasses=self.nclasses,
l2reg=reg,
seed=self.seed,
cudaEfficient=self.cudaEfficient,
n_layers=self.n_layers
)
# TODO: Find a hack for reducing the number of epochs in SNLI
clf.fit(self.X['train'], self.y['train'],
validation_data=(self.X['valid'], self.y['valid']))
else:
clf = LogisticRegression(C=reg, random_state=self.seed)
clf.fit(self.X['train'], self.y['train'])
scores.append(round(100*clf.score(self.X['valid'],
self.y['valid']), 2))
logging.info([('reg:'+str(regs[idx]), scores[idx])
for idx in range(len(scores))])
optreg = regs[np.argmax(scores)]
devaccuracy = np.max(scores)
logging.info('Validation: best param found is reg = {0} '
'with score {1}'.format(optreg, devaccuracy))
clf = LogisticRegression(C=optreg, random_state=self.seed)
logging.info('Evaluating...')
if self.usepytorch:
clf = MLPLayerWeighst(
self.classifier_config,
inputdim=self.featdim,
nclasses=self.nclasses,
l2reg=optreg,
seed=self.seed,
cudaEfficient=self.cudaEfficient,
n_layers=self.n_layers
)
# TODO: Find a hack for reducing the number of epochs in SNLI
clf.fit(self.X['train'], self.y['train'],
validation_data=(self.X['valid'], self.y['valid']))
else:
clf = LogisticRegression(C=optreg, random_state=self.seed)
clf.fit(self.X['train'], self.y['train'])
#######
print('==> Saving a trained model: ')
#save_to = '/scratch/cluster/yasumasa/entity/data/EntEval/Conllyago_best_model2.pkl'
save_to = '/scratch/cluster/yasumasa/entity/data/EntEval/models/' + self.file_header +\
time.strftime("_%Y-%m-%d_%H:%M:%S", time.localtime()) + '.model'
print('==> Saving a trained model: ', save_to)
torch.save(clf.model.state_dict(), save_to)
#######
logging.info("start predicting on test")
_devaccuracy = clf.score(self.X['valid'], self.y['valid'], test=True,
return_score=return_score)
testaccuracy = clf.score(self.X['test'], self.y['test'], test=True,
return_score=return_score)
if not return_score:
testaccuracy = round(100*testaccuracy, 2)
return devaccuracy, testaccuracy, _devaccuracy
class SplitClassifierWithSoftMask(SplitClassifier):
def __init__(self, X, y, config):
super(SplitClassifierWithSoftMask, self).__init__(X, y, config)
def run(self, return_score=False):
logging.info('Training {0} with standard validation..'
.format(self.modelname))
l2regs = [10**t for t in range(-5, -1)] if self.usepytorch else \
[2**t for t in range(-2, 4, 1)]
l1regs = [10**t for t in range(-5, -1)] if self.usepytorch else \
[2**t for t in range(-2, 4, 1)]
if self.noreg:
l2regs = [1e-9 if self.usepytorch else 1e9]
if self.hardmask is not None:
l1regs = [1e-9 if self.usepytorch else 1e9]
scores = []
for l2reg in l2regs:
for l1reg in l1regs:
if self.usepytorch:
clf = MaskMLP(self.classifier_config, inputdim=self.featdim,
nclasses=self.nclasses, l2reg=l2reg,
seed=self.seed, cudaEfficient=self.cudaEfficient, l1_coefficient=l1reg,
hardmask=self.hardmask
)
# TODO: Find a hack for reducing the number of epochs in SNLI
clf.fit(self.X['train'], self.y['train'],
validation_data=(self.X['valid'], self.y['valid']))
else: # not using
clf = LogisticRegression(C=l2reg, random_state=self.seed)
clf.fit(self.X['train'], self.y['train'])
scores.append(
(str(l2reg) + '_' + str(l1reg), round(100*clf.score(self.X['valid'], self.y['valid']), 2))
)
logging.info(scores)
optl2_optl1, devaccuracy = max(scores, key=lambda x: x[1])
opt_l2reg, opt_l1reg = optl2_optl1.split('_')
opt_l2reg, opt_l1reg = float(opt_l2reg), float(opt_l1reg)
logging.info('Validation: best param found is l2 reg = {0}, l1 reg = {1} '
'with score {2}'.format(opt_l2reg, opt_l1reg, devaccuracy))
print('Validation: best param found is l2 reg = {0}, l1 reg = {1} '
'with score {2}'.format(opt_l2reg, opt_l1reg, devaccuracy))
clf = LogisticRegression(C=opt_l2reg, random_state=self.seed) # not using
logging.info('Evaluating...')
if self.usepytorch:
clf = MaskMLP(self.classifier_config, inputdim=self.featdim,
nclasses=self.nclasses, l2reg=opt_l2reg,
seed=self.seed, cudaEfficient=self.cudaEfficient,
l1_coefficient=opt_l1reg,
hardmask=self.hardmask
)
# TODO: Find a hack for reducing the number of epochs in SNLI
clf.fit(self.X['train'], self.y['train'],
validation_data=(self.X['valid'], self.y['valid']))
else:
clf = LogisticRegression(C=opt_l2reg, random_state=self.seed)
clf.fit(self.X['train'], self.y['train'])
#######
print('==> Saving a trained model: ')
#save_to = '/scratch/cluster/yasumasa/entity/data/EntEval/Conllyago_best_model2.pkl'
save_to = '/scratch/cluster/yasumasa/entity/data/EntEval/models/' + self.file_header +\
time.strftime("_%Y-%m-%d_%H:%M:%S", time.localtime()) + '.model'
print('==> Saving a trained model: ', save_to)
torch.save(clf.model.state_dict(), save_to)
#######
logging.info("start predicting on test")
testaccuracy = clf.score(self.X['test'], self.y['test'], test=True, return_score=return_score)
if not return_score:
testaccuracy = round(100*testaccuracy, 2)
return devaccuracy, testaccuracy
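The l2-by-l1 sweep above encodes each pair as an `'l2_l1'` string and later recovers the floats with `split('_')` and `float(...)`, round-tripping the values through `str`. The same grid search is simpler and lossless if the pair is kept as a tuple (the `score_fn` callable below is a hypothetical stand-in for fitting `MaskMLP` and scoring on the validation split):

```python
from itertools import product

def grid_best(l2regs, l1regs, score_fn):
    # Score every (l2, l1) pair; keep the pair itself as the key instead
    # of an 'l2_l1' string that has to be split and re-parsed.
    scored = [((l2, l1), score_fn(l2, l1))
              for l2, l1 in product(l2regs, l1regs)]
    (opt_l2, opt_l1), best = max(scored, key=lambda item: item[1])
    return opt_l2, opt_l1, best
```

`max` with a key returns the first best pair, matching the stable `sorted(..., reverse=True)[0]` used above.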
class SplitMultiClassClassifier(object):
"""
(train, valid, test) split classifier.
"""
def __init__(self, X, y, config):
self.X = X
self.y = y
self.nclasses = config['nclasses']
self.featdim = self.X['train'].shape[-1]
self.seed = config['seed']
self.usepytorch = config['usepytorch']
self.classifier_config = config['classifier']
self.cudaEfficient = False if 'cudaEfficient' not in config else \
config['cudaEfficient']
self.modelname = get_classif_name(self.classifier_config, self.usepytorch)
self.noreg = False if 'noreg' not in config else config['noreg']
self.config = config
def run(self):
logging.info('Training {0} with standard validation..'
.format(self.modelname))
regs = [10**t for t in range(-5, -1)] if self.usepytorch else \
[2**t for t in range(-2, 4, 1)]
if self.noreg:
regs = [1e-9 if self.usepytorch else 1e9]
scores = []
for reg in regs:
if self.usepytorch:
clf = multiclassclassifier.MLP(self.classifier_config, inputdim=self.featdim,
nclasses=self.nclasses, l2reg=reg,
seed=self.seed, cudaEfficient=self.cudaEfficient)
# TODO: Find a hack for reducing the number of epochs in SNLI
clf.fit(self.X['train'], self.y['train'],
validation_data=(self.X['valid'], self.y['valid']))
else:
clf = LogisticRegression(C=reg, random_state=self.seed)
clf.fit(self.X['train'], self.y['train'])
scores.append(round(100*clf.score(self.X['valid'],
self.y['valid']), 2))
logging.info([('reg:'+str(regs[idx]), scores[idx])
for idx in range(len(scores))])
optreg = regs[np.argmax(scores)]
devaccuracy = np.max(scores)
logging.info('Validation: best param found is reg = {0} '
'with score {1}'.format(optreg, devaccuracy))
clf = LogisticRegression(C=optreg, random_state=self.seed)
logging.info('Evaluating...')
if self.usepytorch:
clf = multiclassclassifier.MLP(self.classifier_config, inputdim=self.featdim,
nclasses=self.nclasses, l2reg=optreg,
seed=self.seed, cudaEfficient=self.cudaEfficient)
# TODO: Find a hack for reducing the number of epochs in SNLI
clf.fit(self.X['train'], self.y['train'],
validation_data=(self.X['valid'], self.y['valid']))
else:
logging.error("SplitMultiClassClassifier requires usepytorch=True.")
raise ValueError("SplitMultiClassClassifier requires usepytorch=True.")
testaccuracy = clf.score(self.X['test'], self.y['test'])
testaccuracy = round(100*testaccuracy, 2)
return devaccuracy, testaccuracy
class SplitClassifierCustom(object):
"""
(train, valid, test) split classifier.
"""
def __init__(self, X, y, config):
self.X = X
self.y = y
self.nclasses = config['nclasses']
self.featdim = config['classifier']["nhid"] if isinstance(self.X['train'], list) else self.X['train'].shape[1]
self.seed = config['seed']
self.usepytorch = config['usepytorch']
self.classifier_config = config['classifier']
self.cudaEfficient = False if 'cudaEfficient' not in config else \
config['cudaEfficient']
self.modelname = get_classif_name(self.classifier_config, self.usepytorch)
self.noreg = False if 'noreg' not in config else config['noreg']
self.config = config
def run(self, return_score=False):
logging.info('Training {0} with standard validation..'
.format(self.modelname))
regs = [10 ** t for t in range(-5, -1)] if self.usepytorch else \
[2 ** t for t in range(-2, 4, 1)]
if self.noreg:
regs = [1e-9 if self.usepytorch else 1e9]
scores = []
for reg in regs:
if self.usepytorch:
clf = MLP(self.classifier_config, inputdim=self.featdim,
nclasses=self.nclasses, l2reg=reg,
seed=self.seed, cudaEfficient=self.cudaEfficient)
# TODO: Find a hack for reducing the number of epochs in SNLI
clf.fit(self.X['train'], self.y['train'],
validation_data=(self.X['valid'], self.y['valid']))
else:
clf = LogisticRegression(C=reg, random_state=self.seed)
clf.fit(self.X['train'], self.y['train'])
scores.append(round(100 * clf.score(self.X['valid'],
self.y['valid']), 2))
logging.info([('reg:' + str(regs[idx]), scores[idx])
for idx in range(len(scores))])
optreg = regs[np.argmax(scores)]
devaccuracy = np.max(scores)
logging.info('Validation: best param found is reg = {0} '
'with score {1}'.format(optreg, devaccuracy))
clf = LogisticRegression(C=optreg, random_state=self.seed)
logging.info('Evaluating...')
if self.usepytorch:
clf = MLP(self.classifier_config, inputdim=self.featdim,
nclasses=self.nclasses, l2reg=optreg,
seed=self.seed, cudaEfficient=self.cudaEfficient)
# TODO: Find a hack for reducing nb epoches in SNLI
clf.fit(self.X['train'], self.y['train'],
validation_data=(self.X['valid'], self.y['valid']))
else:
clf = LogisticRegression(C=optreg, random_state=self.seed)
clf.fit(self.X['train'], self.y['train'])
logging.info("start predicting on test")
testaccuracy = clf.score(self.X['test'], self.y['test'], test=True, return_score=return_score)
if not return_score:
testaccuracy = round(100 * testaccuracy, 2)
return devaccuracy, testaccuracy

import requests
import json
import pysnowball.cons as cons
import pysnowball.token as token
def fetch(url, host="stock.xueqiu.com"):
HEADERS = {'Host': host,
'Accept': 'application/json',
'Cookie': token.get_token(),
'User-Agent': 'Xueqiu iPhone 11.8',
'Accept-Language': 'zh-Hans-CN;q=1, ja-JP;q=0.9',
'Accept-Encoding': 'br, gzip, deflate',
'Connection': 'keep-alive'}
response = requests.get(url, headers=HEADERS)
# print(url)
# print(HEADERS)
# print(response)
# print(response.content)
if response.status_code != 200:
raise Exception(response.content)
return json.loads(response.content)
def fetch_without_token(url, host="stock.xueqiu.com"):
HEADERS = {'Host': host,
'Accept': 'application/json',
'User-Agent': 'Xueqiu iPhone 11.8',
'Accept-Language': 'zh-Hans-CN;q=1, ja-JP;q=0.9',
'Accept-Encoding': 'br, gzip, deflate',
'Connection': 'keep-alive'}
response = requests.get(url, headers=HEADERS)
# print(url)
# print(HEADERS)
# print(response)
# print(response.content)
if response.status_code != 200:
raise Exception(response.content)
return json.loads(response.content)
def fetch_eastmoney(url):
HEADERS = {"Host": "datacenter-web.eastmoney.com",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
"Accept-Encoding": "gzip, deflate, br",
"Accept-Language": "en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7,cy;q=0.6"}
response = requests.get(url, headers=HEADERS)
if response.status_code != 200:
raise Exception(response.content)
return json.loads(response.content)
def fetch_csindex(url):
HEADERS = {"Host": "www.csindex.com.cn",
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36",
"Accept": "application/json, text/plain, */*",
"Accept-Encoding": "gzip, deflate, br",
"Accept-Language": "en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7,cy;q=0.6"}
response = requests.get(url, headers=HEADERS)
# print(url)
# print(HEADERS)
# print(response)
# print(response.content)
if response.status_code != 200:
raise Exception(response.content)
return json.loads(response.content)
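The four fetch helpers above differ only in their header dictionaries. The shared xueqiu headers can be built by one function (a sketch — `token` stands for the cookie string that `token.get_token()` returns; passing `None` reproduces `fetch_without_token`):

```python
def build_xueqiu_headers(host, token=None):
    # Common headers used by fetch() and fetch_without_token() above;
    # the Cookie entry is attached only when a token is supplied.
    headers = {
        'Host': host,
        'Accept': 'application/json',
        'User-Agent': 'Xueqiu iPhone 11.8',
        'Accept-Language': 'zh-Hans-CN;q=1, ja-JP;q=0.9',
        'Accept-Encoding': 'br, gzip, deflate',
        'Connection': 'keep-alive',
    }
    if token is not None:
        headers['Cookie'] = token
    return headers
```

With this helper, each fetch function reduces to one `requests.get(url, headers=build_xueqiu_headers(host, ...))` call plus the shared status check.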

#!/usr/bin/env python3.4
#
# Copyright 2016 - Google
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Test Script for VT live call test
"""
import time
from queue import Empty
from acts.test_utils.tel.TelephonyBaseTest import TelephonyBaseTest
from acts.test_utils.tel.tel_defines import AUDIO_ROUTE_EARPIECE
from acts.test_utils.tel.tel_defines import AUDIO_ROUTE_SPEAKER
from acts.test_utils.tel.tel_defines import CALL_STATE_ACTIVE
from acts.test_utils.tel.tel_defines import CALL_STATE_HOLDING
from acts.test_utils.tel.tel_defines import CALL_CAPABILITY_MANAGE_CONFERENCE
from acts.test_utils.tel.tel_defines import CALL_CAPABILITY_MERGE_CONFERENCE
from acts.test_utils.tel.tel_defines import CALL_CAPABILITY_SWAP_CONFERENCE
from acts.test_utils.tel.tel_defines import CALL_PROPERTY_CONFERENCE
from acts.test_utils.tel.tel_defines import MAX_WAIT_TIME_VIDEO_SESSION_EVENT
from acts.test_utils.tel.tel_defines import MAX_WAIT_TIME_VOLTE_ENABLED
from acts.test_utils.tel.tel_defines import VT_STATE_AUDIO_ONLY
from acts.test_utils.tel.tel_defines import VT_STATE_BIDIRECTIONAL
from acts.test_utils.tel.tel_defines import VT_STATE_BIDIRECTIONAL_PAUSED
from acts.test_utils.tel.tel_defines import VT_VIDEO_QUALITY_DEFAULT
from acts.test_utils.tel.tel_defines import VT_STATE_RX_ENABLED
from acts.test_utils.tel.tel_defines import VT_STATE_TX_ENABLED
from acts.test_utils.tel.tel_defines import WAIT_TIME_ANDROID_STATE_SETTLING
from acts.test_utils.tel.tel_defines import WAIT_TIME_IN_CALL
from acts.test_utils.tel.tel_defines import EVENT_VIDEO_SESSION_EVENT
from acts.test_utils.tel.tel_defines import EventTelecomVideoCallSessionEvent
from acts.test_utils.tel.tel_defines import SESSION_EVENT_RX_PAUSE
from acts.test_utils.tel.tel_defines import SESSION_EVENT_RX_RESUME
from acts.test_utils.tel.tel_test_utils import call_setup_teardown
from acts.test_utils.tel.tel_test_utils import disconnect_call_by_id
from acts.test_utils.tel.tel_test_utils import hangup_call
from acts.test_utils.tel.tel_test_utils import multithread_func
from acts.test_utils.tel.tel_test_utils import num_active_calls
from acts.test_utils.tel.tel_test_utils import verify_http_connection
from acts.test_utils.tel.tel_test_utils import verify_incall_state
from acts.test_utils.tel.tel_test_utils import wait_for_video_enabled
from acts.test_utils.tel.tel_video_utils import get_call_id_in_video_state
from acts.test_utils.tel.tel_video_utils import \
is_phone_in_call_video_bidirectional
from acts.test_utils.tel.tel_video_utils import is_phone_in_call_voice_hd
from acts.test_utils.tel.tel_video_utils import phone_setup_video
from acts.test_utils.tel.tel_video_utils import \
verify_video_call_in_expected_state
from acts.test_utils.tel.tel_video_utils import video_call_downgrade
from acts.test_utils.tel.tel_video_utils import video_call_modify_video
from acts.test_utils.tel.tel_video_utils import video_call_setup_teardown
from acts.test_utils.tel.tel_voice_utils import get_audio_route
from acts.test_utils.tel.tel_voice_utils import is_phone_in_call_volte
from acts.test_utils.tel.tel_voice_utils import phone_setup_volte
from acts.test_utils.tel.tel_voice_utils import set_audio_route
from acts.test_utils.tel.tel_voice_utils import get_cep_conference_call_id
from acts.utils import load_config
class TelLiveVideoTest(TelephonyBaseTest):
def __init__(self, controllers):
TelephonyBaseTest.__init__(self, controllers)
self.tests = (
"test_call_video_to_video",
"test_call_video_accept_as_voice",
"test_call_video_to_video_mo_disable_camera",
"test_call_video_to_video_mt_disable_camera",
"test_call_video_to_video_mo_mt_disable_camera",
"test_call_video_to_video_mt_mo_disable_camera",
"test_call_volte_to_volte_mo_upgrade_bidirectional",
"test_call_video_accept_as_voice_mo_upgrade_bidirectional",
"test_call_volte_to_volte_mo_upgrade_reject",
"test_call_video_accept_as_voice_mo_upgrade_reject",
"test_call_video_to_video_mo_to_backgroundpause_foregroundresume",
"test_call_video_to_video_mt_to_backgroundpause_foregroundresume",
# Video Call + Voice Call
"test_call_video_add_mo_voice",
"test_call_video_add_mt_voice",
"test_call_volte_add_mo_video",
"test_call_volte_add_mt_video",
"test_call_video_add_mt_voice_swap_once_local_drop",
"test_call_video_add_mt_voice_swap_twice_remote_drop_voice_unhold_video",
# Video + Video
"test_call_video_add_mo_video",
"test_call_video_add_mt_video",
"test_call_mt_video_add_mt_video",
"test_call_mt_video_add_mo_video",
# VT conference
"test_call_volte_add_mo_video_accept_as_voice_merge_drop",
"test_call_volte_add_mt_video_accept_as_voice_merge_drop",
"test_call_video_add_mo_voice_swap_downgrade_merge_drop",
"test_call_video_add_mt_voice_swap_downgrade_merge_drop",
"test_call_volte_add_mo_video_downgrade_merge_drop",
"test_call_volte_add_mt_video_downgrade_merge_drop",
# VT conference - Conference Event Package
"test_call_volte_add_mo_video_accept_as_voice_merge_drop_cep",
"test_call_volte_add_mt_video_accept_as_voice_merge_drop_cep",
"test_call_video_add_mo_voice_swap_downgrade_merge_drop_cep",
"test_call_video_add_mt_voice_swap_downgrade_merge_drop_cep",
"test_call_volte_add_mo_video_downgrade_merge_drop_cep",
"test_call_volte_add_mt_video_downgrade_merge_drop_cep",
# Disable Data, VT not available
"test_disable_data_vt_unavailable", )
self.simconf = load_config(self.user_params["sim_conf_file"])
self.stress_test_number = int(self.user_params["stress_test_number"])
self.wifi_network_ssid = self.user_params["wifi_network_ssid"]
try:
self.wifi_network_pass = self.user_params["wifi_network_pass"]
except KeyError:
self.wifi_network_pass = None
""" Tests Begin """
@TelephonyBaseTest.tel_test_wrap
def test_call_video_to_video(self):
""" Test VT<->VT call functionality.
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with Video Calling).
Call from PhoneA to PhoneB as Bi-Directional Video,
Accept on PhoneB as video call, hang up on PhoneA.
Returns:
True if pass; False if fail.
"""
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])), (phone_setup_video,
(self.log, ads[1]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
ads[0],
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to setup+teardown a call")
return False
return True
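Each test first brings both phones into video-calling mode concurrently via `multithread_func(self.log, tasks)`, where `tasks` is a list of `(callable, args)` pairs and the helper returns True only if every task succeeds. That contract can be sketched with the standard library (an illustration of the pattern, not the real ACTS implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def run_tasks_in_parallel(tasks):
    # tasks: list of (func, args_tuple); run them concurrently and
    # succeed only if every task returns a truthy result.
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = [pool.submit(func, *args) for func, args in tasks]
        return all(future.result() for future in futures)
```

Running the two `phone_setup_video` calls in parallel roughly halves setup time per test, which matters when the suite repeats this preamble in every case.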
@TelephonyBaseTest.tel_test_wrap
def test_call_video_accept_as_voice(self):
""" Test VT<->VT call functionality.
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with Video Calling).
Call from PhoneA to PhoneB as Bi-Directional Video,
Accept on PhoneB as audio only, hang up on PhoneA.
Returns:
True if pass; False if fail.
"""
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])), (phone_setup_video,
(self.log, ads[1]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
ads[0],
video_state=VT_STATE_AUDIO_ONLY,
verify_caller_func=is_phone_in_call_voice_hd,
verify_callee_func=is_phone_in_call_voice_hd):
self.log.error("Failed to setup+teardown a call")
return False
return True
@TelephonyBaseTest.tel_test_wrap
def test_call_video_to_video_mo_disable_camera(self):
""" Test VT<->VT call functionality.
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with Video Calling).
Call from PhoneA to PhoneB as Bi-Directional Video,
Accept on PhoneB as video call.
On PhoneA disabled video transmission.
Verify PhoneA as RX_ENABLED and PhoneB as TX_ENABLED.
Hangup on PhoneA.
Returns:
True if pass; False if fail.
"""
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])), (phone_setup_video,
(self.log, ads[1]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to setup a call")
return False
self.log.info("Disable video on PhoneA:{}".format(ads[0].serial))
if not video_call_downgrade(
self.log, ads[0], get_call_id_in_video_state(
self.log, ads[0], VT_STATE_BIDIRECTIONAL), ads[1],
get_call_id_in_video_state(self.log, ads[1],
VT_STATE_BIDIRECTIONAL)):
self.log.error("Failed to disable video on PhoneA.")
return False
return hangup_call(self.log, ads[0])
@TelephonyBaseTest.tel_test_wrap
def test_call_video_to_video_mt_disable_camera(self):
""" Test VT<->VT call functionality.
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with Video Calling).
Call from PhoneA to PhoneB as Bi-Directional Video,
Accept on PhoneB as video call.
On PhoneB disabled video transmission.
Verify PhoneB as RX_ENABLED and PhoneA as TX_ENABLED.
Hangup on PhoneA.
Returns:
True if pass; False if fail.
"""
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])), (phone_setup_video,
(self.log, ads[1]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to setup a call")
return False
self.log.info("Disable video on PhoneB:{}".format(ads[1].serial))
if not video_call_downgrade(
self.log, ads[1], get_call_id_in_video_state(
self.log, ads[1], VT_STATE_BIDIRECTIONAL), ads[0],
get_call_id_in_video_state(self.log, ads[0],
VT_STATE_BIDIRECTIONAL)):
self.log.error("Failed to disable video on PhoneB.")
return False
return hangup_call(self.log, ads[0])
@TelephonyBaseTest.tel_test_wrap
def test_call_video_to_video_mo_mt_disable_camera(self):
""" Test VT<->VT call functionality.
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with Video Calling).
Call from PhoneA to PhoneB as Bi-Directional Video,
Accept on PhoneB as video call.
On PhoneA disabled video transmission.
Verify PhoneA as RX_ENABLED and PhoneB as TX_ENABLED.
On PhoneB disabled video transmission.
Verify PhoneA as AUDIO_ONLY and PhoneB as AUDIO_ONLY.
Hangup on PhoneA.
Returns:
True if pass; False if fail.
"""
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])), (phone_setup_video,
(self.log, ads[1]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to setup a call")
return False
self.log.info("Disable video on PhoneA:{}".format(ads[0].serial))
if not video_call_downgrade(
self.log, ads[0], get_call_id_in_video_state(
self.log, ads[0], VT_STATE_BIDIRECTIONAL), ads[1],
get_call_id_in_video_state(self.log, ads[1],
VT_STATE_BIDIRECTIONAL)):
self.log.error("Failed to disable video on PhoneA.")
return False
self.log.info("Disable video on PhoneB:{}".format(ads[1].serial))
if not video_call_downgrade(
self.log, ads[1], get_call_id_in_video_state(
self.log, ads[1], VT_STATE_TX_ENABLED), ads[0],
get_call_id_in_video_state(self.log, ads[0],
VT_STATE_RX_ENABLED)):
self.log.error("Failed to disable video on PhoneB.")
return False
return hangup_call(self.log, ads[0])
@TelephonyBaseTest.tel_test_wrap
def test_call_video_to_video_mt_mo_disable_camera(self):
""" Test VT<->VT call functionality.
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with Video Calling).
Call from PhoneA to PhoneB as Bi-Directional Video,
Accept on PhoneB as video call.
On PhoneB, disable video transmission.
Verify PhoneB as RX_ENABLED and PhoneA as TX_ENABLED.
On PhoneA, disable video transmission.
Verify PhoneA as AUDIO_ONLY and PhoneB as AUDIO_ONLY.
Hangup on PhoneA.
Returns:
True if pass; False if fail.
"""
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])), (phone_setup_video,
(self.log, ads[1]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to setup a call")
return False
self.log.info("Disable video on PhoneB:{}".format(ads[1].serial))
if not video_call_downgrade(
self.log, ads[1], get_call_id_in_video_state(
self.log, ads[1], VT_STATE_BIDIRECTIONAL), ads[0],
get_call_id_in_video_state(self.log, ads[0],
VT_STATE_BIDIRECTIONAL)):
self.log.error("Failed to disable video on PhoneB.")
return False
self.log.info("Disable video on PhoneA:{}".format(ads[0].serial))
if not video_call_downgrade(
self.log, ads[0], get_call_id_in_video_state(
self.log, ads[0], VT_STATE_TX_ENABLED), ads[1],
get_call_id_in_video_state(self.log, ads[1],
VT_STATE_RX_ENABLED)):
self.log.error("Failed to disable video on PhoneA.")
return False
return hangup_call(self.log, ads[0])
def _mo_upgrade_bidirectional(self, ads):
"""Send + accept an upgrade request from Phone A to B.
Returns:
True if pass; False if fail.
"""
call_id_requester = get_call_id_in_video_state(self.log, ads[0],
VT_STATE_AUDIO_ONLY)
call_id_responder = get_call_id_in_video_state(self.log, ads[1],
VT_STATE_AUDIO_ONLY)
if not call_id_requester or not call_id_responder:
self.log.error("Couldn't find a candidate call id {}:{}, {}:{}"
.format(ads[0].serial, call_id_requester, ads[
1].serial, call_id_responder))
return False
if not video_call_modify_video(self.log, ads[0], call_id_requester,
ads[1], call_id_responder,
VT_STATE_BIDIRECTIONAL):
self.log.error("Failed to upgrade video call!")
return False
# Wait for the upgrade to complete and ensure the call is stable.
time.sleep(WAIT_TIME_IN_CALL)
if not verify_incall_state(self.log, [ads[0], ads[1]], True):
self.log.error("_mo_upgrade_bidirectional: Call Drop!")
return False
if (get_call_id_in_video_state(self.log, ads[0],
VT_STATE_BIDIRECTIONAL) !=
call_id_requester):
self.log.error("Caller not in correct state: {}".format(
VT_STATE_BIDIRECTIONAL))
return False
if (get_call_id_in_video_state(self.log, ads[1],
VT_STATE_BIDIRECTIONAL) !=
call_id_responder):
self.log.error("Callee not in correct state: {}".format(
VT_STATE_BIDIRECTIONAL))
return False
return hangup_call(self.log, ads[0])
@TelephonyBaseTest.tel_test_wrap
def test_call_video_accept_as_voice_mo_upgrade_bidirectional(self):
""" Test Upgrading from VoLTE to Bi-Directional VT.
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with Video Calling).
Call from PhoneA to PhoneB as Video, accept on PhoneB as audio only.
Send + accept an upgrade request from Phone A to B.
Returns:
True if pass; False if fail.
"""
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])), (phone_setup_video,
(self.log, ads[1]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
None,
video_state=VT_STATE_AUDIO_ONLY,
verify_caller_func=is_phone_in_call_volte,
verify_callee_func=is_phone_in_call_volte):
self.log.error("Failed to setup a call")
return False
return self._mo_upgrade_bidirectional(ads)
@TelephonyBaseTest.tel_test_wrap
def test_call_volte_to_volte_mo_upgrade_bidirectional(self):
""" Test Upgrading from VoLTE to Bi-Directional VT.
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with Video Calling).
Call from PhoneA to PhoneB as VoLTE, accept on PhoneB.
Send + accept an upgrade request from Phone A to B.
Returns:
True if pass; False if fail.
"""
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])), (phone_setup_video,
(self.log, ads[1]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
if not call_setup_teardown(self.log, ads[0], ads[1], None,
is_phone_in_call_volte,
is_phone_in_call_volte):
self.log.error("Failed to setup a call")
return False
return self._mo_upgrade_bidirectional(ads)
def _mo_upgrade_reject(self, ads):
"""Send + reject an upgrade request from Phone A to B.
Returns:
True if pass; False if fail.
"""
call_id_requester = get_call_id_in_video_state(self.log, ads[0],
VT_STATE_AUDIO_ONLY)
call_id_responder = get_call_id_in_video_state(self.log, ads[1],
VT_STATE_AUDIO_ONLY)
if not call_id_requester or not call_id_responder:
self.log.error("Couldn't find a candidate call id {}:{}, {}:{}"
.format(ads[0].serial, call_id_requester, ads[
1].serial, call_id_responder))
return False
if not video_call_modify_video(
self.log, ads[0], call_id_requester, ads[1], call_id_responder,
VT_STATE_BIDIRECTIONAL, VT_VIDEO_QUALITY_DEFAULT,
VT_STATE_AUDIO_ONLY, VT_VIDEO_QUALITY_DEFAULT):
self.log.error("Failed to upgrade video call!")
return False
time.sleep(WAIT_TIME_IN_CALL)
if not is_phone_in_call_voice_hd(self.log, ads[0]):
self.log.error("PhoneA not in correct state.")
return False
if not is_phone_in_call_voice_hd(self.log, ads[1]):
self.log.error("PhoneB not in correct state.")
return False
return hangup_call(self.log, ads[0])
@TelephonyBaseTest.tel_test_wrap
def test_call_volte_to_volte_mo_upgrade_reject(self):
""" Test Upgrading from VoLTE to Bi-Directional VT and reject.
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with Video Calling).
Call from PhoneA to PhoneB as VoLTE, accept on PhoneB.
Send an upgrade request from Phone A to PhoneB.
Reject on PhoneB. Verify PhoneA and PhoneB as AUDIO_ONLY.
Verify call continues.
Hangup on PhoneA.
Returns:
True if pass; False if fail.
"""
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])), (phone_setup_video,
(self.log, ads[1]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
if not call_setup_teardown(self.log, ads[0], ads[1], None,
is_phone_in_call_volte,
is_phone_in_call_volte):
self.log.error("Failed to setup a call")
return False
return self._mo_upgrade_reject(ads)
@TelephonyBaseTest.tel_test_wrap
def test_call_video_accept_as_voice_mo_upgrade_reject(self):
""" Test Upgrading from VoLTE to Bi-Directional VT and reject.
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with Video Calling).
Call from PhoneA to PhoneB as Video, accept on PhoneB as audio only.
Send an upgrade request from Phone A to PhoneB.
Reject on PhoneB. Verify PhoneA and PhoneB as AUDIO_ONLY.
Verify call continues.
Hangup on PhoneA.
Returns:
True if pass; False if fail.
"""
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])), (phone_setup_video,
(self.log, ads[1]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
None,
video_state=VT_STATE_AUDIO_ONLY,
verify_caller_func=is_phone_in_call_volte,
verify_callee_func=is_phone_in_call_volte):
self.log.error("Failed to setup a call")
return False
return self._mo_upgrade_reject(ads)
def _test_put_call_to_backgroundpause_and_foregroundresume(
self, ad_requester, ad_responder):
"""Put the requester's In-Call UI to background and then back to
foreground, verifying RX_PAUSE/RX_RESUME session events and the
expected video call states on both phones.
Returns:
True if pass; False if fail.
"""
call_id_requester = get_call_id_in_video_state(self.log, ad_requester,
VT_STATE_BIDIRECTIONAL)
call_id_responder = get_call_id_in_video_state(self.log, ad_responder,
VT_STATE_BIDIRECTIONAL)
ad_requester.droid.telecomCallVideoStartListeningForEvent(
call_id_requester, EVENT_VIDEO_SESSION_EVENT)
ad_responder.droid.telecomCallVideoStartListeningForEvent(
call_id_responder, EVENT_VIDEO_SESSION_EVENT)
self.log.info("Put In-Call UI on {} to background.".format(
ad_requester.serial))
ad_requester.droid.showHomeScreen()
try:
event_on_responder = ad_responder.ed.pop_event(
EventTelecomVideoCallSessionEvent,
MAX_WAIT_TIME_VIDEO_SESSION_EVENT)
event_on_requester = ad_requester.ed.pop_event(
EventTelecomVideoCallSessionEvent,
MAX_WAIT_TIME_VIDEO_SESSION_EVENT)
if event_on_responder['data']['Event'] != SESSION_EVENT_RX_PAUSE:
self.log.error(
"Event not correct. event_on_responder: {}. Expected :{}".format(
event_on_responder, SESSION_EVENT_RX_PAUSE))
return False
if event_on_requester['data']['Event'] != SESSION_EVENT_RX_PAUSE:
self.log.error(
"Event not correct. event_on_requester: {}. Expected :{}".format(
event_on_requester, SESSION_EVENT_RX_PAUSE))
return False
except Empty:
self.log.error("Expected event not received.")
return False
finally:
ad_requester.droid.telecomCallVideoStopListeningForEvent(
call_id_requester, EVENT_VIDEO_SESSION_EVENT)
ad_responder.droid.telecomCallVideoStopListeningForEvent(
call_id_responder, EVENT_VIDEO_SESSION_EVENT)
time.sleep(WAIT_TIME_IN_CALL)
if not verify_video_call_in_expected_state(
self.log, ad_requester, call_id_requester,
VT_STATE_BIDIRECTIONAL_PAUSED, CALL_STATE_ACTIVE):
return False
if not verify_video_call_in_expected_state(
self.log, ad_responder, call_id_responder,
VT_STATE_BIDIRECTIONAL_PAUSED, CALL_STATE_ACTIVE):
return False
self.log.info("Put In-Call UI on {} to foreground.".format(
ad_requester.serial))
ad_requester.droid.telecomCallVideoStartListeningForEvent(
call_id_requester, EVENT_VIDEO_SESSION_EVENT)
ad_responder.droid.telecomCallVideoStartListeningForEvent(
call_id_responder, EVENT_VIDEO_SESSION_EVENT)
ad_requester.droid.telecomShowInCallScreen()
try:
event_on_responder = ad_responder.ed.pop_event(
EventTelecomVideoCallSessionEvent,
MAX_WAIT_TIME_VIDEO_SESSION_EVENT)
event_on_requester = ad_requester.ed.pop_event(
EventTelecomVideoCallSessionEvent,
MAX_WAIT_TIME_VIDEO_SESSION_EVENT)
if event_on_responder['data']['Event'] != SESSION_EVENT_RX_RESUME:
self.log.error(
"Event not correct. event_on_responder: {}. Expected :{}".format(
event_on_responder, SESSION_EVENT_RX_RESUME))
return False
if event_on_requester['data']['Event'] != SESSION_EVENT_RX_RESUME:
self.log.error(
"Event not correct. event_on_requester: {}. Expected :{}".format(
event_on_requester, SESSION_EVENT_RX_RESUME))
return False
except Empty:
self.log.error("Expected event not received.")
return False
finally:
ad_requester.droid.telecomCallVideoStopListeningForEvent(
call_id_requester, EVENT_VIDEO_SESSION_EVENT)
ad_responder.droid.telecomCallVideoStopListeningForEvent(
call_id_responder, EVENT_VIDEO_SESSION_EVENT)
time.sleep(WAIT_TIME_IN_CALL)
self.log.info("Verify both calls are in bi-directional/active state.")
if not verify_video_call_in_expected_state(
self.log, ad_requester, call_id_requester,
VT_STATE_BIDIRECTIONAL, CALL_STATE_ACTIVE):
return False
if not verify_video_call_in_expected_state(
self.log, ad_responder, call_id_responder,
VT_STATE_BIDIRECTIONAL, CALL_STATE_ACTIVE):
return False
return True
@TelephonyBaseTest.tel_test_wrap
def test_call_video_to_video_mo_to_backgroundpause_foregroundresume(self):
""" Test VT<->VT call pause/resume functionality.
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with Video Calling).
Call from PhoneA to PhoneB as Bi-Directional Video,
Accept on PhoneB as video call.
Put In-Call UI on PhoneA to background, verify video pauses on both
phones, then bring it back to foreground and verify video resumes.
Returns:
True if pass; False if fail.
"""
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])), (phone_setup_video,
(self.log, ads[1]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to setup a call")
return False
time.sleep(WAIT_TIME_ANDROID_STATE_SETTLING)
return self._test_put_call_to_backgroundpause_and_foregroundresume(
ads[0], ads[1])
@TelephonyBaseTest.tel_test_wrap
def test_call_video_to_video_mt_to_backgroundpause_foregroundresume(self):
""" Test VT<->VT call pause/resume functionality.
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with Video Calling).
Call from PhoneA to PhoneB as Bi-Directional Video,
Accept on PhoneB as video call.
Put In-Call UI on PhoneB to background, verify video pauses on both
phones, then bring it back to foreground and verify video resumes.
Returns:
True if pass; False if fail.
"""
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])), (phone_setup_video,
(self.log, ads[1]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to setup a call")
return False
time.sleep(WAIT_TIME_ANDROID_STATE_SETTLING)
return self._test_put_call_to_backgroundpause_and_foregroundresume(
ads[1], ads[0])
def _vt_test_multi_call_hangup(self, ads):
"""Private helper to hang up calls for multi-call VT tests.
Hangup on PhoneB.
Verify PhoneA and PhoneC still in call.
Hangup on PhoneC.
Verify all phones not in call.
"""
if not hangup_call(self.log, ads[1]):
return False
time.sleep(WAIT_TIME_IN_CALL)
if not verify_incall_state(self.log, [ads[0], ads[2]], True):
return False
if not hangup_call(self.log, ads[2]):
return False
time.sleep(WAIT_TIME_IN_CALL)
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], False):
return False
return True
@TelephonyBaseTest.tel_test_wrap
def test_call_video_add_mo_voice(self):
"""
From Phone_A, Initiate a Bi-Directional Video Call to Phone_B
Accept the call on Phone_B as Bi-Directional Video
From Phone_A, add a voice call to Phone_C
Accept the call on Phone_C
Verify both calls remain active.
"""
# This test case is not supported by VZW.
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])),
(phone_setup_video, (self.log, ads[1])), (phone_setup_volte,
(self.log, ads[2]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
self.log.info("Step1: Initiate Video Call PhoneA->PhoneB.")
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to setup a call")
return False
call_id_video = get_call_id_in_video_state(self.log, ads[0],
VT_STATE_BIDIRECTIONAL)
if call_id_video is None:
self.log.error("No active video call in PhoneA.")
return False
self.log.info("Step2: Initiate Voice Call PhoneA->PhoneC.")
if not call_setup_teardown(self.log,
ads[0],
ads[2],
None,
verify_caller_func=None,
verify_callee_func=is_phone_in_call_volte):
self.log.error("Failed to setup a call")
return False
self.log.info(
"Step3: Verify PhoneA's video/voice call in correct state.")
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 2:
self.log.error("Number of active calls on PhoneA is not 2.")
return False
for call in calls:
if call != call_id_video:
call_id_voice = call
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video, VT_STATE_BIDIRECTIONAL,
CALL_STATE_HOLDING):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice, VT_STATE_AUDIO_ONLY,
CALL_STATE_ACTIVE):
return False
return self._vt_test_multi_call_hangup(ads)
@TelephonyBaseTest.tel_test_wrap
def test_call_video_add_mt_voice(self):
"""
From Phone_A, Initiate a Bi-Directional Video Call to Phone_B
Accept the call on Phone_B as Bi-Directional Video
From Phone_C, add a voice call to Phone_A
Accept the call on Phone_A
Verify both calls remain active.
"""
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])),
(phone_setup_video, (self.log, ads[1])), (phone_setup_volte,
(self.log, ads[2]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
self.log.info("Step1: Initiate Video Call PhoneA->PhoneB.")
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to setup a call")
return False
call_id_video = get_call_id_in_video_state(self.log, ads[0],
VT_STATE_BIDIRECTIONAL)
if call_id_video is None:
self.log.error("No active video call in PhoneA.")
return False
self.log.info("Step2: Initiate Voice Call PhoneC->PhoneA.")
if not call_setup_teardown(self.log,
ads[2],
ads[0],
None,
verify_caller_func=is_phone_in_call_volte,
verify_callee_func=None):
self.log.error("Failed to setup a call")
return False
self.log.info(
"Step3: Verify PhoneA's video/voice call in correct state.")
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 2:
self.log.error("Number of active calls on PhoneA is not 2.")
return False
for call in calls:
if call != call_id_video:
call_id_voice = call
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video, VT_STATE_BIDIRECTIONAL_PAUSED,
CALL_STATE_HOLDING):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice, VT_STATE_AUDIO_ONLY,
CALL_STATE_ACTIVE):
return False
return self._vt_test_multi_call_hangup(ads)
@TelephonyBaseTest.tel_test_wrap
def test_call_volte_add_mo_video(self):
"""
From Phone_A, Initiate a VoLTE Call to Phone_B
Accept the call on Phone_B
From Phone_A, add a Video call to Phone_C
Accept the call on Phone_C as Video
Verify both calls remain active.
"""
# This test case is not supported by VZW.
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])),
(phone_setup_volte, (self.log, ads[1])), (phone_setup_video,
(self.log, ads[2]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
self.log.info("Step1: Initiate VoLTE Call PhoneA->PhoneB.")
if not call_setup_teardown(self.log,
ads[0],
ads[1],
None,
verify_caller_func=is_phone_in_call_volte,
verify_callee_func=is_phone_in_call_volte):
self.log.error("Failed to setup a call")
return False
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 1:
self.log.error("Number of active calls on PhoneA is not 1.")
return False
call_id_voice = calls[0]
self.log.info("Step2: Initiate Video Call PhoneA->PhoneC.")
if not video_call_setup_teardown(
self.log,
ads[0],
ads[2],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to setup a call")
return False
call_id_video = get_call_id_in_video_state(self.log, ads[0],
VT_STATE_BIDIRECTIONAL)
if call_id_video is None:
self.log.error("No active video call in PhoneA.")
return False
self.log.info(
"Step3: Verify PhoneA's video/voice call in correct state.")
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 2:
self.log.error("Number of active calls on PhoneA is not 2.")
return False
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video, VT_STATE_BIDIRECTIONAL,
CALL_STATE_ACTIVE):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice, VT_STATE_AUDIO_ONLY,
CALL_STATE_HOLDING):
return False
return self._vt_test_multi_call_hangup(ads)
@TelephonyBaseTest.tel_test_wrap
def test_call_volte_add_mt_video(self):
"""
From Phone_A, Initiate a VoLTE Call to Phone_B
Accept the call on Phone_B
From Phone_C, add a Video call to Phone_A
Accept the call on Phone_A as Video
Verify both calls remain active.
"""
# TODO (b/21437650):
# Test will fail: ~15s after the 2nd call is established, Phone C will
# drop the call.
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])),
(phone_setup_volte, (self.log, ads[1])), (phone_setup_video,
(self.log, ads[2]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
self.log.info("Step1: Initiate VoLTE Call PhoneA->PhoneB.")
if not call_setup_teardown(self.log,
ads[0],
ads[1],
None,
verify_caller_func=is_phone_in_call_volte,
verify_callee_func=is_phone_in_call_volte):
self.log.error("Failed to setup a call")
return False
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 1:
self.log.error("Number of active calls on PhoneA is not 1.")
return False
call_id_voice = calls[0]
self.log.info("Step2: Initiate Video Call PhoneC->PhoneA.")
if not video_call_setup_teardown(
self.log,
ads[2],
ads[0],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to setup a call")
return False
call_id_video = get_call_id_in_video_state(self.log, ads[0],
VT_STATE_BIDIRECTIONAL)
if call_id_video is None:
self.log.error("No active video call in PhoneA.")
return False
self.log.info(
"Step3: Verify PhoneA's video/voice call in correct state.")
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 2:
self.log.error("Number of active calls on PhoneA is not 2.")
return False
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video, VT_STATE_BIDIRECTIONAL,
CALL_STATE_ACTIVE):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice, VT_STATE_AUDIO_ONLY,
CALL_STATE_HOLDING):
return False
return self._vt_test_multi_call_hangup(ads)
@TelephonyBaseTest.tel_test_wrap
def test_call_video_add_mt_voice_swap_once_local_drop(self):
"""
From Phone_A, Initiate a Bi-Directional Video Call to Phone_B
Accept the call on Phone_B as Bi-Directional Video
From Phone_C, add a voice call to Phone_A
Accept the call on Phone_A
Verify both calls remain active.
Swap calls on PhoneA.
End Video call on PhoneA.
End Voice call on PhoneA.
"""
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])),
(phone_setup_video, (self.log, ads[1])), (phone_setup_volte,
(self.log, ads[2]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
self.log.info("Step1: Initiate Video Call PhoneA->PhoneB.")
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to setup a call")
return False
call_id_video = get_call_id_in_video_state(self.log, ads[0],
VT_STATE_BIDIRECTIONAL)
if call_id_video is None:
self.log.error("No active video call in PhoneA.")
return False
self.log.info("Step2: Initiate Voice Call PhoneC->PhoneA.")
if not call_setup_teardown(self.log,
ads[2],
ads[0],
None,
verify_caller_func=is_phone_in_call_volte,
verify_callee_func=None):
self.log.error("Failed to setup a call")
return False
self.log.info(
"Step3: Verify PhoneA's video/voice call in correct state.")
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 2:
self.log.error("Number of active calls on PhoneA is not 2.")
return False
for call in calls:
if call != call_id_video:
call_id_voice = call
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video, VT_STATE_BIDIRECTIONAL_PAUSED,
CALL_STATE_HOLDING):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice, VT_STATE_AUDIO_ONLY,
CALL_STATE_ACTIVE):
return False
self.log.info("Step4: Verify all phones remain in-call.")
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
self.log.info(
"Step5: Swap calls on PhoneA and verify call state correct.")
ads[0].droid.telecomCallHold(call_id_voice)
time.sleep(WAIT_TIME_ANDROID_STATE_SETTLING)
for ad in [ads[0], ads[1]]:
if get_audio_route(self.log, ad) != AUDIO_ROUTE_SPEAKER:
self.log.error("{} Audio is not on speaker.".format(ad.serial))
# TODO: b/26337892 Define expected audio route behavior.
set_audio_route(self.log, ad, AUDIO_ROUTE_EARPIECE)
time.sleep(WAIT_TIME_IN_CALL)
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video, VT_STATE_BIDIRECTIONAL,
CALL_STATE_ACTIVE):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice, VT_STATE_AUDIO_ONLY,
CALL_STATE_HOLDING):
return False
self.log.info("Step6: Drop Video Call on PhoneA.")
disconnect_call_by_id(self.log, ads[0], call_id_video)
time.sleep(WAIT_TIME_IN_CALL)
if not verify_incall_state(self.log, [ads[0], ads[2]], True):
return False
disconnect_call_by_id(self.log, ads[0], call_id_voice)
time.sleep(WAIT_TIME_IN_CALL)
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], False):
return False
return True
@TelephonyBaseTest.tel_test_wrap
def test_call_video_add_mt_voice_swap_twice_remote_drop_voice_unhold_video(
self):
"""
From Phone_A, Initiate a Bi-Directional Video Call to Phone_B
Accept the call on Phone_B as Bi-Directional Video
From Phone_C, add a voice call to Phone_A
Accept the call on Phone_A
Verify both calls remain active.
Swap calls on PhoneA.
Swap calls on PhoneA.
End Voice call on PhoneC.
Unhold Video call on PhoneA.
End Video call on PhoneA.
"""
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])),
(phone_setup_video, (self.log, ads[1])), (phone_setup_volte,
(self.log, ads[2]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
self.log.info("Step1: Initiate Video Call PhoneA->PhoneB.")
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to setup a call")
return False
call_id_video = get_call_id_in_video_state(self.log, ads[0],
VT_STATE_BIDIRECTIONAL)
if call_id_video is None:
self.log.error("No active video call in PhoneA.")
return False
self.log.info("Step2: Initiate Voice Call PhoneC->PhoneA.")
if not call_setup_teardown(self.log,
ads[2],
ads[0],
None,
verify_caller_func=is_phone_in_call_volte,
verify_callee_func=None):
self.log.error("Failed to setup a call")
return False
self.log.info(
"Step3: Verify PhoneA's video/voice call in correct state.")
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 2:
self.log.error("Number of active calls on PhoneA is not 2.")
return False
for call in calls:
if call != call_id_video:
call_id_voice = call
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video, VT_STATE_BIDIRECTIONAL_PAUSED,
CALL_STATE_HOLDING):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice, VT_STATE_AUDIO_ONLY,
CALL_STATE_ACTIVE):
return False
self.log.info("Step4: Verify all phones remain in-call.")
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
self.log.info(
"Step5: Swap calls on PhoneA and verify call state correct.")
ads[0].droid.telecomCallHold(call_id_voice)
time.sleep(WAIT_TIME_ANDROID_STATE_SETTLING)
for ad in [ads[0], ads[1]]:
if get_audio_route(self.log, ad) != AUDIO_ROUTE_SPEAKER:
self.log.error("{} Audio is not on speaker.".format(ad.serial))
# TODO: b/26337892 Define expected audio route behavior.
set_audio_route(self.log, ad, AUDIO_ROUTE_EARPIECE)
time.sleep(WAIT_TIME_IN_CALL)
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video, VT_STATE_BIDIRECTIONAL,
CALL_STATE_ACTIVE):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice, VT_STATE_AUDIO_ONLY,
CALL_STATE_HOLDING):
return False
self.log.info(
"Step6: Swap calls on PhoneA and verify call state correct.")
ads[0].droid.telecomCallHold(call_id_video)
time.sleep(WAIT_TIME_ANDROID_STATE_SETTLING)
# Audio should route to the earpiece here.
for ad in [ads[0], ads[1]]:
if get_audio_route(self.log, ad) != AUDIO_ROUTE_EARPIECE:
self.log.error("{} Audio is not on EARPIECE.".format(
ad.serial))
# TODO: b/26337892 Define expected audio route behavior.
time.sleep(WAIT_TIME_IN_CALL)
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video, VT_STATE_BIDIRECTIONAL,
CALL_STATE_HOLDING):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice, VT_STATE_AUDIO_ONLY,
CALL_STATE_ACTIVE):
return False
self.log.info("Step7: Drop Voice Call on PhoneC.")
hangup_call(self.log, ads[2])
time.sleep(WAIT_TIME_IN_CALL)
if not verify_incall_state(self.log, [ads[0], ads[1]], True):
return False
self.log.info(
"Step8: Unhold Video call on PhoneA and verify call state.")
ads[0].droid.telecomCallUnhold(call_id_video)
time.sleep(WAIT_TIME_ANDROID_STATE_SETTLING)
# Audio should route to the earpiece here.
for ad in [ads[0], ads[1]]:
if get_audio_route(self.log, ad) != AUDIO_ROUTE_EARPIECE:
self.log.error("{} Audio is not on EARPIECE.".format(
ad.serial))
# TODO: b/26337892 Define expected audio route behavior.
time.sleep(WAIT_TIME_IN_CALL)
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video, VT_STATE_BIDIRECTIONAL,
CALL_STATE_ACTIVE):
return False
self.log.info("Step9: Drop Video Call on PhoneA.")
disconnect_call_by_id(self.log, ads[0], call_id_video)
time.sleep(WAIT_TIME_IN_CALL)
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], False):
return False
return True
@TelephonyBaseTest.tel_test_wrap
def test_call_video_add_mo_video(self):
"""
From Phone_A, Initiate a Bi-Directional Video Call to Phone_B
Accept the call on Phone_B as Bi-Directional Video
From Phone_A, add a Bi-Directional Video Call to Phone_C
Accept the call on Phone_C
Verify both calls remain active.
"""
# This test case is not supported by VZW.
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])),
(phone_setup_video, (self.log, ads[1])), (phone_setup_video,
(self.log, ads[2]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
self.log.info("Step1: Initiate Video Call PhoneA->PhoneB.")
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to set up a call")
return False
call_id_video_ab = get_call_id_in_video_state(self.log, ads[0],
VT_STATE_BIDIRECTIONAL)
if call_id_video_ab is None:
self.log.error("No active video call in PhoneA.")
return False
self.log.info("Step2: Initiate Video Call PhoneA->PhoneC.")
if not video_call_setup_teardown(
self.log,
ads[0],
ads[2],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to set up a call")
return False
self.log.info("Step3: Verify PhoneA's video calls in correct state.")
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 2:
self.log.error("Number of active calls on PhoneA is not 2.")
return False
for call in calls:
if call != call_id_video_ab:
call_id_video_ac = call
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video_ab, VT_STATE_BIDIRECTIONAL,
CALL_STATE_HOLDING):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video_ac, VT_STATE_BIDIRECTIONAL,
CALL_STATE_ACTIVE):
return False
return self._vt_test_multi_call_hangup(ads)
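# The Step3 loops above select the A-C call id by exclusion: it is the only id
# reported by telecomCallGetCallIds() that differs from the known A-B id. A
# minimal standalone sketch of that selection follows; find_other_call_id is a
# hypothetical helper, not part of this test class:

```python
def find_other_call_id(call_ids, known_id):
    """Return the single call id in call_ids that is not known_id.

    Returns None when there is not exactly one other id, mirroring the
    ambiguity the tests guard against with the num_active_calls check.
    """
    others = [cid for cid in call_ids if cid != known_id]
    return others[0] if len(others) == 1 else None
```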
@TelephonyBaseTest.tel_test_wrap
def test_call_video_add_mt_video(self):
"""
From Phone_A, Initiate a Bi-Directional Video Call to Phone_B
Accept the call on Phone_B as Bi-Directional Video
From Phone_C, add a Bi-Directional Video Call to Phone_A
Accept the call on Phone_A
Verify both calls remain active.
Hang up on PhoneC.
Hang up on PhoneA.
"""
# TODO: b/21437650 Test will fail: ~15s after the 2nd call is
# established, Phone C will drop the call.
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])),
(phone_setup_video, (self.log, ads[1])), (phone_setup_video,
(self.log, ads[2]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
self.log.info("Step1: Initiate Video Call PhoneA->PhoneB.")
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to set up a call")
return False
call_id_video_ab = get_call_id_in_video_state(self.log, ads[0],
VT_STATE_BIDIRECTIONAL)
if call_id_video_ab is None:
self.log.error("No active video call in PhoneA.")
return False
self.log.info("Step2: Initiate Video Call PhoneC->PhoneA.")
if not video_call_setup_teardown(
self.log,
ads[2],
ads[0],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to set up a call")
return False
self.log.info("Step3: Verify PhoneA's video calls in correct state.")
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 2:
self.log.error("Number of active calls on PhoneA is not 2.")
return False
for call in calls:
if call != call_id_video_ab:
call_id_video_ac = call
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video_ab,
VT_STATE_BIDIRECTIONAL_PAUSED, CALL_STATE_HOLDING):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video_ac, VT_STATE_BIDIRECTIONAL,
CALL_STATE_ACTIVE):
return False
self.log.info("Step4: Hangup on PhoneC.")
if not hangup_call(self.log, ads[2]):
return False
time.sleep(WAIT_TIME_IN_CALL)
if not verify_incall_state(self.log, [ads[0], ads[1]], True):
return False
self.log.info("Step5: Hangup on PhoneA.")
if not hangup_call(self.log, ads[0]):
return False
time.sleep(WAIT_TIME_IN_CALL)
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], False):
return False
return True
@TelephonyBaseTest.tel_test_wrap
def test_call_mt_video_add_mt_video(self):
"""
From Phone_B, Initiate a Bi-Directional Video Call to Phone_A
Accept the call on Phone_A as Bi-Directional Video
From Phone_C, add a Bi-Directional Video Call to Phone_A
Accept the call on Phone_A
Verify both calls remain active.
Hang up on PhoneC.
Hang up on PhoneA.
"""
# TODO: b/21437650 Test will fail: ~15s after the 2nd call is
# established, Phone C will drop the call.
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])),
(phone_setup_video, (self.log, ads[1])), (phone_setup_video,
(self.log, ads[2]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
self.log.info("Step1: Initiate Video Call PhoneB->PhoneA.")
if not video_call_setup_teardown(
self.log,
ads[1],
ads[0],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to set up a call")
return False
call_id_video_ab = get_call_id_in_video_state(self.log, ads[0],
VT_STATE_BIDIRECTIONAL)
if call_id_video_ab is None:
self.log.error("No active video call in PhoneA.")
return False
self.log.info("Step2: Initiate Video Call PhoneC->PhoneA.")
if not video_call_setup_teardown(
self.log,
ads[2],
ads[0],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to set up a call")
return False
self.log.info("Step3: Verify PhoneA's video calls in correct state.")
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 2:
self.log.error("Number of active calls on PhoneA is not 2.")
return False
for call in calls:
if call != call_id_video_ab:
call_id_video_ac = call
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video_ab,
VT_STATE_BIDIRECTIONAL_PAUSED, CALL_STATE_HOLDING):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video_ac, VT_STATE_BIDIRECTIONAL,
CALL_STATE_ACTIVE):
return False
self.log.info("Step4: Hangup on PhoneC.")
if not hangup_call(self.log, ads[2]):
return False
time.sleep(WAIT_TIME_IN_CALL)
if not verify_incall_state(self.log, [ads[0], ads[1]], True):
return False
self.log.info("Step5: Hangup on PhoneA.")
if not hangup_call(self.log, ads[0]):
return False
time.sleep(WAIT_TIME_IN_CALL)
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], False):
return False
return True
@TelephonyBaseTest.tel_test_wrap
def test_call_mt_video_add_mo_video(self):
"""
From Phone_B, Initiate a Bi-Directional Video Call to Phone_A
Accept the call on Phone_A as Bi-Directional Video
From Phone_A, add a Bi-Directional Video Call to Phone_C
Accept the call on Phone_C
Verify both calls remain active.
"""
# This test case is not supported by VZW.
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])),
(phone_setup_video, (self.log, ads[1])), (phone_setup_video,
(self.log, ads[2]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
self.log.info("Step1: Initiate Video Call PhoneB->PhoneA.")
if not video_call_setup_teardown(
self.log,
ads[1],
ads[0],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to set up a call")
return False
call_id_video_ab = get_call_id_in_video_state(self.log, ads[0],
VT_STATE_BIDIRECTIONAL)
if call_id_video_ab is None:
self.log.error("No active video call in PhoneA.")
return False
self.log.info("Step2: Initiate Video Call PhoneA->PhoneC.")
if not video_call_setup_teardown(
self.log,
ads[0],
ads[2],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to set up a call")
return False
self.log.info("Step3: Verify PhoneA's video calls in correct state.")
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 2:
self.log.error("Number of active calls on PhoneA is not 2.")
return False
for call in calls:
if call != call_id_video_ab:
call_id_video_ac = call
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video_ab, VT_STATE_BIDIRECTIONAL,
CALL_STATE_HOLDING):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video_ac, VT_STATE_BIDIRECTIONAL,
CALL_STATE_ACTIVE):
return False
return self._vt_test_multi_call_hangup(ads)
def _test_vt_conference_merge_drop(self, ads, call_ab_id, call_ac_id):
"""Test conference merge and drop for VT call test.
PhoneA in call with PhoneB.
PhoneA in call with PhoneC.
Merge calls to conference on PhoneA.
Hangup on PhoneB, check the call continues between A and C.
Hangup on PhoneC.
Hangup on PhoneA.
Args:
call_ab_id: call id for call_AB on PhoneA.
call_ac_id: call id for call_AC on PhoneA.
Returns:
True if successful;
False otherwise.
"""
self.log.info(
"Merge - Step1: Merge to Conf Call and verify Conf Call.")
ads[0].droid.telecomCallJoinCallsInConf(call_ab_id, call_ac_id)
time.sleep(WAIT_TIME_IN_CALL)
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 1:
self.log.error("Total number of call ids in {} is not 1.".format(
ads[0].serial))
return False
call_conf_id = None
for call_id in calls:
if call_id != call_ab_id and call_id != call_ac_id:
call_conf_id = call_id
if not call_conf_id:
self.log.error("Merge call fail, no new conference call id.")
return False
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
# Check if Conf Call is currently active
if ads[0].droid.telecomCallGetCallState(
call_conf_id) != CALL_STATE_ACTIVE:
self.log.error(
"Call_id:{}, state:{}, expected: STATE_ACTIVE".format(
call_conf_id, ads[0].droid.telecomCallGetCallState(
call_conf_id)))
return False
self.log.info(
"Merge - Step2: End call on PhoneB and verify call continues.")
ads[1].droid.telecomEndCall()
time.sleep(WAIT_TIME_IN_CALL)
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if not verify_incall_state(self.log, [ads[0], ads[2]], True):
return False
if not verify_incall_state(self.log, [ads[1]], False):
return False
ads[2].droid.telecomEndCall()
ads[0].droid.telecomEndCall()
return True
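# Merge - Step1 above discovers the new conference id as the one call id that
# matches neither of the two pre-merge ids. That discovery can be sketched on
# its own as follows; find_new_conference_id is a hypothetical helper, not
# part of this test class:

```python
def find_new_conference_id(call_ids, old_ids):
    """Return the id in call_ids that appears in none of old_ids.

    Returns None unless exactly one new id exists, which matches the
    "no new conference call id" failure path in the merge test.
    """
    new_ids = set(call_ids) - set(old_ids)
    return new_ids.pop() if len(new_ids) == 1 else None
```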
def _test_vt_conference_merge_drop_cep(self, ads, call_ab_id, call_ac_id):
"""Merge CEP conference call.
PhoneA in IMS (VoLTE or WiFi Calling) call with PhoneB.
PhoneA in IMS (VoLTE or WiFi Calling) call with PhoneC.
Merge calls to conference on PhoneA (CEP enabled IMS conference).
Args:
call_ab_id: call id for call_AB on PhoneA.
call_ac_id: call id for call_AC on PhoneA.
Returns:
True if merge succeeds and the post-merge checks pass;
False otherwise.
"""
self.log.info("Step4: Merge to Conf Call and verify Conf Call.")
ads[0].droid.telecomCallJoinCallsInConf(call_ab_id, call_ac_id)
time.sleep(WAIT_TIME_IN_CALL)
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
call_conf_id = get_cep_conference_call_id(ads[0])
if call_conf_id is None:
self.log.error(
"No call with children. CEP is probably not enabled or the merge failed.")
return False
calls.remove(call_conf_id)
if (set(ads[0].droid.telecomCallGetCallChildren(call_conf_id)) !=
set(calls)):
self.log.error(
"Children list<{}> for conference call is not correct.".format(
ads[0].droid.telecomCallGetCallChildren(call_conf_id)))
return False
if (CALL_PROPERTY_CONFERENCE not in
ads[0].droid.telecomCallGetProperties(call_conf_id)):
self.log.error("Conf call id properties wrong: {}".format(ads[
0].droid.telecomCallGetProperties(call_conf_id)))
return False
if (CALL_CAPABILITY_MANAGE_CONFERENCE not in
ads[0].droid.telecomCallGetCapabilities(call_conf_id)):
self.log.error("Conf call id capabilities wrong: {}".format(ads[
0].droid.telecomCallGetCapabilities(call_conf_id)))
return False
if (call_ab_id in calls) or (call_ac_id in calls):
self.log.error(
"Previous call ids should not be in the new call list after merge.")
return False
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
# Check if Conf Call is currently active
if ads[0].droid.telecomCallGetCallState(
call_conf_id) != CALL_STATE_ACTIVE:
self.log.error(
"Call_id:{}, state:{}, expected: STATE_ACTIVE".format(
call_conf_id, ads[0].droid.telecomCallGetCallState(
call_conf_id)))
return False
self.log.info(
"End call on PhoneB and verify call continues.")
ads[1].droid.telecomEndCall()
time.sleep(WAIT_TIME_IN_CALL)
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if not verify_incall_state(self.log, [ads[0], ads[2]], True):
return False
if not verify_incall_state(self.log, [ads[1]], False):
return False
ads[2].droid.telecomEndCall()
ads[0].droid.telecomEndCall()
return True
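# The CEP children check above compares telecomCallGetCallChildren() against
# every reported call id except the conference id itself. A standalone sketch
# of that set comparison; cep_children_consistent is a hypothetical helper,
# not part of this test class:

```python
def cep_children_consistent(call_ids, conf_id, children):
    """True when children equal all reported call ids minus the conf id.

    Order does not matter, so both sides are compared as sets, just as
    the merge test does after removing the conference id from the list.
    """
    return set(children) == set(call_ids) - {conf_id}
```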
@TelephonyBaseTest.tel_test_wrap
def test_call_volte_add_mo_video_accept_as_voice_merge_drop(self):
"""Conference call
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with VoLTE).
Make Sure PhoneC is in LTE mode (with Video Calling).
PhoneA VoLTE call to PhoneB. Accept on PhoneB.
PhoneA add a Bi-Directional Video call to PhoneC.
PhoneC accept as voice.
Merge call on PhoneA.
Hang up on PhoneB.
Hang up on PhoneC.
"""
return self._test_call_volte_add_mo_video_accept_as_voice_merge_drop(
False)
@TelephonyBaseTest.tel_test_wrap
def test_call_volte_add_mo_video_accept_as_voice_merge_drop_cep(self):
"""Conference call
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with VoLTE).
Make Sure PhoneC is in LTE mode (with Video Calling).
PhoneA VoLTE call to PhoneB. Accept on PhoneB.
PhoneA add a Bi-Directional Video call to PhoneC.
PhoneC accept as voice.
Merge call on PhoneA.
Hang up on PhoneB.
Hang up on PhoneC.
"""
return self._test_call_volte_add_mo_video_accept_as_voice_merge_drop(
True)
def _test_call_volte_add_mo_video_accept_as_voice_merge_drop(
self,
use_cep=False):
# This test case is not supported by VZW.
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])),
(phone_setup_volte, (self.log, ads[1])), (phone_setup_video,
(self.log, ads[2]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
self.log.info("Step1: Initiate VoLTE Call PhoneA->PhoneB.")
if not call_setup_teardown(self.log, ads[0], ads[1], None,
is_phone_in_call_volte,
is_phone_in_call_volte):
self.log.error("Failed to set up a call")
return False
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA{}".format(calls))
if num_active_calls(self.log, ads[0]) != 1:
self.log.error("Active call numbers in PhoneA is not 1.")
return False
call_ab_id = calls[0]
self.log.info(
"Step2: Initiate Video Call PhoneA->PhoneC and accept as voice.")
if not video_call_setup_teardown(
self.log,
ads[0],
ads[2],
None,
video_state=VT_STATE_AUDIO_ONLY,
verify_caller_func=is_phone_in_call_voice_hd,
verify_callee_func=is_phone_in_call_voice_hd):
self.log.error("Failed to set up a call")
return False
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 2:
self.log.error("Number of active calls on PhoneA is not 2.")
return False
for call in calls:
if call != call_ab_id:
call_ac_id = call
self.log.info("Step3: Verify calls in correct state.")
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_ab_id, VT_STATE_AUDIO_ONLY,
CALL_STATE_HOLDING):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_ac_id, VT_STATE_AUDIO_ONLY,
CALL_STATE_ACTIVE):
return False
return {False: self._test_vt_conference_merge_drop,
True: self._test_vt_conference_merge_drop_cep}[use_cep](
ads, call_ab_id, call_ac_id)
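# The return statement above dispatches on use_cep through a dict literal,
# picking the plain or CEP merge-drop helper. That idiom, isolated for
# clarity; dispatch_merge_drop and the lambdas in the test are hypothetical
# stand-ins, not part of this suite:

```python
def dispatch_merge_drop(use_cep, plain_func, cep_func, *args):
    """Equivalent to the inline {False: ..., True: ...}[use_cep](*args)."""
    return {False: plain_func, True: cep_func}[use_cep](*args)
```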
@TelephonyBaseTest.tel_test_wrap
def test_call_volte_add_mt_video_accept_as_voice_merge_drop(self):
"""Conference call
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with VoLTE).
Make Sure PhoneC is in LTE mode (with Video Calling).
PhoneA VoLTE call to PhoneB. Accept on PhoneB.
PhoneC add a Bi-Directional Video call to PhoneA.
PhoneA accept as voice.
Merge call on PhoneA.
Hang up on PhoneB.
Hang up on PhoneC.
"""
return self._test_call_volte_add_mt_video_accept_as_voice_merge_drop(
False)
@TelephonyBaseTest.tel_test_wrap
def test_call_volte_add_mt_video_accept_as_voice_merge_drop_cep(self):
"""Conference call
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with VoLTE).
Make Sure PhoneC is in LTE mode (with Video Calling).
PhoneA VoLTE call to PhoneB. Accept on PhoneB.
PhoneC add a Bi-Directional Video call to PhoneA.
PhoneA accept as voice.
Merge call on PhoneA.
Hang up on PhoneB.
Hang up on PhoneC.
"""
return self._test_call_volte_add_mt_video_accept_as_voice_merge_drop(
True)
def _test_call_volte_add_mt_video_accept_as_voice_merge_drop(
self,
use_cep=False):
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])),
(phone_setup_volte, (self.log, ads[1])), (phone_setup_video,
(self.log, ads[2]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
self.log.info("Step1: Initiate VoLTE Call PhoneA->PhoneB.")
if not call_setup_teardown(self.log, ads[0], ads[1], None,
is_phone_in_call_volte,
is_phone_in_call_volte):
self.log.error("Failed to set up a call")
return False
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 1:
self.log.error("Number of active calls on PhoneA is not 1.")
return False
call_ab_id = calls[0]
self.log.info(
"Step2: Initiate Video Call PhoneC->PhoneA and accept as voice.")
if not video_call_setup_teardown(
self.log,
ads[2],
ads[0],
None,
video_state=VT_STATE_AUDIO_ONLY,
verify_caller_func=is_phone_in_call_voice_hd,
verify_callee_func=is_phone_in_call_voice_hd):
self.log.error("Failed to set up a call")
return False
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 2:
self.log.error("Number of active calls on PhoneA is not 2.")
return False
for call in calls:
if call != call_ab_id:
call_ac_id = call
self.log.info("Step3: Verify calls in correct state.")
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_ab_id, VT_STATE_AUDIO_ONLY,
CALL_STATE_HOLDING):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_ac_id, VT_STATE_AUDIO_ONLY,
CALL_STATE_ACTIVE):
return False
return {False: self._test_vt_conference_merge_drop,
True: self._test_vt_conference_merge_drop_cep}[use_cep](
ads, call_ab_id, call_ac_id)
@TelephonyBaseTest.tel_test_wrap
def test_call_video_add_mo_voice_swap_downgrade_merge_drop(self):
"""Conference call
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with Video Calling).
Make Sure PhoneC is in LTE mode (with VoLTE).
PhoneA add a Bi-Directional Video call to PhoneB.
PhoneB accept as Video.
PhoneA VoLTE call to PhoneC. Accept on PhoneC.
Swap Active call on PhoneA.
Downgrade Video call on PhoneA and PhoneB to audio only.
Merge call on PhoneA.
Hang up on PhoneB.
Hang up on PhoneC.
"""
return self._test_call_video_add_mo_voice_swap_downgrade_merge_drop(
False)
@TelephonyBaseTest.tel_test_wrap
def test_call_video_add_mo_voice_swap_downgrade_merge_drop_cep(self):
"""Conference call
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with Video Calling).
Make Sure PhoneC is in LTE mode (with VoLTE).
PhoneA add a Bi-Directional Video call to PhoneB.
PhoneB accept as Video.
PhoneA VoLTE call to PhoneC. Accept on PhoneC.
Swap Active call on PhoneA.
Downgrade Video call on PhoneA and PhoneB to audio only.
Merge call on PhoneA.
Hang up on PhoneB.
Hang up on PhoneC.
"""
return self._test_call_video_add_mo_voice_swap_downgrade_merge_drop(
True)
def _test_call_video_add_mo_voice_swap_downgrade_merge_drop(self, use_cep):
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])),
(phone_setup_video, (self.log, ads[1])), (phone_setup_volte,
(self.log, ads[2]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
self.log.info("Step1: Initiate Video Call PhoneA->PhoneB.")
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to set up a call")
return False
call_id_video_ab = get_call_id_in_video_state(self.log, ads[0],
VT_STATE_BIDIRECTIONAL)
if call_id_video_ab is None:
self.log.error("No active video call in PhoneA.")
return False
self.log.info("Step2: Initiate Voice Call PhoneA->PhoneC.")
if not call_setup_teardown(self.log,
ads[0],
ads[2],
None,
verify_caller_func=None,
verify_callee_func=is_phone_in_call_volte):
self.log.error("Failed to set up a call")
return False
self.log.info(
"Step3: Verify PhoneA's video/voice call in correct state.")
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 2:
self.log.error("Number of active calls on PhoneA is not 2.")
return False
for call in calls:
if call != call_id_video_ab:
call_id_voice_ac = call
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video_ab, VT_STATE_BIDIRECTIONAL,
CALL_STATE_HOLDING):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice_ac, VT_STATE_AUDIO_ONLY,
CALL_STATE_ACTIVE):
return False
self.log.info(
"Step4: Swap calls on PhoneA and verify call state correct.")
ads[0].droid.telecomCallHold(call_id_voice_ac)
time.sleep(WAIT_TIME_ANDROID_STATE_SETTLING)
for ad in [ads[0], ads[1]]:
self.log.info("{} audio: {}".format(ad.serial, get_audio_route(
self.log, ad)))
set_audio_route(self.log, ad, AUDIO_ROUTE_EARPIECE)
time.sleep(WAIT_TIME_IN_CALL)
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video_ab, VT_STATE_BIDIRECTIONAL,
CALL_STATE_ACTIVE):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice_ac, VT_STATE_AUDIO_ONLY,
CALL_STATE_HOLDING):
return False
self.log.info("Step5: Disable camera on PhoneA and PhoneB.")
if not video_call_downgrade(self.log, ads[0], call_id_video_ab, ads[1],
get_call_id_in_video_state(
self.log, ads[1],
VT_STATE_BIDIRECTIONAL)):
self.log.error("Failed to disable video on PhoneA.")
return False
if not video_call_downgrade(
self.log, ads[1], get_call_id_in_video_state(
self.log, ads[1], VT_STATE_TX_ENABLED), ads[0],
call_id_video_ab):
self.log.error("Failed to disable video on PhoneB.")
return False
self.log.info("Step6: Verify calls in correct state.")
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video_ab, VT_STATE_AUDIO_ONLY,
CALL_STATE_ACTIVE):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice_ac, VT_STATE_AUDIO_ONLY,
CALL_STATE_HOLDING):
return False
return {False: self._test_vt_conference_merge_drop,
True: self._test_vt_conference_merge_drop_cep}[use_cep](
ads, call_id_video_ab, call_id_voice_ac)
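# Several steps above rely on fixed time.sleep(WAIT_TIME_ANDROID_STATE_SETTLING)
# pauses before checking call state. A polling wait is often more robust than a
# fixed sleep; a minimal sketch under that assumption (wait_for is a
# hypothetical helper, not part of this suite or the ACTS utilities):

```python
import time


def wait_for(condition, timeout_s=30.0, interval_s=1.0):
    """Poll condition() until it returns a truthy value or timeout_s elapses.

    Returns True as soon as condition() succeeds, False on timeout. The
    caller supplies condition as a zero-argument callable, e.g. a closure
    over a call-state query.
    """
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if condition():
            return True
        time.sleep(interval_s)
    return False
```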
@TelephonyBaseTest.tel_test_wrap
def test_call_video_add_mt_voice_swap_downgrade_merge_drop(self):
"""Conference call
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with Video Calling).
Make Sure PhoneC is in LTE mode (with VoLTE).
PhoneA add a Bi-Directional Video call to PhoneB.
PhoneB accept as Video.
PhoneC VoLTE call to PhoneA. Accept on PhoneA.
Swap Active call on PhoneA.
Downgrade Video call on PhoneA and PhoneB to audio only.
Merge call on PhoneA.
Hang up on PhoneB.
Hang up on PhoneC.
"""
return self._test_call_video_add_mt_voice_swap_downgrade_merge_drop(
False)
@TelephonyBaseTest.tel_test_wrap
def test_call_video_add_mt_voice_swap_downgrade_merge_drop_cep(self):
"""Conference call
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with Video Calling).
Make Sure PhoneC is in LTE mode (with VoLTE).
PhoneA add a Bi-Directional Video call to PhoneB.
PhoneB accept as Video.
PhoneC VoLTE call to PhoneA. Accept on PhoneA.
Swap Active call on PhoneA.
Downgrade Video call on PhoneA and PhoneB to audio only.
Merge call on PhoneA.
Hang up on PhoneB.
Hang up on PhoneC.
"""
return self._test_call_video_add_mt_voice_swap_downgrade_merge_drop(
True)
def _test_call_video_add_mt_voice_swap_downgrade_merge_drop(self,
use_cep=False):
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])),
(phone_setup_video, (self.log, ads[1])), (phone_setup_volte,
(self.log, ads[2]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
self.log.info("Step1: Initiate Video Call PhoneA->PhoneB.")
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to setup a call")
return False
call_id_video_ab = get_call_id_in_video_state(self.log, ads[0],
VT_STATE_BIDIRECTIONAL)
if call_id_video_ab is None:
self.log.error("No active video call in PhoneA.")
return False
self.log.info("Step2: Initiate Voice Call PhoneC->PhoneA.")
if not call_setup_teardown(self.log,
ads[2],
ads[0],
None,
verify_caller_func=is_phone_in_call_volte,
verify_callee_func=None):
self.log.error("Failed to set up a call")
return False
self.log.info(
"Step3: Verify PhoneA's video/voice call in correct state.")
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 2:
self.log.error("Number of active calls on PhoneA is not 2.")
return False
for call in calls:
if call != call_id_video_ab:
call_id_voice_ac = call
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video_ab,
VT_STATE_BIDIRECTIONAL_PAUSED, CALL_STATE_HOLDING):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice_ac, VT_STATE_AUDIO_ONLY,
CALL_STATE_ACTIVE):
return False
self.log.info(
"Step4: Swap calls on PhoneA and verify call state correct.")
ads[0].droid.telecomCallHold(call_id_voice_ac)
time.sleep(WAIT_TIME_ANDROID_STATE_SETTLING)
for ad in [ads[0], ads[1]]:
if get_audio_route(self.log, ad) != AUDIO_ROUTE_SPEAKER:
self.log.error("{} Audio is not on speaker.".format(ad.serial))
# TODO: b/26337892 Define expected audio route behavior.
set_audio_route(self.log, ad, AUDIO_ROUTE_EARPIECE)
time.sleep(WAIT_TIME_IN_CALL)
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video_ab, VT_STATE_BIDIRECTIONAL,
CALL_STATE_ACTIVE):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice_ac, VT_STATE_AUDIO_ONLY,
CALL_STATE_HOLDING):
return False
self.log.info("Step5: Disable camera on PhoneA and PhoneB.")
if not video_call_downgrade(self.log, ads[0], call_id_video_ab, ads[1],
get_call_id_in_video_state(
self.log, ads[1],
VT_STATE_BIDIRECTIONAL)):
self.log.error("Failed to disable video on PhoneA.")
return False
if not video_call_downgrade(
self.log, ads[1], get_call_id_in_video_state(
self.log, ads[1], VT_STATE_TX_ENABLED), ads[0],
call_id_video_ab):
self.log.error("Failed to disable video on PhoneB.")
return False
self.log.info("Step6: Verify calls in correct state.")
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video_ab, VT_STATE_AUDIO_ONLY,
CALL_STATE_ACTIVE):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice_ac, VT_STATE_AUDIO_ONLY,
CALL_STATE_HOLDING):
return False
return {False: self._test_vt_conference_merge_drop,
True: self._test_vt_conference_merge_drop_cep}[use_cep](
ads, call_id_video_ab, call_id_voice_ac)
@TelephonyBaseTest.tel_test_wrap
def test_call_volte_add_mo_video_downgrade_merge_drop(self):
"""Conference call
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with VoLTE).
Make Sure PhoneC is in LTE mode (with Video Calling).
PhoneA VoLTE call to PhoneB. Accept on PhoneB.
PhoneA add a Bi-Directional Video call to PhoneC.
PhoneC accept as Video.
Downgrade Video call on PhoneA and PhoneC to audio only.
Merge call on PhoneA.
Hang up on PhoneB.
Hang up on PhoneC.
"""
return self._test_call_volte_add_mo_video_downgrade_merge_drop(False)
@TelephonyBaseTest.tel_test_wrap
def test_call_volte_add_mo_video_downgrade_merge_drop_cep(self):
"""Conference call
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with VoLTE).
Make Sure PhoneC is in LTE mode (with Video Calling).
PhoneA VoLTE call to PhoneB. Accept on PhoneB.
PhoneA add a Bi-Directional Video call to PhoneC.
PhoneC accept as Video.
Downgrade Video call on PhoneA and PhoneC to audio only.
Merge call on PhoneA.
Hang up on PhoneB.
Hang up on PhoneC.
"""
return self._test_call_volte_add_mo_video_downgrade_merge_drop(True)
def _test_call_volte_add_mo_video_downgrade_merge_drop(self, use_cep):
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])),
(phone_setup_volte, (self.log, ads[1])), (phone_setup_video,
(self.log, ads[2]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
self.log.info("Step1: Initiate VoLTE Call PhoneA->PhoneB.")
if not call_setup_teardown(self.log,
ads[0],
ads[1],
None,
verify_caller_func=is_phone_in_call_volte,
verify_callee_func=is_phone_in_call_volte):
self.log.error("Failed to set up a call")
return False
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 1:
self.log.error("Number of active calls on PhoneA is not 1.")
return False
call_id_voice_ab = calls[0]
self.log.info("Step2: Initiate Video Call PhoneA->PhoneC.")
if not video_call_setup_teardown(
self.log,
ads[0],
ads[2],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to set up a call")
return False
call_id_video_ac = get_call_id_in_video_state(self.log, ads[0],
VT_STATE_BIDIRECTIONAL)
if call_id_video_ac is None:
self.log.error("No active video call in PhoneA.")
return False
self.log.info(
"Step3: Verify PhoneA's video/voice call in correct state.")
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA: {}".format(calls))
if num_active_calls(self.log, ads[0]) != 2:
self.log.error("Number of active calls on PhoneA is not 2.")
return False
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video_ac, VT_STATE_BIDIRECTIONAL,
CALL_STATE_ACTIVE):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice_ab, VT_STATE_AUDIO_ONLY,
CALL_STATE_HOLDING):
return False
self.log.info("Step4: Disable camera on PhoneA and PhoneC.")
if not video_call_downgrade(self.log, ads[0], call_id_video_ac, ads[2],
get_call_id_in_video_state(
self.log, ads[2],
VT_STATE_BIDIRECTIONAL)):
self.log.error("Failed to disable video on PhoneA.")
return False
if not video_call_downgrade(
self.log, ads[2], get_call_id_in_video_state(
self.log, ads[2], VT_STATE_TX_ENABLED), ads[0],
call_id_video_ac):
self.log.error("Failed to disable video on PhoneB.")
return False
self.log.info("Step6: Verify calls in correct state.")
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video_ac, VT_STATE_AUDIO_ONLY,
CALL_STATE_ACTIVE):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice_ab, VT_STATE_AUDIO_ONLY,
CALL_STATE_HOLDING):
return False
return {False: self._test_vt_conference_merge_drop,
True: self._test_vt_conference_merge_drop_cep}[use_cep](
ads, call_id_video_ac, call_id_voice_ab)
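The return statement above selects the merge/drop helper by indexing a dict with the boolean `use_cep` flag, then immediately calls the selected helper. A minimal sketch of that bool-keyed dispatch pattern (the handler names below are illustrative, not the actual test helpers):

```python
# Two stand-in handlers with the same signature as the real helpers.
def merge_drop_plain(ads, video_id, voice_id):
    return "plain:{}:{}".format(video_id, voice_id)

def merge_drop_cep(ads, video_id, voice_id):
    return "cep:{}:{}".format(video_id, voice_id)

def dispatch(use_cep, ads, video_id, voice_id):
    # Map the flag to a handler, then invoke it with the shared arguments.
    handler = {False: merge_drop_plain, True: merge_drop_cep}[use_cep]
    return handler(ads, video_id, voice_id)
```

This keeps both code paths visible at the call site and avoids an if/else that would duplicate the argument list.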
@TelephonyBaseTest.tel_test_wrap
def test_call_volte_add_mt_video_downgrade_merge_drop(self):
"""Conference call
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with VoLTE).
Make Sure PhoneC is in LTE mode (with Video Calling).
PhoneA VoLTE call to PhoneB. Accept on PhoneB.
PhoneC adds a Bi-Directional Video call to PhoneA.
PhoneA accepts as Video.
Downgrade Video call on PhoneA and PhoneC to audio only.
Merge call on PhoneA.
Hang up on PhoneB.
Hang up on PhoneC.
"""
return self._test_call_volte_add_mt_video_downgrade_merge_drop(False)
@TelephonyBaseTest.tel_test_wrap
def test_call_volte_add_mt_video_downgrade_merge_drop_cep(self):
"""Conference call
Make Sure PhoneA is in LTE mode (with Video Calling).
Make Sure PhoneB is in LTE mode (with VoLTE).
Make Sure PhoneC is in LTE mode (with Video Calling).
PhoneA VoLTE call to PhoneB. Accept on PhoneB.
PhoneC adds a Bi-Directional Video call to PhoneA.
PhoneA accepts as Video.
Downgrade Video call on PhoneA and PhoneC to audio only.
Merge call on PhoneA.
Hang up on PhoneB.
Hang up on PhoneC.
"""
return self._test_call_volte_add_mt_video_downgrade_merge_drop(True)
def _test_call_volte_add_mt_video_downgrade_merge_drop(self, use_cep):
# TODO: b/21437650 Test will fail. After established 2nd call ~15s,
# Phone C will drop call.
ads = self.android_devices
tasks = [(phone_setup_video, (self.log, ads[0])),
(phone_setup_volte, (self.log, ads[1])), (phone_setup_video,
(self.log, ads[2]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
self.log.info("Step1: Initiate VoLTE Call PhoneA->PhoneB.")
if not call_setup_teardown(self.log,
ads[0],
ads[1],
None,
verify_caller_func=is_phone_in_call_volte,
verify_callee_func=is_phone_in_call_volte):
self.log.error("Failed to setup a call")
return False
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA{}".format(calls))
if num_active_calls(self.log, ads[0]) != 1:
self.log.error("Active call numbers in PhoneA is not 1.")
return False
call_id_voice_ab = calls[0]
self.log.info("Step2: Initiate Video Call PhoneC->PhoneA.")
if not video_call_setup_teardown(
self.log,
ads[2],
ads[0],
None,
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_video_bidirectional,
verify_callee_func=is_phone_in_call_video_bidirectional):
self.log.error("Failed to setup a call")
return False
call_id_video_ac = get_call_id_in_video_state(self.log, ads[0],
VT_STATE_BIDIRECTIONAL)
if call_id_video_ac is None:
self.log.error("No active video call in PhoneA.")
return False
self.log.info(
"Step3: Verify PhoneA's video/voice call in correct state.")
calls = ads[0].droid.telecomCallGetCallIds()
self.log.info("Calls in PhoneA{}".format(calls))
if num_active_calls(self.log, ads[0]) != 2:
self.log.error("Active call numbers in PhoneA is not 2.")
return False
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video_ac, VT_STATE_BIDIRECTIONAL,
CALL_STATE_ACTIVE):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice_ab, VT_STATE_AUDIO_ONLY,
CALL_STATE_HOLDING):
return False
self.log.info("Step4: Disable camera on PhoneA and PhoneC.")
if not video_call_downgrade(self.log, ads[0], call_id_video_ac, ads[2],
get_call_id_in_video_state(
self.log, ads[2],
VT_STATE_BIDIRECTIONAL)):
self.log.error("Failed to disable video on PhoneA.")
return False
if not video_call_downgrade(
self.log, ads[2], get_call_id_in_video_state(
self.log, ads[2], VT_STATE_TX_ENABLED), ads[0],
call_id_video_ac):
self.log.error("Failed to disable video on PhoneB.")
return False
self.log.info("Step6: Verify calls in correct state.")
if not verify_incall_state(self.log, [ads[0], ads[1], ads[2]], True):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_video_ac, VT_STATE_AUDIO_ONLY,
CALL_STATE_ACTIVE):
return False
if not verify_video_call_in_expected_state(
self.log, ads[0], call_id_voice_ab, VT_STATE_AUDIO_ONLY,
CALL_STATE_HOLDING):
return False
return {False: self._test_vt_conference_merge_drop,
True: self._test_vt_conference_merge_drop_cep}[use_cep](
ads, call_id_video_ac, call_id_voice_ab)
@TelephonyBaseTest.tel_test_wrap
def test_disable_data_vt_unavailable(self):
"""Disable Data, phone should no be able to make VT call.
Make sure PhoneA and PhoneB can make VT call.
Disable Data on PhoneA.
Make sure phoneA report vt_enabled as false.
Attempt to make a VT call from PhoneA to PhoneB,
Verify the call succeed as Voice call.
"""
self.log.info("Step1 Make sure Phones are able make VT call")
ads = self.android_devices
ads[0], ads[1] = ads[1], ads[0]
tasks = [(phone_setup_video, (self.log, ads[0])), (phone_setup_video,
(self.log, ads[1]))]
if not multithread_func(self.log, tasks):
self.log.error("Phone Failed to Set Up Properly.")
return False
try:
self.log.info("Step2 Turn off data and verify not connected.")
ads[0].droid.telephonyToggleDataConnection(False)
if verify_http_connection(self.log, ads[0]):
self.log.error("Internet Accessible when Disabled")
return False
self.log.info("Step3 Verify vt_enabled return false.")
if wait_for_video_enabled(self.log, ads[0],
MAX_WAIT_TIME_VOLTE_ENABLED):
self.log.error(
"{} failed to <report vt enabled false> for {}s."
.format(ads[0].serial, MAX_WAIT_TIME_VOLTE_ENABLED))
return False
self.log.info(
"Step4 Attempt to make VT call, verify call is AUDIO_ONLY.")
if not video_call_setup_teardown(
self.log,
ads[0],
ads[1],
ads[0],
video_state=VT_STATE_BIDIRECTIONAL,
verify_caller_func=is_phone_in_call_voice_hd,
verify_callee_func=is_phone_in_call_voice_hd):
self.log.error("Call failed or is not AUDIO_ONLY")
return False
finally:
ads[0].droid.telephonyToggleDataConnection(True)
return True
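The test above wraps the data-off steps in try/finally so that mobile data is always re-enabled, even when a step fails and returns early. A minimal sketch of that restore pattern, with `FakeDevice` standing in for the real droid API:

```python
# FakeDevice is a stand-in for ads[0].droid; only the toggle is modeled.
class FakeDevice:
    def __init__(self):
        self.data_enabled = True

    def toggle_data(self, enabled):
        self.data_enabled = enabled

def run_with_data_disabled(device, body):
    try:
        device.toggle_data(False)   # disable data for the test body
        return body(device)
    finally:
        device.toggle_data(True)    # always restore, even on failure
```

The finally clause runs on every exit path (normal return, early return, or exception), which is what keeps the device usable for subsequent tests.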
""" Tests End """
# Source: template/template_sociais.py from repo AtlasGold/Formulario-Abro (MIT license)
#sociais________________________________________________________
sociaistemp0 ="""
<div style="background-color:#241c1c;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Sociais: {}</h1>
</div>
"""
sociaistemp1 ="""
<div style="background-color:#37302b;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Qual Sua Profissão ?</h1>
<br/>
<br/>
<p style="color:#f9b03d;text-align:center;font-family: monospace;"><br/>{}</p>
</div>
<div style="background-color:#37302b;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Gosta de Futebol ?</h1>
<br/>
<br/>
<p style="color:#f9b03d;text-align:center;font-family: monospace;"><br/>{}</p>
</div>
<div style="background-color:#37302b;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Para Quais Times Você Torce ?</h1>
<br/>
<br/>
<p style="color:#f9b03d;text-align:center;font-family: monospace;"><br/>{}</p>
</div>
<div style="background-color:#37302b;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Tem Algum Animal De Estimação ?</h1>
<br/>
<br/>
<p style="color:#f9b03d;text-align:center;font-family: monospace;"><br/>{}</p>
</div>
<div style="background-color:#37302b;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Qual ?</h1>
<br/>
<br/>
<p style="color:#f9b03d;text-align:center;font-family: monospace;"><br/>{}</p>
</div>
<div style="background-color:#37302b;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Tem Filhos ?</h1>
<br/>
<br/>
<p style="color:#f9b03d;text-align:center;font-family: monospace;"><br/>{}</p>
</div>
<div style="background-color:#37302b;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Como se Chamam ?</h1>
<br/>
<br/>
<p style="color:#f9b03d;text-align:center;font-family: monospace;"><br/>{}</p>
</div>
<div style="background-color:#37302b;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Tem Medo De Dentista ?</h1>
<br/>
<br/>
<p style="color:#f9b03d;text-align:center;font-family: monospace;"><br/>{}</p>
</div>
<div style="background-color:#37302b;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Esta Satisfeito Com Sua Estética Facil e de Sorriso ?</h1>
<br/>
<br/>
<p style="color:#f9b03d;text-align:center;font-family: monospace;"><br/>{}</p>
</div>
<div style="background-color:#37302b;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Tem Facebook ?</h1>
<br/>
<br/>
<p style="color:#f9b03d;text-align:center;font-family: monospace;"><br/>{}</p>
</div>
<div style="background-color:#37302b;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Tem Instagram ?</h1>
<br/>
<br/>
<p style="color:#f9b03d;text-align:center;font-family: monospace;"><br/>{}</p>
</div>
<div style="background-color:#37302b;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Qual ?</h1>
<br/>
<br/>
<p style="color:#f9b03d;text-align:center;font-family: monospace;"><br/>{}</p>
</div>
<div style="background-color:#37302b;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Tem Algum Hobby ?</h1>
<br/>
<br/>
<p style="color:#f9b03d;text-align:center;font-family: monospace;"><br/>{}</p>
</div>
<div style="background-color:#37302b;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Quais ?</h1>
<br/>
<br/>
<p style="color:#f9b03d;text-align:center;font-family: monospace;"><br/>{}</p>
</div>
<div style="background-color:#37302b;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Gosta De Musica Ambiente ?</h1>
<br/>
<br/>
<p style="color:#f9b03d;text-align:center;font-family: monospace;"><br/>{}</p>
</div>
<div style="background-color:#37302b;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Qual Gênero/Ritmo Gosta de Ouvir ?</h1>
<br/>
<br/>
<p style="color:#f9b03d;text-align:center;font-family: monospace;"><br/>{}</p>
</div>
<div style="background-color:#37302b;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Qual Tipo De Programa De Televisão Gosta De Assistir ?</h1>
<br/>
<br/>
<p style="color:#f9b03d;text-align:center;font-family: monospace;"><br/>{}</p>
</div>
<div style="background-color:#37302b;padding:10px;border-radius:5px;margin:10px;box-shadow: 5px 5px 5px rgba(0,0,0,0.5);">
<h4 style="color:#f9b03d;text-align:center;">Qual Tipo De Programa De Televisão Gosta De Assistir ?</h1>
<br/>
<br/>
<p style="color:#f9b03d;text-align:center;font-family: monospace;"><br/>{}</p>
</div>
""" | 42.149254 | 122 | 0.699717 | 936 | 5,648 | 4.162393 | 0.075855 | 0.029261 | 0.029261 | 0.189938 | 0.930698 | 0.930698 | 0.930698 | 0.930698 | 0.930698 | 0.930698 | 0 | 0.092988 | 0.063208 | 5,648 | 134 | 123 | 42.149254 | 0.643357 | 0.011154 | 0 | 0.843478 | 0 | 0.391304 | 0.992301 | 0.684333 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 11 |