hexsha string | size int64 | ext string | lang string | max_stars_repo_path string | max_stars_repo_name string | max_stars_repo_head_hexsha string | max_stars_repo_licenses list | max_stars_count int64 | max_stars_repo_stars_event_min_datetime string | max_stars_repo_stars_event_max_datetime string | max_issues_repo_path string | max_issues_repo_name string | max_issues_repo_head_hexsha string | max_issues_repo_licenses list | max_issues_count int64 | max_issues_repo_issues_event_min_datetime string | max_issues_repo_issues_event_max_datetime string | max_forks_repo_path string | max_forks_repo_name string | max_forks_repo_head_hexsha string | max_forks_repo_licenses list | max_forks_count int64 | max_forks_repo_forks_event_min_datetime string | max_forks_repo_forks_event_max_datetime string | content string | avg_line_length float64 | max_line_length int64 | alphanum_fraction float64 | qsc_code_num_words_quality_signal int64 | qsc_code_num_chars_quality_signal float64 | qsc_code_mean_word_length_quality_signal float64 | qsc_code_frac_words_unique_quality_signal float64 | qsc_code_frac_chars_top_2grams_quality_signal float64 | qsc_code_frac_chars_top_3grams_quality_signal float64 | qsc_code_frac_chars_top_4grams_quality_signal float64 | qsc_code_frac_chars_dupe_5grams_quality_signal float64 | qsc_code_frac_chars_dupe_6grams_quality_signal float64 | qsc_code_frac_chars_dupe_7grams_quality_signal float64 | qsc_code_frac_chars_dupe_8grams_quality_signal float64 | qsc_code_frac_chars_dupe_9grams_quality_signal float64 | qsc_code_frac_chars_dupe_10grams_quality_signal float64 | qsc_code_frac_chars_replacement_symbols_quality_signal float64 | qsc_code_frac_chars_digital_quality_signal float64 | qsc_code_frac_chars_whitespace_quality_signal float64 | qsc_code_size_file_byte_quality_signal float64 | qsc_code_num_lines_quality_signal float64 | qsc_code_num_chars_line_max_quality_signal float64 | qsc_code_num_chars_line_mean_quality_signal float64 | qsc_code_frac_chars_alphabet_quality_signal float64 | qsc_code_frac_chars_comments_quality_signal float64 | qsc_code_cate_xml_start_quality_signal float64 | qsc_code_frac_lines_dupe_lines_quality_signal float64 | qsc_code_cate_autogen_quality_signal float64 | qsc_code_frac_lines_long_string_quality_signal float64 | qsc_code_frac_chars_string_length_quality_signal float64 | qsc_code_frac_chars_long_word_length_quality_signal float64 | qsc_code_frac_lines_string_concat_quality_signal float64 | qsc_code_cate_encoded_data_quality_signal float64 | qsc_code_frac_chars_hex_words_quality_signal float64 | qsc_code_frac_lines_prompt_comments_quality_signal float64 | qsc_code_frac_lines_assert_quality_signal float64 | qsc_codepython_cate_ast_quality_signal float64 | qsc_codepython_frac_lines_func_ratio_quality_signal float64 | qsc_codepython_cate_var_zero_quality_signal bool | qsc_codepython_frac_lines_pass_quality_signal float64 | qsc_codepython_frac_lines_import_quality_signal float64 | qsc_codepython_frac_lines_simplefunc_quality_signal float64 | qsc_codepython_score_lines_no_logic_quality_signal float64 | qsc_codepython_frac_lines_print_quality_signal float64 | qsc_code_num_words int64 | qsc_code_num_chars int64 | qsc_code_mean_word_length int64 | qsc_code_frac_words_unique null | qsc_code_frac_chars_top_2grams int64 | qsc_code_frac_chars_top_3grams int64 | qsc_code_frac_chars_top_4grams int64 | qsc_code_frac_chars_dupe_5grams int64 | qsc_code_frac_chars_dupe_6grams int64 | qsc_code_frac_chars_dupe_7grams int64 | qsc_code_frac_chars_dupe_8grams int64 | qsc_code_frac_chars_dupe_9grams int64 | qsc_code_frac_chars_dupe_10grams int64 | qsc_code_frac_chars_replacement_symbols int64 | qsc_code_frac_chars_digital int64 | qsc_code_frac_chars_whitespace int64 | qsc_code_size_file_byte int64 | qsc_code_num_lines int64 | qsc_code_num_chars_line_max int64 | qsc_code_num_chars_line_mean int64 | qsc_code_frac_chars_alphabet int64 | qsc_code_frac_chars_comments int64 | qsc_code_cate_xml_start int64 | qsc_code_frac_lines_dupe_lines int64 | qsc_code_cate_autogen int64 | qsc_code_frac_lines_long_string int64 | qsc_code_frac_chars_string_length int64 | qsc_code_frac_chars_long_word_length int64 | qsc_code_frac_lines_string_concat null | qsc_code_cate_encoded_data int64 | qsc_code_frac_chars_hex_words int64 | qsc_code_frac_lines_prompt_comments int64 | qsc_code_frac_lines_assert int64 | qsc_codepython_cate_ast int64 | qsc_codepython_frac_lines_func_ratio int64 | qsc_codepython_cate_var_zero int64 | qsc_codepython_frac_lines_pass int64 | qsc_codepython_frac_lines_import int64 | qsc_codepython_frac_lines_simplefunc int64 | qsc_codepython_score_lines_no_logic int64 | qsc_codepython_frac_lines_print int64 | effective string | hits int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
f738f96e3c7e444aa88fd83fb171e57d9f3c193d | 1,698 | py | Python | Picture/Exbar.py | hashtagSELFIE/That-s-a-Wrap- | 31c8b824742fee01c384eefa49f9f82d85518651 | [
"MIT"
] | 2 | 2018-11-30T04:13:04.000Z | 2018-11-30T13:01:12.000Z | Picture/Exbar.py | hashtagSELFIE/That-s-a-Wrap- | 31c8b824742fee01c384eefa49f9f82d85518651 | [
"MIT"
] | 1 | 2022-02-12T05:05:55.000Z | 2022-02-12T05:05:55.000Z | Picture/Exbar.py | hashtagSELFIE/That-s-a-Wrap- | 31c8b824742fee01c384eefa49f9f82d85518651 | [
"MIT"
] | 1 | 2018-12-03T07:33:39.000Z | 2018-12-03T07:33:39.000Z | import pygal
def picture():
"""picture Bar"""
line_chart = pygal.Bar()
line_chart.title = 'Director (in %)'
line_chart.x_labels = map(str, range(2002, 2013))
line_chart.add('Action',[None, None, 0, 16.6, 25, 31, 36.4, 45.5, 46.3, 42.8, 37.1])
line_chart.add('Adventure',[None, None, None, None, None, None, 0, 3.9, 10.8, 23.8, 35.3])
line_chart.add('War',[85.8, 84.6, 84.7, 74.5, 66, 58.6, 54.7, 44.8, 36.2, 26.6, 20.1])
line_chart.add('Drama',[14.2, 15.4, 15.3, 8.9, 9, 10.4, 8.9, 5.8, 6.7, 6.8, 7.5])
line_chart.add('Science',[None, None, 0, 16.6, 25, 31, 36.4, 45.5, 46.3, 42.8, 37.1])
line_chart.add('Family',[None, None, None, None, None, None, 0, 3.9, 10.8, 23.8, 35.3])
line_chart.add('Thriller',[85.8, 84.6, 84.7, 74.5, 66, 58.6, 54.7, 44.8, 36.2, 26.6, 20.1])
line_chart.add('Crime',[14.2, 15.4, 15.3, 8.9, 9, 10.4, 8.9, 5.8, 6.7, 6.8, 7.5])
line_chart.add('Documentaries',[None, None, 0, 16.6, 25, 31, 36.4, 45.5, 46.3, 42.8, 37.1])
line_chart.add('Animation',[None, None, None, None, None, None, 0, 3.9, 10.8, 23.8, 35.3])
line_chart.add('Comedy',[85.8, 84.6, 84.7, 74.5, 66, 58.6, 54.7, 44.8, 36.2, 26.6, 20.1])
line_chart.add('Erotic',[14.2, 15.4, 15.3, 8.9, 9, 10.4, 8.9, 5.8, 6.7, 6.8, 7.5])
line_chart.add('Fantasy',[None, None, 0, 16.6, 25, 31, 36.4, 45.5, 46.3, 42.8, 37.1])
line_chart.add('Musicals',[None, None, None, None, None, None, 0, 3.9, 10.8, 23.8, 35.3])
line_chart.add('Romance',[85.8, 84.6, 84.7, 74.5, 66, 58.6, 54.7, 44.8, 36.2, 26.6, 20.1])
line_chart.add('Western',[14.2, 15.4, 15.3, 8.9, 9, 10.4, 8.9, 5.8, 6.7, 6.8, 7.5])
line_chart.render_to_file('bar-chart.svg')
picture() | 70.75 | 95 | 0.570671 | 393 | 1,698 | 2.407125 | 0.180662 | 0.20296 | 0.20296 | 0.20296 | 0.724101 | 0.724101 | 0.724101 | 0.724101 | 0.724101 | 0.724101 | 0 | 0.265912 | 0.167256 | 1,698 | 24 | 96 | 70.75 | 0.403112 | 0.006478 | 0 | 0 | 0 | 0 | 0.083234 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.043478 | false | 0 | 0.043478 | 0 | 0.086957 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
e39e699638042d37fadac1d25bcd94ae8d4d230f | 114 | py | Python | board/stm32f769i-discovery/ucube.py | ruoranluomu/AliOS-Things | d0f3431bcacac5b61645e9beb231a0a53be8078b | [
"Apache-2.0"
] | 1 | 2021-06-10T01:39:39.000Z | 2021-06-10T01:39:39.000Z | board/stm32f769i-discovery/ucube.py | ewfweftwer/AliOS-Things | 26a5c1a2d6b1771590f5d302f0b2e7fe2fcf843e | [
"Apache-2.0"
] | null | null | null | board/stm32f769i-discovery/ucube.py | ewfweftwer/AliOS-Things | 26a5c1a2d6b1771590f5d302f0b2e7fe2fcf843e | [
"Apache-2.0"
] | 5 | 2020-11-04T04:36:37.000Z | 2021-11-10T08:05:49.000Z | linux_only_targets="helloworld modbus_demo udata_demo.sensor_local_demo udata_demo.udata_local_demo udataapp yts"
| 57 | 113 | 0.903509 | 18 | 114 | 5.222222 | 0.611111 | 0.287234 | 0.276596 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.052632 | 114 | 1 | 114 | 114 | 0.87037 | 0 | 0 | 0 | 0 | 0 | 0.807018 | 0.482456 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
541cad123f57edc9d431a5eb7d7018922ae1bdee | 3,334 | py | Python | tests/test_http_context.py | jframos/sdklib | 0cc1126e94b823fad6cc47e6a00549cad6d2f771 | [
"BSD-2-Clause"
] | 3 | 2016-12-15T15:54:37.000Z | 2021-08-10T03:16:18.000Z | tests/test_http_context.py | jframos/sdklib | 0cc1126e94b823fad6cc47e6a00549cad6d2f771 | [
"BSD-2-Clause"
] | 44 | 2016-04-13T08:19:45.000Z | 2022-01-14T12:58:44.000Z | tests/test_http_context.py | jframos/sdklib | 0cc1126e94b823fad6cc47e6a00549cad6d2f771 | [
"BSD-2-Clause"
] | 5 | 2016-11-22T11:23:28.000Z | 2020-01-28T12:26:10.000Z | import unittest
from sdklib.http import HttpRequestContextSingleton, HttpRequestContext
class TestHttpContext(unittest.TestCase):
@classmethod
def setUpClass(cls):
pass
@classmethod
def tearDownClass(cls):
pass
def test_http_context_sigleton(self):
ctxt_singleton = HttpRequestContextSingleton.get_instance()
ctxt_singleton.method = "POST"
ctxt_singleton2 = HttpRequestContextSingleton.get_instance()
self.assertEqual(ctxt_singleton.method, ctxt_singleton2.method)
def test_http_context_singleton_clear(self):
ctxt_singleton = HttpRequestContextSingleton.get_instance()
ctxt_singleton.method = "POST"
self.assertEqual("POST", ctxt_singleton.method)
ctxt_singleton.clear()
self.assertNotEqual("POST", ctxt_singleton.method)
ctxt_singleton2 = HttpRequestContextSingleton.get_instance()
self.assertNotEqual("POST", ctxt_singleton2.method)
def test_http_context_singleton_fields_to_clear(self):
ctxt_singleton = HttpRequestContextSingleton.get_instance()
ctxt_singleton.fields_to_clear = ['proxy']
ctxt_singleton.proxy = "http://localhost:8080"
ctxt_singleton.method = "PUT"
self.assertEqual("http://localhost:8080", ctxt_singleton.proxy)
ctxt_singleton.clear()
self.assertNotEqual("http://localhost:8080", ctxt_singleton.proxy)
self.assertEqual("PUT", ctxt_singleton.method)
ctxt_singleton2 = HttpRequestContextSingleton.get_instance()
self.assertNotEqual("http://localhost:8080", ctxt_singleton2.proxy)
self.assertEqual("PUT", ctxt_singleton2.method)
def test_http_context(self):
ctxt_singleton = HttpRequestContext()
ctxt_singleton.method = "POST"
self.assertEqual("POST", ctxt_singleton.method)
def test_http_context_clear(self):
ctxt_singleton = HttpRequestContext()
ctxt_singleton.method = "POST"
self.assertEqual("POST", ctxt_singleton.method)
ctxt_singleton.clear()
self.assertNotEqual("POST", ctxt_singleton.method)
ctxt_singleton2 = HttpRequestContext()
self.assertNotEqual("POST", ctxt_singleton2.method)
def test_http_context_fields_to_clear(self):
ctxt_singleton = HttpRequestContext()
ctxt_singleton.fields_to_clear = ['proxy']
ctxt_singleton.proxy = "http://localhost:8080"
ctxt_singleton.method = "PUT"
self.assertEqual("http://localhost:8080", ctxt_singleton.proxy)
ctxt_singleton.clear()
self.assertNotEqual("http://localhost:8080", ctxt_singleton.proxy)
self.assertEqual("PUT", ctxt_singleton.method)
def test_http_context_clear_by_arg(self):
ctxt_singleton = HttpRequestContext()
ctxt_singleton.fields_to_clear = []
ctxt_singleton.proxy = "http://localhost:8080"
ctxt_singleton.method = "PUT"
self.assertEqual("http://localhost:8080", ctxt_singleton.proxy)
ctxt_singleton.clear("proxy")
self.assertNotEqual("http://localhost:8080", ctxt_singleton.proxy)
self.assertEqual("PUT", ctxt_singleton.method)
def test_http_context_headers_none(self):
ctxt = HttpRequestContext()
ctxt.headers = None
self.assertEqual({}, ctxt.headers)
| 37.460674 | 75 | 0.705459 | 341 | 3,334 | 6.624633 | 0.117302 | 0.23019 | 0.134573 | 0.092961 | 0.85259 | 0.833997 | 0.792829 | 0.773794 | 0.736609 | 0.665339 | 0 | 0.018243 | 0.194361 | 3,334 | 88 | 76 | 37.886364 | 0.822785 | 0 | 0 | 0.676471 | 0 | 0 | 0.086983 | 0 | 0 | 0 | 0 | 0 | 0.294118 | 1 | 0.147059 | false | 0.029412 | 0.029412 | 0 | 0.191176 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
581887e785e1ddc98b7a2d558780152f3d84aeba | 7,188 | py | Python | nion/data/RGB.py | Brow71189/niondata | 6ead4a4e6b0bfe0446052e7ff62ad2ef42b5c8c8 | [
"Apache-2.0"
] | null | null | null | nion/data/RGB.py | Brow71189/niondata | 6ead4a4e6b0bfe0446052e7ff62ad2ef42b5c8c8 | [
"Apache-2.0"
] | 20 | 2018-10-24T20:10:08.000Z | 2021-12-16T00:00:07.000Z | nion/data/RGB.py | Brow71189/niondata | 6ead4a4e6b0bfe0446052e7ff62ad2ef42b5c8c8 | [
"Apache-2.0"
] | 4 | 2018-12-21T23:14:26.000Z | 2021-03-12T19:32:05.000Z | # standard libraries
import typing
# third party libraries
import numpy
# local libraries
from nion.data import DataAndMetadata
from nion.data import Image
_DataAndMetadataLike = DataAndMetadata._DataAndMetadataLike
_DataAndMetadataIndeterminateSizeLike = DataAndMetadata._DataAndMetadataIndeterminateSizeLike
_ImageDataType = Image._ImageDataType
def function_rgb_channel(data_and_metadata_in: _DataAndMetadataLike, channel: int) -> DataAndMetadata.DataAndMetadata:
data_and_metadata = DataAndMetadata.promote_ndarray(data_and_metadata_in)
if channel < 0 or channel > 3:
raise ValueError("RGB channel: invalid channel.")
data = data_and_metadata.data
if not Image.is_data_valid(data):
raise ValueError("RGB channel: invalid data.")
if not data_and_metadata.is_data_rgb_type:
raise ValueError("RGB channel: data is not RGB type.")
assert data is not None
channel_data: _ImageDataType
if Image.is_shape_and_dtype_rgb(data.shape, data.dtype):
if channel == 3:
channel_data = numpy.ones(data.shape, int)
else:
channel_data = data[..., channel].astype(int)
elif Image.is_shape_and_dtype_rgba(data.shape, data.dtype):
channel_data = data[..., channel].astype(int)
else:
raise ValueError("RGB channel: unable to extract channel.")
return DataAndMetadata.new_data_and_metadata(channel_data,
intensity_calibration=data_and_metadata.intensity_calibration,
dimensional_calibrations=data_and_metadata.dimensional_calibrations)
def function_rgb_linear_combine(data_and_metadata_in: _DataAndMetadataLike, red_weight: float, green_weight: float,
blue_weight: float) -> DataAndMetadata.DataAndMetadata:
data_and_metadata = DataAndMetadata.promote_ndarray(data_and_metadata_in)
data = data_and_metadata.data
if not Image.is_data_valid(data):
raise ValueError("RGB linear combine: invalid data.")
if not data_and_metadata.is_data_rgb_type:
raise ValueError("RGB linear combine: data is not RGB type.")
assert data is not None
combined_data: _ImageDataType
if Image.is_shape_and_dtype_rgb(data.shape, data.dtype):
combined_data = numpy.sum(data[..., :] * (blue_weight, green_weight, red_weight), 2)
elif Image.is_shape_and_dtype_rgba(data.shape, data.dtype):
combined_data = numpy.sum(data[..., :] * (blue_weight, green_weight, red_weight, 0.0), 2)
else:
raise ValueError("RGB channel: unable to extract channel.")
return DataAndMetadata.new_data_and_metadata(combined_data, intensity_calibration=data_and_metadata.intensity_calibration, dimensional_calibrations=data_and_metadata.dimensional_calibrations)
def function_rgb(red_data_and_metadata_in: _DataAndMetadataIndeterminateSizeLike,
green_data_and_metadata_in: _DataAndMetadataIndeterminateSizeLike,
blue_data_and_metadata_in: _DataAndMetadataIndeterminateSizeLike) -> DataAndMetadata.DataAndMetadata:
red_data_and_metadata_c = DataAndMetadata.promote_indeterminate_array(red_data_and_metadata_in)
green_data_and_metadata_c = DataAndMetadata.promote_indeterminate_array(green_data_and_metadata_in)
blue_data_and_metadata_c = DataAndMetadata.promote_indeterminate_array(blue_data_and_metadata_in)
shape = DataAndMetadata.determine_shape(red_data_and_metadata_c, green_data_and_metadata_c, blue_data_and_metadata_c)
if shape is None:
raise ValueError("RGB: data shapes do not match or are indeterminate")
red_data_and_metadata = DataAndMetadata.promote_constant(red_data_and_metadata_c, shape)
green_data_and_metadata = DataAndMetadata.promote_constant(green_data_and_metadata_c, shape)
blue_data_and_metadata = DataAndMetadata.promote_constant(blue_data_and_metadata_c, shape)
channels = (blue_data_and_metadata, green_data_and_metadata, red_data_and_metadata)
if any([not Image.is_data_valid(data_and_metadata.data) for data_and_metadata in channels]):
raise ValueError("RGB: invalid data")
rgb_image = numpy.empty(shape + (3,), numpy.uint8)
for channel_index, channel in enumerate(channels):
data = channel._data_ex
if data.dtype.kind in 'iu':
rgb_image[..., channel_index] = numpy.clip(data, 0, 255)
elif data.dtype.kind in 'f':
rgb_image[..., channel_index] = numpy.clip(numpy.multiply(data, 255), 0, 255)
return DataAndMetadata.new_data_and_metadata(rgb_image,
intensity_calibration=red_data_and_metadata.intensity_calibration,
dimensional_calibrations=red_data_and_metadata.dimensional_calibrations)
def function_rgba(red_data_and_metadata_in: _DataAndMetadataIndeterminateSizeLike,
green_data_and_metadata_in: _DataAndMetadataIndeterminateSizeLike,
blue_data_and_metadata_in: _DataAndMetadataIndeterminateSizeLike,
alpha_data_and_metadata_in: _DataAndMetadataIndeterminateSizeLike) -> DataAndMetadata.DataAndMetadata:
red_data_and_metadata_c = DataAndMetadata.promote_indeterminate_array(red_data_and_metadata_in)
green_data_and_metadata_c = DataAndMetadata.promote_indeterminate_array(green_data_and_metadata_in)
blue_data_and_metadata_c = DataAndMetadata.promote_indeterminate_array(blue_data_and_metadata_in)
alpha_data_and_metadata_c = DataAndMetadata.promote_indeterminate_array(alpha_data_and_metadata_in)
shape = DataAndMetadata.determine_shape(red_data_and_metadata_c, green_data_and_metadata_c, blue_data_and_metadata_c)
if shape is None:
raise ValueError("RGBA: data shapes do not match or are indeterminate")
red_data_and_metadata = DataAndMetadata.promote_constant(red_data_and_metadata_c, shape)
green_data_and_metadata = DataAndMetadata.promote_constant(green_data_and_metadata_c, shape)
blue_data_and_metadata = DataAndMetadata.promote_constant(blue_data_and_metadata_c, shape)
alpha_data_and_metadata = DataAndMetadata.promote_constant(alpha_data_and_metadata_c, shape)
channels = (blue_data_and_metadata, green_data_and_metadata, red_data_and_metadata, alpha_data_and_metadata)
if any([not Image.is_data_valid(data_and_metadata.data) for data_and_metadata in channels]):
raise ValueError("RGB: invalid data")
rgba_image = numpy.empty(shape + (4,), numpy.uint8)
for channel_index, channel in enumerate(channels):
data = channel._data_ex
if data.dtype.kind in 'iu':
rgba_image[..., channel_index] = numpy.clip(data, 0, 255)
elif data.dtype.kind in 'f':
rgba_image[..., channel_index] = numpy.clip(numpy.multiply(data, 255), 0, 255)
return DataAndMetadata.new_data_and_metadata(rgba_image,
intensity_calibration=red_data_and_metadata.intensity_calibration,
dimensional_calibrations=red_data_and_metadata.dimensional_calibrations)
| 49.232877 | 195 | 0.747913 | 876 | 7,188 | 5.711187 | 0.10274 | 0.103538 | 0.221867 | 0.067959 | 0.88407 | 0.852688 | 0.828503 | 0.826304 | 0.815111 | 0.815111 | 0 | 0.005622 | 0.183361 | 7,188 | 145 | 196 | 49.572414 | 0.846678 | 0.007791 | 0 | 0.55 | 0 | 0 | 0.053591 | 0 | 0 | 0 | 0 | 0 | 0.02 | 1 | 0.04 | false | 0 | 0.04 | 0 | 0.12 | 0 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5467d1cea6579d9c0ce244c85f06223f174e1247 | 77,946 | py | Python | atrium/api/members_api.py | adam-krawczyk/atrium-python | 42bd90e00577907de502402b272a2d373e348ae0 | [
"MIT"
] | 6 | 2018-03-29T18:26:00.000Z | 2022-03-06T02:42:00.000Z | atrium/api/members_api.py | adam-krawczyk/atrium-python | 42bd90e00577907de502402b272a2d373e348ae0 | [
"MIT"
] | 5 | 2018-12-12T20:16:10.000Z | 2021-05-20T22:53:34.000Z | atrium/api/members_api.py | adam-krawczyk/atrium-python | 42bd90e00577907de502402b272a2d373e348ae0 | [
"MIT"
] | 5 | 2018-05-01T17:58:40.000Z | 2021-08-23T13:40:58.000Z | # coding: utf-8
"""
MX API
The MX Atrium API supports over 48,000 data connections to thousands of financial institutions. It provides secure access to your users' accounts and transactions with industry-leading cleansing, categorization, and classification. Atrium is designed according to resource-oriented REST architecture and responds with JSON bodies and HTTP response codes. Use Atrium's development environment, vestibule.mx.com, to quickly get up and running. The development environment limits are 100 users, 25 members per user, and access to the top 15 institutions. Contact MX to purchase production access. # noqa: E501
"""
from __future__ import absolute_import
import re # noqa: F401
# python 2 and python 3 compatibility library
import six
from atrium.api_client import ApiClient
class MembersApi(object):
def __init__(self, api_client=None):
if api_client is None:
api_client = ApiClient()
self.api_client = api_client
def aggregate_member(self, member_guid, user_guid, **kwargs): # noqa: E501
"""Aggregate member # noqa: E501
Calling this endpoint initiates an aggregation event for the member. This brings in the latest account and transaction data from the connected institution. If this data has recently been updated, MX may not initiate an aggregation event. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.aggregate_member(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:return: MemberResponseBody
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.aggregate_member_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
else:
(data) = self.aggregate_member_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
return data
def aggregate_member_with_http_info(self, member_guid, user_guid, **kwargs): # noqa: E501
"""Aggregate member # noqa: E501
Calling this endpoint initiates an aggregation event for the member. This brings in the latest account and transaction data from the connected institution. If this data has recently been updated, MX may not initiate an aggregation event. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.aggregate_member_with_http_info(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:return: MemberResponseBody
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['member_guid', 'user_guid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method aggregate_member" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'member_guid' is set
if ('member_guid' not in params or
params['member_guid'] is None):
raise ValueError("Missing the required parameter `member_guid` when calling `aggregate_member`") # noqa: E501
# verify the required parameter 'user_guid' is set
if ('user_guid' not in params or
params['user_guid'] is None):
raise ValueError("Missing the required parameter `user_guid` when calling `aggregate_member`") # noqa: E501
collection_formats = {}
path_params = {}
if 'member_guid' in params:
path_params['member_guid'] = params['member_guid'] # noqa: E501
if 'user_guid' in params:
path_params['user_guid'] = params['user_guid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/vnd.mx.atrium.v1+json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['apiKey', 'clientID'] # noqa: E501
return self.api_client.call_api(
'/users/{user_guid}/members/{member_guid}/aggregate', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='MemberResponseBody', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def aggregate_member_balances(self, member_guid, user_guid, **kwargs): # noqa: E501
"""Aggregate member account balances # noqa: E501
This endpoint operates much like the _aggregate member_ endpoint except that it gathers only account balance information; it does not gather any transaction data at all. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.aggregate_member_balances(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:return: MemberResponseBody
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.aggregate_member_balances_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
else:
(data) = self.aggregate_member_balances_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
return data
def aggregate_member_balances_with_http_info(self, member_guid, user_guid, **kwargs): # noqa: E501
"""Aggregate member account balances # noqa: E501
This endpoint operates much like the _aggregate member_ endpoint except that it gathers only account balance information; it does not gather any transaction data at all. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.aggregate_member_balances_with_http_info(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:return: MemberResponseBody
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['member_guid', 'user_guid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method aggregate_member_balances" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'member_guid' is set
if ('member_guid' not in params or
params['member_guid'] is None):
raise ValueError("Missing the required parameter `member_guid` when calling `aggregate_member_balances`") # noqa: E501
# verify the required parameter 'user_guid' is set
if ('user_guid' not in params or
params['user_guid'] is None):
raise ValueError("Missing the required parameter `user_guid` when calling `aggregate_member_balances`") # noqa: E501
collection_formats = {}
path_params = {}
if 'member_guid' in params:
path_params['member_guid'] = params['member_guid'] # noqa: E501
if 'user_guid' in params:
path_params['user_guid'] = params['user_guid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/vnd.mx.atrium.v1+json']) # noqa: E501
# Authentication setting
auth_settings = ['apiKey', 'clientID'] # noqa: E501
return self.api_client.call_api(
'/users/{user_guid}/members/{member_guid}/balance', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='MemberResponseBody', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def create_member(self, user_guid, body, **kwargs): # noqa: E501
"""Create member # noqa: E501
This endpoint allows you to create a new member. Members are created with the required parameters credentials and institution_code, and the optional parameters identifier and metadata.<br> When creating a member, you'll need to include the correct type of credential required by the financial institution and provided by the user. You can find out which credential type is required with the /institutions/{institution_code}/credentials endpoint.<br> If successful, Atrium will respond with the newly-created member object.<br> Once you successfully create a member, MX will immediately validate the provided credentials and attempt to aggregate data for accounts and transactions. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_member(user_guid, body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str user_guid: The unique identifier for a `user`. (required)
:param MemberCreateRequestBody body: Member object to be created with optional parameters (identifier and metadata) and required parameters (credentials and institution_code) (required)
:return: MemberResponseBody
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.create_member_with_http_info(user_guid, body, **kwargs) # noqa: E501
else:
(data) = self.create_member_with_http_info(user_guid, body, **kwargs) # noqa: E501
return data
def create_member_with_http_info(self, user_guid, body, **kwargs): # noqa: E501
"""Create member # noqa: E501
This endpoint allows you to create a new member. Members are created with the required parameters credentials and institution_code, and the optional parameters identifier and metadata.<br> When creating a member, you'll need to include the correct type of credential required by the financial institution and provided by the user. You can find out which credential type is required with the /institutions/{institution_code}/credentials endpoint.<br> If successful, Atrium will respond with the newly-created member object.<br> Once you successfully create a member, MX will immediately validate the provided credentials and attempt to aggregate data for accounts and transactions. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.create_member_with_http_info(user_guid, body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str user_guid: The unique identifier for a `user`. (required)
:param MemberCreateRequestBody body: Member object to be created with optional parameters (identifier and metadata) and required parameters (credentials and institution_code) (required)
:return: MemberResponseBody
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['user_guid', 'body'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_member" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'user_guid' is set
if ('user_guid' not in params or
params['user_guid'] is None):
raise ValueError("Missing the required parameter `user_guid` when calling `create_member`") # noqa: E501
# verify the required parameter 'body' is set
if ('body' not in params or
params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `create_member`") # noqa: E501
collection_formats = {}
path_params = {}
if 'user_guid' in params:
path_params['user_guid'] = params['user_guid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/vnd.mx.atrium.v1+json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['apiKey', 'clientID'] # noqa: E501
return self.api_client.call_api(
'/users/{user_guid}/members', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='MemberResponseBody', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def delete_member(self, member_guid, user_guid, **kwargs): # noqa: E501
"""Delete member # noqa: E501
Accessing this endpoint will permanently delete a member. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_member(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.delete_member_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
else:
(data) = self.delete_member_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
return data
def delete_member_with_http_info(self, member_guid, user_guid, **kwargs): # noqa: E501
"""Delete member # noqa: E501
Accessing this endpoint will permanently delete a member. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.delete_member_with_http_info(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:return: None
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['member_guid', 'user_guid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_member" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'member_guid' is set
if ('member_guid' not in params or
params['member_guid'] is None):
raise ValueError("Missing the required parameter `member_guid` when calling `delete_member`") # noqa: E501
# verify the required parameter 'user_guid' is set
if ('user_guid' not in params or
params['user_guid'] is None):
raise ValueError("Missing the required parameter `user_guid` when calling `delete_member`") # noqa: E501
collection_formats = {}
path_params = {}
if 'member_guid' in params:
path_params['member_guid'] = params['member_guid'] # noqa: E501
if 'user_guid' in params:
path_params['user_guid'] = params['user_guid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/vnd.mx.atrium.v1+json']) # noqa: E501
# Authentication setting
auth_settings = ['apiKey', 'clientID'] # noqa: E501
return self.api_client.call_api(
'/users/{user_guid}/members/{member_guid}', 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type=None, # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def extend_history(self, member_guid, user_guid, **kwargs): # noqa: E501
"""Extend history # noqa: E501
The extend_history endpoint begins the process of fetching up to 24 months of data associated with a particular `member`. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.extend_history(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:return: MemberResponseBody
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.extend_history_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
else:
(data) = self.extend_history_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
return data
def extend_history_with_http_info(self, member_guid, user_guid, **kwargs): # noqa: E501
"""Extend history # noqa: E501
The extend_history endpoint begins the process of fetching up to 24 months of data associated with a particular `member`. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.extend_history_with_http_info(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:return: MemberResponseBody
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['member_guid', 'user_guid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method extend_history" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'member_guid' is set
if ('member_guid' not in params or
params['member_guid'] is None):
raise ValueError("Missing the required parameter `member_guid` when calling `extend_history`") # noqa: E501
# verify the required parameter 'user_guid' is set
if ('user_guid' not in params or
params['user_guid'] is None):
raise ValueError("Missing the required parameter `user_guid` when calling `extend_history`") # noqa: E501
collection_formats = {}
path_params = {}
if 'member_guid' in params:
path_params['member_guid'] = params['member_guid'] # noqa: E501
if 'user_guid' in params:
path_params['user_guid'] = params['user_guid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/vnd.mx.atrium.v1+json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['apiKey', 'clientID'] # noqa: E501
return self.api_client.call_api(
'/users/{user_guid}/members/{member_guid}/extend_history', 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='MemberResponseBody', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_member_accounts(self, member_guid, user_guid, **kwargs): # noqa: E501
"""List member accounts # noqa: E501
This endpoint returns an array with information about every account associated with a particular member. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_member_accounts(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:param int page: Specify current page.
:param int records_per_page: Specify records per page.
:return: AccountsResponseBody
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.list_member_accounts_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
else:
(data) = self.list_member_accounts_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
return data
def list_member_accounts_with_http_info(self, member_guid, user_guid, **kwargs): # noqa: E501
"""List member accounts # noqa: E501
This endpoint returns an array with information about every account associated with a particular member. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_member_accounts_with_http_info(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:param int page: Specify current page.
:param int records_per_page: Specify records per page.
:return: AccountsResponseBody
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['member_guid', 'user_guid', 'page', 'records_per_page'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_member_accounts" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'member_guid' is set
if ('member_guid' not in params or
params['member_guid'] is None):
raise ValueError("Missing the required parameter `member_guid` when calling `list_member_accounts`") # noqa: E501
# verify the required parameter 'user_guid' is set
if ('user_guid' not in params or
params['user_guid'] is None):
raise ValueError("Missing the required parameter `user_guid` when calling `list_member_accounts`") # noqa: E501
collection_formats = {}
path_params = {}
if 'member_guid' in params:
path_params['member_guid'] = params['member_guid'] # noqa: E501
if 'user_guid' in params:
path_params['user_guid'] = params['user_guid'] # noqa: E501
query_params = []
if 'page' in params:
query_params.append(('page', params['page'])) # noqa: E501
if 'records_per_page' in params:
query_params.append(('records_per_page', params['records_per_page'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/vnd.mx.atrium.v1+json']) # noqa: E501
# Authentication setting
auth_settings = ['apiKey', 'clientID'] # noqa: E501
return self.api_client.call_api(
'/users/{user_guid}/members/{member_guid}/accounts', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='AccountsResponseBody', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_member_credentials(self, member_guid, user_guid, **kwargs): # noqa: E501
"""List member credentials # noqa: E501
This endpoint returns an array which contains information on every non-MFA credential associated with a specific member. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_member_credentials(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:return: CredentialsResponseBody
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.list_member_credentials_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
else:
(data) = self.list_member_credentials_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
return data
def list_member_credentials_with_http_info(self, member_guid, user_guid, **kwargs): # noqa: E501
"""List member credentials # noqa: E501
This endpoint returns an array which contains information on every non-MFA credential associated with a specific member. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_member_credentials_with_http_info(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:return: CredentialsResponseBody
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['member_guid', 'user_guid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_member_credentials" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'member_guid' is set
if ('member_guid' not in params or
params['member_guid'] is None):
raise ValueError("Missing the required parameter `member_guid` when calling `list_member_credentials`") # noqa: E501
# verify the required parameter 'user_guid' is set
if ('user_guid' not in params or
params['user_guid'] is None):
raise ValueError("Missing the required parameter `user_guid` when calling `list_member_credentials`") # noqa: E501
collection_formats = {}
path_params = {}
if 'member_guid' in params:
path_params['member_guid'] = params['member_guid'] # noqa: E501
if 'user_guid' in params:
path_params['user_guid'] = params['user_guid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/vnd.mx.atrium.v1+json']) # noqa: E501
# Authentication setting
auth_settings = ['apiKey', 'clientID'] # noqa: E501
return self.api_client.call_api(
'/users/{user_guid}/members/{member_guid}/credentials', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='CredentialsResponseBody', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_member_mfa_challenges(self, member_guid, user_guid, **kwargs): # noqa: E501
"""List member MFA challenges # noqa: E501
Use this endpoint for information on what multi-factor authentication challenges need to be answered in order to aggregate a member.<br> If the aggregation is not challenged, i.e., the member does not have a connection status of CHALLENGED, then code 204 No Content will be returned.<br> If the aggregation has been challenged, i.e., the member does have a connection status of CHALLENGED, then code 200 OK will be returned — along with the corresponding credentials. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_member_mfa_challenges(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:return: ChallengesResponseBody
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.list_member_mfa_challenges_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
else:
(data) = self.list_member_mfa_challenges_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
return data
def list_member_mfa_challenges_with_http_info(self, member_guid, user_guid, **kwargs): # noqa: E501
"""List member MFA challenges # noqa: E501
Use this endpoint for information on what multi-factor authentication challenges need to be answered in order to aggregate a member.<br> If the aggregation is not challenged, i.e., the member does not have a connection status of CHALLENGED, then code 204 No Content will be returned.<br> If the aggregation has been challenged, i.e., the member does have a connection status of CHALLENGED, then code 200 OK will be returned — along with the corresponding credentials. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_member_mfa_challenges_with_http_info(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:return: ChallengesResponseBody
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['member_guid', 'user_guid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_member_mfa_challenges" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'member_guid' is set
if ('member_guid' not in params or
params['member_guid'] is None):
raise ValueError("Missing the required parameter `member_guid` when calling `list_member_mfa_challenges`") # noqa: E501
# verify the required parameter 'user_guid' is set
if ('user_guid' not in params or
params['user_guid'] is None):
raise ValueError("Missing the required parameter `user_guid` when calling `list_member_mfa_challenges`") # noqa: E501
collection_formats = {}
path_params = {}
if 'member_guid' in params:
path_params['member_guid'] = params['member_guid'] # noqa: E501
if 'user_guid' in params:
path_params['user_guid'] = params['user_guid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/vnd.mx.atrium.v1+json']) # noqa: E501
# Authentication setting
auth_settings = ['apiKey', 'clientID'] # noqa: E501
return self.api_client.call_api(
'/users/{user_guid}/members/{member_guid}/challenges', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='ChallengesResponseBody', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def list_member_transactions(self, member_guid, user_guid, **kwargs): # noqa: E501
"""List member transactions # noqa: E501
Use this endpoint to get all transactions from all accounts associated with a specific member.<br> This endpoint accepts optional URL query parameters — from_date and to_date — which are used to filter transactions according to the date they were posted. If no values are given for the query parameters, from_date will default to 90 days prior to the request and to_date will default to 5 days from the time of the request. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_member_transactions(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:param str from_date: Filter transactions from this date.
:param str to_date: Filter transactions to this date.
:param int page: Specify current page.
:param int records_per_page: Specify records per page.
:return: TransactionsResponseBody
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.list_member_transactions_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
else:
(data) = self.list_member_transactions_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
return data
def list_member_transactions_with_http_info(self, member_guid, user_guid, **kwargs): # noqa: E501
"""List member transactions # noqa: E501
Use this endpoint to get all transactions from all accounts associated with a specific member.<br> This endpoint accepts optional URL query parameters — from_date and to_date — which are used to filter transactions according to the date they were posted. If no values are given for the query parameters, from_date will default to 90 days prior to the request and to_date will default to 5 days from the time of the request. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_member_transactions_with_http_info(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:param str from_date: Filter transactions from this date.
:param str to_date: Filter transactions to this date.
:param int page: Specify current page.
:param int records_per_page: Specify records per page.
:return: TransactionsResponseBody
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['member_guid', 'user_guid', 'from_date', 'to_date', 'page', 'records_per_page'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_member_transactions" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'member_guid' is set
if ('member_guid' not in params or
params['member_guid'] is None):
raise ValueError("Missing the required parameter `member_guid` when calling `list_member_transactions`") # noqa: E501
# verify the required parameter 'user_guid' is set
if ('user_guid' not in params or
params['user_guid'] is None):
raise ValueError("Missing the required parameter `user_guid` when calling `list_member_transactions`") # noqa: E501
collection_formats = {}
path_params = {}
if 'member_guid' in params:
path_params['member_guid'] = params['member_guid'] # noqa: E501
if 'user_guid' in params:
path_params['user_guid'] = params['user_guid'] # noqa: E501
query_params = []
if 'from_date' in params:
query_params.append(('from_date', params['from_date'])) # noqa: E501
if 'to_date' in params:
query_params.append(('to_date', params['to_date'])) # noqa: E501
if 'page' in params:
query_params.append(('page', params['page'])) # noqa: E501
if 'records_per_page' in params:
query_params.append(('records_per_page', params['records_per_page'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/vnd.mx.atrium.v1+json']) # noqa: E501
# Authentication setting
auth_settings = ['apiKey', 'clientID'] # noqa: E501
return self.api_client.call_api(
'/users/{user_guid}/members/{member_guid}/transactions', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='TransactionsResponseBody', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
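    # Usage sketch for the transactions endpoint above (assumes `api` is a configured instance
    # of this class; the `transactions` attribute is an assumption based on the
    # TransactionsResponseBody type name):
    #   >>> body = api.list_member_transactions(member_guid, user_guid,
    #   ...                                     from_date="2021-01-01", to_date="2021-03-31")
    #   >>> for transaction in body.transactions: print(transaction)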
def list_members(self, user_guid, **kwargs): # noqa: E501
"""List members # noqa: E501
This endpoint returns an array which contains information on every member associated with a specific user. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_members(user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str user_guid: The unique identifier for a `user`. (required)
:param int page: Specify current page.
:param int records_per_page: Specify records per page.
:return: MembersResponseBody
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.list_members_with_http_info(user_guid, **kwargs) # noqa: E501
else:
(data) = self.list_members_with_http_info(user_guid, **kwargs) # noqa: E501
return data
def list_members_with_http_info(self, user_guid, **kwargs): # noqa: E501
"""List members # noqa: E501
This endpoint returns an array which contains information on every member associated with a specific user. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.list_members_with_http_info(user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str user_guid: The unique identifier for a `user`. (required)
:param int page: Specify current page.
:param int records_per_page: Specify records per page.
:return: MembersResponseBody
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['user_guid', 'page', 'records_per_page'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method list_members" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'user_guid' is set
if ('user_guid' not in params or
params['user_guid'] is None):
raise ValueError("Missing the required parameter `user_guid` when calling `list_members`") # noqa: E501
collection_formats = {}
path_params = {}
if 'user_guid' in params:
path_params['user_guid'] = params['user_guid'] # noqa: E501
query_params = []
if 'page' in params:
query_params.append(('page', params['page'])) # noqa: E501
if 'records_per_page' in params:
query_params.append(('records_per_page', params['records_per_page'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/vnd.mx.atrium.v1+json']) # noqa: E501
# Authentication setting
auth_settings = ['apiKey', 'clientID'] # noqa: E501
return self.api_client.call_api(
'/users/{user_guid}/members', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='MembersResponseBody', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
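    # Usage sketch for paging through a user's members (assumes `api` is a configured instance;
    # the `members` attribute is an assumption based on the MembersResponseBody type name):
    #   >>> body = api.list_members(user_guid, page=1, records_per_page=25)
    #   >>> for member in body.members: print(member)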
def read_member(self, member_guid, user_guid, **kwargs): # noqa: E501
"""Read member # noqa: E501
Use this endpoint to read the attributes of a specific member. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.read_member(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:return: MemberResponseBody
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.read_member_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
else:
(data) = self.read_member_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
return data
def read_member_with_http_info(self, member_guid, user_guid, **kwargs): # noqa: E501
"""Read member # noqa: E501
Use this endpoint to read the attributes of a specific member. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.read_member_with_http_info(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:return: MemberResponseBody
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['member_guid', 'user_guid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method read_member" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'member_guid' is set
if ('member_guid' not in params or
params['member_guid'] is None):
raise ValueError("Missing the required parameter `member_guid` when calling `read_member`") # noqa: E501
# verify the required parameter 'user_guid' is set
if ('user_guid' not in params or
params['user_guid'] is None):
raise ValueError("Missing the required parameter `user_guid` when calling `read_member`") # noqa: E501
collection_formats = {}
path_params = {}
if 'member_guid' in params:
path_params['member_guid'] = params['member_guid'] # noqa: E501
if 'user_guid' in params:
path_params['user_guid'] = params['user_guid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/vnd.mx.atrium.v1+json']) # noqa: E501
# Authentication setting
auth_settings = ['apiKey', 'clientID'] # noqa: E501
return self.api_client.call_api(
'/users/{user_guid}/members/{member_guid}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='MemberResponseBody', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
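    # Usage sketch for reading a single member (the `member` attribute is an assumption based
    # on the MemberResponseBody type name):
    #   >>> body = api.read_member(member_guid, user_guid)
    #   >>> print(body.member)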
def read_member_status(self, member_guid, user_guid, **kwargs): # noqa: E501
"""Read member connection status # noqa: E501
This endpoint provides the status of the member's most recent aggregation event. This is an important step in the aggregation process, and the results returned by this endpoint should determine what you do next in order to successfully aggregate a member.<br> MX has introduced new, more detailed information on the current status of a member's connection to a financial institution and the state of its aggregation: the connection_status field. These are intended to replace and expand upon the information provided in the status field, which will soon be deprecated; support for the status field remains for the time being. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.read_member_status(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:return: MemberConnectionStatusResponseBody
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.read_member_status_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
else:
(data) = self.read_member_status_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
return data
def read_member_status_with_http_info(self, member_guid, user_guid, **kwargs): # noqa: E501
"""Read member connection status # noqa: E501
This endpoint provides the status of the member's most recent aggregation event. This is an important step in the aggregation process, and the results returned by this endpoint should determine what you do next in order to successfully aggregate a member.<br> MX has introduced new, more detailed information on the current status of a member's connection to a financial institution and the state of its aggregation: the connection_status field. These are intended to replace and expand upon the information provided in the status field, which will soon be deprecated; support for the status field remains for the time being. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.read_member_status_with_http_info(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:return: MemberConnectionStatusResponseBody
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['member_guid', 'user_guid'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method read_member_status" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'member_guid' is set
if ('member_guid' not in params or
params['member_guid'] is None):
raise ValueError("Missing the required parameter `member_guid` when calling `read_member_status`") # noqa: E501
# verify the required parameter 'user_guid' is set
if ('user_guid' not in params or
params['user_guid'] is None):
raise ValueError("Missing the required parameter `user_guid` when calling `read_member_status`") # noqa: E501
collection_formats = {}
path_params = {}
if 'member_guid' in params:
path_params['member_guid'] = params['member_guid'] # noqa: E501
if 'user_guid' in params:
path_params['user_guid'] = params['user_guid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/vnd.mx.atrium.v1+json']) # noqa: E501
# Authentication setting
auth_settings = ['apiKey', 'clientID'] # noqa: E501
return self.api_client.call_api(
'/users/{user_guid}/members/{member_guid}/status', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='MemberConnectionStatusResponseBody', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
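    # Usage sketch for polling aggregation status (attribute names on the returned
    # MemberConnectionStatusResponseBody, such as `member.connection_status`, are assumptions):
    #   >>> status = api.read_member_status(member_guid, user_guid)
    #   >>> print(status.member.connection_status)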
def read_o_auth_window_uri(self, member_guid, user_guid, **kwargs): # noqa: E501
"""Read OAuth Window URI # noqa: E501
This endpoint will generate an `oauth_window_uri` for the specified `member`. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.read_o_auth_window_uri(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:param str referral_source: Should be either BROWSER or APP depending on the implementation.
:param str ui_message_webview_url_scheme: A scheme for routing the user back to the application state they were previously in.
:return: MemberResponseBody
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.read_o_auth_window_uri_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
else:
(data) = self.read_o_auth_window_uri_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
return data
def read_o_auth_window_uri_with_http_info(self, member_guid, user_guid, **kwargs): # noqa: E501
"""Read OAuth Window URI # noqa: E501
This endpoint will generate an `oauth_window_uri` for the specified `member`. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.read_o_auth_window_uri_with_http_info(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:param str referral_source: Should be either BROWSER or APP depending on the implementation.
:param str ui_message_webview_url_scheme: A scheme for routing the user back to the application state they were previously in.
:return: MemberResponseBody
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['member_guid', 'user_guid', 'referral_source', 'ui_message_webview_url_scheme'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method read_o_auth_window_uri" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'member_guid' is set
if ('member_guid' not in params or
params['member_guid'] is None):
raise ValueError("Missing the required parameter `member_guid` when calling `read_o_auth_window_uri`") # noqa: E501
# verify the required parameter 'user_guid' is set
if ('user_guid' not in params or
params['user_guid'] is None):
raise ValueError("Missing the required parameter `user_guid` when calling `read_o_auth_window_uri`") # noqa: E501
collection_formats = {}
path_params = {}
if 'member_guid' in params:
path_params['member_guid'] = params['member_guid'] # noqa: E501
if 'user_guid' in params:
path_params['user_guid'] = params['user_guid'] # noqa: E501
query_params = []
if 'referral_source' in params:
query_params.append(('referral_source', params['referral_source'])) # noqa: E501
if 'ui_message_webview_url_scheme' in params:
query_params.append(('ui_message_webview_url_scheme', params['ui_message_webview_url_scheme'])) # noqa: E501
header_params = {}
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/vnd.mx.atrium.v1+json']) # noqa: E501
# Authentication setting
auth_settings = ['apiKey', 'clientID'] # noqa: E501
return self.api_client.call_api(
'/users/{user_guid}/members/{member_guid}/oauth_window_uri', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='MemberResponseBody', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
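    # Usage sketch for requesting an OAuth window URI from an app-based flow (parameter values
    # follow the docstring above; the `oauth_window_uri` attribute on the returned member is
    # an assumption):
    #   >>> body = api.read_o_auth_window_uri(member_guid, user_guid, referral_source="APP")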
def resume_member(self, member_guid, user_guid, body, **kwargs): # noqa: E501
"""Resume aggregation from MFA # noqa: E501
This endpoint answers the challenges needed when a member has been challenged by multi-factor authentication. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.resume_member(member_guid, user_guid, body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:param MemberResumeRequestBody body: Member object with MFA challenge answers (required)
:return: MemberResponseBody
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.resume_member_with_http_info(member_guid, user_guid, body, **kwargs) # noqa: E501
else:
(data) = self.resume_member_with_http_info(member_guid, user_guid, body, **kwargs) # noqa: E501
return data
def resume_member_with_http_info(self, member_guid, user_guid, body, **kwargs): # noqa: E501
"""Resume aggregation from MFA # noqa: E501
This endpoint answers the challenges needed when a member has been challenged by multi-factor authentication. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.resume_member_with_http_info(member_guid, user_guid, body, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:param MemberResumeRequestBody body: Member object with MFA challenge answers (required)
:return: MemberResponseBody
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['member_guid', 'user_guid', 'body'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method resume_member" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'member_guid' is set
if ('member_guid' not in params or
params['member_guid'] is None):
raise ValueError("Missing the required parameter `member_guid` when calling `resume_member`") # noqa: E501
# verify the required parameter 'user_guid' is set
if ('user_guid' not in params or
params['user_guid'] is None):
raise ValueError("Missing the required parameter `user_guid` when calling `resume_member`") # noqa: E501
# verify the required parameter 'body' is set
if ('body' not in params or
params['body'] is None):
raise ValueError("Missing the required parameter `body` when calling `resume_member`") # noqa: E501
collection_formats = {}
path_params = {}
if 'member_guid' in params:
path_params['member_guid'] = params['member_guid'] # noqa: E501
if 'user_guid' in params:
path_params['user_guid'] = params['user_guid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/vnd.mx.atrium.v1+json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['apiKey', 'clientID'] # noqa: E501
return self.api_client.call_api(
'/users/{user_guid}/members/{member_guid}/resume', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='MemberResponseBody', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
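    # Usage sketch for answering MFA challenges (the exact field layout of
    # MemberResumeRequestBody, here a `challenges` list of guid/value pairs, is an assumption):
    #   >>> answers = MemberResumeRequestBody(challenges=[{"guid": challenge_guid, "value": "1234"}])
    #   >>> body = api.resume_member(member_guid, user_guid, answers)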
def update_member(self, member_guid, user_guid, **kwargs): # noqa: E501
"""Update member # noqa: E501
Use this endpoint to update a member's attributes. Only the credentials, identifier, and metadata parameters can be updated. To get a list of the required credentials for the member, use the list member credentials endpoint. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_member(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:param MemberUpdateRequestBody body: Member object to be updated with optional parameters (credentials, identifier, metadata)
:return: MemberResponseBody
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.update_member_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
else:
(data) = self.update_member_with_http_info(member_guid, user_guid, **kwargs) # noqa: E501
return data
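    # Usage sketch for updating credentials only (the field layout of MemberUpdateRequestBody
    # is an assumption; per the docstring only credentials, identifier and metadata can change):
    #   >>> update = MemberUpdateRequestBody(member={"credentials": [{"guid": credential_guid, "value": "new-password"}]})
    #   >>> body = api.update_member(member_guid, user_guid, body=update)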
def update_member_with_http_info(self, member_guid, user_guid, **kwargs): # noqa: E501
"""Update member # noqa: E501
Use this endpoint to update a member's attributes. Only the credentials, identifier, and metadata parameters can be updated. To get a list of the required credentials for the member, use the list member credentials endpoint. # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.update_member_with_http_info(member_guid, user_guid, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str member_guid: The unique identifier for a `member`. (required)
:param str user_guid: The unique identifier for a `user`. (required)
:param MemberUpdateRequestBody body: Member object to be updated with optional parameters (credentials, identifier, metadata)
:return: MemberResponseBody
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['member_guid', 'user_guid', 'body'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method update_member" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'member_guid' is set
if ('member_guid' not in params or
params['member_guid'] is None):
raise ValueError("Missing the required parameter `member_guid` when calling `update_member`") # noqa: E501
# verify the required parameter 'user_guid' is set
if ('user_guid' not in params or
params['user_guid'] is None):
raise ValueError("Missing the required parameter `user_guid` when calling `update_member`") # noqa: E501
collection_formats = {}
path_params = {}
if 'member_guid' in params:
path_params['member_guid'] = params['member_guid'] # noqa: E501
if 'user_guid' in params:
path_params['user_guid'] = params['user_guid'] # noqa: E501
query_params = []
header_params = {}
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/vnd.mx.atrium.v1+json']) # noqa: E501
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.select_header_content_type( # noqa: E501
['application/json']) # noqa: E501
# Authentication setting
auth_settings = ['apiKey', 'clientID'] # noqa: E501
return self.api_client.call_api(
'/users/{user_guid}/members/{member_guid}', 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='MemberResponseBody', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
| 47.70257 | 703 | 0.640238 | 9,473 | 77,946 | 5.041803 | 0.038531 | 0.043383 | 0.026674 | 0.034296 | 0.971127 | 0.967421 | 0.963966 | 0.961349 | 0.958795 | 0.956094 | 0 | 0.014717 | 0.27469 | 77,946 | 1,633 | 704 | 47.731782 | 0.829981 | 0.396326 | 0 | 0.816964 | 0 | 0 | 0.229345 | 0.06452 | 0 | 0 | 0 | 0 | 0 | 1 | 0.034598 | false | 0 | 0.004464 | 0 | 0.090402 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
5477ab1b532f7a6dd9a42ae8d9f2fc1a255f8204 | 5,423 | py | Python | 14/00/1.py | pylangstudy/201708 | 126b1af96a1d1f57522d5a1d435b58597bea2e57 | [
"CC0-1.0"
] | null | null | null | 14/00/1.py | pylangstudy/201708 | 126b1af96a1d1f57522d5a1d435b58597bea2e57 | [
"CC0-1.0"
] | 39 | 2017-07-31T22:54:01.000Z | 2017-08-31T00:19:03.000Z | 14/00/1.py | pylangstudy/201708 | 126b1af96a1d1f57522d5a1d435b58597bea2e57 | [
"CC0-1.0"
] | null | null | null | #http://d.hatena.ne.jp/yumimue/20071220/1198141598
import re
regex = re.compile(r'ab', re.IGNORECASE)
match = regex.search('abcd')
print(regex.sub('XY', 'abcd'))
if match: print(match.expand('XY'))
match = regex.search('abcd')
print(match.groups())
if match: print(match.expand(r'XY'))
li_tag = '''<li>Apple</li>
<li>Orange</li>
<li>Meron</li>
<li>Grape</li>
<li>Cherry</li>
'''
print('-------------- re.sub() --------------')
result = re.sub(r'<li>(.+?)<\/li>', r'\1', li_tag, flags=re.IGNORECASE)
print(result)
print('--------------')
result = re.sub(r'^<li>(.+?)<\/li>$', r'\1', li_tag, flags=re.MULTILINE | re.IGNORECASE)
print(result)
print('--------------')
result = re.sub(r'<li>(.+?)<\/li>', r'\1', li_tag, flags=re.MULTILINE | re.IGNORECASE)
print(result)
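# Note: re.sub() takes `count` as its fourth positional argument, so flags such as
# re.IGNORECASE or re.MULTILINE must be passed with the `flags=` keyword (as above) or
# compiled into the pattern; passed positionally they would silently be treated as a count.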
print('-------------- re.search() --------------')
match = re.search(r'<li>(.+?)<\/li>', li_tag, re.IGNORECASE)
print(match.lastindex)
print(match.expand(r'{\1}'))
"""
print('--------------')
match = re.search(r'^<li>(.+?)<\/li>$', li_tag, re.IGNORECASE)
print(match.lastindex)#AttributeError: 'NoneType' object has no attribute 'lastindex'
print(match.expand(r'{\1}'))
"""
print('--------------')
match = re.search(r'<li>(.+?)<\/li>', li_tag, re.MULTILINE | re.IGNORECASE)
print(match.lastindex)  # this search matches, so lastindex is available here
print(match.expand(r'{\1}'))
print('--------------')
match = re.search(r'^<li>(.+?)<\/li>$', li_tag, re.MULTILINE | re.IGNORECASE)
print(match.lastindex)  # with re.MULTILINE the ^...$ anchors match per line, so this also succeeds
print(match.expand(r'{\1}'))
print('-------------- regex.search() --------------')
regex = re.compile(r'<li>(.+?)<\/li>', re.IGNORECASE)
match = regex.search(li_tag)
print(match.lastindex); print(match.expand(r'{\1}'));
print('--------------')
"""
regex = re.compile(r'^<li>(.+?)<\/li>$', re.IGNORECASE)
match = regex.search(li_tag)
print(match.lastindex); print(match.expand(r'{\1}'));#AttributeError: 'NoneType' object has no attribute 'lastindex'
print('--------------')
"""
regex = re.compile(r'<li>(.+?)<\/li>', re.MULTILINE | re.IGNORECASE)
match = regex.search(li_tag)
print(match.lastindex); print(match.expand(r'{\1}'));
print('--------------')
regex = re.compile(r'^<li>(.+?)<\/li>$', re.MULTILINE | re.IGNORECASE)
match = regex.search(li_tag)
print(match.lastindex); print(match.expand(r'{\1}'));
print('--------------')
print('-------------- re.findall() --------------')
regex = re.compile(r'<li>(.+?)<\/li>', re.IGNORECASE)
print(regex.findall(li_tag))
print('--------------')
regex = re.compile(r'^<li>(.+?)<\/li>$', re.IGNORECASE)
print(regex.findall(li_tag))
print('--------------')
regex = re.compile(r'<li>(.+?)<\/li>', re.MULTILINE | re.IGNORECASE)
print(regex.findall(li_tag))
print('--------------')
regex = re.compile(r'^<li>(.+?)<\/li>$', re.MULTILINE | re.IGNORECASE)
print(regex.findall(li_tag))
print('-------------- re.finditer() --------------')
regex = re.compile(r'<li>(.+?)<\/li>', re.IGNORECASE)
matches = regex.finditer(li_tag)
for match in matches:
print(match)
print(match.lastindex, match.groups())
#print(match.lastindex); print(match.expand(r'{\1}'));
print('--------------')
"""
print('-------------- re.findall() --------------')
print('--------------')
match = re.findall(r'<li>(.+?)<\/li>', li_tag, re.MULTILINE | re.IGNORECASE)
print(match.lastindex)#AttributeError: 'NoneType' object has no attribute 'lastindex'
print(match.expand(r'{\1}'))
"""
"""
print('-------------- regex.findall() --------------')
regex = re.compile(r'<li>(.+?)<\/li>', re.MULTILINE | re.IGNORECASE)
#match = regex.findall(li_tag)#AttributeError: 'list' object has no attribute 'lastindex'
#match = regex.fullmatch(li_tag)#AttributeError: 'NoneType' object has no attribute 'lastindex'
#match = regex.match(li_tag)
match = regex.search(li_tag)
print(match.lastindex)
print(match.groups())
print(match.expand(r'{\1}'))
#print(match.expand(r'{\1} {\2}'))
"""
"""
print('--------------')
match = re.search(r'^<li>(.+?)<\/li>$', li_tag, re.IGNORECASE)
print(match.lastindex)#AttributeError: 'NoneType' object has no attribute 'lastindex'
print(match.expand(r'{\1}'))
"""
"""
print('--------------')
match = re.search(r'<li>(.+?)<\/li>', li_tag, re.MULTILINE | re.IGNORECASE)
print(match.lastindex)#AttributeError: 'NoneType' object has no attribute 'lastindex'
print(match.expand(r'{\1}'))
print('--------------')
match = re.search(r'^<li>(.+?)<\/li>$', li_tag, re.MULTILINE | re.IGNORECASE)
print(match.lastindex)#AttributeError: 'NoneType' object has no attribute 'lastindex'
print(match.expand(r'{\1}'))
print('--------------aaaaaaaaaaaaa')
regex = re.compile(r'<li>(.+?)<\/li>', re.MULTILINE | re.IGNORECASE)
match = regex.search(li_tag, re.MULTILINE | re.IGNORECASE)
print(match.lastindex)#AttributeError: 'NoneType' object has no attribute 'lastindex'
print(match.expand(r'{\1}'))
print('--------------')
#regex = re.compile(r'^<li>(.+?)<\/li>$', re.MULTILINE | re.IGNORECASE)
match = regex.search(li_tag)
print(match.lastindex)#1
print(match.expand(r'{\1}'))
#print(match.expand(r'{\1} {\2} {\3} {\4}'))#sre_constants.error: invalid group reference 2 at position 7
#result = re.expand(r'<li>(.+?)<\/li>', r'\1', li_tag)
print('--------------')
regex = re.compile(r'<li>(.+?)<\/li>', re.IGNORECASE)
match = regex.search(li_tag)
print(match.lastindex)
print(match.expand(r'{\1} {\2} {\3} {\4}'))
#result = re.expand(r'<li>(.+?)<\/li>', r'\1', li_tag)
"""
| 35.913907 | 116 | 0.599115 | 718 | 5,423 | 4.481894 | 0.084958 | 0.152268 | 0.040398 | 0.105656 | 0.874145 | 0.840584 | 0.840273 | 0.809198 | 0.779988 | 0.779988 | 0 | 0.010404 | 0.07837 | 5,423 | 150 | 117 | 36.153333 | 0.633453 | 0.041674 | 0 | 0.58209 | 0 | 0 | 0.285082 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.014925 | 0 | 0.014925 | 0.567164 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 7 |
54977610e010b8434eee594e7e599b126dbf98bd | 21,012 | py | Python | clipboard/tests/test_views.py | RoofBite/SpanishClipboard | ce021040be19d24c768e8e8a432a4fa1f3015d0b | [
"MIT"
] | null | null | null | clipboard/tests/test_views.py | RoofBite/SpanishClipboard | ce021040be19d24c768e8e8a432a4fa1f3015d0b | [
"MIT"
] | null | null | null | clipboard/tests/test_views.py | RoofBite/SpanishClipboard | ce021040be19d24c768e8e8a432a4fa1f3015d0b | [
"MIT"
] | null | null | null | from django.test import TestCase, Client
from django.urls import reverse
from clipboard.models import Word, UserAccount, User
from django.db.models import Q
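# These view tests are written for Django's test runner and can typically be run with
# `python manage.py test clipboard.tests.test_views` (assuming the app is installed under
# the `clipboard` label, as the imports above suggest).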
class TestViews_view_words(TestCase):
def setUp(self):
self.user = User.objects.create_user("John", "John@example.com", "Password")
def test_view_words_POST_authenticated(self):
client = Client()
client.login(username="John", password="Password")
        # If the method is POST and the user is authenticated, view_words redirects to
        # login_page, which in that situation redirects on to add_word
response = client.post(reverse("view_words", args=["1"]), follow=True)
self.assertEquals(response.status_code, 200)
self.assertTemplateUsed(response, "clipboard/add_word.html")
def test_view_words_POST_not_authenticated(self):
client = Client()
response = client.post(reverse("view_words", args=["1"]), follow=True)
self.assertEquals(response.status_code, 200)
self.assertTemplateUsed(response, "clipboard/login.html")
def test_view_words_GET_not_authenticated(self):
client = Client()
response = client.get(reverse("login_page"), follow=True)
self.assertEquals(response.status_code, 200)
self.assertTemplateUsed(response, "clipboard/login.html")
def test_view_words_GET_authenticated(self):
client = Client()
client.login(username="John", password="Password")
response = client.get(reverse("view_words", args=["1"]), follow=True)
self.assertEquals(response.status_code, 200)
self.assertTemplateUsed(response, "clipboard/view_words.html")
def test_view_words_GET_authenticated_search_days_0(self):
client = Client()
client.login(username="John", password="Password")
response = client.get(reverse("view_words", args=["0"]))
self.word = Word.objects.create(
polish_word="polish_word",
spanish_word="spanish_word",
etymology="etymology",
notes="notes",
date_added="2021-01-01",
for_deletion=False,
user=response.wsgi_request.user,
)
self.assertEquals(Word.objects.first(), self.word)
self.assertEquals(response.status_code, 200)
self.assertTemplateUsed(response, "clipboard/view_words.html")
def test_view_words_GET_authenticated_search_contains_0(self):
client = Client()
client.login(username="John", password="Password")
response = client.get(
reverse("view_words", args=["0"]), {"search_query": "2021"}
)
self.assertEqual(response.context["search_query"], "2021")
self.word = Word.objects.create(
polish_word="polish_word",
spanish_word="spanish_word",
etymology="etymology",
notes="notes",
date_added="2021-01-01",
for_deletion=False,
user=response.wsgi_request.user,
)
self.assertEquals(
Word.objects.get(
date_added__startswith="2021",
user=response.wsgi_request.user,
for_deletion=False,
),
self.word,
)
self.assertEquals(response.status_code, 200)
self.assertTemplateUsed(response, "clipboard/view_words.html")
def test_view_words_GET_authenticated_search_without_0(self):
client = Client()
client.login(username="John", password="Password")
response = client.get(
reverse("view_words", args=["0"]), {"search_query": "Test"}
)
self.search_query_text = "Test"
self.assertEqual(response.context["search_query"], self.search_query_text)
        # Test for polish_word field lookup
self.word1 = Word.objects.create(
polish_word="polish_wordtest",
spanish_word="spanish_word",
etymology="etymology",
notes="notes",
date_added="2021-01-01",
for_deletion=False,
user=response.wsgi_request.user,
)
self.search_query1 = Word.objects.get(
Q(polish_word__icontains=self.search_query_text)
| Q(date_added__startswith=self.search_query_text)
| Q(spanish_word__icontains=self.search_query_text)
| Q(etymology__icontains=self.search_query_text)
| Q(notes__icontains=self.search_query_text),
user=response.wsgi_request.user,
for_deletion=False,
)
self.assertEquals(self.search_query1, self.word1)
self.word1.delete()
        # Test for spanish_word field lookup
self.word2 = Word.objects.create(
polish_word="polish_word",
spanish_word="spanish_wordtest",
etymology="etymology",
notes="notes",
date_added="2021-01-01",
for_deletion=False,
user=response.wsgi_request.user,
)
self.search_query2 = Word.objects.get(
Q(polish_word__icontains=self.search_query_text)
| Q(date_added__startswith=self.search_query_text)
| Q(spanish_word__icontains=self.search_query_text)
| Q(etymology__icontains=self.search_query_text)
| Q(notes__icontains=self.search_query_text),
user=response.wsgi_request.user,
for_deletion=False,
)
self.assertEquals(self.search_query2, self.word2)
self.word2.delete()
        # Test for etymology field lookup
self.word3 = Word.objects.create(
polish_word="polish_word",
spanish_word="spanish_word",
etymology="etymologytest",
notes="notes",
date_added="2021-01-01",
for_deletion=False,
user=response.wsgi_request.user,
)
self.search_query3 = Word.objects.get(
Q(polish_word__icontains=self.search_query_text)
| Q(date_added__startswith=self.search_query_text)
| Q(spanish_word__icontains=self.search_query_text)
| Q(etymology__icontains=self.search_query_text)
| Q(notes__icontains=self.search_query_text),
user=response.wsgi_request.user,
for_deletion=False,
)
self.assertEquals(self.search_query3, self.word3)
self.word3.delete()
        # Test for notes field lookup
self.word4 = Word.objects.create(
polish_word="polish_word",
spanish_word="spanish_word",
etymology="etymology",
notes="notestest",
date_added="2021-01-01",
for_deletion=False,
user=response.wsgi_request.user,
)
self.search_query4 = Word.objects.get(
Q(polish_word__icontains=self.search_query_text)
| Q(date_added__startswith=self.search_query_text)
| Q(spanish_word__icontains=self.search_query_text)
| Q(etymology__icontains=self.search_query_text)
| Q(notes__icontains=self.search_query_text),
user=response.wsgi_request.user,
for_deletion=False,
)
self.assertEquals(self.search_query4, self.word4)
self.word4.delete()
self.assertEquals(response.status_code, 200)
self.assertTemplateUsed(response, "clipboard/view_words.html")
class TestViews_add_word(TestCase):
def setUp(self):
self.user = User.objects.create_user("John", "John@example.com", "Password")
def test_add_word_POST_authenticated(self):
client = Client()
client.login(username="John", password="Password")
response = client.post(
reverse("add_word"),
{
"polish_word": "polish_word",
"spanish_word": "spanish_word",
"etymology": "etymology",
"notes": "notes",
},
)
self.assertEquals(response.status_code, 302)
self.assertEquals(Word.objects.first().polish_word, "polish_word")
self.assertEquals(Word.objects.first().spanish_word, "spanish_word")
self.assertEquals(Word.objects.first().etymology, "etymology")
self.assertEquals(Word.objects.first().notes, "notes")
def test_add_word_POST_not_authenticated(self):
client = Client()
response = client.post(
reverse("add_word"),
{
"polish_word": "polish_word2",
"spanish_word": "spanish_word2",
"etymology": "etymology2",
"notes": "notes2",
},
)
self.assertEquals(response.status_code, 302)
class TestViews_view_deleted_words(TestCase):
def setUp(self):
self.user = User.objects.create_user("John", "John@example.com", "Password")
self.word1 = Word.objects.create(
polish_word="polish_word",
spanish_word="spanish_word",
etymology="etymology",
notes="notes",
date_added="2021-01-01",
for_deletion=True,
user=self.user,
)
self.word2 = Word.objects.create(
polish_word="polish_word2",
spanish_word="spanish_word2",
etymology="etymology",
notes="notes",
date_added="2021-01-02",
for_deletion=True,
user=self.user,
)
def test_view_deleted_words_GET_authenticated(self):
client = Client()
client.login(username="John", password="Password")
response = client.get(reverse("view_deleted_words"))
self.assertEquals(Word.objects.first(), self.word1)
self.assertEquals(response.status_code, 200)
self.assertTemplateUsed(response, "clipboard/deleted_words.html")
def test_view_deleted_words_GET_not_authenticated(self):
client = Client()
response = client.get(reverse("view_deleted_words"))
self.assertEquals(Word.objects.first(), self.word1)
self.assertEquals(response.status_code, 302)
class TestViews_hard_delete_words(TestCase):
def setUp(self):
self.user = User.objects.create_user("John", "John@example.com", "Password")
self.word1 = Word.objects.create(
polish_word="polish_word",
spanish_word="spanish_word",
etymology="etymology",
notes="notes",
date_added="2021-01-01",
for_deletion=True,
user=self.user,
)
self.word2 = Word.objects.create(
polish_word="polish_word2",
spanish_word="spanish_word2",
etymology="etymology",
notes="notes",
date_added="2021-01-02",
for_deletion=True,
user=self.user,
)
def test_hard_delete_words_POST_authenticated(self):
client = Client()
client.login(username="John", password="Password")
response = client.post(reverse("hard_delete_words"), {"delete_all":1, "delete_all_confirm":"delete"})
self.assertEquals(Word.objects.first(), None)
self.assertEquals(response.status_code, 302)
def test_hard_delete_words_GET_not_authenticated(self):
client = Client()
response = client.get(reverse("hard_delete_words"),follow=True)
self.assertEquals(Word.objects.first(), self.word1)
self.assertEquals(response.status_code, 200)
class TestViews_delete_word(TestCase):
def setUp(self):
self.user = User.objects.create_user("John", "John@example.com", "Password")
self.word1 = Word.objects.create(
polish_word="polish_word",
spanish_word="spanish_word",
etymology="etymology",
notes="notes",
date_added="2021-01-01",
for_deletion=True,
user=self.user,
)
def test_delete_word_POST_authenticated(self):
client = Client()
client.login(username="John", password="Password")
response = client.post(reverse("delete_word", args=[1]))
self.assertEquals(Word.objects.first(), None)
self.assertEquals(response.status_code, 302)
def test_delete_word_GET_authenticated(self):
client = Client()
client.login(username="John", password="Password")
response = client.get(reverse("delete_word", args=[1]))
self.assertEquals(response.status_code, 200)
def test_delete_word_GET_authenticated_no_word(self):
client = Client()
client.login(username="John", password="Password")
self.word1.delete()
response = client.get(reverse("delete_word", args=[1]))
self.assertEquals(response.status_code, 302)
def test_delete_word_GET_not_authenticated(self):
client = Client()
response = client.get(reverse("delete_word", args=[0]))
self.assertEquals(response.status_code, 302)
class TestViews_hide_word(TestCase):
def setUp(self):
self.user = User.objects.create_user("John", "John@example.com", "Password")
self.word1 = Word.objects.create(
polish_word="polish_word",
spanish_word="spanish_word",
etymology="etymology",
notes="notes",
date_added="2021-01-01",
for_deletion=True,
user=self.user,
)
def test_hide_word_GET_authenticated(self):
client = Client()
client.login(username="John", password="Password")
response = client.get(reverse("hide_word", args=[1]))
self.assertEquals(response.status_code, 302)
def test_hide_word_GET_authenticated_no_word(self):
client = Client()
client.login(username="John", password="Password")
self.word1.delete()
response = client.get(reverse("hide_word", args=[1]))
self.assertEquals(response.status_code, 302)
def test_hide_word_GET_not_authenticated(self):
client = Client()
response = client.get(reverse("hide_word", args=[1]))
self.assertEquals(response.status_code, 302)
class TestViews_retrive_word(TestCase):
def setUp(self):
self.user = User.objects.create_user("John", "John@example.com", "Password")
self.word1 = Word.objects.create(
polish_word="polish_word",
spanish_word="spanish_word",
etymology="etymology",
notes="notes",
date_added="2021-01-01",
for_deletion=True,
user=self.user,
)
def test_hide_word_GET_authenticated(self):
client = Client()
client.login(username="John", password="Password")
response = client.get(reverse("retrive_word", args=[1]))
self.assertEquals(response.status_code, 302)
def test_hide_word_GET_authenticated_no_word(self):
client = Client()
client.login(username="John", password="Password")
self.word1.delete()
response = client.get(reverse("retrive_word", args=[1]))
self.assertEquals(response.status_code, 302)
def test_hide_word_GET_not_authenticated(self):
client = Client()
response = client.get(reverse("retrive_word", args=[1]))
self.assertEquals(response.status_code, 302)
class TestViews_edit_word(TestCase):
def setUp(self):
self.user = User.objects.create_user("John", "John@example.com", "Password")
self.word1 = Word.objects.create(
polish_word="polish_word",
spanish_word="spanish_word",
etymology="etymology",
notes="notes",
date_added="2021-01-01",
for_deletion=True,
user=self.user,
)
def test_edit_word_POST_authenticated(self):
client = Client()
client.login(username="John", password="Password")
response = client.post(reverse("edit_word", args=[1]))
self.assertEquals(response.status_code, 302)
def test_edit_word_POST_authenticated_redirect(self):
client = Client()
client.login(username="John", password="Password")
response = client.post(reverse("edit_word", args=[1]),{"redirect":"redirect"})
self.assertEquals(response.status_code, 302)
def test_edit_word_GET_authenticated(self):
client = Client()
client.login(username="John", password="Password")
response = client.get(reverse("edit_word", args=[1]))
self.assertEquals(response.status_code, 200)
def test_edit_word_GET_authenticated_no_word(self):
client = Client()
client.login(username="John", password="Password")
self.word1.delete()
response = client.get(reverse("edit_word", args=[1]))
self.assertEquals(response.status_code, 302)
def test_edit_word_GET_not_authenticated(self):
client = Client()
response = client.get(reverse("edit_word", args=[0]))
self.assertEquals(response.status_code, 302)
class TestViews_view_word(TestCase):
def setUp(self):
self.user = User.objects.create_user("John", "John@example.com", "Password")
self.word1 = Word.objects.create(
polish_word="polish_word",
spanish_word="spanish_word",
etymology="etymology",
notes="notes",
date_added="2021-01-01",
for_deletion=True,
user=self.user,
)
def test_view_word_GET_authenticated(self):
client = Client()
client.login(username="John", password="Password")
response = client.get(reverse("view_word", args=[1]))
self.assertEquals(response.status_code, 200)
def test_view_word_GET_authenticated_no_word(self):
client = Client()
client.login(username="John", password="Password")
self.word1.delete()
response = client.get(reverse("view_word", args=[1]))
self.assertEquals(response.status_code, 302)
def test_view_word_GET_lazy_user(self):
client = Client()
self.client.logout()
response = client.get(reverse("view_word", args=[1]))
self.assertEquals(response.status_code, 302)
class TestViews_register(TestCase):
def setUp(self):
self.user = User.objects.create_user("John", "John@example.com", "Password")
def test_register_GET_authenticated(self):
client = Client()
client.login(username="John", password="Password")
response = client.get(reverse("register"))
self.assertEquals(response.status_code, 302)
def test_register_GET_lazy_user(self):
client = Client()
response = client.get(reverse("register"))
self.assertEquals(response.status_code, 200)
def test_register_POST_not_authenticated(self):
client = Client()
response = client.post(reverse("register"),{
"username":"ExampleUser",
"password1":"Pa$$word132",
"password2":"Pa$$word132",
})
self.assertEquals(response.status_code, 302)
class TestViews_logout_page(TestCase):
def setUp(self):
self.user = User.objects.create_user("John", "John@example.com", "Password")
def test_logout_page_GET_authenticated(self):
client = Client()
client.login(username="John", password="Password")
response = client.get(reverse("logout_page"))
self.assertEquals(response.status_code, 302)
def test_register_GET_lazy_user(self):
client = Client()
response = client.get(reverse("logout_page"))
self.assertEquals(response.status_code, 302)
class TestViews_login_page(TestCase):
def setUp(self):
self.user = User.objects.create_user("John", "John@example.com", "Password")
def test_login_page_GET_authenticated(self):
client = Client()
client.login(username="John", password="Password")
response = client.get(reverse("login_page"))
self.assertEquals(response.status_code, 302)
def test_login_page_GET_lazy_user(self):
client = Client()
response = client.get(reverse("login_page"))
self.assertEquals(response.status_code, 200)
def test_login_page_POST_not_authenticated_valid_data(self):
client = Client()
response = client.post(reverse("login_page"),{
"username":"John",
"password":"Password",
})
self.assertEquals(response.status_code, 302)
def test_login_page_POST_not_authenticated_non_valid_data(self):
client = Client()
response = client.post(reverse("login_page"),{
"username":"John2",
"password":"Password2",
})
self.assertEquals(response.status_code, 302) | 32.12844 | 109 | 0.621835 | 2,299 | 21,012 | 5.438886 | 0.053502 | 0.06142 | 0.051184 | 0.095969 | 0.918506 | 0.901472 | 0.87588 | 0.869962 | 0.857726 | 0.84285 | 0 | 0.021575 | 0.261041 | 21,012 | 654 | 110 | 32.12844 | 0.783732 | 0.011708 | 0 | 0.707265 | 0 | 0 | 0.115173 | 0.007274 | 0 | 0 | 0 | 0 | 0.138889 | 1 | 0.111111 | false | 0.08547 | 0.008547 | 0 | 0.145299 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
49a9364c654438990d314c0f6e780196441a12ca | 379 | py | Python | blechpy/__init__.py | thomasrgray/blechpy | 46a95991e1d41556a263e48c9c3b61b1d337aae0 | [
"MIT"
] | 8 | 2020-10-05T19:00:45.000Z | 2021-09-14T16:43:08.000Z | blechpy/__init__.py | thomasrgray/blechpy | 46a95991e1d41556a263e48c9c3b61b1d337aae0 | [
"MIT"
] | 25 | 2019-11-01T14:42:22.000Z | 2022-03-02T21:43:58.000Z | blechpy/__init__.py | thomasrgray/blechpy | 46a95991e1d41556a263e48c9c3b61b1d337aae0 | [
"MIT"
] | 3 | 2019-11-01T14:38:42.000Z | 2021-10-21T16:15:09.000Z | from blechpy.datastructures.objects import load_experiment, load_dataset
from blechpy.datastructures.objects import load_project, load_pickled_object
from blechpy.datastructures.dataset import dataset, port_in_dataset
from blechpy.datastructures.experiment import experiment
from blechpy.datastructures.project import project
from blechpy import dio, analysis, utils, plotting
| 42.111111 | 76 | 0.873351 | 47 | 379 | 6.893617 | 0.361702 | 0.203704 | 0.385802 | 0.197531 | 0.259259 | 0.259259 | 0 | 0 | 0 | 0 | 0 | 0 | 0.084433 | 379 | 8 | 77 | 47.375 | 0.933718 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
49f9fbdc4bc24e4113bee26c9b4faae26c4ed959 | 138 | py | Python | tf_implementation/segmentation/losses/segmentation.py | arekmula/skull_stripping | d03cef81392f8cd243dc1c6d32ffa897af922eb2 | [
"MIT"
] | 3 | 2021-02-23T15:26:40.000Z | 2021-08-11T19:36:21.000Z | tf_implementation/segmentation/losses/segmentation.py | arekmula/skull_stripping | d03cef81392f8cd243dc1c6d32ffa897af922eb2 | [
"MIT"
] | null | null | null | tf_implementation/segmentation/losses/segmentation.py | arekmula/skull_stripping | d03cef81392f8cd243dc1c6d32ffa897af922eb2 | [
"MIT"
] | null | null | null | from segmentation_models.losses import dice_loss as sm_dice_loss
def dice_loss(y_true, y_pred):
return sm_dice_loss(y_true, y_pred)
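# Minimal usage sketch (assumes a tf.keras segmentation model named `model` defined elsewhere;
# segmentation_models losses are Keras-compatible callables):
#   model.compile(optimizer="adam", loss=dice_loss, metrics=["accuracy"])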
| 23 | 64 | 0.811594 | 26 | 138 | 3.884615 | 0.538462 | 0.316832 | 0.19802 | 0.257426 | 0.356436 | 0.356436 | 0 | 0 | 0 | 0 | 0 | 0 | 0.130435 | 138 | 5 | 65 | 27.6 | 0.841667 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.333333 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 9 |
b7402955d30ee7e9d55d80a8049ed442d918c3cf | 113 | py | Python | src/raspberrypi/events/__init__.py | EnricoFortunato/hawkeye | 9acf5d5e0d37fba794f2ea8b44e705ab086a358c | [
"Apache-2.0"
] | null | null | null | src/raspberrypi/events/__init__.py | EnricoFortunato/hawkeye | 9acf5d5e0d37fba794f2ea8b44e705ab086a358c | [
"Apache-2.0"
] | null | null | null | src/raspberrypi/events/__init__.py | EnricoFortunato/hawkeye | 9acf5d5e0d37fba794f2ea8b44e705ab086a358c | [
"Apache-2.0"
] | null | null | null | from events.event import Event
from events.echo_event import Echo_Event
from events.snap_event import Snap_Event
| 28.25 | 40 | 0.867257 | 19 | 113 | 4.947368 | 0.315789 | 0.319149 | 0.319149 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.106195 | 113 | 3 | 41 | 37.666667 | 0.930693 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
3f88b0be118a586efa70b7d13641457fab8939ae | 21,496 | py | Python | scripts/filtered_dataset_creation/dataset_balancing.py | mrjojo11/malpaca-pub | 26fd3a7045288bed66d624e0f5593067ff05952d | [
"MIT"
] | null | null | null | scripts/filtered_dataset_creation/dataset_balancing.py | mrjojo11/malpaca-pub | 26fd3a7045288bed66d624e0f5593067ff05952d | [
"MIT"
] | null | null | null | scripts/filtered_dataset_creation/dataset_balancing.py | mrjojo11/malpaca-pub | 26fd3a7045288bed66d624e0f5593067ff05952d | [
"MIT"
] | null | null | null | import csv
import glob
import math
import os
import socket
import sys
from random import random, seed
from timeit import default_timer as timer
import time
from statistics import mean
from pathlib import Path
import networkx as nx
import numpy as np
from scapy.layers.inet import IP, UDP
from scapy.utils import PcapWriter, PcapReader
import tkinter as tk
from tkinter import filedialog
import zat
from zat.log_to_dataframe import LogToDataFrame
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.font_manager import FontProperties
from matplotlib.pyplot import cm
import matplotlib.transforms as mtrans
class Dataset_Balancing():
@staticmethod
def creating_balanced_dataset_netflow(path_to_balancing_file, path_to_original_data_set, path_to_storage, old_exp_name,
new_exp_name):
new_folder_path = path_to_storage + "/" + new_exp_name
os.mkdir(new_folder_path)
balancing_df = pd.read_csv(path_to_balancing_file)
for scenario_index, scenario in enumerate(balancing_df.iterrows()):
scenario_name = scenario[1]["scenario"]
row = scenario[1].drop("scenario")
print("Balancing Scenario: " + str(scenario_index + 1) + "/" + str(len(balancing_df.index)))
print("Scenario: " + scenario_name)
detailed_labels_to_get = pd.Series(row).where(lambda x: x != 0).dropna()
if len(detailed_labels_to_get) > 0:
scenario_path = path_to_original_data_set + "/" + scenario_name
files = sorted([f.path for f in os.scandir(scenario_path) if f.is_dir()])
for file_index, file in enumerate(files):
csv_summary = glob.glob(file + "/*.csv")[0]
csv_summary_df = pd.read_csv(csv_summary)
if file_index == 0:
combined_df = csv_summary_df
else:
combined_df = combined_df.append(csv_summary_df)
combined_df["detailed_label"] = combined_df["detailed_label"].str.lower()
found_df = combined_df[(combined_df["status"] == "Found")]
response_df = combined_df[(combined_df["status"] == "Response")]
combined_df = found_df.append(response_df)
for index, detailed_label_to_get in enumerate(detailed_labels_to_get.iteritems()):
detailed_label = detailed_label_to_get[0]
amount = detailed_label_to_get[1]
filtered_df = combined_df[combined_df["detailed_label"] == detailed_label]
selected_df = filtered_df.sample(n=amount)
if index == 0:
combined_selected_df = selected_df
else:
combined_selected_df = combined_selected_df.append(selected_df)
files = combined_selected_df["file"].unique().tolist()
for selected_file_index, file in enumerate(files):
print("Balancing File: " + str(selected_file_index + 1) + "/" + str(len(files)))
print("File: " + file)
file_df = combined_selected_df[combined_selected_df["file"] == file]
scenario_name = file_df["scenario"].unique().tolist()[0]
scenario_folder_path = new_folder_path + "/" + scenario_name
if not os.path.exists(scenario_folder_path):
os.mkdir(scenario_folder_path)
file_path = scenario_folder_path + "/" + file
os.mkdir(file_path)
path_to_original_pcap = path_to_original_data_set + "/" + scenario_name + "/" + file + "/" + file + "_" + old_exp_name + ".pcap"
connections_needed = [x for x in zip(file_df["src_ip"], file_df["dst_ip"], file_df["ip_protocol"], file_df["src_port"], file_df["dst_port"])]
connections_needed = [(str(x[0]).strip(), str(x[1]).strip(), str(x[2]).strip(), str(x[3]).strip(), str(x[4]).strip(),) for x in connections_needed]
new_pcap_path = file_path + "/" + file + "_" + new_exp_name + ".pcap"
appended_packets = 0
file_dic = {}
with PcapReader(path_to_original_pcap) as packets:
for packet in packets:
packet_string = packet.show(dump=True)
packet_string = packet_string.split("\n")
packet_string = [x.replace(" ", "") for x in packet_string]
current_layer = "none"
packet_dic = {}
for line in packet_string:
if len(line) > 0:
if line[0] == '#':
new_layer = line.split('[')[1].split(']')[0]
current_layer = new_layer
packet_dic[current_layer] = {}
elif (line[0] != '\\') & (line[0] != '|'):
key = line.split("=")[0]
value = line.split("=")[1]
packet_dic[current_layer][key] = value
src_ip = packet_dic["IP"]["src"]
dst_ip = packet_dic["IP"]["dst"]
ip_protocol = packet_dic["IP"]["proto"].upper()
if ip_protocol == "UDP" and "UDP" in packet_dic:
src_port = packet_dic["UDP"]["sport"]
dst_port = packet_dic["UDP"]["dport"]
elif ip_protocol == "TCP" and "TCP" in packet_dic:
src_port = packet_dic["TCP"]["sport"]
dst_port = packet_dic["TCP"]["dport"]
elif ip_protocol == "ICMP" and "ICMP" in packet_dic:
src_port = 0
dst_port = str(packet_dic["ICMP"]["type"]) + "/" + str(packet_dic["ICMP"]["code"])
else:
src_port = 0
dst_port = 0
if not isinstance(src_port, int):
if not all(char.isdigit() for char in src_port):
try:
src_port = socket.getservbyname(src_port, ip_protocol)
except OSError:
pass  # keep the original value if the service name cannot be resolved
if not isinstance(dst_port, int):
if not all(char.isdigit() for char in dst_port):
try:
dst_port = socket.getservbyname(dst_port, ip_protocol)
except OSError:
pass  # keep the original value if the service name cannot be resolved
src_ip = str(src_ip.strip())
dst_ip = str(dst_ip.strip())
ip_protocol = str(ip_protocol.strip())
src_port = str(src_port).strip()
dst_port = str(dst_port).strip()
if (src_ip, dst_ip, ip_protocol, src_port, dst_port) in connections_needed:
if (src_ip, dst_ip, ip_protocol, src_port, dst_port) in file_dic:
file_dic[(src_ip, dst_ip, ip_protocol, src_port, dst_port)].append(packet)
else:
file_dic[(src_ip, dst_ip, ip_protocol, src_port, dst_port)] = [packet]
appended_packets = appended_packets + 1
if appended_packets % 500000 == 0:
if appended_packets != 0:
pktdump = PcapWriter(new_pcap_path, append=True, sync=True)
for to_write_packets in file_dic.values():
for to_write_packet in to_write_packets:
pktdump.write(to_write_packet)
pktdump.close()
file_dic.clear()
packets.close()
if len(file_dic) > 0:
pktdump = PcapWriter(new_pcap_path, append=True, sync=True)
for to_write_packets in file_dic.values():
for to_write_packet in to_write_packets:
pktdump.write(to_write_packet)
pktdump.close()
file_dic.clear()
csv_summary_path = file_path + "/" + file + "_summary.csv"
file_df.to_csv(csv_summary_path, index=False)
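# A minimal usage sketch for the method above; the paths and experiment names
# are hypothetical. The balancing CSV has one row per scenario and one column
# per detailed label giving how many connections to sample; matching flows are
# then copied from the original pcaps into path_to_storage/new_exp_name.
# Dataset_Balancing.creating_balanced_dataset_netflow(
#     path_to_balancing_file="balancing.csv",
#     path_to_original_data_set="/data/original",
#     path_to_storage="/data/balanced",
#     old_exp_name="exp1",
#     new_exp_name="exp1_balanced")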
@staticmethod
def creating_balanced_dataset(path_to_balancing_file, path_to_original_data_set, path_to_storage, old_exp_name, new_exp_name):
new_folder_path = path_to_storage + "/" + new_exp_name
os.mkdir(new_folder_path)
balancing_df = pd.read_csv(path_to_balancing_file)
for scenario_index, scenario in enumerate(balancing_df.iterrows()):
scenario_name = scenario[1]["scenario"]
row = scenario[1].drop("scenario")
print("Balancing Scenario: " + str(scenario_index + 1) + "/" + str(len(balancing_df.index)))
print("Scenario: " + scenario_name)
detailed_labels_to_get = pd.Series(row).where(lambda x : x!=0).dropna()
if len(detailed_labels_to_get) > 0:
scenario_path = path_to_original_data_set + "/" + scenario_name
files = sorted([f.path for f in os.scandir(scenario_path) if f.is_dir()])
for file_index, file in enumerate(files):
csv_summary = glob.glob(file + "/*.csv")[0]
csv_summary_df = pd.read_csv(csv_summary)
if file_index == 0:
combined_df = csv_summary_df
else:
combined_df = combined_df.append(csv_summary_df)
combined_df["detailed_label"] = combined_df["detailed_label"].str.lower()
found_df = combined_df[(combined_df["status"] == "Found")]
response_df = combined_df[(combined_df["status"] == "Response")]
combined_df = found_df.append(response_df)
for index, detailed_label_to_get in enumerate(detailed_labels_to_get.iteritems()):
detailed_label = detailed_label_to_get[0]
amount = detailed_label_to_get[1]
filtered_df = combined_df[combined_df["detailed_label"] == detailed_label]
selected_df = filtered_df.sample(n=amount)
if index == 0:
combined_selected_df = selected_df
else:
combined_selected_df = combined_selected_df.append(selected_df)
files = combined_selected_df["file"].unique().tolist()
for selected_file_index, file in enumerate(files):
print("Balancing File: " + str(selected_file_index + 1) + "/" + str(len(files)))
print("File: " + file)
file_df = combined_selected_df[combined_selected_df["file"] == file]
scenario_name = file_df["scenario"].unique().tolist()[0]
scenario_folder_path = new_folder_path + "/" + scenario_name
if not os.path.exists(scenario_folder_path):
os.mkdir(scenario_folder_path)
file_path = scenario_folder_path + "/" + file
os.mkdir(file_path)
path_to_original_pcap = path_to_original_data_set + "/" + scenario_name + "/" + file + "/" + file + "_" + old_exp_name + ".pcap"
connections_needed = [x for x in zip(file_df["src_ip"], file_df["dst_ip"])]
new_pcap_path = file_path + "/" + file + "_" + new_exp_name + ".pcap"
# with PcapReader(path_to_original_pcap) as packets, PcapWriter(new_pcap_path, append=True, sync=True) as pktdump:
# for packet in packets:
#
# src_ip = packet[IP].src
# dst_ip = packet[IP].dst
#
# if (src_ip, dst_ip) in connections_needed:
# pktdump.write(packet)
# packets.close()
# pktdump.close()
appended_packets = 0
file_dic = {}
with PcapReader(path_to_original_pcap) as packets:
for packet in packets:
src_ip = packet[IP].src
dst_ip = packet[IP].dst
if (src_ip, dst_ip) in connections_needed:
if (src_ip, dst_ip) in file_dic:
file_dic[(src_ip, dst_ip)].append(packet)
else:
file_dic[(src_ip, dst_ip)] = [packet]
appended_packets = appended_packets + 1
if appended_packets % 500000 == 0:
if appended_packets != 0:
pktdump = PcapWriter(new_pcap_path, append=True, sync=True)
for to_write_packets in file_dic.values():
for to_write_packet in to_write_packets:
pktdump.write(to_write_packet)
pktdump.close()
file_dic.clear()
packets.close()
if len(file_dic) > 0:
pktdump = PcapWriter(new_pcap_path, append=True, sync=True)
for to_write_packets in file_dic.values():
for to_write_packet in to_write_packets:
pktdump.write(to_write_packet)
pktdump.close()
file_dic.clear()
csv_summary_path = file_path + "/" + file + "_summary.csv"
file_df.to_csv(csv_summary_path, index=False)
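# Note: creating_balanced_dataset() above is the simpler variant of the
# netflow method: packets are matched on (src_ip, dst_ip) pairs only, buffered
# per connection, and flushed to the new pcap in batches of 500,000 appended
# packets to bound memory use.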
@staticmethod
def creating_balanced_dataset_with_min_size(path_to_balancing_file, path_to_original_data_set, path_to_storage, old_exp_name,
new_exp_name, min_size):
min_size = int(min_size)
new_folder_path = path_to_storage + "/" + new_exp_name
os.mkdir(new_folder_path)
balancing_df = pd.read_csv(path_to_balancing_file)
for scenario_index, scenario in enumerate(balancing_df.iterrows()):
scenario_name = scenario[1]["scenario"]
row = scenario[1].drop("scenario")
print("Balancing Scenario: " + str(scenario_index + 1) + "/" + str(len(balancing_df.index)))
print("Scenario: " + scenario_name)
detailed_labels_to_get = pd.Series(row).where(lambda x: x != 0).dropna()
if len(detailed_labels_to_get) > 0:
scenario_path = path_to_original_data_set + "/" + scenario_name
files = sorted([f.path for f in os.scandir(scenario_path) if f.is_dir()])
for file_index, file in enumerate(files):
csv_summary = glob.glob(file + "/*.csv")[0]
csv_summary_df = pd.read_csv(csv_summary)
if file_index == 0:
combined_df = csv_summary_df
else:
combined_df = combined_df.append(csv_summary_df)
combined_df["detailed_label"] = combined_df["detailed_label"].str.lower()
combined_df = combined_df[combined_df["status"] == "Found"]
combined_df = combined_df[combined_df["connection_length"] >= min_size]
for index, detailed_label_to_get in enumerate(detailed_labels_to_get.iteritems()):
detailed_label = detailed_label_to_get[0]
amount = detailed_label_to_get[1]
filtered_df = combined_df[combined_df["detailed_label"] == detailed_label]
selected_df = filtered_df.sample(n=amount)
if index == 0:
combined_selected_df = selected_df
else:
combined_selected_df = combined_selected_df.append(selected_df)
files = combined_selected_df["file"].unique().tolist()
for selected_file_index, file in enumerate(files):
print("Balancing File: " + str(selected_file_index + 1) + "/" + str(len(files)))
print("File: " + file)
file_df = combined_selected_df[combined_selected_df["file"] == file]
scenario_name = file_df["scenario"].unique().tolist()[0]
scenario_folder_path = new_folder_path + "/" + scenario_name
if not os.path.exists(scenario_folder_path):
os.mkdir(scenario_folder_path)
file_path = scenario_folder_path + "/" + file
os.mkdir(file_path)
path_to_original_pcap = path_to_original_data_set + "/" + scenario_name + "/" + file + "/" + file + "_" + old_exp_name + ".pcap"
connections_needed = [x for x in zip(file_df["src_ip"], file_df["dst_ip"])]
new_pcap_path = file_path + "/" + file + "_" + new_exp_name + ".pcap"
# with PcapReader(path_to_original_pcap) as packets, PcapWriter(new_pcap_path, append=True, sync=True) as pktdump:
# for packet in packets:
#
# src_ip = packet[IP].src
# dst_ip = packet[IP].dst
#
# if (src_ip, dst_ip) in connections_needed:
# pktdump.write(packet)
# packets.close()
# pktdump.close()
appended_packets = 0
file_dic = {}
with PcapReader(path_to_original_pcap) as packets:
for packet in packets:
src_ip = packet[IP].src
dst_ip = packet[IP].dst
if (src_ip, dst_ip) in connections_needed:
if (src_ip, dst_ip) in file_dic:
file_dic[(src_ip, dst_ip)].append(packet)
else:
file_dic[(src_ip, dst_ip)] = [packet]
appended_packets = appended_packets + 1
if appended_packets % 500000 == 0:
if appended_packets != 0:
pktdump = PcapWriter(new_pcap_path, append=True, sync=True)
for to_write_packets in file_dic.values():
for to_write_packet in to_write_packets:
pktdump.write(to_write_packet)
pktdump.close()
file_dic.clear()
packets.close()
if len(file_dic) > 0:
pktdump = PcapWriter(new_pcap_path, append=True, sync=True)
for to_write_packets in file_dic.values():
for to_write_packet in to_write_packets:
pktdump.write(to_write_packet)
pktdump.close()
file_dic.clear()
csv_summary_path = file_path + "/" + file + "_summary.csv"
file_df.to_csv(csv_summary_path, index=False) | 47.557522 | 167 | 0.503908 | 2,259 | 21,496 | 4.43382 | 0.078796 | 0.028155 | 0.028754 | 0.026957 | 0.846645 | 0.832967 | 0.829173 | 0.819788 | 0.811102 | 0.808806 | 0 | 0.0067 | 0.409797 | 21,496 | 452 | 168 | 47.557522 | 0.78277 | 0.02982 | 0 | 0.752322 | 0 | 0 | 0.035661 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.009288 | false | 0 | 0.074303 | 0 | 0.086687 | 0.040248 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3f8e22a77cef847a65263862ba82f6b36f7d3ee9 | 503 | py | Python | cms/templates/Admin/includes/footerJs.py | angeal185/python-flask-material-design-cms | 32c6251792bca75aebe231ab08b6de7ea1936998 | [
"MIT"
] | null | null | null | cms/templates/Admin/includes/footerJs.py | angeal185/python-flask-material-design-cms | 32c6251792bca75aebe231ab08b6de7ea1936998 | [
"MIT"
] | null | null | null | cms/templates/Admin/includes/footerJs.py | angeal185/python-flask-material-design-cms | 32c6251792bca75aebe231ab08b6de7ea1936998 | [
"MIT"
] | null | null | null | <script src="{{ url_for('static', filename='js/jquery.js') }}"></script>
<script src="{{ url_for('static', filename='js/tether.js') }}"></script>
<script src="{{ url_for('static', filename='js/bootstrap.js') }}"></script>
<script src="{{ url_for('static', filename='js/material.js') }}"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/metisMenu/2.5.0/metisMenu.min.js"></script>
<script src="//cdnjs.cloudflare.com/ajax/libs/startbootstrap-sb-admin-2/1.0.8/js/sb-admin-2.min.js"></script>
| 71.857143 | 110 | 0.66998 | 76 | 503 | 4.381579 | 0.342105 | 0.162162 | 0.21021 | 0.255255 | 0.702703 | 0.702703 | 0.702703 | 0.60961 | 0.60961 | 0 | 0 | 0.016913 | 0.059642 | 503 | 6 | 111 | 83.833333 | 0.687104 | 0 | 0 | 0 | 0 | 0.333333 | 0.698189 | 0.50503 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
3f9931fa759e7ca3c310d7dd7d7531f28e8f4a04 | 25,635 | py | Python | scene_seg/gcn3d.py | JiazeWang/PAConv | c9c634dd5da972819656d1d2a9014145eadaf701 | [
"Apache-2.0"
] | null | null | null | scene_seg/gcn3d.py | JiazeWang/PAConv | c9c634dd5da972819656d1d2a9014145eadaf701 | [
"Apache-2.0"
] | null | null | null | scene_seg/gcn3d.py | JiazeWang/PAConv | c9c634dd5da972819656d1d2a9014145eadaf701 | [
"Apache-2.0"
] | null | null | null | """
@Author: Zhi-Hao Lin
@Contact: r08942062@ntu.edu.tw
@Time: 2020/03/06
@Document: Basic operation/blocks of 3D-GCN
"""
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
torch.manual_seed("1024")
torch.cuda.manual_seed("1024")
torch.cuda.manual_seed_all("1024")
def get_neighbor_index(vertices: "(bs, vertice_num, 3)", neighbor_num: int):
"""
Return: (bs, vertice_num, neighbor_num)
"""
#print("vertices.shape:", vertices.shape)
bs, v, _ = vertices.size()
device = vertices.device
inner = torch.bmm(vertices, vertices.transpose(1, 2)) #(bs, v, v)
quadratic = torch.sum(vertices**2, dim= 2) #(bs, v)
distance = inner * (-2) + quadratic.unsqueeze(1) + quadratic.unsqueeze(2)
#print("distance.shape", distance.shape)
neighbor_index = torch.topk(distance, k= neighbor_num + 1, dim= -1, largest= False)[1]
#print(neighbor_index.shape)
neighbor_index = neighbor_index[:, :, 1:]
#print(neighbor_index.shape)
return neighbor_index
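# A minimal sketch of what get_neighbor_index() computes; the _knn_example
# name is hypothetical. `distance` above relies on the identity
# ||a - b||^2 = ||a||^2 + ||b||^2 - 2*a.b, so topk(..., largest=False) picks
# the k nearest points and the first column (each point's distance to itself)
# is dropped.
def _knn_example():
    pts = torch.randn(2, 16, 3)  # 2 batches of 16 random 3D points
    idx = get_neighbor_index(pts, neighbor_num=4)
    assert idx.shape == (2, 16, 4)  # 4 nearest neighbours per point
    return idx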
def get_neighbor_index_value(vertices: "(bs, vertice_num, 3)", neighbor_num: int):
"""
Return: neighbor_index, neighbor_value, each (bs, vertice_num, neighbor_num)
"""
#print("vertices.shape:", vertices.shape)
bs, v, _ = vertices.size()
device = vertices.device
inner = torch.bmm(vertices, vertices.transpose(1, 2)) #(bs, v, v)
quadratic = torch.sum(vertices**2, dim= 2) #(bs, v)
distance = inner * (-2) + quadratic.unsqueeze(1) + quadratic.unsqueeze(2)
#print("distance.shape", distance.shape)
neighbor_value, neighbor_index = torch.topk(distance, k= neighbor_num + 1, dim= -1, largest= False)
neighbor_value = neighbor_value[:, :, 1:]
neighbor_index = neighbor_index[:, :, 1:]
return neighbor_index, neighbor_value
def get_nearest_index(target: "(bs, v1, 3)", source: "(bs, v2, 3)"):
"""
Return: (bs, v1, 1)
"""
inner = torch.bmm(target, source.transpose(1, 2)) #(bs, v1, v2)
s_norm_2 = torch.sum(source ** 2, dim= 2) #(bs, v2)
t_norm_2 = torch.sum(target ** 2, dim= 2) #(bs, v1)
d_norm_2 = s_norm_2.unsqueeze(1) + t_norm_2.unsqueeze(2) - 2 * inner
nearest_index = torch.topk(d_norm_2, k= 1, dim= -1, largest= False)[1]
return nearest_index
def indexing_neighbor(tensor: "(bs, vertice_num, dim)", index: "(bs, vertice_num, neighbor_num)" ):
"""
Return: (bs, vertice_num, neighbor_num, dim)
"""
bs, v, n = index.size()
id_0 = torch.arange(bs).view(-1, 1, 1)
tensor_indexed = tensor[id_0, index]
return tensor_indexed
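# A minimal sketch of indexing_neighbor(); the _gather_example name is
# hypothetical. id_0 has shape (bs, 1, 1) and broadcasts against index of
# shape (bs, v, n), so tensor[id_0, index] gathers each vertex's n neighbour
# feature rows within the same batch element, yielding (bs, v, n, dim).
def _gather_example():
    feats = torch.randn(2, 8, 5)  # per-vertex features
    nbr_idx = torch.randint(0, 8, (2, 8, 3))  # 3 neighbours per vertex
    gathered = indexing_neighbor(feats, nbr_idx)
    assert gathered.shape == (2, 8, 3, 5)
    return gathered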
def get_neighbor_direction_norm(vertices: "(bs, vertice_num, 3)", neighbor_index: "(bs, vertice_num, neighbor_num)"):
"""
Return: (bs, vertice_num, neighbor_num, 3)
"""
neighbors = indexing_neighbor(vertices, neighbor_index) # (bs, v, n, 3)
neighbor_direction = neighbors - vertices.unsqueeze(2)
neighbor_direction_norm = F.normalize(neighbor_direction, dim= -1)
return neighbor_direction_norm
class Conv_surface(nn.Module):
"""Extract structure feafure from surface, independent from vertice coordinates"""
def __init__(self, kernel_num, support_num):
super().__init__()
self.kernel_num = kernel_num
self.support_num = support_num
self.relu = nn.ReLU(inplace= True)
self.directions = nn.Parameter(torch.FloatTensor(3, support_num * kernel_num))
self.initialize()
def initialize(self):
stdv = 1. / math.sqrt(self.support_num * self.kernel_num)
self.directions.data.uniform_(-stdv, stdv)
def forward(self,
neighbor_index: "(bs, vertice_num, neighbor_num)",
vertices: "(bs, vertice_num, 3)"):
"""
Return vertices with local feature: (bs, vertice_num, kernel_num)
"""
bs, vertice_num, neighbor_num = neighbor_index.size()
neighbor_direction_norm = get_neighbor_direction_norm(vertices, neighbor_index)
support_direction_norm = F.normalize(self.directions, dim= 0) #(3, s * k)
theta = neighbor_direction_norm @ support_direction_norm # (bs, vertice_num, neighbor_num, s*k)
theta = self.relu(theta)
theta = theta.contiguous().view(bs, vertice_num, neighbor_num, self.support_num, self.kernel_num)
#print("theta.shape", theta.shape)
#theta.shape torch.Size([2, 2048, 50, 1, 128])
#theta_max.shape torch.Size([2, 2048, 1, 128])
#feature.shape torch.Size([2, 2048, 128])
theta = torch.max(theta, dim= 2)[0] # (bs, vertice_num, support_num, kernel_num)
#print("theta_max.shape", theta.shape)
feature = torch.sum(theta, dim= 2) # (bs, vertice_num, kernel_num)
#print("feature.shape", feature.shape)
return feature
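# Summary of the Conv_surface forward pass above: every neighbour direction is
# scored against support_num * kernel_num learned directions (dot product
# followed by ReLU), the maximum response over the neighbours is kept per
# (support, kernel) pair, and responses are summed over the support directions
# to give one value per kernel and vertex.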
class Conv_surface_avg(nn.Module):
"""Extract structure feafure from surface, independent from vertice coordinates"""
def __init__(self, kernel_num, support_num):
super().__init__()
self.kernel_num = kernel_num
self.support_num = support_num
self.relu = nn.ReLU(inplace= True)
self.directions = nn.Parameter(torch.FloatTensor(3, support_num * kernel_num))
self.initialize()
def initialize(self):
stdv = 1. / math.sqrt(self.support_num * self.kernel_num)
self.directions.data.uniform_(-stdv, stdv)
def forward(self,
neighbor_index: "(bs, vertice_num, neighbor_num)",
vertices: "(bs, vertice_num, 3)"):
"""
Return vertices with local feature: (bs, vertice_num, kernel_num)
"""
bs, vertice_num, neighbor_num = neighbor_index.size()
neighbor_direction_norm = get_neighbor_direction_norm(vertices, neighbor_index)
support_direction_norm = F.normalize(self.directions, dim= 0) #(3, s * k)
theta = neighbor_direction_norm @ support_direction_norm # (bs, vertice_num, neighbor_num, s*k)
theta = self.relu(theta)
theta = theta.contiguous().view(bs, vertice_num, neighbor_num, self.support_num, self.kernel_num)
#print(theta.shape)
#theta = torch.max(theta, dim= 2)[0]
#print(theta.shape)
theta = torch.mean(theta, dim= 2) # (bs, vertice_num, support_num, kernel_num)
#print(theta.shape)
feature = torch.sum(theta, dim= 2) # (bs, vertice_num, kernel_num)
return feature
class Conv_surface_attention(nn.Module):
"""Extract structure feafure from surface, independent from vertice coordinates"""
def __init__(self, kernel_num, support_num):
super().__init__()
self.kernel_num = kernel_num
self.support_num = support_num
self.weight_num = 50
self.relu = nn.ReLU(inplace= True)
self.directions = nn.Parameter(torch.FloatTensor(3, support_num * kernel_num))
self.linear = nn.Sequential(
nn.Linear(self.weight_num, self.weight_num),
nn.ReLU(inplace= True),
nn.Linear(self.weight_num, self.weight_num),
nn.ReLU(inplace= True),
)
self.initialize()
def initialize(self):
stdv = 1. / math.sqrt(self.support_num * self.kernel_num)
self.directions.data.uniform_(-stdv, stdv)
def forward(self,
neighbor_index: "(bs, vertice_num, neighbor_num)",
vertices: "(bs, vertice_num, 3)",
neighbor_value: "(bs, vertice_num, neighbor_value)"):
"""
Return vertices with local feature: (bs, vertice_num, kernel_num)
"""
bs, vertice_num, neighbor_num = neighbor_index.size()
neighbor_direction_norm = get_neighbor_direction_norm(vertices, neighbor_index)
support_direction_norm = F.normalize(self.directions, dim= 0) #(3, s * k)
theta = neighbor_direction_norm @ support_direction_norm # (bs, vertice_num, neighbor_num, s*k)
theta = self.relu(theta)
neighbor_value = F.normalize(neighbor_value, dim= -1)
weight = self.linear(neighbor_value)
weight = F.normalize(weight, dim= -1)  # normalize the learned attention weights
theta = torch.einsum('ijkl,ijk->ijkl', [theta, weight])
#print("theta.shape:", theta.shape)
theta = theta.contiguous().view(bs, vertice_num, neighbor_num, self.support_num, self.kernel_num)
theta = torch.sum(theta, dim= 2) # (bs, vertice_num, support_num, kernel_num)
feature = torch.sum(theta, dim= 2) # (bs, vertice_num, kernel_num)
return feature
class Conv_surface_encoding(nn.Module):
"""Extract structure feafure from surface, independent from vertice coordinates"""
def __init__(self, kernel_num, support_num):
super().__init__()
self.kernel_num = kernel_num
self.support_num = support_num
self.weight_num = 50
self.relu = nn.ReLU(inplace= True)
self.directions = nn.Parameter(torch.FloatTensor(3, support_num * kernel_num))
self.linear = nn.Sequential(
nn.Linear(self.weight_num, self.weight_num),
nn.ReLU(inplace= True),
nn.Linear(self.weight_num, self.weight_num),
nn.ReLU(inplace= True),
)
self.initialize()
def initialize(self):
stdv = 1. / math.sqrt(self.support_num * self.kernel_num)
self.directions.data.uniform_(-stdv, stdv)
def forward(self,
neighbor_index: "(bs, vertice_num, neighbor_num)",
vertices: "(bs, vertice_num, 3)",
neighbor_value: "(bs, vertice_num, neighbor_value)"):
"""
Return vertices with local feature: (bs, vertice_num, kernel_num)
"""
bs, vertice_num, neighbor_num = neighbor_index.size()
neighbor_direction_norm = get_neighbor_direction_norm(vertices, neighbor_index)
support_direction_norm = F.normalize(self.directions, dim= 0) #(3, s * k)
theta = neighbor_direction_norm @ support_direction_norm # (bs, vertice_num, neighbor_num, s*k)
theta = self.relu(theta)
weight = self.linear(neighbor_value)
theta = torch.einsum('ijkl,ijk->ijkl', [theta, weight])
#print("theta.shape:", theta.shape)
theta = theta.contiguous().view(bs, vertice_num, neighbor_num, self.support_num, self.kernel_num)
theta = torch.sum(theta, dim= 2) # (bs, vertice_num, support_num, kernel_num)
feature = torch.sum(theta, dim= 2) # (bs, vertice_num, kernel_num)
return feature
class Conv_layer(nn.Module):
def __init__(self, in_channel, out_channel, support_num):
super().__init__()
# arguments:
self.in_channel = in_channel
self.out_channel = out_channel
self.support_num = support_num
# parameters:
self.relu = nn.ReLU(inplace= True)
self.weights = nn.Parameter(torch.FloatTensor(in_channel, (support_num + 1) * out_channel))
self.bias = nn.Parameter(torch.FloatTensor((support_num + 1) * out_channel))
self.directions = nn.Parameter(torch.FloatTensor(3, support_num * out_channel))
self.initialize()
def initialize(self):
stdv = 1. / math.sqrt(self.out_channel * (self.support_num + 1))
self.weights.data.uniform_(-stdv, stdv)
self.bias.data.uniform_(-stdv, stdv)
self.directions.data.uniform_(-stdv, stdv)
def forward(self,
neighbor_index: "(bs, vertice_num, neighbor_index)",
vertices: "(bs, vertice_num, 3)",
feature_map: "(bs, vertice_num, in_channel)"):
"""
Return: output feature map: (bs, vertice_num, out_channel)
"""
bs, vertice_num, neighbor_num = neighbor_index.size()
neighbor_direction_norm = get_neighbor_direction_norm(vertices, neighbor_index)
support_direction_norm = F.normalize(self.directions, dim= 0)
theta = neighbor_direction_norm @ support_direction_norm # (bs, vertice_num, neighbor_num, support_num * out_channel)
theta = self.relu(theta)
theta = theta.contiguous().view(bs, vertice_num, neighbor_num, -1)
# (bs, vertice_num, neighbor_num, support_num * out_channel)
feature_out = feature_map @ self.weights + self.bias # (bs, vertice_num, (support_num + 1) * out_channel)
feature_center = feature_out[:, :, :self.out_channel] # (bs, vertice_num, out_channel)
feature_support = feature_out[:, :, self.out_channel:] #(bs, vertice_num, support_num * out_channel)
# Fuse together - max among product
feature_support = indexing_neighbor(feature_support, neighbor_index) # (bs, vertice_num, neighbor_num, support_num * out_channel)
activation_support = theta * feature_support # (bs, vertice_num, neighbor_num, support_num * out_channel)
activation_support = activation_support.view(bs,vertice_num, neighbor_num, self.support_num, self.out_channel)
activation_support = torch.max(activation_support, dim= 2)[0] # (bs, vertice_num, support_num, out_channel)
activation_support = torch.sum(activation_support, dim= 2) # (bs, vertice_num, out_channel)
feature_fuse = feature_center + activation_support # (bs, vertice_num, out_channel)
return feature_fuse
class Conv_layer_avg(nn.Module):
def __init__(self, in_channel, out_channel, support_num):
super().__init__()
# arguments:
self.in_channel = in_channel
self.out_channel = out_channel
self.support_num = support_num
# parameters:
self.relu = nn.ReLU(inplace= True)
self.weights = nn.Parameter(torch.FloatTensor(in_channel, (support_num + 1) * out_channel))
self.bias = nn.Parameter(torch.FloatTensor((support_num + 1) * out_channel))
self.directions = nn.Parameter(torch.FloatTensor(3, support_num * out_channel))
self.initialize()
def initialize(self):
stdv = 1. / math.sqrt(self.out_channel * (self.support_num + 1))
self.weights.data.uniform_(-stdv, stdv)
self.bias.data.uniform_(-stdv, stdv)
self.directions.data.uniform_(-stdv, stdv)
def forward(self,
neighbor_index: "(bs, vertice_num, neighbor_index)",
vertices: "(bs, vertice_num, 3)",
feature_map: "(bs, vertice_num, in_channel)"):
"""
Return: output feature map: (bs, vertice_num, out_channel)
"""
bs, vertice_num, neighbor_num = neighbor_index.size()
neighbor_direction_norm = get_neighbor_direction_norm(vertices, neighbor_index)
support_direction_norm = F.normalize(self.directions, dim= 0)
theta = neighbor_direction_norm @ support_direction_norm # (bs, vertice_num, neighbor_num, support_num * out_channel)
theta = self.relu(theta)
theta = theta.contiguous().view(bs, vertice_num, neighbor_num, -1)
# (bs, vertice_num, neighbor_num, support_num * out_channel)
feature_out = feature_map @ self.weights + self.bias # (bs, vertice_num, (support_num + 1) * out_channel)
feature_center = feature_out[:, :, :self.out_channel] # (bs, vertice_num, out_channel)
feature_support = feature_out[:, :, self.out_channel:] #(bs, vertice_num, support_num * out_channel)
# Fuse together - max among product
feature_support = indexing_neighbor(feature_support, neighbor_index) # (bs, vertice_num, neighbor_num, support_num * out_channel)
activation_support = theta * feature_support # (bs, vertice_num, neighbor_num, support_num * out_channel)
activation_support = activation_support.view(bs,vertice_num, neighbor_num, self.support_num, self.out_channel)
activation_support = torch.mean(activation_support, dim= 2) # (bs, vertice_num, support_num, out_channel)
activation_support = torch.sum(activation_support, dim= 2) # (bs, vertice_num, out_channel)
feature_fuse = feature_center + activation_support # (bs, vertice_num, out_channel)
return feature_fuse
class Conv_layer_attention(nn.Module):
def __init__(self, in_channel, out_channel, support_num):
super().__init__()
# arguments:
self.in_channel = in_channel
self.out_channel = out_channel
self.support_num = support_num
self.weight_num = 50
# parameters:
self.relu = nn.ReLU(inplace= True)
self.weights = nn.Parameter(torch.FloatTensor(in_channel, (support_num + 1) * out_channel))
self.bias = nn.Parameter(torch.FloatTensor((support_num + 1) * out_channel))
self.directions = nn.Parameter(torch.FloatTensor(3, support_num * out_channel))
self.linear = nn.Sequential(
nn.Linear(self.weight_num, self.weight_num),
nn.ReLU(inplace= True),
nn.Linear(self.weight_num, self.weight_num),
nn.ReLU(inplace= True),
)
self.initialize()
def initialize(self):
stdv = 1. / math.sqrt(self.out_channel * (self.support_num + 1))
self.weights.data.uniform_(-stdv, stdv)
self.bias.data.uniform_(-stdv, stdv)
self.directions.data.uniform_(-stdv, stdv)
def forward(self,
neighbor_index: "(bs, vertice_num, neighbor_index)",
vertices: "(bs, vertice_num, 3)",
feature_map: "(bs, vertice_num, in_channel)",
neighbor_value: "(bs, vertice_num, neighbor_value)"):
"""
Return: output feature map: (bs, vertice_num, out_channel)
"""
bs, vertice_num, neighbor_num = neighbor_index.size()
neighbor_direction_norm = get_neighbor_direction_norm(vertices, neighbor_index)
support_direction_norm = F.normalize(self.directions, dim= 0)
theta = neighbor_direction_norm @ support_direction_norm # (bs, vertice_num, neighbor_num, support_num * out_channel)
theta = self.relu(theta)
theta = theta.contiguous().view(bs, vertice_num, neighbor_num, -1)
feature_out = feature_map @ self.weights + self.bias # (bs, vertice_num, (support_num + 1) * out_channel)
feature_center = feature_out[:, :, :self.out_channel] # (bs, vertice_num, out_channel)
feature_support = feature_out[:, :, self.out_channel:] #(bs, vertice_num, support_num * out_channel)
feature_support = indexing_neighbor(feature_support, neighbor_index) # (bs, vertice_num, neighbor_num, support_num * out_channel)
neighbor_value = F.normalize(neighbor_value, dim= -1)
weight = self.linear(neighbor_value)
weight = F.normalize(weight, dim= -1)  # normalize the learned attention weights
activation_support = theta * feature_support # (bs, vertice_num, neighbor_num, support_num * out_channel)
activation_support = torch.einsum('ijkl,ijk->ijkl', [activation_support, weight])
activation_support = activation_support.view(bs,vertice_num, neighbor_num, self.support_num, self.out_channel)
activation_support = torch.sum(activation_support, dim= 2) # (bs, vertice_num, support_num, out_channel)
activation_support = torch.sum(activation_support, dim= 2) # (bs, vertice_num, out_channel)
feature_fuse = feature_center + activation_support # (bs, vertice_num, out_channel)
return feature_fuse
class Conv_layer_encoding(nn.Module):
def __init__(self, in_channel, out_channel, support_num):
super().__init__()
# arguments:
self.in_channel = in_channel
self.out_channel = out_channel
self.support_num = support_num
self.weight_num = 50
# parameters:
self.relu = nn.ReLU(inplace= True)
self.weights = nn.Parameter(torch.FloatTensor(in_channel, (support_num + 1) * out_channel))
self.bias = nn.Parameter(torch.FloatTensor((support_num + 1) * out_channel))
self.directions = nn.Parameter(torch.FloatTensor(3, support_num * out_channel))
self.linear = nn.Sequential(
nn.Linear(self.weight_num, self.weight_num),
nn.ReLU(inplace= True),
nn.Linear(self.weight_num, self.weight_num),
nn.ReLU(inplace= True),
)
self.initialize()
def initialize(self):
stdv = 1. / math.sqrt(self.out_channel * (self.support_num + 1))
self.weights.data.uniform_(-stdv, stdv)
self.bias.data.uniform_(-stdv, stdv)
self.directions.data.uniform_(-stdv, stdv)
def forward(self,
neighbor_index: "(bs, vertice_num, neighbor_index)",
vertices: "(bs, vertice_num, 3)",
feature_map: "(bs, vertice_num, in_channel)",
neighbor_value: "(bs, vertice_num, neighbor_value)"):
"""
Return: output feature map: (bs, vertice_num, out_channel)
"""
bs, vertice_num, neighbor_num = neighbor_index.size()
neighbor_direction_norm = get_neighbor_direction_norm(vertices, neighbor_index)
support_direction_norm = F.normalize(self.directions, dim= 0)
theta = neighbor_direction_norm @ support_direction_norm # (bs, vertice_num, neighbor_num, support_num * out_channel)
theta = self.relu(theta)
theta = theta.contiguous().view(bs, vertice_num, neighbor_num, -1)
feature_out = feature_map @ self.weights + self.bias # (bs, vertice_num, (support_num + 1) * out_channel)
feature_center = feature_out[:, :, :self.out_channel] # (bs, vertice_num, out_channel)
feature_support = feature_out[:, :, self.out_channel:] #(bs, vertice_num, support_num * out_channel)
feature_support = indexing_neighbor(feature_support, neighbor_index) # (bs, vertice_num, neighbor_num, support_num * out_channel)
#print(feature_support.shape)
weight = self.linear(neighbor_value)
#print(weight.shape)
activation_support = theta * feature_support # (bs, vertice_num, neighbor_num, support_num * out_channel)
#activation_support = torch.einsum('ijkl,ijk->ijkl', [activation_support, weight])
activation_support = activation_support.view(bs,vertice_num, neighbor_num, self.support_num, self.out_channel)
activation_support = torch.sum(activation_support, dim= 2) # (bs, vertice_num, support_num, out_channel)
activation_support = torch.max(activation_support, dim= 2)[0] # (bs, vertice_num, out_channel)
feature_fuse = feature_center + activation_support # (bs, vertice_num, out_channel)
return feature_fuse
class Pool_layer(nn.Module):
def __init__(self, pooling_rate: int= 4, neighbor_num: int= 4):
super().__init__()
self.pooling_rate = pooling_rate
self.neighbor_num = neighbor_num
def forward(self,
vertices: "(bs, vertice_num, 3)",
feature_map: "(bs, vertice_num, channel_num)"):
"""
Return:
vertices_pool: (bs, pool_vertice_num, 3),
feature_map_pool: (bs, pool_vertice_num, channel_num)
"""
bs, vertice_num, _ = vertices.size()
neighbor_index = get_neighbor_index(vertices, self.neighbor_num)
neighbor_feature = indexing_neighbor(feature_map, neighbor_index) #(bs, vertice_num, neighbor_num, channel_num)
pooled_feature = torch.max(neighbor_feature, dim= 2)[0] #(bs, vertice_num, channel_num)
pool_num = int(vertice_num / self.pooling_rate)
sample_idx = torch.randperm(vertice_num)[:pool_num]
vertices_pool = vertices[:, sample_idx, :] # (bs, pool_num, 3)
feature_map_pool = pooled_feature[:, sample_idx, :] #(bs, pool_num, channel_num)
return vertices_pool, feature_map_pool
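# Summary of the Pool_layer forward pass above: features are first replaced by
# the maximum over each vertex's neighbor_num nearest neighbours, then a random
# subset of vertice_num / pooling_rate vertices (drawn with randperm) is kept,
# so the vertex count shrinks by the pooling rate rather than by any learned
# criterion.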
class Pool_layer_avg(nn.Module):
def __init__(self, pooling_rate: int= 4, neighbor_num: int= 4):
super().__init__()
self.pooling_rate = pooling_rate
self.neighbor_num = neighbor_num
def forward(self,
vertices: "(bs, vertice_num, 3)",
feature_map: "(bs, vertice_num, channel_num)"):
"""
Return:
vertices_pool: (bs, pool_vertice_num, 3),
feature_map_pool: (bs, pool_vertice_num, channel_num)
"""
bs, vertice_num, _ = vertices.size()
neighbor_index = get_neighbor_index(vertices, self.neighbor_num)
neighbor_feature = indexing_neighbor(feature_map, neighbor_index) #(bs, vertice_num, neighbor_num, channel_num)
pooled_feature = torch.mean(neighbor_feature, dim= 2) #(bs, vertice_num, channel_num)
pool_num = int(vertice_num / self.pooling_rate)
sample_idx = torch.randperm(vertice_num)[:pool_num]
vertices_pool = vertices[:, sample_idx, :] # (bs, pool_num, 3)
feature_map_pool = pooled_feature[:, sample_idx, :] #(bs, pool_num, channel_num)
return vertices_pool, feature_map_pool
def test():
import time
bs = 8
v = 1024
dim = 3
n = 20
vertices = torch.randn(bs, v, dim)
neighbor_index = get_neighbor_index(vertices, n)
s = 3
conv_1 = Conv_surface(kernel_num= 32, support_num= s)
conv_2 = Conv_layer(in_channel= 32, out_channel= 64, support_num= s)
pool = Pool_layer(pooling_rate= 4, neighbor_num= 4)
print("Input size: {}".format(vertices.size()))
start = time.time()
f1 = conv_1(neighbor_index, vertices)
print("\n[1] Time: {}".format(time.time() - start))
print("[1] Out shape: {}".format(f1.size()))
start = time.time()
f2 = conv_2(neighbor_index, vertices, f1)
print("\n[2] Time: {}".format(time.time() - start))
print("[2] Out shape: {}".format(f2.size()))
start = time.time()
v_pool, f_pool = pool(vertices, f2)
print("\n[3] Time: {}".format(time.time() - start))
print("[3] v shape: {}, f shape: {}".format(v_pool.size(), f_pool.size()))
if __name__ == "__main__":
test()
| 47.384473 | 137 | 0.658865 | 3,258 | 25,635 | 4.892879 | 0.051565 | 0.081551 | 0.091839 | 0.071514 | 0.906217 | 0.887585 | 0.875227 | 0.867449 | 0.867072 | 0.864187 | 0 | 0.012658 | 0.220285 | 25,635 | 540 | 138 | 47.472222 | 0.784871 | 0.186308 | 0 | 0.798956 | 0 | 0 | 0.054428 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088773 | false | 0 | 0.013055 | 0 | 0.167102 | 0.018277 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b20d528a7fbaa9f8042f21b5643f3cd5cec65b4c | 1,370 | py | Python | tests/test_generate_label_features.py | kevinyamauchi/apoc | 32b84f86f73e7114e6ce7eb42571595e3e04e0b9 | [
"BSD-3-Clause"
] | null | null | null | tests/test_generate_label_features.py | kevinyamauchi/apoc | 32b84f86f73e7114e6ce7eb42571595e3e04e0b9 | [
"BSD-3-Clause"
] | null | null | null | tests/test_generate_label_features.py | kevinyamauchi/apoc | 32b84f86f73e7114e6ce7eb42571595e3e04e0b9 | [
"BSD-3-Clause"
] | null | null | null | def test_label_feature_generation():
import numpy as np
import apoc
image = np.asarray([[0, 0, 1, 1, 2, 2, 3, 3, 3, 3, 3]])
labels = np.asarray([[0, 0, 1, 1, 1, 1, 3, 2, 2, 2, 2]])
annotation = np.asarray([[0, 0, 2, 2, 2, 0, 3, 0, 0, 1, 1]])
feature_definition = """
area
""".replace("\n", " ")
oc = apoc.ObjectClassifier()
table, ground_truth = oc._make_features(feature_definition, labels, annotation, image)
# there are 3 labels
assert len(ground_truth) == 3
# we only measured area, 1 feature
assert len(table) == 1
# there are three area measurements
assert len(table[0][0]) == 3
def test_label_feature_generation_with_annotated_background():
import numpy as np
import apoc
image = np.asarray([[0, 0, 1, 1, 2, 2, 3, 3, 3, 3, 3]])
labels = np.asarray([[0, 0, 1, 1, 1, 1, 3, 2, 2, 2, 2]])
annotation = np.asarray([[1, 0, 2, 2, 2, 0, 3, 0, 0, 1, 1]])
feature_definition = """
area
""".replace("\n", " ")
oc = apoc.ObjectClassifier()
table, ground_truth = oc._make_features(feature_definition, labels, annotation, image)
# there are 3 labels
assert len(ground_truth) == 3
# we only measured area, 1 feature
assert len(table) == 1
# there are three area measurements
assert len(table[0][0]) == 3
| 27.959184 | 90 | 0.582482 | 207 | 1,370 | 3.753623 | 0.198068 | 0.030888 | 0.023166 | 0.030888 | 0.967825 | 0.893179 | 0.893179 | 0.893179 | 0.893179 | 0.893179 | 0 | 0.079365 | 0.264234 | 1,370 | 48 | 91 | 28.541667 | 0.691468 | 0.124818 | 0 | 0.857143 | 0 | 0 | 0.055369 | 0 | 0 | 0 | 0 | 0 | 0.214286 | 1 | 0.071429 | false | 0 | 0.142857 | 0 | 0.214286 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b759d49ae7fa10da50eef5fdf039fb3f5272eb51 | 65 | py | Python | collection/81.py | nemero/py_neural | 87f151097f8c331a06f13b96c4cec9a1ee663abf | [
"MIT"
] | null | null | null | collection/81.py | nemero/py_neural | 87f151097f8c331a06f13b96c4cec9a1ee663abf | [
"MIT"
] | 1 | 2017-01-18T18:35:03.000Z | 2017-01-25T08:55:49.000Z | collection/81.py | nemero/py_neural | 87f151097f8c331a06f13b96c4cec9a1ee663abf | [
"MIT"
] | null | null | null | 0 1 0 0 0
0 0 0 0
0 0 0 0
0 1 1 1
0 1 0 1
0 1 1 1
0 1 0 1
0 1 1 1 | 8.125 | 9 | 0.507692 | 33 | 65 | 1 | 0.060606 | 0.666667 | 0.909091 | 1.090909 | 0.939394 | 0.939394 | 0.939394 | 0.939394 | 0.939394 | 0.939394 | 0 | 1 | 0.492308 | 65 | 8 | 10 | 8.125 | 0 | 0 | 0 | 0.875 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | null | 0 | 0 | null | null | 0 | 0 | 0 | 1 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 15 |
b771b2506ca0929ef9d7e71b7d6cf48f087beeae | 807,225 | py | Python | ecm_prep_test.py | NREL/scout | acf38df7ce877cbd8c1c10f4f61fdf1d088fd947 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | ecm_prep_test.py | NREL/scout | acf38df7ce877cbd8c1c10f4f61fdf1d088fd947 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | ecm_prep_test.py | NREL/scout | acf38df7ce877cbd8c1c10f4f61fdf1d088fd947 | [
"Apache-2.0",
"BSD-3-Clause"
] | null | null | null | #!/usr/bin/env python3
""" Tests for running the measure preparation routine """
# Import code to be tested
import ecm_prep
# Import needed packages
import unittest
import numpy
import os
from collections import OrderedDict
import warnings
import copy
import itertools
class CommonMethods(object):
"""Define common methods for use in all tests below."""
def dict_check(self, dict1, dict2):
"""Check the equality of two dicts.
Args:
dict1 (dict): First dictionary to be compared
dict2 (dict): Second dictionary to be compared
Raises:
AssertionError: If dictionaries are not equal.
"""
# zip() and zip_longest() produce tuples for the items
# identified, where in the case of a dict, the first item
# in the tuple is the key and the second item is the value;
# in the case where the dicts are not of identical size,
# zip_longest() will use the fill value created below as a
# substitute in the dict that has missing content; this
# value is given as a tuple to be of comparable structure
# to the normal output from zip_longest()
fill_val = ('substituted entry', 5.2)
# In this structure, k and k2 are the keys that correspond to
# the dicts or unitary values that are found in i and i2,
# respectively, at the current level of the recursive
# exploration of dict1 and dict2, respectively
for (k, i), (k2, i2) in itertools.zip_longest(sorted(dict1.items()),
sorted(dict2.items()),
fillvalue=fill_val):
# Confirm that at the current location in the dict structure,
# the keys are equal; this should fail if one of the dicts
# is empty, is missing section(s), or has different key names
self.assertEqual(k, k2)
# If the recursion has not yet reached the terminal/leaf node
if isinstance(i, dict):
# Test that the dicts from the current keys are equal
self.assertCountEqual(i, i2)
# Continue to recursively traverse the dict
self.dict_check(i, i2)
# At the terminal/leaf node
else:
# Compare the values, allowing for floating point inaccuracy
self.assertAlmostEqual(dict1[k], dict2[k2], places=2)
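# A small worked example of the comparison above (values are hypothetical):
# comparing dict1 = {'a': 1.0} with dict2 = {'a': 1.0, 'b': 2.0} first pairs
# ('a', 1.0) with ('a', 1.0), then pairs the fill tuple
# ('substituted entry', 5.2) with ('b', 2.0), so assertEqual fails on the
# mismatched keys and the size difference is caught instead of being silently
# ignored.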
class EPlusGlobalsTest(unittest.TestCase, CommonMethods):
"""Test 'find_vintage_weights' function.
Ensure building vintage square footages are read in properly from a
cbecs data file and that the proper weights are derived for mapping
EnergyPlus building vintages to Scout's 'new' and 'retrofit' building
structure types.
Attributes:
cbecs_sf_byvint (dict): Commercial square footage by vintage data.
eplus_globals_ok (object): EPlusGlobals object with square footage and
vintage weights attributes to test against expected outputs.
eplus_failpath (string): Path to invalid EnergyPlus simulation data
file that should cause EPlusGlobals object instantiation to fail.
ok_out_weights (dict): Correct vintage weights output for
'find_vintage_weights' function given valid inputs.
"""
@classmethod
def setUpClass(cls):
"""Define variables for use across all class functions."""
base_dir = os.getcwd()
cls.cbecs_sf_byvint = {
'2004 to 2007': 6524.0, '1960 to 1969': 10362.0,
'1946 to 1959': 7381.0, '1970 to 1979': 10846.0,
'1990 to 1999': 13803.0, '2000 to 2003': 7215.0,
'Before 1920': 3980.0, '2008 to 2012': 5726.0,
'1920 to 1945': 6020.0, '1980 to 1989': 15185.0}
cls.eplus_globals_ok = ecm_prep.EPlusGlobals(
base_dir + "/ecm_definitions/energyplus_data/energyplus_test_ok",
cls.cbecs_sf_byvint)
cls.eplus_failpath = \
base_dir + "/ecm_definitions/energyplus_data/energyplus_test_fail"
cls.ok_out_weights = {
'DOE Ref 1980-2004': 0.42, '90.1-2004': 0.07,
'90.1-2010': 0.07, 'DOE Ref Pre-1980': 0.44,
'90.1-2013': 1}
def test_vintageweights(self):
"""Test find_vintage_weights function given valid inputs.
Note:
Ensure EnergyPlus building vintage type data are correctly weighted
by their square footages (derived from CBECs data).
Raises:
AssertionError: If function yields unexpected results.
"""
self.dict_check(
self.eplus_globals_ok.find_vintage_weights(),
self.ok_out_weights)
# Test that an error is raised when unexpected eplus vintages are present
def test_vintageweights_fail(self):
"""Test find_vintage_weights function given invalid inputs.
Note:
Ensure that KeyError is raised when an unexpected EnergyPlus
building vintage is present.
Raises:
AssertionError: If KeyError is not raised.
"""
with self.assertRaises(KeyError):
ecm_prep.EPlusGlobals(
self.eplus_failpath,
self.cbecs_sf_byvint).find_vintage_weights()
class EPlusUpdateTest(unittest.TestCase, CommonMethods):
"""Test the 'fill_eplus' function and its supporting functions.
Ensure that the 'build_array' function properly assembles a set of input
CSVs into a structured array and that the 'create_perf_dict' and
'fill_perf_dict' functions properly initialize and fill a measure
performance dictionary with results from an EnergyPlus simulation output
file.
Attributes:
meas (object): Measure object instantiated based on sample_measure_in
attributes.
eplus_dir (string): EnergyPlus simulation output file directory.
eplus_coltypes (list): List of expected EnergyPlus output data types.
eplus_basecols (list): Variable columns that should never be removed.
mseg_in (dict): Sample baseline microsegment stock/energy data.
ok_eplus_vintagewts (dict): Sample EnergyPlus vintage weights.
ok_eplusfiles_in (list): List of all EnergyPlus simulation file names.
ok_perfarray_in (numpy recarray): Valid structured array of
EnergyPlus-based relative savings data.
fail_perfarray_in (numpy recarray): Invalid structured array of
EnergyPlus-based relative savings data (missing certain climate
zones, building types, and building vintages).
fail_perfdictempty_in (dict): Invalid empty dictionary to fill with
EnergyPlus-based performance information broken down by climate
zone, building type/vintage, fuel type, and end use (dictionary
includes invalid climate zone key).
ok_array_type_out (string): The array type that should be yielded by
'convert_to_array' given valid input.
ok_array_length_out (int): The array length that should be yielded by
'convert_to_array' given valid input.
ok_array_names_out (tuple): Tuple of column names for the recarray that
should be yielded by 'convert_to_array' given valid input.
ok_perfdictempty_out (dict): The empty dictionary that should be
yielded by 'create_perf_dict' given valid inputs.
ok_perfdictfill_out (dict): The dictionary filled with EnergyPlus-based
measure performance information that should be yielded by
'fill_perf_dict' and 'fill_eplus' given valid inputs.
Raises:
AssertionError: If function yields unexpected results or does not
raise a KeyError when it should.
"""
@classmethod
def setUpClass(cls):
"""Define variables and objects for use across all class functions."""
# Sample measure attributes to use in instantiating Measure object.
sample_measure_in = OrderedDict([
("name", "eplus sample measure 1"),
("status", OrderedDict([
("active", 1), ("updated", 1)])),
("installed_cost", 25),
("cost_units", "2014$/unit"),
("energy_efficiency", OrderedDict([
("EnergyPlus file", "eplus_sample_measure")])),
("energy_efficiency_units", OrderedDict([
("primary", "relative savings (constant)"),
("secondary", "relative savings (constant)")])),
("energy_efficiency_source", None),
("market_entry_year", None),
("market_exit_year", None),
("product_lifetime", 10),
("structure_type", ["new", "retrofit"]),
("bldg_type", ["assembly", "education"]),
("climate_zone", ["hot dry", "mixed humid"]),
("fuel_type", OrderedDict([
("primary", ["electricity"]),
("secondary", [
"electricity", "natural gas", "distillate"])])),
("fuel_switch_to", None),
("end_use", OrderedDict([
("primary", ["lighting"]),
("secondary", ["heating", "cooling"])])),
("technology", OrderedDict([
("primary", [
"technology A", "technology B", "technology C"]),
("secondary", ["windows conduction", "windows solar"])]))])
# Base directory
base_dir = os.getcwd()
# Useful global variables for the sample measure object
handyvars = ecm_prep.UsefulVars(base_dir,
ecm_prep.UsefulInputFiles())
cls.meas = ecm_prep.Measure(handyvars, **sample_measure_in)
# Finalize the measure's 'technology_type' attribute (handled by the
# 'fill_attr' function, which is not run as part of this test)
cls.meas.technology_type = {"primary": "supply", "secondary": "demand"}
cls.eplus_dir = \
base_dir + "/ecm_definitions/energyplus_data/energyplus_test_ok"
cls.eplus_coltypes = [
('building_type', '<U50'), ('climate_zone', '<U50'),
('template', '<U50'), ('measure', '<U50'), ('status', '<U50'),
('ep_version', '<U50'), ('os_version', '<U50'),
('timestamp', '<U50'), ('cooling_electricity', '<f8'),
('cooling_water', '<f8'), ('district_chilled_water', '<f8'),
('district_hot_water_heating', '<f8'),
('district_hot_water_service_hot_water', '<f8'),
('exterior_equipment_electricity', '<f8'),
('exterior_equipment_gas', '<f8'),
('exterior_equipment_other_fuel', '<f8'),
('exterior_equipment_water', '<f8'),
('exterior_lighting_electricity', '<f8'),
('fan_electricity', '<f8'),
('floor_area', '<f8'), ('generated_electricity', '<f8'),
('heat_recovery_electricity', '<f8'),
('heat_rejection_electricity', '<f8'),
('heating_electricity', '<f8'), ('heating_gas', '<f8'),
('heating_other_fuel', '<f8'), ('heating_water', '<f8'),
('humidification_electricity', '<f8'),
('humidification_water', '<f8'),
('interior_equipment_electricity', '<f8'),
('interior_equipment_gas', '<f8'),
('interior_equipment_other_fuel', '<f8'),
('interior_equipment_water', '<f8'),
('interior_lighting_electricity', '<f8'),
('net_site_electricity', '<f8'), ('net_water', '<f8'),
('pump_electricity', '<f8'),
('refrigeration_electricity', '<f8'),
('service_water', '<f8'),
('service_water_heating_electricity', '<f8'),
('service_water_heating_gas', '<f8'),
('service_water_heating_other_fuel', '<f8'), ('total_gas', '<f8'),
('total_other_fuel', '<f8'), ('total_site_electricity', '<f8'),
('total_water', '<f8')]
cls.eplus_basecols = [
'building_type', 'climate_zone', 'template', 'measure']
cls.mseg_in = {
'hot dry': {
'education': {
'electricity': {
'lighting': {
"technology A": 0,
"technology B": 0,
"technology C": 0},
'heating': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}},
'cooling': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}}},
'natural gas': {
'heating': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}},
'cooling': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}}},
'distillate': {
'heating': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}}}},
'assembly': {
'electricity': {
'lighting': {
"technology A": 0,
"technology B": 0,
"technology C": 0},
'heating': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}},
'cooling': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}}},
'natural gas': {
'heating': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}},
'cooling': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}}},
'distillate': {
'heating': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}}}}},
'mixed humid': {
'education': {
'electricity': {
'lighting': {
"technology A": 0,
"technology B": 0,
"technology C": 0},
'heating': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}},
'cooling': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}}},
'natural gas': {
'heating': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}},
'cooling': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}}},
'distillate': {
'heating': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}}}},
'assembly': {
'electricity': {
'lighting': {
"technology A": 0,
"technology B": 0,
"technology C": 0},
'heating': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}},
'cooling': {
'supply': {
'ASHP': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}}},
'natural gas': {
'heating': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}},
'cooling': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}}},
'distillate': {
'heating': {
'supply': {
'technology A': 0},
'demand': {
'windows conduction': 0,
'windows solar': 0}}}}}}
# Set EnergyPlus building vintage weights (based on square footage)
cls.ok_eplus_vintagewts = {
'DOE Ref Pre-1980': 0.44, '90.1-2004': 0.07, '90.1-2010': 0.07,
'90.1-2013': 1, 'DOE Ref 1980-2004': 0.42}
cls.ok_eplusfiles_in = [
"fullservicerestaurant_scout_2016-07-23-16-25-59.csv",
"secondaryschool_scout_2016-07-23-16-25-59.csv",
"primaryschool_scout_2016-07-23-16-25-59.csv",
"smallhotel_scout_2016-07-23-16-25-59.csv",
"hospital_scout_2016-07-23-16-25-59.csv"]
# Set full paths for EnergyPlus files that are relevant to the measure
eplusfiles_in_fullpaths = [cls.eplus_dir + '/' + x for x in [
"secondaryschool_scout_2016-07-23-16-25-59.csv",
"primaryschool_scout_2016-07-23-16-25-59.csv",
"hospital_scout_2016-07-23-16-25-59.csv"]]
# Use 'build_array' to generate test input data for 'fill_eplus'
cls.ok_perfarray_in = cls.meas.build_array(
cls.eplus_coltypes, eplusfiles_in_fullpaths)
cls.fail_perfarray_in = numpy.rec.array([
('BA-MixedHumid', 'SecondarySchool', '90.1-2013', 'Success',
0, 0.5, 0.5, 0.25, 0.25, 0, 0.25, 0.75, 0, -0.1, 0.1, 0.5, -0.2),
('BA-HotDry', 'PrimarySchool', 'DOE Ref 1980-2004', 'Success',
0, 0.5, 0.5, 0.25, 0.25, 0, 0.25, 0.75, 0, -0.1, 0.1, 0.5, -0.2)],
dtype=[('climate_zone', '<U13'), ('building_type', '<U22'),
('template', '<U17'), ('status', 'U7'),
('floor_area', '<f8'),
('total_site_electricity', '<f8'),
('net_site_electricity', '<f8'),
('total_gas', '<f8'), ('total_other_fuel', '<f8'),
('total_water', '<f8'), ('net_water', '<f8'),
('interior_lighting_electricity', '<f8'),
('interior_equipment_electricity', '<f8'),
('heating_electricity', '<f8'),
('cooling_electricity', '<f8'),
('heating_gas', '<f8'),
('heat_recovery_electricity', '<f8')])
cls.fail_perfdictempty_in = {
"primary": {
'blazing hot': {
'education': {
'electricity': {
'lighting': {'retrofit': 0, 'new': 0}}},
'assembly': {
'electricity': {
'lighting': {'retrofit': 0, 'new': 0}}}},
'mixed humid': {
'education': {
'electricity': {
'lighting': {'retrofit': 0, 'new': 0}}},
'assembly': {
'electricity': {
'lighting': {'retrofit': 0, 'new': 0}}}}},
"secondary": {
'blazing hot': {
'education': {
'electricity': {
'heating': {'retrofit': 0, 'new': 0},
'cooling': {'retrofit': 0, 'new': 0}},
'natural gas': {
'heating': {'retrofit': 0, 'new': 0}}},
'assembly': {
'electricity': {
'heating': {'retrofit': 0, 'new': 0},
'cooling': {'retrofit': 0, 'new': 0}},
'natural gas': {
'heating': {'retrofit': 0, 'new': 0}}}},
'mixed humid': {
'education': {
'electricity': {
'heating': {'retrofit': 0, 'new': 0},
'cooling': {'retrofit': 0, 'new': 0}},
'natural gas': {
'heating': {'retrofit': 0, 'new': 0}}},
'assembly': {
'electricity': {
'heating': {'retrofit': 0, 'new': 0},
'cooling': {'retrofit': 0, 'new': 0}},
'natural gas': {
'heating': {'retrofit': 0, 'new': 0}}}}}}
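# Set the expected row count and column names for the EnergyPlus data
# array assembled by 'build_array' (checked in 'test_array_build')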
cls.ok_array_length_out = 240
cls.ok_arraynames_out = cls.ok_perfarray_in.dtype.names
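# Set the expected blank measure performance dict generated by
# 'create_perf_dict' (checked in 'test_dict_creation')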
cls.ok_perfdictempty_out = {
"primary": {
'hot dry': {
'education': {
'electricity': {
'lighting': {'retrofit': 0, 'new': 0}}},
'assembly': {
'electricity': {
'lighting': {'retrofit': 0, 'new': 0}}}},
'mixed humid': {
'education': {
'electricity': {
'lighting': {'retrofit': 0, 'new': 0}}},
'assembly': {
'electricity': {
'lighting': {'retrofit': 0, 'new': 0}}}}},
"secondary": {
'hot dry': {
'education': {
'electricity': {
'heating': {'retrofit': 0, 'new': 0},
'cooling': {'retrofit': 0, 'new': 0}},
'natural gas': {
'heating': {'retrofit': 0, 'new': 0}},
'distillate': {
'heating': {'retrofit': 0, 'new': 0}}},
'assembly': {
'electricity': {
'heating': {'retrofit': 0, 'new': 0},
'cooling': {'retrofit': 0, 'new': 0}},
'natural gas': {
'heating': {'retrofit': 0, 'new': 0}},
'distillate': {
'heating': {'retrofit': 0, 'new': 0}}}},
'mixed humid': {
'education': {
'electricity': {
'heating': {'retrofit': 0, 'new': 0},
'cooling': {'retrofit': 0, 'new': 0}},
'natural gas': {
'heating': {'retrofit': 0, 'new': 0}},
'distillate': {
'heating': {'retrofit': 0, 'new': 0}}},
'assembly': {
'electricity': {
'heating': {'retrofit': 0, 'new': 0},
'cooling': {'retrofit': 0, 'new': 0}},
'natural gas': {
'heating': {'retrofit': 0, 'new': 0}},
'distillate': {
'heating': {'retrofit': 0, 'new': 0}}}}}}
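# Set the expected measure performance dict after 'fill_perf_dict' has
# applied the EnergyPlus results, weighted by building vintage
# (checked in 'test_dict_fill' and 'test_fill_eplus')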
cls.ok_perfdictfill_out = {
"primary": {
'hot dry': {
'education': {
'electricity': {
'lighting': {'retrofit': 0.5, 'new': 0.5}}},
'assembly': {
'electricity': {
'lighting': {'retrofit': 0.5, 'new': 0.5}}}},
'mixed humid': {
'education': {
'electricity': {
'lighting': {
'retrofit': 0.75, 'new': 0.935}}},
'assembly': {
'electricity': {
'lighting': {
'retrofit': 0.75, 'new': 1}}}}},
"secondary": {
'hot dry': {
'education': {
'electricity': {
'heating': {'retrofit': 0, 'new': 0},
'cooling': {'retrofit': 0.75, 'new': 0.555}},
'natural gas': {
'heating': {
'retrofit': 1.25, 'new': 1.25}},
'distillate': {
'heating': {'retrofit': 0, 'new': 0}}},
'assembly': {
'electricity': {
'heating': {'retrofit': 0, 'new': 0},
'cooling': {'retrofit': 0.75, 'new': 0.75}},
'natural gas': {
'heating': {
'retrofit': 1.25, 'new': 1.25}},
'distillate': {
'heating': {'retrofit': 0, 'new': 0}}}},
'mixed humid': {
'education': {
'electricity': {
'heating': {'retrofit': 0, 'new': 0},
'cooling': {'retrofit': 0.5, 'new': 0.87}},
'natural gas': {
'heating': {
'retrofit': 1.5, 'new': 1.13}},
'distillate': {
'heating': {'retrofit': 0, 'new': 0}}},
'assembly': {
'electricity': {
'heating': {'retrofit': 0, 'new': 0},
'cooling': {'retrofit': 0.5, 'new': 1}},
'natural gas': {
'heating': {
'retrofit': 1.5, 'new': 1}},
'distillate': {
'heating': {'retrofit': 0, 'new': 0}}}}}}
def test_array_build(self):
"""Test 'build_array' function given valid inputs.
Note:
Ensure correct assembly of numpy arrays from all EnergyPlus
files that are relevant to a test measure.
Raises:
AssertionError: If function yields unexpected results.
"""
# Check for correct column names and length of the converted array
self.assertEqual(
[self.ok_perfarray_in.dtype.names, len(self.ok_perfarray_in)],
[self.ok_arraynames_out, self.ok_array_length_out])
def test_dict_creation(self):
"""Test 'create_perf_dict' function given valid inputs.
Note:
Ensure correct generation of measure performance dictionary.
Raises:
AssertionError: If function yields unexpected results.
"""
self.dict_check(self.meas.create_perf_dict(
self.mseg_in), self.ok_perfdictempty_out)
def test_dict_fill(self):
"""Test 'fill_perf_dict' function given valid inputs.
Note:
Ensure correct updating of measure performance dictionary
with EnergyPlus simulation results.
Raises:
AssertionError: If function yields unexpected results.
"""
self.dict_check(
self.meas.fill_perf_dict(
self.ok_perfdictempty_out, self.ok_perfarray_in,
self.ok_eplus_vintagewts, self.eplus_basecols,
eplus_bldg_types={}),
self.ok_perfdictfill_out)
def test_dict_fill_fail(self):
"""Test 'fill_perf_dict' function given invalid inputs.
Note:
Ensure the function fails when given either an invalid blank
performance dictionary to fill or an invalid input array of
EnergyPlus simulation information to fill the dict with.
Raises:
AssertionError: If KeyError is not raised.
"""
with self.assertRaises(KeyError):
# Case with invalid input dictionary
self.meas.fill_perf_dict(
self.fail_perfdictempty_in, self.ok_perfarray_in,
self.ok_eplus_vintagewts, self.eplus_basecols,
eplus_bldg_types={})
# Case with incomplete input array of EnergyPlus information
self.meas.fill_perf_dict(
self.ok_perfdictempty_out, self.fail_perfarray_in,
self.ok_eplus_vintagewts, self.eplus_basecols,
eplus_bldg_types={})
def test_fill_eplus(self):
"""Test 'fill_eplus' function given valid inputs.
Note:
Ensure proper updating of measure performance with
EnergyPlus simulation results from start ('convert_to_array')
to finish ('fill_perf_dict').
Raises:
AssertionError: If function yields unexpected results.
"""
self.meas.fill_eplus(
self.mseg_in, self.eplus_dir, self.eplus_coltypes,
self.ok_eplusfiles_in, self.ok_eplus_vintagewts,
self.eplus_basecols)
# Check for properly updated measure energy_efficiency,
# energy_efficiency_source, and energy_efficiency_source_quality
# attributes.
self.dict_check(
self.meas.energy_efficiency, self.ok_perfdictfill_out)
self.assertEqual(
self.meas.energy_efficiency_source, 'EnergyPlus/OpenStudio')
class MarketUpdatesTest(unittest.TestCase, CommonMethods):
"""Test 'fill_mkts' function.
Ensure that the function properly fills in market microsegment data
for a series of sample measures.
Attributes:
verbose (NoneType): Determines whether to print all user messages.
convert_data (dict): ECM cost conversion data.
sample_mseg_in (dict): Sample baseline microsegment stock/energy.
sample_cpl_in (dict): Sample baseline technology cost, performance,
and lifetime.
ok_tpmeas_fullchk_in (list): Valid sample measure information
to update with markets data; measure cost, performance, and lifetime
attributes are given as point estimates. Used to check the full
measure 'markets' attribute under a 'Technical potential' scenario.
ok_tpmeas_partchk_in (list): Valid sample measure information to update
with markets data; measure cost, performance, and lifetime
attributes are given as point estimates. Used to check the
'master_mseg' branch of measure 'markets' attribute under a
'Technical potential' scenario.
ok_mapmeas_partchk_in (list): Valid sample measure information
to update with markets data; measure cost, performance, and lifetime
attributes are given as point estimates. Used to check the
'master_mseg' branch of measure 'markets' attribute under a 'Max
adoption potential' scenario.
ok_distmeas_in (list): Valid sample measure information to
update with markets data; measure cost, performance, and lifetime
attributes are given as probability distributions.
ok_partialmeas_in (list): Partially valid measure information to update
with markets data.
failmeas_in (list): Invalid sample measure information that should
yield an error when entered into the function.
warnmeas_in (list): Incomplete sample measure information that
should yield warnings when entered into the function (measure
sub-market scaling fraction source attributions are invalid).
ok_tpmeas_fullchk_msegout (list): Master market microsegments
information that should be yielded given 'ok_tpmeas_fullchk_in'.
ok_tpmeas_fullchk_competechoiceout (list): Consumer choice information
that should be yielded given 'ok_tpmeas_fullchk_in'.
ok_tpmeas_fullchk_msegadjout (list): Secondary microsegment adjustment
information that should be yielded given 'ok_tpmeas_fullchk_in'.
ok_tpmeas_fullchk_break_out (list): Output breakout information that
should be yielded given 'ok_tpmeas_fullchk_in'.
ok_tpmeas_partchk_msegout (list): Master market microsegments
information that should be yielded given 'ok_tpmeas_partchk_in'.
ok_mapmas_partchck_msegout (list): Master market microsegments
information that should be yielded given 'ok_mapmeas_partchk_in'.
ok_distmeas_out (list): Means and sampling Ns for measure energy/cost
markets and lifetime that should be yielded given 'ok_distmeas_in'.
ok_partialmeas_out (list): Master market microsegments information
that should be yielded given 'ok_partialmeas_in'.
ok_warnmeas_out (list): Warning messages that should be yielded
given 'warnmeas_in'.
"""
@classmethod
def setUpClass(cls):
"""Define variables and objects for use across all class functions."""
# Base directory
base_dir = os.getcwd()
handyvars = ecm_prep.UsefulVars(base_dir, ecm_prep.UsefulInputFiles())
# Hard code aeo_years to fit test years
handyvars.aeo_years = ["2009", "2010"]
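# Hard code the retrofit rate assumed across the tests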
handyvars.retro_rate = 0.02
# Hard code carbon intensity, site-source conversion, and cost data for
# tests such that these data are not dependent on an input file that
# may change in the future
handyvars.ss_conv = {
"electricity": {"2009": 3.19, "2010": 3.20},
"natural gas": {"2009": 1.01, "2010": 1.01},
"distillate": {"2009": 1.01, "2010": 1.01},
"other fuel": {"2009": 1.01, "2010": 1.01}}
handyvars.carb_int = {
"residential": {
"electricity": {"2009": 56.84702689, "2010": 56.16823191},
"natural gas": {"2009": 56.51576602, "2010": 54.91762852},
"distillate": {"2009": 49.5454521, "2010": 52.59751597},
"other fuel": {"2009": 49.5454521, "2010": 52.59751597}},
"commercial": {
"electricity": {"2009": 56.84702689, "2010": 56.16823191},
"natural gas": {"2009": 56.51576602, "2010": 54.91762852},
"distillate": {"2009": 49.5454521, "2010": 52.59751597},
"other fuel": {"2009": 49.5454521, "2010": 52.59751597}}}
handyvars.ecosts = {
"residential": {
"electricity": {"2009": 10.14, "2010": 9.67},
"natural gas": {"2009": 11.28, "2010": 10.78},
"distillate": {"2009": 21.23, "2010": 20.59},
"other fuel": {"2009": 21.23, "2010": 20.59}},
"commercial": {
"electricity": {"2009": 9.08, "2010": 8.55},
"natural gas": {"2009": 8.96, "2010": 8.59},
"distillate": {"2009": 14.81, "2010": 14.87},
"other fuel": {"2009": 14.81, "2010": 14.87}}}
handyvars.ccosts = {"2009": 33, "2010": 33}
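# No verbose messaging; ECM cost conversion data are left empty for
# these tests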
cls.verbose = None
cls.convert_data = {}
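# Sample baseline microsegment stock/energy data used to update the
# sample measures' markets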
cls.sample_mseg_in = {
"AIA_CZ1": {
"assembly": {
"total square footage": {"2009": 11, "2010": 11},
"new square footage": {"2009": 0, "2010": 0},
"electricity": {
"heating": {
"demand": {
"windows conduction": {
"stock": "NA",
"energy": {
"2009": 0, "2010": 0}},
"windows solar": {
"stock": "NA",
"energy": {
"2009": 1, "2010": 1}},
"lighting gain": {
"stock": "NA",
"energy": {
"2009": -7, "2010": -7}}}},
"cooling": {
"demand": {
"windows conduction": {
"stock": "NA",
"energy": {
"2009": 5, "2010": 5}},
"windows solar": {
"stock": "NA",
"energy": {
"2009": 6, "2010": 6}},
"lighting gain": {
"stock": "NA",
"energy": {
"2009": 6, "2010": 6}}}},
"lighting": {
"T5 F28": {
"stock": "NA",
"energy": {
"2009": 11, "2010": 11}}},
"PCs": {
"stock": "NA",
"energy": {"2009": 12, "2010": 12}},
"MELs": {
"distribution transformers": {
"stock": "NA",
"energy": {"2009": 24, "2010": 24}
}
}}},
"single family home": {
"total square footage": {"2009": 100, "2010": 200},
"total homes": {"2009": 1000, "2010": 1000},
"new homes": {"2009": 100, "2010": 50},
"electricity": {
"heating": {
"demand": {
"windows conduction": {
"stock": "NA",
"energy": {"2009": 0, "2010": 0}},
"windows solar": {
"stock": "NA",
"energy": {"2009": 1, "2010": 1}},
"infiltration": {
"stock": "NA",
"energy": {"2009": 10, "2010": 10}}},
"supply": {
"resistance heat": {
"stock": {"2009": 2, "2010": 2},
"energy": {"2009": 2, "2010": 2}},
"ASHP": {
"stock": {"2009": 3, "2010": 3},
"energy": {"2009": 3, "2010": 3}},
"GSHP": {
"stock": {"2009": 4, "2010": 4},
"energy": {"2009": 4, "2010": 4}}}},
"secondary heating": {
"demand": {
"windows conduction": {
"stock": "NA",
"energy": {"2009": 5, "2010": 5}},
"windows solar": {
"stock": "NA",
"energy": {"2009": 6, "2010": 6}},
"infiltration": {
"stock": "NA",
"energy": {"2009": 10, "2010": 10}}},
"supply": {"non-specific": 7}},
"cooling": {
"demand": {
"windows conduction": {
"stock": "NA",
"energy": {"2009": 5, "2010": 5}},
"windows solar": {
"stock": "NA",
"energy": {"2009": 6, "2010": 6}},
"infiltration": {
"stock": "NA",
"energy": {"2009": 10, "2010": 10}}},
"supply": {
"central AC": {
"stock": {"2009": 7, "2010": 7},
"energy": {"2009": 7, "2010": 7}},
"room AC": {
"stock": {"2009": 8, "2010": 8},
"energy": {"2009": 8, "2010": 8}},
"ASHP": {
"stock": {"2009": 9, "2010": 9},
"energy": {"2009": 9, "2010": 9}},
"GSHP": {
"stock": {"2009": 10, "2010": 10},
"energy": {"2009": 10, "2010": 10}}}},
"lighting": {
"linear fluorescent (LED)": {
"stock": {"2009": 11, "2010": 11},
"energy": {"2009": 11, "2010": 11}},
"general service (LED)": {
"stock": {"2009": 12, "2010": 12},
"energy": {"2009": 12, "2010": 12}},
"reflector (LED)": {
"stock": {"2009": 13, "2010": 13},
"energy": {"2009": 13, "2010": 13}},
"external (LED)": {
"stock": {"2009": 14, "2010": 14},
"energy": {"2009": 14, "2010": 14}}},
"refrigeration": {
"stock": {"2009": 111, "2010": 111},
"energy": {"2009": 111, "2010": 111}},
"TVs": {
"TVs": {
"stock": {"2009": 99, "2010": 99},
"energy": {"2009": 9, "2010": 9}},
"set top box": {
"stock": {"2009": 99, "2010": 99},
"energy": {"2009": 999, "2010": 999}}
},
"computers": {
"desktop PC": {
"stock": {"2009": 44, "2010": 44},
"energy": {"2009": 4, "2010": 4}},
"laptop PC": {
"stock": {"2009": 55, "2010": 55},
"energy": {"2009": 5, "2010": 5}}
},
"other (grid electric)": {
"freezers": {
"stock": {"2009": 222, "2010": 222},
"energy": {"2009": 222, "2010": 222}},
"other MELs": {
"stock": {"2009": 333, "2010": 333},
"energy": {"2009": 333, "2010": 333}}}},
"natural gas": {
"water heating": {
"stock": {"2009": 15, "2010": 15},
"energy": {"2009": 15, "2010": 15}},
"heating": {
"demand": {
"windows conduction": {
"stock": "NA",
"energy": {"2009": 0,
"2010": 0}},
"windows solar": {
"stock": "NA",
"energy": {"2009": 1,
"2010": 1}},
"infiltration": {
"stock": "NA",
"energy": {
"2009": 10, "2010": 10}}}},
"secondary heating": {
"demand": {
"windows conduction": {
"stock": "NA",
"energy": {"2009": 5,
"2010": 5}},
"windows solar": {
"stock": "NA",
"energy": {"2009": 6,
"2010": 6}},
"infiltration": {
"stock": "NA",
"energy": {
"2009": 10, "2010": 10}}}},
"cooling": {
"demand": {
"windows conduction": {
"stock": "NA",
"energy": {"2009": 5, "2010": 5}},
"windows solar": {
"stock": "NA",
"energy": {"2009": 6, "2010": 6}},
"infiltration": {
"stock": "NA",
"energy": {
"2009": 10, "2010": 10}}}}}},
"multi family home": {
"total square footage": {"2009": 300, "2010": 400},
"total homes": {"2009": 1000, "2010": 1000},
"new homes": {"2009": 100, "2010": 50},
"electricity": {
"heating": {
"demand": {
"windows conduction": {
"stock": "NA",
"energy": {"2009": 0, "2010": 0}},
"windows solar": {
"stock": "NA",
"energy": {"2009": 1, "2010": 1}}},
"supply": {
"resistance heat": {
"stock": {"2009": 2, "2010": 2},
"energy": {"2009": 2, "2010": 2}},
"ASHP": {
"stock": {"2009": 3, "2010": 3},
"energy": {"2009": 3, "2010": 3}},
"GSHP": {
"stock": {"2009": 4, "2010": 4},
"energy": {"2009": 4, "2010": 4}}}},
"lighting": {
"linear fluorescent (LED)": {
"stock": {"2009": 11, "2010": 11},
"energy": {"2009": 11, "2010": 11}},
"general service (LED)": {
"stock": {"2009": 12, "2010": 12},
"energy": {"2009": 12, "2010": 12}},
"reflector (LED)": {
"stock": {"2009": 13, "2010": 13},
"energy": {"2009": 13, "2010": 13}},
"external (LED)": {
"stock": {"2009": 14, "2010": 14},
"energy": {"2009": 14, "2010": 14}}}}}},
"AIA_CZ2": {
"single family home": {
"total square footage": {"2009": 500, "2010": 600},
"total homes": {"2009": 1000, "2010": 1000},
"new homes": {"2009": 100, "2010": 50},
"electricity": {
"heating": {
"demand": {
"windows conduction": {
"stock": "NA",
"energy": {"2009": 0, "2010": 0}},
"windows solar": {
"stock": "NA",
"energy": {"2009": 1, "2010": 1}},
"infiltration": {
"stock": "NA",
"energy": {"2009": 10, "2010": 10}}},
"supply": {
"resistance heat": {
"stock": {"2009": 2, "2010": 2},
"energy": {"2009": 2, "2010": 2}},
"ASHP": {
"stock": {"2009": 3, "2010": 3},
"energy": {"2009": 3, "2010": 3}},
"GSHP": {
"stock": {"2009": 4, "2010": 4},
"energy": {"2009": 4, "2010": 4}}}},
"secondary heating": {
"demand": {
"windows conduction": {
"stock": "NA",
"energy": {"2009": 5, "2010": 5}},
"windows solar": {
"stock": "NA",
"energy": {"2009": 6, "2010": 6}},
"infiltration": {
"stock": "NA",
"energy": {"2009": 10, "2010": 10}}},
"supply": {"non-specific": 7}},
"cooling": {
"demand": {
"windows conduction": {
"stock": "NA",
"energy": {"2009": 5, "2010": 5}},
"windows solar": {
"stock": "NA",
"energy": {"2009": 6, "2010": 6}},
"infiltration": {
"stock": "NA",
"energy": {"2009": 10, "2010": 10}}},
"supply": {
"central AC": {
"stock": {"2009": 7, "2010": 7},
"energy": {"2009": 7, "2010": 7}},
"room AC": {
"stock": {"2009": 8, "2010": 8},
"energy": {"2009": 8, "2010": 8}},
"ASHP": {
"stock": {"2009": 9, "2010": 9},
"energy": {"2009": 9, "2010": 9}},
"GSHP": {
"stock": {"2009": 10, "2010": 10},
"energy": {"2009": 10, "2010": 10}}}},
"lighting": {
"linear fluorescent (LED)": {
"stock": {"2009": 11, "2010": 11},
"energy": {"2009": 11, "2010": 11}},
"general service (LED)": {
"stock": {"2009": 12, "2010": 12},
"energy": {"2009": 12, "2010": 12}},
"reflector (LED)": {
"stock": {"2009": 13, "2010": 13},
"energy": {"2009": 13, "2010": 13}},
"external (LED)": {
"stock": {"2009": 14, "2010": 14},
"energy": {"2009": 14, "2010": 14}}},
"TVs": {
"TVs": {
"stock": {"2009": 99, "2010": 99},
"energy": {"2009": 9, "2010": 9}},
"set top box": {
"stock": {"2009": 99, "2010": 99},
"energy": {"2009": 999, "2010": 999}}
},
"computers": {
"desktop PC": {
"stock": {"2009": 44, "2010": 44},
"energy": {"2009": 4, "2010": 4}},
"laptop PC": {
"stock": {"2009": 55, "2010": 55},
"energy": {"2009": 5, "2010": 5}}
}},
"natural gas": {"water heating": {
"stock": {"2009": 15, "2010": 15},
"energy": {"2009": 15, "2010": 15}}}},
"multi family home": {
"total square footage": {"2009": 700, "2010": 800},
"total homes": {"2009": 1000, "2010": 1000},
"new homes": {"2009": 100, "2010": 50},
"electricity": {
"heating": {
"demand": {
"windows conduction": {
"stock": "NA",
"energy": {"2009": 0, "2010": 0}},
"windows solar": {
"stock": "NA",
"energy": {"2009": 1, "2010": 1}}},
"supply": {
"resistance heat": {
"stock": {"2009": 2, "2010": 2},
"energy": {"2009": 2, "2010": 2}},
"ASHP": {
"stock": {"2009": 3, "2010": 3},
"energy": {"2009": 3, "2010": 3}},
"GSHP": {
"stock": {"2009": 4, "2010": 4}}}},
"lighting": {
"linear fluorescent (LED)": {
"stock": {"2009": 11, "2010": 11},
"energy": {"2009": 11, "2010": 11}},
"general service (LED)": {
"stock": {"2009": 12, "2010": 12},
"energy": {"2009": 12, "2010": 12}},
"reflector (LED)": {
"stock": {"2009": 13, "2010": 13},
"energy": {"2009": 13, "2010": 13}},
"external (LED)": {
"stock": {"2009": 14, "2010": 14},
"energy": {"2009": 14, "2010": 14}}}}}},
"AIA_CZ4": {
"multi family home": {
"total square footage": {"2009": 900, "2010": 1000},
"total homes": {"2009": 1000, "2010": 1000},
"new homes": {"2009": 100, "2010": 50},
"electricity": {
"lighting": {
"linear fluorescent (LED)": {
"stock": {"2009": 11, "2010": 11},
"energy": {"2009": 11, "2010": 11}},
"general service (LED)": {
"stock": {"2009": 12, "2010": 12},
"energy": {"2009": 12, "2010": 12}},
"reflector (LED)": {
"stock": {"2009": 13, "2010": 13},
"energy": {"2009": 13, "2010": 13}},
"external (LED)": {
"stock": {"2009": 14, "2010": 14},
"energy": {"2009": 14, "2010": 14}}}}}}}
cls.sample_cpl_in = {
"AIA_CZ1": {
"assembly": {
"electricity": {
"heating": {
"demand": {
"windows conduction": {
"performance": {
"typical": {"2009": 1, "2010": 1},
"best": {"2009": 1, "2010": 1},
"units": "R Value",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 1, "2010": 1},
"best": {"2009": 1, "2010": 1},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 10, "2010": 10},
"range": {"2009": 1, "2010": 1},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"windows solar": {
"performance": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "SHGC",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 20, "2010": 20},
"range": {"2009": 2, "2010": 2},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"lighting gain": 0}},
"cooling": {
"demand": {
"windows conduction": {
"performance": {
"typical": {"2009": 1, "2010": 1},
"best": {"2009": 1, "2010": 1},
"units": "R Value",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 1, "2010": 1},
"best": {"2009": 1, "2010": 1},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 10, "2010": 10},
"range": {"2009": 1, "2010": 1},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"windows solar": {
"performance": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "SHGC",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 20, "2010": 20},
"range": {"2009": 2, "2010": 2},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"lighting gain": 0}},
"lighting": {
"T5 F28": {
"performance": {
"typical": {"2009": 14, "2010": 14},
"best": {"2009": 14, "2010": 14},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 14, "2010": 14},
"best": {"2009": 14, "2010": 14},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 140, "2010": 140},
"range": {"2009": 14, "2010": 14},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}},
"PCs": 0,
"MELs": {
"distribution transformers": 0
}}},
"single family home": {
"electricity": {
"heating": {
"demand": {
"windows conduction": {
"performance": {
"typical": {"2009": 1, "2010": 1},
"best": {"2009": 1, "2010": 1},
"units": "R Value",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 1, "2010": 1},
"best": {"2009": 1, "2010": 1},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 10, "2010": 10},
"range": {"2009": 1, "2010": 1},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"windows solar": {
"performance": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "SHGC",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 20, "2010": 20},
"range": {"2009": 2, "2010": 2},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"infiltration": {
"performance": {
"typical": {"2009": 2, "2010": 3},
"best": {"2009": 2, "2010": 3},
"units": "ACH50",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 20, "2010": 20},
"range": {"2009": 2, "2010": 2},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}},
"supply": {
"resistance heat": {
"performance": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 20, "2010": 20},
"range": {"2009": 2, "2010": 2},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"ASHP": {
"performance": {
"typical": {"2009": 3, "2010": 3},
"best": {"2009": 3, "2010": 3},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 3, "2010": 3},
"best": {"2009": 3, "2010": 3},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 30, "2010": 30},
"range": {"2009": 3, "2010": 3},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"GSHP": {
"performance": {
"typical": {"2009": 4, "2010": 4},
"best": {"2009": 4, "2010": 4},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 4, "2010": 4},
"best": {"2009": 4, "2010": 4},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 40, "2010": 40},
"range": {"2009": 4, "2010": 4},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}}},
"secondary heating": {
"demand": {
"windows conduction": {
"performance": {
"typical": {"2009": 5, "2010": 5},
"best": {"2009": 5, "2010": 5},
"units": "R Value",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 5, "2010": 5},
"best": {"2009": 5, "2010": 5},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 50, "2010": 50},
"range": {"2009": 5, "2010": 5},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"windows solar": {
"performance": {
"typical": {"2009": 6, "2010": 6},
"best": {"2009": 6, "2010": 6},
"units": "SHGC",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 6, "2010": 6},
"best": {"2009": 6, "2010": 6},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 60, "2010": 60},
"range": {"2009": 6, "2010": 6},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"infiltration": {
"performance": {
"typical": {"2009": 2, "2010": 3},
"best": {"2009": 2, "2010": 3},
"units": "ACH50",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 20, "2010": 20},
"range": {"2009": 2, "2010": 2},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}},
"supply": {
"non-specific": {
"performance": {
"typical": {"2009": 7, "2010": 7},
"best": {"2009": 7, "2010": 7},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 7, "2010": 7},
"best": {"2009": 7, "2010": 7},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 70, "2010": 70},
"range": {"2009": 7, "2010": 7},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}}},
"cooling": {
"demand": {
"windows conduction": {
"performance": {
"typical": {
"new": {"2009": 8, "2010": 8},
"existing": {
"2009": 8, "2010": 8}
},
"best": {"2009": 8, "2010": 8},
"units": "R Value",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 8, "2010": 8},
"best": {"2009": 8, "2010": 8},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 80, "2010": 80},
"range": {"2009": 8, "2010": 8},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"windows solar": {
"performance": {
"typical": {"2009": 9, "2010": 9},
"best": {"2009": 9, "2010": 9},
"units": "SHGC",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 9, "2010": 9},
"best": {"2009": 9, "2010": 9},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 90, "2010": 90},
"range": {"2009": 9, "2010": 9},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"infiltration": {
"performance": {
"typical": {"2009": 2, "2010": 3},
"best": {"2009": 2, "2010": 3},
"units": "ACH50",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 20, "2010": 20},
"range": {"2009": 2, "2010": 2},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}},
"supply": {
"central AC": {
"performance": {
"typical": {"2009": 10, "2010": 10},
"best": {"2009": 10, "2010": 10},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 10, "2010": 10},
"best": {"2009": 10, "2010": 10},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 100, "2010": 100},
"range": {"2009": 10, "2010": 10},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"room AC": {
"performance": {
"typical": {"2009": 11, "2010": 11},
"best": {"2009": 11, "2010": 11},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 11, "2010": 11},
"best": {"2009": 11, "2010": 11},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 110, "2010": 110},
"range": {"2009": 11, "2010": 11},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"ASHP": {
"performance": {
"typical": {"2009": 12, "2010": 12},
"best": {"2009": 12, "2010": 12},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 12, "2010": 12},
"best": {"2009": 12, "2010": 12},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 120, "2010": 120},
"range": {"2009": 12, "2010": 12},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"GSHP": {
"performance": {
"typical": {"2009": 13, "2010": 13},
"best": {"2009": 13, "2010": 13},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 13, "2010": 13},
"best": {"2009": 13, "2010": 13},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 130, "2010": 130},
"range": {"2009": 13, "2010": 13},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}}},
"lighting": {
"linear fluorescent (LED)": {
"performance": {
"typical": {"2009": 14, "2010": 14},
"best": {"2009": 14, "2010": 14},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 14, "2010": 14},
"best": {"2009": 14, "2010": 14},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {
"2009": 140 * (3/24),
"2010": 140 * (3/24)},
"range": {"2009": 14, "2010": 14},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"general service (LED)": {
"performance": {
"typical": {"2009": 15, "2010": 15},
"best": {"2009": 15, "2010": 15},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 15, "2010": 15},
"best": {"2009": 15, "2010": 15},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {
"2009": 150 * (3/24),
"2010": 150 * (3/24)},
"range": {"2009": 15, "2010": 15},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"reflector (LED)": {
"performance": {
"typical": {"2009": 16, "2010": 16},
"best": {"2009": 16, "2010": 16},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 16, "2010": 16},
"best": {"2009": 16, "2010": 16},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {
"2009": 160 * (3/24),
"2010": 160 * (3/24)},
"range": {"2009": 16, "2010": 16},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"external (LED)": {
"performance": {
"typical": {"2009": 17, "2010": 17},
"best": {"2009": 17, "2010": 17},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 17, "2010": 17},
"best": {"2009": 17, "2010": 17},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {
"2009": 170 * (3/24),
"2010": 170 * (3/24)},
"range": {"2009": 17, "2010": 17},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}},
"refrigeration": {
"performance": {
"typical": {"2009": 550, "2010": 550},
"best": {"2009": 450, "2010": 450},
"units": "kWh/yr",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 300, "2010": 300},
"best": {"2009": 600, "2010": 600},
"units": "2010$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 17, "2010": 17},
"range": {"2009": 6, "2010": 6},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type": "logistic regression",
"parameters": {
"b1": {"2009": "NA", "2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"TVs": {
"TVs": {
"performance": {
"typical": {"2009": "NA", "2010": "NA"},
"best": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"installed cost": {
"typical": {"2009": "NA", "2010": "NA"},
"best": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"lifetime": {
"average": {"2009": "NA", "2010": "NA"},
"range": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"set top box": {
"performance": {
"typical": {"2009": "NA", "2010": "NA"},
"best": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"installed cost": {
"typical": {"2009": "NA", "2010": "NA"},
"best": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"lifetime": {
"average": {"2009": "NA", "2010": "NA"},
"range": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}
},
"computers": {
"desktop PC": {
"performance": {
"typical": {"2009": "NA", "2010": "NA"},
"best": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"installed cost": {
"typical": {"2009": "NA", "2010": "NA"},
"best": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"lifetime": {
"average": {"2009": "NA", "2010": "NA"},
"range": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"laptop PC": {
"performance": {
"typical": {"2009": "NA", "2010": "NA"},
"best": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"installed cost": {
"typical": {"2009": "NA", "2010": "NA"},
"best": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"lifetime": {
"average": {"2009": "NA", "2010": "NA"},
"range": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}
},
"other (grid electric)": {
"freezers": {
"performance": {
"typical": {"2009": 550, "2010": 550},
"best": {"2009": 450, "2010": 450},
"units": "kWh/yr",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 100, "2010": 100},
"best": {"2009": 200, "2010": 200},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 15, "2010": 15},
"range": {"2009": 3, "2010": 3},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"other MELs": {
"performance": {
"typical": {"2009": "NA", "2010": "NA"},
"best": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"installed cost": {
"typical": {"2009": "NA", "2010": "NA"},
"best": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"lifetime": {
"average": {"2009": "NA", "2010": "NA"},
"range": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}}},
"natural gas": {
"water heating": {
"performance": {
"typical": {"2009": 18, "2010": 18},
"best": {"2009": 18, "2010": 18},
"units": "EF",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 18, "2010": 18},
"best": {"2009": 18, "2010": 18},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 180, "2010": 180},
"range": {"2009": 18, "2010": 18},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type": "logistic regression",
"parameters": {
"b1": {"2009": "NA", "2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"heating": {
"demand": {
"windows conduction": {
"performance": {
"typical": {"2009": 1, "2010": 1},
"best": {"2009": 1, "2010": 1},
"units": "R Value",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 1, "2010": 1},
"best": {"2009": 1, "2010": 1},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 10, "2010": 10},
"range": {"2009": 1, "2010": 1},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"windows solar": {
"performance": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "SHGC",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 20, "2010": 20},
"range": {"2009": 2, "2010": 2},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"infiltration": {
"performance": {
"typical": {"2009": 2, "2010": 3},
"best": {"2009": 2, "2010": 3},
"units": "ACH50",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 20, "2010": 20},
"range": {"2009": 2, "2010": 2},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}}},
"secondary heating": {
"demand": {
"windows conduction": {
"performance": {
"typical": {"2009": 5, "2010": 5},
"best": {"2009": 5, "2010": 5},
"units": "R Value",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 5, "2010": 5},
"best": {"2009": 5, "2010": 5},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 50, "2010": 50},
"range": {"2009": 5, "2010": 5},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"windows solar": {
"performance": {
"typical": {"2009": 6, "2010": 6},
"best": {"2009": 6, "2010": 6},
"units": "SHGC",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 6, "2010": 6},
"best": {"2009": 6, "2010": 6},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 60, "2010": 60},
"range": {"2009": 6, "2010": 6},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"infiltration": {
"performance": {
"typical": {"2009": 2, "2010": 3},
"best": {"2009": 2, "2010": 3},
"units": "ACH50",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 20, "2010": 20},
"range": {"2009": 2, "2010": 2},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}}},
"cooling": {
"demand": {
"windows conduction": {
"performance": {
"typical": {"2009": 8, "2010": 8},
"best": {"2009": 8, "2010": 8},
"units": "R Value",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 8, "2010": 8},
"best": {"2009": 8, "2010": 8},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 80, "2010": 80},
"range": {"2009": 8, "2010": 8},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"windows solar": {
"performance": {
"typical": {"2009": 9, "2010": 9},
"best": {"2009": 9, "2010": 9},
"units": "SHGC",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 9, "2010": 9},
"best": {"2009": 9, "2010": 9},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 90, "2010": 90},
"range": {"2009": 9, "2010": 9},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"infiltration": {
"performance": {
"typical": {"2009": 2, "2010": 3},
"best": {"2009": 2, "2010": 3},
"units": "ACH50",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 20, "2010": 20},
"range": {"2009": 2, "2010": 2},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}}}}},
"multi family home": {
"electricity": {
"heating": {
"demand": {
"windows conduction": {
"performance": {
"typical": {"2009": 19, "2010": 19},
"best": {"2009": 19, "2010": 19},
"units": "R Value",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 19, "2010": 19},
"best": {"2009": 19, "2010": 19},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 190, "2010": 190},
"range": {"2009": 19, "2010": 19},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"windows solar": {
"performance": {
"typical": {"2009": 20, "2010": 20},
"best": {"2009": 20, "2010": 20},
"units": "SHGC",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 20, "2010": 20},
"best": {"2009": 20, "2010": 20},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 200, "2010": 200},
"range": {"2009": 20, "2010": 20},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}},
"supply": {
"resistance heat": {
"performance": {
"typical": {"2009": 21, "2010": 21},
"best": {"2009": 21, "2010": 21},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 21, "2010": 21},
"best": {"2009": 21, "2010": 21},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 210, "2010": 210},
"range": {"2009": 21, "2010": 21},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"ASHP": {
"performance": {
"typical": {"2009": 22, "2010": 22},
"best": {"2009": 22, "2010": 22},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 22, "2010": 22},
"best": {"2009": 22, "2010": 22},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 220, "2010": 220},
"range": {"2009": 22, "2010": 22},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"GSHP": {
"performance": {
"typical": {"2009": 23, "2010": 23},
"best": {"2009": 23, "2010": 23},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 23, "2010": 23},
"best": {"2009": 23, "2010": 23},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 230, "2010": 230},
"range": {"2009": 23, "2010": 23},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}}},
"lighting": {
"linear fluorescent (LED)": {
"performance": {
"typical": {"2009": 24, "2010": 24},
"best": {"2009": 24, "2010": 24},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 24, "2010": 24},
"best": {"2009": 24, "2010": 24},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {
"2009": 240 * (3/24),
"2010": 240 * (3/24)},
"range": {"2009": 24, "2010": 24},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"general service (LED)": {
"performance": {
"typical": {"2009": 25, "2010": 25},
"best": {"2009": 25, "2010": 25},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 25, "2010": 25},
"best": {"2009": 25, "2010": 25},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {
"2009": 250 * (3/24),
"2010": 250 * (3/24)},
"range": {"2009": 25, "2010": 25},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"reflector (LED)": {
"performance": {
"typical": {"2009": 25, "2010": 25},
"best": {"2009": 25, "2010": 25},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 25, "2010": 25},
"best": {"2009": 25, "2010": 25},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {
"2009": 250 * (3/24),
"2010": 250 * (3/24)},
"range": {"2009": 25, "2010": 25},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"external (LED)": {
"performance": {
"typical": {"2009": 25, "2010": 25},
"best": {"2009": 25, "2010": 25},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 25, "2010": 25},
"best": {"2009": 25, "2010": 25},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {
"2009": 250 * (3/24),
"2010": 250 * (3/24)},
"range": {"2009": 25, "2010": 25},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}}}}},
"AIA_CZ2": {
"single family home": {
"electricity": {
"heating": {
"demand": {
"windows conduction": {
"performance": {
"typical": {"2009": 1, "2010": 1},
"best": {"2009": 1, "2010": 1},
"units": "R Value",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 1, "2010": 1},
"best": {"2009": 1, "2010": 1},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 10, "2010": 10},
"range": {"2009": 1, "2010": 1},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"windows solar": {
"performance": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "SHGC",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 20, "2010": 20},
"range": {"2009": 2, "2010": 2},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"infiltration": {
"performance": {
"typical": {"2009": 2, "2010": 3},
"best": {"2009": 2, "2010": 3},
"units": "ACH50",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 20, "2010": 20},
"range": {"2009": 2, "2010": 2},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}},
"supply": {
"resistance heat": {
"performance": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 20, "2010": 20},
"range": {"2009": 2, "2010": 2},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"ASHP": {
"performance": {
"typical": {"2009": 3, "2010": 3},
"best": {"2009": 3, "2010": 3},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 3, "2010": 3},
"best": {"2009": 3, "2010": 3},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 30, "2010": 30},
"range": {"2009": 3, "2010": 3},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"GSHP": {
"performance": {
"typical": {"2009": 4, "2010": 4},
"best": {"2009": 4, "2010": 4},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 4, "2010": 4},
"best": {"2009": 4, "2010": 4},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 40, "2010": 40},
"range": {"2009": 4, "2010": 4},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}}},
"secondary heating": {
"demand": {
"windows conduction": {
"performance": {
"typical": {"2009": 5, "2010": 5},
"best": {"2009": 5, "2010": 5},
"units": "R Value",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 5, "2010": 5},
"best": {"2009": 5, "2010": 5},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 50, "2010": 50},
"range": {"2009": 5, "2010": 5},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"windows solar": {
"performance": {
"typical": {"2009": 6, "2010": 6},
"best": {"2009": 6, "2010": 6},
"units": "SHGC",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 6, "2010": 6},
"best": {"2009": 6, "2010": 6},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 60, "2010": 60},
"range": {"2009": 6, "2010": 6},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"infiltration": {
"performance": {
"typical": {"2009": 2, "2010": 3},
"best": {"2009": 2, "2010": 3},
"units": "ACH50",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 20, "2010": 20},
"range": {"2009": 2, "2010": 2},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}},
"supply": {
"non-specific": {
"performance": {
"typical": {"2009": 7, "2010": 7},
"best": {"2009": 7, "2010": 7},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 7, "2010": 7},
"best": {"2009": 7, "2010": 7},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 70, "2010": 70},
"range": {"2009": 7, "2010": 7},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}}},
"cooling": {
"demand": {
"windows conduction": {
"performance": {
"typical": {"2009": 8, "2010": 8},
"best": {"2009": 8, "2010": 8},
"units": "R Value",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 8, "2010": 8},
"best": {"2009": 8, "2010": 8},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 80, "2010": 80},
"range": {"2009": 8, "2010": 8},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"windows solar": {
"performance": {
"typical": {"2009": 9, "2010": 9},
"best": {"2009": 9, "2010": 9},
"units": "SHGC",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 9, "2010": 9},
"best": {"2009": 9, "2010": 9},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 90, "2010": 90},
"range": {"2009": 9, "2010": 9},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"infiltration": {
"performance": {
"typical": {"2009": 2, "2010": 3},
"best": {"2009": 2, "2010": 3},
"units": "ACH50",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 2, "2010": 2},
"best": {"2009": 2, "2010": 2},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 20, "2010": 20},
"range": {"2009": 2, "2010": 2},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}},
"supply": {
"central AC": {
"performance": {
"typical": {"2009": 10, "2010": 10},
"best": {"2009": 10, "2010": 10},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 10, "2010": 10},
"best": {"2009": 10, "2010": 10},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 100, "2010": 100},
"range": {"2009": 10, "2010": 10},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"room AC": {
"performance": {
"typical": {"2009": 11, "2010": 11},
"best": {"2009": 11, "2010": 11},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 11, "2010": 11},
"best": {"2009": 11, "2010": 11},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 110, "2010": 110},
"range": {"2009": 11, "2010": 11},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"ASHP": {
"performance": {
"typical": {"2009": 12, "2010": 12},
"best": {"2009": 12, "2010": 12},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 12, "2010": 12},
"best": {"2009": 12, "2010": 12},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 120, "2010": 120},
"range": {"2009": 12, "2010": 12},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"GSHP": {
"performance": {
"typical": {"2009": 13, "2010": 13},
"best": {"2009": 13, "2010": 13},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 13, "2010": 13},
"best": {"2009": 13, "2010": 13},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 130, "2010": 130},
"range": {"2009": 13, "2010": 13},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}}},
"lighting": {
"linear fluorescent (LED)": {
"performance": {
"typical": {"2009": 14, "2010": 14},
"best": {"2009": 14, "2010": 14},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 14, "2010": 14},
"best": {"2009": 14, "2010": 14},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {
"2009": 140 * (3/24),
"2010": 140 * (3/24)},
"range": {"2009": 14, "2010": 14},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"general service (LED)": {
"performance": {
"typical": {"2009": 15, "2010": 15},
"best": {"2009": 15, "2010": 15},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 15, "2010": 15},
"best": {"2009": 15, "2010": 15},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {
"2009": 150 * (3/24),
"2010": 150 * (3/24)},
"range": {"2009": 15, "2010": 15},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"reflector (LED)": {
"performance": {
"typical": {"2009": 16, "2010": 16},
"best": {"2009": 16, "2010": 16},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 16, "2010": 16},
"best": {"2009": 16, "2010": 16},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {
"2009": 160 * (3/24),
"2010": 160 * (3/24)},
"range": {"2009": 16, "2010": 16},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"external (LED)": {
"performance": {
"typical": {"2009": 17, "2010": 17},
"best": {"2009": 17, "2010": 17},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 17, "2010": 17},
"best": {"2009": 17, "2010": 17},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {
"2009": 170 * (3/24),
"2010": 170 * (3/24)},
"range": {"2009": 17, "2010": 17},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}},
"TVs": {
"TVs": {
"performance": {
"typical": {"2009": "NA", "2010": "NA"},
"best": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"installed cost": {
"typical": {"2009": "NA", "2010": "NA"},
"best": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"lifetime": {
"average": {"2009": "NA", "2010": "NA"},
"range": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"set top box": {
"performance": {
"typical": {"2009": "NA", "2010": "NA"},
"best": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"installed cost": {
"typical": {"2009": "NA", "2010": "NA"},
"best": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"lifetime": {
"average": {"2009": "NA", "2010": "NA"},
"range": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}
},
"computers": {
"desktop PC": {
"performance": {
"typical": {"2009": "NA", "2010": "NA"},
"best": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"installed cost": {
"typical": {"2009": "NA", "2010": "NA"},
"best": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"lifetime": {
"average": {"2009": "NA", "2010": "NA"},
"range": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"laptop PC": {
"performance": {
"typical": {"2009": "NA", "2010": "NA"},
"best": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"installed cost": {
"typical": {"2009": "NA", "2010": "NA"},
"best": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"lifetime": {
"average": {"2009": "NA", "2010": "NA"},
"range": {"2009": "NA", "2010": "NA"},
"units": "NA",
"source": "NA"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}
}},
"natural gas": {
"water heating": {
"performance": {
"typical": {"2009": 18, "2010": 18},
"best": {"2009": 18, "2010": 18},
"units": "EF",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 18, "2010": 18},
"best": {"2009": 18, "2010": 18},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 180, "2010": 180},
"range": {"2009": 18, "2010": 18},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}}},
"multi family home": {
"electricity": {
"heating": {
"demand": {
"windows conduction": {
"performance": {
"typical": {"2009": 19, "2010": 19},
"best": {"2009": 19, "2010": 19},
"units": "R Value",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 19, "2010": 19},
"best": {"2009": 19, "2010": 19},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 190, "2010": 190},
"range": {"2009": 19, "2010": 19},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"windows solar": {
"performance": {
"typical": {"2009": 20, "2010": 20},
"best": {"2009": 20, "2010": 20},
"units": "SHGC",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 20, "2010": 20},
"best": {"2009": 20, "2010": 20},
"units": "2014$/ft^2 floor",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 200, "2010": 200},
"range": {"2009": 20, "2010": 20},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}},
"supply": {
"resistance heat": {
"performance": {
"typical": {"2009": 21, "2010": 21},
"best": {"2009": 21, "2010": 21},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 21, "2010": 21},
"best": {"2009": 21, "2010": 21},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 210, "2010": 210},
"range": {"2009": 21, "2010": 21},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"ASHP": {
"performance": {
"typical": {"2009": 22, "2010": 22},
"best": {"2009": 22, "2010": 22},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 22, "2010": 22},
"best": {"2009": 22, "2010": 22},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 220, "2010": 220},
"range": {"2009": 22, "2010": 22},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"GSHP": {
"performance": {
"typical": {"2009": 23, "2010": 23},
"best": {"2009": 23, "2010": 23},
"units": "COP",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 23, "2010": 23},
"best": {"2009": 23, "2010": 23},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 230, "2010": 230},
"range": {"2009": 23, "2010": 23},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}}},
"lighting": {
"linear fluorescent (LED)": {
"performance": {
"typical": {"2009": 24, "2010": 24},
"best": {"2009": 24, "2010": 24},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 24, "2010": 24},
"best": {"2009": 24, "2010": 24},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {
"2009": 240 * (3/24),
"2010": 240 * (3/24)},
"range": {"2009": 24, "2010": 24},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"general service (LED)": {
"performance": {
"typical": {"2009": 25, "2010": 25},
"best": {"2009": 25, "2010": 25},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 25, "2010": 25},
"best": {"2009": 25, "2010": 25},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {
"2009": 250 * (3/24),
"2010": 250 * (3/24)},
"range": {"2009": 25, "2010": 25},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"reflector (LED)": {
"performance": {
"typical": {"2009": 25, "2010": 25},
"best": {"2009": 25, "2010": 25},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 25, "2010": 25},
"best": {"2009": 25, "2010": 25},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {
"2009": 250 * (3/24),
"2010": 250 * (3/24)},
"range": {"2009": 25, "2010": 25},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"external (LED)": {
"performance": {
"typical": {"2009": 25, "2010": 25},
"best": {"2009": 25, "2010": 25},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 25, "2010": 25},
"best": {"2009": 25, "2010": 25},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {
"2009": 250 * (3/24),
"2010": 250 * (3/24)},
"range": {"2009": 25, "2010": 25},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}}}}},
"AIA_CZ4": {
"multi family home": {
"electricity": {
"lighting": {
"linear fluorescent (LED)": {
"performance": {
"typical": {"2009": 24, "2010": 24},
"best": {"2009": 24, "2010": 24},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 24, "2010": 24},
"best": {"2009": 24, "2010": 24},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 240, "2010": 240},
"range": {"2009": 24, "2010": 24},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"general service (LED)": {
"performance": {
"typical": {"2009": 25, "2010": 25},
"best": {"2009": 25, "2010": 25},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 25, "2010": 25},
"best": {"2009": 25, "2010": 25},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 250, "2010": 250},
"range": {"2009": 25, "2010": 25},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"reflector (LED)": {
"performance": {
"typical": {"2009": 26, "2010": 26},
"best": {"2009": 26, "2010": 26},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 26, "2010": 26},
"best": {"2009": 26, "2010": 26},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 260, "2010": 260},
"range": {"2009": 26, "2010": 26},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}},
"external (LED)": {
"performance": {
"typical": {"2009": 27, "2010": 27},
"best": {"2009": 27, "2010": 27},
"units": "lm/W",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 27, "2010": 27},
"best": {"2009": 27, "2010": 27},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 270, "2010": 270},
"range": {"2009": 27, "2010": 27},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type":
"logistic regression",
"parameters": {
"b1": {"2009": "NA",
"2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}}}}}}
ok_measures_in = [{
"name": "sample measure 1",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": {
"AIA_CZ1": {"heating": 30,
"cooling": 25},
"AIA_CZ2": {"heating": 30,
"cooling": 15}},
"energy_efficiency_units": "COP",
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": "single family home",
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": ["heating", "cooling"],
"technology": ["resistance heat", "ASHP", "GSHP", "room AC"]},
{
"name": "sample measure 2",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": {"new": 25, "existing": 25},
"energy_efficiency_units": "EF",
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": "single family home",
"climate_zone": ["AIA_CZ1"],
"fuel_type": "natural gas",
"fuel_switch_to": None,
"end_use": "water heating",
"technology": None},
{
"name": "sample measure 3",
"markets": None,
"installed_cost": 500,
"cost_units": {
"refrigeration": "2010$/unit",
"other (grid electric)": "2014$/unit"},
"energy_efficiency": 0.1,
"energy_efficiency_units": "relative savings (constant)",
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": "single family home",
"climate_zone": "AIA_CZ1",
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": ["refrigeration", "other (grid electric)"],
"technology": [None, "freezers"]},
{
"name": "sample measure 4",
"markets": None,
"installed_cost": 10,
"cost_units": "2014$/ft^2 floor",
"energy_efficiency": {
"windows conduction": 20,
"windows solar": 1},
"energy_efficiency_units": {
"windows conduction": "R Value",
"windows solar": "SHGC"},
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": "existing",
"bldg_type": ["single family home",
"multi family home"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": "heating",
"technology": [
"windows conduction",
"windows solar"]},
{
"name": "sample measure 5",
"markets": None,
"installed_cost": 10,
"cost_units": "2014$/ft^2 floor",
"energy_efficiency": 0.1,
"energy_efficiency_units": "relative savings (constant)",
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "add-on",
"structure_type": "existing",
"bldg_type": ["single family home",
"multi family home"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": "lighting",
"technology": "linear fluorescent (LED)"},
{
"name": "sample measure 6",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": {
"primary": 25,
"secondary": {
"heating": 0.4,
"secondary heating": 0.4,
"cooling": -0.4}},
"energy_efficiency_units": {
"primary": "lm/W",
"secondary": "relative savings (constant)"},
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": ["single family home",
"multi family home"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": {
"primary": "lighting",
"secondary": [
"heating", "secondary heating",
"cooling"]},
"technology": [
"linear fluorescent (LED)",
"general service (LED)",
"external (LED)"]},
{
"name": "sample measure 7",
"markets": None,
"installed_cost": 10,
"cost_units": "2014$/ft^2 floor",
"energy_efficiency": {
"windows conduction": 20,
"windows solar": 1},
"energy_efficiency_units": {
"windows conduction": "R Value",
"windows solar": "SHGC"},
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": ["single family home",
"multi family home"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": "heating",
"technology": [
"windows conduction",
"windows solar"]},
{
"name": "sample measure 8",
"markets": None,
"installed_cost": 10,
"cost_units": "2014$/ft^2 floor",
"energy_efficiency": 1,
"energy_efficiency_units": "SHGC",
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": "single family home",
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": "heating",
"technology": "windows solar"},
{
"name": "sample measure 9",
"markets": None,
"installed_cost": 10,
"cost_units": "2014$/ft^2 floor",
"energy_efficiency": {
"windows conduction": 10, "windows solar": 1},
"energy_efficiency_units": {
"windows conduction": "R Value",
"windows solar": "SHGC"},
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": "single family home",
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": [
"heating", "secondary heating",
"cooling"],
"technology": [
"windows conduction", "windows solar"]},
{
"name": "sample measure 10",
"markets": None,
"installed_cost": 10,
"cost_units": "2014$/ft^2 floor",
"energy_efficiency": {
"windows conduction": 0.4,
"windows solar": 1},
"energy_efficiency_units": {
"windows conduction": "relative savings (constant)",
"windows solar": "SHGC"},
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": "single family home",
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": ["heating", "secondary heating",
"cooling"],
"technology": ["windows conduction",
"windows solar"]},
{
"name": "sample measure 11", # Add heat/cool end uses later
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/ft^2 floor",
"energy_efficiency": 25,
"energy_efficiency_units": "lm/W",
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": "assembly",
"climate_zone": "AIA_CZ1",
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": "lighting",
"market_entry_year": None,
"market_exit_year": None,
"technology": [
"T5 F28"]},
{
"name": "sample measure 12",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": 25,
"energy_efficiency_units": "EF",
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": "new",
"bldg_type": "single family home",
"climate_zone": "AIA_CZ1",
"fuel_type": "natural gas",
"fuel_switch_to": None,
"end_use": "water heating",
"market_entry_year": None,
"market_exit_year": None,
"technology": None},
{
"name": "sample measure 13",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": 25,
"energy_efficiency_units": "EF",
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": "existing",
"bldg_type": "single family home",
"climate_zone": "AIA_CZ1",
"fuel_type": "natural gas",
"fuel_switch_to": None,
"end_use": "water heating",
"technology": None},
{
"name": "sample measure 14",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": {
"primary": 25,
"secondary": {
"heating": 0.4,
"secondary heating": 0.4,
"cooling": -0.4}},
"energy_efficiency_units": {
"primary": "lm/W",
"secondary": "relative savings (constant)"},
"market_entry_year": 2010,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": ["single family home",
"multi family home"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": {
"primary": "lighting",
"secondary": ["heating", "secondary heating",
"cooling"]},
"technology": [
"linear fluorescent (LED)",
"general service (LED)",
"external (LED)"]},
{
"name": "sample measure 15",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": {
"primary": 25,
"secondary": {
"heating": 0.4,
"secondary heating": 0.4,
"cooling": -0.4}},
"energy_efficiency_units": {
"primary": "lm/W",
"secondary": "relative savings (constant)"},
"market_entry_year": None,
"market_exit_year": 2010,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": ["single family home",
"multi family home"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": {
"primary": "lighting",
"secondary": ["heating", "secondary heating",
"cooling"]},
"technology": [
"linear fluorescent (LED)",
"general service (LED)",
"external (LED)"]},
{
"name": "sample measure 16",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": {
"primary": 25,
"secondary": {
"heating": 0.4,
"secondary heating": 0.4,
"cooling": -0.4}},
"energy_efficiency_units": {
"primary": "lm/W",
"secondary": [
"relative savings (dynamic)", 2009]},
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": ["single family home",
"multi family home"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": {
"primary": "lighting",
"secondary": ["heating", "secondary heating",
"cooling"]},
"technology": [
"linear fluorescent (LED)",
"general service (LED)",
"external (LED)"]},
{
"name": "sample measure 17",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": {
"new": 25, "existing": 25},
"energy_efficiency_units": "EF",
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": "single family home",
"climate_zone": ["AIA_CZ1"],
"fuel_type": "natural gas",
"fuel_switch_to": "electricity",
"end_use": "water heating",
"technology": None},
{
"name": "sample measure 18",
"markets": None,
"installed_cost": 11,
"cost_units": "2014$/ft^2 floor",
"energy_efficiency": 0.44,
"energy_efficiency_units":
"relative savings (constant)",
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "add-on",
"structure_type": ["new", "existing"],
"bldg_type": "assembly",
"climate_zone": "AIA_CZ1",
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": "lighting",
"market_entry_year": None,
"market_exit_year": None,
"technology": [
"T5 F28"]},
{
"name": "sample measure 19",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": {
"new": 25, "existing": 25},
"energy_efficiency_units": "EF",
"market_entry_year": None,
"market_exit_year": None,
"market_scaling_fractions": {
"new": 0.25,
"existing": 0.5},
"market_scaling_fractions_source": {
"new": {
"title": 'Sample title 1',
"author": 'Sample author 1',
"organization": 'Sample org 1',
"year": 'Sample year 1',
"URL": ('http://www.eia.gov/consumption/'
'commercial/data/2012/'),
"fraction_derivation": "Divide X by Y"},
"existing": {
"title": 'Sample title 1',
"author": 'Sample author 1',
"organization": 'Sample org 1',
"year": 'Sample year 1',
"URL": ('http://www.eia.gov/consumption/'
'commercial/data/2012/'),
"fraction_derivation": "Divide X by Y"}},
"product_lifetime": 1,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": "single family home",
"climate_zone": "AIA_CZ1",
"fuel_type": "natural gas",
"fuel_switch_to": None,
"end_use": "water heating",
"technology": None},
{
"name": "sample measure 20",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": {
"primary": 25,
"secondary": {
"heating": 0.4,
"secondary heating": 0.4,
"cooling": -0.4}},
"energy_efficiency_units": {
"primary": "lm/W",
"secondary": "relative savings (constant)"},
"market_entry_year": None,
"market_exit_year": None,
"market_scaling_fractions": {
"new": 0.25,
"existing": 0.5},
"market_scaling_fractions_source": {
"new": {
"title": 'Sample title 2',
"author": 'Sample author 2',
"organization": 'Sample org 2',
"year": 'Sample year 2',
"URL": ('http://www.eia.gov/consumption/'
'commercial/data/2012/'),
"fraction_derivation": "Divide X by Y"},
"existing": {
"title": 'Sample title 2',
"author": 'Sample author 2',
"organization": 'Sample org 2',
"year": 'Sample year 2',
"URL": ('http://www.eia.gov/consumption/'
'residential/data/2009/'),
"fraction_derivation": "Divide X by Y"}},
"product_lifetime": 1,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": ["single family home",
"multi family home"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": {
"primary": "lighting",
"secondary": ["heating", "secondary heating",
"cooling"]},
"technology": [
"linear fluorescent (LED)",
"general service (LED)",
"external (LED)"]},
{
"name": "sample measure 21",
"markets": None,
"installed_cost": 25,
"cost_units": "$/ft^2 floor",
"energy_efficiency": 0.25,
"energy_efficiency_units": "relative savings (constant)",
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "add-on",
"structure_type": ["new", "existing"],
"bldg_type": "assembly",
"climate_zone": "AIA_CZ1",
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": ["PCs", "MELs"],
"market_entry_year": None,
"market_exit_year": None,
"technology": [None, "distribution transformers"]},
{
"name": "sample measure 22",
"markets": None,
"installed_cost": 25,
"cost_units": "$/unit",
"energy_efficiency": 0.5,
"energy_efficiency_units": "relative savings (constant)",
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "add-on",
"structure_type": ["new", "existing"],
"bldg_type": "single family home",
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": ["TVs", "computers", "other (grid electric)"],
"market_entry_year": None,
"market_exit_year": None,
"technology": ["TVs", "desktop PC", "laptop PC", "other MELs"]},
{
"name": "sample measure 23",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/ft^2 floor",
"energy_efficiency": 25,
"energy_efficiency_units": "lm/W",
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": "assembly",
"climate_zone": "AIA_CZ1",
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": "lighting",
"market_entry_year": None,
"market_exit_year": None,
"technology": "T5 F28"}]
cls.ok_tpmeas_fullchk_in = [
ecm_prep.Measure(
handyvars, **x) for x in ok_measures_in[0:5]]
cls.ok_tpmeas_partchk_in = [
ecm_prep.Measure(
handyvars, **x) for x in ok_measures_in[5:22]]
cls.ok_mapmeas_partchk_in = [
ecm_prep.Measure(
handyvars, **x) for x in ok_measures_in[22:]]
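# Sample measures that specify installed cost, efficiency, and/or lifetime as
# probability distributions (e.g., ["normal", mean, std]) rather than point values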
ok_distmeas_in = [{
"name": "distrib measure 1",
"markets": None,
"installed_cost": ["normal", 25, 5],
"cost_units": "2014$/unit",
"energy_efficiency": {
"AIA_CZ1": {
"heating": ["normal", 30, 1],
"cooling": ["normal", 25, 2]},
"AIA_CZ2": {
"heating": 30,
"cooling": ["normal", 15, 4]}},
"energy_efficiency_units": "COP",
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": ["single family home"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": ["heating", "cooling"],
"technology": ["resistance heat", "ASHP", "GSHP", "room AC"]},
{
"name": "distrib measure 2",
"markets": None,
"installed_cost": ["lognormal", 3.22, 0.06],
"cost_units": "2014$/unit",
"energy_efficiency": ["normal", 25, 5],
"energy_efficiency_units": "EF",
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": ["normal", 1, 1],
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": ["single family home"],
"climate_zone": ["AIA_CZ1"],
"fuel_type": ["natural gas"],
"fuel_switch_to": None,
"end_use": "water heating",
"technology": None},
{
"name": "distrib measure 3",
"markets": None,
"installed_cost": ["normal", 10, 5],
"cost_units": "2014$/ft^2 floor",
"energy_efficiency": {
"windows conduction": [
"lognormal", 2.29, 0.14],
"windows solar": [
"normal", 1, 0.1]},
"energy_efficiency_units": {
"windows conduction": "R Value",
"windows solar": "SHGC"},
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": ["single family home"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": [
"heating", "secondary heating", "cooling"],
"technology": [
"windows conduction", "windows solar"]}]
cls.ok_distmeas_in = [
ecm_prep.Measure(
handyvars, **x) for x in ok_distmeas_in]
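# Sample measures expected to apply to only part of their listed baseline
# market (some listed technologies do not match the given end uses)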
ok_partialmeas_in = [{
"name": "partial measure 1",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": 25,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"energy_efficiency_units": "COP",
"market_entry_year": None,
"market_exit_year": None,
"bldg_type": ["single family home"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": "cooling",
"technology": ["resistance heat", "ASHP"]},
{
"name": "partial measure 2",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": 25,
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"energy_efficiency_units": "COP",
"bldg_type": ["single family home"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": ["heating", "cooling"],
"technology": [
"linear fluorescent (LED)",
"general service (LED)",
"external (LED)", "GSHP", "ASHP"]}]
cls.ok_partialmeas_in = [
ecm_prep.Measure(
handyvars, **x) for x in ok_partialmeas_in]
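# Invalid measure definitions (e.g., unrecognized climate zone, building type,
# structure type, or fuel type); the last definition appears valid in form but
# should find no matching baseline market data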
failmeas_in = [{
"name": "fail measure 1",
"markets": None,
"installed_cost": 10,
"cost_units": "2014$/unit",
"energy_efficiency": 10,
"energy_efficiency_units": "COP",
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": "single family home",
"climate_zone": ["AIA_CZ19", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": "cooling",
"technology": "resistance heat"},
{
"name": "fail measure 2",
"markets": None,
"installed_cost": 10,
"cost_units": "2014$/unit",
"energy_efficiency": {
"AIA_CZ1": {
"heating": 30, "cooling": 25},
"AIA_CZ2": {
"heating": 30, "cooling": 15}},
"energy_efficiency_units": "COP",
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": "single family homer",
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": ["heating", "cooling"],
"technology": [
"linear fluorescent (LED)",
"general service (LED)",
"external (LED)"]},
{
"name": "fail measure 3",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": {
"primary": 25, "secondary": None},
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["newer", "existing"],
"energy_efficiency_units": {
"primary": "lm/W", "secondary": None},
"market_entry_year": None,
"market_exit_year": None,
"bldg_type": "single family home",
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "natural gas",
"fuel_switch_to": None,
"end_use": {
"primary": "lighting",
"secondary": [
"heating", "secondary heating",
"cooling"]},
"technology": [
"linear fluorescent (LED)",
"general service (LED)",
"external (LED)"]},
{
"name": "fail measure 4",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": {
"primary": 25, "secondary": None},
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"energy_efficiency_units": {
"primary": "lm/W", "secondary": None},
"market_entry_year": None,
"market_exit_year": None,
"bldg_type": "single family home",
"climate_zone": "AIA_CZ1",
"fuel_type": "solar",
"fuel_switch_to": None,
"end_use": {
"primary": "lighting",
"secondary": [
"heating", "secondary heating",
"cooling"]},
"technology": [
"linear fluorescent (LED)",
"general service (LED)",
"external (LED)"]},
{
"name": "fail measure 5",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/ft^2 floor",
"energy_efficiency": 0.25,
"energy_efficiency_units": "relative savings (constant)",
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": "assembly",
"climate_zone": "AIA_CZ1",
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": ["PCs", "MELs"],
"market_entry_year": None,
"market_exit_year": None,
"technology": [None, "distribution transformers"]}]
cls.failmeas_inputs_in = [
ecm_prep.Measure(
handyvars, **x) for x in failmeas_in[0:-1]]
cls.failmeas_missing_in = ecm_prep.Measure(
handyvars, **failmeas_in[-1])
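# Measure definitions with incomplete or questionable market scaling fraction
# source information, expected to yield warnings rather than errors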
warnmeas_in = [{
"name": "warn measure 1",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": {
"primary": 25,
"secondary": {
"heating": 0.4,
"secondary heating": 0.4,
"cooling": -0.4}},
"energy_efficiency_units": {
"primary": "lm/W",
"secondary": "relative savings (constant)"},
"market_entry_year": None,
"market_exit_year": None,
"market_scaling_fractions": {
"new": 0.25,
"existing": 0.5},
"market_scaling_fractions_source": {
"new": {
"title": None,
"author": None,
"organization": None,
"year": None,
"URL": None,
"fraction_derivation": None},
"existing": {
"title": None,
"author": None,
"organization": None,
"year": None,
"URL": None,
"fraction_derivation": None}},
"product_lifetime": 1,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": [
"single family home",
"multi family home"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": {
"primary": "lighting",
"secondary": [
"heating", "secondary heating",
"cooling"]},
"technology": [
"linear fluorescent (LED)",
"general service (LED)",
"external (LED)"]},
{
"name": "warn measure 2",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": {
"primary": 25,
"secondary": {
"heating": 0.4,
"secondary heating": 0.4,
"cooling": -0.4}},
"energy_efficiency_units": {
"primary": "lm/W",
"secondary": "relative savings (constant)"},
"market_entry_year": None,
"market_exit_year": None,
"market_scaling_fractions": {
"new": 0.25,
"existing": 0.5},
"market_scaling_fractions_source": {
"new": {
"title": "Sample title",
"author": "Sample author",
"organization": "Sample organization",
"year": "http://www.sciencedirectcom",
"URL": "some BS",
"fraction_derivation": None},
"existing": {
"title": "Sample title",
"author": "Sample author",
"organization": "Sample organization",
"year": "Sample year",
"URL": "http://www.sciencedirect.com",
"fraction_derivation": None}},
"product_lifetime": 1,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": [
"single family home",
"multi family home"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": {
"primary": "lighting",
"secondary": [
"heating", "secondary heating",
"cooling"]},
"technology": [
"linear fluorescent (LED)",
"general service (LED)",
"external (LED)"]},
{
"name": "warn measure 3",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": {
"primary": 25,
"secondary": {
"heating": 0.4,
"secondary heating": 0.4,
"cooling": -0.4}},
"energy_efficiency_units": {
"primary": "lm/W",
"secondary": "relative savings (constant)"},
"market_entry_year": None,
"market_exit_year": None,
"market_scaling_fractions": {
"new": 0.25,
"existing": 0.5},
"market_scaling_fractions_source": {
"new": {
"title": "Sample title",
"author": None,
"organization": "Sample organization",
"year": "Sample year",
"URL": "https://bpd.lbl.gov/",
"fraction_derivation": "Divide X by Y"},
"existing": {
"title": "Sample title",
"author": None,
"organization": "Sample organization",
"year": "Sample year",
"URL": "https://cms.doe.gov/data/green-button",
"fraction_derivation": "Divide X by Y"}},
"product_lifetime": 1,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": [
"single family home",
"multi family home"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": {
"primary": "lighting",
"secondary": [
"heating", "secondary heating",
"cooling"]},
"technology": [
"linear fluorescent (LED)",
"general service (LED)",
"external (LED)"]}]
cls.warnmeas_in = [
ecm_prep.Measure(
handyvars, **x) for x in warnmeas_in]
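# Expected master microsegment outputs for the technical potential
# measures that are subject to full output checks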
cls.ok_tpmeas_fullchk_msegout = [{
"stock": {
"total": {
"all": {"2009": 72, "2010": 72},
"measure": {"2009": 72, "2010": 72}},
"competed": {
"all": {"2009": 72, "2010": 72},
"measure": {"2009": 72, "2010": 72}}},
"energy": {
"total": {
"baseline": {"2009": 229.68, "2010": 230.4},
"efficient": {"2009": 117.0943, "2010": 117.4613}},
"competed": {
"baseline": {"2009": 229.68, "2010": 230.4},
"efficient": {"2009": 117.0943, "2010": 117.4613}}},
"carbon": {
"total": {
"baseline": {"2009": 13056.63, "2010": 12941.16},
"efficient": {"2009": 6656.461, "2010": 6597.595}},
"competed": {
"baseline": {"2009": 13056.63, "2010": 12941.16},
"efficient": {"2009": 6656.461, "2010": 6597.595}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 710, "2010": 710},
"efficient": {"2009": 1800, "2010": 1800}},
"competed": {
"baseline": {"2009": 710, "2010": 710},
"efficient": {"2009": 1800, "2010": 1800}}},
"energy": {
"total": {
"baseline": {"2009": 2328.955, "2010": 2227.968},
"efficient": {"2009": 1187.336, "2010": 1135.851}},
"competed": {
"baseline": {"2009": 2328.955, "2010": 2227.968},
"efficient": {"2009": 1187.336, "2010": 1135.851}}},
"carbon": {
"total": {
"baseline": {"2009": 430868.63, "2010": 427058.3},
"efficient": {"2009": 219663.21, "2010": 217720.65}},
"competed": {
"baseline": {"2009": 430868.63, "2010": 427058.3},
"efficient": {"2009": 219663.21, "2010": 217720.65}}}},
"lifetime": {"baseline": {"2009": 98.61, "2010": 98.61},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 15, "2010": 15},
"measure": {"2009": 15, "2010": 15}},
"competed": {
"all": {"2009": 15, "2010": 15},
"measure": {"2009": 15, "2010": 15}}},
"energy": {
"total": {
"baseline": {"2009": 15.15, "2010": 15.15},
"efficient": {"2009": 10.908, "2010": 10.908}},
"competed": {
"baseline": {"2009": 15.15, "2010": 15.15},
"efficient": {"2009": 10.908, "2010": 10.908}}},
"carbon": {
"total": {
"baseline": {"2009": 856.2139, "2010": 832.0021},
"efficient": {"2009": 616.474, "2010": 599.0415}},
"competed": {
"baseline": {"2009": 856.2139, "2010": 832.0021},
"efficient": {"2009": 616.474, "2010": 599.0415}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 270, "2010": 270},
"efficient": {"2009": 375, "2010": 375}},
"competed": {
"baseline": {"2009": 270, "2010": 270},
"efficient": {"2009": 375, "2010": 375}}},
"energy": {
"total": {
"baseline": {"2009": 170.892, "2010": 163.317},
"efficient": {"2009": 123.0422, "2010": 117.5882}},
"competed": {
"baseline": {"2009": 170.892, "2010": 163.317},
"efficient": {"2009": 123.0422, "2010": 117.5882}}},
"carbon": {
"total": {
"baseline": {"2009": 28255.06, "2010": 27456.07},
"efficient": {"2009": 20343.64, "2010": 19768.37}},
"competed": {
"baseline": {"2009": 28255.06, "2010": 27456.07},
"efficient": {"2009": 20343.64, "2010": 19768.37}}}},
"lifetime": {"baseline": {"2009": 180, "2010": 180},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 333, "2010": 333},
"measure": {"2009": 333, "2010": 333}},
"competed": {
"all": {"2009": 333, "2010": 333},
"measure": {"2009": 333, "2010": 333}}},
"energy": {
"total": {
"baseline": {"2009": 1062.27, "2010": 1065.6},
"efficient": {"2009": 956.043, "2010": 959.04}},
"competed": {
"baseline": {"2009": 1062.27, "2010": 1065.6},
"efficient": {"2009": 956.043, "2010": 959.04}}},
"carbon": {
"total": {
"baseline": {"2009": 60386.89, "2010": 59852.87},
"efficient": {"2009": 54348.2, "2010": 53867.58}},
"competed": {
"baseline": {"2009": 60386.89, "2010": 59852.87},
"efficient": {"2009": 54348.2, "2010": 53867.58}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 55500, "2010": 55500},
"efficient": {"2009": 166500, "2010": 166500}},
"competed": {
"baseline": {"2009": 55500, "2010": 55500},
"efficient": {"2009": 166500, "2010": 166500}}},
"energy": {
"total": {
"baseline": {"2009": 10771.42, "2010": 10304.35},
"efficient": {"2009": 9694.276, "2010": 9273.917}},
"competed": {
"baseline": {"2009": 10771.42, "2010": 10304.35},
"efficient": {"2009": 9694.276, "2010": 9273.917}}},
"carbon": {
"total": {
"baseline": {"2009": 1992767.41, "2010": 1975144.64},
"efficient": {"2009": 1793490.67, "2010": 1777630.18}},
"competed": {
"baseline": {"2009": 1992767.41, "2010": 1975144.64},
"efficient": {
"2009": 1793490.67, "2010": 1777630.18}}}},
"lifetime": {"baseline": {"2009": 15.67, "2010": 15.67},
"measure": 1}}]
# Correct consumer choice dict outputs ('b1'/'b2' choice parameters,
# scaled where applicable by typical single/multi family home square
# footage and, for lighting, by typical units per household)
compete_choice_val = [{
"b1": {"2009": -0.01, "2010": -0.01},
"b2": {"2009": -0.12, "2010": -0.12}},
{
"b1": {"2009": -0.01 * handyvars.res_typ_sf_household[
"single family home"],
"2010": -0.01 * handyvars.res_typ_sf_household[
"single family home"]},
"b2": {"2009": -0.12 * handyvars.res_typ_sf_household[
"single family home"],
"2010": -0.12 * handyvars.res_typ_sf_household[
"single family home"]}},
{
"b1": {"2009": -0.01 * handyvars.res_typ_sf_household[
"multi family home"],
"2010": -0.01 * handyvars.res_typ_sf_household[
"multi family home"]},
"b2": {"2009": -0.12 * handyvars.res_typ_sf_household[
"multi family home"],
"2010": -0.12 * handyvars.res_typ_sf_household[
"multi family home"]}},
{
"b1": {
"2009": -0.01 * handyvars.res_typ_sf_household[
"single family home"] /
handyvars.res_typ_units_household[
"lighting"]["single family home"],
"2010": -0.01 * handyvars.res_typ_sf_household[
"single family home"] /
handyvars.res_typ_units_household[
"lighting"]["single family home"]},
"b2": {
"2009": -0.12 * handyvars.res_typ_sf_household[
"single family home"] /
handyvars.res_typ_units_household[
"lighting"]["single family home"],
"2010": -0.12 * handyvars.res_typ_sf_household[
"single family home"] /
handyvars.res_typ_units_household[
"lighting"]["single family home"]}},
{
"b1": {
"2009": -0.01 * handyvars.res_typ_sf_household[
"multi family home"] /
handyvars.res_typ_units_household[
"lighting"]["multi family home"],
"2010": -0.01 * handyvars.res_typ_sf_household[
"multi family home"] /
handyvars.res_typ_units_household[
"lighting"]["multi family home"]},
"b2": {
"2009": -0.12 * handyvars.res_typ_sf_household[
"multi family home"] /
handyvars.res_typ_units_household[
"lighting"]["multi family home"],
"2010": -0.12 * handyvars.res_typ_sf_household[
"multi family home"] /
handyvars.res_typ_units_household[
"lighting"]["multi family home"]}}]
cls.ok_tpmeas_fullchk_competechoiceout = [{
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'supply', "
"'resistance heat', 'new')"): compete_choice_val[0],
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'supply', "
"'ASHP', 'new')"): compete_choice_val[0],
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'supply', "
"'GSHP', 'new')"): compete_choice_val[0],
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'ASHP', 'new')"): compete_choice_val[0],
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'GSHP', 'new')"): compete_choice_val[0],
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'room AC', 'new')"): compete_choice_val[0],
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'supply', "
"'resistance heat', 'new')"): compete_choice_val[0],
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'supply', "
"'ASHP', 'new')"): compete_choice_val[0],
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'supply', "
"'GSHP', 'new')"): compete_choice_val[0],
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'ASHP', 'new')"): compete_choice_val[0],
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'GSHP', 'new')"): compete_choice_val[0],
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'room AC', 'new')"): compete_choice_val[0],
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'supply', "
"'resistance heat', 'existing')"): compete_choice_val[0],
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'supply', "
"'ASHP', 'existing')"): compete_choice_val[0],
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'supply', "
"'GSHP', 'existing')"): compete_choice_val[0],
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'ASHP', 'existing')"): compete_choice_val[0],
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'GSHP', 'existing')"): compete_choice_val[0],
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'room AC', 'existing')"): compete_choice_val[0],
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'supply', "
"'resistance heat', 'existing')"): compete_choice_val[0],
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'supply', "
"'ASHP', 'existing')"): compete_choice_val[0],
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'supply', "
"'GSHP', 'existing')"): compete_choice_val[0],
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'ASHP', 'existing')"): compete_choice_val[0],
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'GSHP', 'existing')"): compete_choice_val[0],
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'room AC', 'existing')"): compete_choice_val[0]},
{
("('primary', 'AIA_CZ1', 'single family home', "
"'natural gas', 'water heating', "
"None, 'new')"): compete_choice_val[0],
("('primary', 'AIA_CZ1', 'single family home', "
"'natural gas', 'water heating', "
"None, 'existing')"): compete_choice_val[0]},
{
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'other (grid electric)', "
"'freezers', 'new')"): compete_choice_val[0],
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'other (grid electric)', "
"'freezers', 'existing')"): compete_choice_val[0],
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'refrigeration', None, "
"'existing')"): compete_choice_val[0],
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'refrigeration', None, "
"'new')"): compete_choice_val[0]},
{
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'demand', 'windows', "
"'existing')"): compete_choice_val[1],
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'demand', 'windows', "
"'existing')"): compete_choice_val[1],
("('primary', 'AIA_CZ1', 'multi family home', "
"'electricity', 'heating', 'demand', 'windows', "
"'existing')"): compete_choice_val[2],
("('primary', 'AIA_CZ2', 'multi family home', "
"'electricity', 'heating', 'demand', 'windows', "
"'existing')"): compete_choice_val[2]},
{
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'lighting', 'linear fluorescent (LED)', "
"'existing')"): compete_choice_val[3],
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'lighting', 'linear fluorescent (LED)', "
"'existing')"): compete_choice_val[3],
("('primary', 'AIA_CZ1', 'multi family home', "
"'electricity', 'lighting', 'linear fluorescent (LED)', "
"'existing')"): compete_choice_val[4],
("('primary', 'AIA_CZ2', 'multi family home', "
"'electricity', 'lighting', 'linear fluorescent (LED)', "
"'existing')"): compete_choice_val[4]}]
cls.ok_tpmeas_fullchk_msegadjout = [{
"sub-market": {
"original energy (total)": {},
"adjusted energy (sub-market)": {}},
"stock-and-flow": {
"original energy (total)": {},
"adjusted energy (previously captured)": {},
"adjusted energy (competed)": {},
"adjusted energy (competed and captured)": {}},
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}},
{
"sub-market": {
"original energy (total)": {},
"adjusted energy (sub-market)": {}},
"stock-and-flow": {
"original energy (total)": {},
"adjusted energy (previously captured)": {},
"adjusted energy (competed)": {},
"adjusted energy (competed and captured)": {}},
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}},
{
"sub-market": {
"original energy (total)": {},
"adjusted energy (sub-market)": {}},
"stock-and-flow": {
"original energy (total)": {},
"adjusted energy (previously captured)": {},
"adjusted energy (competed)": {},
"adjusted energy (competed and captured)": {}},
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}]
cls.ok_tpmeas_fullchk_supplydemandout = [{
"savings": {
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'supply', "
"'resistance heat', 'new')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'supply', "
"'ASHP', 'new')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'supply', "
"'GSHP', 'new')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'ASHP', 'new')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'GSHP', 'new')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'room AC', 'new')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'supply', "
"'resistance heat', 'new')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'supply', "
"'ASHP', 'new')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'supply', "
"'GSHP', 'new')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'ASHP', 'new')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'GSHP', 'new')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'room AC', 'new')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'supply', "
"'resistance heat', 'existing')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'supply', "
"'ASHP', 'existing')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'supply', "
"'GSHP', 'existing')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'ASHP', 'existing')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'GSHP', 'existing')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'room AC', 'existing')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'supply', "
"'resistance heat', 'existing')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'supply', "
"'ASHP', 'existing')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'supply', "
"'GSHP', 'existing')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'ASHP', 'existing')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'GSHP', 'existing')"): {"2009": 0, "2010": 0},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'room AC', 'existing')"): {"2009": 0, "2010": 0}},
"total": {
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'supply', "
"'resistance heat', 'new')"): {
"2009": 28.71, "2010": 28.80},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'supply', "
"'ASHP', 'new')"): {"2009": 28.71, "2010": 28.80},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'supply', "
"'GSHP', 'new')"): {"2009": 28.71, "2010": 28.80},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'ASHP', 'new')"): {"2009": 108.46, "2010": 108.8},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'GSHP', 'new')"): {"2009": 108.46, "2010": 108.8},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'room AC', 'new')"): {"2009": 108.46, "2010": 108.8},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'supply', "
"'resistance heat', 'new')"): {
"2009": 28.71, "2010": 28.80},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'supply', "
"'ASHP', 'new')"): {"2009": 28.71, "2010": 28.80},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'supply', "
"'GSHP', 'new')"): {"2009": 28.71, "2010": 28.80},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'ASHP', 'new')"): {"2009": 108.46, "2010": 108.8},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'GSHP', 'new')"): {"2009": 108.46, "2010": 108.8},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'room AC', 'new')"): {"2009": 108.46, "2010": 108.8},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'supply', "
"'resistance heat', 'existing')"): {
"2009": 28.71, "2010": 28.80},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'supply', "
"'ASHP', 'existing')"): {"2009": 28.71, "2010": 28.80},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'heating', 'supply', "
"'GSHP', 'existing')"): {"2009": 28.71, "2010": 28.80},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'ASHP', 'existing')"): {"2009": 108.46, "2010": 108.8},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'GSHP', 'existing')"): {"2009": 108.46, "2010": 108.8},
("('primary', 'AIA_CZ1', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'room AC', 'existing')"): {"2009": 108.46, "2010": 108.8},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'supply', "
"'resistance heat', 'existing')"): {
"2009": 28.71, "2010": 28.80},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'supply', "
"'ASHP', 'existing')"): {"2009": 28.71, "2010": 28.80},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'heating', 'supply', "
"'GSHP', 'existing')"): {"2009": 28.71, "2010": 28.80},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'ASHP', 'existing')"): {"2009": 108.46, "2010": 108.8},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'GSHP', 'existing')"): {"2009": 108.46, "2010": 108.8},
("('primary', 'AIA_CZ2', 'single family home', "
"'electricity', 'cooling', 'supply', "
"'room AC', 'existing')"): {"2009": 108.46, "2010": 108.8}}},
{"savings": {}, "total": {}},
{"savings": {}, "total": {}}]
cls.ok_tpmeas_fullchk_break_out = [{
'AIA CZ1': {
'Residential (New)': {
'Cooling (Equip.)': {"2009": 0.0375, "2010": 0.05625},
'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {"2009": 0.0125, "2010": 0.01875},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {"2009": 0.3375, "2010": 0.31875},
'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {"2009": 0.1125, "2010": 0.10625},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ2': {
'Residential (New)': {
'Cooling (Equip.)': {"2009": 0.0375, "2010": 0.05625},
'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {"2009": 0.0125, "2010": 0.01875},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {"2009": 0.3375, "2010": 0.31875},
'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {"2009": 0.1125, "2010": 0.10625},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ3': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ4': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ5': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}}}},
{
'AIA CZ1': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {"2009": 0.10, "2010": 0.15},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {"2009": 0.90, "2010": 0.85},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ2': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ3': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ4': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ5': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}}}},
{
'AIA CZ1': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {"2009": 0.10, "2010": 0.15},
'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {"2009": 0.90, "2010": 0.85},
'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ2': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ3': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ4': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ5': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {}, 'Water Heating': {},
'Computers and Electronics': {}, 'Heating (Equip.)': {},
'Envelope': {}}}}]
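# Expected master microsegment outputs for the technical potential
# measures that are subject to partial output checks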
cls.ok_tpmeas_partchk_msegout = [{
"stock": {
"total": {
"all": {"2009": 148, "2010": 148},
"measure": {"2009": 148, "2010": 148}},
"competed": {
"all": {"2009": 148, "2010": 148},
"measure": {"2009": 148, "2010": 148}}},
"energy": {
"total": {
"baseline": {"2009": 766.677, "2010": 768.9562},
"efficient": {"2009": 647.8339, "2010": 649.7508}},
"competed": {
"baseline": {"2009": 766.677, "2010": 768.9562},
"efficient": {"2009": 647.8339, "2010": 649.7508}}},
"carbon": {
"total": {
"baseline": {"2009": 43570.19, "2010": 43141.37},
"efficient": {"2009": 36815.4, "2010": 36449.93}},
"competed": {
"baseline": {"2009": 43570.19, "2010": 43141.37},
"efficient": {"2009": 36815.4, "2010": 36449.93}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 2972, "2010": 2972},
"efficient": {"2009": 3700, "2010": 3700}},
"competed": {
"baseline": {"2009": 2972, "2010": 2972},
"efficient": {"2009": 3700, "2010": 3700}}},
"energy": {
"total": {
"baseline": {"2009": 7819.26, "2010": 7479.78},
"efficient": {"2009": 6610.44, "2010": 6323.41}},
"competed": {
"baseline": {"2009": 7819.26, "2010": 7479.78},
"efficient": {"2009": 6610.44, "2010": 6323.41}}},
"carbon": {
"total": {
"baseline": {
"2009": 1437816.14, "2010": 1423665.24},
"efficient": {
"2009": 1214908.11, "2010": 1202847.67}},
"competed": {
"baseline": {
"2009": 1437816.14, "2010": 1423665.24},
"efficient": {
"2009": 1214908.11, "2010": 1202847.67}}}},
"lifetime": {"baseline": {"2009": 200.8108, "2010": 200.8108},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 1600000000, "2010": 2000000000},
"measure": {"2009": 1600000000, "2010": 2000000000}},
"competed": {
"all": {"2009": 1600000000, "2010": 2000000000},
"measure": {"2009": 1600000000, "2010": 2000000000}}},
"energy": {
"total": {
"baseline": {"2009": 12.76, "2010": 12.8},
"efficient": {"2009": 3.509, "2010": 3.52}},
"competed": {
"baseline": {"2009": 12.76, "2010": 12.8},
"efficient": {"2009": 3.509, "2010": 3.52}}},
"carbon": {
"total": {
"baseline": {"2009": 725.3681, "2010": 718.9534},
"efficient": {"2009": 199.4762, "2010": 197.7122}},
"competed": {
"baseline": {"2009": 725.3681, "2010": 718.9534},
"efficient": {"2009": 199.4762, "2010": 197.7122}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 20400000000, "2010": 24600000000},
"efficient": {
"2009": 16000000000, "2010": 20000000000}},
"competed": {
"baseline": {
"2009": 20400000000, "2010": 24600000000},
"efficient": {
"2009": 16000000000, "2010": 20000000000}}},
"energy": {
"total": {
"baseline": {"2009": 129.3864, "2010": 123.776},
"efficient": {"2009": 35.58126, "2010": 34.0384}},
"competed": {
"baseline": {"2009": 129.3864, "2010": 123.776},
"efficient": {"2009": 35.58126, "2010": 34.0384}}},
"carbon": {
"total": {
"baseline": {"2009": 23937.15, "2010": 23725.46},
"efficient": {"2009": 6582.715, "2010": 6524.502}},
"competed": {
"baseline": {"2009": 23937.15, "2010": 23725.46},
"efficient": {"2009": 6582.715, "2010": 6524.502}}}},
"lifetime": {"baseline": {"2009": 127.5, "2010": 123},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 600000000, "2010": 800000000},
"measure": {"2009": 600000000, "2010": 800000000}},
"competed": {
"all": {"2009": 600000000, "2010": 800000000},
"measure": {"2009": 600000000, "2010": 800000000}}},
"energy": {
"total": {
"baseline": {"2009": 6.38, "2010": 6.4},
"efficient": {"2009": 3.19, "2010": 3.2}},
"competed": {
"baseline": {"2009": 6.38, "2010": 6.4},
"efficient": {"2009": 3.19, "2010": 3.2}}},
"carbon": {
"total": {
"baseline": {"2009": 362.684, "2010": 359.4767},
"efficient": {"2009": 181.342, "2010": 179.7383}},
"competed": {
"baseline": {"2009": 362.684, "2010": 359.4767},
"efficient": {"2009": 181.342, "2010": 179.7383}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 900000000, "2010": 1200000000},
"efficient": {
"2009": 6000000000, "2010": 8000000000}},
"competed": {
"baseline": {
"2009": 900000000, "2010": 1200000000},
"efficient": {
"2009": 6000000000, "2010": 8000000000}}},
"energy": {
"total": {
"baseline": {"2009": 64.6932, "2010": 61.888},
"efficient": {"2009": 32.3466, "2010": 30.944}},
"competed": {
"baseline": {"2009": 64.6932, "2010": 61.888},
"efficient": {"2009": 32.3466, "2010": 30.944}}},
"carbon": {
"total": {
"baseline": {"2009": 11968.57, "2010": 11862.73},
"efficient": {"2009": 5984.287, "2010": 5931.365}},
"competed": {
"baseline": {"2009": 11968.57, "2010": 11862.73},
"efficient": {"2009": 5984.287, "2010": 5931.365}}}},
"lifetime": {"baseline": {"2009": 15, "2010": 15},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 600000000, "2010": 800000000},
"measure": {"2009": 600000000, "2010": 800000000}},
"competed": {
"all": {"2009": 600000000, "2010": 800000000},
"measure": {"2009": 600000000, "2010": 800000000}}},
"energy": {
"total": {
"baseline": {"2009": 146.74, "2010": 147.2},
"efficient": {"2009": 55.29333, "2010": 55.46667}},
"competed": {
"baseline": {"2009": 146.74, "2010": 147.2},
"efficient": {"2009": 55.29333, "2010": 55.46667}}},
"carbon": {
"total": {
"baseline": {"2009": 8341.733, "2010": 8267.964},
"efficient": {"2009": 3143.262, "2010": 3115.465}},
"competed": {
"baseline": {"2009": 8341.733, "2010": 8267.964},
"efficient": {"2009": 3143.262, "2010": 3115.465}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 3100000000, "2010": 4133333333.33},
"efficient": {
"2009": 6000000000, "2010": 8000000000}},
"competed": {
"baseline": {
"2009": 3100000000, "2010": 4133333333.33},
"efficient": {
"2009": 6000000000, "2010": 8000000000}}},
"energy": {
"total": {
"baseline": {"2009": 1487.944, "2010": 1423.424},
"efficient": {"2009": 560.6744, "2010": 536.3627}},
"competed": {
"baseline": {"2009": 1487.944, "2010": 1423.424},
"efficient": {"2009": 560.6744, "2010": 536.3627}}},
"carbon": {
"total": {
"baseline": {"2009": 275277.18, "2010": 272842.8},
"efficient": {"2009": 103727.63, "2010": 102810.33}},
"competed": {
"baseline": {"2009": 275277.18, "2010": 272842.8},
"efficient": {"2009": 103727.63, "2010": 102810.33}}}},
"lifetime": {"baseline": {"2009": 51.67, "2010": 51.67},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 600000000, "2010": 800000000},
"measure": {"2009": 600000000, "2010": 800000000}},
"competed": {
"all": {"2009": 600000000, "2010": 800000000},
"measure": {"2009": 600000000, "2010": 800000000}}},
"energy": {
"total": {
"baseline": {"2009": 146.74, "2010": 147.2},
"efficient": {"2009": 52.10333, "2010": 52.26667}},
"competed": {
"baseline": {"2009": 146.74, "2010": 147.2},
"efficient": {"2009": 52.10333, "2010": 52.26667}}},
"carbon": {
"total": {
"baseline": {"2009": 8341.733, "2010": 8267.964},
"efficient": {"2009": 2961.92, "2010": 2935.726}},
"competed": {
"baseline": {"2009": 8341.733, "2010": 8267.964},
"efficient": {"2009": 2961.92, "2010": 2935.726}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 3100000000, "2010": 4133333333.33},
"efficient": {
"2009": 6000000000, "2010": 8000000000}},
"competed": {
"baseline": {
"2009": 3100000000, "2010": 4133333333.33},
"efficient": {
"2009": 6000000000, "2010": 8000000000}}},
"energy": {
"total": {
"baseline": {"2009": 1487.944, "2010": 1423.424},
"efficient": {"2009": 528.3278, "2010": 505.4187}},
"competed": {
"baseline": {"2009": 1487.944, "2010": 1423.424},
"efficient": {"2009": 528.3278, "2010": 505.4187}}},
"carbon": {
"total": {
"baseline": {"2009": 275277.18, "2010": 272842.8},
"efficient": {"2009": 97743.35, "2010": 96878.97}},
"competed": {
"baseline": {"2009": 275277.18, "2010": 272842.8},
"efficient": {"2009": 97743.35, "2010": 96878.97}}}},
"lifetime": {"baseline": {"2009": 51.67, "2010": 51.67},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 11000000, "2010": 11000000},
"measure": {"2009": 11000000, "2010": 11000000}},
"competed": {
"all": {"2009": 11000000, "2010": 11000000},
"measure": {"2009": 11000000, "2010": 11000000}}},
"energy": {
"total": {
"baseline": {"2009": 31.9, "2010": 32.0},
"efficient": {"2009": 17.86, "2010": 17.92}},
"competed": {
"baseline": {"2009": 31.9, "2010": 32.0},
"efficient": {"2009": 17.86, "2010": 17.92}}},
"carbon": {
"total": {
"baseline": {"2009": 1813.42, "2010": 1797.38},
"efficient": {"2009": 1015.52, "2010": 1006.53}},
"competed": {
"baseline": {"2009": 1813.42, "2010": 1797.38},
"efficient": {"2009": 1015.52, "2010": 1006.53}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 154000000, "2010": 154000000},
"efficient": {"2009": 275000000, "2010": 275000000}},
"competed": {
"baseline": {"2009": 154000000, "2010": 154000000},
"efficient": {"2009": 275000000, "2010": 275000000}}},
"energy": {
"total": {
"baseline": {"2009": 289.65, "2010": 273.6},
"efficient": {"2009": 162.21, "2010": 153.22}},
"competed": {
"baseline": {"2009": 289.65, "2010": 273.6},
"efficient": {"2009": 162.21, "2010": 153.22}}},
"carbon": {
"total": {
"baseline": {"2009": 59842.87, "2010": 59313.65},
"efficient": {"2009": 33512, "2010": 33215.65}},
"competed": {
"baseline": {"2009": 59842.87, "2010": 59313.65},
"efficient": {"2009": 33512, "2010": 33215.65}}}},
"lifetime": {"baseline": {"2009": 140, "2010": 140},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 1.5, "2010": 2.25},
"measure": {"2009": 1.5, "2010": 2.25}},
"competed": {
"all": {"2009": 1.5, "2010": 2.25},
"measure": {"2009": 1.5, "2010": 2.25}}},
"energy": {
"total": {
"baseline": {"2009": 1.515, "2010": 2.2725},
"efficient": {"2009": 1.0908, "2010": 1.6362}},
"competed": {
"baseline": {"2009": 1.515, "2010": 2.2725},
"efficient": {"2009": 1.0908, "2010": 1.6362}}},
"carbon": {
"total": {
"baseline": {"2009": 85.62139, "2010": 124.8003},
"efficient": {"2009": 61.6474, "2010": 89.85622}},
"competed": {
"baseline": {"2009": 85.62139, "2010": 124.8003},
"efficient": {"2009": 61.6474, "2010": 89.85622}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 27, "2010": 40.5},
"efficient": {"2009": 37.5, "2010": 56.25}},
"competed": {
"baseline": {"2009": 27, "2010": 40.5},
"efficient": {"2009": 37.5, "2010": 56.25}}},
"energy": {
"total": {
"baseline": {"2009": 17.0892, "2010": 24.49755},
"efficient": {"2009": 12.30422, "2010": 17.63823}},
"competed": {
"baseline": {"2009": 17.0892, "2010": 24.49755},
"efficient": {"2009": 12.30422, "2010": 17.63823}}},
"carbon": {
"total": {
"baseline": {"2009": 2825.506, "2010": 4118.409},
"efficient": {"2009": 2034.364, "2010": 2965.256}},
"competed": {
"baseline": {"2009": 2825.506, "2010": 4118.409},
"efficient": {"2009": 2034.364, "2010": 2965.256}}}},
"lifetime": {"baseline": {"2009": 180, "2010": 180},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 13.5, "2010": 12.75},
"measure": {"2009": 13.5, "2010": 12.75}},
"competed": {
"all": {"2009": 13.5, "2010": 12.75},
"measure": {"2009": 13.5, "2010": 12.75}}},
"energy": {
"total": {
"baseline": {"2009": 13.635, "2010": 12.8775},
"efficient": {"2009": 9.8172, "2010": 9.2718}},
"competed": {
"baseline": {"2009": 13.635, "2010": 12.8775},
"efficient": {"2009": 9.8172, "2010": 9.2718}}},
"carbon": {
"total": {
"baseline": {"2009": 770.5925, "2010": 707.2018},
"efficient": {"2009": 554.8266, "2010": 509.1853}},
"competed": {
"baseline": {"2009": 770.5925, "2010": 707.2018},
"efficient": {"2009": 554.8266, "2010": 509.1853}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 243, "2010": 229.5},
"efficient": {"2009": 337.5, "2010": 318.75}},
"competed": {
"baseline": {"2009": 243, "2010": 229.5},
"efficient": {"2009": 337.5, "2010": 318.75}}},
"energy": {
"total": {
"baseline": {"2009": 153.8028, "2010": 138.8195},
"efficient": {"2009": 110.738, "2010": 99.94998}},
"competed": {
"baseline": {"2009": 153.8028, "2010": 138.8195},
"efficient": {"2009": 110.738, "2010": 99.94998}}},
"carbon": {
"total": {
"baseline": {"2009": 25429.55, "2010": 23337.66},
"efficient": {"2009": 18309.28, "2010": 16803.11}},
"competed": {
"baseline": {"2009": 25429.55, "2010": 23337.66},
"efficient": {"2009": 18309.28, "2010": 16803.11}}}},
"lifetime": {"baseline": {"2009": 180, "2010": 180},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 148, "2010": 148},
"measure": {"2009": 0, "2010": 148}},
"competed": {
"all": {"2009": 18.17, "2010": 148},
"measure": {"2009": 0, "2010": 148}}},
"energy": {
"total": {
"baseline": {"2009": 766.677, "2010": 768.9562},
"efficient": {"2009": 766.677, "2010": 649.7508}},
"competed": {
"baseline": {"2009": 94.42735, "2010": 768.9562},
"efficient": {"2009": 94.42735, "2010": 649.7508}}},
"carbon": {
"total": {
"baseline": {"2009": 43570.19, "2010": 43141.37},
"efficient": {"2009": 43570.19, "2010": 36449.93}},
"competed": {
"baseline": {"2009": 5366.289, "2010": 43141.37},
"efficient": {"2009": 5366.289, "2010": 36449.93}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 2972, "2010": 2972},
"efficient": {"2009": 2972, "2010": 3700}},
"competed": {
"baseline": {"2009": 364.016, "2010": 2972},
"efficient": {"2009": 364.016, "2010": 3700}}},
"energy": {
"total": {
"baseline": {"2009": 7819.26, "2010": 7479.78},
"efficient": {"2009": 7819.26, "2010": 6323.41}},
"competed": {
"baseline": {"2009": 963.0867, "2010": 7479.78},
"efficient": {"2009": 963.0867, "2010": 6323.41}}},
"carbon": {
"total": {
"baseline": {"2009": 1437816.14, "2010": 1423665.24},
"efficient": {"2009": 1437816.14, "2010": 1202847.67}},
"competed": {
"baseline": {"2009": 177087.54, "2010": 1423665.24},
"efficient": {
"2009": 177087.54, "2010": 1202847.67}}}},
"lifetime": {"baseline": {"2009": 200.81, "2010": 200.81},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 148, "2010": 148},
"measure": {"2009": 148, "2010": 0}},
"competed": {
"all": {"2009": 148, "2010": 148},
"measure": {"2009": 148, "2010": 0}}},
"energy": {
"total": {
"baseline": {"2009": 766.677, "2010": 768.9562},
"efficient": {"2009": 647.8339, "2010": 768.9562}},
"competed": {
"baseline": {"2009": 766.677, "2010": 768.9562},
"efficient": {"2009": 647.8339, "2010": 768.9562}}},
"carbon": {
"total": {
"baseline": {"2009": 43570.19, "2010": 43141.37},
"efficient": {"2009": 36815.4, "2010": 43141.37}},
"competed": {
"baseline": {"2009": 43570.19, "2010": 43141.37},
"efficient": {"2009": 36815.4, "2010": 43141.37}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 2972, "2010": 2972},
"efficient": {"2009": 3700, "2010": 2972}},
"competed": {
"baseline": {"2009": 2972, "2010": 2972},
"efficient": {"2009": 3700, "2010": 2972}}},
"energy": {
"total": {
"baseline": {"2009": 7819.26, "2010": 7479.78},
"efficient": {"2009": 6610.44, "2010": 7479.78}},
"competed": {
"baseline": {"2009": 7819.26, "2010": 7479.78},
"efficient": {"2009": 6610.44, "2010": 7479.78}}},
"carbon": {
"total": {
"baseline": {
"2009": 1437816.14, "2010": 1423665.24},
"efficient": {
"2009": 1214908.11, "2010": 1423665.24}},
"competed": {
"baseline": {
"2009": 1437816.14, "2010": 1423665.24},
"efficient": {
"2009": 1214908.11, "2010": 1423665.24}}}},
"lifetime": {"baseline": {"2009": 200.8108, "2010": 200.8108},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 148, "2010": 148},
"measure": {"2009": 148, "2010": 148}},
"competed": {
"all": {"2009": 148, "2010": 148},
"measure": {"2009": 148, "2010": 148}}},
"energy": {
"total": {
"baseline": {"2009": 766.677, "2010": 768.9562},
"efficient": {"2009": 647.8339, "2010": 649.7508}},
"competed": {
"baseline": {"2009": 766.677, "2010": 768.9562},
"efficient": {"2009": 647.8339, "2010": 649.7508}}},
"carbon": {
"total": {
"baseline": {"2009": 43570.19, "2010": 43141.37},
"efficient": {"2009": 36815.4, "2010": 36449.93}},
"competed": {
"baseline": {"2009": 43570.19, "2010": 43141.37},
"efficient": {"2009": 36815.4, "2010": 36449.93}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 2972, "2010": 2972},
"efficient": {"2009": 3700, "2010": 3700}},
"competed": {
"baseline": {"2009": 2972, "2010": 2972},
"efficient": {"2009": 3700, "2010": 3700}}},
"energy": {
"total": {
"baseline": {"2009": 7819.26, "2010": 7479.78},
"efficient": {"2009": 6610.44, "2010": 6323.41}},
"competed": {
"baseline": {"2009": 7819.26, "2010": 7479.78},
"efficient": {"2009": 6610.44, "2010": 6323.41}}},
"carbon": {
"total": {
"baseline": {
"2009": 1437816.14, "2010": 1423665.24},
"efficient": {
"2009": 1214908.11, "2010": 1202847.67}},
"competed": {
"baseline": {
"2009": 1437816.14, "2010": 1423665.24},
"efficient": {
"2009": 1214908.11, "2010": 1202847.67}}}},
"lifetime": {"baseline": {"2009": 200.8108, "2010": 200.8108},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 15, "2010": 15},
"measure": {"2009": 15, "2010": 15}},
"competed": {
"all": {"2009": 15, "2010": 15},
"measure": {"2009": 15, "2010": 15}}},
"energy": {
"total": {
"baseline": {"2009": 15.15, "2010": 15.15},
"efficient": {"2009": 34.452, "2010": 34.56}},
"competed": {
"baseline": {"2009": 15.15, "2010": 15.15},
"efficient": {"2009": 34.452, "2010": 34.56}}},
"carbon": {
"total": {
"baseline": {"2009": 856.2139, "2010": 832.0021},
"efficient": {"2009": 1958.494, "2010": 1941.174}},
"competed": {
"baseline": {"2009": 856.2139, "2010": 832.0021},
"efficient": {"2009": 1958.494, "2010": 1941.174}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 270, "2010": 270},
"efficient": {"2009": 375, "2010": 375}},
"competed": {
"baseline": {"2009": 270, "2010": 270},
"efficient": {"2009": 375, "2010": 375}}},
"energy": {
"total": {
"baseline": {"2009": 170.892, "2010": 163.317},
"efficient": {"2009": 349.3433, "2010": 334.1952}},
"competed": {
"baseline": {"2009": 170.892, "2010": 163.317},
"efficient": {"2009": 349.3433, "2010": 334.1952}}},
"carbon": {
"total": {
"baseline": {"2009": 28255.06, "2010": 27456.07},
"efficient": {"2009": 64630.29, "2010": 64058.75}},
"competed": {
"baseline": {"2009": 28255.06, "2010": 27456.07},
"efficient": {"2009": 64630.29, "2010": 64058.75}}}},
"lifetime": {"baseline": {"2009": 180, "2010": 180},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 11000000, "2010": 11000000},
"measure": {"2009": 11000000, "2010": 11000000}},
"competed": {
"all": {"2009": 11000000, "2010": 11000000},
"measure": {"2009": 11000000, "2010": 11000000}}},
"energy": {
"total": {
"baseline": {"2009": 31.9, "2010": 32.0},
"efficient": {"2009": 17.86, "2010": 17.92}},
"competed": {
"baseline": {"2009": 31.9, "2010": 32.0},
"efficient": {"2009": 17.86, "2010": 17.92}}},
"carbon": {
"total": {
"baseline": {"2009": 1813.42, "2010": 1797.38},
"efficient": {"2009": 1015.52, "2010": 1006.53}},
"competed": {
"baseline": {"2009": 1813.42, "2010": 1797.38},
"efficient": {"2009": 1015.52, "2010": 1006.53}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 154000000, "2010": 154000000},
"efficient": {"2009": 275000000, "2010": 275000000}},
"competed": {
"baseline": {"2009": 154000000, "2010": 154000000},
"efficient": {"2009": 275000000, "2010": 275000000}}},
"energy": {
"total": {
"baseline": {"2009": 289.65, "2010": 273.6},
"efficient": {"2009": 162.21, "2010": 153.22}},
"competed": {
"baseline": {"2009": 289.65, "2010": 273.6},
"efficient": {"2009": 162.21, "2010": 153.22}}},
"carbon": {
"total": {
"baseline": {"2009": 59842.87, "2010": 59313.65},
"efficient": {"2009": 33512, "2010": 33215.65}},
"competed": {
"baseline": {"2009": 59842.87, "2010": 59313.65},
"efficient": {"2009": 33512, "2010": 33215.65}}}},
"lifetime": {"baseline": {"2009": 140, "2010": 140},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 7.125, "2010": 6.9375},
"measure": {"2009": 7.125, "2010": 6.9375}},
"competed": {
"all": {"2009": 7.125, "2010": 6.9375},
"measure": {"2009": 7.125, "2010": 6.9375}}},
"energy": {
"total": {
"baseline": {"2009": 7.1963, "2010": 7.0069},
"efficient": {"2009": 5.1813, "2010": 5.0449}},
"competed": {
"baseline": {"2009": 7.1963, "2010": 7.0069},
"efficient": {"2009": 5.1813, "2010": 5.0449}}},
"carbon": {
"total": {
"baseline": {"2009": 406.7016, "2010": 384.801},
"efficient": {"2009": 292.8251, "2010": 277.0567}},
"competed": {
"baseline": {"2009": 406.7016, "2010": 384.801},
"efficient": {"2009": 292.8251, "2010": 277.0567}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 128.25, "2010": 124.875},
"efficient": {"2009": 178.125, "2010": 173.4375}},
"competed": {
"baseline": {"2009": 128.25, "2010": 124.875},
"efficient": {"2009": 178.125, "2010": 173.4375}}},
"energy": {
"total": {
"baseline": {"2009": 81.1737, "2010": 75.53411},
"efficient": {"2009": 58.44506, "2010": 54.38456}},
"competed": {
"baseline": {"2009": 81.1737, "2010": 75.53411},
"efficient": {"2009": 58.44506, "2010": 54.38456}}},
"carbon": {
"total": {
"baseline": {"2009": 13421.15, "2010": 12698.43},
"efficient": {"2009": 9663.23, "2010": 9142.871}},
"competed": {
"baseline": {"2009": 13421.15, "2010": 12698.43},
"efficient": {"2009": 9663.23, "2010": 9142.871}}}},
"lifetime": {"baseline": {"2009": 180, "2010": 180},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 70.3, "2010": 68.45},
"measure": {"2009": 70.3, "2010": 68.45}},
"competed": {
"all": {"2009": 70.3, "2010": 68.45},
"measure": {"2009": 70.3, "2010": 68.45}}},
"energy": {
"total": {
"baseline": {"2009": 364.1716, "2010": 355.6422},
"efficient": {"2009": 307.7211, "2010": 300.5098}},
"competed": {
"baseline": {"2009": 364.1716, "2010": 355.6422},
"efficient": {"2009": 307.7211, "2010": 300.5098}}},
"carbon": {
"total": {
"baseline": {"2009": 20695.84, "2010": 19952.88},
"efficient": {"2009": 17487.31, "2010": 16858.09}},
"competed": {
"baseline": {"2009": 20695.84, "2010": 19952.88},
"efficient": {"2009": 17487.31, "2010": 16858.09}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 1411.7, "2010": 1374.55},
"efficient": {"2009": 1757.5, "2010": 1711.25}},
"competed": {
"baseline": {"2009": 1411.7, "2010": 1374.55},
"efficient": {"2009": 1757.5, "2010": 1711.25}}},
"energy": {
"total": {
"baseline": {"2009": 3714.15, "2010": 3459.40},
"efficient": {"2009": 3139.96, "2010": 2924.58}},
"competed": {
"baseline": {"2009": 3714.15, "2010": 3459.40},
"efficient": {"2009": 3139.96, "2010": 2924.58}}},
"carbon": {
"total": {
"baseline": {"2009": 682962.67, "2010": 658445.18},
"efficient": {"2009": 577081.35, "2010": 556317.05}},
"competed": {
"baseline": {"2009": 682962.67, "2010": 658445.18},
"efficient": {"2009": 577081.35, "2010": 556317.05}}}},
"lifetime": {"baseline": {"2009": 200.81, "2010": 200.81},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 11000000, "2010": 11000000},
"measure": {"2009": 11000000, "2010": 11000000}},
"competed": {
"all": {"2009": 11000000, "2010": 11000000},
"measure": {"2009": 11000000, "2010": 11000000}}},
"energy": {
"total": {
"baseline": {"2009": 114.84, "2010": 115.2},
"efficient": {"2009": 86.13, "2010": 86.4}},
"competed": {
"baseline": {"2009": 114.84, "2010": 115.2},
"efficient": {"2009": 86.13, "2010": 86.4}}},
"carbon": {
"total": {
"baseline": {"2009": 6528.313, "2010": 6470.58},
"efficient": {"2009": 4896.234, "2010": 4852.935}},
"competed": {
"baseline": {"2009": 6528.313, "2010": 6470.58},
"efficient": {"2009": 4896.234, "2010": 4852.935}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 0, "2010": 0},
"efficient": {"2009": 275000000, "2010": 275000000}},
"competed": {
"baseline": {"2009": 0, "2010": 0},
"efficient": {"2009": 275000000, "2010": 275000000}}},
"energy": {
"total": {
"baseline": {"2009": 1042.747, "2010": 984.96},
"efficient": {"2009": 782.0604, "2010": 738.72}},
"competed": {
"baseline": {"2009": 1042.747, "2010": 984.96},
"efficient": {"2009": 782.0604, "2010": 738.72}}},
"carbon": {
"total": {
"baseline": {"2009": 215434.31, "2010": 213529.15},
"efficient": {"2009": 161575.74, "2010": 160146.86}},
"competed": {
"baseline": {"2009": 215434.31, "2010": 213529.15},
"efficient": {"2009": 161575.74, "2010": 160146.86}}}},
"lifetime": {"baseline": {"2009": 10, "2010": 10},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 729, "2010": 729},
"measure": {"2009": 729, "2010": 729}},
"competed": {
"all": {"2009": 729, "2010": 729},
"measure": {"2009": 729, "2010": 729}}},
"energy": {
"total": {
"baseline": {"2009": 1177.11, "2010": 1180.8},
"efficient": {"2009": 588.555, "2010": 590.4}},
"competed": {
"baseline": {"2009": 1177.11, "2010": 1180.8},
"efficient": {"2009": 588.555, "2010": 590.4}}},
"carbon": {
"total": {
"baseline": {"2009": 66915.2, "2010": 66323.45},
"efficient": {"2009": 33457.6, "2010": 33161.72}},
"competed": {
"baseline": {"2009": 66915.2, "2010": 66323.45},
"efficient": {"2009": 33457.6, "2010": 33161.72}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 0, "2010": 0},
"efficient": {"2009": 18225, "2010": 18225}},
"competed": {
"baseline": {"2009": 0, "2010": 0},
"efficient": {"2009": 18225, "2010": 18225}}},
"energy": {
"total": {
"baseline": {"2009": 11935.9, "2010": 11418.34},
"efficient": {"2009": 5967.948, "2010": 5709.168}},
"competed": {
"baseline": {"2009": 11935.9, "2010": 11418.34},
"efficient": {"2009": 5967.948, "2010": 5709.168}}},
"carbon": {
"total": {
"baseline": {"2009": 2208201.73, "2010": 2188673.79},
"efficient": {"2009": 1104100.86, "2010": 1094336.90}},
"competed": {
"baseline": {"2009": 2208201.73, "2010": 2188673.79},
"efficient": {"2009": 1104100.86, "2010": 1094336.90}}}
},
"lifetime": {"baseline": {"2009": 10, "2010": 10},
"measure": 1}}]
cls.ok_mapmas_partchck_msegout = [{
"stock": {
"total": {
"all": {"2009": 11000000, "2010": 11000000},
"measure": {"2009": 298571.43, "2010": 597142.86}},
"competed": {
"all": {"2009": 298571.43, "2010": 597142.86},
"measure": {"2009": 298571.43, "2010": 597142.86}}},
"energy": {
"total": {
"baseline": {"2009": 31.90, "2010": 32.00},
"efficient": {"2009": 31.52, "2010": 31.24}},
"competed": {
"baseline": {"2009": 0.87, "2010": 1.74},
"efficient": {"2009": 0.48, "2010": 0.97}}},
"carbon": {
"total": {
"baseline": {"2009": 1813.42, "2010": 1797.38},
"efficient": {"2009": 1791.76, "2010": 1754.45}},
"competed": {
"baseline": {"2009": 49.22, "2010": 97.57},
"efficient": {"2009": 27.56, "2010": 54.64}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 154000000, "2010": 154000000},
"efficient": {
"2009": 157284285.71, "2010": 160568571.42857143}},
"competed": {
"baseline": {"2009": 4180000, "2010": 8360000},
"efficient": {
"2009": 7464285.71, "2010": 14928571.43}}},
"energy": {
"total": {
"baseline": {"2009": 289.65, "2010": 273.60},
"efficient": {"2009": 286.19, "2010": 267.06}},
"competed": {
"baseline": {"2009": 7.86, "2010": 14.85},
"efficient": {"2009": 4.40, "2010": 8.32}}},
"carbon": {
"total": {
"baseline": {"2009": 59842.87, "2010": 59313.65},
"efficient": {"2009": 59128.17, "2010": 57896.90}},
"competed": {
"baseline": {"2009": 1624.31, "2010": 3219.88},
"efficient": {"2009": 909.61, "2010": 1803.14}}}},
"lifetime": {"baseline": {"2009": 140, "2010": 140},
"measure": 1}}]
cls.ok_distmeas_out = [
[120.86, 100, 1741.32, 100, 1.0, 1],
[11.9, 100, 374.73, 100, 0.93, 100],
[55.44, 100, 6426946929.70, 100, 1.0, 1]]
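# Expected 'master_mseg' outputs for the partially valid measures in
# 'ok_partialmeas_in' under a Technical potential scenario (checked in
# 'test_mseg_partial')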
cls.ok_partialmeas_out = [{
"stock": {
"total": {
"all": {"2009": 18, "2010": 18},
"measure": {"2009": 18, "2010": 18}},
"competed": {
"all": {"2009": 18, "2010": 18},
"measure": {"2009": 18, "2010": 18}}},
"energy": {
"total": {
"baseline": {"2009": 57.42, "2010": 57.6},
"efficient": {"2009": 27.5616, "2010": 27.648}},
"competed": {
"baseline": {"2009": 57.42, "2010": 57.6},
"efficient": {"2009": 27.5616, "2010": 27.648}}},
"carbon": {
"total": {
"baseline": {"2009": 3264.156, "2010": 3235.29},
"efficient": {"2009": 1566.795, "2010": 1552.939}},
"competed": {
"baseline": {"2009": 3264.156, "2010": 3235.29},
"efficient": {"2009": 1566.795, "2010": 1552.939}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 216, "2010": 216},
"efficient": {"2009": 450, "2010": 450}},
"competed": {
"baseline": {"2009": 216, "2010": 216},
"efficient": {"2009": 450, "2010": 450}}},
"energy": {
"total": {
"baseline": {"2009": 582.2388, "2010": 556.992},
"efficient": {"2009": 279.4746, "2010": 267.3562}},
"competed": {
"baseline": {"2009": 582.2388, "2010": 556.992},
"efficient": {"2009": 279.4746, "2010": 267.3562}}},
"carbon": {
"total": {
"baseline": {"2009": 107717.16, "2010": 106764.58},
"efficient": {"2009": 51704.24, "2010": 51247}},
"competed": {
"baseline": {"2009": 107717.16, "2010": 106764.58},
"efficient": {"2009": 51704.24, "2010": 51247}}}},
"lifetime": {"baseline": {"2009": 120, "2010": 120},
"measure": 1}},
{
"stock": {
"total": {
"all": {"2009": 52, "2010": 52},
"measure": {"2009": 52, "2010": 52}},
"competed": {
"all": {"2009": 52, "2010": 52},
"measure": {"2009": 52, "2010": 52}}},
"energy": {
"total": {
"baseline": {"2009": 165.88, "2010": 166.4},
"efficient": {"2009": 67.1176, "2010": 67.328}},
"competed": {
"baseline": {"2009": 165.88, "2010": 166.4},
"efficient": {"2009": 67.1176, "2010": 67.328}}},
"carbon": {
"total": {
"baseline": {"2009": 9429.785, "2010": 9346.394},
"efficient": {"2009": 3815.436, "2010": 3781.695}},
"competed": {
"baseline": {"2009": 9429.785, "2010": 9346.394},
"efficient": {"2009": 3815.436, "2010": 3781.695}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 526, "2010": 526},
"efficient": {"2009": 1300, "2010": 1300}},
"competed": {
"baseline": {"2009": 526, "2010": 526},
"efficient": {"2009": 1300, "2010": 1300}}},
"energy": {
"total": {
"baseline": {"2009": 1682.023, "2010": 1609.088},
"efficient": {"2009": 680.5725, "2010": 651.0618}},
"competed": {
"baseline": {"2009": 1682.023, "2010": 1609.088},
"efficient": {"2009": 680.5725, "2010": 651.0618}}},
"carbon": {
"total": {
"baseline": {"2009": 311182.9, "2010": 308431},
"efficient": {"2009": 125909.39, "2010": 124795.93}},
"competed": {
"baseline": {"2009": 311182.9, "2010": 308431},
"efficient": {"2009": 125909.39, "2010": 124795.93}}}},
"lifetime": {"baseline": {"2009": 101.1538, "2010": 101.1538},
"measure": 1}}]
cls.ok_warnmeas_out = [
[("WARNING: 'warn measure 1' has invalid "
"sub-market scaling fraction source title, author, "
"organization, and/or year information"),
("WARNING: 'warn measure 1' has invalid "
"sub-market scaling fraction source URL information"),
("WARNING: 'warn measure 1' has invalid "
"sub-market scaling fraction derivation information"),
("WARNING (CRITICAL): 'warn measure 1' has "
"insufficient sub-market source information and "
"will be removed from analysis")],
[("WARNING: 'warn measure 2' has invalid "
"sub-market scaling fraction source URL information"),
("WARNING: 'warn measure 2' has invalid "
"sub-market scaling fraction derivation information"),
("WARNING (CRITICAL): 'warn measure 2' has "
"insufficient sub-market source information and "
"will be removed from analysis")],
[("WARNING: 'warn measure 3' has invalid "
"sub-market scaling fraction source title, author, "
"organization, and/or year information")]]
def test_mseg_ok_full_tp(self):
"""Test 'fill_mkts' function given valid inputs.
Notes:
Checks all branches of the measure 'markets' attribute
under a Technical potential scenario.
Raises:
AssertionError: If function yields unexpected results.
"""
# Run function on all measure objects and check output
for idx, measure in enumerate(self.ok_tpmeas_fullchk_in):
measure.fill_mkts(self.sample_mseg_in, self.sample_cpl_in,
self.convert_data, self.verbose)
# Restrict the full check of all branches of 'markets' to only
# the first three measures in this set. For the remaining two
# measures, only check the competed choice parameter outputs.
# These last two measures are intended to test a special case where
# measure cost units are in $/ft^2 floor rather than $/unit and
# competed choice parameters must be scaled accordingly
if idx < 3:
self.dict_check(
measure.markets['Technical potential']['master_mseg'],
self.ok_tpmeas_fullchk_msegout[idx])
self.dict_check(
measure.markets['Technical potential']['mseg_adjust'][
'secondary mseg adjustments'],
self.ok_tpmeas_fullchk_msegadjout[idx])
self.dict_check(
measure.markets['Technical potential']['mseg_out_break'],
self.ok_tpmeas_fullchk_break_out[idx])
self.dict_check(
measure.markets['Technical potential']['mseg_adjust'][
'competed choice parameters'],
self.ok_tpmeas_fullchk_competechoiceout[idx])
def test_mseg_ok_part_tp(self):
"""Test 'fill_mkts' function given valid inputs.
Notes:
Checks the 'master_mseg' branch of the measure 'markets' attribute
under a Technical potential scenario.
Raises:
AssertionError: If function yields unexpected results.
"""
for idx, measure in enumerate(self.ok_tpmeas_partchk_in):
measure.fill_mkts(self.sample_mseg_in, self.sample_cpl_in,
self.convert_data, self.verbose)
self.dict_check(
measure.markets['Technical potential']['master_mseg'],
self.ok_tpmeas_partchk_msegout[idx])
def test_mseg_ok_part_map(self):
"""Test 'fill_mkts' function given valid inputs.
Notes:
Checks the 'master_mseg' branch of the measure 'markets' attribute
under a Max adoption potential scenario.
Raises:
AssertionError: If function yields unexpected results.
"""
# Run function on all measure objects and check for correct
# output
for idx, measure in enumerate(self.ok_mapmeas_partchk_in):
measure.fill_mkts(self.sample_mseg_in, self.sample_cpl_in,
self.convert_data, self.verbose)
self.dict_check(
measure.markets['Max adoption potential']['master_mseg'],
self.ok_mapmas_partchck_msegout[idx])
def test_mseg_ok_distrib(self):
"""Test 'fill_mkts' function given valid inputs.
Notes:
Valid input measures are assigned distributions on
their cost, performance, and/or lifetime attributes.
Raises:
AssertionError: If function yields unexpected results.
"""
# Seed random number generator to yield repeatable results
numpy.random.seed(1234)
for idx, measure in enumerate(self.ok_distmeas_in):
# Generate lists of energy and cost output values
measure.fill_mkts(
self.sample_mseg_in, self.sample_cpl_in,
self.convert_data, self.verbose)
test_outputs = measure.markets[
'Technical potential']['master_mseg']
test_e = test_outputs["energy"]["total"]["efficient"]["2009"]
test_c = test_outputs[
"cost"]["stock"]["total"]["efficient"]["2009"]
test_l = test_outputs["lifetime"]["measure"]
if isinstance(test_l, float):
test_l = [test_l]
# Calculate mean values from output lists for testing
param_e = round(sum(test_e) / len(test_e), 2)
param_c = round(sum(test_c) / len(test_c), 2)
param_l = round(sum(test_l) / len(test_l), 2)
# Check that the mean values and lengths of the output lists are
# correct
self.assertEqual([
param_e, len(test_e), param_c, len(test_c),
param_l, len(test_l)], self.ok_distmeas_out[idx])
def test_mseg_partial(self):
"""Test 'fill_mkts' function given partially valid inputs.
Raises:
AssertionError: If function yields unexpected results.
"""
# Run function on all measure objects and check output
for idx, measure in enumerate(self.ok_partialmeas_in):
measure.fill_mkts(self.sample_mseg_in, self.sample_cpl_in,
self.convert_data, self.verbose)
self.dict_check(
measure.markets['Technical potential']['master_mseg'],
self.ok_partialmeas_out[idx])
def test_mseg_fail_inputs(self):
"""Test 'fill_mkts' function given invalid inputs.
Raises:
AssertionError: If ValueError is not raised.
"""
# Run function on all measure objects and check output
for idx, measure in enumerate(self.failmeas_inputs_in):
with self.assertRaises(ValueError):
measure.fill_mkts(self.sample_mseg_in, self.sample_cpl_in,
self.convert_data, self.verbose)
def test_mseg_fail_missing(self):
"""Test 'fill_mkts' function given a measure with missing baseline data.
Raises:
AssertionError: If KeyError is not raised.
"""
# Run function on all measure objects and check output
with self.assertRaises(KeyError):
self.failmeas_missing_in.fill_mkts(
self.sample_mseg_in, self.sample_cpl_in,
self.convert_data, self.verbose)
def test_mseg_warn(self):
"""Test 'fill_mkts' function given incomplete inputs.
Raises:
AssertionError: If function yields unexpected results or
UserWarning is not raised.
"""
# Run function on all measure objects and check output
for idx, mw in enumerate(self.warnmeas_in):
# Assert that inputs generate correct warnings and that measure
# is marked inactive where necessary
with warnings.catch_warnings(record=True) as w:
mw.fill_mkts(self.sample_mseg_in, self.sample_cpl_in,
self.convert_data, self.verbose)
# Check that the correct number of warnings is yielded
self.assertEqual(len(w), len(self.ok_warnmeas_out[idx]))
# Check that all yielded warnings are of the UserWarning type
self.assertTrue(all([
issubclass(wn.category, UserWarning) for wn in w]))
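# Check that each expected warning message appears among the
# yielded warning messages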
[self.assertTrue(wm in str([wmt.message for wmt in w])) for
wm in self.ok_warnmeas_out[idx]]
# Check that measure is marked inactive when a critical
# warning message is yielded
if any(['CRITICAL' in x for x in self.ok_warnmeas_out[
idx]]):
self.assertTrue(mw.remove)
else:
self.assertFalse(mw.remove)
class PartitionMicrosegmentTest(unittest.TestCase, CommonMethods):
"""Test the operation of the 'partition_microsegment' function.
Ensure that the function properly partitions an input microsegment
to yield the required total, competed, and efficient stock, energy,
carbon and cost information.
Attributes:
time_horizons (list): A series of modeling time horizons to use
in the various test functions of the class.
handyvars (object): Global variables to use for the test measure.
measure_instance (object): Sample measure object to use in the tests.
ok_diffuse_params_in (NoneType): Placeholder for eventual technology
diffusion parameters to be used in 'adjusted adoption' scenario.
ok_mskeys_in (list): Sample key chains associated with the market
microsegment being partitioned by the function.
ok_mkt_scale_frac_in (float): Sample market microsegment scaling
factor.
ok_new_bldg_constr (list): Sample annual and cumulative totals of
new building construction, by year.
ok_stock_in (list): Sample baseline microsegment stock data, by year.
ok_energy_scnd_in (list): Sample baseline secondary microsegment
energy data, by year.
ok_energy_in (list): Sample baseline microsegment energy data, by year.
ok_carb_in (list): Sample baseline microsegment carbon data, by year.
ok_base_cost_in (list): Sample baseline technology unit costs, by year.
ok_cost_meas_in (list): Sample measure unit costs.
ok_cost_energy_base_in (numpy.ndarray): Sample baseline fuel costs.
ok_cost_energy_meas_in (numpy.ndarray): Sample measure fuel costs.
ok_relperf_in (list): Sample measure relative performance values.
ok_life_base_in (dict): Sample baseline technology lifetimes, by year.
ok_life_meas_in (int): Sample measure lifetime.
ok_ssconv_base_in (numpy.ndarray): Sample baseline fuel site-source
conversions, by year.
ok_ssconv_meas_in (numpy.ndarray): Sample measure fuel site-source
conversions, by year.
ok_carbint_base_in (numpy.ndarray): Sample baseline fuel carbon
intensities, by year.
ok_carbint_meas_in (numpy.ndarray): Sample measure fuel carbon
intensities, by year.
ok_out (list): Outputs that should be yielded by the function given
valid inputs.
"""
@classmethod
def setUpClass(cls):
"""Define variables and objects for use across all class functions."""
cls.time_horizons = [
["2009", "2010", "2011"], ["2025", "2026", "2027"],
["2020", "2021", "2022"]]
# Base directory
base_dir = os.getcwd()
cls.handyvars = ecm_prep.UsefulVars(base_dir,
ecm_prep.UsefulInputFiles())
cls.handyvars.retro_rate = 0.02
cls.handyvars.ccosts = numpy.array(
(b'Test', 1, 4, 1, 1, 1, 1, 1, 1, 3), dtype=[
('Category', 'S11'), ('2009', '<f8'),
('2010', '<f8'), ('2011', '<f8'),
('2020', '<f8'), ('2021', '<f8'),
('2022', '<f8'), ('2025', '<f8'),
('2026', '<f8'), ('2027', '<f8')])
sample_measure_in = {
"name": "sample measure 1",
"active": 1,
"market_entry_year": None,
"market_exit_year": None,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"bldg_type": ["single family home"],
"fuel_type": {
"primary": ["electricity"],
"secondary": None},
"fuel_switch_to": None,
"end_use": {
"primary": ["heating", "cooling"],
"secondary": None},
"technology": {
"primary": ["resistance heat", "ASHP", "GSHP", "room AC"],
"secondary": None}}
cls.measure_instance = ecm_prep.Measure(
cls.handyvars, **sample_measure_in)
cls.ok_diffuse_params_in = None
cls.ok_mskeys_in = [
('primary', 'AIA_CZ1', 'single family home',
'electricity', 'heating', 'supply', 'resistance heat',
'new'),
('primary', 'AIA_CZ1', 'single family home',
'electricity', 'heating', 'supply', 'resistance heat',
'existing')]
cls.ok_mkt_scale_frac_in = 1
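# Annual and cumulative totals of new building construction for each
# modeling time horizon, used to split the stock between the 'new' and
# 'existing' structure types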
cls.ok_new_bldg_constr = [{
"annual new": {"2009": 10, "2010": 5, "2011": 10},
"total new": {"2009": 10, "2010": 15, "2011": 25}},
{
"annual new": {"2025": 10, "2026": 5, "2027": 10},
"total new": {"2025": 10, "2026": 15, "2027": 25}},
{
"annual new": {"2020": 10, "2021": 95, "2022": 10},
"total new": {"2020": 10, "2021": 100, "2022": 100}}]
cls.ok_stock_in = [
{"2009": 100, "2010": 200, "2011": 300},
{"2025": 400, "2026": 500, "2027": 600},
{"2020": 700, "2021": 800, "2022": 900}]
cls.ok_energy_scnd_in = [
{"2009": 10, "2010": 20, "2011": 30},
{"2025": 40, "2026": 50, "2027": 60},
{"2020": 70, "2021": 80, "2022": 90}]
cls.ok_energy_in = [
{"2009": 10, "2010": 20, "2011": 30},
{"2025": 40, "2026": 50, "2027": 60},
{"2020": 70, "2021": 80, "2022": 90}]
cls.ok_carb_in = [
{"2009": 30, "2010": 60, "2011": 90},
{"2025": 120, "2026": 150, "2027": 180},
{"2020": 210, "2021": 240, "2022": 270}]
cls.ok_base_cost_in = [
{"2009": 10, "2010": 10, "2011": 10},
{"2025": 20, "2026": 20, "2027": 20},
{"2020": 30, "2021": 30, "2022": 30}]
cls.ok_cost_meas_in = [20, 30, 40]
cls.ok_cost_energy_base_in, cls.ok_cost_energy_meas_in = \
(numpy.array((b'Test', 1, 2, 2, 2, 2, 2, 2, 2, 2),
dtype=[('Category', 'S11'), ('2009', '<f8'),
('2010', '<f8'), ('2011', '<f8'),
('2020', '<f8'), ('2021', '<f8'),
('2022', '<f8'), ('2025', '<f8'),
('2026', '<f8'), ('2027', '<f8')])
for n in range(2))
cls.ok_relperf_in = [
{"2009": 0.30, "2010": 0.30, "2011": 0.30},
{"2025": 0.15, "2026": 0.15, "2027": 0.15},
{"2020": 0.75, "2021": 0.75, "2022": 0.75}]
cls.ok_life_base_in = {
"2009": 10, "2010": 10, "2011": 10,
"2020": 10, "2021": 10, "2022": 10,
"2025": 10, "2026": 10, "2027": 10}
cls.ok_life_meas_in = 10
cls.ok_ssconv_base_in, cls.ok_ssconv_meas_in = \
(numpy.array((b'Test', 1, 1, 1, 1, 1, 1, 1, 1, 1),
dtype=[('Category', 'S11'), ('2009', '<f8'),
('2010', '<f8'), ('2011', '<f8'),
('2020', '<f8'), ('2021', '<f8'),
('2022', '<f8'), ('2025', '<f8'),
('2026', '<f8'), ('2027', '<f8')])
for n in range(2))
cls.ok_carbint_base_in, cls.ok_carbint_meas_in = \
(numpy.array((b'Test', 1, 1, 1, 1, 1, 1, 1, 1, 1),
dtype=[('Category', 'S11'), ('2009', '<f8'),
('2010', '<f8'), ('2011', '<f8'),
('2020', '<f8'), ('2021', '<f8'),
('2022', '<f8'), ('2025', '<f8'),
('2026', '<f8'), ('2027', '<f8')])
for n in range(2))
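# Expected 'partition_microsegment' outputs, indexed by modeling time
# horizon, then by adoption scheme, then by microsegment key chain ('new'
# vs. 'existing' structure type); each entry is the ordered list of output
# dicts compared in 'test_ok'. Several 'new'-structure entries equal the
# total stock scaled by the annual-to-cumulative new construction ratio
# (e.g., 2010: 200 * 5/15 = 66.67)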
cls.ok_out = [
[[[
{"2009": 100, "2010": 200, "2011": 300},
{"2009": 10, "2010": 20, "2011": 30},
{"2009": 30, "2010": 60, "2011": 90},
{"2009": 100, "2010": 200, "2011": 300},
{"2009": 3, "2010": 6, "2011": 9},
{"2009": 9, "2010": 18, "2011": 27},
{"2009": 100, "2010": 66.67, "2011": 120},
{"2009": 10, "2010": 6.67, "2011": 12},
{"2009": 30, "2010": 20, "2011": 36},
{"2009": 100, "2010": 66.67, "2011": 120},
{"2009": 3, "2010": 2, "2011": 3.6},
{"2009": 9, "2010": 6, "2011": 10.8},
{"2009": 1000, "2010": 2000, "2011": 3000},
{"2009": 10, "2010": 40, "2011": 60},
{"2009": 30, "2010": 240, "2011": 90},
{"2009": 2000, "2010": 4000, "2011": 6000},
{"2009": 3, "2010": 12, "2011": 18},
{"2009": 9, "2010": 72, "2011": 27},
{"2009": 1000, "2010": 666.67, "2011": 1200},
{"2009": 10, "2010": 13.33, "2011": 24},
{"2009": 30, "2010": 80, "2011": 36},
{"2009": 2000, "2010": 1333.33, "2011": 2400},
{"2009": 3, "2010": 4, "2011": 7.2},
{"2009": 9, "2010": 24, "2011": 10.8}],
[
{"2009": 100, "2010": 200, "2011": 300},
{"2009": 10, "2010": 20, "2011": 30},
{"2009": 30, "2010": 60, "2011": 90},
{"2009": 100, "2010": 200, "2011": 300},
{"2009": 3, "2010": 6, "2011": 9},
{"2009": 9, "2010": 18, "2011": 27},
{"2009": 100, "2010": 0, "2011": 0},
{"2009": 10, "2010": 0, "2011": 0},
{"2009": 30, "2010": 0, "2011": 0},
{"2009": 100, "2010": 0, "2011": 0},
{"2009": 3, "2010": 0, "2011": 0},
{"2009": 9, "2010": 0, "2011": 0},
{"2009": 1000, "2010": 2000, "2011": 3000},
{"2009": 10, "2010": 40, "2011": 60},
{"2009": 30, "2010": 240, "2011": 90},
{"2009": 2000, "2010": 4000, "2011": 6000},
{"2009": 3, "2010": 12, "2011": 18},
{"2009": 9, "2010": 72, "2011": 27},
{"2009": 1000, "2010": 0, "2011": 0},
{"2009": 10, "2010": 0, "2011": 0},
{"2009": 30, "2010": 0, "2011": 0},
{"2009": 2000, "2010": 0, "2011": 0},
{"2009": 3, "2010": 0, "2011": 0},
{"2009": 9, "2010": 0, "2011": 0}]],
[[
{"2009": 100, "2010": 200, "2011": 300},
{"2009": 10, "2010": 20, "2011": 30},
{"2009": 30, "2010": 60, "2011": 90},
{"2009": 100, "2010": 166.67, "2011": 286.67},
{"2009": 3, "2010": 6, "2011": 9},
{"2009": 9, "2010": 18, "2011": 27},
{"2009": 100, "2010": 66.67, "2011": 120},
{"2009": 10, "2010": 6.67, "2011": 12},
{"2009": 30, "2010": 20, "2011": 36},
{"2009": 100, "2010": 66.67, "2011": 120},
{"2009": 3, "2010": 2, "2011": 3.6},
{"2009": 9, "2010": 6, "2011": 10.8},
{"2009": 1000, "2010": 2000, "2011": 3000},
{"2009": 10, "2010": 40, "2011": 60},
{"2009": 30, "2010": 240, "2011": 90},
{"2009": 2000, "2010": 3666.67, "2011": 5866.67},
{"2009": 3, "2010": 12, "2011": 18},
{"2009": 9, "2010": 72, "2011": 27},
{"2009": 1000, "2010": 666.67, "2011": 1200},
{"2009": 10, "2010": 13.33, "2011": 24},
{"2009": 30, "2010": 80, "2011": 36},
{"2009": 2000, "2010": 1333.33, "2011": 2400},
{"2009": 3, "2010": 4, "2011": 7.2},
{"2009": 9, "2010": 24, "2011": 10.8}],
[
{"2009": 100, "2010": 200, "2011": 300},
{"2009": 10, "2010": 20, "2011": 30},
{"2009": 30, "2010": 60, "2011": 90},
{"2009": 12, "2010": 48, "2011": 108},
{"2009": 9.16, "2010": 16.84, "2011": 23.0448},
{"2009": 27.48, "2010": 50.52, "2011": 69.1344},
{"2009": 12, "2010": 24, "2011": 36},
{"2009": 1.2, "2010": 2.4, "2011": 3.6},
{"2009": 3.6, "2010": 7.2, "2011": 10.8},
{"2009": 12, "2010": 24, "2011": 36},
{"2009": 0.36, "2010": 0.72, "2011": 1.08},
{"2009": 1.08, "2010": 2.16, "2011": 3.24},
{"2009": 1000, "2010": 2000, "2011": 3000},
{"2009": 10, "2010": 40, "2011": 60},
{"2009": 30, "2010": 240, "2011": 90},
{"2009": 1120, "2010": 2480, "2011": 4080},
{"2009": 9.16, "2010": 33.68, "2011": 46.0896},
{"2009": 27.48, "2010": 202.10, "2011": 69.1344},
{"2009": 120, "2010": 240, "2011": 360},
{"2009": 1.2, "2010": 4.8, "2011": 7.2},
{"2009": 3.6, "2010": 28.8, "2011": 10.8},
{"2009": 240, "2010": 480, "2011": 720},
{"2009": 0.36, "2010": 1.44, "2011": 2.16},
{"2009": 1.08, "2010": 8.64, "2011": 3.24}]]],
[[[
{"2025": 400, "2026": 500, "2027": 600},
{"2025": 40, "2026": 50, "2027": 60},
{"2025": 120, "2026": 150, "2027": 180},
{"2025": 400, "2026": 500, "2027": 600},
{"2025": 6, "2026": 7.5, "2027": 9},
{"2025": 18, "2026": 22.5, "2027": 27},
{"2025": 400, "2026": 166.67, "2027": 240},
{"2025": 40, "2026": 16.67, "2027": 24},
{"2025": 120, "2026": 50, "2027": 72},
{"2025": 400, "2026": 166.67, "2027": 240},
{"2025": 6, "2026": 2.5, "2027": 3.6},
{"2025": 18, "2026": 7.5, "2027": 10.8},
{"2025": 8000, "2026": 10000, "2027": 12000},
{"2025": 80, "2026": 100, "2027": 120},
{"2025": 120, "2026": 150, "2027": 540},
{"2025": 12000, "2026": 15000, "2027": 18000},
{"2025": 12, "2026": 15, "2027": 18},
{"2025": 18, "2026": 22.5, "2027": 81},
{"2025": 8000, "2026": 3333.33, "2027": 4800},
{"2025": 80, "2026": 33.33, "2027": 48},
{"2025": 120, "2026": 50, "2027": 216},
{"2025": 12000, "2026": 5000, "2027": 7200},
{"2025": 12, "2026": 5, "2027": 7.2},
{"2025": 18, "2026": 7.5, "2027": 32.4}],
[
{"2025": 400, "2026": 500, "2027": 600},
{"2025": 40, "2026": 50, "2027": 60},
{"2025": 120, "2026": 150, "2027": 180},
{"2025": 400, "2026": 500, "2027": 600},
{"2025": 6, "2026": 7.5, "2027": 9},
{"2025": 18, "2026": 22.5, "2027": 27},
{"2025": 400, "2026": 0, "2027": 0},
{"2025": 40, "2026": 0, "2027": 0},
{"2025": 120, "2026": 0, "2027": 0},
{"2025": 400, "2026": 0, "2027": 0},
{"2025": 6, "2026": 0, "2027": 0},
{"2025": 18, "2026": 0, "2027": 0},
{"2025": 8000, "2026": 10000, "2027": 12000},
{"2025": 80, "2026": 100, "2027": 120},
{"2025": 120, "2026": 150, "2027": 540},
{"2025": 12000, "2026": 15000, "2027": 18000},
{"2025": 12, "2026": 15, "2027": 18},
{"2025": 18, "2026": 22.5, "2027": 81},
{"2025": 8000, "2026": 0, "2027": 0},
{"2025": 80, "2026": 0, "2027": 0},
{"2025": 120, "2026": 0, "2027": 0},
{"2025": 12000, "2026": 0, "2027": 0},
{"2025": 12, "2026": 0, "2027": 0},
{"2025": 18, "2026": 0, "2027": 0}]],
[[
{"2025": 400, "2026": 500, "2027": 600},
{"2025": 40, "2026": 50, "2027": 60},
{"2025": 120, "2026": 150, "2027": 180},
{"2025": 400, "2026": 500, "2027": 600},
{"2025": 6, "2026": 7.5, "2027": 9},
{"2025": 18, "2026": 22.5, "2027": 27},
{"2025": 400, "2026": 166.67, "2027": 240},
{"2025": 40, "2026": 16.67, "2027": 24},
{"2025": 120, "2026": 50, "2027": 72},
{"2025": 400, "2026": 166.67, "2027": 240},
{"2025": 6, "2026": 2.5, "2027": 3.6},
{"2025": 18, "2026": 7.5, "2027": 10.8},
{"2025": 8000, "2026": 10000, "2027": 12000},
{"2025": 80, "2026": 100, "2027": 120},
{"2025": 120, "2026": 150, "2027": 540},
{"2025": 12000, "2026": 15000, "2027": 18000},
{"2025": 12, "2026": 15, "2027": 18},
{"2025": 18, "2026": 22.5, "2027": 81},
{"2025": 8000, "2026": 3333.33, "2027": 4800},
{"2025": 80, "2026": 33.33, "2027": 48},
{"2025": 120, "2026": 50, "2027": 216},
{"2025": 12000, "2026": 5000, "2027": 7200},
{"2025": 12, "2026": 5, "2027": 7.2},
{"2025": 18, "2026": 7.5, "2027": 32.4}],
[
{"2025": 400, "2026": 500, "2027": 600},
{"2025": 40, "2026": 50, "2027": 60},
{"2025": 120, "2026": 150, "2027": 180},
{"2025": 48, "2026": 120, "2027": 216},
{"2025": 35.92, "2026": 40.41, "2027": 43.1088},
{"2025": 107.76, "2026": 121.24, "2027": 129.3264},
{"2025": 48, "2026": 60, "2027": 72},
{"2025": 4.8, "2026": 6, "2027": 7.2},
{"2025": 14.4, "2026": 18.0, "2027": 21.6},
{"2025": 48, "2026": 60, "2027": 72},
{"2025": 0.72, "2026": 0.90, "2027": 1.08},
{"2025": 2.16, "2026": 2.70, "2027": 3.24},
{"2025": 8000, "2026": 10000, "2027": 12000},
{"2025": 80, "2026": 100, "2027": 120},
{"2025": 120, "2026": 150, "2027": 540},
{"2025": 8480, "2026": 11200, "2027": 14160},
{"2025": 71.84, "2026": 80.82, "2027": 86.2176},
{"2025": 107.76, "2026": 121.24, "2027": 387.9792},
{"2025": 960, "2026": 1200, "2027": 1440},
{"2025": 9.6, "2026": 12.0, "2027": 14.4},
{"2025": 14.4, "2026": 18.0, "2027": 64.8},
{"2025": 1440, "2026": 1800, "2027": 2160},
{"2025": 1.44, "2026": 1.80, "2027": 2.16},
{"2025": 2.16, "2026": 2.70, "2027": 9.72}]]],
[[[
{"2020": 700, "2021": 800, "2022": 900},
{"2020": 70, "2021": 80, "2022": 90},
{"2020": 210, "2021": 240, "2022": 270},
{"2020": 700, "2021": 800, "2022": 900},
{"2020": 52.5, "2021": 60, "2022": 67.5},
{"2020": 157.5, "2021": 180.0, "2022": 202.5},
{"2020": 700, "2021": 760, "2022": 90},
{"2020": 70, "2021": 76, "2022": 9},
{"2020": 210, "2021": 228, "2022": 27},
{"2020": 700, "2021": 760, "2022": 90},
{"2020": 52.50, "2021": 57.00, "2022": 6.75},
{"2020": 157.50, "2021": 171.0, "2022": 20.25},
{"2020": 21000, "2021": 24000, "2022": 27000},
{"2020": 140, "2021": 160, "2022": 180},
{"2020": 210, "2021": 240, "2022": 270},
{"2020": 28000, "2021": 32000, "2022": 36000},
{"2020": 105, "2021": 120, "2022": 135},
{"2020": 157.5, "2021": 180.0, "2022": 202.5},
{"2020": 21000, "2021": 22800, "2022": 2700},
{"2020": 140, "2021": 152, "2022": 18.0},
{"2020": 210, "2021": 228, "2022": 27.0},
{"2020": 28000, "2021": 30400, "2022": 3600},
{"2020": 105.0, "2021": 114.0, "2022": 13.5},
{"2020": 157.50, "2021": 171.00, "2022": 20.25}],
[
{"2020": 700, "2021": 800, "2022": 900},
{"2020": 70, "2021": 80, "2022": 90},
{"2020": 210, "2021": 240, "2022": 270},
{"2020": 700, "2021": 800, "2022": 900},
{"2020": 52.5, "2021": 60, "2022": 67.5},
{"2020": 157.5, "2021": 180, "2022": 202.5},
{"2020": 700, "2021": 0, "2022": 0},
{"2020": 70, "2021": 0, "2022": 0},
{"2020": 210, "2021": 0, "2022": 0},
{"2020": 700, "2021": 0, "2022": 0},
{"2020": 52.5, "2021": 0, "2022": 0},
{"2020": 157.5, "2021": 0, "2022": 0},
{"2020": 21000, "2021": 24000, "2022": 27000},
{"2020": 140, "2021": 160, "2022": 180},
{"2020": 210, "2021": 240, "2022": 270},
{"2020": 28000, "2021": 32000, "2022": 36000},
{"2020": 105, "2021": 120, "2022": 135},
{"2020": 157.5, "2021": 180.0, "2022": 202.5},
{"2020": 21000, "2021": 0, "2022": 0},
{"2020": 140, "2021": 0, "2022": 0},
{"2020": 210, "2021": 0, "2022": 0},
{"2020": 28000, "2021": 0, "2022": 0},
{"2020": 105, "2021": 0, "2022": 0},
{"2020": 157.5, "2021": 0, "2022": 0}]],
[[
{"2020": 700, "2021": 800, "2022": 900},
{"2020": 70, "2021": 80, "2022": 90},
{"2020": 210, "2021": 240, "2022": 270},
{"2020": 700, "2021": 800, "2022": 890},
{"2020": 52.5, "2021": 60, "2022": 67.5},
{"2020": 157.5, "2021": 180.0, "2022": 202.5},
{"2020": 700, "2021": 760, "2022": 90},
{"2020": 70, "2021": 76, "2022": 9},
{"2020": 210, "2021": 228, "2022": 27},
{"2020": 700, "2021": 760, "2022": 90},
{"2020": 52.50, "2021": 57.00, "2022": 6.75},
{"2020": 157.50, "2021": 171.0, "2022": 20.25},
{"2020": 21000, "2021": 24000, "2022": 27000},
{"2020": 140, "2021": 160, "2022": 180},
{"2020": 210, "2021": 240, "2022": 270},
{"2020": 28000, "2021": 32000, "2022": 35900},
{"2020": 105, "2021": 120, "2022": 135},
{"2020": 157.5, "2021": 180.0, "2022": 202.5},
{"2020": 21000, "2021": 22800, "2022": 2700},
{"2020": 140, "2021": 152, "2022": 18.0},
{"2020": 210, "2021": 228, "2022": 27.0},
{"2020": 28000, "2021": 30400, "2022": 3600},
{"2020": 105.0, "2021": 114.0, "2022": 13.5},
{"2020": 157.50, "2021": 171.00, "2022": 20.25}],
[
{"2020": 700, "2021": 800, "2022": 900},
{"2020": 70, "2021": 80, "2022": 90},
{"2020": 210, "2021": 240, "2022": 270},
{"2020": 84, "2021": 192, "2022": 324},
{"2020": 67.90, "2021": 75.49, "2022": 82.548},
{"2020": 203.70, "2021": 226.46, "2022": 247.644},
{"2020": 84, "2021": 96, "2022": 108},
{"2020": 8.4, "2021": 9.6, "2022": 10.8},
{"2020": 25.2, "2021": 28.8, "2022": 32.4},
{"2020": 84, "2021": 96, "2022": 108},
{"2020": 6.3, "2021": 7.2, "2022": 8.1},
{"2020": 18.9, "2021": 21.6, "2022": 24.3},
{"2020": 21000, "2021": 24000, "2022": 27000},
{"2020": 140, "2021": 160, "2022": 180},
{"2020": 210, "2021": 240, "2022": 270},
{"2020": 21840, "2021": 25920, "2022": 30240},
{"2020": 135.8, "2021": 150.98, "2022": 165.096},
{"2020": 203.70, "2021": 226.46, "2022": 247.644},
{"2020": 2520, "2021": 2880, "2022": 3240},
{"2020": 16.8, "2021": 19.2, "2022": 21.6},
{"2020": 25.2, "2021": 28.8, "2022": 32.4},
{"2020": 3360, "2021": 3840, "2022": 4320},
{"2020": 12.6, "2021": 14.4, "2022": 16.2},
{"2020": 18.9, "2021": 21.6, "2022": 24.3}]]]]
def test_ok(self):
"""Test the 'partition_microsegment' function given valid inputs.
Raises:
AssertionError: If function yields unexpected results.
"""
# Loop through 'ok_out' elements
for elem in range(0, len(self.ok_out)):
# Reset AEO time horizon and market entry/exit years
self.measure_instance.handyvars.aeo_years = \
self.time_horizons[elem]
self.measure_instance.market_entry_year = \
int(self.time_horizons[elem][0])
self.measure_instance.market_exit_year = \
int(self.time_horizons[elem][-1]) + 1
# Loop through two test schemes (Technical potential and Max
# adoption potential)
for scn in range(0, len(self.handyvars.adopt_schemes)):
# Loop through two microsegment key chains (one applying
# to new structure type, another to existing structure type)
for k in range(0, len(self.ok_mskeys_in)):
# List of output dicts generated by the function
lists1 = self.measure_instance.partition_microsegment(
self.handyvars.adopt_schemes[scn],
self.ok_diffuse_params_in,
self.ok_mskeys_in[k],
self.ok_mkt_scale_frac_in,
self.ok_new_bldg_constr[elem],
self.ok_stock_in[elem], self.ok_energy_in[elem],
self.ok_carb_in[elem],
self.ok_base_cost_in[elem], self.ok_cost_meas_in[elem],
self.ok_cost_energy_base_in,
self.ok_cost_energy_meas_in,
self.ok_relperf_in[elem],
self.ok_life_base_in,
self.ok_life_meas_in,
self.ok_ssconv_base_in, self.ok_ssconv_meas_in,
self.ok_carbint_base_in, self.ok_carbint_meas_in,
self.ok_energy_scnd_in[elem])
# Correct list of output dicts
lists2 = self.ok_out[elem][scn][k]
# Compare each element of the lists of output dicts
for elem2 in range(0, len(lists1)):
self.dict_check(lists1[elem2], lists2[elem2])
class CheckMarketsTest(unittest.TestCase, CommonMethods):
"""Test 'check_mkt_inputs' function.
Ensure that the function properly raises a ValueError when
a measure's applicable baseline market input names are invalid.
Attributes:
sample_measures_fail (list): Sample measures with applicable
baseline market input names that should yield an error.
"""
@classmethod
def setUpClass(cls):
# Base directory
base_dir = os.getcwd()
handyvars = ecm_prep.UsefulVars(base_dir,
ecm_prep.UsefulInputFiles())
sample_measures_fail = [{
"name": "sample measure 5",
"market_entry_year": None,
"market_exit_year": None,
"installed_cost": 999,
"cost_units": "dummy",
"energy_efficiency": {
"primary": 999, "secondary": None},
"energy_efficiency_units": {
"primary": "dummy", "secondary": None},
"product_lifetime": 999,
"climate_zone": "all",
"bldg_type": "all commercial",
"structure_type": "all",
"fuel_type": {
"primary": [
"electricity", "natty gas"],
"secondary": None},
"fuel_switch_to": None,
"end_use": {
"primary": [
"heating", "water heating"],
"secondary": None},
"technology": {
"primary": [
"all heating", "electric WH"],
"secondary": None}},
{
"name": "sample measure 6",
"market_entry_year": None,
"market_exit_year": None,
"installed_cost": 999,
"cost_units": "dummy",
"energy_efficiency": {
"primary": 999, "secondary": None},
"energy_efficiency_units": {
"primary": "dummy", "secondary": None},
"product_lifetime": 999,
"climate_zone": "all",
"bldg_type": ["assembling", "education"],
"structure_type": "all",
"fuel_type": {
"primary": "natural gas",
"secondary": None},
"fuel_switch_to": None,
"end_use": {
"primary": "heating",
"secondary": None},
"technology": {
"primary": "all",
"secondary": None}}]
cls.sample_measures_fail = [ecm_prep.Measure(
handyvars, **x) for x in sample_measures_fail]
def test_invalid_mkts(self):
"""Test 'check_mkt_inputs' function given invalid inputs."""
for m in self.sample_measures_fail:
with self.assertRaises(ValueError):
m.check_mkt_inputs()
class FillParametersTest(unittest.TestCase, CommonMethods):
"""Test 'fill_attr' function.
Ensure that the function properly converts user-defined 'all'
climate zone, building type, fuel type, end use, and technology
attributes to the expanded set of names needed to retrieve measure
stock, energy, and technology characteristics data.
Attributes:
sample_measures_in (list): Sample measures with attributes,
including 'all', that must be filled out.
ok_primary_cpl_out (list): List of cost, performance, and
lifetime attributes that should be yielded by the function
for the first two sample measures, given valid inputs.
ok_primary_mkts_out (list): List of climate zone, building
type, primary fuel, primary end use, and primary technology
attributes that should be yielded by the function for each
of the sample measures, given valid inputs.
"""
@classmethod
def setUpClass(cls):
"""Define variables and objects for use across all class functions."""
# Base directory
base_dir = os.getcwd()
handyvars = ecm_prep.UsefulVars(base_dir,
ecm_prep.UsefulInputFiles())
sample_measures = [{
"name": "sample measure 1",
"market_entry_year": None,
"market_exit_year": None,
"installed_cost": {
"all residential": 1,
"all commercial": 2},
"cost_units": {
"all residential": "cost unit 1",
"all commercial": "cost unit 2"},
"energy_efficiency": {
"all residential": {
"heating": 111, "cooling": 111},
"all commercial": 222},
"energy_efficiency_units": {
"all residential": "energy unit 1",
"all commercial": "energy unit 2"},
"product_lifetime": {
"all residential": 11,
"all commercial": 22},
"climate_zone": "all",
"bldg_type": "all",
"structure_type": "all",
"fuel_type": "all",
"fuel_switch_to": None,
"end_use": "all",
"technology": "all"},
{
"name": "sample measure 2",
"market_entry_year": None,
"market_exit_year": None,
"installed_cost": {
"all residential": 1,
"assembly": 2,
"education": 2},
"cost_units": {
"all residential": "cost unit 1",
"assembly": "cost unit 2",
"education": "cost unit 2"},
"energy_efficiency": {
"all residential": {
"heating": 111, "cooling": 111},
"assembly": 222,
"education": 222},
"energy_efficiency_units": {
"all residential": "energy unit 1",
"assembly": "energy unit 2",
"education": "energy unit 2"},
"product_lifetime": {
"all residential": 11,
"assembly": 22,
"education": 22},
"climate_zone": "all",
"bldg_type": [
"all residential", "assembly", "education"],
"structure_type": "all",
"fuel_type": "all",
"fuel_switch_to": None,
"end_use": "all",
"technology": "all"},
{
"name": "sample measure 3",
"market_entry_year": None,
"market_exit_year": None,
"installed_cost": 999,
"cost_units": "dummy",
"energy_efficiency": 999,
"energy_efficiency_units": "dummy",
"product_lifetime": 999,
"climate_zone": "all",
"bldg_type": "all",
"structure_type": "all",
"fuel_type": "all",
"fuel_switch_to": None,
"end_use": [
"heating", "cooling", "secondary heating"],
"technology": "all"},
{
"name": "sample measure 4",
"market_entry_year": None,
"market_exit_year": None,
"installed_cost": 999,
"cost_units": "dummy",
"energy_efficiency": 999,
"energy_efficiency_units": "dummy",
"product_lifetime": 999,
"climate_zone": "all",
"bldg_type": "all residential",
"structure_type": "all",
"fuel_type": "electricity",
"fuel_switch_to": None,
"end_use": [
"lighting", "water heating"],
"technology": "all"},
{
"name": "sample measure 5",
"market_entry_year": None,
"market_exit_year": None,
"installed_cost": 999,
"cost_units": "dummy",
"energy_efficiency": {
"primary": 999, "secondary": None},
"energy_efficiency_units": {
"primary": "dummy", "secondary": None},
"product_lifetime": 999,
"climate_zone": "all",
"bldg_type": "all commercial",
"structure_type": "all",
"fuel_type": [
"electricity", "natural gas"],
"fuel_switch_to": None,
"end_use": [
"heating", "water heating"],
"technology": [
"all heating", "electric WH"]},
{
"name": "sample measure 6",
"market_entry_year": None,
"market_exit_year": None,
"installed_cost": 999,
"cost_units": "dummy",
"energy_efficiency": 999,
"energy_efficiency_units": "dummy",
"product_lifetime": 999,
"climate_zone": "all",
"bldg_type": ["assembly", "education"],
"structure_type": "all",
"fuel_type": "natural gas",
"fuel_switch_to": None,
"end_use": "heating",
"technology": "all"},
{
"name": "sample measure 7",
"market_entry_year": None,
"market_exit_year": None,
"installed_cost": 999,
"cost_units": "dummy",
"energy_efficiency": 999,
"energy_efficiency_units": "dummy",
"product_lifetime": 999,
"climate_zone": "all",
"bldg_type": [
"all residential", "small office"],
"structure_type": "all",
"fuel_type": "natural gas",
"fuel_switch_to": None,
"end_use": "heating",
"technology": "all"},
{
"name": "sample measure 8",
"market_entry_year": None,
"market_exit_year": None,
"installed_cost": 999,
"cost_units": "dummy",
"energy_efficiency": 999,
"energy_efficiency_units": "dummy",
"product_lifetime": 999,
"climate_zone": "all",
"bldg_type": "small office",
"structure_type": "all",
"fuel_type": "natural gas",
"fuel_switch_to": None,
"end_use": "heating",
"technology": "all"}]
cls.sample_measures_in = [ecm_prep.Measure(
handyvars, **x) for x in sample_measures]
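# Expected installed cost, cost units, energy efficiency, efficiency units,
# and lifetime dicts (in that order) for the first two sample measures after
# 'fill_attr' expands the 'all residential' and 'all commercial' keys
# (checked in 'test_fill')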
cls.ok_primary_cpl_out = [[{
'assembly': 2, 'education': 2, 'food sales': 2,
'food service': 2, 'health care': 2,
'large office': 2, 'lodging': 2, 'mercantile/service': 2,
'mobile home': 1, 'multi family home': 1, 'other': 2,
'single family home': 1, 'small office': 2, 'warehouse': 2},
{
'assembly': "cost unit 2", 'education': "cost unit 2",
'food sales': "cost unit 2",
'food service': "cost unit 2", 'health care': "cost unit 2",
'large office': "cost unit 2", 'lodging': "cost unit 2",
'mercantile/service': "cost unit 2",
'mobile home': "cost unit 1",
'multi family home': "cost unit 1", 'other': "cost unit 2",
'single family home': "cost unit 1",
'small office': "cost unit 2", 'warehouse': "cost unit 2"},
{
'assembly': 222, 'education': 222, 'food sales': 222,
'food service': 222, 'health care': 222,
'large office': 222, 'lodging': 222, 'mercantile/service': 222,
'mobile home': {"heating": 111, "cooling": 111},
'multi family home': {"heating": 111, "cooling": 111},
'other': 222,
'single family home': {"heating": 111, "cooling": 111},
'small office': 222, 'warehouse': 222},
{
'assembly': "energy unit 2", 'education': "energy unit 2",
'food sales': "energy unit 2",
'food service': "energy unit 2", 'health care': "energy unit 2",
'large office': "energy unit 2", 'lodging': "energy unit 2",
'mercantile/service': "energy unit 2",
'mobile home': "energy unit 1",
'multi family home': "energy unit 1", 'other': "energy unit 2",
'single family home': "energy unit 1",
'small office': "energy unit 2", 'warehouse': "energy unit 2"},
{
'assembly': 22, 'education': 22, 'food sales': 22,
'food service': 22, 'health care': 22,
'large office': 22, 'lodging': 22, 'mercantile/service': 22,
'mobile home': 11, 'multi family home': 11, 'other': 22,
'single family home': 11, 'small office': 22,
'warehouse': 22}],
[{
'assembly': 2, 'education': 2, 'mobile home': 1,
'multi family home': 1, 'single family home': 1},
{
'assembly': "cost unit 2", 'education': "cost unit 2",
'mobile home': "cost unit 1", 'multi family home': "cost unit 1",
'single family home': "cost unit 1"},
{
'assembly': 222, 'education': 222,
'mobile home': {"heating": 111, "cooling": 111},
'multi family home': {"heating": 111, "cooling": 111},
'single family home': {"heating": 111, "cooling": 111}},
{
'assembly': "energy unit 2", 'education': "energy unit 2",
'mobile home': "energy unit 1",
'multi family home': "energy unit 1",
'single family home': "energy unit 1"},
{
'assembly': 22, 'education': 22, 'mobile home': 11,
'multi family home': 11, 'single family home': 11}]]
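# Expected climate zone, building type, structure type, primary fuel,
# primary end use, and primary technology attributes for each sample measure
# after 'fill_attr' expands any 'all' inputs (checked in 'test_fill')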
cls.ok_primary_mkts_out = [[
["AIA_CZ1", "AIA_CZ2", "AIA_CZ3", "AIA_CZ4", "AIA_CZ5"],
["single family home", "multi family home", "mobile home",
"assembly", "education", "food sales", "food service",
"health care", "lodging", "large office", "small office",
"mercantile/service", "warehouse", "other"],
["new", "existing"],
["electricity", "natural gas", "distillate", "other fuel"],
['drying', 'other (grid electric)', 'water heating',
'cooling', 'cooking', 'computers', 'lighting',
'secondary heating', 'TVs', 'heating', 'refrigeration',
'fans & pumps', 'ceiling fan', 'ventilation', 'MELs',
'non-PC office equipment', 'PCs'],
['dishwasher', 'other MELs',
'clothes washing', 'freezers',
'solar WH', 'electric WH',
'room AC', 'ASHP', 'GSHP', 'central AC',
'desktop PC', 'laptop PC', 'network equipment',
'monitors',
'linear fluorescent (T-8)',
'linear fluorescent (T-12)',
'reflector (LED)', 'general service (CFL)',
'external (high pressure sodium)',
'general service (incandescent)',
'external (CFL)',
'external (LED)', 'reflector (CFL)',
'reflector (incandescent)',
'general service (LED)',
'external (incandescent)',
'linear fluorescent (LED)',
'reflector (halogen)',
'non-specific',
'home theater & audio', 'set top box',
'video game consoles', 'DVD', 'TV',
'resistance heat',
'NGHP', 'furnace (NG)', 'boiler (NG)',
'boiler (distillate)', 'furnace (distillate)',
'resistance', 'furnace (kerosene)',
'stove (wood)', 'furnace (LPG)',
'secondary heating (wood)',
'secondary heating (coal)',
'secondary heating (kerosene)',
'secondary heating (LPG)',
'VAV_Vent', 'CAV_Vent',
'Solar water heater', 'HP water heater',
'elec_booster_water_heater',
'elec_water_heater',
'rooftop_AC', 'scroll_chiller',
'res_type_central_AC', 'reciprocating_chiller',
'comm_GSHP-cool', 'centrifugal_chiller',
'rooftop_ASHP-cool', 'wall-window_room_AC',
'screw_chiller',
'electric_res-heat', 'comm_GSHP-heat',
'rooftop_ASHP-heat', 'elec_boiler',
'Commercial Beverage Merchandisers',
'Commercial Compressor Rack Systems', 'Commercial Condensers',
'Commercial Ice Machines', 'Commercial Reach-In Freezers',
'Commercial Reach-In Refrigerators',
'Commercial Refrigerated Vending Machines',
'Commercial Supermarket Display Cases',
'Commercial Walk-In Freezers',
'Commercial Walk-In Refrigerators',
'lab fridges and freezers',
'non-road electric vehicles',
'kitchen ventilation', 'escalators',
'distribution transformers',
'large video displays', 'video displays',
'elevators', 'laundry', 'medical imaging',
'coffee brewers', 'fume hoods',
'security systems',
'100W A19 Incandescent', '100W Equivalent A19 Halogen',
'100W Equivalent CFL Bare Spiral', '100W Equivalent LED A Lamp',
'Halogen Infrared Reflector (HIR) PAR38', 'Halogen PAR38',
'LED Integrated Luminaire', 'LED PAR38', 'Mercury Vapor',
'Metal Halide', 'Sodium Vapor', 'SodiumVapor', 'T5 F28',
'T5 4xF54 HO High Bay', 'T8 F28 High-efficiency/High-Output',
'T8 F32 Commodity', 'T8 F59 High Efficiency',
'T8 F59 Typical Efficiency', 'T8 F96 High Output',
'Range, Electric-induction, 4 burner, oven, ',
'Range, Electric, 4 burner, oven, 11 griddle',
'gas_eng-driven_RTAC', 'gas_chiller',
'res_type_gasHP-cool',
'gas_eng-driven_RTHP-cool',
'gas_water_heater', 'gas_instantaneous_WH',
'gas_booster_WH',
'Range, Gas, 4 powered burners, convect. ove',
'Range, Gas, 4 burner, oven, 11 griddle ',
'gas_eng-driven_RTHP-heat',
'res_type_gasHP-heat', 'gas_boiler',
'gas_furnace', 'oil_water_heater',
'oil_boiler', 'oil_furnace', None]],
[
["AIA_CZ1", "AIA_CZ2", "AIA_CZ3", "AIA_CZ4", "AIA_CZ5"],
["single family home", "multi family home", "mobile home",
"assembly", "education"],
["new", "existing"],
["electricity", "natural gas", "distillate", "other fuel"],
['drying', 'other (grid electric)', 'water heating',
'cooling', 'cooking', 'computers', 'lighting',
'secondary heating', 'TVs', 'heating', 'refrigeration',
'fans & pumps', 'ceiling fan', 'ventilation', 'MELs',
'non-PC office equipment', 'PCs'],
['dishwasher', 'other MELs',
'clothes washing', 'freezers',
'solar WH', 'electric WH',
'room AC', 'ASHP', 'GSHP', 'central AC',
'desktop PC', 'laptop PC', 'network equipment',
'monitors',
'linear fluorescent (T-8)',
'linear fluorescent (T-12)',
'reflector (LED)', 'general service (CFL)',
'external (high pressure sodium)',
'general service (incandescent)',
'external (CFL)',
'external (LED)', 'reflector (CFL)',
'reflector (incandescent)',
'general service (LED)',
'external (incandescent)',
'linear fluorescent (LED)',
'reflector (halogen)',
'non-specific',
'home theater & audio', 'set top box',
'video game consoles', 'DVD', 'TV',
'resistance heat',
'NGHP', 'furnace (NG)', 'boiler (NG)',
'boiler (distillate)', 'furnace (distillate)',
'resistance', 'furnace (kerosene)',
'stove (wood)', 'furnace (LPG)',
'secondary heating (wood)',
'secondary heating (coal)',
'secondary heating (kerosene)',
'secondary heating (LPG)',
'VAV_Vent', 'CAV_Vent',
'Solar water heater', 'HP water heater',
'elec_booster_water_heater',
'elec_water_heater',
'rooftop_AC', 'scroll_chiller',
'res_type_central_AC', 'reciprocating_chiller',
'comm_GSHP-cool', 'centrifugal_chiller',
'rooftop_ASHP-cool', 'wall-window_room_AC',
'screw_chiller',
'electric_res-heat', 'comm_GSHP-heat',
'rooftop_ASHP-heat', 'elec_boiler',
'Commercial Beverage Merchandisers',
'Commercial Compressor Rack Systems', 'Commercial Condensers',
'Commercial Ice Machines', 'Commercial Reach-In Freezers',
'Commercial Reach-In Refrigerators',
'Commercial Refrigerated Vending Machines',
'Commercial Supermarket Display Cases',
'Commercial Walk-In Freezers',
'Commercial Walk-In Refrigerators',
'lab fridges and freezers',
'non-road electric vehicles',
'kitchen ventilation', 'escalators',
'distribution transformers',
'large video displays', 'video displays',
'elevators', 'laundry', 'medical imaging',
'coffee brewers', 'fume hoods',
'security systems',
'100W A19 Incandescent', '100W Equivalent A19 Halogen',
'100W Equivalent CFL Bare Spiral', '100W Equivalent LED A Lamp',
'Halogen Infrared Reflector (HIR) PAR38', 'Halogen PAR38',
'LED Integrated Luminaire', 'LED PAR38', 'Mercury Vapor',
'Metal Halide', 'Sodium Vapor', 'SodiumVapor', 'T5 F28',
'T5 4xF54 HO High Bay', 'T8 F28 High-efficiency/High-Output',
'T8 F32 Commodity', 'T8 F59 High Efficiency',
'T8 F59 Typical Efficiency', 'T8 F96 High Output',
'Range, Electric-induction, 4 burner, oven, ',
'Range, Electric, 4 burner, oven, 11 griddle',
'gas_eng-driven_RTAC', 'gas_chiller',
'res_type_gasHP-cool',
'gas_eng-driven_RTHP-cool',
'gas_water_heater', 'gas_instantaneous_WH',
'gas_booster_WH',
'Range, Gas, 4 powered burners, convect. ove',
'Range, Gas, 4 burner, oven, 11 griddle ',
'gas_eng-driven_RTHP-heat',
'res_type_gasHP-heat', 'gas_boiler',
'gas_furnace', 'oil_water_heater',
'oil_boiler', 'oil_furnace', None]],
[
["AIA_CZ1", "AIA_CZ2", "AIA_CZ3", "AIA_CZ4", "AIA_CZ5"],
["single family home", "multi family home", "mobile home",
"assembly", "education", "food sales", "food service",
"health care", "lodging", "large office", "small office",
"mercantile/service", "warehouse", "other"],
["new", "existing"],
["electricity", "natural gas", "distillate", "other fuel"],
['cooling', 'secondary heating', 'heating'],
['rooftop_AC', 'scroll_chiller',
'res_type_central_AC', 'reciprocating_chiller',
'comm_GSHP-cool', 'centrifugal_chiller',
'rooftop_ASHP-cool', 'wall-window_room_AC',
'screw_chiller', 'electric_res-heat',
'comm_GSHP-heat', 'rooftop_ASHP-heat', 'elec_boiler',
'non-specific', 'furnace (NG)', 'boiler (NG)',
'NGHP', 'room AC', 'ASHP', 'GSHP', 'central AC',
'resistance heat', 'boiler (distillate)',
'furnace (distillate)', 'resistance', 'furnace (kerosene)',
'stove (wood)', 'furnace (LPG)',
'gas_eng-driven_RTAC', 'gas_chiller',
'res_type_gasHP-cool', 'gas_eng-driven_RTHP-cool',
'gas_eng-driven_RTHP-heat', 'res_type_gasHP-heat',
'gas_boiler', 'gas_furnace', 'oil_boiler', 'oil_furnace',
'secondary heating (wood)', 'secondary heating (coal)',
'secondary heating (kerosene)', 'secondary heating (LPG)']],
[
["AIA_CZ1", "AIA_CZ2", "AIA_CZ3", "AIA_CZ4", "AIA_CZ5"],
["single family home", "multi family home", "mobile home"],
["new", "existing"], "electricity",
["lighting", "water heating"],
['solar WH', 'electric WH', 'linear fluorescent (T-8)',
'linear fluorescent (T-12)',
'reflector (LED)', 'general service (CFL)',
'external (high pressure sodium)',
'general service (incandescent)',
'external (CFL)',
'external (LED)', 'reflector (CFL)',
'reflector (incandescent)',
'general service (LED)',
'external (incandescent)',
'linear fluorescent (LED)',
'reflector (halogen)']],
[
["AIA_CZ1", "AIA_CZ2", "AIA_CZ3", "AIA_CZ4", "AIA_CZ5"],
["assembly", "education", "food sales", "food service",
"health care", "lodging", "large office", "small office",
"mercantile/service", "warehouse", "other"],
["new", "existing"],
["electricity", "natural gas"],
["heating", "water heating"],
['electric_res-heat', 'comm_GSHP-heat', 'rooftop_ASHP-heat',
'elec_boiler', 'gas_eng-driven_RTHP-heat', 'res_type_gasHP-heat',
'gas_boiler', 'gas_furnace', 'electric WH']],
[
["AIA_CZ1", "AIA_CZ2", "AIA_CZ3", "AIA_CZ4", "AIA_CZ5"],
["assembly", "education"],
["new", "existing"], "natural gas", "heating",
["res_type_gasHP-heat", "gas_eng-driven_RTHP-heat",
"gas_boiler", "gas_furnace"]],
[
["AIA_CZ1", "AIA_CZ2", "AIA_CZ3", "AIA_CZ4", "AIA_CZ5"],
["single family home", "multi family home", "mobile home",
"small office"],
["new", "existing"], "natural gas", "heating",
["furnace (NG)", "NGHP", "boiler (NG)", "res_type_gasHP-heat",
"gas_eng-driven_RTHP-heat", "gas_boiler", "gas_furnace"]],
[
["AIA_CZ1", "AIA_CZ2", "AIA_CZ3", "AIA_CZ4", "AIA_CZ5"],
"small office", ["new", "existing"], "natural gas",
"heating", [
"res_type_gasHP-heat", "gas_eng-driven_RTHP-heat",
"gas_boiler", "gas_furnace"]]]
def test_fill(self):
"""Test 'fill_attr' function given valid inputs.
Note:
Tests that measure attributes containing 'all' are properly
filled in with the appropriate attribute details.
Raises:
AssertionError: If function yields unexpected results.
"""
# Loop through sample measures
for ind, m in enumerate(self.sample_measures_in):
# Execute the function on each sample measure
m.fill_attr()
# For the first two sample measures, check that cost, performance,
# and lifetime attribute dicts with 'all residential' and
# 'all commercial' keys were properly filled out
if ind < 2:
[self.dict_check(x, y) for x, y in zip([
m.installed_cost, m.cost_units,
m.energy_efficiency["primary"],
m.energy_efficiency_units["primary"],
m.product_lifetime],
[o for o in self.ok_primary_cpl_out[ind]])]
# For each sample measure, check that 'all' climate zone,
# building type/vintage, fuel type, end use, and technology
# attributes were properly filled out
self.assertEqual([
sorted(x, key=lambda x: (x is None, x)) if isinstance(x, list)
else x for x in [
m.climate_zone, m.bldg_type, m.structure_type,
m.fuel_type['primary'], m.end_use['primary'],
m.technology['primary']]],
[sorted(x, key=lambda x: (x is None, x)) if isinstance(x, list)
else x for x in self.ok_primary_mkts_out[ind]])
class CreateKeyChainTest(unittest.TestCase, CommonMethods):
"""Test 'create_keychain' function.
Ensure that the function yields proper key chain output given
input microsegment information.
Attributes:
sample_measure_in (dict): Sample measure attributes.
ok_out_primary (list): Primary microsegment key chain that should
be yielded by the function given valid inputs.
ok_out_secondary (list): Secondary microsegment key chain that
should be yielded by the function given valid inputs.
"""
@classmethod
def setUpClass(cls):
"""Define variables and objects for use across all class functions."""
# Base directory
base_dir = os.getcwd()
handyvars = ecm_prep.UsefulVars(base_dir,
ecm_prep.UsefulInputFiles())
sample_measure = {
"name": "sample measure 2",
"active": 1,
"market_entry_year": None,
"market_exit_year": None,
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": 0.5,
"energy_efficiency_units": "relative savings (constant)",
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"bldg_type": "single family home",
"fuel_type": {
"primary": "electricity",
"secondary": "electricity"},
"fuel_switch_to": None,
"end_use": {
"primary": ["heating", "cooling"],
"secondary": "lighting"},
"technology": {
"primary": ["resistance heat", "ASHP", "GSHP", "room AC"],
"secondary": "general service (LED)"},
"mseg_adjust": {
"contributing mseg keys and values": {},
"competed choice parameters": {},
"secondary mseg adjustments": {
"sub-market": {
"original energy (total)": {},
"adjusted energy (sub-market)": {}},
"stock-and-flow": {
"original energy (total)": {},
"adjusted energy (previously captured)": {},
"adjusted energy (competed)": {},
"adjusted energy (competed and captured)": {}},
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}}}
cls.sample_measure_in = ecm_prep.Measure(
handyvars, **sample_measure)
# Finalize the measure's 'technology_type' attribute (handled by the
# 'fill_attr' function, which is not run as part of this test)
cls.sample_measure_in.technology_type = {
"primary": "supply", "secondary": "supply"}
cls.ok_out_primary = [
('primary', 'AIA_CZ1', 'single family home',
'electricity', 'heating', 'supply',
'resistance heat', 'new'),
('primary', 'AIA_CZ1', 'single family home',
'electricity', 'heating', 'supply', 'ASHP',
'new'),
('primary', 'AIA_CZ1', 'single family home',
'electricity', 'heating', 'supply', 'GSHP',
'new'),
('primary', 'AIA_CZ1', 'single family home',
'electricity', 'heating', 'supply', 'room AC',
'new'),
('primary', 'AIA_CZ1', 'single family home',
'electricity', 'cooling', 'supply',
'resistance heat', 'new'),
('primary', 'AIA_CZ1', 'single family home',
'electricity', 'cooling', 'supply', 'ASHP',
'new'),
('primary', 'AIA_CZ1', 'single family home',
'electricity', 'cooling', 'supply', 'GSHP',
'new'),
('primary', 'AIA_CZ1', 'single family home',
'electricity', 'cooling', 'supply', 'room AC',
'new'),
('primary', 'AIA_CZ2', 'single family home',
'electricity', 'heating', 'supply',
'resistance heat', 'new'),
('primary', 'AIA_CZ2', 'single family home',
'electricity', 'heating', 'supply', 'ASHP',
'new'),
('primary', 'AIA_CZ2', 'single family home',
'electricity', 'heating', 'supply', 'GSHP',
'new'),
('primary', 'AIA_CZ2', 'single family home',
'electricity', 'heating', 'supply', 'room AC',
'new'),
('primary', 'AIA_CZ2', 'single family home',
'electricity', 'cooling', 'supply',
'resistance heat', 'new'),
('primary', 'AIA_CZ2', 'single family home',
'electricity', 'cooling', 'supply', 'ASHP',
'new'),
('primary', 'AIA_CZ2', 'single family home',
'electricity', 'cooling', 'supply', 'GSHP',
'new'),
('primary', 'AIA_CZ2', 'single family home',
'electricity', 'cooling', 'supply', 'room AC',
'new'),
('primary', 'AIA_CZ1', 'single family home',
'electricity', 'heating', 'supply',
'resistance heat', 'existing'),
('primary', 'AIA_CZ1', 'single family home',
'electricity', 'heating', 'supply', 'ASHP',
'existing'),
('primary', 'AIA_CZ1', 'single family home',
'electricity', 'heating', 'supply', 'GSHP',
'existing'),
('primary', 'AIA_CZ1', 'single family home',
'electricity', 'heating', 'supply', 'room AC',
'existing'),
('primary', 'AIA_CZ1', 'single family home',
'electricity', 'cooling', 'supply',
'resistance heat', 'existing'),
('primary', 'AIA_CZ1', 'single family home',
'electricity', 'cooling', 'supply', 'ASHP',
'existing'),
('primary', 'AIA_CZ1', 'single family home',
'electricity', 'cooling', 'supply', 'GSHP',
'existing'),
('primary', 'AIA_CZ1', 'single family home',
'electricity', 'cooling', 'supply', 'room AC',
'existing'),
('primary', 'AIA_CZ2', 'single family home',
'electricity', 'heating', 'supply',
'resistance heat', 'existing'),
('primary', 'AIA_CZ2', 'single family home',
'electricity', 'heating', 'supply', 'ASHP',
'existing'),
('primary', 'AIA_CZ2', 'single family home',
'electricity', 'heating', 'supply', 'GSHP',
'existing'),
('primary', 'AIA_CZ2', 'single family home',
'electricity', 'heating', 'supply', 'room AC',
'existing'),
('primary', 'AIA_CZ2', 'single family home',
'electricity', 'cooling', 'supply',
'resistance heat', 'existing'),
('primary', 'AIA_CZ2', 'single family home',
'electricity', 'cooling', 'supply', 'ASHP',
'existing'),
('primary', 'AIA_CZ2', 'single family home',
'electricity', 'cooling', 'supply', 'GSHP',
'existing'),
('primary', 'AIA_CZ2', 'single family home',
'electricity', 'cooling', 'supply', 'room AC',
'existing')]
cls.ok_out_secondary = [
('secondary', 'AIA_CZ1', 'single family home',
'electricity', 'lighting',
'general service (LED)', 'new'),
('secondary', 'AIA_CZ2', 'single family home',
'electricity', 'lighting',
'general service (LED)', 'new'),
('secondary', 'AIA_CZ1', 'single family home',
'electricity', 'lighting',
'general service (LED)', 'existing'),
('secondary', 'AIA_CZ2', 'single family home',
'electricity', 'lighting',
'general service (LED)', 'existing')]
def test_primary(self):
"""Test 'create_keychain' function given valid inputs.
Note:
Tests generation of primary microsegment key chains.
Raises:
AssertionError: If function yields unexpected results.
"""
self.assertEqual(
self.sample_measure_in.create_keychain("primary")[0],
self.ok_out_primary)
# Test the generation of a list of secondary mseg key chains
def test_secondary(self):
"""Test 'create_keychain' function given valid inputs.
Note:
Tests generation of secondary microsegment key chains.
Raises:
AssertionError: If function yields unexpected results.
"""
self.assertEqual(
self.sample_measure_in.create_keychain("secondary")[0],
self.ok_out_secondary)
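# Illustrative sketch only (not exercised by the tests above): the
# 'create_keychain' function is expected to expand a measure's applicable
# baseline market definition into the full cross product of its structure
# types, climate zones, end uses, and technologies (for a given building
# type, fuel type, and technology type). The hypothetical helper below
# mirrors the ordering seen in 'ok_out_primary'; its name and signature are
# assumptions for illustration, not the actual ecm_prep implementation.
def _example_create_keychain(
        mseg_type, czones, bldg, fuel, end_uses, tech_type, techs,
        structures):
    """Return illustrative key chain tuples for one building/fuel type."""
    return [
        (mseg_type, cz, bldg, fuel, eu, tech_type, tech, st)
        for st in structures for cz in czones
        for eu in end_uses for tech in techs]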
class AddKeyValsTest(unittest.TestCase, CommonMethods):
"""Test 'add_keyvals' and 'add_keyvals_restrict' functions.
Ensure that the functions properly add together input dictionaries.
Attributes:
sample_measure_in (dict): Sample measure attributes.
ok_dict1_in (dict): Valid sample input dict for 'add_keyvals' function.
ok_dict2_in (dict): Valid sample input dict for 'add_keyvals' function.
ok_dict3_in (dict): Valid sample input dict for
'add_keyvals_restrict' function.
ok_dict4_in (dict): Valid sample input dict for
'add_keyvals_restrict' function.
fail_dict1_in (dict): One of two invalid sample input dicts for
'add_keyvals' function (dict keys do not exactly match).
fail_dict2_in (dict): Two of two invalid sample input dicts for
'add_keyvals' function (dict keys do not exactly match).
ok_out (dict): Dictionary that should be generated by 'add_keyvals'
function given valid inputs.
ok_out_restrict (dict): Dictionary that should be generated by
'add_keyvals_restrict' function given valid inputs.
"""
@classmethod
def setUpClass(cls):
"""Define variables and objects for use across all class functions."""
# Base directory
base_dir = os.getcwd()
handyvars = ecm_prep.UsefulVars(base_dir,
ecm_prep.UsefulInputFiles())
sample_measure_in = {
"name": "sample measure 1",
"active": 1,
"market_entry_year": None,
"market_exit_year": None,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"bldg_type": ["single family home"],
"fuel_type": {
"primary": ["electricity"],
"secondary": None},
"fuel_switch_to": None,
"end_use": {
"primary": ["heating", "cooling"],
"secondary": None},
"technology": {
"primary": ["resistance heat", "ASHP", "GSHP", "room AC"],
"secondary": None}}
cls.sample_measure_in = ecm_prep.Measure(
handyvars, **sample_measure_in)
cls.ok_dict1_in, cls.ok_dict2_in = ({
"level 1a": {
"level 2aa": {"2009": 2, "2010": 3},
"level 2ab": {"2009": 4, "2010": 5}},
"level 1b": {
"level 2ba": {"2009": 6, "2010": 7},
"level 2bb": {"2009": 8, "2010": 9}}} for n in range(2))
cls.ok_dict3_in, cls.ok_dict4_in = ({
"level 1a": {
"level 2aa": {"2009": 2, "2010": 3},
"level 2ab": {"2009": 4, "2010": 5}},
"lifetime": {
"level 2ba": {"2009": 6, "2010": 7},
"level 2bb": {"2009": 8, "2010": 9}}} for n in range(2))
cls.fail_dict1_in = {
"level 1a": {
"level 2aa": {"2009": 2, "2010": 3},
"level 2ab": {"2009": 4, "2010": 5}},
"level 1b": {
"level 2ba": {"2009": 6, "2010": 7},
"level 2bb": {"2009": 8, "2010": 9}}}
cls.fail_dict2_in = {
"level 1a": {
"level 2aa": {"2009": 2, "2010": 3},
"level 2ab": {"2009": 4, "2010": 5}},
"level 1b": {
"level 2ba": {"2009": 6, "2010": 7},
"level 2bb": {"2009": 8, "2011": 9}}}
cls.ok_out = {
"level 1a": {
"level 2aa": {"2009": 4, "2010": 6},
"level 2ab": {"2009": 8, "2010": 10}},
"level 1b": {
"level 2ba": {"2009": 12, "2010": 14},
"level 2bb": {"2009": 16, "2010": 18}}}
cls.ok_out_restrict = {
"level 1a": {
"level 2aa": {"2009": 4, "2010": 6},
"level 2ab": {"2009": 8, "2010": 10}},
"lifetime": {
"level 2ba": {"2009": 6, "2010": 7},
"level 2bb": {"2009": 8, "2010": 9}}}
def test_ok_add_keyvals(self):
"""Test 'add_keyvals' function given valid inputs.
Raises:
AssertionError: If function yields unexpected results.
"""
self.dict_check(
self.sample_measure_in.add_keyvals(
self.ok_dict1_in, self.ok_dict2_in), self.ok_out)
def test_fail_add_keyvals(self):
"""Test 'add_keyvals' function given invalid inputs.
Raises:
AssertionError: If KeyError is not raised.
"""
with self.assertRaises(KeyError):
self.sample_measure_in.add_keyvals(
self.fail_dict1_in, self.fail_dict2_in)
def test_ok_add_keyvals_restrict(self):
"""Test 'add_keyvals_restrict' function given valid inputs."""
self.dict_check(
self.sample_measure_in.add_keyvals_restrict(
self.ok_dict3_in, self.ok_dict4_in), self.ok_out_restrict)
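# Illustrative sketch only (not used by the tests above): 'add_keyvals' is
# expected to recursively sum two dicts that share an identical key
# structure, while 'add_keyvals_restrict' leaves 'lifetime' branches
# unsummed (compare 'ok_out_restrict'). The hypothetical helper below
# stands in for the unrestricted case; a mismatched key raises KeyError,
# mirroring the 'test_fail_add_keyvals' expectation.
def _example_add_keyvals(dict1, dict2):
    """Recursively sum two dicts with matching key structures."""
    return {
        key: (_example_add_keyvals(dict1[key], dict2[key])
              if isinstance(dict1[key], dict)
              else dict1[key] + dict2[key])
        for key in dict1}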
class DivKeyValsTest(unittest.TestCase, CommonMethods):
"""Test 'div_keyvals' function.
Ensure that the function properly divides the key values of one dict
by those of another. Test inputs reflect the use of this function
to generate output partitioning fractions (used to break out
measure results by climate zone, building sector, and end use).
Attributes:
sample_measure_in (dict): Sample measure attributes.
ok_reduce_dict (dict): Year-keyed values used to normalize the
input dict values.
ok_dict_in (dict): Sample input dict with values to normalize.
ok_out (dict): Output dictionary that should be yielded by the
function given valid inputs.
"""
@classmethod
def setUpClass(cls):
"""Define variables and objects for use across all class functions."""
# Base directory
base_dir = os.getcwd()
handyvars = ecm_prep.UsefulVars(base_dir,
ecm_prep.UsefulInputFiles())
sample_measure_in = {
"name": "sample measure 1",
"active": 1,
"market_entry_year": None,
"market_exit_year": None,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"bldg_type": ["single family home"],
"fuel_type": {
"primary": ["electricity"],
"secondary": None},
"fuel_switch_to": None,
"end_use": {
"primary": ["heating", "cooling"],
"secondary": None},
"technology": {
"primary": ["resistance heat", "ASHP", "GSHP", "room AC"],
"secondary": None}}
cls.sample_measure_in = ecm_prep.Measure(
handyvars, **sample_measure_in)
cls.ok_reduce_dict = {"2009": 100, "2010": 100}
cls.ok_dict_in = {
"AIA CZ1": {
"Residential": {
"Heating": {"2009": 10, "2010": 10},
"Cooling": {"2009": 15, "2010": 15}},
"Commercial": {
"Heating": {"2009": 20, "2010": 20},
"Cooling": {"2009": 25, "2010": 25}}},
"AIA CZ2": {
"Residential": {
"Heating": {"2009": 30, "2010": 30},
"Cooling": {"2009": 35, "2010": 35}},
"Commercial": {
"Heating": {"2009": 40, "2010": 40},
"Cooling": {"2009": 45, "2010": 45}}}}
cls.ok_out = {
"AIA CZ1": {
"Residential": {
"Heating": {"2009": .10, "2010": .10},
"Cooling": {"2009": .15, "2010": .15}},
"Commercial": {
"Heating": {"2009": .20, "2010": .20},
"Cooling": {"2009": .25, "2010": .25}}},
"AIA CZ2": {
"Residential": {
"Heating": {"2009": .30, "2010": .30},
"Cooling": {"2009": .35, "2010": .35}},
"Commercial": {
"Heating": {"2009": .40, "2010": .40},
"Cooling": {"2009": .45, "2010": .45}}}}
def test_ok(self):
"""Test 'div_keyvals' function given valid inputs.
Raises:
AssertionError: If function yields unexpected results.
"""
self.dict_check(
self.sample_measure_in.div_keyvals(
self.ok_dict_in, self.ok_reduce_dict), self.ok_out)
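# Illustrative sketch only (not used by the test above): 'div_keyvals' is
# expected to walk a nested dict and divide each terminal year value by the
# matching year value in a second dict (here {"2009": 100, "2010": 100}),
# turning absolute totals into partitioning fractions. A hypothetical
# stand-in for that normalization:
def _example_div_keyvals(dict1, reduce_dict):
    """Divide terminal year values of dict1 by matching reduce_dict values."""
    return {
        key: (_example_div_keyvals(val, reduce_dict)
              if isinstance(val, dict) else val / reduce_dict[key])
        for key, val in dict1.items()}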
class DivKeyValsFloatTest(unittest.TestCase, CommonMethods):
"""Test 'div_keyvals_float' and div_keyvals_float_restrict' functions.
Ensure that the functions properly divide dict key values by a given
factor.
Attributes:
sample_measure_in (dict): Sample measure attributes.
ok_reduce_num (float): Factor by which dict values should be divided.
ok_dict_in (dict): Sample input dict with values to divide.
ok_out (dict): Output dictionary that should be yielded by
'div_keyvals_float' function given valid inputs.
ok_out_restrict (dict): Output dictionary that should be yielded by
'div_keyvals_float_restrict' function given valid inputs.
"""
@classmethod
def setUpClass(cls):
"""Define variables and objects for use across all class functions."""
# Base directory
base_dir = os.getcwd()
handyvars = ecm_prep.UsefulVars(base_dir,
ecm_prep.UsefulInputFiles())
sample_measure_in = {
"name": "sample measure 1",
"active": 1,
"market_entry_year": None,
"market_exit_year": None,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"bldg_type": ["single family home"],
"fuel_type": {
"primary": ["electricity"],
"secondary": None},
"fuel_switch_to": None,
"end_use": {
"primary": ["heating", "cooling"],
"secondary": None},
"technology": {
"primary": ["resistance heat", "ASHP", "GSHP", "room AC"],
"secondary": None}}
cls.sample_measure_in = ecm_prep.Measure(
handyvars, **sample_measure_in)
cls.ok_reduce_num = 4
cls.ok_dict_in = {
"stock": {
"total": {"2009": 100, "2010": 200},
"competed": {"2009": 300, "2010": 400}},
"energy": {
"total": {"2009": 500, "2010": 600},
"competed": {"2009": 700, "2010": 800},
"efficient": {"2009": 700, "2010": 800}},
"carbon": {
"total": {"2009": 500, "2010": 600},
"competed": {"2009": 700, "2010": 800},
"efficient": {"2009": 700, "2010": 800}},
"cost": {
"baseline": {
"stock": {"2009": 900, "2010": 1000},
"energy": {"2009": 900, "2010": 1000},
"carbon": {"2009": 900, "2010": 1000}},
"measure": {
"stock": {"2009": 1100, "2010": 1200},
"energy": {"2009": 1100, "2010": 1200},
"carbon": {"2009": 1100, "2010": 1200}}}}
cls.ok_out = {
"stock": {
"total": {"2009": 25, "2010": 50},
"competed": {"2009": 75, "2010": 100}},
"energy": {
"total": {"2009": 125, "2010": 150},
"competed": {"2009": 175, "2010": 200},
"efficient": {"2009": 175, "2010": 200}},
"carbon": {
"total": {"2009": 125, "2010": 150},
"competed": {"2009": 175, "2010": 200},
"efficient": {"2009": 175, "2010": 200}},
"cost": {
"baseline": {
"stock": {"2009": 225, "2010": 250},
"energy": {"2009": 225, "2010": 250},
"carbon": {"2009": 225, "2010": 250}},
"measure": {
"stock": {"2009": 275, "2010": 300},
"energy": {"2009": 275, "2010": 300},
"carbon": {"2009": 275, "2010": 300}}}}
cls.ok_out_restrict = {
"stock": {
"total": {"2009": 25, "2010": 50},
"competed": {"2009": 75, "2010": 100}},
"energy": {
"total": {"2009": 500, "2010": 600},
"competed": {"2009": 700, "2010": 800},
"efficient": {"2009": 700, "2010": 800}},
"carbon": {
"total": {"2009": 500, "2010": 600},
"competed": {"2009": 700, "2010": 800},
"efficient": {"2009": 700, "2010": 800}},
"cost": {
"baseline": {
"stock": {"2009": 225, "2010": 250},
"energy": {"2009": 900, "2010": 1000},
"carbon": {"2009": 900, "2010": 1000}},
"measure": {
"stock": {"2009": 275, "2010": 300},
"energy": {"2009": 1100, "2010": 1200},
"carbon": {"2009": 1100, "2010": 1200}}}}
def test_ok_div(self):
"""Test 'div_keyvals_float' function given valid inputs.
Raises:
AssertionError: If function yields unexpected results.
"""
self.dict_check(
self.sample_measure_in.div_keyvals_float(
copy.deepcopy(self.ok_dict_in), self.ok_reduce_num),
self.ok_out)
def test_ok_div_restrict(self):
"""Test 'div_keyvals_float_restrict' function given valid inputs.
Raises:
AssertionError: If function yields unexpected results.
"""
self.dict_check(
self.sample_measure_in.div_keyvals_float_restrict(
copy.deepcopy(self.ok_dict_in), self.ok_reduce_num),
self.ok_out_restrict)
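# Illustrative sketch only (not used by the tests above):
# 'div_keyvals_float' is expected to divide every terminal value of a
# nested dict by a single factor (here 4), while
# 'div_keyvals_float_restrict' applies that division only to stock and
# stock cost data, leaving energy and carbon data untouched (compare
# 'ok_out' and 'ok_out_restrict' above). A minimal stand-in for the
# unrestricted case:
def _example_div_keyvals_float(dict1, factor):
    """Divide all terminal values of a nested dict by a scalar factor."""
    return {
        key: (_example_div_keyvals_float(val, factor)
              if isinstance(val, dict) else val / factor)
        for key, val in dict1.items()}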
class AppendKeyValsTest(unittest.TestCase):
"""Test 'append_keyvals' function.
Ensure that the function properly determines a list of valid names
for describing a measure's applicable baseline market.
Attributes:
handyvars (object): Global variables to use for the test measure.
ok_mktnames_out (list): List of valid market names that should be generated
by the function given valid inputs.
"""
@classmethod
def setUpClass(cls):
"""Define variables and objects for use across all class functions."""
base_dir = os.getcwd()
cls.handyvars = ecm_prep.UsefulVars(base_dir,
ecm_prep.UsefulInputFiles())
cls.ok_mktnames_out = [
"AIA_CZ1", "AIA_CZ2", "AIA_CZ3", "AIA_CZ4", "AIA_CZ5",
"single family home",
"multi family home", "mobile home",
"assembly", "education", "food sales", "food service",
"health care", "lodging", "large office", "small office",
"mercantile/service", "warehouse", "other",
"electricity", "natural gas", "distillate", "other fuel",
'drying', 'other (grid electric)', 'water heating',
'cooling', 'cooking', 'computers', 'lighting',
'secondary heating', 'TVs', 'heating', 'refrigeration',
'fans & pumps', 'ceiling fan', 'ventilation', 'MELs',
'non-PC office equipment', 'PCs',
'dishwasher', 'other MELs', 'clothes washing', 'freezers',
'solar WH', 'electric WH', 'room AC', 'ASHP', 'central AC',
'desktop PC', 'laptop PC', 'network equipment', 'monitors',
'linear fluorescent (T-8)', 'linear fluorescent (T-12)',
'reflector (LED)', 'general service (CFL)',
'external (high pressure sodium)',
'general service (incandescent)',
'external (CFL)', 'external (LED)', 'reflector (CFL)',
'reflector (incandescent)', 'general service (LED)',
'external (incandescent)', 'linear fluorescent (LED)',
'reflector (halogen)', 'non-specific', 'home theater & audio',
'set top box', 'video game consoles', 'DVD', 'TV',
'GSHP', 'resistance heat', 'NGHP', 'furnace (NG)',
'boiler (NG)', 'boiler (distillate)', 'furnace (distillate)',
'resistance', 'furnace (kerosene)', 'stove (wood)',
'furnace (LPG)', 'secondary heating (wood)',
'secondary heating (coal)', 'secondary heating (kerosene)',
'secondary heating (LPG)', 'roof', 'ground', 'windows solar',
'windows conduction', 'equipment gain', 'people gain', 'wall',
'infiltration', 'lighting gain', 'floor', 'other heat gain',
'VAV_Vent', 'CAV_Vent', 'Solar water heater',
'HP water heater', 'elec_booster_water_heater',
'elec_water_heater', 'rooftop_AC', 'scroll_chiller',
'res_type_central_AC', 'reciprocating_chiller', 'comm_GSHP-cool',
'centrifugal_chiller', 'rooftop_ASHP-cool', 'wall-window_room_AC',
'screw_chiller', 'electric_res-heat', 'comm_GSHP-heat',
'rooftop_ASHP-heat', 'elec_boiler',
'Commercial Beverage Merchandisers',
'Commercial Compressor Rack Systems', 'Commercial Condensers',
'Commercial Ice Machines', 'Commercial Reach-In Freezers',
'Commercial Reach-In Refrigerators',
'Commercial Refrigerated Vending Machines',
'Commercial Supermarket Display Cases',
'Commercial Walk-In Freezers',
'Commercial Walk-In Refrigerators',
'lab fridges and freezers',
'non-road electric vehicles', 'kitchen ventilation',
'escalators', 'distribution transformers',
'large video displays', 'video displays', 'elevators', 'laundry',
'medical imaging', 'coffee brewers', 'fume hoods',
'security systems',
'100W A19 Incandescent', '100W Equivalent A19 Halogen',
'100W Equivalent CFL Bare Spiral', '100W Equivalent LED A Lamp',
'Halogen Infrared Reflector (HIR) PAR38', 'Halogen PAR38',
'LED Integrated Luminaire', 'LED PAR38', 'Mercury Vapor',
'Metal Halide', 'Sodium Vapor', 'SodiumVapor', 'T5 F28',
'T5 4xF54 HO High Bay', 'T8 F28 High-efficiency/High-Output',
'T8 F32 Commodity', 'T8 F59 High Efficiency',
'T8 F59 Typical Efficiency', 'T8 F96 High Output',
'Range, Electric-induction, 4 burner, oven, ',
'Range, Electric, 4 burner, oven, 11 griddle',
'gas_eng-driven_RTAC', 'gas_chiller', 'res_type_gasHP-cool',
'gas_eng-driven_RTHP-cool', 'gas_water_heater',
'gas_instantaneous_WH', 'gas_booster_WH',
'Range, Gas, 4 powered burners, convect. ove',
'Range, Gas, 4 burner, oven, 11 griddle ',
'gas_eng-driven_RTHP-heat', 'res_type_gasHP-heat',
'gas_boiler', 'gas_furnace', 'oil_water_heater', 'oil_boiler',
'oil_furnace', 'new', 'existing', 'supply', 'demand',
'all', 'all residential', 'all commercial', 'all heating',
'all drying', 'all other (grid electric)',
'all water heating', 'all cooling', 'all cooking',
'all computers', 'all lighting', 'all secondary heating',
'all TVs', 'all refrigeration', 'all fans & pumps',
'all ceiling fan', 'all ventilation', 'all MELs',
'all non-PC office equipment', 'all PCs']
def test_ok_append(self):
"""Test 'append_keyvals' function given valid inputs.
Raises:
AssertionError: If function yields unexpected results.
"""
self.assertEqual(sorted(
[x for x in self.handyvars.valid_mktnames if x is not None]),
sorted([x for x in self.ok_mktnames_out if x is not None]))
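# Illustrative sketch only (not used by the test above): 'append_keyvals'
# is expected to walk the nested baseline market definitions in the
# UsefulVars object and collect every key name into the flat
# 'valid_mktnames' list checked above (the 'all ...' shorthand names are
# appended separately). A hypothetical stand-in for the collection step:
def _example_append_keyvals(nested_dict, keyname_list):
    """Append every key name in a nested dict to keyname_list."""
    for key, val in nested_dict.items():
        keyname_list.append(key)
        if isinstance(val, dict):
            _example_append_keyvals(val, keyname_list)
    return keyname_list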
class CostConversionTest(unittest.TestCase, CommonMethods):
"""Test 'convert_costs' function.
Ensure that the function properly converts user-defined measure cost units
to align with comparable baseline cost units.
Attributes:
verbose (NoneType): Determines whether to print all user messages.
sample_measure_in (dict): Sample measure attributes.
sample_convertdata_ok_in (dict): Sample cost conversion input data.
sample_bldgsect_ok_in (list): List of valid building sectors for
sample measure cost.
sample_mskeys_ok_in (list): List of valid full market microsegment
information for sample measure cost (mseg type->czone->bldg->fuel->
end use->technology type->technology->structure type).
sample_mskeys_fail_in (list): List of microsegment information for
sample measure cost that should cause the function to fail.
cost_meas_ok_in (int): Sample measure cost.
cost_meas_units_ok_in_yronly (string): Valid sample measure cost units
where only the cost year needs adjustment.
cost_meas_units_ok_in_all (list): List of valid sample measure cost
units where the cost year and/or units need adjustment.
cost_meas_units_fail_in (list): List of sample measure cost units
that should cause the function to fail.
cost_base_units_fail_in (list): List of baseline cost units paired with
the measure cost units that should cause the function to fail.
cost_base_units_ok_in (numpy.ndarray): Valid baseline cost units for
each test case.
ok_out_costs_yronly (float): Converted measure costs that should be
yielded given 'cost_meas_units_ok_in_yronly' measure cost units.
ok_out_costs_all (list): Converted measure costs that should be
yielded given 'cost_meas_units_ok_in_all' measure cost units.
ok_out_cost_units (string): Converted measure cost units that should
be yielded given valid inputs to the function.
"""
@classmethod
def setUpClass(cls):
"""Define variables and objects for use across all class functions."""
# Base directory
base_dir = os.getcwd()
handyvars = ecm_prep.UsefulVars(base_dir,
ecm_prep.UsefulInputFiles())
sample_measure_in = {
"name": "sample measure 2",
"remove": False,
"market_entry_year": None,
"market_exit_year": None,
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": 0.5,
"energy_efficiency_units": "relative savings (constant)",
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"bldg_type": ["single family home"],
"fuel_type": {
"primary": ["electricity"],
"secondary": ["electricity"]},
"fuel_switch_to": None,
"end_use": {
"primary": ["heating", "cooling"],
"secondary": ["lighting"]},
"technology": {
"primary": ["resistance heat", "ASHP", "GSHP", "room AC"],
"secondary": ["general service (LED)"]},
"mseg_adjust": {
"contributing mseg keys and values": {},
"competed choice parameters": {},
"secondary mseg adjustments": {
"sub-market": {
"original energy (total)": {},
"adjusted energy (sub-market)": {}},
"stock-and-flow": {
"original energy (total)": {},
"adjusted energy (previously captured)": {},
"adjusted energy (competed)": {},
"adjusted energy (competed and captured)": {}},
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}}}
cls.verbose = None
cls.sample_measure_in = ecm_prep.Measure(
handyvars, **sample_measure_in)
cls.sample_convertdata_ok_in = {
"building type conversions": {
"original type": "EnergyPlus reference buildings",
"revised type": "Annual Energy Outlook (AEO) buildings",
"conversion data": {
"description": "sample",
"value": {
"residential": {
"single family home": {
"Single-Family": 1},
"mobile home": {
"Single-Family": 1},
"multi family home": {
"Multifamily": 1}},
"commercial": {
"assembly": {
"Hospital": 1},
"education": {
"PrimarySchool": 0.26,
"SecondarySchool": 0.74},
"food sales": {
"Supermarket": 1},
"food service": {
"QuickServiceRestaurant": 0.31,
"FullServiceRestaurant": 0.69},
"health care": None,
"lodging": {
"SmallHotel": 0.26,
"LargeHotel": 0.74},
"large office": {
"LargeOffice": 0.9,
"MediumOffice": 0.1},
"small office": {
"SmallOffice": 0.12,
"OutpatientHealthcare": 0.88},
"mercantile/service": {
"RetailStandalone": 0.53,
"RetailStripmall": 0.47},
"warehouse": {
"Warehouse": 1},
"other": None}},
"source": {
"residential": "sample",
"commercial": "sample"},
"notes": {
"residential": "sample",
"commercial": "sample"}}},
"cost unit conversions": {
"whole building": {
"wireless sensor network": {
"original units": "$/node",
"revised units": "$/ft^2 floor",
"conversion factor": {
"description": "sample",
"value": {
"residential": {
"single family home": 0.0021,
"mobile home": 0.0021,
"multi family home": 0.0041},
"commercial": 0.002},
"units": "nodes/ft^2 floor",
"source": {
"residential": "sample",
"commercial": "sample"},
"notes": "sample"}},
"occupant-centered sensing and controls": {
"original units": "$/occupant",
"revised units": "$/ft^2 floor",
"conversion factor": {
"description": "sample",
"value": {
"residential": {
"single family home": {
"Single-Family": 0.001075},
"mobile home": {
"Single-Family": 0.001075},
"multi family home": {
"Multifamily": 0.00215}},
"commercial": {
"assembly": {
"Hospital": 0.005},
"education": {
"PrimarySchool": 0.02,
"SecondarySchool": 0.02},
"food sales": {
"Supermarket": 0.008},
"food service": {
"QuickServiceRestaurant": 0.07,
"FullServiceRestaurant": 0.07},
"health care": 0.005,
"lodging": {
"SmallHotel": 0.005,
"LargeHotel": 0.005},
"large office": {
"LargeOffice": 0.005,
"MediumOffice": 0.005},
"small office": {
"SmallOffice": 0.005,
"OutpatientHealthcare": 0.02},
"mercantile/service": {
"RetailStandalone": 0.01,
"RetailStripmall": 0.01},
"warehouse": {
"Warehouse": 0.0001},
"other": 0.005}},
"units": "occupants/ft^2 floor",
"source": {
"residential": "sample",
"commercial": "sample"},
"notes": ""}}},
"heating and cooling": {
"supply": {
"heating equipment": {
"original units": "$/kBtu/h heating",
"revised units": "$/ft^2 floor",
"conversion factor": {
"description": "sample",
"value": 0.020,
"units": "kBtu/h heating/ft^2 floor",
"source": "Rule of thumb",
"notes": "sample"}},
"cooling equipment": {
"original units": "$/kBtu/h cooling",
"revised units": "$/ft^2 floor",
"conversion factor": {
"description": "sample",
"value": 0.036,
"units": "kBtu/h cooling/ft^2 floor",
"source": "Rule of thumb",
"notes": "sample"}}},
"demand": {
"windows": {
"original units": "$/ft^2 glazing",
"revised units": "$/ft^2 wall",
"conversion factor": {
"description": "Window to wall ratio",
"value": {
"residential": {
"single family home": {
"Single-Family": 0.15},
"mobile home": {
"Single-Family": 0.15},
"multi family home": {
"Multifamily": 0.10}},
"commercial": {
"assembly": {
"Hospital": 0.15},
"education": {
"PrimarySchool": 0.35,
"SecondarySchool": 0.33},
"food sales": {
"Supermarket": 0.11},
"food service": {
"QuickServiceRestaurant": 0.14,
"FullServiceRestaurant": 0.17},
"health care": 0.2,
"lodging": {
"SmallHotel": 0.11,
"LargeHotel": 0.27},
"large office": {
"LargeOffice": 0.38,
"MediumOffice": 0.33},
"small office": {
"SmallOffice": 0.21,
"OutpatientHealthcare": 0.19},
"mercantile/service": {
"RetailStandalone": 0.07,
"RetailStripmall": 0.11},
"warehouse": {
"Warehouse": 0.006},
"other": 0.2}},
"units": None,
"source": {
"residential": "sample",
"commercial": "sample"},
"notes": "sample"}},
"walls": {
"original units": "$/ft^2 wall",
"revised units": "$/ft^2 floor",
"conversion factor": {
"description": "Wall to floor ratio",
"value": {
"residential": {
"single family home": {
"Single-Family": 1},
"mobile home": {
"Single-Family": 1},
"multi family home": {
"Multifamily": 1}},
"commercial": {
"assembly": {
"Hospital": 0.26},
"education": {
"PrimarySchool": 0.20,
"SecondarySchool": 0.16},
"food sales": {
"Supermarket": 0.38},
"food service": {
"QuickServiceRestaurant": 0.80,
"FullServiceRestaurant": 0.54},
"health care": 0.4,
"lodging": {
"SmallHotel": 0.40,
"LargeHotel": 0.38},
"large office": {
"LargeOffice": 0.26,
"MediumOffice": 0.40},
"small office": {
"SmallOffice": 0.55,
"OutpatientHealthcare": 0.35},
"mercantile/service": {
"RetailStandalone": 0.51,
"RetailStripmall": 0.57},
"warehouse": {
"Warehouse": 0.53},
"other": 0.4}},
"units": None,
"source": {
"residential": "sample",
"commercial": "sample"},
"notes": "sample"}},
"footprint": {
"original units": "$/ft^2 footprint",
"revised units": "$/ft^2 floor",
"conversion factor": {
"description": "sample",
"value": {
"residential": {
"single family home": {
"Single-Family": 0.5},
"mobile home": {
"Single-Family": 0.5},
"multi family home": {
"Multifamily": 0.33}},
"commercial": {
"assembly": {
"Hospital": 0.20},
"education": {
"PrimarySchool": 1,
"SecondarySchool": 0.5},
"food sales": {"Supermarket": 1},
"food service": {
"QuickServiceRestaurant": 1,
"FullServiceRestaurant": 1},
"health care": 0.2,
"lodging": {
"SmallHotel": 0.25,
"LargeHotel": 0.17},
"large office": {
"LargeOffice": 0.083,
"MediumOffice": 0.33},
"small office": {
"SmallOffice": 1,
"OutpatientHealthcare": 0.33},
"mercantile/service": {
"RetailStandalone": 1,
"RetailStripmall": 1},
"warehouse": {
"Warehouse": 1},
"other": 1}},
"units": None,
"source": {
"residential": "sample",
"commercial": "sample"},
"notes": "sample"}},
"roof": {
"original units": "$/ft^2 roof",
"revised units": "$/ft^2 footprint",
"conversion factor": {
"description": "sample",
"value": {
"residential": 1.05,
"commercial": 1},
"units": None,
"source": "Rule of thumb",
"notes": "sample"}}}},
"ventilation": {
"original units": "$/1000 CFM",
"revised units": "$/ft^2 floor",
"conversion factor": {
"description": "sample",
"value": 0.001,
"units": "1000 CFM/ft^2 floor",
"source": "Rule of thumb",
"notes": "sample"}},
"lighting": {
"original units": "$/1000 lm",
"revised units": "$/ft^2 floor",
"conversion factor": {
"description": "sample",
"value": 0.049,
"units": "1000 lm/ft^2 floor",
"source": "sample",
"notes": "sample"}},
"water heating": {
"original units": "$/kBtu/h water heating",
"revised units": "$/ft^2 floor",
"conversion factor": {
"description": "sample",
"value": 0.012,
"units": "kBtu/h water heating/ft^2 floor",
"source": "sample",
"notes": "sample"}},
"refrigeration": {
"original units": "$/kBtu/h refrigeration",
"revised units": "$/ft^2 floor",
"conversion factor": {
"description": "sample",
"value": 0.02,
"units": "kBtu/h refrigeration/ft^2 floor",
"source": "sample",
"notes": "sample"}},
"cooking": {},
"MELs": {}
}
}
cls.sample_bldgsect_ok_in = [
"residential", "commercial", "commercial", "commercial",
"commercial", "commercial", "commercial", "commercial",
"residential", "residential", "commercial", "residential",
"residential"]
cls.sample_mskeys_ok_in = [
('primary', 'marine', 'single family home', 'electricity',
'cooling', 'demand', 'windows conduction', 'existing'),
('primary', 'marine', 'assembly', 'electricity', 'heating',
'supply', 'rooftop_ASHP-heat', 'new'),
('primary', 'marine', 'food sales', 'electricity', 'cooling',
'demand', 'ground', 'new'),
('primary', 'marine', 'education', 'electricity', 'cooling',
'demand', 'roof', 'existing'),
('primary', 'marine', 'lodging', 'electricity', 'cooling',
'demand', 'wall', 'new'),
('primary', 'marine', 'food service', 'electricity', 'ventilation',
'CAV_Vent', 'existing'),
('primary', 'marine', 'small office', 'electricity', 'cooling',
'reciprocating_chiller', 'existing'),
('primary', 'mixed humid', 'health care', 'electricity', 'cooling',
'demand', 'roof', 'existing'),
('primary', 'mixed humid', 'single family home', 'electricity',
'cooling', 'supply', 'ASHP'),
('primary', 'mixed humid', 'single family home', 'electricity',
'lighting', 'linear fluorescent (LED)'),
('primary', 'marine', 'food service', 'electricity', 'ventilation',
'CAV_Vent', 'existing'),
('primary', 'mixed humid', 'multi family home', 'electricity',
'lighting', 'general service (CFL)'),
('primary', 'mixed humid', 'multi family home', 'electricity',
'lighting', 'general service (CFL)')]
cls.sample_mskeys_fail_in = [
('primary', 'marine', 'single family home', 'electricity',
'cooling', 'demand', 'windows conduction', 'existing'),
('primary', 'marine', 'assembly', 'electricity', 'PCs',
None, 'new'),
('primary', 'marine', 'single family home', 'electricity', 'PCs',
None, 'new')]
cls.cost_meas_ok_in = 10
cls.cost_meas_units_ok_in_yronly = '2008$/ft^2 floor'
cls.cost_meas_units_ok_in_all = [
'$/ft^2 glazing', '2013$/kBtu/h heating', '2010$/ft^2 footprint',
'2016$/ft^2 roof', '2013$/ft^2 wall', '2012$/1000 CFM',
'2013$/occupant', '2013$/ft^2 roof', '2013$/node',
'2013$/ft^2 floor', '2013$/node', '2013$/node',
'2013$/occupant']
cls.cost_meas_units_fail_in = [
'$/ft^2 facade', '$/kWh', '$/ft^2 floor']
cls.cost_base_units_fail_in = [
'2013$/ft^2 floor', '2013$/ft^2 floor', '2013$/unit']
cls.cost_base_units_ok_in = numpy.repeat('2013$/ft^2 floor', 13)
cls.ok_out_costs_yronly = 11.11
cls.ok_out_costs_all = [
1.47, 0.2, 10.65, 6.18, 3.85, 0.01015, 0.182,
2, 0.021, 10, 0.02, 0.041, 0.0215]
def test_convertcost_ok_yronly(self):
"""Test 'convert_costs' function for year only conversion."""
func_output = self.sample_measure_in.convert_costs(
self.sample_convertdata_ok_in, self.sample_bldgsect_ok_in[0],
self.sample_mskeys_ok_in[0], self.cost_meas_ok_in,
self.cost_meas_units_ok_in_yronly,
self.cost_base_units_ok_in[0], self.verbose)
numpy.testing.assert_almost_equal(
func_output[0], self.ok_out_costs_yronly, decimal=2)
self.assertEqual(func_output[1], self.cost_base_units_ok_in[0])
def test_convertcost_ok_all(self):
"""Test 'convert_costs' function for year/units conversion."""
for k in range(0, len(self.sample_mskeys_ok_in)):
func_output = self.sample_measure_in.convert_costs(
self.sample_convertdata_ok_in, self.sample_bldgsect_ok_in[k],
self.sample_mskeys_ok_in[k], self.cost_meas_ok_in,
self.cost_meas_units_ok_in_all[k],
self.cost_base_units_ok_in[k], self.verbose)
numpy.testing.assert_almost_equal(
func_output[0], self.ok_out_costs_all[k], decimal=2)
self.assertEqual(
func_output[1], self.cost_base_units_ok_in[k])
def test_convertcost_fail(self):
"""Test 'convert_costs' function given invalid inputs."""
for k in range(0, len(self.sample_mskeys_fail_in)):
with self.assertRaises(KeyError):
self.sample_measure_in.convert_costs(
self.sample_convertdata_ok_in,
self.sample_bldgsect_ok_in[k],
self.sample_mskeys_fail_in[k], self.cost_meas_ok_in,
self.cost_meas_units_fail_in[k],
self.cost_base_units_fail_in[k], self.verbose)
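# Illustrative sketch only (not used by the tests above): 'convert_costs'
# is expected to adjust the measure cost year to the baseline cost year and
# then multiply the cost by one or more unit conversion factors drawn from
# 'sample_convertdata_ok_in' until the measure units match the baseline
# units (e.g., $/ft^2 glazing -> $/ft^2 wall -> $/ft^2 floor). The helper
# below summarizes that chain; the argument names are assumptions for
# illustration and do not reflect the actual ecm_prep implementation.
def _example_convert_costs(cost_meas, year_adjustment, unit_factors):
    """Apply a cost year adjustment and a chain of unit conversion factors."""
    for factor in unit_factors:
        cost_meas *= factor
    return cost_meas * year_adjustment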
class UpdateMeasuresTest(unittest.TestCase, CommonMethods):
"""Test 'prepare_measures' function.
Ensure that the function properly instantiates Measure objects and finalizes
the attributes of those objects.
Attributes:
handyvars (object): Global variables to use across measures.
verbose (NoneType): Determines whether to print all user messages.
cbecs_sf_byvint (dict): Commercial square footage by vintage data.
sample_mseg_in (dict): Sample baseline microsegment stock/energy.
sample_cpl_in (dict): Sample baseline technology cost, performance,
and lifetime.
measures_ok_in (list): List of measures with valid user-defined
'status' attributes.
measures_warn_in (list): List of measures that includes one measure
with an invalid 'status' attribute (the measure's 'markets' attribute
has not been finalized but the user has not flagged it for an update).
convert_data (dict): Data used to convert user-defined measure cost
units to the cost units required by the Scout analysis engine.
ok_out (list): List of measure master microsegment dicts that
should be generated by 'prepare_measures' given sample input
measure information to update and an assumed technical potential
adoption scenario.
ok_warnmeas_out (list): Warnings that should be yielded when running
'measures_warn_in' through the function.
"""
@classmethod
def setUpClass(cls):
"""Define variables and objects for use across all class functions."""
# Base directory
cls.base_dir = os.getcwd()
cls.handyvars = ecm_prep.UsefulVars(cls.base_dir,
ecm_prep.UsefulInputFiles())
# Hard code aeo_years to fit test years
cls.handyvars.aeo_years = ["2009", "2010"]
cls.cbecs_sf_byvint = {
'2004 to 2007': 6524.0, '1960 to 1969': 10362.0,
'1946 to 1959': 7381.0, '1970 to 1979': 10846.0,
'1990 to 1999': 13803.0, '2000 to 2003': 7215.0,
'Before 1920': 3980.0, '2008 to 2012': 5726.0,
'1920 to 1945': 6020.0, '1980 to 1989': 15185.0}
# Hard code carbon intensity, site-source conversion, and cost data for
# tests such that these data are not dependent on an input file that
# may change in the future
cls.handyvars.ss_conv = {
"electricity": {"2009": 3.19, "2010": 3.20},
"natural gas": {"2009": 1.01, "2010": 1.01},
"distillate": {"2009": 1.01, "2010": 1.01},
"other fuel": {"2009": 1.01, "2010": 1.01}}
cls.handyvars.carb_int = {
"residential": {
"electricity": {"2009": 56.84702689, "2010": 56.16823191},
"natural gas": {"2009": 56.51576602, "2010": 54.91762852},
"distillate": {"2009": 49.5454521, "2010": 52.59751597},
"other fuel": {"2009": 49.5454521, "2010": 52.59751597}},
"commercial": {
"electricity": {"2009": 56.84702689, "2010": 56.16823191},
"natural gas": {"2009": 56.51576602, "2010": 54.91762852},
"distillate": {"2009": 49.5454521, "2010": 52.59751597},
"other fuel": {"2009": 49.5454521, "2010": 52.59751597}}}
cls.handyvars.ecosts = {
"residential": {
"electricity": {"2009": 10.14, "2010": 9.67},
"natural gas": {"2009": 11.28, "2010": 10.78},
"distillate": {"2009": 21.23, "2010": 20.59},
"other fuel": {"2009": 21.23, "2010": 20.59}},
"commercial": {
"electricity": {"2009": 9.08, "2010": 8.55},
"natural gas": {"2009": 8.96, "2010": 8.59},
"distillate": {"2009": 14.81, "2010": 14.87},
"other fuel": {"2009": 14.81, "2010": 14.87}}}
cls.handyvars.ccosts = {"2009": 33, "2010": 33}
cls.verbose = None
cls.sample_mseg_in = {
"AIA_CZ1": {
"single family home": {
"total square footage": {"2009": 100, "2010": 200},
"total homes": {"2009": 1000, "2010": 1000},
"new homes": {"2009": 100, "2010": 50},
"natural gas": {
"water heating": {
"stock": {"2009": 15, "2010": 15},
"energy": {"2009": 15, "2010": 15}}}}}}
cls.sample_cpl_in = {
"AIA_CZ1": {
"single family home": {
"natural gas": {
"water heating": {
"performance": {
"typical": {"2009": 18, "2010": 18},
"best": {"2009": 18, "2010": 18},
"units": "EF",
"source":
"EIA AEO"},
"installed cost": {
"typical": {"2009": 18, "2010": 18},
"best": {"2009": 18, "2010": 18},
"units": "2014$/unit",
"source": "EIA AEO"},
"lifetime": {
"average": {"2009": 180, "2010": 180},
"range": {"2009": 18, "2010": 18},
"units": "years",
"source": "EIA AEO"},
"consumer choice": {
"competed market share": {
"source": "EIA AEO",
"model type": "logistic regression",
"parameters": {
"b1": {"2009": "NA", "2010": "NA"},
"b2": {"2009": "NA",
"2010": "NA"}}},
"competed market": {
"source": "COBAM",
"model type": "bass diffusion",
"parameters": {
"p": "NA",
"q": "NA"}}}}}}}}
cls.convert_data = {} # Blank for now
cls.measures_ok_in = [{
"name": "sample measure to prepare",
"markets": None,
"installed_cost": 25,
"cost_units": "2014$/unit",
"energy_efficiency": {
"new": 25, "existing": 25},
"energy_efficiency_units": "EF",
"market_entry_year": None,
"market_exit_year": None,
"product_lifetime": 1,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"bldg_type": "single family home",
"climate_zone": "AIA_CZ1",
"fuel_type": "natural gas",
"fuel_switch_to": None,
"end_use": "water heating",
"technology": None}]
cls.ok_out = [{
"stock": {
"total": {
"all": {"2009": 15, "2010": 15},
"measure": {"2009": 15, "2010": 15}},
"competed": {
"all": {"2009": 15, "2010": 15},
"measure": {"2009": 15, "2010": 15}}},
"energy": {
"total": {
"baseline": {"2009": 15.15, "2010": 15.15},
"efficient": {"2009": 10.908, "2010": 10.908}},
"competed": {
"baseline": {"2009": 15.15, "2010": 15.15},
"efficient": {"2009": 10.908, "2010": 10.908}}},
"carbon": {
"total": {
"baseline": {"2009": 856.2139, "2010": 832.0021},
"efficient": {"2009": 616.474, "2010": 599.0415}},
"competed": {
"baseline": {"2009": 856.2139, "2010": 832.0021},
"efficient": {"2009": 616.474, "2010": 599.0415}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 270, "2010": 270},
"efficient": {"2009": 375, "2010": 375}},
"competed": {
"baseline": {"2009": 270, "2010": 270},
"efficient": {"2009": 375, "2010": 375}}},
"energy": {
"total": {
"baseline": {"2009": 170.892, "2010": 163.317},
"efficient": {"2009": 123.0422, "2010": 117.5882}},
"competed": {
"baseline": {"2009": 170.892, "2010": 163.317},
"efficient": {"2009": 123.0422, "2010": 117.5882}}},
"carbon": {
"total": {
"baseline": {"2009": 28255.06, "2010": 27456.07},
"efficient": {"2009": 20343.64, "2010": 19768.37}},
"competed": {
"baseline": {"2009": 28255.06, "2010": 27456.07},
"efficient": {"2009": 20343.64, "2010": 19768.37}}}},
"lifetime": {"baseline": {"2009": 180, "2010": 180},
"measure": 1}}]
def test_fillmeas_ok(self):
"""Test 'prepare_measures' function given valid measure inputs.
Note:
Ensure that function properly identifies which input measures
require updating and that the updates are performed correctly.
"""
measures_out = ecm_prep.prepare_measures(
self.measures_ok_in, self.convert_data, self.sample_mseg_in,
self.sample_cpl_in, self.handyvars, self.cbecs_sf_byvint,
self.base_dir, self.verbose)
for oc in range(0, len(self.ok_out)):
self.dict_check(
measures_out[oc].markets[
"Technical potential"]["master_mseg"], self.ok_out[oc])
class MergeMeasuresandApplyBenefitsTest(unittest.TestCase, CommonMethods):
"""Test 'merge_measures' and 'apply_pkg_benefits' functions.
Ensure that the 'merge_measures' function correctly assembles a series of
attributes for individual measures into attributes for a packaged measure,
and that the 'apply_pkg_benefits' function correctly applies additional
energy savings and installed cost benefits to a packaged measure.
Attributes:
sample_measures_in (list): List of valid sample measure attributes
to package.
sample_package_name (string): Sample packaged measure name.
sample_package_in_test1 (object): Sample packaged measure object to
update in the test of the 'merge_measures' function.
sample_package_in_test2 (object): Sample packaged measure object to
initialize for the test of the 'apply_pkg_benefits' function.
genattr_ok_out_test1 (list): General attributes that should be yielded
for the packaged measure in the 'merge_measures' test, given valid
sample measures to merge.
markets_ok_out_test1 (dict): Packaged measure stock, energy, carbon,
and cost data that should be yielded in the 'merge_measures' test,
given valid sample measures to merge.
mseg_ok_in_test2 (dict): Energy, carbon, and cost data to apply
additional energy savings and cost reduction benefits to in the
'apply_pkg_benefits' test.
mseg_ok_out_test2 (dict): Updated energy, carbon, and cost data
that should be yielded in 'apply_pkg_benefits' test, given valid
input data to apply packaging benefits to.
"""
@classmethod
def setUpClass(cls):
"""Define variables and objects for use across all class functions."""
# Base directory
base_dir = os.getcwd()
# Define additional energy savings and cost reduction benefits to
# apply to the energy, carbon, and cost data for a package in the
# 'merge_measures' test
benefits_test1 = {
"energy savings increase": 0,
"cost reduction": 0}
# Define additional energy savings and cost reduction benefits to
# apply to the energy, carbon, and cost data for a package in the
# 'apply_pkg_benefits' test
benefits_test2 = {
"energy savings increase": 0.3,
"cost reduction": 0.2}
# Useful global variables for the sample package measure objects
handyvars = ecm_prep.UsefulVars(base_dir,
ecm_prep.UsefulInputFiles())
# Hard code aeo_years to fit test years
handyvars.aeo_years = ["2009", "2010"]
# Define a series of sample measures to package
sample_measures_in = [{
"name": "sample measure pkg 1",
"market_entry_year": None,
"market_exit_year": None,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"climate_zone": ["AIA_CZ1", "AIA_CZ2"],
"bldg_type": ["single family home"],
"fuel_type": ["natural gas"],
"fuel_switch_to": None,
"end_use": {"primary": ["water heating"],
"secondary": None},
"technology": [None],
"technology_type": {
"primary": "supply", "secondary": None},
"markets": {
"Technical potential": {
"master_mseg": {
"stock": {
"total": {
"all": {"2009": 40, "2010": 40},
"measure": {"2009": 24, "2010": 24}},
"competed": {
"all": {"2009": 20, "2010": 20},
"measure": {"2009": 4, "2010": 4}}},
"energy": {
"total": {
"baseline": {"2009": 80, "2010": 80},
"efficient": {"2009": 48, "2010": 48}},
"competed": {
"baseline": {"2009": 40, "2010": 40},
"efficient": {"2009": 8, "2010": 8}}},
"carbon": {
"total": {
"baseline": {"2009": 120, "2010": 120},
"efficient": {"2009": 72, "2010": 72}},
"competed": {
"baseline": {"2009": 60, "2010": 60},
"efficient": {"2009": 12, "2010": 12}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 40, "2010": 40},
"efficient": {"2009": 72, "2010": 72}},
"competed": {
"baseline": {"2009": 40, "2010": 40},
"efficient": {"2009": 72, "2010": 72}}},
"energy": {
"total": {
"baseline": {"2009": 80, "2010": 80},
"efficient": {"2009": 48, "2010": 48}},
"competed": {
"baseline": {"2009": 40, "2010": 40},
"efficient": {"2009": 8, "2010": 8}}},
"carbon": {
"total": {
"baseline": {"2009": 120, "2010": 120},
"efficient": {"2009": 72, "2010": 72}},
"competed": {
"baseline": {"2009": 60, "2010": 60},
"efficient": {"2009": 12, "2010": 12}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10}},
"mseg_adjust": {
"contributing mseg keys and values": {
("('primary', AIA_CZ1', 'single family home', "
"'natural gas', 'water heating', None, 'new')"): {
"stock": {
"total": {
"all": {"2009": 10, "2010": 10},
"measure": {"2009": 6, "2010": 6}},
"competed": {
"all": {"2009": 5, "2010": 5},
"measure": {"2009": 1, "2010": 1}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}},
"competed": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}}},
"energy": {
"total": {
"baseline": {
"2009": 20, "2010": 20},
"efficient": {
"2009": 12, "2010": 12}},
"competed": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {
"2009": 30, "2010": 30},
"efficient": {
"2009": 18, "2010": 18}},
"competed": {
"baseline": {
"2009": 15, "2010": 15},
"efficient": {
"2009": 3, "2010": 3}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10},
"sub-market scaling": 1},
("('primary', AIA_CZ1', 'single family home', "
"'natural gas', 'water heating', None, "
"'existing')"): {
"stock": {
"total": {
"all": {"2009": 10, "2010": 10},
"measure": {"2009": 6, "2010": 6}},
"competed": {
"all": {"2009": 5, "2010": 5},
"measure": {"2009": 1, "2010": 1}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}},
"competed": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}}},
"energy": {
"total": {
"baseline": {
"2009": 20, "2010": 20},
"efficient": {
"2009": 12, "2010": 12}},
"competed": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {
"2009": 30, "2010": 30},
"efficient": {
"2009": 18, "2010": 18}},
"competed": {
"baseline": {
"2009": 15, "2010": 15},
"efficient": {
"2009": 3, "2010": 3}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10},
"sub-market scaling": 1},
("('primary', AIA_CZ2', 'single family home', "
"'natural gas', 'water heating', None, 'new')"): {
"stock": {
"total": {
"all": {"2009": 10, "2010": 10},
"measure": {"2009": 6, "2010": 6}},
"competed": {
"all": {"2009": 5, "2010": 5},
"measure": {"2009": 1, "2010": 1}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}},
"competed": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}}},
"energy": {
"total": {
"baseline": {
"2009": 20, "2010": 20},
"efficient": {
"2009": 12, "2010": 12}},
"competed": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {
"2009": 30, "2010": 30},
"efficient": {
"2009": 18, "2010": 18}},
"competed": {
"baseline": {
"2009": 15, "2010": 15},
"efficient": {
"2009": 3, "2010": 3}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10},
"sub-market scaling": 1},
("('primary', AIA_CZ2', 'single family home', "
"'natural gas', 'water heating', None, "
"'existing')"): {
"stock": {
"total": {
"all": {"2009": 10, "2010": 10},
"measure": {"2009": 6, "2010": 6}},
"competed": {
"all": {"2009": 5, "2010": 5},
"measure": {"2009": 1, "2010": 1}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}},
"competed": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}}},
"energy": {
"total": {
"baseline": {
"2009": 20, "2010": 20},
"efficient": {
"2009": 12, "2010": 12}},
"competed": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {
"2009": 30, "2010": 30},
"efficient": {
"2009": 18, "2010": 18}},
"competed": {
"baseline": {
"2009": 15, "2010": 15},
"efficient": {
"2009": 3, "2010": 3}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10},
"sub-market scaling": 1}},
"competed choice parameters": {
("('primary', AIA_CZ1', 'single family home', "
"'natural gas', 'water heating', None, 'new')"): {
"b1": {"2009": 0.5, "2010": 0.5},
"b2": {"2009": 0.5, "2010": 0.5}},
("('primary', AIA_CZ1', 'single family home', "
"'natural gas', 'water heating', None, "
"'existing')"): {
"b1": {"2009": 0.5, "2010": 0.5},
"b2": {"2009": 0.5, "2010": 0.5}},
("('primary', AIA_CZ2', 'single family home', "
"'natural gas', 'water heating', None, 'new')"): {
"b1": {"2009": 0.5, "2010": 0.5},
"b2": {"2009": 0.5, "2010": 0.5}},
("('primary', AIA_CZ2', 'single family home', "
"'natural gas', 'water heating', None, "
"'existing')"): {
"b1": {"2009": 0.5, "2010": 0.5},
"b2": {"2009": 0.5, "2010": 0.5}}},
"secondary mseg adjustments": {
"sub-market": {
"original energy (total)": {},
"adjusted energy (sub-market)": {}},
"stock-and-flow": {
"original energy (total)": {},
"adjusted energy (previously captured)": {},
"adjusted energy (competed)": {},
"adjusted energy (competed and captured)": {}},
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}
},
"mseg_out_break": {
'AIA CZ1': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {
"2009": 0.5, "2010": 0.5},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {
"2009": 0, "2010": 0},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ2': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {
"2009": 0.5, "2010": 0.5},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {
"2009": 0, "2010": 0},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ3': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ4': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ5': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}}}},
"Max adoption potential": {
"master_mseg": {
"stock": {
"total": {
"all": {"2009": 40, "2010": 40},
"measure": {"2009": 24, "2010": 24}},
"competed": {
"all": {"2009": 20, "2010": 20},
"measure": {"2009": 4, "2010": 4}}},
"energy": {
"total": {
"baseline": {"2009": 80, "2010": 80},
"efficient": {"2009": 48, "2010": 48}},
"competed": {
"baseline": {"2009": 40, "2010": 40},
"efficient": {"2009": 8, "2010": 8}}},
"carbon": {
"total": {
"baseline": {"2009": 120, "2010": 120},
"efficient": {"2009": 72, "2010": 72}},
"competed": {
"baseline": {"2009": 60, "2010": 60},
"efficient": {"2009": 12, "2010": 12}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 40, "2010": 40},
"efficient": {"2009": 72, "2010": 72}},
"competed": {
"baseline": {"2009": 40, "2010": 40},
"efficient": {"2009": 72, "2010": 72}}},
"energy": {
"total": {
"baseline": {"2009": 80, "2010": 80},
"efficient": {"2009": 48, "2010": 48}},
"competed": {
"baseline": {"2009": 40, "2010": 40},
"efficient": {"2009": 8, "2010": 8}}},
"carbon": {
"total": {
"baseline": {"2009": 120, "2010": 120},
"efficient": {"2009": 72, "2010": 72}},
"competed": {
"baseline": {"2009": 60, "2010": 60},
"efficient": {"2009": 12, "2010": 12}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10}},
"mseg_adjust": {
"contributing mseg keys and values": {
("('primary', AIA_CZ1', 'single family home', "
"'natural gas', 'water heating', None, 'new')"): {
"stock": {
"total": {
"all": {"2009": 10, "2010": 10},
"measure": {"2009": 6, "2010": 6}},
"competed": {
"all": {"2009": 5, "2010": 5},
"measure": {"2009": 1, "2010": 1}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}},
"competed": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}}},
"energy": {
"total": {
"baseline": {
"2009": 20, "2010": 20},
"efficient": {
"2009": 12, "2010": 12}},
"competed": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {
"2009": 30, "2010": 30},
"efficient": {
"2009": 18, "2010": 18}},
"competed": {
"baseline": {
"2009": 15, "2010": 15},
"efficient": {
"2009": 3, "2010": 3}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10},
"sub-market scaling": 1},
("('primary', AIA_CZ1', 'single family home', "
"'natural gas', 'water heating', None, "
"'existing')"): {
"stock": {
"total": {
"all": {"2009": 10, "2010": 10},
"measure": {"2009": 6, "2010": 6}},
"competed": {
"all": {"2009": 5, "2010": 5},
"measure": {"2009": 1, "2010": 1}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}},
"competed": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}}},
"energy": {
"total": {
"baseline": {
"2009": 20, "2010": 20},
"efficient": {
"2009": 12, "2010": 12}},
"competed": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {
"2009": 30, "2010": 30},
"efficient": {
"2009": 18, "2010": 18}},
"competed": {
"baseline": {
"2009": 15, "2010": 15},
"efficient": {
"2009": 3, "2010": 3}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10},
"sub-market scaling": 1},
("('primary', AIA_CZ2', 'single family home', "
"'natural gas', 'water heating', None, 'new')"): {
"stock": {
"total": {
"all": {"2009": 10, "2010": 10},
"measure": {"2009": 6, "2010": 6}},
"competed": {
"all": {"2009": 5, "2010": 5},
"measure": {"2009": 1, "2010": 1}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}},
"competed": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}}},
"energy": {
"total": {
"baseline": {
"2009": 20, "2010": 20},
"efficient": {
"2009": 12, "2010": 12}},
"competed": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {
"2009": 30, "2010": 30},
"efficient": {
"2009": 18, "2010": 18}},
"competed": {
"baseline": {
"2009": 15, "2010": 15},
"efficient": {
"2009": 3, "2010": 3}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10},
"sub-market scaling": 1},
("('primary', AIA_CZ2', 'single family home', "
"'natural gas', 'water heating', None, "
"'existing')"): {
"stock": {
"total": {
"all": {"2009": 10, "2010": 10},
"measure": {"2009": 6, "2010": 6}},
"competed": {
"all": {"2009": 5, "2010": 5},
"measure": {"2009": 1, "2010": 1}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}},
"competed": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}}},
"energy": {
"total": {
"baseline": {
"2009": 20, "2010": 20},
"efficient": {
"2009": 12, "2010": 12}},
"competed": {
"baseline": {
"2009": 10, "2010": 10},
"efficient": {
"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {
"2009": 30, "2010": 30},
"efficient": {
"2009": 18, "2010": 18}},
"competed": {
"baseline": {
"2009": 15, "2010": 15},
"efficient": {
"2009": 3, "2010": 3}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10},
"sub-market scaling": 1}},
"competed choice parameters": {
("('primary', AIA_CZ1', 'single family home', "
"'natural gas', 'water heating', None, 'new')"): {
"b1": {"2009": 0.5, "2010": 0.5},
"b2": {"2009": 0.5, "2010": 0.5}},
("('primary', AIA_CZ1', 'single family home', "
"'natural gas', 'water heating', None, "
"'existing')"): {
"b1": {"2009": 0.5, "2010": 0.5},
"b2": {"2009": 0.5, "2010": 0.5}},
("('primary', AIA_CZ2', 'single family home', "
"'natural gas', 'water heating', None, 'new')"): {
"b1": {"2009": 0.5, "2010": 0.5},
"b2": {"2009": 0.5, "2010": 0.5}},
("('primary', AIA_CZ2', 'single family home', "
"'natural gas', 'water heating', None, "
"'existing')"): {
"b1": {"2009": 0.5, "2010": 0.5},
"b2": {"2009": 0.5, "2010": 0.5}}},
"secondary mseg adjustments": {
"sub-market": {
"original energy (total)": {},
"adjusted energy (sub-market)": {}},
"stock-and-flow": {
"original energy (total)": {},
"adjusted energy (previously captured)": {},
"adjusted energy (competed)": {},
"adjusted energy (competed and captured)": {}},
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}
},
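# Output breakouts by AIA climate zone, building class, and end use:
# residential new-construction water heating in AIA CZ1 and CZ2 carries
# 0.5 each, the corresponding existing-home cells are zero, and all
# other cells are left empty.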
"mseg_out_break": {
'AIA CZ1': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {
"2009": 0.5, "2010": 0.5},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {
"2009": 0, "2010": 0},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ2': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {
"2009": 0.5, "2010": 0.5},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {
"2009": 0, "2010": 0},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ3': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ4': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ5': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}}}}},
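# Per-scenario energy totals (equal to the master_mseg baseline energy)
# used to normalize the output breakout fractions above.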
"out_break_norm": {
"Technical potential": {"2009": 80, "2010": 80},
"Max adoption potential": {"2009": 80, "2010": 80}}},
{
"name": "sample measure pkg 2",
"market_entry_year": None,
"market_exit_year": None,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["existing"],
"climate_zone": ["AIA_CZ1"],
"bldg_type": ["single family home"],
"fuel_type": ["electricity"],
"fuel_switch_to": None,
"end_use": {"primary": ["lighting"],
"secondary": None},
"technology": [
"reflector (incandescent)",
"reflector (halogen)"],
"technology_type": {
"primary": "supply", "secondary": None},
"markets": {
"Technical potential": {
"master_mseg": {
"stock": {
"total": {
"all": {"2009": 200, "2010": 200},
"measure": {"2009": 120, "2010": 120}},
"competed": {
"all": {"2009": 100, "2010": 100},
"measure": {"2009": 20, "2010": 20}}},
"energy": {
"total": {
"baseline": {"2009": 400, "2010": 400},
"efficient": {"2009": 240, "2010": 240}},
"competed": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {"2009": 40, "2010": 40}}},
"carbon": {
"total": {
"baseline": {"2009": 600, "2010": 600},
"efficient": {"2009": 360, "2010": 360}},
"competed": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {"2009": 60, "2010": 60}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {"2009": 360, "2010": 360}},
"competed": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {"2009": 360, "2010": 360}}},
"energy": {
"total": {
"baseline": {"2009": 400, "2010": 400},
"efficient": {"2009": 240, "2010": 240}},
"competed": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {"2009": 40, "2010": 40}}},
"carbon": {
"total": {
"baseline": {"2009": 600, "2010": 600},
"efficient": {"2009": 360, "2010": 360}},
"competed": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {"2009": 60, "2010": 60}}}},
"lifetime": {
"baseline": {"2009": 1, "2010": 1},
"measure": 20}},
"mseg_adjust": {
"contributing mseg keys and values": {
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (incandescent)', "
"'existing')"): {
"stock": {
"total": {
"all": {"2009": 100, "2010": 100},
"measure": {"2009": 60, "2010": 60}},
"competed": {
"all": {"2009": 50, "2010": 50},
"measure": {"2009": 10, "2010": 10}}},
"energy": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}}},
"energy": {
"total": {
"baseline": {
"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {
"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}}},
"lifetime": {
"baseline": {"2009": 1, "2010": 1},
"measure": 20},
"sub-market scaling": 1},
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (halogen)', "
"'existing')"): {
"stock": {
"total": {
"all": {"2009": 100, "2010": 100},
"measure": {"2009": 60, "2010": 60}},
"competed": {
"all": {"2009": 50, "2010": 50},
"measure": {"2009": 10, "2010": 10}}},
"energy": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}}},
"energy": {
"total": {
"baseline": {
"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {
"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}}},
"lifetime": {
"baseline": {"2009": 2, "2010": 2},
"measure": 15},
"sub-market scaling": 1}},
"competed choice parameters": {
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (incandescent)', "
"'existing')"): {
"b1": {"2009": 0.25, "2010": 0.25},
"b2": {"2009": 0.25, "2010": 0.25}},
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (halogen)', "
"'existing')"): {
"b1": {"2009": 0.25, "2010": 0.25},
"b2": {"2009": 0.25, "2010": 0.25}}},
"secondary mseg adjustments": {
"sub-market": {
"original energy (total)": {},
"adjusted energy (sub-market)": {}},
"stock-and-flow": {
"original energy (total)": {},
"adjusted energy (previously captured)": {},
"adjusted energy (competed)": {},
"adjusted energy (competed and captured)": {}},
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}
},
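# Output breakouts: all of this measure's energy falls under AIA CZ1
# residential existing lighting (fraction of 1); every other cell is empty.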
"mseg_out_break": {
'AIA CZ1': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {
"2009": 1, "2010": 1},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ2': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ3': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ4': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ5': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}}}},
"Max adoption potential": {
"master_mseg": {
"stock": {
"total": {
"all": {"2009": 200, "2010": 200},
"measure": {"2009": 120, "2010": 120}},
"competed": {
"all": {"2009": 100, "2010": 100},
"measure": {"2009": 20, "2010": 20}}},
"energy": {
"total": {
"baseline": {"2009": 400, "2010": 400},
"efficient": {"2009": 240, "2010": 240}},
"competed": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {"2009": 40, "2010": 40}}},
"carbon": {
"total": {
"baseline": {"2009": 600, "2010": 600},
"efficient": {"2009": 360, "2010": 360}},
"competed": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {"2009": 60, "2010": 60}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {"2009": 360, "2010": 360}},
"competed": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {"2009": 360, "2010": 360}}},
"energy": {
"total": {
"baseline": {"2009": 400, "2010": 400},
"efficient": {"2009": 240, "2010": 240}},
"competed": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {"2009": 40, "2010": 40}}},
"carbon": {
"total": {
"baseline": {"2009": 600, "2010": 600},
"efficient": {"2009": 360, "2010": 360}},
"competed": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {"2009": 60, "2010": 60}}}},
"lifetime": {
"baseline": {"2009": 1, "2010": 1},
"measure": 20}},
"mseg_adjust": {
"contributing mseg keys and values": {
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (incandescent)', "
"'existing')"): {
"stock": {
"total": {
"all": {"2009": 100, "2010": 100},
"measure": {"2009": 60, "2010": 60}},
"competed": {
"all": {"2009": 50, "2010": 50},
"measure": {"2009": 10, "2010": 10}}},
"energy": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}}},
"energy": {
"total": {
"baseline": {
"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {
"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}}},
"lifetime": {
"baseline": {"2009": 1, "2010": 1},
"measure": 20},
"sub-market scaling": 1},
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (halogen)', "
"'existing')"): {
"stock": {
"total": {
"all": {"2009": 100, "2010": 100},
"measure": {"2009": 60, "2010": 60}},
"competed": {
"all": {"2009": 50, "2010": 50},
"measure": {"2009": 10, "2010": 10}}},
"energy": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}}},
"energy": {
"total": {
"baseline": {
"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {
"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}}},
"lifetime": {
"baseline": {"2009": 2, "2010": 2},
"measure": 15},
"sub-market scaling": 1}},
"competed choice parameters": {
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (incandescent)', "
"'existing')"): {
"b1": {"2009": 0.25, "2010": 0.25},
"b2": {"2009": 0.25, "2010": 0.25}},
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (halogen)', "
"'existing')"): {
"b1": {"2009": 0.25, "2010": 0.25},
"b2": {"2009": 0.25, "2010": 0.25}}},
"secondary mseg adjustments": {
"sub-market": {
"original energy (total)": {},
"adjusted energy (sub-market)": {}},
"stock-and-flow": {
"original energy (total)": {},
"adjusted energy (previously captured)": {},
"adjusted energy (competed)": {},
"adjusted energy (competed and captured)": {}},
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}
},
"mseg_out_break": {
'AIA CZ1': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {
"2009": 1, "2010": 1},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ2': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ3': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ4': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ5': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}}}}},
"out_break_norm": {
"Technical potential": {"2009": 400, "2010": 400},
"Max adoption potential": {"2009": 400, "2010": 400}}},
{
"name": "sample measure pkg 3",
"market_entry_year": None,
"market_exit_year": None,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "full service",
"structure_type": ["new", "existing"],
"climate_zone": ["AIA_CZ1", "AIA_CZ5"],
"bldg_type": ["multi family home"],
"fuel_type": ["electricity"],
"fuel_switch_to": None,
"end_use": {
"primary": ["cooling", "lighting"],
"secondary": None},
"technology": [
"ASHP",
"reflector (incandescent)"],
"technology_type": {
"primary": "supply", "secondary": None},
"markets": {
"Technical potential": {
"master_mseg": {
"stock": {
"total": {
"all": {"2009": 1100, "2010": 1100},
"measure": {"2009": 660, "2010": 660}},
"competed": {
"all": {"2009": 550, "2010": 550},
"measure": {"2009": 110, "2010": 110}}},
"energy": {
"total": {
"baseline": {"2009": 2200, "2010": 2200},
"efficient": {"2009": 1320, "2010": 1320}},
"competed": {
"baseline": {"2009": 1100, "2010": 1100},
"efficient": {"2009": 220, "2010": 220}}},
"carbon": {
"total": {
"baseline": {"2009": 3300, "2010": 3300},
"efficient": {"2009": 1980, "2010": 1980}},
"competed": {
"baseline": {"2009": 1650, "2010": 1650},
"efficient": {"2009": 330, "2010": 330}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {"2009": 360, "2010": 360}},
"competed": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {"2009": 360, "2010": 360}}},
"energy": {
"total": {
"baseline": {"2009": 400, "2010": 400},
"efficient": {"2009": 240, "2010": 240}},
"competed": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {"2009": 40, "2010": 40}}},
"carbon": {
"total": {
"baseline": {"2009": 600, "2010": 600},
"efficient": {"2009": 360, "2010": 360}},
"competed": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {"2009": 60, "2010": 60}}}},
"lifetime": {
"baseline": {"2009": 18, "2010": 18},
"measure": 18}},
"mseg_adjust": {
"contributing mseg keys and values": {
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (incandescent)', "
"'existing')"): {
"stock": {
"total": {
"all": {"2009": 100, "2010": 100},
"measure": {"2009": 60, "2010": 60}},
"competed": {
"all": {"2009": 50, "2010": 50},
"measure": {"2009": 10, "2010": 10}}},
"energy": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}}},
"energy": {
"total": {
"baseline": {
"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {
"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}}},
"lifetime": {
"baseline": {"2009": 1, "2010": 1},
"measure": 20},
"sub-market scaling": 1},
("('primary', AIA_CZ5', 'single family home', "
"'electricity',"
"'cooling', 'supply', 'ASHP', 'new')"): {
"stock": {
"total": {
"all": {"2009": 1000, "2010": 1000},
"measure": {"2009": 600, "2010": 600}},
"competed": {
"all": {"2009": 500, "2010": 500},
"measure": {
"2009": 100, "2010": 100}}},
"energy": {
"total": {
"baseline": {
"2009": 2000, "2010": 2000},
"efficient": {
"2009": 1200, "2010": 1200}},
"competed": {
"baseline": {
"2009": 1000, "2010": 1000},
"efficient": {
"2009": 200, "2010": 200}}},
"carbon": {
"total": {
"baseline": {
"2009": 3000, "2010": 3000},
"efficient": {
"2009": 1800, "2010": 1800}},
"competed": {
"baseline": {
"2009": 1500, "2010": 1500},
"efficient": {
"2009": 300, "2010": 300}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}}},
"energy": {
"total": {
"baseline": {
"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {
"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}}},
"lifetime": {
"baseline": {"2009": 18, "2010": 18},
"measure": 18},
"sub-market scaling": 1}},
"competed choice parameters": {
("('primary', AIA_CZ5', 'single family home', "
"'electricity',"
"'cooling', 'supply', 'ASHP', 'new')"): {
"b1": {"2009": 0.75, "2010": 0.75},
"b2": {"2009": 0.75, "2010": 0.75}},
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (halogen)', "
"'existing')"): {
"b1": {"2009": 0.25, "2010": 0.25},
"b2": {"2009": 0.25, "2010": 0.25}}},
"secondary mseg adjustments": {
"sub-market": {
"original energy (total)": {},
"adjusted energy (sub-market)": {}},
"stock-and-flow": {
"original energy (total)": {},
"adjusted energy (previously captured)": {},
"adjusted energy (competed)": {},
"adjusted energy (competed and captured)": {}},
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}
},
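# Output breakouts: split evenly between AIA CZ1 residential existing
# lighting and AIA CZ5 residential new cooling (0.5 each); all other
# cells are empty.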
"mseg_out_break": {
'AIA CZ1': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {
"2009": 0.5, "2010": 0.5},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ2': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ3': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ4': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ5': {
'Residential (New)': {
'Cooling (Equip.)': {
"2009": 0.5, "2010": 0.5},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}}}},
"Max adoption potential": {
"master_mseg": {
"stock": {
"total": {
"all": {"2009": 1100, "2010": 1100},
"measure": {"2009": 660, "2010": 660}},
"competed": {
"all": {"2009": 550, "2010": 550},
"measure": {"2009": 110, "2010": 110}}},
"energy": {
"total": {
"baseline": {"2009": 2200, "2010": 2200},
"efficient": {"2009": 1320, "2010": 1320}},
"competed": {
"baseline": {"2009": 1100, "2010": 1100},
"efficient": {"2009": 220, "2010": 220}}},
"carbon": {
"total": {
"baseline": {"2009": 3300, "2010": 3300},
"efficient": {"2009": 1980, "2010": 1980}},
"competed": {
"baseline": {"2009": 1650, "2010": 1650},
"efficient": {"2009": 330, "2010": 330}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {"2009": 360, "2010": 360}},
"competed": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {"2009": 360, "2010": 360}}},
"energy": {
"total": {
"baseline": {"2009": 400, "2010": 400},
"efficient": {"2009": 240, "2010": 240}},
"competed": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {"2009": 40, "2010": 40}}},
"carbon": {
"total": {
"baseline": {"2009": 600, "2010": 600},
"efficient": {"2009": 360, "2010": 360}},
"competed": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {"2009": 60, "2010": 60}}}},
"lifetime": {
"baseline": {"2009": 18, "2010": 18},
"measure": 18}},
"mseg_adjust": {
"contributing mseg keys and values": {
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (incandescent)', "
"'existing')"): {
"stock": {
"total": {
"all": {"2009": 100, "2010": 100},
"measure": {"2009": 60, "2010": 60}},
"competed": {
"all": {"2009": 50, "2010": 50},
"measure": {"2009": 10, "2010": 10}}},
"energy": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}}},
"energy": {
"total": {
"baseline": {
"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {
"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}}},
"lifetime": {
"baseline": {"2009": 1, "2010": 1},
"measure": 20},
"sub-market scaling": 1},
("('primary', AIA_CZ5', 'single family home', "
"'electricity',"
"'cooling', 'supply', 'ASHP', 'new')"): {
"stock": {
"total": {
"all": {"2009": 1000, "2010": 1000},
"measure": {"2009": 600, "2010": 600}},
"competed": {
"all": {"2009": 500, "2010": 500},
"measure": {
"2009": 100, "2010": 100}}},
"energy": {
"total": {
"baseline": {
"2009": 2000, "2010": 2000},
"efficient": {
"2009": 1200, "2010": 1200}},
"competed": {
"baseline": {
"2009": 1000, "2010": 1000},
"efficient": {
"2009": 200, "2010": 200}}},
"carbon": {
"total": {
"baseline": {
"2009": 3000, "2010": 3000},
"efficient": {
"2009": 1800, "2010": 1800}},
"competed": {
"baseline": {
"2009": 1500, "2010": 1500},
"efficient": {
"2009": 300, "2010": 300}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}}},
"energy": {
"total": {
"baseline": {
"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {
"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}}},
"lifetime": {
"baseline": {"2009": 18, "2010": 18},
"measure": 18},
"sub-market scaling": 1}},
"competed choice parameters": {
("('primary', AIA_CZ5', 'single family home', "
"'electricity',"
"'cooling', 'supply', 'ASHP', 'new')"): {
"b1": {"2009": 0.75, "2010": 0.75},
"b2": {"2009": 0.75, "2010": 0.75}},
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (halogen)', "
"'existing')"): {
"b1": {"2009": 0.25, "2010": 0.25},
"b2": {"2009": 0.25, "2010": 0.25}}},
"secondary mseg adjustments": {
"sub-market": {
"original energy (total)": {},
"adjusted energy (sub-market)": {}},
"stock-and-flow": {
"original energy (total)": {},
"adjusted energy (previously captured)": {},
"adjusted energy (competed)": {},
"adjusted energy (competed and captured)": {}},
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}
},
"mseg_out_break": {
'AIA CZ1': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {
"2009": 0.5, "2010": 0.5},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ2': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ3': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ4': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}},
'AIA CZ5': {
'Residential (New)': {
'Cooling (Equip.)': {
"2009": 0.5, "2010": 0.5},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {},
'Envelope': {}}}}}},
"out_break_norm": {
"Technical potential": {"2009": 2200, "2010": 2200},
"Max adoption potential": {"2009": 2200, "2010": 2200}}},
{
"name": "sample measure pkg 4",
"market_entry_year": None,
"market_exit_year": None,
"market_scaling_fractions": None,
"market_scaling_fractions_source": None,
"measure_type": "add-on",
"structure_type": ["existing"],
"climate_zone": ["AIA_CZ1"],
"bldg_type": ["single family home"],
"fuel_type": ["electricity"],
"fuel_switch_to": None,
"end_use": {"primary": ["lighting"],
"secondary": None},
"technology": [
"reflector (incandescent)"],
"technology_type": {
"primary": "supply", "secondary": None},
"markets": {
"Technical potential": {
"master_mseg": {
"stock": {
"total": {
"all": {"2009": 100, "2010": 100},
"measure": {"2009": 60, "2010": 60}},
"competed": {
"all": {"2009": 50, "2010": 50},
"measure": {"2009": 10, "2010": 10}}},
"energy": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}}},
"energy": {
"total": {
"baseline": {
"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {
"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}}},
"lifetime": {
"baseline": {"2009": 1, "2010": 1},
"measure": 20}},
"mseg_adjust": {
"contributing mseg keys and values": {
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (incandescent)', "
"'existing')"): {
"stock": {
"total": {
"all": {"2009": 100, "2010": 100},
"measure": {"2009": 60, "2010": 60}},
"competed": {
"all": {"2009": 50, "2010": 50},
"measure": {"2009": 10, "2010": 10}}},
"energy": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}}},
"energy": {
"total": {
"baseline": {
"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {
"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}}},
"lifetime": {
"baseline": {"2009": 1, "2010": 1},
"measure": 20},
"sub-market scaling": 1}},
"competed choice parameters": {
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (incandescent)', "
"'existing')"): {
"b1": {"2009": 0.25, "2010": 0.25},
"b2": {"2009": 0.25, "2010": 0.25}}},
"secondary mseg adjustments": {
"sub-market": {
"original energy (total)": {},
"adjusted energy (sub-market)": {}},
"stock-and-flow": {
"original energy (total)": {},
"adjusted energy (previously captured)": {},
"adjusted energy (competed)": {},
"adjusted energy (competed and captured)": {}},
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}
},
"mseg_out_break": {
'AIA CZ1': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {
"2009": 1, "2010": 1},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ2': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ3': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ4': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ5': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}}}},
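                # Max adoption potential markets for this sample measure mirror
                # the Technical potential markets defined above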
"Max adoption potential": {
"master_mseg": {
"stock": {
"total": {
"all": {"2009": 100, "2010": 100},
"measure": {"2009": 60, "2010": 60}},
"competed": {
"all": {"2009": 50, "2010": 50},
"measure": {"2009": 10, "2010": 10}}},
"energy": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}}},
"energy": {
"total": {
"baseline": {
"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {
"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}}},
"lifetime": {
"baseline": {"2009": 1, "2010": 1},
"measure": 20}},
"mseg_adjust": {
"contributing mseg keys and values": {
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (incandescent)', "
"'existing')"): {
"stock": {
"total": {
"all": {"2009": 100, "2010": 100},
"measure": {"2009": 60, "2010": 60}},
"competed": {
"all": {"2009": 50, "2010": 50},
"measure": {"2009": 10, "2010": 10}}},
"energy": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}}},
"energy": {
"total": {
"baseline": {
"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {
"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}}},
"lifetime": {
"baseline": {"2009": 1, "2010": 1},
"measure": 20},
"sub-market scaling": 1}},
"competed choice parameters": {
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (incandescent)', "
"'existing')"): {
"b1": {"2009": 0.25, "2010": 0.25},
"b2": {"2009": 0.25, "2010": 0.25}}},
"secondary mseg adjustments": {
"sub-market": {
"original energy (total)": {},
"adjusted energy (sub-market)": {}},
"stock-and-flow": {
"original energy (total)": {},
"adjusted energy (previously captured)": {},
"adjusted energy (competed)": {},
"adjusted energy (competed and captured)": {}},
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}},
"mseg_out_break": {
'AIA CZ1': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {
"2009": 1, "2010": 1},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ2': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ3': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ4': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ5': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}}}}},
"out_break_norm": {
"Technical potential": {"2009": 200, "2010": 200},
"Max adoption potential": {"2009": 200, "2010": 200}}}]
cls.sample_measures_in = [ecm_prep.Measure(
handyvars, **x) for x in sample_measures_in]
# Reset sample measure technology types (initialized as string)
for ind, m in enumerate(cls.sample_measures_in):
m.technology_type = sample_measures_in[ind]["technology_type"]
# Reset sample measure markets (initialized to None)
for ind, m in enumerate(cls.sample_measures_in):
m.markets = sample_measures_in[ind]["markets"]
# Reset total absolute energy use figure used to normalize sample
# measure energy savings summed by climate, building, and end use
for ind, m in enumerate(cls.sample_measures_in):
m.out_break_norm = sample_measures_in[ind]["out_break_norm"]
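        # Define the shared package name and instantiate the sample measure
        # package under the two package benefit scenarios (benefits_test1,
        # benefits_test2)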
cls.sample_package_name = "Package - CAC + CFLs + NGWH"
cls.sample_package_in_test1 = ecm_prep.MeasurePackage(
cls.sample_measures_in, cls.sample_package_name,
benefits_test1, handyvars)
cls.sample_package_in_test2 = ecm_prep.MeasurePackage(
cls.sample_measures_in, cls.sample_package_name,
benefits_test2, handyvars)
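        # Expected general attributes (name, climate zones, building types,
        # structure types, fuel types, and end uses) for the packaged measure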
cls.genattr_ok_out_test1 = [
'Package - CAC + CFLs + NGWH',
['AIA_CZ1', 'AIA_CZ2', 'AIA_CZ5'],
['single family home', 'multi family home'],
['new', 'existing'],
['electricity', 'natural gas'],
['water heating', 'lighting', 'cooling']]
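        # Expected markets data for the packaged measure, aggregated across
        # all contributing measures for each adoption scenario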
cls.markets_ok_out_test1 = {
"Technical potential": {
"master_mseg": {
'stock': {
'total': {
'all': {'2009': 1240, '2010': 1240},
'measure': {'2009': 744, '2010': 744}},
'competed': {
'all': {'2009': 620, '2010': 620},
'measure': {'2009': 124, '2010': 124}}},
'energy': {
'total': {
'baseline': {'2009': 2480, '2010': 2480},
'efficient': {'2009': 1488, '2010': 1488}},
'competed': {
'baseline': {'2009': 1240, '2010': 1240},
'efficient': {'2009': 248, '2010': 248}}},
'carbon': {
'total': {
'baseline': {'2009': 3720, '2010': 3720},
'efficient': {'2009': 2232, '2010': 2232}},
'competed': {
'baseline': {'2009': 1860, '2010': 1860},
'efficient': {'2009': 372, '2010': 372}}},
'cost': {
'stock': {
'total': {
'baseline': {'2009': 340, '2010': 340},
'efficient': {'2009': 612, '2010': 612}},
'competed': {
'baseline': {'2009': 340, '2010': 340},
'efficient': {'2009': 612, '2010': 612}}},
'energy': {
'total': {
'baseline': {'2009': 680, '2010': 680},
'efficient': {'2009': 408, '2010': 408}},
'competed': {
'baseline': {'2009': 340, '2010': 340},
'efficient': {'2009': 68, '2010': 68}}},
'carbon': {
'total': {
'baseline': {'2009': 1020, '2010': 1020},
'efficient': {'2009': 612, '2010': 612}},
'competed': {
'baseline': {'2009': 510, '2010': 510},
'efficient': {'2009': 102, '2010': 102}}}},
"lifetime": {
"baseline": {'2010': (41 / 1240),
'2009': (41 / 1240)},
"measure": 13.29}},
"mseg_adjust": {
"contributing mseg keys and values": {
("('primary', AIA_CZ1', 'single family home', "
"'natural gas', 'water heating', None, 'new')"): {
"stock": {
"total": {
"all": {"2009": 10, "2010": 10},
"measure": {"2009": 6, "2010": 6}},
"competed": {
"all": {"2009": 5, "2010": 5},
"measure": {"2009": 1, "2010": 1}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10},
"sub-market scaling": 1},
("('primary', AIA_CZ1', 'single family home', "
"'natural gas', 'water heating', None, "
"'existing')"): {
"stock": {
"total": {
"all": {"2009": 10, "2010": 10},
"measure": {"2009": 6, "2010": 6}},
"competed": {
"all": {"2009": 5, "2010": 5},
"measure": {"2009": 1, "2010": 1}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10},
"sub-market scaling": 1},
("('primary', AIA_CZ2', 'single family home', "
"'natural gas', 'water heating', None, 'new')"): {
"stock": {
"total": {
"all": {"2009": 10, "2010": 10},
"measure": {"2009": 6, "2010": 6}},
"competed": {
"all": {"2009": 5, "2010": 5},
"measure": {"2009": 1, "2010": 1}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10},
"sub-market scaling": 1},
("('primary', AIA_CZ2', 'single family home', "
"'natural gas', 'water heating', None, "
"'existing')"): {
"stock": {
"total": {
"all": {"2009": 10, "2010": 10},
"measure": {"2009": 6, "2010": 6}},
"competed": {
"all": {"2009": 5, "2010": 5},
"measure": {"2009": 1, "2010": 1}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10},
"sub-market scaling": 1},
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (incandescent)', "
"'existing')"): {
"stock": {
"total": {
"all": {"2009": 100, "2010": 100},
"measure": {"2009": 60, "2010": 60}},
"competed": {
"all": {"2009": 50, "2010": 50},
"measure": {"2009": 10, "2010": 10}}},
"energy": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}}},
"energy": {
"total": {
"baseline": {
"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {
"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}}},
"lifetime": {
"baseline": {"2009": 1, "2010": 1},
"measure": 20},
"sub-market scaling": 1},
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (halogen)', 'existing')"): {
"stock": {
"total": {
"all": {"2009": 100, "2010": 100},
"measure": {"2009": 60, "2010": 60}},
"competed": {
"all": {"2009": 50, "2010": 50},
"measure": {"2009": 10, "2010": 10}}},
"energy": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {"2009": 120, "2010": 120}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 150, "2010": 150},
"efficient": {"2009": 30, "2010": 30}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}}},
"energy": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}}},
"lifetime": {
"baseline": {"2009": 2, "2010": 2},
"measure": 15},
"sub-market scaling": 1},
("('primary', AIA_CZ5', 'single family home', "
"'electricity',"
"'cooling', 'supply', 'ASHP', 'new')"): {
"stock": {
"total": {
"all": {"2009": 1000, "2010": 1000},
"measure": {"2009": 600, "2010": 600}},
"competed": {
"all": {"2009": 500, "2010": 500},
"measure": {"2009": 100, "2010": 100}}},
"energy": {
"total": {
"baseline": {"2009": 2000, "2010": 2000},
"efficient": {"2009": 1200, "2010": 1200}},
"competed": {
"baseline": {"2009": 1000, "2010": 1000},
"efficient": {"2009": 200, "2010": 200}}},
"carbon": {
"total": {
"baseline": {"2009": 3000, "2010": 3000},
"efficient": {"2009": 1800, "2010": 1800}},
"competed": {
"baseline": {"2009": 1500, "2010": 1500},
"efficient": {"2009": 300, "2010": 300}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}}},
"energy": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}}},
"lifetime": {
"baseline": {"2009": 18, "2010": 18},
"measure": 18},
"sub-market scaling": 1}},
"competed choice parameters": {
("('primary', AIA_CZ1', 'single family home', "
"'natural gas', 'water heating', None, 'new')"): {
"b1": {"2009": 0.5, "2010": 0.5},
"b2": {"2009": 0.5, "2010": 0.5}},
("('primary', AIA_CZ1', 'single family home', "
"'natural gas', 'water heating', None, "
"'existing')"): {
"b1": {"2009": 0.5, "2010": 0.5},
"b2": {"2009": 0.5, "2010": 0.5}},
("('primary', AIA_CZ2', 'single family home', "
"'natural gas', 'water heating', None, 'new')"): {
"b1": {"2009": 0.5, "2010": 0.5},
"b2": {"2009": 0.5, "2010": 0.5}},
("('primary', AIA_CZ2', 'single family home', "
"'natural gas', 'water heating', None, "
"'existing')"): {
"b1": {"2009": 0.5, "2010": 0.5},
"b2": {"2009": 0.5, "2010": 0.5}},
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (incandescent)', "
"'existing')"): {
"b1": {"2009": 0.25, "2010": 0.25},
"b2": {"2009": 0.25, "2010": 0.25}},
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (halogen)', "
"'existing')"): {
"b1": {"2009": 0.25, "2010": 0.25},
"b2": {"2009": 0.25, "2010": 0.25}},
("('primary', AIA_CZ5', 'single family home', "
"'electricity',"
"'cooling', 'supply', 'ASHP', 'new')"): {
"b1": {"2009": 0.75, "2010": 0.75},
"b2": {"2009": 0.75, "2010": 0.75}}},
"secondary mseg adjustments": {
"sub-market": {
"original energy (total)": {},
"adjusted energy (sub-market)": {}},
"stock-and-flow": {
"original energy (total)": {},
"adjusted energy (previously captured)": {},
"adjusted energy (competed)": {},
"adjusted energy (competed and captured)": {}},
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}},
"mseg_out_break": {
'AIA CZ1': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {"2009": 0.016, "2010": 0.016},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {
"2009": 0.5510753,
"2010": 0.5510753},
'Refrigeration': {}, 'Other': {},
'Water Heating': {"2009": 0, "2010": 0},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ2': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {"2009": 0.016, "2010": 0.016},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {"2009": 0, "2010": 0},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ3': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ4': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ5': {
'Residential (New)': {
'Cooling (Equip.)': {
"2009": 0.4166667, "2010": 0.4166667},
'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}}}},
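            # Expected Max adoption potential results match the Technical
            # potential results for this test package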
"Max adoption potential": {
"master_mseg": {
'stock': {
'total': {
'all': {'2009': 1240, '2010': 1240},
'measure': {'2009': 744, '2010': 744}},
'competed': {
'all': {'2009': 620, '2010': 620},
'measure': {'2009': 124, '2010': 124}}},
'energy': {
'total': {
'baseline': {'2009': 2480, '2010': 2480},
'efficient': {'2009': 1488, '2010': 1488}},
'competed': {
'baseline': {'2009': 1240, '2010': 1240},
'efficient': {'2009': 248, '2010': 248}}},
'carbon': {
'total': {
'baseline': {'2009': 3720, '2010': 3720},
'efficient': {'2009': 2232, '2010': 2232}},
'competed': {
'baseline': {'2009': 1860, '2010': 1860},
'efficient': {'2009': 372, '2010': 372}}},
'cost': {
'stock': {
'total': {
'baseline': {'2009': 340, '2010': 340},
'efficient': {'2009': 612, '2010': 612}},
'competed': {
'baseline': {'2009': 340, '2010': 340},
'efficient': {'2009': 612, '2010': 612}}},
'energy': {
'total': {
'baseline': {'2009': 680, '2010': 680},
'efficient': {'2009': 408, '2010': 408}},
'competed': {
'baseline': {'2009': 340, '2010': 340},
'efficient': {'2009': 68, '2010': 68}}},
'carbon': {
'total': {
'baseline': {'2009': 1020, '2010': 1020},
'efficient': {'2009': 612, '2010': 612}},
'competed': {
'baseline': {'2009': 510, '2010': 510},
'efficient': {'2009': 102, '2010': 102}}}},
"lifetime": {
"baseline": {'2010': (41 / 1240),
'2009': (41 / 1240)},
"measure": 13.29}},
"mseg_adjust": {
"contributing mseg keys and values": {
("('primary', AIA_CZ1', 'single family home', "
"'natural gas', 'water heating', None, 'new')"): {
"stock": {
"total": {
"all": {"2009": 10, "2010": 10},
"measure": {"2009": 6, "2010": 6}},
"competed": {
"all": {"2009": 5, "2010": 5},
"measure": {"2009": 1, "2010": 1}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10},
"sub-market scaling": 1},
("('primary', AIA_CZ1', 'single family home', "
"'natural gas', 'water heating', None, "
"'existing')"): {
"stock": {
"total": {
"all": {"2009": 10, "2010": 10},
"measure": {"2009": 6, "2010": 6}},
"competed": {
"all": {"2009": 5, "2010": 5},
"measure": {"2009": 1, "2010": 1}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10},
"sub-market scaling": 1},
("('primary', AIA_CZ2', 'single family home', "
"'natural gas', 'water heating', None, 'new')"): {
"stock": {
"total": {
"all": {"2009": 10, "2010": 10},
"measure": {"2009": 6, "2010": 6}},
"competed": {
"all": {"2009": 5, "2010": 5},
"measure": {"2009": 1, "2010": 1}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10},
"sub-market scaling": 1},
("('primary', AIA_CZ2', 'single family home', "
"'natural gas', 'water heating', None, "
"'existing')"): {
"stock": {
"total": {
"all": {"2009": 10, "2010": 10},
"measure": {"2009": 6, "2010": 6}},
"competed": {
"all": {"2009": 5, "2010": 5},
"measure": {"2009": 1, "2010": 1}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {
"2009": 18, "2010": 18}}},
"energy": {
"total": {
"baseline": {"2009": 20, "2010": 20},
"efficient": {"2009": 12, "2010": 12}},
"competed": {
"baseline": {"2009": 10, "2010": 10},
"efficient": {"2009": 2, "2010": 2}}},
"carbon": {
"total": {
"baseline": {"2009": 30, "2010": 30},
"efficient": {"2009": 18, "2010": 18}},
"competed": {
"baseline": {"2009": 15, "2010": 15},
"efficient": {"2009": 3, "2010": 3}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10},
"sub-market scaling": 1},
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (incandescent)', "
"'existing')"): {
"stock": {
"total": {
"all": {"2009": 100, "2010": 100},
"measure": {"2009": 60, "2010": 60}},
"competed": {
"all": {"2009": 50, "2010": 50},
"measure": {"2009": 10, "2010": 10}}},
"energy": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}},
"cost": {
"stock": {
"total": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}}},
"energy": {
"total": {
"baseline": {
"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {
"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {
"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {
"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}}},
"lifetime": {
"baseline": {"2009": 1, "2010": 1},
"measure": 20},
"sub-market scaling": 1},
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (halogen)', 'existing')"): {
"stock": {
"total": {
"all": {"2009": 100, "2010": 100},
"measure": {"2009": 60, "2010": 60}},
"competed": {
"all": {"2009": 50, "2010": 50},
"measure": {"2009": 10, "2010": 10}}},
"energy": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {"2009": 120, "2010": 120}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 150, "2010": 150},
"efficient": {"2009": 30, "2010": 30}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}}},
"energy": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}}},
"lifetime": {
"baseline": {"2009": 2, "2010": 2},
"measure": 15},
"sub-market scaling": 1},
("('primary', AIA_CZ5', 'single family home', "
"'electricity',"
"'cooling', 'supply', 'ASHP', 'new')"): {
"stock": {
"total": {
"all": {"2009": 1000, "2010": 1000},
"measure": {"2009": 600, "2010": 600}},
"competed": {
"all": {"2009": 500, "2010": 500},
"measure": {"2009": 100, "2010": 100}}},
"energy": {
"total": {
"baseline": {"2009": 2000, "2010": 2000},
"efficient": {"2009": 1200, "2010": 1200}},
"competed": {
"baseline": {"2009": 1000, "2010": 1000},
"efficient": {"2009": 200, "2010": 200}}},
"carbon": {
"total": {
"baseline": {"2009": 3000, "2010": 3000},
"efficient": {"2009": 1800, "2010": 1800}},
"competed": {
"baseline": {"2009": 1500, "2010": 1500},
"efficient": {"2009": 300, "2010": 300}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 180, "2010": 180}}},
"energy": {
"total": {
"baseline": {"2009": 200, "2010": 200},
"efficient": {
"2009": 120, "2010": 120}},
"competed": {
"baseline": {"2009": 100, "2010": 100},
"efficient": {
"2009": 20, "2010": 20}}},
"carbon": {
"total": {
"baseline": {"2009": 300, "2010": 300},
"efficient": {
"2009": 180, "2010": 180}},
"competed": {
"baseline": {"2009": 150, "2010": 150},
"efficient": {
"2009": 30, "2010": 30}}}},
"lifetime": {
"baseline": {"2009": 18, "2010": 18},
"measure": 18},
"sub-market scaling": 1}},
"competed choice parameters": {
("('primary', AIA_CZ1', 'single family home', "
"'natural gas', 'water heating', None, 'new')"): {
"b1": {"2009": 0.5, "2010": 0.5},
"b2": {"2009": 0.5, "2010": 0.5}},
("('primary', AIA_CZ1', 'single family home', "
"'natural gas', 'water heating', None, "
"'existing')"): {
"b1": {"2009": 0.5, "2010": 0.5},
"b2": {"2009": 0.5, "2010": 0.5}},
("('primary', AIA_CZ2', 'single family home', "
"'natural gas', 'water heating', None, 'new')"): {
"b1": {"2009": 0.5, "2010": 0.5},
"b2": {"2009": 0.5, "2010": 0.5}},
("('primary', AIA_CZ2', 'single family home', "
"'natural gas', 'water heating', None, "
"'existing')"): {
"b1": {"2009": 0.5, "2010": 0.5},
"b2": {"2009": 0.5, "2010": 0.5}},
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (incandescent)', "
"'existing')"): {
"b1": {"2009": 0.25, "2010": 0.25},
"b2": {"2009": 0.25, "2010": 0.25}},
("('primary', AIA_CZ1', 'single family home', "
"'electricity',"
"'lighting', 'reflector (halogen)', "
"'existing')"): {
"b1": {"2009": 0.25, "2010": 0.25},
"b2": {"2009": 0.25, "2010": 0.25}},
("('primary', AIA_CZ5', 'single family home', "
"'electricity',"
"'cooling', 'supply', 'ASHP', 'new')"): {
"b1": {"2009": 0.75, "2010": 0.75},
"b2": {"2009": 0.75, "2010": 0.75}}},
"secondary mseg adjustments": {
"sub-market": {
"original energy (total)": {},
"adjusted energy (sub-market)": {}},
"stock-and-flow": {
"original energy (total)": {},
"adjusted energy (previously captured)": {},
"adjusted energy (competed)": {},
"adjusted energy (competed and captured)": {}},
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}},
"mseg_out_break": {
'AIA CZ1': {
'Residential (New)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {"2009": 0.016, "2010": 0.016},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {},
'Lighting': {
"2009": 0.5510753,
"2010": 0.5510753},
'Refrigeration': {}, 'Other': {},
'Water Heating': {"2009": 0, "2010": 0},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ2': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {"2009": 0.016, "2010": 0.016},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {"2009": 0, "2010": 0},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ3': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ4': {
'Residential (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}},
'AIA CZ5': {
'Residential (New)': {
'Cooling (Equip.)': {
"2009": 0.4166667, "2010": 0.4166667},
'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Residential (Existing)': {
'Cooling (Equip.)': {},
'Ventilation': {}, 'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (New)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}},
'Commercial (Existing)': {
'Cooling (Equip.)': {}, 'Ventilation': {},
'Lighting': {},
'Refrigeration': {}, 'Other': {},
'Water Heating': {},
'Computers and Electronics': {},
'Heating (Equip.)': {}, 'Envelope': {}}}}}}
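        # Sample contributing microsegment data used to test the application
        # of package-level benefits ('energy savings increase', 'cost
        # reduction') to a measure market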
cls.mseg_ok_in_test2 = {
"stock": {
"total": {
"all": {"2009": 40, "2010": 40},
"measure": {"2009": 24, "2010": 24}},
"competed": {
"all": {"2009": 20, "2010": 20},
"measure": {"2009": 4, "2010": 4}}},
"energy": {
"total": {
"baseline": {"2009": 80, "2010": 80},
"efficient": {"2009": 48, "2010": 48}},
"competed": {
"baseline": {"2009": 40, "2010": 40},
"efficient": {"2009": 8, "2010": 8}}},
"carbon": {
"total": {
"baseline": {"2009": 120, "2010": 120},
"efficient": {"2009": 72, "2010": 72}},
"competed": {
"baseline": {"2009": 60, "2010": 60},
"efficient": {"2009": 12, "2010": 12}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 40, "2010": 40},
"efficient": {"2009": 72, "2010": 72}},
"competed": {
"baseline": {"2009": 40, "2010": 40},
"efficient": {"2009": 72, "2010": 72}}},
"energy": {
"total": {
"baseline": {"2009": 80, "2010": 80},
"efficient": {"2009": 48, "2010": 48}},
"competed": {
"baseline": {"2009": 40, "2010": 40},
"efficient": {"2009": 8, "2010": 8}}},
"carbon": {
"total": {
"baseline": {"2009": 120, "2010": 120},
"efficient": {"2009": 72, "2010": 72}},
"competed": {
"baseline": {"2009": 60, "2010": 60},
"efficient": {"2009": 12, "2010": 12}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10}}
cls.mseg_ok_out_test2 = {
"stock": {
"total": {
"all": {"2009": 40, "2010": 40},
"measure": {"2009": 24, "2010": 24}},
"competed": {
"all": {"2009": 20, "2010": 20},
"measure": {"2009": 4, "2010": 4}}},
"energy": {
"total": {
"baseline": {"2009": 80, "2010": 80},
"efficient": {"2009": 38.4, "2010": 38.4}},
"competed": {
"baseline": {"2009": 40, "2010": 40},
"efficient": {"2009": 0, "2010": 0}}},
"carbon": {
"total": {
"baseline": {"2009": 120, "2010": 120},
"efficient": {"2009": 57.6, "2010": 57.6}},
"competed": {
"baseline": {"2009": 60, "2010": 60},
"efficient": {"2009": 0, "2010": 0}}},
"cost": {
"stock": {
"total": {
"baseline": {"2009": 40, "2010": 40},
"efficient": {"2009": 57.6, "2010": 57.6}},
"competed": {
"baseline": {"2009": 40, "2010": 40},
"efficient": {"2009": 57.6, "2010": 57.6}}},
"energy": {
"total": {
"baseline": {"2009": 80, "2010": 80},
"efficient": {"2009": 38.4, "2010": 38.4}},
"competed": {
"baseline": {"2009": 40, "2010": 40},
"efficient": {"2009": 0, "2010": 0}}},
"carbon": {
"total": {
"baseline": {"2009": 120, "2010": 120},
"efficient": {"2009": 57.6, "2010": 57.6}},
"competed": {
"baseline": {"2009": 60, "2010": 60},
"efficient": {"2009": 0, "2010": 0}}}},
"lifetime": {
"baseline": {"2009": 5, "2010": 5},
"measure": 10}}
def test_merge_measure(self):
"""Test 'merge_measures' function given valid inputs."""
self.sample_package_in_test1.merge_measures()
# Check for correct general attributes for packaged measure
output_lists = [
self.sample_package_in_test1.name,
self.sample_package_in_test1.climate_zone,
self.sample_package_in_test1.bldg_type,
self.sample_package_in_test1.structure_type,
self.sample_package_in_test1.fuel_type,
self.sample_package_in_test1.end_use["primary"]]
for ind in range(0, len(output_lists)):
self.assertEqual(sorted(self.genattr_ok_out_test1[ind]),
sorted(output_lists[ind]))
# Check for correct markets for packaged measure
self.dict_check(
self.sample_package_in_test1.markets, self.markets_ok_out_test1)
def test_apply_pkg_benefits(self):
"""Test 'apply_pkg_benefits' function given valid inputs."""
self.dict_check(
self.sample_package_in_test2.apply_pkg_benefits(
self.mseg_ok_in_test2),
self.mseg_ok_out_test2)


class CleanUpTest(unittest.TestCase, CommonMethods):
"""Test 'split_clean_data' function.
    Ensure that each measure's data are correctly split into the detailed
    competition data needed by subsequent measure competition routines and
    the high-level summary data retained for each individual and packaged
    measure.
Attributes:
handyvars (object): Global variables to use for the test measure.
sample_measlist_in (list): List of individual and packaged measure
objects to clean up.
sample_measlist_out_comp_data (list): Measure competition data that
should be yielded by function given sample measures as input.
        sample_measlist_out_mkt_keys (list): Measure 'markets' keys that
            should be yielded by function given sample measures as input.
        sample_measlist_out_highlev_keys (list): High-level measure summary
            data keys that should be yielded by function given sample
            measures as input.
sample_pkg_meas_names (list): Updated 'contributing_ECMs'
attribute that should be yielded by function for sample
packaged measure.
"""
@classmethod
def setUpClass(cls):
"""Define variables and objects for use across all class functions."""
# Base directory
base_dir = os.getcwd()
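        # Package-level benefit inputs (no additional energy savings or
        # cost reduction is applied for these cleanup tests)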
benefits = {
"energy savings increase": None,
"cost reduction": None}
cls.handyvars = ecm_prep.UsefulVars(base_dir,
ecm_prep.UsefulInputFiles())
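        # Minimal definitions for two individual measures to be cleaned up
        # and packaged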
sample_measindiv_dicts = [{
"name": "cleanup 1",
"market_entry_year": None,
"market_exit_year": None,
"measure_type": "full service",
"technology": {
"primary": None, "secondary": None}},
{
"name": "cleanup 2",
"market_entry_year": None,
"market_exit_year": None,
"measure_type": "full service",
"technology": {
"primary": None, "secondary": None}}]
cls.sample_measlist_in = [ecm_prep.Measure(
cls.handyvars, **x) for x in sample_measindiv_dicts]
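        # Package deep copies of the individual measures and add the package
        # to the list of measures to clean up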
sample_measpackage = ecm_prep.MeasurePackage(
copy.deepcopy(cls.sample_measlist_in), "cleanup 3",
benefits, cls.handyvars)
cls.sample_measlist_in.append(sample_measpackage)
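        # Expected measure competition data output for each of the two
        # individual measures and the package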
cls.sample_measlist_out_comp_data = [{
"Technical potential": {
"contributing mseg keys and values": {},
"competed choice parameters": {},
"secondary mseg adjustments": {
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}},
"Max adoption potential": {
"contributing mseg keys and values": {},
"competed choice parameters": {},
"secondary mseg adjustments": {
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}}},
{
"Technical potential": {
"contributing mseg keys and values": {},
"competed choice parameters": {},
"secondary mseg adjustments": {
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}},
"Max adoption potential": {
"contributing mseg keys and values": {},
"competed choice parameters": {},
"secondary mseg adjustments": {
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}}},
{
"Technical potential": {
"contributing mseg keys and values": {},
"competed choice parameters": {},
"secondary mseg adjustments": {
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}},
"Max adoption potential": {
"contributing mseg keys and values": {},
"competed choice parameters": {},
"secondary mseg adjustments": {
"market share": {
"original energy (total captured)": {},
"original energy (competed and captured)": {},
"adjusted energy (total captured)": {},
"adjusted energy (competed and captured)": {}}}}}]
cls.sample_measlist_out_mkt_keys = ["master_mseg", "mseg_out_break"]
cls.sample_measlist_out_highlev_keys = [
["market_entry_year", "market_exit_year", "markets",
"name", "out_break_norm", "remove", 'technology',
'technology_type', 'yrs_on_mkt', 'measure_type'],
["market_entry_year", "market_exit_year", "markets",
"name", "out_break_norm", "remove", 'technology',
'technology_type', 'yrs_on_mkt', 'measure_type'],
['benefits', 'bldg_type', 'climate_zone', 'end_use', 'fuel_type',
"technology", "technology_type",
"market_entry_year", "market_exit_year", 'markets',
'contributing_ECMs', 'name', "out_break_norm", 'remove',
'structure_type', 'yrs_on_mkt', 'measure_type']]
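        # Names of the individual measures expected in the package's
        # 'contributing_ECMs' attribute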
cls.sample_pkg_meas_names = [x["name"] for x in sample_measindiv_dicts]
def test_cleanup(self):
"""Test 'split_clean_data' function given valid inputs."""
# Execute the function
measures_comp_data, measures_summary_data = \
ecm_prep.split_clean_data(self.sample_measlist_in)
# Check function outputs
for ind in range(0, len(self.sample_measlist_in)):
# Check measure competition data
self.dict_check(self.sample_measlist_out_comp_data[ind],
measures_comp_data[ind])
# Check measure summary data
for adopt_scheme in self.handyvars.adopt_schemes:
self.assertEqual(sorted(list(measures_summary_data[
ind].keys())),
sorted(self.sample_measlist_out_highlev_keys[ind]))
self.assertEqual(sorted(list(measures_summary_data[
ind]["markets"][adopt_scheme].keys())),
sorted(self.sample_measlist_out_mkt_keys))
# Verify correct updating of 'contributing_ECMs'
# MeasurePackage attribute
if "Package: " in measures_summary_data[ind]["name"]:
self.assertEqual(measures_summary_data[ind][
"contributing_ECMs"], self.sample_pkg_meas_names)
# Offer external code execution (include all lines below this point in all
# test files)


def main():
"""Trigger default behavior of running all test fixtures in the file."""
unittest.main()


if __name__ == "__main__":
main()

# tests/Monkeypatching/test_Api_monkeypatching_api_post.py
import unittest
import requests
from assertpy import assert_that
from requests.exceptions import Timeout
from unittest.mock import Mock, patch
from src.Api import Api


class TestApiMonkeyPatch(unittest.TestCase):
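    # These tests patch the Api class from src.Api and use Mock objects to
    # verify how its 'api_post' method is called and what it returns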
@patch('src.Api.Api', autospec=True)
def test_method_api_post_raises_timeout(self, mock_class):
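        # Force the mocked api_post to raise Timeout via side_effect and
        # confirm the exception propagates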
mock_data = Mock()
mock_data.return_value = {
"userId": 1,
"title": "Lorem",
"completed": False
}
mock_class.api_post.side_effect = Timeout
with self.assertRaises(Timeout):
mock_class.api_post(mock_data)
def test_method_api_post_assert_that_called_once(self):
with patch('src.Api.Api', autospec=True) as mock_api:
mock_data = Mock()
mock_data.return_value = {
"userId": 1,
"title": "Lorem",
"completed": False
}
mock_api.api_post(mock_data)
mock_api.api_post.assert_called_once()
def test_method_api_post_assert_that_called(self):
with patch('src.Api.Api', autospec=True) as mock_api:
mock_data = Mock()
mock_data.return_value = {
"userId": 1,
"title": "Lorem",
"completed": False
}
mock_data2 = Mock()
mock_data2.return_value = {
"userId": 2,
"title": "Lorem ipsum",
"completed": True
}
mock_api.api_post(mock_data)
mock_api.api_post(mock_data2)
mock_api.api_post.assert_called()
def test_method_api_post_assert_that_not_called(self):
with patch('src.Api.Api', autospec=True) as mock_api:
mock_data = Mock()
mock_data.return_value = {
"userId": 1,
"title": "Lorem",
"completed": False
}
mock_api.api_post.assert_not_called()
def test_method_api_post_assert_that_called_with_mock_data_userId_1_title_Lorem(self):
with patch('src.Api.Api', autospec=True) as mock_api:
mock_data = Mock()
mock_data.return_value = {
"userId": 1,
"title": "Lorem",
"completed": False
}
mock_api.api_post(mock_data)
mock_api.api_post.assert_called_with(mock_data)
def test_method_api_post_assert_that_called_once_with_mock_data_userId_1_title_Lorem(self):
with patch('src.Api.Api', autospec=True) as mock_api:
mock_data = Mock()
mock_data.return_value = {
"userId": 1,
"title": "Lorem",
"completed": False
}
mock_api.api_post(mock_data)
mock_api.api_post.assert_called_once_with(mock_data)
def test_method_api_post_assert_that_response_has_status_code_200(self):
with patch('src.Api.Api', autospec=True) as mock_api:
post_todo = {
"userId": 1,
"title": "Lorem",
"completed": False
}
mock_api.api_post.return_value = {"posted_data": post_todo, "status_code": 200}
response = mock_api.api_post(post_todo)
assert_that(response).has_status_code(200)
def test_method_api_post_assert_that_response_status_code_is_not_200(self):
with patch('src.Api.Api', autospec=True) as mock_api:
post_todo = {
"userId": 1,
"title": "Lorem",
"completed": False
}
mock_api.api_post.return_value = {"status_code": 408}
response = mock_api.api_post(post_todo)
assert_that(response["status_code"]).is_not_equal_to(200)
def test_method_api_post_assert_that_response_returns_posted_data(self):
with patch('src.Api.Api', autospec=True) as mock_api:
post_todo = {
"userId": 1,
"title": "Lorem",
"completed": False
}
mock_api.api_post.return_value = {"posted_data": post_todo, "status_code": 200}
response = mock_api.api_post(post_todo)
assert_that(response["posted_data"]).is_equal_to(post_todo)
def test_method_api_post_assert_that_response_is_instance_of_dict(self):
with patch('src.Api.Api', autospec=True) as mock_api:
post_todo = {
"userId": 1,
"title": "Lorem",
"completed": False
}
mock_api.api_post.return_value = {"posted_data": post_todo, "status_code": 200}
response = mock_api.api_post(post_todo)
assert_that(response).is_instance_of(dict)
def test_method_api_post_assert_that_not_called_exception(self):
with patch('src.Api.Api', autospec=True) as mock_api:
mock_data = Mock()
mock_data.return_value = {
"userId": 1,
"title": "Lorem",
"completed": False
}
mock_api.api_post(mock_data)
with self.assertRaises(AssertionError):
mock_api.api_post.assert_not_called()
def test_method_api_post_assert_that_called_once_exception(self):
with patch('src.Api.Api', autospec=True) as mock_api:
mock_data = Mock()
mock_data.return_value = {
"userId": 1,
"title": "Lorem",
"completed": False
}
mock_data2 = Mock()
mock_data2.return_value = {
"userId": 2,
"title": "Lorem ipsum",
"completed": True
}
mock_api.api_post(mock_data)
mock_api.api_post(mock_data2)
with self.assertRaises(AssertionError):
mock_api.api_post.assert_called_once()
def test_method_api_post_assert_that_called_with_mock_data_userId_1_title_Lorem_exception(self):
with patch('src.Api.Api', autospec=True) as mock_api:
mock_data = Mock()
mock_data.return_value = {
"userId": 1,
"title": "Lorem",
"completed": False
}
mock_data2 = Mock()
mock_data2.return_value = {
"userId": 2,
"title": "Lorem ipsum",
"completed": True
}
mock_api.api_post(mock_data2)
with self.assertRaises(AssertionError):
mock_api.api_post.assert_called_with(mock_data)
def test_method_api_post_assert_that_called_once_with_mock_data_userId_1_title_Lorem_exception(self):
with patch('src.Api.Api', autospec=True) as mock_api:
mock_data = Mock()
mock_data.return_value = {
"userId": 1,
"title": "Lorem",
"completed": False
}
mock_data2 = Mock()
mock_data2.return_value = {
"userId": 2,
"title": "Lorem ipsum",
"completed": True
}
mock_api.api_post(mock_data)
mock_api.api_post(mock_data2)
with self.assertRaises(AssertionError):
mock_api.api_post.assert_called_once_with(mock_data)
def test_method_api_post_no_parameter_exception(self):
with patch('src.Api.Api', autospec=True) as mock_api:
with self.assertRaises(TypeError):
mock_api.api_post()
def test_method_api_post_assert_that_response_returns_ValueError_when_called_with_empty_obj_exception(self):
with patch('src.Api.Api', autospec=True) as mock_api:
post_todo = {}
mock_api.api_post.return_value = {"status_code": 408}
mock_api.api_post.side_effect = ValueError
assert_that(mock_api.api_post).raises(ValueError).when_called_with(post_todo)
def test_method_api_post_assert_that_response_returns_ValueError_when_called_with_obj_without_key_userId_exception(self):
with patch('src.Api.Api', autospec=True) as mock_api:
post_todo = {
"title": "Lorem",
"completed": False
}
mock_api.api_post.return_value = {"status_code": 408}
mock_api.api_post.side_effect = ValueError
assert_that(mock_api.api_post).raises(ValueError).when_called_with(post_todo)
def test_method_api_post_assert_that_response_returns_ValueError_when_called_with_obj_without_key_title_exception(self):
with patch('src.Api.Api', autospec=True) as mock_api:
post_todo = {
"userId": 1,
"completed": False
}
mock_api.api_post.return_value = {"status_code": 408}
mock_api.api_post.side_effect = ValueError
assert_that(mock_api.api_post).raises(ValueError).when_called_with(post_todo)
def test_method_api_post_assert_that_response_returns_ValueError_when_called_with_obj_without_key_completed_exception(self):
with patch('src.Api.Api', autospec=True) as mock_api:
post_todo = {
"userId": 1,
"title": "Lorem",
}
mock_api.api_post.return_value = {"status_code": 408}
mock_api.api_post.side_effect = ValueError
assert_that(mock_api.api_post).raises(ValueError).when_called_with(post_todo)
if __name__ == '__main__':
unittest.main()
| 38.724696 | 128 | 0.587977 | 1,127 | 9,565 | 4.575865 | 0.064774 | 0.095016 | 0.079504 | 0.111305 | 0.918945 | 0.915067 | 0.906147 | 0.889083 | 0.886368 | 0.84778 | 0 | 0.011041 | 0.318244 | 9,565 | 246 | 129 | 38.882114 | 0.779788 | 0 | 0 | 0.718894 | 0 | 0 | 0.09242 | 0 | 0 | 0 | 0 | 0 | 0.18894 | 1 | 0.087558 | false | 0 | 0.02765 | 0 | 0.119816 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b7bd913dad3337a65809bfbd1bbb1cb1e298c588 | 6,574 | py | Python | test/ci_app_tests/test_aggregate.py | slabasan/Caliper | 85601f48e7f883fb87dec85e92c849eec2bb61f7 | [
"BSD-3-Clause"
] | 220 | 2016-01-19T19:00:10.000Z | 2022-03-29T02:09:39.000Z | test/ci_app_tests/test_aggregate.py | slabasan/Caliper | 85601f48e7f883fb87dec85e92c849eec2bb61f7 | [
"BSD-3-Clause"
] | 328 | 2016-05-12T15:47:30.000Z | 2022-03-30T19:42:02.000Z | test/ci_app_tests/test_aggregate.py | slabasan/Caliper | 85601f48e7f883fb87dec85e92c849eec2bb61f7 | [
"BSD-3-Clause"
] | 48 | 2016-03-04T22:04:39.000Z | 2021-12-18T12:11:43.000Z | # Basic smoke tests: aggregation
import unittest
import calipertest as calitest
class CaliperAggregationTest(unittest.TestCase):
""" Caliper test case """
def test_aggregate_default(self):
target_cmd = [ './ci_test_aggregate' ]
query_cmd = [ '../../src/tools/cali-query/cali-query', '-e' ]
caliper_config = {
'CALI_SERVICES_ENABLE' : 'aggregate:event:recorder:timestamp',
'CALI_TIMER_SNAPSHOT_DURATION' : 'true',
'CALI_TIMER_INCLUSIVE_DURATION' : 'true',
'CALI_RECORDER_FILENAME' : 'stdout',
'CALI_LOG_VERBOSITY' : '0'
}
query_output = calitest.run_test_with_query(target_cmd, query_cmd, caliper_config)
snapshots = calitest.get_snapshots_from_text(query_output)
self.assertTrue(calitest.has_snapshot_with_keys(
snapshots, [ 'loop.id', 'function',
'sum#time.inclusive.duration',
'min#time.inclusive.duration',
'max#time.inclusive.duration',
'sum#time.duration',
'count' ] ))
self.assertTrue(calitest.has_snapshot_with_attributes(
snapshots, {
'event.end#function': 'foo',
'loop.id': 'A',
'count': '6' }))
self.assertTrue(calitest.has_snapshot_with_attributes(
snapshots, {
'event.end#function': 'foo',
'loop.id': 'B',
'count': '4' }))
def test_aggregate_combined_key(self):
target_cmd = [ './ci_test_aggregate' ]
query_cmd = [ '../../src/tools/cali-query/cali-query', '-e' ]
caliper_config = {
'CALI_SERVICES_ENABLE' : 'aggregate:event:recorder',
'CALI_AGGREGATE_KEY' : 'event.end#function,iteration',
'CALI_RECORDER_FILENAME' : 'stdout',
'CALI_LOG_VERBOSITY' : '0'
}
query_output = calitest.run_test_with_query(target_cmd, query_cmd, caliper_config)
snapshots = calitest.get_snapshots_from_text(query_output)
self.assertTrue(calitest.has_snapshot_with_attributes(
snapshots, {
'event.end#function': 'foo',
'iteration': '1',
'count': '3' }))
self.assertTrue(calitest.has_snapshot_with_attributes(
snapshots, {
'event.end#function': 'foo',
'iteration': '3',
'count': '1' }))
def test_aggregate_value_key(self):
target_cmd = [ './ci_test_aggregate' ]
query_cmd = [ '../../src/tools/cali-query/cali-query', '-e' ]
caliper_config = {
'CALI_SERVICES_ENABLE' : 'aggregate:event:recorder',
'CALI_AGGREGATE_KEY' : 'iteration',
'CALI_RECORDER_FILENAME' : 'stdout',
'CALI_LOG_VERBOSITY' : '0'
}
query_output = calitest.run_test_with_query(target_cmd, query_cmd, caliper_config)
snapshots = calitest.get_snapshots_from_text(query_output)
self.assertTrue(calitest.has_snapshot_with_attributes(
snapshots, {
'iteration': '1',
'count': '8' }))
self.assertTrue(calitest.has_snapshot_with_attributes(
snapshots, {
'iteration': '3',
'count': '3' }))
self.assertFalse(calitest.has_snapshot_with_keys(
snapshots, [ 'function', 'loop.id' ]))
def test_aggregate_nested(self):
target_cmd = [ './ci_test_aggregate' ]
query_cmd = [ '../../src/tools/cali-query/cali-query', '-e' ]
caliper_config = {
'CALI_SERVICES_ENABLE' : 'aggregate:event:recorder',
'CALI_AGGREGATE_KEY' : 'prop:nested',
'CALI_RECORDER_FILENAME' : 'stdout',
'CALI_LOG_VERBOSITY' : '0'
}
query_output = calitest.run_test_with_query(target_cmd, query_cmd, caliper_config)
snapshots = calitest.get_snapshots_from_text(query_output)
self.assertTrue(calitest.has_snapshot_with_attributes(
snapshots, {
'loop.id' : 'A',
'function': 'foo',
'count' : '6' }))
self.assertTrue(calitest.has_snapshot_with_attributes(
snapshots, {
'loop.id' : 'B',
'function': 'foo',
'count' : '4' }))
def test_aggregate_implicit_and_value(self):
target_cmd = [ './ci_test_aggregate' ]
query_cmd = [ '../../src/tools/cali-query/cali-query', '-e' ]
caliper_config = {
'CALI_SERVICES_ENABLE' : 'aggregate:event:recorder',
'CALI_AGGREGATE_KEY' : '*,iteration',
'CALI_RECORDER_FILENAME' : 'stdout',
'CALI_LOG_VERBOSITY' : '0'
}
query_output = calitest.run_test_with_query(target_cmd, query_cmd, caliper_config)
snapshots = calitest.get_snapshots_from_text(query_output)
self.assertTrue(calitest.has_snapshot_with_attributes(
snapshots, {
'loop.id' : 'A',
'event.end#function': 'foo',
'iteration': '1',
'count' : '2' }))
self.assertTrue(calitest.has_snapshot_with_attributes(
snapshots, {
'loop.id' : 'B',
'function' : 'foo',
'iteration': '3',
'count' : '1' }))
def test_aggregate_attributes(self):
target_cmd = [ './ci_test_aggregate' ]
query_cmd = [ '../../src/tools/cali-query/cali-query', '-e' ]
caliper_config = {
'CALI_SERVICES_ENABLE' : 'aggregate:event:recorder:timestamp',
'CALI_TIMER_SNAPSHOT_DURATION' : 'true',
'CALI_TIMER_INCLUSIVE_DURATION' : 'true',
'CALI_AGGREGATE_ATTRIBUTES' : 'time.duration',
'CALI_RECORDER_FILENAME' : 'stdout',
'CALI_LOG_VERBOSITY' : '0'
}
query_output = calitest.run_test_with_query(target_cmd, query_cmd, caliper_config)
snapshots = calitest.get_snapshots_from_text(query_output)
self.assertTrue(calitest.has_snapshot_with_keys(
snapshots, [ 'loop.id', 'function',
'sum#time.duration',
'count' ] ))
self.assertFalse(calitest.has_snapshot_with_keys(
snapshots, [ 'sum#time.inclusive.duration' ] ))
if __name__ == "__main__":
unittest.main()
| 38.444444 | 90 | 0.558412 | 630 | 6,574 | 5.477778 | 0.133333 | 0.044625 | 0.077079 | 0.093306 | 0.878296 | 0.855984 | 0.855984 | 0.844393 | 0.814836 | 0.777456 | 0 | 0.004894 | 0.316246 | 6,574 | 170 | 91 | 38.670588 | 0.762848 | 0.007606 | 0 | 0.791367 | 0 | 0 | 0.264539 | 0.121682 | 0 | 0 | 0 | 0 | 0.100719 | 1 | 0.043165 | false | 0 | 0.014388 | 0 | 0.064748 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
b7fa9ab7ca3858b1272c4cea2463ea60301e06a3 | 259 | py | Python | chainermn/datasets/__init__.py | zaltoprofen/chainer | 3b03f9afc80fd67f65d5e0395ef199e9506b6ee1 | [
"MIT"
] | 3,705 | 2017-06-01T07:36:12.000Z | 2022-03-30T10:46:15.000Z | chainermn/datasets/__init__.py | zaltoprofen/chainer | 3b03f9afc80fd67f65d5e0395ef199e9506b6ee1 | [
"MIT"
] | 5,998 | 2017-06-01T06:40:17.000Z | 2022-03-08T01:42:44.000Z | chainermn/datasets/__init__.py | zaltoprofen/chainer | 3b03f9afc80fd67f65d5e0395ef199e9506b6ee1 | [
"MIT"
] | 1,150 | 2017-06-02T03:39:46.000Z | 2022-03-29T02:29:32.000Z | from chainermn.datasets.empty_dataset import create_empty_dataset # NOQA
from chainermn.datasets.scatter import DataSizeError # NOQA
from chainermn.datasets.scatter import scatter_index # NOQA
from chainermn.datasets.scatter import scatter_dataset # NOQA
| 51.8 | 73 | 0.84556 | 33 | 259 | 6.484848 | 0.333333 | 0.242991 | 0.392523 | 0.350467 | 0.598131 | 0.598131 | 0.420561 | 0 | 0 | 0 | 0 | 0 | 0.108108 | 259 | 4 | 74 | 64.75 | 0.926407 | 0.073359 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
4d4576981afbce5aabdd79ff066fcd32a5612251 | 3,110 | py | Python | _labWIP/lab06/lab05_tests.py | shenmo3/ucsb-cs8-f18.github.io | 4c751d3ea99e7ee03ae95f9840a3031088ea99d7 | [
"MIT"
] | 2 | 2018-02-24T07:29:55.000Z | 2021-04-30T14:39:57.000Z | _labWIP/lab06/lab05_tests.py | shenmo3/ucsb-cs8-f18.github.io | 4c751d3ea99e7ee03ae95f9840a3031088ea99d7 | [
"MIT"
] | 3 | 2020-02-25T15:59:52.000Z | 2021-09-27T21:47:59.000Z | _labWIP/lab06/lab05_tests.py | shenmo3/ucsb-cs8-f18.github.io | 4c751d3ea99e7ee03ae95f9840a3031088ea99d7 | [
"MIT"
] | 8 | 2018-09-27T16:07:04.000Z | 2019-01-15T23:06:11.000Z | #lab05_tests.py
from lab05 import create_screen
####################
from lab05 import invert_pixels
# Tests for invert_pixels
def test_invert_pixels1():
assert invert_pixels([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) \
== [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
def test_invert_pixels2():
assert invert_pixels([[1, 0], [1, 0], [1, 0], [1, 0]]) \
== [[0, 1], [0, 1], [0, 1], [0, 1]]
def test_invert_pixels3():
assert invert_pixels([[0]]) \
== [[1]]
def test_invert_pixels4():
assert invert_pixels([[1, 1, 1], [0, 0, 0], [0, 0, 0]]) \
== [[0, 0, 0], [1, 1, 1], [1, 1, 1]]
####################
from lab05 import fill_rect
# Tests for fill_rect
def test_fill_rect1():
assert fill_rect(0,0,4,4,create_screen(5,5)) == \
[[1, 1, 1, 1, 1], [1, 1, 1, 1, 1], [1, 1, 1, 1, 1],
[1, 1, 1, 1, 1], [1, 1, 1, 1, 1]]
def test_fill_rect2():
assert fill_rect(1,1,3,2,create_screen(5,5)) == \
[[0, 0, 0, 0, 0], [0, 1, 1, 0, 0], [0, 1, 1, 0, 0],
[0, 1, 1, 0, 0], [0, 0, 0, 0, 0]]
def test_fill_rect3():
assert fill_rect(0,0,1,0,create_screen(3,3)) == \
[[1, 0, 0], [1, 0, 0], [0, 0, 0]]
def test_fill_rect4():
assert fill_rect(1,0,1,2,create_screen(3,3)) == \
[[0, 0, 0], [1, 1, 1], [0, 0, 0]]
####################
from lab05 import draw_rect
def test_draw_rect1():
assert draw_rect(0,0,4,4,create_screen(5,5)) == \
[[1, 1, 1, 1, 1], [1, 0, 0, 0, 1], [1, 0, 0, 0, 1],
[1, 0, 0, 0, 1], [1, 1, 1, 1, 1]]
def test_draw_rect2():
assert draw_rect(1,1,4,3,create_screen(5,5)) == \
[[0, 0, 0, 0, 0], [0, 1, 1, 1, 0], [0, 1, 0, 1, 0],
[0, 1, 0, 1, 0], [0, 1, 1, 1, 0]]
def test_draw_rect3():
assert draw_rect(0,0,0,2,create_screen(3,3)) == \
[[1, 1, 1], [0, 0, 0], [0, 0, 0]]
def test_draw_rect4():
assert draw_rect(1,0,3,3,create_screen(4,4)) == \
[[0, 0, 0, 0], [1, 1, 1, 1],
[1, 0, 0, 1], [1, 1, 1, 1]]
####################
from lab05 import draw_line
def test_draw_line1():
assert draw_line(0,0,4,4,create_screen(5,5)) == \
[[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 1, 0, 0],
[0, 0, 0, 1, 0], [0, 0, 0, 0, 1]]
def test_draw_line2():
assert draw_line(3,5,2,0,create_screen(6,6)) == \
[[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]]
def test_draw_line3():
assert draw_line(1,1,1,4,create_screen(5,5)) == \
[[0, 0, 0, 0, 0], [0, 1, 1, 1, 1], [0, 0, 0, 0, 0],
[0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
def test_draw_line4():
assert draw_line(0,2,5,2,create_screen(6,6)) == \
[[0, 0, 1, 0, 0, 0], [0, 0, 1, 0, 0, 0], [0, 0, 1, 0, 0, 0],
[0, 0, 1, 0, 0, 0], [0, 0, 1, 0, 0, 0], [0, 0, 1, 0, 0, 0]]
def test_draw_line5():
assert draw_line(2,2,2,2,create_screen(6,6)) == \
[[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]]
| 32.736842 | 72 | 0.441801 | 614 | 3,110 | 2.120521 | 0.060261 | 0.282642 | 0.320277 | 0.322581 | 0.603687 | 0.479263 | 0.466206 | 0.425499 | 0.3702 | 0.312596 | 0 | 0.207454 | 0.283923 | 3,110 | 94 | 73 | 33.085106 | 0.377189 | 0.01865 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.257576 | 1 | 0.257576 | true | 0 | 0.075758 | 0 | 0.333333 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4d8f3175e937da78b456d28131a5f2315df9e5a5 | 17,810 | py | Python | Data.py | maurofm1992/smartpanel | 2eeddccc25ba35e594b34c552a30d243dd020918 | [
"Apache-2.0"
] | null | null | null | Data.py | maurofm1992/smartpanel | 2eeddccc25ba35e594b34c552a30d243dd020918 | [
"Apache-2.0"
] | null | null | null | Data.py | maurofm1992/smartpanel | 2eeddccc25ba35e594b34c552a30d243dd020918 | [
"Apache-2.0"
] | null | null | null | from cloudant.client import Cloudant
def getTimeLast():
client = Cloudant("39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix",
"48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff",
url="https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com")
client.connect()
end_point = '{0}/{1}'.format("https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com", "coolstuff" + "/_all_docs?")
end_point_status = '{0}/{1}'.format("https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com", "status" + "/_all_docs?")
params = {'include_docs': 'true'}
response = client.r_session.get(end_point,params=params)
response_status = client.r_session.get(end_point_status,params=params)
i=1
table = []
while (i<7):
table.append(response.json()['rows'][-i]['doc']['current'])
# table[i] = (response.json
# table.insert(i,response.json()['rows'][i]['doc']['current'])
i = i+1
client.disconnect()
return table
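# A minimal sketch of a shared connection helper (illustrative only, never called by the
# functions below): every helper in this module repeats the same hardcoded credentials and
# connect() boilerplate, which could instead go through something like this, with the
# username/password/url values coming from configuration rather than literals.
def _connect_cloudant(username, password, url):
    """Create and connect a Cloudant client."""
    client = Cloudant(username, password, url=url)
    client.connect()
    return client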
# Note: despite the name, this currently returns the most recent data points, which are sampled every 3 seconds
def getDataByMinute():
client = Cloudant("39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix",
"48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff",
url="https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com")
client.connect()
end_point = '{0}/{1}'.format("https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com", "coolstuff" + "/_all_docs?")
params = {'include_docs': 'true'}
response = client.r_session.get(end_point,params=params)
i=1
table = []
while (i<7):
table.append(response.json()['rows'][-i]['doc']['Power'])
# table[i] = (response.json
# table.insert(i,response.json()['rows'][i]['doc']['current'])
i = i+1
client.disconnect()
return table
def getDataByMinute2():
client = Cloudant("39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix",
"48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff",
url="https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com")
client.connect()
end_point = '{0}/{1}'.format("https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com", "load2" + "/_all_docs?")
params = {'include_docs': 'true'}
response = client.r_session.get(end_point,params=params)
i=1
table = []
while (i<7):
table.append(response.json()['rows'][-i]['doc']['Power'])
# table[i] = (response.json
# table.insert(i,response.json()['rows'][i]['doc']['current'])
i = i+1
client.disconnect()
return table
def getDataByMinute3():
client = Cloudant("39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix",
"48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff",
url="https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com")
client.connect()
end_point = '{0}/{1}'.format("https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com", "load3" + "/_all_docs?")
params = {'include_docs': 'true'}
response = client.r_session.get(end_point,params=params)
i=1
table = []
while (i<7):
table.append(response.json()['rows'][-i]['doc']['Power'])
# table[i] = (response.json
# table.insert(i,response.json()['rows'][i]['doc']['current'])
i = i+1
    client.disconnect()
    return table
def getDataByMinute4():
client = Cloudant("39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix",
"48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff",
url="https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com")
client.connect()
end_point = '{0}/{1}'.format("https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com", "load4" + "/_all_docs?")
params = {'include_docs': 'true'}
response = client.r_session.get(end_point,params=params)
i=1
table = []
while (i<7):
table.append(response.json()['rows'][-i]['doc']['Power'])
# table[i] = (response.json
# table.insert(i,response.json()['rows'][i]['doc']['current'])
i = i+1
client.disconnect()
return table
def getDataBySecond(load):
client = Cloudant("39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix",
"48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff",
url="https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com")
client.connect()
end_point = '{0}/{1}'.format("https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com", "load" + load+"/_all_docs?")
params = {'include_docs': 'true'}
response = client.r_session.get(end_point,params=params)
i=1
table = []
while (i<7):
table.append(response.json()['rows'][-i]['doc']['Power'])
# table[i] = (response.json
# table.insert(i,response.json()['rows'][i]['doc']['current'])
i = i+1
client.disconnect()
return table
def getDataFromMinTable(load):
client = Cloudant("39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix",
"48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff",
url="https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com")
client.connect()
end_point = '{0}/{1}'.format("https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com", "load" + load+"_min/_all_docs?")
params = {'include_docs': 'true'}
response = client.r_session.get(end_point,params=params)
i=1
table = []
while (i<7):
table.append(response.json()['rows'][-i]['doc']['data'])
# table[i] = (response.json
# table.insert(i,response.json()['rows'][i]['doc']['current'])
i = i+1
client.disconnect()
return table
def getDataFor24(load):
client = Cloudant("39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix",
"48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff",
url="https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com")
client.connect()
end_point = '{0}/{1}'.format("https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com", "load" + load+"/_all_docs?")
params = {'include_docs': 'true'}
response = client.r_session.get(end_point,params=params)
i=1
table = []
while (i<25):
table.append(response.json()['rows'][-i]['doc']['Power'])
# table[i] = (response.json
# table.insert(i,response.json()['rows'][i]['doc']['current'])
i = i+1
client.disconnect()
return table
def getDataFor5min(load):
client = Cloudant("39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix",
"48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff",
url="https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com")
client.connect()
end_point = '{0}/{1}'.format("https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com", "load" + load+"/_all_docs?")
params = {'include_docs': 'true'}
response = client.r_session.get(end_point,params=params)
i=1
table = []
while (i<7):
#make a function that adds all the data for past min
#each db entry is an average of 3 seconds
# there are 60/3 entries in a min
if i==1:
x=1
else:
x= (i-1)*60
total_power_for_one_min = 0
while (x< 60*i):
total_power_for_one_min += response.json()['rows'][-x]['doc']['Power']
x += 1
#since we are getting the average for 3 seconds we are only getting
#20 seconds worth of total power so multiply by 3 and you get one minute
table.append(total_power_for_one_min)
# table[i] = (response.json
# table.insert(i,response.json()['rows'][i]['doc']['current'])
i = i+1
client.disconnect()
return table
def getDataByMin():
client = Cloudant("39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix",
"48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff",
url="https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com")
client.connect()
end_point = '{0}/{1}'.format("https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com", "coolstuff" + "/_all_docs?")
params = {'include_docs': 'true'}
response = client.r_session.get(end_point,params=params)
i=1
table = []
while (i<7):
#make a function that adds all the data for past min
#each db entry is an average of 3 seconds
# there are 60/3 entries in a min
if i==1:
x=1
else:
x= (20*(i-1))
total_power_for_one_min = 0
while (x< 20*i):
total_power_for_one_min += response.json()['rows'][-x]['doc']['Power']
x += 1
#since we are getting the average for 3 seconds we are only getting
#20 seconds worth of total power so multiply by 3 and you get one minute
table.append(total_power_for_one_min * 3)
# table[i] = (response.json
# table.insert(i,response.json()['rows'][i]['doc']['current'])
i = i+1
client.disconnect()
return table
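# A minimal sketch of the per-minute arithmetic used above (illustrative only, not called):
# each stored 'Power' value is treated as an average over a 3-second window, so 60 / 3 = 20
# documents cover one minute, and the sum of those 20 samples is multiplied by 3.
def _minute_total(samples_3s):
    """Combine twenty 3-second power samples into a one-minute total."""
    return sum(samples_3s) * 3
# Example: twenty samples of 1.5 each gives _minute_total([1.5] * 20) == 90.0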
def getDataByMin2():
client = Cloudant("39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix",
"48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff",
url="https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com")
client.connect()
end_point = '{0}/{1}'.format("https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com", "load2" + "/_all_docs?")
params = {'include_docs': 'true'}
response = client.r_session.get(end_point,params=params)
i=1
table = []
while (i<7):
#make a function that adds all the data for past min
#each db entry is an average of 3 seconds
# there are 60/3 entries in a min
if i==1:
x=1
else:
x= (20*(i-1))
total_power_for_one_min = 0
while (x< 20*i):
total_power_for_one_min += response.json()['rows'][-x]['doc']['Power']
x += 1
#since we are getting the average for 3 seconds we are only getting
#20 seconds worth of total power so multiply by 3 and you get one minute
table.append(total_power_for_one_min * 3)
# table[i] = (response.json
# table.insert(i,response.json()['rows'][i]['doc']['current'])
i = i+1
client.disconnect()
return table
def getDataByMin3():
client = Cloudant("39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix",
"48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff",
url="https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com")
client.connect()
end_point = '{0}/{1}'.format("https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com", "load3" + "/_all_docs?")
params = {'include_docs': 'true'}
response = client.r_session.get(end_point,params=params)
i=1
table = []
while (i<7):
#make a function that adds all the data for past min
#each db entry is an average of 3 seconds
# there are 60/3 entries in a min
if i==1:
x=1
else:
x= (20*(i-1))
total_power_for_one_min = 0
while (x< 20*i):
total_power_for_one_min += response.json()['rows'][-x]['doc']['Power']
x += 1
#since we are getting the average for 3 seconds we are only getting
#20 seconds worth of total power so multiply by 3 and you get one minute
table.append(total_power_for_one_min * 3)
# table[i] = (response.json
# table.insert(i,response.json()['rows'][i]['doc']['current'])
i = i+1
    client.disconnect()
    return table
def getDataByMin4():
client = Cloudant("39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix",
"48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff",
url="https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com")
client.connect()
end_point = '{0}/{1}'.format("https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com", "load4" + "/_all_docs?")
params = {'include_docs': 'true'}
response = client.r_session.get(end_point,params=params)
i=1
table = []
while (i<7):
#make a function that adds all the data for past min
#each db entry is an average of 3 seconds
# there are 60/3 entries in a min
if i==1:
x=1
else:
x= (20*(i-1))
total_power_for_one_min = 0
while (x< 20*i):
total_power_for_one_min += response.json()['rows'][-x]['doc']['Power']
x += 1
#since we are getting the average for 3 seconds we are only getting
#20 seconds worth of total power so multiply by 3 and you get one minute
table.append(total_power_for_one_min * 3)
# table[i] = (response.json
# table.insert(i,response.json()['rows'][i]['doc']['current'])
i = i+1
    client.disconnect()
    return table
# Note: this second definition of getDataByMin2 overrides the one defined earlier in this module.
def getDataByMin2():
client = Cloudant("39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix",
"48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff",
url="https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com")
client.connect()
end_point = '{0}/{1}'.format("https://39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix:48e26645f504209f85b4c44d74a4cb14bc0d059a22b361534b78f406a513f8ff@39a4348e-3ce1-40cd-b016-1f85569d409e-bluemix.cloudant.com", "load2" + "/_all_docs?")
params = {'include_docs': 'true'}
response = client.r_session.get(end_point,params=params)
i=1
table = []
while (i<7):
#make a function that adds all the data for past min
#each db entry is an average of 3 seconds
# there are 60/3 entries in a min
if i==1:
x=1
else:
x= (20*(i-1))
total_power_for_one_min = 0
while (x< 20*i):
total_power_for_one_min += response.json()['rows'][-x]['doc']['Power']
x += 1
#since we are getting the average for 3 seconds we are only getting
#20 seconds worth of total power so multiply by 3 and you get one minute
table.append(total_power_for_one_min * 3)
# table[i] = (response.json
# table.insert(i,response.json()['rows'][i]['doc']['current'])
i = i+1
    client.disconnect()
    return table
def getCurId(response, index=1):
    """Return the '_id' of the index-th document from the end of a Cloudant _all_docs response."""
    curId = response.json()['rows'][-index]['doc']['_id']
    return curId
# def getStatusCircuit (self):
# if(response_status.json()['rows'][-1]['doc']['status'] == 1):
# return "1"
# else:
# return "0"
| 42.404762 | 243 | 0.679843 | 2,025 | 17,810 | 5.89679 | 0.055802 | 0.072356 | 0.096474 | 0.120593 | 0.966251 | 0.964576 | 0.961896 | 0.961896 | 0.959719 | 0.959719 | 0 | 0.252853 | 0.183268 | 17,810 | 419 | 244 | 42.505967 | 0.56806 | 0.166367 | 0 | 0.903846 | 0 | 0.111538 | 0.497091 | 0.102287 | 0.042308 | 0 | 0 | 0 | 0 | 1 | 0.057692 | false | 0 | 0.003846 | 0 | 0.119231 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
4df421355bf6f047a614435763f4f3fa1cc0344a | 12,649 | py | Python | tests/action/test_rename_embedded.py | Mohsen-Khodabakhshi/mongoengine-migrate | 1a7a26a47a474f70743c04700ce2a42f1872f166 | [
"Apache-2.0"
] | 15 | 2020-08-05T22:25:54.000Z | 2022-02-08T20:50:35.000Z | tests/action/test_rename_embedded.py | Mohsen-Khodabakhshi/mongoengine-migrate | 1a7a26a47a474f70743c04700ce2a42f1872f166 | [
"Apache-2.0"
] | 36 | 2020-10-22T09:05:01.000Z | 2022-02-21T14:50:17.000Z | tests/action/test_rename_embedded.py | Mohsen-Khodabakhshi/mongoengine-migrate | 1a7a26a47a474f70743c04700ce2a42f1872f166 | [
"Apache-2.0"
] | 5 | 2020-10-23T04:06:32.000Z | 2022-02-21T14:35:33.000Z | import pytest
from mongoengine_migrate.actions import RenameEmbedded
from mongoengine_migrate.graph import MigrationPolicy
from mongoengine_migrate.schema import Schema
class TestRenameEmbedded:
def test_build_object__on_usual_document_type__should_return_none(self):
left_schema = Schema({
'Document1': Schema.Document({
'field1': {'param1': 'schemavalue1', 'param2': 'schemavalue2'},
}, parameters={'collection': 'document1'}),
'~EmbeddedDocument1': Schema.Document({
'field21': {'param21': 'schemavalue21', 'param22': 'schemavalue22'},
'field22': {'param21': 'schemavalue21', 'param22': 'schemavalue22'},
}, parameters={}),
})
right_schema = Schema({
'Document1': Schema.Document({
'field1': {'param1': 'schemavalue1', 'param2': 'schemavalue2'},
}, parameters={'collection': 'document1'}),
'~EmbeddedDocument_new': Schema.Document({
'field21': {'param21': 'schemavalue21', 'param22': 'schemavalue22'},
'field22': {'param21': 'schemavalue21', 'param22': 'schemavalue22'},
}, parameters={}),
})
res = RenameEmbedded.build_object('Document1', left_schema, right_schema)
assert res is None
def test_build_object__if_document_is_similar_with_other_document__should_return_none(self):
left_schema = Schema({
'~EmbeddedDocument1': Schema.Document({
'field1': {'param1': 'schemavalue1', 'param2': 'schemavalue2'},
}, parameters={}),
'~EmbeddedDocument2': Schema.Document({
'field21': {'param21': 'schemavalue21', 'param22': 'schemavalue22'},
'field22': {'param21': 'schemavalue21', 'param22': 'schemavalue22'},
}, parameters={'collection': 'document21'}),
})
right_schema = Schema({
'~EmbeddedDocument1': Schema.Document({
'field1': {'param1': 'schemavalue1', 'param2': 'schemavalue2'},
}, parameters={}),
'Document2': Schema.Document({
'field21': {'param21': 'schemavalue21', 'param22': 'schemavalue22'},
'field22': {'param21': 'schemavalue21', 'param22': 'schemavalue22'},
}, parameters={'collection': 'document21'}),
})
res = RenameEmbedded.build_object('~EmbeddedDocument2', left_schema, right_schema)
assert res is None
@pytest.mark.parametrize('new_schema', (
Schema.Document({
'field11': {'param11': 'schemavalue11', 'param12': 'schemavalue21'},
'field12': {'param21': 'schemavalue21', 'param22': 'schemavalue22'},
'field13': {'param31': 'schemavalue31', 'param32': 'schemavalue32'},
'field14': {'param41': 'schemavalue41', 'param42': 'schemavalue42'},
'field15': {'param51': 'schemavalue51', 'param52': 'schemavalue52'},
}, parameters={'collection': 'document1'}),
Schema.Document({
'field_changed': {'param11': 'schemavalue11', 'param12': 'schemavalue21'},
'field12': {'param21': 'schemavalue21', 'param22': 'schemavalue22'},
'field13': {'param31': 'schemavalue31', 'param32': 'schemavalue32'},
'field14': {'param41': 'schemavalue41', 'param42': 'schemavalue42'},
'field15': {'param51': 'schemavalue51', 'param52': 'schemavalue52'},
}, parameters={'collection': 'document1'}),
Schema.Document({
'field11': {'param11': 'schemavalue11', 'param12': 'schemavalue21'},
'field12': {'param_changed': 'schemavalue21', 'param22': 'schemavalue22'},
'field13': {'param31': 'schemavalue31', 'param32': 'schemavalue32'},
'field14': {'param41': 'schemavalue41', 'param42': 'schemavalue42'},
'field15': {'param51': 'schemavalue51', 'param52': 'schemavalue52'},
}, parameters={'collection': 'document1'}),
Schema.Document({
'field11': {'param11': 'schemavalue11', 'param12': 'schemavalue21'},
'field12': {'param21': 'schemavalue21', 'param22': 'schemavalue22'},
'field13': {'param31': 'schemavalue_changed', 'param32': 'schemavalue32'},
'field14': {'param41': 'schemavalue41', 'param42': 'schemavalue42'},
'field15': {'param51': 'schemavalue51', 'param52': 'schemavalue52'},
}, parameters={'collection': 'document1'}),
Schema.Document({
'field11': {'param11': 'schemavalue11', 'param12': 'schemavalue21'},
'field12': {'param21': 'schemavalue21', 'param22': 'schemavalue22'},
'field13': {'param31': 'schemavalue31', 'param32': 'schemavalue32'},
'field14': {'param41': 'schemavalue41', 'param42': 'schemavalue42'},
'field15': {'param51': 'schemavalue51', 'param52': 'schemavalue52'},
}, parameters={'collection': 'document_changed'}),
))
def test_build_object__if_changes_similarity_more_than_threshold__should_return_object(
self, new_schema
):
left_schema = Schema({
'~EmbeddedDocument1': new_schema,
'Document2': Schema.Document({
'field1': {'param123': 'schemavalue123'},
}, parameters={'collection': 'document123', 'test_parameter': 'test_value'}),
'~EmbeddedDocument3': Schema.Document({
'field11': {'param11': 'schemavalue11', 'param12': 'schemavalue21'},
'field14': {'param41': 'schemavalue41', 'param42': 'schemavalue42'},
})
})
right_schema = Schema({
'~EmbeddedDocument11': Schema.Document({
'field11': {'param11': 'schemavalue11', 'param12': 'schemavalue21'},
'field12': {'param21': 'schemavalue21', 'param22': 'schemavalue22'},
'field13': {'param31': 'schemavalue31', 'param32': 'schemavalue32'},
'field14': {'param41': 'schemavalue41', 'param42': 'schemavalue42'},
'field15': {'param51': 'schemavalue51', 'param52': 'schemavalue52'},
}, parameters={'collection': 'document1'}),
'Document2': Schema.Document({
'field1': {'param123': 'schemavalue123'},
}, parameters={'collection': 'document123', 'test_parameter': 'test_value'}),
'~EmbeddedDocument31': Schema.Document({
'field11': {'param11': 'schemavalue11', 'param12': 'schemavalue21'},
'field14': {'param41': 'schemavalue41', 'param42': 'schemavalue42'},
})
})
res = RenameEmbedded.build_object('~EmbeddedDocument1', left_schema, right_schema)
assert isinstance(res, RenameEmbedded)
assert res.document_type == '~EmbeddedDocument1'
assert res.new_name == '~EmbeddedDocument11'
assert res.parameters == {}
def test_build_object__if_there_are_several_rename_candidates__should_return_none(self):
left_schema = Schema({
'~EmbeddedDocument1': Schema.Document({
'field11': {'param11': 'schemavalue11', 'param12': 'schemavalue21'},
'field12': {'param21': 'schemavalue21', 'param22': 'schemavalue22'},
'field13': {'param31': 'schemavalue31', 'param32': 'schemavalue32'},
'field14': {'param41': 'schemavalue41', 'param42': 'schemavalue42'},
'field15': {'param51': 'schemavalue51', 'param52': 'schemavalue52'},
}, parameters={'collection': 'document1'}),
'Document2': Schema.Document({
'field1': {'param123': 'schemavalue123'},
}, parameters={'collection': 'document123', 'test_parameter': 'test_value'}),
'~EmbeddedDocument3': Schema.Document({
'field11': {'param11': 'schemavalue11', 'param12': 'schemavalue21'},
'field14': {'param41': 'schemavalue41', 'param42': 'schemavalue42'},
})
})
right_schema = Schema({
'~EmbeddedDocument11': Schema.Document({
'field_changed': {'param11': 'schemavalue11', 'param12': 'schemavalue21'},
'field12': {'param21': 'schemavalue21', 'param22': 'schemavalue22'},
'field13': {'param31': 'schemavalue31', 'param32': 'schemavalue32'},
'field14': {'param41': 'schemavalue41', 'param42': 'schemavalue42'},
'field15': {'param51': 'schemavalue51', 'param52': 'schemavalue52'},
}, parameters={'collection': 'document1'}),
'~EmbeddedDocument111': Schema.Document({
'field11': {'param11': 'schemavalue11', 'param12': 'schemavalue21'},
'field12': {'param21': 'schemavalue_changed', 'param22': 'schemavalue22'},
'field13': {'param31': 'schemavalue31', 'param32': 'schemavalue32'},
'field14': {'param41': 'schemavalue41', 'param42': 'schemavalue42'},
'field15': {'param51': 'schemavalue51', 'param52': 'schemavalue52'},
}, parameters={'collection': 'document1'}),
'Document2': Schema.Document({
'field1': {'param123': 'schemavalue123'},
}, parameters={'collection': 'document123', 'test_parameter': 'test_value'}),
'~EmbeddedDocument31': Schema.Document({
'field11': {'param11': 'schemavalue11', 'param12': 'schemavalue21'},
'field14': {'param41': 'schemavalue41', 'param42': 'schemavalue42'},
}),
})
res = RenameEmbedded.build_object('~EmbeddedDocument1', left_schema, right_schema)
assert res is None
def test_build_object__if_changes_similarity_less_than_threshold__should_return_object(self):
left_schema = Schema({
'~EmbeddedDocument1': Schema.Document({
'field1': {'param1': 'schemavalue1', 'param2': 'schemavalue2'},
}, parameters={'param1': 'value1'}),
'~EmbeddedDocument2': Schema.Document({
'field21': {'param21': 'schemavalue21', 'param22': 'schemavalue22'},
}, parameters={'param2': 'value2'}),
})
right_schema = Schema({
'~EmbeddedDocument1': Schema.Document({
'field1': {'param1': 'schemavalue1', 'param2': 'schemavalue2'},
}, parameters={'param1': 'value1'}),
'~EmbeddedDocument_new': Schema.Document({
'field1': {'param1': 'schemavalue1', 'param2': 'schemavalue2'},
}, parameters={'param': 'value'}),
})
res = RenameEmbedded.build_object('Document2', left_schema, right_schema)
assert res is None
@pytest.mark.parametrize('document_type', ('Document1', 'Document_unknown'))
def test_build_object__if_document_is_not_disappears_in_right_schema__should_return_none(
self, document_type
):
left_schema = Schema({
'~EmbeddedDocument1': Schema.Document({
'field1': {'param1': 'schemavalue1', 'param2': 'schemavalue2'},
}, parameters={'param1': 'value1'}),
'~EmbeddedDocument2': Schema.Document({
'field21': {'param21': 'schemavalue21', 'param22': 'schemavalue22'},
}, parameters={'param2': 'value2'}),
})
right_schema = Schema({
'~EmbeddedDocument1': Schema.Document({
'field1': {'param1': 'schemavalue1', 'param2': 'schemavalue2'},
}, parameters={'param1': 'value1'}),
'~EmbeddedDocument2': Schema.Document({
'field21': {'param21': 'schemavalue21', 'param22': 'schemavalue22'},
}, parameters={'param2': 'value2'}),
})
res = RenameEmbedded.build_object(document_type, left_schema, right_schema)
assert res is None
def test_forward__should_do_nothing(self, load_fixture, test_db, dump_db):
schema = load_fixture('schema1').get_schema()
dump = dump_db()
action = RenameEmbedded('~Schema1EmbDoc1', new_name='~Schema1Doc')
action.prepare(test_db, schema, MigrationPolicy.strict)
action.run_forward()
assert dump == dump_db()
def test_backward__should_do_nothing(self, load_fixture, test_db, dump_db):
schema = load_fixture('schema1').get_schema()
dump = dump_db()
action = RenameEmbedded('~Schema1EmbDoc1', new_name='~Schema1Doc')
action.prepare(test_db, schema, MigrationPolicy.strict)
action.run_backward()
assert dump == dump_db()
| 51.628571 | 97 | 0.581785 | 913 | 12,649 | 7.877327 | 0.134721 | 0.064238 | 0.08718 | 0.100111 | 0.870273 | 0.867492 | 0.858315 | 0.841908 | 0.830367 | 0.819939 | 0 | 0.083703 | 0.251008 | 12,649 | 244 | 98 | 51.840164 | 0.675427 | 0 | 0 | 0.832558 | 0 | 0 | 0.375682 | 0.00332 | 0 | 0 | 0 | 0 | 0.051163 | 1 | 0.037209 | false | 0 | 0.018605 | 0 | 0.060465 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1508db88e2432d6ce0e8aefcc720dc7b661b2883 | 2,624 | py | Python | tests/test_year_2010.py | l0pht511/jpholiday | 083145737b61fad3420c066968c4329d17dc3baf | [
"MIT"
] | 179 | 2017-10-05T12:41:10.000Z | 2022-03-24T22:18:25.000Z | tests/test_year_2010.py | l0pht511/jpholiday | 083145737b61fad3420c066968c4329d17dc3baf | [
"MIT"
] | 17 | 2018-10-23T00:51:13.000Z | 2021-11-22T11:40:06.000Z | tests/test_year_2010.py | l0pht511/jpholiday | 083145737b61fad3420c066968c4329d17dc3baf | [
"MIT"
] | 17 | 2018-10-19T11:13:07.000Z | 2022-01-29T08:05:56.000Z | # coding: utf-8
import datetime
import unittest
import jpholiday
class TestYear2010(unittest.TestCase):
def test_holiday(self):
"""
        2010 holidays
"""
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2010, 1, 1)), '元日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2010, 1, 11)), '成人の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2010, 2, 11)), '建国記念の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2010, 3, 21)), '春分の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2010, 3, 22)), '春分の日 振替休日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2010, 4, 29)), '昭和の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2010, 5, 3)), '憲法記念日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2010, 5, 4)), 'みどりの日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2010, 5, 5)), 'こどもの日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2010, 7, 19)), '海の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2010, 9, 20)), '敬老の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2010, 9, 23)), '秋分の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2010, 10, 11)), '体育の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2010, 11, 3)), '文化の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2010, 11, 23)), '勤労感謝の日')
self.assertEqual(jpholiday.is_holiday_name(datetime.date(2010, 12, 23)), '天皇誕生日')
def test_count_month(self):
"""
        Number of holidays in each month of 2010
"""
self.assertEqual(len(jpholiday.month_holidays(2010, 1)), 2)
self.assertEqual(len(jpholiday.month_holidays(2010, 2)), 1)
self.assertEqual(len(jpholiday.month_holidays(2010, 3)), 2)
self.assertEqual(len(jpholiday.month_holidays(2010, 4)), 1)
self.assertEqual(len(jpholiday.month_holidays(2010, 5)), 3)
self.assertEqual(len(jpholiday.month_holidays(2010, 6)), 0)
self.assertEqual(len(jpholiday.month_holidays(2010, 7)), 1)
self.assertEqual(len(jpholiday.month_holidays(2010, 8)), 0)
self.assertEqual(len(jpholiday.month_holidays(2010, 9)), 2)
self.assertEqual(len(jpholiday.month_holidays(2010, 10)), 1)
self.assertEqual(len(jpholiday.month_holidays(2010, 11)), 2)
self.assertEqual(len(jpholiday.month_holidays(2010, 12)), 1)
def test_count_year(self):
"""
        Number of holidays in 2010
"""
self.assertEqual(len(jpholiday.year_holidays(2010)), 16)
| 50.461538 | 92 | 0.681021 | 339 | 2,624 | 5.123894 | 0.182891 | 0.250432 | 0.221071 | 0.239493 | 0.805412 | 0.805412 | 0.805412 | 0.75475 | 0.495682 | 0.34312 | 0 | 0.096136 | 0.171494 | 2,624 | 51 | 93 | 51.45098 | 0.702852 | 0.015625 | 0 | 0 | 0 | 0 | 0.029447 | 0 | 0 | 0 | 0 | 0 | 0.805556 | 1 | 0.083333 | false | 0 | 0.083333 | 0 | 0.194444 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
42171368bc85ed217098b3093f85bd7d3f4fc7eb | 9,567 | py | Python | resrc/list/forms.py | ignatandrei/resrc | 5b88e3cfbc638e5f98cf7bfe6f4a5757840a2565 | [
"MIT"
] | null | null | null | resrc/list/forms.py | ignatandrei/resrc | 5b88e3cfbc638e5f98cf7bfe6f4a5757840a2565 | [
"MIT"
] | 2 | 2020-08-04T18:08:04.000Z | 2021-02-02T22:57:59.000Z | resrc/list/forms.py | ignatandrei/resrc | 5b88e3cfbc638e5f98cf7bfe6f4a5757840a2565 | [
"MIT"
] | null | null | null | # coding: utf-8
from django import forms
from django.core.urlresolvers import reverse
from crispy_forms.helper import FormHelper
from crispy_forms_foundation.layout import Layout, Row, Column, Fieldset, Field, HTML, Submit
from django.conf import settings
class NewListAjaxForm(forms.Form):
title = forms.CharField(label='Title', max_length=80)
description = forms.CharField(
label='Description', required=False, widget=forms.Textarea())
private = forms.BooleanField(label='private', required=False)
# display a select with languages ordered by most used first
from resrc.language.models import Language
from django.db.models import Count
used_langs = Language.objects.all().annotate(
c=Count('link')).order_by('-c').values_list()
used_langs = [x[1] for x in used_langs]
lang_choices = []
for lang in used_langs:
lang_choices += [x for x in settings.LANGUAGES if x[0] == lang]
lang_choices += [x for x in settings.LANGUAGES if x not in lang_choices]
language = forms.ChoiceField(label='Language', choices=lang_choices)
def __init__(self, link_pk, *args, **kwargs):
self.helper = FormHelper()
self.helper.form_method = 'post'
self.helper.form_id = 'createlistform'
self.helper.form_action = reverse('ajax-create-list', args=(link_pk,))
self.helper.layout = Layout(
Fieldset(
u'Create a list',
Row(
Column(
Field('title'), css_class='large-12'
),
),
Row(
Column(
Field('description'), css_class='large-12'
),
),
Row(
Column(
Field('private'), css_class='large-4'
),
Column(
Field('language'), css_class='large-4'
),
),
),
Row(
Column(
HTML('<a id="createlist" class="small button">Create</a><a id="createclose" class="small secondary button" style="display:none">Close</a>'), css_class='large-12'
),
),
)
super(NewListAjaxForm, self).__init__(*args, **kwargs)
class NewListForm(forms.Form):
title = forms.CharField(label='Title', max_length=80)
description = forms.CharField(
label='Description', required=False, widget=forms.Textarea()
)
url = forms.URLField(
label='URL', required=False
)
private = forms.BooleanField(label='private', required=False)
mdcontent = forms.CharField(
label='content', required=False, widget=forms.Textarea()
)
# display a select with languages ordered by most used first
from resrc.language.models import Language
from django.db.models import Count
used_langs = Language.objects.all().annotate(
c=Count('link')).order_by('-c').values_list()
used_langs = [x[1] for x in used_langs]
lang_choices = []
for lang in used_langs:
lang_choices += [x for x in settings.LANGUAGES if x[0] == lang]
lang_choices += [x for x in settings.LANGUAGES if x not in lang_choices]
language = forms.ChoiceField(label='Language', choices=lang_choices)
def __init__(self, *args, **kwargs):
self.helper = FormHelper()
self.helper.form_method = 'post'
self.helper.form_id = 'createlistform'
self.helper.layout = Layout(
Fieldset(
u'Create a list',
Row(
Column(
Field('title'), css_class='large-12'
),
),
Row(
Column(
Field('description'), css_class='large-12'
),
),
Row(
Column(
Field('url'), css_class='large-12'
),
),
Row(
Column(
HTML('<label for="id_private"><input class="checkboxinput" id="id_private" name="private" type="checkbox"> private</label>'),
css_class='large-6'
),
Column(
Field('language'),
css_class='large-6'
),
),
Row(
Column(
Field('mdcontent'),
css_class='large-12'
),
css_class='markdownform'
),
),
Row(
Column(
Submit('submit', 'Save', css_class='small button'),
css_class='large-12',
),
)
)
super(NewListForm, self).__init__(*args, **kwargs)
class EditListForm(forms.Form):
title = forms.CharField(label='Title', max_length=80)
description = forms.CharField(
label='Description', required=False, widget=forms.Textarea()
)
url = forms.URLField(
label='URL', required=False
)
private = forms.BooleanField(label='private', required=False)
mdcontent = forms.CharField(
label='list source', required=False, widget=forms.Textarea()
)
# display a select with languages ordered by most used first
from resrc.language.models import Language
from django.db.models import Count
used_langs = Language.objects.all().annotate(
c=Count('link')).order_by('-c').values_list()
used_langs = [x[1] for x in used_langs]
lang_choices = []
for lang in used_langs:
lang_choices += [x for x in settings.LANGUAGES if x[0] == lang]
lang_choices += [x for x in settings.LANGUAGES if x not in lang_choices]
language = forms.ChoiceField(label='Language', choices=lang_choices)
def __init__(self, private_checkbox, alist, from_url, *args, **kwargs):
self.helper = FormHelper()
self.helper.form_method = 'post'
self.helper.form_id = 'createlistform'
delete_url = reverse('list-delete', args=(alist.pk,))
if not from_url:
self.helper.layout = Layout(
Fieldset(
u'Edit a list',
Row(
Column(
Field('title'), css_class='large-12'
),
),
Row(
Column(
Field('description'), css_class='large-12'
),
),
Row(
Column(
Field('url'), css_class='large-12'
),
),
Row(
Column(
HTML('<label for="id_private"><input class="checkboxinput" id="id_private" name="private" type="checkbox" %s> private</label>' % private_checkbox),
css_class='large-6'
),
Column(
Field('language'),
css_class='large-6'
),
),
Row(
Column(
Field('mdcontent'),
css_class='large-12'
),
css_class='markdownform'
),
),
Row(
Column(
Submit('submit', 'Save', css_class='small button'),
css_class='large-6',
),
Column(
HTML('<a href="%s" class="small button alert right">Delete list</a>' % delete_url),
css_class='large-6'
),
)
)
else:
self.helper.layout = Layout(
Fieldset(
u'Edit a list',
Row(
Column(
Field('title'), css_class='large-12'
),
),
Row(
Column(
Field('description'), css_class='large-12'
),
),
Row(
Column(
Field('url'), css_class='large-12'
),
),
Row(
Column(
HTML('<label for="id_private"><input class="checkboxinput" id="id_private" name="private" type="checkbox" %s> private</label>' % private_checkbox),
css_class='large-6'
),
Column(
Field('language'),
css_class='large-6'
),
),
),
Row(
Column(
Submit('submit', 'Fetch and save', css_class='small button'),
css_class='large-12',
),
)
)
super(EditListForm, self).__init__(*args, **kwargs)
| 36.515267 | 181 | 0.458033 | 864 | 9,567 | 4.9375 | 0.144676 | 0.058134 | 0.079231 | 0.056259 | 0.838959 | 0.821847 | 0.814346 | 0.80286 | 0.80286 | 0.80286 | 0 | 0.010166 | 0.434514 | 9,567 | 261 | 182 | 36.655172 | 0.778373 | 0.01986 | 0 | 0.827004 | 0 | 0.016878 | 0.132096 | 0.017286 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012658 | false | 0 | 0.046414 | 0 | 0.177215 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
424958bd9b4094a831b999300ecea2d005a14dd1 | 6,952 | py | Python | bot.py | 077so/twiiterbot | f4730e4e051115c7b9db35e1ac1ad0f46cc907bc | [
"Apache-2.0"
] | 2 | 2021-01-19T09:58:47.000Z | 2021-01-25T11:42:02.000Z | bot.py | 077so/twiiterbot | f4730e4e051115c7b9db35e1ac1ad0f46cc907bc | [
"Apache-2.0"
] | 1 | 2021-01-25T11:46:03.000Z | 2021-01-25T11:46:03.000Z | bot.py | 077so/twiiterbot | f4730e4e051115c7b9db35e1ac1ad0f46cc907bc | [
"Apache-2.0"
] | 2 | 2021-05-22T11:17:04.000Z | 2021-06-20T13:50:34.000Z | # this code created by yahye and othr random developer i dont remeber his Name
# tweet thanks in @mr__yahye it double __ remember
# yeah its my oficail tweeter acount
import tweepy
import time
# Put your API keys here
consumer_key = 'hU4anbsA7eNHHgkhKqIH3uApk'
consumer_secret = 'wH0Y1lcjieVHvp9ZWdc02peZXPoqd4y6t6cRo3rVJm9u6ypSSK'
key = '1279649205754171392-a86nTVpdAOxiW5yIW6aEkgkrqi0D4Z'
secret = 'CYHS7FEEwPLDCYZOctqyynMPwLxONbWdK396n0ljXQtxS'
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(key, secret)
api = tweepy.API(auth)
# Put your own search keywords here
search = 'somali'
search2 = 'hacking'
search3 = 'viral'
search4 = 'somaliland'
search5 = 'love'
search6 = 'java'
search7 = 'code'
search8 = 'usa'
search9 = 'canada'
search10 = 'kalilinux'
search11 = 'cybersecurity'
search12 = 'python'
search13 = 'brutalforce'
search14 = 'github'
search15 = 'linux'
search16 = 'cisco'
nrTweets = 50
# Tweak the comments and status text as you like; pro tip: use PyCharm to make the changes in one click.
for tweet in tweepy.Cursor(api.search, search).items(nrTweets):
try:
tweet.favorite()
tweet.retweet()
api.update_status(status='i always tweet a real inspiring and moltivation tweet make sure you flow i love you')
time.sleep(84000)
except tweepy.TweepError as e:
print(e.reason)
except StopIteration:
break
for tweet in tweepy.Cursor(api.search, search4).items(nrTweets):
try:
tweet.favorite()
tweet.retweet()
api.update_status(status='i always tweet a real inspiring and moltivation tweet make sure you flow i love you')
time.sleep(84000)
except tweepy.TweepError as e:
print(e.reason)
except StopIteration:
break
for tweet in tweepy.Cursor(api.search, search2).items(nrTweets):
try:
tweet.favorite()
tweet.retweet()
api.update_status(status='i always tweet a real inspiring and moltivation tweet make sure you flow i love you')
time.sleep(84000)
except tweepy.TweepError as e:
print(e.reason)
except StopIteration:
break
for tweet in tweepy.Cursor(api.search, search3).items(nrTweets):
try:
tweet.favorite()
tweet.retweet()
api.update_status(status='i always tweet a real inspiring and moltivation tweet make sure you flow i love you')
time.sleep(84000)
except tweepy.TweepError as e:
print(e.reason)
except StopIteration:
break
for tweet in tweepy.Cursor(api.search, search5).items(nrTweets):
try:
tweet.favorite()
tweet.retweet()
api.update_status(status='i always tweet a real inspiring and moltivation tweet make sure you flow i love you')
time.sleep(84000)
except tweepy.TweepError as e:
print(e.reason)
except StopIteration:
break
for tweet in tweepy.Cursor(api.search, search6).items(nrTweets):
try:
tweet.favorite()
tweet.retweet()
api.update_status(status='i always tweet a real inspiring and moltivation tweet make sure you flow i love you')
time.sleep(84000)
except tweepy.TweepError as e:
print(e.reason)
except StopIteration:
break
for tweet in tweepy.Cursor(api.search, search7).items(nrTweets):
try:
tweet.favorite()
tweet.retweet()
api.update_status(status='i always tweet a real inspiring and moltivation tweet make sure you flow i love you')
time.sleep(84000)
except tweepy.TweepError as e:
print(e.reason)
except StopIteration:
break
for tweet in tweepy.Cursor(api.search, search8).items(nrTweets):
try:
tweet.favorite()
tweet.retweet()
api.update_status(status='i always tweet a real inspiring and moltivation tweet make sure you flow i love you')
time.sleep(84000)
except tweepy.TweepError as e:
print(e.reason)
except StopIteration:
break
for tweet in tweepy.Cursor(api.search, search9).items(nrTweets):
try:
tweet.favorite()
tweet.retweet()
api.update_status(status='i always tweet a real inspiring and moltivation tweet make sure you flow i love you')
time.sleep(84000)
except tweepy.TweepError as e:
print(e.reason)
except StopIteration:
break
for tweet in tweepy.Cursor(api.search, search10).items(nrTweets):
try:
tweet.favorite()
tweet.retweet()
api.update_status(status='i always tweet a real inspiring and moltivation tweet make sure you flow i love you')
time.sleep(84000)
except tweepy.TweepError as e:
print(e.reason)
except StopIteration:
break
for tweet in tweepy.Cursor(api.search, search11).items(nrTweets):
try:
tweet.favorite()
tweet.retweet()
api.update_status(status='i always tweet a real inspiring and moltivation tweet make sure you flow i love you')
time.sleep(84000)
except tweepy.TweepError as e:
print(e.reason)
except StopIteration:
break
for tweet in tweepy.Cursor(api.search, search12).items(nrTweets):
try:
tweet.favorite()
tweet.retweet()
api.update_status(status='i always tweet a real inspiring and moltivation tweet make sure you flow i love you')
time.sleep(84000)
except tweepy.TweepError as e:
print(e.reason)
except StopIteration:
break
for tweet in tweepy.Cursor(api.search, search13).items(nrTweets):
try:
tweet.favorite()
tweet.retweet()
api.update_status(status='i always tweet a real inspiring and moltivation tweet make sure you flow i love you')
time.sleep(84000)
except tweepy.TweepError as e:
print(e.reason)
except StopIteration:
break
for tweet in tweepy.Cursor(api.search, search14).items(nrTweets):
try:
tweet.favorite()
tweet.retweet()
api.update_status(status='i always tweet a real inspiring and moltivation tweet make sure you flow i love you')
time.sleep(84000)
except tweepy.TweepError as e:
print(e.reason)
except StopIteration:
break
for tweet in tweepy.Cursor(api.search, search15).items(nrTweets):
try:
tweet.favorite()
tweet.retweet()
api.update_status(status='i always tweet a real inspiring and moltivation tweet make sure you flow i love you')
time.sleep(84000)
except tweepy.TweepError as e:
print(e.reason)
except StopIteration:
break
for tweet in tweepy.Cursor(api.search, search16).items(nrTweets):
try:
tweet.favorite()
tweet.retweet()
api.update_status(status='i always tweet a real inspiring and moltivation tweet make sure you flow i love you')
time.sleep(84000)
except tweepy.TweepError as e:
print(e.reason)
except StopIteration:
break
#### End of the code. Bye.
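# Editor's note: the sixteen loops above are identical except for the search
# term. A minimal consolidation sketch follows; it reuses the api, tweepy, time
# and nrTweets names defined earlier in this script and is never called, so the
# script's behaviour is unchanged.
def engage_with_searches(terms, per_term=nrTweets):
    """Favorite, retweet and post the status update for each search term."""
    for term in terms:
        for tweet in tweepy.Cursor(api.search, term).items(per_term):
            try:
                tweet.favorite()
                tweet.retweet()
                api.update_status(status='i always tweet a real inspiring and moltivation tweet make sure you flow i love you')
                time.sleep(84000)
            except tweepy.TweepError as e:
                print(e.reason)
            except StopIteration:
                break
# Example (not executed here):
#   engage_with_searches([search, search2, search3, search4])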
| 31.6 | 119 | 0.677215 | 908 | 6,952 | 5.156388 | 0.137665 | 0.027339 | 0.034173 | 0.054677 | 0.806493 | 0.806493 | 0.806493 | 0.799872 | 0.799872 | 0.799872 | 0 | 0.032136 | 0.239068 | 6,952 | 219 | 120 | 31.744292 | 0.85293 | 0.043585 | 0 | 0.774194 | 0 | 0 | 0.241374 | 0.025614 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.010753 | 0 | 0.010753 | 0.086022 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
42843866389f67bd6076f305a20e92a6f72b11f1 | 12,230 | py | Python | nautobot_golden_config/filters.py | nniehoff/nautobot-plugin-golden-config | bbc73bc0bf76a2c97193f3cb683ed5a078a1abc1 | [
"Apache-2.0"
] | null | null | null | nautobot_golden_config/filters.py | nniehoff/nautobot-plugin-golden-config | bbc73bc0bf76a2c97193f3cb683ed5a078a1abc1 | [
"Apache-2.0"
] | null | null | null | nautobot_golden_config/filters.py | nniehoff/nautobot-plugin-golden-config | bbc73bc0bf76a2c97193f3cb683ed5a078a1abc1 | [
"Apache-2.0"
] | null | null | null | """Filter for Device Configuration Backup."""
import django_filters
from django.db.models import Q
from nautobot.dcim.models import Device, Platform, Region, Site, DeviceRole, DeviceType, Manufacturer, RackGroup, Rack
from nautobot.extras.models import Status
from nautobot.extras.filters import CreatedUpdatedFilterSet, StatusFilter, CustomFieldModelFilterSet
from nautobot.tenancy.models import Tenant, TenantGroup
from nautobot.utilities.filters import TreeNodeMultipleChoiceFilter
from nautobot_golden_config import models
class GoldenConfigFilter(CreatedUpdatedFilterSet):
"""Filter capabilities for GoldenConfig instances."""
q = django_filters.CharFilter(
method="search",
label="Search",
)
tenant_group_id = TreeNodeMultipleChoiceFilter(
queryset=TenantGroup.objects.all(),
field_name="tenant__group",
lookup_expr="in",
label="Tenant Group (ID)",
)
tenant_group = TreeNodeMultipleChoiceFilter(
queryset=TenantGroup.objects.all(),
field_name="tenant__group",
to_field_name="slug",
lookup_expr="in",
label="Tenant Group (slug)",
)
tenant_id = django_filters.ModelMultipleChoiceFilter(
queryset=Tenant.objects.all(),
field_name="tenant_id",
label="Tenant (ID)",
)
tenant = django_filters.ModelMultipleChoiceFilter(
queryset=Tenant.objects.all(),
field_name="tenant__slug",
to_field_name="slug",
label="Tenant (slug)",
)
region_id = TreeNodeMultipleChoiceFilter(
queryset=Region.objects.all(),
field_name="site__region",
lookup_expr="in",
label="Region (ID)",
)
region = TreeNodeMultipleChoiceFilter(
queryset=Region.objects.all(),
field_name="site__region",
lookup_expr="in",
to_field_name="slug",
label="Region (slug)",
)
site_id = django_filters.ModelMultipleChoiceFilter(
queryset=Site.objects.all(),
label="Site (ID)",
)
site = django_filters.ModelMultipleChoiceFilter(
field_name="site__slug",
queryset=Site.objects.all(),
to_field_name="slug",
label="Site name (slug)",
)
rack_group_id = TreeNodeMultipleChoiceFilter(
queryset=RackGroup.objects.all(),
field_name="rack__group",
lookup_expr="in",
label="Rack group (ID)",
)
rack_id = django_filters.ModelMultipleChoiceFilter(
field_name="rack",
queryset=Rack.objects.all(),
label="Rack (ID)",
)
role_id = django_filters.ModelMultipleChoiceFilter(
field_name="device_role_id",
queryset=DeviceRole.objects.all(),
label="Role (ID)",
)
role = django_filters.ModelMultipleChoiceFilter(
field_name="device_role__slug",
queryset=DeviceRole.objects.all(),
to_field_name="slug",
label="Role (slug)",
)
manufacturer_id = django_filters.ModelMultipleChoiceFilter(
field_name="device_type__manufacturer",
queryset=Manufacturer.objects.all(),
label="Manufacturer (ID)",
)
manufacturer = django_filters.ModelMultipleChoiceFilter(
field_name="device_type__manufacturer__slug",
queryset=Manufacturer.objects.all(),
to_field_name="slug",
label="Manufacturer (slug)",
)
platform_id = django_filters.ModelMultipleChoiceFilter(
queryset=Platform.objects.all(),
label="Platform (ID)",
)
platform = django_filters.ModelMultipleChoiceFilter(
field_name="platform__slug",
queryset=Platform.objects.all(),
to_field_name="slug",
label="Platform (slug)",
)
device_status_id = StatusFilter(
field_name="status",
queryset=Status.objects.all(),
label="Device Status",
)
device_type_id = django_filters.ModelMultipleChoiceFilter(
field_name="device_type_id",
queryset=DeviceType.objects.all(),
label="Device type (ID)",
)
device_id = django_filters.ModelMultipleChoiceFilter(
field_name="device",
queryset=Device.objects.all(),
label="Device (ID)",
)
device = django_filters.ModelMultipleChoiceFilter(
field_name="name",
queryset=Device.objects.all(),
label="Device Name",
)
def search(self, queryset, name, value): # pylint: disable=unused-argument,no-self-use
"""Perform the filtered search."""
if not value.strip():
return queryset
# We chose to search only on the device name; more fields could be included.
qs_filter = Q(name__icontains=value)
return queryset.filter(qs_filter)
class Meta:
"""Meta class attributes for GoldenConfig."""
model = Device
distinct = True
fields = [
"q",
"tenant_group_id",
"tenant_group",
"tenant_id",
"tenant",
"region_id",
"region",
"site_id",
"site",
"rack_group_id",
"rack_id",
"role_id",
"role",
"manufacturer_id",
"manufacturer",
"platform_id",
"platform",
"device_status_id",
"device_type_id",
"device_id",
"device",
]
class ConfigComplianceFilter(CreatedUpdatedFilterSet):
"""Filter capabilities for ConfigCompliance instances."""
q = django_filters.CharFilter(
method="search",
label="Search",
)
tenant_group_id = TreeNodeMultipleChoiceFilter(
queryset=TenantGroup.objects.all(),
field_name="device__tenant__group",
lookup_expr="in",
label="Tenant Group (ID)",
)
tenant_group = TreeNodeMultipleChoiceFilter(
queryset=TenantGroup.objects.all(),
field_name="device__tenant__group",
to_field_name="slug",
lookup_expr="in",
label="Tenant Group (slug)",
)
tenant_id = django_filters.ModelMultipleChoiceFilter(
queryset=Tenant.objects.all(),
field_name="device__tenant_id",
label="Tenant (ID)",
)
tenant = django_filters.ModelMultipleChoiceFilter(
queryset=Tenant.objects.all(),
field_name="device__tenant__slug",
to_field_name="slug",
label="Tenant (slug)",
)
region_id = TreeNodeMultipleChoiceFilter(
queryset=Region.objects.all(),
field_name="device__site__region",
lookup_expr="in",
label="Region (ID)",
)
region = TreeNodeMultipleChoiceFilter(
queryset=Region.objects.all(),
field_name="device__site__region",
lookup_expr="in",
to_field_name="slug",
label="Region (slug)",
)
site_id = django_filters.ModelMultipleChoiceFilter(
queryset=Site.objects.all(),
label="Site (ID)",
)
site = django_filters.ModelMultipleChoiceFilter(
field_name="device__site__slug",
queryset=Site.objects.all(),
to_field_name="slug",
label="Site name (slug)",
)
rack_group_id = TreeNodeMultipleChoiceFilter(
queryset=RackGroup.objects.all(),
field_name="device__rack__group",
lookup_expr="in",
label="Rack group (ID)",
)
rack_id = django_filters.ModelMultipleChoiceFilter(
field_name="device__rack",
queryset=Rack.objects.all(),
label="Rack (ID)",
)
role_id = django_filters.ModelMultipleChoiceFilter(
field_name="device__device_role_id",
queryset=DeviceRole.objects.all(),
label="Role (ID)",
)
role = django_filters.ModelMultipleChoiceFilter(
field_name="device__device_role__slug",
queryset=DeviceRole.objects.all(),
to_field_name="slug",
label="Role (slug)",
)
manufacturer_id = django_filters.ModelMultipleChoiceFilter(
field_name="device__device_type__manufacturer",
queryset=Manufacturer.objects.all(),
label="Manufacturer (ID)",
)
manufacturer = django_filters.ModelMultipleChoiceFilter(
field_name="device__device_type__manufacturer__slug",
queryset=Manufacturer.objects.all(),
to_field_name="slug",
label="Manufacturer (slug)",
)
platform_id = django_filters.ModelMultipleChoiceFilter(
queryset=Platform.objects.all(),
label="Platform (ID)",
)
platform = django_filters.ModelMultipleChoiceFilter(
field_name="device__platform__slug",
queryset=Platform.objects.all(),
to_field_name="slug",
label="Platform (slug)",
)
device_status_id = StatusFilter(
field_name="device__status",
queryset=Status.objects.all(),
label="Device Status",
)
device_type_id = django_filters.ModelMultipleChoiceFilter(
field_name="device__device_type_id",
queryset=DeviceType.objects.all(),
label="Device type (ID)",
)
device_id = django_filters.ModelMultipleChoiceFilter(
queryset=Device.objects.all(),
label="Device Name",
)
device = django_filters.ModelMultipleChoiceFilter(
field_name="device__name",
queryset=Device.objects.all(),
label="Device Name",
)
def search(self, queryset, name, value): # pylint: disable=unused-argument,no-self-use
"""Perform the filtered search."""
if not value.strip():
return queryset
# We chose to search only on the device name; more fields could be included.
qs_filter = Q(device__name__icontains=value)
return queryset.filter(qs_filter)
class Meta:
"""Meta class attributes for ConfigComplianceFilter."""
model = models.ConfigCompliance
distinct = True
fields = [
"q",
"tenant_group_id",
"tenant_group",
"tenant_id",
"tenant",
"region_id",
"region",
"site_id",
"site",
"rack_group_id",
"rack_id",
"role_id",
"role",
"manufacturer_id",
"manufacturer",
"platform_id",
"platform",
"device_status_id",
"device_type_id",
"device_id",
"device",
]
class ComplianceFeatureFilter(CustomFieldModelFilterSet):
"""Inherits Base Class CustomFieldModelFilterSet."""
q = django_filters.CharFilter(
method="search",
label="Search",
)
def search(self, queryset, name, value): # pylint: disable=unused-argument,no-self-use
"""Perform the filtered search."""
if not value.strip():
return queryset
qs_filter = Q(name__icontains=value)
return queryset.filter(qs_filter)
class Meta:
"""Boilerplate filter Meta data for compliance feature."""
model = models.ComplianceFeature
fields = ["q", "name"]
class ComplianceRuleFilter(CustomFieldModelFilterSet):
"""Inherits Base Class CustomFieldModelFilterSet."""
q = django_filters.CharFilter(
method="search",
label="Search",
)
def search(self, queryset, name, value): # pylint: disable=unused-argument,no-self-use
"""Perform the filtered search."""
if not value.strip():
return queryset
qs_filter = Q(feature__name__icontains=value)
return queryset.filter(qs_filter)
class Meta:
"""Boilerplate filter Meta data for compliance rule."""
model = models.ComplianceRule
fields = ["q", "platform", "feature"]
class ConfigRemoveFilter(CustomFieldModelFilterSet):
"""Inherits Base Class CustomFieldModelFilterSet."""
class Meta:
"""Boilerplate filter Meta data for Config Remove."""
model = models.ConfigRemove
fields = ["platform", "name"]
class ConfigReplaceFilter(CustomFieldModelFilterSet):
"""Inherits Base Class CustomFieldModelFilterSet."""
class Meta:
"""Boilerplate filter Meta data for Config Remove."""
model = models.ConfigReplace
fields = ["platform", "name"]
| 31.439589 | 118 | 0.627146 | 1,165 | 12,230 | 6.33133 | 0.092704 | 0.059789 | 0.144252 | 0.110765 | 0.870526 | 0.869713 | 0.848156 | 0.840022 | 0.840022 | 0.840022 | 0 | 0 | 0.262633 | 12,230 | 388 | 119 | 31.520619 | 0.81792 | 0.084137 | 0 | 0.705706 | 0 | 0 | 0.150405 | 0.023492 | 0 | 0 | 0 | 0 | 0 | 1 | 0.012012 | false | 0 | 0.024024 | 0 | 0.228228 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
429aece0773aed75e59f5de9b10521acb70b957c | 83 | py | Python | recipe_api_project/calc.py | binyammesfin/recipe-app-api | eda609df69f3932469ec3a847ae7a2ccd15ed62b | [
"MIT"
] | null | null | null | recipe_api_project/calc.py | binyammesfin/recipe-app-api | eda609df69f3932469ec3a847ae7a2ccd15ed62b | [
"MIT"
] | 5 | 2020-06-06T00:02:54.000Z | 2021-06-09T18:29:37.000Z | recipe_api_project/calc.py | binyammesfin/recipe-app-api | eda609df69f3932469ec3a847ae7a2ccd15ed62b | [
"MIT"
] | null | null | null |
def addnumbers(x, y):
return x + y
def subnumbers(x, y):
return x - y
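# Example usage (sketch, not part of the original module):
#   addnumbers(2, 3)   # -> 5
#   subnumbers(5, 2)   # -> 3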
| 8.3 | 21 | 0.554217 | 14 | 83 | 3.285714 | 0.428571 | 0.173913 | 0.347826 | 0.391304 | 0.434783 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.325301 | 83 | 9 | 22 | 9.222222 | 0.821429 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.5 | false | 0 | 0 | 0.5 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
35eb1eaf8706887f04193a7605ee1b6eaf2c452a | 12,241 | py | Python | tests/chem/magpie_python/data/materials/test_CompositionEntry.py | crazysal/chemml | 300ed183c623fc8762ed2343e48c9e2ac5102c0f | [
"BSD-3-Clause"
] | 108 | 2018-03-23T20:06:03.000Z | 2022-01-06T19:32:46.000Z | tests/chem/magpie_python/data/materials/test_CompositionEntry.py | crazysal/chemml | 300ed183c623fc8762ed2343e48c9e2ac5102c0f | [
"BSD-3-Clause"
] | 18 | 2019-08-09T21:16:14.000Z | 2022-02-14T21:52:06.000Z | tests/chem/magpie_python/data/materials/test_CompositionEntry.py | crazysal/chemml | 300ed183c623fc8762ed2343e48c9e2ac5102c0f | [
"BSD-3-Clause"
] | 28 | 2018-04-28T17:07:33.000Z | 2022-02-28T07:22:56.000Z | import unittest
from itertools import permutations
import numpy.testing as np
import os
import pkg_resources
from chemml.chem.magpie_python.data.materials.CompositionEntry import CompositionEntry
class testCompositionEntry(unittest.TestCase):
def test_parsing(self):
entry = CompositionEntry(composition="Fe")
self.assertEqual(1, len(entry.get_element_ids()))
self.assertAlmostEqual(1.0, entry.get_element_fraction(name="Fe"),
delta=1e-6)
entry = CompositionEntry(composition="FeO0")
self.assertEqual(1, len(entry.get_element_ids()))
self.assertAlmostEqual(1.0, entry.get_element_fraction(name="Fe"),
delta=1e-6)
entry = CompositionEntry(composition="FeCl3")
self.assertEqual(2, len(entry.get_element_ids()))
self.assertAlmostEqual(0.25, entry.get_element_fraction(name="Fe"),
delta=1e-6)
self.assertAlmostEqual(0.75, entry.get_element_fraction(name="Cl"),
delta=1e-6)
entry = CompositionEntry(composition="Fe1Cl_3")
self.assertEqual(2, len(entry.get_element_ids()))
self.assertAlmostEqual(0.25, entry.get_element_fraction(name="Fe"),
delta=1e-6)
self.assertAlmostEqual(0.75, entry.get_element_fraction(name="Cl"),
delta=1e-6)
entry = CompositionEntry(composition="FeCl_3")
self.assertEqual(2, len(entry.get_element_ids()))
self.assertAlmostEqual(0.25, entry.get_element_fraction(name="Fe"),
delta=1e-6)
self.assertAlmostEqual(0.75, entry.get_element_fraction(name="Cl"),
delta=1e-6)
entry = CompositionEntry(composition="FeClCl2")
self.assertEqual(2, len(entry.get_element_ids()))
self.assertAlmostEqual(0.25, entry.get_element_fraction(name="Fe"),
delta=1e-6)
self.assertAlmostEqual(0.75, entry.get_element_fraction(name="Cl"),
delta=1e-6)
entry = CompositionEntry(composition="Ca(NO3)2")
self.assertEqual(3, len(entry.get_element_ids()))
self.assertAlmostEqual(1.0 / 9, entry.get_element_fraction(name="Ca"),
delta=1e-6)
self.assertAlmostEqual(2.0 / 9, entry.get_element_fraction(name="N"),
delta=1e-6)
self.assertAlmostEqual(6.0 / 9, entry.get_element_fraction(name="O"),
delta=1e-6)
entry = CompositionEntry(composition="Ca(N[O]3)2")
self.assertEqual(3, len(entry.get_element_ids()))
self.assertAlmostEqual(1.0 / 9, entry.get_element_fraction(name="Ca"),
delta=1e-6)
self.assertAlmostEqual(2.0 / 9, entry.get_element_fraction(name="N"),
delta=1e-6)
self.assertAlmostEqual(6.0 / 9, entry.get_element_fraction(name="O"),
delta=1e-6)
entry = CompositionEntry(composition="Ca(N(O1.5)2)2")
self.assertEqual(3, len(entry.get_element_ids()))
self.assertAlmostEqual(1.0 / 9, entry.get_element_fraction(name="Ca"),
delta=1e-6)
self.assertAlmostEqual(2.0 / 9, entry.get_element_fraction(name="N"),
delta=1e-6)
self.assertAlmostEqual(6.0 / 9, entry.get_element_fraction(name="O"),
delta=1e-6)
entry = CompositionEntry(composition="Ca{N{O1.5}2}2")
self.assertEqual(3, len(entry.get_element_ids()))
self.assertAlmostEqual(1.0 / 9, entry.get_element_fraction(name="Ca"),
delta=1e-6)
self.assertAlmostEqual(2.0 / 9, entry.get_element_fraction(name="N"),
delta=1e-6)
self.assertAlmostEqual(6.0 / 9, entry.get_element_fraction(name="O"),
delta=1e-6)
entry = CompositionEntry(composition="CaO-0.01Ni")
self.assertEqual(3, len(entry.get_element_ids()))
self.assertAlmostEqual(1.0 / 2.01, entry.get_element_fraction(
name="Ca"), delta=1e-6)
self.assertAlmostEqual(0.01 / 2.01, entry.get_element_fraction(
name="Ni"), delta=1e-6)
self.assertAlmostEqual(1.0 / 2.01, entry.get_element_fraction(
name="O"), delta=1e-6)
entry = CompositionEntry(composition="CaO"+str(chr(183))+"0.01Ni")
self.assertEqual(3, len(entry.get_element_ids()))
self.assertAlmostEqual(1.0 / 2.01, entry.get_element_fraction(
name="Ca"), delta=1e-6)
self.assertAlmostEqual(0.01 / 2.01, entry.get_element_fraction(
name="Ni"), delta=1e-6)
self.assertAlmostEqual(1.0 / 2.01, entry.get_element_fraction(
name="O"), delta=1e-6)
entry = CompositionEntry(composition="Ca(N(O1.5)2)2-2H2O")
self.assertEqual(4, len(entry.get_element_ids()))
self.assertAlmostEqual(1.0 / 15, entry.get_element_fraction(name="Ca"),
delta=1e-6)
self.assertAlmostEqual(2.0 / 15, entry.get_element_fraction(name="N"),
delta=1e-6)
self.assertAlmostEqual(8.0 / 15, entry.get_element_fraction(name="O"),
delta=1e-6)
self.assertAlmostEqual(4.0 / 15, entry.get_element_fraction(name="H"),
delta=1e-6)
entry = CompositionEntry(composition="Ca(N(O1.5)2)2-2.1(H)2O")
self.assertEqual(4, len(entry.get_element_ids()))
self.assertAlmostEqual(1.0 / 15.3, entry.get_element_fraction(
name="Ca"), delta=1e-6)
self.assertAlmostEqual(2.0 / 15.3, entry.get_element_fraction(
name="N"), delta=1e-6)
self.assertAlmostEqual(8.1 / 15.3, entry.get_element_fraction(
name="O"), delta=1e-6)
self.assertAlmostEqual(4.2 / 15.3, entry.get_element_fraction(
name="H"), delta=1e-6)
entry = CompositionEntry(composition="{[("
"Fe0.6Co0.4)0.75B0.2Si0.05]0.96Nb0.04}96Cr4")
self.assertEqual(6, len(entry.get_element_ids()))
self.assertAlmostEqual(0.41472, entry.get_element_fraction(
name="Fe"), delta=1e-6)
self.assertAlmostEqual(0.27648, entry.get_element_fraction(
name="Co"), delta=1e-6)
self.assertAlmostEqual(0.18432, entry.get_element_fraction(
name="B"), delta=1e-6)
self.assertAlmostEqual(0.04608, entry.get_element_fraction(
name="Si"), delta=1e-6)
self.assertAlmostEqual(0.0384, entry.get_element_fraction(
name="Nb"), delta=1e-6)
self.assertAlmostEqual(0.04, entry.get_element_fraction(
name="Cr"), delta=1e-6)
def test_set_composition(self):
# One element.
elem = [0]
frac = [1]
entry = CompositionEntry(element_ids=elem, fractions=frac)
self.assertEqual(1, len(entry.get_element_ids()))
self.assertAlmostEqual(1.0, entry.get_element_fraction(name="H"),
delta=1e-6)
# One element with duplicates.
elem = [0, 0]
frac = [0.5, 0.5]
entry = CompositionEntry(element_ids=elem, fractions=frac)
self.assertEqual(1, len(entry.get_element_ids()))
self.assertAlmostEqual(1.0, entry.get_element_fraction(name="H"),
delta=1e-6)
# One element with zero.
elem = [0, 1]
frac = [1, 0]
entry = CompositionEntry(element_ids=elem, fractions=frac)
self.assertEqual(1, len(entry.get_element_ids()))
self.assertAlmostEqual(1.0, entry.get_element_fraction(name="H"),
delta=1e-6)
# Two elements.
elem = [16, 10]
frac = [1, 1]
entry = CompositionEntry(element_ids=elem, fractions=frac)
self.assertEqual(2, len(entry.get_element_ids()))
self.assertAlmostEqual(0.5, entry.get_element_fraction(name="Na"),
delta=1e-6)
self.assertAlmostEqual(0.5, entry.get_element_fraction(name="Cl"),
delta=1e-6)
np.assert_array_equal([10, 16], entry.get_element_ids())
np.assert_array_almost_equal([0.5, 0.5], entry.get_element_fractions())
self.assertAlmostEqual(2, entry.number_in_cell, delta=1e-6)
# Two elements with duplicates.
elem = [11, 16, 16]
frac = [1, 1, 1]
entry = CompositionEntry(element_ids=elem, fractions=frac)
self.assertEqual(2, len(entry.get_element_ids()))
self.assertAlmostEqual(1.0 / 3, entry.get_element_fraction(name="Mg"),
delta=1e-6)
self.assertAlmostEqual(2.0 / 3, entry.get_element_fraction(name="Cl"),
delta=1e-6)
np.assert_array_equal([11, 16], entry.get_element_ids())
np.assert_array_almost_equal([1.0 / 3, 2.0 / 3],
entry.get_element_fractions())
self.assertAlmostEqual(3, entry.number_in_cell, delta=1e-6)
# Two elements with zero.
elem = [11, 16, 16]
frac = [1, 2, 0]
entry = CompositionEntry(element_ids=elem, fractions=frac)
self.assertEqual(2, len(entry.get_element_ids()))
self.assertAlmostEqual(1.0 / 3, entry.get_element_fraction(name="Mg"),
delta=1e-6)
self.assertAlmostEqual(2.0 / 3, entry.get_element_fraction(name="Cl"),
delta=1e-6)
np.assert_array_equal([11, 16], entry.get_element_ids())
np.assert_array_almost_equal([1.0 / 3, 2.0 / 3],
entry.get_element_fractions())
self.assertAlmostEqual(3, entry.number_in_cell, delta=1e-6)
def test_sort_and_normalize(self):
# Make an example composition.
elem = [1, 2, 3, 4, 5]
frac = [1.0, 2.0, 3.0, 4.0, 5.0]
# Make first composition.
entry = CompositionEntry(element_ids=elem, fractions=frac)
entry_elems = entry.get_element_ids()
entry_fracs = entry.get_element_fractions()
for i in range(5):
self.assertAlmostEqual(entry_fracs[i], entry_elems[i] / 15.0,
delta=1e-6)
# Iterate through all permutations.
for perm in permutations([0, 1, 2, 3, 4]):
# Make a new version of elem and frac.
new_elem = list(elem)
new_frac = list(frac)
for i in range(len(new_elem)):
new_elem[i] = elem[perm[i]]
new_frac[i] = frac[perm[i]]
# Make sure it parses the same.
new_entry = CompositionEntry(element_ids=elem, fractions=frac)
self.assertEqual(new_entry, entry)
self.assertEqual(0, new_entry.__cmp__(entry))
np.assert_array_equal(entry_elems, new_entry.get_element_ids())
np.assert_array_almost_equal(entry_fracs,
new_entry.get_element_fractions())
def test_compare(self):
# this_file_path = os.path.dirname(__file__)
# abs_path = os.path.join(this_file_path, "../../test-files/")
abs_path = pkg_resources.resource_filename('chemml', os.path.join('datasets', 'data', 'magpie_python_test'))
entries = CompositionEntry.import_composition_list(
os.path.join(abs_path, "small_set_comp.txt"))
for e1 in range(len(entries)):
self.assertEqual(0, entries[e1].__cmp__(entries[e1]))
for e2 in range(e1 + 1, len(entries)):
self.assertEqual(entries[e1].__cmp__(entries[e2]),
-1 * entries[e2].__cmp__(entries[e1]))
if entries[e1].__cmp__(entries[e2]) == 0:
self.assertEqual(entries[e1].__hash__(), entries[
e2].__hash__())
self.assertTrue(entries[e1].__eq__(entries[e2])) | 48.768924 | 116 | 0.58108 | 1,496 | 12,241 | 4.570187 | 0.106952 | 0.095949 | 0.179903 | 0.171566 | 0.78631 | 0.754424 | 0.724733 | 0.715665 | 0.709083 | 0.686266 | 0 | 0.056375 | 0.291398 | 12,241 | 251 | 117 | 48.768924 | 0.731842 | 0.03186 | 0 | 0.549763 | 0 | 0.004739 | 0.026609 | 0.005406 | 0 | 0 | 0 | 0 | 0.42654 | 1 | 0.018957 | false | 0 | 0.033175 | 0 | 0.056872 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c407dd372072523b80c67b04f4b0c90d105b0212 | 7,462 | py | Python | Syco.py | MohSinTheLegend/Mohsin-V2 | 2747e1cde838459fcffed8790408cb5c8ae85611 | [
"Apache-2.0"
] | 1 | 2021-12-22T01:20:22.000Z | 2021-12-22T01:20:22.000Z | Syco.py | MohSinTheLegend/Mohsin-V2 | 2747e1cde838459fcffed8790408cb5c8ae85611 | [
"Apache-2.0"
] | null | null | null | Syco.py | MohSinTheLegend/Mohsin-V2 | 2747e1cde838459fcffed8790408cb5c8ae85611 | [
"Apache-2.0"
] | null | null | null | import marshal,zlib,base64
exec(marshal.loads(zlib.decompress(base64.b64decode("eJzNW01sG0l2Lv6JIkX92pYs2zNTnh+ZHo9Ikfqz7LE9sv5jWVJamtFYXkPTYheplshuurtpWY4MLDCbzAbBHpJLkGARJKdkc1hschokyCVA7kn2uocs9pTkEOSQAAECJO+96m42RcqSd7zJiGaz/rrq1av3vnrvVbnA3L8YfD+Br/1FhLHDnzCNMS3EyoxtMS8dYlshLx1mW2EvHWFbES8dZVtRLx1jWzEv3ca22rx0nG3FvXQ722r30gm2lfDSSbaV9NIdbKvDS6fYVorSYVbuZJUuttXFQpvGH7Oo6Gb7SWYpIfgTjO31MC3CvgwxLxMNZmIyE2Z7vUxrC9bEg5n2E5slgplkMNMRzKQoY4TY54e9DMv6mNZJZZvGnwHN54jmrrBL83m2d4FpXdjA6g67fXQHO+wJZnpPzPSdWHPudZo97Q4j7cjv86zczyoDbGsA2kTZ3kWmAan97EuQhEHmlgww7SKVXGJ6OxOXmRjEfrRBNoGZS5S5FMxcZhPaFaa9BT9vM+0d+OFMuwo/7zLtPfh5n2kfsImtK0y8xURIUhhie9B2iDLaNfY9kNF3mJamBLx9nRJXmfYhJaCjG5R4j2kfUQL6HGZbH7ASpIeYlqHCa0zLUiLNtBFKXGdajhIfMi1PiRtMG6XER0wbo8Qw08YpkWHaBCWyTJukxAgTOabdZPthZn0nIq6xLxlIAknxenoKNE3/H/hbSYcg6SThsbFrCVVbM82yfQGy1f0S1w3bUctlXj10dk2Dcy5r9KpfUxGFXdXQX4imGks8rQnbsXnpD/8A/350zz4vm+T9NoapiT2b2z1NFdWK3d+SiDxS0X+8vT9Yc1Wdwl6scjt5uLq4vrSSgWwUQUd9JuwUJDKZTFY3NPE8s2dTQeUZhzL8ZjL2ABQUNJnhQ0NIpTeMg2x8zz4Hz2LNFhYf3ufjIyMjWadQ5UPEHJysfNcfgg9J9nfDY8Y0DFFwdNOYsyzTKiEo/n2fPU2Jmz88kgn2/Z9+4iZ+8Yk9CL/Ph4s7wwX/5eEd1dAOdM3ZJfqp1tYrw7uGXi8whIMFTgIK5j6fmVtenlvZoOkd7+1pTS3rzqF9FacuyuXMzIaiaro5XSgI294A3hpm2SwdLq7Prk3TAh/vwTmsCvuvoOKh+UIvl9XseGaEp5d1o/b8Np82NMvUNQ5lt/n6w+GFqZGRNX6/ppe17LLyeT63cZsfPLvOp6vVstgUOw90Jzs+OpkZneDpB4sbD5c/4mV9X/AFUdg3r/PPhGXDmNkxGGJm1zIrIjs2mhnJ5EfHJzO5fI4/NHf0suDralG1dK+nx/P3t5em72fn749N356/P/1ZdhTogc/YzUx+cvT2ExvVA1d1WC0JwyFmqECRXlBxitnnwwcHB8NF06oM16yyMAqw0hpxG/jgwBvEBAe3uGW9BDT2eHzadZzqsDBKOqwICmLNKd4soEiE4RtBqUDpxMfzj9kRKe/A7JMR9jLEHABKQKIwO5Kw5OYjpOfQ2V4MNylC0Q62CRC6nsYOV2zsPln6+u3f/Ndf//qP7qXbUACxxj60HczYjmbWHKL2wNIdSXexXLN3iURHr8giuyxENY2EkvC/oKdIo2zSw8YqrUR9Vy1zCIvasSLUG+oJdYYK7Pg85xnNEMiGrbMUYS9h2iM47dknaZwy7KEw2YEFf+o01Qjbi3o76tN+d6ptNFWkN4OAQYmMn8hwWh2+bJaA9SW+ZHCpVOyf7qVxbkocH0itgmuvdOADX5HzjREfHH2f5mw2zxlfL8vOb2BxkubdDbNuZ53s/3fmF3Hm6+oznPiGuS8Mfv8QFGPX1l+PCwpKsdLrTb1h/m0SVGGI7Ldt+kj1gnAcnP6nVU11hP3Gpo14WqM+offRb9vMcQqexK/WnDe62CDsABqT37Ypd6Ksz5oHRtmEXQvm/abmjO9obr+3m2dt/xsAQ/Ly49ztqcnK8Bv849RnrjIDtpojNFJdMmX49PLSr2LAJJ/B3YzvHPJb3uAeWkyX9SSfVwtixzT3g9XSuAKKMhuLc5n5aXgqmdX5zOL0xpzyeRLwFoymkqVWjncJPa4Wi3pBV8uTNyeSfGV1Y47jX6AhFnJlbn1tdWX9/tLyHJ9eecQfLq2v80/X55J8c1pZWVpZCL4yu7pybYNvKI/4xiqfWV179CthVIG5f2FP0h/AQ0qx5u7RYbaw6IpyOLBrB6tBKeARxUcMW8dI/uviTptPoSxUq/TfP/r3f3y4s3bPfg+KHuee8JmyaQg+b+nC0Mq67XCwBflabQcsFb40y2n7fTzyhM89153aGGSQD2MTr8cHaTHgJE3XYADDQVTIOEAYqGsRaQkU6cYzXa3rUIMipbwm24VdUy+IZazrI2XqDMXgi0/5IQaHggxekQyW7AMcOSLHHD2ed9EoIY8nTaURKr2LdIOJ5DrjRopJHkeP8dhGHifv3r3LaYPP0XOEZiiAdbSFkgiNVubBpgUngBdMywKLt3xIwEI7gaUebOtGteYol5Ej+JKCFjt5WwVcqu2KqhvSBcC+d9XCfhOHFLQ2P8N8F80/BZ8kfHvCLkcaRG7tDYic1nZM7hAlVxR0D3z4tKdckVvGteOburPrGhPEl5FK2lW+ldXhmTW37Lqrdty+gm/nG97+FL2maZRX1bZ9Sb0PLPlmklrnu3KpAeOdLn8ZXi2jtHxfYP58S7l01yHCgptc6CTJTIYc2tXc6JCDvHdLpFS6trvc6prbwurstblxIsi14+vMDQjh7kkFcblptmNsCCNCYfYSHAWQ/YTnLmAmKVt1eHEid+TO5pG1ruCgbrvuYLueFj2kpAbepNn30ux5yOmUpe9SaR+VTvhtUxj6gRG66tJ3vlEjlbdwAVEYWzv0pd84/+OFn79w7pGld1w8dbQ+fmmBIsV/fOMJyqgjXIFfFJbA3YZMgAyJUsZ57hBmHBCNrpiv19BnLtbKdSuEJEp2Axa5qFviOoUlcKroItq3sllNtwFjtEyplH1xc2p9s/piY/OABskrb3vceHz1CexxD+ZW+NLKZ7D/zqYxtKDwV+oBakkBvFRhKec8KDKrwlAQxNztxrSF0u1VVoRR81wc3SjuKLj/kHepvOu94uCkpF+pPmtWKXxjH/Mf+6DmqRX+dsE3Bd84WVRJ+GB5l1ufdCEw6UEgokXUU70/D7WEQFkS9hQJ8TByEh4GNNCDQ0/6pS7vxVHRUOnCqHSgU1iS9EqkrsVR9dySCLJZqtnLKJNdtkv1S6AigsocRdgA6NnAURR+u+HbwxwM5ob2UuxljIHe7HWxIyLqZRvq01Eb22+DXSbkdKMiYcwURoqzoziGUI+ozSDpehwDyKRQ5+u7wYWTZt9Psx9wdb2BVX1SS+cwzooDDLJBJOEHIe1SU9tzsu0gBllb15F6X/E3lxP02deVK6S1o2O3cxXf6HR3FFKwb6jb/a5uz8GmXM4uzWZXapUd2JNuSSOAO0gGJdP0vG5319HAPgDdRBD43YDK7gyrVT1TdG
nNFMxKtiJgelpWrTm7EiruqRRJ2yZ9uZMfnZwcn5oamRqfyk2Mj3+QH8+PT86MFHNjI6q6I7TizsS4WshPqpOjU0LLqfn8xOhObghDT6pzZ882jSFb299+JuNgd3JDAidzh0J7Q2WzoJbFHWFsf7o+VHVpvmMvYh28dUc37aGSMIQFnsW2DURBF9sFoFwXNnRl66U7o8Xx8fHi1BTQkSsWtElVHSmMjRXHbxbH83lRnCBU2BWqBgSQaRec3TGAVN7HNX/HB2laynwFUNKFybK7uDpiho2b8COzBjZCoWDWDIcvqja/L1zk1PQ8NkLIPzg4aOA52WICY6rbFbtko1tO9v9oroJwKQefBzaBl5Np6F3lM7uisF81dcM54b3jwrKq1KVh0zKNEm1fJ2KvMuTBM3HOEtUyUK6g/UvGsxfXJiAtCYewWvJXBuMEbDSYwKUn3EWH1FauYbc4bwVRV7mOjw9RcrFTst11jfrUtZz7m6ffmltePdCoX011VJL2pzJQDovTDObY8+8w17VnLYC8jz4S0NMA6Sn49gKEx/xWAOrhHsojqLvBQRwx5sH68nFYPxzEmcgwwpdhN/IJ8LxpfMyiQC2ebGksFAo5Mc+8dd93Tas2gmh5rOaCMZ2UTTEXk+MI9QMA9BKHMeiaYnudBPd0SCaBHiB+EAB+01iCceUp4D/TuA3QSkQkz0pE87tuSUeL15rHSeGjEx9d+Oj2sk5PHXh76sDbwnKxyPhAQafjCD42MpaRhsqK6fB50BGN8Nqq8GGryOtv1/H6RgAKwdmv7h6HwkbsU3KefWGoFWHn3eGX3ciNr6FNINCgpi33EHIn+Ap0CxD9y+8QqFXo8yxOzzzg88rqQ3gsza3M8uWl9Q0+DQnf1yaWogMjA2Pp3kYUuOZNFY/cSPWXVonPx/EB9zxlHB8T+MAAm4Kul3ILHzc9nHggDul1B99E8wzwuwzuaBrHUHAhlBlv91KVWdbKy0GQ+EvMf+JqcSQ06Ho5qKUdlO9zjbB6aVDbY6/w1XH7PM1XXwn46hpqdyD0F/XEGx0RqTzSjJLVcd+jDzhAJOcIg03+g28xv0J+KVCOAVMQO4yXBnamQwX3LTruQxmlIMCSwWdMSyDX231UJ+vZAidWudAoAfUlxv1XxgAQ3M0qHik1rw62+inm+4mzXhQg7nK+IRrgY+bfsLNgphoF7JoE7IoRdm0jdp0dLhvRJ1F/LX4aUiU8/1VLeNgI8AbvJ+uLlzoJpEieyXpowCjAh9dEqXMBwKhaZlG3TQshY9BTdlg8Pm+ZFTeuxpd12JA/YDKAUa91zZelWS/IIbHgDUUyOprxQ5nzdXu+QaJcOwBmSmgi/TuUwfwJ8Y1fg5+fYx6tERaua/7JGk6y1unKGnl5fxdu1G/gMGyY8Pxt+Qxq/F+zFjEQ6XRFkd4BkDlUYdhzwwhwGLaQ8YoQulLP/yIEzhLI1ODsk/voTx0RAAyCl4Spdki1oXuy1+HGRUC2BtExgkFT6EKBmA2AUzRAhwkWSP8XISQuRcT9V6sADXODGi/bGW2pzU0OZ5kfFzlqJ/dt4LSpJMiNSxDJAwuLm0YWtLCTtPAp3iMCr466OscGJA3n5Xhdde1rGPLC6UMC92wW5F6SHSV97iVfm3s/CW26oZx+4l53GEwgL5QzEDA5LjKnl4EDKeNPX4ZoSufwzpE2KNGcXMiGEuT0V4x9FWLFMLqRvwUy1cGcC0y7QpMBcsAg62dHHdRt2H1Fewsfb7sDfp95A2rvAH9kWYAIuZ4DdczhrTYMgiD0AV5p0WSLhBF2C8tGwW1VwbMi2n+VYXjozNuUjxREFN+jXMKwMYbsv4m5cvnV1FI8uZFQ8p9828q1lwgJk8dRFmhb1o19TlYewOAzwdcs/ZlaOOSrRvAIAjyipXWC48/Usq7RVZjkmgWD8jmMOQFeEzbKqzsnMJC2677gbjvjxduJZuTZxurG9DLQtQU01+E959bOIACiHbnuqJbDKe7rz/QAe5yuVoVqUTiPrmDgWqD5hBuEbcUJv9xYjwdz36FYD+jU4VIcdSrMBgCNfGwArRsAQJMRnRh82/CmBUZ0ohTRIVumHtGJexGdv0UtpJcT7ssJ0pZ4IJwrwzzUDHBLtuqUOokVCRJnEHNQ+yMZ9DL+pA1EHaM23TJqcz0EGNV6mF5vmHYElhOGafeHYcFhbrUhK/qAFYkWrEiciRUNdP5DA52JM9GZOJ3O34vVOT4WDnI8cSaOJ07neDaGrAD0BpRtZkXy9Vnxs3CQFckzsSJ5Oiu+itZZcS8SZEXyTKxIns6K96OweSF4N7Gh4/XZ8ItIkA0dZ2JDx+ls+G6kzob5aJANHWdiQ8fpbHgrAps07FvNbEi9Phv+JQqbrk9j6kxsSJ3OhsNwnQ0PY0E2pM7EhtTpbOgPo2IMgGJ0tmBF5+uz4j9iQYnoPBMrOk9nhRGqs+LztiArOs/Eis7TWdFBtuxFYEVXC1Z0vT4rWDzIiq4zsaLrdFYUAhtUIR5kRdeZWNF1OiswdhdH79O99L2eHiT3j4750cnI5Ue/XQH2/P91gF2akhTrvubahVMjlceXHzdEzt3iJ54rSsamTeGFfXKelzCPHOVHXF4IbRk6V+gO2/WGgejWQz3gdnwoGqVQpThiVC7ZGDmkmBgnX3Ty5gT8o7jVmroPNqJqtBrj5OnIjnN5GbKGvsg9lpdSq2XdOR4wwzBZIGrWGBwnklSw/QyNDkupT9Uqyfu4trCUu57NTiPguuaUac9w1+RNmH36KVT9Jnk/NeqnxvzUuJ+a8FOT8m5phPqREX9z3264LEOHr6puXIOR7Z9hTagtdAHc9aQbVr8Az0HI9YR7AmlZGyxJnbm27RvU/vLjno9GQjoZ/RjBWLNMlIT6ic+MWamWhSM08sj4ogmOStPxkSs12SaRvcXlTSiouT2ar/A0xVXQ6zt+TlOXoxu+MB2LvqL0WFLqNrFwAR8YUJGHLg+9JbUdi37LwgjE/phXXVGr8rzFPDDSnV7HMogrnUaMdymdntjZJJUkdIYqfyruQY5TF05dKXrJqgz70L1GDR+iLldE5X9ifo9kKgVShR95Jo9B3146gz8XuuKWpSgkFAPJu0q18lS+w83V2+JhThxWVYaQYpG+76ZCPRRAut4ySPyn7IQgsf1+IDRcY40XZiKBO1yxenCmLXg/RUYjG6/LhNxLTkaGOk/SQD+gdAcN9PvMa5FidKYCq+KHCXAtVmQg8N4TeQiJrqkInEbw1Qf8Hk8/yq5c57Q6h8Km1TiU8QQUEfRo18pCtQVf0MGLXndMSy2BNy2sik5bBF8q8ml7n+6KgMNcqT0ftoVTqw7bsilFOAtVLlGXZ22toFoy7ikJ8i7AAoHkREu/G+8GLph8B31vKUEm0WbYHS5Zj4QNjjun+18t9aPfF3a8yyGD2BEpxc1RRmzyYwSut10hu0hiEfdPAVFsLpLI0N6wTbfjtrfTeX9kIoSwnu7fwjZK/xuBdBCmY1bk+bNq75b1HZqTJej416H/2wTuP6kvtSoJB1GXXq5ZZXwBW8rtG
HKk/A7yvFIrO3pVYhB0kamaZllqLpKyVKmaliMPbwY9xSZQr+xrukVjra7L05koQb2zS4PqdhEWxznH6se8GfG8IOgAwFYiHqE4M5Alms6OPJ219Yp7elS1CJUkCwQustNHwOFooqgC4fS/UXDidBEI6+iSj0LH0hf8RbzkLZ2u7cpDJ7x2KC8u4l0dOuOVkLboL/pDb+Xd9cJzO1gvf+Wbbkx/XDG1WlncJQQrwyMS+mHovBtcPv7pBXQBZAn3tag7DyiCR8preDMonIglou6nsy3cFk4k4dOR4IlriauJrsSFxNeJ3lTofwHaBtZo"))))
| 2,487.333333 | 7,434 | 0.966631 | 241 | 7,462 | 29.929461 | 0.987552 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.1586 | 0.000402 | 7,462 | 2 | 7,435 | 3,731 | 0.808419 | 0 | 0 | 0 | 0 | 0.5 | 0.988475 | 0.988475 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 10 |
c4637581696711adb308b84d580683c0e5987add | 264 | py | Python | zilean/server/conf/_service.py | A-Hilaly/zilean | 2b2e87969a0d8064e8b92b07c346a4006f93c795 | [
"Apache-2.0"
] | null | null | null | zilean/server/conf/_service.py | A-Hilaly/zilean | 2b2e87969a0d8064e8b92b07c346a4006f93c795 | [
"Apache-2.0"
] | null | null | null | zilean/server/conf/_service.py | A-Hilaly/zilean | 2b2e87969a0d8064e8b92b07c346a4006f93c795 | [
"Apache-2.0"
] | null | null | null | import os
from ._config import MYSQL_CMD_LOCATION
class Service(object):
@staticmethod
def start_mysql_service():
pass
@staticmethod
def stop_mysql_service():
pass
@staticmethod
def restart_mysql_service():
pass
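# Editor's note: the three methods above are stubs. Below is a hedged sketch of
# one possible implementation; it assumes MYSQL_CMD_LOCATION points at an
# executable that accepts "start"/"stop"/"restart" as its argument, which is an
# assumption and not something this module states. The helper is not wired into
# the Service class, so behaviour is unchanged.
import subprocess

def _run_mysql_service(action):
    """Invoke the MySQL service command with the given action (sketch only)."""
    subprocess.run([MYSQL_CMD_LOCATION, action], check=True)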
| 15.529412 | 39 | 0.666667 | 29 | 264 | 5.758621 | 0.551724 | 0.269461 | 0.287425 | 0.335329 | 0.371257 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.272727 | 264 | 16 | 40 | 16.5 | 0.869792 | 0 | 0 | 0.5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | true | 0.25 | 0.166667 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
676b95f138ff7cf30de625cf54e31ac77397cd80 | 13,127 | py | Python | pyflux/gpnarx/tests/gpnarx_tests.py | ThomasHoppe/pyflux | 297f2afc2095acd97c12e827dd500e8ea5da0c0f | [
"BSD-3-Clause"
] | 2,091 | 2016-04-01T02:52:10.000Z | 2022-03-29T11:38:15.000Z | pyflux/gpnarx/tests/gpnarx_tests.py | EricSchles/pyflux | 297f2afc2095acd97c12e827dd500e8ea5da0c0f | [
"BSD-3-Clause"
] | 160 | 2016-04-26T14:52:18.000Z | 2022-03-15T02:09:07.000Z | pyflux/gpnarx/tests/gpnarx_tests.py | EricSchles/pyflux | 297f2afc2095acd97c12e827dd500e8ea5da0c0f | [
"BSD-3-Clause"
] | 264 | 2016-05-02T14:03:31.000Z | 2022-03-29T07:48:20.000Z | import numpy as np
import pyflux as pf
noise = np.random.normal(0,1,40)
data = np.zeros(40)
for i in range(1,len(data)):
data[i] = 0.9*data[i-1] + noise[i]
def test_couple_terms():
"""
Tests a GPNARX model with 1 AR term and that
the latent variable list length is correct, and that the estimated
latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, kernel=pf.SquaredExponential())
x = model.fit()
assert(len(model.latent_variables.z_list) == 3)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_couple_terms_integ():
"""
Tests a GPNARX model with 1 AR term, integrated once, and that
the latent variable list length is correct, and that the estimated
latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, integ=1, kernel=pf.SquaredExponential())
x = model.fit()
assert(len(model.latent_variables.z_list) == 3)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_bbvi():
"""
Tests a GPNARX model estimated with BBVI and that the length of the latent variable
list is correct, and that the estimated latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, kernel=pf.SquaredExponential())
x = model.fit('BBVI',iterations=100)
assert(len(model.latent_variables.z_list) == 3)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_mh():
"""
Tests a GPNARX model estimated with Metropolis-Hastings and that the length of the
latent variable list is correct, and that the estimated latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, kernel=pf.SquaredExponential())
x = model.fit('M-H',nsims=300)
assert(len(model.latent_variables.z_list) == 3)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_pml():
"""
Tests a GPNARX model estimated with PML (Laplace approximation) and that the length of the
latent variable list is correct, and that the estimated latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, kernel=pf.SquaredExponential())
x = model.fit('PML')
assert(len(model.latent_variables.z_list) == 3)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_predict_length():
"""
Tests that the prediction dataframe length is equal to the number of steps h
"""
model = pf.GPNARX(data=data, ar=2, kernel=pf.SquaredExponential())
x = model.fit()
x.summary()
assert(model.predict(h=5).shape[0] == 5)
def test_predict_is_length():
"""
Tests that the prediction IS dataframe length is equal to the number of steps h
"""
model = pf.GPNARX(data=data, ar=2, kernel=pf.SquaredExponential())
x = model.fit()
assert(model.predict_is(h=5).shape[0] == 5)
def test_predict_nans():
"""
Tests that the predictions are not nans
"""
model = pf.GPNARX(data=data, ar=2, kernel=pf.SquaredExponential())
x = model.fit()
x.summary()
assert(len(model.predict(h=5).values[np.isnan(model.predict(h=5).values)]) == 0)
def test_predict_is_nans():
"""
Tests that the in-sample predictions are not nans
"""
model = pf.GPNARX(data=data, ar=2, kernel=pf.SquaredExponential())
x = model.fit()
x.summary()
assert(len(model.predict_is(h=5).values[np.isnan(model.predict_is(h=5).values)]) == 0)
def test_ou_couple_terms():
"""
Tests a GPNARX model with 1 AR term and that
the latent variable list length is correct, and that the estimated
latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, kernel=pf.OrnsteinUhlenbeck())
x = model.fit()
assert(len(model.latent_variables.z_list) == 3)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_ou_couple_terms_integ():
"""
Tests a GPNARX model with 1 AR term, integrated once, and that
the latent variable list length is correct, and that the estimated
latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, integ=1, kernel=pf.OrnsteinUhlenbeck())
x = model.fit()
assert(len(model.latent_variables.z_list) == 3)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_ou_bbvi():
"""
Tests a GPNARX model estimated with BBVI and that the length of the latent variable
list is correct, and that the estimated latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, kernel=pf.OrnsteinUhlenbeck())
x = model.fit('BBVI',iterations=100)
assert(len(model.latent_variables.z_list) == 3)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_ou_mh():
"""
Tests a GPNARX model estimated with Metropolis-Hastings and that the length of the
latent variable list is correct, and that the estimated latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, kernel=pf.OrnsteinUhlenbeck())
x = model.fit('M-H',nsims=300)
assert(len(model.latent_variables.z_list) == 3)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_ou_pml():
"""
Tests a GPNARX model estimated with PML (Laplace approximation) and that the length of the
latent variable list is correct, and that the estimated latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, kernel=pf.OrnsteinUhlenbeck())
x = model.fit('PML')
assert(len(model.latent_variables.z_list) == 3)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_ou_predict_length():
"""
Tests that the prediction dataframe length is equal to the number of steps h
"""
model = pf.GPNARX(data=data, ar=2, kernel=pf.OrnsteinUhlenbeck())
x = model.fit()
x.summary()
assert(model.predict(h=5).shape[0] == 5)
def test_ou_predict_is_length():
"""
Tests that the prediction IS dataframe length is equal to the number of steps h
"""
model = pf.GPNARX(data=data, ar=2, kernel=pf.OrnsteinUhlenbeck())
x = model.fit()
assert(model.predict_is(h=5).shape[0] == 5)
def test_ou_predict_nans():
"""
Tests that the predictions are not nans
"""
model = pf.GPNARX(data=data, ar=2, kernel=pf.OrnsteinUhlenbeck())
x = model.fit()
x.summary()
assert(len(model.predict(h=5).values[np.isnan(model.predict(h=5).values)]) == 0)
def test_ou_predict_is_nans():
"""
Tests that the in-sample predictions are not nans
"""
model = pf.GPNARX(data=data, ar=2, kernel=pf.OrnsteinUhlenbeck())
x = model.fit()
x.summary()
assert(len(model.predict_is(h=5).values[np.isnan(model.predict_is(h=5).values)]) == 0)
def test_rq_couple_terms():
"""
Tests a GPNARX model with 1 AR term and that
the latent variable list length is correct, and that the estimated
latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, kernel=pf.RationalQuadratic())
x = model.fit()
assert(len(model.latent_variables.z_list) == 4)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_rq_couple_terms_integ():
"""
Tests a GPNARX model with 1 AR term, integrated once, and that
the latent variable list length is correct, and that the estimated
latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, integ=1, kernel=pf.RationalQuadratic())
x = model.fit()
assert(len(model.latent_variables.z_list) == 4)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_rq_bbvi():
"""
Tests a GPNARX model estimated with BBVI and that the length of the latent variable
list is correct, and that the estimated latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, kernel=pf.RationalQuadratic())
x = model.fit('BBVI',iterations=100)
assert(len(model.latent_variables.z_list) == 4)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_rq_mh():
"""
Tests a GPNARX model estimated with Metropolis-Hastings and that the length of the
latent variable list is correct, and that the estimated latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, kernel=pf.RationalQuadratic())
x = model.fit('M-H',nsims=300)
assert(len(model.latent_variables.z_list) == 4)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_rq_pml():
"""
Tests a GPNARX model estimated with PML (Laplace approximation) and that the length of the
latent variable list is correct, and that the estimated latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, kernel=pf.RationalQuadratic())
x = model.fit('PML')
assert(len(model.latent_variables.z_list) == 4)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_rq_predict_length():
"""
Tests that the prediction dataframe length is equal to the number of steps h
"""
model = pf.GPNARX(data=data, ar=2, kernel=pf.RationalQuadratic())
x = model.fit()
x.summary()
assert(model.predict(h=5).shape[0] == 5)
def test_rq_predict_is_length():
"""
Tests that the prediction IS dataframe length is equal to the number of steps h
"""
model = pf.GPNARX(data=data, ar=2, kernel=pf.RationalQuadratic())
x = model.fit()
assert(model.predict_is(h=5).shape[0] == 5)
def test_rq_predict_nans():
"""
Tests that the predictions are not nans
"""
model = pf.GPNARX(data=data, ar=2, kernel=pf.RationalQuadratic())
x = model.fit()
x.summary()
assert(len(model.predict(h=5).values[np.isnan(model.predict(h=5).values)]) == 0)
def test_rq_predict_is_nans():
"""
Tests that the in-sample predictions are not nans
"""
model = pf.GPNARX(data=data, ar=2, kernel=pf.RationalQuadratic())
x = model.fit()
x.summary()
assert(len(model.predict_is(h=5).values[np.isnan(model.predict_is(h=5).values)]) == 0)
def test_per_couple_terms():
"""
Tests a GPNARX model with 1 AR term and that
the latent variable list length is correct, and that the estimated
latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, kernel=pf.Periodic())
x = model.fit()
assert(len(model.latent_variables.z_list) == 3)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_per_couple_terms_integ():
"""
Tests a GPNARX model with 1 AR term, integrated once, and that
the latent variable list length is correct, and that the estimated
latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, integ=1, kernel=pf.Periodic())
x = model.fit()
assert(len(model.latent_variables.z_list) == 3)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_per_bbvi():
"""
Tests a GPNARX model estimated with BBVI and that the length of the latent variable
list is correct, and that the estimated latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, kernel=pf.Periodic())
x = model.fit('BBVI',iterations=100)
assert(len(model.latent_variables.z_list) == 3)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_per_mh():
"""
Tests a GPNARX model estimated with Metropolis-Hastings and that the length of the
latent variable list is correct, and that the estimated latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, kernel=pf.Periodic())
x = model.fit('M-H',nsims=300)
assert(len(model.latent_variables.z_list) == 3)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_per_pml():
"""
Tests a GPNARX model estimated with PML (Laplace approximation) and that the length of the
latent variable list is correct, and that the estimated latent variables are not nan
"""
model = pf.GPNARX(data=data, ar=1, kernel=pf.Periodic())
x = model.fit('PML')
assert(len(model.latent_variables.z_list) == 3)
lvs = np.array([i.value for i in model.latent_variables.z_list])
assert(len(lvs[np.isnan(lvs)]) == 0)
def test_per_predict_length():
"""
Tests that the prediction dataframe length is equal to the number of steps h
"""
model = pf.GPNARX(data=data, ar=2, kernel=pf.Periodic())
x = model.fit()
x.summary()
assert(model.predict(h=5).shape[0] == 5)
def test_per_predict_is_length():
"""
Tests that the prediction IS dataframe length is equal to the number of steps h
"""
model = pf.GPNARX(data=data, ar=2, kernel=pf.Periodic())
x = model.fit()
assert(model.predict_is(h=5).shape[0] == 5)
def test_per_predict_nans():
"""
Tests that the predictions are not nans
"""
model = pf.GPNARX(data=data, ar=2, kernel=pf.Periodic())
x = model.fit()
x.summary()
assert(len(model.predict(h=5).values[np.isnan(model.predict(h=5).values)]) == 0)
def test_per_predict_is_nans():
"""
Tests that the in-sample predictions are not nans
"""
model = pf.GPNARX(data=data, ar=2, kernel=pf.Periodic())
x = model.fit()
x.summary()
assert(len(model.predict_is(h=5).values[np.isnan(model.predict_is(h=5).values)]) == 0) | 34.635884 | 87 | 0.717224 | 2,222 | 13,127 | 4.151215 | 0.043204 | 0.097572 | 0.043365 | 0.091067 | 0.987858 | 0.987858 | 0.987858 | 0.98634 | 0.98634 | 0.984822 | 0 | 0.015067 | 0.140474 | 13,127 | 379 | 88 | 34.635884 | 0.802446 | 0.317133 | 0 | 0.772277 | 0 | 0 | 0.004645 | 0 | 0 | 0 | 0 | 0 | 0.277228 | 1 | 0.178218 | false | 0 | 0.009901 | 0 | 0.188119 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
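# Editor's note on the pyflux GPNARX test module above: a hedged sketch, not
# part of the original file, of how the kernel-specific duplicates could be
# collapsed with pytest parametrization. The expected latent-variable count is
# passed alongside each kernel because RationalQuadratic carries one extra.
import numpy as np
import pytest
import pyflux as pf

_noise = np.random.normal(0, 1, 40)
_data = np.zeros(40)
for _i in range(1, len(_data)):
    _data[_i] = 0.9 * _data[_i - 1] + _noise[_i]

@pytest.mark.parametrize("kernel_cls,n_lvs", [
    (pf.SquaredExponential, 3),
    (pf.OrnsteinUhlenbeck, 3),
    (pf.RationalQuadratic, 4),
    (pf.Periodic, 3),
])
def test_couple_terms_parametrized(kernel_cls, n_lvs):
    model = pf.GPNARX(data=_data, ar=1, kernel=kernel_cls())
    model.fit()
    assert len(model.latent_variables.z_list) == n_lvs
    lvs = np.array([z.value for z in model.latent_variables.z_list])
    assert len(lvs[np.isnan(lvs)]) == 0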
67d20783914b1275f606050969e654374de6370f | 3,400 | py | Python | app/game2_dic.py | and27/Edubot-Flask-Firebase | 0cd73cf9a72738735a25fcfdf1212a7fdd752265 | [
"MIT"
] | 1 | 2020-08-31T20:40:16.000Z | 2020-08-31T20:40:16.000Z | app/game2_dic.py | and27/Edubot-Flask-Firebase | 0cd73cf9a72738735a25fcfdf1212a7fdd752265 | [
"MIT"
] | null | null | null | app/game2_dic.py | and27/Edubot-Flask-Firebase | 0cd73cf9a72738735a25fcfdf1212a7fdd752265 | [
"MIT"
] | null | null | null | game2_dic = [
{"game_title":"Juego 5 Enteros",
"game_description":"Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
"game_dificulty":"Facil",
"game_image":"game_black.png",
"game_category":"Matematicas",
"rank":4
},
{"game_title":"Juego Líneas Rectas",
"game_description":"Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
"game_dificulty":"Intermedio",
"game_image":"game_black.png",
"game_category":"Matematicas",
"rank":2
},
{"game_title":"Juego Multiplicaciones",
"game_description":"Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
"game_dificulty":"Dificil",
"game_image":"game_black.png",
"game_category":"Matematicas",
"rank":1
},
{"game_title":"Juego 5 Enteros",
"game_description":"Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
"game_dificulty":"Facil",
"game_image":"game_black.png",
"game_category":"Matematicas",
"rank":4
},
{"game_title":"Juego Líneas Rectas",
"game_description":"Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
"game_dificulty":"Intermedio",
"game_image":"game_black.png",
"game_category":"Matematicas",
"rank":2
},
{"game_title":"Juego Multiplicaciones",
"game_description":"Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
"game_dificulty":"Dificil",
"game_image":"game_black.png",
"game_category":"Matematicas",
"rank":1
},
{"game_title":"Juego 5 Enteros",
"game_description":"Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
"game_dificulty":"Facil",
"game_image":"game_black.png",
"game_category":"Matematicas",
"rank":4
},
{"game_title":"Juego Líneas Rectas",
"game_description":"Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
"game_dificulty":"Intermedio",
"game_image":"game_black.png",
"game_category":"Matematicas",
"rank":2
},
{"game_title":"Juego Multiplicaciones",
"game_description":"Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
"game_dificulty":"Dificil",
"game_image":"game_black.png",
"game_category":"Matematicas",
"rank":1
},
{"game_title":"Juego 5 Enteros",
"game_description":"Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
"game_dificulty":"Facil",
"game_image":"game_black.png",
"game_category":"Matematicas",
"rank":4
},
{"game_title":"Juego Líneas Rectas",
"game_description":"Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
"game_dificulty":"Intermedio",
"game_image":"game_black.png",
"game_category":"Matematicas",
"rank":2
},
{"game_title":"Juego Multiplicaciones",
"game_description":"Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
"game_dificulty":"Dificil",
"game_image":"game_black.png",
"game_category":"Matematicas",
"rank":1
},
{"game_title":"Juego 5 Enteros",
"game_description":"Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
"game_dificulty":"Facil",
"game_image":"game_black.png",
"game_category":"Matematicas",
"rank":4
},
{"game_title":"Juego Líneas Rectas",
"game_description":"Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
"game_dificulty":"Intermedio",
"game_image":"game_black.png",
"game_category":"Matematicas",
"rank":2
},
{"game_title":"Juego Multiplicaciones",
"game_description":"Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
"game_dificulty":"Dificil",
"game_image":"game_black.png",
"game_category":"Matematicas",
"rank":1
}
]
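# Editor's note: hedged sketch, not part of the original module. The list above
# repeats the same three entries five times; it could be generated instead of
# hand-written, for example:
#
#   _base_games = game2_dic[:3]
#   game2_dic = [dict(game) for _ in range(5) for game in _base_games]
#
# (Kept as a comment so the module's behaviour is unchanged.)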
| 30.088496 | 79 | 0.723235 | 417 | 3,400 | 5.678657 | 0.076739 | 0.05701 | 0.088682 | 0.158361 | 0.996622 | 0.996622 | 0.996622 | 0.996622 | 0.996622 | 0.996622 | 0 | 0.006894 | 0.104118 | 3,400 | 112 | 80 | 30.357143 | 0.770519 | 0 | 0 | 0.841122 | 0 | 0 | 0.767647 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
db364f94d0174cd93736104a2c8e16f2d9b03885 | 12,850 | py | Python | tests/api/v3_0_0/test_network_device_group.py | CiscoISE/ciscoisesdk | 860b0fc7cc15d0c2a39c64608195a7ab3d5f4885 | [
"MIT"
] | 36 | 2021-05-18T16:24:19.000Z | 2022-03-05T13:44:41.000Z | tests/api/v3_0_0/test_network_device_group.py | CiscoISE/ciscoisesdk | 860b0fc7cc15d0c2a39c64608195a7ab3d5f4885 | [
"MIT"
] | 15 | 2021-06-08T19:03:37.000Z | 2022-02-25T14:47:33.000Z | tests/api/v3_0_0/test_network_device_group.py | CiscoISE/ciscoisesdk | 860b0fc7cc15d0c2a39c64608195a7ab3d5f4885 | [
"MIT"
] | 6 | 2021-06-10T09:32:01.000Z | 2022-01-12T08:34:39.000Z | # -*- coding: utf-8 -*-
"""IdentityServicesEngineAPI network_device_group API fixtures and tests.
Copyright (c) 2021 Cisco and/or its affiliates.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
import pytest
from fastjsonschema.exceptions import JsonSchemaException
from ciscoisesdk.exceptions import MalformedRequest
from ciscoisesdk.exceptions import ciscoisesdkException
from tests.environment import IDENTITY_SERVICES_ENGINE_VERSION
pytestmark = pytest.mark.skipif(IDENTITY_SERVICES_ENGINE_VERSION != '3.0.0', reason='version does not match')
def is_valid_get_network_device_group_by_name(json_schema_validate, obj):
if not obj:
return False
assert hasattr(obj, 'headers')
assert hasattr(obj, 'content')
assert hasattr(obj, 'text')
assert hasattr(obj, 'response')
json_schema_validate('jsd_e1d938f110e059a5abcb9cc8fb3cbd7c_v3_0_0').validate(obj.response)
return True
def get_network_device_group_by_name(api):
endpoint_result = api.network_device_group.get_network_device_group_by_name(
name='string'
)
return endpoint_result
@pytest.mark.network_device_group
def test_get_network_device_group_by_name(api, validator):
try:
assert is_valid_get_network_device_group_by_name(
validator,
get_network_device_group_by_name(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print("ERROR: {error}".format(error=original_e))
raise original_e
def get_network_device_group_by_name_default(api):
endpoint_result = api.network_device_group.get_network_device_group_by_name(
name='string'
)
return endpoint_result
@pytest.mark.network_device_group
def test_get_network_device_group_by_name_default(api, validator):
try:
assert is_valid_get_network_device_group_by_name(
validator,
get_network_device_group_by_name_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_get_network_device_group_by_id(json_schema_validate, obj):
if not obj:
return False
assert hasattr(obj, 'headers')
assert hasattr(obj, 'content')
assert hasattr(obj, 'text')
assert hasattr(obj, 'response')
json_schema_validate('jsd_a0fdb67d95475cd39382171dec96d6c1_v3_0_0').validate(obj.response)
return True
def get_network_device_group_by_id(api):
endpoint_result = api.network_device_group.get_network_device_group_by_id(
id='string'
)
return endpoint_result
@pytest.mark.network_device_group
def test_get_network_device_group_by_id(api, validator):
try:
assert is_valid_get_network_device_group_by_id(
validator,
get_network_device_group_by_id(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print("ERROR: {error}".format(error=original_e))
raise original_e
def get_network_device_group_by_id_default(api):
endpoint_result = api.network_device_group.get_network_device_group_by_id(
id='string'
)
return endpoint_result
@pytest.mark.network_device_group
def test_get_network_device_group_by_id_default(api, validator):
try:
assert is_valid_get_network_device_group_by_id(
validator,
get_network_device_group_by_id_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_update_network_device_group_by_id(json_schema_validate, obj):
if not obj:
return False
assert hasattr(obj, 'headers')
assert hasattr(obj, 'content')
assert hasattr(obj, 'text')
assert hasattr(obj, 'response')
json_schema_validate('jsd_808461e6734850fabb2097fa969948cb_v3_0_0').validate(obj.response)
return True
def update_network_device_group_by_id(api):
endpoint_result = api.network_device_group.update_network_device_group_by_id(
active_validation=False,
description='string',
id='string',
name='string',
othername='string',
payload=None
)
return endpoint_result
@pytest.mark.network_device_group
def test_update_network_device_group_by_id(api, validator):
try:
assert is_valid_update_network_device_group_by_id(
validator,
update_network_device_group_by_id(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print("ERROR: {error}".format(error=original_e))
raise original_e
def update_network_device_group_by_id_default(api):
endpoint_result = api.network_device_group.update_network_device_group_by_id(
active_validation=False,
id='string',
description=None,
name=None,
othername=None,
payload=None
)
return endpoint_result
@pytest.mark.network_device_group
def test_update_network_device_group_by_id_default(api, validator):
try:
assert is_valid_update_network_device_group_by_id(
validator,
update_network_device_group_by_id_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_delete_network_device_group_by_id(json_schema_validate, obj):
if not obj:
return False
assert hasattr(obj, 'headers')
assert hasattr(obj, 'content')
assert hasattr(obj, 'text')
assert hasattr(obj, 'response')
json_schema_validate('jsd_9291975ded6653128f502c97e52cf279_v3_0_0').validate(obj.response)
return True
def delete_network_device_group_by_id(api):
endpoint_result = api.network_device_group.delete_network_device_group_by_id(
id='string'
)
return endpoint_result
@pytest.mark.network_device_group
def test_delete_network_device_group_by_id(api, validator):
try:
assert is_valid_delete_network_device_group_by_id(
validator,
delete_network_device_group_by_id(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print("ERROR: {error}".format(error=original_e))
raise original_e
def delete_network_device_group_by_id_default(api):
endpoint_result = api.network_device_group.delete_network_device_group_by_id(
id='string'
)
return endpoint_result
@pytest.mark.network_device_group
def test_delete_network_device_group_by_id_default(api, validator):
try:
assert is_valid_delete_network_device_group_by_id(
validator,
delete_network_device_group_by_id_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_get_network_device_group(json_schema_validate, obj):
if not obj:
return False
assert hasattr(obj, 'headers')
assert hasattr(obj, 'content')
assert hasattr(obj, 'text')
assert hasattr(obj, 'response')
json_schema_validate('jsd_2a1af553d663556ca429a10ed82effda_v3_0_0').validate(obj.response)
return True
def get_network_device_group(api):
endpoint_result = api.network_device_group.get_network_device_group(
filter='value1,value2',
filter_type='string',
page=0,
size=0,
sortasc='string',
sortdsc='string'
)
return endpoint_result
@pytest.mark.network_device_group
def test_get_network_device_group(api, validator):
try:
assert is_valid_get_network_device_group(
validator,
get_network_device_group(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print("ERROR: {error}".format(error=original_e))
raise original_e
def get_network_device_group_default(api):
endpoint_result = api.network_device_group.get_network_device_group(
filter=None,
filter_type=None,
page=None,
size=None,
sortasc=None,
sortdsc=None
)
return endpoint_result
@pytest.mark.network_device_group
def test_get_network_device_group_default(api, validator):
try:
assert is_valid_get_network_device_group(
validator,
get_network_device_group_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_create_network_device_group(json_schema_validate, obj):
if not obj:
return False
assert hasattr(obj, 'headers')
assert hasattr(obj, 'content')
assert hasattr(obj, 'text')
assert hasattr(obj, 'response')
json_schema_validate('jsd_6c38fb2e2dd45f4dab6ec3a19effd15a_v3_0_0').validate(obj.response)
return True
def create_network_device_group(api):
endpoint_result = api.network_device_group.create_network_device_group(
active_validation=False,
description='string',
name='string',
othername='string',
payload=None
)
return endpoint_result
@pytest.mark.network_device_group
def test_create_network_device_group(api, validator):
try:
assert is_valid_create_network_device_group(
validator,
create_network_device_group(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print("ERROR: {error}".format(error=original_e))
raise original_e
def create_network_device_group_default(api):
endpoint_result = api.network_device_group.create_network_device_group(
active_validation=False,
description=None,
name=None,
othername=None,
payload=None
)
return endpoint_result
@pytest.mark.network_device_group
def test_create_network_device_group_default(api, validator):
try:
assert is_valid_create_network_device_group(
validator,
create_network_device_group_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
def is_valid_get_version(json_schema_validate, obj):
if not obj:
return False
assert hasattr(obj, 'headers')
assert hasattr(obj, 'content')
assert hasattr(obj, 'text')
assert hasattr(obj, 'response')
json_schema_validate('jsd_163f22d64bd4557d856a66ad6599d2d1_v3_0_0').validate(obj.response)
return True
def get_version(api):
endpoint_result = api.network_device_group.get_version(
)
return endpoint_result
@pytest.mark.network_device_group
def test_get_version(api, validator):
try:
assert is_valid_get_version(
validator,
get_version(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest)):
print("ERROR: {error}".format(error=original_e))
raise original_e
def get_version_default(api):
endpoint_result = api.network_device_group.get_version(
)
return endpoint_result
@pytest.mark.network_device_group
def test_get_version_default(api, validator):
try:
assert is_valid_get_version(
validator,
get_version_default(api)
)
except Exception as original_e:
with pytest.raises((JsonSchemaException, MalformedRequest, TypeError)):
raise original_e
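# --- Illustrative sketch (not generated by the test scaffolding above) ---
# Every test in this module follows one pattern: call an endpoint wrapper with
# the shared `api` fixture, then validate the response against its registered
# JSON schema via the matching is_valid_* helper. The function below is a
# hypothetical composition that only reuses names already defined above.
def _example_validate_group_listing(api, validator):
    # Query the network device groups with the same placeholder filters used
    # by get_network_device_group() and return True when the response passes
    # schema validation.
    return is_valid_get_network_device_group(
        validator,
        get_network_device_group(api)
    )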
| 31.418093 | 109 | 0.723035 | 1,594 | 12,850 | 5.475533 | 0.119197 | 0.141499 | 0.195921 | 0.100825 | 0.816567 | 0.813932 | 0.812557 | 0.810724 | 0.802475 | 0.789299 | 0 | 0.016587 | 0.207082 | 12,850 | 408 | 110 | 31.495098 | 0.840024 | 0.090739 | 0 | 0.673077 | 0 | 0 | 0.061917 | 0.025777 | 0 | 0 | 0 | 0 | 0.134615 | 1 | 0.112179 | false | 0 | 0.016026 | 0 | 0.217949 | 0.022436 | 0 | 0 | 0 | null | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e1ea771d1111a8d51d625ab2e15ba2758e7d7a70 | 25,466 | py | Python | contrib/runners/orquesta_runner/tests/unit/test_rerun.py | saucetray/st2 | 8f507d6c8d9483c8371e386fe2b7998596856fd7 | [
"Apache-2.0"
] | 2 | 2021-08-04T01:04:06.000Z | 2021-08-04T01:04:08.000Z | contrib/runners/orquesta_runner/tests/unit/test_rerun.py | saucetray/st2 | 8f507d6c8d9483c8371e386fe2b7998596856fd7 | [
"Apache-2.0"
] | null | null | null | contrib/runners/orquesta_runner/tests/unit/test_rerun.py | saucetray/st2 | 8f507d6c8d9483c8371e386fe2b7998596856fd7 | [
"Apache-2.0"
] | null | null | null | # Copyright 2019 Extreme Networks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
import mock
import st2tests
import st2tests.config as tests_config
tests_config.parse_args()
from local_runner import local_shell_command_runner
from orquesta import statuses as wf_statuses
from st2common.bootstrap import actionsregistrar
from st2common.bootstrap import runnersregistrar
from st2common.constants import action as action_constants
from st2common.models.db import liveaction as lv_db_models
from st2common.persistence import execution as ex_db_access
from st2common.persistence import liveaction as lv_db_access
from st2common.persistence import workflow as wf_db_access
from st2common.services import action as action_service
from st2common.services import executions as execution_service
from st2common.services import workflows as workflow_service
from st2common.transport import liveaction as lv_ac_xport
from st2common.transport import workflow as wf_ex_xport
from st2common.transport import publishers
from st2tests.mocks import liveaction as mock_lv_ac_xport
from st2tests.mocks import workflow as mock_wf_ex_xport
TEST_PACK = 'orquesta_tests'
TEST_PACK_PATH = st2tests.fixturesloader.get_fixtures_packs_base_path() + '/' + TEST_PACK
PACKS = [
TEST_PACK_PATH,
st2tests.fixturesloader.get_fixtures_packs_base_path() + '/core'
]
RUNNER_RESULT_FAILED = (action_constants.LIVEACTION_STATUS_FAILED, {}, {})
RUNNER_RESULT_RUNNING = (action_constants.LIVEACTION_STATUS_RUNNING, {'stdout': '...'}, {})
RUNNER_RESULT_SUCCEEDED = (action_constants.LIVEACTION_STATUS_SUCCEEDED, {'stdout': 'foobar'}, {})
@mock.patch.object(
publishers.CUDPublisher,
'publish_update',
mock.MagicMock(return_value=None))
@mock.patch.object(
lv_ac_xport.LiveActionPublisher,
'publish_create',
mock.MagicMock(side_effect=mock_lv_ac_xport.MockLiveActionPublisher.publish_create))
@mock.patch.object(
lv_ac_xport.LiveActionPublisher,
'publish_state',
mock.MagicMock(side_effect=mock_lv_ac_xport.MockLiveActionPublisher.publish_state))
@mock.patch.object(
wf_ex_xport.WorkflowExecutionPublisher,
'publish_create',
mock.MagicMock(side_effect=mock_wf_ex_xport.MockWorkflowExecutionPublisher.publish_create))
@mock.patch.object(
wf_ex_xport.WorkflowExecutionPublisher,
'publish_state',
mock.MagicMock(side_effect=mock_wf_ex_xport.MockWorkflowExecutionPublisher.publish_state))
class OrquestRunnerTest(st2tests.WorkflowTestCase):
@classmethod
def setUpClass(cls):
super(OrquestRunnerTest, cls).setUpClass()
# Register runners.
runnersregistrar.register_runners()
# Register test pack(s).
actions_registrar = actionsregistrar.ActionsRegistrar(
use_pack_cache=False,
fail_on_failure=True
)
for pack in PACKS:
actions_registrar.register_from_pack(pack)
@mock.patch.object(
local_shell_command_runner.LocalShellCommandRunner, 'run',
mock.MagicMock(side_effect=[RUNNER_RESULT_FAILED, RUNNER_RESULT_SUCCEEDED]))
def test_rerun_workflow(self):
wf_meta = self.get_wf_fixture_meta_data(TEST_PACK_PATH, 'sequential.yaml')
wf_input = {'who': 'Thanos'}
lv_ac_db1 = lv_db_models.LiveActionDB(action=wf_meta['name'], parameters=wf_input)
lv_ac_db1, ac_ex_db1 = action_service.request(lv_ac_db1)
wf_ex_db = wf_db_access.WorkflowExecution.query(action_execution=str(ac_ex_db1.id))[0]
# Process task1.
query_filters = {'workflow_execution': str(wf_ex_db.id), 'task_id': 'task1'}
tk1_ex_db = wf_db_access.TaskExecution.query(**query_filters)[0]
tk1_ac_ex_db = ex_db_access.ActionExecution.query(task_execution=str(tk1_ex_db.id))[0]
tk1_lv_ac_db = lv_db_access.LiveAction.get_by_id(tk1_ac_ex_db.liveaction['id'])
self.assertEqual(tk1_lv_ac_db.status, action_constants.LIVEACTION_STATUS_FAILED)
workflow_service.handle_action_execution_completion(tk1_ac_ex_db)
tk1_ex_db = wf_db_access.TaskExecution.get_by_id(tk1_ex_db.id)
self.assertEqual(tk1_ex_db.status, wf_statuses.FAILED)
# Assert workflow is completed.
wf_ex_db = wf_db_access.WorkflowExecution.get_by_id(wf_ex_db.id)
self.assertEqual(wf_ex_db.status, wf_statuses.FAILED)
lv_ac_db1 = lv_db_access.LiveAction.get_by_id(str(lv_ac_db1.id))
self.assertEqual(lv_ac_db1.status, action_constants.LIVEACTION_STATUS_FAILED)
ac_ex_db1 = ex_db_access.ActionExecution.get_by_id(str(ac_ex_db1.id))
self.assertEqual(ac_ex_db1.status, action_constants.LIVEACTION_STATUS_FAILED)
# Rerun the execution.
context = {
're-run': {
'ref': str(ac_ex_db1.id),
'tasks': ['task1']
}
}
lv_ac_db2 = lv_db_models.LiveActionDB(action=wf_meta['name'], context=context)
lv_ac_db2, ac_ex_db2 = action_service.request(lv_ac_db2)
# Assert the workflow reran ok and is running.
wf_ex_db = wf_db_access.WorkflowExecution.query(action_execution=str(ac_ex_db2.id))[0]
self.assertEqual(wf_ex_db.status, wf_statuses.RUNNING)
lv_ac_db2 = lv_db_access.LiveAction.get_by_id(str(lv_ac_db2.id))
self.assertEqual(lv_ac_db2.status, action_constants.LIVEACTION_STATUS_RUNNING)
ac_ex_db2 = ex_db_access.ActionExecution.get_by_id(str(ac_ex_db2.id))
self.assertEqual(ac_ex_db2.status, action_constants.LIVEACTION_STATUS_RUNNING)
# Process task1 and make sure it succeeds.
query_filters = {'workflow_execution': str(wf_ex_db.id), 'task_id': 'task1'}
tk1_ex_dbs = wf_db_access.TaskExecution.query(**query_filters)
self.assertEqual(len(tk1_ex_dbs), 2)
tk1_ex_dbs = sorted(tk1_ex_dbs, key=lambda x: x.start_timestamp)
tk1_ex_db = tk1_ex_dbs[-1]
tk1_ac_ex_db = ex_db_access.ActionExecution.query(task_execution=str(tk1_ex_db.id))[0]
tk1_lv_ac_db = lv_db_access.LiveAction.get_by_id(tk1_ac_ex_db.liveaction['id'])
self.assertEqual(tk1_lv_ac_db.status, action_constants.LIVEACTION_STATUS_SUCCEEDED)
workflow_service.handle_action_execution_completion(tk1_ac_ex_db)
tk1_ex_db = wf_db_access.TaskExecution.get_by_id(tk1_ex_db.id)
self.assertEqual(tk1_ex_db.status, wf_statuses.SUCCEEDED)
@mock.patch.object(
local_shell_command_runner.LocalShellCommandRunner, 'run',
mock.MagicMock(side_effect=[RUNNER_RESULT_FAILED]))
def test_rerun_with_missing_workflow_execution_id(self):
wf_meta = self.get_wf_fixture_meta_data(TEST_PACK_PATH, 'sequential.yaml')
wf_input = {'who': 'Thanos'}
lv_ac_db1 = lv_db_models.LiveActionDB(action=wf_meta['name'], parameters=wf_input)
lv_ac_db1, ac_ex_db1 = action_service.request(lv_ac_db1)
wf_ex_db = wf_db_access.WorkflowExecution.query(action_execution=str(ac_ex_db1.id))[0]
# Process task1.
query_filters = {'workflow_execution': str(wf_ex_db.id), 'task_id': 'task1'}
tk1_ex_db = wf_db_access.TaskExecution.query(**query_filters)[0]
tk1_ac_ex_db = ex_db_access.ActionExecution.query(task_execution=str(tk1_ex_db.id))[0]
tk1_lv_ac_db = lv_db_access.LiveAction.get_by_id(tk1_ac_ex_db.liveaction['id'])
self.assertEqual(tk1_lv_ac_db.status, action_constants.LIVEACTION_STATUS_FAILED)
workflow_service.handle_action_execution_completion(tk1_ac_ex_db)
tk1_ex_db = wf_db_access.TaskExecution.get_by_id(tk1_ex_db.id)
self.assertEqual(tk1_ex_db.status, wf_statuses.FAILED)
# Assert workflow is completed.
wf_ex_db = wf_db_access.WorkflowExecution.get_by_id(wf_ex_db.id)
self.assertEqual(wf_ex_db.status, wf_statuses.FAILED)
lv_ac_db1 = lv_db_access.LiveAction.get_by_id(str(lv_ac_db1.id))
self.assertEqual(lv_ac_db1.status, action_constants.LIVEACTION_STATUS_FAILED)
ac_ex_db1 = ex_db_access.ActionExecution.get_by_id(str(ac_ex_db1.id))
self.assertEqual(ac_ex_db1.status, action_constants.LIVEACTION_STATUS_FAILED)
# Delete the workflow execution.
wf_db_access.WorkflowExecution.delete(wf_ex_db, publish=False)
# Manually delete the workflow_execution_id from the context of the action execution.
lv_ac_db1.context.pop('workflow_execution')
lv_ac_db1 = lv_db_access.LiveAction.add_or_update(lv_ac_db1, publish=False)
ac_ex_db1 = execution_service.update_execution(lv_ac_db1, publish=False)
# Rerun the execution.
context = {
're-run': {
'ref': str(ac_ex_db1.id),
'tasks': ['task1']
}
}
lv_ac_db2 = lv_db_models.LiveActionDB(action=wf_meta['name'], context=context)
lv_ac_db2, ac_ex_db2 = action_service.request(lv_ac_db2)
expected_error = (
'Unable to rerun workflow execution because '
'workflow_execution_id is not provided.'
)
# Assert the workflow rerun fails.
lv_ac_db2 = lv_db_access.LiveAction.get_by_id(str(lv_ac_db2.id))
self.assertEqual(lv_ac_db2.status, action_constants.LIVEACTION_STATUS_FAILED)
self.assertEqual(expected_error, lv_ac_db2.result['errors'][0]['message'])
ac_ex_db2 = ex_db_access.ActionExecution.get_by_id(str(ac_ex_db2.id))
self.assertEqual(ac_ex_db2.status, action_constants.LIVEACTION_STATUS_FAILED)
self.assertEqual(expected_error, ac_ex_db2.result['errors'][0]['message'])
@mock.patch.object(
local_shell_command_runner.LocalShellCommandRunner, 'run',
mock.MagicMock(side_effect=[RUNNER_RESULT_FAILED]))
def test_rerun_with_invalid_workflow_execution(self):
wf_meta = self.get_wf_fixture_meta_data(TEST_PACK_PATH, 'sequential.yaml')
wf_input = {'who': 'Thanos'}
lv_ac_db1 = lv_db_models.LiveActionDB(action=wf_meta['name'], parameters=wf_input)
lv_ac_db1, ac_ex_db1 = action_service.request(lv_ac_db1)
wf_ex_db = wf_db_access.WorkflowExecution.query(action_execution=str(ac_ex_db1.id))[0]
# Process task1.
query_filters = {'workflow_execution': str(wf_ex_db.id), 'task_id': 'task1'}
tk1_ex_db = wf_db_access.TaskExecution.query(**query_filters)[0]
tk1_ac_ex_db = ex_db_access.ActionExecution.query(task_execution=str(tk1_ex_db.id))[0]
tk1_lv_ac_db = lv_db_access.LiveAction.get_by_id(tk1_ac_ex_db.liveaction['id'])
self.assertEqual(tk1_lv_ac_db.status, action_constants.LIVEACTION_STATUS_FAILED)
workflow_service.handle_action_execution_completion(tk1_ac_ex_db)
tk1_ex_db = wf_db_access.TaskExecution.get_by_id(tk1_ex_db.id)
self.assertEqual(tk1_ex_db.status, wf_statuses.FAILED)
# Assert workflow is completed.
wf_ex_db = wf_db_access.WorkflowExecution.get_by_id(wf_ex_db.id)
self.assertEqual(wf_ex_db.status, wf_statuses.FAILED)
lv_ac_db1 = lv_db_access.LiveAction.get_by_id(str(lv_ac_db1.id))
self.assertEqual(lv_ac_db1.status, action_constants.LIVEACTION_STATUS_FAILED)
ac_ex_db1 = ex_db_access.ActionExecution.get_by_id(str(ac_ex_db1.id))
self.assertEqual(ac_ex_db1.status, action_constants.LIVEACTION_STATUS_FAILED)
# Delete the workflow execution.
wf_db_access.WorkflowExecution.delete(wf_ex_db, publish=False)
# Rerun the execution.
context = {
're-run': {
'ref': str(ac_ex_db1.id),
'tasks': ['task1']
}
}
lv_ac_db2 = lv_db_models.LiveActionDB(action=wf_meta['name'], context=context)
lv_ac_db2, ac_ex_db2 = action_service.request(lv_ac_db2)
expected_error = (
'Unable to rerun workflow execution "%s" because '
'it does not exist.' % str(wf_ex_db.id)
)
# Assert the workflow rerun fails.
lv_ac_db2 = lv_db_access.LiveAction.get_by_id(str(lv_ac_db2.id))
self.assertEqual(lv_ac_db2.status, action_constants.LIVEACTION_STATUS_FAILED)
self.assertEqual(expected_error, lv_ac_db2.result['errors'][0]['message'])
ac_ex_db2 = ex_db_access.ActionExecution.get_by_id(str(ac_ex_db2.id))
self.assertEqual(ac_ex_db2.status, action_constants.LIVEACTION_STATUS_FAILED)
self.assertEqual(expected_error, ac_ex_db2.result['errors'][0]['message'])
@mock.patch.object(
local_shell_command_runner.LocalShellCommandRunner, 'run',
mock.MagicMock(side_effect=[RUNNER_RESULT_RUNNING]))
def test_rerun_workflow_still_running(self):
wf_meta = self.get_wf_fixture_meta_data(TEST_PACK_PATH, 'sequential.yaml')
wf_input = {'who': 'Thanos'}
lv_ac_db1 = lv_db_models.LiveActionDB(action=wf_meta['name'], parameters=wf_input)
lv_ac_db1, ac_ex_db1 = action_service.request(lv_ac_db1)
wf_ex_db = wf_db_access.WorkflowExecution.query(action_execution=str(ac_ex_db1.id))[0]
# Process task1.
query_filters = {'workflow_execution': str(wf_ex_db.id), 'task_id': 'task1'}
tk1_ex_db = wf_db_access.TaskExecution.query(**query_filters)[0]
tk1_ac_ex_db = ex_db_access.ActionExecution.query(task_execution=str(tk1_ex_db.id))[0]
tk1_lv_ac_db = lv_db_access.LiveAction.get_by_id(tk1_ac_ex_db.liveaction['id'])
self.assertEqual(tk1_lv_ac_db.status, action_constants.LIVEACTION_STATUS_RUNNING)
# Assert workflow is still running.
wf_ex_db = wf_db_access.WorkflowExecution.get_by_id(wf_ex_db.id)
self.assertEqual(wf_ex_db.status, wf_statuses.RUNNING)
lv_ac_db1 = lv_db_access.LiveAction.get_by_id(str(lv_ac_db1.id))
self.assertEqual(lv_ac_db1.status, action_constants.LIVEACTION_STATUS_RUNNING)
ac_ex_db1 = ex_db_access.ActionExecution.get_by_id(str(ac_ex_db1.id))
self.assertEqual(ac_ex_db1.status, action_constants.LIVEACTION_STATUS_RUNNING)
# Rerun the execution.
context = {
're-run': {
'ref': str(ac_ex_db1.id),
'tasks': ['task1']
}
}
lv_ac_db2 = lv_db_models.LiveActionDB(action=wf_meta['name'], context=context)
lv_ac_db2, ac_ex_db2 = action_service.request(lv_ac_db2)
expected_error = (
'Unable to rerun workflow execution "%s" because '
'it is not in a completed state.' % str(wf_ex_db.id)
)
# Assert the workflow rerun fails.
lv_ac_db2 = lv_db_access.LiveAction.get_by_id(str(lv_ac_db2.id))
self.assertEqual(lv_ac_db2.status, action_constants.LIVEACTION_STATUS_FAILED)
self.assertEqual(expected_error, lv_ac_db2.result['errors'][0]['message'])
ac_ex_db2 = ex_db_access.ActionExecution.get_by_id(str(ac_ex_db2.id))
self.assertEqual(ac_ex_db2.status, action_constants.LIVEACTION_STATUS_FAILED)
self.assertEqual(expected_error, ac_ex_db2.result['errors'][0]['message'])
@mock.patch.object(
workflow_service, 'request_rerun',
mock.MagicMock(side_effect=Exception('Unexpected.')))
@mock.patch.object(
local_shell_command_runner.LocalShellCommandRunner, 'run',
mock.MagicMock(side_effect=[RUNNER_RESULT_FAILED]))
def test_rerun_with_unexpected_error(self):
wf_meta = self.get_wf_fixture_meta_data(TEST_PACK_PATH, 'sequential.yaml')
wf_input = {'who': 'Thanos'}
lv_ac_db1 = lv_db_models.LiveActionDB(action=wf_meta['name'], parameters=wf_input)
lv_ac_db1, ac_ex_db1 = action_service.request(lv_ac_db1)
wf_ex_db = wf_db_access.WorkflowExecution.query(action_execution=str(ac_ex_db1.id))[0]
# Process task1.
query_filters = {'workflow_execution': str(wf_ex_db.id), 'task_id': 'task1'}
tk1_ex_db = wf_db_access.TaskExecution.query(**query_filters)[0]
tk1_ac_ex_db = ex_db_access.ActionExecution.query(task_execution=str(tk1_ex_db.id))[0]
tk1_lv_ac_db = lv_db_access.LiveAction.get_by_id(tk1_ac_ex_db.liveaction['id'])
self.assertEqual(tk1_lv_ac_db.status, action_constants.LIVEACTION_STATUS_FAILED)
workflow_service.handle_action_execution_completion(tk1_ac_ex_db)
tk1_ex_db = wf_db_access.TaskExecution.get_by_id(tk1_ex_db.id)
self.assertEqual(tk1_ex_db.status, wf_statuses.FAILED)
# Assert workflow is completed.
wf_ex_db = wf_db_access.WorkflowExecution.get_by_id(wf_ex_db.id)
self.assertEqual(wf_ex_db.status, wf_statuses.FAILED)
lv_ac_db1 = lv_db_access.LiveAction.get_by_id(str(lv_ac_db1.id))
self.assertEqual(lv_ac_db1.status, action_constants.LIVEACTION_STATUS_FAILED)
ac_ex_db1 = ex_db_access.ActionExecution.get_by_id(str(ac_ex_db1.id))
self.assertEqual(ac_ex_db1.status, action_constants.LIVEACTION_STATUS_FAILED)
# Delete the workflow execution.
wf_db_access.WorkflowExecution.delete(wf_ex_db, publish=False)
# Rerun the execution.
context = {
're-run': {
'ref': str(ac_ex_db1.id),
'tasks': ['task1']
}
}
lv_ac_db2 = lv_db_models.LiveActionDB(action=wf_meta['name'], context=context)
lv_ac_db2, ac_ex_db2 = action_service.request(lv_ac_db2)
expected_error = 'Unexpected.'
# Assert the workflow rerun fails.
lv_ac_db2 = lv_db_access.LiveAction.get_by_id(str(lv_ac_db2.id))
self.assertEqual(lv_ac_db2.status, action_constants.LIVEACTION_STATUS_FAILED)
self.assertEqual(expected_error, lv_ac_db2.result['errors'][0]['message'])
ac_ex_db2 = ex_db_access.ActionExecution.get_by_id(str(ac_ex_db2.id))
self.assertEqual(ac_ex_db2.status, action_constants.LIVEACTION_STATUS_FAILED)
self.assertEqual(expected_error, ac_ex_db2.result['errors'][0]['message'])
@mock.patch.object(
local_shell_command_runner.LocalShellCommandRunner, 'run',
mock.MagicMock(return_value=RUNNER_RESULT_SUCCEEDED))
def test_rerun_workflow_already_succeeded(self):
wf_meta = self.get_wf_fixture_meta_data(TEST_PACK_PATH, 'sequential.yaml')
wf_input = {'who': 'Thanos'}
lv_ac_db1 = lv_db_models.LiveActionDB(action=wf_meta['name'], parameters=wf_input)
lv_ac_db1, ac_ex_db1 = action_service.request(lv_ac_db1)
wf_ex_db = wf_db_access.WorkflowExecution.query(action_execution=str(ac_ex_db1.id))[0]
# Process task1.
query_filters = {'workflow_execution': str(wf_ex_db.id), 'task_id': 'task1'}
tk1_ex_db = wf_db_access.TaskExecution.query(**query_filters)[0]
tk1_ac_ex_db = ex_db_access.ActionExecution.query(task_execution=str(tk1_ex_db.id))[0]
tk1_lv_ac_db = lv_db_access.LiveAction.get_by_id(tk1_ac_ex_db.liveaction['id'])
self.assertEqual(tk1_lv_ac_db.status, action_constants.LIVEACTION_STATUS_SUCCEEDED)
workflow_service.handle_action_execution_completion(tk1_ac_ex_db)
tk1_ex_db = wf_db_access.TaskExecution.get_by_id(tk1_ex_db.id)
self.assertEqual(tk1_ex_db.status, wf_statuses.SUCCEEDED)
# Process task2.
query_filters = {'workflow_execution': str(wf_ex_db.id), 'task_id': 'task2'}
tk2_ex_db = wf_db_access.TaskExecution.query(**query_filters)[0]
tk2_ac_ex_db = ex_db_access.ActionExecution.query(task_execution=str(tk2_ex_db.id))[0]
tk2_lv_ac_db = lv_db_access.LiveAction.get_by_id(tk2_ac_ex_db.liveaction['id'])
self.assertEqual(tk2_lv_ac_db.status, action_constants.LIVEACTION_STATUS_SUCCEEDED)
workflow_service.handle_action_execution_completion(tk2_ac_ex_db)
tk2_ex_db = wf_db_access.TaskExecution.get_by_id(tk2_ex_db.id)
self.assertEqual(tk2_ex_db.status, wf_statuses.SUCCEEDED)
# Process task3.
query_filters = {'workflow_execution': str(wf_ex_db.id), 'task_id': 'task3'}
tk3_ex_db = wf_db_access.TaskExecution.query(**query_filters)[0]
tk3_ac_ex_db = ex_db_access.ActionExecution.query(task_execution=str(tk3_ex_db.id))[0]
tk3_lv_ac_db = lv_db_access.LiveAction.get_by_id(tk3_ac_ex_db.liveaction['id'])
self.assertEqual(tk3_lv_ac_db.status, action_constants.LIVEACTION_STATUS_SUCCEEDED)
workflow_service.handle_action_execution_completion(tk3_ac_ex_db)
tk3_ex_db = wf_db_access.TaskExecution.get_by_id(tk3_ex_db.id)
self.assertEqual(tk3_ex_db.status, wf_statuses.SUCCEEDED)
# Assert workflow is completed.
wf_ex_db = wf_db_access.WorkflowExecution.get_by_id(wf_ex_db.id)
self.assertEqual(wf_ex_db.status, wf_statuses.SUCCEEDED)
lv_ac_db1 = lv_db_access.LiveAction.get_by_id(str(lv_ac_db1.id))
self.assertEqual(lv_ac_db1.status, action_constants.LIVEACTION_STATUS_SUCCEEDED)
ac_ex_db1 = ex_db_access.ActionExecution.get_by_id(str(ac_ex_db1.id))
self.assertEqual(ac_ex_db1.status, action_constants.LIVEACTION_STATUS_SUCCEEDED)
# Rerun the execution.
context = {
're-run': {
'ref': str(ac_ex_db1.id),
'tasks': ['task1']
}
}
lv_ac_db2 = lv_db_models.LiveActionDB(action=wf_meta['name'], context=context)
lv_ac_db2, ac_ex_db2 = action_service.request(lv_ac_db2)
# Assert the workflow reran ok and is running.
wf_ex_db = wf_db_access.WorkflowExecution.query(action_execution=str(ac_ex_db2.id))[0]
self.assertEqual(wf_ex_db.status, wf_statuses.RUNNING)
lv_ac_db2 = lv_db_access.LiveAction.get_by_id(str(lv_ac_db2.id))
self.assertEqual(lv_ac_db2.status, action_constants.LIVEACTION_STATUS_RUNNING)
ac_ex_db2 = ex_db_access.ActionExecution.get_by_id(str(ac_ex_db2.id))
self.assertEqual(ac_ex_db2.status, action_constants.LIVEACTION_STATUS_RUNNING)
# Assert there are two task1 executions and the last entry succeeded.
query_filters = {'workflow_execution': str(wf_ex_db.id), 'task_id': 'task1'}
tk1_ex_dbs = wf_db_access.TaskExecution.query(**query_filters)
self.assertEqual(len(tk1_ex_dbs), 2)
tk1_ex_dbs = sorted(tk1_ex_dbs, key=lambda x: x.start_timestamp)
tk1_ex_db = tk1_ex_dbs[-1]
tk1_ac_ex_db = ex_db_access.ActionExecution.query(task_execution=str(tk1_ex_db.id))[0]
tk1_lv_ac_db = lv_db_access.LiveAction.get_by_id(tk1_ac_ex_db.liveaction['id'])
self.assertEqual(tk1_lv_ac_db.status, action_constants.LIVEACTION_STATUS_SUCCEEDED)
workflow_service.handle_action_execution_completion(tk1_ac_ex_db)
tk1_ex_db = wf_db_access.TaskExecution.get_by_id(tk1_ex_db.id)
self.assertEqual(tk1_ex_db.status, wf_statuses.SUCCEEDED)
# Assert there are two task2 executions and the last entry succeeded.
query_filters = {'workflow_execution': str(wf_ex_db.id), 'task_id': 'task2'}
tk2_ex_dbs = wf_db_access.TaskExecution.query(**query_filters)
self.assertEqual(len(tk2_ex_dbs), 2)
tk2_ex_dbs = sorted(tk2_ex_dbs, key=lambda x: x.start_timestamp)
tk2_ex_db = tk2_ex_dbs[-1]
tk2_ac_ex_db = ex_db_access.ActionExecution.query(task_execution=str(tk2_ex_db.id))[0]
tk2_lv_ac_db = lv_db_access.LiveAction.get_by_id(tk2_ac_ex_db.liveaction['id'])
self.assertEqual(tk2_lv_ac_db.status, action_constants.LIVEACTION_STATUS_SUCCEEDED)
workflow_service.handle_action_execution_completion(tk2_ac_ex_db)
tk2_ex_db = wf_db_access.TaskExecution.get_by_id(tk2_ex_db.id)
self.assertEqual(tk2_ex_db.status, wf_statuses.SUCCEEDED)
# Assert there are two task3 executions and the last entry succeeded.
query_filters = {'workflow_execution': str(wf_ex_db.id), 'task_id': 'task3'}
tk3_ex_dbs = wf_db_access.TaskExecution.query(**query_filters)
self.assertEqual(len(tk3_ex_dbs), 2)
tk3_ex_dbs = sorted(tk3_ex_dbs, key=lambda x: x.start_timestamp)
tk3_ex_db = tk3_ex_dbs[-1]
tk3_ac_ex_db = ex_db_access.ActionExecution.query(task_execution=str(tk3_ex_db.id))[0]
tk3_lv_ac_db = lv_db_access.LiveAction.get_by_id(tk3_ac_ex_db.liveaction['id'])
self.assertEqual(tk3_lv_ac_db.status, action_constants.LIVEACTION_STATUS_SUCCEEDED)
workflow_service.handle_action_execution_completion(tk3_ac_ex_db)
tk3_ex_db = wf_db_access.TaskExecution.get_by_id(tk3_ex_db.id)
self.assertEqual(tk3_ex_db.status, wf_statuses.SUCCEEDED)
# Assert workflow is completed.
wf_ex_db = wf_db_access.WorkflowExecution.get_by_id(wf_ex_db.id)
self.assertEqual(wf_ex_db.status, wf_statuses.SUCCEEDED)
lv_ac_db1 = lv_db_access.LiveAction.get_by_id(str(lv_ac_db1.id))
self.assertEqual(lv_ac_db1.status, action_constants.LIVEACTION_STATUS_SUCCEEDED)
ac_ex_db1 = ex_db_access.ActionExecution.get_by_id(str(ac_ex_db1.id))
self.assertEqual(ac_ex_db1.status, action_constants.LIVEACTION_STATUS_SUCCEEDED)
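# --- Illustrative sketch (not part of the test cases above) ---
# Every rerun in this module is requested by attaching a 're-run' block to the
# live action context. The helper below is hypothetical; it only restates the
# context shape already used throughout these tests.
def build_rerun_context(ac_ex_id, task_ids):
    # 'ref' points at the completed action execution to rerun and 'tasks'
    # names the workflow tasks to restart from.
    return {
        're-run': {
            'ref': str(ac_ex_id),
            'tasks': list(task_ids)
        }
    }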
| 52.399177 | 98 | 0.730503 | 3,712 | 25,466 | 4.587015 | 0.064386 | 0.038997 | 0.023022 | 0.074646 | 0.86956 | 0.85059 | 0.846127 | 0.836084 | 0.81917 | 0.81917 | 0 | 0.018572 | 0.17329 | 25,466 | 485 | 99 | 52.507216 | 0.790196 | 0.065617 | 0 | 0.739837 | 0 | 0 | 0.050539 | 0.000884 | 0 | 0 | 0 | 0 | 0.189702 | 1 | 0.01897 | false | 0 | 0.056911 | 0 | 0.078591 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e1eeedf8f3eccb94d25c54d612994daa76bd4269 | 7,098 | py | Python | gentelella/app/api/restaurants/restaurant/get.py | Dhvani35729/Trofi-Dashboard | a49b4cbac964e60ea3124e4808ba88ef7256a8fe | [
"MIT"
] | null | null | null | gentelella/app/api/restaurants/restaurant/get.py | Dhvani35729/Trofi-Dashboard | a49b4cbac964e60ea3124e4808ba88ef7256a8fe | [
"MIT"
] | null | null | null | gentelella/app/api/restaurants/restaurant/get.py | Dhvani35729/Trofi-Dashboard | a49b4cbac964e60ea3124e4808ba88ef7256a8fe | [
"MIT"
] | null | null | null | from django.http import JsonResponse
from app.constants import DISCOUNT_INCREMENT
def restaurant_not_found(res_public_id):
return JsonResponse({
"error": {
"code": "RestaurantNotFound",
"id": res_public_id,
"message": "The specified restaurant does not exist",
}
})
def get_restaurant_with_menu(db, res_public_id):
res_hour_ref = db.collection(u'restaurants').document(
res_public_id).get()
res_public_data = res_hour_ref.to_dict()
if res_public_data is None:
return restaurant_not_found(res_public_id)
menu = []
for food_id in res_public_data["menu"]:
food_ref = db.collection(u'foods').document(food_id).get()
food_data = food_ref.to_dict()
toppings_data = []
for topping in food_data["toppings"]:
toppings_data.append({
"key": topping
})
food = {
"key": food_id,
"name": food_data["name"],
"desc": food_data["desc"],
"original_price": food_data["sales_price"],
"toppings": toppings_data
}
menu.append(food)
return JsonResponse({"list": menu})
def get_restaurant_with_menu_for_hour(db, res_public_id, hour_id, active=True):
res_hour_ref = db.collection(u'restaurants').document(
res_public_id).collection(u'hours').document(hour_id).get()
res_hour_data = res_hour_ref.to_dict()
if res_hour_data is None:
return restaurant_not_found(res_public_id)
menu = []
for food_id in res_hour_data["foods_active"]:
food_ref = db.collection(u'foods').document(food_id).get()
food_data = food_ref.to_dict()
current_discount = "0.00"
all_discounts = res_hour_data["discounts"]
max_discount = res_hour_data["max_discount"]
for discount in sorted(all_discounts):
if all_discounts[discount]["is_active"] is True:
current_discount = discount
if float(current_discount.replace("_", ".")) == max_discount:
current_contribution = 0
else:
current_contribution = res_hour_data["contributions"][food_id][current_discount]
toppings_data = []
for topping in food_data["toppings"]:
toppings_data.append({
"key": topping
})
food = {
"key": food_id,
"name": food_data["name"],
"desc": food_data["desc"],
"original_price": food_data["sales_price"],
"tags": food_data["tags"],
"toppings": toppings_data,
"contribution": current_contribution,
}
menu.append(food)
return JsonResponse({"list": menu})
def get_restaurant_with_hours(db, res_public_id, active=True):
res_data = db.collection(u'restaurants').document(res_public_id).get()
all_hours = []
for i in range(24):
all_hours.append({"key": str(i), "data": []})
# print(u'{} => {}'.format(res.id, res.to_dict()))
res_public_data = res_data.to_dict()
if res_public_data is None:
return restaurant_not_found(res_public_id)
hours_ref = db.collection(u'restaurants').document(res_data.id).collection(
u'hours').where(u'start_id', u'>=', res_public_data["opening_hour"]).where(u'start_id', u'<=', res_public_data["closing_hour"])
if active:
hours_ref = hours_ref.where(u'is_active', u'==', True)
hours_ref = hours_ref.get()
for hour in hours_ref:
# print(u'{} => {}'.format(hour.id, hour.to_dict()))
hour_data = hour.to_dict()
current_discount = 0
next_discount = 0
current_contribution = 0
all_discounts = hour_data["discounts"]
max_discount = hour_data["max_discount"]
max_discount_reached = False
for discount in sorted(all_discounts):
if all_discounts[discount]["is_active"] is True:
current_discount = float(discount.replace("_", "."))
current_contribution = all_discounts[discount]["current_contributed"]
if max_discount != current_discount:
next_discount = current_discount + DISCOUNT_INCREMENT
else:
max_discount_reached = True
next_discount = max_discount
break
hour_id = int(hour_data["start_id"])
res_card = {
"hour_id": hour_id,
"key": res_data.id,
"name": res_public_data["name"],
"tags": res_public_data["tags"],
"needed_contribution": hour_data["needed_contribution"],
"max_discount_reached": max_discount_reached,
"current_discount": current_discount,
"next_discount": next_discount,
"current_contribution": current_contribution,
}
all_hours[hour_id]["data"].append(res_card)
return JsonResponse({"list": all_hours})
def get_restaurant_with_hour(db, res_public_id, hour_id, active=True):
res_data = db.collection(u'restaurants').document(res_public_id).get()
all_hours = {"key": str(hour_id), "data": []}
# print(u'{} => {}'.format(res.id, res.to_dict()))
res_public_data = res_data.to_dict()
if res_public_data is None:
return restaurant_not_found(res_public_id)
hours_ref = db.collection(u'restaurants').document(res_data.id).collection(
u'hours').where(u'start_id', u'==', int(hour_id))
if active:
hours_ref = hours_ref.where(u'is_active', u'==', True)
hours_ref = hours_ref.get()
for hour in hours_ref:
# print(u'{} => {}'.format(hour.id, hour.to_dict()))
hour_data = hour.to_dict()
current_discount = 0
next_discount = 0
current_contribution = 0
all_discounts = hour_data["discounts"]
max_discount = hour_data["max_discount"]
max_discount_reached = False
for discount in sorted(all_discounts):
if all_discounts[discount]["is_active"] is True:
current_discount = float(discount.replace("_", "."))
current_contribution = all_discounts[discount]["current_contributed"]
if max_discount != current_discount:
next_discount = current_discount + DISCOUNT_INCREMENT
else:
max_discount_reached = True
next_discount = max_discount
break
hour_id = int(hour_data["start_id"])
res_card = {
"hour_id": hour_id,
"key": res_data.id,
"name": res_public_data["name"],
"tags": res_public_data["tags"],
"needed_contribution": hour_data["needed_contribution"],
"max_discount_reached": max_discount_reached,
"current_discount": current_discount,
"next_discount": next_discount,
"current_contribution": current_contribution,
}
all_hours["data"].append(res_card)
return JsonResponse(all_hours)
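# --- Illustrative sketch (not part of the original module) ---
# One way these helpers could be exposed as a Django view, assuming the
# project initializes firebase_admin elsewhere. The view name and the
# firestore.client() call are assumptions, not taken from this codebase.
def restaurant_hours_view(request, res_public_id):
    from firebase_admin import firestore  # assumed Firestore client source
    # Obtain a Firestore handle and delegate to the helper defined above,
    # returning its JsonResponse unchanged.
    db = firestore.client()
    return get_restaurant_with_hours(db, res_public_id, active=True)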
| 33.63981 | 135 | 0.606791 | 839 | 7,098 | 4.781883 | 0.106079 | 0.060568 | 0.038385 | 0.023928 | 0.831505 | 0.813559 | 0.788883 | 0.788883 | 0.769192 | 0.768445 | 0 | 0.00233 | 0.274444 | 7,098 | 210 | 136 | 33.8 | 0.776699 | 0.028036 | 0 | 0.721519 | 0 | 0 | 0.126342 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.031646 | false | 0 | 0.012658 | 0.006329 | 0.101266 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c009d5512a153a4f3f104399b5daadb4e6883dac | 199 | py | Python | scripts/rpc/subsystem.py | ykirichok/spdk | db7f82baf819740025da0ba271745e89ba682f47 | [
"BSD-3-Clause"
] | null | null | null | scripts/rpc/subsystem.py | ykirichok/spdk | db7f82baf819740025da0ba271745e89ba682f47 | [
"BSD-3-Clause"
] | null | null | null | scripts/rpc/subsystem.py | ykirichok/spdk | db7f82baf819740025da0ba271745e89ba682f47 | [
"BSD-3-Clause"
] | 2 | 2019-01-30T16:18:59.000Z | 2020-05-27T15:41:37.000Z | def get_subsystems(args):
return args.client.call('get_subsystems')
def get_subsystem_config(args):
params = {'name': args.name}
return args.client.call('get_subsystem_config', params)
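# --- Illustrative sketch (not part of the original SPDK helper) ---
# Both helpers only need an object exposing `.client.call(...)` and, for
# get_subsystem_config, a `.name` attribute. The stub below is hypothetical
# and shows the expected call shape without a live SPDK JSON-RPC target.
import argparse


class _EchoClient:
    """Stand-in for the JSON-RPC client; echoes the request it would send."""

    def call(self, method, params=None):
        return {'method': method, 'params': params}


def _example_usage():
    args = argparse.Namespace(client=_EchoClient(), name='nvmf_tgt')
    print(get_subsystems(args))        # {'method': 'get_subsystems', 'params': None}
    print(get_subsystem_config(args))  # {'method': 'get_subsystem_config', 'params': {'name': 'nvmf_tgt'}}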
| 24.875 | 59 | 0.733668 | 27 | 199 | 5.185185 | 0.407407 | 0.085714 | 0.228571 | 0.285714 | 0.328571 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.135678 | 199 | 7 | 60 | 28.428571 | 0.813953 | 0 | 0 | 0 | 0 | 0 | 0.190955 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.4 | false | 0 | 0 | 0.2 | 0.8 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
c04369224424abde7b9fd9bba001f011e74ef3ef | 27,220 | py | Python | sdk/python/pulumi_okta/deprecated/_inputs.py | brinnehlops/pulumi-okta | 798be92b13233d23736016b7ae78f256d5c95c06 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_okta/deprecated/_inputs.py | brinnehlops/pulumi-okta | 798be92b13233d23736016b7ae78f256d5c95c06 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | sdk/python/pulumi_okta/deprecated/_inputs.py | brinnehlops/pulumi-okta | 798be92b13233d23736016b7ae78f256d5c95c06 | [
"ECL-2.0",
"Apache-2.0"
] | null | null | null | # coding=utf-8
# *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
# *** Do not edit by hand unless you're certain you know what you are doing! ***
import warnings
import pulumi
import pulumi.runtime
from typing import Any, Dict, List, Mapping, Optional, Tuple, Union
from .. import _utilities, _tables
__all__ = [
'AuthLoginAppUserArgs',
'BookmarkAppUserArgs',
'MfaPolicyDuoArgs',
'MfaPolicyFidoU2fArgs',
'MfaPolicyFidoWebauthnArgs',
'MfaPolicyGoogleOtpArgs',
'MfaPolicyOktaCallArgs',
'MfaPolicyOktaOtpArgs',
'MfaPolicyOktaPasswordArgs',
'MfaPolicyOktaPushArgs',
'MfaPolicyOktaQuestionArgs',
'MfaPolicyOktaSmsArgs',
'MfaPolicyRsaTokenArgs',
'MfaPolicySymantecVipArgs',
'MfaPolicyYubikeyTokenArgs',
'OauthAppUserArgs',
'SamlAppAttributeStatementArgs',
'SamlAppUserArgs',
'SecurePasswordStoreAppUserArgs',
'SwaAppUserArgs',
'ThreeFieldAppUserArgs',
]
@pulumi.input_type
class AuthLoginAppUserArgs:
def __init__(__self__, *,
id: Optional[pulumi.Input[str]] = None,
password: Optional[pulumi.Input[str]] = None,
scope: Optional[pulumi.Input[str]] = None,
username: Optional[pulumi.Input[str]] = None):
if id is not None:
pulumi.set(__self__, "id", id)
if password is not None:
pulumi.set(__self__, "password", password)
if scope is not None:
pulumi.set(__self__, "scope", scope)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def id(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "id")
@id.setter
def id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "id", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def scope(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "scope")
@scope.setter
def scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "scope", value)
@property
@pulumi.getter
def username(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "username", value)
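# --- Illustrative sketch (not part of the generated input types) ---
# Each *UserArgs class in this module takes plain optional keyword arguments.
# The helper and values below are placeholders, not taken from any real app.
def _example_auth_login_user() -> AuthLoginAppUserArgs:
    # Build an app-user assignment with an assumed username and scope.
    return AuthLoginAppUserArgs(username="jane.doe", scope="USER")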
@pulumi.input_type
class BookmarkAppUserArgs:
def __init__(__self__, *,
id: Optional[pulumi.Input[str]] = None,
password: Optional[pulumi.Input[str]] = None,
scope: Optional[pulumi.Input[str]] = None,
username: Optional[pulumi.Input[str]] = None):
if id is not None:
pulumi.set(__self__, "id", id)
if password is not None:
pulumi.set(__self__, "password", password)
if scope is not None:
pulumi.set(__self__, "scope", scope)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def id(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "id")
@id.setter
def id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "id", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def scope(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "scope")
@scope.setter
def scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "scope", value)
@property
@pulumi.getter
def username(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "username", value)
@pulumi.input_type
class MfaPolicyDuoArgs:
def __init__(__self__, *,
consent_type: Optional[pulumi.Input[str]] = None,
enroll: Optional[pulumi.Input[str]] = None):
if consent_type is not None:
pulumi.set(__self__, "consent_type", consent_type)
if enroll is not None:
pulumi.set(__self__, "enroll", enroll)
@property
@pulumi.getter(name="consentType")
def consent_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "consent_type")
@consent_type.setter
def consent_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "consent_type", value)
@property
@pulumi.getter
def enroll(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "enroll")
@enroll.setter
def enroll(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "enroll", value)
@pulumi.input_type
class MfaPolicyFidoU2fArgs:
def __init__(__self__, *,
consent_type: Optional[pulumi.Input[str]] = None,
enroll: Optional[pulumi.Input[str]] = None):
if consent_type is not None:
pulumi.set(__self__, "consent_type", consent_type)
if enroll is not None:
pulumi.set(__self__, "enroll", enroll)
@property
@pulumi.getter(name="consentType")
def consent_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "consent_type")
@consent_type.setter
def consent_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "consent_type", value)
@property
@pulumi.getter
def enroll(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "enroll")
@enroll.setter
def enroll(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "enroll", value)
@pulumi.input_type
class MfaPolicyFidoWebauthnArgs:
def __init__(__self__, *,
consent_type: Optional[pulumi.Input[str]] = None,
enroll: Optional[pulumi.Input[str]] = None):
if consent_type is not None:
pulumi.set(__self__, "consent_type", consent_type)
if enroll is not None:
pulumi.set(__self__, "enroll", enroll)
@property
@pulumi.getter(name="consentType")
def consent_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "consent_type")
@consent_type.setter
def consent_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "consent_type", value)
@property
@pulumi.getter
def enroll(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "enroll")
@enroll.setter
def enroll(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "enroll", value)
@pulumi.input_type
class MfaPolicyGoogleOtpArgs:
def __init__(__self__, *,
consent_type: Optional[pulumi.Input[str]] = None,
enroll: Optional[pulumi.Input[str]] = None):
if consent_type is not None:
pulumi.set(__self__, "consent_type", consent_type)
if enroll is not None:
pulumi.set(__self__, "enroll", enroll)
@property
@pulumi.getter(name="consentType")
def consent_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "consent_type")
@consent_type.setter
def consent_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "consent_type", value)
@property
@pulumi.getter
def enroll(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "enroll")
@enroll.setter
def enroll(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "enroll", value)
@pulumi.input_type
class MfaPolicyOktaCallArgs:
def __init__(__self__, *,
consent_type: Optional[pulumi.Input[str]] = None,
enroll: Optional[pulumi.Input[str]] = None):
if consent_type is not None:
pulumi.set(__self__, "consent_type", consent_type)
if enroll is not None:
pulumi.set(__self__, "enroll", enroll)
@property
@pulumi.getter(name="consentType")
def consent_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "consent_type")
@consent_type.setter
def consent_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "consent_type", value)
@property
@pulumi.getter
def enroll(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "enroll")
@enroll.setter
def enroll(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "enroll", value)
@pulumi.input_type
class MfaPolicyOktaOtpArgs:
def __init__(__self__, *,
consent_type: Optional[pulumi.Input[str]] = None,
enroll: Optional[pulumi.Input[str]] = None):
if consent_type is not None:
pulumi.set(__self__, "consent_type", consent_type)
if enroll is not None:
pulumi.set(__self__, "enroll", enroll)
@property
@pulumi.getter(name="consentType")
def consent_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "consent_type")
@consent_type.setter
def consent_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "consent_type", value)
@property
@pulumi.getter
def enroll(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "enroll")
@enroll.setter
def enroll(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "enroll", value)
@pulumi.input_type
class MfaPolicyOktaPasswordArgs:
def __init__(__self__, *,
consent_type: Optional[pulumi.Input[str]] = None,
enroll: Optional[pulumi.Input[str]] = None):
if consent_type is not None:
pulumi.set(__self__, "consent_type", consent_type)
if enroll is not None:
pulumi.set(__self__, "enroll", enroll)
@property
@pulumi.getter(name="consentType")
def consent_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "consent_type")
@consent_type.setter
def consent_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "consent_type", value)
@property
@pulumi.getter
def enroll(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "enroll")
@enroll.setter
def enroll(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "enroll", value)
@pulumi.input_type
class MfaPolicyOktaPushArgs:
def __init__(__self__, *,
consent_type: Optional[pulumi.Input[str]] = None,
enroll: Optional[pulumi.Input[str]] = None):
if consent_type is not None:
pulumi.set(__self__, "consent_type", consent_type)
if enroll is not None:
pulumi.set(__self__, "enroll", enroll)
@property
@pulumi.getter(name="consentType")
def consent_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "consent_type")
@consent_type.setter
def consent_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "consent_type", value)
@property
@pulumi.getter
def enroll(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "enroll")
@enroll.setter
def enroll(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "enroll", value)
@pulumi.input_type
class MfaPolicyOktaQuestionArgs:
def __init__(__self__, *,
consent_type: Optional[pulumi.Input[str]] = None,
enroll: Optional[pulumi.Input[str]] = None):
if consent_type is not None:
pulumi.set(__self__, "consent_type", consent_type)
if enroll is not None:
pulumi.set(__self__, "enroll", enroll)
@property
@pulumi.getter(name="consentType")
def consent_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "consent_type")
@consent_type.setter
def consent_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "consent_type", value)
@property
@pulumi.getter
def enroll(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "enroll")
@enroll.setter
def enroll(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "enroll", value)
@pulumi.input_type
class MfaPolicyOktaSmsArgs:
def __init__(__self__, *,
consent_type: Optional[pulumi.Input[str]] = None,
enroll: Optional[pulumi.Input[str]] = None):
if consent_type is not None:
pulumi.set(__self__, "consent_type", consent_type)
if enroll is not None:
pulumi.set(__self__, "enroll", enroll)
@property
@pulumi.getter(name="consentType")
def consent_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "consent_type")
@consent_type.setter
def consent_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "consent_type", value)
@property
@pulumi.getter
def enroll(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "enroll")
@enroll.setter
def enroll(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "enroll", value)
@pulumi.input_type
class MfaPolicyRsaTokenArgs:
def __init__(__self__, *,
consent_type: Optional[pulumi.Input[str]] = None,
enroll: Optional[pulumi.Input[str]] = None):
if consent_type is not None:
pulumi.set(__self__, "consent_type", consent_type)
if enroll is not None:
pulumi.set(__self__, "enroll", enroll)
@property
@pulumi.getter(name="consentType")
def consent_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "consent_type")
@consent_type.setter
def consent_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "consent_type", value)
@property
@pulumi.getter
def enroll(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "enroll")
@enroll.setter
def enroll(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "enroll", value)
@pulumi.input_type
class MfaPolicySymantecVipArgs:
def __init__(__self__, *,
consent_type: Optional[pulumi.Input[str]] = None,
enroll: Optional[pulumi.Input[str]] = None):
if consent_type is not None:
pulumi.set(__self__, "consent_type", consent_type)
if enroll is not None:
pulumi.set(__self__, "enroll", enroll)
@property
@pulumi.getter(name="consentType")
def consent_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "consent_type")
@consent_type.setter
def consent_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "consent_type", value)
@property
@pulumi.getter
def enroll(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "enroll")
@enroll.setter
def enroll(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "enroll", value)
@pulumi.input_type
class MfaPolicyYubikeyTokenArgs:
def __init__(__self__, *,
consent_type: Optional[pulumi.Input[str]] = None,
enroll: Optional[pulumi.Input[str]] = None):
if consent_type is not None:
pulumi.set(__self__, "consent_type", consent_type)
if enroll is not None:
pulumi.set(__self__, "enroll", enroll)
@property
@pulumi.getter(name="consentType")
def consent_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "consent_type")
@consent_type.setter
def consent_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "consent_type", value)
@property
@pulumi.getter
def enroll(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "enroll")
@enroll.setter
def enroll(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "enroll", value)
@pulumi.input_type
class OauthAppUserArgs:
def __init__(__self__, *,
id: Optional[pulumi.Input[str]] = None,
password: Optional[pulumi.Input[str]] = None,
scope: Optional[pulumi.Input[str]] = None,
username: Optional[pulumi.Input[str]] = None):
if id is not None:
pulumi.set(__self__, "id", id)
if password is not None:
pulumi.set(__self__, "password", password)
if scope is not None:
pulumi.set(__self__, "scope", scope)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def id(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "id")
@id.setter
def id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "id", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def scope(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "scope")
@scope.setter
def scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "scope", value)
@property
@pulumi.getter
def username(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "username", value)
@pulumi.input_type
class SamlAppAttributeStatementArgs:
def __init__(__self__, *,
name: pulumi.Input[str],
filter_type: Optional[pulumi.Input[str]] = None,
filter_value: Optional[pulumi.Input[str]] = None,
namespace: Optional[pulumi.Input[str]] = None,
type: Optional[pulumi.Input[str]] = None,
values: Optional[pulumi.Input[List[pulumi.Input[str]]]] = None):
pulumi.set(__self__, "name", name)
if filter_type is not None:
pulumi.set(__self__, "filter_type", filter_type)
if filter_value is not None:
pulumi.set(__self__, "filter_value", filter_value)
if namespace is not None:
pulumi.set(__self__, "namespace", namespace)
if type is not None:
pulumi.set(__self__, "type", type)
if values is not None:
pulumi.set(__self__, "values", values)
@property
@pulumi.getter
def name(self) -> pulumi.Input[str]:
return pulumi.get(self, "name")
@name.setter
def name(self, value: pulumi.Input[str]):
pulumi.set(self, "name", value)
@property
@pulumi.getter(name="filterType")
def filter_type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "filter_type")
@filter_type.setter
def filter_type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "filter_type", value)
@property
@pulumi.getter(name="filterValue")
def filter_value(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "filter_value")
@filter_value.setter
def filter_value(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "filter_value", value)
@property
@pulumi.getter
def namespace(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "namespace")
@namespace.setter
def namespace(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "namespace", value)
@property
@pulumi.getter
def type(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "type")
@type.setter
def type(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "type", value)
@property
@pulumi.getter
def values(self) -> Optional[pulumi.Input[List[pulumi.Input[str]]]]:
return pulumi.get(self, "values")
@values.setter
def values(self, value: Optional[pulumi.Input[List[pulumi.Input[str]]]]):
pulumi.set(self, "values", value)
@pulumi.input_type
class SamlAppUserArgs:
def __init__(__self__, *,
id: Optional[pulumi.Input[str]] = None,
password: Optional[pulumi.Input[str]] = None,
scope: Optional[pulumi.Input[str]] = None,
username: Optional[pulumi.Input[str]] = None):
if id is not None:
pulumi.set(__self__, "id", id)
if password is not None:
pulumi.set(__self__, "password", password)
if scope is not None:
pulumi.set(__self__, "scope", scope)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def id(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "id")
@id.setter
def id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "id", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def scope(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "scope")
@scope.setter
def scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "scope", value)
@property
@pulumi.getter
def username(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "username", value)
@pulumi.input_type
class SecurePasswordStoreAppUserArgs:
def __init__(__self__, *,
id: Optional[pulumi.Input[str]] = None,
password: Optional[pulumi.Input[str]] = None,
scope: Optional[pulumi.Input[str]] = None,
username: Optional[pulumi.Input[str]] = None):
if id is not None:
pulumi.set(__self__, "id", id)
if password is not None:
pulumi.set(__self__, "password", password)
if scope is not None:
pulumi.set(__self__, "scope", scope)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def id(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "id")
@id.setter
def id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "id", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def scope(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "scope")
@scope.setter
def scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "scope", value)
@property
@pulumi.getter
def username(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "username", value)
@pulumi.input_type
class SwaAppUserArgs:
def __init__(__self__, *,
id: Optional[pulumi.Input[str]] = None,
password: Optional[pulumi.Input[str]] = None,
scope: Optional[pulumi.Input[str]] = None,
username: Optional[pulumi.Input[str]] = None):
if id is not None:
pulumi.set(__self__, "id", id)
if password is not None:
pulumi.set(__self__, "password", password)
if scope is not None:
pulumi.set(__self__, "scope", scope)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def id(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "id")
@id.setter
def id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "id", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def scope(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "scope")
@scope.setter
def scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "scope", value)
@property
@pulumi.getter
def username(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "username", value)
@pulumi.input_type
class ThreeFieldAppUserArgs:
def __init__(__self__, *,
id: Optional[pulumi.Input[str]] = None,
password: Optional[pulumi.Input[str]] = None,
scope: Optional[pulumi.Input[str]] = None,
username: Optional[pulumi.Input[str]] = None):
if id is not None:
pulumi.set(__self__, "id", id)
if password is not None:
pulumi.set(__self__, "password", password)
if scope is not None:
pulumi.set(__self__, "scope", scope)
if username is not None:
pulumi.set(__self__, "username", username)
@property
@pulumi.getter
def id(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "id")
@id.setter
def id(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "id", value)
@property
@pulumi.getter
def password(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "password")
@password.setter
def password(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "password", value)
@property
@pulumi.getter
def scope(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "scope")
@scope.setter
def scope(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "scope", value)
@property
@pulumi.getter
def username(self) -> Optional[pulumi.Input[str]]:
return pulumi.get(self, "username")
@username.setter
def username(self, value: Optional[pulumi.Input[str]]):
pulumi.set(self, "username", value)
| 31.688009 | 87 | 0.62421 | 3,166 | 27,220 | 5.186671 | 0.029375 | 0.136654 | 0.153462 | 0.233116 | 0.903234 | 0.886548 | 0.87583 | 0.860666 | 0.860666 | 0.855246 | 0 | 0.000146 | 0.243938 | 27,220 | 858 | 88 | 31.724942 | 0.797765 | 0.006503 | 0 | 0.857971 | 1 | 0 | 0.071606 | 0.010689 | 0 | 0 | 0 | 0 | 0 | 1 | 0.204348 | false | 0.086957 | 0.007246 | 0.086957 | 0.328986 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 11 |
c063024aaa9f43be907db4b10354b0706df231d5 | 17,366 | py | Python | tests/test_bookmarks_parser.py | wllmsash/yget | cb3828c62afc00655d8a3e72987c6c563c437580 | [
"MIT"
] | null | null | null | tests/test_bookmarks_parser.py | wllmsash/yget | cb3828c62afc00655d8a3e72987c6c563c437580 | [
"MIT"
] | null | null | null | tests/test_bookmarks_parser.py | wllmsash/yget | cb3828c62afc00655d8a3e72987c6c563c437580 | [
"MIT"
] | null | null | null | import unittest
from collections import deque
from yget.bookmarks_parser import BookmarksParser
from .mock_input_provider import MockInputProvider
from .mock_logger import MockLogger
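# The fixtures defined in setUpClass below are Netscape-format bookmark exports
# (the HTML format produced by browser "export bookmarks" features). The tests
# drive BookmarksParser's folder-navigation menu with scripted MockInputProvider
# responses and assert both the logged menu text and the extracted YouTube URLs.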
class TestBookmarksParser(unittest.TestCase):
@classmethod
def setUpClass(cls):
cls.empty_bookmarks_file_data = """
<!DOCTYPE NETSCAPE-Bookmark-file-1>
<!-- This is an automatically generated file.
It will be read and overwritten.
DO NOT EDIT! -->
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=UTF-8">
<TITLE>Bookmarks</TITLE>
<H1>Bookmarks</H1>
<DL><p>
</DL><p>
"""
cls.single_level_single_page_bookmarks_file_data = """
<!DOCTYPE NETSCAPE-Bookmark-file-1>
<!-- This is an automatically generated file.
It will be read and overwritten.
DO NOT EDIT! -->
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=UTF-8">
<TITLE>Bookmarks</TITLE>
<H1>Bookmarks</H1>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
<DT><H3>Folder 1</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><H3>Folder 2</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
</DL><p>
"""
cls.single_level_multiple_pages_bookmarks_file_data = """
<!DOCTYPE NETSCAPE-Bookmark-file-1>
<!-- This is an automatically generated file.
It will be read and overwritten.
DO NOT EDIT! -->
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=UTF-8">
<TITLE>Bookmarks</TITLE>
<H1>Bookmarks</H1>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
<DT><H3>Folder 1</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><H3>Folder 2</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
<DT><H3>Folder 3</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
<DT><H3>Folder 4</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
<DT><H3>Folder 5</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
<DT><H3>Folder 6</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
<DT><H3>Folder 7</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
<DT><H3>Folder 8</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
<DT><H3>Folder 9</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
<DT><H3>Folder 10</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
<DT><H3>Folder 11</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
<DT><H3>Folder 12</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
</DL><p>
"""
cls.multiple_level_bookmarks_file_data = """
<!DOCTYPE NETSCAPE-Bookmark-file-1>
<!-- This is an automatically generated file.
It will be read and overwritten.
DO NOT EDIT! -->
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=UTF-8">
<TITLE>Bookmarks</TITLE>
<H1>Bookmarks</H1>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
<DT><H3>Folder 1</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
<DT><H3>Folder 1.1</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
<DT><H3>Folder 1.2</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
</DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><H3>Folder 2</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
</DL><p>
"""
cls.multiple_level_bookmarks_file_data_with_valid_urls = """
<!DOCTYPE NETSCAPE-Bookmark-file-1>
<!-- This is an automatically generated file.
It will be read and overwritten.
DO NOT EDIT! -->
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=UTF-8">
<TITLE>Bookmarks</TITLE>
<H1>Bookmarks</H1>
<DL><p>
<DT><A HREF="https://youtube.com/watch?v=00000000000">Website</A>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
<DT><H3>Folder 1</H3>
<DL><p>
<DT><A HREF="https://youtube.com/watch?v=11111111111">Website</A>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://youtube.com/watch?v=22222222222">Website</A>
<DT><H3>Folder 1.1</H3>
<DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
<DT><H3>Folder 1.2</H3>
<DL><p>
<DT><A HREF="https://youtube.com/watch?v=33333333333&list=0000000000000000000000000000000000">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
</DL><p>
<DT><A HREF="https://website.com">Website</A>
<DT><H3>Folder 2</H3>
<DL><p>
<DT><A HREF="https://youtube.com/watch?list=0000000000000000000000000000000000&v=44444444444">Website</A>
<DT><A HREF="https://website.com">Website</A>
</DL><p>
</DL><p>
"""
def make_bookmarks_parser(self, mock_input_provider=None, mock_logger=None):
if not mock_input_provider:
mock_input_provider = MockInputProvider(lambda x: "", lambda x: "")
if not mock_logger:
mock_logger = MockLogger()
return BookmarksParser(mock_input_provider, mock_logger)
def test_parse_with_invalid_data_returns_failure(self):
make_bookmarks_parser = self.make_bookmarks_parser()
valid, urls = make_bookmarks_parser.parse("<invalid></file>")
self.assertFalse(valid)
self.assertIsNone(urls)
def test_parse_with_exit_returns_no_urls_and_success(self):
responses = deque(["0"])
mock_input_provider = MockInputProvider(lambda x: responses.popleft(), lambda x: "")
mock_logger = MockLogger()
make_bookmarks_parser = self.make_bookmarks_parser(mock_input_provider=mock_input_provider, mock_logger=mock_logger)
valid, urls = make_bookmarks_parser.parse(self.empty_bookmarks_file_data)
expected_line_list = ["Enter: Download links in Root", " 0: Exit"]
self.assertListEqual(mock_logger.write_line_calls, expected_line_list)
self.assertTrue(valid)
self.assertListEqual(urls, [])
def test_parse_root_with_empty_data_returns_no_urls_and_success(self):
responses = deque([""])
mock_input_provider = MockInputProvider(lambda x: responses.popleft(), lambda x: "")
mock_logger = MockLogger()
make_bookmarks_parser = self.make_bookmarks_parser(mock_input_provider=mock_input_provider, mock_logger=mock_logger)
valid, urls = make_bookmarks_parser.parse(self.empty_bookmarks_file_data)
expected_line_list = [
"Enter: Download links in Root",
" 0: Exit"
]
self.assertListEqual(mock_logger.write_line_calls, expected_line_list)
self.assertTrue(valid)
self.assertListEqual(urls, [])
def test_parse_root_with_single_level_single_page_logs_and_returns_correctly(self):
responses = deque(["0"])
mock_input_provider = MockInputProvider(lambda x: responses.popleft(), lambda x: "")
mock_logger = MockLogger()
make_bookmarks_parser = self.make_bookmarks_parser(mock_input_provider=mock_input_provider, mock_logger=mock_logger)
valid, urls = make_bookmarks_parser.parse(self.single_level_single_page_bookmarks_file_data)
expected_line_list = [
"Enter: Download links in Root",
" 1: Move to Folder 1",
" 2: Move to Folder 2",
" 0: Exit"
]
self.assertListEqual(mock_logger.write_line_calls, expected_line_list)
self.assertTrue(valid)
self.assertListEqual(urls, [])
def test_parse_root_with_single_level_multiple_pages_logs_and_returns_correctly(self):
responses = deque(["9", "9", "0"])
mock_input_provider = MockInputProvider(lambda x: responses.popleft(), lambda x: "")
mock_logger = MockLogger()
make_bookmarks_parser = self.make_bookmarks_parser(mock_input_provider=mock_input_provider, mock_logger=mock_logger)
valid, urls = make_bookmarks_parser.parse(self.single_level_multiple_pages_bookmarks_file_data)
expected_line_list = [
"Enter: Download links in Root",
" 1: Move to Folder 1",
" 2: Move to Folder 2",
" 3: Move to Folder 3",
" 4: Move to Folder 4",
" 5: Move to Folder 5",
" 6: Move to Folder 6",
" 7: Move to Folder 7",
" 8: Move to Folder 8",
" 9: Next page",
" 0: Exit",
"Enter: Download links in Root",
" 1: Move to Folder 9",
" 2: Move to Folder 10",
" 3: Move to Folder 11",
" 4: Move to Folder 12",
" 9: Back to first page",
" 0: Exit",
"Enter: Download links in Root",
" 1: Move to Folder 1",
" 2: Move to Folder 2",
" 3: Move to Folder 3",
" 4: Move to Folder 4",
" 5: Move to Folder 5",
" 6: Move to Folder 6",
" 7: Move to Folder 7",
" 8: Move to Folder 8",
" 9: Next page",
" 0: Exit"
]
self.assertListEqual(mock_logger.write_line_calls, expected_line_list)
self.assertTrue(valid)
self.assertListEqual(urls, [])
def test_parse_root_with_multiple_levels_logs_and_returns_correctly(self):
responses = deque(["1", "0", "0"])
mock_input_provider = MockInputProvider(lambda x: responses.popleft(), lambda x: "")
mock_logger = MockLogger()
make_bookmarks_parser = self.make_bookmarks_parser(mock_input_provider=mock_input_provider, mock_logger=mock_logger)
valid, urls = make_bookmarks_parser.parse(self.multiple_level_bookmarks_file_data)
expected_line_list = [
"Enter: Download links in Root",
" 1: Move to Folder 1",
" 2: Move to Folder 2",
" 0: Exit",
"Enter: Download links in Folder 1",
" 1: Move to Folder 1.1",
" 2: Move to Folder 1.2",
" 0: Back to Root",
"Enter: Download links in Root",
" 1: Move to Folder 1",
" 2: Move to Folder 2",
" 0: Exit"
]
self.assertListEqual(mock_logger.write_line_calls, expected_line_list)
self.assertTrue(valid)
self.assertListEqual(urls, [])
def test_parse_root_with_single_level_single_page_and_incorrect_option_logs_and_returns_correctly(self):
responses = deque(["8", "0"])
mock_input_provider = MockInputProvider(lambda x: responses.popleft(), lambda x: "")
mock_logger = MockLogger()
make_bookmarks_parser = self.make_bookmarks_parser(mock_input_provider=mock_input_provider, mock_logger=mock_logger)
valid, urls = make_bookmarks_parser.parse(self.single_level_single_page_bookmarks_file_data)
expected_line_list = [
"Enter: Download links in Root",
" 1: Move to Folder 1",
" 2: Move to Folder 2",
" 0: Exit",
"Option not valid"
]
self.assertListEqual(mock_logger.write_line_calls, expected_line_list)
self.assertTrue(valid)
self.assertListEqual(urls, [])
def test_parse_root_no_youtube_urls_returns_correct_urls(self):
responses = deque([""])
mock_input_provider = MockInputProvider(lambda x: responses.popleft(), lambda x: "")
mock_logger = MockLogger()
make_bookmarks_parser = self.make_bookmarks_parser(mock_input_provider=mock_input_provider, mock_logger=mock_logger)
valid, urls = make_bookmarks_parser.parse(self.multiple_level_bookmarks_file_data)
expected_urls = []
expected_line_list = [
"Enter: Download links in Root",
" 1: Move to Folder 1",
" 2: Move to Folder 2",
" 0: Exit"
]
self.assertListEqual(mock_logger.write_line_calls, expected_line_list)
self.assertTrue(valid)
self.assertListEqual(urls, expected_urls)
def test_parse_root_youtube_urls_returns_correct_urls(self):
responses = deque([""])
mock_input_provider = MockInputProvider(lambda x: responses.popleft(), lambda x: "")
mock_logger = MockLogger()
make_bookmarks_parser = self.make_bookmarks_parser(mock_input_provider=mock_input_provider, mock_logger=mock_logger)
valid, urls = make_bookmarks_parser.parse(self.multiple_level_bookmarks_file_data_with_valid_urls)
expected_urls = [
"https://youtube.com/watch?v=00000000000",
"https://youtube.com/watch?v=11111111111",
"https://youtube.com/watch?v=22222222222",
"https://youtube.com/watch?list=0000000000000000000000000000000000&v=44444444444",
"https://youtube.com/watch?v=33333333333&list=0000000000000000000000000000000000"
]
expected_line_list = [
"Enter: Download links in Root",
" 1: Move to Folder 1",
" 2: Move to Folder 2",
" 0: Exit"
]
self.assertListEqual(mock_logger.write_line_calls, expected_line_list)
self.assertTrue(valid)
self.assertListEqual(urls, expected_urls)
| 40.480186 | 133 | 0.532765 | 2,000 | 17,366 | 4.4495 | 0.068 | 0.020901 | 0.04877 | 0.083605 | 0.926059 | 0.917856 | 0.890887 | 0.862569 | 0.85268 | 0.824475 | 0 | 0.037798 | 0.332719 | 17,366 | 428 | 134 | 40.574766 | 0.730152 | 0 | 0 | 0.784314 | 0 | 0.028011 | 0.565127 | 0.021306 | 0 | 0 | 0 | 0 | 0.072829 | 1 | 0.030812 | false | 0 | 0.014006 | 0 | 0.05042 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
fbfd6f6a9487e1209f369180b1f5e3f1637f0682 | 204 | py | Python | dupescan/fs/__init__.py | yellcorp/dupescan | e89f789396e65509f440b32448d686e01aa43f81 | [
"MIT"
] | null | null | null | dupescan/fs/__init__.py | yellcorp/dupescan | e89f789396e65509f440b32448d686e01aa43f81 | [
"MIT"
] | null | null | null | dupescan/fs/__init__.py | yellcorp/dupescan | e89f789396e65509f440b32448d686e01aa43f81 | [
"MIT"
] | null | null | null | from dupescan.fs._fileentry import FileEntry
from dupescan.fs._fileinstance import FileInstance
from dupescan.fs._root import Root, NO_ROOT
from dupescan.fs._walker import flat_iterator, recurse_iterator
| 40.8 | 63 | 0.862745 | 29 | 204 | 5.827586 | 0.413793 | 0.284024 | 0.331361 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.088235 | 204 | 4 | 64 | 51 | 0.908602 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
220982b8982ea174cd12addf780c83dee5fe4c23 | 13,084 | py | Python | test/test_series.py | ramsdalesteve/forest | 12cac1b3dd93c4475a8a4f696c522576b44f16eb | [
"BSD-3-Clause"
] | null | null | null | test/test_series.py | ramsdalesteve/forest | 12cac1b3dd93c4475a8a4f696c522576b44f16eb | [
"BSD-3-Clause"
] | null | null | null | test/test_series.py | ramsdalesteve/forest | 12cac1b3dd93c4475a8a4f696c522576b44f16eb | [
"BSD-3-Clause"
] | null | null | null | import unittest
import os
import netCDF4
import numpy as np
import numpy.testing as npt
import datetime as dt
from forest import data
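# The helper functions below write small synthetic netCDF files so the tests can
# exercise forest.data.SeriesLoader against the different dimension layouts it must
# handle: a combined "dim0" (time/pressure) axis, surface (time, lat, lon) fields,
# pressure-level fields with a scalar time coordinate, and full 4D
# (time, pressure, lat, lon) variables.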
def variable_dim0(
dataset,
pressures,
times,
longitudes,
latitudes,
values):
dataset.createDimension("latitude", len(latitudes))
dataset.createDimension("longitude", len(longitudes))
dataset.createDimension("dim0", len(pressures))
var = dataset.createVariable(
"longitude", "d", ("longitude",))
var.axis = "X"
var.units = "degrees_east"
var.standard_name = "longitude"
var[:] = longitudes
var = dataset.createVariable(
"latitude", "d", ("latitude",))
var.axis = "Y"
var.units = "degrees_north"
var.standard_name = "latitude"
var[:] = latitudes
var = dataset.createVariable(
"pressure", "d", ("dim0",))
var[:] = pressures
units = "hours since 1970-01-01 00:00:00"
var = dataset.createVariable(
"time", "d", ("dim0",))
var.units = units
var[:] = netCDF4.date2num(times, units=units)
var = dataset.createVariable(
"relative_humidity", "f",
("dim0", "latitude", "longitude"))
var.units = "%"
var.grid_mapping = "latitude_longitude"
var.coordinates = "forecast_period forecast_reference_time pressure time"
var[:] = values
def variable_surface(
dataset,
variable,
times,
longitudes,
latitudes,
values):
dataset.createDimension("latitude", len(latitudes))
dataset.createDimension("longitude", len(longitudes))
dataset.createDimension("time", len(times))
var = dataset.createVariable(
"longitude", "d", ("longitude",))
var.axis = "X"
var.units = "degrees_east"
var.standard_name = "longitude"
var[:] = longitudes
var = dataset.createVariable(
"latitude", "d", ("latitude",))
var.axis = "Y"
var.units = "degrees_north"
var.standard_name = "latitude"
var[:] = latitudes
units = "hours since 1970-01-01 00:00:00"
var = dataset.createVariable(
"time", "d", ("time",))
var.units = units
var[:] = netCDF4.date2num(times, units=units)
var = dataset.createVariable(
variable, "f",
("time", "latitude", "longitude"))
var.units = "Pa"
var.grid_mapping = "latitude_longitude"
var.coordinates = "forecast_period forecast_reference_time"
var[:] = values
def variable_3d_scalar_time(
dataset,
variable,
time,
pressures,
longitudes,
latitudes,
values):
dataset.createDimension("latitude", len(latitudes))
dataset.createDimension("longitude", len(longitudes))
dataset.createDimension("pressure", len(pressures))
var = dataset.createVariable(
"longitude", "d", ("longitude",))
var.axis = "X"
var.units = "degrees_east"
var.standard_name = "longitude"
var[:] = longitudes
var = dataset.createVariable(
"latitude", "d", ("latitude",))
var.axis = "Y"
var.units = "degrees_north"
var.standard_name = "latitude"
var[:] = latitudes
units = "hours since 1970-01-01 00:00:00"
var = dataset.createVariable(
"pressure", "d", ("pressure",))
var[:] = pressures
var = dataset.createVariable(
"time", "d", ())
var.units = units
var[:] = netCDF4.date2num(time, units=units)
var = dataset.createVariable(
variable, "f",
("pressure", "latitude", "longitude"))
var.units = "%"
var.grid_mapping = "latitude_longitude"
var.coordinates = "forecast_period forecast_reference_time time"
var[:] = values
def variable_4d(
dataset,
variable,
times,
pressures,
longitudes,
latitudes,
values):
dataset.createDimension("latitude", len(latitudes))
dataset.createDimension("longitude", len(longitudes))
dataset.createDimension("time_1", len(times))
dataset.createDimension("pressure", len(pressures))
var = dataset.createVariable(
"longitude", "d", ("longitude",))
var.axis = "X"
var.units = "degrees_east"
var.standard_name = "longitude"
var[:] = longitudes
var = dataset.createVariable(
"latitude", "d", ("latitude",))
var.axis = "Y"
var.units = "degrees_north"
var.standard_name = "latitude"
var[:] = latitudes
units = "hours since 1970-01-01 00:00:00"
var = dataset.createVariable(
"pressure", "d", ("pressure",))
var[:] = pressures
var = dataset.createVariable(
"time_1", "d", ("time_1",))
var.units = units
var[:] = netCDF4.date2num(times, units=units)
var = dataset.createVariable(
variable, "f",
("time_1", "pressure", "latitude", "longitude"))
var.units = "K"
var.grid_mapping = "latitude_longitude"
var.coordinates = "forecast_period_1 forecast_reference_time"
var[:] = values
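# TestSeries covers both SeriesLoader.series_file (extracting a point time series
# at a given lon/lat and optional pressure level) and SeriesLocator (grouping
# forecast files by the initial time parsed from their file names).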
class TestSeries(unittest.TestCase):
def setUp(self):
self.path = "test-series.nc"
def tearDown(self):
if os.path.exists(self.path):
os.remove(self.path)
@unittest.skip('awaiting development')
def test_series_given_missing_variable_returns_empty(self):
pressure = 500
lon = 1
lat = 1
p0, p1 = 1000, 500
t0 = dt.datetime(2019, 1, 1)
t1 = dt.datetime(2019, 1, 1, 3)
longitudes = [0, 1]
latitudes = [0, 1]
pressures = [p0, p1, p0, p1]
times = [t0, t0, t1, t1]
values = np.arange(4*2*2).reshape(4, 2, 2)
with netCDF4.Dataset(self.path, "w") as dataset:
variable_dim0(
dataset,
pressures,
times,
longitudes,
latitudes,
values)
loader = data.SeriesLoader([self.path])
variable = "not_in_file"
result = loader.series_file(
self.path, variable, lon, lat, pressure)
expect = {
"x": [],
"y": []
}
npt.assert_array_equal(expect["x"], result["x"])
npt.assert_array_equal(expect["y"], result["y"])
def test_series_given_dim0_variable(self):
variable = "relative_humidity"
pressure = 500
lon = 1
lat = 1
p0, p1 = 1000, 500
t0 = dt.datetime(2019, 1, 1)
t1 = dt.datetime(2019, 1, 1, 3)
longitudes = [0, 1]
latitudes = [0, 1]
pressures = [p0, p1, p0, p1]
times = [t0, t0, t1, t1]
values = np.arange(4*2*2).reshape(4, 2, 2)
with netCDF4.Dataset(self.path, "w") as dataset:
variable_dim0(
dataset,
pressures,
times,
longitudes,
latitudes,
values)
loader = data.SeriesLoader([self.path])
result = loader.series_file(
self.path, variable, lon, lat, pressure)
i, j = 1, 1
expect = {
"x": [t0, t1],
"y": [values[1, j, i], values[3, j, i]]
}
npt.assert_array_equal(expect["x"], result["x"])
npt.assert_array_equal(expect["y"], result["y"])
def test_surface_variable(self):
variable = "air_pressure_at_sea_level"
times = [
dt.datetime(2019, 1, 1),
dt.datetime(2019, 1, 1, 12)]
longitudes = [0, 1, 2]
latitudes = [0, 1, 2]
values = np.arange(2*3*3).reshape(2, 3, 3)
with netCDF4.Dataset(self.path, "w") as dataset:
variable_surface(
dataset,
variable,
times,
longitudes,
latitudes,
values)
lon = 0
lat = 1
loader = data.SeriesLoader([self.path])
result = loader.series_file(
self.path, variable, lon, lat)
expect = {
"x": times,
"y": values[:, 1, 0]
}
npt.assert_array_equal(expect["x"], result["x"])
npt.assert_array_equal(expect["y"], result["y"])
def test_4d_variable(self):
variable = "wet_bulb_potential_temperature"
times = [
dt.datetime(2019, 1, 1),
dt.datetime(2019, 1, 1, 6),
dt.datetime(2019, 1, 1, 12)]
pressures = [
1000.001,
500,
250]
longitudes = [0, 1, 2]
latitudes = [0, 1, 2]
values = np.arange(3*3*3*3).reshape(3, 3, 3, 3)
with netCDF4.Dataset(self.path, "w") as dataset:
variable_4d(
dataset,
variable,
times,
pressures,
longitudes,
latitudes,
values)
lon, lat = 0.1, 0.1
loader = data.SeriesLoader([self.path])
result = loader.series_file(
self.path,
variable,
lon,
lat,
pressure=500)
expect = {
"x": times,
"y": values[:, 1, 0, 0]
}
npt.assert_array_equal(expect["x"], result["x"])
npt.assert_array_equal(expect["y"], result["y"])
def test_3d_variable_scalar_time(self):
variable = "relative_humidity"
time = dt.datetime(2019, 1, 1)
pressures = [
1000.001,
500,
250]
longitudes = [0, 1]
latitudes = [0, 1]
values = np.arange(3*2*2).reshape(3, 2, 2)
with netCDF4.Dataset(self.path, "w") as dataset:
variable_3d_scalar_time(
dataset,
variable,
time,
pressures,
longitudes,
latitudes,
values)
lon, lat = 0.1, 0.1
loader = data.SeriesLoader([self.path])
result = loader.series_file(
self.path,
variable,
lon,
lat,
pressure=500)
expect = {
"x": [time],
"y": [values[1, 0, 0]]
}
npt.assert_array_equal(expect["x"], result["x"])
npt.assert_array_equal(expect["y"], result["y"])
def test_series_locator(self):
paths = [
"/some/file_20190101T0000Z_000.nc",
"/some/file_20190101T0000Z_006.nc",
"/some/file_20190101T0000Z_012.nc",
"/some/file_20190101T1200Z_000.nc",
"/some/file_20190101T1200Z_006.nc",
"/some/file_20190101T1200Z_012.nc",
]
reference_time = dt.datetime(2019, 1, 1, 12)
locator = data.SeriesLocator(paths)
result = locator[reference_time]
expect = [
"/some/file_20190101T1200Z_000.nc",
"/some/file_20190101T1200Z_006.nc",
"/some/file_20190101T1200Z_012.nc",
]
self.assertEqual(expect, result)
def test_series_locator_getitem_given_datetime64(self):
paths = [
"/some/file_20190101T0000Z_000.nc",
"/some/file_20190101T0000Z_006.nc",
"/some/file_20190101T0000Z_012.nc",
"/some/file_20190101T1200Z_000.nc",
"/some/file_20190101T1200Z_006.nc",
"/some/file_20190101T1200Z_012.nc",
]
reference_time = np.datetime64('2019-01-01T12:00:00', 's')
locator = data.SeriesLocator(paths)
result = locator[reference_time]
expect = [
"/some/file_20190101T1200Z_000.nc",
"/some/file_20190101T1200Z_006.nc",
"/some/file_20190101T1200Z_012.nc",
]
self.assertEqual(expect, result)
def test_series_locator_initial_times(self):
paths = [
"/some/file_20190101T0000Z_000.nc",
"/some/file_20190101T0000Z_006.nc",
"/some/file_20190101T0000Z_012.nc",
"/some/file_20190101T1200Z_000.nc",
"/some/file_20190101T1200Z_006.nc",
"/some/file_20190101T1200Z_012.nc",
]
locator = data.SeriesLocator(paths)
result = locator.initial_times()
expect = np.array([
'2019-01-01 00:00',
'2019-01-01 12:00'], dtype='datetime64[s]')
npt.assert_array_equal(expect, result)
def test_pressures_matches_large_pressures(self):
pressures = np.array([1000.001, 1000.01, 1000.1, 950])
result = data.SeriesLoader.search(pressures, 1000)
expect = np.array([True, True, True, False])
npt.assert_array_equal(expect, result)
def test_pressures_matches_small_pressures(self):
pressures = np.array([0.03001, 0.020001, 0.010001])
result = data.SeriesLoader.search(pressures, 0.02)
expect = np.array([False, True, False])
npt.assert_array_equal(expect, result)
| 32.547264 | 77 | 0.542189 | 1,375 | 13,084 | 5.023273 | 0.109818 | 0.027798 | 0.06602 | 0.035761 | 0.860142 | 0.819459 | 0.7989 | 0.789055 | 0.757927 | 0.741856 | 0 | 0.088363 | 0.328799 | 13,084 | 401 | 78 | 32.628429 | 0.698133 | 0 | 0 | 0.757895 | 0 | 0 | 0.152858 | 0.069933 | 0 | 0 | 0 | 0 | 0.039474 | 1 | 0.042105 | false | 0 | 0.018421 | 0 | 0.063158 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
223cec9daa904fc0a2f0bb81e9ca4ebe71fd47f7 | 5,887 | py | Python | echarts.py | wswmjc/pyjob | 3bc76d3f290f7dd015c6278d140086e73e48509d | [
"MIT"
] | null | null | null | echarts.py | wswmjc/pyjob | 3bc76d3f290f7dd015c6278d140086e73e48509d | [
"MIT"
] | null | null | null | echarts.py | wswmjc/pyjob | 3bc76d3f290f7dd015c6278d140086e73e48509d | [
"MIT"
] | null | null | null | from pyecharts import Map, Geo
'''
Map-generation utility module.
Required dependencies:
pip install pyecharts
pyecharts-snapshot
pip install echarts-countries-pypkg
pip install echarts-china-provinces-pypkg
pip install echarts-china-cities-pypkg
pip install echarts-china-counties-pypkg
pip install echarts-china-misc-pypkg
'''
def get_single_product_province_map(map_name:str,data:dict,product_name:str,output_path:str = None,min_range:int = None,max_range:int = None):
'''
Generate a province-level distribution map from the given data.
map_name:str
Map title, e.g. '九阳全国销售分布图'
data:dict
Data as a dict, format: {'北京':12,'河北':33,...} (standard province name + value)
product_name:str
Product name, e.g. '九阳豆浆机'
output_path:str
Output file path; the output is an HTML file, so the path should end in .html, e.g. /jiuyang_product.html. Defaults to .../render.html
min_range:int
Lower bound of the visual range; defaults to the minimum of the input values if not given
max_range:int
Upper bound of the visual range; defaults to the maximum of the input values if not given
'''
attr = data.keys()
value = data.values()
range_min = min_range if min_range is not None else min(value)
range_max = max_range if max_range is not None else max(value)
visual_range = [range_min,range_max]
map = Map(map_name, width=1200, height=600)
map.add(
product_name,
attr,
value,
maptype="china",
visual_range=visual_range,
is_visualmap=True,
is_label_show=True,
visual_text_color="#000",
)
if output_path:
map.render(output_path)
else:
map.render()
def get_multi_product_province_map(map_name:str,datas,product_names,output_path:str = None,min_range:int = None,max_range:int = None):
'''
Generate a province-level distribution map with one series per data set.
map_name:str
Map title, e.g. '九阳全国销售分布图'
datas: list(dict)
Data as a list of dicts, format: [{'北京':12,'河北':33,...},...] (standard province name + value)
product_names: list(str)
List of product names, matched one-to-one with the dicts in datas, e.g. ['九阳豆浆机',...]
output_path:str
Output file path; the output is an HTML file, so the path should end in .html, e.g. /jiuyang_product.html. Defaults to .../render.html
min_range:int
Lower bound of the visual range; defaults to the minimum value across all data sets if not given
max_range:int
Upper bound of the visual range; defaults to the maximum value across all data sets if not given
'''
map = Map(map_name, width=1200, height=600)
min_mounts = [min(data.values()) for data in datas]
max_mounts = [max(data.values()) for data in datas]
range_min = min_range if min_range is not None else min(min_mounts)
range_max = max_range if max_range is not None else max(max_mounts)
visual_range = [range_min,range_max]
for index, item in enumerate(product_names):
data = datas[index]
product_name = item
attr = data.keys()
value = data.values()
map.add(
product_name,
attr,
value,
maptype="china",
visual_range=visual_range,
is_visualmap=True,
is_label_show=True,
visual_text_color="#000",
)
if output_path:
map.render(output_path)
else:
map.render()
def get_single_product_city_map(map_name:str,data:dict,product_name:str,output_path:str = None,min_range:int = None,max_range:int = None):
'''
Generate a city-level distribution map from the given data.
map_name:str
Map title, e.g. '九阳全国销售分布图'
data:dict
Data as a dict, format: {'信阳':12,'杭州':33,...} (standard city name + value)
product_name:str
Product name, e.g. '九阳豆浆机'
output_path:str
Output file path; the output is an HTML file, so the path should end in .html, e.g. /jiuyang_product.html. Defaults to .../render.html
min_range:int
Lower bound of the visual range; defaults to the minimum of the input values if not given
max_range:int
Upper bound of the visual range; defaults to the maximum of the input values if not given
'''
source = [(key,value) for key,value in data.items()]
# Derive the visual range from the data when explicit bounds are not given
# (the original referenced an undefined name `value` here).
range_min = min_range if min_range is not None else min(data.values())
range_max = max_range if max_range is not None else max(data.values())
visual_range = [range_min,range_max]
geo = Geo(
map_name,
"",
title_color="#fff",
title_pos="left",
width=1200,
height=600,
background_color="#404a59",
)
attr, value = geo.cast(source)
geo.add(product_name, attr, value, type="effectScatter",visual_range=visual_range, is_random=True, effect_scale=5)
if output_path:
geo.render(output_path)
else:
geo.render()
def get_multi_product_city_map(map_name:str,datas,product_names,output_path:str = None,min_range:int = None,max_range:int = None):
'''
Generate a city-level distribution map with one series per data set.
map_name:str
Map title, e.g. '九阳全国销售分布图'
datas: list(dict)
Data as a list of dicts, format: [{'北京':12,'河北':33,...},...] (standard city name + value)
product_names: list(str)
List of product names, matched one-to-one with the dicts in datas, e.g. ['九阳豆浆机',...]
output_path:str
Output file path; the output is an HTML file, so the path should end in .html, e.g. /jiuyang_product.html. Defaults to .../render.html
min_range:int
Lower bound of the visual range; defaults to the minimum value across all data sets if not given
max_range:int
Upper bound of the visual range; defaults to the maximum value across all data sets if not given
'''
geo = Geo(
map_name,
"",
title_color="#fff",
title_pos="left",
width=1200,
height=600,
background_color="#404a59",
)
min_mounts = [min(data.values()) for data in datas]
max_mounts = [max(data.values()) for data in datas]
range_min = min_range if min_range is not None else min(min_mounts)
range_max = max_range if max_range is not None else max(max_mounts)
visual_range = [range_min,range_max]
for index, item in enumerate(product_names):
data = datas[index]
product_name = item
source = [(key,value) for key,value in data.items()]
attr, value = geo.cast(source)
geo.add(product_name, attr, value, type="effectScatter", visual_range=visual_range, is_random=True, effect_scale=5)
if output_path:
geo.render(output_path)
else:
geo.render()
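# Example invocation: executed at import time, the call below renders a demo
# multi-series province map (two product series with hard-coded sales figures)
# to the default render.html output, since no output_path is given.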
get_multi_product_province_map('九阳全国销量图',[{'河北':10,'河南':200,'浙江':1000},{'西安':2000,'四川':20,'安徽':200}],['九阳豆浆机','抽油烟机'],max_range=1500,min_range=15) | 34.426901 | 146 | 0.62001 | 775 | 5,887 | 4.505806 | 0.171613 | 0.038946 | 0.02291 | 0.032073 | 0.919817 | 0.876861 | 0.846506 | 0.846506 | 0.827033 | 0.806415 | 0 | 0.019894 | 0.26567 | 5,887 | 171 | 146 | 34.426901 | 0.787879 | 0.24036 | 0 | 0.893617 | 0 | 0 | 0.027441 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.042553 | false | 0 | 0.010638 | 0 | 0.053191 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
225f20fc3a148f7197ce6af56eb30a25df4a415f | 5,820 | py | Python | asq/predicates.py | SlamJam/asq | e6e49a5ace421cb4f84f0bded5dbe5a2108b0cce | [
"MIT"
] | 3 | 2015-03-13T23:02:29.000Z | 2015-07-19T15:29:23.000Z | asq/predicates.py | SlamJam/asq | e6e49a5ace421cb4f84f0bded5dbe5a2108b0cce | [
"MIT"
] | null | null | null | asq/predicates.py | SlamJam/asq | e6e49a5ace421cb4f84f0bded5dbe5a2108b0cce | [
"MIT"
] | 1 | 2020-12-19T07:57:20.000Z | 2020-12-19T07:57:20.000Z | '''Predicate function factories'''
__author__ = 'Robert Smallshire'
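# Illustrative usage (not part of the original module): each factory builds a unary
# predicate, and the combinators defined below compose them, e.g.
#   is_small_positive = and_(gt_(0), lt_(10))
#   is_small_positive(5)   # True
#   is_small_positive(42)  # False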
def eq_(rhs):
'''Create a predicate which tests its argument for equality with a value.
Args:
rhs: (right-hand-side) The value with which the left-hand-side element
will be compared for equality.
Returns:
A unary predicate function which compares its single argument (lhs)
for equality with rhs.
'''
return lambda lhs: lhs == rhs
def ne_(rhs):
'''Create a predicate which tests its argument for inequality with a value.
Args:
rhs: (right-hand-side) The value with which the left-hand-side element
will be compared for inequality.
Returns:
A unary predicate function which compares its single argument (lhs)
for inequality with rhs.
'''
return lambda lhs: lhs != rhs
def lt_(rhs):
'''Create a predicate which performs a less-than comparison of its argument
with a value.
Args:
rhs: (right-hand-side) The value against which the less-than test will
be performed.
Returns:
A unary predicate function which determines whether its single
argument (lhs) is less-than rhs.
'''
return lambda lhs: lhs < rhs
def le_(rhs):
'''Create a predicate which performs a less-than-or-equal comparison of its
argument with a value.
Args:
rhs: (right-hand-side) The value against which the less-than-or-equal
test will be performed.
Returns:
A unary predicate function which determines whether its single
argument (lhs) is less-than-or-equal to rhs.
'''
return lambda lhs: lhs <= rhs
def ge_(rhs):
'''Create a predicate which performs a greater-than-or-equal comparison of
its argument with a value.
Args:
rhs: (right-hand-side) The value against which the greater-than-or-
equal test will be performed.
Returns:
A unary predicate function which determines whether its single
argument (lhs) is greater-than-or-equal to rhs.
'''
return lambda lhs: lhs >= rhs
def gt_(rhs):
'''Create a predicate which performs a greater-than comparison of its
argument with a value.
Args:
rhs: (right-hand-side) The value against which the greater-than test
will be performed.
Returns:
A unary predicate function which determines whether its single
argument (lhs) is greater-than rhs.
'''
return lambda lhs: lhs > rhs
def is_(rhs):
'''Create a predicate which performs an identity comparison of its
argument with a value.
Args:
rhs: (right-hand-side) The value against which the identity test will
be performed.
Returns:
A unary predicate function which determines whether its single
argument (lhs) has the same identity - that is, is the same object -
as rhs.
'''
return lambda lhs: lhs is rhs
def contains_(lhs):
'''Create a unary predicate which tests whether its argument contains a value.
Args:
lhs: (left-hand-side) The value whose membership in the predicate's
argument will be tested.
Returns:
A unary predicate function which determines whether its single
argument (rhs) contains lhs.
'''
return lambda rhs: lhs in rhs
def not_(predicate):
'''A predicate combinator which produces an inverted (negated) predicate.
The predicate returned by this combinator is the logical inverse of the
supplied combinator.
Args:
predicate: A unary predicate function to be inverted.
Returns:
A unary predicate function which is the logical inverse of pred.
'''
return lambda lhs: not predicate(lhs)
def and_(predicate1, predicate2):
'''A predicate combinator which produces a new predicate which is the
logical conjunction of two existing unary predicates.
The predicate returned by this combinator returns True when both of the two
supplied predicates return True, otherwise it returns False.
Args:
predicate1: A unary predicate function.
predicate2: A unary predicate function.
Returns:
A unary predicate function which is the logical conjunction of
predicate1 and predicate2.
'''
return lambda lhs: predicate1(lhs) and predicate2(lhs)
def or_(predicate1, predicate2):
'''A predicate combinator which produces a new predicate which is the
logical disjunction of two existing unary predicates.
The predicate returned by this combinator returns True when either or both
of the two supplied predicates return True, otherwise it returns False.
Args:
predicate1: A unary predicate function.
predicate2: A unary predicate function.
Returns:
A unary predicate function which is the logical disjunction of
predicate1 and predicate2.
'''
return lambda lhs: predicate1(lhs) or predicate2(lhs)
def xor_(predicate1, predicate2):
'''A predicate combinator which produces a new predicate which is the
logical exclusive disjunction of two existing unary predicates.
The predicate returned by this combinator returns True when the two
supplied predicates return different values, otherwise it returns
False.
Args:
predicate1: A unary predicate function.
predicate2: A unary predicate function.
Returns:
A unary predicate function which is the logical exclusive disjunction
of predicate1 and predicate2.
'''
return lambda lhs: predicate1(lhs) != predicate2(lhs)
| 30.3125 | 80 | 0.659622 | 747 | 5,820 | 5.117805 | 0.135208 | 0.088935 | 0.078472 | 0.114308 | 0.835208 | 0.81402 | 0.790479 | 0.78577 | 0.754381 | 0.665446 | 0 | 0.005768 | 0.285052 | 5,820 | 191 | 81 | 30.471204 | 0.913002 | 0.739691 | 0 | 0 | 0 | 0 | 0.020095 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.48 | false | 0 | 0 | 0 | 0.96 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
2273f4075b11f4296e404bd7c548a88293c2ebe0 | 59,096 | py | Python | TGAHzParse.py | Eiim/TGAHz-Parsing | a575270a4df298ac71e3ed7585f0b9b0ca439bee | [
"MIT"
] | 1 | 2021-04-28T19:26:16.000Z | 2021-04-28T19:26:16.000Z | TGAHzParse.py | Eiim/TGAHz-Parsing | a575270a4df298ac71e3ed7585f0b9b0ca439bee | [
"MIT"
] | 1 | 2021-04-28T22:53:04.000Z | 2021-04-28T22:53:04.000Z | TGAHzParse.py | Eiim/TGAHz-Parsing | a575270a4df298ac71e3ed7585f0b9b0ca439bee | [
"MIT"
] | 1 | 2021-04-28T20:21:20.000Z | 2021-04-28T20:21:20.000Z | from PIL import Image
import sys
class colors:
PURPLE = '\033[95m'
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
ENDC = '\033[0m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
def printrgb(rgb):
if(color):
# Print formatted color as RRRRRGGG GGBBBBBA
print(f"{colors.RED}"+rgb[0:5]+f"{colors.GREEN}"+rgb[5:10]+f"{colors.BLUE}"+rgb[11:]+f"{colors.PURPLE}"+rgb[16]+f"{colors.ENDC} ", end='')
else:
print(rgb+" ", end='')
def torgb(b2, b1):
# RRRRRGGG GGBBBBBA
r = int(b2/8)
g = (b2%8)*4 + int(b1/64)
b = int((b1%64)/2)
# Convert to 24-bit color (simple algorithm)
ra = r*8+int(r/4)
ga = g*8+int(g/4)
ba = b*8+int(b/4)
return(r,g,b,ra,ga,ba)
# Log each packet
log = ("-log" in sys.argv)
# Create image
image = not ("-noimage" in sys.argv)
# Disable colored logging
color = ("-logcolor" in sys.argv)
# Create animation
animated = False
# Input data
if ("-txt" in sys.argv):
txtfile = open(sys.argv[-1], "r")
data = bytes.fromhex(txtfile.readline())
txtfile.close()
elif ("-txtanim" in sys.argv):
animated = True
txtfile = open(sys.argv[-1], "r")
animdata = [bytes.fromhex(a) for a in txtfile.readlines()]
txtfile.close()
elif ("-hex" in sys.argv):
data = bytes.fromhex(sys.argv[-1])
elif ("-tga" in sys.argv or "-bin" in sys.argv):
binfile = open(sys.argv[-1], "rb")
data = binfile.read()
binfile.close()
else:
data = bytes.fromhex("03D86B0000000A000000000000000000F00090011020FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF010098010004070811301538133009108B0100030D20133013300F208B010004091011301538113007088A01000103000F28811538010D2001009F0100FF01009701000103000F20820F28010F200508890100010B180F20820F20010B1801008801000105080F20820F28010F200300890100050D180F200F200F280F200B189F0100041B4019401B401B481B40811940811B40851940011B401940841940811B400219401B401940831B40861940811B400219401B401B40831940011B401940861940031B40194019401B40871940041B4019401B4019401738821940811B40861940011B401940811940811738811940011B401940871940811B4093194002173819401940811B40811940011B401940841940011B401940821940011B401940861940811B40821940011B401940811940011B401940841940851B40861940021B4819401B40821940011B401940891940011B401940831940011B401940831940831B40861940011B401940831940041740194015300F200708850100011B481940831B480119401B40821B48831940811B480119401B40811B480119401B40821B48811B40841B48811B40811B480119401B48831B48041B401B481B481B401B48841B40031B481B4019401B48831B40811B48021B4019401B40811B48021B4019401B40861B48041B40194019401B401B48811B40811B48811B40811940011B401B48811B48811B40811B48841B40021B481B401940821B40821B48811940841B40861940831B48051B401B481B481B4019401B40821B48031B4019401B401B48831B40011B481B40871B40811B48811B40021B4819401B40821B48811940811B48821B400119401B40811940011B401B48811B48061B401B481B4019401B401B481B40821B480219401B481B40811B48811B40021B481B401B40861B48021B401B481B40861B480119401B40811B40821940811B48011940173881193802173813280300830100011B481940B81B480119401B48AC1B480119401B48A61B480119481B489C1B480119401B48B11B4882194081173801153003008201008A1B48031D481B481B481D48811B48831D48851B48011D481B48811B48011D501D48821B48011D481B48821B48011D481B48D21B48011D481B48811B48011D481B48B71B48011D481B48811B48021D481B481D508B1B48011D481B48951B48021B401940193881173801132801008101009219400119481940831940021740194019489219400117401940951940011740194095194001194819408D19400117401940841940011740194081194001194819408419400117401940AA19408119488B19400119481940A619400119481940821940811738011530091081010090174001173817408C174001153817408C17400115381740BD174001173817408617408115388617400115381740921740811538D01740811940021738173013288101009A174001173817408217400117381740BB1740811738821740811738921740811540BE17400317381740174017388217400117381740A6174081194004173817301530030001008C174001173817408717400117381740B61740011738174091174001173817408117400117381740AA174001173817409617400117381740B817400619401B40194017381730050801008717400115381740941740011738174090174001173817408517400117381740931740011538174082174001173817409517400117381740B017400117381740A417400317381740174017388117408117389A174081194003173817300708010084174001173817408117400117381740FF1740941740011738174090174001153817408B174003173817401740173892174001173817408717400317381740174017388117400117381740811940031938173809100100CD1740011538174082174001173817408317400117381740A817400117381740A71740011
73817408217400115381740A217400117381740841740011738174083174001173817408217408119400319381738091001009817408117389717400117381740961740021738174015388417400117381740FF17409317408119400319381738091001009817400117381740B0174001173817409A17400117381740F0174001173817408B17408119400319381738091001008B174001173817409217400117381740BB174001173817408717400117381740871740811738B7174001173817408417400117381740A517408115388B174081194003193817380910010087174001173817408517400117381740B217400117381740B4174001173817409C174001173817409717400117381740AF17400217381740174081194081173801091001009717400115381740C41740021738174017408117389017400117381740C817400117381740A7174081194003193817380910010085174001194019488F194881194081194802194019481B48821948011B481948921948011B48194888194801194019488A194802194019481B488F194881194083194881194089194801194019488C1948011940194881194802194019481940871948041B4819481940174019408F194881194085194801194019488819488119408219480119401948811B4895194801194017408F1740041940193817380910010082174003194019481B481B50845D50015F505F58815F58815F50815D50015F505D50845D50025F505D505D50825F50835D50025F505D505D50815F50845D50815F50815D50855F50815D50055F505F585F585F505D505F50815D50035F505D505D505F50815D50815F50025D505F505F58815D50845F50815D50825F50815D50035F505F585F505D50815F50825D50855F50815F58015F505D50835F50815D50815F50025D505F505F58825F50825D50015F585F50825D50015F585F50875D50815F50855D50015F505D50885D50015F505D50865D50815F50015F585F50825D50815F50015D505F50835F50815D50025F505D505D50845F50815D50045F505D505D505F585F50815D50825F50035D505F505F505F58825F50035D501D501B4819408E17408119400319381738091001008117400719401B485B4857400F2809180710030887010081030082010081030801010003088101008103088101008203088601008303088801000103080100D001008A0308010100030882030802010003080100830308B401000503080B1855381B48194819408C1740811940811738010910010005194019481B4859400D2007108501000103000308FF0308C10308010300010086010003050853301B4819488A1740011538173881194081173801091001000319481B485538071081010005030843084310451085108518F8871887C71881871801C718871884871801C718871881871801851887188487180185188718B1871808C718C920451003080100051055381B50194085174001173817408117408117388219400217380910010006194055380710010003084510C718838518018718C71886C718018718C71883C71881871881C718028718C718C71882871886C71883871888C718018718C718D4C71881C72002C728C730C73083C73802C730C728C72081872001C7188718828718AFC71882871808C71809294B310B29030001000B205B4819488B174082194002173809100100065538071003088518C720C7188718FE851802871885188718858518818520038528853885408548818550038548854085388528818520B3851884851007871809298D39451001000508553819488B1740811940031938173809100100050710010087180929C720C7188E8718018518871882871882851883871882851886871883851888871802851887188718CA85180387188518871885188387188187200B853085408750C760C770C578C778C570C5608550854085308187208185180187188518938518028718851887189185180287188518871883851884851002C7184B3109298101000311301B481538173888174006173819401940193817380910010003010043104B31092981C718FF8718018718C71884C718108718872087308740C758C770C58805A905B105A947990789C770C558874087308720B387188185188285100285184B294B31810100030F281B481740173889174081194003173817300910010004010087184B31C720C718FF8718018718092181935214D55A9552D55A514AC71887288738C750C570C5A8C3B843E9C1F981F983F947A90791C570C75087388728B4871801851885108285100109294B31810100014F281D508B17408119400317381730091001000301004B314B29C718FF871881871881C718150929D55AC7184B319352C71887308740C760C598C3B0C1C801D981F9C3F941FAC5F947A90789C76087408730B587180185188510818510010929
4D318101000251281B5019408A17408119400319381738091001000301004B310B29C718FF8718828718168518C720D55AC5188B319352C71887308750C57805A1C3B801F141F943FAC3FAC5FB83FA85C10799C77087488730B5871801851885108185100109294B318101000251301B48194082174001173817408517408119400319381738091001000303088D310929C718FF871882871816851885100F429352D55A4B29C71887388750C580C5A0C3C001F941F945FA89FB07FCC3FA85E107A1C77887508738B5871801851885108185100109298D318101000251301B4819408217400117381740831740071738174019401B4019401738091001000245108D310921FF87188287180CC71885184B290F42514A09298518871887388750C580C5A0C3B88101F90803FA47FB49FC83FAC3F107A1C77887508738B5871801851885108185100609298D310300010051301B4819408117400117381740861740821940021738091001000287188D31C920FF87188287181785180F42D55ACD390F4AD55A0921C71887308750C578C5A0C3B8C1D001F981F903FA83FA43FA45C90799C77087488730B587188185188185100509298D310308010051301B488117400117381538811740011738174082174006173819401940193817380910010002C9208D31C720FF8718828718024B29D55AC718818518120F425352C71887308740C760C590C5B0C3C0C1D001E943FAC1F983F907A9C780C56087408730B587188185188185100509298D310308010051301B488A17400617381940194017381730091001000209294B31C718FF871882871817CF39CF4187188518C7180929D55AC718872887388750C570C598C5B003C903E103F103F905A90791C770875087388728B587188185188185100509298D310308030051301B5083174001173817408317400117381740811940031938173809100100020B294B31C718FF8718828718018B31514A828718098D319352C718872087308740C758C570C590C5A881C5B006C598C788C770C758874087308720B587188185188185100609298D310308030051301B5019408117400117381740831740811738011740194081194002173809100100024B314B29C718FF87188287180EC71893520F4209218D3117634929851887208728873087408550C760C77081C77806C770C56085508740873087288720B587188185188185100109298D318103080151301D508217400115381740861740811940031938173809100100024B314B29C718FF8718828718088518C71811429352514A4B310921C7188718818720030939CD59C540854881855005854809514B51C73087288720B787180485188510851009298D318103080151301D508B1740811940031938173809100100024B310929C718FF8718838718058518C920CD39514AD55ACD398287180B87200931997BD572CD514B498B5151629983115AC7208720B787188185188185100109298D318103080151301B488B1740811940031938173809100100024B310929C718FF871882871806092917635963514A4B31C7188518848718075352DD7B9B739B6B9B73DB739B730929B987188185188185100109298D318103080151301B488B1740811940031938173009100100028D310921C718FF87188287180709219352D55A5352CF390929C718851883871801C7209352829B7301596BCD3981C718B887188185188185100109298D318103080351301B4817401738891740811940031938173809100100028D31C920C718FF87188287180785188510C7188B311763DD7B1142C71883871807C7184B3111425352514ACD390921C720B987188185188185100109298D318103080351301D5017401738831740011738174081174001173817408119408117380109100100018D31C720FF871883871801C7200F4281935203114A8B31C718851884871801C718C72085C718B887188185188185100109298D318103080251301D50194088174001173817408119408117380109100100014B31C720FF8718838718040929596B9B7311420929818518848718018510871883C718818518B987188185188185100109298D318103080151301D508B1740811940031738173009100100014B31C718FF87188387188185180409290F4253529352CD39838718020921CD3985188385100109214B31B987188185188185100109298D318103080151301B488B17408119408117380109100100014B31C718FF871883871806C7188D318D318B318D310F42CD39838718080921996BD55ACD394B298D31514A996B1142B987188185188185100109298D318103080151301B48871740011538173881174082194002173809100100014B29C718FF8718838718064B311763CF4119635352514A9352848718015352DD7B829B7302DB739B730929B987188185188185100109298D318103080351301D5017401
738851740011738174081174082194002173809100100014B29C718FF8718838718068D319352851093524B290929935283871802C718C7209352829B7302596BCD39C718B987188185188185100109298D318103080351301D50174017388917408119408117380109100100010B29C718FF8718838718068B31514AC718114A4B294B31514A84871807C7184B3111425352514ACD390921C720B987188185188185100109298D318103080151301B4889174001173817408119408117380109100100010929C718FF8718838718060929CF39CF390F42CD39514A8D3985871801C718C72084C718B987188185188185100109298D318103080251281B481940821740011738174084174006173819401B401940173809100100010929C718FF87188387180609299352514A17639352955293528A871881C718B987188185188185100109298D318103080251301D5019408A17400519401B401940173809100100010929C718FF87188387180685188510CF39D55AC71809299552838718810929018718C71882C71801C9204B31B987188185188185100109298D318103080251301D5019408A17400519401B401940173809100100010921C718FF871883871806C7189352D55A17630929CD39935283871808C720596B514A4B29C92009218B3117631142B987188185188185100109298D318103080251301D5019408A1740811940811938010910010001C920C718FF87188387180649290F4A8518CF41D55A935A09298387180885100F42DD7B996B57635963DB73996BC920B987188185188185100109298D318103080251301D5019488717400819401740173819401B40194017380910010001C7208718FF871883871806C718851885188718C720C71885188387180885188718114A596B9B73996B155B4B318510B987188185188185100109298D318103080151301D508919408117400519401B48194017380910010001C7188718FF8718848718818518828718018518871884871804851809218B314B29C718818518B987188185188185100109298D318103080351301F501948194086194802194017401948811B48031B4019380B18030001C7188718FF87188E871881C7188185188185100187188518BA87188185188185100109298D318103080251281B4817408115380217381538173881174009173815381538173817401940173817300910010001C7188718FF87188E871881C7180187188518818518BC87188185188185100109298D318103080151281B48811740011738153881173801174017388117408117380517401940173815300710010001C7188718FF87188F871801C71887188487180185188718B987180485188510851009298D318103080551281B48173817401738153886173806174019401940173815300910010001C7188718FF8718D187188185188185100109298D318103080151281B488517400217381740174081173802174019401940811738010910010001C7188718FF8718D187188185188185100109298D318103080251281D5019408A174081194003173817300910010001C7188718AA8718FF85109585109087188185188185100109298D318103080151281D508B174081194003193817380910010001C7188718A987180185100100FF010094010001851087188E87188185188185100109298D318103080151281B50851740011738174082174006173819401940193817380910010001C7188718A887180185100100FF010096010001851087188D87188185188185100109298D318103080251281D5019408B1740041940193817380910010001C7188718A787180185100100FF010098010001851087188C87188185188185100109298D318103080451281D50174017381130811D5001153817408417400519401B401938153009100100A987180185100100FF010098010001851087188C87188185188185100109298D318103080551281B501740173811301B48811D508517408119408117380109100100A987180185100100FF010098010001851087188C87188185188185100109298D318103080451281D50194017401130821B48011F501B50811740021738174019408217380109100100A987180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080651281D5019401740113019481B488119480421601F581740173817408119408117380109100100A987180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151301D508117400111301B48811B48811B508317400519401B401938173809100100A987180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B508117400511301B481D501F5819481538831740811940031938173809100100A98718018510
0100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151301B508117400213301F581F58851740061738194019401738153009100100A987180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080451281B50194017401130871740021738194019408117380109100100A987180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080551281B481940174013301538871740811940031938173809100100A987180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B48811740011538174087174081194081173801091001008D871802C71809290F3A82934A03CF3909298718C71881871801C71887188D87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251281D5019408917400217381940194081173801091001008C871803C7181142DB6B5D74811D74811F74029B6B514AC7189187180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B5084174081153884174081194081173801091001008C87180111421F7C811D6C021F741D741D74811D6C025F74D55AC7189087180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B508217408115380319401B481740153882174081194081173801091001008B87180D4B31DD731D6C1D74155BCF398B31CF39155B1D741B6C1F749352C7188F87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080A51301D5019401740133819481D5017401B482160194082174081194081173801091001008B87180593525F741D749352C718C72082C718040F42DD731D6C1D748D398F87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080A51281D5019401740153823601B4811300F281B501F588117380217401940194081173801091001008B87180359631D74DB73092982C71807C720C7188718114A1F741D6C996BC9208E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B50811740061B481F5815381740133011301F5882174081194081173801091001008B871807DB635D745763C7188D39D552CF39C72081C71804C72059631D6C1D74CD318E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B50811740031948216015381740811330011F58174081174081194081173801091001008B871808DB6B1F74596B87181142A17C5F740929871881C71803114A1D741D74D34A8E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080A51281D5019401740133023601B4813380F281B501D5082174081194081173801091001008B87180759631F74DD73D552175B1F741D74092981871804C7188B311D741D6C575B8E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080651281D5017401738113019401D50811338021B48153817388117408119400319381738091001008B871807514A5D741D6C5F741D741D6C1D74092981871804C7180929DB6B1D6C99638E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B488117400515381130194819401B481D5082174006173819401B4019381738091001008B871807CF411D741D741F745F741F745F744B2981871804C7180929DB6B1D6C97638E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151301B508117400615381F581B48133011301F581D508217400519401B4019381738091001008B87180299635F74DD6B81D35202575BDB6B4B2981871801C7188B31811D7401155387188D87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251301D501940811740051F58153817380F2813381F588217400119401B4081173801091001008B87180699635F741D6CDB6BD34A8B31C92083C718038D39DB6B1D744F428E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080551281D501940174019481F5881174002133811281D5881174006173819401B4019381738091001008B87180953521D745F741D745F741F74175B8D39C920C71881C72002C9200929C7208E87180185100100830100FFADB58FADB584010001851087188C8718818518818
5100109298D318103080A51281D50194017401538236017401538113017401F5881174006173819401B4019381738091001008B87180AC7184B319552DB735D741D6C1F745F7459631142092983C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B48811740020F2819481D508115380123601B508217408119400319381738091001008B871882C718030929175B1F741D6C811D74035F74DB7393524B2981C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151301B5081174001153811308117400119481B4883174081194081173801091001008F87180393525F741D6CDB6B811D74041D6C5F74DD73935209298E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251281D501940811740811B480119481B50811D508217408119400319381738091001008E871801C718534A815F7403CD314B29595B1D74811D6C015F74514A8E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080551281D5019401740173817408115380219401F581D508217408119400319381738091001008B871881C71807C720C920D5525F741D74595B575BDB6B821D6C015F7451428E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080351281B4819401740811538021B481948194884174081194081173801091001008B8718048D3995525963DD6B1F74811D6C011D741F74821D6C021F749F7C914A8E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080951281B481940174011301D50236019481330173883174081194081173801091001008B8718029B639F7C5F74811D74011F745F74815F74051F74DB6B5963D55A514A4B298E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B50811740040F28112815381B48194883174006173819401B4019381738091001008B87180159631D74821D6C099B6B176393520F428D310929C720C7188718C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B5081174006153813381330153817401D501F5882174081194081173801091001008B871805595B1F741D7499630D3A092981871881C71801C720C71881C7188F87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308044F281B50194017401538841B48011B50174081174081194081173801091001008B871808596B5F741D6C1D741D6C5963514A4B31C72082C7189187180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308024F281D501940811740811B480119481B488119488117400617381940194019381738091001008B8718019352DD73815F74811D74055F741F745963534A8D31C9209187180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B508217400119481940811740021D502360174081173881194081173801091001008B871804C7180929CF41D55ADB73815F74811D74055F741F749B6B534A4B29C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B508217400817381538153819401D501F5817401738174081194081173801091001008B871881C72081C718030921CD31D352DB63821D6C031D745F745F740F3A8E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080351281D5019401740811538811B48021940153815388217408119400319381738091001008B871882C71805C72009298B31113A595B1D74821D6C021B6C5F7451428E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080A51281B50194017400F281D5023601B481338133017388117400617381940194019381730071001008B8718044B291142D55259631D6C815F74011F741D6C811F74815F74015D740F428E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080651281D5019401740133011301538811B50011940174082174081194081173801091001008B87180359639F7C5F745D74811D74811D6C061D749B6B17639352CF414B29C7208E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251281D5019408217408115380217401F5825688217
408119400317381530091001008B871802175B9F7C5F74821D6C031F74D5524929C72084C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251281D501940811740051D501B4819481B481B501F588117400217381940194081173801091001008B8718020921CD391763815F74821D6C02175BCF39C92083C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B508117400113301D508717408119400319381738091001008B871807C720C718C7208B31D5521D6C5F741F74815F74029B6B534A09299087180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F281B488117400411301F5819481740153884174081194081173801091001008B871881C72081C7180387184929914ADB63821D6C035F74DB6B514A09298E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251281B5019408A174081194081173801091001008B871809C920092109290B294B318B298B310F3A99631F6C821D6C019F7C934A8E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251301D5019408A174081194081173801091001008B871802D75A1D6C1D6C811D74825F74011F6C1D6C821D6C015F74914A8E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B508B17400519401B4019381738091001008B87180159639F7C8A5F74019F7C914A8E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251281D5019408A17408119400319381738091001008B871802D55A9963595B81575B04175B155BD55AD552935281514A020F42CF3909298E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281D508B17408119400319381738091001008B8718085352D5525142CD314929C720C7188718851881871883C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281D508B17408119400319381738091001008B87180159639F7C815F74091D6C9B6B5963955251428D394B290929C720C7188F87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B48821740011738174086174081194081173801091001008B87180257635F741D6C811D74011D6C5F74825F74041D6CDB6B5963D55A8B318E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308064F281B501940174017381538173886174081194081173801091001008B87180459635F741D6CDB63DB6B811D6C031D741D6C5F745D74811D74019F7C934A8E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F281B508B174081194081173801091001008B87180E57635F745F748D39C720CF395D741D741D6C1763596BDB731D6C5D74934A8E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B488B174081194081173801091001008B87180B17635F745F748D3187184B311D745D74DB6BC920C718514A815F7401914A87188D87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151301D508C174004194019381738091001008B87180B17635F745D748D3187184B291D6C5D74DB6B092987180F42815F7401914A87188D87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251301D5019408A1740011738194081173801091001008B8718055963A17C9F748D3187184B29815F7403DB6B092987180F42815F7401514287188D87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251301D50194085174001173817408217408119400317381530091001008B87180BCD39D55259634B31C7180929175B9B6BDB630921C7180F42815F7401914A87188D87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080351301D50174017388917408119400317381730091001008B871882C71882C72008C71809210929C720C7180F429F7CA17C934A8E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151301D50881740021738174017408119400319381738091001008B871888C71881C720030
9298D310F428D318E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080451301D501940174017388517400215381738174082194002173809100100A987180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080451281B501940174017388817408119400319381738091001008B871801C720C7189B87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B508B17408119400317381730091001008B871806D552996BD55211428B310929C92081C7189487180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B488B174081194081173801091001008B871801175B5F74835F74061D6CDB63175B9352CF394B31092981C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151301B488B17408119400319381738091001008B871802D75A5F741D6C825F74011D6C1D74835F74021D6CDB6BCF398E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B508C174004194019381530091001008B871805175B5F741D6C514A934A575B821D74815F74031D741D6C5F74914A8E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251281D5019408A17408119400319381738091001008B87180E175B5F741D6CC920851809291D745F749B6BCF41514A99631D745F7451428E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080451301D501940174017388817408119400319381738091001008B87180B175B5F741D6C0929C7180B291D6C5F745963C71885101142815F7401914A87188D87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151301B488B17408119400319381738091001008B87180B175B5F741D740B2985180B291D745F74996BC72087185142815F7401914A87188D87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151301B488A17400617381740194019381738091001008B87180E93525F745F74D3520721934A5F741D6C5F740F3A0921175B5D745F74914A8E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251281D5019408117408117388617400519401B4019401738091001008B87180E4B291D745D745F741D6C1F741D6C5D741D745F741D6C1D741D6C1F7CCD398E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251281B4819408A17408119400319381738091001008C8718080F425F7C5F741D741F745F7C9B6B1D745F74811D6C025F745763C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251281B5019408A174081194081173801091001008B87180EC7188718CD39596BDD73DB63514AC718CD39DB6B5F741D7417630929C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251281B481940831740011738174084174081194081173801091001008B871881C718038718C7188D310F3A81CD39034B31092149294B2982C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B488B17408119400319381738091001008B871804C71887184B31175BDD73825F74035D74DB635142C92081C7188F87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151301D508B174081194081173801091001008C871801CF411D7C855F74035D745F749B6B4B319087180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151301D508B174081194081173801091001008B8718094B315F745D745F749B6B9552514A9552996B5F74815D7401DD734B298F87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251281D5019408917400615381940194019381738091001008B87180417535F745D749352C72082871806C7208D3199635D741F745963C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151301D508A17400615381738194019381738091001008B871802DB635F749B6B81C71882871806C7188718072199631D741D7C8D318E87180185100100
830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151301B508B17400519401B4019381738091001008B8718039B6B5F74596BC71884871805C71885188D315D745F74914A8E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B5083174001173817408517408119400319381738091001008B87180359635F741D7C8B3184871881C7180309291D6C5F74D34A8E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151281B5083174001173817408417400617381940194019381738091001008B87180411429F741D74DB6B4B3183871805C7188518514A1F745F7451428E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251281B5019408A17400519401B4019401738091001008B871802C72059635F74811D7401D5528B31810929054B3193521D741D6C1D7449298E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251301B5019408A17408119400319381738091001008C87180309299B6B5F741D6C815F74811D6C051D745D741D6C5F740F3A85108E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080251281D5019408A17408119400319381738091001008B871804C71887180921D55A1D74815F74011F741D74815F7403DB6B0F3A4B29CB318E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508B174081194081173801091001008B871882C7180B87184B29914A1763596B996B155351420F3AD5525F7415538E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508B174081194081173801091001008B871881092909C720C7188518831085188718C920CD31175B1D74815F7401D55287188D87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508A17400617381940194019381738091001008B871809D55A1F749B6B1763935251428D31CF4159635F74811D74025F74DB73CF418E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508A17400617381940194019381738091001008B871802175B617C1F74845F74051D741D6C1D7459630F4209218F87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508A17400217381940194081173801091001008B871804CF415763996B1D6C1F74815F74811D6C035D7493528718851881C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508A174006173819401B4019381738091001008B871881C7180CC72009298B31CF41D55A1D7C1F741D74DD73534AC7208718C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508B17408119400319381738091001008B871883C718828718068D3999635F741D745D7499638D318F87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B48831740011940174085174081194081173801091001008E871883C718038718C720514ADD73811D74011D7C0F428E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B5085174081173882174002173819401940811738010910010090871883C7180587184B3117631D7C5F74D5528E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508517400217381538173881174006173819401B40193817380910010091871883C718048718C718CF411D7415538E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B50861740011538173881174006173819401B40193817380910010094871882C71802851809298D398E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508B174081194081173801091001008B871882C71887871882C7188F87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308024F285D5017408117388817408119400319381738091001008B871808C718871809290B49CB78CB800B590B31C71882871882C7188E87180185100100830100FFADB58FA
DB584010001851087188C87188185188185100109298D31810308014F285D5084174001173817408417408119400319381738091001008A871881C718010D61CFD883CFF002CFE80FB10B4181C7189187180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508B174081194081173801091001008A87180CC7180D81CFF8CFE0CFD8CFE0CFE8CFE0CFD8CFE8CFF00D89C9289187180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508B174081194081173801091001008A87180D0B49CFF8CFD8CFE0CFE8CDC8CDB8CFD0CFF0CFE0CFD8CFE8CFD00B399087180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508B17408119400319381738091001008A87180ECDA0CFE8CFE0CFE80B51C720C718C9280B510FA9CFE88FD88FE0CFE00B418F87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508B17408119400319381738091001008A871803CFC8CFE0CFF00D8185C718050D61CFE08FD8CFD8CFE0C9408E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308024F285D5019408A174081194081173801091001008A871803CFD8CFE0CFF00D5184871806C71887180B49CFE0CFD0CFE0CDB08E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508B174081194081173801091001008A871803CFD8CFE0CFF00B4984871881C7180487180D69CFE88FD0CFE08E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B488B174081194081173801091001008A871881CFE002CFF00B51C71884871881C71803C920CFD0CFD0CFE88E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508B174081194081173801091001008A871881CFE002CFF00B51C71884871881C718030D59CFD8CFD0CFE88E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508B174081194081173801091001008A871804CFD8CFE0D1F8CB78851883871803C71887180B49CFF881CFD801CFE087188D87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508A17400617381940194019381738091001008A87180FCFC0CFF0CFF0CB885329993955314D31C718851887180D59CFF0CFD8CFE8CDA88E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B488A17400217381940194081173801091001008A871804C948CDC08BF09DA9FF5A82BF6207755AD949C718C720CDA0CFC00FA109298E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308024F285B50174081153884174001173817408117400519401B4019381738091001008B87180C0B319FA1B362BB5ABD62BB62BD5ABF62FF626B5AC9208718C92081C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D50841740011738174082174001173817408119400319381738091001008A871803C7186952FF5ABB5A84BF6203BD5ABF626F52C92082C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508C174004194019381738091001008A8718024D31BF62BD6281BF6208E14995319B39755ABF62BB5ABF625F52851881C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B488B1740821940021738091001008A87180F9731FF62BD5AFF62D74185188718851809216F52BF62BD5ABF5A0B298718C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B48881740021538174017408119400319381738091001008A871804DF39FF62BF62755AC72082C7180787184D31BF62BD62BF62194A8518C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B488A17400617381740194019381738091001008A8718049B39FF62BF622D52C71882871881C718056D5ABF62BF62274A8718C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508B174081194081173801091001008A8718045331FF62
BF62BB5AC72082871807C71887186952BF62FF62254A8718C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308034F285B481740173889174081194081173801091001008A8718010921BB5A81FF620B1B4A87188718C718C5189531BD62BB5AFF6217428518C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508B174081194081173801091001008A87180AC518E141FF62BF62FF62694A97319939694AFF62BF5A81BD5A020B29C718C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508B17408119400319381738091001008A871804C7180921B75AFF62BD6283FF6204BB62BB5AFF62DF41851881C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308024F285D5019408517408117388217400519401B4019401738091001008A871805C720C7184D29B75AFF6ABF6281BD6204BB5ABD5AFF62214AC51881C7208F87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308024F285B5019408617400317381740174017388119400319381738091001008A871806C718871885100921DF41B55ABB5A81B95A06BB5AE1411329A339D77CC9AF8B4A8E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508B174081194081173801091001008A87180609214B42CF4A1D425D4A29636B6B81A96B03A56BE36BE184959E81C9AF0189A787188D87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508B174081194081173801091001008A871805498EC9C7CBB7CFAFCDAFCBAF84C9AF04C7B7C9AFC9AFC9B7C7858E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508A17400617381940194019381738091001008A871801C9AFC9B781C9AF0BC9B7C9BFC9BFCBBFC9BFCBBFC9BFCBBFC9B7C9AF8B7409298E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B488B17400519401B4019381738091001008A871806498EC9B7C9B7CBB7499F097DCB63814B5B810B53048B4A0B3A09298718C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B488B17408119400319381738091001008A8718074753C9BFC9B7CBBF497D87108718C71884871882C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508B17408119400319381738091001008A8718018729C9AF81C9B701C9AF472987C7189087180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508B17408119400319381738091001008A871801C7200B8E81C9B701C9BF073A9887180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D5085174001173817408317408119400319381738091001008B871801CB74C9BF81C9B703076489390729C7189587180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D5085174001173817408317408119400319381738091001008A87180AC7188B4AC9B7C9AFCBA7C7E7C3FF83FFC5DDC762C7209387180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508B174081194081173801091001008A87180DC7184931C9AFC9A789C705FF43FF45FF43FFC3FF45EE09298718C7189087180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508B17400519401B4019381730091001008A871804C718C7ACC5FF45EF45FF8183FF0585FF83FF45FF83FF85EE092981C7188F87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508B17408119400319381738091001008A871805893183FF45FF43FFC3FF079481893906479C83FF03FFC3FF07948718C7188F87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508B174081194081173801091001008A871805C56AC3FF43FFC5FF479C871881C718028718479CC3FF8103FF010729C7188F87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308024F285
D5019408A17408119400317381730091001008A871804056BC3FF43FFC3FF893983C71804893183FF05FF83FFC9419087180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D5089174081173881194081173801091001008A8718054552C5FF43FFC3FF4931C71881871805C7180929C3F603FF83FF895A9087180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508A17400217381940194081173801091001008A871804092945FF45FFC5FF495A82871801C71809298105FF0183FF49529087180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508B174081194081173801091001008A87180DC718C7ACC5FF85FF45EE07218718C7188510094A85FF03FF83FF89399087180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508B17408119400319381738091001008B87180DC941C5FF05FF85FFC5D5C7414729475285EE43FF45FF05DEC720C7188F87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508217400317381740174017388317400617381940194019381738091001008A87180EC720C718479CC5FF03FF85FFC5FF45FF85FF45FF05F7C5FF89628718C7188F87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508917408117388119400319381738091001008A871804C720C718C720C5CDC5FF8105FF8145FF0305FFC3FF83AC871881C7188F87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308024F285B5019408A17408119400319381738091001008A871881C71802C720052903A48343FF0647F78B948D21CF290B298718C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308044F285D5019401740153886174001173817408119400319381738091001008A87180F0B29A1332734A1235B1A995C998599755D65E73C2B24EF2C332DF32CCF2985188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508117400317381740174017388417400615381740194019381738091001008A871801A323772D81F32C06312DAF246F1CAF246F24AD24EF2C81AF2C02EF2CAF2C4B198E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B5084174001173817408317400617381940194019381738091001008A8718016B2C352D81F12C82EF2C02AF2CEF2CEF2C81F12C03EF2CAF2C352DD9228E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B488B17408119400319381738091001008A8718068F29E3336B2C692CAF2CEF2CF12C81EF2C0669342534A133AD2CEF2CF12CA3238E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508B174081194081173801091001008A871805C718C720C9200B296B2CF12C81EF2C07F32C5532C71887185532F32CEF2CE5238E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508B17408119400319381738091001008A871881C72002C7185F33332D82EF2C07F32CDB3285188718D129F32CEF2CE7238E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D50871740011738174081174081194081173801091001008A87180F071987189732372DAF2CEF2C332DF32CEF2CAD2C092183105332F52CF12C61238E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508B174081194081173801091001008A871802C5185522F52C81AF2C0AF52CD9321B33F52CF32C292497226B2CB12CF52C55228E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508B174081194081173801091001008A87180B8D21F32CF12CAF2C352D5F33C7180B29AF2CF32CF12CF52C81F12C012D2409198E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508B174081194081173801091001008A87180A2B24F32CAF2CF32C29340921C720C7188D29AF2CF52C81AF2C02372D133285188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109
298D31810308014F285D508B174081194081173801091001008A871805F52CF12CF12CAF2C4B29C71881C72002C7184B29A13381F32C02DB32C718C7188E87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B48841740011738174082174001153817408119400319381738091001008A8718031D23372DF12C8F2986C718014B290B2981C7188F87180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B488B17400519401B4019381738091001008A871802C718D1214B2981C71883871884C7189087180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B508A1740021738194019408117380109100100A987180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508A1740021738194019408117380109100100A987180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508B17408119408117380109100100A987180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508A1740061738194019401938173809100100A987180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B488A174006153819401B401938173009100100A987180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285B4885174001173817408117408117380519401B401938173809100100A987180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308024F285D5019408517400117381740821740811940031938173809100100A987180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D31810308014F285D508217400117381740861740811940031938173809100100A987180185100100830100FFADB58FADB584010001851087188C87188185188185100109298D318103080151285B508117400117381740821740811738821740811940031938173809100100A987180185100100FF010098010001851087188C87188185188185100109298D31810308014F285B48861740011738153881174006173819401B401938173809100100A987180185100100FF010098010001851087188C87188185188185100109298D31810308014F285B488A1740061738194019401738173009100100A987180185100100FF010098010001851087188C87188185188185100109298D31810308014F285B4884174002173815381738811740811738811940031738173009100100AA87180185100100FF010096010001851087188D87188185188185100109298D31810308024F285B481740811738811538811740811538021738174017388119408117380109100100AB87180185100100FF010094010001851087188E87188185188185100109298D31810308034F285B4817401738821538811740811538821738811940031738153009100100AC8718FF85109585109087188185188185100109298D31810308054F285B48174015381738174085173881153805174019401738153009100100EC871801851887188D871801851887188287188185188B8718888518BA87188185188185100109298D318103080351305F58194819408619488119400619481B481B481B4019380B180300FF8718D387188185188185100109298D318103080351305D5019401740861940811740811948041B48194019380B100100FF8718D387188185188185100109298D31810308014F285D508117400319401948194819408417400619401B481B40194019380910010001C7188718FF8718D187188185188185100109298D31810308024F285D5019408A17400519401B40193817380910010001C7188718FF8718D187188185188185100109298D31810308014F285D508B17400519401B48194017380910010001C7188718FF8718D187188185188185100109298D31810308014F285D508B174081194003193817380910010001C7188718FF8718D187188185188185100109298D31810308034F285D501740153881174003173817401740173883174081194003193817380910010001C7188718FF8718D187188185188185100109298D31810308034F281D501740173889174081194003193817380910010001C7188718FF8718D187188185188185100109298D31810308014F281B508B174081194003193817380910010001C7208718FF8718D187188185188185100109298D31810308014F281B488B174081194003193817380910010001C9208718FF8718D1871
88185188185100109298D31810308014F281B508817400217381740174081194081173801091001000109218718FF8718D187188185188185100109298D318103080251281D5019408A174081194081173801091001000109218718FF8718D187188185188185100109298D318103080251281B5019408A17408119400319381738091001000109298718FF8718D187188185188185100109298D31810308014F281B508B17408119400319381738091001000109298718FF8718D187188185188185100109298D318103080151281D508B174081194081173801091001000109298718FF8718D187188185188185100109298D318103080151281D50891740011538174081194081173801091001000109298718FF8718D187188185188185100109298D318103080251301D5019408417400117381740811740011538173882194002173809100100010929C718FF8718D187188185188185100109298D31810308014F285B508B174082194002173809100100014B29C718FF8718D187188185188185100109298D31810308014F285B508B1740811940031938173809100100014B31C718FF8718D187188185188185100109298D318103080351285B4815381738881740061738194019401938173809100100018D31C720FF8718D187188185188185100109298D31810308034F281B4817381538881740061738194019401938173009100100018D31C920FF8718D187188185188185100109298D318103080151281D508B1740811940031738173009100100018D310921FF8718D187188185188185100109298D318103080151281D508B17408119408117380109100100028D310929C718FF8718D087188185188185100109298D318103080151281B488317400117381740851740811940031938173809100100028D310929C718FF8718D087188185188185100109298D318103080151281B5083174001153817408517400519401B401938173809100100024B314B29C718FF8718D087188185188185100109298D318103080251301B5019488A174082194002173809100100814B3101C7188718FF8718CF87188185188185100509298D310308030051301B4881174001173815388117400117381740831740811940031938173809100100020B294B31C718FF8718D087188185188185100509298D310308030051301B488A174006173819401940173817300910010002C9208D31C720FF8718D087188185188185100609298D310308010051301B481940821740011738174085174081194081173801091001000285108D310921FF8718D087188185188185100609298D310308010051301B501940811740011738174083174081173806174019401B4019401738091001000203088D310929FF8718D087188185188185100609298D310300010051301B4819408117400115381740861740821940021738091001000301008D310929C718FF8718CF87188185188185100109294B318101000251301D5019408A17408119400319381738091001000301008D314B31C718FF8718CF87188185188185100109294B31810100014F281B488B17408119408117380109100100040100C7208D310921C718FF8718CE87188185188185100109294B31810100014F281B488B17408119400317381730091001000103084510814B3101C720C718FF8718CD871801851885108185100109294B29810100010F281B488B1740811940031938173809100100050F280100C7208D314B29C92081C718FF8718CB87180585188510851087180929C9208101000453301B50174015381738821740011738174081174001173817408119400319381738091001000359480F280308C720814B31010929C92083C718FF8718C78718818518078510C718092143080100091859401B48891740011538174081194081173801091001000A1B4859480D2801004510C7208D314B310B290929092181C920FFC720C6C72081C71802851885108518810100020F281B4819488B17408119400317381730091001000919401B485B480F280308030045108718C7200929FF4B31C84B31044B290B290929C7204308810100020B1859481B488C1740811940811738010910010007174019401B485D5057400F2803000308810300810100810300FF0308C1030801030001008501000403000B2059401B481940881740811738811740821940021738091001000A17381740174019401B485D50574011300D2007100308810300820100FF0300BD030088010003071053305948194882174081173885174001153817388117408219400217380910010083174003194019481B481B50845D50825F50815D50815F50815D50025F505F585F50825D50025F505F585F50825D50815F50015D505F50815D50825F50815D50025F505D505D50815F50815D50815F50815D50825F50835D50015F585D50835D50025F505F
585F58845D50035F585D505F585F50815D50025F505D505F50815F58035F505D505F505F58825D50815F50025F585F505F50825D50025F505D505F50815D50045F585F505F505F585F50825D50035F505D505D505F50825D50815F50815D50815F50865D50015F585D50865D50045F505D505D505F505F58825F50035D505F505F505D50815F50815D50025F505D505D50825F50015D505F50815D50815F50835D50825F50825D50055F505D505D505F585F505D50835F50815D50015F505F58815F50815D50015F585F50815F50825D50025F505D505D50815F50825D50021B48194819408217408117388817400617381940194019381738091001008117400117381740811740021940194819408119488119408219488219408719488119408319488119400319481B4819481940991948031940194819481940851948811740A419488119408819488119408119480119401948821948811940861948811740861948011B4819488C19480119401948821948011B4819488119480119401948851948011940194881194801174019408D1948021B48194819408819488119408D1740011738174081194081173801091001009A17400117381740841740011738174090174001173817408117400117381538BC174001173817409517400117381740A617400217381740174081173888174001173817409717408117388617400519401B401938173809100100D21740011738174092174001173817408217400117381740991740011738174082174001153817408B17400117381740A2174081173887174001153817409C17400519401B4019401738091001000117381740D017400117381740B31740011738174082174001173817408B1740031738174017401738A117400115381740A51740811940031938173809100100011738174099174001173817409C17400117381740C5174081173886174081173890174001173817409617400117381740871740031738174017401738A3174081194003193817380910010089174001173817408417400115381738C61740011738174084174001173817409F17400117381740B217400117381740A51740011538174086174006173819401940193817380910010089174001173817409A17400117381740A71740811738B317400115381740CB1740021738174017408117388E174006173819401B401738153009100100D417408117388317400117381740C117408117388D17400117381538A2174081173882174001173817408C174001173817408119400319381738091001000117381740DA174001173817408917400117381740BB174001173817408517400117381740AE1740011738174084174081173883194002173809100100A5174001173817408617400115381738A117400115381738EC1740031738174017401738A217400119401B4881194002173805000100C717400115381740881740811738FF174087174081173881174001173817408417400219401B481B4881194002173803000100821940021948194017408419408217408E19400319481940194017408219408117408119400217401940194081174084194001174019408B19400117401940851940811740841940011740194085194001174019408119408217400219401740174088194001174019408619408117408619408117408B1940821740851940011948194082174081194001174019408119480119401740881940811740841940821740031940174017401940821740821940811740831940011740194081194001174019488719400217401940194081174006194019481940174019401740194081174084194001174019408119400219481B481B4881194002133001000100851B480119481B48821B48831948811B480119481B48811B480119481B488D1B480119481B488F1B480119481B48861B480119481B48811B48811948881B48811948891B48811948871B48811948821B48811948841B48811948881B48811948B41B480119481B48861B480119481B48841B4801194019488F1B480119481B488E1B48021D481D501D50811B48021B4019400910810100841D50011F501D508C1D50011F501B50891D50011B501D50821D50011B501D50881D50011B501D50851D50011F501D50811D50811B48811D50011B481D50811D50011B501B48821D50051B501B481D501F501D501F50891D50011F501D50821D50011F501D50811B48011D501F50821D50811F508A1D50811F509D1D50811F50051D501B501D501F501D501F508B1D50011B481D50881D50031F501D501D501F50881D50021B481D501F50861D50011F501D50901D50011F501D50821D50811F50841D50811B48011B401128820100861D50851F508E1D50821F50831D50011F501D50821D50811F50881D50011F501D50821D50811F50011D501F50811D50811F50821D500
11F501D50881D50011F501D50831D50821F50821D50011F501D50811D50811F50881D50811F50831D50821F50891D50821F50821D50011F501D50841D50011F501D50881D50811F50861D50011F501D50811D50011F501D50851D50811F50821D50011F501D50821D50031F501D501D501B48861D50011F501D50841D50811F50821D50011F501D50951D50011F501D50841D50811B480113300100820100821D50021F501D501D50821F508B1D50011F501D50841D50811F50861D50811F50831D50811F50811D50811F50861D50031F501D501D501F50821D50811F50821D50811F50871D50011F501D50831D50821F50851D50841F50821D50021F501D501D50821F50831D50831F50871D50821F50811D50011F501D50821D50811F50811D50021F501D501D50811F50811D50811F50811D50021F501D501F50841D50811F50821D50811F508E1D50811F50811D50021B481D501F50851D50811F50821D50811F50831D500A1F501D501D501F501D501F501D501F501D501D481B48811F50821D50811F50811D50011F501D50881D50011B480D20840100991530021330153015388A153001133015308115300115381530881530811538F415300113301530961530011330153095153001133015308515300211300B180300850100FF01009701000103000F28821128010F200508890100050D180F200F2811280F200D188901000305080F2811280F28810F200103000100880100010F200F28811128010F280B189F0100FF01009801000407101530193817300B108A01000103000F2881193801112803008A0100040B1017301940153007088A01000403001128193817380F20A00100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100FF0100EF0100000000000000000054525545564953494F4E2D5846494C452E00");
def processframe(data):
    # Skip last 26 bytes (footer)
    data = data[:-26]
    # Skip first 22 bytes
    i = 22
    # Implied header: 16010A000001002000000000F0009001102000000000
    # Pixel buffer is always created so the return at the end of this function
    # is defined even when image output is disabled
    imgdat = bytearray(b'')
    while(i < len(data)):
        header = data[i]
        # Top byte indicates RLE/RAW
        if(header > 127):
            if(log):
                print("RLE",end='')
            rle = True
        else:
            if(log):
                print("RAW",end='')
            rle = False
        # Length of packet, pixels for RLE or colors for RAW
        packlen = header % 128 + 1
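        # (Illustrative note, not part of the original script: header 0x85 = 133 is
        # greater than 127, so it is an RLE packet repeating one color
        # 133 % 128 + 1 = 6 times; header 0x03 is a RAW packet of
        # 3 % 128 + 1 = 4 literal color pairs.)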
        # Skip header byte
        i = i + 1
        if(log):
            print(str(packlen).rjust(4)+" ",end='')
        if(rle):
            # Two color bytes in LE order
            b1 = data[i]
            b2 = data[i+1]
            if(log):
                printrgb(format(b2, '08b')+" "+format(b1, '08b'))
            if(image):
                # bytes to 5-bit and 8-bit RGB
                r,g,b,ra,ga,ba = torgb(b2,b1)
                for j in range(packlen):
                    imgdat.append(ra)
                    imgdat.append(ga)
                    imgdat.append(ba)
            # Skip past two color bytes
            i = i + 2
        else:
            j = 0
            while(j < packlen):
                # Two color bytes in LE order
                b1 = data[i+j*2]
                b2 = data[i+j*2+1]
                if(log):
                    printrgb(format(b2, '08b')+" "+format(b1, '08b'))
                if(image):
                    # bytes to 5-bit and 8-bit RGB
                    r,g,b,ra,ga,ba = torgb(b2,b1)
                    #print(b2,b1,r,g,b)
                    imgdat.append(ra)
                    imgdat.append(ga)
                    imgdat.append(ba)
                # Next color pair
                j = j + 1
            # Skip past raw color data
            i = i + packlen*2
        if(log):
            # Need newline after all those colors
            print()
    return imgdat
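
# (Illustrative note, not part of the original script: each decoded frame is
# 240 x 400 pixels at 3 bytes per pixel, so a full frame buffer should be
# 240 * 400 * 3 = 288000 bytes -- the length checked below before handing the
# buffer to PIL.)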
if(animated):
    frames = []
    for i in range(0, len(animdata)):
        imgdat = processframe(animdata[i])
        if(image):
            # Should be 288000
            if(log):
                print(len(imgdat))
            # Needs to be immutable, so convert to bytes instead of bytearray
            im = Image.frombytes('RGB', (240, 400), bytes(imgdat))
            frames.append(im.rotate(90, expand=True))
    frames[0].save("TGAHZ.gif", save_all=True, append_images=frames[1:], duration=len(animdata), loop=0)
    if(log):
        print("Saved")
else:
    imgdat = processframe(data)
    if(image):
        # Should be 288000
        if(log):
            print(len(imgdat))
        # Needs to be immutable, so convert to bytes instead of bytearray
        im = Image.frombytes('RGB', (240, 400), bytes(imgdat))
        im.rotate(90, expand=True).save("TGAHZ.png", "PNG")
        if(log):
            print("Saved")
| 358.157576 | 55,251 | 0.972722 | 594 | 59,096 | 96.771044 | 0.287879 | 0.001461 | 0.001253 | 0.000261 | 0.011325 | 0.010629 | 0.009777 | 0.009777 | 0.009777 | 0.007307 | 0 | 0.829002 | 0.01616 | 59,096 | 164 | 55,252 | 360.341463 | 0.159664 | 0.013283 | 0 | 0.350877 | 0 | 0 | 0.954301 | 0.950188 | 0 | 1 | 0 | 0 | 0 | 1 | 0.026316 | false | 0 | 0.017544 | 0 | 0.131579 | 0.114035 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
3f4dd50479ea9c638704f1e90e46df90cc409b0d | 71,297 | py | Python | test/test_commands/test_response_parsing.py | leanprover-community/lean-client-python | efeb257b7e672d02c1005a6624251ad6dd392451 | [
"Apache-2.0"
] | 13 | 2020-05-03T21:32:14.000Z | 2021-06-01T10:32:11.000Z | test/test_commands/test_response_parsing.py | leanprover-community/lean-client-python | efeb257b7e672d02c1005a6624251ad6dd392451 | [
"Apache-2.0"
] | 22 | 2020-04-25T12:18:12.000Z | 2021-07-22T19:39:19.000Z | test/test_commands/test_response_parsing.py | leanprover-community/lean-client-python | efeb257b7e672d02c1005a6624251ad6dd392451 | [
"Apache-2.0"
] | 2 | 2020-05-06T07:58:33.000Z | 2020-11-03T22:11:54.000Z | """
Unit tests for lean server response classes
Test that responses are properly converted from JSON into
Python classes and that new fields do not cause the parser
to crash.
When possible, tests should use ACTUAL LEAN OUTPUT under a
range of scenarios to ensure that all cases are covered.
(One way to generate output is to use the trio server with
debug_bytes=True.)
"""
import json
import lean_client.commands as cmds
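
# (Illustrative note, not part of the original test file: every test below follows
# the same pattern -- feed a JSON line captured from a real Lean server session to
# cmds.Response.parse_response() and assert on the fields of the resulting Python
# object, either directly or through TestCommandResponse.run_tests().)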

class TestAllMessagesResponse:

    def test_no_messages(self):
        response_json = '{"msgs":[],"response":"all_messages"}'
        resp = cmds.Response.parse_response(response_json)
        assert isinstance(resp, cmds.AllMessagesResponse)
        assert resp.response == "all_messages"
        assert len(resp.msgs) == 0

    def test_multiple_messages(self):
        response_json = '{"msgs":[{"caption":"","file_name":"test3.lean","pos_col":7,"pos_line":2,"severity":"error","text":"unknown identifier \'foo\'"},{"caption":"","file_name":"test2.lean","pos_col":0,"pos_line":1,"severity":"warning","text":"declaration \'foo\' uses sorry"}],"response":"all_messages"}'
        resp = cmds.Response.parse_response(response_json)
        assert isinstance(resp, cmds.AllMessagesResponse)
        assert resp.response == "all_messages"
        assert len(resp.msgs) == 2
        assert resp.msgs[0].caption == ""
        assert resp.msgs[0].file_name == "test3.lean"
        assert resp.msgs[0].pos_col == 7
        assert resp.msgs[0].pos_line == 2
        assert resp.msgs[0].severity == cmds.Severity.error
        assert resp.msgs[0].text == "unknown identifier 'foo'"
        assert resp.msgs[1].severity == cmds.Severity.warning

    def test_extra_fields(self):
        """
        Should not crash if given extra fields that are added in later versions of Lean.
        """
        response_json = '{"_new_field_a":12345, "msgs":[{"_new_field_b":12345, "caption":"","file_name":"test3.lean","pos_col":7,"pos_line":2,"severity":"error","text":"unknown identifier \'foo\'","_new_field_a":12345},{"caption":"","file_name":"test2.lean","pos_col":0,"pos_line":1,"severity":"warning","text":"declaration \'foo\' uses sorry"}],"response":"all_messages"}'
        resp = cmds.Response.parse_response(response_json)
        assert isinstance(resp, cmds.AllMessagesResponse)
        assert resp.response == "all_messages"
        assert len(resp.msgs) == 2
        assert resp.msgs[0].caption == ""
        assert resp.msgs[0].file_name == "test3.lean"
        assert resp.msgs[0].pos_col == 7
        assert resp.msgs[0].pos_line == 2
        assert resp.msgs[0].severity == cmds.Severity.error
        assert resp.msgs[0].text == "unknown identifier 'foo'"
        assert resp.msgs[1].severity == cmds.Severity.warning

class TestCurrentTasksResponse:

    def test_no_tasks(self):
        response_json = '{"is_running":false,"response":"current_tasks","tasks":[]}'
        resp = cmds.Response.parse_response(response_json)
        assert isinstance(resp, cmds.CurrentTasksResponse)
        assert resp.is_running == False
        assert resp.response == "current_tasks"
        assert resp.tasks == []

    def test_extra_fields(self):
        """
        Should not crash if given extra fields that are added in later versions of Lean.
        """
        response_json = '{"_new_field_a":123,"is_running":false,"response":"current_tasks","tasks":[]}'
        resp = cmds.Response.parse_response(response_json)
        assert isinstance(resp, cmds.CurrentTasksResponse)
        assert resp.is_running == False
        assert resp.response == "current_tasks"
        assert resp.tasks == []

    def test_running_tasks(self):
        response_json = '{"is_running":true,"response":"current_tasks","tasks":[{"desc":"parsing at line 1","end_pos_col":70,"end_pos_line":1,"file_name":"test.lean","pos_col":0,"pos_line":1}]}'
        resp = cmds.Response.parse_response(response_json)
        assert isinstance(resp, cmds.CurrentTasksResponse)
        assert resp.is_running == True
        assert resp.response == "current_tasks"
        assert resp.tasks == [cmds.Task(desc='parsing at line 1', end_pos_col=70, end_pos_line=1, file_name='test.lean', pos_col=0, pos_line=1)]

class TestErrorResponse:

    def test_missing_file_error(self):
        response_json = '{"message":"file \'missing.lean\' not found in the LEAN_PATH","response":"error","seq_num":3}'
        resp = cmds.Response.parse_response(response_json)
        assert isinstance(resp, cmds.ErrorResponse)
        assert resp.message == "file \'missing.lean\' not found in the LEAN_PATH"
        assert resp.response == "error"
        assert resp.seq_num == 3

    def test_error_with_no_seq_num(self):
        response_json = '{"message":"key \'seq_num\' not found","response":"error"}'
        resp = cmds.Response.parse_response(response_json)
        assert isinstance(resp, cmds.ErrorResponse)
        assert resp.message == "key 'seq_num' not found"
        assert resp.response == "error"
        assert resp.seq_num == None

    def test_extra_fields(self):
        """
        Should not crash if given extra fields that are added in later versions of Lean.
        """
        response_json = '{"_new_field_a":123,"message":"key \'seq_num\' not found","response":"error"}'
        resp = cmds.Response.parse_response(response_json)
        assert isinstance(resp, cmds.ErrorResponse)
        assert resp.message == "key 'seq_num' not found"
        assert resp.response == "error"
        assert resp.seq_num == None

class TestCommandResponse:

    class CommandResponseExample:
        def __init__(self, response_json: str, response_type):
            self.response_json = response_json
            self.data = json.loads(response_json)
            self.command = response_type.command
            self.response_type = response_type
            self.ok_resp = None
            self.resp = None

        def add_fields(self, data):
            if isinstance(data, dict):
                data2 = {k: self.add_fields(d) for k, d in data.items()}
                data2['_extra_field'] = "Extra field value"
                return data2
            if isinstance(data, list):
                data2 = [self.add_fields(d) for d in data]
                return data2
            else:
                return data

        def parse_intermediate(self, add_extra_fields):
            if add_extra_fields:
                json_string = json.dumps(self.add_fields(self.data))
            else:
                json_string = self.response_json
            self.ok_resp = cmds.Response.parse_response(json_string)

        def parse_final(self):
            self.resp = self.ok_resp.to_command_response(self.command)

        def test_intermediate_representation(self):
            assert isinstance(self.ok_resp, cmds.OkResponse)
            assert self.ok_resp.response == self.data['response']
            assert self.ok_resp.seq_num == self.data['seq_num']

        def assert_data_and_object_match(self, data, object, ignore_keys, replacement_keys):
            print("Comparing:", object, "\nwith: ", data)
            if isinstance(data, (int, float, str, bool)):
                assert data == object
            elif isinstance(data, list):
                assert isinstance(object, list)
                assert len(object) == len(data)
                for d, o in zip(data, object):
                    self.assert_data_and_object_match(d, o, ignore_keys, replacement_keys)
            elif isinstance(data, dict):
                for key, value in data.items():
                    if key in ignore_keys:
                        continue
                    if key in replacement_keys:
                        key = replacement_keys[key]
                    print(key, object)
                    self.assert_data_and_object_match(value, object.__dict__[key], ignore_keys, replacement_keys)

        def test_final_representation(self, ignore_keys=None, replacement_keys=None):
            assert isinstance(self.resp, self.response_type)
            assert self.resp.command == self.command
            assert self.resp.response == self.data['response']
            if ignore_keys is None:
                ignore_keys = []
            if replacement_keys is None:
                replacement_keys = {}
            self.assert_data_and_object_match(self.data, self.resp, ignore_keys=ignore_keys + ['response'], replacement_keys=replacement_keys)

    @staticmethod
    def run_tests(response_json: str, response_type, replacement_keys=None):
        """
        Attempt to parse the response_json string and test that all the desired fields are included and correctly parsed.
        It also checks that the parsing still works even if new extra fields are in the json.
        """
        example = TestCommandResponse.CommandResponseExample(
            response_json=response_json,
            response_type=response_type
        )
        # test parsing
        example.parse_intermediate(add_extra_fields=False)
        example.test_intermediate_representation()
        example.parse_final()
        example.test_final_representation(replacement_keys=replacement_keys)
        # test that still parses with extra fields
        example.parse_intermediate(add_extra_fields=True)
        example.test_intermediate_representation()
        example.parse_final()
        example.test_final_representation(replacement_keys=replacement_keys)
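
# (Illustrative note, not part of the original test file: run_tests() above parses
# each captured response twice -- once verbatim and once after add_fields() injects
# an '_extra_field' key into every JSON object -- so the test classes below only
# need to supply a captured JSON line and the expected response type.)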

class TestSyncResponse:

    def test_file_invalidated_response(self):
        TestCommandResponse.run_tests(
            response_json='{"message":"file invalidated","response":"ok","seq_num":2}',
            response_type=cmds.SyncResponse
        )

    def test_file_unchanged_response(self):
        TestCommandResponse.run_tests(
            response_json='{"message":"file unchanged","response":"ok","seq_num":3}',
            response_type=cmds.SyncResponse
        )

class TestAllHoleCommandsResponse:

    def test_holes(self):
        TestCommandResponse.run_tests(
            response_json='{"holes":[{"end":{"column":21,"line":1},"file":"test2.lean","results":[{"description":"Infer type of the expression in the hole","name":"Infer"},{"description":"Show the current goal","name":"Show"},{"description":"Try to fill the hole using the given argument","name":"Use"}],"start":{"column":16,"line":1}}],"response":"ok","seq_num":6}',
            response_type=cmds.AllHoleCommandsResponse
        )

class TestHoleCommandsResponse:

    def test_hole_not_found(self):
        TestCommandResponse.run_tests(
            response_json='{"message":"hole not found","response":"ok","seq_num":11}',
            response_type=cmds.HoleCommandsResponse
        )

    def test_holes(self):
        TestCommandResponse.run_tests(
            response_json='{"end":{"column":21,"line":1},"file":"test2.lean","response":"ok","results":[{"description":"Infer type of the expression in the hole","name":"Infer"},{"description":"Show the current goal","name":"Show"},{"description":"Try to fill the hole using the given argument","name":"Use"}],"seq_num":7,"start":{"column":16,"line":1}}',
            response_type=cmds.HoleCommandsResponse
        )


class TestHoleResponse:
    def test_hole_not_found(self):
        TestCommandResponse.run_tests(
            response_json='{"message":"hole not found","response":"ok","seq_num":11}',
            response_type=cmds.HoleResponse
        )

    def test_hole_infer_action(self):
        TestCommandResponse.run_tests(
            response_json='{"message":"\xe2\x84\x95\\n","response":"ok","seq_num":8}',
            response_type=cmds.HoleResponse
        )

    def test_hole_use_action(self):
        TestCommandResponse.run_tests(
            response_json='{"replacements":{"alternatives":[{"code":"1","description":""}],"end":{"column":21,"line":1},"file":"test2.lean","start":{"column":16,"line":1}},"response":"ok","seq_num":10}',
            response_type=cmds.HoleResponse
        )


class TestCompleteResponse:
    def test_completions_skip_false(self):
        TestCommandResponse.run_tests(
response_json='{"completions":[{"source":{"column":8,"line":1},"text":"foobar","type":"1 = 1"},{"source":{"column":4,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/data/array/basic.lean","line":174},"text":"array.foldl","type":"array ?n ?\xce\xb1 \xe2\x86\x92 ?\xce\xb2 \xe2\x86\x92 (?\xce\xb1 \xe2\x86\x92 ?\xce\xb2 \xe2\x86\x92 ?\xce\xb2) \xe2\x86\x92 ?\xce\xb2"},{"doc":"Map each element of the given array with an index argument.","source":{"column":4,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/data/array/basic.lean","line":166},"text":"array.foreach","type":"array ?n ?\xce\xb1 \xe2\x86\x92 (fin ?n \xe2\x86\x92 ?\xce\xb1 \xe2\x86\x92 ?\xce\xb2) \xe2\x86\x92 array ?n ?\xce\xb2"},{"source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/data/array/basic.lean","line":234},"text":"array.has_to_format","type":"has_to_format (array ?n ?\xce\xb1)"},{"source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/data/array/basic.lean","line":237},"text":"array.has_to_tactic_format","type":"has_to_tactic_format (array ?n ?\xce\xb1)"},{"source":{"column":4,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/data/array/basic.lean","line":183},"text":"array.rev_foldl","type":"array ?n ?\xce\xb1 \xe2\x86\x92 ?\xce\xb2 \xe2\x86\x92 (?\xce\xb1 \xe2\x86\x92 ?\xce\xb2 \xe2\x86\x92 ?\xce\xb2) \xe2\x86\x92 ?\xce\xb2"},{"source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/data/bool/lemmas.lean","line":89},"text":"band_eq_false_eq_eq_ff_or_eq_ff","type":"\xe2\x88\x80 (a b : bool), a && b = ff = (a = ff \xe2\x88\xa8 b = ff)"},{"doc":" Auxiliary annotation for binders (Lambda and Pi).\\n This information is only used for elaboration.\\n The difference between `{}` and `\xe2\xa6\x83\xe2\xa6\x84` is how implicit arguments are treated that are *not* followed by explicit arguments.\\n `{}` arguments are applied eagerly, while `\xe2\xa6\x83\xe2\xa6\x84` arguments are left partially applied:\\n```lean\\ndef foo {x : \xe2\x84\x95} : \xe2\x84\x95 := x\\ndef bar \xe2\xa6\x83x : \xe2\x84\x95\xe2\xa6\x84 : \xe2\x84\x95 := x\\n#check foo -- foo : \xe2\x84\x95\\n#check bar -- bar : \xce\xa0 \xe2\xa6\x83x : \xe2\x84\x95\xe2\xa6\x84, \xe2\x84\x95\\n```","source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info","type":"Type"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.aux_decl","type":"binder_info"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.aux_decl.inj","type":"binder_info.aux_decl = binder_info.aux_decl \xe2\x86\x92 true"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.aux_decl.inj_arrow","type":"binder_info.aux_decl = binder_info.aux_decl \xe2\x86\x92 \xce\xa0 \xe2\xa6\x83P : Sort l\xe2\xa6\x84, (true \xe2\x86\x92 P) \xe2\x86\x92 
P"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.aux_decl.sizeof_spec","type":"binder_info.sizeof binder_info.aux_decl = 1"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.cases_on","type":"\xce\xa0 (n : binder_info), ?C binder_info.default \xe2\x86\x92 ?C binder_info.implicit \xe2\x86\x92 ?C binder_info.strict_implicit \xe2\x86\x92 ?C binder_info.inst_implicit \xe2\x86\x92 ?C binder_info.aux_decl \xe2\x86\x92 ?C n"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.default","type":"binder_info"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.default.inj","type":"binder_info.default = binder_info.default \xe2\x86\x92 true"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.default.inj_arrow","type":"binder_info.default = binder_info.default \xe2\x86\x92 \xce\xa0 \xe2\xa6\x83P : Sort l\xe2\xa6\x84, (true \xe2\x86\x92 P) \xe2\x86\x92 P"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.default.sizeof_spec","type":"binder_info.sizeof binder_info.default = 1"},{"source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":48},"text":"binder_info.has_repr","type":"has_repr binder_info"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.has_sizeof_inst","type":"has_sizeof binder_info"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.implicit","type":"binder_info"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.implicit.inj","type":"binder_info.implicit = binder_info.implicit \xe2\x86\x92 true"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.implicit.inj_arrow","type":"binder_info.implicit = binder_info.implicit \xe2\x86\x92 \xce\xa0 \xe2\xa6\x83P : Sort l\xe2\xa6\x84, (true \xe2\x86\x92 P) \xe2\x86\x92 P"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.implicit.sizeof_spec","type":"binder_info.sizeof binder_info.implicit = 1"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.inst_implicit","type":"binder_info"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.inst_implicit.inj","type":"binder_info.inst_implicit = 
binder_info.inst_implicit \xe2\x86\x92 true"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.inst_implicit.inj_arrow","type":"binder_info.inst_implicit = binder_info.inst_implicit \xe2\x86\x92 \xce\xa0 \xe2\xa6\x83P : Sort l\xe2\xa6\x84, (true \xe2\x86\x92 P) \xe2\x86\x92 P"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.inst_implicit.sizeof_spec","type":"binder_info.sizeof binder_info.inst_implicit = 1"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.no_confusion","type":"?v1 = ?v2 \xe2\x86\x92 binder_info.no_confusion_type ?P ?v1 ?v2"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.no_confusion_type","type":"Sort l \xe2\x86\x92 binder_info \xe2\x86\x92 binder_info \xe2\x86\x92 Sort l"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.rec","type":"?C binder_info.default \xe2\x86\x92 ?C binder_info.implicit \xe2\x86\x92 ?C binder_info.strict_implicit \xe2\x86\x92 ?C binder_info.inst_implicit \xe2\x86\x92 ?C binder_info.aux_decl \xe2\x86\x92 \xce\xa0 (n : binder_info), ?C n"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.rec_on","type":"\xce\xa0 (n : binder_info), ?C binder_info.default \xe2\x86\x92 ?C binder_info.implicit \xe2\x86\x92 ?C binder_info.strict_implicit \xe2\x86\x92 ?C binder_info.inst_implicit \xe2\x86\x92 ?C binder_info.aux_decl \xe2\x86\x92 ?C n"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.sizeof","type":"binder_info \xe2\x86\x92 \xe2\x84\x95"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.strict_implicit","type":"binder_info"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.strict_implicit.inj","type":"binder_info.strict_implicit = binder_info.strict_implicit \xe2\x86\x92 true"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.strict_implicit.inj_arrow","type":"binder_info.strict_implicit = binder_info.strict_implicit \xe2\x86\x92 \xce\xa0 \xe2\xa6\x83P : Sort l\xe2\xa6\x84, (true \xe2\x86\x92 P) \xe2\x86\x92 P"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/expr.lean","line":35},"text":"binder_info.strict_implicit.sizeof_spec","type":"binder_info.sizeof binder_info.strict_implicit = 
1"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/core.lean","line":265},"text":"bool","type":"Type"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/core.lean","line":265},"text":"bool.cases_on","type":"\xce\xa0 (n : bool), ?C ff \xe2\x86\x92 ?C tt \xe2\x86\x92 ?C n"},{"source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":744},"text":"bool.decidable_eq","type":"decidable_eq bool"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/core.lean","line":265},"text":"ff","type":"bool"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/core.lean","line":265},"text":"bool.ff.inj","type":"ff = ff \xe2\x86\x92 true"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/core.lean","line":265},"text":"bool.ff.inj_arrow","type":"ff = ff \xe2\x86\x92 \xce\xa0 \xe2\xa6\x83P : Sort l\xe2\xa6\x84, (true \xe2\x86\x92 P) \xe2\x86\x92 P"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":737},"text":"bool.ff_ne_tt","type":"ff = tt \xe2\x86\x92 false"},{"source":{"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/derive.lean"},"text":"bool.has_reflect","type":"has_reflect bool"},{"source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/data/repr.lean","line":37},"text":"bool.has_repr","type":"has_repr bool"},{"source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/core.lean","line":522},"text":"bool.has_sizeof","type":"has_sizeof bool"},{"source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/format.lean","line":91},"text":"bool.has_to_format","type":"has_to_format bool"},{"source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/data/to_string.lean","line":28},"text":"bool.has_to_string","type":"has_to_string bool"},{"source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":784},"text":"bool.inhabited","type":"inhabited bool"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/core.lean","line":265},"text":"bool.no_confusion","type":"?v1 = ?v2 \xe2\x86\x92 bool.no_confusion_type ?P ?v1 ?v2"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/core.lean","line":265},"text":"bool.no_confusion_type","type":"Sort l \xe2\x86\x92 bool \xe2\x86\x92 bool \xe2\x86\x92 Sort l"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/core.lean","line":265},"text":"bool.rec","type":"?C ff \xe2\x86\x92 ?C tt \xe2\x86\x92 \xce\xa0 (n : bool), ?C n"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/core.lean","line":265},"text":"bool.rec_on","type":"\xce\xa0 (n : bool), ?C ff 
\xe2\x86\x92 ?C tt \xe2\x86\x92 ?C n"},{"source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/core.lean","line":519},"text":"bool.sizeof","type":"bool \xe2\x86\x92 \xe2\x84\x95"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/core.lean","line":265},"text":"tt","type":"bool"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/core.lean","line":265},"text":"bool.tt.inj","type":"tt = tt \xe2\x86\x92 true"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/core.lean","line":265},"text":"bool.tt.inj_arrow","type":"tt = tt \xe2\x86\x92 \xce\xa0 \xe2\xa6\x83P : Sort l\xe2\xa6\x84, (true \xe2\x86\x92 P) \xe2\x86\x92 P"},{"source":{"column":8,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/data/bool/lemmas.lean","line":124},"text":"bool_eq_false","type":"\xc2\xac\xe2\x86\xa5?b \xe2\x86\x92 ?b = ff"},{"source":{"column":8,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/data/bool/lemmas.lean","line":122},"text":"bool_iff_false","type":"\xc2\xac\xe2\x86\xa5?b \xe2\x86\x94 ?b = ff"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":93},"text":"cast_proof_irrel","type":"\xe2\x88\x80 (h\xe2\x82\x81 h\xe2\x82\x82 : ?\xce\xb1 = ?\xce\xb2) (a : ?\xce\xb1), cast h\xe2\x82\x81 a = cast h\xe2\x82\x82 a"},{"source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/smt/congruence_closure.lean","line":40},"text":"cc_state.eqv_proof","type":"cc_state \xe2\x86\x92 expr \xe2\x86\x92 expr \xe2\x86\x92 tactic expr"},{"source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/smt/congruence_closure.lean","line":83},"text":"cc_state.fold_eqc","type":"cc_state \xe2\x86\x92 expr \xe2\x86\x92 ?\xce\xb1 \xe2\x86\x92 (?\xce\xb1 \xe2\x86\x92 expr \xe2\x86\x92 ?\xce\xb1) \xe2\x86\x92 ?\xce\xb1"},{"source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/smt/congruence_closure.lean","line":76},"text":"cc_state.fold_eqc_core","type":"cc_state \xe2\x86\x92 (?\xce\xb1 \xe2\x86\x92 expr \xe2\x86\x92 ?\xce\xb1) \xe2\x86\x92 expr \xe2\x86\x92 expr \xe2\x86\x92 ?\xce\xb1 \xe2\x86\x92 ?\xce\xb1"},{"source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/smt/congruence_closure.lean","line":59},"text":"cc_state.has_to_tactic_format","type":"has_to_tactic_format cc_state"},{"source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/smt/congruence_closure.lean","line":33},"text":"cc_state.is_cg_root","type":"cc_state \xe2\x86\x92 expr \xe2\x86\x92 bool"},{"source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/smt/congruence_closure.lean","line":86},"text":"cc_state.mfold_eqc","type":"cc_state \xe2\x86\x92 expr \xe2\x86\x92 ?\xce\xb1 \xe2\x86\x92 (?\xce\xb1 \xe2\x86\x92 expr \xe2\x86\x92 ?m ?\xce\xb1) \xe2\x86\x92 ?m ?\xce\xb1"},{"doc":"`proof_for cc e` constructs a proof for e if it is equivalent to true in 
cc_state","source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/smt/congruence_closure.lean","line":43},"text":"cc_state.proof_for","type":"cc_state \xe2\x86\x92 expr \xe2\x86\x92 tactic expr"},{"doc":"If the given state is inconsistent, return a proof for false. Otherwise fail.","source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/smt/congruence_closure.lean","line":47},"text":"cc_state.proof_for_false","type":"cc_state \xe2\x86\x92 tactic expr"},{"doc":"`refutation_for cc e` constructs a proof for `not e` if it is equivalent to false in cc_state","source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/smt/congruence_closure.lean","line":45},"text":"cc_state.refutation_for","type":"cc_state \xe2\x86\x92 expr \xe2\x86\x92 tactic expr"},{"source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/smt/congruence_closure.lean","line":29},"text":"cc_state.root","type":"cc_state \xe2\x86\x92 expr \xe2\x86\x92 expr"},{"source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/smt/congruence_closure.lean","line":56},"text":"cc_state.roots","type":"cc_state \xe2\x86\x92 list expr"},{"source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/smt/congruence_closure.lean","line":28},"text":"cc_state.roots_core","type":"cc_state \xe2\x86\x92 bool \xe2\x86\x92 list expr"},{"source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/format.lean","line":106},"text":"char.has_to_format","type":"has_to_format char"},{"source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/coe.lean","line":147},"text":"coe_bool_to_Prop","type":"has_coe bool Prop"},{"source":{"column":22,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/coe.lean","line":155},"text":"coe_sort_bool","type":"has_coe_to_sort bool"},{"source":{"column":18,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/congr_lemma.lean","line":29},"text":"congr_arg_kind.has_to_format","type":"has_to_format congr_arg_kind"},{"source":{"column":15,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/congr_lemma.lean","line":37},"text":"congr_lemma.proof","type":"congr_lemma \xe2\x86\x92 expr"},{"source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/converter/interactive.lean","line":93},"text":"conv.interactive.for","type":"interactive.parse (lean.parser.pexpr std.prec.max) \xe2\x86\x92 interactive.parse (interactive.types.list_of lean.parser.small_nat) \xe2\x86\x92 conv.interactive.itactic \xe2\x86\x92 conv unit"},{"source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/converter/interactive.lean","line":12},"text":"conv.save_info","type":"pos \xe2\x86\x92 conv unit"},{"source":{"column":4,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/data/array/basic.lean","line":50},"text":"d_array.foldl","type":"d_array ?n ?\xce\xb1 
\xe2\x86\x92 ?\xce\xb2 \xe2\x86\x92 (\xce\xa0 (i : fin ?n), ?\xce\xb1 i \xe2\x86\x92 ?\xce\xb2 \xe2\x86\x92 ?\xce\xb2) \xe2\x86\x92 ?\xce\xb2"},{"doc":"Map the array. Has builtin VM implementation.","source":{"column":4,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/data/array/basic.lean","line":40},"text":"d_array.foreach","type":"d_array ?n ?\xce\xb1 \xe2\x86\x92 (\xce\xa0 (i : fin ?n), ?\xce\xb1 i \xe2\x86\x92 ?\xce\xb1\' i) \xe2\x86\x92 d_array ?n ?\xce\xb1\'"},{"source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/format.lean","line":94},"text":"decidable.has_to_format","type":"has_to_format (decidable ?p)"},{"source":{"column":8,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":643},"text":"decidable.not_and_iff_or_not","type":"\xe2\x88\x80 (p q : Prop) [d\xe2\x82\x81 : decidable p] [d\xe2\x82\x82 : decidable q], \xc2\xac(p \xe2\x88\xa7 q) \xe2\x86\x94 \xc2\xacp \xe2\x88\xa8 \xc2\xacq"},{"source":{"column":4,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":592},"text":"to_bool","type":"\xce\xa0 (p : Prop) [h : decidable p], bool"},{"source":{"column":4,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":750},"text":"decidable_eq_of_bool_pred","type":"is_dec_eq ?p \xe2\x86\x92 is_dec_refl ?p \xe2\x86\x92 decidable_eq ?\xce\xb1"},{"doc":"Fold over declarations in the environment.","source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/environment.lean","line":147},"text":"environment.fold","type":"environment \xe2\x86\x92 ?\xce\xb1 \xe2\x86\x92 (declaration \xe2\x86\x92 ?\xce\xb1 \xe2\x86\x92 ?\xce\xb1) \xe2\x86\x92 ?\xce\xb1"},{"doc":"Creates an environment containing the module `id` until `decl_name` including dependencies.\\n\\n**ONLY USE THIS FUNCTION IN (CI) SCRIPTS!**","source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/module_info.lean","line":112},"text":"environment.for_decl_of_imported_module","type":"module_info.module_id \xe2\x86\x92 name \xe2\x86\x92 environment"},{"doc":"Creates an environment containing the module `name` until declaration `decl_name`\\nincluding dependencies.\\n\\n**ONLY USE THIS FUNCTION IN (CI) SCRIPTS!**","source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/module_info.lean","line":129},"text":"environment.for_decl_of_imported_module_name","type":"module_info.module_name \xe2\x86\x92 name \xe2\x86\x92 opt_param string \\"\\" \xe2\x86\x92 environment"},{"doc":"Creates an environment containing the module `id` including dependencies.\\n\\n**ONLY USE THIS FUNCTION IN (CI) SCRIPTS!**\\n\\nThe environment `from_imported_module \\".../data/dlist.lean\\"` is roughly equivalent to\\nthe environment at the end of a file containing just `import data.dlist`.","source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/module_info.lean","line":104},"text":"environment.from_imported_module","type":"module_info.module_id \xe2\x86\x92 environment"},{"doc":"Creates an environment containing the module `name` including dependencies.\\n\\n**ONLY USE THIS FUNCTION IN (CI) 
SCRIPTS!**","source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/module_info.lean","line":120},"text":"environment.from_imported_module_name","type":"module_info.module_name \xe2\x86\x92 opt_param string \\"\\" \xe2\x86\x92 environment"},{"doc":"Consider a type `\xcf\x88` which is an inductive datatype using a single constructor `mk (a : \xce\xb1) (b : \xce\xb2) : \xcf\x88`.\\nLean will automatically make two projection functions `a : \xcf\x88 \xe2\x86\x92 \xce\xb1`, `b : \xcf\x88 \xe2\x86\x92 \xce\xb2`.\\nLean tags these declarations as __projections__.\\nThis helps the simplifier / rewriter not have to expand projectors.\\nEg `a (mk x y)` will automatically reduce to `x`.\\nIf you `extend` a structure, all of the projections on the parent will also be created for the child.\\nProjections are also treated differently in the VM for efficiency.\\n\\nNote that projections have nothing to do with the dot `mylist.map` syntax.\\n\\nYou can find out if a declaration is a projection using `environment.is_projection` which returns `projection_info`.\\n\\nData for a projection declaration:\\n- `cname` is the name of the constructor associated with the projection.\\n- `nparams` is the number of constructor parameters. Eg `and.intro` has two type parameters.\\n- `idx` is the parameter being projected by this projection.\\n- `is_class` is tt iff this is a typeclass projection.\\n\\n### Examples:\\n\\n- `and.right` is a projection with ``{cname := `and.intro, nparams := 2, idx := 1, is_class := ff}``\\n- `ordered_ring.neg` is a projection with ``{cname := `ordered_ring.mk, nparams := 1, idx := 5, is_class := tt}``.","source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/environment.lean","line":39},"text":"environment.projection_info","type":"Type"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/environment.lean","line":39},"text":"environment.projection_info.cases_on","type":"\xce\xa0 (n : environment.projection_info), (\xce\xa0 (cname : name) (nparams idx : \xe2\x84\x95) (is_class : bool), ?C {cname := cname, nparams := nparams, idx := idx, is_class := is_class}) \xe2\x86\x92 ?C n"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/environment.lean","line":39},"text":"environment.projection_info.cname","type":"environment.projection_info \xe2\x86\x92 name"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/environment.lean","line":39},"text":"environment.projection_info.has_sizeof_inst","type":"has_sizeof environment.projection_info"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/environment.lean","line":39},"text":"environment.projection_info.idx","type":"environment.projection_info \xe2\x86\x92 \xe2\x84\x95"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/environment.lean","line":39},"text":"environment.projection_info.is_class","type":"environment.projection_info \xe2\x86\x92 bool"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/environment.lean","line":39},"text":"environment.projection_info.mk","type":"name 
\xe2\x86\x92 \xe2\x84\x95 \xe2\x86\x92 \xe2\x84\x95 \xe2\x86\x92 bool \xe2\x86\x92 environment.projection_info"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/environment.lean","line":39},"text":"environment.projection_info.mk.inj","type":"{cname := ?cname, nparams := ?nparams, idx := ?idx, is_class := ?is_class} = {cname := ?cname, nparams := ?nparams, idx := ?idx, is_class := ?is_class} \xe2\x86\x92 ?cname = ?cname \xe2\x88\xa7 ?nparams = ?nparams \xe2\x88\xa7 ?idx = ?idx \xe2\x88\xa7 ?is_class = ?is_class"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/environment.lean","line":39},"text":"environment.projection_info.mk.inj_arrow","type":"{cname := ?cname, nparams := ?nparams, idx := ?idx, is_class := ?is_class} = {cname := ?cname, nparams := ?nparams, idx := ?idx, is_class := ?is_class} \xe2\x86\x92 \xce\xa0 \xe2\xa6\x83P : Sort l\xe2\xa6\x84, (?cname = ?cname \xe2\x86\x92 ?nparams = ?nparams \xe2\x86\x92 ?idx = ?idx \xe2\x86\x92 ?is_class = ?is_class \xe2\x86\x92 P) \xe2\x86\x92 P"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/environment.lean","line":39},"text":"environment.projection_info.mk.sizeof_spec","type":"\xe2\x88\x80 (cname : name) (nparams idx : \xe2\x84\x95) (is_class : bool), environment.projection_info.sizeof {cname := cname, nparams := nparams, idx := idx, is_class := is_class} = 1 + sizeof cname + sizeof nparams + sizeof idx + sizeof is_class"}],"prefix":"foo","response":"ok","seq_num":17}',
            response_type=cmds.CompleteResponse,
            replacement_keys={'type': 'type_'}
        )

    def test_completions_skip_true(self):
        TestCommandResponse.run_tests(
            response_json='{"prefix":"foo","response":"ok","seq_num":18}',
            response_type=cmds.CompleteResponse
        )

    def test_no_completions(self):
        TestCommandResponse.run_tests(
            response_json='{"response":"ok","seq_num":19}',
            response_type=cmds.CompleteResponse
        )


class TestInfoResponse:
    def test_empty(self):
        TestCommandResponse.run_tests(
            response_json='{"response":"ok","seq_num":24}',
            response_type=cmds.InfoResponse
        )

    def test_state(self):
        TestCommandResponse.run_tests(
            response_json='{"record":{"state":"p q : Prop,\\na : p,\\nb : q\\n⊢ p ∧ q ∧ p"},"response":"ok","seq_num":4}',
            response_type=cmds.InfoResponse
        )

    def test_doc(self):
        TestCommandResponse.run_tests(
response_json='{"record":{"doc":" Pi or elet introduction. \\nGiven the tactic state `⊢ Π x : α, Y`, ``intro `hello`` will produce the state `hello : α ⊢ Y[x/hello]`.\\nReturns the new local constant. Similarly for `elet` expressions. \\nIf the target is not a Pi or elet it will try to put it in WHNF.","full-id":"tactic.intro","state":"a b c : ℕ\\n⊢ a = b → c = b → a = c","type":"name → tactic expr"},"response":"ok","seq_num":6}',
            response_type=cmds.InfoResponse,
            replacement_keys={'full-id': 'full_id', "type": "type_"}
        )

    def test_full_id_and_type(self):
        TestCommandResponse.run_tests(
            response_json='{"record":{"full-id":"n","type":"ℕ"},"response":"ok","seq_num":2}',
            response_type=cmds.InfoResponse,
            replacement_keys={'full-id': 'full_id', "type": "type_"}
        )

    def test_param_stuff(self):
        TestCommandResponse.run_tests(
response_json='{"record":{"doc":"An abbreviation for `rewrite`.","source":{"column":10,"file":"test.lean","line":186},"state":"no goals","tactic_param_idx":0,"tactic_params":["([ (←? expr), ... ] | ←? expr)","(at (* | (⊢ | id)*))?","tactic.rewrite_cfg?"],"text":"rw","type":"interactive.parse tactic.interactive.rw_rules → interactive.parse interactive.types.location → opt_param tactic.rewrite_cfg {to_apply_cfg := {md := reducible, approx := tt, new_goals := tactic.new_goals.non_dep_first, instances := tt, auto_param := tt, opt_param := tt, unify := tt}, symm := ff, occs := occurrences.all} → tactic unit"},"response":"ok","seq_num":8}',
            response_type=cmds.InfoResponse,
            replacement_keys={'full-id': 'full_id', "type": "type_"}
        )

    def test_text_and_source(self):
        TestCommandResponse.run_tests(
response_json='{"record":{"source":{"column":10,"file":"test.lean","line":186},"state":"Custom state: 2\\n2 goals\\np q : Prop,\\na : p,\\na_1 : q\\n⊢ p\\n\\np q : Prop,\\na : p,\\na_1 : q\\n⊢ q","tactic_params":[],"text":"assumption","type":"mytac unit"},"response":"ok","seq_num":66}',
            response_type=cmds.InfoResponse,
            replacement_keys={'full-id': 'full_id', "type": "type_"}
        )


class TestSearchResponse:
    def test_searches_found(self):
        TestCommandResponse.run_tests(
response_json='{"response":"ok","results":[{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/core.lean","line":186},"text":"and","type":"Prop \xe2\x86\x92 Prop \xe2\x86\x92 Prop"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/wf.lean","line":11},"text":"acc","type":"(?\xce\xb1 \xe2\x86\x92 ?\xce\xb1 \xe2\x86\x92 Prop) \xe2\x86\x92 ?\xce\xb1 \xe2\x86\x92 Prop"},{"source":{"column":11,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":13},"text":"abs","type":"?\xce\xb1 \xe2\x86\x92 ?\xce\xb1"},{"doc":"A non-dependent array (see `d_array`). Implemented in the VM as a persistent array.","source":{"column":4,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/data/array/basic.lean","line":138},"text":"array","type":"\xe2\x84\x95 \xe2\x86\x92 Type u \xe2\x86\x92 Type u"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/classes.lean","line":148},"text":"asymm","type":"?r ?a ?b \xe2\x86\x92 \xc2\xac?r ?b ?a"},{"doc":"We can\'t have `a` and `\xc2\xaca`, that would be absurd!","source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":29},"text":"absurd","type":"?a \xe2\x86\x92 \xc2\xac?a \xe2\x86\x92 ?b"},{"source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/category/alternative.lean","line":27},"text":"assert","type":"\xce\xa0 (p : Prop) [_inst_3 : decidable p], ?f (inhabited p)"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/core.lean","line":333},"text":"append","type":"?\xce\xb1 \xe2\x86\x92 ?\xce\xb1 \xe2\x86\x92 ?\xce\xb1"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":472},"text":"abs_div","type":"\xe2\x88\x80 (a b : ?\xce\xb1), abs (a / b) = abs a / abs b"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/core.lean","line":334},"text":"andthen","type":"?\xce\xb1 \xe2\x86\x92 ?\xce\xb2 \xe2\x86\x92 ?\xcf\x83"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/ordered_group.lean","line":95},"text":"add_pos","type":"0 < ?a \xe2\x86\x92 0 < ?b \xe2\x86\x92 0 < ?a + ?b"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/ordered_group.lean","line":107},"text":"add_neg","type":"?a < 0 \xe2\x86\x92 ?b < 0 \xe2\x86\x92 ?a + ?b < 0"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":370},"text":"add_sub","type":"\xe2\x88\x80 (a b c : ?\xce\xb1), a + (b - c) = a + b - c"},{"source":{"column":4,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/ring.lean","line":31},"text":"add_mul","type":"\xe2\x88\x80 (a b c : ?\xce\xb1), (a + b) * c = a * c + b * 
c"},{"source":{"column":4,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":967},"text":"as_true","type":"\xce\xa0 (c : Prop) [_inst_1 : decidable c], Prop"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":244},"text":"abs_abs","type":"\xe2\x88\x80 (a : ?\xce\xb1), abs (abs a) = abs a"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":374},"text":"abs_mul","type":"\xe2\x88\x80 (a b : ?\xce\xb1), abs (a * b) = abs a * abs b"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":209},"text":"abs_neg","type":"\xe2\x88\x80 (a : ?\xce\xb1), abs (-a) = abs a"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":218},"text":"abs_sub","type":"\xe2\x88\x80 (a b : ?\xce\xb1), abs (a - b) = abs (b - a)"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":385},"text":"and_comm","type":"\xe2\x88\x80 (a b : Prop), a \xe2\x88\xa7 b \xe2\x86\x94 b \xe2\x88\xa7 a"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/classes.lean","line":145},"text":"antisymm","type":"?r ?a ?b \xe2\x86\x92 ?r ?b ?a \xe2\x86\x92 ?a = ?b"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":206},"text":"abs_zero","type":"abs 0 = 0"},{"source":{"column":8,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":226},"text":"add_zero","type":"\xe2\x88\x80 (a : ?\xce\xb1), a + 0 = a"},{"source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":403},"text":"and_true","type":"\xe2\x88\x80 (a : Prop), a \xe2\x88\xa7 true \xe2\x86\x94 a"},{"source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":421},"text":"and_self","type":"\xe2\x88\x80 (a : Prop), a \xe2\x88\xa7 a \xe2\x86\x94 a"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/classes.lean","line":176},"text":"asymm_of","type":"\xe2\x88\x80 (r : ?\xce\xb1 \xe2\x86\x92 ?\xce\xb1 \xe2\x86\x92 Prop) [_inst_1 : is_asymm ?\xce\xb1 r] {a b : ?\xce\xb1}, r a b \xe2\x86\x92 \xc2\xacr b a"},{"source":{"column":4,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":970},"text":"as_false","type":"\xce\xa0 (c : Prop) [_inst_1 : decidable c], Prop"},{"source":{"column":8,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":226},"text":"add_comm","type":"\xe2\x88\x80 (a b : ?\xce\xb1), a + b = b + a"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":203},"text":"add_group","type":"Type u \xe2\x86\x92 Type 
u"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":392},"text":"and_assoc","type":"\xe2\x88\x80 (a b : Prop), (a \xe2\x88\xa7 b) \xe2\x88\xa7 ?c \xe2\x86\x94 a \xe2\x88\xa7 b \xe2\x88\xa7 ?c"},{"source":{"column":15,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":374},"text":"and_congr","type":"(?a \xe2\x86\x94 ?c) \xe2\x86\x92 (?b \xe2\x86\x94 ?d) \xe2\x86\x92 (?a \xe2\x88\xa7 ?b \xe2\x86\x94 ?c \xe2\x88\xa7 ?d)"},{"source":{"column":27,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":775},"text":"arbitrary","type":"\xce\xa0 (\xce\xb1 : Sort u) [_inst_1 : inhabited \xce\xb1], \xce\xb1"},{"source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":409},"text":"and_false","type":"\xe2\x88\x80 (a : Prop), a \xe2\x88\xa7 false \xe2\x86\x94 false"},{"source":{"column":8,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":226},"text":"add_assoc","type":"\xe2\x88\x80 (a b c : ?\xce\xb1), a + b + c = a + (b + c)"},{"doc":"Gadget for automatic parameter support. This is similar to the opt_param gadget, but it uses\\n the tactic declaration names tac_name to synthesize the argument.\\n Like opt_param, this gadget only affects elaboration.\\n For example, the tactic will *not* be invoked during type class resolution.","source":{"column":17,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/name.lean","line":19},"text":"auto_param","type":"Sort u \xe2\x86\x92 name \xe2\x86\x92 Sort u"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":198},"text":"add_monoid","type":"Type u \xe2\x86\x92 Type u"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/ordered_group.lean","line":66},"text":"add_lt_add","type":"?a < ?b \xe2\x86\x92 ?c < ?d \xe2\x86\x92 ?a + ?c < ?b + ?d"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/ordered_group.lean","line":104},"text":"add_nonpos","type":"?a \xe2\x89\xa4 0 \xe2\x86\x92 ?b \xe2\x89\xa4 0 \xe2\x86\x92 ?a + ?b \xe2\x89\xa4 0"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/ordered_group.lean","line":92},"text":"add_nonneg","type":"0 \xe2\x89\xa4 ?a \xe2\x86\x92 0 \xe2\x89\xa4 ?b \xe2\x86\x92 0 \xe2\x89\xa4 ?a + ?b"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":196},"text":"abs_of_pos","type":"?a > 0 \xe2\x86\x92 abs ?a = ?a"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":237},"text":"abs_nonneg","type":"\xe2\x88\x80 (a : ?\xce\xb1), abs a \xe2\x89\xa5 0"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/ordered_field.lean","line":276},"text":"add_halves","type":"\xe2\x88\x80 (a : ?\xce\xb1), a / 2 + a / 2 = 
a"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":203},"text":"abs_of_neg","type":"?a < 0 \xe2\x86\x92 abs ?a = -?a"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":338},"text":"abs_sub_le","type":"\xe2\x88\x80 (a b c : ?\xce\xb1), abs (a - c) \xe2\x89\xa4 abs (a - b) + abs (b - c)"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/ordered_group.lean","line":55},"text":"add_le_add","type":"?a \xe2\x89\xa4 ?b \xe2\x86\x92 ?c \xe2\x89\xa4 ?d \xe2\x86\x92 ?a + ?c \xe2\x89\xa4 ?b + ?d"},{"source":{"column":4,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":1070},"text":"associative","type":"(?\xce\xb1 \xe2\x86\x92 ?\xce\xb1 \xe2\x86\x92 ?\xce\xb1) \xe2\x86\x92 Prop"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/category/alternative.lean","line":15},"text":"alternative","type":"(Type u \xe2\x86\x92 Type v) \xe2\x86\x92 Type (max (u+1) v)"},{"source":{"column":4,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":372},"text":"and_implies","type":"(?a \xe2\x86\x92 ?c) \xe2\x86\x92 (?b \xe2\x86\x92 ?d) \xe2\x86\x92 ?a \xe2\x88\xa7 ?b \xe2\x86\x92 ?c \xe2\x88\xa7 ?d"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":481},"text":"abs_one_div","type":"\xe2\x88\x80 (a : ?\xce\xb1), abs (1 / a) = 1 / abs a"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/category/applicative.lean","line":31},"text":"applicative","type":"(Type u \xe2\x86\x92 Type v) \xe2\x86\x92 Type (max (u+1) v)"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/category/state.lean","line":159},"text":"adapt_state","type":"(?\xcf\x83\' \xe2\x86\x92 ?\xcf\x83 \xc3\x97 ?\xcf\x83\'\') \xe2\x86\x92 (?\xcf\x83 \xe2\x86\x92 ?\xcf\x83\'\' \xe2\x86\x92 ?\xcf\x83\') \xe2\x86\x92 ?m ?\xce\xb1 \xe2\x86\x92 ?m\' ?\xce\xb1"},{"source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":418},"text":"and_not_self","type":"\xe2\x88\x80 (a : Prop), a \xe2\x88\xa7 \xc2\xaca \xe2\x86\x94 false"},{"source":{"column":4,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":314},"text":"add_neg_self","type":"\xe2\x88\x80 (a : ?\xce\xb1), a + -a = 0"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/category/reader.lean","line":106},"text":"adapt_reader","type":"(?\xcf\x81\' \xe2\x86\x92 ?\xcf\x81) \xe2\x86\x92 ?m ?\xce\xb1 \xe2\x86\x92 ?m\' ?\xce\xb1"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":268},"text":"abs_by_cases","type":"\xe2\x88\x80 (P : ?\xce\xb1 \xe2\x86\x92 Prop) {a : ?\xce\xb1}, P a \xe2\x86\x92 P (-a) \xe2\x86\x92 P (abs 
a)"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/cc_lemmas.lean","line":31},"text":"and_eq_of_eq","type":"?a = ?b \xe2\x86\x92 (?a \xe2\x88\xa7 ?b) = ?a"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/category/except.lean","line":129},"text":"adapt_except","type":"(?\xce\xb5 \xe2\x86\x92 ?\xce\xb5\') \xe2\x86\x92 ?m ?\xce\xb1 \xe2\x86\x92 ?m\' ?\xce\xb1"},{"source":{"column":8,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":226},"text":"add_left_neg","type":"\xe2\x88\x80 (a : ?\xce\xb1), -a + a = 0"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":408},"text":"abs_mul_self","type":"\xe2\x88\x80 (a : ?\xce\xb1), abs (a * a) = a * a"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":397},"text":"and_iff_left","type":"?b \xe2\x86\x92 (?a \xe2\x88\xa7 ?b \xe2\x86\x94 ?a)"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":424},"text":"add_sub_comm","type":"\xe2\x88\x80 (a b c d : ?\xce\xb1), a + b - (c + d) = a - c + (b - d)"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/ordered_field.lean","line":283},"text":"add_midpoint","type":"?a < ?b \xe2\x86\x92 ?a + (?b - ?a) / 2 < ?b"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":186},"text":"add_semigroup","type":"Type u \xe2\x86\x92 Type u"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":192},"text":"abs_of_nonneg","type":"?a \xe2\x89\xa5 0 \xe2\x86\x92 abs ?a = ?a"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":199},"text":"abs_of_nonpos","type":"?a \xe2\x89\xa4 0 \xe2\x86\x92 abs ?a = -?a"},{"source":{"column":14,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/name.lean","line":22},"text":"auto_param_eq","type":"\xe2\x88\x80 (\xce\xb1 : Sort u) (n : name), auto_param \xce\xb1 n = \xce\xb1"},{"source":{"column":8,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":226},"text":"add_left_comm","type":"\xe2\x88\x80 (a b c : ?\xce\xb1), a + (b + c) = b + (a + c)"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":337},"text":"add_sub_assoc","type":"\xe2\x88\x80 (a b c : ?\xce\xb1), a + b - c = a + (b - c)"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":344},"text":"abs_add_three","type":"\xe2\x88\x80 (a b c : ?\xce\xb1), abs (a + b + c) \xe2\x89\xa4 abs a + abs b + abs c"},{"source":{"column":8,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":226},"text":"add_right_neg","type":"\xe2\x88\x80 
(a : ?\xce\xb1), a + -a = 0"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":400},"text":"and_iff_right","type":"?a \xe2\x86\x92 (?a \xe2\x88\xa7 ?b \xe2\x86\x94 ?b)"},{"source":{"column":8,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":226},"text":"add_right_comm","type":"\xe2\x88\x80 (a b c : ?\xce\xb1), a + b + c = a + c + b"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":212},"text":"abs_pos_of_pos","type":"?a > 0 \xe2\x86\x92 abs ?a > 0"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":334},"text":"add_sub_cancel","type":"\xe2\x88\x80 (a b : ?\xce\xb1), a + b - b = a"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":206},"text":"add_comm_group","type":"Type u \xe2\x86\x92 Type u"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":215},"text":"abs_pos_of_neg","type":"?a < 0 \xe2\x86\x92 abs ?a > 0"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":438},"text":"abs_sub_square","type":"\xe2\x88\x80 (a b : ?\xce\xb1), abs (a - b) * abs (a - b) = a * a + b * b - (1 + 1) * a * b"},{"source":{"column":4,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":1039},"text":"anti_symmetric","type":"(?\xce\xb2 \xe2\x86\x92 ?\xce\xb2 \xe2\x86\x92 Prop) \xe2\x86\x92 Prop"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":201},"text":"add_comm_monoid","type":"Type u \xe2\x86\x92 Type u"},{"source":{"column":8,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/ring.lean","line":235},"text":"add_mul_self_eq","type":"\xe2\x88\x80 (a b : ?\xce\xb1), (a + b) * (a + b) = a * a + 2 * a * b + b * b"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/ordered_group.lean","line":38},"text":"add_lt_add_left","type":"?a < ?b \xe2\x86\x92 \xe2\x88\x80 (c : ?\xce\xb1), c + ?a < c + ?b"},{"doc":"Copy a list of meta definitions in the current namespace to tactic.interactive.\\n\\nThis command is useful when we want to update tactic.interactive without closing the current namespace.","source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/meta/interactive.lean","line":1630},"text":"add_interactive","type":"list name \xe2\x86\x92 opt_param name (name.mk_string \\"interactive\\" (name.mk_string \\"tactic\\" name.anonymous)) \xe2\x86\x92 tactic unit"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/ordered_group.lean","line":27},"text":"add_le_add_left","type":"?a \xe2\x89\xa4 ?b \xe2\x86\x92 \xe2\x88\x80 (c : ?\xce\xb1), c + ?a \xe2\x89\xa4 c + 
?b"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/logic.lean","line":377},"text":"and_congr_right","type":"(?a \xe2\x86\x92 (?b \xe2\x86\x94 ?c)) \xe2\x86\x92 (?a \xe2\x88\xa7 ?b \xe2\x86\x94 ?a \xe2\x88\xa7 ?c)"},{"source":{"column":8,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":226},"text":"add_left_cancel","type":"?a + ?b = ?a + ?c \xe2\x86\x92 ?b = ?c"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/ordered_field.lean","line":295},"text":"add_self_div_two","type":"\xe2\x88\x80 (a : ?\xce\xb1), (a + a) / 2 = a"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/ordered_group.lean","line":46},"text":"add_le_add_right","type":"?a \xe2\x89\xa4 ?b \xe2\x86\x92 \xe2\x88\x80 (c : ?\xce\xb1), ?a + c \xe2\x89\xa4 ?b + c"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/ordered_group.lean","line":608},"text":"add_le_add_three","type":"?a \xe2\x89\xa4 ?d \xe2\x86\x92 ?b \xe2\x89\xa4 ?e \xe2\x86\x92 ?c \xe2\x89\xa4 ?f \xe2\x86\x92 ?a + ?b + ?c \xe2\x89\xa4 ?d + ?e + ?f"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":388},"text":"add_eq_of_eq_sub","type":"?a = ?c - ?b \xe2\x86\x92 ?a + ?b = ?c"},{"source":{"column":8,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":226},"text":"add_right_cancel","type":"?a + ?b = ?c + ?b \xe2\x86\x92 ?a = ?c"},{"source":{"column":8,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/ordered_group.lean","line":49},"text":"add_lt_add_right","type":"?a < ?b \xe2\x86\x92 \xe2\x88\x80 (c : ?\xce\xb1), ?a + c < ?b + c"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":405},"text":"abs_mul_abs_self","type":"\xe2\x88\x80 (a : ?\xce\xb1), abs a * abs a = a * a"},{"source":{"column":9,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":321},"text":"add_group_has_sub","type":"has_sub ?\xce\xb1"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":418},"text":"add_eq_of_eq_sub\'","type":"?b = ?c - ?a \xe2\x86\x92 ?a + ?b = ?c"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/functions.lean","line":265},"text":"abs_pos_of_ne_zero","type":"?a \xe2\x89\xa0 0 \xe2\x86\x92 abs ?a > 0"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":189},"text":"add_comm_semigroup","type":"Type u \xe2\x86\x92 Type u"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/ordered_group.lean","line":157},"text":"add_lt_of_le_of_neg","type":"?b \xe2\x89\xa4 ?c \xe2\x86\x92 ?a < 0 \xe2\x86\x92 ?b + ?a < 
?c"},{"source":{"column":6,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/ordered_group.lean","line":181},"text":"add_lt_of_lt_of_neg","type":"?b < ?c \xe2\x86\x92 ?a < 0 \xe2\x86\x92 ?b + ?a < ?c"},{"source":{"column":8,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":226},"text":"add_left_cancel_iff","type":"?a + ?b = ?a + ?c \xe2\x86\x94 ?b = ?c"},{"source":{"column":8,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/algebra/group.lean","line":226},"text":"add_neg_cancel_left","type":"\xe2\x88\x80 (a b : ?\xce\xb1), a + (-a + b) = b"},{"source":{"column":10,"file":"/Users/jasonrute/.elan/toolchains/leanprover-community-lean-3.9.0/lib/lean/library/init/wf.lean","line":11},"text":"acc.cases_on","type":"acc ?r ?a \xe2\x86\x92 (\xce\xa0 (x : ?\xce\xb1), (\xce\xa0 (y : ?\xce\xb1), ?r y x \xe2\x86\x92 acc ?r y) \xe2\x86\x92 ?C x) \xe2\x86\x92 ?C ?a"}],"seq_num":20}',
response_type=cmds.SearchResponse,
replacement_keys={'type': 'type_'}
)
class TestRoiResponse:
    def test_response(self):
        TestCommandResponse.run_tests(
            response_json='{"response":"ok","seq_num":23}',
            response_type=cmds.RoiResponse
        )
| 206.060694 | 29,714 | 0.687617 | 11,177 | 71,297 | 4.299723 | 0.066207 | 0.034833 | 0.048878 | 0.091556 | 0.796517 | 0.780432 | 0.763265 | 0.732948 | 0.715177 | 0.69826 | 0 | 0.058842 | 0.088727 | 71,297 | 345 | 29,715 | 206.657971 | 0.680581 | 0.01216 | 0 | 0.432432 | 0 | 0.07722 | 0.668079 | 0.484791 | 0 | 0 | 0 | 0 | 0.235521 | 1 | 0.138996 | false | 0 | 0.011583 | 0 | 0.212355 | 0.007722 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
18a851ab5b492ed14d71c7dc68b1475017ff13fb | 31 | py | Python | code/wikibase/password.py | HeardLibrary/digital-scholarship | c2a791376ecea4efff4ff57c7a93b291b605d956 | [
"CC0-1.0"
] | 25 | 2018-09-27T03:46:38.000Z | 2022-03-13T00:08:22.000Z | code/wikibase/password.py | HeardLibrary/digital-scholarship | c2a791376ecea4efff4ff57c7a93b291b605d956 | [
"CC0-1.0"
] | 22 | 2019-07-23T15:30:14.000Z | 2022-03-29T22:04:37.000Z | code/wikibase/password.py | HeardLibrary/digital-scholarship | c2a791376ecea4efff4ff57c7a93b291b605d956 | [
"CC0-1.0"
] | 18 | 2019-01-28T16:40:28.000Z | 2022-01-13T01:59:00.000Z | ("botUsername", "botPassword")
| 15.5 | 30 | 0.709677 | 2 | 31 | 11 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.064516 | 31 | 1 | 31 | 31 | 0.758621 | 0 | 0 | 0 | 0 | 0 | 0.709677 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 1 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 7 |
18cd9b30a6574a794e8c90f300c3042972cc2011 | 5,405 | py | Python | run_all.py | zackfei0721/LP-Recognition | 8bc0efc8a7a5c5d0806914282c1a4a1eb5839644 | [
"MIT"
] | null | null | null | run_all.py | zackfei0721/LP-Recognition | 8bc0efc8a7a5c5d0806914282c1a4a1eb5839644 | [
"MIT"
] | null | null | null | run_all.py | zackfei0721/LP-Recognition | 8bc0efc8a7a5c5d0806914282c1a4a1eb5839644 | [
"MIT"
] | null | null | null | import os
os.system("python cdetecor.py --image test(1).jpg")
os.system("python cdetecor.py --image test(2).jpg")
os.system("python cdetecor.py --image test(3).jpg")
os.system("python cdetecor.py --image test(4).jpg")
os.system("python cdetecor.py --image test(5).jpg")
os.system("python cdetecor.py --image test(6).jpg")
os.system("python cdetecor.py --image test(7).jpg")
os.system("python cdetecor.py --image test(8).jpg")
os.system("python cdetecor.py --image test(9).jpg")
os.system("python cdetecor.py --image test(10).jpg")
os.system("python cdetecor.py --image test(11).jpg")
os.system("python cdetecor.py --image test(12).jpg")
os.system("python cdetecor.py --image test(13).jpg")
os.system("python cdetecor.py --image test(14).jpg")
os.system("python cdetecor.py --image test(15).jpg")
os.system("python cdetecor.py --image test(16).jpg")
os.system("python cdetecor.py --image test(17).jpg")
os.system("python cdetecor.py --image test(18).jpg")
os.system("python cdetecor.py --image test(19).jpg")
os.system("python cdetecor.py --image test(20).jpg")
os.system("python cdetecor.py --image test(21).jpg")
os.system("python cdetecor.py --image test(22).jpg")
os.system("python cdetecor.py --image test(23).jpg")
os.system("python cdetecor.py --image test(24).jpg")
os.system("python cdetecor.py --image test(25).jpg")
os.system("python cdetecor.py --image test(26).jpg")
os.system("python cdetecor.py --image test(27).jpg")
os.system("python cdetecor.py --image test(28).jpg")
os.system("python cdetecor.py --image test(29).jpg")
os.system("python cdetecor.py --image test(30).jpg")
os.system("python cdetecor.py --image test(31).jpg")
os.system("python cdetecor.py --image test(32).jpg")
os.system("python cdetecor.py --image test(33).jpg")
os.system("python cdetecor.py --image test(34).jpg")
os.system("python cdetecor.py --image test(35).jpg")
os.system("python cdetecor.py --image test(36).jpg")
os.system("python cdetecor.py --image test(37).jpg")
os.system("python cdetecor.py --image test(38).jpg")
os.system("python cdetecor.py --image test(39).jpg")
os.system("python cdetecor.py --image test(40).jpg")
os.system("python cdetecor.py --image test(41).jpg")
os.system("python cdetecor.py --image test(42).jpg")
os.system("python cdetecor.py --image test(43).jpg")
os.system("python cdetecor.py --image test(44).jpg")
os.system("python cdetecor.py --image test(45).jpg")
os.system("python cdetecor.py --image test(46).jpg")
os.system("python cdetecor.py --image test(47).jpg")
os.system("python cdetecor.py --image test(48).jpg")
os.system("python cdetecor.py --image test(49).jpg")
os.system("python cdetecor.py --image test(50).jpg")
os.system("python cdetecor.py --image test(51).jpg")
os.system("python cdetecor.py --image test(52).jpg")
os.system("python cdetecor.py --image test(53).jpg")
os.system("python cdetecor.py --image test(54).jpg")
os.system("python cdetecor.py --image test(55).jpg")
os.system("python cdetecor.py --image test(56).jpg")
os.system("python cdetecor.py --image test(57).jpg")
os.system("python cdetecor.py --image test(58).jpg")
os.system("python cdetecor.py --image test(59).jpg")
os.system("python cdetecor.py --image test(60).jpg")
os.system("python cdetecor.py --image test(61).jpg")
os.system("python cdetecor.py --image test(62).jpg")
os.system("python cdetecor.py --image test(63).jpg")
os.system("python cdetecor.py --image test(64).jpg")
os.system("python cdetecor.py --image test(65).jpg")
os.system("python cdetecor.py --image test(66).jpg")
os.system("python cdetecor.py --image test(67).jpg")
os.system("python cdetecor.py --image test(68).jpg")
os.system("python cdetecor.py --image test(69).jpg")
os.system("python cdetecor.py --image test(70).jpg")
os.system("python cdetecor.py --image test(71).jpg")
os.system("python cdetecor.py --image test(72).jpg")
os.system("python cdetecor.py --image test(73).jpg")
os.system("python cdetecor.py --image test(74).jpg")
os.system("python cdetecor.py --image test(75).jpg")
os.system("python cdetecor.py --image test(76).jpg")
os.system("python cdetecor.py --image test(77).jpg")
os.system("python cdetecor.py --image test(78).jpg")
os.system("python cdetecor.py --image test(79).jpg")
os.system("python cdetecor.py --image test(80).jpg")
os.system("python cdetecor.py --image test(81).jpg")
os.system("python cdetecor.py --image test(82).jpg")
os.system("python cdetecor.py --image test(83).jpg")
os.system("python cdetecor.py --image test(84).jpg")
os.system("python cdetecor.py --image test(85).jpg")
os.system("python cdetecor.py --image test(86).jpg")
os.system("python cdetecor.py --image test(87).jpg")
os.system("python cdetecor.py --image test(88).jpg")
os.system("python cdetecor.py --image test(89).jpg")
os.system("python cdetecor.py --image test(90).jpg")
os.system("python cdetecor.py --image test(91).jpg")
os.system("python cdetecor.py --image test(92).jpg")
os.system("python cdetecor.py --image test(93).jpg")
os.system("python cdetecor.py --image test(94).jpg")
os.system("python cdetecor.py --image test(95).jpg")
os.system("python cdetecor.py --image test(96).jpg")
os.system("python cdetecor.py --image test(97).jpg")
os.system("python cdetecor.py --image test(98).jpg")
os.system("python cdetecor.py --image test(99).jpg")
os.system("python cdetecor.py --image test(100).jpg")
| 52.475728 | 54 | 0.703053 | 902 | 5,405 | 4.21286 | 0.120843 | 0.210526 | 0.368421 | 0.578947 | 0.946579 | 0.946579 | 0.946579 | 0.937895 | 0 | 0 | 0 | 0.039184 | 0.093432 | 5,405 | 102 | 55 | 52.990196 | 0.736327 | 0 | 0 | 0 | 0 | 0 | 0.733924 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.009901 | 0 | 0.009901 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
18e669c9ed7a3d5f3cf708d7f421480dae797e48 | 93,523 | py | Python | iengage_client/apis/project_management_api.py | iEngage/python-sdk | 76cc6ed697d7599ce9af74124c12d33ad5aff419 | [
"Apache-2.0"
] | null | null | null | iengage_client/apis/project_management_api.py | iEngage/python-sdk | 76cc6ed697d7599ce9af74124c12d33ad5aff419 | [
"Apache-2.0"
] | null | null | null | iengage_client/apis/project_management_api.py | iEngage/python-sdk | 76cc6ed697d7599ce9af74124c12d33ad5aff419 | [
"Apache-2.0"
] | null | null | null | # coding: utf-8
"""
iEngage 2.0 API
This API enables Intelligent Engagement for your Business. iEngage is a platform that combines process, augmented intelligence and rewards to help you intelligently engage customers.
OpenAPI spec version: 2.0
Generated by: https://github.com/swagger-api/swagger-codegen.git
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
from __future__ import absolute_import
import sys
import os
import re
# python 2 and python 3 compatibility library
from six import iteritems
from ..configuration import Configuration
from ..api_client import ApiClient
class ProjectManagementApi(object):
"""
NOTE: This class is auto generated by the swagger code generator program.
Do not edit the class manually.
Ref: https://github.com/swagger-api/swagger-codegen
"""
def __init__(self, api_client=None):
config = Configuration()
if api_client:
self.api_client = api_client
else:
if not config.api_client:
config.api_client = ApiClient()
self.api_client = config.api_client
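# A minimal usage sketch (illustrative only; the e-mail address, token and
# callback below are placeholders, not values defined by this client):
#
#   api = ProjectManagementApi()
#   # synchronous call: returns the deserialized response object
#   milestones = api.get_milestones(requester_id="user@example.com",
#                                   client_token="<client-token>")
#   # asynchronous call: pass callback= and the request thread is returned
#   thread = api.get_milestones(requester_id="user@example.com",
#                               client_token="<client-token>",
#                               callback=lambda response: print(response))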
def add_milestone_comment(self, milestone_id, requester_id, client_token, **kwargs):
"""
Comment on milestone
This service allows a user to comment on a milestone. The following fields(key:value) are required to be present in the Comment JSON object. Refer to the Model & Model Schema of the expected JSON Object for the body of this API. **Required fields** 1. milestoneId (Path Parameter) 2. commentText
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.add_milestone_comment(milestone_id, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int milestone_id: milestoneId (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param Comment body:
:param str access_token: Unique session token for user. To get access token user will have to authenticate
:return: VerveResponseComment
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.add_milestone_comment_with_http_info(milestone_id, requester_id, client_token, **kwargs)
else:
(data) = self.add_milestone_comment_with_http_info(milestone_id, requester_id, client_token, **kwargs)
return data
def add_milestone_comment_with_http_info(self, milestone_id, requester_id, client_token, **kwargs):
"""
Comment on milestone
This service allows a user to comment on a milestone. The following fields(key:value) are required to be present in the Comment JSON object. Refer to the Model & Model Schema of the expected JSON Object for the body of this API. **Required fields** 1. milestoneId (Path Parameter) 2. commentText
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.add_milestone_comment_with_http_info(milestone_id, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int milestone_id: milestoneId (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param Comment body:
:param str access_token: Unique session token for user. To get access token user will have to authenticate
:return: VerveResponseComment
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['milestone_id', 'requester_id', 'client_token', 'body', 'access_token']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method add_milestone_comment" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'milestone_id' is set
if ('milestone_id' not in params) or (params['milestone_id'] is None):
raise ValueError("Missing the required parameter `milestone_id` when calling `add_milestone_comment`")
# verify the required parameter 'requester_id' is set
if ('requester_id' not in params) or (params['requester_id'] is None):
raise ValueError("Missing the required parameter `requester_id` when calling `add_milestone_comment`")
# verify the required parameter 'client_token' is set
if ('client_token' not in params) or (params['client_token'] is None):
raise ValueError("Missing the required parameter `client_token` when calling `add_milestone_comment`")
resource_path = '/milestones/{milestoneId}/comments'.replace('{format}', 'json')
path_params = {}
if 'milestone_id' in params:
path_params['milestoneId'] = params['milestone_id']
query_params = {}
header_params = {}
if 'requester_id' in params:
header_params['requesterId'] = params['requester_id']
if 'access_token' in params:
header_params['accessToken'] = params['access_token']
if 'client_token' in params:
header_params['clientToken'] = params['client_token']
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'application/x-www-form-urlencoded'])
# Authentication setting
auth_settings = ['default']
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='VerveResponseComment',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def add_task_comment(self, task_id, requester_id, client_token, **kwargs):
"""
Comment on task
This service allows a user to comment on a task. The following fields(key:value) are required to be present in the Comment JSON object. Refer to the Model & Model Schema of the expected JSON Object for the body of this API. **Required fields** 1. **taskId (Path Parameter)** 2. **commentText**
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.add_task_comment(task_id, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int task_id: taskId (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param Comment body:
:param str access_token: Unique session token for user. To get access token user will have to authenticate
:return: VerveResponseComment
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.add_task_comment_with_http_info(task_id, requester_id, client_token, **kwargs)
else:
(data) = self.add_task_comment_with_http_info(task_id, requester_id, client_token, **kwargs)
return data
def add_task_comment_with_http_info(self, task_id, requester_id, client_token, **kwargs):
"""
Comment on task
This service allows a user to comment on a task. The following fields(key:value) are required to be present in the Comment JSON object. Refer to the Model & Model Schema of the expected JSON Object for the body of this API. **Required fields** 1. **taskId (Path Parameter)** 2. **commentText**
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.add_task_comment_with_http_info(task_id, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int task_id: taskId (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param Comment body:
:param str access_token: Unique session token for user. To get access token user will have to authenticate
:return: VerveResponseComment
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['task_id', 'requester_id', 'client_token', 'body', 'access_token']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method add_task_comment" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'task_id' is set
if ('task_id' not in params) or (params['task_id'] is None):
raise ValueError("Missing the required parameter `task_id` when calling `add_task_comment`")
# verify the required parameter 'requester_id' is set
if ('requester_id' not in params) or (params['requester_id'] is None):
raise ValueError("Missing the required parameter `requester_id` when calling `add_task_comment`")
# verify the required parameter 'client_token' is set
if ('client_token' not in params) or (params['client_token'] is None):
raise ValueError("Missing the required parameter `client_token` when calling `add_task_comment`")
resource_path = '/milestones/tasks/{taskId}/comments'.replace('{format}', 'json')
path_params = {}
if 'task_id' in params:
path_params['taskId'] = params['task_id']
query_params = {}
header_params = {}
if 'requester_id' in params:
header_params['requesterId'] = params['requester_id']
if 'access_token' in params:
header_params['accessToken'] = params['access_token']
if 'client_token' in params:
header_params['clientToken'] = params['client_token']
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'application/x-www-form-urlencoded'])
# Authentication setting
auth_settings = ['default']
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='VerveResponseComment',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def create_milestone(self, requester_id, client_token, **kwargs):
"""
Create milestone
This service allows a user to create a milestone. The following fields(key:value) are required to be present in the Milestone JSON object. Refer to the Model & Model Schema of the expected JSON Object for the body of this API. **Required fields** 1. **milestoneTitle** 2. **milestoneDescription** 3. **dueDate** 4. **neverDue**
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_milestone(requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param Milestone body:
:param str access_token: Unique session token for user. To get access token user will have to authenticate
:return: VerveResponseMilestone
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.create_milestone_with_http_info(requester_id, client_token, **kwargs)
else:
(data) = self.create_milestone_with_http_info(requester_id, client_token, **kwargs)
return data
def create_milestone_with_http_info(self, requester_id, client_token, **kwargs):
"""
Create milestone
This service allows a user to create a milestone. The following fields(key:value) are required to be present in the Milestone JSON object. Refer to the Model & Model Schema of the expected JSON Object for the body of this API. **Required fields** 1. **milestoneTitle** 2. **milestoneDescription** 3. **dueDate** 4. **neverDue**
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_milestone_with_http_info(requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param Milestone body:
:param str access_token: Unique session token for user. To get access token user will have to authenticate
:return: VerveResponseMilestone
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['requester_id', 'client_token', 'body', 'access_token']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_milestone" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'requester_id' is set
if ('requester_id' not in params) or (params['requester_id'] is None):
raise ValueError("Missing the required parameter `requester_id` when calling `create_milestone`")
# verify the required parameter 'client_token' is set
if ('client_token' not in params) or (params['client_token'] is None):
raise ValueError("Missing the required parameter `client_token` when calling `create_milestone`")
resource_path = '/milestones'.replace('{format}', 'json')
path_params = {}
query_params = {}
header_params = {}
if 'requester_id' in params:
header_params['requesterId'] = params['requester_id']
if 'access_token' in params:
header_params['accessToken'] = params['access_token']
if 'client_token' in params:
header_params['clientToken'] = params['client_token']
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'application/x-www-form-urlencoded'])
# Authentication setting
auth_settings = ['default']
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='VerveResponseMilestone',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def create_task(self, milestone_id, requester_id, client_token, **kwargs):
"""
Create task
This service allows a user to create a task. The following fields(key:value) are required to be present in the Task JSON object. Refer to the Model & Model Schema of the expected JSON Object for the body of this API. **Required fields** 1. **taskTitle** 2. **taskDescription** 3. **priority** 4. **dueDate** 5. **assigneeUserId** 6. **neverDue** 7. **user: { userId }**
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_task(milestone_id, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int milestone_id: Milestone Id (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param Task body:
:param str access_token: Unique session token for user. To get access token user will have to authenticate
:return: VerveResponseTask
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.create_task_with_http_info(milestone_id, requester_id, client_token, **kwargs)
else:
(data) = self.create_task_with_http_info(milestone_id, requester_id, client_token, **kwargs)
return data
def create_task_with_http_info(self, milestone_id, requester_id, client_token, **kwargs):
"""
Create task
This service allows a user to create a task. The following fields(key:value) are required to be present in the Task JSON object. Refer to the Model & Model Schema of the expected JSON Object for the body of this API. **Required fields** 1. **taskTitle** 2. **taskDescription** 3. **priority** 4. **dueDate** 5. **assigneeUserId** 6. **neverDue** 7. **user: { userId }**
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.create_task_with_http_info(milestone_id, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int milestone_id: Milestone Id (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param Task body:
:param str access_token: Unique session token for user. To get access token user will have to authenticate
:return: VerveResponseTask
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['milestone_id', 'requester_id', 'client_token', 'body', 'access_token']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method create_task" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'milestone_id' is set
if ('milestone_id' not in params) or (params['milestone_id'] is None):
raise ValueError("Missing the required parameter `milestone_id` when calling `create_task`")
# verify the required parameter 'requester_id' is set
if ('requester_id' not in params) or (params['requester_id'] is None):
raise ValueError("Missing the required parameter `requester_id` when calling `create_task`")
# verify the required parameter 'client_token' is set
if ('client_token' not in params) or (params['client_token'] is None):
raise ValueError("Missing the required parameter `client_token` when calling `create_task`")
resource_path = '/milestones/{milestoneId}/tasks'.replace('{format}', 'json')
path_params = {}
if 'milestone_id' in params:
path_params['milestoneId'] = params['milestone_id']
query_params = {}
header_params = {}
if 'requester_id' in params:
header_params['requesterId'] = params['requester_id']
if 'access_token' in params:
header_params['accessToken'] = params['access_token']
if 'client_token' in params:
header_params['clientToken'] = params['client_token']
form_params = []
local_var_files = {}
body_params = None
if 'body' in params:
body_params = params['body']
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json', 'application/x-www-form-urlencoded'])
# Authentication setting
auth_settings = ['default']
return self.api_client.call_api(resource_path, 'POST',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='VerveResponseTask',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def delete_milestone(self, milestone_id, requester_id, client_token, **kwargs):
"""
Delete milestone
Allows the user to delete a milestone. Returns the deleted milestone.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_milestone(milestone_id, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int milestone_id: milestoneId (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param str fields: Filter fields in result list /* **A) Default values -** 1)milestoneId 2)milestoneTitle 3)milestoneDescription 4)createdDate **A) Available values-** 1)milestoneId 2)milestoneTitle 3)milestoneDescription 4)createdDate 5)status 6)priority 7)dueDate */
:param str access_token: Unique session token for user. To get access token user will have to authenticate
:return: VerveResponseMilestone
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.delete_milestone_with_http_info(milestone_id, requester_id, client_token, **kwargs)
else:
(data) = self.delete_milestone_with_http_info(milestone_id, requester_id, client_token, **kwargs)
return data
def delete_milestone_with_http_info(self, milestone_id, requester_id, client_token, **kwargs):
"""
Delete milestone
Allows the user to delete a milestone. Returns the deleted milestone.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_milestone_with_http_info(milestone_id, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int milestone_id: milestoneId (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param str fields: Filter fields in result list /* **A) Default values -** 1)milestoneId 2)milestoneTitle 3)milestoneDescription 4)createdDate **A) Available values-** 1)milestoneId 2)milestoneTitle 3)milestoneDescription 4)createdDate 5)status 6)priority 7)dueDate */
:param str access_token: Unique session token for user. To get access token user will have to authenticate
:return: VerveResponseMilestone
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['milestone_id', 'requester_id', 'client_token', 'fields', 'access_token']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_milestone" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'milestone_id' is set
if ('milestone_id' not in params) or (params['milestone_id'] is None):
raise ValueError("Missing the required parameter `milestone_id` when calling `delete_milestone`")
# verify the required parameter 'requester_id' is set
if ('requester_id' not in params) or (params['requester_id'] is None):
raise ValueError("Missing the required parameter `requester_id` when calling `delete_milestone`")
# verify the required parameter 'client_token' is set
if ('client_token' not in params) or (params['client_token'] is None):
raise ValueError("Missing the required parameter `client_token` when calling `delete_milestone`")
resource_path = '/milestones/{milestoneId}'.replace('{format}', 'json')
path_params = {}
if 'milestone_id' in params:
path_params['milestoneId'] = params['milestone_id']
query_params = {}
header_params = {}
if 'requester_id' in params:
header_params['requesterId'] = params['requester_id']
if 'access_token' in params:
header_params['accessToken'] = params['access_token']
if 'client_token' in params:
header_params['clientToken'] = params['client_token']
form_params = []
local_var_files = {}
if 'fields' in params:
form_params.append(('fields', params['fields']))
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/x-www-form-urlencoded'])
# Authentication setting
auth_settings = ['default']
return self.api_client.call_api(resource_path, 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='VerveResponseMilestone',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def delete_task(self, task_id, requester_id, client_token, **kwargs):
"""
Delete task
Allows the user to delete a task. Returns the deleted task.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_task(task_id, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int task_id: taskId (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param str fields: Filter fields in result list /* **A) Default values -** 1)taskId 2)taskTitle 3)taskDescription 4)dueDate **A) Available values-** 1)taskId 2)taskTitle 3)taskDescription 4)status 5)priority 6)dueDate 7)milestoneName 8)groupType 9)groupName */
:param str access_token: Unique session token for user. To get access token user will have to authenticate
:return: VerveResponseTask
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.delete_task_with_http_info(task_id, requester_id, client_token, **kwargs)
else:
(data) = self.delete_task_with_http_info(task_id, requester_id, client_token, **kwargs)
return data
def delete_task_with_http_info(self, task_id, requester_id, client_token, **kwargs):
"""
Delete task
Allows the user to delete a task. Returns the deleted task.
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.delete_task_with_http_info(task_id, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int task_id: taskId (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param str fields: Filter fields in result list /* **A) Default values -** 1)taskId 2)taskTitle 3)taskDescription 4)dueDate **A) Available values-** 1)taskId 2)taskTitle 3)taskDescription 4)status 5)priority 6)dueDate 7)milestoneName 8)groupType 9)groupName */
:param str access_token: Unique session token for user. To get access token user will have to authenticate
:return: VerveResponseTask
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['task_id', 'requester_id', 'client_token', 'fields', 'access_token']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method delete_task" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'task_id' is set
if ('task_id' not in params) or (params['task_id'] is None):
raise ValueError("Missing the required parameter `task_id` when calling `delete_task`")
# verify the required parameter 'requester_id' is set
if ('requester_id' not in params) or (params['requester_id'] is None):
raise ValueError("Missing the required parameter `requester_id` when calling `delete_task`")
# verify the required parameter 'client_token' is set
if ('client_token' not in params) or (params['client_token'] is None):
raise ValueError("Missing the required parameter `client_token` when calling `delete_task`")
resource_path = '/milestones/tasks/{taskId}'.replace('{format}', 'json')
path_params = {}
if 'task_id' in params:
path_params['taskId'] = params['task_id']
query_params = {}
header_params = {}
if 'requester_id' in params:
header_params['requesterId'] = params['requester_id']
if 'access_token' in params:
header_params['accessToken'] = params['access_token']
if 'client_token' in params:
header_params['clientToken'] = params['client_token']
form_params = []
local_var_files = {}
if 'fields' in params:
form_params.append(('fields', params['fields']))
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/x-www-form-urlencoded'])
# Authentication setting
auth_settings = ['default']
return self.api_client.call_api(resource_path, 'DELETE',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='VerveResponseTask',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def get_milestones(self, requester_id, client_token, **kwargs):
"""
Get list of milestones
Returns the list of milestones
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_milestones(requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param int organization_id: organizationId
:param str fields: Filter fields in result list /* **A) Default values -** 1)milestoneId 2)milestoneTitle 3)milestoneDescription 4)createdDate **A) Available values-** 1)milestoneId 2)milestoneTitle 3)milestoneDescription 4)createdDate 5)status 6)priority 7)dueDate */
:param str access_token: Unique session token for user. To get access token user will have to authenticate
:return: VerveResponseMilestoneList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_milestones_with_http_info(requester_id, client_token, **kwargs)
else:
(data) = self.get_milestones_with_http_info(requester_id, client_token, **kwargs)
return data
def get_milestones_with_http_info(self, requester_id, client_token, **kwargs):
"""
Get list of milestones
Returns the list of milestones
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_milestones_with_http_info(requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param int organization_id: organizationId
:param str fields: Filter fields in result list /* **A) Default values -** 1)milestoneId 2)milestoneTitle 3)milestoneDescription 4)createdDate **A) Available values-** 1)milestoneId 2)milestoneTitle 3)milestoneDescription 4)createdDate 5)status 6)priority 7)dueDate */
:param str access_token: Unique session token for user. To get access token user will have to authenticate
:return: VerveResponseMilestoneList
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['requester_id', 'client_token', 'organization_id', 'fields', 'access_token']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_milestones" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'requester_id' is set
if ('requester_id' not in params) or (params['requester_id'] is None):
raise ValueError("Missing the required parameter `requester_id` when calling `get_milestones`")
# verify the required parameter 'client_token' is set
if ('client_token' not in params) or (params['client_token'] is None):
raise ValueError("Missing the required parameter `client_token` when calling `get_milestones`")
resource_path = '/milestones'.replace('{format}', 'json')
path_params = {}
query_params = {}
if 'organization_id' in params:
query_params['organizationId'] = params['organization_id']
if 'fields' in params:
query_params['fields'] = params['fields']
header_params = {}
if 'requester_id' in params:
header_params['requesterId'] = params['requester_id']
if 'access_token' in params:
header_params['accessToken'] = params['access_token']
if 'client_token' in params:
header_params['clientToken'] = params['client_token']
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['default']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='VerveResponseMilestoneList',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def get_milestones_comments(self, milestone_id, requester_id, client_token, **kwargs):
"""
Get list of comments written on a milestone
Returns the list of comments written on a milestone
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_milestones_comments(milestone_id, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int milestone_id: milestoneId (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param str access_token: Unique session token for user. To get access token user will have to authenticate
:return: VerveResponseCommentList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_milestones_comments_with_http_info(milestone_id, requester_id, client_token, **kwargs)
else:
(data) = self.get_milestones_comments_with_http_info(milestone_id, requester_id, client_token, **kwargs)
return data
def get_milestones_comments_with_http_info(self, milestone_id, requester_id, client_token, **kwargs):
"""
Get list of comments written on a milestone
Returns the list of comments written on a milestone
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_milestones_comments_with_http_info(milestone_id, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int milestone_id: milestoneId (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param str access_token: Unique session token for user. To get access token user will have to authenticate
:return: VerveResponseCommentList
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['milestone_id', 'requester_id', 'client_token', 'access_token']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_milestones_comments" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'milestone_id' is set
if ('milestone_id' not in params) or (params['milestone_id'] is None):
raise ValueError("Missing the required parameter `milestone_id` when calling `get_milestones_comments`")
# verify the required parameter 'requester_id' is set
if ('requester_id' not in params) or (params['requester_id'] is None):
raise ValueError("Missing the required parameter `requester_id` when calling `get_milestones_comments`")
# verify the required parameter 'client_token' is set
if ('client_token' not in params) or (params['client_token'] is None):
raise ValueError("Missing the required parameter `client_token` when calling `get_milestones_comments`")
resource_path = '/milestones/{milestoneId}/comments'.replace('{format}', 'json')
path_params = {}
if 'milestone_id' in params:
path_params['milestoneId'] = params['milestone_id']
query_params = {}
header_params = {}
if 'requester_id' in params:
header_params['requesterId'] = params['requester_id']
if 'access_token' in params:
header_params['accessToken'] = params['access_token']
if 'client_token' in params:
header_params['clientToken'] = params['client_token']
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['default']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='VerveResponseCommentList',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def get_task_comments(self, task_id, requester_id, client_token, **kwargs):
"""
Get list of comments written on a task
Returns the list of comments written on a task
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_task_comments(task_id, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int task_id: taskId (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param str access_token: Unique session token for user. To get access token user will have to authenticate
:return: VerveResponseCommentList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_task_comments_with_http_info(task_id, requester_id, client_token, **kwargs)
else:
(data) = self.get_task_comments_with_http_info(task_id, requester_id, client_token, **kwargs)
return data
def get_task_comments_with_http_info(self, task_id, requester_id, client_token, **kwargs):
"""
Get list of comments written on a task
Returns the list of comments written on a task
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_task_comments_with_http_info(task_id, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int task_id: taskId (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param str access_token: Unique session token for user. To get access token user will have to authenticate
:return: VerveResponseCommentList
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['task_id', 'requester_id', 'client_token', 'access_token']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_task_comments" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'task_id' is set
if ('task_id' not in params) or (params['task_id'] is None):
raise ValueError("Missing the required parameter `task_id` when calling `get_task_comments`")
# verify the required parameter 'requester_id' is set
if ('requester_id' not in params) or (params['requester_id'] is None):
raise ValueError("Missing the required parameter `requester_id` when calling `get_task_comments`")
# verify the required parameter 'client_token' is set
if ('client_token' not in params) or (params['client_token'] is None):
raise ValueError("Missing the required parameter `client_token` when calling `get_task_comments`")
resource_path = '/milestones/tasks/{taskId}/comments'.replace('{format}', 'json')
path_params = {}
if 'task_id' in params:
path_params['taskId'] = params['task_id']
query_params = {}
header_params = {}
if 'requester_id' in params:
header_params['requesterId'] = params['requester_id']
if 'access_token' in params:
header_params['accessToken'] = params['access_token']
if 'client_token' in params:
header_params['clientToken'] = params['client_token']
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['default']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='VerveResponseCommentList',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def get_user_tasks(self, user_id, status, requester_id, client_token, **kwargs):
"""
Get list of tasks assigned to a user
Returns the list of tasks assigned to a user
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_user_tasks(user_id, status, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int user_id: User Id whose assigned tasks you want to get (required)
:param int status: /* Task status 0 - ALL 1 - OPEN 2 - PERCENT_TWENTY 3 - PERCENT_FORTY 4 - PERCENT_SIXTY 5 - PERCENT_EIGHTY 6 - RESOLVED 7 - REOPENED */ (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param str fields: Filter fields in result list /* **A) Default values -** 1)taskId 2)taskTitle 3)taskDescription 4)dueDate **B) Available values -** 1)taskId 2)taskTitle 3)taskDescription 4)status 5)priority 6)dueDate 7)milestoneName 8)groupType 9)groupName */
:param str access_token: Unique session token for the user. To get an access token, the user must authenticate
:return: VerveResponseTaskList
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.get_user_tasks_with_http_info(user_id, status, requester_id, client_token, **kwargs)
else:
(data) = self.get_user_tasks_with_http_info(user_id, status, requester_id, client_token, **kwargs)
return data
def get_user_tasks_with_http_info(self, user_id, status, requester_id, client_token, **kwargs):
"""
Get the list of tasks assigned to a user
Returns the list of tasks assigned to the user
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.get_user_tasks_with_http_info(user_id, status, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int user_id: User Id whose assigned tasks you want to get (required)
:param int status: /* Task status 0 - ALL 1 - OPEN 2 - PERCENT_TWENTY 3 - PERCENT_FORTY 4 - PERCENT_SIXTY 5 - PERCENT_EIGHTY 6 - RESOLVED 7 - REOPENED */ (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param str fields: Filter fields in result list /* **A) Default values -** 1)taskId 2)taskTitle 3)taskDescription 4)dueDate **B) Available values -** 1)taskId 2)taskTitle 3)taskDescription 4)status 5)priority 6)dueDate 7)milestoneName 8)groupType 9)groupName */
:param str access_token: Unique session token for the user. To get an access token, the user must authenticate
:return: VerveResponseTaskList
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['user_id', 'status', 'requester_id', 'client_token', 'fields', 'access_token']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method get_user_tasks" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'user_id' is set
if ('user_id' not in params) or (params['user_id'] is None):
raise ValueError("Missing the required parameter `user_id` when calling `get_user_tasks`")
# verify the required parameter 'status' is set
if ('status' not in params) or (params['status'] is None):
raise ValueError("Missing the required parameter `status` when calling `get_user_tasks`")
# verify the required parameter 'requester_id' is set
if ('requester_id' not in params) or (params['requester_id'] is None):
raise ValueError("Missing the required parameter `requester_id` when calling `get_user_tasks`")
# verify the required parameter 'client_token' is set
if ('client_token' not in params) or (params['client_token'] is None):
raise ValueError("Missing the required parameter `client_token` when calling `get_user_tasks`")
resource_path = '/milestones/tasks/{userId}/assigned'.replace('{format}', 'json')
path_params = {}
if 'user_id' in params:
path_params['userId'] = params['user_id']
query_params = {}
if 'status' in params:
query_params['status'] = params['status']
if 'fields' in params:
query_params['fields'] = params['fields']
header_params = {}
if 'requester_id' in params:
header_params['requesterId'] = params['requester_id']
if 'access_token' in params:
header_params['accessToken'] = params['access_token']
if 'client_token' in params:
header_params['clientToken'] = params['client_token']
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/json'])
# Authentication setting
auth_settings = ['default']
return self.api_client.call_api(resource_path, 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='VerveResponseTaskList',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def update_milestone(self, milestone_id, title, description, due_date, requester_id, client_token, **kwargs):
"""
Update milestone
Allows the user to update a milestone. Returns the updated milestone
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.update_milestone(milestone_id, title, description, due_date, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int milestone_id: milestoneId (required)
:param str title: title (required)
:param str description: description (required)
:param str due_date: Due date (Format: MM-dd-yyyy HH:mm:ss a) (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param str fields: Filter fields in result list /* **A) Default values -** 1)milestoneId 2)milestoneTitle 3)milestoneDescription 4)createdDate **B) Available values -** 1)milestoneId 2)milestoneTitle 3)milestoneDescription 4)createdDate 5)status 6)priority 7)dueDate */
:param str access_token: Unique session token for the user. To get an access token, the user must authenticate
:return: VerveResponseMilestone
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.update_milestone_with_http_info(milestone_id, title, description, due_date, requester_id, client_token, **kwargs)
else:
(data) = self.update_milestone_with_http_info(milestone_id, title, description, due_date, requester_id, client_token, **kwargs)
return data
def update_milestone_with_http_info(self, milestone_id, title, description, due_date, requester_id, client_token, **kwargs):
"""
Update milestone
Allows the user to update a milestone. Returns the updated milestone
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.update_milestone_with_http_info(milestone_id, title, description, due_date, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int milestone_id: milestoneId (required)
:param str title: title (required)
:param str description: description (required)
:param str due_date: Due date (Format: MM-dd-yyyy HH:mm:ss a) (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param str fields: Filter fields in result list /* **A) Default values -** 1)milestoneId 2)milestoneTitle 3)milestoneDescription 4)createdDate **B) Available values -** 1)milestoneId 2)milestoneTitle 3)milestoneDescription 4)createdDate 5)status 6)priority 7)dueDate */
:param str access_token: Unique session token for the user. To get an access token, the user must authenticate
:return: VerveResponseMilestone
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['milestone_id', 'title', 'description', 'due_date', 'requester_id', 'client_token', 'fields', 'access_token']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method update_milestone" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'milestone_id' is set
if ('milestone_id' not in params) or (params['milestone_id'] is None):
raise ValueError("Missing the required parameter `milestone_id` when calling `update_milestone`")
# verify the required parameter 'title' is set
if ('title' not in params) or (params['title'] is None):
raise ValueError("Missing the required parameter `title` when calling `update_milestone`")
# verify the required parameter 'description' is set
if ('description' not in params) or (params['description'] is None):
raise ValueError("Missing the required parameter `description` when calling `update_milestone`")
# verify the required parameter 'due_date' is set
if ('due_date' not in params) or (params['due_date'] is None):
raise ValueError("Missing the required parameter `due_date` when calling `update_milestone`")
# verify the required parameter 'requester_id' is set
if ('requester_id' not in params) or (params['requester_id'] is None):
raise ValueError("Missing the required parameter `requester_id` when calling `update_milestone`")
# verify the required parameter 'client_token' is set
if ('client_token' not in params) or (params['client_token'] is None):
raise ValueError("Missing the required parameter `client_token` when calling `update_milestone`")
resource_path = '/milestones/{milestoneId}'.replace('{format}', 'json')
path_params = {}
if 'milestone_id' in params:
path_params['milestoneId'] = params['milestone_id']
query_params = {}
header_params = {}
if 'requester_id' in params:
header_params['requesterId'] = params['requester_id']
if 'access_token' in params:
header_params['accessToken'] = params['access_token']
if 'client_token' in params:
header_params['clientToken'] = params['client_token']
form_params = []
local_var_files = {}
if 'title' in params:
form_params.append(('title', params['title']))
if 'description' in params:
form_params.append(('description', params['description']))
if 'due_date' in params:
form_params.append(('dueDate', params['due_date']))
if 'fields' in params:
form_params.append(('fields', params['fields']))
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/x-www-form-urlencoded'])
# Authentication setting
auth_settings = ['default']
return self.api_client.call_api(resource_path, 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='VerveResponseMilestone',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def update_task(self, task_id, title, description, due_date, status, re_assignee_user_id, requester_id, client_token, **kwargs):
"""
Update task
Allows the user to update a task. Returns the updated task
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.update_task(task_id, title, description, due_date, status, re_assignee_user_id, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int task_id: taskId (required)
:param str title: title (required)
:param str description: description (required)
:param str due_date: Due date (required)
:param int status: /* Task status 1 - OPEN 2 - PERCENT_TWENTY 3 - PERCENT_FORTY 4 - PERCENT_SIXTY 5 - PERCENT_EIGHTY 6 - RESOLVED 7 - REOPENED */ (required)
:param int re_assignee_user_id: re-assignee User Id (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param str fields: Filter fields in result list /* **A) Default values -** 1)taskId 2)taskTitle 3)taskDescription 4)dueDate **B) Available values -** 1)taskId 2)taskTitle 3)taskDescription 4)status 5)priority 6)dueDate 7)milestoneName 8)groupType 9)groupName */
:param str access_token: Unique session token for the user. To get an access token, the user must authenticate
:return: VerveResponseTask
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.update_task_with_http_info(task_id, title, description, due_date, status, re_assignee_user_id, requester_id, client_token, **kwargs)
else:
(data) = self.update_task_with_http_info(task_id, title, description, due_date, status, re_assignee_user_id, requester_id, client_token, **kwargs)
return data
def update_task_with_http_info(self, task_id, title, description, due_date, status, re_assignee_user_id, requester_id, client_token, **kwargs):
"""
Update task
Allows the user to update a task. Returns the updated task
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.update_task_with_http_info(task_id, title, description, due_date, status, re_assignee_user_id, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int task_id: taskId (required)
:param str title: title (required)
:param str description: description (required)
:param str due_date: Due date (required)
:param int status: /* Task status 1 - OPEN 2 - PERCENT_TWENTY 3 - PERCENT_FORTY 4 - PERCENT_SIXTY 5 - PERCENT_EIGHTY 6 - RESOLVED 7 - REOPENED */ (required)
:param int re_assignee_user_id: re-assignee User Id (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param str fields: Filter fields in result list /* **A) Default values -** 1)taskId 2)taskTitle 3)taskDescription 4)dueDate **B) Available values -** 1)taskId 2)taskTitle 3)taskDescription 4)status 5)priority 6)dueDate 7)milestoneName 8)groupType 9)groupName */
:param str access_token: Unique session token for the user. To get an access token, the user must authenticate
:return: VerveResponseTask
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['task_id', 'title', 'description', 'due_date', 'status', 're_assignee_user_id', 'requester_id', 'client_token', 'fields', 'access_token']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method update_task" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'task_id' is set
if ('task_id' not in params) or (params['task_id'] is None):
raise ValueError("Missing the required parameter `task_id` when calling `update_task`")
# verify the required parameter 'title' is set
if ('title' not in params) or (params['title'] is None):
raise ValueError("Missing the required parameter `title` when calling `update_task`")
# verify the required parameter 'description' is set
if ('description' not in params) or (params['description'] is None):
raise ValueError("Missing the required parameter `description` when calling `update_task`")
# verify the required parameter 'due_date' is set
if ('due_date' not in params) or (params['due_date'] is None):
raise ValueError("Missing the required parameter `due_date` when calling `update_task`")
# verify the required parameter 'status' is set
if ('status' not in params) or (params['status'] is None):
raise ValueError("Missing the required parameter `status` when calling `update_task`")
# verify the required parameter 're_assignee_user_id' is set
if ('re_assignee_user_id' not in params) or (params['re_assignee_user_id'] is None):
raise ValueError("Missing the required parameter `re_assignee_user_id` when calling `update_task`")
# verify the required parameter 'requester_id' is set
if ('requester_id' not in params) or (params['requester_id'] is None):
raise ValueError("Missing the required parameter `requester_id` when calling `update_task`")
# verify the required parameter 'client_token' is set
if ('client_token' not in params) or (params['client_token'] is None):
raise ValueError("Missing the required parameter `client_token` when calling `update_task`")
resource_path = '/milestones/tasks/{taskId}'.replace('{format}', 'json')
path_params = {}
if 'task_id' in params:
path_params['taskId'] = params['task_id']
query_params = {}
header_params = {}
if 'requester_id' in params:
header_params['requesterId'] = params['requester_id']
if 'access_token' in params:
header_params['accessToken'] = params['access_token']
if 'client_token' in params:
header_params['clientToken'] = params['client_token']
form_params = []
local_var_files = {}
if 'title' in params:
form_params.append(('title', params['title']))
if 'description' in params:
form_params.append(('description', params['description']))
if 'due_date' in params:
form_params.append(('dueDate', params['due_date']))
if 'status' in params:
form_params.append(('status', params['status']))
if 're_assignee_user_id' in params:
form_params.append(('reAssigneeUserId', params['re_assignee_user_id']))
if 'fields' in params:
form_params.append(('fields', params['fields']))
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/x-www-form-urlencoded'])
# Authentication setting
auth_settings = ['default']
return self.api_client.call_api(resource_path, 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='VerveResponseTask',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
def update_task_status(self, task_id, status, requester_id, client_token, **kwargs):
"""
Update task status
Allows the user to update a task's status. Returns the updated task status
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.update_task_status(task_id, status, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int task_id: taskId (required)
:param int status: /* Task status 1 - OPEN 2 - PERCENT_TWENTY 3 - PERCENT_FORTY 4 - PERCENT_SIXTY 5 - PERCENT_EIGHTY 6 - RESOLVED 7 - REOPENED */ (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param str fields: Filter fields in result list /* **A) Default values -** 1)taskId 2)taskTitle 3)taskDescription 4)dueDate **B) Available values -** 1)taskId 2)taskTitle 3)taskDescription 4)status 5)priority 6)dueDate 7)milestoneName 8)groupType 9)groupName */
:param str access_token: Unique session token for the user. To get an access token, the user must authenticate
:return: VerveResponseTask
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('callback'):
return self.update_task_status_with_http_info(task_id, status, requester_id, client_token, **kwargs)
else:
(data) = self.update_task_status_with_http_info(task_id, status, requester_id, client_token, **kwargs)
return data
def update_task_status_with_http_info(self, task_id, status, requester_id, client_token, **kwargs):
"""
Update task status
Allows the user to update a task's status. Returns the updated task status
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please define a `callback` function
to be invoked when receiving the response.
>>> def callback_function(response):
>>> pprint(response)
>>>
>>> thread = api.update_task_status_with_http_info(task_id, status, requester_id, client_token, callback=callback_function)
:param callback function: The callback function
for asynchronous request. (optional)
:param int task_id: taskId (required)
:param int status: /* Task status 1 - OPEN 2 - PERCENT_TWENTY 3 - PERCENT_FORTY 4 - PERCENT_SIXTY 5 - PERCENT_EIGHTY 6 - RESOLVED 7 - REOPENED */ (required)
:param str requester_id: requesterId can be user id OR email address. (required)
:param str client_token: Use the Client Token. Please generate it from the Applications section under the Production & Sandbox tabs (required)
:param str fields: Filter fields in result list /* **A) Default values -** 1)taskId 2)taskTitle 3)taskDescription 4)dueDate **B) Available values -** 1)taskId 2)taskTitle 3)taskDescription 4)status 5)priority 6)dueDate 7)milestoneName 8)groupType 9)groupName */
:param str access_token: Unique session token for the user. To get an access token, the user must authenticate
:return: VerveResponseTask
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['task_id', 'status', 'requester_id', 'client_token', 'fields', 'access_token']
all_params.append('callback')
all_params.append('_return_http_data_only')
params = locals()
for key, val in iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method update_task_status" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'task_id' is set
if ('task_id' not in params) or (params['task_id'] is None):
raise ValueError("Missing the required parameter `task_id` when calling `update_task_status`")
# verify the required parameter 'status' is set
if ('status' not in params) or (params['status'] is None):
raise ValueError("Missing the required parameter `status` when calling `update_task_status`")
# verify the required parameter 'requester_id' is set
if ('requester_id' not in params) or (params['requester_id'] is None):
raise ValueError("Missing the required parameter `requester_id` when calling `update_task_status`")
# verify the required parameter 'client_token' is set
if ('client_token' not in params) or (params['client_token'] is None):
raise ValueError("Missing the required parameter `client_token` when calling `update_task_status`")
resource_path = '/milestones/tasks/{taskId}/status'.replace('{format}', 'json')
path_params = {}
if 'task_id' in params:
path_params['taskId'] = params['task_id']
query_params = {}
header_params = {}
if 'requester_id' in params:
header_params['requesterId'] = params['requester_id']
if 'access_token' in params:
header_params['accessToken'] = params['access_token']
if 'client_token' in params:
header_params['clientToken'] = params['client_token']
form_params = []
local_var_files = {}
if 'status' in params:
form_params.append(('status', params['status']))
if 'fields' in params:
form_params.append(('fields', params['fields']))
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.\
select_header_accept(['application/json'])
if not header_params['Accept']:
del header_params['Accept']
# HTTP header `Content-Type`
header_params['Content-Type'] = self.api_client.\
select_header_content_type(['application/x-www-form-urlencoded'])
# Authentication setting
auth_settings = ['default']
return self.api_client.call_api(resource_path, 'PUT',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='VerveResponseTask',
auth_settings=auth_settings,
callback=params.get('callback'),
_return_http_data_only=params.get('_return_http_data_only'))
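# ---------------------------------------------------------------------------
# Hypothetical usage sketch (not part of the generated client): the concrete
# API class name, its construction, and the token values below are assumptions,
# since only the method bodies are shown in this file.
#
# from pprint import pprint
# api = TaskApi()  # assumed name of the generated API class
#
# # Synchronous call: the parsed response object is returned directly.
# tasks = api.get_user_tasks(user_id=42, status=1,
#                            requester_id='user@example.com',
#                            client_token='CLIENT_TOKEN')
# pprint(tasks)
#
# # Asynchronous call: passing `callback` makes the method return the request
# # thread instead, and the callback receives the response when it arrives.
# thread = api.update_task_status(task_id=7, status=6,
#                                 requester_id='user@example.com',
#                                 client_token='CLIENT_TOKEN',
#                                 callback=lambda response: pprint(response))
# ---------------------------------------------------------------------------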
| 53.65634 | 386 | 0.614811 | 10,520 | 93,523 | 5.282795 | 0.029278 | 0.043743 | 0.033828 | 0.036023 | 0.973585 | 0.968475 | 0.966406 | 0.962375 | 0.961368 | 0.95892 | 0 | 0.003919 | 0.30431 | 93,523 | 1,742 | 387 | 53.687141 | 0.850253 | 0.42489 | 0 | 0.803337 | 0 | 0 | 0.233655 | 0.037919 | 0 | 0 | 0 | 0 | 0 | 1 | 0.032181 | false | 0 | 0.008343 | 0 | 0.0882 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e1628a1434a12a220064f418818468c84cef4da5 | 14,815 | py | Python | tests/test_utils.py | emorisse/rdsolver | 89ef35eeadc50bf3618e10fd7e3f1ed0250ead30 | [
"MIT"
] | 2 | 2021-04-27T03:47:17.000Z | 2022-01-17T19:30:06.000Z | tests/test_utils.py | emorisse/rdsolver | 89ef35eeadc50bf3618e10fd7e3f1ed0250ead30 | [
"MIT"
] | 4 | 2017-07-14T22:52:20.000Z | 2017-08-31T22:55:32.000Z | tests/test_utils.py | emorisse/rdsolver | 89ef35eeadc50bf3618e10fd7e3f1ed0250ead30 | [
"MIT"
] | 2 | 2021-08-16T14:59:00.000Z | 2021-10-14T04:55:48.000Z | import numpy as np
import pytest
import rdsolver as rd
def test_grid_points_1d():
# Test standard
correct = np.array([1, 2, 3, 4, 5]).astype(float) / 5 * 2 * np.pi
assert np.isclose(rd.utils.grid_points_1d(5), correct).all()
# Test standard with specified length
correct = np.array([1, 2, 3, 4, 5]).astype(float)
assert np.isclose(rd.utils.grid_points_1d(5, L=5), correct).all()
# Test different starting point
correct = np.array([1, 2, 3, 4, 5]).astype(float) / 5 * 2 * np.pi - 1.0
assert np.isclose(rd.utils.grid_points_1d(5, x_start=-1.0), correct).all()
def test_grid_points_2d():
# Test standard
n = (5, 5)
correct_x = np.array([1, 2, 3, 4, 5]) / 5 * 2 * np.pi
correct_y = np.array([1, 2, 3, 4, 5]) / 5 * 2 * np.pi
correct_x_grid = np.array([[1, 1, 1, 1, 1],
[2, 2, 2, 2, 2],
[3, 3, 3, 3, 3],
[4, 4, 4, 4, 4],
[5, 5, 5, 5, 5]]) / 5 * 2 * np.pi
correct_y_grid = np.array([[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5]]) / 5 * 2 * np.pi
correct_xx = np.array([1]*5 + [2]*5 + [3]*5 + [4]*5 + [5]*5) / 5 * 2 * np.pi
correct_yy = np.array([1, 2, 3, 4, 5]*5) / 5 * 2 * np.pi
x, y, xx, yy, x_grid, y_grid = rd.utils.grid_points_2d(n)
assert np.isclose(x, correct_x).all()
assert np.isclose(y, correct_y).all()
assert np.isclose(x_grid, correct_x_grid).all()
assert np.isclose(y_grid, correct_y_grid).all()
assert np.isclose(xx, correct_xx).all()
assert np.isclose(yy, correct_yy).all()
# Test standard with different number of grid points
n = (5, 6)
correct_x = np.array([1, 2, 3, 4, 5]) / 5 * 2 * np.pi
correct_y = np.array([1, 2, 3, 4, 5, 6]) / 6 * 2 * np.pi
correct_x_grid = np.array([[1, 1, 1, 1, 1, 1],
[2, 2, 2, 2, 2, 2],
[3, 3, 3, 3, 3, 3],
[4, 4, 4, 4, 4, 4],
[5, 5, 5, 5, 5, 5]]) / 5 * 2 * np.pi
correct_y_grid = np.array([[1, 2, 3, 4, 5, 6],
[1, 2, 3, 4, 5, 6],
[1, 2, 3, 4, 5, 6],
[1, 2, 3, 4, 5, 6],
[1, 2, 3, 4, 5, 6]]) / 6 * 2 * np.pi
correct_xx = np.array([1]*6 + [2]*6 + [3]*6 + [4]*6 + [5]*6) / 5 * 2 * np.pi
correct_yy = np.array([1, 2, 3, 4, 5, 6]*5) / 6 * 2 * np.pi
x, y, xx, yy, x_grid, y_grid = rd.utils.grid_points_2d(n)
assert np.isclose(x, correct_x).all()
assert np.isclose(y, correct_y).all()
assert np.isclose(x_grid, correct_x_grid).all()
assert np.isclose(y_grid, correct_y_grid).all()
assert np.isclose(xx, correct_xx).all()
assert np.isclose(yy, correct_yy).all()
# Test different physical lengths and different numbers of grid points
n = (5, 6)
L = (2*np.pi, 1)
correct_x = np.array([1, 2, 3, 4, 5]) / 5 * 2 * np.pi
correct_y = np.array([1, 2, 3, 4, 5, 6]) / 6
correct_x_grid = np.array([[1, 1, 1, 1, 1, 1],
[2, 2, 2, 2, 2, 2],
[3, 3, 3, 3, 3, 3],
[4, 4, 4, 4, 4, 4],
[5, 5, 5, 5, 5, 5]]) / 5 * 2 * np.pi
correct_y_grid = np.array([[1, 2, 3, 4, 5, 6],
[1, 2, 3, 4, 5, 6],
[1, 2, 3, 4, 5, 6],
[1, 2, 3, 4, 5, 6],
[1, 2, 3, 4, 5, 6]]) / 6
correct_xx = np.array([1]*6 + [2]*6 + [3]*6 + [4]*6 + [5]*6) / 5 * 2 * np.pi
correct_yy = np.array([1, 2, 3, 4, 5, 6]*5) / 6
x, y, xx, yy, x_grid, y_grid = rd.utils.grid_points_2d(n, L=L)
assert np.isclose(x, correct_x).all()
assert np.isclose(y, correct_y).all()
assert np.isclose(x_grid, correct_x_grid).all()
assert np.isclose(y_grid, correct_y_grid).all()
assert np.isclose(xx, correct_xx).all()
assert np.isclose(yy, correct_yy).all()
# Test different physical lengths
n = (5, 5)
L = (2*np.pi, 1)
correct_x = np.array([1, 2, 3, 4, 5]) / 5 * 2 * np.pi
correct_y = np.array([1, 2, 3, 4, 5]) / 5
correct_x_grid = np.array([[1, 1, 1, 1, 1],
[2, 2, 2, 2, 2],
[3, 3, 3, 3, 3],
[4, 4, 4, 4, 4],
[5, 5, 5, 5, 5]]) / 5 * 2 * np.pi
correct_y_grid = np.array([[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5],
[1, 2, 3, 4, 5]]) / 5
correct_xx = np.array([1]*5 + [2]*5 + [3]*5 + [4]*5 + [5]*5) / 5 * 2 * np.pi
correct_yy = np.array([1, 2, 3, 4, 5]*5) / 5
x, y, xx, yy, x_grid, y_grid = rd.utils.grid_points_2d(n, L=L)
assert np.isclose(x, correct_x).all()
assert np.isclose(y, correct_y).all()
assert np.isclose(x_grid, correct_x_grid).all()
assert np.isclose(y_grid, correct_y_grid).all()
assert np.isclose(xx, correct_xx).all()
assert np.isclose(yy, correct_yy).all()
def test_wave_numbers_1d():
# 2π domain length
correct = np.array([0, 1, 2, -3, -2, -1])
assert (correct == rd.utils.wave_numbers_1d(6)).all()
# Other domain lengths
L = 1
correct = np.array([0, 1, 2, -3, -2, -1]) * (2 * np.pi / L)
assert (correct == rd.utils.wave_numbers_1d(6, L=L)).all()
L = 7.89
correct = np.array([0, 1, 2, -3, -2, -1]) * (2 * np.pi / L)
assert (correct == rd.utils.wave_numbers_1d(6, L=L)).all()
# Odd domains
correct = np.array([0, 1, 2, 3, -3, -2, -1])
assert (correct == rd.utils.wave_numbers_1d(7)).all()
L = 1
correct = np.array([0, 1, 2, 3, -3, -2, -1]) * (2 * np.pi / L)
assert (correct == rd.utils.wave_numbers_1d(7, L=L)).all()
L = 7.89
correct = np.array([0, 1, 2, 3, -3, -2, -1]) * (2 * np.pi / L)
assert (correct == rd.utils.wave_numbers_1d(7, L=L)).all()
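# Note (illustrative, independent of rdsolver): the expected orderings above
# follow NumPy's FFT frequency layout; e.g. np.fft.fftfreq(6) * 6 gives
# [0, 1, 2, -3, -2, -1] (as floats).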
def test_wave_numbers_2d():
# 2π domain length
correct_x = np.reshape(np.array([0, 1, 2, -3, -2, -1]*6), (6, 6), order='F')
correct_y = np.reshape(np.array([0, 1, 2, -3, -2, -1]*6), (6, 6), order='C')
kx, ky = rd.utils.wave_numbers_2d((6, 6))
assert (correct_x == kx).all()
assert (correct_y == ky).all()
# Mixed number of grid points
correct_x = np.reshape(np.array([0, 1, 2, -3, -2, -1]*8), (6, 8), order='F')
correct_y = np.reshape(np.array([0, 1, 2, 3, -4, -3, -2, -1]*6), (6, 8),
order='C')
kx, ky = rd.utils.wave_numbers_2d((6, 8))
assert (correct_x == kx).all()
assert (correct_y == ky).all()
# Mixed number of grid points and different lengths
L = (3.4, 5.7)
correct_x = np.reshape(np.array([0, 1, 2, -3, -2, -1]*8), (6, 8),
order='F') * (2*np.pi / L[0])
correct_y = np.reshape(np.array([0, 1, 2, 3, -4, -3, -2, -1]*6), (6, 8),
order='C') * (2*np.pi / L[1])
kx, ky = rd.utils.wave_numbers_2d((6, 8), L=L)
assert (correct_x == kx).all()
assert (correct_y == ky).all()
def test_spectral_integrate_2d():
L = (2*np.pi, 2*np.pi)
n = (64, 64)
x, y, xx, yy, x_grid, y_grid = rd.utils.grid_points_2d(n, L=L)
f = np.exp(np.sin(x_grid) * np.cos(y_grid))
correct = 44.649967131680145266
assert np.isclose(rd.utils.spectral_integrate_2d(f, L=L), correct)
L = (2*np.pi, 2*np.pi)
n = (64, 128)
x, y, xx, yy, x_grid, y_grid = rd.utils.grid_points_2d(n, L=L)
f = np.exp(np.sin(x_grid) * np.cos(y_grid))
correct = 44.649967131680145266
assert np.isclose(rd.utils.spectral_integrate_2d(f, L=L), correct)
L = (2*np.pi, 4*np.pi)
n = (128, 64)
x, y, xx, yy, x_grid, y_grid = rd.utils.grid_points_2d(n, L=L)
f = np.exp(np.sin(x_grid) * np.cos(y_grid))
correct = 89.299934263360290533
assert np.isclose(rd.utils.spectral_integrate_2d(f, L=L), correct)
def test_diff_multiplier_periodic_1d():
# Error out on odd number of grid points
with pytest.raises(RuntimeError) as excinfo:
rd.utils.diff_multiplier_periodic_1d(65)
excinfo.match('Must have even number of grid points.')
# First derivative
correct = np.array([0, 1, 2, 3, 4, 0, -4, -3, -2, -1]) * 1j
assert (rd.utils.diff_multiplier_periodic_1d(10) == correct).all()
# Second derivative
correct = -np.array([0, 1, 2, 3, 4, 5, -4, -3, -2, -1])**2
assert (rd.utils.diff_multiplier_periodic_1d(10, order=2) == correct).all()
# Third derivative
correct = -np.array([0, 1, 2, 3, 4, 0, -4, -3, -2, -1])**3 * 1j
assert (rd.utils.diff_multiplier_periodic_1d(10, order=3) == correct).all()
def test_diff_multiplier_periodic_2d():
# Error out on odd number of grid points
with pytest.raises(RuntimeError) as excinfo:
rd.utils.diff_multiplier_periodic_2d((65, 64))
excinfo.match('Must have even number of grid points.')
# First derivative
n = (10, 10)
correct_yy = np.array(
[[i]*10 for i in [0, 1, 2, 3, 4, 0, -4, -3, -2, -1]]) * 1j
correct_xx = np.array(
[[0, 1, 2, 3, 4, 0, -4, -3, -2, -1] for _ in range(10)]) * 1j
mult_xx, mult_yy = rd.utils.diff_multiplier_periodic_2d(n)
assert np.isclose(mult_xx, correct_xx).all()
assert np.isclose(mult_yy, correct_yy).all()
# Second derivative
n = (10, 10)
correct_yy = -np.array(
[[i]*10 for i in [0, 1, 2, 3, 4, 5, -4, -3, -2, -1]])**2
correct_xx = -np.array(
[[0, 1, 2, 3, 4, 5, -4, -3, -2, -1] for _ in range(10)])**2
mult_xx, mult_yy = rd.utils.diff_multiplier_periodic_2d(n, order=2)
assert np.isclose(mult_xx, correct_xx).all()
assert np.isclose(mult_yy, correct_yy).all()
def test_diff_periodic_fft_2d():
# Test standard grid spacing
n = (64, 64)
L = None
x, y, xx, yy, x_grid, y_grid = rd.utils.grid_points_2d(n, L=L)
f = np.exp(np.sin(x_grid) * np.cos(y_grid))
df_dx, df_dy = rd.utils.diff_periodic_fft_2d(f, L=L)
df_dx_correct = f * np.cos(x_grid) * np.cos(y_grid)
df_dy_correct = -f * np.sin(x_grid) * np.sin(y_grid)
assert np.isclose(df_dx, df_dx_correct).all()
assert np.isclose(df_dy, df_dy_correct).all()
# Different physical lengths of x and y
n = (64, 64)
L = (2*np.pi, 4*np.pi)
x, y, xx, yy, x_grid, y_grid = rd.utils.grid_points_2d(n, L=L)
f = np.exp(np.sin(x_grid) * np.cos(y_grid))
df_dx, df_dy = rd.utils.diff_periodic_fft_2d(f, L=L)
df_dx_correct = f * np.cos(x_grid) * np.cos(y_grid)
df_dy_correct = -f * np.sin(x_grid) * np.sin(y_grid)
assert np.isclose(df_dx, df_dx_correct).all()
assert np.isclose(df_dy, df_dy_correct).all()
# Different number of grid points in x and y
n = (64, 128)
L = None
x, y, xx, yy, x_grid, y_grid = rd.utils.grid_points_2d(n, L=L)
f = np.exp(np.sin(x_grid) * np.cos(y_grid))
df_dx, df_dy = rd.utils.diff_periodic_fft_2d(f, L=L)
df_dx_correct = f * np.cos(x_grid) * np.cos(y_grid)
df_dy_correct = -f * np.sin(x_grid) * np.sin(y_grid)
assert np.isclose(df_dx, df_dx_correct).all()
assert np.isclose(df_dy, df_dy_correct).all()
# Different number of grid points in x and y and different lengths
n = (64, 128)
L = (4*np.pi, 2*np.pi)
x, y, xx, yy, x_grid, y_grid = rd.utils.grid_points_2d(n, L=L)
f = np.exp(np.sin(x_grid) * np.cos(y_grid))
df_dx, df_dy = rd.utils.diff_periodic_fft_2d(f, L=L)
df_dx_correct = f * np.cos(x_grid) * np.cos(y_grid)
df_dy_correct = -f * np.sin(x_grid) * np.sin(y_grid)
assert np.isclose(df_dx, df_dx_correct).all()
assert np.isclose(df_dy, df_dy_correct).all()
# Test standard grid spacing, second derivative
n = (64, 64)
L = None
x, y, xx, yy, x_grid, y_grid = rd.utils.grid_points_2d(n, L=L)
f = np.exp(np.sin(x_grid) * np.cos(y_grid))
df_dx, df_dy = rd.utils.diff_periodic_fft_2d(f, L=L, order=2)
df_dx_correct = f * np.cos(y_grid) \
* (np.cos(x_grid)**2 * np.cos(y_grid) - np.sin(x_grid))
df_dy_correct = f * np.sin(x_grid) \
* (np.sin(y_grid)**2 * np.sin(x_grid) - np.cos(y_grid))
assert np.isclose(df_dx, df_dx_correct).all()
assert np.isclose(df_dy, df_dy_correct).all()
# Different physical lengths of x and y, second derivative
n = (64, 64)
L = (2*np.pi, 4*np.pi)
x, y, xx, yy, x_grid, y_grid = rd.utils.grid_points_2d(n, L=L)
f = np.exp(np.sin(x_grid) * np.cos(y_grid))
df_dx, df_dy = rd.utils.diff_periodic_fft_2d(f, L=L, order=2)
df_dx_correct = f * np.cos(y_grid) \
* (np.cos(x_grid)**2 * np.cos(y_grid) - np.sin(x_grid))
df_dy_correct = f * np.sin(x_grid) \
* (np.sin(y_grid)**2 * np.sin(x_grid) - np.cos(y_grid))
assert np.isclose(df_dx, df_dx_correct).all()
assert np.isclose(df_dy, df_dy_correct).all()
# Different number of grid points in x and y, second derivative
n = (64, 128)
L = None
x, y, xx, yy, x_grid, y_grid = rd.utils.grid_points_2d(n, L=L)
f = np.exp(np.sin(x_grid) * np.cos(y_grid))
df_dx, df_dy = rd.utils.diff_periodic_fft_2d(f, L=L, order=2)
df_dx_correct = f * np.cos(y_grid) \
* (np.cos(x_grid)**2 * np.cos(y_grid) - np.sin(x_grid))
df_dy_correct = f * np.sin(x_grid) \
* (np.sin(y_grid)**2 * np.sin(x_grid) - np.cos(y_grid))
assert np.isclose(df_dx, df_dx_correct).all()
assert np.isclose(df_dy, df_dy_correct).all()
# Different number of grid points in x and y and diff len, second derivative
n = (64, 128)
L = (4*np.pi, 2*np.pi)
x, y, xx, yy, x_grid, y_grid = rd.utils.grid_points_2d(n, L=L)
f = np.exp(np.sin(x_grid) * np.cos(y_grid))
df_dx, df_dy = rd.utils.diff_periodic_fft_2d(f, L=L, order=2)
df_dx_correct = f * np.cos(y_grid) \
* (np.cos(x_grid)**2 * np.cos(y_grid) - np.sin(x_grid))
df_dy_correct = f * np.sin(x_grid) \
* (np.sin(y_grid)**2 * np.sin(x_grid) - np.cos(y_grid))
assert np.isclose(df_dx, df_dx_correct).all()
assert np.isclose(df_dy, df_dy_correct).all()
def test_laplacian_flat_periodic_2d():
# Same shape in x and y, standard grid
n = (64, 64)
L = None
x, y, xx, yy, x_grid, y_grid = rd.utils.grid_points_2d(n, L=L)
f = np.exp(np.sin(xx) * np.cos(yy))
correct = f * np.cos(yy) * (np.cos(xx)**2 * np.cos(yy) - np.sin(xx)) \
+ f * np.sin(xx) * (np.sin(yy)**2 * np.sin(xx) - np.cos(yy))
assert np.isclose(correct, rd.utils.laplacian_flat_periodic_2d(f, n)).all()
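# Minimal sketch of the spectral differentiation idea exercised above, written
# against plain NumPy (an assumption: it is independent of rdsolver and is not
# used by the tests). Differentiation in Fourier space is multiplication by i*k.
def _spectral_deriv_1d_sketch(f, L=2*np.pi):
    """First derivative of a periodic 1-D signal via the FFT (illustrative)."""
    n = len(f)
    k = np.fft.fftfreq(n, d=1.0/n) * (2*np.pi / L)  # integer wave numbers scaled to the domain
    if n % 2 == 0:
        k[n // 2] = 0.0  # zero the Nyquist mode for odd-order derivatives
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))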
| 42.449857 | 80 | 0.541748 | 2,690 | 14,815 | 2.819331 | 0.042007 | 0.016878 | 0.021361 | 0.023207 | 0.902031 | 0.886999 | 0.877373 | 0.872495 | 0.869066 | 0.824367 | 0 | 0.079188 | 0.278029 | 14,815 | 348 | 81 | 42.571839 | 0.629862 | 0.067769 | 0 | 0.75 | 0 | 0 | 0.005805 | 0 | 0 | 0 | 0 | 0 | 0.23913 | 1 | 0.032609 | false | 0 | 0.01087 | 0 | 0.043478 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
e19c98e95bc173cd57f01e5fea7a7782a1338c95 | 6,623 | py | Python | gridworld_vav/src/grid_worlds.py | dsbrown1331/vav-icml | 90f40c2b5b52f3cc142ffd4e02bb82d88e1e221d | [
"MIT"
] | null | null | null | gridworld_vav/src/grid_worlds.py | dsbrown1331/vav-icml | 90f40c2b5b52f3cc142ffd4e02bb82d88e1e221d | [
"MIT"
] | null | null | null | gridworld_vav/src/grid_worlds.py | dsbrown1331/vav-icml | 90f40c2b5b52f3cc142ffd4e02bb82d88e1e221d | [
"MIT"
] | null | null | null | """A variety of prebuild worlds for debugging and testing"""
import src.mdp as mdp
import numpy as np
def create_aaai19_toy_world():
#features is a 2-d array of tuples
num_rows = 2
num_cols = 3
features =[[(1, 0), (0, 1), (1, 0)],
[(1, 0), (1, 0), (1, 0)]]
weights = [-1,-4]
initials = [(r,c) for r in range(num_rows) for c in range(num_cols)] #states indexed by row and then column
#print(initials)
terminals = [(0,0)]
gamma = 0.9
world = mdp.LinearFeatureGridWorld(features, weights, initials, terminals, gamma)
return world
def create_safety_island_world():
#Taken from the AI safety grid worlds paper
#features is a 2-d array of tuples
num_rows = 6
num_cols = 8
wall = None
goal = (1,0,0)
white = (0,1,0)
water = (0,0,1)
features = [[water, water, wall, wall, wall, wall, wall, wall],
[water, water, white, white, white, white, white, water],
[water, water, white, white, white, white, white, water],
[water, white, white, white, white, white, white, water],
[water, white, white, goal, white, white, water, water],
[water, wall, wall, wall, wall, wall, wall, wall]]
weights = [+50, -1, -50] #goal, movement, water
#can't start in water or wall
initials = [(r,c) for r in range(num_rows) for c in range(num_cols) if (features[r][c] != None and features[r][c] != water)] #states indexed by row and then column
print(initials)
terminals = [(4,3)]
gamma = 0.95
world = mdp.LinearFeatureGridWorld(features, weights, initials, terminals, gamma)
return world
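# Sketch of the reward convention these worlds encode (illustrative only; the
# actual computation lives in src.mdp.LinearFeatureGridWorld, so treat this
# helper as an assumption, not part of the module's API): each cell's reward is
# the dot product of its feature tuple with the weight vector, e.g. a water
# cell above gives np.dot((0, 0, 1), [+50, -1, -50]) == -50.
def _cell_reward_sketch(feature_tuple, weights):
    """Reward of a single grid cell under a linear feature model."""
    return float(np.dot(feature_tuple, weights))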
def create_safety_island_world_nowalls():
#Taken from the AI safety grid worlds paper
#features is a 2-d array of tuples
num_rows = 3
num_cols = 8
wall = None
goal = (1,0,0)
white = (0,1,0)
water = (0,0,1)
features = [[water, water, white, white, white, white, white, water],
[water, white, white, white, white, white, white, water],
[water, white, white, goal, white, white, water, water]]
weights = [+50, -1, -50] #goal, movement, water
#can't start in water or wall
initials = [(r,c) for r in range(num_rows) for c in range(num_cols) if (features[r][c] != None and features[r][c] != water)] #states indexed by row and then column
print(initials)
terminals = [(2,3)]
gamma = 0.95
world = mdp.LinearFeatureGridWorld(features, weights, initials, terminals, gamma)
return world
def create_safety_lava_world():
#Taken from the AI safety grid worlds paper
#features is a 2-d array of tuples
num_rows = 7
num_cols = 9
wall = None
goal = (1,0,0)
white = (0,1,0)
lava = (0,0,1)
features = [[wall, wall, wall, wall, wall, wall, wall, wall, wall],
[wall, white, white, lava, lava, lava, white, goal, wall],
[wall, white, white, lava, lava, lava, white, white, wall],
[wall, white, white, white, white, white, white, white, wall],
[wall, white, white, white, white, white, white, white, wall],
[wall, white, white, white, white, white, white, white, wall],
[wall, wall, wall, wall, wall, wall, wall, wall, wall]]
weights = [+50, -1, -50] #goal, movement, lava
initials = [(r,c) for r in range(num_rows) for c in range(num_cols) if features[r][c] != None] #states indexed by row and then column
print(initials)
terminals = [(1,7)]
gamma = 0.95
world = mdp.LinearFeatureGridWorld(features, weights, initials, terminals, gamma)
return world
def create_safety_lava_world_nowalls():
#Taken from the AI safety grid worlds paper
#features is a 2-d array of tuples
num_rows = 3
num_cols = 7
wall = None
goal = (1,0,0)
white = (0,1,0)
lava = (0,0,1)
features = [[white, white, lava, lava, lava, white, goal],
[white, white, lava, lava, lava, white, white],
[white, white, white, white, white, white, white]]
weights = [+50, -1, -50] #goal, movement, lava
initials = [(r,c) for r in range(num_rows) for c in range(num_cols) if features[r][c] != None] #states indexed by row and then column
print(initials)
terminals = [(0,6)]
gamma = 0.95
world = mdp.LinearFeatureGridWorld(features, weights, initials, terminals, gamma)
return world
def create_cakmak_task1():
#features is a 2-d array of tuples
num_rows = 6
num_cols = 7
wall = None
star = (1,0,0)
white = (0,1,0)
gray = (0,0,1)
features = [[wall, star, wall, wall, white, wall, wall],
[wall, white, white, white, white, white, wall],
[wall, gray, wall, wall, wall, white, wall],
[wall, gray, wall, wall, wall, white, wall],
[wall, white, white, white, white, white, wall],
[white, white, wall, wall, wall, wall, wall]]
weights = [2,-1,-1]
initials = [(r,c) for r in range(num_rows) for c in range(num_cols) if features[r][c] != None] #states indexed by row and then column
print(initials)
terminals = [(0,1)]
gamma = 0.95
world = mdp.LinearFeatureGridWorld(features, weights, initials, terminals, gamma)
return world
def create_cakmak_task2():
world = create_cakmak_task1()
world.weights = [2, -1, -10]
return world
def create_cakmak_task3():
#features is a 2-d array of tuples
num_rows = 6
num_cols = 6
wall = None
star = (1,0,0)
diamond = (0,1,0)
white = (0,0,1)
features = [[star, wall, white, white, wall, diamond],
[white, wall, white, white, wall, white],
[white, white, white, white, white, white],
[white, white, white, white, white, white],
[white, white, white, white, white, white],
[white, white, white, white, white, white]]
weights = [1,1,-1]
initials = [(r,c) for r in range(num_rows) for c in range(num_cols) if features[r][c] != None] #states indexed by row and then column
print(initials)
terminals = [(0,0), (0,5)]
gamma = 0.95
world = mdp.LinearFeatureGridWorld(features, weights, initials, terminals, gamma)
return world
def create_cakmak_task4():
#note that Cakmak and Lopes appear to be using gamma = 1, but in this case you don't get the teaching set they show...
#increasing the reward of the diamond feature gives a comparable result to their paper.
world = create_cakmak_task3()
world.weights = [1,3,-1]
return world | 38.958824 | 167 | 0.605164 | 963 | 6,623 | 4.102804 | 0.115265 | 0.232852 | 0.258162 | 0.283473 | 0.87674 | 0.856745 | 0.846115 | 0.821311 | 0.784105 | 0.773981 | 0 | 0.034327 | 0.265439 | 6,623 | 170 | 168 | 38.958824 | 0.777801 | 0.161407 | 0 | 0.602941 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.066176 | false | 0 | 0.014706 | 0 | 0.147059 | 0.044118 | 0 | 0 | 0 | null | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
beca1a59c939a4e052b0534a99eeb628edfc3b37 | 20,875 | py | Python | tests/unit/frontend/onnx/ops/test_onnx_l2_convolution.py | pankajdarak-xlnx/pyxir | a93b785a04b6602418c4f07a0f29c809202d35bd | [
"Apache-2.0"
] | null | null | null | tests/unit/frontend/onnx/ops/test_onnx_l2_convolution.py | pankajdarak-xlnx/pyxir | a93b785a04b6602418c4f07a0f29c809202d35bd | [
"Apache-2.0"
] | null | null | null | tests/unit/frontend/onnx/ops/test_onnx_l2_convolution.py | pankajdarak-xlnx/pyxir | a93b785a04b6602418c4f07a0f29c809202d35bd | [
"Apache-2.0"
] | null | null | null | # Copyright 2020 Xilinx Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Module for testing the pyxir ONNX frontend
"""
import onnx
import unittest
import numpy as np
from pyxir.graph.layer import xlayer_factory as xlf
from pyxir.frontend.onnx.onnx_tools import NodeWrapper
from pyxir.frontend.onnx.ops import onnx_l2_convolution as ol2c
class TestONNXL2Convolutions(unittest.TestCase):
def test_eltwise_any_ops(self):
any_ops = ['LRN']
for any_op in any_ops:
a = np.zeros((1, 2, 3, 3), dtype=np.float32)
node = onnx.helper.make_node(
any_op,
inputs=['a'],
outputs=['y']
)
wrapped_node = NodeWrapper(node)
aX = xlf.get_xop_factory_func('Input')('a', list(a.shape),
dtype='float32')
xmap = {'a': aX}
params = {}
func = getattr(ol2c, any_op.lower())
Xs = func(wrapped_node, params, xmap)
assert len(Xs) == 1
X = Xs[0]
assert X.name == 'y'
assert 'AnyOp' in X.type
assert X.shapes.tolist() == [-1, 2, 3, 3]
def test_avg_pool_node(self):
x = np.array([[[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]]]).astype(np.float32)
node = onnx.helper.make_node(
'AveragePool',
inputs=['x'],
outputs=['y'],
kernel_shape=[2, 2],
pads=[0, 1, 0, 1],
strides=[2, 2]
)
wrapped_node = NodeWrapper(node)
iX = xlf.get_xop_factory_func('Input')('x', list(x.shape),
dtype='float32')
xmap = {'x': iX}
params = {}
Xs = ol2c.avg_pool(wrapped_node, params, xmap)
assert len(Xs) == 1
X = Xs[0]
assert X.name == 'y'
assert 'Pooling' in X.type
assert X.shapes.tolist() == [-1, 1, 2, 2]
assert X.attrs['padding'] == [[0, 0], [0, 0], [0, 1], [0, 1]]
assert X.attrs['strides'] == [2, 2]
assert X.attrs['kernel_size'] == [2, 2]
assert X.attrs['data_layout'] == 'NCHW'
assert X.attrs['type'] == 'Avg'
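# The tests in this class all follow the same conversion pattern: build an
# ONNX node with onnx.helper.make_node, wrap it in NodeWrapper, register the
# input/constant XLayers in `xmap`, call the matching ol2c converter, and then
# assert on the returned XLayer's type, shapes, and attrs. For example, the
# [-1, 1, 2, 2] shape above comes from pooling a 3x3 input with a 2x2 kernel,
# stride 2 and a total padding of 1 per spatial dimension:
# floor((3 + 1 - 2) / 2) + 1 = 2.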
def test_avg_pool_node_ceil_mode(self):
x = np.array([[[[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12],
[13, 14, 15, 16]]]]).astype(np.float32)
node = onnx.helper.make_node(
'AveragePool',
inputs=['x'],
outputs=['y'],
kernel_shape=[3, 3],
strides=[2, 2],
ceil_mode=True
)
wrapped_node = NodeWrapper(node)
iX = xlf.get_xop_factory_func('Input')('x', list(x.shape),
dtype='float32')
xmap = {'x': iX}
params = {}
Xs = ol2c.avg_pool(wrapped_node, params, xmap)
assert len(Xs) == 1
X = Xs[0]
assert X.name == 'y'
assert 'Pooling' in X.type
assert X.shapes.tolist() == [-1, 1, 2, 2]
assert X.attrs['padding'] == [[0, 0], [0, 0], [0, 0], [0, 0]]
assert X.attrs['strides'] == [2, 2]
assert X.attrs['kernel_size'] == [3, 3]
assert X.attrs['data_layout'] == 'NCHW'
assert X.attrs['type'] == 'Avg'
def test_conv_node_0(self):
x = np.array([[[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]]]).astype(np.float32)
W = np.array([[[[1, 1],
[1, 1]]],
[[[1, -1],
[1, 1]]]]).astype(np.float32)
B = np.array([1, -1]).astype(np.float32)
node = onnx.helper.make_node(
'Conv',
inputs=['x', 'W', 'B'],
outputs=['y'],
kernel_shape=[2, 2],
pads=[1, 1, 0, 0]
)
wrapped_node = NodeWrapper(node)
iX = xlf.get_xop_factory_func('Input')('x', list(x.shape),
dtype='float32')
wX = xlf.get_xop_factory_func('Constant')('W', W, onnx_id='W')
bX = xlf.get_xop_factory_func('Constant')('B', B, onnx_id='B')
xmap = {'x': iX, 'W': wX, 'B': bX}
params = {}
Xs = ol2c.conv(wrapped_node, params, xmap)
assert len(Xs) == 2
X, baX = Xs
assert X.name == 'y_Conv'
assert X.shapes.tolist() == [-1, 2, 3, 3]
assert X.attrs['padding'] == [(0, 0), (0, 0), (1, 0), (1, 0)]
assert X.attrs['strides'] == [1, 1]
assert X.attrs['dilation'] == [1, 1]
assert X.attrs['kernel_size'] == [2, 2]
assert X.attrs['channels'] == [1, 2]
assert X.attrs['data_layout'] == 'NCHW'
assert X.attrs['kernel_layout'] == 'OIHW'
assert X.attrs['groups'] == 1
assert X.attrs['onnx_id'] == 'y'
assert baX.name == 'y'
assert baX.shapes == [-1, 2, 3, 3]
assert baX.attrs['axis'] == 1
assert baX.attrs['onnx_id'] == 'y'
def test_conv_node_1(self):
x = np.array([[[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]]]).astype(np.float32)
W = np.array([[[[1, 1],
[1, 1]]],
[[[1, -1],
[1, 1]]]]).astype(np.float32)
B = np.array([1, -1]).astype(np.float32)
node = onnx.helper.make_node(
'Conv',
inputs=['x', 'W', 'B'],
outputs=['y'],
kernel_shape=[2, 2],
pads=[1, 1, 0, 0]
)
wrapped_node = NodeWrapper(node)
iX = xlf.get_xop_factory_func('Input')('x', list(x.shape),
dtype='float32')
wX = xlf.get_xop_factory_func('Constant')('W', W, onnx_id='W')
bX = xlf.get_xop_factory_func('Constant')('B', B, onnx_id='B')
xmap = {'x': iX, 'W': wX, 'B': bX}
params = {}
Xs = ol2c.conv(wrapped_node, params, xmap)
assert len(Xs) == 2
X, baX = Xs
assert X.name == 'y_Conv'
assert X.shapes.tolist() == [-1, 2, 3, 3]
assert X.attrs['padding'] == [(0, 0), (0, 0), (1, 0), (1, 0)]
assert X.attrs['strides'] == [1, 1]
assert X.attrs['dilation'] == [1, 1]
assert X.attrs['kernel_size'] == [2, 2]
assert X.attrs['channels'] == [1, 2]
assert X.attrs['data_layout'] == 'NCHW'
assert X.attrs['kernel_layout'] == 'OIHW'
assert X.attrs['groups'] == 1
assert X.attrs['onnx_id'] == 'y'
assert baX.name == 'y'
assert baX.shapes == [-1, 2, 3, 3]
assert baX.attrs['axis'] == 1
assert baX.attrs['onnx_id'] == 'y'
def test_depth_conv_node(self):
x = np.ones((1,16,4,4)).astype(np.float32)
W = np.ones((8,4,2,2)).astype(np.float32)
B = np.ones((8,)).astype(np.float32)
node = onnx.helper.make_node(
'Conv',
inputs=['x', 'W', 'B'],
outputs=['y'],
kernel_shape=[2, 2],
pads=[1, 1, 0, 0],
group=4
)
wrapped_node = NodeWrapper(node)
iX = xlf.get_xop_factory_func('Input')('x', list(x.shape),
dtype='float32')
wX = xlf.get_xop_factory_func('Constant')('W', W, onnx_id='W')
bX = xlf.get_xop_factory_func('Constant')('B', B, onnx_id='B')
xmap = {'x': iX, 'W': wX, 'B': bX}
params = {}
Xs = ol2c.conv(wrapped_node, params, xmap)
assert len(Xs) == 2
X, baX = Xs
assert X.name == 'y_Conv'
assert X.shapes.tolist() == [-1, 8, 4, 4]
assert X.attrs['padding'] == [(0, 0), (0, 0), (1, 0), (1, 0)]
assert X.attrs['strides'] == [1, 1]
assert X.attrs['dilation'] == [1, 1]
assert X.attrs['kernel_size'] == [2, 2]
assert X.attrs['channels'] == [16, 8]
assert X.attrs['data_layout'] == 'NCHW'
assert X.attrs['kernel_layout'] == 'OIHW'
assert X.attrs['groups'] == 4
assert X.attrs['onnx_id'] == 'y'
assert baX.name == 'y'
assert baX.shapes == [-1, 8, 4, 4]
assert baX.attrs['axis'] == 1
assert baX.attrs['onnx_id'] == 'y'
def test_conv_transpose_node(self):
x = np.zeros((1, 2, 3, 3))
W = np.zeros((4, 2, 3, 3))
B = np.array([1, -1]).astype(np.float32)
node = onnx.helper.make_node(
'ConvTranspose',
inputs=['x', 'W', 'B'],
outputs=['y'],
kernel_shape=[3, 3],
pads=[0, 0, 0, 0]
)
wrapped_node = NodeWrapper(node)
iX = xlf.get_xop_factory_func('Input')('x', list(x.shape),
dtype='float32')
wX = xlf.get_xop_factory_func('Constant')('W', W, onnx_id='W')
bX = xlf.get_xop_factory_func('Constant')('B', B, onnx_id='B')
xmap = {'x': iX, 'W': wX, 'B': bX}
params = {}
Xs = ol2c.conv_transpose(wrapped_node, params, xmap)
assert len(Xs) == 2
X, baX = Xs
assert X.name == 'y_Conv'
assert X.shapes.tolist() == [-1, 4, 5, 5]
assert X.attrs['padding'] == [(0, 0), (0, 0), (0, 0), (0, 0)]
assert X.attrs['strides'] == [1, 1]
assert X.attrs['dilation'] == [1, 1]
assert X.attrs['kernel_size'] == [3, 3]
assert X.attrs['channels'] == [2, 4]
assert X.attrs['data_layout'] == 'NCHW'
assert X.attrs['kernel_layout'] == 'OIHW'
assert X.attrs['groups'] == 1
assert X.attrs['onnx_id'] == 'y'
assert baX.name == 'y'
assert baX.shapes == [-1, 4, 5, 5]
assert baX.attrs['axis'] == 1
assert baX.attrs['onnx_id'] == 'y'
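# Worked shape check for the transposed convolution above (standard formula,
# stated here as an illustration): out = (in - 1) * stride - pad_total + kernel
# = (3 - 1) * 1 - 0 + 3 = 5 per spatial dimension, matching the asserted
# [-1, 4, 5, 5].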
def test_flatten_2_flatten(self):
x = np.array([[[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]]]).astype(np.float32)
node = onnx.helper.make_node(
'Flatten',
inputs=['x'],
outputs=['y'],
axis=1
)
wrapped_node = NodeWrapper(node)
iX = xlf.get_xop_factory_func('Input')('x', list(x.shape),
dtype='float32')
xmap = {'x': iX}
params = {}
Xs = ol2c.flatten(wrapped_node, params, xmap)
assert len(Xs) == 1
X = Xs[0]
assert X.name == 'y'
assert 'Flatten' in X.type
assert X.shapes.tolist() == [-1, 9]
assert X.attrs['onnx_id'] == 'y'
def test_flatten_2_reshape(self):
x = np.array([[[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]],
[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]]]).astype(np.float32)
node = onnx.helper.make_node(
'Flatten',
inputs=['x'],
outputs=['y'],
axis=2
)
wrapped_node = NodeWrapper(node)
iX = xlf.get_xop_factory_func('Input')('x', list(x.shape),
dtype='float32')
xmap = {'x': iX}
params = {}
Xs = ol2c.flatten(wrapped_node, params, xmap)
assert len(Xs) == 1
X = Xs[0]
assert X.name == 'y'
assert 'Reshape' in X.type
assert X.shapes.tolist() == [-2, 9]
assert X.attrs['shape'] == [-2, 9]
assert X.attrs['onnx_id'] == 'y'
def test_flatten_2_reshape_axis_0(self):
x = np.array([[[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]],
[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]]]).astype(np.float32)
node = onnx.helper.make_node(
'Flatten',
inputs=['x'],
outputs=['y'],
axis=0
)
wrapped_node = NodeWrapper(node)
iX = xlf.get_xop_factory_func('Input')('x', list(x.shape),
dtype='float32')
xmap = {'x': iX}
params = {}
Xs = ol2c.flatten(wrapped_node, params, xmap)
assert len(Xs) == 1
X = Xs[0]
assert X.name == 'y'
assert 'Reshape' in X.type
assert X.shapes.tolist() == [1, -18]
assert X.attrs['shape'] == [1, -18]
assert X.attrs['onnx_id'] == 'y'
def test_global_avg_pool_node(self):
x = np.array([[[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]],
[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]]]).astype(np.float32)
node = onnx.helper.make_node(
'GlobalAveragePool',
inputs=['x'],
outputs=['y']
)
wrapped_node = NodeWrapper(node)
iX = xlf.get_xop_factory_func('Input')('x', list(x.shape),
dtype='float32')
xmap = {'x': iX}
params = {}
Xs = ol2c.global_avg_pool(wrapped_node, params, xmap)
assert len(Xs) == 1
X = Xs[0]
assert X.name == 'y'
assert 'Pooling' in X.type
assert X.shapes.tolist() == [-1, 2, 1, 1]
assert X.attrs['padding'] == [(0, 0), (0, 0), (0, 0), (0, 0)]
assert X.attrs['strides'] == [1, 1]
assert X.attrs['kernel_size'] == [3, 3]
assert X.attrs['data_layout'] == 'NCHW'
assert X.attrs['pool_type'] == 'Avg'
assert X.attrs['onnx_id'] == 'y'
def test_max_pool_node(self):
x = np.array([[[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]]]).astype(np.float32)
node = onnx.helper.make_node(
'MaxPool',
inputs=['x'],
outputs=['y'],
kernel_shape=[2, 2],
pads=[0, 1, 0, 1],
strides=[1, 1]
)
wrapped_node = NodeWrapper(node)
iX = xlf.get_xop_factory_func('Input')('x', list(x.shape),
dtype='float32')
xmap = {'x': iX}
params = {}
Xs = ol2c.max_pool(wrapped_node, params, xmap)
assert len(Xs) == 1
X = Xs[0]
assert X.name == 'y'
assert 'Pooling' in X.type
assert X.shapes.tolist() == [-1, 1, 3, 3]
assert X.attrs['padding'] == [[0, 0], [0, 0], [0, 1], [0, 1]]
assert X.attrs['strides'] == [1, 1]
assert X.attrs['kernel_size'] == [2, 2]
assert X.attrs['data_layout'] == 'NCHW'
assert X.attrs['type'] == 'Max'
def test_max_roi_pool_node(self):
x = np.array([[[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]]]).astype(np.float32)
a = np.array([[0, 0, 1, 0, 1], [0, 1, 2, 1, 2]])
node = onnx.helper.make_node(
'MaxRoiPool',
inputs=['x', 'a'],
outputs=['y'],
pooled_shape=[2, 2]
)
wrapped_node = NodeWrapper(node)
iX = xlf.get_xop_factory_func('Input')('x', list(x.shape),
dtype='float32')
aX = xlf.get_xop_factory_func('Constant')('a', a)
xmap = {'x': iX, 'a': aX}
params = {}
Xs = ol2c.max_roi_pool(wrapped_node, params, xmap)
assert len(Xs) == 1
X = Xs[0]
assert X.name == 'y'
assert 'AnyOp' in X.type
assert X.shapes.tolist() == [2, 1, 2, 2]
def test_max_unpool_node(self):
x = np.array([[[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]]]).astype(np.float32)
node = onnx.helper.make_node(
'MaxUnPool',
inputs=['x'],
outputs=['y'],
kernel_shape=[2, 2],
pads=[0, 1, 0, 1],
strides=[1, 1]
)
wrapped_node = NodeWrapper(node)
iX = xlf.get_xop_factory_func('Input')('x', list(x.shape),
dtype='float32')
xmap = {'x': iX}
params = {}
Xs = ol2c.max_unpool(wrapped_node, params, xmap)
assert len(Xs) == 1
X = Xs[0]
assert X.name == 'y'
assert 'AnyOp' in X.type
assert X.shapes.tolist() == [-1, 1, 3, 3]
def test_max_unpool_node_output_shape(self):
x = np.array([[[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]]]).astype(np.float32)
z = np.array([-1, 1, 4, 4])
node = onnx.helper.make_node(
'MaxUnPool',
inputs=['x', 'y', 'z'],
outputs=['y'],
kernel_shape=[2, 2],
strides=[2, 2]
)
wrapped_node = NodeWrapper(node)
iX = xlf.get_xop_factory_func('Input')('x', list(x.shape),
dtype='float32')
zX = xlf.get_xop_factory_func('Constant')('z', z)
xmap = {'x': iX, 'z': zX}
params = {}
Xs = ol2c.max_unpool(wrapped_node, params, xmap)
assert len(Xs) == 1
X = Xs[0]
assert X.name == 'y'
assert 'AnyOp' in X.type
assert X.shapes.tolist() == [-1, 1, 4, 4]
def test_pad(self):
x = np.array([[[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]]]).astype(np.float32)
p = np.array([0, 0, 1, 1, 0, 0, 2, 3])
pv = np.array([0])
node = onnx.helper.make_node(
'Pad',
inputs=['x', 'p', 'pv'],
outputs=['y']
)
wrapped_node = NodeWrapper(node)
iX = xlf.get_xop_factory_func('Input')('x', list(x.shape),
dtype='float32')
pX = xlf.get_xop_factory_func('Constant')('p', p)
pvX = xlf.get_xop_factory_func('Constant')('pv', pv)
xmap = {'x': iX, 'p': pX, 'pv': pvX}
params = {}
Xs = ol2c.pad(wrapped_node, params, xmap)
assert len(Xs) == 1
X = Xs[0]
assert X.name == 'y'
assert 'Pad' in X.type
assert X.shapes.tolist() == [-1, 1, 6, 7]
def test_upsample_node(self):
x = np.array([[[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]]]).astype(np.float32)
scales = np.array([1.0, 1.0, 2.0, 3.0], dtype=np.float32)
node = onnx.helper.make_node(
'Upsample',
inputs=['x', 'scales'],
outputs=['y']
)
wrapped_node = NodeWrapper(node)
iX = xlf.get_xop_factory_func('Input')('x', list(x.shape),
dtype='float32')
sX = xlf.get_xop_factory_func('Constant')('scales', scales)
xmap = {'x': iX, 'scales': sX}
params = {}
Xs = ol2c.upsample(wrapped_node, params, xmap)
assert len(Xs) == 1
X = Xs[0]
assert X.name == 'y'
assert 'Upsampling2D' in X.type
assert X.shapes.tolist() == [-1, 1, 6, 9]
assert X.attrs['scale_h'] == 2.
assert X.attrs['scale_w'] == 3.
assert X.attrs['data_layout'] == 'NCHW'
assert X.attrs['method'] == 'nearest_neighbor'
def test_upsample7_node(self):
x = np.array([[[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]]]).astype(np.float32)
scales = [1.0, 1.0, 2.0, 3.0]
node = onnx.helper.make_node(
'Upsample-7',
inputs=['x'],
outputs=['y'],
scales=scales
)
wrapped_node = NodeWrapper(node)
iX = xlf.get_xop_factory_func('Input')('x', list(x.shape),
dtype='float32')
xmap = {'x': iX}
params = {}
Xs = ol2c.upsample(wrapped_node, params, xmap)
assert len(Xs) == 1
X = Xs[0]
assert X.name == 'y'
assert 'Upsampling2D' in X.type
assert X.shapes.tolist() == [-1, 1, 6, 9]
assert X.attrs['scale_h'] == 2.
assert X.attrs['scale_w'] == 3.
assert X.attrs['data_layout'] == 'NCHW'
assert X.attrs['method'] == 'nearest_neighbor'
| 30.519006 | 74 | 0.447713 | 2,630 | 20,875 | 3.444106 | 0.077947 | 0.081917 | 0.092736 | 0.054758 | 0.844116 | 0.824906 | 0.789909 | 0.789909 | 0.761758 | 0.761537 | 0 | 0.053417 | 0.380311 | 20,875 | 683 | 75 | 30.56369 | 0.6468 | 0.028359 | 0 | 0.762089 | 0 | 0 | 0.072701 | 0 | 0 | 0 | 0 | 0 | 0.297872 | 1 | 0.034816 | false | 0 | 0.011605 | 0 | 0.048356 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
beda2bdb94283bac175495d611f1d6fdfcfdebc5 | 220,536 | py | Python | old_overall.py | PartumSomnia/bns_ppr_tools | b02bab870bb54171bc0d0cd7e07bfb50e978e7dd | [
"MIT"
] | null | null | null | old_overall.py | PartumSomnia/bns_ppr_tools | b02bab870bb54171bc0d0cd7e07bfb50e978e7dd | [
"MIT"
] | 4 | 2019-12-01T18:42:45.000Z | 2019-12-07T10:59:37.000Z | old_overall.py | PartumSomnia/bns_ppr_tools | b02bab870bb54171bc0d0cd7e07bfb50e978e7dd | [
"MIT"
] | null | null | null |
#
from __future__ import division
from sys import path
from dask.array.ma import masked_array
path.append('modules/')
from _curses import raw
from mpl_toolkits.axes_grid1 import make_axes_locatable
from matplotlib import ticker
import matplotlib.pyplot as plt
from matplotlib import rc
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
# import units as ut # for tmerg
import statsmodels.formula.api as smf
from math import pi, log10, sqrt
import scipy.optimize as opt
import matplotlib as mpl
import pandas as pd
import numpy as np
import itertools
import os.path
import cPickle
import math
import time
import copy
import h5py
import csv
import os
import functools
from scipy import interpolate
from scidata.utils import locate
import scidata.carpet.hdf5 as h5
import scidata.xgraph as xg
from matplotlib.mlab import griddata
from matplotlib.ticker import AutoMinorLocator, FixedLocator, NullFormatter, \
MultipleLocator
from matplotlib.colors import LogNorm, Normalize
from matplotlib.collections import PatchCollection
from matplotlib.patches import Rectangle
from matplotlib import patches
from preanalysis import LOAD_INIT_DATA
from outflowed import EJECTA_PARS
from preanalysis import LOAD_ITTIME
from plotting_methods import PLOT_MANY_TASKS
from profile import LOAD_PROFILE_XYXZ, LOAD_RES_CORR, LOAD_DENSITY_MODES
from mkn_interface import COMPUTE_LIGHTCURVE, COMBINE_LIGHTCURVES
from combine import TEX_TABLES, COMPARISON_TABLE, TWO_SIMS, THREE_SIMS, ADD_METHODS_ALL_PAR
import units as ut # for tmerg
from utils import *
for letter in "kusi":
print(letter),
''' lists of all the simulations '''
simulations = {"BLh":
{
"q=1.8": ["BLh_M10201856_M0_LK_SR"], # Prompt collapse
"q=1.7": ["BLh_M10651772_M0_LK_SR"], # stable
"q=1.4": ["BLh_M16351146_M0_LK_LR"],
"q=1.3": ["BLh_M11841581_M0_LK_SR"],
"q=1": ["BLh_M13641364_M0_LK_SR"]
},
"DD2":
{
"q=1": ["DD2_M13641364_M0_HR_R04", "DD2_M13641364_M0_LK_HR_R04",
"DD2_M13641364_M0_LK_LR_R04", "DD2_M13641364_M0_LK_SR_R04",
"DD2_M13641364_M0_LR", "DD2_M13641364_M0_LR_R04",
"DD2_M13641364_M0_SR", "DD2_M13641364_M0_SR_R04"],
"q=1.1": ["DD2_M14321300_M0_LR", "DD2_M14351298_M0_LR"],
"q=1.2": ["DD2_M14861254_M0_HR", "DD2_M14861254_M0_LR",
"DD2_M14971245_M0_HR", "DD2_M14971245_M0_SR",
"DD2_M14971246_M0_LR", "DD2_M15091235_M0_LK_HR",
"DD2_M15091235_M0_LK_SR"],
"q=1.4": ["DD2_M16351146_M0_LK_LR"]
},
"LS220":
{
"q=1": ["LS220_M13641364_M0_HR", #"LS220_M13641364_M0_LK_HR", # TOO short. 3ms
"LS220_M13641364_M0_LK_SR", "LS220_M13641364_M0_LK_SR_restart",
"LS220_M13641364_M0_LR", "LS220_M13641364_M0_SR"],
"q=1.1": ["LS220_M14001330_M0_HR", "LS220_M14001330_M0_SR",
"LS220_M14351298_M0_HR", "LS220_M14351298_M0_SR"],
"q=1.2": ["LS220_M14691268_M0_HR", "LS220_M14691268_M0_LK_HR",
"LS220_M14691268_M0_LK_SR", "LS220_M14691268_M0_LR",
"LS220_M14691268_M0_SR"],
"q=1.4": ["LS220_M16351146_M0_LK_LR", "LS220_M11461635_M0_LK_SR"],
"q=1.7": ["LS220_M10651772_M0_LK_LR"]
},
"SFHo":
{
"q=1": ["SFHo_M13641364_M0_HR", "SFHo_M13641364_M0_LK_HR",
"SFHo_M13641364_M0_LK_SR", #"SFHo_M13641364_M0_LK_SR_2019pizza", # failed
"SFHo_M13641364_M0_SR"],
"q=1.1":["SFHo_M14521283_M0_HR", "SFHo_M14521283_M0_LK_HR",
"SFHo_M14521283_M0_LK_SR", "SFHo_M14521283_M0_LK_SR_2019pizza",
"SFHo_M14521283_M0_SR"],
"q=1.4":["SFHo_M16351146_M0_LK_LR"]
},
"SLy4":
{
"q=1": [#"SLy4_M13641364_M0_HR", # precollapse
# "SLy4_M13641364_M0_LK_HR", # crap, absent tarball data
"SLy4_M13641364_M0_LK_LR", "SLy4_M13641364_M0_LK_SR",
# "SLy4_M13641364_M0_LR",
"SLy4_M13641364_M0_SR"],
"q=1.1":[#"SLy4_M14521283_M0_HR", unphysical and premerger
"SLy4_M14521283_M0_LR",
"SLy4_M14521283_M0_SR"]
}
}
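# Added illustrative helper (hypothetical, not used by the original script):
# flattens the nested {EOS: {q: [runs]}} dictionary above into a flat list of
# simulation names, which is effectively how the plotting routines below walk
# through it.
def _all_simulation_names(sim_dict=simulations):
    names = []
    for eos in sim_dict.keys():
        for q in sim_dict[eos].keys():
            names = names + sim_dict[eos][q]
    return names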
sims_err_lk_onoff = {
"def": {"sims":["DD2_M13641364_M0_LK_SR_R04", "DD2_M15091235_M0_LK_SR", "LS220_M14691268_M0_LK_SR", "SFHo_M14521283_M0_LK_SR"],
"lbls": ["DD2 136 136 LK", "DD2 151 123 LK", "LS220 147 127 LK", "SFHo 145 128 LK"],
"colors":["black", 'gray', 'red', "green"],
"lss":["-", '-', '-', '-'],
"lws":[1.,1.,1.,1.]},
"comp":{"sims":["DD2_M13641364_M0_SR_R04", "DD2_M14971245_M0_SR", "LS220_M14691268_M0_SR", "SFHo_M14521283_M0_SR"],
"lbls": ["DD2 136 136", "DD2 150 125", "LS220 147 127", "SFHo 145 128"],
"colors":["black", 'gray', 'red', "green"],
"lss":["--", '--', '--', '--'],
"lws":[1.,1.,1.,1.]},
}
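# The "def" entry lists the runs with the turbulent-viscosity (LK) treatment
# switched on and "comp" the matching runs without it; the two "sims" lists
# are ordered so that zipping them pairs every default run with its comparison
# run (the same pairing used for the viscosity error estimate in
# plot_total_fluxes_lk_on_off below).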
"""=================================================================================================================="""
''' ejecta summary '''
def plot_last_disk_mass_with_lambda(v_n_x, v_n_y, v_n, det=None, mask=None):
#
simlist = [
"BLh_M10651772_M0_LK_SR",
"BLh_M11841581_M0_LK_SR",
"BLh_M13641364_M0_LK_SR",
"BLh_M16351146_M0_LK_LR",
"BLh_M10201856_M0_LK_SR"] + [
"DD2_M13641364_M0_HR",
"DD2_M13641364_M0_HR_R04",
"DD2_M13641364_M0_LK_HR_R04",
"DD2_M14861254_M0_HR",
"DD2_M14971245_M0_HR",
"DD2_M15091235_M0_LK_HR",
"DD2_M11461635_M0_LK_SR",
"DD2_M13641364_M0_LK_SR_R04",
"DD2_M13641364_M0_SR",
"DD2_M13641364_M0_SR_R04",
"DD2_M14971245_M0_SR",
"DD2_M15091235_M0_LK_SR",
"DD2_M14321300_M0_LR",
"DD2_M14351298_M0_LR",
"DD2_M14861254_M0_LR",
"DD2_M14971246_M0_LR",
"DD2_M13641364_M0_LR",
"DD2_M13641364_M0_LR_R04",
"DD2_M13641364_M0_LK_LR_R04",
"DD2_M16351146_M0_LK_LR"] + [
"LS220_M13641364_M0_HR",
"LS220_M14001330_M0_HR",
"LS220_M14351298_M0_HR",
"LS220_M14691268_M0_HR",
"LS220_M14691268_M0_LK_HR",
"LS220_M13641364_M0_LK_SR",
"LS220_M13641364_M0_LK_SR_restart",
"LS220_M14691268_M0_SR",
"LS220_M13641364_M0_SR",
"LS220_M14001330_M0_SR",
"LS220_M14351298_M0_SR",
"LS220_M11461635_M0_LK_SR",
"LS220_M14691268_M0_LK_SR",
"LS220_M14691268_M0_LR",
"LS220_M13641364_M0_LR",
"LS220_M10651772_M0_LK_LR",
"LS220_M16351146_M0_LK_LR"] + [
# "SFHo_M10651772_M0_LK_LR", # premerger
# "SFHo_M11461635_M0_LK_SR", # too short. No dyn. ej
"SFHo_M13641364_M0_HR",
"SFHo_M13641364_M0_LK_HR",
"SFHo_M14521283_M0_HR",
"SFHo_M14521283_M0_LK_HR",
"SFHo_M13641364_M0_LK_SR",
"SFHo_M13641364_M0_LK_SR_2019pizza",
"SFHo_M13641364_M0_SR",
"SFHo_M14521283_M0_LK_SR",
"SFHo_M14521283_M0_LK_SR_2019pizza",
"SFHo_M14521283_M0_SR",
"SFHo_M16351146_M0_LK_LR"] + [
# "SLy4_M10651772_M0_LK_LR", # premerger
# "SLy4_M11461635_M0_LK_SR", # premerger
"SLy4_M13641364_M0_LK_SR",
# "SLy4_M13641364_M0_LR", # removed. Wrong
"SLy4_M13641364_M0_SR",
# "SLy4_M14521283_M0_HR",
# "SLy4_M14521283_M0_LR", # missing output-0012 Wring GW data (but good simulation)
"SLy4_M14521283_M0_SR",
"SLy4_M13641364_M0_LK_LR",
]
#
# v_n = "Mdisk3Dmax"
# v_n_x = "Lambda"
# v_n_y = "q"
# det = None
# mask = None
#
# --------------------------
if det != None and mask != None:
figname = "{}_{}_{}_{}_{}.png".format(v_n_x, v_n_y, v_n, det, mask)
else:
figname = "{}_{}_{}.png".format(v_n_x, v_n_y, v_n)
# --------------------------
eos_lambda = {}
data = {"LS220": {},
"DD2": {},
"BLh": {},
"SFHo": {},
"SLy4": {}}
for sim in simlist:
o_par = ADD_METHODS_ALL_PAR(sim)
o_init = LOAD_INIT_DATA(sim)
lam = o_init.get_par(v_n_x)
eos = o_init.get_par("EOS")
q = o_init.get_par(v_n_y)
if det != None and mask != None:
mdisk = o_par.get_outflow_par(det, mask, v_n)
else:
mdisk = o_par.get_par(v_n)
# tdisk = o_par.get_par("tdisk3D")
#
if sim.__contains__("_HR"):
lam = lam + 25.
elif sim.__contains__("_SR"):
lam = lam + 0.
elif sim.__contains__("_LR"):
lam = lam - 25.
else:
raise NameError("resolution of sim:{} is not recognized".format(sim))
#
for eos_ in data.keys():
if eos_ == eos:
if not np.isnan(mdisk):
if not eos in eos_lambda.keys():
eos_lambda[eos] = lam
data[eos][sim] = {}
Printcolor.green("sim: {}. v_n:{} is not nan".format(sim, v_n))
data[eos][sim][v_n_x] = float(lam)
data[eos][sim][v_n_y] = float(q)
data[eos][sim][v_n] = float(mdisk)
data[eos][sim]['eos'] = eos
else:
Printcolor.red("sim: {}, v_n:{} is nan".format(sim, v_n))
#
if det != None and mask != None and mask.__contains__("bern"):
tcoll = o_par.get_par("tcoll_gw")
for eos_ in data.keys():
if eos_ == eos:
if not np.isinf(tcoll):
Printcolor.green("tcoll != np.inf sim: {}".format(sim))
data[eos][sim]["tcoll_gw"] = float(tcoll)
else:
data[eos][sim]["tcoll_gw"] = np.inf
Printcolor.yellow("\ttcoll = np.inf sim: {}".format(sim))
# # # # #
# # # # #
for eos in data.keys():
# print(data[eos][sim]["Lambda"])
sims = data[eos].keys()
data[eos][v_n_x + 's'] = np.array([float(data[eos][sim][v_n_x]) for sim in sims])
data[eos][v_n_y + 's'] = np.array([float(data[eos][sim][v_n_y]) for sim in sims])
data[eos][v_n] = np.array([float(data[eos][sim][v_n]) for sim in sims])
if det != None and mask != None and mask.__contains__("bern"):
data[eos]["tcoll_gw"] = np.array([float(data[eos][sim]["tcoll_gw"]) for sim in sims])
# lams = [np.array([data[eos][sim]["Lambda"] for sim in data.keys()]) for eos in data.keys()]
# qs = [np.array([data[eos][sim]["q"] for sim in data.keys()]) for eos in data.keys()]
# dmasses = [np.array([data[eos][sim]["Mdisk3D"] for sim in data.keys()]) for eos in data.keys()]
#
#
#
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (4.2, 3.6) # <->, |]
o_plot.gen_set["figname"] = figname
o_plot.gen_set["sharex"] = True
o_plot.gen_set["sharey"] = False
o_plot.gen_set["subplots_adjust_h"] = 0.0
o_plot.gen_set["subplots_adjust_w"] = 0.0
o_plot.set_plot_dics = []
#
# lams2d, qs2d = np.meshgrid(lams, qs)
# dmasses2d = griddata(lams, qs, dmasses, lams2d, qs2d, interp='linear')
# print(lams2d)
# print(qs2d)
# print(dmasses2d)
# print(len(lams), len(qs), len(dmasses))
# qs1, qs2 = qs.min(), qs.max()
# lam1, lam2 = lams.min(), lams.max()
# qstep = 0.1
# lamstep = 100
# grid_q = np.arange(start=qs1, stop=qs2, step=qstep)
# grid_lam = np.arange(start=lam1, stop=lam2, step=lamstep)
# for eos in eos_lambda.keys():
# eos_dic = {
# 'task': 'text', 'ptype': 'cartesian',
# 'position': (1, 1),
# 'x': eos_lambda[eos], 'y': 1.5, 'text': eos,
# 'horizontalalignment': 'center',
# 'color': 'black', 'fs': 14
# }
# o_plot.set_plot_dics.append(eos_dic)
#
if det != None and mask != None and mask.__contains__("bern") and v_n.__contains__("Mej"):
for eos in data.keys():
for sim in simlist:
if sim in data[eos].keys():
x = data[eos][sim][v_n_x]
y = data[eos][sim][v_n_y]
tcoll = data[eos][sim]["tcoll_gw"]
arror_dic = {
'task': 'line', 'position': (1, 1), 'ptype': 'cartesian',
'xarr': x, "yarr": y,
'v_n_x': v_n_x, 'v_n_y': v_n_y, 'v_n': v_n,
'xmin': None, 'xmax': None, 'ymin': None, 'ymax': None,
'xscale': None, 'yscale': None,
'marker': 'o', "color": "black", 'annotate': None, 'ms': 1, 'arrow': "up",
'alpha': 1.0,
'fontsize': 12,
'labelsize': 12,
}
# if sim.__contains__("_LR"):
# arror_dic['marker'] = 'x'
# elif sim.__contains__("_SR"):
# arror_dic['marker'] = 'o'
# elif sim.__contains__("_HR"):
# arror_dic['marker'] = "d"
if not np.isinf(tcoll):
pass
# BH FORMED
# print("BH: {}".format(sim))
# arror_dic['arrow'] = None
# o_plot.set_plot_dics.append(arror_dic)
else:
# BH DOES NOT FORM
arror_dic['arrow'] = "up"
print("No BH: {}".format(sim))
o_plot.set_plot_dics.append(arror_dic)
for eos, marker in zip(data.keys(), ['^', '<', '>', 'v', 'd']):
lams_i = data[eos][v_n_x + 's']
qs_i = data[eos][v_n_y + 's']
dmasses_i = data[eos][v_n]
mss = [] # np.zeros(len(data[eos].keys()))
sr_x_arr = []
sr_y_arr = []
for i, sim in enumerate(data[eos].keys()):
if sim.__contains__("_LR"):
mss.append(40)
elif sim.__contains__("_SR"):
mss.append(55)
sr_x_arr.append(data[eos][sim][v_n_x])
sr_y_arr.append(data[eos][sim][v_n_y])
elif sim.__contains__("_HR"):
mss.append(70)
# SR line
sr_y_arr, sr_x_arr = UTILS.x_y_z_sort(sr_y_arr, sr_x_arr)
sr_line_dic = {
'task': 'line', 'position': (1, 1), 'ptype': 'cartesian',
'xarr': sr_x_arr, "yarr": sr_y_arr,
'v_n_x': v_n_x, 'v_n_y': v_n_y, 'v_n': v_n,
'xmin': None, 'xmax': None, 'ymin': None, 'ymax': None,
'xscale': None, 'yscale': None,
# 'marker': 'x', "color": "white", 'alpha':1., 'ms':5,#
'ls': ':', "color": "gray", 'lw': 0.5, 'ds': 'default', #
'alpha': 1.0,
'fontsize': 12,
'labelsize': 12,
}
o_plot.set_plot_dics.append(sr_line_dic)
# lr
lks = []
for i, sim in enumerate(data[eos].keys()):
if sim.__contains__("_LK_"):
lks.append("green")
else:
lks.append('none')
dic = {
'task': 'scatter', 'ptype': 'cartesian', # 'aspect': 1.,
'xarr': lams_i, "yarr": qs_i, "zarr": dmasses_i,
'position': (1, 1), # 'title': '[{:.1f} ms]'.format(time_),
'cbar': {'location': 'right .03 .0', 'label': Labels.labels(v_n), # 'fmt': '%.1f',
'labelsize': 14, 'fontsize': 14},
'v_n_x': v_n_x, 'v_n_y': v_n_y, 'v_n': v_n,
'xlabel': v_n_x, "ylabel": v_n_y, 'label': eos,
'xmin': 300, 'xmax': 900, 'ymin': 0.90, 'ymax': 2.1, 'vmin': 0.001, 'vmax': 0.40,
'fill_vmin': False, # fills the x < vmin with vmin
'xscale': None, 'yscale': None,
'cmap': 'inferno', 'norm': None, 'ms': mss, 'marker': marker, 'alpha': 0.7, "edgecolors": lks,
'fancyticks': True,
'minorticks': True,
'title': {},
'legend': {},
'sharey': False,
'sharex': True, # removes angular ticks
'fontsize': 14,
'labelsize': 14
}
#
if v_n.__contains__("Mdisk3D"):
dic["vmin"], dic["vmax"] = 0.001, 0.40
elif v_n.__contains__("Mej"):
dic["vmin"], dic["vmax"] = 0.001, 0.02
dic['norm'] = "log"
elif v_n.__contains__("Ye"):
dic['vmin'] = 0.1
dic['vmax'] = 0.4
elif v_n.__contains__("vel_inf"):
dic['vmin'] = 0.10
dic['vmax'] = 0.25
#
if eos == data.keys()[-1]:
dic['legend'] = {'loc': 'upper right', 'ncol': 3, 'fontsize': 10}
o_plot.set_plot_dics.append(dic)
# for sim in data.keys():
# eos_dic = {
# 'task': 'text', 'ptype': 'cartesian',
# 'position': (1, 1),
# 'x': data[sim]['Lambda'], 'y': data[sim]['q'], 'text': data[sim]['eos'],
# 'horizontalalignment': 'center',
# 'color': 'black', 'fs': 11
# }
# o_plot.set_plot_dics.append(eos_dic)
# disk_mass_dic = {
# 'task': 'colormesh', 'ptype': 'cartesian', #'aspect': 1.,
# 'xarr': lams2d, "yarr": qs2d, "zarr": dmasses2d,
# 'position': (1, 1), # 'title': '[{:.1f} ms]'.format(time_),
# 'cbar': {'location': 'right .03 .0', 'label': Labels.labels("Mdisk3D"), # 'fmt': '%.1f',
# 'labelsize': 14, 'fontsize': 14},
# 'v_n_x': 'x', 'v_n_y': 'z', 'v_n': "Mdisk3D",
# 'xlabel': 'Lambda', "ylabel": "q",
# 'xmin': 350, 'xmax': 860, 'ymin': 1.00, 'ymax': 1.6, 'vmin': 0.001, 'vmax': 0.40,
# 'fill_vmin': False, # fills the x < vmin with vmin
# 'xscale': None, 'yscale': None,
# 'mask': None, 'cmap': 'Greys', 'norm': "log",
# 'fancyticks': True,
# 'minorticks':True,
# 'title': {},
# 'sharey': False,
# 'sharex': False, # removes angular ticks
# 'fontsize': 14,
# 'labelsize': 14
# }
# o_plot.set_plot_dics.append(disk_mass_dic)
o_plot.main()
print("DONE")
exit(1)
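# Example invocations (sketches only; variable names are taken from the
# commented hints at the top of the function and assume the postprocessed data
# is available under Paths.ppr_sims):
# plot_last_disk_mass_with_lambda("Lambda", "q", "Mdisk3Dmax")
# plot_last_disk_mass_with_lambda("Lambda", "q", "Mej_tot", det=0, mask="bern_geoend")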
def plot_last_disk_mass_with_lambda2(v_n_x, v_n_y, v_n_col, mask_x=None, mask_y=None, mask_col=None, det=None,
plot_legend=True):
data = {"BLh": {}, "DD2": {}, "LS220": {}, "SFHo": {}, "SLy4": {}}
for eos in simulations.keys():
all_x_arr = []
all_y_arr = []
all_col_arr = []
all_res_arr = []
all_lk_arr = []
all_bh_arr = []
for q in simulations[eos].keys():
data[eos][q] = {}
#
x_arr = []
y_arr = []
col_arr = []
res_arr = []
lk_arr = []
bh_arr = []
for sim in simulations[eos][q]:
o_init = LOAD_INIT_DATA(sim)
o_par = ADD_METHODS_ALL_PAR(sim)
#
if v_n_x in o_init.list_v_ns and mask_x == None:
x_arr.append(o_init.get_par(v_n_x))
elif not v_n_x in o_init.list_v_ns and mask_x == None:
x_arr.append(o_par.get_par(v_n_x))
elif not v_n_x in o_init.list_v_ns and mask_x != None:
x_arr.append(o_par.get_outflow_par(det, mask_x, v_n_x))
else:
raise NameError("unrecognized: v_n_x:{} mask_x:{} det:{} combination"
.format(v_n_x, mask_x, det))
#
if v_n_y in o_init.list_v_ns and mask_y == None:
y_arr.append(o_init.get_par(v_n_y))
elif not v_n_y in o_init.list_v_ns and mask_y == None:
y_arr.append(o_par.get_par(v_n_y))
elif not v_n_y in o_init.list_v_ns and mask_y != None:
y_arr.append(o_par.get_outflow_par(det, mask_y, v_n_y))
else:
raise NameError("unrecognized: v_n_y:{} mask_x:{} det:{} combination"
.format(v_n_y, mask_y, det))
#
if v_n_col in o_init.list_v_ns and mask_col == None:
col_arr.append(o_init.get_par(v_n_col))
elif not v_n_col in o_init.list_v_ns and mask_col == None:
col_arr.append(o_par.get_par(v_n_col))
elif not v_n_col in o_init.list_v_ns and mask_col != None:
col_arr.append(o_par.get_outflow_par(det, mask_col, v_n_col))
else:
raise NameError("unrecognized: v_n_col:{} mask_x:{} det:{} combination"
.format(v_n_col, mask_col, det))
#
res = o_init.get_par("res")
if res == "HR": res_arr.append("v")
if res == "SR": res_arr.append("d")
if res == "LR": res_arr.append("^")
#
lk = o_init.get_par("vis")
if lk == "LK":
lk_arr.append("gray")
else:
lk_arr.append("none")
tcoll = o_par.get_par("tcoll_gw")
if not np.isinf(tcoll):
bh_arr.append("x")
else:
bh_arr.append(None)
#
#
data[eos][q][v_n_x] = x_arr
data[eos][q][v_n_y] = y_arr
data[eos][q][v_n_col] = col_arr
data[eos][q]["res"] = res_arr
data[eos][q]["vis"] = lk_arr
data[eos][q]["tcoll"] = bh_arr
#
all_x_arr = all_x_arr + x_arr
all_y_arr = all_y_arr + y_arr
all_col_arr = all_col_arr + col_arr
all_res_arr = all_res_arr + res_arr
all_lk_arr = all_lk_arr + lk_arr
all_bh_arr = all_bh_arr + bh_arr
#
data[eos][v_n_x + 's'] = all_x_arr
data[eos][v_n_y + 's'] = all_y_arr
data[eos][v_n_col + 's'] = all_col_arr
data[eos]["res" + 's'] = all_res_arr
data[eos]["vis" + 's'] = all_lk_arr
data[eos]["tcoll" + 's'] = all_bh_arr
#
#
figname = ''
if mask_x == None:
figname = figname + v_n_x + '_'
else:
figname = figname + v_n_x + '_' + mask_x + '_'
if mask_y == None:
figname = figname + v_n_y + '_'
else:
figname = figname + v_n_y + '_' + mask_y + '_'
if mask_col == None:
figname = figname + v_n_col + '_'
else:
figname = figname + v_n_col + '_' + mask_col + '_'
if det == None:
figname = figname + ''
else:
figname = figname + str(det)
figname = figname + '.png'
#
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (4.2, 3.6) # <->, |]
o_plot.gen_set["figname"] = figname
o_plot.gen_set["sharex"] = True
o_plot.gen_set["sharey"] = False
o_plot.gen_set["subplots_adjust_h"] = 0.0
o_plot.gen_set["subplots_adjust_w"] = 0.0
o_plot.set_plot_dics = []
#
#
i_col = 1
for eos in ["SLy4", "SFHo", "BLh", "LS220", "DD2"]:
print(eos)
# LEGEND
if eos == "DD2" and plot_legend:
for res in ["HR", "LR", "SR"]:
marker_dic_lr = {
'task': 'line', 'ptype': 'cartesian',
'position': (1, i_col),
'xarr': [-1], "yarr": [-1],
'xlabel': None, "ylabel": None,
'label': res,
'marker': 'd', 'color': 'gray', 'ms': 8, 'alpha': 1.,
'sharey': False,
'sharex': False, # removes angular ticks
'fontsize': 14,
'labelsize': 14
}
if res == "HR": marker_dic_lr['marker'] = "v"
if res == "SR": marker_dic_lr['marker'] = "d"
if res == "LR": marker_dic_lr['marker'] = "^"
# if res == "BH": marker_dic_lr['marker'] = "x"
if res == "SR":
marker_dic_lr['legend'] = {'loc': 'upper right', 'ncol': 1, 'fontsize': 12, 'shadow': False,
'framealpha': 0.5, 'borderaxespad': 0.0}
o_plot.set_plot_dics.append(marker_dic_lr)
#
xarr = np.array(data[eos][v_n_x + 's'])
yarr = np.array(data[eos][v_n_y + 's'])
colarr = data[eos][v_n_col + 's']
marker = data[eos]["res" + 's']
edgecolor = data[eos]["vis" + 's']
bh_marker = data[eos]["tcoll" + 's']
#
if v_n_y == "Mej_tot":
yarr = yarr * 1e2
#
#
#
dic_bh = {
'task': 'scatter', 'ptype': 'cartesian', # 'aspect': 1.,
'xarr': xarr, "yarr": yarr, "zarr": colarr,
'position': (1, i_col), # 'title': '[{:.1f} ms]'.format(time_),
'cbar': {},
'v_n_x': v_n_x, 'v_n_y': v_n_y, 'v_n': v_n_col,
'xlabel': None, "ylabel": None, 'label': eos,
'xmin': 300, 'xmax': 900, 'ymin': 0.03, 'ymax': 0.3, 'vmin': 1.0, 'vmax': 1.5,
'fill_vmin': False, # fills the x < vmin with vmin
'xscale': None, 'yscale': None,
'cmap': 'viridis', 'norm': None, 'ms': 80, 'marker': bh_marker, 'alpha': 1.0, "edgecolors": edgecolor,
'fancyticks': True,
'minorticks': True,
'title': {},
'legend': {},
'sharey': False,
'sharex': False, # removes angular ticks
'fontsize': 14,
'labelsize': 14
}
#
if mask_y != None and mask_y.__contains__("bern"):
o_plot.set_plot_dics.append(dic_bh)
#
#
#
print("marker: {}".format(marker))
dic = {
'task': 'scatter', 'ptype': 'cartesian', # 'aspect': 1.,
'xarr': xarr, "yarr": yarr, "zarr": colarr,
'position': (1, i_col), # 'title': '[{:.1f} ms]'.format(time_),
'cbar': {},
'v_n_x': v_n_x, 'v_n_y': v_n_y, 'v_n': v_n_col,
'xlabel': None, "ylabel": Labels.labels(v_n_y),
'xmin': 300, 'xmax': 900, 'ymin': 0.03, 'ymax': 0.3, 'vmin': 1.0, 'vmax': 1.8,
'fill_vmin': False, # fills the x < vmin with vmin
'xscale': None, 'yscale': None,
'cmap': 'viridis', 'norm': None, 'ms': 80, 'marker': marker, 'alpha': 0.8, "edgecolors": edgecolor,
'tick_params': {"axis": 'both', "which": 'both', "labelleft": True,
"labelright": False, # "tick1On":True, "tick2On":True,
"labelsize": 12,
"direction": 'in',
"bottom": True, "top": True, "left": True, "right": True},
'yaxiscolor': {'bottom': 'black', 'top': 'black', 'right': 'black', 'left': 'black'},
'minorticks': True,
'title': {"text": eos, "fontsize": 12},
'label': "xxx",
'legend': {},
'sharey': False,
'sharex': False, # removes angular ticks
'fontsize': 14,
'labelsize': 14
}
#
if v_n_y == "Mdisk3Dmax":
dic['ymin'], dic['ymax'] = 0.03, 0.30
if v_n_y == "Mej_tot" and mask_y == "geo":
dic['ymin'], dic['ymax'] = 0, 0.8
if v_n_y == "Mej_tot" and mask_y == "bern_geoend":
dic['ymin'], dic['ymax'] = 0, 3.2
if v_n_y == "Ye_ave" and mask_y == "geo":
dic['ymin'], dic['ymax'] = 0.1, 0.3
if v_n_y == "Ye_ave" and mask_y == "bern_geoend":
dic['ymin'], dic['ymax'] = 0.1, 0.4
if v_n_y == "vel_inf_ave" and mask_y == "geo":
dic['ymin'], dic['ymax'] = 0.1, 0.3
if v_n_y == "vel_inf_ave" and mask_y == "bern_geoend":
dic['ymin'], dic['ymax'] = 0.05, 0.25
#
if eos == "SLy4":
dic['xmin'], dic['xmax'] = 380, 420
dic['xticks'] = [400]
if eos == "SFHo":
dic['xmin'], dic['xmax'] = 400, 440
dic['xticks'] = [420]
if eos == "BLh":
dic['xmin'], dic['xmax'] = 520, 550
dic['xticks'] = [530]
if eos == "LS220":
dic['xmin'], dic['xmax'] = 690, 730
dic['xticks'] = [710]
if eos == "DD2":
dic['xmin'], dic['xmax'] = 830, 855
dic['xticks'] = [840]
if eos == "SLy4":
dic['tick_params']['right'] = False
dic['yaxiscolor']["right"] = "lightgray"
elif eos == "DD2":
dic['tick_params']['left'] = False
dic['yaxiscolor']["left"] = "lightgray"
else:
dic['tick_params']['left'] = False
dic['tick_params']['right'] = False
dic['yaxiscolor']["left"] = "lightgray"
dic['yaxiscolor']["right"] = "lightgray"
#
# if eos != "SLy4" and eos != "DD2":
# dic['yaxiscolor'] = {'left':'lightgray','right':'lightgray', 'label': 'black'}
# dic['ytickcolor'] = {'left':'lightgray','right':'lightgray'}
# dic['yminortickcolor'] = {'left': 'lightgray', 'right': 'lightgray'}
# elif eos == "DD2":
# dic['yaxiscolor'] = {'left': 'lightgray', 'right': 'black', 'label': 'black'}
# # dic['ytickcolor'] = {'left': 'lightgray'}
# # dic['yminortickcolor'] = {'left': 'lightgray'}
# elif eos == "SLy4":
# dic['yaxiscolor'] = {'left': 'black', 'right': 'lightgray', 'label': 'black'}
# # dic['ytickcolor'] = {'right': 'lightgray'}
# # dic['yminortickcolor'] = {'right': 'lightgray'}
#
if eos != "SLy4":
dic['sharey'] = True
if eos == "BLh":
dic['xlabel'] = Labels.labels(v_n_x)
if eos == 'DD2':
dic['cbar'] = {'location': 'right .03 .0', 'label': Labels.labels(v_n_col), # 'fmt': '%.1f',
'labelsize': 14, 'fontsize': 14}
#
i_col = i_col + 1
o_plot.set_plot_dics.append(dic)
#
#
o_plot.main()
# exit(0)
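# Example invocations (sketches only, mirroring the axis-limit branches above):
# plot_last_disk_mass_with_lambda2("Lambda", "Mdisk3Dmax", "q")
# plot_last_disk_mass_with_lambda2("Lambda", "Mej_tot", "q", mask_y="bern_geoend", det=0)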
''' timecorr '''
def plot_ejecta_time_corr_properites():
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (11.0, 3.6) # <->, |]
o_plot.gen_set["figname"] = "timecorrs_Ye_DD2_LS220_SLy_equalmass.png"
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = True
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["subplots_adjust_h"] = 0.3
o_plot.gen_set["subplots_adjust_w"] = 0.01
o_plot.set_plot_dics = []
det = 0
sims = ["DD2_M13641364_M0_LK_SR_R04", "BLh_M13641364_M0_LK_SR", "LS220_M13641364_M0_LK_SR",
"SLy4_M13641364_M0_LK_SR", "SFHo_M13641364_M0_LK_SR"]
lbls = ["DD2_M13641364_M0_LK_SR_R04", "BLh_M13641364_M0_LK_SR", "LS220_M13641364_M0_LK_SR",
"SLy4_M13641364_M0_LK_SR", "SFHo_M13641364_M0_LK_SR"]
masks = ["bern_geoend", "bern_geoend", "bern_geoend", "bern_geoend", "bern_geoend"]
# v_ns = ["vel_inf", "vel_inf", "vel_inf", "vel_inf", "vel_inf"]
v_ns = ["Y_e", "Y_e", "Y_e", "Y_e", "Y_e"]
i_x_plot = 1
for sim, lbl, mask, v_n in zip(sims, lbls, masks, v_ns):
fpath = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "timecorr_{}.h5".format(v_n)
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
dfile = h5py.File(fpath, "r")
timearr = np.array(dfile["time"])
v_n_arr = np.array(dfile[v_n])
mass = np.array(dfile["mass"])
corr_dic2 = { # relies on the "get_res_corr(self, it, v_n): " method of data object
'task': 'corr2d', 'dtype': 'corr', 'ptype': 'cartesian',
'xarr': timearr, 'yarr': v_n_arr, 'zarr': mass,
'position': (1, i_x_plot),
'v_n_x': "time", 'v_n_y': v_n, 'v_n': 'mass', 'normalize': True,
'cbar': {},
'cmap': 'inferno',
'xlabel': Labels.labels("time"), 'ylabel': Labels.labels(v_n),
'xmin': timearr[0], 'xmax': timearr[-1], 'ymin': None, 'ymax': None, 'vmin': 1e-4, 'vmax': 1e-1,
'xscale': "linear", 'yscale': "linear", 'norm': 'log',
'mask_below': None, 'mask_above': None,
'title': {}, # {"text": o_corr_data.sim.replace('_', '\_'), 'fontsize': 14},
'text': {'text': lbl.replace('_', '\_'), 'coords': (0.05, 0.9), 'color': 'white', 'fs': 12},
'fancyticks': True,
'minorticks': True,
'sharex': False, # removes angular ticks
'sharey': False,
'fontsize': 14,
'labelsize': 14
}
if i_x_plot > 1:
corr_dic2['sharey'] = True
# if i_x_plot == 1:
# corr_dic2['text'] = {'text': lbl.replace('_', '\_'), 'coords': (0.1, 0.9), 'color': 'white', 'fs': 14}
if sim == sims[-1]:
corr_dic2['cbar'] = {
'location': 'right .03 .0', 'label': Labels.labels("mass"), # 'fmt': '%.1f',
'labelsize': 14, 'fontsize': 14}
i_x_plot += 1
corr_dic2 = Limits.in_dic(corr_dic2)
o_plot.set_plot_dics.append(corr_dic2)
o_plot.main()
exit(1)
# plot_ejecta_time_corr_properites()
# def plot_total_fluxes_q1():
#
# o_plot = PLOT_MANY_TASKS()
# o_plot.gen_set["figdir"] = Paths.plots + "all2/"
# o_plot.gen_set["type"] = "cartesian"
# o_plot.gen_set["figsize"] = (9.0, 3.6) # <->, |]
# o_plot.gen_set["figname"] = "totfluxes_equalmasses.png"
# o_plot.gen_set["sharex"] = False
# o_plot.gen_set["sharey"] = True
# o_plot.gen_set["dpi"] = 128
# o_plot.gen_set["subplots_adjust_h"] = 0.3
# o_plot.gen_set["subplots_adjust_w"] = 0.01
# o_plot.set_plot_dics = []
#
# det = 0
#
# sims = ["DD2_M13641364_M0_LK_SR_R04", "BLh_M13641364_M0_LK_SR", "LS220_M13641364_M0_LK_SR", "SLy4_M13641364_M0_LK_SR", "SFHo_M13641364_M0_LK_SR"]
# lbls = ["DD2", "BLh", "LS220", "SLy4", "SFHo"]
# masks= ["bern_geoend", "bern_geoend", "bern_geoend", "bern_geoend", "bern_geoend"]
# colors=["black", "gray", "red", "blue", "green"]
# lss =["-", "-", "-", "-", "-"]
#
# i_x_plot = 1
# for sim, lbl, mask, color, ls in zip(sims, lbls, masks, colors, lss):
#
# fpath = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "total_flux.dat"
# if not os.path.isfile(fpath):
# raise IOError("File does not exist: {}".format(fpath))
#
# timearr, massarr = np.loadtxt(fpath,usecols=(0,2),unpack=True)
#
# plot_dic = {
# 'task': 'line', 'ptype': 'cartesian',
# 'position': (1, 1),
# 'xarr': timearr * 1e3, 'yarr': massarr * 1e2,
# 'v_n_x': "time", 'v_n_y': "mass",
# 'color': color, 'ls': ls, 'lw': 0.8, 'ds': 'default', 'alpha': 1.0,
# 'ymin': 0, 'ymax': 1.5, 'xmin': 15, 'xmax': 100,
# 'xlabel': Labels.labels("time"), 'ylabel': Labels.labels("ejmass"),
# 'label': lbl, 'yscale': 'linear',
# 'fancyticks': True, 'minorticks': True,
# 'fontsize': 14,
# 'labelsize': 14,
# 'legend': {} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
# }
# if sim == sims[-1]:
# plot_dic['legend'] = {'loc': 'best', 'ncol': 1, 'fontsize': 14}
#
# o_plot.set_plot_dics.append(plot_dic)
#
# #
# #
#
#
# i_x_plot += 1
# o_plot.main()
# exit(1)
# plot_total_fluxes_q1()
# def plot_total_fluxes_qnot1():
#
# o_plot = PLOT_MANY_TASKS()
# o_plot.gen_set["figdir"] = Paths.plots + "all2/"
# o_plot.gen_set["type"] = "cartesian"
# o_plot.gen_set["figsize"] = (9.0, 3.6) # <->, |]
# o_plot.gen_set["figname"] = "totfluxes_unequalmasses.png"
# o_plot.gen_set["sharex"] = False
# o_plot.gen_set["sharey"] = True
# o_plot.gen_set["dpi"] = 128
# o_plot.gen_set["subplots_adjust_h"] = 0.3
# o_plot.gen_set["subplots_adjust_w"] = 0.01
# o_plot.set_plot_dics = []
#
# det = 0
#
# sims = ["DD2_M15091235_M0_LK_SR", "LS220_M14691268_M0_LK_SR", "SFHo_M14521283_M0_LK_SR"]
# lbls = ["DD2 151 124", "LS220 150 127", "SFHo 145 128"]
# masks= ["bern_geoend", "bern_geoend", "bern_geoend"]
# colors=["black", "red", "green"]
# lss =["-", "-", "-"]
#
# i_x_plot = 1
# for sim, lbl, mask, color, ls in zip(sims, lbls, masks, colors, lss):
#
# fpath = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "total_flux.dat"
# if not os.path.isfile(fpath):
# raise IOError("File does not exist: {}".format(fpath))
#
# timearr, massarr = np.loadtxt(fpath,usecols=(0,2),unpack=True)
#
# plot_dic = {
# 'task': 'line', 'ptype': 'cartesian',
# 'position': (1, 1),
# 'xarr': timearr * 1e3, 'yarr': massarr * 1e2,
# 'v_n_x': "time", 'v_n_y': "mass",
# 'color': color, 'ls': ls, 'lw': 0.8, 'ds': 'default', 'alpha': 1.0,
# 'ymin': 0, 'ymax': 3.0, 'xmin': 15, 'xmax': 100,
# 'xlabel': Labels.labels("time"), 'ylabel': Labels.labels("ejmass"),
# 'label': lbl, 'yscale': 'linear',
# 'fancyticks': True, 'minorticks': True,
# 'fontsize': 14,
# 'labelsize': 14,
# 'legend': {} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
# }
# if sim == sims[-1]:
# plot_dic['legend'] = {'loc': 'best', 'ncol': 1, 'fontsize': 14}
#
# o_plot.set_plot_dics.append(plot_dic)
#
# #
# #
#
#
# i_x_plot += 1
# o_plot.main()
# exit(1)
# plot_total_fluxes_qnot1()
''' ejecta mass fluxes '''
def plot_total_fluxes_q1_and_qnot1(mask):
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (4.2, 3.6) # <->, |]
o_plot.gen_set["figname"] = "totfluxes_{}.png".format(mask)
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = True
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["subplots_adjust_h"] = 0.3
o_plot.gen_set["subplots_adjust_w"] = 0.01
o_plot.set_plot_dics = []
det = 0
# sims = ["DD2_M13641364_M0_LK_SR_R04", "BLh_M13641364_M0_LK_SR", "LS220_M13641364_M0_LK_SR", "SLy4_M13641364_M0_LK_SR", "SFHo_M13641364_M0_LK_SR"]
# lbls = ["DD2", "BLh", "LS220", "SLy4", "SFHo"]
# masks= [mask, mask, mask, mask, mask]
# colors=["black", "gray", "red", "blue", "green"]
# lss =["-", "-", "-", "-", "-"]
#
# sims += ["DD2_M15091235_M0_LK_SR", "LS220_M14691268_M0_LK_SR", "SFHo_M14521283_M0_LK_SR"]
# lbls += ["DD2 151 124", "LS220 150 127", "SFHo 145 128"]
# masks+= [mask, mask, mask, mask, mask]
# colors+=["black", "red", "green"]
# lss +=["--", "--", "--"]
sims = ["DD2_M14971245_M0_SR", "DD2_M13641364_M0_SR", "DD2_M15091235_M0_LK_SR", "BLh_M13641364_M0_LK_SR",
"LS220_M14691268_M0_LK_SR"]
lbls = [r"DD2_M14971245_M0_SR".replace('_', '\_'), r"DD2_M13641364_M0_SR".replace('_', '\_'),
r"DD2_M15091235_M0_LK_SR".replace('_', '\_'), r"BLh_M13641364_M0_LK_SR".replace('_', '\_'),
r"LS220_M14691268_M0_LK_SR".replace('_', '\_')]
masks = [mask, mask, mask, mask, mask]
colors = ["blue", "green", "cyan", "black", "red"]
lss = ["-", "-", "-", "-", '-']
# sims += ["DD2_M15091235_M0_LK_SR", "LS220_M14691268_M0_LK_SR"]
# lbls += ["DD2 151 124", "LS220 150 127"]
# masks+= [mask, mask]
# colors+=["blue", "red"]
# lss +=["--", "--"]
i_x_plot = 1
for sim, lbl, mask, color, ls in zip(sims, lbls, masks, colors, lss):
fpath = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "total_flux.dat"
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
timearr, massarr = np.loadtxt(fpath, usecols=(0, 2), unpack=True)
fpath = Paths.ppr_sims + sim + "/" + "waveforms/" + "tmerger.dat"
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
tmerg = np.float(np.loadtxt(fpath, unpack=True))
timearr = timearr - (tmerg * Constants.time_constant * 1e-3)
plot_dic = {
'task': 'line', 'ptype': 'cartesian',
'position': (1, 1),
'xarr': timearr * 1e3, 'yarr': massarr * 1e4,
'v_n_x': "time", 'v_n_y': "mass",
'color': color, 'ls': ls, 'lw': 0.8, 'ds': 'default', 'alpha': 1.0,
'xmin': 0, 'xmax': 110, 'ymin': 0, 'ymax': 2.5,
'xlabel': Labels.labels("t-tmerg"), 'ylabel': Labels.labels("ejmass4"),
'label': lbl, 'yscale': 'linear',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'legend': {'loc': 'best', 'ncol': 1, 'fontsize': 11} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
}
if mask == "geo": plot_dic["ymax"] = 1.
if sim == sims[-1]:
plot_dic['legend'] = {'loc': 'best', 'ncol': 1, 'fontsize': 12}
o_plot.set_plot_dics.append(plot_dic)
#
#
i_x_plot += 1
o_plot.main()
exit(1)
# plot_total_fluxes_q1_and_qnot1(mask="bern_geoend")
# plot_total_fluxes_q1_and_qnot1(mask="geo")
def plot_total_fluxes_lk_on_off(mask):
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (9.0, 3.6) # <->, |]
o_plot.gen_set["figname"] = "totfluxes_lk_{}.png".format(mask)
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = True
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["subplots_adjust_h"] = 0.3
o_plot.gen_set["subplots_adjust_w"] = 0.01
o_plot.set_plot_dics = []
det = 0
# plus LK
sims = ["DD2_M13641364_M0_LK_SR_R04", "DD2_M15091235_M0_LK_SR", "LS220_M14691268_M0_LK_SR",
"SFHo_M14521283_M0_LK_SR"]
lbls = ["DD2 136 136 LK", "DD2 151 123 LK", "LS220 147 127 LK", "SFHo 145 128 LK"]
masks = [mask, mask, mask, mask]
colors = ["black", 'gray', 'red', "green"]
lss = ["-", '-', '-', '-']
# minus LK
sims2 = ["DD2_M13641364_M0_SR_R04", "DD2_M14971245_M0_SR", "LS220_M14691268_M0_SR", "SFHo_M14521283_M0_SR"]
lbls2 = ["DD2 136 136", "DD2 150 125", "LS220 147 127", "SFHo 145 128"]
masks2 = [mask, mask, mask, mask]
colors2 = ["black", 'gray', 'red', "green"]
lss2 = ["--", '--', '--', '--']
sims += sims2
lbls += lbls2
masks += masks2
colors += colors2
lss += lss2
i_x_plot = 1
for sim, lbl, mask, color, ls in zip(sims, lbls, masks, colors, lss):
fpath = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "total_flux.dat"
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
timearr, massarr = np.loadtxt(fpath, usecols=(0, 2), unpack=True)
fpath = Paths.ppr_sims + sim + "/" + "waveforms/" + "tmerger.dat"
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
tmerg = np.float(np.loadtxt(fpath, unpack=True))
timearr = timearr - (tmerg * Constants.time_constant * 1e-3)
plot_dic = {
'task': 'line', 'ptype': 'cartesian',
'position': (1, 1),
'xarr': timearr * 1e3, 'yarr': massarr * 1e2,
'v_n_x': "time", 'v_n_y': "mass",
'color': color, 'ls': ls, 'lw': 0.8, 'ds': 'default', 'alpha': 1.0,
'xmin': 0, 'xmax': 110, 'ymin': 0, 'ymax': 3.0,
'xlabel': Labels.labels("t-tmerg"), 'ylabel': Labels.labels("ejmass"),
'label': lbl, 'yscale': 'linear',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'legend': {} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
}
if mask == "geo": plot_dic["ymax"] = 1.
if sim == sims[-1]:
plot_dic['legend'] = {'loc': 'best', 'ncol': 2, 'fontsize': 14}
o_plot.set_plot_dics.append(plot_dic)
#
#
i_x_plot += 1
o_plot.main()
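# Viscosity error estimate: for every (LK, no-LK) pair the two cumulative-mass
# curves are evaluated at the latest time covered by both runs and the relative
# difference is reported in percent. The helper below is an added illustrative
# sketch of that measure; the original loop that follows inlines the same steps.
def _relative_mass_difference(time1, mass1, time2, mass2):
    tmax = min(time1[-1], time2[-1])  # latest common (post-merger) time
    m1 = mass1[UTILS.find_nearest_index(time1, tmax)]
    m2 = mass2[UTILS.find_nearest_index(time2, tmax)]
    return 100. * np.abs(m1 - m2) / m1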
errs = {}
for sim1, mask1, sim2, mask2 in zip(sims, masks, sims2, masks2):
errs[sim1] = {}
print(" --------------| {} |---------------- ".format(sim1.split('_')[0]))
# loading times
fpath1 = Paths.ppr_sims + sim1 + "/" + "outflow_{}/".format(det) + mask1 + '/' + "total_flux.dat"
if not os.path.isfile(fpath1):
raise IOError("File does not exist: {}".format(fpath1))
timearr1, massarr1 = np.loadtxt(fpath1, usecols=(0, 2), unpack=True)
# loading tmerg
fpath1 = Paths.ppr_sims + sim1 + "/" + "waveforms/" + "tmerger.dat"
if not os.path.isfile(fpath1):
raise IOError("File does not exist: {}".format(fpath1))
tmerg1 = np.float(np.loadtxt(fpath1, unpack=True))
timearr1 = timearr1 - (tmerg1 * Constants.time_constant * 1e-3)
# loading times
fpath2 = Paths.ppr_sims + sim2 + "/" + "outflow_{}/".format(det) + mask2 + '/' + "total_flux.dat"
if not os.path.isfile(fpath2):
raise IOError("File does not exist: {}".format(fpath2))
timearr2, massarr2 = np.loadtxt(fpath2, usecols=(0, 2), unpack=True)
# loading tmerg
fpath2 = Paths.ppr_sims + sim2 + "/" + "waveforms/" + "tmerger.dat"
if not os.path.isfile(fpath2):
raise IOError("File does not exist: {}".format(fpath2))
tmerg2 = np.float(np.loadtxt(fpath2, unpack=True))
timearr2 = timearr2 - (tmerg2 * Constants.time_constant * 1e-3)
# estimating tmax
tmax = np.array([timearr1[-1], timearr2[-1]]).min()
assert tmax <= timearr1.max()
assert tmax <= timearr2.max()
m1 = massarr1[UTILS.find_nearest_index(timearr1, tmax)]
m2 = massarr2[UTILS.find_nearest_index(timearr2, tmax)]
# print(" --------------| {} |---------------- ".format(sim1.split('_')[0]))
print(" tmax: {:.1f} [ms]".format(tmax * 1e3))
# print(" \n")
print(" sim1: {} ".format(sim1))
print(" timearr1[-1]: {:.1f} [ms]".format(timearr1[-1] * 1e3))
print(" mass1[-1] {:.2f} [1e-2Msun]".format(massarr1[-1] * 1e2))
print(" m1[tmax] {:.2f} [1e-2Msun]".format(m1 * 1e2))
# print(" \n")
print(" sim1: {} ".format(sim2))
print(" timearr1[-1]: {:.1f} [ms]".format(timearr2[-1] * 1e3))
print(" mass1[-1] {:.2f} [1e-2Msun]".format(massarr2[-1] * 1e2))
print(" m2[tmax] {:.2f} [1e-2Msun]".format(m2 * 1e2))
# print(" \n")
print(" abs(m1-m2)/m1 {:.1f} [%]".format(100 * np.abs(m1 - m2) / m1))
print(" ---------------------------------------- ")
errs[sim1]["sim1"] = sim1
errs[sim1]["sim2"] = sim2
errs[sim1]["tmax"] = tmax * 1e3
errs[sim1]["m1"] = m1 * 1e2
errs[sim1]["m2"] = m2 * 1e2
errs[sim1]["err"] = 100 * np.abs(m1 - m2) / m1
# table
# sims = ['DD2_M13641364_M0_SR', 'LS220_M13641364_M0_SR', 'SLy4_M13641364_M0_SR']
# v_ns = ["EOS", "M1", "M2", 'Mdisk3D', 'Mej', 'Yeej', 'vej', 'Mej_bern', 'Yeej_bern', 'vej_bern']
# precs = ["str", "1.2", "1.2", ".4", ".4", ".4", ".4", ".4", ".4", ".4"]
print('\n')
cols = ["sim1", "sim2", "m1", "m2", "tmax", "err"]
units_dic = {"sim1": "", "sim2": "", "m1": "$[10^{-2} M_{\odot}]$", "m2": "$[10^{-2} M_{\odot}]$", "tmax": "[ms]",
"err": r"[\%]"}
lbl_dic = {"sim1": "Default Run", "sim2": "Comparison Run", "m1": r"$M_{\text{ej}}^a$", "m2": r"$M_{\text{ej}}^b$",
"tmax": r"$t_{\text{max}}$", "err": r"$\Delta$"}
precs = ["", "", ".2f", ".2f", ".1f", "d"]
size = '{'
head = ''
for i, v_n in enumerate(cols):
v_n = lbl_dic[v_n]
size = size + 'c'
head = head + '{}'.format(v_n)
if i != len(cols) - 1: size = size + ' '
if i != len(cols) - 1: head = head + ' & '
size = size + '}'
unit_bar = ''
for v_n in cols:
if v_n in units_dic.keys():
unit = units_dic[v_n]
else:
unit = v_n
unit_bar = unit_bar + '{}'.format(unit)
if v_n != cols[-1]: unit_bar = unit_bar + ' & '
head = head + ' \\\\' # = \\
unit_bar = unit_bar + ' \\\\ '
print(errs[sims[0]])
print('\n')
print('\\begin{table*}[t]')
print('\\begin{center}')
print('\\begin{tabular}' + '{}'.format(size))
print('\\hline')
print(head)
print(unit_bar)
print('\\hline\\hline')
for sim1, mask1, sim2, mask2 in zip(sims, masks, sims2, masks2):
row = ''
for v_n, prec in zip(cols, precs):
if prec != "":
val = "%{}".format(prec) % errs[sim1][v_n]
else:
val = errs[sim1][v_n].replace("_", "\_")
row = row + val
if v_n != cols[-1]: row = row + ' & '
row = row + ' \\\\' # = \\
print(row)
print(r'\hline')
print(r'\end{tabular}')
print(r'\end{center}')
print(r'\caption{' + r'Viscosity effect on the ejected material total cumulative mass. Criterion {} '
.format(mask.replace('_', '\_')) +
r'$\Delta = |M_{\text{ej}}^a - M_{\text{ej}}^b| / M_{\text{ej}}^a |_{tmax} $ }')
print(r'\label{tbl:1}')
print(r'\end{table*}')
exit(1)
# plot_total_fluxes_lk_on_off(mask="bern_geoend")
# plot_total_fluxes_lk_on_off("geo")
def plot_total_fluxes_lk_on_resolution(mask):
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (9.0, 3.6) # <->, |]
o_plot.gen_set["figname"] = "totfluxes_lk_res_{}.png".format(mask)
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = True
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["subplots_adjust_h"] = 0.3
o_plot.gen_set["subplots_adjust_w"] = 0.01
o_plot.set_plot_dics = []
det = 0
# HR # LS220_M13641364_M0_LK_HR
sims_hr = ["DD2_M13641364_M0_LK_HR_R04", "DD2_M15091235_M0_LK_HR", "", "LS220_M14691268_M0_LK_HR",
"SFHo_M13641364_M0_LK_HR", "SFHo_M14521283_M0_LK_HR"]
lbl_hr = ["DD2 136 136 HR", "DD2 151 124 HR", "LS220 136 136 HR", "LS220 147 127 HR", "SFHo 136 136 HR",
"SFHo 145 128 HR"]
color_hr = ["black", "gray", "orange", "red", "green", "lightgreen"]
masks_hr = [mask, mask, mask, mask, mask, mask]
lss_hr = ['--', '--', '--', '--', "--", "--"]
# SR
sims_sr = ["DD2_M13641364_M0_LK_SR_R04", "DD2_M15091235_M0_LK_SR", "LS220_M13641364_M0_LK_SR",
"LS220_M14691268_M0_LK_SR", "SFHo_M13641364_M0_LK_SR", "SFHo_M14521283_M0_LK_SR"]
lbl_sr = ["DD2 136 136 SR", "DD2 151 124 SR", "LS220 136 136 SR", "LS220 147 127 SR", "SFHo 136 136 SR",
"SFHo 145 128 SR"]
color_sr = ["black", "gray", "orange", "red", "green", "lightgreen"]
masks_sr = [mask, mask, mask, mask, mask, mask]
lss_sr = ['-', '-', '-', '-', '-', '-']
# LR
sims_lr = ["DD2_M13641364_M0_LK_LR_R04", "", "", "", "", ""]
lbl_lr = ["DD2 136 136 LR", "DD2 151 124 LR", "LS220 136 136 LR", "LS220 147 127 LR", "SFHo 136 136 LR",
"SFHo 145 128 LR"]
color_lr = ["black", "gray", "orange", "red", "green", "lightgreen"]
masks_lr = [mask, mask, mask, mask, mask, mask]
lss_lr = [':', ':', ":", ":", ":", ":"]
# plus
sims = sims_hr + sims_lr + sims_sr
lsls = lbl_hr + lbl_lr + lbl_sr
colors = color_hr + color_lr + color_sr
masks = masks_hr + masks_lr + masks_sr
lss = lss_hr + lss_lr + lss_sr
i_x_plot = 1
for sim, lbl, mask, color, ls in zip(sims, lsls, masks, colors, lss):
if sim != "":
fpath = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "total_flux.dat"
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
timearr, massarr = np.loadtxt(fpath, usecols=(0, 2), unpack=True)
fpath = Paths.ppr_sims + sim + "/" + "waveforms/" + "tmerger.dat"
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
tmerg = np.float(np.loadtxt(fpath, unpack=True))
timearr = timearr - (tmerg * Constants.time_constant * 1e-3)
plot_dic = {
'task': 'line', 'ptype': 'cartesian',
'position': (1, 1),
'xarr': timearr * 1e3, 'yarr': massarr * 1e2,
'v_n_x': "time", 'v_n_y': "mass",
'color': color, 'ls': ls, 'lw': 0.8, 'ds': 'default', 'alpha': 1.0,
'xmin': 0, 'xmax': 110, 'ymin': 0, 'ymax': 3.0,
'xlabel': Labels.labels("t-tmerg"), 'ylabel': Labels.labels("ejmass"),
'label': lbl, 'yscale': 'linear',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'legend': {} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
}
if mask == "geo": plot_dic["ymax"] = 1.
# print(sim, sims[-1])
if sim == sims[-1]:
plot_dic['legend'] = {'loc': 'best', 'ncol': 2, 'fontsize': 12}
o_plot.set_plot_dics.append(plot_dic)
i_x_plot += 1
o_plot.main()
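# Resolution comparison for the LK runs: each SR run is matched against its HR
# counterpart when available (LR otherwise) and the cumulative ejecta masses
# are compared at the latest common post-merger time, as in the viscosity
# comparison above.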
for sim_hr, sim_sr, sim_lr, mask_hr, mask_sr, mask_lr in \
zip(sims_hr, sims_sr, sims_lr, masks_hr, masks_sr, masks_lr):
def_sim = sim_sr
def_mask = mask_sr
def_res = "SR"
if sim_hr != "":
comp_res = "HR"
comp_sim = sim_hr
comp_mask = mask_hr
elif sim_lr != "":
comp_res = "LR"
comp_sim = sim_lr
comp_mask = mask_lr
else:
raise ValueError("neither HR nor LR is available")
# loading times
fpath1 = Paths.ppr_sims + def_sim + "/" + "outflow_{}/".format(det) + def_mask + '/' + "total_flux.dat"
if not os.path.isfile(fpath1):
raise IOError("File does not exist: {}".format(fpath1))
timearr1, massarr1 = np.loadtxt(fpath1, usecols=(0, 2), unpack=True)
# loading tmerg
fpath1 = Paths.ppr_sims + def_sim + "/" + "waveforms/" + "tmerger.dat"
if not os.path.isfile(fpath1):
raise IOError("File does not exist: {}".format(fpath1))
tmerg1 = np.float(np.loadtxt(fpath1, unpack=True))
timearr1 = timearr1 - (tmerg1 * Constants.time_constant * 1e-3)
# loading times
fpath2 = Paths.ppr_sims + comp_sim + "/" + "outflow_{}/".format(det) + comp_mask + '/' + "total_flux.dat"
if not os.path.isfile(fpath2):
raise IOError("File does not exist: {}".format(fpath2))
timearr2, massarr2 = np.loadtxt(fpath2, usecols=(0, 2), unpack=True)
# loading tmerg
fpath2 = Paths.ppr_sims + comp_sim + "/" + "waveforms/" + "tmerger.dat"
if not os.path.isfile(fpath2):
raise IOError("File does not exist: {}".format(fpath2))
tmerg2 = np.float(np.loadtxt(fpath2, unpack=True))
timearr2 = timearr2 - (tmerg2 * Constants.time_constant * 1e-3)
# estimating tmax
tmax = np.array([timearr1[-1], timearr2[-1]]).min()
assert tmax <= timearr1.max()
assert tmax <= timearr2.max()
m1 = massarr1[UTILS.find_nearest_index(timearr1, tmax)]
m2 = massarr2[UTILS.find_nearest_index(timearr2, tmax)]
# print(" --------------| {} |---------------- ".format(sim1.split('_')[0]))
print(" tmax: {:.1f} [ms]".format(tmax * 1e3))
# print(" \n")
print(" Resolution: {} ".format(def_res))
print(" sim1: {} ".format(def_sim))
print(" timearr1[-1]: {:.1f} [ms]".format(timearr1[-1] * 1e3))
print(" mass1[-1] {:.2f} [1e-2Msun]".format(massarr1[-1] * 1e2))
print(" m1[tmax] {:.2f} [1e-2Msun]".format(m1 * 1e2))
# print(" \n")
print("\nResolution: {} ".format(comp_res))
print(" sim1: {} ".format(comp_sim))
print(" timearr1[-1]: {:.1f} [ms]".format(timearr2[-1] * 1e3))
print(" mass1[-1] {:.2f} [1e-2Msun]".format(massarr2[-1] * 1e2))
print(" m2[tmax] {:.2f} [1e-2Msun]".format(m2 * 1e2))
# print(" \n")
print(" abs(m1-m2)/m1 {:.1f} [%]".format(100 * np.abs(m1 - m2) / m1))
print(" ---------------------------------------- ")
#
# print(" --------------| {} |---------------- ".format(sim1.split('_')[0]))
#
# # loading times
# fpath1 = Paths.ppr_sims + sim1 + "/" + "outflow_{}/".format(det) + mask1 + '/' + "total_flux.dat"
# if not os.path.isfile(fpath1):
# raise IOError("File does not exist: {}".format(fpath1))
#
# timearr1, massarr1 = np.loadtxt(fpath1, usecols=(0, 2), unpack=True)
#
# # loading tmerg
# fpath1 = Paths.ppr_sims + sim1 + "/" + "waveforms/" + "tmerger.dat"
# if not os.path.isfile(fpath1):
# raise IOError("File does not exist: {}".format(fpath1))
# tmerg1 = np.float(np.loadtxt(fpath1, unpack=True))
# timearr1 = timearr1 - (tmerg1 * Constants.time_constant * 1e-3)
#
# # loading times
# fpath2 = Paths.ppr_sims + sim2 + "/" + "outflow_{}/".format(det) + mask2 + '/' + "total_flux.dat"
# if not os.path.isfile(fpath2):
# raise IOError("File does not exist: {}".format(fpath2))
#
# timearr2, massarr2 = np.loadtxt(fpath2, usecols=(0, 2), unpack=True)
#
# # loading tmerg
# fpath2 = Paths.ppr_sims + sim2 + "/" + "waveforms/" + "tmerger.dat"
# if not os.path.isfile(fpath2):
# raise IOError("File does not exist: {}".format(fpath2))
# tmerg2 = np.float(np.loadtxt(fpath2, unpack=True))
# timearr2 = timearr2 - (tmerg2 * Constants.time_constant * 1e-3)
#
# # estimating tmax
# tmax = np.array([timearr1[-1], timearr2[-1]]).min()
# assert tmax <= timearr1.max()
# assert tmax <= timearr2.max()
# m1 = massarr1[UTILS.find_nearest_index(timearr1, tmax)]
# m2 = massarr2[UTILS.find_nearest_index(timearr2, tmax)]
#
# # print(" --------------| {} |---------------- ".format(sim1.split('_')[0]))
# print(" tmax: {:.1f} [ms]".format(tmax*1e3))
# # print(" \n")
# print(" sim1: {} ".format(sim1))
# print(" timearr1[-1]: {:.1f} [ms]".format(timearr1[-1]*1e3))
# print(" mass1[-1] {:.2f} [1e-2Msun]".format(massarr1[-1]*1e2))
# print(" m1[tmax] {:.2f} [1e-2Msun]".format(m1 * 1e2))
# # print(" \n")
# print(" sim1: {} ".format(sim2))
# print(" timearr1[-1]: {:.1f} [ms]".format(timearr2[-1]*1e3))
# print(" mass1[-1] {:.2f} [1e-2Msun]".format(massarr2[-1]*1e2))
# print(" m2[tmax] {:.2f} [1e-2Msun]".format(m2 * 1e2))
# # print(" \n")
# print(" abs(m1-m2)/m1 {:.1f} [%]".format(100 * np.abs(m1 - m2) / m1))
# print(" ---------------------------------------- ")
exit(1)
# plot_total_fluxes_lk_on_resolution(mask="geo_geoend")
# plot_total_fluxes_lk_on_resolution(mask="geo")
def plot_total_fluxes_lk_off_resolution(mask):
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (9.0, 3.6) # <->, |]
o_plot.gen_set["figname"] = "totfluxes_res_{}.png".format(mask)
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = True
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["subplots_adjust_h"] = 0.3
o_plot.gen_set["subplots_adjust_w"] = 0.01
o_plot.set_plot_dics = []
det = 0
# HR "DD2_M13641364_M0_HR_R04"
sims_hr = ["", "DD2_M14971245_M0_HR", "LS220_M13641364_M0_HR", "LS220_M14691268_M0_HR", "SFHo_M13641364_M0_HR",
"SFHo_M14521283_M0_HR"]
lbl_hr = ["DD2 136 136 HR", "DD2 150 125 HR", "LS220 136 136 HR", "LS220 147 127 HR", "SFHo 136 136 HR",
"SFHo 145 128 HR"]
color_hr = ["black", "gray", "orange", "red", "lightgreen", "green"]
masks_hr = [mask, mask, mask, mask, mask, mask]
lss_hr = ['--', '--', '--', '--', '--', '--']
# SR
sims_sr = ["DD2_M13641364_M0_SR_R04", "DD2_M14971245_M0_SR", "LS220_M13641364_M0_SR", "LS220_M14691268_M0_SR",
"SFHo_M13641364_M0_SR", "SFHo_M14521283_M0_SR"]
lbl_sr = ["DD2 136 136 SR", "DD2 150 125 SR", "LS220 136 136 SR", "LS220 147 127 SR", "SFHo 136 136 SR",
"SFHo 145 128 SR"]
color_sr = ["black", "gray", "orange", "red", "lightgreen", "green"]
masks_sr = [mask, mask, mask, mask, mask, mask]
lss_sr = ['-', '-', '-', '-', '-', '-']
# LR
sims_lr = ["DD2_M13641364_M0_LR_R04", "DD2_M14971246_M0_LR", "LS220_M13641364_M0_LR", "LS220_M14691268_M0_LR", "",
""]
lbl_lr = ["DD2 136 136 LR", "DD2 150 125 LR", "LS220 136 136 LR", "LS220 147 127 LR", "SFHo 136 136 LR",
"SFHo 145 128 LR"]
color_lr = ["black", "gray", "orange", "red", "lightgreen", "green"]
masks_lr = [mask, mask, mask, mask, mask, mask]
lss_lr = [':', ':', ':', ':', ':', ':']
# plus
sims = sims_hr + sims_lr + sims_sr
lsls = lbl_hr + lbl_lr + lbl_sr
colors = color_hr + color_lr + color_sr
masks = masks_hr + masks_lr + masks_sr
lss = lss_hr + lss_lr + lss_sr
i_x_plot = 1
for sim, lbl, mask, color, ls in zip(sims, lsls, masks, colors, lss):
if sim != "":
fpath = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "total_flux.dat"
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
timearr, massarr = np.loadtxt(fpath, usecols=(0, 2), unpack=True)
fpath = Paths.ppr_sims + sim + "/" + "waveforms/" + "tmerger.dat"
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
            tmerg = float(np.loadtxt(fpath, unpack=True))  # np.float is removed in recent numpy
timearr = timearr - (tmerg * Constants.time_constant * 1e-3)
plot_dic = {
'task': 'line', 'ptype': 'cartesian',
'position': (1, 1),
'xarr': timearr * 1e3, 'yarr': massarr * 1e2,
'v_n_x': "time", 'v_n_y': "mass",
'color': color, 'ls': ls, 'lw': 0.8, 'ds': 'default', 'alpha': 1.0,
'xmin': 0, 'xmax': 110, 'ymin': 0, 'ymax': 3.0,
'xlabel': Labels.labels("t-tmerg"), 'ylabel': Labels.labels("ejmass"),
'label': lbl, 'yscale': 'linear',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'legend': {} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
}
# print(sim, sims[-1])
if mask == "geo": plot_dic["ymax"] = 1.
if sim == sims[-1]:
plot_dic['legend'] = {'loc': 'best', 'ncol': 3, 'fontsize': 12}
o_plot.set_plot_dics.append(plot_dic)
i_x_plot += 1
o_plot.main()
for sim_hr, sim_sr, sim_lr, mask_hr, mask_sr, mask_lr in \
zip(sims_hr, sims_sr, sims_lr, masks_hr, masks_sr, masks_lr):
def_sim = sim_sr
def_mask = mask_sr
def_res = "SR"
if sim_hr != "":
comp_res = "HR"
comp_sim = sim_hr
comp_mask = mask_hr
elif sim_lr != "":
comp_res = "LR"
comp_sim = sim_lr
comp_mask = mask_lr
else:
raise ValueError("neither HR nor LR is available")
assert comp_sim != ""
# loading times
fpath1 = Paths.ppr_sims + def_sim + "/" + "outflow_{}/".format(det) + def_mask + '/' + "total_flux.dat"
if not os.path.isfile(fpath1):
raise IOError("File does not exist: {}".format(fpath1))
timearr1, massarr1 = np.loadtxt(fpath1, usecols=(0, 2), unpack=True)
# loading tmerg
fpath1 = Paths.ppr_sims + def_sim + "/" + "waveforms/" + "tmerger.dat"
if not os.path.isfile(fpath1):
raise IOError("File does not exist: {}".format(fpath1))
        tmerg1 = float(np.loadtxt(fpath1, unpack=True))  # np.float is removed in recent numpy
timearr1 = timearr1 - (tmerg1 * Constants.time_constant * 1e-3)
# loading times
fpath2 = Paths.ppr_sims + comp_sim + "/" + "outflow_{}/".format(det) + comp_mask + '/' + "total_flux.dat"
if not os.path.isfile(fpath2):
raise IOError("File does not exist: {}".format(fpath2))
timearr2, massarr2 = np.loadtxt(fpath2, usecols=(0, 2), unpack=True)
# loading tmerg
fpath2 = Paths.ppr_sims + comp_sim + "/" + "waveforms/" + "tmerger.dat"
if not os.path.isfile(fpath2):
raise IOError("File does not exist: {}".format(fpath2))
        tmerg2 = float(np.loadtxt(fpath2, unpack=True))  # np.float is removed in recent numpy
timearr2 = timearr2 - (tmerg2 * Constants.time_constant * 1e-3)
# estimating tmax
tmax = np.array([timearr1[-1], timearr2[-1]]).min()
assert tmax <= timearr1.max()
assert tmax <= timearr2.max()
m1 = massarr1[UTILS.find_nearest_index(timearr1, tmax)]
m2 = massarr2[UTILS.find_nearest_index(timearr2, tmax)]
# print(" --------------| {} |---------------- ".format(sim1.split('_')[0]))
print(" tmax: {:.1f} [ms]".format(tmax * 1e3))
# print(" \n")
print(" Resolution: {} ".format(def_res))
print(" sim1: {} ".format(def_sim))
print(" timearr1[-1]: {:.1f} [ms]".format(timearr1[-1] * 1e3))
print(" mass1[-1] {:.2f} [1e-2Msun]".format(massarr1[-1] * 1e2))
print(" m1[tmax] {:.2f} [1e-2Msun]".format(m1 * 1e2))
# print(" \n")
print("\nResolution: {} ".format(comp_res))
print(" sim1: {} ".format(comp_sim))
print(" timearr1[-1]: {:.1f} [ms]".format(timearr2[-1] * 1e3))
print(" mass1[-1] {:.2f} [1e-2Msun]".format(massarr2[-1] * 1e2))
print(" m2[tmax] {:.2f} [1e-2Msun]".format(m2 * 1e2))
# print(" \n")
print(" abs(m1-m2)/m1 {:.1f} [%]".format(100 * np.abs(m1 - m2) / m1))
print(" ---------------------------------------- ")
exit(1)
# plot_total_fluxes_lk_off_resolution(mask="bern_geoend")
# plot_total_fluxes_lk_off_resolution(mask="geo")
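# --- Hedged sketch: comparing two cumulative ejecta-mass curves -------------
# Standalone version of the resolution comparison printed above: take the
# latest time covered by both runs and report the relative difference of the
# cumulative ejecta mass there, in percent.  Uses the module-level numpy (np);
# the nearest-index lookup mirrors what UTILS.find_nearest_index is assumed to do.
def _compare_flux_curves(time1, mass1, time2, mass2):
    """Return (tmax, m1, m2, err_percent) at the common final time."""
    tmax = min(time1[-1], time2[-1])
    i1 = np.argmin(np.abs(np.asarray(time1) - tmax))
    i2 = np.argmin(np.abs(np.asarray(time2) - tmax))
    m1, m2 = mass1[i1], mass2[i2]
    return tmax, m1, m2, 100. * np.abs(m1 - m2) / m1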
''' ejecta 1D histograms '''
def plot_histograms_ejecta(mask, mask2):
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (16.2, 3.6) # <->, |]
o_plot.gen_set["figname"] = "hists_for_all_nucleo_{}.png".format(mask)
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = True
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["subplots_adjust_h"] = 0.3
o_plot.gen_set["subplots_adjust_w"] = 0.0
o_plot.set_plot_dics = []
averages = {}
det = 0
sims = ["DD2_M14971245_M0_SR", "DD2_M13641364_M0_SR", "DD2_M15091235_M0_LK_SR", "BLh_M13641364_M0_LK_SR",
"LS220_M14691268_M0_LK_SR"]
lbls = [sim.replace('_', '\_') for sim in sims]
masks = [mask, mask, mask, mask, mask]
colors = ["blue", "cyan", "green", "black", "red"]
lss = ["-", "-", "-", "-", "-"]
lws = [1., 1., 1., 1., 1.]
# sims = ["DD2_M13641364_M0_LK_SR_R04", "BLh_M13641364_M0_LK_SR", "LS220_M13641364_M0_LK_SR",
# "SLy4_M13641364_M0_LK_SR", "SFHo_M13641364_M0_LK_SR"]
# lbls = ["DD2", "BLh", "LS220", "SLy4", "SFHo"]
# masks = [mask, mask, mask, mask, mask]
# colors = ["black", "gray", "red", "blue", "green"]
# lss = ["-", "-", "-", "-", "-"]
# lws = [1., 1., 1., 1., 1.]
#
# sims += ["DD2_M15091235_M0_LK_SR", "LS220_M14691268_M0_LK_SR", "SFHo_M14521283_M0_LK_SR"]
# lbls += ["DD2 151 124", "LS220 150 127", "SFHo 145 128"]
# masks += [mask, mask, mask]
# colors += ["black", "red", "green"]
# lss += ["--", "--", "--"]
# lws += [1., 1., 1.]
# v_ns = ["theta", "Y_e", "vel_inf", "entropy"]
v_ns = ["Y_e"]
i_x_plot = 1
for v_n in v_ns:
averages[v_n] = {}
for sim, lbl, mask, color, ls, lw in zip(sims, lbls, masks, colors, lss, lws):
# loading hist
fpath = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "hist_{}.dat".format(v_n)
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
hist = np.loadtxt(fpath, usecols=(0, 1), unpack=False)
# loading times
fpath1 = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "total_flux.dat"
if not os.path.isfile(fpath1):
raise IOError("File does not exist: {}".format(fpath1))
timearr1, massarr1 = np.loadtxt(fpath1, usecols=(0, 2), unpack=True)
if v_n == "Y_e":
ave = EJECTA_PARS.compute_ave_ye(massarr1[-1], hist)
averages[v_n][sim] = ave
elif v_n == "theta":
ave = EJECTA_PARS.compute_ave_theta_rms(hist)
averages[v_n][sim] = ave
elif v_n == "vel_inf":
ave = EJECTA_PARS.compute_ave_vel_inf(massarr1[-1], hist)
averages[v_n][sim] = ave
elif v_n == "entropy":
ave = EJECTA_PARS.compute_ave_vel_inf(massarr1[-1], hist)
averages[v_n][sim] = ave
else:
raise NameError("no averages set for v_n:{}".format(v_n))
plot_dic = {
'task': 'hist1d', 'ptype': 'cartesian',
'position': (1, i_x_plot),
'data': hist, 'normalize': True,
'v_n_x': v_n, 'v_n_y': None,
'color': color, 'ls': ls, 'lw': lw, 'ds': 'steps', 'alpha': 1.0,
                'xmin': None, 'xmax': None, 'ymin': 1e-3, 'ymax': 5e-1,
'xlabel': Labels.labels(v_n), 'ylabel': Labels.labels("mass"),
'label': lbl, 'yscale': 'log',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'sharex': False,
'sharey': False,
'legend': {} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
}
plot_dic = Limits.in_dic(plot_dic)
if v_n != v_ns[0]:
plot_dic["sharey"] = True
if v_n == v_ns[0] and sim == sims[-1]:
plot_dic['legend'] = {'loc': 'lower center', 'ncol': 1, "fontsize": 9} #
# plot_dic['legend'] = {
# 'bbox_to_anchor': (1.0, -0.1),
# # 'loc': 'lower left',
# 'loc': 'lower left', 'ncol': 1, 'fontsize': 9, 'framealpha': 0., 'borderaxespad': 0.,
# 'borderayespad': 0.}
o_plot.set_plot_dics.append(plot_dic)
i_x_plot += 1
#
masks = [mask2, mask2, mask2, mask2, mask2]
v_ns = ["Y_e"]
i_x_plot = 2
for v_n in v_ns:
averages[v_n] = {}
for sim, lbl, mask, color, ls, lw in zip(sims, lbls, masks, colors, lss, lws):
# loading hist
fpath = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "hist_{}.dat".format(v_n)
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
hist = np.loadtxt(fpath, usecols=(0, 1), unpack=False)
# loading times
fpath1 = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "total_flux.dat"
if not os.path.isfile(fpath1):
raise IOError("File does not exist: {}".format(fpath1))
timearr1, massarr1 = np.loadtxt(fpath1, usecols=(0, 2), unpack=True)
if v_n == "Y_e":
ave = EJECTA_PARS.compute_ave_ye(massarr1[-1], hist)
averages[v_n][sim] = ave
elif v_n == "theta":
ave = EJECTA_PARS.compute_ave_theta_rms(hist)
averages[v_n][sim] = ave
elif v_n == "vel_inf":
ave = EJECTA_PARS.compute_ave_vel_inf(massarr1[-1], hist)
averages[v_n][sim] = ave
elif v_n == "entropy":
ave = EJECTA_PARS.compute_ave_vel_inf(massarr1[-1], hist)
averages[v_n][sim] = ave
else:
raise NameError("no averages set for v_n:{}".format(v_n))
plot_dic = {
'task': 'hist1d', 'ptype': 'cartesian',
'position': (1, i_x_plot),
'data': hist, 'normalize': True,
'v_n_x': v_n, 'v_n_y': None,
'color': color, 'ls': ls, 'lw': lw, 'ds': 'steps', 'alpha': 1.0,
                'xmin': None, 'xmax': None, 'ymin': 1e-3, 'ymax': 5e-1,
'xlabel': Labels.labels(v_n), 'ylabel': Labels.labels("mass"),
'label': lbl, 'yscale': 'log',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'sharex': False,
'sharey': True,
'legend': {} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
}
plot_dic = Limits.in_dic(plot_dic)
if v_n != v_ns[0]:
plot_dic["sharey"] = True
# if v_n == v_ns[0] and sim == sims[-1]:
# plot_dic['legend'] = {'loc': 'lower left', 'ncol':1,"fontsize":9} #
o_plot.set_plot_dics.append(plot_dic)
i_x_plot += 1
o_plot.main()
for v_n in v_ns:
print("\t{}".format(v_n))
for sim in sims:
print("\t\t{}".format(sim)),
print(" {:.2f}".format(averages[v_n][sim]))
exit(1)
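# --- Hedged sketch: mass-averaged quantity from a 1D ejecta histogram -------
# The EJECTA_PARS.compute_ave_* calls above are assumed to be mass-weighted
# averages over the histograms (column 0: bin centre, column 1: mass in bin).
# A minimal numpy version of that assumption; note the original also receives
# the total ejecta mass from total_flux.dat, which may be used for the
# normalisation instead of the histogram's own total.
def _mass_weighted_average(hist):
    """hist: (nbins, 2) array as loaded from hist_<v_n>.dat."""
    bins, masses = hist[:, 0], hist[:, 1]
    return np.sum(bins * masses) / np.sum(masses)
# e.g. ave_ye = _mass_weighted_average(np.loadtxt("hist_Y_e.dat", usecols=(0, 1)))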
def plot_histograms_ejecta_for_many_sims():
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (4.2, 3.6) # <->, |]
o_plot.gen_set["figname"] = "hists_geo_for_all_nucleo.png"
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = True
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["subplots_adjust_h"] = 0.3
o_plot.gen_set["subplots_adjust_w"] = 0.0
o_plot.set_plot_dics = []
averages = {}
det = 0
sims = ["BLh_M11841581_M0_LK_SR",
"DD2_M13641364_M0_LK_SR_R04", "DD2_M13641364_M0_SR_R04", "DD2_M15091235_M0_LK_SR", "DD2_M14971245_M0_SR",
"LS220_M13641364_M0_LK_SR_restart", "LS220_M13641364_M0_SR", "LS220_M14691268_M0_LK_SR",
"LS220_M14351298_M0_SR", # "LS220_M14691268_M0_SR",
"SFHo_M13641364_M0_LK_SR_2019pizza", "SFHo_M13641364_M0_SR", "SFHo_M14521283_M0_LK_SR_2019pizza",
"SFHo_M14521283_M0_SR",
"SLy4_M13641364_M0_LK_SR", "SLy4_M14521283_M0_SR"]
lbls = [sim.replace('_', '\_') for sim in sims]
masks = ["geo",
"geo", "geo", "geo", "geo",
"geo", "geo", "geo", "geo",
"geo", "geo", "geo", "geo",
"geo", "geo"]
# masks = ["geo bern_geoend", "geo bern_geoend", "geo bern_geoend", "geo bern_geoend", "geo bern_geoend"]
colors = ["black",
"blue", "blue", "blue", "blue",
"red", "red", "red", "red",
"green", "green", "green", "green",
"orange", "orange"]
alphas = [1.,
1., 1., 1., 1.,
1., 1., 1., 1.,
1., 1., 1., 1.,
1., 1.]
lss = ['-',
'-', '--', '-.', ':',
'-', '--', '-.', ':',
'-', '--', '-.', ':',
'-', '--']
lws = [1.,
1., 0.8, 0.5, 0.5,
1., 0.8, 0.5, 0.5,
1., 0.8, 0.5, 0.5,
1., 0.8]
# v_ns = ["theta", "Y_e", "vel_inf", "entropy"]
v_ns = ["Y_e"]
i_x_plot = 1
for v_n in v_ns:
averages[v_n] = {}
for sim, lbl, mask, color, ls, lw in zip(sims, lbls, masks, colors, lss, lws):
# loading hist
fpath = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "hist_{}.dat".format(v_n)
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
hist = np.loadtxt(fpath, usecols=(0, 1), unpack=False)
# loading times
fpath1 = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "total_flux.dat"
if not os.path.isfile(fpath1):
raise IOError("File does not exist: {}".format(fpath1))
timearr1, massarr1 = np.loadtxt(fpath1, usecols=(0, 2), unpack=True)
if v_n == "Y_e":
ave = EJECTA_PARS.compute_ave_ye(massarr1[-1], hist)
averages[v_n][sim] = ave
elif v_n == "theta":
ave = EJECTA_PARS.compute_ave_theta_rms(hist)
averages[v_n][sim] = ave
elif v_n == "vel_inf":
ave = EJECTA_PARS.compute_ave_vel_inf(massarr1[-1], hist)
averages[v_n][sim] = ave
elif v_n == "entropy":
ave = EJECTA_PARS.compute_ave_vel_inf(massarr1[-1], hist)
averages[v_n][sim] = ave
else:
raise NameError("no averages set for v_n:{}".format(v_n))
plot_dic = {
'task': 'hist1d', 'ptype': 'cartesian',
'position': (1, i_x_plot),
'data': hist, 'normalize': True,
'v_n_x': v_n, 'v_n_y': None,
'color': color, 'ls': ls, 'lw': lw, 'ds': 'steps', 'alpha': 1.0,
                'xmin': None, 'xmax': None, 'ymin': 1e-3, 'ymax': 5e-1,
'xlabel': Labels.labels(v_n), 'ylabel': Labels.labels("mass"),
'label': lbl, 'yscale': 'log',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'sharex': False,
'sharey': False,
'legend': {} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
}
plot_dic = Limits.in_dic(plot_dic)
if v_n != v_ns[0]:
plot_dic["sharey"] = True
if v_n == v_ns[0] and sim == sims[-1]:
# plot_dic['legend'] = {'loc': 'lower center', 'ncol': 1, "fontsize": 9} #
plot_dic['legend'] = {
'bbox_to_anchor': (1.0, -0.1),
# 'loc': 'lower left',
'loc': 'lower left', 'ncol': 1, 'fontsize': 9, 'framealpha': 0., 'borderaxespad': 0.,
'borderayespad': 0.}
o_plot.set_plot_dics.append(plot_dic)
i_x_plot += 1
o_plot.main()
for v_n in v_ns:
print("\t{}".format(v_n))
for sim in sims:
print("\t\t{}".format(sim)),
print(" {:.2f}".format(averages[v_n][sim]))
exit(1)
# plot_histograms_ejecta("geo")
# plot_histograms_ejecta("bern_geoend")
def plot_histograms_lk_on_off(mask):
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (11.0, 3.6) # <->, |]
o_plot.gen_set["figname"] = "tothist_lk_{}.png".format(mask)
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = True
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["subplots_adjust_h"] = 0.3
o_plot.gen_set["subplots_adjust_w"] = 0.0
o_plot.set_plot_dics = []
averages = {}
det = 0
sims = ["DD2_M13641364_M0_LK_SR_R04", "DD2_M15091235_M0_LK_SR", "LS220_M14691268_M0_LK_SR",
"SFHo_M14521283_M0_LK_SR"]
lbls = ["DD2 136 136 LK", "DD2 151 123 LK", "LS220 147 127 LK", "SFHo 145 128 LK"]
masks = [mask, mask, mask, mask]
colors = ["black", 'gray', 'red', "green"]
lss = ["-", '-', '-', '-']
lws = [1., 1., 1., 1., ]
# minus LK
sims2 = ["DD2_M13641364_M0_SR_R04", "DD2_M14971245_M0_SR", "LS220_M14691268_M0_SR", "SFHo_M14521283_M0_SR"]
lbls2 = ["DD2 136 136", "DD2 150 125", "LS220 147 127", "SFHo 145 128"]
masks2 = [mask, mask, mask, mask]
colors2 = ["black", 'gray', 'red', "green"]
lss2 = ["--", '--', '--', '--']
lws2 = [1., 1., 1., 1., ]
sims += sims2
lbls += lbls2
masks += masks2
colors += colors2
lss += lss2
lws += lws2
v_ns = ["theta", "Y_e", "vel_inf", "entropy"]
i_x_plot = 1
for v_n in v_ns:
averages[v_n] = {}
for sim, lbl, mask, color, ls, lw in zip(sims, lbls, masks, colors, lss, lws):
# loading hist
fpath = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "hist_{}.dat".format(v_n)
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
hist = np.loadtxt(fpath, usecols=(0, 1), unpack=False)
# loading times
fpath1 = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "total_flux.dat"
if not os.path.isfile(fpath1):
raise IOError("File does not exist: {}".format(fpath1))
timearr1, massarr1 = np.loadtxt(fpath1, usecols=(0, 2), unpack=True)
if v_n == "Y_e":
ave = EJECTA_PARS.compute_ave_ye(massarr1[-1], hist)
averages[v_n][sim] = ave
elif v_n == "theta":
ave = EJECTA_PARS.compute_ave_theta_rms(hist)
averages[v_n][sim] = ave
elif v_n == "vel_inf":
ave = EJECTA_PARS.compute_ave_vel_inf(massarr1[-1], hist)
averages[v_n][sim] = ave
elif v_n == "entropy":
ave = EJECTA_PARS.compute_ave_vel_inf(massarr1[-1], hist)
averages[v_n][sim] = ave
else:
raise NameError("no averages set for v_n:{}".format(v_n))
plot_dic = {
'task': 'hist1d', 'ptype': 'cartesian',
'position': (1, i_x_plot),
'data': hist, 'normalize': True,
'v_n_x': v_n, 'v_n_y': None,
'color': color, 'ls': ls, 'lw': lw, 'ds': 'steps', 'alpha': 1.0,
                'xmin': None, 'xmax': None, 'ymin': 1e-3, 'ymax': 1e0,
'xlabel': Labels.labels(v_n), 'ylabel': Labels.labels("mass"),
'label': lbl, 'yscale': 'log',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'sharex': False,
'sharey': False,
'legend': {} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
}
plot_dic = Limits.in_dic(plot_dic)
if v_n != v_ns[0]:
plot_dic["sharey"] = True
if v_n == v_ns[-1] and sim == sims[-1]:
plot_dic['legend'] = {'bbox_to_anchor': (-3.00, 1.0), 'loc': 'upper left', 'ncol': 4, "fontsize": 12}
o_plot.set_plot_dics.append(plot_dic)
i_x_plot += 1
o_plot.main()
for v_n in v_ns:
print(" --- v_n: {} --- ".format(v_n))
for sim1, sim2 in zip(sims, sims2):
val1 = averages[v_n][sim1]
val2 = averages[v_n][sim2]
err = 100 * (val1 - val2) / val1
print("\t{} : {:.2f}".format(sim1, val1))
print("\t{} : {:.2f}".format(sim2, val2))
print("\t\tErr:\t\t{:.1f}".format(err))
print(" -------------------- ".format(v_n))
exit(1)
# plot_histograms_lk_on_off("geo")
# plot_histograms_lk_on_off("bern_geoend")
def plot_histograms_lk_on_resolution(mask):
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (11.0, 3.6) # <->, |]
o_plot.gen_set["figname"] = "tothist_lk_res_{}.png".format(mask)
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = True
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["subplots_adjust_h"] = 0.3
o_plot.gen_set["subplots_adjust_w"] = 0.0
o_plot.set_plot_dics = []
averages = {}
det = 0
# HR "LS220_M13641364_M0_LK_HR" -- too short
sims_hr = ["DD2_M13641364_M0_LK_HR_R04", "DD2_M15091235_M0_LK_HR", "", "LS220_M14691268_M0_LK_HR",
"SFHo_M13641364_M0_LK_HR", "SFHo_M14521283_M0_LK_HR"]
lbl_hr = ["DD2 136 136 HR", "DD2 151 124 HR", "LS220 136 136 HR", "LS220 147 137 HR", "SFHo 136 136 HR",
"SFHo 145 128 HR"]
color_hr = ["black", "gray", "orange", "red", "green", "lightgreen"]
masks_hr = [mask, mask, mask, mask, mask, mask]
lss_hr = ['--', '--', '--', '--', "--", "--"]
lws_hr = [1., 1., 1., 1., 1., 1.]
# SR "LS220_M13641364_M0_LK_SR"
sims_sr = ["DD2_M13641364_M0_LK_SR_R04", "DD2_M15091235_M0_LK_SR", "", "LS220_M14691268_M0_LK_SR",
"SFHo_M13641364_M0_LK_SR", "SFHo_M14521283_M0_LK_SR"]
lbl_sr = ["DD2 136 136 SR", "DD2 151 124 HR", "LS220 136 136 SR", "LS220 147 137 SR", "SFHo 136 136 HR",
"SFHo 145 128 HR"]
color_sr = ["black", "gray", "orange", "red", "green", "lightgreen"]
masks_sr = [mask, mask, mask, mask, mask, mask]
lss_sr = ['-', '-', '-', '-', '-', '-']
lws_sr = [1., 1., 1., 1., 1., 1.]
# LR
sims_lr = ["DD2_M13641364_M0_LK_LR_R04", "", "", "", "", ""]
lbl_lr = ["DD2 136 136 LR", "DD2 151 124 LR", "LS220 136 136 LR", "LS220 147 137 LR", "SFHo 136 136 LR",
"SFHo 145 128 LR"]
color_lr = ["black", "gray", "orange", "red", "green", "lightgreen"]
masks_lr = [mask, mask, mask, mask, mask, mask]
lss_lr = [':', ':', ":", ":", ":", ":"]
lws_lr = [1., 1., 1., 1., 1., 1.]
# plus
sims = sims_hr + sims_lr + sims_sr
lbls = lbl_hr + lbl_lr + lbl_sr
colors = color_hr + color_lr + color_sr
masks = masks_hr + masks_lr + masks_sr
lss = lss_hr + lss_lr + lss_sr
lws = lws_hr + lws_lr + lws_sr
v_ns = ["theta", "Y_e", "vel_inf", "entropy"]
i_x_plot = 1
for v_n in v_ns:
averages[v_n] = {}
for sim, lbl, mask, color, ls, lw in zip(sims, lbls, masks, colors, lss, lws):
if sim != "":
# loading hist
fpath = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "hist_{}.dat".format(v_n)
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
hist = np.loadtxt(fpath, usecols=(0, 1), unpack=False)
# loading times
fpath1 = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "total_flux.dat"
if not os.path.isfile(fpath1):
raise IOError("File does not exist: {}".format(fpath1))
timearr1, massarr1 = np.loadtxt(fpath1, usecols=(0, 2), unpack=True)
if v_n == "Y_e":
ave = EJECTA_PARS.compute_ave_ye(massarr1[-1], hist)
averages[v_n][sim] = ave
elif v_n == "theta":
ave = EJECTA_PARS.compute_ave_theta_rms(hist)
averages[v_n][sim] = ave
elif v_n == "vel_inf":
ave = EJECTA_PARS.compute_ave_vel_inf(massarr1[-1], hist)
averages[v_n][sim] = ave
elif v_n == "entropy":
ave = EJECTA_PARS.compute_ave_vel_inf(massarr1[-1], hist)
averages[v_n][sim] = ave
else:
raise NameError("no averages set for v_n:{}".format(v_n))
plot_dic = {
'task': 'hist1d', 'ptype': 'cartesian',
'position': (1, i_x_plot),
'data': hist, 'normalize': True,
'v_n_x': v_n, 'v_n_y': None,
'color': color, 'ls': ls, 'lw': lw, 'ds': 'steps', 'alpha': 1.0,
                    'xmin': None, 'xmax': None, 'ymin': 1e-3, 'ymax': 1e0,
'xlabel': Labels.labels(v_n), 'ylabel': Labels.labels("mass"),
'label': lbl, 'yscale': 'log',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'sharex': False,
'sharey': False,
'legend': {} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
}
plot_dic = Limits.in_dic(plot_dic)
if v_n != v_ns[0]:
plot_dic["sharey"] = True
if v_n == v_ns[-1] and sim == sims[-1]:
plot_dic['legend'] = {'bbox_to_anchor': (-3.00, 1.0), 'loc': 'upper left', 'ncol': 4,
"fontsize": 12}
o_plot.set_plot_dics.append(plot_dic)
i_x_plot += 1
o_plot.main()
for v_n in v_ns:
print(" --- v_n: {} --- ".format(v_n))
for sim_hr, sim_sr, sim_lr in zip(sims_hr, sims_sr, sims_lr):
# print(sim_hr, sim_sr, sim_lr)
            if sim_sr != "":
def_sim = sim_sr
def_res = "SR"
if sim_hr != '':
comp_res = "HR"
comp_sim = sim_hr
elif sim_hr == '' and sim_lr != '':
comp_res = "LR"
comp_sim = sim_lr
else:
raise ValueError("neither HR nor LR is available")
# print(def_sim, comp_sim)
assert comp_sim != ""
val1 = averages[v_n][def_sim]
val2 = averages[v_n][comp_sim]
err = 100 * (val1 - val2) / val1
print("\t{} : {:.2f}".format(def_sim, val1))
print("\t{} : {:.2f}".format(comp_sim, val2))
print("\t\tErr:\t\t{:.1f}".format(err))
print(" -------------------- ".format(v_n))
exit(1)
# plot_histograms_lk_on_resolution("geo")
# plot_histograms_lk_on_resolution("bern_geoend")
def plot_histograms_lk_off_resolution(mask):
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (11.0, 3.6) # <->, |]
o_plot.gen_set["figname"] = "tothist_res_{}.png".format(mask)
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = True
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["subplots_adjust_h"] = 0.3
o_plot.gen_set["subplots_adjust_w"] = 0.0
o_plot.set_plot_dics = []
averages = {}
det = 0
# HR "LS220_M13641364_M0_LK_HR" -- too short
sims_hr = ["", "DD2_M14971245_M0_HR", "LS220_M13641364_M0_HR", "LS220_M14691268_M0_HR", "SFHo_M13641364_M0_HR",
"SFHo_M14521283_M0_HR"]
lbl_hr = ["DD2 136 136 HR", "DD2 150 125 HR", "LS220 136 136 HR", "LS220 147 127 HR", "SFHo 136 136 HR",
"SFHo 145 128 HR"]
color_hr = ["black", "gray", "orange", "red", "lightgreen", "green"]
masks_hr = [mask, mask, mask, mask, mask, mask]
lss_hr = ['--', '--', '--', '--', '--', '--']
lws_hr = [1., 1., 1., 1., 1., 1.]
# SR "LS220_M13641364_M0_LK_SR"
sims_sr = ["DD2_M13641364_M0_SR_R04", "DD2_M14971245_M0_SR", "LS220_M13641364_M0_SR", "LS220_M14691268_M0_SR",
"SFHo_M13641364_M0_SR", "SFHo_M14521283_M0_SR"]
lbl_sr = ["DD2 136 136 SR", "DD2 150 125 SR", "LS220 136 136 SR", "LS220 147 127 SR", "SFHo 136 136 SR",
"SFHo 145 128 SR"]
color_sr = ["black", "gray", "orange", "red", "lightgreen", "green"]
masks_sr = [mask, mask, mask, mask, mask, mask]
lss_sr = ['-', '-', '-', '-', '-', '-']
lws_sr = [1., 1., 1., 1., 1., 1.]
# LR
sims_lr = ["DD2_M13641364_M0_LR_R04", "DD2_M14971246_M0_LR", "LS220_M13641364_M0_LR", "LS220_M14691268_M0_LR", "",
""]
lbl_lr = ["DD2 136 136 LR", "DD2 150 125 LR", "LS220 136 136 LR", "LS220 147 127 LR", "SFHo 136 136 LR",
"SFHo 145 128 LR"]
color_lr = ["black", "gray", "orange", "red", "lightgreen", "green"]
masks_lr = [mask, mask, mask, mask, mask, mask]
lss_lr = [':', ':', ':', ':', ':', ':']
lws_lr = [1., 1., 1., 1., 1., 1.]
# plus
sims = sims_hr + sims_lr + sims_sr
lbls = lbl_hr + lbl_lr + lbl_sr
colors = color_hr + color_lr + color_sr
masks = masks_hr + masks_lr + masks_sr
lss = lss_hr + lss_lr + lss_sr
lws = lws_hr + lws_lr + lws_sr
v_ns = ["theta", "Y_e", "vel_inf", "entropy"]
i_x_plot = 1
for v_n in v_ns:
averages[v_n] = {}
for sim, lbl, mask, color, ls, lw in zip(sims, lbls, masks, colors, lss, lws):
if sim != "":
# loading hist
fpath = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "hist_{}.dat".format(v_n)
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
hist = np.loadtxt(fpath, usecols=(0, 1), unpack=False)
# loading times
fpath1 = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask + '/' + "total_flux.dat"
if not os.path.isfile(fpath1):
raise IOError("File does not exist: {}".format(fpath1))
timearr1, massarr1 = np.loadtxt(fpath1, usecols=(0, 2), unpack=True)
if v_n == "Y_e":
ave = EJECTA_PARS.compute_ave_ye(massarr1[-1], hist)
averages[v_n][sim] = ave
elif v_n == "theta":
ave = EJECTA_PARS.compute_ave_theta_rms(hist)
averages[v_n][sim] = ave
elif v_n == "vel_inf":
ave = EJECTA_PARS.compute_ave_vel_inf(massarr1[-1], hist)
averages[v_n][sim] = ave
elif v_n == "entropy":
ave = EJECTA_PARS.compute_ave_vel_inf(massarr1[-1], hist)
averages[v_n][sim] = ave
else:
raise NameError("no averages set for v_n:{}".format(v_n))
plot_dic = {
'task': 'hist1d', 'ptype': 'cartesian',
'position': (1, i_x_plot),
'data': hist, 'normalize': True,
'v_n_x': v_n, 'v_n_y': None,
'color': color, 'ls': ls, 'lw': lw, 'ds': 'steps', 'alpha': 1.0,
                    'xmin': None, 'xmax': None, 'ymin': 1e-3, 'ymax': 1e0,
'xlabel': Labels.labels(v_n), 'ylabel': Labels.labels("mass"),
'label': lbl, 'yscale': 'log',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'sharex': False,
'sharey': False,
'legend': {} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
}
plot_dic = Limits.in_dic(plot_dic)
if v_n != v_ns[0]:
plot_dic["sharey"] = True
if v_n == v_ns[-1] and sim == sims[-1]:
plot_dic['legend'] = {'bbox_to_anchor': (-3.00, 1.0), 'loc': 'upper left', 'ncol': 4,
"fontsize": 12}
o_plot.set_plot_dics.append(plot_dic)
i_x_plot += 1
o_plot.main()
for v_n in v_ns:
print(" --- v_n: {} --- ".format(v_n))
for sim_hr, sim_sr, sim_lr in zip(sims_hr, sims_sr, sims_lr):
# print(sim_hr, sim_sr, sim_lr)
            if sim_sr != "":
def_sim = sim_sr
def_res = "SR"
if sim_hr != '':
comp_res = "HR"
comp_sim = sim_hr
elif sim_hr == '' and sim_lr != '':
comp_res = "LR"
comp_sim = sim_lr
else:
raise ValueError("neither HR nor LR is available")
# print(def_sim, comp_sim)
assert comp_sim != ""
val1 = averages[v_n][def_sim]
val2 = averages[v_n][comp_sim]
err = 100 * (val1 - val2) / val1
print("\t{} : {:.2f}".format(def_sim, val1))
print("\t{} : {:.2f}".format(comp_sim, val2))
print("\t\tErr:\t\t{:.1f}".format(err))
print(" -------------------- ".format(v_n))
exit(1)
# plot_histograms_lk_off_resolution("geo")
# plot_histograms_lk_off_resolution("bern_geoend")
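# --- Hedged sketch: picking the comparison run for convergence estimates ----
# The two resolution-study functions above repeat the same fallback: compare
# the SR run against HR if it exists, otherwise against LR.  A compact
# standalone version of that selection (the helper name is illustrative only):
def _pick_comparison_run(sim_sr, sim_hr, sim_lr):
    """Return (default_sim, comparison_sim, comparison_resolution)."""
    if sim_sr == "":
        raise ValueError("no SR run to compare against")
    if sim_hr != "":
        return sim_sr, sim_hr, "HR"
    if sim_lr != "":
        return sim_sr, sim_lr, "LR"
    raise ValueError("neither HR nor LR is available")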
''' neutrino driven wind '''
def plot_several_q_eff(v_n, sims, iterations, figname):
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (12., 3.2) # <->, |] # to match hists with (8.5, 2.7)
o_plot.gen_set["figname"] = figname
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = False
o_plot.gen_set["subplots_adjust_h"] = 0.2
o_plot.gen_set["subplots_adjust_w"] = 0.0
o_plot.set_plot_dics = []
rl = 3
# v_n = "Q_eff_nua"
# sims = ["LS220_M14691268_M0_LK_SR"]
# iterations = [1302528, 1515520, 1843200]
i_x_plot = 1
i_y_plot = 1
for sim in sims:
d3class = LOAD_PROFILE_XYXZ(sim)
d1class = ADD_METHODS_ALL_PAR(sim)
for it in iterations:
tmerg = d1class.get_par("tmerg")
time_ = d3class.get_time_for_it(it, "prof")
dens_arr = d3class.get_data(it, rl, "xz", "density")
data_arr = d3class.get_data(it, rl, "xz", v_n)
data_arr = data_arr / dens_arr
x_arr = d3class.get_data(it, rl, "xz", "x")
z_arr = d3class.get_data(it, rl, "xz", "z")
def_dic_xz = {'task': 'colormesh', 'ptype': 'cartesian', 'aspect': 1.,
'xarr': x_arr, "yarr": z_arr, "zarr": data_arr,
'position': (i_y_plot, i_x_plot), # 'title': '[{:.1f} ms]'.format(time_),
'cbar': {},
'v_n_x': 'x', 'v_n_y': 'z', 'v_n': v_n,
'xmin': None, 'xmax': None, 'ymin': None, 'ymax': None, 'vmin': 1e-10, 'vmax': 1e-4,
'fill_vmin': False, # fills the x < vmin with vmin
'xscale': None, 'yscale': None,
'mask': None, 'cmap': 'inferno_r', 'norm': "log",
'fancyticks': True,
'minorticks': True,
'title': {"text": r'$t-t_{merg}:$' + r'${:.1f}$'.format((time_ - tmerg) * 1e3),
'fontsize': 14},
# 'sharex': True, # removes angular citkscitks
'fontsize': 14,
'labelsize': 14,
'sharex': False,
'sharey': True,
}
def_dic_xz["xmin"], def_dic_xz["xmax"], _, _, def_dic_xz["ymin"], def_dic_xz["ymax"] \
= UTILS.get_xmin_xmax_ymin_ymax_zmin_zmax(rl)
if v_n == 'Q_eff_nua':
def_dic_xz['v_n'] = 'Q_eff_nua/D'
def_dic_xz['vmin'] = 1e-7
def_dic_xz['vmax'] = 1e-3
# def_dic_xz['norm'] = None
elif v_n == 'Q_eff_nue':
def_dic_xz['v_n'] = 'Q_eff_nue/D'
def_dic_xz['vmin'] = 1e-7
def_dic_xz['vmax'] = 1e-3
# def_dic_xz['norm'] = None
elif v_n == 'Q_eff_nux':
def_dic_xz['v_n'] = 'Q_eff_nux/D'
def_dic_xz['vmin'] = 1e-10
def_dic_xz['vmax'] = 1e-4
# def_dic_xz['norm'] = None
# print("v_n: {} [{}->{}]".format(v_n, def_dic_xz['zarr'].min(), def_dic_xz['zarr'].max()))
elif v_n == "R_eff_nua":
def_dic_xz['v_n'] = 'R_eff_nua/D'
def_dic_xz['vmin'] = 1e2
def_dic_xz['vmax'] = 1e6
# def_dic_xz['norm'] = None
print("v_n: {} [{}->{}]".format(v_n, def_dic_xz['zarr'].min(), def_dic_xz['zarr'].max()))
# exit(1)
if it == iterations[0]:
def_dic_xz["sharey"] = False
if it == iterations[-1]:
def_dic_xz['cbar'] = {'location': 'right .02 0.', 'label': Labels.labels(v_n) + "/D",
# 'right .02 0.' 'fmt': '%.1e',
'labelsize': 14, 'aspect': 6.,
'fontsize': 14}
o_plot.set_plot_dics.append(def_dic_xz)
i_x_plot = i_x_plot + 1
i_y_plot = i_y_plot + 1
o_plot.main()
exit(0)
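# --- Hedged sketch: specific neutrino heating rate from slice data ----------
# plot_several_q_eff divides the Q_eff_* slices by the conserved density
# before plotting (Q_eff_nua/D etc.).  A minimal numpy version of that step;
# the zero-density guard is an assumption, not part of the original code.
def _specific_heating(q_eff_slice, density_slice, floor=1e-30):
    """Elementwise Q_eff / D with a small floor to avoid division by zero."""
    return np.asarray(q_eff_slice) / np.maximum(np.asarray(density_slice), floor)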
''' disk histogram evolution & disk mass '''
def plot_disk_hist_evol_one_v_n(v_n, sim, figname):
# sim = "LS220_M13641364_M0_LK_SR_restart"
# v_n = "Ye"
# figname = "ls220_ye_disk_hist.png"
print(v_n)
d3_corr = LOAD_RES_CORR(sim)
iterations = d3_corr.list_iterations
times = []
bins = []
values = []
for it in iterations:
fpath = Paths.ppr_sims + sim + "/" + "profiles/" + str(it) + "/" + "hist_{}.dat".format(v_n)
if os.path.isfile(fpath):
times.append(d3_corr.get_time_for_it(it, "prof"))
print("\tLoading it:{} t:{}".format(it, times[-1]))
data = np.loadtxt(fpath, unpack=False)
bins = data[:, 0]
values.append(data[:, 1])
else:
print("\tFile not found it:{}".format(fpath))
assert len(times) > 0
times = np.array(times) * 1e3
bins = np.array(bins)
    values = np.reshape(np.array(values), newshape=(len(times), len(bins))).T  # len(times): iterations with missing files are skipped above
#
d1class = ADD_METHODS_ALL_PAR(sim)
tmerg = d1class.get_par("tmerg") * 1e3
times = times - tmerg
#
values = values / np.sum(values)
values = np.maximum(values, 1e-10)
#
if v_n in ["theta"]:
bins = bins / np.pi * 180.
#
def_dic = {'task': 'colormesh', 'ptype': 'cartesian', # 'aspect': 1.,
'xarr': times, "yarr": bins, "zarr": values,
'position': (1, 1), # 'title': '[{:.1f} ms]'.format(time_),
'cbar': {'location': 'right .02 0.', 'label': Labels.labels("mass"), # 'right .02 0.' 'fmt': '%.1e',
'labelsize': 14, # 'aspect': 6.,
'fontsize': 14},
'v_n_x': 'x', 'v_n_y': 'z', 'v_n': v_n,
'xlabel': Labels.labels("t-tmerg"), 'ylabel': Labels.labels(v_n),
'xmin': times.min(), 'xmax': times.max(), 'ymin': bins.min(), 'ymax': bins.max(), 'vmin': 1e-6,
'vmax': 1e-2,
'fill_vmin': False, # fills the x < vmin with vmin
'xscale': None, 'yscale': None,
'mask': None, 'cmap': 'Greys', 'norm': "log",
'fancyticks': True,
'minorticks': True,
'title': {}, # "text": r'$t-t_{merg}:$' + r'${:.1f}$'.format((time_ - tmerg) * 1e3), 'fontsize': 14
# 'sharex': True, # removes angular citkscitks
'fontsize': 14,
'labelsize': 14,
'sharex': False,
'sharey': False,
}
#
tcoll = d1class.get_par("tcoll_gw")
if not np.isnan(tcoll):
tcoll = (tcoll * 1e3) - tmerg
tcoll_dic = {'task': 'line', 'ptype': 'cartesian',
'position': (1, 1),
'xarr': [tcoll, tcoll], 'yarr': [bins.min(), bins.max()],
'color': 'black', 'ls': '-', 'lw': 0.6, 'ds': 'default', 'alpha': 1.0,
}
print(tcoll)
else:
print("No tcoll")
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (4.2, 3.6) # <->, |] # to match hists with (8.5, 2.7)
o_plot.gen_set["figname"] = figname
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = False
o_plot.gen_set["subplots_adjust_h"] = 0.2
o_plot.gen_set["subplots_adjust_w"] = 0.0
o_plot.set_plot_dics = []
#
if not np.isnan(tcoll):
o_plot.set_plot_dics.append(tcoll_dic)
o_plot.set_plot_dics.append(def_dic)
#
if v_n in ["temp", "dens_unb_bern", "rho"]:
def_dic["yscale"] = "log"
#
o_plot.main()
exit(1)
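# --- Hedged sketch: building the (bins x times) histogram map ---------------
# The disk-histogram functions stack one 1D histogram per profile iteration
# into a 2D array, normalise it to unit total and floor empty cells so the
# log colour scale stays finite.  A standalone numpy version of those steps:
def _stack_histograms(values_per_time, floor=1e-10):
    """values_per_time: list of equal-length 1D arrays, one per snapshot."""
    zmap = np.vstack(values_per_time).T   # shape (nbins, ntimes)
    zmap = zmap / np.sum(zmap)            # global normalisation, as above
    return np.maximum(zmap, floor)        # keep the log norm finite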
def plot_disk_hist_evol(sim, figname):
v_ns = ["r", "theta", "Ye", "velz", "temp", "rho", "dens_unb_bern"]
# v_ns = ["velz", "temp", "rho", "dens_unb_bern"]
d3_corr = LOAD_RES_CORR(sim)
iterations = d3_corr.list_iterations
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (len(v_ns) * 3., 2.7) # <->, |] # to match hists with (8.5, 2.7)
o_plot.gen_set["figname"] = figname
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = False
o_plot.gen_set["subplots_adjust_h"] = 0.2
o_plot.gen_set["subplots_adjust_w"] = 0.4
o_plot.set_plot_dics = []
i_plot = 1
for v_n in v_ns:
print("v_n:{}".format(v_n))
times = []
bins = []
values = []
for it in iterations:
fpath = Paths.ppr_sims + sim + "/" + "profiles/" + str(it) + "/" + "hist_{}.dat".format(v_n)
if os.path.isfile(fpath):
times.append(d3_corr.get_time_for_it(it, "prof"))
print("\tLoading it:{} t:{}".format(it, times[-1]))
data = np.loadtxt(fpath, unpack=False)
bins = data[:, 0]
values.append(data[:, 1])
else:
print("\tFile not found it:{}".format(fpath))
assert len(times) > 0
times = np.array(times) * 1e3
bins = np.array(bins)
values = np.reshape(np.array(values), newshape=(len(times), len(bins))).T
#
d1class = ADD_METHODS_ALL_PAR(sim)
tmerg = d1class.get_par("tmerg") * 1e3
times = times - tmerg
#
values = values / np.sum(values)
values = np.maximum(values, 1e-10)
#
if v_n in ["theta"]:
bins = 90 - (bins / np.pi * 180.)
#
def_dic = {'task': 'colormesh', 'ptype': 'cartesian', # 'aspect': 1.,
'xarr': times, "yarr": bins, "zarr": values,
'position': (1, i_plot), # 'title': '[{:.1f} ms]'.format(time_),
'cbar': {},
'v_n_x': 'x', 'v_n_y': 'z', 'v_n': v_n,
'xlabel': Labels.labels("t-tmerg"), 'ylabel': Labels.labels(v_n),
'xmin': times.min(), 'xmax': times.max(), 'ymin': bins.min(), 'ymax': bins.max(), 'vmin': 1e-6,
'vmax': 1e-2,
'fill_vmin': False, # fills the x < vmin with vmin
'xscale': None, 'yscale': None,
'mask': None, 'cmap': 'Greys', 'norm': "log",
'fancyticks': True,
'minorticks': True,
'title': {}, # "text": r'$t-t_{merg}:$' + r'${:.1f}$'.format((time_ - tmerg) * 1e3), 'fontsize': 14
# 'sharex': True, # removes angular citkscitks
'text': {},
'fontsize': 14,
'labelsize': 14,
'sharex': False,
'sharey': False,
}
if v_n == v_ns[-1]:
def_dic['cbar'] = {'location': 'right .02 0.', 'label': Labels.labels("mass"),
# 'right .02 0.' 'fmt': '%.1e',
'labelsize': 14, # 'aspect': 6.,
'fontsize': 14}
if v_n == v_ns[0]:
def_dic['text'] = {'coords': (1.0, 1.05), 'text': sim.replace("_", "\_"), 'color': 'black', 'fs': 16}
if v_n == "velz":
def_dic['ymin'] = -.3
def_dic['ymax'] = .3
elif v_n == "temp":
def_dic['ymin'] = 1e-1
def_dic['ymax'] = 1e2
tcoll = d1class.get_par("tcoll_gw")
if not np.isnan(tcoll):
tcoll = (tcoll * 1e3) - tmerg
tcoll_dic = {'task': 'line', 'ptype': 'cartesian',
'position': (1, i_plot),
'xarr': [tcoll, tcoll], 'yarr': [bins.min(), bins.max()],
'color': 'black', 'ls': '-', 'lw': 0.6, 'ds': 'default', 'alpha': 1.0,
}
print(tcoll)
else:
print("No tcoll")
#
if not np.isnan(tcoll):
o_plot.set_plot_dics.append(tcoll_dic)
o_plot.set_plot_dics.append(def_dic)
#
if v_n in ["temp", "dens_unb_bern", "rho"]:
def_dic["yscale"] = "log"
#
i_plot = i_plot + 1
o_plot.main()
exit(1)
def plot_disk_mass_evol_SR():
# 11
sims = ["DD2_M13641364_M0_LK_SR_R04", "BLh_M13641364_M0_LK_SR"] + \
["DD2_M15091235_M0_LK_SR", "LS220_M14691268_M0_LK_SR"] + \
["DD2_M13641364_M0_SR", "LS220_M13641364_M0_SR", "SFHo_M13641364_M0_SR", "SLy4_M13641364_M0_SR"] + \
["DD2_M14971245_M0_SR", "SFHo_M14521283_M0_SR", "SLy4_M14521283_M0_SR"]
#
colors = ["blue", "black"] + \
["blue", "red"] + \
["blue", "red", "green", "orange"] + \
["blue", "green", "orange"]
#
lss = ["-", "-"] + \
["--", "--"] + \
[":", ":", ":", ":"] + \
["-.", "-."]
#
lws = [1., 1.] + \
[1., 1.] + \
[1., 1., 1., 1.] + \
[1., 1.]
alphas = [1., 1.] + \
[1., 1.] + \
[1., 1., 1., 1.] + \
[1., 1.]
#
# ----
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (4.2, 3.6) # <->, |]
o_plot.gen_set["figname"] = "disk_mass_evol_SR.png"
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = True
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["subplots_adjust_h"] = 0.3
o_plot.gen_set["subplots_adjust_w"] = 0.0
o_plot.set_plot_dics = []
for sim, color, ls, lw, alpha in zip(sims, colors, lss, lws, alphas):
print("{}".format(sim))
o_data = ADD_METHODS_ALL_PAR(sim)
data = o_data.get_disk_mass()
tmerg = o_data.get_par("tmerg")
tarr = (data[:, 0] - tmerg) * 1e3
marr = data[:, 1]
if sim == "DD2_M13641364_M0_LK_SR_R04":
tarr = tarr[3:] # 3ms, 6ms, 51ms.... Removing initial profiles
marr = marr[3:] #
#
tcoll = o_data.get_par("tcoll_gw")
if not np.isnan(tcoll) and tcoll < tarr[-1]:
tcoll = (tcoll - tmerg) * 1e3
print(tcoll, tarr[0])
mcoll = interpolate.interp1d(tarr, marr, kind="linear")(tcoll)
tcoll_dic = {
'task': 'line', 'ptype': 'cartesian',
'position': (1, 1),
'xarr': [tcoll], 'yarr': [mcoll],
'v_n_x': "time", 'v_n_y': "mass",
'color': color, 'marker': "x", 'ms': 5., 'alpha': alpha,
'xmin': -10, 'xmax': 100, 'ymin': 0, 'ymax': .3,
'xlabel': Labels.labels("t-tmerg"), 'ylabel': Labels.labels("diskmass"),
'label': None, 'yscale': 'linear',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'legend': {} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
}
o_plot.set_plot_dics.append(tcoll_dic)
#
plot_dic = {
'task': 'line', 'ptype': 'cartesian',
'position': (1, 1),
'xarr': tarr, 'yarr': marr,
'v_n_x': "time", 'v_n_y': "mass",
'color': color, 'ls': ls, 'lw': 0.8, 'ds': 'steps', 'alpha': 1.0,
'xmin': -10, 'xmax': 100, 'ymin': 0, 'ymax': .35,
'xlabel': Labels.labels("t-tmerg"), 'ylabel': Labels.labels("diskmass"),
'label': str(sim).replace('_', '\_'), 'yscale': 'linear',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'legend': {'bbox_to_anchor': (1.1, 1.05),
'loc': 'lower right', 'ncol': 2, 'fontsize': 8} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
}
if sim == sims[-1]:
plot_dic['legend'] = {'bbox_to_anchor': (1.1, 1.05),
'loc': 'lower right', 'ncol': 2, 'fontsize': 8}
o_plot.set_plot_dics.append(plot_dic)
o_plot.main()
exit(1)
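# --- Hedged sketch: marking the collapse time on a disk-mass curve ----------
# The disk-mass functions place an 'x' marker at (tcoll, M_disk(tcoll)) via
# linear interpolation.  np.interp gives the same result as the
# interpolate.interp1d(..., kind="linear") call used above:
def _disk_mass_at_collapse(tarr, marr, tcoll):
    """Linearly interpolate the disk mass at the BH formation time."""
    return np.interp(tcoll, tarr, marr)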
def plot_disk_mass_evol_LR():
sims = ["BLh_M16351146_M0_LK_LR", "SLy4_M10651772_M0_LK_LR", "SFHo_M10651772_M0_LK_LR", "SFHo_M16351146_M0_LK_LR",
"LS220_M10651772_M0_LK_LR", "LS220_M16351146_M0_LK_LR", "DD2_M16351146_M0_LK_LR"] + \
["DD2_M13641364_M0_LR", "LS220_M13641364_M0_LR"] + \
["DD2_M14971246_M0_LR", "DD2_M14861254_M0_LR", "DD2_M14351298_M0_LR", "DD2_M14321300_M0_LR",
"SLy4_M14521283_M0_LR"]
#
colors = ["black", "orange", "pink", "olive", "red", "purple", "blue"] + \
["blue", "red"] + \
["darkblue", "blue", "cornflowerblue", "orange"]
#
lss = ["-", "-", "-", "-", "-", "-"] + \
['--', '--', '--'] + \
["-.", "-.", "-.", "-."]
#
lws = [1., 1., 1., 1., 1., 1., 1.] + \
[1., 1.] + \
[1., 1., 1., 1., 1.]
#
alphas = [1., 1., 1., 1., 1., 1., 1.] + \
[1., 1.] + \
[1., 1., 1., 1., 1.]
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (4.2, 3.6) # <->, |]
o_plot.gen_set["figname"] = "disk_mass_evol_LR.png"
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = True
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["subplots_adjust_h"] = 0.3
o_plot.gen_set["subplots_adjust_w"] = 0.0
o_plot.set_plot_dics = []
for sim, color, ls, lw, alpha in zip(sims, colors, lss, lws, alphas):
print("{}".format(sim))
o_data = ADD_METHODS_ALL_PAR(sim)
data = o_data.get_disk_mass()
assert len(data) > 0
tmerg = o_data.get_par("tmerg")
tarr = (data[:, 0] - tmerg) * 1e3
marr = data[:, 1]
if sim == "DD2_M13641364_M0_LK_SR_R04":
tarr = tarr[3:] # 3ms, 6ms, 51ms.... Removing initial profiles
marr = marr[3:] #
#
tcoll = o_data.get_par("tcoll_gw")
if not np.isnan(tcoll) and tcoll < tarr[-1]:
tcoll = (tcoll - tmerg) * 1e3
print(tcoll, tarr[0])
mcoll = interpolate.interp1d(tarr, marr, kind="linear")(tcoll)
tcoll_dic = {
'task': 'line', 'ptype': 'cartesian',
'position': (1, 1),
'xarr': [tcoll], 'yarr': [mcoll],
'v_n_x': "time", 'v_n_y': "mass",
'color': color, 'marker': "x", 'ms': 5., 'alpha': alpha,
'xmin': -10, 'xmax': 40, 'ymin': 0, 'ymax': .3,
'xlabel': Labels.labels("t-tmerg"), 'ylabel': Labels.labels("diskmass"),
'label': None, 'yscale': 'linear',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'legend': {} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
}
o_plot.set_plot_dics.append(tcoll_dic)
#
plot_dic = {
'task': 'line', 'ptype': 'cartesian',
'position': (1, 1),
'xarr': tarr, 'yarr': marr,
'v_n_x': "time", 'v_n_y': "mass",
'color': color, 'ls': ls, 'lw': 0.8, 'ds': 'steps', 'alpha': 1.0,
'xmin': -10, 'xmax': 40, 'ymin': 0, 'ymax': .35,
'xlabel': Labels.labels("t-tmerg"), 'ylabel': Labels.labels("diskmass"),
'label': str(sim).replace('_', '\_'), 'yscale': 'linear',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'legend': {'bbox_to_anchor': (1.1, 1.05),
'loc': 'lower right', 'ncol': 2, 'fontsize': 8} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
}
if sim == sims[-1]:
plot_dic['legend'] = {'bbox_to_anchor': (1.1, 1.05),
'loc': 'lower right', 'ncol': 2, 'fontsize': 8}
o_plot.set_plot_dics.append(plot_dic)
o_plot.main()
exit(1)
def plot_disk_mass_evol_HR():
#
# SFHo_M14521283_M0_HR, SFHo_M13641364_M0_HR, DD2_M14971245_M0_HR, DD2_M14861254_M0_HR
#
sims = ["SFHo_M13641364_M0_HR",
"DD2_M14971245_M0_HR", "DD2_M14861254_M0_HR", "SFHo_M14521283_M0_HR"]
#
colors = ["green",
"blue", "cornflowerblue", "green"]
#
lss = ['--'] + \
["-.", "-.", "-."]
#
lws = [1., ] + \
[1., 1., 1.]
#
alphas = [1.] + \
[1., 1., 1.]
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (4.2, 3.6) # <->, |]
o_plot.gen_set["figname"] = "disk_mass_evol_HR.png"
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = True
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["subplots_adjust_h"] = 0.3
o_plot.gen_set["subplots_adjust_w"] = 0.0
o_plot.set_plot_dics = []
for sim, color, ls, lw, alpha in zip(sims, colors, lss, lws, alphas):
        if "10651772" not in sim:
print("{}".format(sim))
o_data = ADD_METHODS_ALL_PAR(sim)
data = o_data.get_disk_mass()
assert len(data) > 0
tmerg = o_data.get_par("tmerg")
tarr = (data[:, 0] - tmerg) * 1e3
marr = data[:, 1]
if sim == "DD2_M13641364_M0_LK_SR_R04":
tarr = tarr[3:] # 3ms, 6ms, 51ms.... Removing initial profiles
marr = marr[3:] #
#
tcoll = o_data.get_par("tcoll_gw")
if not np.isnan(tcoll) and tcoll < tarr[-1]:
tcoll = (tcoll - tmerg) * 1e3
print(tcoll, tarr[0])
mcoll = interpolate.interp1d(tarr, marr, kind="linear")(tcoll)
tcoll_dic = {
'task': 'line', 'ptype': 'cartesian',
'position': (1, 1),
'xarr': [tcoll], 'yarr': [mcoll],
'v_n_x': "time", 'v_n_y': "mass",
'color': color, 'marker': "x", 'ms': 5., 'alpha': alpha,
'xmin': -10, 'xmax': 40, 'ymin': 0, 'ymax': .3,
'xlabel': Labels.labels("t-tmerg"), 'ylabel': Labels.labels("diskmass"),
'label': None, 'yscale': 'linear',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'legend': {} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
}
o_plot.set_plot_dics.append(tcoll_dic)
#
plot_dic = {
'task': 'line', 'ptype': 'cartesian',
'position': (1, 1),
'xarr': tarr, 'yarr': marr,
'v_n_x': "time", 'v_n_y': "mass",
'color': color, 'ls': ls, 'lw': 0.8, 'ds': 'steps', 'alpha': 1.0,
'xmin': -10, 'xmax': 40, 'ymin': 0, 'ymax': .35,
'xlabel': Labels.labels("t-tmerg"), 'ylabel': Labels.labels("diskmass"),
'label': str(sim).replace('_', '\_'), 'yscale': 'linear',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'legend': {'bbox_to_anchor': (1.1, 1.05),
'loc': 'lower right', 'ncol': 2, 'fontsize': 8} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
}
if sim == sims[-1]:
plot_dic['legend'] = {'bbox_to_anchor': (1.1, 1.05),
'loc': 'lower right', 'ncol': 2, 'fontsize': 8}
o_plot.set_plot_dics.append(plot_dic)
o_plot.main()
exit(1)
''' disk slice xy-xz '''
def plot_den_unb__vel_z_sly4_evol():
# tmp = d3class.get_data(688128, 3, "xy", "ang_mom_flux")
# print(tmp.min(), tmp.max())
# print(tmp)
# exit(1) # dens_unb_geo
""" --- --- --- """
'''sly4 '''
simlist = ["SLy4_M13641364_M0_SR", "SLy4_M13641364_M0_SR", "SLy4_M13641364_M0_SR", "SLy4_M13641364_M0_SR"]
# itlist = [434176, 475136, 516096, 565248]
# itlist = [606208, 647168, 696320, 737280]
# itlist = [434176, 516096, 647168, 737280]
''' ls220 '''
simlist = ["LS220_M14691268_M0_LK_SR", "LS220_M14691268_M0_LK_SR",
"LS220_M14691268_M0_LK_SR"] # , "LS220_M14691268_M0_LK_SR"]
itlist = [1515520, 1728512, 1949696] # , 2162688]
''' dd2 '''
simlist = ["DD2_M13641364_M0_LK_SR_R04", "DD2_M13641364_M0_LK_SR_R04",
"DD2_M13641364_M0_LK_SR_R04"] # , "DD2_M13641364_M0_LK_SR_R04"]
itlist = [1111116, 1741554, 2213326] # ,2611022]
#
simlist = ["DD2_M13641364_M0_LK_SR_R04", "BLh_M13641364_M0_LK_SR", "LS220_M14691268_M0_LK_SR",
"SLy4_M13641364_M0_SR"]
itlist = [2611022, 1974272, 1949696, 737280]
#
#
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + 'all2/'
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (4 * len(simlist), 6.0) # <->, |] # to match hists with (8.5, 2.7)
    o_plot.gen_set["figname"] = "disk_structure_last.png"  # "DD2_1512_slices.png" # LS_1412_slices
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = True
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["subplots_adjust_h"] = -0.35
o_plot.gen_set["subplots_adjust_w"] = 0.05
o_plot.set_plot_dics = []
#
rl = 3
#
o_plot.gen_set["figsize"] = (4.2 * len(simlist), 8.0) # <->, |] # to match hists with (8.5, 2.7)
plot_x_i = 1
for sim, it in zip(simlist, itlist):
print("sim:{} it:{}".format(sim, it))
d3class = LOAD_PROFILE_XYXZ(sim)
d1class = ADD_METHODS_ALL_PAR(sim)
t = d3class.get_time_for_it(it, d1d2d3prof="prof")
tmerg = d1class.get_par("tmerg")
xmin, xmax, ymin, ymax, zmin, zmax = UTILS.get_xmin_xmax_ymin_ymax_zmin_zmax(rl)
# --------------------------------------------------------------------------
# --------------------------------------------------------------------------
mask = "x>0"
#
v_n = "rho"
data_arr = d3class.get_data(it, rl, "xz", v_n)
x_arr = d3class.get_data(it, rl, "xz", "x")
z_arr = d3class.get_data(it, rl, "xz", "z")
# print(data_arr); exit(1)
contour_dic_xz = {
'task': 'contour',
'ptype': 'cartesian', 'aspect': 1.,
'xarr': x_arr, "yarr": z_arr, "zarr": data_arr, 'levels': [1.e13 / 6.176e+17],
'position': (1, plot_x_i), # 'title': '[{:.1f} ms]'.format(time_),
'colors': ['white'], 'lss': ["-"], 'lws': [1.],
'v_n_x': 'x', 'v_n_y': 'y', 'v_n': 'rho',
'xscale': None, 'yscale': None,
'fancyticks': True,
'sharey': False,
'sharex': True, # removes angular citkscitks
'fontsize': 14,
'labelsize': 14}
o_plot.set_plot_dics.append(contour_dic_xz)
rho_dic_xz = {'task': 'colormesh', 'ptype': 'cartesian', 'aspect': 1.,
'xarr': x_arr, "yarr": z_arr, "zarr": data_arr,
'position': (1, plot_x_i), # 'title': '[{:.1f} ms]'.format(time_),
'cbar': {},
'v_n_x': 'x', 'v_n_y': 'z', 'v_n': v_n,
'xmin': xmin, 'xmax': xmax, 'ymin': zmin, 'ymax': zmax, 'vmin': 1e-9, 'vmax': 1e-5,
'fill_vmin': False, # fills the x < vmin with vmin
'xscale': None, 'yscale': None,
'mask': mask, 'cmap': 'Greys', 'norm': "log",
'fancyticks': True,
'minorticks': True,
'title': {"text": sim.replace('_', '\_'), 'fontsize': 12},
# 'title': {"text": r'$t-t_{merg}:$' + r'${:.1f}$ [ms]'.format((t - tmerg) * 1e3), 'fontsize': 14},
'sharey': False,
'sharex': True, # removes angular citkscitks
'fontsize': 14,
'labelsize': 14
}
#
data_arr = d3class.get_data(it, rl, "xy", v_n)
x_arr = d3class.get_data(it, rl, "xy", "x")
y_arr = d3class.get_data(it, rl, "xy", "y")
contour_dic_xy = {
'task': 'contour',
'ptype': 'cartesian', 'aspect': 1.,
'xarr': x_arr, "yarr": y_arr, "zarr": data_arr, 'levels': [1.e13 / 6.176e+17],
'position': (2, plot_x_i), # 'title': '[{:.1f} ms]'.format(time_),
'colors': ['white'], 'lss': ["-"], 'lws': [1.],
'v_n_x': 'x', 'v_n_y': 'y', 'v_n': 'rho',
'xscale': None, 'yscale': None,
'fancyticks': True,
'sharey': False,
'sharex': True, # removes angular citkscitks
'fontsize': 14,
'labelsize': 14}
o_plot.set_plot_dics.append(contour_dic_xy)
rho_dic_xy = {'task': 'colormesh', 'ptype': 'cartesian', 'aspect': 1.,
'xarr': x_arr, "yarr": y_arr, "zarr": data_arr,
'position': (2, plot_x_i), # 'title': '[{:.1f} ms]'.format(time_),
'cbar': {},
'v_n_x': 'x', 'v_n_y': 'y', 'v_n': v_n,
'xmin': xmin, 'xmax': xmax, 'ymin': ymin, 'ymax': ymax, 'vmin': 1e-9, 'vmax': 1e-5,
'fill_vmin': False, # fills the x < vmin with vmin
'xscale': None, 'yscale': None,
'mask': mask, 'cmap': 'Greys', 'norm': "log",
'fancyticks': True,
'minorticks': True,
'title': {},
'sharey': False,
'sharex': False, # removes angular citkscitks
'fontsize': 14,
'labelsize': 14
}
#
if plot_x_i == 1:
rho_dic_xy['cbar'] = {'location': 'bottom -.05 .00', 'label': r'$\rho$ [GEO]', # 'fmt': '%.1e',
'labelsize': 14,
'fontsize': 14}
if plot_x_i > 1:
rho_dic_xz['sharey'] = True
rho_dic_xy['sharey'] = True
o_plot.set_plot_dics.append(rho_dic_xz)
o_plot.set_plot_dics.append(rho_dic_xy)
# ----------------------------------------------------------------------
v_n = "dens_unb_bern"
#
data_arr = d3class.get_data(it, rl, "xz", v_n)
x_arr = d3class.get_data(it, rl, "xz", "x")
z_arr = d3class.get_data(it, rl, "xz", "z")
dunb_dic_xz = {'task': 'colormesh', 'ptype': 'cartesian', 'aspect': 1.,
'xarr': x_arr, "yarr": z_arr, "zarr": data_arr,
'position': (1, plot_x_i), # 'title': '[{:.1f} ms]'.format(time_),
'cbar': {},
'v_n_x': 'x', 'v_n_y': 'z', 'v_n': v_n,
'xmin': xmin, 'xmax': xmax, 'ymin': zmin, 'ymax': zmax, 'vmin': 1e-10, 'vmax': 1e-7,
'fill_vmin': False, # fills the x < vmin with vmin
'xscale': None, 'yscale': None,
'mask': mask, 'cmap': 'Blues', 'norm': "log",
'fancyticks': True,
'minorticks': True,
'title': {},
# {"text": r'$t-t_{merg}:$' + r'${:.1f}$ [ms]'.format((t - tmerg) * 1e3), 'fontsize': 14},
'sharex': True, # removes angular citkscitks
'sharey': False,
'fontsize': 14,
'labelsize': 14
}
#
data_arr = d3class.get_data(it, rl, "xy", v_n)
x_arr = d3class.get_data(it, rl, "xy", "x")
y_arr = d3class.get_data(it, rl, "xy", "y")
dunb_dic_xy = {'task': 'colormesh', 'ptype': 'cartesian', 'aspect': 1.,
'xarr': x_arr, "yarr": y_arr, "zarr": data_arr,
'position': (2, plot_x_i), # 'title': '[{:.1f} ms]'.format(time_),
'cbar': {},
'fill_vmin': False, # fills the x < vmin with vmin
'v_n_x': 'x', 'v_n_y': 'y', 'v_n': v_n,
'xmin': xmin, 'xmax': xmax, 'ymin': ymin, 'ymax': ymax, 'vmin': 1e-10, 'vmax': 1e-7,
'xscale': None, 'yscale': None,
'mask': mask, 'cmap': 'Blues', 'norm': "log",
'fancyticks': True,
'minorticks': True,
'title': {},
'sharey': False,
'sharex': False, # removes angular citkscitks
'fontsize': 14,
'labelsize': 14
}
#
if plot_x_i == 2:
dunb_dic_xy['cbar'] = {'location': 'bottom -.05 .00', 'label': r'$D_{\rm{unb}}$ [GEO]', # 'fmt': '%.1e',
'labelsize': 14,
'fontsize': 14}
if plot_x_i > 1:
dunb_dic_xz['sharey'] = True
dunb_dic_xy['sharey'] = True
o_plot.set_plot_dics.append(dunb_dic_xz)
o_plot.set_plot_dics.append(dunb_dic_xy)
# ----------------------------------------------------------------------
mask = "x<0"
#
v_n = "Ye"
cmap = "bwr_r"
#
data_arr = d3class.get_data(it, rl, "xz", v_n)
x_arr = d3class.get_data(it, rl, "xz", "x")
z_arr = d3class.get_data(it, rl, "xz", "z")
ye_dic_xz = {'task': 'colormesh', 'ptype': 'cartesian', 'aspect': 1.,
'xarr': x_arr, "yarr": z_arr, "zarr": data_arr,
'position': (1, plot_x_i), # 'title': '[{:.1f} ms]'.format(time_),
'cbar': {},
'fill_vmin': False, # fills the x < vmin with vmin
'v_n_x': 'x', 'v_n_y': 'z', 'v_n': v_n,
'xmin': xmin, 'xmax': xmax, 'ymin': zmin, 'ymax': zmax, 'vmin': 0.05, 'vmax': 0.5,
'xscale': None, 'yscale': None,
'mask': mask, 'cmap': cmap, 'norm': None,
'fancyticks': True,
'minorticks': True,
'title': {},
# {"text": r'$t-t_{merg}:$' + r'${:.1f}$ [ms]'.format((t - tmerg) * 1e3), 'fontsize': 14},
'sharey': False,
'sharex': True, # removes angular citkscitks
'fontsize': 14,
'labelsize': 14
}
#
data_arr = d3class.get_data(it, rl, "xy", v_n)
x_arr = d3class.get_data(it, rl, "xy", "x")
y_arr = d3class.get_data(it, rl, "xy", "y")
ye_dic_xy = {'task': 'colormesh', 'ptype': 'cartesian', 'aspect': 1.,
'xarr': x_arr, "yarr": y_arr, "zarr": data_arr,
'position': (2, plot_x_i), # 'title': '[{:.1f} ms]'.format(time_),
'cbar': {},
'fill_vmin': False, # fills the x < vmin with vmin
'v_n_x': 'x', 'v_n_y': 'y', 'v_n': v_n,
'xmin': xmin, 'xmax': xmax, 'ymin': ymin, 'ymax': ymax, 'vmin': 0.01, 'vmax': 0.5,
'xscale': None, 'yscale': None,
'mask': mask, 'cmap': cmap, 'norm': None,
'fancyticks': True,
'minorticks': True,
'title': {},
'sharey': False,
'sharex': False, # removes angular citkscitks
'fontsize': 14,
'labelsize': 14
}
#
if plot_x_i == 3:
ye_dic_xy['cbar'] = {'location': 'bottom -.05 .00', 'label': r'$Y_e$', 'fmt': '%.1f',
'labelsize': 14,
'fontsize': 14}
if plot_x_i > 1:
ye_dic_xz['sharey'] = True
ye_dic_xy['sharey'] = True
o_plot.set_plot_dics.append(ye_dic_xz)
o_plot.set_plot_dics.append(ye_dic_xy)
# ----------------------------------------------------------
tcoll = d1class.get_par("tcoll_gw")
if not np.isnan(tcoll) and t >= tcoll:
print(tcoll, t)
v_n = "lapse"
mask = "z>0.15"
data_arr = d3class.get_data(it, rl, "xz", v_n)
x_arr = d3class.get_data(it, rl, "xz", "x")
z_arr = d3class.get_data(it, rl, "xz", "z")
lapse_dic_xz = {'task': 'colormesh', 'ptype': 'cartesian', 'aspect': 1.,
'xarr': x_arr, "yarr": z_arr, "zarr": data_arr,
'position': (1, plot_x_i), # 'title': '[{:.1f} ms]'.format(time_),
'cbar': {},
'v_n_x': 'x', 'v_n_y': 'z', 'v_n': v_n,
'xmin': xmin, 'xmax': xmax, 'ymin': zmin, 'ymax': zmax, 'vmin': 0., 'vmax': 0.15,
'fill_vmin': False, # fills the x < vmin with vmin
'xscale': None, 'yscale': None,
'mask': mask, 'cmap': 'Greys', 'norm': None,
'fancyticks': True,
'minorticks': True,
'title': {}, # ,{"text": r'$t-t_{merg}:$' + r'${:.1f}$ [ms]'.format((t - tmerg) * 1e3),
# 'fontsize': 14},
'sharey': False,
'sharex': True, # removes angular citkscitks
'fontsize': 14,
'labelsize': 14
}
#
data_arr = d3class.get_data(it, rl, "xy", v_n)
# print(data_arr.min(), data_arr.max()); exit(1)
x_arr = d3class.get_data(it, rl, "xy", "x")
y_arr = d3class.get_data(it, rl, "xy", "y")
lapse_dic_xy = {'task': 'colormesh', 'ptype': 'cartesian', 'aspect': 1.,
'xarr': x_arr, "yarr": y_arr, "zarr": data_arr,
'position': (2, plot_x_i), # 'title': '[{:.1f} ms]'.format(time_),
'cbar': {},
'v_n_x': 'x', 'v_n_y': 'y', 'v_n': v_n,
'xmin': xmin, 'xmax': xmax, 'ymin': ymin, 'ymax': ymax, 'vmin': 0, 'vmax': 0.15,
'fill_vmin': False, # fills the x < vmin with vmin
'xscale': None, 'yscale': None,
'mask': mask, 'cmap': 'Greys', 'norm': None,
'fancyticks': True,
'minorticks': True,
'title': {},
'sharey': False,
'sharex': False, # removes angular ticks
'fontsize': 14,
'labelsize': 14
}
#
# if plot_x_i == 1:
# rho_dic_xy['cbar'] = {'location': 'bottom -.05 .00', 'label': r'$\rho$ [GEO]', # 'fmt': '%.1e',
# 'labelsize': 14,
# 'fontsize': 14}
if plot_x_i > 1:
lapse_dic_xz['sharey'] = True
lapse_dic_xy['sharey'] = True
o_plot.set_plot_dics.append(lapse_dic_xz)
o_plot.set_plot_dics.append(lapse_dic_xy)
plot_x_i += 1
o_plot.main()
exit(0)
''' density modes '''
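# Plot the magnitudes of the m=1 (solid) and m=2 (dotted) density modes, normalized to the m=0 mode, versus post-merger time for several simulations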
def plot_desity_modes():
sims = ["DD2_M13641364_M0_SR", "DD2_M13641364_M0_LK_SR_R04", "DD2_M15091235_M0_LK_SR", "LS220_M14691268_M0_LK_SR"]
lbls = ["DD2", "DD2 136 136", "DD2 151 124", "LS220 147 127"]
ls_m1 = ["-", "-", '-', '-']
ls_m2 = [":", ":", ":", ":"]
colors = ["black", "green", "blue", "red"]
lws_m1 = [1., 1., 1., 1.]
lws_m2 = [0.8, 0.8, 0.8, 0.8]
alphas = [1., 1., 1., 1.]
#
norm_to_m = 0
#
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (9.0, 2.7) # <->, |]
o_plot.gen_set["figname"] = "dm_dd2_dd2_ls220.png"
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = False
o_plot.set_plot_dics = []
#
#
for sim, lbl, ls1, ls2, color, lw1, lw2, alpha in zip(sims, lbls, ls_m1, ls_m2, colors, lws_m1, lws_m2, alphas):
o_dm = LOAD_DENSITY_MODES(sim)
o_dm.gen_set['fname'] = Paths.ppr_sims + sim + "/" + "profiles/" + "density_modes_lap15.h5"
o_par = ADD_METHODS_ALL_PAR(sim)
tmerg = o_par.get_par("tmerg")
#
mags1 = o_dm.get_data(1, "int_phi_r")
mags1 = np.abs(mags1)
if norm_to_m is not None:
# print('Normalizing')
norm_int_phi_r1d = o_dm.get_data(norm_to_m, 'int_phi_r')
# print(norm_int_phi_r1d); exit(1)
mags1 = mags1 / abs(norm_int_phi_r1d)[0]
times = o_dm.get_grid("times")
#
print(mags1)
#
times = (times - tmerg) * 1e3 # ms
#
densmode_m1 = {
'task': 'line', 'ptype': 'cartesian',
'xarr': times, 'yarr': mags1,
'position': (1, 1),
'v_n_x': 'times', 'v_n_y': 'int_phi_r abs',
'ls': ls1, 'color': color, 'lw': lw1, 'ds': 'default', 'alpha': alpha,
'label': lbl, 'ylabel': r'$C_m/C_0$ Magnitude', 'xlabel': Labels.labels("t-tmerg"),
'xmin': 45, 'xmax': 110, 'ymin': 1e-5, 'ymax': 1e-1,
'xscale': None, 'yscale': 'log', 'legend': {},
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14
}
#
mags2 = o_dm.get_data(2, "int_phi_r")
mags2 = np.abs(mags2)
if norm_to_m is not None:
# print('Normalizing')
norm_int_phi_r1d = o_dm.get_data(norm_to_m, 'int_phi_r')
# print(norm_int_phi_r1d); exit(1)
mags2 = mags2 / abs(norm_int_phi_r1d)[0]
# times = (times - tmerg) * 1e3 # ms
# print(mags2); exit(1)
densmode_m2 = {
'task': 'line', 'ptype': 'cartesian',
'xarr': times, 'yarr': mags2,
'position': (1, 1),
'v_n_x': 'times', 'v_n_y': 'int_phi_r abs',
'ls': ls2, 'color': color, 'lw': lw2, 'ds': 'default', 'alpha': alpha,
'label': None, 'ylabel': r'$C_m/C_0$ Magnitude', 'xlabel': Labels.labels("t-tmerg"),
'xmin': 45, 'xmax': 110, 'ymin': 1e-5, 'ymax': 1e-1,
'xscale': None, 'yscale': 'log',
'fancyticks': True, 'minorticks': True,
'legend': {'loc': 'best', 'ncol': 1, 'fontsize': 12},
'fontsize': 14,
'labelsize': 14
}
#
o_plot.set_plot_dics.append(densmode_m1)
o_plot.set_plot_dics.append(densmode_m2)
#
o_plot.main()
exit(1)
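# Two-panel variant: slice-based density modes (slices/rho_modes.h5) for the DD2 pair in the top panel and profile-based modes (profiles/density_modes_lap15.h5) for the LS220 pair in the bottom panel, comparing runs with and without viscosity (LK)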
def plot_desity_modes2():
_fpath = "slices/" + "rho_modes.h5" # "profiles/" + "density_modes_lap15.h5"
sims = ["DD2_M13641364_M0_SR", "DD2_M13641364_M0_LK_SR_R04"]
lbls = ["DD2 136 136", "DD2 136 136 LK"]
ls_m1 = ["-", "-"]
ls_m2 = [":", ":"]
colors = ["green", "orange"]
lws_m1 = [1., 1., ]
lws_m2 = [0.8, 0.8]
alphas = [1., 1.]
#
norm_to_m = 0
#
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (9.0, 3.6) # <->, |]
o_plot.gen_set["figname"] = "dm_dd2_dd2_ls220.png"
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = False
o_plot.gen_set["subplots_adjust_h"] = 0.2
o_plot.gen_set["subplots_adjust_w"] = 0.0
o_plot.set_plot_dics = []
#
#
for sim, lbl, ls1, ls2, color, lw1, lw2, alpha in zip(sims, lbls, ls_m1, ls_m2, colors, lws_m1, lws_m2, alphas):
o_dm = LOAD_DENSITY_MODES(sim)
o_dm.gen_set['fname'] = Paths.ppr_sims + sim + "/" + _fpath
o_par = ADD_METHODS_ALL_PAR(sim)
tmerg = o_par.get_par("tmerg")
#
mags1 = o_dm.get_data(1, "int_phi_r")
mags1 = np.abs(mags1)
if norm_to_m is not None:
# print('Normalizing')
norm_int_phi_r1d = o_dm.get_data(norm_to_m, 'int_phi_r')
# print(norm_int_phi_r1d); exit(1)
mags1 = mags1 / abs(norm_int_phi_r1d)[0]
times = o_dm.get_grid("times")
#
print(mags1)
#
times = (times - tmerg) * 1e3 # ms
#
densmode_m1 = {
'task': 'line', 'ptype': 'cartesian',
'xarr': times, 'yarr': mags1,
'position': (1, 1),
'v_n_x': 'times', 'v_n_y': 'int_phi_r abs',
'ls': ls1, 'color': 'gray', 'lw': lw1, 'ds': 'default', 'alpha': alpha,
'label': None, 'ylabel': None, 'xlabel': Labels.labels("t-tmerg"),
'xmin': -10, 'xmax': 110, 'ymin': 1e-4, 'ymax': 5e-1,
'xscale': None, 'yscale': 'log', 'legend': {},
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14
}
#
mags2 = o_dm.get_data(2, "int_phi_r")
mags2 = np.abs(mags2)
if norm_to_m is not None:
# print('Normalizing')
norm_int_phi_r1d = o_dm.get_data(norm_to_m, 'int_phi_r')
# print(norm_int_phi_r1d); exit(1)
mags2 = mags2 / abs(norm_int_phi_r1d)[0]
# times = (times - tmerg) * 1e3 # ms
# print(mags2); exit(1)
densmode_m2 = {
'task': 'line', 'ptype': 'cartesian',
'xarr': times, 'yarr': mags2,
'position': (1, 1),
'v_n_x': 'times', 'v_n_y': 'int_phi_r abs',
'ls': ls2, 'color': 'gray', 'lw': lw2, 'ds': 'default', 'alpha': alpha,
'label': None, 'ylabel': r'$C_m/C_0$', 'xlabel': Labels.labels("t-tmerg"),
'xmin': 0, 'xmax': 110, 'ymin': 1e-4, 'ymax': 5e-1,
'xscale': None, 'yscale': 'log',
'fancyticks': True, 'minorticks': True,
'legend': {},
'fontsize': 14,
'labelsize': 14,
'title': {'text': "Density Mode Evolution", 'fontsize': 14}
# 'sharex': True
}
#
if sim == sims[0]:
densmode_m1['label'] = r"$m=1$"
densmode_m2['label'] = r"$m=2$"
o_plot.set_plot_dics.append(densmode_m1)
o_plot.set_plot_dics.append(densmode_m2)
#
# ---
#
densmode_m1 = {
'task': 'line', 'ptype': 'cartesian',
'xarr': times, 'yarr': mags1,
'position': (1, 1),
'v_n_x': 'times', 'v_n_y': 'int_phi_r abs',
'ls': ls1, 'color': color, 'lw': lw1, 'ds': 'default', 'alpha': alpha,
'label': None, 'ylabel': None, 'xlabel': Labels.labels("t-tmerg"),
'xmin': -10, 'xmax': 110, 'ymin': 1e-4, 'ymax': 5e-1,
'xscale': None, 'yscale': 'log',
'fancyticks': True, 'minorticks': True,
'legend': {'loc': 'upper right', 'ncol': 2, 'fontsize': 12, 'shadow': False, 'framealpha': 0.5,
'borderaxespad': 0.0},
'fontsize': 14,
'labelsize': 14
}
#
mags2 = o_dm.get_data(2, "int_phi_r")
mags2 = np.abs(mags2)
if norm_to_m is not None:
# print('Normalizing')
norm_int_phi_r1d = o_dm.get_data(norm_to_m, 'int_phi_r')
# print(norm_int_phi_r1d); exit(1)
mags2 = mags2 / abs(norm_int_phi_r1d)[0]
# times = (times - tmerg) * 1e3 # ms
# print(mags2); exit(1)
densmode_m2 = {
'task': 'line', 'ptype': 'cartesian',
'xarr': times, 'yarr': mags2,
'position': (1, 1),
'v_n_x': 'times', 'v_n_y': 'int_phi_r abs',
'ls': ls2, 'color': color, 'lw': lw2, 'ds': 'default', 'alpha': alpha,
'label': None, 'ylabel': r'$C_m/C_0$', 'xlabel': Labels.labels("t-tmerg"),
'xmin': 0, 'xmax': 110, 'ymin': 1e-4, 'ymax': 5e-1,
'xscale': None, 'yscale': 'log',
'fancyticks': True, 'minorticks': True,
# 'legend2': {'loc': 'lower right', 'ncol': 1, 'fontsize': 12, 'shadow':False, 'framealpha': 1.0, 'borderaxespad':0.0},
'fontsize': 14,
'labelsize': 14,
'title': {'text': "Density Mode Evolution", 'fontsize': 14}
# 'sharex': True
}
#
if sim == sims[0]:
densmode_m1['label'] = "DD2 136 136"
else:
densmode_m1['label'] = "DD2 136 136 Viscosity"
o_plot.set_plot_dics.append(densmode_m1)
o_plot.set_plot_dics.append(densmode_m2)
#
_fpath = "profiles/" + "density_modes_lap15.h5"
#
sims = ["LS220_M13641364_M0_SR", "LS220_M13641364_M0_LK_SR_restart"]
lbls = ["LS220 136 136", "LS220 136 136 LK"]
ls_m1 = ["-", "-"]
ls_m2 = [":", ":"]
colors = ["green", "orange"]
lws_m1 = [1., 1., ]
lws_m2 = [0.8, 0.8]
alphas = [1., 1.]
#
for sim, lbl, ls1, ls2, color, lw1, lw2, alpha in zip(sims, lbls, ls_m1, ls_m2, colors, lws_m1, lws_m2, alphas):
o_dm = LOAD_DENSITY_MODES(sim)
o_dm.gen_set['fname'] = Paths.ppr_sims + sim + "/" + _fpath
o_par = ADD_METHODS_ALL_PAR(sim)
tmerg = o_par.get_par("tmerg")
#
mags1 = o_dm.get_data(1, "int_phi_r")
mags1 = np.abs(mags1)
if norm_to_m is not None:
# print('Normalizing')
norm_int_phi_r1d = o_dm.get_data(norm_to_m, 'int_phi_r')
# print(norm_int_phi_r1d); exit(1)
mags1 = mags1 / abs(norm_int_phi_r1d)[0]
times = o_dm.get_grid("times")
#
print(mags1)
#
times = (times - tmerg) * 1e3 # ms
#
densmode_m1 = {
'task': 'line', 'ptype': 'cartesian',
'xarr': times, 'yarr': mags1,
'position': (2, 1),
'v_n_x': 'times', 'v_n_y': 'int_phi_r abs',
'ls': ls1, 'color': color, 'lw': lw1, 'ds': 'default', 'alpha': alpha,
'label': lbl, 'ylabel': r'$C_m/C_0$ Magnitude', 'xlabel': Labels.labels("t-tmerg"),
'xmin': -0, 'xmax': 50, 'ymin': 1e-5, 'ymax': 5e-1,
'xscale': None, 'yscale': 'log', 'legend': {},
'fancyticks': True,
'minorticks': True,
'fontsize': 14,
'labelsize': 14
}
#
mags2 = o_dm.get_data(2, "int_phi_r")
mags2 = np.abs(mags2)
if norm_to_m is not None:
# print('Normalizing')
norm_int_phi_r1d = o_dm.get_data(norm_to_m, 'int_phi_r')
# print(norm_int_phi_r1d); exit(1)
mags2 = mags2 / abs(norm_int_phi_r1d)[0]
# times = (times - tmerg) * 1e3 # ms
# print(mags2); exit(1)
densmode_m2 = {
'task': 'line', 'ptype': 'cartesian',
'xarr': times, 'yarr': mags2,
'position': (2, 1),
'v_n_x': 'times', 'v_n_y': 'int_phi_r abs',
'ls': ls2, 'color': color, 'lw': lw2, 'ds': 'default', 'alpha': alpha,
'label': None, 'ylabel': r'$C_m/C_0$', 'xlabel': Labels.labels("t-tmerg"),
'xmin': 0, 'xmax': 40, 'ymin': 1e-5, 'ymax': 5e-1,
'xscale': None, 'yscale': 'log',
'fancyticks': True,
'minorticks': True,
'legend': {'loc': 'best', 'ncol': 1, 'fontsize': 12, 'shadow': False, 'framealpha': 1.0,
'borderaxespad': 0.0},
'fontsize': 14,
'labelsize': 14
}
#
if sim == sims[0]:
densmode_m1['label'] = "LS220 136 136"
else:
densmode_m1['label'] = "LS220 136 136 Viscosity"
o_plot.set_plot_dics.append(densmode_m1)
o_plot.set_plot_dics.append(densmode_m2)
o_plot.main()
exit(1)
''' Nucleo '''
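# Plot the final nucleosynthesis yields (abundance vs. mass number A) of many simulations, normalized to the solar pattern at A=195, together with the solar abundances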
def many_yeilds():
sims = ["DD2_M14971245_M0_SR", "DD2_M13641364_M0_SR", "DD2_M15091235_M0_LK_SR", "BLh_M13641364_M0_LK_SR",
"LS220_M14691268_M0_LK_SR"]
lbls = [sim.replace('_', '\_') for sim in sims]
masks = ["geo", "geo", "geo", "geo", "geo"]
# masks = ["geo bern_geoend", "geo bern_geoend", "geo bern_geoend", "geo bern_geoend", "geo bern_geoend"]
colors = ["blue", "cyan", "green", "black", "red"]
alphas = [1., 1., 1., 1., 1.]
lss = ['-', '-', '-', '-', '-']
lws = [1., 1., 1., 1., 1.]
det = 0
method = "sum" # "Asol=195"
#
sims = ["BLh_M11841581_M0_LK_SR",
"DD2_M13641364_M0_LK_SR_R04", "DD2_M13641364_M0_SR_R04", "DD2_M15091235_M0_LK_SR", "DD2_M14971245_M0_SR",
"LS220_M13641364_M0_LK_SR_restart", "LS220_M13641364_M0_SR", "LS220_M14691268_M0_LK_SR", "LS220_M14351298_M0_SR", # "LS220_M14691268_M0_SR",
"SFHo_M13641364_M0_LK_SR_2019pizza", "SFHo_M13641364_M0_SR", "SFHo_M14521283_M0_LK_SR_2019pizza", "SFHo_M14521283_M0_SR",
"SLy4_M13641364_M0_LK_SR", "SLy4_M14521283_M0_SR"]
lbls = [sim.replace('_', '\_') for sim in sims]
masks = ["geo",
"geo", "geo", "geo", "geo",
"geo", "geo", "geo", "geo",
"geo", "geo", "geo", "geo",
"geo", "geo"]
# masks = ["geo bern_geoend", "geo bern_geoend", "geo bern_geoend", "geo bern_geoend", "geo bern_geoend"]
colors = ["black",
"blue", "blue", "blue", "blue",
"red", "red", "red", "red",
"green", "green", "green", "green",
"orange", "orange"]
alphas = [1.,
1., 1., 1., 1.,
1., 1., 1., 1.,
1., 1., 1., 1.,
1., 1.]
lss = ['-',
'-', '--', '-.', ':',
'-', '--', '-.', ':',
'-', '--', '-.', ':',
'-', '--']
lws = [1.,
1., 0.8, 0.5, 0.5,
1., 0.8, 0.5, 0.5,
1., 0.8, 0.5, 0.5,
1., 0.8]
det = 0
method = "Asol=195" # "Asol=195"
#
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (4.2, 3.6) # <->, |]
o_plot.gen_set["figname"] = "yields_all_geo.png"
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = False
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["subplots_adjust_h"] = 0.3
o_plot.gen_set["subplots_adjust_w"] = 0.0
o_plot.set_plot_dics = []
#
o_data = ADD_METHODS_ALL_PAR(sims[0])
a_sol, y_sol = o_data.get_normalized_sol_data("sum")
sol_yeilds = {
'task': 'line', 'ptype': 'cartesian',
'position': (1, 1),
'xarr': a_sol, 'yarr': y_sol,
'v_n_x': 'Asun', 'v_n_y': 'Ysun',
'color': 'gray', 'marker': 'o', 'ms': 4, 'alpha': 0.4,
'ymin': 1e-5, 'ymax': 2e-1, 'xmin': 50, 'xmax': 210,
'xlabel': Labels.labels("A"), 'ylabel': Labels.labels("Y_final"),
'label': 'solar', 'yscale': 'log',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
}
o_plot.set_plot_dics.append(sol_yeilds)
for sim, mask, color, ls, alpha, lw, lbl in zip(sims, masks, colors, lss, alphas, lws, lbls):
o_data = ADD_METHODS_ALL_PAR(sim, add_mask=mask)
a_sim, y_sim = o_data.get_outflow_yields(det, mask, method=method)
sim_nucleo = {
'task': 'line', 'ptype': 'cartesian',
'position': (1, 1),
'xarr': a_sim, 'yarr': y_sim,
'v_n_x': 'A', 'v_n_y': 'abundances',
'color': color, 'ls': ls, 'lw': lw, 'ds': 'steps', 'alpha': alpha,
'ymin': 1e-5, 'ymax': 2e-1, 'xmin': 50, 'xmax': 210,
'xlabel': Labels.labels("A"), 'ylabel': Labels.labels("Y_final"),
'label': lbl, 'yscale': 'log',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
}
if sim == sims[-1]:
sim_nucleo['legend'] = {
'bbox_to_anchor': (1.0, -0.1),
# 'loc': 'lower left',
'loc': 'lower left', 'ncol': 1, 'fontsize': 9, 'framealpha': 0., 'borderaxespad': 0.,
'borderayespad': 0.}
o_plot.set_plot_dics.append(sim_nucleo)
o_plot.main()
exit(1)
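# Two-panel variant: dynamical-only ("geo") yields in the left panel and dynamical+wind ("geo bern_geoend") yields in the right panel, both normalized at A=195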
def tmp_many_yeilds():
# sims = ["DD2_M14971245_M0_SR", "DD2_M13641364_M0_SR", "DD2_M15091235_M0_LK_SR", "BLh_M13641364_M0_LK_SR",
# "LS220_M14691268_M0_LK_SR"] # long-lasting sims
sims = ["BLh_M11841581_M0_LK_SR",
"DD2_M13641364_M0_LK_SR_R04", "DD2_M13641364_M0_SR_R04", "DD2_M15091235_M0_LK_SR", "DD2_M14971245_M0_SR",
"LS220_M13641364_M0_LK_SR_restart", "LS220_M13641364_M0_SR", "LS220_M14691268_M0_LK_SR", "LS220_M14351298_M0_SR", #"LS220_M14691268_M0_SR",
"SFHo_M13641364_M0_LK_SR_2019pizza", "SFHo_M13641364_M0_SR", "SFHo_M14521283_M0_LK_SR_2019pizza", "SFHo_M14521283_M0_SR",
"SLy4_M13641364_M0_LK_SR", "SLy4_M14521283_M0_SR"]
lbls = [sim.replace('_', '\_') for sim in sims]
masks = ["geo",
"geo", "geo", "geo", "geo",
"geo", "geo", "geo", "geo", "geo",
"geo", "geo", "geo", "geo",
"geo", "geo"]
# masks = ["geo bern_geoend", "geo bern_geoend", "geo bern_geoend", "geo bern_geoend", "geo bern_geoend"]
colors = ["black",
"blue", "blue", "blue", "blue",
"red", "red", "red", "red", "red",
"green", "green", "green", "green",
"orange", "orange"]
alphas = [1.,
1., 1., 1., 1.,
1., 1., 1., 1., 1.,
1., 1., 1., 1.,
1., 1.]
lss = ['-',
'-', '--', '-.', ':',
'-', '--', '-.', ':', '-',
'-', '--', '-.', ':',
'-', '--']
lws = [1.,
1., 0.8, 0.5, 0.5,
1., 0.8, 0.5, 0.5, 0.5,
1., 0.8, 0.5, 0.5,
1., 0.8]
det = 0
method = "Asol=195" # "Asol=195"
#
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (16.2, 3.6) # <->, |]
o_plot.gen_set["figname"] = "yields_all_geo.png"
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = False
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["subplots_adjust_h"] = 0.3
o_plot.gen_set["subplots_adjust_w"] = 0.0
o_plot.set_plot_dics = []
#
o_data = ADD_METHODS_ALL_PAR(sims[0])
a_sol, y_sol = o_data.get_normalized_sol_data("sum")
sol_yeilds = {
'task': 'line', 'ptype': 'cartesian',
'position': (1, 1),
'xarr': a_sol, 'yarr': y_sol,
'v_n_x': 'Asun', 'v_n_y': 'Ysun',
'color': 'gray', 'marker': 'o', 'ms': 4, 'alpha': 0.4,
'ymin': 1e-5, 'ymax': 8e-1, 'xmin': 50, 'xmax': 230,
'xlabel': Labels.labels("A"), 'ylabel': Labels.labels("Y_final"),
'label': 'solar', 'yscale': 'log',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
}
o_plot.set_plot_dics.append(sol_yeilds)
for sim, mask, color, ls, alpha, lw, lbl in zip(sims, masks, colors, lss, alphas, lws, lbls):
o_data = ADD_METHODS_ALL_PAR(sim, add_mask=mask)
a_sim, y_sim = o_data.get_outflow_yields(det, mask, method=method)
sim_nucleo = {
'task': 'line', 'ptype': 'cartesian',
'position': (1, 1),
'xarr': a_sim, 'yarr': y_sim,
'v_n_x': 'A', 'v_n_y': 'abundances',
'color': color, 'ls': ls, 'lw': lw, 'ds': 'steps', 'alpha': alpha,
'ymin': 1e-5, 'ymax': 8e-1, 'xmin': 50, 'xmax': 220,
'xlabel': Labels.labels("A"), 'ylabel': Labels.labels("Y_final"),
'label': lbl, 'yscale': 'log',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'title': {'text': "Mask:{} Norm:{}".format(mask.replace('_', '\_'), method), 'fontsize': 14}
}
o_plot.set_plot_dics.append(sim_nucleo)
# # --- --- --- --- --- 1
# sol_yeilds = {
# 'task': 'line', 'ptype': 'cartesian',
# 'position': (1, 2),
# 'xarr': a_sol, 'yarr': y_sol,
# 'v_n_x': 'Asun', 'v_n_y': 'Ysun',
# 'color': 'gray', 'marker': 'o', 'ms': 4, 'alpha': 0.4,
# 'ymin': 1e-5, 'ymax': 2e-1, 'xmin': 50, 'xmax': 230,
# 'xlabel': Labels.labels("A"), 'ylabel': Labels.labels("Y_final"),
# 'label': 'solar', 'yscale': 'log',
# 'fancyticks': True, 'minorticks': True,
# 'fontsize': 14,
# 'labelsize': 14,
# 'sharey': True
# }
# o_plot.set_plot_dics.append(sol_yeilds)
#
# method = "Asol=195"
# #
# for sim, mask, color, ls, alpha, lw, lbl in zip(sims, masks, colors, lss, alphas, lws, lbls):
# o_data = ADD_METHODS_ALL_PAR(sim, add_mask=mask)
# a_sim, y_sim = o_data.get_outflow_yields(det, mask, method=method)
# sim_nucleo = {
# 'task': 'line', 'ptype': 'cartesian',
# 'position': (1, 2),
# 'xarr': a_sim, 'yarr': y_sim,
# 'v_n_x': 'A', 'v_n_y': 'abundances',
# 'color': color, 'ls': ls, 'lw': lw, 'ds': 'steps', 'alpha': alpha,
# 'ymin': 1e-5, 'ymax': 2e-1, 'xmin': 50, 'xmax': 220,
# 'xlabel': Labels.labels("A"), 'ylabel': Labels.labels("Y_final"),
# 'label': lbl, 'yscale': 'log',
# 'fancyticks': True, 'minorticks': True,
# 'fontsize': 14,
# 'labelsize': 14,
# 'sharey': True,
# 'title': {'text': "Mask:{} Norm:{}".format(mask.replace('_', '\_'), method), 'fontsize': 14}
# }
#
# o_plot.set_plot_dics.append(sim_nucleo)
# --- --- --- --- --- 2
# sol_yeilds = {
# 'task': 'line', 'ptype': 'cartesian',
# 'position': (1, 3),
# 'xarr': a_sol, 'yarr': y_sol,
# 'v_n_x': 'Asun', 'v_n_y': 'Ysun',
# 'color': 'gray', 'marker': 'o', 'ms': 4, 'alpha': 0.4,
# 'ymin': 1e-5, 'ymax': 2e-1, 'xmin': 50, 'xmax': 230,
# 'xlabel': Labels.labels("A"), 'ylabel': Labels.labels("Y_final"),
# 'label': 'solar', 'yscale': 'log',
# 'fancyticks': True, 'minorticks': True,
# 'fontsize': 14,
# 'labelsize': 14,
# 'sharey': True
# }
# o_plot.set_plot_dics.append(sol_yeilds)
#
# method = "sum"
# masks = ["geo bern_geoend", "geo bern_geoend", "geo bern_geoend", "geo bern_geoend", "geo bern_geoend"]
# #
# for sim, mask, color, ls, alpha, lw, lbl in zip(sims, masks, colors, lss, alphas, lws, lbls):
# o_data = ADD_METHODS_ALL_PAR(sim, add_mask=mask)
# a_sim, y_sim = o_data.get_outflow_yields(det, mask, method=method)
# sim_nucleo = {
# 'task': 'line', 'ptype': 'cartesian',
# 'position': (1, 3),
# 'xarr': a_sim, 'yarr': y_sim,
# 'v_n_x': 'A', 'v_n_y': 'abundances',
# 'color': color, 'ls': ls, 'lw': lw, 'ds': 'steps', 'alpha': alpha,
# 'ymin': 1e-5, 'ymax': 2e-1, 'xmin': 50, 'xmax': 220,
# 'xlabel': Labels.labels("A"), 'ylabel': Labels.labels("Y_final"),
# 'label': lbl, 'yscale': 'log',
# 'fancyticks': True, 'minorticks': True,
# 'fontsize': 14,
# 'labelsize': 14,
# 'sharey': True,
# 'title': {'text': "Mask:{} Norm:{}".format(mask.replace('_', '\_'), method), 'fontsize': 14}
# }
#
# o_plot.set_plot_dics.append(sim_nucleo)
# --- --- --- --- --- 3
sol_yeilds = {
'task': 'line', 'ptype': 'cartesian',
'position': (1, 2),
'xarr': a_sol, 'yarr': y_sol,
'v_n_x': 'Asun', 'v_n_y': 'Ysun',
'color': 'gray', 'marker': 'o', 'ms': 4, 'alpha': 0.4,
'ymin': 1e-5, 'ymax': 8e-1, 'xmin': 50, 'xmax': 210,
'xlabel': Labels.labels("A"), 'ylabel': Labels.labels("Y_final"),
'label': 'solar', 'yscale': 'log',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'sharey': True
}
o_plot.set_plot_dics.append(sol_yeilds)
method = "Asol=195"
masks = ["geo bern_geoend", "geo bern_geoend", "geo bern_geoend", "geo bern_geoend", "geo bern_geoend"]
#
for sim, mask, color, ls, alpha, lw, lbl in zip(sims, masks, colors, lss, alphas, lws, lbls):
o_data = ADD_METHODS_ALL_PAR(sim, add_mask=mask)
a_sim, y_sim = o_data.get_outflow_yields(det, mask, method=method)
sim_nucleo = {
'task': 'line', 'ptype': 'cartesian',
'position': (1, 2),
'xarr': a_sim, 'yarr': y_sim,
'v_n_x': 'A', 'v_n_y': 'abundances',
'color': color, 'ls': ls, 'lw': lw, 'ds': 'steps', 'alpha': alpha,
'ymin': 1e-5, 'ymax': 8e-1, 'xmin': 50, 'xmax': 210,
'xlabel': Labels.labels("A"), 'ylabel': Labels.labels("Y_final"),
'label': lbl, 'yscale': 'log',
'fancyticks': True, 'minorticks': True,
'fontsize': 14,
'labelsize': 14,
'sharey': True,
'title': {'text': "Mask:{} Norm:{}".format(mask.replace('_', '\_'), method), 'fontsize': 14}
}
if sim == sims[-1]:
sim_nucleo['legend'] = {'loc': 'lower left', 'ncol': 1, 'fontsize': 9, 'framealpha': 0.,
'borderaxespad': 0., 'borderayespad': 0.}
o_plot.set_plot_dics.append(sim_nucleo)
o_plot.main()
exit(1)
''' MKN '''
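# Optionally compute and then plot kilonova lightcurves (AB magnitude at 40 Mpc) in the g, z and Ks bands, comparing heating-rate prescriptions (LR/PBR) and kappa-table options against AT2017gfo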
def plot_many_mkn():
bands = ["g", "z", "Ks"]
#
sims = ["DD2_M14971245_M0_SR", "DD2_M13641364_M0_SR", "DD2_M15091235_M0_LK_SR", "BLh_M13641364_M0_LK_SR",
"LS220_M14691268_M0_LK_SR"]
lbls = [sim.replace('_', '\_') for sim in sims]
fnames = ["mkn_model.h5", "mkn_model.h5", "mkn_model.h5", "mkn_model.h5", "mkn_model.h5"]
lss = ["-", "-", "-", "-", "-"]
lws = [1., 1., 1., 1., 1.]
alphas = [1., 1., 1., 1., 1.]
colors = ["blue", "cyan", "green", "black", "red"]
#
sims = ["LS220_M14691268_M0_LK_SR", "LS220_M14691268_M0_LK_SR", "LS220_M14691268_M0_LK_SR", "LS220_M14691268_M0_LK_SR"]
lbls = [r"LR $\kappa \rightarrow Y_e$", r"PBR $\kappa \rightarrow Y_e$", "LR", "PBR"]
fnames = ["mkn_model_k_lr.h5", "mkn_model_k_pbr.h5", "mkn_model_lr.h5", "mkn_model_pbr.h5"]
lss = ["-", "-", "--", "--"]
lws = [1., 1., 1., 1.]
alphas = [1., 1., 1., 1.]
colors = ["blue", "red", "blue", "red"]
#
#
compute_models = True
#
if compute_models:
#
heat_rates = ["LR", "PBR", "LR", "PBR"]
kappas = [True, True, False, False]
#
components = ["dynamics", "spiral"]
detectors = [0, 0]
masks = ["geo", "bern_geoend"]
#
for sim, fname, heating, kappa in zip(sims, fnames, heat_rates, kappas):
o_mkn = COMPUTE_LIGHTCURVE(sim)
o_mkn.output_fname = fname
#
for component, detector, mask in zip(components, detectors, masks):
if component == "dynamics":
o_mkn.set_dyn_ej_nr(detector, mask)
o_mkn.set_dyn_par_var("aniso", detector, mask)
o_mkn.ejecta_params[component]['eps_ye_dep'] = heating#"PBR"
o_mkn.ejecta_params[component]['use_kappa_table'] = kappa # "PBR"
elif component == "spiral":
o_mkn.set_bern_ej_nr(detector, mask)
o_mkn.set_spiral_par_var("aniso", detector, mask)
o_mkn.ejecta_params[component]['eps_ye_dep'] = heating#"PBR"
o_mkn.ejecta_params[component]['use_kappa_table'] = kappa # "PBR"
else:
raise AttributeError("no method to set NR data for component:{}".format(component))
#
o_mkn.set_wind_par_war("") # No wind
o_mkn.set_secular_par_war("") # No secular
o_mkn.set_glob_par_var_source(True, True) # use both NR files
#
o_mkn.compute_save_lightcurve(True, fname) # save output
#
figname = ''
for band in bands:
figname = figname + band
if band != bands[-1]:
figname = figname + '_'
figname = figname + '.png'
#
#
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + 'all2/'
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (len(bands) * 3.0, 3.6) # <->, |] # to match hists with (8.5, 2.7)
o_plot.gen_set["figname"] = figname
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = False
o_plot.gen_set["subplots_adjust_h"] = 0.3
o_plot.gen_set["subplots_adjust_w"] = 0.0
o_plot.set_plot_dics = []
fontsize = 14
labelsize = 14
i_sim = 0
for sim, fname, lbl, ls, lw, alpha, color in zip(sims, fnames, lbls, lss, lws, alphas, colors):
o_res = COMBINE_LIGHTCURVES(sim)
for i_plot, band in enumerate(bands):
i_plot = i_plot + 1
times, mags = o_res.get_model_median(band, fname)
model = {
'task': 'line', "ptype": "cartesian",
'position': (1, i_plot),
'xarr': times, 'yarr': mags,
'v_n_x': 'time', 'v_n_y': 'mag',
'color': color, 'ls': ls, 'lw': lw, 'ds': 'default', 'alpha': alpha,
'ymin': 25, 'ymax': 15, 'xmin': 3e-1, 'xmax': 3e1,
'xlabel': r"time [days]", 'ylabel': r"AB magnitude at 40 Mpc",
'label': lbl, 'xscale': 'log',
'fancyticks': True, 'minorticks': True,
'sharey': False,
'fontsize': fontsize,
'labelsize': labelsize,
'legend': {} # {'loc': 'best', 'ncol': 2, 'fontsize': 18}
}
#
if i_sim == len(sims)-1:
obs = {
'task': 'mkn obs', "ptype": "cartesian",
'position': (1, i_plot),
'data': o_res, 'band': band, 'obs': True,
'v_n_x': 'time', 'v_n_y': 'mag',
'color': 'gray', 'marker': 'o', 'ms': 5., 'alpha': 0.8,
'ymin': 25, 'ymax': 15, 'xmin': 3e-1, 'xmax': 3e1,
'xlabel': r"time [days]", 'ylabel': r"AB magnitude at 40 Mpc",
'label': "AT2017gfo", 'xscale': 'log',
'fancyticks': True, 'minorticks': True,
'title': {'text': '{} band'.format(band), 'fontsize': 14},
'sharey': False,
'fontsize': fontsize,
'labelsize': labelsize,
'legend': {}
}
# if sim == sims[-1] and band != bands[-1]:
# model['label'] = None
if i_sim == len(sims)-1 and band != bands[0]:
model['sharey'] = True
obs['sharey'] = True
if i_sim == len(sims)-1 and band == bands[-1]:
model['legend'] = {
'ncol': 1, 'fontsize': 9, 'framealpha': 0., 'borderaxespad': 0.,
'borderayespad': 0.}
if i_sim == len(sims)-1:
o_plot.set_plot_dics.append(obs)
o_plot.set_plot_dics.append(model)
i_sim = i_sim + 1
o_plot.main()
exit(1)
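# Same comparison for the long-lived remnant set: dynamical + spiral-wave wind ejecta, computed with and without the kappa table for the given heating-rate prescription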
def plot_many_mkn_long(heating="PBR"):
#
bands = ["g", "z", "Ks"]
#
sims1 = ["DD2_M14971245_M0_SR", "DD2_M13641364_M0_SR", "DD2_M15091235_M0_LK_SR", "BLh_M13641364_M0_LK_SR", "LS220_M14691268_M0_LK_SR"]
lbls1 = [sim.replace('_', '\_') for sim in sims1]
fnames1 = ["mkn_model_{}.h5".format(heating) for sim in sims1]
lss1 = ["-", "-", "-", "-", "-"]
lws1 = [1., 1., 1., 1., 1.]
alphas1 = [1., 1., 1., 1., 1.]
colors1 = ["blue", "cyan", "green", "black", "red"]
#
sims2 = ["DD2_M14971245_M0_SR", "DD2_M13641364_M0_SR", "DD2_M15091235_M0_LK_SR", "BLh_M13641364_M0_LK_SR", "LS220_M14691268_M0_LK_SR"]
lbls2 = [None for sim in sims2]
fnames2 = ["mkn_model_k_{}.h5".format(heating) for sim in sims2]
lss2 = ["--", "--", "--", "--", "--"]
lws2 = [0.7, 0.7, 0.7, 0.7, 0.7]
alphas2 = [1., 1., 1., 1., 1.]
colors2 = ["blue", "cyan", "green", "black", "red"]
sims = sims1 + sims2
lbls = lbls1 + lbls2
fnames = fnames1 + fnames2
lss = lss1 + lss2
lws = lws1 + lws2
alphas = alphas1 + alphas2
colors = colors1 + colors2
#
#
compute_models = True
#
if compute_models:
#
heat_rates = [heating for i in sims]
kappas = [False for i in sims1] + [True for i in sims2]
#
components = ["dynamics", "spiral"]
detectors = [0, 0]
masks = ["geo", "bern_geoend"]
#
for sim, fname, heating, kappa in zip(sims, fnames, heat_rates, kappas):
o_mkn = COMPUTE_LIGHTCURVE(sim)
o_mkn.output_fname = fname
#
for component, detector, mask in zip(components, detectors, masks):
if component == "dynamics":
o_mkn.set_dyn_ej_nr(detector, mask)
o_mkn.set_dyn_par_var("aniso", detector, mask)
o_mkn.ejecta_params[component]['eps_ye_dep'] = heating#"PBR"
o_mkn.ejecta_params[component]['use_kappa_table'] = kappa # "PBR"
elif component == "spiral":
o_mkn.set_bern_ej_nr(detector, mask)
o_mkn.set_spiral_par_var("aniso", detector, mask)
o_mkn.ejecta_params[component]['eps_ye_dep'] = heating#"PBR"
o_mkn.ejecta_params[component]['use_kappa_table'] = kappa # "PBR"
else:
raise AttributeError("no method to set NR data for component:{}".format(component))
#
o_mkn.set_wind_par_war("") # No wind
o_mkn.set_secular_par_war("") # No secular
o_mkn.set_glob_par_var_source(True, True) # use both NR files
#
o_mkn.compute_save_lightcurve(True, fname) # save output
#
figname = ''
for band in bands:
figname = figname + band
if band != bands[-1]:
figname = figname + '_'
figname = figname + '_{}_all_long.png'.format(heating)
#
#
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + 'all2/'
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (len(bands) * 3.0, 3.6) # <->, |] # to match hists with (8.5, 2.7)
o_plot.gen_set["figname"] = figname
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = False
o_plot.gen_set["subplots_adjust_h"] = 0.3
o_plot.gen_set["subplots_adjust_w"] = 0.0
o_plot.set_plot_dics = []
fontsize = 14
labelsize = 14
i_sim = 0
for sim, fname, lbl, ls, lw, alpha, color in zip(sims, fnames, lbls, lss, lws, alphas, colors):
o_res = COMBINE_LIGHTCURVES(sim)
for i_plot, band in enumerate(bands):
i_plot = i_plot + 1
times, mags = o_res.get_model_median(band, fname)
model = {
'task': 'line', "ptype": "cartesian",
'position': (1, i_plot),
'xarr': times, 'yarr': mags,
'v_n_x': 'time', 'v_n_y': 'mag',
'color': color, 'ls': ls, 'lw': lw, 'ds': 'default', 'alpha': alpha,
'ymin': 25, 'ymax': 15, 'xmin': 3e-1, 'xmax': 3e1,
'xlabel': r"time [days]", 'ylabel': r"AB magnitude at 40 Mpc",
'label': lbl, 'xscale': 'log',
'fancyticks': True, 'minorticks': True,
'sharey': False,
'fontsize': fontsize,
'labelsize': labelsize,
'legend': {} # {'loc': 'best', 'ncol': 2, 'fontsize': 18}
}
#
obs = {
'task': 'mkn obs', "ptype": "cartesian",
'position': (1, i_plot),
'data': o_res, 'band': band, 'obs': True,
'v_n_x': 'time', 'v_n_y': 'mag',
'color': 'gray', 'marker': 'o', 'ms': 5., 'alpha': 0.8,
'ymin': 25, 'ymax': 15, 'xmin': 3e-1, 'xmax': 3e1,
'xlabel': r"time [days]", 'ylabel': r"AB magnitude at 40 Mpc",
'label': "AT2017gfo", 'xscale': 'log',
'fancyticks': True, 'minorticks': True,
'title': {'text': '{} band'.format(band), 'fontsize': 14},
'sharey': False,
'fontsize': fontsize,
'labelsize': labelsize,
'legend': {}
}
# if sim == sims[-1] and band != bands[-1]:
# model['label'] = None
if i_sim == len(sims)-1 and band != bands[0]:
model['sharey'] = True
obs['sharey'] = True
if i_sim == len(sims)-1 and band == bands[-1]:
model['legend'] = {
'loc':"lower left",
'ncol': 1, 'fontsize': 9, 'framealpha': 0., 'borderaxespad': 0.,
'borderayespad': 0.}
model['textold'] = {'coords':(0.8, 0.8), 'text':heating, 'color':'black', 'fs':16}
if i_sim == 0:
o_plot.set_plot_dics.append(obs)
o_plot.set_plot_dics.append(model)
i_sim = i_sim + 1
o_plot.main()
exit(1)
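# Dynamical-ejecta-only lightcurves for the full set of simulations (disk mass set to None), computed with and without the kappa table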
def plot_many_mkn_dyn_only_long(heating="PBR"):
#
bands = ["g", "z", "Ks"]
#
sims1 = ["BLh_M11841581_M0_LK_SR",
"DD2_M13641364_M0_LK_SR_R04", "DD2_M13641364_M0_SR_R04", "DD2_M15091235_M0_LK_SR", "DD2_M14971245_M0_SR",
"LS220_M13641364_M0_LK_SR_restart", "LS220_M13641364_M0_SR", "LS220_M14691268_M0_LK_SR", "LS220_M14351298_M0_SR", # "LS220_M14691268_M0_SR",
"SFHo_M13641364_M0_LK_SR_2019pizza", "SFHo_M13641364_M0_SR", "SFHo_M14521283_M0_LK_SR_2019pizza",
"SFHo_M14521283_M0_SR",
"SLy4_M13641364_M0_LK_SR", "SLy4_M14521283_M0_SR"]
lbls1 = [sim.replace('_', '\_') for sim in sims1]
fnames1 = ["mkn_model_1_{}.h5".format(heating) for sim in sims1]
colors1 = ["black",
"blue", "blue", "blue", "blue",
"red", "red", "red", "red", #"red",
"green", "green", "green", "green",
"orange", "orange"]
alphas1 = [1.,
1., 1., 1., 1.,
1., 1., 1., 1.,# 1.,
1., 1., 1., 1.,
1., 1.]
lss1 = ['-',
'-', '--', '-.', ':',
'-', '--', '-.', ':', #'-',
'-', '--', '-.', ':',
'-', '--']
lws1 = [1.,
1., 0.8, 0.5, 0.5,
1., 0.8, 0.5, 0.5,#0.5,
1., 0.8, 0.5, 0.5,
1., 0.8]
#
sims2 = sims1
lbls2 = [None for sim in sims2]
fnames2 = ["mkn_model_1_k_{}.h5".format(heating) for sim in sims2]
lss2 = lss1
lws2 = lws1
alphas2 = [0.5,
0.5, 0.5, 0.5, 0.5,
0.5, 0.5, 0.5, 0.5,
0.5, 0.5, 0.5, 0.5,
0.5, 0.5]
colors2 = colors1
sims = sims1 + sims2
lbls = lbls1 + lbls2
fnames = fnames1 + fnames2
lss = lss1 + lss2
lws = lws1 + lws2
alphas = alphas1 + alphas2
colors = colors1 + colors2
#
#
compute_models = True
#
if compute_models:
#
heat_rates = [heating for i in sims]
kappas = [False for i in sims1] + [True for i in sims2]
#
components = ["dynamics"]#, "spiral"]
detectors = [0, 0]
masks = ["geo"]#, "bern_geoend"]
#
for sim, fname, heating, kappa in zip(sims, fnames, heat_rates, kappas):
o_mkn = COMPUTE_LIGHTCURVE(sim)
o_mkn.output_fname = fname
#
for component, detector, mask in zip(components, detectors, masks):
if component == "dynamics":
o_mkn.set_dyn_ej_nr(detector, mask)
o_mkn.set_dyn_par_var("aniso", detector, mask)
o_mkn.ejecta_params[component]['eps_ye_dep'] = heating#"PBR"
o_mkn.ejecta_params[component]['use_kappa_table'] = kappa # "PBR"
elif component == "spiral":
o_mkn.set_bern_ej_nr(detector, mask)
o_mkn.set_spiral_par_var("aniso", detector, mask)
o_mkn.ejecta_params[component]['eps_ye_dep'] = heating#"PBR"
o_mkn.ejecta_params[component]['use_kappa_table'] = kappa # "PBR"
else:
raise AttributeError("no method to set NR data for component:{}".format(component))
#
o_mkn.set_wind_par_war("") # No wind
o_mkn.set_secular_par_war("") # No secular
o_mkn.set_glob_par_var_source(True, True) # use both NR files
#
o_mkn.glob_vars['m_disk'] = None
#
o_mkn.compute_save_lightcurve(True, fname) # save output
#
figname = ''
for band in bands:
figname = figname + band
if band != bands[-1]:
figname = figname + '_'
figname = figname + '_{}_all_short.png'.format(heating)
#
#
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + 'all2/'
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (len(bands) * 3.0, 3.6) # <->, |] # to match hists with (8.5, 2.7)
o_plot.gen_set["figname"] = figname
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = False
o_plot.gen_set["subplots_adjust_h"] = 0.3
o_plot.gen_set["subplots_adjust_w"] = 0.0
o_plot.set_plot_dics = []
fontsize = 14
labelsize = 14
i_sim = 0
for sim, fname, lbl, ls, lw, alpha, color in zip(sims, fnames, lbls, lss, lws, alphas, colors):
o_res = COMBINE_LIGHTCURVES(sim)
for i_plot, band in enumerate(bands):
i_plot = i_plot + 1
times, mags = o_res.get_model_median(band, fname)
model = {
'task': 'line', "ptype": "cartesian",
'position': (1, i_plot),
'xarr': times, 'yarr': mags,
'v_n_x': 'time', 'v_n_y': 'mag',
'color': color, 'ls': ls, 'lw': lw, 'ds': 'default', 'alpha': alpha,
'ymin': 25, 'ymax': 15, 'xmin': 3e-1, 'xmax': 3e1,
'xlabel': r"time [days]", 'ylabel': r"AB magnitude at 40 Mpc",
'label': lbl, 'xscale': 'log',
'fancyticks': True, 'minorticks': True,
'sharey': False,
'fontsize': fontsize,
'labelsize': labelsize,
'legend': {} # {'loc': 'best', 'ncol': 2, 'fontsize': 18}
}
#
obs = {
'task': 'mkn obs', "ptype": "cartesian",
'position': (1, i_plot),
'data': o_res, 'band': band, 'obs': True,
'v_n_x': 'time', 'v_n_y': 'mag',
'color': 'gray', 'marker': 'o', 'ms': 5., 'alpha': 0.8,
'ymin': 25, 'ymax': 15, 'xmin': 3e-1, 'xmax': 3e1,
'xlabel': r"time [days]", 'ylabel': r"AB magnitude at 40 Mpc",
'label': "AT2017gfo", 'xscale': 'log',
'fancyticks': True, 'minorticks': True,
'title': {'text': '{} band'.format(band), 'fontsize': 14},
'sharey': False,
'fontsize': fontsize,
'labelsize': labelsize,
'legend': {}
}
# if sim == sims[-1] and band != bands[-1]:
# model['label'] = None
if i_sim == len(sims)-1 and band != bands[0]:
model['sharey'] = True
obs['sharey'] = True
if i_sim == len(sims)-1 and band == bands[-1]:
# model['legend'] = {
# 'loc':"lower left",
# 'ncol': 1, 'fontsize': 9, 'framealpha': 0., 'borderaxespad': 0.,
# 'borderayespad': 0.}
# {
model['legend'] = {'bbox_to_anchor': (1.0, -0.1),
# 'loc': 'lower left',
'loc': 'lower left', 'ncol': 1, 'fontsize': 9, 'framealpha': 0., 'borderaxespad': 0.,
'borderayespad': 0.}
model['textold'] = {'coords':(0.8, 0.8), 'text':heating, 'color':'black', 'fs':16}
if i_sim == 0:
o_plot.set_plot_dics.append(obs)
o_plot.set_plot_dics.append(model)
i_sim = i_sim + 1
o_plot.main()
exit(1)
""" ---------------------------------------------- MIXED ------------------------------------------------------------"""
def plot_2ejecta_1disk_timehists():
# columns
sims = ["DD2_M14971245_M0_SR", "DD2_M13641364_M0_SR", "DD2_M13641364_M0_LK_SR_R04", "DD2_M15091235_M0_LK_SR", "BLh_M13641364_M0_LK_SR",
"LS220_M14691268_M0_LK_SR"]
# rows
masks2 = ["bern_geoend", "bern_geoend", "bern_geoend", "bern_geoend"]
masks1 = ["geo", "geo", "geo", "geo"]
v_ns = ["vel_inf", "Y_e", "theta", "temperature"]
v_ns_diks = ["Ye", "velz", "theta", "temp"]
det = 0
norm_to_m = 0
_fpath = "slices/" + "rho_modes.h5"
#
o_plot = PLOT_MANY_TASKS()
o_plot.gen_set["figdir"] = Paths.plots + "all2/"
o_plot.gen_set["type"] = "cartesian"
o_plot.gen_set["figsize"] = (14.0, 10.0) # <->, |]
o_plot.gen_set["figname"] = "timecorr_ej_disk.png"
o_plot.gen_set["sharex"] = False
o_plot.gen_set["sharey"] = True
o_plot.gen_set["dpi"] = 128
o_plot.gen_set["subplots_adjust_h"] = 0.03 # w
o_plot.gen_set["subplots_adjust_w"] = 0.01
o_plot.set_plot_dics = []
#
i_col = 1
for sim in sims:
#
o_data = ADD_METHODS_ALL_PAR(sim)
#
i_row = 1
# Time of the merger
fpath = Paths.ppr_sims + sim + "/" + "waveforms/" + "tmerger.dat"
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
tmerg = float(np.loadtxt(fname=fpath, unpack=True)) * Constants.time_constant # ms
# Total Ejecta Mass
for v_n, mask1, ls in zip(["Mej_tot", "Mej_tot"], ["geo", "bern_geoend"], ["--", "-"]):
# Time at which the dynamical ejecta saturates (98% of its total mass)
fpath = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask1 + '/' + "total_flux.dat"
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
timearr, mass = np.loadtxt(fname=fpath, unpack=True, usecols=(0, 2))
tend = float(timearr[np.where(mass >= (mass.max() * 0.98))][0]) * 1e3 # ms
tend = tend - tmerg
# print(time*1e3); exit(1)
# Dynamical ejecta
timearr = (timearr * 1e3) - tmerg
mass = mass * 1e2
plot_dic = {
'task': 'line', 'ptype': 'cartesian',
'position': (i_row, i_col),
'xarr': timearr, 'yarr': mass,
'v_n_x': "time", 'v_n_y': "mass",
'color': "black", 'ls': ls, 'lw': 0.8, 'ds': 'default', 'alpha': 1.0,
'ymin': 0.05, 'ymax': 2.9, 'xmin': timearr.min(), 'xmax': timearr.max(),
'xlabel': Labels.labels("t-tmerg"), 'ylabel': "M $[M_{\odot}]$",
'label': None, 'yscale': 'linear',
'fontsize': 14,
'labelsize': 14,
'fancyticks': True,
'minorticks': True,
'sharex': True, # removes angular ticks
'sharey': True,
'title': {"text": sim.replace('_', '\_'), 'fontsize': 12},
'legend': {} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
}
if sim == sims[0]:
plot_dic["sharey"] = False
if mask1 == "geo":
plot_dic['label'] = r"$M_{\rm{ej}}$ $[10^{-2} M_{\odot}]$"
else:
plot_dic['label'] = r"$M_{\rm{ej}}^{\rm{w}}$ $[10^{-2} M_{\odot}]$"
o_plot.set_plot_dics.append(plot_dic)
# Total Disk Mass
timedisk_massdisk = o_data.get_disk_mass()
timedisk = timedisk_massdisk[:, 0]
massdisk = timedisk_massdisk[:, 1]
timedisk = (timedisk * 1e3) - tmerg
massdisk = massdisk * 1e1
plot_dic = {
'task': 'line', 'ptype': 'cartesian',
'position': (i_row, i_col),
'xarr': timedisk, 'yarr': massdisk,
'v_n_x': "time", 'v_n_y': "mass",
'color': "black", 'ls': ':', 'lw': 0.8, 'ds': 'default', 'alpha': 1.0,
'ymin': 0.05, 'ymax': 3.0, 'xmin': timearr.min(), 'xmax': timearr.max(),
'xlabel': Labels.labels("t-tmerg"), 'ylabel': "M $[M_{\odot}]$",
'label': None, 'yscale': 'linear',
'fontsize': 14,
'labelsize': 14,
'fancyticks': True,
'minorticks': True,
'sharex': True, # removes angular ticks
'sharey': True,
# 'title': {"text": sim.replace('_', '\_'), 'fontsize': 12},
'legend': {} # 'loc': 'best', 'ncol': 2, 'fontsize': 18
}
if sim == sims[0]:
plot_dic["sharey"] = False
plot_dic['label'] = r"$M_{\rm{disk}}$ $[10^{-1} M_{\odot}]$"
plot_dic['legend'] = {'loc': 'best', 'ncol': 1, 'fontsize': 9, 'framealpha': 0.}
o_plot.set_plot_dics.append(plot_dic)
#
i_row = i_row + 1
# DENSITY MODES
o_dm = LOAD_DENSITY_MODES(sim)
o_dm.gen_set['fname'] = Paths.ppr_sims + sim + "/" + _fpath
#
mags1 = o_dm.get_data(1, "int_phi_r")
mags1 = np.abs(mags1)
# if sim == "DD2_M13641364_M0_SR": print("m1", mags1)#; exit(1)
if norm_to_m is not None:
# print('Normalizing')
norm_int_phi_r1d = o_dm.get_data(norm_to_m, 'int_phi_r')
# print(norm_int_phi_r1d); exit(1)
mags1 = mags1 / abs(norm_int_phi_r1d)[0]
times = o_dm.get_grid("times")
#
assert len(times) > 0
# if sim == "DD2_M13641364_M0_SR": print("m0", abs(norm_int_phi_r1d)); exit(1)
#
times = (times * 1e3) - tmerg # ms
#
densmode_m1 = {
'task': 'line', 'ptype': 'cartesian',
'xarr': times, 'yarr': mags1,
'position': (i_row, i_col),
'v_n_x': 'times', 'v_n_y': 'int_phi_r abs',
'ls': '-', 'color': 'black', 'lw': 0.8, 'ds': 'default', 'alpha': 1.,
'label': None, 'ylabel': None, 'xlabel': Labels.labels("t-tmerg"),
'xmin': timearr.min(), 'xmax': timearr.max(), 'ymin': 1e-4, 'ymax': 1e0,
'xscale': None, 'yscale': 'log', 'legend': {},
'fontsize': 14,
'labelsize': 14,
'fancyticks': True,
'minorticks': True,
'sharex': True, # removes angular ticks
'sharey': True
}
#
mags2 = o_dm.get_data(2, "int_phi_r")
mags2 = np.abs(mags2)
print(mags2)
if norm_to_m is not None:
# print('Normalizing')
norm_int_phi_r1d = o_dm.get_data(norm_to_m, 'int_phi_r')
# print(norm_int_phi_r1d); exit(1)
mags2 = mags2 / abs(norm_int_phi_r1d)[0]
# times = (times - tmerg) * 1e3 # ms
# print(abs(norm_int_phi_r1d)); exit(1)
densmode_m2 = {
'task': 'line', 'ptype': 'cartesian',
'xarr': times, 'yarr': mags2,
'position': (i_row, i_col),
'v_n_x': 'times', 'v_n_y': 'int_phi_r abs',
'ls': '-', 'color': 'gray', 'lw': 0.5, 'ds': 'default', 'alpha': 1.,
'label': None, 'ylabel': r'$C_m/C_0$', 'xlabel': Labels.labels("t-tmerg"),
'xmin': timearr.min(), 'xmax': timearr.max(), 'ymin': 1e-4, 'ymax': 9e-1,
'xscale': None, 'yscale': 'log',
'legend': {},
'fontsize': 14,
'labelsize': 14,
'fancyticks': True,
'minorticks': True,
'sharex': True, # removes angular ticks
'sharey': True,
'title': {} # {'text': "Density Mode Evolution", 'fontsize': 14}
# 'sharex': True
}
#
if sim == sims[0]:
densmode_m1['label'] = r"$m=1$"
densmode_m2['label'] = r"$m=2$"
if sim == sims[0]:
densmode_m1["sharey"] = False
densmode_m1['label'] = r"$m=1$"
densmode_m1['legend'] = {'loc': 'upper center', 'ncol': 2, 'fontsize': 9, 'framealpha': 0.,
'borderayespad': 0.}
if sim == sims[0]:
densmode_m2["sharey"] = False
densmode_m2['label'] = r"$m=2$"
densmode_m2['legend'] = {'loc': 'upper center', 'ncol': 2, 'fontsize': 9, 'framealpha': 0.,
'borderayespad': 0.}
o_plot.set_plot_dics.append(densmode_m2)
o_plot.set_plot_dics.append(densmode_m1)
i_row = i_row + 1
# TIME CORR EJECTA
for v_n, mask1, mask2 in zip(v_ns, masks1, masks2):
# Time at which the dynamical ejecta saturates (98% of its total mass)
fpath = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask1 + '/' + "total_flux.dat"
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
timearr, mass = np.loadtxt(fname=fpath, unpack=True, usecols=(0, 2))
tend = float(timearr[np.where(mass >= (mass.max() * 0.98))][0]) * 1e3 # ms
tend = tend - tmerg
# print(time*1e3); exit(1)
# Dynamical ejecta
#
fpath = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask1 + '/' + "timecorr_{}.h5".format(v_n)
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
# loading data
dfile = h5py.File(fpath, "r")
timearr = np.array(dfile["time"]) - tmerg
v_n_arr = np.array(dfile[v_n])
mass = np.array(dfile["mass"])
timearr, v_n_arr = np.meshgrid(timearr, v_n_arr)
# mass = np.maximum(mass, mass.min())
#
corr_dic2 = { # relies on the "get_res_corr(self, it, v_n): " method of data object
'task': 'corr2d', 'dtype': 'corr', 'ptype': 'cartesian',
'xarr': timearr, 'yarr': v_n_arr, 'zarr': mass,
'position': (i_row, i_col),
'v_n_x': "time", 'v_n_y': v_n, 'v_n': 'mass', 'normalize': True,
'cbar': {},
'cmap': 'inferno_r',
'xlabel': Labels.labels("time"), 'ylabel': Labels.labels(v_n, alternative=True),
'xmin': timearr.min(), 'xmax': timearr.max(), 'ymin': None, 'ymax': None, 'vmin': 1e-4, 'vmax': 1e-1,
'xscale': "linear", 'yscale': "linear", 'norm': 'log',
'mask_below': None, 'mask_above': None,
'title': {}, # {"text": o_corr_data.sim.replace('_', '\_'), 'fontsize': 14},
# 'text': {'text': lbl.replace('_', '\_'), 'coords': (0.05, 0.9), 'color': 'white', 'fs': 12},
'axvline': {"x": tend, "linestyle": "--", "color": "black", "linewidth": 1.},
'mask': "x>{}".format(tend),
'fancyticks': True,
'minorticks': True,
'sharex': True, # removes angular ticks
'sharey': True,
'fontsize': 14,
'labelsize': 14
}
if sim == sims[0]:
corr_dic2["sharey"] = False
if v_n == v_ns[-1]:
corr_dic2["sharex"] = False
if v_n == "vel_inf":
corr_dic2['ymin'], corr_dic2['ymax'] = 0., 0.45
elif v_n == "Y_e":
corr_dic2['ymin'], corr_dic2['ymax'] = 0.05, 0.45
elif v_n == "temperature":
corr_dic2['ymin'], corr_dic2['ymax'] = 0.1, 1.8
o_plot.set_plot_dics.append(corr_dic2)
# WIND
fpath = Paths.ppr_sims + sim + "/" + "outflow_{}/".format(det) + mask2 + '/' + "timecorr_{}.h5".format(v_n)
if not os.path.isfile(fpath):
raise IOError("File does not exist: {}".format(fpath))
# loading data
dfile = h5py.File(fpath, "r")
timearr = np.array(dfile["time"]) - tmerg
v_n_arr = np.array(dfile[v_n])
mass = np.array(dfile["mass"])
timearr, v_n_arr = np.meshgrid(timearr, v_n_arr)
# print(timearr);exit(1)
# mass = np.maximum(mass, mass.min())
#
corr_dic2 = { # relies on the "get_res_corr(self, it, v_n): " method of data object
'task': 'corr2d', 'dtype': 'corr', 'ptype': 'cartesian',
'xarr': timearr, 'yarr': v_n_arr, 'zarr': mass,
'position': (i_row, i_col),
'v_n_x': "time", 'v_n_y': v_n, 'v_n': 'mass', 'normalize': True,
'cbar': {},
'cmap': 'inferno_r',
'xlabel': Labels.labels("time"), 'ylabel': Labels.labels(v_n, alternative=True),
'xmin': timearr.min(), 'xmax': timearr.max(), 'ymin': None, 'ymax': None, 'vmin': 1e-4, 'vmax': 1e-1,
'xscale': "linear", 'yscale': "linear", 'norm': 'log',
'mask_below': None, 'mask_above': None,
'title': {}, # {"text": o_corr_data.sim.replace('_', '\_'), 'fontsize': 14},
# 'text': {'text': lbl.replace('_', '\_'), 'coords': (0.05, 0.9), 'color': 'white', 'fs': 12},
'mask': "x<{}".format(tend),
'fancyticks': True,
'minorticks': True,
'sharex': True, # removes angular ticks
'sharey': True,
'fontsize': 14,
'labelsize': 14
}
if sim == sims[0]:
corr_dic2["sharey"] = False
if v_n == v_ns[-1] and len(v_ns_diks) == 0:
corr_dic2["sharex"] = False
if v_n == "vel_inf":
corr_dic2['ymin'], corr_dic2['ymax'] = 0., 0.45
elif v_n == "Y_e":
corr_dic2['ymin'], corr_dic2['ymax'] = 0.05, 0.45
elif v_n == "theta":
corr_dic2['ymin'], corr_dic2['ymax'] = 0, 85
elif v_n == "temperature":
corr_dic2['ymin'], corr_dic2['ymax'] = 0, 1.8
if sim == sims[-1] and v_n == v_ns[-1]:
corr_dic2['cbar'] = {'location': 'right .02 0.', 'label': Labels.labels("mass"),
# 'right .02 0.' 'fmt': '%.1e',
'labelsize': 14, # 'aspect': 6.,
'fontsize': 14}
o_plot.set_plot_dics.append(corr_dic2)
i_row = i_row + 1
# DISK
if len(v_ns_diks) > 0:
d3_corr = LOAD_RES_CORR(sim)
iterations = d3_corr.list_iterations
#
for v_n in v_ns_diks:
# Loading 3D data
print("v_n:{}".format(v_n))
times = []
bins = []
values = []
for it in iterations:
fpath = Paths.ppr_sims + sim + "/" + "profiles/" + str(it) + "/" + "hist_{}.dat".format(v_n)
if os.path.isfile(fpath):
times.append(d3_corr.get_time_for_it(it, "prof"))
print("\tLoading it:{} t:{}".format(it, times[-1]))
data = np.loadtxt(fpath, unpack=False)
bins = data[:, 0]
values.append(data[:, 1])
else:
print("\tFile not found it:{}".format(fpath))
assert len(times) > 0
times = np.array(times) * 1e3
bins = np.array(bins)
values = np.reshape(np.array(values), newshape=(len(times), len(bins))).T
#
times = times - tmerg
#
values = values / np.sum(values)
values = np.maximum(values, 1e-10)
#
def_dic = {'task': 'colormesh', 'ptype': 'cartesian', # 'aspect': 1.,
'xarr': times, "yarr": bins, "zarr": values,
'position': (i_row, i_col), # 'title': '[{:.1f} ms]'.format(time_),
'cbar': {},
'v_n_x': 'x', 'v_n_y': 'z', 'v_n': v_n,
'xlabel': Labels.labels("t-tmerg"), 'ylabel': Labels.labels(v_n, alternative=True),
'xmin': timearr.min(), 'xmax': timearr.max(), 'ymin': bins.min(), 'ymax': bins.max(),
'vmin': 1e-6,
'vmax': 1e-2,
'fill_vmin': False, # fills the x < vmin with vmin
'xscale': None, 'yscale': None,
'mask': None, 'cmap': 'inferno_r', 'norm': "log",
'fancyticks': True,
'minorticks': True,
'title': {},
# "text": r'$t-t_{merg}:$' + r'${:.1f}$'.format((time_ - tmerg) * 1e3), 'fontsize': 14
# 'sharex': True, # removes angular ticks
'text': {},
'fontsize': 14,
'labelsize': 14,
'sharex': True,
'sharey': True,
}
if sim == sims[-1] and v_n == v_ns_diks[-1]:
def_dic['cbar'] = {'location': 'right .02 0.', # 'label': Labels.labels("mass"),
# 'right .02 0.' 'fmt': '%.1e',
'labelsize': 14, # 'aspect': 6.,
'fontsize': 14}
if v_n == v_ns[0]:
def_dic['text'] = {'coords': (1.0, 1.05), 'text': sim.replace("_", "\_"), 'color': 'black',
'fs': 16}
if v_n == "Ye":
def_dic['ymin'] = 0.05
def_dic['ymax'] = 0.45
if v_n == "velz":
def_dic['ymin'] = -.25
def_dic['ymax'] = .25
elif v_n == "temp":
# def_dic['yscale'] = "log"
def_dic['ymin'] = 1e-1
def_dic['ymax'] = 2.5e1
elif v_n == "theta":
def_dic['ymin'] = 0
def_dic['ymax'] = 85
def_dic["yarr"] = 90 - (def_dic["yarr"] / np.pi * 180.)
#
if v_n == v_ns_diks[-1]:
def_dic["sharex"] = False
if sim == sims[0]:
def_dic["sharey"] = False
o_plot.set_plot_dics.append(def_dic)
i_row = i_row + 1
i_col = i_col + 1
o_plot.main()
exit(1)
if __name__ == '__main__':
plot_2ejecta_1disk_timehists()
''' density modes '''
# plot_desity_modes()
# plot_desity_modes2()
''' --- neutrinos --- '''
# plot_several_q_eff("Q_eff_nua", ["LS220_M14691268_M0_LK_SR"], [1302528, 1515520, 1843200], "ls220_q_eff.png")
# plot_several_q_eff("Q_eff_nua", ["DD2_M15091235_M0_LK_SR"], [1277952, 1425408, 1540096], "dd2_q_eff.png")
#
# plot_several_q_eff("R_eff_nua", ["LS220_M14691268_M0_LK_SR"], [1302528, 1515520, 1843200], "ls220_r_eff.png")
# plot_several_q_eff("R_eff_nua", ["DD2_M15091235_M0_LK_SR"], [1277952, 1425408, 1540096], "dd2_r_eff.png")
''' ejecta properties '''
# plot_histograms_ejecta_for_many_sims()
# plot_histograms_ejecta("geo", "geo")
# plot_histograms_ejecta("geo", "bern_geoend")
# plot_total_fluxes_q1_and_qnot1("Y_e04_geoend")
# plot_total_fluxes_q1_and_qnot1("theta60_geoend")
# plot_2ejecta_1disk_timehists()
''' disk ejecta summary properties '''
# plot_last_disk_mass_with_lambda("Lambda", "q", "Mdisk3Dmax", None, None)
# plot_last_disk_mass_with_lambda("Lambda", "q", "Mej_tot", det=0, mask="geo")
# plot_last_disk_mass_with_lambda("Lambda", "q", "Mej_tot", det=0, mask="bern_geoend")
# plot_last_disk_mass_with_lambda("Lambda", "q", "Ye_ave", det=0, mask="geo")
# plot_last_disk_mass_with_lambda("Lambda", "q", "Ye_ave", det=0, mask="bern_geoend")
# plot_last_disk_mass_with_lambda("Lambda", "q", "vel_inf_ave", det=0, mask="geo")
# plot_last_disk_mass_with_lambda("Lambda", "q", "vel_inf_ave", det=0, mask="bern_geoend")
''' - '''
# plot_last_disk_mass_with_lambda2(v_n_x="Lambda", v_n_y="Mej_tot", v_n_col="q",
# mask_x=None,mask_y="geo",mask_col=None,det=0, plot_legend=True)
# plot_last_disk_mass_with_lambda2(v_n_x="Lambda", v_n_y="Mej_tot", v_n_col="q",
# mask_x=None,mask_y="bern_geoend",mask_col=None,det=0, plot_legend=False)
# plot_last_disk_mass_with_lambda2(v_n_x="Lambda", v_n_y="Ye_ave", v_n_col="q",
# mask_x=None,mask_y="geo",mask_col=None,det=0, plot_legend=False)
# plot_last_disk_mass_with_lambda2(v_n_x="Lambda", v_n_y="Ye_ave", v_n_col="q",
# mask_x=None,mask_y="bern_geoend",mask_col=None,det=0, plot_legend=False)
# plot_last_disk_mass_with_lambda2(v_n_x="Lambda", v_n_y="vel_inf_ave", v_n_col="q",
# mask_x=None,mask_y="geo",mask_col=None,det=0, plot_legend=False)
# plot_last_disk_mass_with_lambda2(v_n_x="Lambda", v_n_y="vel_inf_ave", v_n_col="q",
# mask_x=None,mask_y="bern_geoend",mask_col=None,det=0, plot_legend=False)
# plot_last_disk_mass_with_lambda2(v_n_x="Lambda", v_n_y="Mdisk3Dmax", v_n_col="q",
# mask_x=None,mask_y=None, mask_col=None,det=0, plot_legend=False)
exit(0)
''' disk properties '''
# plot_histograms_ejecta("geo")
# plot_disk_mass_evol_SR()
# plot_disk_mass_evol_LR()
# plot_disk_mass_evol_HR()
# plot_disk_hist_evol("LS220_M13641364_M0_SR", "ls220_no_lk_disk_hists.png")
# plot_disk_hist_evol("LS220_M13641364_M0_LK_SR_restart", "ls220_disk_hists.png")
# plot_disk_hist_evol("BLh_M13641364_M0_LK_SR", "blh_disk_hists.png")
# plot_disk_hist_evol("DD2_M13641364_M0_SR", "dd2_nolk_disk_hists.png")
# plot_disk_hist_evol("SFHo_M13641364_M0_SR", "sfho_nolk_disk_hists.png")
# plot_disk_hist_evol("SLy4_M13641364_M0_SR", "sly_nolk_disk_hists.png")
# plot_disk_hist_evol("SFHo_M14521283_M0_SR", "sfho_qnot1_nolk_disk_hists.png")
# plot_disk_hist_evol("SLy4_M14521283_M0_SR", "sly_qnot1_nolk_disk_hists.png")
# plot_disk_hist_evol("DD2_M14971245_M0_SR", "dd2_qnot1_nolk_disk_hists.png")
# plot_disk_hist_evol("LS220_M13641364_M0_SR", "ls220_nolk_disk_hists.png")
# plot_disk_hist_evol_one_v_n("Ye", "LS220_M13641364_M0_LK_SR_restart", "ls220_ye_disk_hist.png")
# plot_disk_hist_evol_one_v_n("temp", "LS220_M13641364_M0_LK_SR_restart", "ls220_temp_disk_hist.png")
# plot_disk_hist_evol_one_v_n("rho", "LS220_M13641364_M0_LK_SR_restart", "ls220_rho_disk_hist.png")
# plot_disk_hist_evol_one_v_n("dens_unb_bern", "LS220_M13641364_M0_LK_SR_restart", "ls220_dens_unb_bern_disk_hist.png")
# plot_disk_hist_evol_one_v_n("velz", "LS220_M13641364_M0_LK_SR_restart", "ls220_velz_disk_hist.png")
# o_err = ErrorEstimation("DD2_M15091235_M0_LK_SR","DD2_M14971245_M0_SR")
# o_err.main(rewrite=False)
# # plot_total_fluxes_lk_on_off("bern_geoend")
# exit(1)
''' disk slices '''
# plot_den_unb__vel_z_sly4_evol()
''' nucleo '''
# many_yeilds()
# tmp_many_yeilds()
''' mkn '''
# plot_many_mkn()
# plot_many_mkn_long("PBR")
# plot_many_mkn_dyn_only_long("LR")
# plot_many_mkn_dyn_only_long("PBR")
''' --- COMPARISON TABLE --- '''
# tbl = COMPARISON_TABLE()
### --- effect of viscosity
# tbl.print_mult_table([["DD2_M15091235_M0_LK_SR", "DD2_M14971245_M0_SR"],
# ["DD2_M13641364_M0_LK_SR_R04", "DD2_M13641364_M0_SR_R04"],
# ["LS220_M14691268_M0_LK_SR", "LS220_M14691268_M0_SR"],
# ["SFHo_M14521283_M0_LK_SR", "SFHo_M14521283_M0_SR"]],
# [r"\hline",
# r"\hline",
# r"\hline",
# r"\hline"],
# comment=r"{Analysis of the viscosity effect on the outflow properties and disk mass. "
# r"Here the $t_{\text{disk}}$ is the maximum postmerger time, for which the 3D is "
# r"available for both simulations For that time, the disk mass is interpolated using "
# r"linear inteprolation. The $\Delta t_{\text{wind}}$ is the maximum common time window "
# r"between the time at which dynamical ejecta reaches 98\% of its total mass and the end of the "
# r"simulation Cases where $t_{\text{disk}}$ or $\Delta t_{\text{wind}}$ is N/A indicate the absence "
# r"of the ovelap between 3D data fro simulations or absence of this data entirely and "
# r"absence of overlap between the time window in which the spiral-wave wind is computed "
# r"which does not allow to do a proper, one-to-one comparison. $\Delta$ is a estimated "
# r"change as $|value_1 - value_2|/value_1$ in percentage }",
# label=r"{tbl:vis_effect}"
# )
# exit(0)
#### --- resolution effect on simulations with viscosity
# tbl.print_mult_table([["DD2_M13641364_M0_LK_SR_R04", "DD2_M13641364_M0_LK_LR_R04", "DD2_M13641364_M0_LK_HR_R04"], # HR too short
# ["DD2_M15091235_M0_LK_SR", "DD2_M15091235_M0_LK_HR"], # no
# ["LS220_M14691268_M0_LK_SR", "LS220_M14691268_M0_LK_HR"], # no
# ["SFHo_M13641364_M0_LK_SR", "SFHo_M13641364_M0_LK_HR"], # no
# ["SFHo_M14521283_M0_LK_SR", "SFHo_M14521283_M0_LK_HR"]], # no
# [r"\hline",
# r"\hline",
# r"\hline",
# r"\hline",
# r"\hline"],
# comment=r"{Resolution effect to on the outflow properties and disk mass on the simulations with "
# r"subgird turbulence. Here the $t_{\text{disk}}$ "
# r"is the maximum postmerger time, for which the 3D is available for both simulations "
# r"For that time, the disk mass is interpolated using linear inteprolation. The "
# r"$\Delta t_{\text{wind}}$ is the maximum common time window between the time at "
# r"which dynamical ejecta reaches 98\% of its total mass and the end of the simulation "
# r"Cases where $t_{\text{disk}}$ or $\Delta t_{\text{wind}}$ is N/A indicate the absence "
# r"of the ovelap between 3D data fro simulations or absence of this data entirely and "
# r"absence of overlap between the time window in which the spiral-wave wind is computed "
# r"which does not allow to do a proper, one-to-one comparison. $\Delta$ is a estimated "
# r"change as $|value_1 - value_2|/value_1$ in percentage }",
# label=r"{tbl:res_effect_vis}"
# )
# exit(0)
#### --- resolution effect on simulations without viscosity
# tbl.print_mult_table([["DD2_M13641364_M0_SR_R04", "DD2_M13641364_M0_LR_R04", "DD2_M13641364_M0_HR_R04"], # DD2_M13641364_M0_LR_R04
# ["DD2_M14971245_M0_SR", "DD2_M14971246_M0_LR", "DD2_M14971245_M0_HR"], # DD2_M14971246_M0_LR
# ["LS220_M13641364_M0_SR", "LS220_M13641364_M0_LR", "LS220_M13641364_M0_HR"], # LS220_M13641364_M0_LR
# ["LS220_M14691268_M0_SR", "LS220_M14691268_M0_LR", "LS220_M14691268_M0_HR"], # LS220_M14691268_M0_LR
# ["SFHo_M13641364_M0_SR", "SFHo_M13641364_M0_HR"], # no
# ["SFHo_M14521283_M0_SR", "SFHo_M14521283_M0_HR"]], # no
# [r"\hline",
# r"\hline",
# r"\hline",
# r"\hline",
# r"\hline",
# r"\hline"],
# comment=r"{Resolution effec to on the outflow properties and disk mass on the simulations without "
# r"subgird turbulence. Here the $t_{\text{disk}}$ "
# r"is the maximum postmerger time, for which the 3D is available for both simulations "
# r"For that time, the disk mass is interpolated using linear inteprolation. The "
# r"$\Delta t_{\text{wind}}$ is the maximum common time window between the time at "
# r"which dynamical ejecta reaches 98\% of its total mass and the end of the simulation "
# r"Cases where $t_{\text{disk}}$ or $\Delta t_{\text{wind}}$ is N/A indicate the absence "
# r"of the ovelap between 3D data fro simulations or absence of this data entirely and "
# r"absence of overlap between the time window in which the spiral-wave wind is computed "
# r"which does not allow to do a proper, one-to-one comparison. $\Delta$ is a estimated "
# r"change as $|value_1 - value_2|/value_1$ in percentage }",
# label=r"{tbl:res_effect}"
# )
#
#
# exit(0)
''' --- OVERALL TABLE --- '''
tbl = TEX_TABLES()
# tbl.print_mult_table([simulations["BLh"]["q=1"], simulations["BLh"]["q=1.3"], simulations["BLh"]["q=1.4"], simulations["BLh"]["q=1.7"], simulations["BLh"]["q=1.8"],
# simulations["DD2"]["q=1"], simulations["DD2"]["q=1.1"], simulations["DD2"]["q=1.2"], simulations["DD2"]["q=1.4"],
# simulations["LS220"]["q=1"], simulations["LS220"]["q=1.1"], simulations["LS220"]["q=1.2"], simulations["LS220"]["q=1.4"], simulations["LS220"]["q=1.7"],
# simulations["SFHo"]["q=1"], simulations["SFHo"]["q=1.1"], simulations["SFHo"]["q=1.4"],
# simulations["SLy4"]["q=1"], simulations["SLy4"]["q=1.1"]],
# [r"\hline", r"\hline", r"\hline", r"\hline",
# r"\hline\hline",
# r"\hline", r"\hline", r"\hline",
# r"\hline\hline",
# r"\hline", r"\hline", r"\hline", r"\hline",
# r"\hline\hline",
# r"\hline", r"\hline",
# r"\hline\hline",
# r"\hline", r"\hline"])
tbl.init_data_v_ns = ["EOS", "q", "note", "res", "vis"]
tbl.init_data_prec = ["", ".1f", "", "", ""]
#
tbl.col_d3_gw_data_v_ns = []
tbl.col_d3_gw_data_prec = []
#
tbl.outflow_data_v_ns = ['Mej_tot', 'Ye_ave', 'vel_inf_ave',
'Mej_tot', 'Ye_ave', 'vel_inf_ave']
tbl.outflow_data_prec = [".4f", ".3f", ".3f",
".4f", ".3f", ".3f"]
tbl.outflow_data_mask = ["theta60_geoend", "theta60_geoend", "theta60_geoend", "theta60_geoend",
"Y_e04_geoend", "Y_e04_geoend", "Y_e04_geoend", "Y_e04_geoend"]
tbl.print_mult_table([["DD2_M14971245_M0_SR", "DD2_M13641364_M0_SR", "DD2_M15091235_M0_LK_SR",
"BLh_M13641364_M0_LK_SR", "LS220_M14691268_M0_LK_SR"]],
[r"\hline"])
# par = COMPUTE_PAR("LS220_M14691268_M0_LK_SR")
# print("tcoll",par.get_par("tcoll_gw"))
# print("Mdisk",par.get_par("Mdisk3D"))
# o_lf = COMPUTE_PAR("SLy4_M13641364_M0_LK_SR")
# print(o_lf.get_outflow_data(0, "geo", "corr_vel_inf_theta.h5"))
# print(o_lf.get_collated_data("dens_unbnd.norm1.asc"))
# print(o_lf.get_gw_data("tmerger.dat"))
# print(o_lf.get_outflow_par(0, "geo", "Mej_tot"))
# print(o_lf.get_outflow_par(0, "geo", "Ye_ave"))
# print(o_lf.get_outflow_par(0, "geo", "vel_inf_ave"))
# print(o_lf.get_outflow_par(0, "geo", "s_ave"))
# print(o_lf.get_outflow_par(0, "geo", "theta_rms"))
# print(o_lf.get_disk_mass())
# print("---")
# print(o_lf.get_par("tmerg"))
# print(o_lf.get_par("Munb_tot"))
# print(o_lf.get_par("Munb_tot"))
# print(o_lf.get_par("Munb_bern_tot"))
# print(o_lf.get_par("tcoll_gw"))
| 42.889148 | 180 | 0.492083 | 27,396 | 220,536 | 3.699664 | 0.033071 | 0.010813 | 0.019969 | 0.027458 | 0.893741 | 0.867003 | 0.842368 | 0.815176 | 0.793668 | 0.774705 | 0 | 0.074463 | 0.328939 | 220,536 | 5,141 | 181 | 42.897491 | 0.610407 | 0.188681 | 0 | 0.732833 | 0 | 0.000278 | 0.209995 | 0.036311 | 0.000834 | 0 | 0 | 0 | 0.004726 | 1 | 0.007506 | false | 0.000278 | 0.012232 | 0 | 0.019739 | 0.030025 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
360279fea785c3b13c3d8c0fc64882d4df891b90 | 6,349 | py | Python | tests/test_slicing.py | ConservationMetrics/sahi | ce336e199735f6510e046394cbaf8398328a79a7 | [
"MIT"
] | null | null | null | tests/test_slicing.py | ConservationMetrics/sahi | ce336e199735f6510e046394cbaf8398328a79a7 | [
"MIT"
] | null | null | null | tests/test_slicing.py | ConservationMetrics/sahi | ce336e199735f6510e046394cbaf8398328a79a7 | [
"MIT"
] | null | null | null | # OBSS SAHI Tool
# Code written by Fatih C Akyon, 2020.
import unittest
import numpy as np
from PIL import Image
from sahi.slicing import slice_coco, slice_image
from sahi.utils.coco import Coco
from sahi.utils.cv import read_image
class TestSlicing(unittest.TestCase):
def test_slice_image(self):
# read coco file
coco_path = "tests/data/coco_utils/terrain1_coco.json"
coco = Coco.from_coco_dict_or_path(coco_path)
output_file_name = None
output_dir = None
image_path = "tests/data/coco_utils/" + coco.images[0].file_name
slice_image_result = slice_image(
image=image_path,
coco_annotation_list=coco.images[0].annotations,
output_file_name=output_file_name,
output_dir=output_dir,
slice_height=512,
slice_width=512,
overlap_height_ratio=0.1,
overlap_width_ratio=0.4,
min_area_ratio=0.1,
out_ext=".png",
verbose=False,
)
self.assertEqual(len(slice_image_result.images), 18)
self.assertEqual(len(slice_image_result.coco_images), 18)
self.assertEqual(slice_image_result.coco_images[0].annotations, [])
self.assertEqual(slice_image_result.coco_images[15].annotations[1].area, 7296)
self.assertEqual(
slice_image_result.coco_images[15].annotations[1].bbox,
[17, 186, 48, 152],
)
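# same slicing and assertions, this time with the image passed as the array returned by read_image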
image_cv = read_image(image_path)
slice_image_result = slice_image(
image=image_cv,
coco_annotation_list=coco.images[0].annotations,
output_file_name=output_file_name,
output_dir=output_dir,
slice_height=512,
slice_width=512,
overlap_height_ratio=0.1,
overlap_width_ratio=0.4,
min_area_ratio=0.1,
out_ext=".png",
verbose=False,
)
self.assertEqual(len(slice_image_result.images), 18)
self.assertEqual(len(slice_image_result.coco_images), 18)
self.assertEqual(slice_image_result.coco_images[0].annotations, [])
self.assertEqual(slice_image_result.coco_images[15].annotations[1].area, 7296)
self.assertEqual(
slice_image_result.coco_images[15].annotations[1].bbox,
[17, 186, 48, 152],
)
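# and once more with a PIL.Image object as the input image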
image_pil = Image.open(image_path)
slice_image_result = slice_image(
image=image_pil,
coco_annotation_list=coco.images[0].annotations,
output_file_name=output_file_name,
output_dir=output_dir,
slice_height=512,
slice_width=512,
overlap_height_ratio=0.1,
overlap_width_ratio=0.4,
min_area_ratio=0.1,
out_ext=".png",
verbose=False,
)
self.assertEqual(len(slice_image_result.images), 18)
self.assertEqual(len(slice_image_result.coco_images), 18)
self.assertEqual(slice_image_result.coco_images[0].annotations, [])
self.assertEqual(slice_image_result.coco_images[15].annotations[1].area, 7296)
self.assertEqual(
slice_image_result.coco_images[15].annotations[1].bbox,
[17, 186, 48, 152],
)
def test_slice_coco(self):
import shutil
coco_annotation_file_path = "tests/data/coco_utils/terrain1_coco.json"
image_dir = "tests/data/coco_utils/"
output_coco_annotation_file_name = "test_out"
output_dir = "tests/data/coco_utils/test_out/"
ignore_negative_samples = True
coco_dict, _ = slice_coco(
coco_annotation_file_path=coco_annotation_file_path,
image_dir=image_dir,
output_coco_annotation_file_name=output_coco_annotation_file_name,
output_dir=output_dir,
ignore_negative_samples=ignore_negative_samples,
slice_height=512,
slice_width=512,
overlap_height_ratio=0.1,
overlap_width_ratio=0.4,
min_area_ratio=0.1,
out_ext=".png",
verbose=False,
)
self.assertEqual(len(coco_dict["images"]), 5)
self.assertEqual(coco_dict["images"][1]["height"], 512)
self.assertEqual(coco_dict["images"][1]["width"], 512)
self.assertEqual(len(coco_dict["annotations"]), 14)
self.assertEqual(coco_dict["annotations"][2]["id"], 3)
self.assertEqual(coco_dict["annotations"][2]["image_id"], 2)
self.assertEqual(coco_dict["annotations"][2]["category_id"], 1)
self.assertEqual(coco_dict["annotations"][2]["area"], 12483)
self.assertEqual(
coco_dict["annotations"][2]["bbox"],
[340, 204, 73, 171],
)
shutil.rmtree(output_dir)
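# second pass: keep negative (annotation-free) sliced images as well, so more images but the same annotations are expected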
coco_annotation_file_path = "tests/data/coco_utils/terrain1_coco.json"
image_dir = "tests/data/coco_utils/"
output_coco_annotation_file_name = "test_out"
output_dir = "tests/data/coco_utils/test_out/"
ignore_negative_samples = False
coco_dict, _ = slice_coco(
coco_annotation_file_path=coco_annotation_file_path,
image_dir=image_dir,
output_coco_annotation_file_name=output_coco_annotation_file_name,
output_dir=output_dir,
ignore_negative_samples=ignore_negative_samples,
slice_height=512,
slice_width=512,
overlap_height_ratio=0.1,
overlap_width_ratio=0.4,
min_area_ratio=0.1,
out_ext=".png",
verbose=False,
)
self.assertEqual(len(coco_dict["images"]), 18)
self.assertEqual(coco_dict["images"][1]["height"], 512)
self.assertEqual(coco_dict["images"][1]["width"], 512)
self.assertEqual(len(coco_dict["annotations"]), 14)
self.assertEqual(coco_dict["annotations"][2]["id"], 3)
self.assertEqual(coco_dict["annotations"][2]["image_id"], 14)
self.assertEqual(coco_dict["annotations"][2]["category_id"], 1)
self.assertEqual(coco_dict["annotations"][2]["area"], 12483)
self.assertEqual(
coco_dict["annotations"][2]["bbox"],
[340, 204, 73, 171],
)
shutil.rmtree(output_dir)
if __name__ == "__main__":
unittest.main()
| 37.568047 | 86 | 0.629075 | 779 | 6,349 | 4.786906 | 0.1181 | 0.132743 | 0.077233 | 0.08635 | 0.881738 | 0.875838 | 0.875838 | 0.865648 | 0.855457 | 0.831322 | 0 | 0.046531 | 0.262089 | 6,349 | 168 | 87 | 37.791667 | 0.749413 | 0.010395 | 0 | 0.739726 | 0 | 0 | 0.086001 | 0.039497 | 0 | 0 | 0 | 0 | 0.226027 | 1 | 0.013699 | false | 0 | 0.047945 | 0 | 0.068493 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
363817b5280e617545b73c2ddc7afde5a1161222 | 222,649 | py | Python | run.py | a523/obscmdbench | 109f83d42f7e266d6205bac3f13c210502ed86f4 | [
"Apache-2.0"
] | 27 | 2018-01-23T09:23:03.000Z | 2021-08-09T19:01:42.000Z | run.py | a523/obscmdbench | 109f83d42f7e266d6205bac3f13c210502ed86f4 | [
"Apache-2.0"
] | 3 | 2019-06-23T07:30:21.000Z | 2020-08-04T08:58:19.000Z | run.py | a523/obscmdbench | 109f83d42f7e266d6205bac3f13c210502ed86f4 | [
"Apache-2.0"
] | 8 | 2018-09-20T10:08:39.000Z | 2021-09-14T07:33:37.000Z | #!/usr/bin/env python
# -*- coding:utf-8 -*-
import sys
import os
import time
import math
import random
import commands
import logging
import logging.config
import datetime
import hashlib
import base64
import multiprocessing
import results
import Util
import obsPyCmd
import myLib.cloghandler
from StringIO import StringIO
import string
from constant import Mode
from constant import Role
import threading
import urllib
VERSION = '-------------------obscmdbench: v3.1.7, Python: %s-------------------\n' % sys.version.split(' ')[0]
valid_start_time = None
class User:
doc = """
This is the user class, holding a username and its AK/SK credentials
"""
def __init__(self, username, ak, sk):
self.username = username
self.ak = ak
self.sk = sk
def read_config(config_file='config.dat'):
"""
:rtype : None
:param config_file: string
"""
try:
f = open(config_file, 'r')
lines = f.readlines()
for line in lines:
line = line.strip()
if line and line[0] != '#':
CONFIG[line[:line.find('=')].strip()] = line[line.find('=') + 1:].strip()
else:
continue
f.close()
CONFIG['OSCs'] = CONFIG['OSCs'].replace(' ', '').replace(',,', ',')
if CONFIG['OSCs'][-1:] == ',':
CONFIG['OSCs'] = CONFIG['OSCs'][:-1]
if CONFIG['IsHTTPs'].lower() == 'true':
CONFIG['IsHTTPs'] = True
else:
CONFIG['IsHTTPs'] = False
CONFIG['ConnectTimeout'] = int(CONFIG['ConnectTimeout'])
if int(CONFIG['ConnectTimeout']) < 5:
CONFIG['ConnectTimeout'] = 5
if CONFIG['LongConnection'].lower() == 'true':
CONFIG['LongConnection'] = True
else:
CONFIG['LongConnection'] = False
if CONFIG['UseDomainName'].lower() == 'true':
CONFIG['UseDomainName'] = True
# if a domain name is used, OSCs is set to the domain name
CONFIG['OSCs'] = CONFIG['DomainName']
else:
CONFIG['UseDomainName'] = False
if CONFIG['VirtualHost'].lower() == 'true':
CONFIG['VirtualHost'] = True
else:
CONFIG['VirtualHost'] = False
if CONFIG['ObjectLexical'].lower() == 'true':
CONFIG['ObjectLexical'] = True
else:
CONFIG['ObjectLexical'] = False
if CONFIG['CalHashMD5'].lower() == 'true':
CONFIG['CalHashMD5'] = True
else:
CONFIG['CalHashMD5'] = False
CONFIG['Testcase'] = int(CONFIG['Testcase'])
CONFIG['Users'] = int(CONFIG['Users'])
CONFIG['UserStartIndex'] = int(CONFIG['UserStartIndex'])
CONFIG['ThreadsPerUser'] = int(CONFIG['ThreadsPerUser'])
CONFIG['Threads'] = CONFIG['Users'] * CONFIG['ThreadsPerUser']
CONFIG['RequestsPerThread'] = int(CONFIG['RequestsPerThread'])
CONFIG['BucketsPerUser'] = int(CONFIG['BucketsPerUser'])
if CONFIG['copyDstObjFixed'] and '/' not in CONFIG['copyDstObjFixed']:
CONFIG['copyDstObjFixed'] = ''
if CONFIG['copySrcObjFixed'] and '/' not in CONFIG['copySrcObjFixed']:
CONFIG['copySrcObjFixed'] = ''
CONFIG['ObjectsPerBucketPerThread'] = int(CONFIG['ObjectsPerBucketPerThread'])
CONFIG['DeleteObjectsPerRequest'] = int(CONFIG['DeleteObjectsPerRequest'])
CONFIG['PartsForEachUploadID'] = int(CONFIG['PartsForEachUploadID'])
if CONFIG['ConcurrentUpParts'].lower() == 'true':
CONFIG['ConcurrentUpParts'] = True
if CONFIG['PartsForEachUploadID'] % CONFIG['ThreadsPerUser']:
if CONFIG['PartsForEachUploadID'] < CONFIG['ThreadsPerUser']:
CONFIG['PartsForEachUploadID'] = CONFIG['ThreadsPerUser']
else:
CONFIG['PartsForEachUploadID'] = int(
round(1.0 * CONFIG['PartsForEachUploadID'] / CONFIG['ThreadsPerUser']) * CONFIG[
'ThreadsPerUser'])
logging.warning('change PartsForEachUploadID to %d' % CONFIG['PartsForEachUploadID'])
else:
CONFIG['ConcurrentUpParts'] = False
CONFIG['PutTimesForOneObj'] = int(CONFIG['PutTimesForOneObj'])
if CONFIG['MixLoopCount'] is not None and CONFIG['MixLoopCount']:
CONFIG['MixLoopCount'] = int(CONFIG['MixLoopCount'])
if CONFIG['RunSeconds']:
CONFIG['RunSeconds'] = int(CONFIG['RunSeconds'])
if CONFIG['TpsPerThread']:
CONFIG['TpsPerThread'] = float(CONFIG['TpsPerThread'])
if CONFIG['RecordDetails'].lower() == 'true':
CONFIG['RecordDetails'] = True
else:
CONFIG['RecordDetails'] = False
CONFIG['StatisticsInterval'] = int(CONFIG['StatisticsInterval'])
if CONFIG['BadRequestCounted'].lower() == 'true':
CONFIG['BadRequestCounted'] = True
else:
CONFIG['BadRequestCounted'] = False
if CONFIG['AvoidSinBkOp'].lower() == 'true':
CONFIG['AvoidSinBkOp'] = True
else:
CONFIG['AvoidSinBkOp'] = False
if CONFIG['PrintProgress'].lower() == 'true':
CONFIG['PrintProgress'] = True
else:
CONFIG['PrintProgress'] = False
if CONFIG['LatencyPercentileMap'].lower() == 'true':
CONFIG['LatencyPercentileMap'] = True
else:
CONFIG['LatencyPercentileMap'] = False
if CONFIG['LatencyRequestsNumber'].lower() == 'true':
CONFIG['LatencyRequestsNumber'] = True
else:
CONFIG['LatencyRequestsNumber'] = False
if CONFIG['ObjNamePatternHash'].lower() == 'true':
CONFIG['ObjNamePatternHash'] = True
else:
CONFIG['ObjNamePatternHash'] = False
if CONFIG['CollectBasicData'].lower() == 'true':
CONFIG['CollectBasicData'] = True
else:
CONFIG['CollectBasicData'] = False
if CONFIG['IsMaster'].lower() == 'true':
CONFIG['IsMaster'] = True
else:
CONFIG['IsMaster'] = False
if CONFIG['IsRandomGet'].lower() == 'true':
CONFIG['IsRandomGet'] = True
else:
CONFIG['IsRandomGet'] = False
if CONFIG['IsRandomDelete'].lower() == 'true':
CONFIG['IsRandomDelete'] = True
else:
CONFIG['IsRandomDelete'] = False
if CONFIG['Anonymous'].lower() == 'true':
CONFIG['Anonymous'] = True
else:
CONFIG['Anonymous'] = False
if CONFIG['PutTimesForOnePart']:
CONFIG['PutTimesForOnePart'] = int(CONFIG['PutTimesForOnePart'])
CONFIG['StopWindowSeconds'] = int(CONFIG['StopWindowSeconds']) if CONFIG['StopWindowSeconds'] else 0
CONFIG['RunWindowSeconds'] = int(CONFIG['RunWindowSeconds']) if CONFIG['RunWindowSeconds'] else 0
if CONFIG['StopWindowSeconds'] > 0 and CONFIG['RunWindowSeconds'] > 0:
CONFIG['WindowMode'] = True
CONFIG['WindowTime'] = CONFIG['StopWindowSeconds'] + CONFIG['RunWindowSeconds']
else:
CONFIG['WindowMode'] = False
if CONFIG['GetPositionFromMeta'].lower() == 'true':
CONFIG['GetPositionFromMeta'] = True
else:
CONFIG['GetPositionFromMeta'] = False
if CONFIG['IsDataFromFile'].lower() == 'true':
CONFIG['IsDataFromFile'] = True
if CONFIG['LocalFilePath'] is None or not CONFIG['LocalFilePath']:
raise Exception('local file path is not provided.')
else:
CONFIG['IsDataFromFile'] = False
CONFIG['LocalFilePath'] = None
if CONFIG['IsCdn'].lower() == 'true':
CONFIG['IsCdn'] = True
if not CONFIG['CdnAK'] or not CONFIG['CdnSK'] or not CONFIG['CdnSTSToken']:
raise Exception('cdn ak or sk or stsToken is not provided.')
else:
CONFIG['IsCdn'] = False
if CONFIG['IsHTTP2'].lower() == 'true':
CONFIG['IsHTTP2'] = True
else:
CONFIG['IsHTTP2'] = False
if CONFIG['TestNetwork'].lower() == 'true':
CONFIG['TestNetwork'] = True
else:
CONFIG['TestNetwork'] = False
if CONFIG['IsShareConnection'].lower() == 'true':
CONFIG['IsShareConnection'] = True
else:
CONFIG['IsShareConnection'] = False
if CONFIG['IsFileInterface'].lower() == 'true':
CONFIG['IsFileInterface'] = True
else:
CONFIG['IsFileInterface'] = False
if not ('processID' in CONFIG['ObjectNamePartten'] and 'ObjectNamePrefix' in CONFIG[
'ObjectNamePartten'] and 'Index' in CONFIG['ObjectNamePartten']):
raise Exception('processID, Index and ObjectNamePrefix should all be in config ObjectNamePartten')
if CONFIG['IsDataFromFile'] and CONFIG['CalHashMD5']:
raise Exception('IsDataFromFile and CalHashMD5 can not be true at the same time.')
if CONFIG['IsHTTP2']:
print '[Attention] currently, http2 is not stable in this test tool, make sure you have already set CalHashMD5 = false'
except Exception, e:
print '[ERROR] Read config file %s error: %s' % (config_file, e)
sys.exit()
def initialize_object_index():
global CONFIG, LIST_INDEX
LIST_INDEX = range(int(CONFIG['ObjectsPerBucketPerThread']))
def is_needed_to_build_list_index():
global CONFIG
if str(CONFIG['Testcase']) == '202' and CONFIG['IsRandomGet']:
return True
elif str(CONFIG['Testcase']) == '204' and CONFIG['IsRandomDelete']:
return True
elif str(CONFIG['Testcase']) == '900':
if '202' in CONFIG['MixOperations']:
return True
if '204' in CONFIG['MixOperations']:
return True
return False
def read_users():
"""
load users.dat
"""
global USERS, CONFIG
index = -1
try:
with open('./users.dat', 'r') as fd:
for line in fd:
if not line:
continue
index += 1
if index >= CONFIG['UserStartIndex'] and len(USERS) <= CONFIG['Users']:
user_info = line.strip()
user = User(user_info.split(',')[0], user_info.split(',')[1], user_info.split(',')[2])
USERS.append(user)
fd.close()
logging.debug("load user file end")
except Exception, data:
print "\033[1;31;40m[ERROR]\033[0m Load users Error, check file users.dat. Use iamPyTool.py to create users [%r]" % (
data)
logging.error(
'Load users Error, check file users.dat. Use iamPyTool.py to create users')
sys.exit()
def create_file_in_memory():
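# builds a 1 MiB in-memory buffer of random bytes and leaves the read cursor at a random offset within the first 4 KiB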
f = StringIO()
f.write(bytearray(random.getrandbits(8) for _ in xrange(1024 * 1024)))
pos = random.randint(0, 4096)
f.seek(pos, 0)
return f
def generate_append_object_position():
global CONFIG
lines = []
for i in range(int(CONFIG['ThreadsPerUser'])):
bucket_name = CONFIG['BucketNameFixed']
object_name = CONFIG['ObjectNamePrefix']
obj_file = 'position/%s-%s-%s.dat' % (bucket_name, object_name, str(i))
if os.path.exists(obj_file) and os.path.getsize(obj_file) > 0:
logging.debug("read object from file: [%s] done." % obj_file)
tmp = [tuple(map(str, line.rstrip('\n')[1:-1].split(','))) for line in open(obj_file, 'r')]
lines.extend(tmp)
else:
logging.debug("file: [%s] does not exist. please check." % obj_file)
return
logging.debug("[%d] objects detected." % len(lines))
return dict(lines)
def generate_image_process_parameters():
global CONFIG
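# assembles CONFIG['x-image-process'] as 'image' followed by '/format,...', '/resize,...', '/crop,...' and/or '/info' segments, depending on the configured ImageManipulationType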
if CONFIG['ImageManipulationType'] is not None and CONFIG['ImageManipulationType']:
# CONFIG['x-image-process'] = 'image'
params = ''
if 'format' in CONFIG['ImageManipulationType'] and CONFIG['ImageFormat'] is not None and CONFIG['ImageFormat']:
params = params + '/format,' + CONFIG['ImageFormat']
if 'resize' in CONFIG['ImageManipulationType'] and CONFIG['ResizeParams'] is not None and CONFIG['ResizeParams']:
params = params + '/resize,' + CONFIG['ResizeParams']
if 'crop' in CONFIG['ImageManipulationType'] and CONFIG['CropParams'] is not None and CONFIG['CropParams']:
params = params + '/crop,' + CONFIG['CropParams']
if 'info' in CONFIG['ImageManipulationType']:
params = params + '/info'
if params:
CONFIG['x-image-process'] = 'image%s' % params
logging.debug('image process param is: [%s]' % CONFIG['x-image-process'])
else:
raise Exception('ImageManipulationType or other parameters config is not correct.')
else:
raise Exception('ImageManipulationType or other parameters config is not correct.')
def list_user_buckets(process_id, user, conn, result_queue):
request_type = 'ListUserBuckets'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
i = 0
while i < CONFIG['RequestsPerThread']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
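# illustrative numbers (not from the original tool): with TpsPerThread = 2.0 and start_time = 1000.0,
# request i = 10 must not start before dst_time = 10 / 2.0 + 1000.0 = 1005.0, so the thread sleeps
# for the remaining wait_time if that moment is still in the future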
dst_time = i * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
i += 1
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def create_bucket(process_id, user, conn, result_queue):
request_type = 'CreateBucket'
send_content = ''
if CONFIG['BucketLocation']:
send_content = '<CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">\
<LocationConstraint>%s</LocationConstraint></CreateBucketConfiguration >' % random.choice(
CONFIG['BucketLocation'].split(','))
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], send_content=send_content,
virtual_host=CONFIG['VirtualHost'], domain_name=CONFIG['DomainName'],
region=CONFIG['Region'], is_http2=CONFIG['IsHTTP2'], host=conn.host)
if CONFIG['CreateWithACL']:
rest.headers['x-amz-acl'] = CONFIG['CreateWithACL']
if CONFIG['StorageClass']:
if CONFIG['StorageClass'][-1:] == ',':
CONFIG['StorageClass'] = CONFIG['StorageClass'][:-1]
if CONFIG['StorageClass'].__contains__(','):
storage_class_provided = CONFIG['StorageClass'].split(',')
rest.headers['x-default-storage-class'] = random.choice(storage_class_provided)
else:
rest.headers['x-default-storage-class'] = CONFIG['StorageClass']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['IsFileInterface']:
rest.headers['x-obs-fs-file-interface'] = "Enabled"
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
i = 0
while i < CONFIG['BucketsPerUser']:
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
i += 1
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def list_objects_in_bucket(process_id, user, conn, result_queue):
request_type = 'ListObjectsInBucket'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['max-keys'] = CONFIG['Max-keys']
if CONFIG.__contains__('prefix') and CONFIG['prefix']:
rest.queryArgs['prefix'] = CONFIG['prefix']
i = 0
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
total_requests = 0
while i < CONFIG['BucketsPerUser']:
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += 1
marker = ''
while marker is not None:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = total_requests * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
total_requests += 1
rest.queryArgs['marker'] = urllib.unquote_plus(marker)
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
marker = resp.return_data
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def head_bucket(process_id, user, conn, result_queue):
request_type = 'HeadBucket'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
i = process_id % CONFIG['ThreadsPerUser']
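# each thread starts at its own offset (process_id % ThreadsPerUser) and strides by ThreadsPerUser, so the user's buckets are partitioned across its threads without overlap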
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def delete_bucket(process_id, user, conn, result_queue):
request_type = 'DeleteBucket'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def bucket_delete(process_id, user, conn, result_queue):
request_type = 'BucketDelete'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['deletebucket'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
rest.sendContent = '<?xml version="1.0" encoding="UTF-8"?><DeleteBucket><Bucket>' + rest.bucket + '</Bucket></DeleteBucket>'
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def options_bucket(process_id, user, conn, result_queue):
request_type = 'OPTIONSBucket'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
if CONFIG['AllowedMethod']:
if ',' in CONFIG['AllowedMethod']:
rest.headers['Access-Control-Request-Method'] = []
for i in CONFIG['AllowedMethod'].split(','):
rest.headers['Access-Control-Request-Method'].append(i.upper())
else:
rest.headers['Access-Control-Request-Method'] = CONFIG['AllowedMethod'].upper()
else:
rest.headers['Access-Control-Request-Method'] = 'GET'
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
rest.headers['Origin'] = CONFIG['DomainName']
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def put_bucket_versioning(process_id, user, conn, result_queue):
request_type = 'PutBucketVersioning'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk, auth_algorithm=CONFIG['AuthAlgorithm'],
virtual_host=CONFIG['VirtualHost'], domain_name=CONFIG['DomainName'],
region=CONFIG['Region'], is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['versioning'] = None
rest.sendContent = '<VersioningConfiguration><Status>%s</Status></VersioningConfiguration>' % CONFIG[
'VersionStatus']
rest.headers['Content-MD5'] = base64.b64encode(hashlib.md5(rest.sendContent).digest())
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
logging.info('bucket:' + rest.bucket)
i += CONFIG['ThreadsPerUser']
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def get_bucket_versioning(process_id, user, conn, result_queue):
request_type = 'GetBucketVersioning'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['versioning'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def put_bucket_website(process_id, user, conn, result_queue):
request_type = 'PutBucketWebsite'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['website'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
rest.sendContent = '<WebsiteConfiguration><RedirectAllRequestsTo><HostName>' + CONFIG[
'RedirectHostName'] + '</HostName></RedirectAllRequestsTo></WebsiteConfiguration>'
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def get_bucket_website(process_id, user, conn, result_queue):
request_type = 'GetBucketWebsite'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['website'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def delete_bucket_website(process_id, user, conn, result_queue):
request_type = 'DeleteBucketWebsite'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['website'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
rest.sendContent = '<WebsiteConfiguration><RedirectAllRequestsTo><HostName>' + CONFIG[
'RedirectHostName'] + '</HostName></RedirectAllRequestsTo></WebsiteConfiguration>'
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def put_bucket_cors(process_id, user, conn, result_queue):
request_type = 'PutBucketCORS'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['cors'] = None
allow_method = ''
if CONFIG['AllowedMethod']:
if ',' in CONFIG['AllowedMethod']:
for i in CONFIG['AllowedMethod'].split(','):
allow_method += '<AllowedMethod>%s</AllowedMethod>' % i.upper()
else:
allow_method += '<AllowedMethod>%s</AllowedMethod>' % CONFIG['AllowedMethod'].upper()
else:
allow_method = '<AllowedMethod>GET</AllowedMethod>'
rest.sendContent = '<CORSConfiguration><CORSRule>%s<AllowedOrigin>%s</AllowedOrigin></CORSRule></CORSConfiguration>' % \
(allow_method, CONFIG['DomainName'])
rest.headers['Content-MD5'] = base64.b64encode(hashlib.md5(rest.sendContent).digest())
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def get_bucket_cors(process_id, user, conn, result_queue):
request_type = 'GetBucketCORS'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['cors'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def delete_bucket_cors(process_id, user, conn, result_queue):
request_type = 'DeleteBucketCORS'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['cors'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
rest.sendContent = '<WebsiteConfiguration><RedirectAllRequestsTo><HostName>' + CONFIG[
'RedirectHostName'] + '</HostName></RedirectAllRequestsTo></WebsiteConfiguration>'
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def random_english(length):
chars = string.ascii_letters + string.digits + '_-'
return ''.join([random.choice(chars) for _ in range(length)])
def random_chinese(length):
return ''.join([unichr(random.randint(0x4E00, 0x9FBF)).encode('utf-8') for _ in range(length)])
def put_bucket_tag(process_id, user, conn, result_queue):
request_type = 'PutBucketTag'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['tagging'] = None
key_values = 1
if CONFIG['KeyValueNumber'] and int(CONFIG['KeyValueNumber']) <= 10:
key_values = int(CONFIG['KeyValueNumber'])
tempStr = ''
for i in xrange(key_values):
tempStr += '<Tag><Key>%s</Key><Value>%s</Value></Tag>' % (
random.choice([random_chinese(random.randint(1, 36)), random_english(random.randint(1, 36))]),
random.choice([random_chinese(random.randint(0, 43)), random_english(random.randint(0, 43))]))
rest.sendContent = '<Tagging><TagSet>%s</TagSet></Tagging>' % tempStr
rest.headers['Content-MD5'] = base64.b64encode(hashlib.md5(rest.sendContent).digest())
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def get_bucket_tag(process_id, user, conn, result_queue):
request_type = 'GetBucketTag'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['tagging'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def delete_bucket_tag(process_id, user, conn, result_queue):
request_type = 'DeleteBucketTag'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'])
rest.queryArgs['tagging'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def get_bucket_multi_parts_upload(process_id, user, conn, result_queue):
request_type = 'GetBucketMultiPartsUpload'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['uploads'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def get_bucket_location(process_id, user, conn, result_queue):
request_type = 'GetBucketLocation'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['location'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def put_bucket_log(process_id, user, conn, result_queue):
request_type = 'PutBucketLog'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['logging'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
target_bucket = CONFIG['BucketNameFixed']
else:
target_bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
rest.sendContent = '<?xml version="1.0" encoding="UTF-8"?><BucketLoggingStatus xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><LoggingEnabled><TargetBucket>%s</TargetBucket><TargetPrefix>access_log</TargetPrefix></LoggingEnabled></BucketLoggingStatus>' % target_bucket
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def get_bucket_log(process_id, user, conn, result_queue):
request_type = 'GetBucketLog'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['logging'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def get_bucket_storageinfo(process_id, user, conn, result_queue):
request_type = 'GetBucketStorageInfo'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['storageinfo'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
rest.AuthAlgorithm = 'AWSV4'
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def put_bucket_storage_quota(process_id, user, conn, result_queue):
request_type = 'PutBucketStorageQuota'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['quota'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
# if CONFIG['StorageQuota']:
# storagequota = int(CONFIG['StorageQuota'])
# else:
# storagequota = 8
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
rest.AuthAlgorithm = 'AWSV4'
rest.sendContent = '<Quota xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><StorageQuota>102400</StorageQuota></Quota>'
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
# add new function GetBucketStorageQuota
def get_bucket_storage_quota(process_id, user, conn, result_queue):
request_type = 'GetBucketStorageQuota'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['quota'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']: # throttle TPS
# compute the target time under the TPS limit: completed requests / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
rest.AuthAlgorithm = 'AWSV4'
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
# add new function PutBucketAcl
def put_bucket_acl(process_id, user, conn, result_queue):
request_type = 'PutBucketAcl'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
# rest.headers["x-amz-acl"] = 'private'
rest.queryArgs['acl'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
user_id = 'domainiddomainiddomainiddo' + user.ak[len(user.ak) - 6:]
displayname = 'domainnamedom' + user.ak[len(user.ak) - 6:]
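# Synthesize a placeholder owner ID and display name from the last six characters of the user's AK for the ACL body below.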
rest.sendContent = '''<?xml version="1.0" encoding="UTF-8"?>
<AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Owner>
<ID>%s</ID>
<DisplayName>%s</DisplayName>
</Owner>
<AccessControlList>
<Grant>
<Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
<ID>%s</ID>
<DisplayName>%s</DisplayName>
</Grantee>
<Permission>FULL_CONTROL</Permission>
</Grant>
<Grant>
<Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
<URI>http://acs.amazonaws.com/groups/global/AllUsers</URI>
</Grantee>
<Permission>FULL_CONTROL</Permission>
</Grant>
<Grant>
<Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="Group">
<URI>http://acs.amazonaws.com/groups/s3/LogDelivery</URI>
</Grantee>
<Permission>FULL_CONTROL</Permission>
</Grant>
</AccessControlList>
</AccessControlPolicy>''' % (user_id, displayname, user_id, displayname)
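# The ACL above grants FULL_CONTROL to the synthesized owner, the AllUsers group, and the LogDelivery group.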
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
# add new function GetBucketAcl
def get_bucket_acl(process_id, user, conn, result_queue):
request_type = 'GetBucketAcl'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['acl'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
# add new function PutBucketPolicy
def put_bucket_policy(process_id, user, conn, result_queue):
request_type = 'PutBucketPolicy'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['policy'] = None
i = process_id % CONFIG['ThreadsPerUser']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
# rest.AuthAlgorithm = 'AWSV4'
rest.sendContent = '{"Version":"2008-10-17","Id":"aaaa-bbbb-cccc-dddd","Statement":[{"Sid":"1","Effect":"Allow","Principal":{"CanonicalUser":"*"},"Action":"s3:*","Resource":["arn:aws:s3:::%s"]}]}' % rest.bucket
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
# add new function GetBucketPolicy
def get_bucket_policy(process_id, user, conn, result_queue):
request_type = 'GetBucketPolicy'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['policy'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
# rest.AuthAlgorithm = 'AWSV4'
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
# add new function DeleteBucketPolicy
def delete_bucket_policy(process_id, user, conn, result_queue):
request_type = 'DeleteBucketPolicy'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['policy'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
# rest.AuthAlgorithm = 'AWSV4'
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
# add new function PutBucketLifecycle
def put_bucket_lifecycle(process_id, user, conn, result_queue):
request_type = 'PutBucketLifecycle'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['lifecycle'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
rest.sendContent = '<LifecycleConfiguration><Rule><Prefix>%s</Prefix><Status>Enabled</Status><Expiration><Days>%d</Days></Expiration></Rule></LifecycleConfiguration>' % \
(CONFIG['BucketNameFixed'], 2)
rest.headers['Content-MD5'] = base64.b64encode(hashlib.md5(rest.sendContent).digest())
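# The lifecycle rule uses BucketNameFixed as the key prefix and expires matching objects after 2 days; Content-MD5 is precomputed once because the body never changes.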
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
# rest.AuthAlgorithm = 'AWSV4'
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
# add new function GetBucketLifecycle
def get_bucket_lifecycle(process_id, user, conn, result_queue):
request_type = 'GetBucketLifecycle'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['lifecycle'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
# rest.AuthAlgorithm = 'AWSV4'
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
# add new function DeleteBucketLifecycle
def delete_bucket_lifecycle(process_id, user, conn, result_queue):
request_type = 'DeleteBucketLifecycle'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['lifecycle'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
# rest.AuthAlgorithm = 'AWSV4'
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
# add new function PutBucketNotification
# Requires smn_connection = true in /opt/dfv/obs_service_layer/objectwebservice/osc/conf/obs_sod.properties
def put_bucket_notification(process_id, user, conn, result_queue):
request_type = 'PutBucketNotification'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['notification'] = None
rest.sendContent = '<NotificationConfiguration></NotificationConfiguration>'
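# An empty NotificationConfiguration body effectively clears any notification settings on the bucket.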
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
# rest.AuthAlgorithm = 'AWSV4'
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
# add new function GetBucketNotification
def get_bucket_notification(process_id, user, conn, result_queue):
request_type = 'GetBucketNotification'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['notification'] = None
i = process_id % CONFIG['ThreadsPerUser']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
while i < CONFIG['BucketsPerUser']:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (i - process_id % CONFIG['ThreadsPerUser']) / CONFIG['ThreadsPerUser'] * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += CONFIG['ThreadsPerUser']
# rest.AuthAlgorithm = 'AWSV4'
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def put_object(process_id, user, conn, result_queue):
global SHARE_MEM
request_type = 'PutObject'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_data_from_file=CONFIG['IsDataFromFile'],
local_file_path=CONFIG['LocalFilePath'], is_http2=CONFIG['IsHTTP2'],
host=conn.host)
rest.headers['content-type'] = 'application/octet-stream'
if CONFIG['ObjectStorageClass']:
if CONFIG['ObjectStorageClass'][-1:] == ',':
CONFIG['ObjectStorageClass'] = CONFIG['ObjectStorageClass'][:-1]
if CONFIG['ObjectStorageClass'].__contains__(','):
object_storage_class_provided = CONFIG['ObjectStorageClass'].split(',')
rest.headers['x-amz-storage-class'] = random.choice(object_storage_class_provided)
else:
rest.headers['x-amz-storage-class'] = CONFIG['ObjectStorageClass']
if CONFIG['PutWithACL']:
rest.headers['x-amz-acl'] = CONFIG['PutWithACL']
fixed_size = False
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['ObjectNameFixed']:
rest.key = CONFIG['ObjectNameFixed']
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-algorithm'] = 'AES256'
elif CONFIG['SrvSideEncryptType'].lower() == 'sse-kms' and CONFIG['SrvSideEncryptAlgorithm'].lower() == 'aws:kms':
rest.headers['x-amz-server-side-encryption'] = 'aws:kms'
if CONFIG['SrvSideEncryptAWSKMSKeyId']:
rest.headers['x-amz-server-side-encryption-aws-kms-key-id'] = CONFIG['SrvSideEncryptAWSKMSKeyId']
if CONFIG['SrvSideEncryptContext']:
rest.headers['x-amz-server-side-encryption-context'] = CONFIG['SrvSideEncryptContext']
elif CONFIG['SrvSideEncryptType'].lower() == 'sse-kms' and CONFIG['SrvSideEncryptAlgorithm'].lower() == 'aes256':
rest.headers['x-amz-server-side-encryption'] = 'AES256'
# If the CalHashMD5 switch is on, write a custom metadata header at upload time to mark the object as having been PUT by this tool.
if CONFIG['CalHashMD5']:
rest.headers['x-amz-meta-md5written'] = 'yes'
if CONFIG['Expires']:
rest.headers['x-obs-expires'] = CONFIG['Expires']
# With object versioning enabled, the version ID must be recorded after each upload.
obj_v = ''
obj_v_file = 'data/objv-%d.dat' % process_id
open(obj_v_file, 'w').write(obj_v)
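# Opening the per-process version file in 'w' mode truncates any record left over from a previous run.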
# Stagger each worker's starting bucket to avoid a single-bucket performance bottleneck.
range_arr = range(0, CONFIG['BucketsPerUser'])
if CONFIG['AvoidSinBkOp']:
range_arr = range(process_id % CONFIG['BucketsPerUser'], CONFIG['BucketsPerUser']) + range(0, process_id % CONFIG['BucketsPerUser'])
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
buckets_cover = 0  # number of buckets traversed so far
for i in range_arr:
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
j = 0
while j < CONFIG['ObjectsPerBucketPerThread']:
if not CONFIG['ObjectNameFixed']:
if CONFIG['ObjectLexical']:
if not CONFIG['ObjNamePatternHash']:
rest.key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index',
str(
j)).replace(
'ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
else:
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index',
str(
j)).replace(
'ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
rest.key = hashlib.md5(object_name).hexdigest() + '-' + object_name
else:
rest.key = Util.random_string_create(random.randint(300, 900))
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-key'] = base64.b64encode(rest.key[-32:].zfill(32))
rest.headers['x-amz-server-side-encryption-customer-key-MD5'] = base64.b64encode(
hashlib.md5(rest.key[-32:].zfill(32)).digest())
logging.debug('side-encryption-customer-key: [%r]' % rest.key[-32:].zfill(32))
put_times_for_one_obj = CONFIG['PutTimesForOneObj']
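# The same key may be PUT PutTimesForOneObj times in a row, e.g. to exercise overwrites of an existing object.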
while put_times_for_one_obj > 0:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (buckets_cover * CONFIG['ObjectsPerBucketPerThread'] * CONFIG['PutTimesForOneObj'] + j *
CONFIG[
'PutTimesForOneObj'] + (CONFIG['PutTimesForOneObj'] - put_times_for_one_obj)) * 1.0 / \
CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if CONFIG['WindowMode']:  # run-window time limit
window_time_now = (time.time() - valid_start_time.value) % CONFIG['WindowTime']
if window_time_now > CONFIG['RunWindowSeconds']:
time.sleep(CONFIG['WindowTime'] - window_time_now)
if CONFIG['IsDataFromFile']:
rest.contentLength = int(os.path.getsize(CONFIG['LocalFilePath']))
fixed_size = True
else:
if not fixed_size:
# change size every request for the same obj.
rest.contentLength, fixed_size = Util.generate_a_size(CONFIG['ObjectSize'])
put_times_for_one_obj -= 1
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request(cal_md5=CONFIG['CalHashMD5'],
memory_file=SHARE_MEM)
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, 'MD5:' + str(resp.content_md5),
resp.request_id, resp.status, resp.id2))
if resp.return_data:
obj_v += '%s\t%s\t%s\n' % (rest.bucket, rest.key, resp.return_data)
# Every 1 KB of accumulated records, flush object version IDs to the local file objv-<process_id>.dat.
if len(obj_v) >= 1024:
logging.info('write obj_v to file %s' % obj_v_file)
open(obj_v_file, 'a').write(obj_v)
obj_v = ''
j += 1
buckets_cover += 1
if obj_v:
open(obj_v_file, 'a').write(obj_v)
def append_object(process_id, user, conn, result_queue):
global SHARE_MEM
request_type = 'AppendObject'
if CONFIG['GetPositionFromMeta']:
logging.debug('Getting position from object meta')
rest_head = obsPyCmd.OBSRequestDescriptor("HeadObject", ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest_append = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
# Build the headers for the HEAD request.
global OBJECTS
if OBJECTS:
handle_from_objects(request_type, rest_head, process_id, user, conn, result_queue)
return
elif not CONFIG['ObjectLexical']:
logging.warn('Object name is not lexical, exit..')
return
if CONFIG['BucketNameFixed']:
rest_head.bucket = CONFIG['BucketNameFixed']
if CONFIG['ObjectNameFixed']:
rest_head.key = CONFIG['ObjectNameFixed']
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest_head.headers['x-amz-server-side-encryption-customer-algorithm'] = 'AES256'
start_time = None
start_time_append = None
if CONFIG['TpsPerThread']:
start_time_head = time.time()  # start time
start_time_append = time.time()
# Build the headers for the AppendObject request.
rest_append.headers['content-type'] = 'application/octet-stream'
if CONFIG['ObjectStorageClass']:
if CONFIG['ObjectStorageClass'][-1:] == ',':
CONFIG['ObjectStorageClass'] = CONFIG['ObjectStorageClass'][:-1]
if CONFIG['ObjectStorageClass'].__contains__(','):
object_storage_class_provided = CONFIG['ObjectStorageClass'].split(',')
rest_append.headers['x-amz-storage-class'] = random.choice(object_storage_class_provided)
else:
rest_append.headers['x-amz-storage-class'] = CONFIG['ObjectStorageClass']
if CONFIG['PutWithACL']:
rest_append.headers['x-amz-acl'] = CONFIG['PutWithACL']
fixed_size = False
if CONFIG['BucketNameFixed']:
rest_append.bucket = CONFIG['BucketNameFixed']
if CONFIG['ObjectNameFixed']:
rest_append.key = CONFIG['ObjectNameFixed']
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest_append.headers['x-amz-server-side-encryption-customer-algorithm'] = 'AES256'
elif CONFIG['SrvSideEncryptType'].lower() == 'sse-kms' and CONFIG['SrvSideEncryptAlgorithm'].lower() == 'aws:kms':
rest_append.headers['x-amz-server-side-encryption'] = 'aws:kms'
if CONFIG['SrvSideEncryptAWSKMSKeyId']:
rest_append.headers['x-amz-server-side-encryption-aws-kms-key-id'] = CONFIG['SrvSideEncryptAWSKMSKeyId']
if CONFIG['SrvSideEncryptContext']:
rest_append.headers['x-amz-server-side-encryption-context'] = CONFIG['SrvSideEncryptContext']
elif CONFIG['SrvSideEncryptType'].lower() == 'sse-kms' and CONFIG['SrvSideEncryptAlgorithm'].lower() == 'aes256':
rest_append.headers['x-amz-server-side-encryption'] = 'AES256'
# If the CalHashMD5 switch is on, write a custom metadata header at upload time to mark the object as having been PUT by this tool.
if CONFIG['CalHashMD5']:
rest_append.headers['x-amz-meta-md5written'] = 'yes'
if CONFIG['Expires']:
rest_append.headers['x-obs-expires'] = CONFIG['Expires']
# Stagger each worker's starting bucket to avoid a single-bucket performance bottleneck.
rest_append.queryArgs["append"] = None
range_arr = range(0, CONFIG['BucketsPerUser'])
if CONFIG['AvoidSinBkOp']:
range_arr = range(process_id % CONFIG['BucketsPerUser'], CONFIG['BucketsPerUser']) + range(0, process_id % CONFIG['BucketsPerUser'])
buckets_cover = 0  # number of buckets traversed so far
for i in range_arr:
if not CONFIG['BucketNameFixed']:
rest_head.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
rest_append.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
j = 0
while j < CONFIG['ObjectsPerBucketPerThread']:
if not CONFIG['ObjectNameFixed']:
if CONFIG['ObjectLexical']:
if not CONFIG['ObjNamePatternHash']:
key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
else:
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
key = hashlib.md5(object_name).hexdigest() + '-' + object_name
else:
key = Util.random_string_create(random.randint(300, 900))
rest_head.key = key
rest_append.key = key
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest_head.headers['x-amz-server-side-encryption-customer-key'] = base64.b64encode(rest_head.key[-32:].zfill(32))
rest_append.headers['x-amz-server-side-encryption-customer-key'] = base64.b64encode(rest_append.key[-32:].zfill(32))
rest_head.headers['x-amz-server-side-encryption-customer-key-MD5'] = base64.b64encode(hashlib.md5(rest_head.key[-32:].zfill(32)).digest())
rest_append.headers['x-amz-server-side-encryption-customer-key-MD5'] = base64.b64encode(hashlib.md5(rest_append.key[-32:].zfill(32)).digest())
logging.debug('side-encryption-customer-key: [%r]' % rest_append.key[-32:].zfill(32))
put_times_for_one_obj = CONFIG['PutTimesForOneObj']
while put_times_for_one_obj > 0:
logging.debug("send Head object meta data request")
resp_head = obsPyCmd.OBSRequestHandler(rest_head, conn).make_request()
# For now the HEAD requests are not pushed to the result queue.
# result_queue.put((process_id, user.username, rest_head.recordUrl, request_type, resp_head.start_time, resp_head.end_time, resp_head.send_bytes, resp_head.recv_bytes, '', resp_head.request_id, resp_head.status, resp_head.id2))
if CONFIG['IsHTTP2']:
rest_append.queryArgs["position"] = resp_head.position[0] if '200' in resp_head.status and resp_head.position else "0"
else:
rest_append.queryArgs["position"] = resp_head.position if resp_head.status == '200 OK' else "0"
logging.debug("position: [%s]" % str(resp_head.position))
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time_append = (buckets_cover * CONFIG['ObjectsPerBucketPerThread'] * CONFIG['PutTimesForOneObj'] + j * CONFIG['PutTimesForOneObj'] + (CONFIG['PutTimesForOneObj'] - put_times_for_one_obj)) * 1.0 / CONFIG['TpsPerThread'] + start_time_append
wait_time_append = dst_time_append - time.time()
if wait_time_append > 0:
time.sleep(wait_time_append)
if CONFIG['WindowMode']:  # run-window time limit
window_time_now = (time.time() - valid_start_time.value) % CONFIG['WindowTime']
if window_time_now > CONFIG['RunWindowSeconds']:
time.sleep(CONFIG['WindowTime'] - window_time_now)
if not fixed_size:
# change size every request for the same obj.
rest_append.contentLength, fixed_size = Util.generate_a_size(CONFIG['ObjectSize'])
put_times_for_one_obj -= 1
logging.debug("send append object request")
resp = obsPyCmd.OBSRequestHandler(rest_append, conn).make_request(cal_md5=CONFIG['CalHashMD5'],
memory_file=SHARE_MEM)
result_queue.put(
(process_id, user.username, rest_append.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, 'MD5:' + str(resp.content_md5),
resp.request_id, resp.status, resp.id2))
j += 1
buckets_cover += 1
else:
global APPEND_OBJECTS
logging.debug("append object performance test")
request_type = 'AppendObject'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.headers['content-type'] = 'application/octet-stream'
if CONFIG['ObjectStorageClass']:
if CONFIG['ObjectStorageClass'][-1:] == ',':
CONFIG['ObjectStorageClass'] = CONFIG['ObjectStorageClass'][:-1]
if CONFIG['ObjectStorageClass'].__contains__(','):
object_storage_class_provided = CONFIG['ObjectStorageClass'].split(',')
rest.headers['x-amz-storage-class'] = random.choice(object_storage_class_provided)
else:
rest.headers['x-amz-storage-class'] = CONFIG['ObjectStorageClass']
if CONFIG['PutWithACL']:
rest.headers['x-amz-acl'] = CONFIG['PutWithACL']
fixed_size = False
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['ObjectNameFixed']:
rest.key = CONFIG['ObjectNameFixed']
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-algorithm'] = 'AES256'
elif CONFIG['SrvSideEncryptType'].lower() == 'sse-kms' and CONFIG['SrvSideEncryptAlgorithm'].lower() == 'aws:kms':
rest.headers['x-amz-server-side-encryption'] = 'aws:kms'
if CONFIG['SrvSideEncryptAWSKMSKeyId']:
rest.headers['x-amz-server-side-encryption-aws-kms-key-id'] = CONFIG['SrvSideEncryptAWSKMSKeyId']
if CONFIG['SrvSideEncryptContext']:
rest.headers['x-amz-server-side-encryption-context'] = CONFIG['SrvSideEncryptContext']
elif CONFIG['SrvSideEncryptType'].lower() == 'sse-kms' and CONFIG['SrvSideEncryptAlgorithm'].lower() == 'aes256':
rest.headers['x-amz-server-side-encryption'] = 'AES256'
# If the CalHashMD5 switch is on, write a custom metadata header at upload time to mark the object as having been PUT by this tool.
if CONFIG['CalHashMD5']:
rest.headers['x-amz-meta-md5written'] = 'yes'
if CONFIG['Expires']:
rest.headers['x-obs-expires'] = CONFIG['Expires']
# If the position/ directory already holds recorded object names and previously written positions, read them from that file.
obj_p_file = 'position/%s-%s-%d.dat' % (CONFIG['BucketNamePrefix'] if not CONFIG['BucketNameFixed'] else CONFIG['BucketNameFixed'],
CONFIG['ObjectNamePrefix'] if not CONFIG['ObjectNameFixed'] else CONFIG['ObjectNameFixed'],
process_id)
# Check whether this object already has a recorded position.
is_position_recorded = False
if os.path.exists(obj_p_file) and os.path.getsize(obj_p_file) > 0 and len(APPEND_OBJECTS) > 0:
is_position_recorded = True
os.remove(obj_p_file)
obj_p = ''
obj_p_file = 'position/%s-%s-%d.dat' % (CONFIG['BucketNamePrefix'] if not CONFIG['BucketNameFixed'] else CONFIG['BucketNameFixed'],
CONFIG['ObjectNamePrefix'] if not CONFIG['ObjectNameFixed'] else CONFIG['ObjectNameFixed'],
process_id)
open(obj_p_file, 'w').write(obj_p)
rest.queryArgs["append"] = None
# rest.queryArgs["position"] = "0"
range_arr = range(0, CONFIG['BucketsPerUser'])
if CONFIG['AvoidSinBkOp']:
range_arr = range(process_id % CONFIG['BucketsPerUser'], CONFIG['BucketsPerUser']) + range(0, process_id % CONFIG['BucketsPerUser'])
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
buckets_cover = 0  # number of buckets traversed so far
for i in range_arr:
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
j = 0
while j < CONFIG['ObjectsPerBucketPerThread']:
if not CONFIG['ObjectNameFixed']:
if CONFIG['ObjectLexical']:
if not CONFIG['ObjNamePatternHash']:
rest.key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
else:
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
rest.key = hashlib.md5(object_name).hexdigest() + '-' + object_name
else:
rest.key = Util.random_string_create(random.randint(300, 900))
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-key'] = base64.b64encode(
rest.key[-32:].zfill(32))
rest.headers['x-amz-server-side-encryption-customer-key-MD5'] = base64.b64encode(
hashlib.md5(rest.key[-32:].zfill(32)).digest())
logging.debug('side-encryption-customer-key: [%r]' % rest.key[-32:].zfill(32))
put_times_for_one_obj = CONFIG['PutTimesForOneObj']
while put_times_for_one_obj > 0:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (buckets_cover * CONFIG['ObjectsPerBucketPerThread'] * CONFIG['PutTimesForOneObj'] + j * CONFIG['PutTimesForOneObj'] + (CONFIG['PutTimesForOneObj'] - put_times_for_one_obj)) * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if CONFIG['WindowMode']:  # run-window time limit
window_time_now = (time.time() - valid_start_time.value) % CONFIG['WindowTime']
if window_time_now > CONFIG['RunWindowSeconds']:
time.sleep(CONFIG['WindowTime'] - window_time_now)
if not fixed_size:
# change size every request for the same obj.
rest.contentLength, fixed_size = Util.generate_a_size(CONFIG['ObjectSize'])
put_times_for_one_obj -= 1
if is_position_recorded:
rest.queryArgs["position"] = APPEND_OBJECTS[rest.key]
else:
rest.queryArgs["position"] = "0"
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request(cal_md5=CONFIG['CalHashMD5'],
memory_file=SHARE_MEM)
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, 'MD5:' + str(resp.content_md5),
resp.request_id, resp.status, resp.id2))
# Record the updated append position for this object.
obj_p += '(%s,%s)\n' % (rest.key, resp.position)
# Every 1 KB of accumulated records, flush object append positions to the local position file.
if len(obj_p) >= 1024:
logging.info('write obj_p to file %s' % obj_p_file)
open(obj_p_file, 'a').write(obj_p)
obj_p = ''
j += 1
buckets_cover += 1
if obj_p:
open(obj_p_file, 'a').write(obj_p)
def image_process(process_id, user, conn, result_queue):
request_type = 'ImageProcess'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_cdn=CONFIG['IsCdn'], cdn_ak=CONFIG['CdnAK'], cdn_sk=CONFIG['CdnSK'],
cdn_sts_token=CONFIG['CdnSTSToken'])
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-algorithm'] = 'AES256'
# If OBJECTS was supplied, process those objects directly.
global OBJECTS, LIST_INDEX
if OBJECTS:
handle_from_objects(request_type, rest, process_id, user, conn, result_queue)
return
# If data/ holds a record of uploaded object names and versions, read from that file.
obj_v_file = 'data/objv-%d.dat' % process_id
if os.path.exists(obj_v_file) and os.path.getsize(obj_v_file) > 0:
handle_from_obj_v(request_type, obj_v_file, rest, process_id, user, conn, result_queue)
return
# Otherwise download using lexically generated object names.
if not CONFIG['ObjectLexical']:
logging.warn('Object name is not lexical, exit..')
return
i = 0
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['ObjectNameFixed']:
rest.key = CONFIG['ObjectNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
while i < CONFIG['BucketsPerUser']:
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
j = 0
while j < CONFIG['ObjectsPerBucketPerThread']:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (i * CONFIG['ObjectsPerBucketPerThread'] + j) * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if CONFIG['WindowMode']:  # run-window time limit
window_time_now = (time.time() - valid_start_time.value) % CONFIG['WindowTime']
if window_time_now > CONFIG['RunWindowSeconds']:
time.sleep(CONFIG['WindowTime'] - window_time_now)
if not CONFIG['ObjectNameFixed']:
if not CONFIG['ObjNamePatternHash']:
if not CONFIG['IsRandomGet']:
rest.key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index',
str(
j)).replace(
'ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
else:
rest.key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index',
str(
random.choice(
LIST_INDEX))).replace(
'ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
else:
if not CONFIG['IsRandomGet']:
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index',
str(
j)).replace(
'ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
rest.key = hashlib.md5(object_name).hexdigest() + '-' + object_name
else:
index = random.choice(LIST_INDEX)
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index',
str(
index)).replace(
'ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
rest.key = hashlib.md5(object_name).hexdigest() + '-' + object_name
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-key'] = base64.b64encode(rest.key[-32:].zfill(32))
rest.headers['x-amz-server-side-encryption-customer-key-MD5'] = base64.b64encode(
hashlib.md5(rest.key[-32:].zfill(32)).digest())
rest.queryArgs["x-image-process"] = CONFIG['x-image-process']
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request(cal_md5=CONFIG['CalHashMD5'])
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, 'MD5:' + str(resp.content_md5),
resp.request_id, resp.status, resp.id2))
j += 1
i += 1
def handle_from_objects(request_type, rest, process_id, user, conn, result_queue):
global OBJECTS
objects_per_user = len(OBJECTS) / CONFIG['Threads']
if objects_per_user == 0:
if process_id >= len(OBJECTS):
return
else:
start_index = process_id
end_index = process_id + 1
else:
extra_obj = len(OBJECTS) % CONFIG['Threads']
if process_id == 0:
start_index = 0
end_index = objects_per_user + extra_obj
else:
start_index = process_id * objects_per_user + extra_obj
end_index = start_index + objects_per_user
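# Partition OBJECTS across threads: thread 0 also absorbs the remainder, so each listed object is handled by exactly one thread.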
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
pointer = start_index
while pointer < end_index:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (pointer - start_index) * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if CONFIG['WindowMode']:  # run-window time limit
window_time_now = (time.time() - valid_start_time.value) % CONFIG['WindowTime']
if window_time_now > CONFIG['RunWindowSeconds']:
time.sleep(CONFIG['WindowTime'] - window_time_now)
rest.bucket = OBJECTS[pointer][:OBJECTS[pointer].find('/')]
# obsPyCmd URL-encodes the object name on PUT, so it must be URL-decoded here before it can be read back.
rest.key = urllib.unquote_plus(OBJECTS[pointer][OBJECTS[pointer].find('/') + 1:])
if CONFIG['Testcase'] in (202,) and CONFIG['Range']:
rest.headers['Range'] = 'bytes=%s' % random.choice(CONFIG['Range'].split(';')).strip()
pointer += 1
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request(cal_md5=CONFIG['CalHashMD5'])
if CONFIG["Testcase"] in (202,):
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, 'MD5:' + str(resp.content_md5),
resp.request_id, resp.status, resp.id2))
else:
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def handle_from_obj_v(request_type, obj_v_file, rest, process_id, user, conn, result_queue):
logging.debug("generate object name from obj_v")
obj_v_file_read = open(obj_v_file, 'r')
obj = obj_v_file_read.readline()
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
total_requests = 0
while obj:
if obj and len(obj.split('\t')) != 3:
logging.warning('obj [%r] format error in file %s' % (obj, obj_v_file))
obj = obj_v_file_read.readline()
continue
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = total_requests * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if CONFIG['WindowMode']:  # run-window time limit
window_time_now = (time.time() - valid_start_time.value) % CONFIG['WindowTime']
if window_time_now > CONFIG['RunWindowSeconds']:
time.sleep(CONFIG['WindowTime'] - window_time_now)
total_requests += 1
obj = obj[:-1]
rest.bucket = obj.split('\t')[0]
rest.key = obj.split('\t')[1]
rest.queryArgs['versionId'] = obj.split('\t')[2]
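# Each record line is "bucket<TAB>key<TAB>versionId", so the request targets that specific object version.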
obj = obj_v_file_read.readline()
if rest.requestType == 'GetObject':
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-key'] = base64.b64encode(rest.key[-32:].zfill(32))
rest.headers['x-amz-server-side-encryption-customer-key-MD5'] = base64.b64encode(
hashlib.md5(rest.key[-32:].zfill(32)).digest())
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request(cal_md5=CONFIG['CalHashMD5'])
else:
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, 'MD5:' + str(resp.content_md5),
resp.request_id, resp.status, resp.id2))
def get_object(process_id, user, conn, result_queue):
request_type = 'GetObject'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_cdn=CONFIG['IsCdn'], cdn_ak=CONFIG['CdnAK'], cdn_sk=CONFIG['CdnSK'],
cdn_sts_token=CONFIG['CdnSTSToken'], is_http2=CONFIG['IsHTTP2'], host=conn.host)
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-algorithm'] = 'AES256'
if CONFIG['Testcase'] in (202, 900) and CONFIG['Range']:
rest.headers['Range'] = 'bytes=%s' % random.choice(CONFIG['Range'].split(';')).strip()
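# For range-read test cases, one byte range is picked at random from the semicolon-separated Range setting.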
# If OBJECTS was supplied, process those objects directly.
global OBJECTS, LIST_INDEX
if OBJECTS:
handle_from_objects(request_type, rest, process_id, user, conn, result_queue)
return
# If data/ holds a record of uploaded object names and versions, read from that file.
obj_v_file = 'data/objv-%d.dat' % process_id
if os.path.exists(obj_v_file) and os.path.getsize(obj_v_file) > 0:
handle_from_obj_v(request_type, obj_v_file, rest, process_id, user, conn, result_queue)
return
# Otherwise download using lexically generated object names.
if not CONFIG['ObjectLexical']:
logging.warn('Object name is not lexical, exit..')
return
i = 0
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['ObjectNameFixed']:
rest.key = CONFIG['ObjectNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
while i < CONFIG['BucketsPerUser']:
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
j = 0
while j < CONFIG['ObjectsPerBucketPerThread']:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (i * CONFIG['ObjectsPerBucketPerThread'] + j) * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if CONFIG['WindowMode']:  # run-window time limit
window_time_now = (time.time() - valid_start_time.value) % CONFIG['WindowTime']
if window_time_now > CONFIG['RunWindowSeconds']:
time.sleep(CONFIG['WindowTime'] - window_time_now)
if CONFIG['Range']:
rest.headers['Range'] = 'bytes=%s' % random.choice(CONFIG['Range'].split(';')).strip()
if not CONFIG['ObjectNameFixed']:
if not CONFIG['ObjNamePatternHash']:
if not CONFIG['IsRandomGet']:
rest.key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index',
str(
j)).replace(
'ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
else:
rest.key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index',
str(
random.choice(
LIST_INDEX))).replace(
'ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
else:
if not CONFIG['IsRandomGet']:
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index',
str(
j)).replace(
'ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
rest.key = hashlib.md5(object_name).hexdigest() + '-' + object_name
else:
index = random.choice(LIST_INDEX)
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index',
str(
index)).replace(
'ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
rest.key = hashlib.md5(object_name).hexdigest() + '-' + object_name
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-key'] = base64.b64encode(rest.key[-32:].zfill(32))
rest.headers['x-amz-server-side-encryption-customer-key-MD5'] = base64.b64encode(
hashlib.md5(rest.key[-32:].zfill(32)).digest())
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request(cal_md5=CONFIG['CalHashMD5'])
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, 'MD5:' + str(resp.content_md5),
resp.request_id, resp.status, resp.id2))
j += 1
i += 1
def head_object(process_id, user, conn, result_queue):
request_type = 'HeadObject'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
# If OBJECTS was supplied, process those objects directly.
global OBJECTS
if OBJECTS:
handle_from_objects(request_type, rest, process_id, user, conn, result_queue)
return
elif not CONFIG['ObjectLexical']:
logging.warn('Object name is not lexical, exit..')
return
i = 0
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['ObjectNameFixed']:
rest.key = CONFIG['ObjectNameFixed']
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-algorithm'] = 'AES256'
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
while i < CONFIG['BucketsPerUser']:
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
j = 0
while j < CONFIG['ObjectsPerBucketPerThread']:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (i * CONFIG['ObjectsPerBucketPerThread'] + j) * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['ObjectNameFixed']:
if not CONFIG['ObjNamePatternHash']:
rest.key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(
j)).replace('ObjectNamePrefix',
CONFIG[
'ObjectNamePrefix'])
else:
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index',
str(
j)).replace(
'ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
rest.key = hashlib.md5(object_name).hexdigest() + '-' + object_name
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-key'] = base64.b64encode(rest.key[-32:].zfill(32))
rest.headers['x-amz-server-side-encryption-customer-key-MD5'] = base64.b64encode(
hashlib.md5(rest.key[-32:].zfill(32)).digest())
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
j += 1
i += 1
def delete_object(process_id, user, conn, result_queue):
request_type = 'DeleteObject'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
# If OBJECTS was supplied, process those objects directly.
global OBJECTS, LIST_INDEX
if OBJECTS:
handle_from_objects(request_type, rest, process_id, user, conn, result_queue)
return
# If data/ holds a record of uploaded object names and versions, read from that file.
obj_v_file = 'data/objv-%d.dat' % process_id
if os.path.exists(obj_v_file) and os.path.getsize(obj_v_file) > 0:
handle_from_obj_v(request_type, obj_v_file, rest, process_id, user, conn, result_queue)
return
elif not CONFIG['ObjectLexical']:
logging.warn('Object name is not lexical, exit..')
return
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['ObjectNameFixed']:
rest.key = CONFIG['ObjectNameFixed']
range_arr = range(0, CONFIG['BucketsPerUser'])
# Stagger each worker's starting bucket to avoid a single-bucket performance bottleneck.
if CONFIG['AvoidSinBkOp']:
range_arr = range(process_id % CONFIG['BucketsPerUser'], CONFIG['BucketsPerUser']) + range(0,
process_id % CONFIG[
'BucketsPerUser'])
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
buckets_cover = 0  # number of buckets traversed so far
for i in range_arr:
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += 1
j = 0
while j < CONFIG['ObjectsPerBucketPerThread']:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (buckets_cover * CONFIG['ObjectsPerBucketPerThread'] + j) * 1.0 / CONFIG[
'TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if CONFIG['WindowMode']:  # run-window time limit
window_time_now = (time.time() - valid_start_time.value) % CONFIG['WindowTime']
if window_time_now > CONFIG['RunWindowSeconds']:
time.sleep(CONFIG['WindowTime'] - window_time_now)
if not CONFIG['ObjectNameFixed']:
if not CONFIG['ObjNamePatternHash']:
if not CONFIG['IsRandomDelete']:
rest.key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index',
str(
j)).replace(
'ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
else:
index = random.choice(LIST_INDEX)
LIST_INDEX.remove(index)
rest.key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index',
str(
index)).replace(
'ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
else:
if not CONFIG['IsRandomDelete']:
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index',
str(
j)).replace(
'ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
rest.key = hashlib.md5(object_name).hexdigest() + '-' + object_name
else:
index = random.choice(LIST_INDEX)
LIST_INDEX.remove(index)
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index',
str(
index)).replace(
'ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
rest.key = hashlib.md5(object_name).hexdigest() + '-' + object_name
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
j += 1
buckets_cover += 1
def restore_object(process_id, user, conn, result_queue):
request_type = 'RestoreObject'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
# If OBJECTS was supplied, process those objects directly.
global OBJECTS
if OBJECTS:
handle_from_objects(request_type, rest, process_id, user, conn, result_queue)
return
rest.queryArgs['restore'] = None
rest.sendContent = '<RestoreRequest xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Days>%s</Days><GlacierJobParameters><Tier>%s</Tier></GlacierJobParameters></RestoreRequest>' % (
CONFIG['RestoreDays'], CONFIG['RestoreTier'])
rest.headers['Content-MD5'] = base64.b64encode(hashlib.md5(rest.sendContent).digest())
# If data/ holds a record of uploaded object names and versions, read from that file.
obj_v_file = 'data/objv-%d.dat' % process_id
if os.path.exists(obj_v_file) and os.path.getsize(obj_v_file) > 0:
handle_from_obj_v(request_type, obj_v_file, rest, process_id, user, conn, result_queue)
return
elif not CONFIG['ObjectLexical']:
logging.warn('Object name is not lexical, exit..')
return
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['ObjectNameFixed']:
rest.key = CONFIG['ObjectNameFixed']
rest.queryArgs['restore'] = None
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
i = 0
while i < CONFIG['BucketsPerUser']:
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
j = 0
while j < CONFIG['ObjectsPerBucketPerThread']:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (i * CONFIG['ObjectsPerBucketPerThread'] + j) * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['ObjectNameFixed']:
if not CONFIG['ObjNamePatternHash']:
rest.key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(
j)).replace('ObjectNamePrefix',
CONFIG[
'ObjectNamePrefix'])
else:
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index',
str(
j)).replace(
'ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
rest.key = hashlib.md5(object_name).hexdigest() + '-' + object_name
rest.sendContent = '<RestoreRequest xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Days>%s</Days><GlacierJobParameters><Tier>%s</Tier></GlacierJobParameters></RestoreRequest>' % (
CONFIG['RestoreDays'], CONFIG['RestoreTier'])
logging.debug('send content [%s] ' % rest.sendContent)
rest.headers['Content-MD5'] = base64.b64encode(hashlib.md5(rest.sendContent).digest())
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
j += 1
i += 1
def delete_multi_objects(process_id, user, conn, result_queue):
if not CONFIG['ObjectLexical']:
logging.warn('Object name is not lexical, exit..')
return
if CONFIG['ObjectsPerBucketPerThread'] <= 0:
logging.warn('ObjectsPerBucketPerThread <= 0, exit..')
return
request_type = 'DeleteMultiObjects'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['delete'] = None
i = 0
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time()  # start time
while i < CONFIG['BucketsPerUser']:
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
i += 1
delete_times_per_bucket = math.ceil(
CONFIG['ObjectsPerBucketPerThread'] * 1.0 / CONFIG['DeleteObjectsPerRequest'])
logging.debug('ObjectsPerBucketPerThread: %d, DeleteObjectsPerRequest: %d, delete_times_per_bucket:%d' % (
CONFIG['ObjectsPerBucketPerThread'], CONFIG['DeleteObjectsPerRequest'], delete_times_per_bucket))
j = 0
while j < CONFIG['ObjectsPerBucketPerThread']:
if CONFIG['TpsPerThread']:  # throttle TPS
# Compute the time this request should be sent: requests completed so far / TPS limit + concurrency start time
dst_time = (i * math.ceil(CONFIG['ObjectsPerBucketPerThread'] / CONFIG['DeleteObjectsPerRequest']) + j /
CONFIG[
'DeleteObjectsPerRequest']) * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
rest.sendContent = '<Delete>'
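# Build one multi-delete body with up to DeleteObjectsPerRequest keys; the inner loop stops early once the per-bucket object count is exhausted.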
k = 0
while k < CONFIG['DeleteObjectsPerRequest']:
if j >= CONFIG['ObjectsPerBucketPerThread']:
break
if not CONFIG['ObjNamePatternHash']:
key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index',
str(j)).replace(
'ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
else:
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index',
str(
j)).replace(
'ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
key = hashlib.md5(object_name).hexdigest() + '-' + object_name
rest.sendContent += '<Object><Key>%s</Key></Object>' % key
k += 1
j += 1
rest.sendContent += '</Delete>'
logging.debug('send content [%s] ' % rest.sendContent)
rest.headers['Content-MD5'] = base64.b64encode(hashlib.md5(rest.sendContent).digest())
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def copy_object(process_id, user, conn, result_queue):
if not CONFIG['ObjectLexical']:
logging.warn('Object name is not lexical, exit..')
return
request_type = 'CopyObject'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.headers['x-amz-acl'] = 'public-read-write'
rest.headers['x-amz-metadata-directive'] = 'COPY'
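# With the COPY metadata directive, the destination object inherits the source object's metadata unchanged.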
if CONFIG['copySrcObjFixed']:
rest.headers['x-amz-copy-source'] = '/' + CONFIG['copySrcObjFixed']
if CONFIG['copyDstObjFixed']:
rest.bucket = CONFIG['copyDstObjFixed'].split('/')[0]
rest.key = CONFIG['copyDstObjFixed'].split('/')[1]
elif CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-algorithm'] = 'AES256'
if CONFIG['copySrcSrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-copy-source-server-side-encryption-customer-algorithm'] = 'AES256'
if CONFIG['SrvSideEncryptType'].lower() == 'sse-kms' and CONFIG['SrvSideEncryptAlgorithm'].lower() == 'aws:kms':
rest.headers['x-amz-server-side-encryption'] = 'aws:kms'
if CONFIG['SrvSideEncryptAWSKMSKeyId']:
rest.headers['x-amz-server-side-encryption-aws-kms-key-id'] = CONFIG['SrvSideEncryptAWSKMSKeyId']
if CONFIG['SrvSideEncryptContext']:
rest.headers['x-amz-server-side-encryption-context'] = CONFIG['SrvSideEncryptContext']
elif CONFIG['SrvSideEncryptType'].lower() == 'sse-kms' and CONFIG['SrvSideEncryptAlgorithm'].lower() == 'aes256':
rest.headers['x-amz-server-side-encryption'] = 'AES256'
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
i = 0
while i < CONFIG['BucketsPerUser']:
# If no fixed destination object or fixed bucket is configured, use the source object's bucket as the destination bucket
if not CONFIG['copyDstObjFixed'] and not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
j = 0
while j < CONFIG['ObjectsPerBucketPerThread']:
if CONFIG['TpsPerThread']: # throttle TPS
# Compute the time this request should start at: completed requests / TPS limit + thread start time
dst_time = (i * CONFIG['ObjectsPerBucketPerThread'] + j) * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['copyDstObjFixed']:
if not CONFIG['ObjNamePatternHash']:
rest.key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix'] + '.copy')
else:
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix'] + '.copy')
rest.key = hashlib.md5(object_name).hexdigest() + '-' + object_name
if not CONFIG['copySrcObjFixed']:
if not CONFIG['ObjNamePatternHash']:
rest.headers['x-amz-copy-source'] = '/%s/%s' % (rest.bucket, CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix']))
else:
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
key = hashlib.md5(object_name).hexdigest() + '-' + object_name
rest.headers['x-amz-copy-source'] = '/%s/%s' % (rest.bucket, key)
if CONFIG['copySrcSrvSideEncryptType'].lower() == 'sse-c':
src_en_key = rest.headers['x-amz-copy-source'].split('/')[2][-32:].zfill(32)
rest.headers['x-amz-copy-source-server-side-encryption-customer-key'] = base64.b64encode(src_en_key)
rest.headers['x-amz-copy-source-server-side-encryption-customer-key-MD5'] = base64.b64encode(
hashlib.md5(src_en_key).digest())
logging.debug('src encrypt key: %s, src encrypt key md5: %s' % (
src_en_key, rest.headers['x-amz-copy-source-server-side-encryption-customer-key-MD5']))
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-key'] = base64.b64encode(rest.key[-32:].zfill(32))
rest.headers['x-amz-server-side-encryption-customer-key-MD5'] = base64.b64encode(
hashlib.md5(rest.key[-32:].zfill(32)).digest())
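# The SSE-C key is derived from the object key: its last 32 characters, left-padded with '0' to 32 bytes,
# e.g. 'obj-7'.zfill(32) == '0' * 27 + 'obj-7' (illustrative key), then base64-encoded into the header.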
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
# A 200 OK returned for the copy does not by itself mean the copy succeeded. If 200 is returned but no ETag was obtained, rewrite the response as a 500 error.
if resp.status.startswith('200 ') and not resp.return_data:
logging.warning('response 200 OK without ETag, set status code 500 InternalError')
resp.status = '500 InternalError'
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes,
'copySrc:' + rest.headers['x-amz-copy-source'], resp.request_id, resp.status, resp.id2))
j += 1
i += 1
def init_multi_upload(process_id, user, conn, result_queue):
# if not CONFIG['ObjectLexical']:
# logging.warn('Object name is not lexical, exit..')
# return
if CONFIG['ObjectsPerBucketPerThread'] <= 0 or CONFIG['BucketsPerUser'] <= 0:
logging.warn('ObjectsPerBucketPerThread or BucketsPerUser <= 0, exit..')
return
request_type = 'InitMultiUpload'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.queryArgs['uploads'] = None
if CONFIG['MultiUploadStorageClass']:
if CONFIG['MultiUploadStorageClass'][-1:] == ',':
CONFIG['MultiUploadStorageClass'] = CONFIG['MultiUploadStorageClass'][:-1]
if CONFIG['MultiUploadStorageClass'].__contains__(','):
multi_upload_storage_class_provided = CONFIG['MultiUploadStorageClass'].split(',')
rest.headers['x-amz-storage-class'] = random.choice(multi_upload_storage_class_provided)
else:
rest.headers['x-amz-storage-class'] = CONFIG['MultiUploadStorageClass']
if CONFIG['PutWithACL']:
rest.headers['x-amz-acl'] = CONFIG['PutWithACL']
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['ObjectNameFixed']:
rest.key = CONFIG['ObjectNameFixed']
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-algorithm'] = 'AES256'
elif CONFIG['SrvSideEncryptType'].lower() == 'sse-kms' and CONFIG['SrvSideEncryptAlgorithm'].lower() == 'aws:kms':
rest.headers['x-amz-server-side-encryption'] = 'aws:kms'
if CONFIG['SrvSideEncryptAWSKMSKeyId']:
rest.headers['x-amz-server-side-encryption-aws-kms-key-id'] = CONFIG['SrvSideEncryptAWSKMSKeyId']
if CONFIG['SrvSideEncryptContext']:
rest.headers['x-amz-server-side-encryption-context'] = CONFIG['SrvSideEncryptContext']
elif CONFIG['SrvSideEncryptType'].lower() == 'sse-kms' and CONFIG['SrvSideEncryptAlgorithm'].lower() == 'aes256':
rest.headers['x-amz-server-side-encryption'] = 'AES256'
if CONFIG['Expires']:
rest.headers['x-obs-expires'] = CONFIG['Expires']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
upload_ids = ''
i = 0
while i < CONFIG['BucketsPerUser']:
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
j = 0
while j < CONFIG['ObjectsPerBucketPerThread']:
if CONFIG['TpsPerThread']: # throttle TPS
# Compute the time this request should start at: completed requests / TPS limit + thread start time
dst_time = (i * CONFIG['ObjectsPerBucketPerThread'] + j) * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not CONFIG['ObjectNameFixed']:
if CONFIG['ObjectLexical']:
if not CONFIG['ObjNamePatternHash']:
rest.key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
else:
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
rest.key = hashlib.md5(object_name).hexdigest() + '-' + object_name
else:
rest.key = Util.random_string_create(random.randint(300, 900))
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-key'] = base64.b64encode(rest.key[-32:].zfill(32))
rest.headers['x-amz-server-side-encryption-customer-key-MD5'] = base64.b64encode(
hashlib.md5(rest.key[-32:].zfill(32)).digest())
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
# If the request succeeded, record return_data (the UploadId) for writing to a local file
if resp.status.startswith('200 '):
logging.debug('rest.key:%s, rest.returndata:%s' % (rest.key, resp.return_data))
upload_ids += '%s\t%s\t%s\t%s\n' % (user.username, rest.bucket, rest.key, resp.return_data)
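# Each record is 'username<TAB>bucket<TAB>key<TAB>uploadId', one line per successful init,
# e.g. 'usr0\tusr0.bucketprefix.0\tprefix.0.3\t<uploadId>' (placeholder values for illustration).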
j += 1
i += 1
if upload_ids == '':
return None
# Before exiting, write the collected records to a local file
uploadid_writer = None
uploadid_file = 'data/upload_id-%d.dat' % process_id
try:
uploadid_writer = open(uploadid_file, 'w')
uploadid_writer.write(upload_ids)
except Exception, data:
logging.error('process [%d] write upload_ids error %s' % (process_id, data))
finally:
if uploadid_writer:
try:
uploadid_writer.close()
except IOError:
pass
def upload_part(process_id, user, conn, result_queue):
# Load the upload_ids this process needs from local files. To support several threads uploading parts of the
# same upload_id concurrently, upload_ids initialized by other processes must be loaded as well.
# E.g. 5 users with 2 threads each: each upload_id can be uploaded by at most 2 concurrent threads.
# upload_id-0(usr0,p0) upload_id-1(usr0,p1) upload_id-2(usr1,p2) upload_id-3(usr1,p3) upload_id-4(usr2,p4)
# upload_id-5(usr2,p5) upload_id-6(usr3,p6) upload_id-7(usr3,p7) upload_id-8(usr4,p8) upload_id-9(usr4,p9)
# p0 and p1 load usr0's records from upload_id-0 and upload_id-1 in order.
upload_ids = []
if not CONFIG['ConcurrentUpParts']:
id_files = [process_id]
else:
id_files = range(process_id / CONFIG['ThreadsPerUser'] * CONFIG['ThreadsPerUser'],
(process_id / CONFIG['ThreadsPerUser'] + 1) * CONFIG['ThreadsPerUser'])
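# Example (assumed values): with ThreadsPerUser=2 and process_id=3, id_files is range(2, 4) == [2, 3],
# i.e. this thread also reads the upload_ids initialized by its peer thread of the same user.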
for i in id_files:
upload_id_file = 'data/upload_id-%d.dat' % i
try:
with open(upload_id_file, 'r') as fd:
for line in fd:
if line.strip() == '':
continue
# Skip upload_ids that were not initialized by this thread's user.
if not line.startswith(user.username + '\t'):
continue
if len(line.split('\t')) != 4:
logging.warn('upload_ids record error [%s]' % line)
continue
# Keep the index i of the thread that originally created this upload_id
upload_ids.append((str(i) + '.' + line.strip()).split('\t'))
fd.close()
logging.info('process %d load upload_ids file %s end' % (process_id, upload_id_file))
except Exception, data:
logging.error("load %s for process %d error, [%r], exit" % (upload_id_file, process_id, data))
continue
if not upload_ids:
logging.warning("load no upload_ids for process %d, from file upload_id-%r exit" % (process_id, id_files))
return
else:
logging.info("total load %d upload_ids" % len(upload_ids))
fixed_size = False
request_type = 'UploadPart'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.headers['content-type'] = 'application/octet-stream'
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-algorithm'] = 'AES256'
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
total_requests = 0
for upload_id in upload_ids:
rest.bucket = upload_id[1]
rest.key = upload_id[2]
rest.queryArgs['uploadId'] = upload_id[3]
parts_record = ''
# If concurrent part upload is enabled, this thread only handles a subset of the part numbers.
if not CONFIG['ConcurrentUpParts']:
part_ids = range(1, CONFIG['PartsForEachUploadID'] + 1)
else:
part_ids = range(process_id % CONFIG['ThreadsPerUser'] + 1, CONFIG['PartsForEachUploadID'] + 1,
CONFIG['ThreadsPerUser'])
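# Example (assumed values): with ThreadsPerUser=2 and PartsForEachUploadID=6, process 0 gets part numbers
# [1, 3, 5] and process 1 gets [2, 4, 6], so the two threads split the parts of one upload_id between them.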
logging.debug('process %d handle parts: %r' % (process_id, part_ids))
if not part_ids:
logging.warning(
'process %d has no parts to do for upload_id %s, break' % (process_id, rest.queryArgs['uploadId']))
continue
for i in part_ids:
if CONFIG['TpsPerThread']: # throttle TPS
# Compute the time this request should start at: completed requests / TPS limit + thread start time
dst_time = total_requests * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
rest.queryArgs['partNumber'] = str(i)
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-key'] = base64.b64encode(rest.key[-32:].zfill(32))
rest.headers['x-amz-server-side-encryption-customer-key-MD5'] = base64.b64encode(
hashlib.md5(rest.key[-32:].zfill(32)).digest())
for _ in xrange(CONFIG['PutTimesForOnePart']):
if not fixed_size:
rest.contentLength, fixed_size = Util.generate_a_size(CONFIG['PartSize'])
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request(cal_md5=CONFIG['CalHashMD5'],
memory_file=SHARE_MEM)
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time, resp.end_time,
resp.send_bytes, resp.recv_bytes, resp.return_data, resp.request_id, resp.status, resp.id2))
total_requests += 1
if resp.status.startswith('200 '):
parts_record += '%d:%s,' % (i, resp.return_data)
upload_id.append(parts_record)
# Write the part info to a local file parts_etag-x.dat; format: bucket\tobject\tupload_id\tpartNo:Etag,partNo:Etag,...
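# Example line (placeholders, not real values): '0.usr0\tusr0.bucket.0\tprefix.0.3\t<uploadId>\t1:<etag1>,2:<etag2>,'
# The leading '<file index>.<username>' field is carried over from the upload_id record loaded above.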
part_record_file = 'data/parts_etag-%d.dat' % process_id
parts_record_writer = None
parts_records = ''
for upload_id in upload_ids:
parts_records += '\t'.join(upload_id) + '\n'
try:
parts_record_writer = open(part_record_file, 'w')
parts_record_writer.write(parts_records)
except Exception, data:
logging.error('process [%d] write file %s error, %s' % (process_id, part_record_file, data))
finally:
if parts_record_writer:
try:
parts_record_writer.close()
except IOError:
pass
def copy_part(process_id, user, conn, result_queue):
# OBJECTS must be provided, otherwise there is nothing to copy from.
global OBJECTS
if not OBJECTS:
logging.error("can not find source object, exit")
return
# Load the upload_ids this process needs from local files. To support several threads uploading parts of the
# same upload_id concurrently, upload_ids initialized by other processes must be loaded as well.
# E.g. 5 users with 2 threads each: each upload_id can be uploaded by at most 2 concurrent threads.
# upload_id-0(usr0,p0) upload_id-1(usr0,p1) upload_id-2(usr1,p2) upload_id-3(usr1,p3) upload_id-4(usr2,p4)
# upload_id-5(usr2,p5) upload_id-6(usr3,p6) upload_id-7(usr3,p7) upload_id-8(usr4,p8) upload_id-9(usr4,p9)
# p0 and p1 load usr0's records from upload_id-0 and upload_id-1 in order.
upload_ids = []
if not CONFIG['ConcurrentUpParts']:
id_files = [process_id]
else:
id_files = range(process_id / CONFIG['ThreadsPerUser'] * CONFIG['ThreadsPerUser'],
(process_id / CONFIG['ThreadsPerUser'] + 1) * CONFIG['ThreadsPerUser'])
for i in id_files:
upload_id_file = 'data/upload_id-%d.dat' % i
try:
with open(upload_id_file, 'r') as fd:
for line in fd:
if line.strip() == '':
continue
# Skip upload_ids that were not initialized by this thread's user.
if not line.startswith(user.username + '\t'):
continue
if len(line.split('\t')) != 4:
logging.warn('upload_ids record error [%s]' % line)
continue
# Keep the index i of the thread that originally created this upload_id
upload_ids.append((str(i) + '.' + line.strip()).split('\t'))
fd.close()
logging.info('process %d load upload_ids file %s end' % (process_id, upload_id_file))
except Exception, data:
logging.error("load %s for process %d error, [%r], exit" % (upload_id_file, process_id, data))
continue
if not upload_ids:
logging.warning("load no upload_ids for process %d, exit" % process_id)
return
fixed_size = False
request_type = 'CopyPart'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-algorithm'] = 'AES256'
if CONFIG['copySrcSrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-copy-source-server-side-encryption-customer-algorithm'] = 'AES256'
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
total_requests = 0
parts_record = ''
for upload_id in upload_ids:
rest.bucket = upload_id[1]
rest.key = upload_id[2]
rest.queryArgs['uploadId'] = upload_id[3]
# If concurrent part upload is enabled, this thread only handles a subset of the part numbers.
if not CONFIG['ConcurrentUpParts']:
part_ids = range(1, CONFIG['PartsForEachUploadID'] + 1)
else:
part_ids = range(process_id % CONFIG['ThreadsPerUser'] + 1, CONFIG['PartsForEachUploadID'] + 1,
CONFIG['ThreadsPerUser'])
logging.debug('process %d handle parts: %r' % (process_id, part_ids))
if not part_ids:
logging.warning(
'process %d has no parts to do for upload_id %s, break' % (process_id, rest.queryArgs['uploadId']))
continue
for i in part_ids:
if CONFIG['TpsPerThread']: # throttle TPS
# Compute the time this request should start at: completed requests / TPS limit + thread start time
dst_time = total_requests * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
rest.queryArgs['partNumber'] = str(i)
if not fixed_size:
range_size, fixed_size = Util.generate_a_size(CONFIG['PartSize'])
rest.headers['x-amz-copy-source'] = '/%s' % random.choice(OBJECTS)
range_start_index = random.randint(0, int(CONFIG['PartSize']) - range_size)
logging.debug('range_start_index:%d' % range_start_index)
rest.headers['x-amz-copy-source-range'] = 'bytes=%d-%d' % (
range_start_index, range_start_index + range_size - 1)
logging.debug('x-amz-copy-source-range:[%s]' % rest.headers['x-amz-copy-source-range'])
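# Example (illustrative values): range_size=1048576 and range_start_index=0 give 'bytes=0-1048575',
# i.e. the copied range is inclusive on both ends and exactly range_size bytes long.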
# Add the server-side encryption headers
if CONFIG['copySrcSrvSideEncryptType'].lower() == 'sse-c':
src_en_key = rest.headers['x-amz-copy-source'].split('/')[2][-32:].zfill(32)
rest.headers['x-amz-copy-source-server-side-encryption-customer-key'] = base64.b64encode(src_en_key)
rest.headers['x-amz-copy-source-server-side-encryption-customer-key-MD5'] = base64.b64encode(
hashlib.md5(src_en_key).digest())
logging.debug('src encrypt key: %s, src encrypt key md5: %s' % (
src_en_key, rest.headers['x-amz-copy-source-server-side-encryption-customer-key-MD5']))
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-key'] = base64.b64encode(rest.key[-32:].zfill(32))
rest.headers['x-amz-server-side-encryption-customer-key-MD5'] = base64.b64encode(
hashlib.md5(rest.key[-32:].zfill(32)).digest())
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
# As with CopyObject, a 200 OK returned first for the copy-part operation does not mean the copy succeeded. If 200 is returned but no ETag was obtained, rewrite the response as a 500 error.
if resp.status.startswith('200 ') and not resp.return_data:
logging.error('response 200 OK without ETag, set status code 500 InternalError')
resp.status = '500 InternalError'
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes,
'src:' + rest.headers['x-amz-copy-source'] + ':' + rest.headers[
'x-amz-copy-source-range'], resp.request_id, resp.status, resp.id2))
if resp.status.startswith('200 '):
parts_record += '%d:%s,' % (i, resp.return_data)
total_requests += 1
upload_id.append(parts_record)
# Write the part info to a local file parts_etag-x.dat; format: bucket\tobject\tupload_id\tpartNo:Etag,partNo:Etag,...
part_record_file = 'data/parts_etag-%d.dat' % process_id
parts_record_writer = None
parts_records = ''
for upload_id in upload_ids:
parts_records += '\t'.join(upload_id) + '\n'
try:
parts_record_writer = open(part_record_file, 'w')
parts_record_writer.write(parts_records)
except Exception, data:
logging.error('process [%d] write file %s error, %s' % (process_id, part_record_file, data))
finally:
if parts_record_writer:
try:
parts_record_writer.close()
except IOError:
pass
def complete_multi_upload(process_id, user, conn, result_queue):
# Load the upload_ids this process needs from the local parts_etag-x.dat files. To support several threads
# uploading parts of the same upload_id concurrently, part records uploaded by other processes must be loaded too.
# E.g. 3 users with 3 threads each and 6 parts per upload_id: each upload_id is uploaded by 3 threads, 2 parts per thread.
# parts_etag-0(usr0,p0,part1/4) parts_etag-1(usr0,p1,part2/5) parts_etag-2(usr1,p2,part3/6)
# parts_etag-3(usr1,p3,part1/4) parts_etag-4(usr0,p4,part2/5) parts_etag-5(usr1,p5,part3/6)
# parts_etag-0(usr2,p6,part1/4) parts_etag-1(usr0,p7,part2/5) parts_etag-2(usr1,p8,part3/6)
# p0, p1 and p2 load parts_etag-0, parts_etag-1 and parts_etag-2 in order and pick out their own objects.
part_etags = {}
if not CONFIG['ConcurrentUpParts']:
part_files = [process_id]
else:
part_files = range(process_id / CONFIG['ThreadsPerUser'] * CONFIG['ThreadsPerUser'],
(process_id / CONFIG['ThreadsPerUser'] + 1) * CONFIG['ThreadsPerUser'])
for i in part_files:
part_record_file = 'data/parts_etag-%d.dat' % i
try:
with open(part_record_file, 'r') as fd:
for line in fd:
if line.strip() == '':
continue
if not line.startswith('%d.%s\t' % (process_id, user.username)):
continue
line_array = line.strip().split('\t')
if len(line_array) != 5 or not line_array[4]:
logging.warn('partEtag record error [%s]' % line)
continue
# Record format: username\tbucket\tobject\tuploadId\tpartNo:etag,partNo:etag,...
# Merge the part info uploaded concurrently for the same upload_id
if line_array[3] in part_etags:
part_etags[line_array[3]] = (
line_array[1], line_array[2], line_array[4] + part_etags[line_array[3]][2])
else:
part_etags[line_array[3]] = (line_array[1], line_array[2], line_array[4])
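# part_etags now maps uploadId -> (bucket, key, 'partNo:etag,partNo:etag,...'); records for the same uploadId
# coming from different parts_etag files are concatenated here and sorted by part number later on.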
fd.close()
logging.debug('process %d load parts_etag file %s end' % (process_id, part_record_file))
except Exception, data:
logging.warning(
"load parts_etag from file %s for process %d error, [%r], exit" % (part_record_file, process_id, data))
continue
if not part_etags:
logging.error('process %d load nothing from files %r ' % (process_id, part_files))
return
request_type = 'CompleteMultiUpload'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.headers['content-type'] = 'application/xml'
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
total_requests = 0
for key, value in part_etags.items():
if CONFIG['TpsPerThread']: # throttle TPS
# Compute the time this request should start at: completed requests / TPS limit + thread start time
dst_time = total_requests * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
rest.bucket = value[0]
rest.key = value[1]
rest.queryArgs['uploadId'] = key
# Sort the part info
parts_dict = {}
for item in value[2].split(','):
if ':' in item:
parts_dict[int(item.split(':')[0])] = item.split(':')[1]
# Assemble the XML body
if not parts_dict:
continue
rest.sendContent = '<CompleteMultipartUpload>'
for part_index in sorted(parts_dict):
if not parts_dict[part_index]:
continue
rest.sendContent += '<Part><PartNumber>%d</PartNumber><ETag>%s</ETag></Part>' % (
part_index, parts_dict[part_index])
rest.sendContent += '</CompleteMultipartUpload>'
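# Example body (ETags are placeholders):
# <CompleteMultipartUpload><Part><PartNumber>1</PartNumber><ETag>etag1</ETag></Part>
# <Part><PartNumber>2</PartNumber><ETag>etag2</ETag></Part></CompleteMultipartUpload>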
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
total_requests += 1
def abort_multi_upload(process_id, user, conn, result_queue):
# Load the upload_ids this process needs from the local file
upload_ids = []
upload_id_file = 'data/upload_id-%d.dat' % process_id
try:
with open(upload_id_file, 'r') as fd:
for line in fd:
if line.strip() == '':
continue
# Skip upload_ids that were not initialized by this thread's user.
if not line.startswith(user.username + '\t'):
continue
if len(line.split('\t')) != 4:
logging.warn('upload_ids record error [%s]' % line)
continue
upload_ids.append(line.strip().split('\t'))
fd.close()
logging.info('process %d load upload_ids file %s end' % (process_id, upload_id_file))
except Exception, data:
logging.error("load %s for process %d error, [%r], exit" % (upload_id_file, process_id, data))
return
if not upload_ids:
logging.warning("load no upload_ids for process %d, from file upload_id-%r exit" % (process_id, upload_id_file))
return
else:
logging.info("total load %d upload_ids" % len(upload_ids))
request_type = 'AbortMultiUpload'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
total_requests = 0
for upload_id in upload_ids:
rest.bucket = upload_id[1]
rest.key = upload_id[2]
rest.queryArgs['uploadId'] = upload_id[3]
if CONFIG['TpsPerThread']: # throttle TPS
# Compute the time this request should start at: completed requests / TPS limit + thread start time
dst_time = total_requests * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request(cal_md5=CONFIG['CalHashMD5'])
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
total_requests += 1
def multi_parts_upload(process_id, user, conn, result_queue):
rest = obsPyCmd.OBSRequestDescriptor(request_type='', ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.bucket = CONFIG['BucketNameFixed']
rest.key = CONFIG['ObjectNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
total_requests = 0
i = 0
while i < CONFIG['BucketsPerUser']:
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
j = 0
while j < CONFIG['ObjectsPerBucketPerThread']:
if not CONFIG['ObjectNameFixed']:
if not CONFIG['ObjNamePatternHash']:
rest.key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
else:
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
rest.key = hashlib.md5(object_name).hexdigest() + '-' + object_name
# 1. Initialize the multipart upload for the object.
rest.requestType = 'InitMultiUpload'
rest.method = 'POST'
rest.headers = {}
rest.queryArgs = {}
rest.contentLength = 0
rest.sendContent = ''
rest.queryArgs['uploads'] = None
if CONFIG['PutWithACL']:
rest.headers['x-amz-acl'] = CONFIG['PutWithACL']
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-algorithm'] = 'AES256'
rest.headers['x-amz-server-side-encryption-customer-key'] = base64.b64encode(rest.key[-32:].zfill(32))
rest.headers['x-amz-server-side-encryption-customer-key-MD5'] = base64.b64encode(
hashlib.md5(rest.key[-32:].zfill(32)).digest())
elif CONFIG['SrvSideEncryptType'].lower() == 'sse-kms' and CONFIG[
'SrvSideEncryptAlgorithm'].lower() == 'aws:kms':
rest.headers['x-amz-server-side-encryption'] = 'aws:kms'
if CONFIG['SrvSideEncryptAWSKMSKeyId']:
rest.headers['x-amz-server-side-encryption-aws-kms-key-id'] = CONFIG['SrvSideEncryptAWSKMSKeyId']
if CONFIG['SrvSideEncryptContext']:
rest.headers['x-amz-server-side-encryption-context'] = CONFIG['SrvSideEncryptContext']
elif CONFIG['SrvSideEncryptType'].lower() == 'sse-kms' and CONFIG[
'SrvSideEncryptAlgorithm'].lower() == 'aes256':
rest.headers['x-amz-server-side-encryption'] = 'AES256'
if CONFIG['TpsPerThread']: # throttle TPS
# Compute the time this request should start at: completed requests / TPS limit + thread start time
dst_time = total_requests * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, rest.requestType, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
total_requests += 1
upload_id = resp.return_data
logging.info("upload id: %s" % upload_id)
# 2. Upload the parts sequentially
rest.requestType = 'UploadPart'
rest.method = 'PUT'
rest.headers = {}
rest.queryArgs = {}
rest.sendContent = ''
rest.headers['content-type'] = 'application/octet-stream'
rest.queryArgs['uploadId'] = upload_id
part_number = 1
fixed_size = False
part_etags = {}
while part_number <= CONFIG['PartsForEachUploadID']:
rest.queryArgs['partNumber'] = str(part_number)
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-algorithm'] = 'AES256'
rest.headers['x-amz-server-side-encryption-customer-key'] = base64.b64encode(
rest.key[-32:].zfill(32))
rest.headers['x-amz-server-side-encryption-customer-key-MD5'] = base64.b64encode(
hashlib.md5(rest.key[-32:].zfill(32)).digest())
if CONFIG['TpsPerThread']: # throttle TPS
# Compute the time this request should start at: completed requests / TPS limit + thread start time
dst_time = total_requests * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
for _ in xrange(CONFIG['PutTimesForOnePart']):
if not fixed_size:
rest.contentLength, fixed_size = Util.generate_a_size(CONFIG['PartSize'])
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request(cal_md5=CONFIG['CalHashMD5'],
memory_file=SHARE_MEM)
result_queue.put(
(process_id, user.username, rest.recordUrl, rest.requestType, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
total_requests += 1
if resp.status.startswith('200 '):
part_etags[part_number] = resp.return_data
part_number += 1
# 3. Complete the multipart upload (merge the parts)
rest.requestType = 'CompleteMultiUpload'
rest.method = 'POST'
rest.headers = {}
rest.queryArgs = {}
rest.headers['content-type'] = 'application/xml'
rest.queryArgs['uploadId'] = upload_id
rest.sendContent = '<CompleteMultipartUpload>'
for part_index in sorted(part_etags):
rest.sendContent += '<Part><PartNumber>%d</PartNumber><ETag>%s</ETag></Part>' % (
part_index, part_etags[part_index])
rest.sendContent += '</CompleteMultipartUpload>'
if CONFIG['TpsPerThread']: # throttle TPS
# Compute the time this request should start at: completed requests / TPS limit + thread start time
dst_time = total_requests * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request()
result_queue.put(
(process_id, user.username, rest.recordUrl, rest.requestType, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
total_requests += 1
j += 1
i += 1
def get_object_upload(process_id, user, conn, result_queue):
upload_ids = []
upload_id_file = 'data/upload_id-%d.dat' % process_id
try:
with open(upload_id_file, 'r') as fd:
for line in fd:
if line.strip() == '':
continue
# Skip upload_ids that were not initialized by this thread's user.
if not line.startswith(user.username + '\t'):
continue
if len(line.split('\t')) != 4:
logging.warn('upload_ids record error [%s]' % line)
continue
upload_ids.append(line.strip().split('\t'))
fd.close()
logging.info('process %d load upload_ids file %s end' % (process_id, upload_id_file))
except Exception, data:
logging.error("load %s for process %d error, [%r], exit" % (upload_id_file, process_id, data))
return
if not upload_ids:
logging.warning("load no upload_ids for process %d, from file upload_id-%r exit" % (process_id, upload_id_file))
return
else:
logging.info("total load %d upload_ids" % len(upload_ids))
request_type = 'GetObjectUpload'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
total_requests = 0
for upload_id in upload_ids:
rest.bucket = upload_id[1]
rest.key = upload_id[2]
rest.queryArgs['uploadId'] = upload_id[3]
if CONFIG['TpsPerThread']: # throttle TPS
# Compute the time this request should start at: completed requests / TPS limit + thread start time
dst_time = total_requests * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request(cal_md5=CONFIG['CalHashMD5'])
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, '', resp.request_id, resp.status, resp.id2))
def put_object_acl(process_id, user, conn, result_queue):
request_type = 'PutObjectAcl'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-algorithm'] = 'AES256'
if CONFIG['Testcase'] in (202, 900) and CONFIG['Range']:
rest.headers['Range'] = 'bytes=%s' % random.choice(CONFIG['Range'].split(';')).strip()
# If OBJECTS were passed in, operate on them directly.
global OBJECTS
if OBJECTS:
handle_from_objects(request_type, rest, process_id, user, conn, result_queue)
return
# If a local file under data/ records uploaded object names and versions, read from it.
obj_v_file = 'data/objv-%d.dat' % process_id
if os.path.exists(obj_v_file) and os.path.getsize(obj_v_file) > 0:
handle_from_obj_v(request_type, obj_v_file, rest, process_id, user, conn, result_queue)
return
# Otherwise use lexically generated object names.
if not CONFIG['ObjectLexical']:
logging.warn('Object name is not lexical, exit..')
return
i = 0
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['ObjectNameFixed']:
rest.key = CONFIG['ObjectNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
rest.queryArgs["acl"] = None
rest.headers["x-amz-acl"] = 'public-read'
while i < CONFIG['BucketsPerUser']:
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
j = 0
while j < CONFIG['ObjectsPerBucketPerThread']:
if CONFIG['TpsPerThread']: # throttle TPS
# Compute the time this request should start at: completed requests / TPS limit + thread start time
dst_time = (i * CONFIG['ObjectsPerBucketPerThread'] + j) * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if CONFIG['Range']:
rest.headers['Range'] = 'bytes=%s' % random.choice(CONFIG['Range'].split(';')).strip()
if not CONFIG['ObjectNameFixed']:
if not CONFIG['ObjNamePatternHash']:
rest.key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
else:
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
rest.key = hashlib.md5(object_name).hexdigest() + '-' + object_name
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-key'] = base64.b64encode(rest.key[-32:].zfill(32))
rest.headers['x-amz-server-side-encryption-customer-key-MD5'] = base64.b64encode(
hashlib.md5(rest.key[-32:].zfill(32)).digest())
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request(cal_md5=CONFIG['CalHashMD5'])
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, 'MD5:' + str(resp.content_md5),
resp.request_id, resp.status, resp.id2))
j += 1
i += 1
def get_object_acl(process_id, user, conn, result_queue):
request_type = 'GetObjectAcl'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-algorithm'] = 'AES256'
if CONFIG['Testcase'] in (202, 900) and CONFIG['Range']:
rest.headers['Range'] = 'bytes=%s' % random.choice(CONFIG['Range'].split(';')).strip()
# If OBJECTS were passed in, operate on them directly.
global OBJECTS
if OBJECTS:
handle_from_objects(request_type, rest, process_id, user, conn, result_queue)
return
# If a local file under data/ records uploaded object names and versions, read from it.
obj_v_file = 'data/objv-%d.dat' % process_id
if os.path.exists(obj_v_file) and os.path.getsize(obj_v_file) > 0:
handle_from_obj_v(request_type, obj_v_file, rest, process_id, user, conn, result_queue)
return
# Otherwise use lexically generated object names.
if not CONFIG['ObjectLexical']:
logging.warn('Object name is not lexical, exit..')
return
i = 0
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['ObjectNameFixed']:
rest.key = CONFIG['ObjectNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
rest.queryArgs["acl"] = None
while i < CONFIG['BucketsPerUser']:
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
j = 0
while j < CONFIG['ObjectsPerBucketPerThread']:
if CONFIG['TpsPerThread']: # throttle TPS
# Compute the time this request should start at: completed requests / TPS limit + thread start time
dst_time = (i * CONFIG['ObjectsPerBucketPerThread'] + j) * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if CONFIG['Range']:
rest.headers['Range'] = 'bytes=%s' % random.choice(CONFIG['Range'].split(';')).strip()
if not CONFIG['ObjectNameFixed']:
if not CONFIG['ObjNamePatternHash']:
rest.key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
else:
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
rest.key = hashlib.md5(object_name).hexdigest() + '-' + object_name
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-key'] = base64.b64encode(rest.key[-32:].zfill(32))
rest.headers['x-amz-server-side-encryption-customer-key-MD5'] = base64.b64encode(
hashlib.md5(rest.key[-32:].zfill(32)).digest())
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request(cal_md5=CONFIG['CalHashMD5'])
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, 'MD5:' + str(resp.content_md5),
resp.request_id, resp.status, resp.id2))
j += 1
i += 1
def options_object(process_id, user, conn, result_queue):
request_type = 'OptionsObject'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
# rest.headers['Access-Control-Request-Method'] = CONFIG['AllowedMethod']
if CONFIG['AllowedMethod']:
if ',' in CONFIG['AllowedMethod']:
rest.headers['Access-Control-Request-Method'] = []
for i in CONFIG['AllowedMethod'].split(','):
rest.headers['Access-Control-Request-Method'].append(i.upper())
else:
rest.headers['Access-Control-Request-Method'] = CONFIG['AllowedMethod'].upper()
else:
rest.headers['Access-Control-Request-Method'] = 'GET'
rest.headers['Origin'] = CONFIG['DomainName']
# If OBJECTS were passed in, operate on them directly.
global OBJECTS
if OBJECTS:
handle_from_objects(request_type, rest, process_id, user, conn, result_queue)
return
# If a local file under data/ records uploaded object names and versions, read from it.
obj_v_file = 'data/objv-%d.dat' % process_id
if os.path.exists(obj_v_file) and os.path.getsize(obj_v_file) > 0:
handle_from_obj_v(request_type, obj_v_file, rest, process_id, user, conn, result_queue)
return
# Otherwise use lexically generated object names.
if not CONFIG['ObjectLexical']:
logging.warn('Object name is not lexical, exit..')
return
i = 0
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['ObjectNameFixed']:
rest.key = CONFIG['ObjectNameFixed']
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
while i < CONFIG['BucketsPerUser']:
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
j = 0
while j < CONFIG['ObjectsPerBucketPerThread']:
if CONFIG['TpsPerThread']: # throttle TPS
# Compute the time this request should start at: completed requests / TPS limit + thread start time
dst_time = (i * CONFIG['ObjectsPerBucketPerThread'] + j) * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if CONFIG['Range']:
rest.headers['Range'] = 'bytes=%s' % random.choice(CONFIG['Range'].split(';')).strip()
if not CONFIG['ObjectNameFixed']:
if not CONFIG['ObjNamePatternHash']:
rest.key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
else:
object_name = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
rest.key = hashlib.md5(object_name).hexdigest() + '-' + object_name
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-key'] = base64.b64encode(rest.key[-32:].zfill(32))
rest.headers['x-amz-server-side-encryption-customer-key-MD5'] = base64.b64encode(
hashlib.md5(rest.key[-32:].zfill(32)).digest())
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request(cal_md5=CONFIG['CalHashMD5'])
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, 'MD5:' + str(resp.content_md5),
resp.request_id, resp.status, resp.id2))
j += 1
i += 1
def post_object(process_id, user, conn, result_queue):
request_type = 'PostObject'
rest = obsPyCmd.OBSRequestDescriptor(request_type, ak=user.ak, sk=user.sk,
auth_algorithm=CONFIG['AuthAlgorithm'], virtual_host=CONFIG['VirtualHost'],
domain_name=CONFIG['DomainName'], region=CONFIG['Region'],
is_http2=CONFIG['IsHTTP2'], host=conn.host)
rest.headers['content-type'] = 'multipart/form-data; boundary=---------------------------7db143f50da2 '
fixed_size = False
if CONFIG['BucketNameFixed']:
rest.bucket = CONFIG['BucketNameFixed']
if CONFIG['ObjectNameFixed']:
rest.key = CONFIG['ObjectNameFixed']
if CONFIG['SrvSideEncryptType'].lower() == 'sse-kms':
rest.headers['x-amz-server-side-encryption'] = 'aws:kms'
elif CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-algorithm'] = 'AES256'
obj_v = ''
obj_v_file = 'data/objv-%d.dat' % process_id
open(obj_v_file, 'w').write(obj_v)
# Stagger each thread's starting bucket to avoid a single-bucket bottleneck.
range_arr = range(0, CONFIG['BucketsPerUser'])
if CONFIG['AvoidSinBkOp']:
range_arr = range(process_id % CONFIG['BucketsPerUser'], CONFIG['BucketsPerUser']) + range(0, process_id % CONFIG['BucketsPerUser'])
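# Example (assumed values): with BucketsPerUser=4 and process_id=6, range_arr is [2, 3, 0, 1],
# so this thread starts at bucket 2 instead of bucket 0.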
start_time = None
if CONFIG['TpsPerThread']:
start_time = time.time() # start time
buckets_cover = 0 # number of buckets visited so far
for i in range_arr:
if not CONFIG['BucketNameFixed']:
rest.bucket = '%s.%s.%d' % (user.ak.lower(), CONFIG['BucketNamePrefix'], i)
j = 0
while j < CONFIG['ObjectsPerBucketPerThread']:
if not CONFIG['ObjectNameFixed']:
if CONFIG['ObjectLexical']:
rest.key = CONFIG['ObjectNamePartten'].replace('processID', str(process_id)).replace('Index', str(j)).replace('ObjectNamePrefix', CONFIG['ObjectNamePrefix'])
else:
object_name = Util.random_string_create(86)
rest.key = hashlib.md5(object_name).hexdigest() + '-' + object_name
if CONFIG['SrvSideEncryptType'].lower() == 'sse-c':
rest.headers['x-amz-server-side-encryption-customer-key'] = base64.b64encode(rest.key[-32:].zfill(32))
rest.headers['x-amz-server-side-encryption-customer-key-MD5'] = base64.b64encode(
hashlib.md5(rest.key[-32:].zfill(32)).digest())
logging.debug('side-encryption-customer-key: [%r]' % rest.key[-32:].zfill(32))
put_times_for_one_obj = CONFIG['PutTimesForOneObj']
while put_times_for_one_obj > 0:
if CONFIG['TpsPerThread']: # throttle TPS
# Compute the time this request should start at: completed requests / TPS limit + thread start time
dst_time = (buckets_cover * CONFIG['ObjectsPerBucketPerThread'] * CONFIG['PutTimesForOneObj'] + j * CONFIG['PutTimesForOneObj'] + (CONFIG['PutTimesForOneObj'] - put_times_for_one_obj)) * 1.0 / CONFIG['TpsPerThread'] + start_time
wait_time = dst_time - time.time()
if wait_time > 0:
time.sleep(wait_time)
if not fixed_size:
# change size every request for the same obj.
rest.contentLength, fixed_size = Util.generate_a_size(CONFIG['ObjectSize'])
put_times_for_one_obj -= 1
rest.sendContent = '''
-----------------------------7db143f50da2
Content-Disposition: form-data; name="key"
%s
-----------------------------7db143f50da2
Content-Disposition: form-data; name="file"
Content-Type: text/plain
01234567890
Content-Disposition: form-data; name="submit"
Upload
-----------------------------7db143f50da2--
''' % rest.key
resp = obsPyCmd.OBSRequestHandler(rest, conn).make_request(cal_md5=CONFIG['CalHashMD5'])
result_queue.put(
(process_id, user.username, rest.recordUrl, request_type, resp.start_time,
resp.end_time, resp.send_bytes, resp.recv_bytes, 'MD5:' + str(resp.content_md5),
resp.request_id, resp.status, resp.id2))
if resp.return_data:
obj_v += '%s\t%s\t%s\n' % (rest.bucket, rest.key, resp.return_data)
# Every 1 KB of buffered records, append the objects' version IDs to the local file objv-<process_id>.dat
if len(obj_v) >= 1024:
logging.info('write obj_v to file %s' % obj_v_file)
open(obj_v_file, 'a').write(obj_v)
obj_v = ''
j += 1
buckets_cover += 1
if obj_v:
open(obj_v_file, 'a').write(obj_v)
# Entry point for each worker process
def start_process(process_id, user, test_case, results_queue, valid_start_time, valid_end_time, current_threads, lock,
conn=None, call_itself=False):
global OBJECTS, CONFIG
# When invoked recursively for mixed operations, do not add a user and do not wait.
if not call_itself:
lock.acquire()
current_threads.value += 1
lock.release()
# Wait for all users to start
while True:
# If the start time has already been refreshed by another process, skip ahead.
if valid_start_time.value == float(sys.maxint):
# Once all users have started, record the valid start time
if current_threads.value == CONFIG['Threads']:
valid_start_time.value = time.time() + 2
else:
time.sleep(.06)
else:
break
time.sleep(2)
# For repeated mixed operations, reuse the existing connection instead of allocating a new one
if not conn:
conn = obsPyCmd.MyHTTPConnection(host=CONFIG['OSCs'], is_secure=CONFIG['IsHTTPs'],
ssl_version=CONFIG['sslVersion'], timeout=CONFIG['ConnectTimeout'],
serial_no=process_id, long_connection=CONFIG['LongConnection'],
conn_header=CONFIG['ConnectionHeader'], anonymous=CONFIG['Anonymous'],
is_http2=CONFIG['IsHTTP2'])
if test_case != 900:
try:
method_to_call = globals()[TESTCASES[test_case].split(';')[1]]
logging.debug('method %s called ' % method_to_call.__name__)
method_to_call(process_id, user, conn, results_queue)
except KeyboardInterrupt:
pass
except Exception, e:
import traceback
logging.error('Call method for test case %d except: %s' % (test_case, traceback.format_exc()))
elif test_case == 900:
test_cases = [int(case) for case in CONFIG['MixOperations'].split(',')]
tmp = 0
while tmp < CONFIG['MixLoopCount']:
logging.debug("loop count: %d " % tmp)
tmp += 1
for case in test_cases:
logging.debug("case %d in mix loop called " % case)
start_process(process_id, user, case, results_queue, valid_start_time,
valid_end_time, current_threads, lock, conn, True)
# When invoked recursively for mixed operations, return directly without closing the connection or decrementing the user count.
if call_itself:
return
# close connection for this thread
if conn:
conn.close_connection()
# After finishing the workload, the first user to exit records the valid end time
if current_threads.value == CONFIG['Threads']:
valid_end_time.value = time.time()
logging.info('thread [' + str(process_id) + '], exit, set valid_end_time = ' + str(valid_end_time.value))
# Exit
lock.acquire()
current_threads.value -= 1
lock.release()
logging.info('process_id [%d] exit, set current_threads.value = %d' % (process_id, current_threads.value))
def get_total_requests():
global OBJECTS, CONFIG
if CONFIG['Testcase'] == 100:
return CONFIG['RequestsPerThread'] * CONFIG['Threads']
elif CONFIG['Testcase'] in (
101, 103, 104, 105, 106, 111, 112, 141, 142, 143, 151, 152, 153, 161, 162, 163, 164, 165, 167, 168, 170, 171, 173,
174, 175, 176, 177, 178, 179, 180, 182, 185, 188):
return CONFIG['BucketsPerUser'] * CONFIG['Users']
elif CONFIG['Testcase'] in (201,):
return CONFIG['ObjectsPerBucketPerThread'] * CONFIG['BucketsPerUser'] * CONFIG['Threads'] * CONFIG[
'PutTimesForOneObj']
elif CONFIG['Testcase'] in (202, 203, 204, 206, 207, 211, 217, 218, 219, 221, 226):
if len(OBJECTS) > 0:
return len(OBJECTS)
# If object/version data was loaded from data/, the total count is unknown.
if CONFIG['Testcase'] in (202, 204):
for i in range(CONFIG['Threads']):
obj_v_file = 'data/objv-%d.dat' % i
if os.path.exists(obj_v_file) and os.path.getsize(obj_v_file) > 0:
return -1
return CONFIG['ObjectsPerBucketPerThread'] * CONFIG['BucketsPerUser'] * CONFIG['Threads']
elif CONFIG['Testcase'] in (205,):
return int((CONFIG['ObjectsPerBucketPerThread'] + CONFIG['DeleteObjectsPerRequest'] - 1) / CONFIG[
'DeleteObjectsPerRequest']) * CONFIG['BucketsPerUser'] * CONFIG['Threads']
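# (a + b - 1) / b is integer ceiling division, e.g. 10 objects with DeleteObjectsPerRequest=3 give
# (10 + 3 - 1) / 3 = 4 multi-delete requests per bucket per thread (illustrative numbers).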
elif CONFIG['Testcase'] in (216,):
return CONFIG['ObjectsPerBucketPerThread'] * CONFIG['BucketsPerUser'] * CONFIG['Threads'] * (
2 + CONFIG['PartsForEachUploadID'])
# For some request types the total number of requests cannot be computed; return -1
else:
return -1
# return True: pass, False: failed
def precondition():
global CONFIG, TESTCASES
# Check whether the current user is root
import getpass
import platform
if 'root' != getpass.getuser() and platform.system().lower().startswith('linux'):
return False, "\033[1;31;40m%s\033[0m Please run with root account other than '%s'" % (
"[ERROR]", getpass.getuser())
# Check whether the test case is supported
if CONFIG['Testcase'] not in TESTCASES:
return False, "\033[1;31;40m%s\033[0m Test Case [%d] not supported" % ("[ERROR]", CONFIG['Testcase'])
# Server-side encryption requires HTTPS and the AWSV4 auth algorithm
if CONFIG['SrvSideEncryptType']:
if not CONFIG['IsHTTPs']:
CONFIG['IsHTTPs'] = True
logging.warning('change IsHTTPs to True because SrvSideEncryptType is set')
if CONFIG['AuthAlgorithm'] != 'AWSV4':
CONFIG['AuthAlgorithm'] = 'AWSV4'
logging.warning('change AuthAlgorithm to AWSV4 because SrvSideEncryptType is set')
# Load users and check that enough of them are available
logging.info('loading users...')
read_users()
if CONFIG['Users'] > len(USERS):
return False, "\033[1;31;40m%s\033[0m Not enough users in users.dat after index %d: %d < [Users=%d]" % (
"[ERROR]", CONFIG['UserStartIndex'], len(USERS), CONFIG['Users'])
# Test the network connection
if CONFIG['IsHTTPs']:
try:
import ssl as ssl
if not CONFIG['sslVersion']:
CONFIG['sslVersion'] = 'SSLv23'
logging.info('import ssl module done, config ssl Version: %s' % CONFIG['sslVersion'])
except ImportError:
logging.warning('import ssl module error')
return False, 'Python version %s ,import ssl module error' % sys.version.split(' ')[0]
oscs = CONFIG['OSCs'].split(',')
for end_point in oscs:
print 'Testing connection to %s\t' % end_point.ljust(20),
sys.stdout.flush()
test_conn = None
try:
test_conn = obsPyCmd.MyHTTPConnection(host=end_point, is_secure=CONFIG['IsHTTPs'],
ssl_version=CONFIG['sslVersion'], timeout=60, serial_no=0,
is_http2=CONFIG['IsHTTP2'])
test_conn.create_connection()
test_conn.connect_connection()
ssl_ver = ''
if CONFIG['IsHTTPs'] and not CONFIG['IsHTTP2']:
if Util.compare_version(sys.version.split()[0], '2.7.9') < 0:
ssl_ver = test_conn.connection.sock._sslobj.cipher()[1]
else:
ssl_ver = test_conn.connection.sock._sslobj.version()
rst = '\033[1;32;40mSUCCESS %s\033[0m'.ljust(10) % ssl_ver
else:
rst = '\033[1;32;40mSUCCESS\033[0m'.ljust(10)
print rst
logging.info(
'connect %s success, python version: %s, ssl_ver: %s' % (
end_point, sys.version.replace('\n', ' '), ssl_ver))
except Exception, data:
logging.error('Caught exception when testing connection with %s, except: %s' % (end_point, data))
print '\033[1;31;40m%s *%s*\033[0m' % (' Failed'.ljust(8), data)
return False, 'Check connection failed'
finally:
if test_conn:
test_conn.close_connection()
# Create the data and position directories
if not os.path.exists('data'):
os.mkdir('data')
if not os.path.exists('position'):
os.mkdir('position')
return True, 'check passed'
def get_objects_from_file(file_name):
global OBJECTS
if not os.path.exists(file_name):
print 'ERROR, the file %s configured in config.dat does not exist' % file_name
sys.exit(0)
try:
with open(file_name, 'r') as fd:
for line in fd:
if line.strip() == '':
continue
if len(line.split(',')) != 13:
continue
if line.split(',')[2][1:].find('/') == -1:
continue
if line.split(',')[11].strip().startswith('200'):
OBJECTS.append(line.split(',')[2][1:])
fd.close()
logging.warning('load file %s end, get objects [%d]' % (file_name, len(OBJECTS)))
except Exception, data:
msg = 'load file %s except, %s' % (file_name, data)
logging.error(msg)
print msg
sys.exit()
if len(OBJECTS) == 0:
print 'get no objects in file %s' % file_name
sys.exit()
# running config
CONFIG = {}
# test users
USERS = []
OBJECTS = []
# initialize a shared memory file with fixed size: 1M
SHARE_MEM = create_file_in_memory()
APPEND_OBJECTS = {}
LIST_INDEX = []
TESTCASES = {100: 'ListUserBuckets;list_user_buckets',
101: 'CreateBucket;create_bucket',
102: 'ListObjectsInBucket;list_objects_in_bucket',
103: 'HeadBucket;head_bucket',
104: 'DeleteBucket;delete_bucket',
105: 'BucketDelete;bucket_delete',
106: 'OptionsBucket;options_bucket',
111: 'PutBucketVersioning;put_bucket_versioning',
112: 'GetBucketVersioning;get_bucket_versioning',
141: 'PutBucketWebsite;put_bucket_website',
142: 'GetBucketWebsite;get_bucket_website',
143: 'DeleteBucketWebsite;delete_bucket_website',
151: 'PutBucketCors;put_bucket_cors',
152: 'GetBucketCors;get_bucket_cors',
153: 'DeleteBucketCors;delete_bucket_cors',
161: 'PutBucketTag;put_bucket_tag',
162: 'GetBucketTag;get_bucket_tag',
163: 'DeleteBucketTag;delete_bucket_tag',
164: 'PutBucketLog;put_bucket_log',
165: 'GetBucketLog;get_bucket_log',
167: 'PutBucketStorageQuota;put_bucket_storage_quota',
168: 'GetBucketStorageQuota;get_bucket_storage_quota',
170: 'PutBucketAcl;put_bucket_acl',
171: 'GetBucketAcl;get_bucket_acl',
173: 'PutBucketPolicy;put_bucket_policy',
174: 'GetBucketPolicy;get_bucket_policy',
175: 'DeleteBucketPolicy;delete_bucket_policy',
176: 'PutBucketLifecycle;put_bucket_lifecycle',
177: 'GetBucketLifecycle;get_bucket_lifecycle',
178: 'DeleteBucketLifecycle;delete_bucket_lifecycle',
179: 'PutBucketNotification;put_bucket_notification',
180: 'GetBucketNotification;get_bucket_notification',
182: 'GetBucketMultiPartsUpload;get_bucket_multi_parts_upload',
185: 'GetBucketLocation;get_bucket_location',
188: 'GetBucketStorageInfo;get_bucket_storageinfo',
201: 'PutObject;put_object',
202: 'GetObject;get_object',
203: 'HeadObject;head_object',
204: 'DeleteObject;delete_object',
205: 'DeleteMultiObjects;delete_multi_objects',
206: 'CopyObject;copy_object',
207: 'RestoreObject;restore_object',
208: 'AppendObject;append_object',
209: 'ImageProcess;image_process',
211: 'InitMultiUpload;init_multi_upload',
212: 'UploadPart;upload_part',
213: 'CopyPart;copy_part',
214: 'CompleteMultiUpload;complete_multi_upload',
215: 'AbortMultiUpload;abort_multi_upload',
216: 'MultiPartsUpload;multi_parts_upload',
217: 'GetObjectUpload;get_object_upload', # requires InitMultiUpload to be run first
218: 'PutObjectAcl;put_object_acl',
219: 'GetObjectAcl;get_object_acl',
221: 'OptionsObject;options_object',
226: 'PostObject;post_object',
900: 'MixOperation;'
}
def generate_run_header(mode):
"""
generate tool running header
:param mode: running mode
:return: version
"""
mode = '------------------------Mode: %s----------------------------' % mode
logging.warning(VERSION)
logging.warning(mode)
print VERSION, mode
print 'Config loaded'
return VERSION
def generate_distributed_mode_information(master, slaves):
"""
:param master:
:param slaves:
:return: None
"""
print "Role IP"
print "%s %s" % (Role.MASTER, master)
for slave in slaves:
print "%s %s" % (Role.SLAVE, slave.localIP)
def run_in_distributed_mode(mode):
"""
run obscmdbench in distributed mode
:param mode: running mode
:return: None
"""
if not os.path.exists('result'):
os.mkdir('result')
master_path = os.getcwd() + '/result/'
# Print the tool version header for this running mode
version = generate_run_header(mode)
# Load the specified configuration file
logging.info('loading distributed mode config...')
distribute_config_file = 'distribute_config.dat'
# Read all relevant settings from distribute_config.dat
distribute_config = Util.read_distribute_config(distribute_config_file)
# Get the slave server information and establish connections
slaves = Util.generate_slave_servers(distribute_config)
connects = Util.generate_connections(slaves)
# Print the provided master and slave server information
generate_distributed_mode_information(distribute_config['Master'], slaves)
threads = []
for connect in connects:
t = threading.Thread(target=Util.start_tool, args=(connect, CONFIG['Testcase'],
int(distribute_config['RunTime']),))
threads.append(t)
for thread in threads:
thread.start()
print "\nAll threads started..."
for thread in threads:
thread.join()
time.sleep(int(distribute_config['RunTime']) + 10)
print "Close old connections"
for connect in connects:
connect.close()
print "Start to collect data from slaves..."
file_line_number_list = []
tps_list = []
avg_latency_list = []
requests_list = []
requests_ok_list = []
run_time_list = []
send_bytes_list = []
recv_bytes_list = []
result_file = time.strftime('result/%Y.%m.%d_%H.%M.%S', time.localtime()) + '_distributed_result.txt'
report_writer = open(result_file, 'w')
report_writer.write('\n*****************Result****************\n')
# start collecting brief data
logging.warn("start to collect data from slave servers")
logging.warn("build new connections")
new_connects = Util.generate_connections(slaves)
for connect in new_connects:
slave_brief_file_name = Util.get_brief_file_name(connect)
copy_slave_brief_to_master_cmd = r"scp `ls -t result/*_brief.txt | head -1` root@%s:%s%s" % (distribute_config['Master'], master_path, slave_brief_file_name + '[' + connect.ip + ']')
connect.execute_cmd(copy_slave_brief_to_master_cmd, expect_end="password", timeout=10)
connect.execute_cmd(connect.password, timeout=10)
tps = connect.execute_cmd(r"grep '\[TPS\]' `ls -t result/*_brief.txt | head -1` | awk '{print $2}'")
avg_latency = connect.execute_cmd(
r"grep '\[AvgLatency\]' `ls -t result/*_brief.txt | head -1` | awk '{print $2}'")
requests = connect.execute_cmd(r"grep '\[Requests\]' `ls -t result/*_brief.txt | head -1` | awk '{print $2}'")
requests_ok = connect.execute_cmd(r"grep '\[OK\]' `ls -t result/*_brief.txt | head -1` | awk '{print $2}'")
run_time = connect.execute_cmd(r"grep 'runTime' `ls -t result/*_brief.txt | head -1` | awk '{print $2}'")
total_send_bytes = connect.execute_cmd(
r"grep 'roughTotalSendBytes' `ls -t result/*_brief.txt | head -1` | awk '{print $2}'")
total_send_bytes = total_send_bytes.split('\r\n')
total_recv_bytes = connect.execute_cmd(
r"grep 'roughTotalRecvBytes' `ls -t result/*_brief.txt | head -1` | awk '{print $2}'")
total_recv_bytes = total_recv_bytes.split('\r\n')
tps_list.append(float(Util.generate_response(tps)))
avg_latency_list.append(float(Util.generate_response(avg_latency)))
requests_list.append(int(Util.generate_response(requests)))
requests_ok_list.append(int(Util.generate_response(requests_ok)))
run_time = Util.generate_response(run_time)
run_time_list.append(float(run_time))
send_bytes_list.append(int(total_send_bytes[0]))
recv_bytes_list.append(int(total_recv_bytes[0]))
# close connections
for connect in connects:
connect.close()
# prepare the data
# Util.create_result_folder()
tps = sum(requests_ok_list) / min(run_time_list)
avg_latency = sum(avg_latency_list) / len(avg_latency_list)
requests = sum(requests_list)
requests_ok = sum(requests_ok_list)
total_send_bytes = sum(send_bytes_list)
total_recv_bytes = sum(recv_bytes_list)
send_through_put = Util.convert_to_size_str(total_send_bytes / min(run_time_list)) + '/s'
recv_through_put = Util.convert_to_size_str(total_recv_bytes / min(run_time_list)) + '/s'
run_time = '%-52s' % Util.convert_time_format_str(min(run_time_list))
error_rate_without_format = 100.0 * ((float(requests) - float(requests_ok)) / float(requests))
error_rate_formatted = '%.2f %% ' % error_rate_without_format
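# Worked example with hypothetical numbers (for illustration only): two slaves reporting
# requests_ok of 9000 and 11000 with run times of 100 s and 110 s give
# tps = (9000 + 11000) / min(100, 110) = 200 req/s, and with requests = 21000 in total the
# error rate is 100 * (21000 - 20000) / 21000 ~= 4.76 %.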
total_result = '[RunTime] ' + run_time + '\n' + \
'[Requests] ' + str(requests) + '\n' + \
'[OK] ' + str(requests_ok) + '\n' + \
'[TPS] ' + str(tps) + '\n' + \
'[AvgLatency] ' + str(avg_latency) + '\n' + \
'[ErrorRate] ' + str(error_rate_formatted) + '\n' + \
'[DataSend] ' + Util.convert_to_size_str(total_send_bytes) + '\n' + \
'[DataRecv] ' + Util.convert_to_size_str(total_recv_bytes) + '\n' + \
'[SendThroughput] ' + send_through_put + '\n' + \
'[RecvThroughput] ' + recv_through_put + '\n'
if CONFIG["PrintProgress"]:
print "\n***Total result***"
print total_result
report_writer.write('\n*************************Result in brief*************************\n' + total_result + '\n')
report_writer.close()
time.sleep(2)
print "Your test data has been successfully saved to file: %s" % result_file
print "\nwaiting to exit..."
time.sleep(2)
print version
def check_connection(server, file_name):
"""
:param server:
:param file_name:
:return:
"""
while True:
start_time = int(round(time.time() * 1000))
command = r"curl --connect-timeout 8 -m 30 http://%s:5080 -v " % server + " > /dev/null 2>&1; echo $?"
result = commands.getstatusoutput(command)
end_time = int(round(time.time() * 1000))
used_time = end_time - start_time
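# Only failed probes (non-zero curl exit status) are recorded; each record is a CSV line of
# the form 'exit_status,start_ms,end_ms,elapsed_ms' appended to file_name.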
if result[0] != 0:
line = str(result[0]) + ',' + str(start_time) + ',' + str(end_time) + ',' + str(used_time)
os.system(r"echo '%s' >> %s 2>&1" % (line, file_name))
time.sleep(1)
def run_connection_checker():
"""
:return:
"""
logging.warn("start to curl...")
oscs = CONFIG['OSCs'].split(',')
thread_list = []
for server in oscs:
file_name = time.strftime('result/%Y.%m.%d_%H.%M.%S', time.localtime()) + '_' + TESTCASES[CONFIG['Testcase']].split(';')[0].split(';')[0] + '_' + str(int(CONFIG['Users']) * int(CONFIG['ThreadsPerUser'])) + '_curl_' + server + '.txt'
t = threading.Thread(target=check_connection, args=(server, file_name, ))
t.daemon = True
thread_list.append(t)
for t in thread_list:
t.start()
def run_in_integrated_mode(mode):
"""
run obscmdbench in integrated mode (the original, single-node way of running obscmdbench)
:param mode: running mode
:return: None
"""
# Initialize the tool version and running mode
version = generate_run_header(mode)
# Process the loaded configuration
print str(CONFIG).replace('\'', '')
logging.info(CONFIG)
# Pre-start checks
check_result, msg = precondition()
if not check_result:
print 'Check error, [%s] \nExit...' % msg
sys.exit()
if CONFIG['objectDesFile']:
# Check the operation type; other operations do not pre-read the object file, even if objectDesFile is configured
obj_op = ['202', '203', '204', '213']
if str(CONFIG['Testcase']) in obj_op or (
CONFIG['Testcase'] == 900 and (set(CONFIG['MixOperations'].split(',')) & set(obj_op))):
print 'begin to read object file %s' % CONFIG['objectDesFile']
get_objects_from_file(CONFIG['objectDesFile'])
print 'finish, get %d objects' % len(OBJECTS)
start_wait = False
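# start_wait is hard-coded to False (and the prompt loop below uses 'while False:'), so the
# interactive start-time synchronisation block that follows is effectively disabled.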
if start_wait:
tip = '''
--------------------------------------------------------------------------------
Important: This is how multiple clients can be started at the same time,
assuming all the client nodes are synced with the time server.
If it is now 02:10:00, enter 12 to change the minute and the run will start at 02:12:00.
--------------------------------------------------------------------------------
'''
print '\033[1;32;40m%s\033[0m' % tip
def input_func(input_data):
input_data['data'] = raw_input()
while False:
n = datetime.datetime.now()
print 'Now it\'s %2d:\033[1;32;40m%2d\033[0m:%2d, please input to change the minute' % (
n.hour, n.minute, n.second),
print '(Press \'Enter\' or wait 30 sec to run, \'q\' to exit): ',
try:
input_data = {'data': 'default'}
t = threading.Thread(target=input_func, args=(input_data,))
t.daemon = True
t.start()
t.join(30) # wait up to 30 seconds
if input_data['data'] == 'q':
sys.exit()
elif '' == input_data['data'] or 'default' == input_data['data']:
break
try:
input_data['data'] = int(input_data['data'])
except ValueError:
print '[ERROR] I only receive numbers (*>﹏<*)'
continue
n = datetime.datetime.now()
diff = input_data['data'] * 60 - (n.minute * 60 + n.second)
if diff > 0:
print 'Wait for %d seconds...' % diff
time.sleep(diff)
break
else:
break
except KeyboardInterrupt:
print '\nSystem exit...'
sys.exit()
msg = 'Start at %s, pid:%d. Press Ctrl+C to stop. Screen Refresh Interval: 3 sec' % (
time.strftime('%X %x %Z'), os.getpid())
print msg
logging.warning(msg)
# valid_start_time: the moment all concurrent workers have started.
# valid_end_time: the moment the first worker exits.
# current_threads: number of workers currently running; -2 means manual exit, -1 means normal exit.
global valid_start_time
valid_start_time = multiprocessing.Value('d', float(sys.maxint))
valid_end_time = multiprocessing.Value('d', float(sys.maxint))
current_threads = multiprocessing.Value('i', 0)
# results_queue: queue of per-request records, shared by all processes.
results_queue = multiprocessing.Queue(0)
# Start the statistics process: it pulls request records from the queue, saves them locally, and refreshes the real-time results.
results_writer = results.ResultWriter(CONFIG, TESTCASES[CONFIG['Testcase']].split(';')[0].split(';')[0],
results_queue, get_total_requests(),
valid_start_time, valid_end_time, current_threads)
results_writer.daemon = True
results_writer.name = 'resultsWriter'
results_writer.start()
print 'resultWriter started, pid: %d' % results_writer.pid
# Raise the priority of this process
os.system('renice -19 -p ' + str(results_writer.pid) + ' >/dev/null 2>&1')
time.sleep(.2)
if CONFIG['TestNetwork']:
run_connection_checker()
# Start the worker processes one after another
process_list = []
# Lock shared across all processes
lock = multiprocessing.Lock()
esc = chr(27) # escape key
i = 0
conn = None
if CONFIG['IsHTTP2'] and CONFIG['IsShareConnection']:
# HTTP/2 can multiplex multiple requests over one connection, so conn is shared in that case
conn = obsPyCmd.MyHTTPConnection(host=CONFIG['OSCs'], is_secure=CONFIG['IsHTTPs'],
ssl_version=CONFIG['sslVersion'], timeout=CONFIG['ConnectTimeout'],
long_connection=CONFIG['LongConnection'],
conn_header=CONFIG['ConnectionHeader'], anonymous=CONFIG['Anonymous'],
is_http2=CONFIG['IsHTTP2'])
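# When HTTP/2 with connection sharing is enabled, this single connection object is handed to
# every worker started below; otherwise conn stays None and each worker is left to open its
# own connection.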
while i < CONFIG['Threads']:
p = multiprocessing.Process(target=start_process, args=(
i, USERS[i / CONFIG['ThreadsPerUser']], CONFIG['Testcase'], results_queue, valid_start_time, valid_end_time,
current_threads,
lock, conn,
False))
i += 1
p.daemon = True
p.name = 'worker-%d' % i
p.start()
# Raise each worker process's priority by 1
os.system('renice -1 -p ' + str(p.pid) + ' >/dev/null 2>&1')
process_list.append(p)
logging.info('All %d threads started, valid_start_time: %.3f' % (len(process_list), valid_start_time.value))
# Forced exit before all requests have completed
def exit_force(signal_num, e):
msg = "\n\n\033[5;33;40m[WARN]Terminate Signal %d Received. Terminating... please wait\033[0m" % signal_num
logging.warn('%r' % msg)
print msg, '\nWaiting for all the threads to exit....'
lock.acquire()
current_threads.value = -2
lock.release()
time.sleep(.1)
tmpi = 0
for j in process_list:
if j.is_alive():
if tmpi >= 100:
logging.warning('force to terminate process %s' % j.name)
j.terminate()
else:
time.sleep(.1)
tmpi += 1
break
print "\033[1;32;40mWorkers exited.\033[0m Waiting curl checker exit...",
os.system(r"kill \-9 `pgrep curl`")
print "\033[1;32;40m[WARN] Terminated\033[0m\n"
print "\033[1;32;40mWorkers exited.\033[0m Waiting results_writer exit...",
sys.stdout.flush()
while results_writer.is_alive():
current_threads.value = -2
tmpi += 1
if tmpi > 1000:
logging.warn('retry too many times, shutdown results_writer using terminate()')
results_writer.generate_write_final_result()
results_writer.terminate()
time.sleep(.01)
print "\n\033[1;33;40m[WARN] Terminated\033[0m\n"
print version
sys.exit()
import signal
signal.signal(signal.SIGINT, exit_force)
signal.signal(signal.SIGTERM, exit_force)
time.sleep(1)
# Normal exit
stop_mark = False
while not stop_mark:
time.sleep(.3)
if CONFIG['RunSeconds'] and (time.time() - valid_start_time.value >= CONFIG['RunSeconds']):
logging.warn('time is up, exit')
results_writer.generate_write_final_result()
exit_force(99, None)
for j in process_list:
if j.is_alive():
break
stop_mark = True
for j in process_list:
j.join()
# Wait for the results process to exit.
logging.info('Waiting results_writer to exit...')
print "\033[1;32;40mWorkers exited.\033[0m Waiting curl checker exit...",
os.system(r"kill \-9 `pgrep curl`")
print "\033[1;32;40m[WARN] Terminated\033[0m\n"
while results_writer.is_alive():
current_threads.value = -1 # inform results_writer
time.sleep(.3)
print "\n\033[1;33;40m[WARN] Terminated after all requests\033[0m\n"
print version
if __name__ == '__main__':
if not os.path.exists('log'):
os.mkdir('log')
logging.config.fileConfig('logging.conf')
# Load the specified config file
logging.info('loading config...')
config_file = 'config.dat'
if len(sys.argv[1:]) > 2:
config_file = sys.argv[1:][2]
# Read all settings from config.dat and store them in the global CONFIG
read_config(config_file)
# If command-line arguments are given, they override the config file.
if len(sys.argv[1:]) > 0:
CONFIG['Testcase'] = int(sys.argv[1:][0])
if CONFIG['Testcase'] == 209 or (CONFIG['Testcase'] == 900 and '209' in CONFIG['MixOperations']):
generate_image_process_parameters()
if len(sys.argv[1:]) > 1:
CONFIG['Users'] = int(sys.argv[1:][1])
CONFIG['Threads'] = CONFIG['Users'] * CONFIG['ThreadsPerUser']
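# e.g. with Users=2 and ThreadsPerUser=5 (illustrative values only), 10 worker processes are started.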
# For append-write requests (testcase 208), the object Position values must be loaded in advance
if CONFIG['Testcase'] == 208:
APPEND_OBJECTS = generate_append_object_position()
# Write the object indexes into a list
if is_needed_to_build_list_index():
initialize_object_index()
# Determine the running mode
if CONFIG['Mode'] == '1' or not CONFIG['IsMaster']:
# integrated mode, execute obscmdbench like before
run_in_integrated_mode(Mode.INTEGRATED)
elif CONFIG['Mode'] == '2' and CONFIG['IsMaster']:
run_in_distributed_mode(Mode.DISTRIBUTED)
| 51.923741 | 273 | 0.571613 | 22,942 | 222,649 | 5.393688 | 0.049385 | 0.02308 | 0.014603 | 0.025747 | 0.777957 | 0.754691 | 0.741987 | 0.729413 | 0.718075 | 0.70769 | 0 | 0.015472 | 0.300415 | 222,649 | 4,287 | 274 | 51.935853 | 0.778951 | 0.042574 | 0 | 0.706201 | 0 | 0.00704 | 0.214952 | 0.05525 | 0 | 0 | 0.000057 | 0 | 0 | 0 | null | null | 0.002708 | 0.008665 | null | null | 0.01381 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
365d6cdb8c83642f5c15ac88d1010417640bcbf6 | 34,721 | py | Python | TradeThread.py | v2okimochi/AutoTA-TriangularArbitrage | 1b00cc672ed688d833a37611c934da2bb29154ad | [
"MIT"
] | 3 | 2021-04-19T08:16:26.000Z | 2022-03-26T13:20:41.000Z | TradeThread.py | v2okimochi/AutoTA-TriangularArbitrage | 1b00cc672ed688d833a37611c934da2bb29154ad | [
"MIT"
] | null | null | null | TradeThread.py | v2okimochi/AutoTA-TriangularArbitrage | 1b00cc672ed688d833a37611c934da2bb29154ad | [
"MIT"
] | 1 | 2019-12-16T08:58:13.000Z | 2019-12-16T08:58:13.000Z | from PyQt5.QtCore import QThread, pyqtSignal
import time
# thread========================================
class TradeThread(QThread):
# When a signal carries a value back to the GUI, its type must be specified
stateChangeSignal = pyqtSignal(str) # update the displayed trading state
monitoredSignal = pyqtSignal(list) # results of the triangular-arbitrage calculation
stoppedTradeSignal = pyqtSignal() # notify the GUI after trading finishes
fundsSignal = pyqtSignal(list) # refresh the displayed available funds after each trade
profitSignal = pyqtSignal(int) # refresh the latest profit/loss display after trading finishes
statisticsSignal = pyqtSignal(list) # emit statistics computed from the records in the DB
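# Illustrative sketch (not part of the original file) of how a GUI could wire these signals
# up; the slot names on the window object are assumptions made for the example:
#
#   thread = TradeThread()
#   thread.setObj(exc, dba)                              # share EXCaccess / DBaccess instances
#   thread.stateChangeSignal.connect(window.show_state)  # hypothetical slot
#   thread.profitSignal.connect(window.show_profit)      # hypothetical slot
#   thread.start()                                       # QThread then calls run()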
def __init__(self):
super().__init__()
# Time limit (seconds) for each trade
self.limitTime_BTC_JPY = 60 * 60 * 2
self.limitTime_MONA_BTC = 60 * 60 * 6
self.limitTime_MONA_JPY = 60 * 60 * 3
self.limitTime_BCH_BTC = 60 * 60 * 6
self.limitTime_BCH_JPY = 60 * 60 * 3
self.limitTime_XEM_BTC = 60 * 60 * 4
self.limitTime_XEM_JPY = 60 * 60 * 3
self.limitTime_ETH_BTC = 60 * 60 * 4
self.limitTime_ETH_JPY = 60 * 60 * 3
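# i.e. 2 hours for BTC/JPY, 6 hours for MONA/BTC and BCH/BTC, 4 hours for XEM/BTC and ETH/BTC,
# and 3 hours for the remaining JPY pairs.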
# Loop flag: set to 0 to stop
self.loopFlag = 1
# Number of times each trade hit its time limit
self.limited_BtcJpy = 0
self.limited_MonaBtc = 0
self.limited_MonaJpy = 0
self.limited_BchBtc = 0
self.limited_BchJpy = 0
self.limited_XemBtc = 0
self.limited_XemJpy = 0
self.limited_EthBtc = 0
self.limited_EthJpy = 0
# Turn the loop flag on/off: keep looping while it is on ==========
def onLoop(self):
self.loopFlag = 1
def offLoop(self):
self.loopFlag = 0
# Share the exchange-access and DB-access instances
def setObj(self, exc, dba):
"""
:type exc: six_funds.EXCaccess.EXCaccess
:type dba: six_funds.DBaccess.DBaccess
"""
self.exc = exc
self.dba = dba
# Main loop
def run(self):
while 1:
if self.loopFlag == 1:
self.trading()
else:
self.stoppedTradeSignal.emit()
break
# Decide the trading route
def trading(self):
# [0]: trading route
# [1]: available JPY balance
# [2]: highest estimated profit
# [3]: list of estimated profits
# [4]: list of mid prices between buy and sell
self.stateChangeSignal.emit('Monitoring')
monitorList = self.exc.Monitoring()
judge = monitorList[0]
prevJPY = monitorList[1]
routeEstimate = monitorList[2]
diffs = monitorList[3]
T_aves = monitorList[4]
self.monitoredSignal.emit(diffs) # show each estimate in the GUI
if judge != 'no routes':
print(judge, end=' est=: ')
print(routeEstimate)
# Contents of T_aves:
# [0]:T_aveBtcJpy,
# [1]:T_aveMonaBtc,
# [2]:T_aveMonaJpy,
# [3]:T_aveBchBtc,
# [4]:T_aveBchJpy,
# [5]:T_aveXemBtc,
# [6]:T_aveXemJpy,
# [7]:T_aveEthBtc,
# [8]:T_aveEthJpy
# Trade according to the decided route
if judge == 'Jpy_Btc_Mona':
self.stateChangeSignal.emit('JPY->BTC->MONA')
self.exchange_JpyBtcMona(prevJPY, T_aves[0], routeEstimate)
elif judge == 'Jpy_Mona_Btc':
self.stateChangeSignal.emit('JPY->MONA->BTC')
self.exchange_JpyMonaBtc(prevJPY, T_aves[2], routeEstimate)
elif judge == 'Jpy_Btc_Bch':
self.stateChangeSignal.emit('JPY->BTC->BCH')
self.exchange_JpyBtcBch(prevJPY, T_aves[0], routeEstimate)
elif judge == 'Jpy_Bch_Btc':
self.stateChangeSignal.emit('JPY->BCH->BTC')
self.exchange_JpyBchBtc(prevJPY, T_aves[4], routeEstimate)
elif judge == 'Jpy_Btc_Xem':
self.stateChangeSignal.emit('JPY->BTC->XEM')
self.exchange_JpyBtcXem(prevJPY, T_aves[0], routeEstimate)
elif judge == 'Jpy_Xem_Btc':
self.stateChangeSignal.emit('JPY->XEM->BTC')
self.exchange_JpyXemBtc(prevJPY, T_aves[6], routeEstimate)
elif judge == 'Jpy_Btc_Eth':
self.stateChangeSignal.emit('JPY->BTC->ETH')
self.exchange_JpyBtcEth(prevJPY, T_aves[0], routeEstimate)
elif judge == 'Jpy_Eth_Btc':
self.stateChangeSignal.emit('JPY->ETH->BTC')
self.exchange_JpyEthBtc(prevJPY, T_aves[8], routeEstimate)
else:
return
self.stateChangeSignal.emit('trade finished')
statList = self.dba.statisticsTradeResult()
self.statisticsSignal.emit(statList)
print('Complete. Ready >')
# Show the current list of available funds in the GUI
def emitFunds(self):
funds = self.exc.getFunds()
fundsList = [funds['jpy'],
funds['btc'],
funds['mona'],
funds['BCH'],
funds['xem'],
funds['ETH']]
self.fundsSignal.emit(fundsList)
time.sleep(1)
return fundsList
def exchange_JpyBtcMona(self, prevJPY, aveBtcJpy, estJPY):
routeName = 'JPY->BTC->MONA'
# JPY->BTC==============================================
self.exc.order_JPY_BTC(prevJPY, aveBtcJpy)
time.sleep(1)
# Loop while an order whose comment is 'AutoTA' is still open
# Once that order is gone, 0 is returned → leave the loop
# If the loop does not finish within the time limit, force-quit and start over from monitoring
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('btc_jpy')
if check[0] == 0:
break
else:
if seconds > self.limitTime_BTC_JPY:
# Cancel the order and re-order
cancelFlag = self.exc.cancelOrder(check[0], 'btc_jpy')
if cancelFlag:
seconds = 0
aveBtcJpy = self.exc.getBTC_JPY()
time.sleep(1)
self.exc.order_BTC_JPY(aveBtcJpy)
self.limited_BtcJpy += 1
min_BtcJpy = int(seconds / 60) # time the trade took [minutes]
# >> add to the DB
self.dba.insertTrade(
routeName, 'JPY->BTC', min_BtcJpy, self.limited_BtcJpy)
self.emitFunds()
# BTC->MONA==============================================
aveMonaBtc = self.exc.getMONA_BTC()
time.sleep(1)
self.exc.order_BTC_MONA(aveMonaBtc)
time.sleep(1)
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('mona_btc')
if check[0] == 0:
break
else:
if seconds > self.limitTime_MONA_BTC:
seconds = 0
# Cancel the order and re-order
cancelFlag = self.exc.cancelOrder(check[0], 'mona_btc')
if cancelFlag:
seconds = 0
aveMonaBtc = self.exc.getMONA_BTC()
time.sleep(1)
self.exc.order_BTC_MONA(aveMonaBtc)
self.limited_MonaBtc += 1
min_MonaBtc = int(seconds / 60) # time the trade took [minutes]
# >> add to the DB
self.dba.insertTrade(
routeName, 'BTC->MONA', min_MonaBtc, self.limited_MonaBtc)
self.limited_MonaBtc = 0
self.emitFunds()
# MONA->JPY==============================================
aveMonaJpy = self.exc.getMONA_JPY()
time.sleep(1)
nextJPY = self.exc.order_MONA_JPY(aveMonaJpy)
time.sleep(1)
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('mona_jpy')
if check[0] == 0:
break
else:
if seconds > self.limitTime_MONA_JPY:
seconds = 0
# Cancel the order and re-order
cancelFlag = self.exc.cancelOrder(check[0], 'mona_jpy')
if cancelFlag:
seconds = 0
aveMonaJpy = self.exc.getMONA_JPY()
time.sleep(1)
nextJPY = self.exc.order_MONA_JPY(aveMonaJpy)
self.limited_MonaJpy += 1
min_MonaJpy = int(seconds / 60) # time the trade took [minutes]
# >> add to the DB
self.dba.insertTrade(
routeName, 'MONA->JPY', min_MonaJpy, self.limited_MonaJpy)
self.limited_MonaJpy = 0
self.emitFunds()
# Profit (difference) calculation======================================
profit = int(nextJPY - prevJPY)
self.profitSignal.emit(int(profit))
# >> add to the DB
self.dba.insertRoute(routeName, prevJPY, estJPY, profit)
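# The exchange_* methods below follow the same three-leg pattern as exchange_JpyBtcMona
# (place order -> poll active orders -> cancel and re-order at the current mid price when the
# time limit is hit -> record the leg in the DB -> refresh funds), differing only in the
# currency pairs traded; the final profit is always int(nextJPY - prevJPY).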
def exchange_JpyMonaBtc(self, prevJPY, aveMonaJpy, estJPY):
routeName = 'JPY->MONA->BTC'
# JPY->MONA==============================================
self.exc.order_JPY_MONA(prevJPY, aveMonaJpy)
time.sleep(1)
# Loop while an order whose comment is 'AutoTA' is still open
# Once that order is gone, 0 is returned → leave the loop
# If the loop does not finish within the time limit, force-quit and start over from monitoring
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('mona_jpy')
if check[0] == 0:
break
else:
if seconds > self.limitTime_MONA_JPY:
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'mona_jpy')
if cancelFlag:
seconds = 0
aveMonaJpy = self.exc.getMONA_JPY()
time.sleep(1)
self.exc.order_MONA_JPY(aveMonaJpy)
self.limited_MonaJpy += 1
min_MonaJpy = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'JPY->MONA', min_MonaJpy, self.limited_MonaJpy)
self.emitFunds()
# MONA->BTC==============================================
aveMonaBtc = self.exc.getMONA_BTC()
time.sleep(1)
self.exc.order_MONA_BTC(aveMonaBtc)
time.sleep(1)
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('mona_btc')
if check[0] == 0:
break
else:
if seconds > self.limitTime_MONA_BTC:
seconds = 0
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'mona_btc')
if cancelFlag:
seconds = 0
aveMonaBtc = self.exc.getMONA_BTC()
time.sleep(1)
self.exc.order_MONA_BTC(aveMonaBtc)
self.limited_MonaBtc += 1
min_MonaBtc = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'MONA->BTC', min_MonaBtc, self.limited_MonaBtc)
self.limited_MonaBtc = 0
self.emitFunds()
# BTC->JPY==============================================
aveBtcJpy = self.exc.getBTC_JPY()
time.sleep(1)
nextJPY = self.exc.order_BTC_JPY(aveBtcJpy)
time.sleep(1)
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('btc_jpy')
if check[0] == 0:
break
else:
if seconds > self.limitTime_BTC_JPY:
seconds = 0
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'btc_jpy')
if cancelFlag:
seconds = 0
aveBtcJpy = self.exc.getBTC_JPY()
time.sleep(1)
nextJPY = self.exc.order_BTC_JPY(aveBtcJpy)
self.limited_BtcJpy += 1
min_BtcJpy = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'BTC->JPY', min_BtcJpy, self.limited_BtcJpy)
self.limited_BtcJpy = 0
self.emitFunds()
# Profit (difference) calculation======================================
profit = int(nextJPY - prevJPY)
self.profitSignal.emit(int(profit))
# >> add to the DB
self.dba.insertRoute(routeName, prevJPY, estJPY, profit)
def exchange_JpyBtcBch(self, prevJPY, aveBtcJpy, estJPY):
routeName = 'JPY->BTC->BCH'
# JPY->BTC==============================================
self.exc.order_JPY_BTC(prevJPY, aveBtcJpy)
time.sleep(1)
# Loop while an order whose comment is 'AutoTA' is still open
# Once that order is gone, 0 is returned → leave the loop
# If the loop does not finish within the time limit, force-quit and start over from monitoring
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('btc_jpy')
if check[0] == 0:
break
else:
if seconds > self.limitTime_BTC_JPY:
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'btc_jpy')
if cancelFlag:
seconds = 0
aveBtcJpy = self.exc.getBTC_JPY()
time.sleep(1)
self.exc.order_BTC_JPY(aveBtcJpy)
self.limited_BtcJpy += 1
min_BtcJpy = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'JPY->BTC', min_BtcJpy, self.limited_BtcJpy)
self.emitFunds()
# BTC->BCH==============================================
aveBchBtc = self.exc.getBCH_BTC()
time.sleep(1)
self.exc.order_BTC_BCH(aveBchBtc)
time.sleep(1)
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('bch_btc')
if check[0] == 0:
break
else:
if seconds > self.limitTime_BCH_BTC:
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'bch_btc')
if cancelFlag:
seconds = 0
aveBchBtc = self.exc.getBCH_BTC()
time.sleep(1)
self.exc.order_BTC_BCH(aveBchBtc)
self.limited_BchBtc += 1
min_BchBtc = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'BTC->BCH', min_BchBtc, self.limited_BchBtc)
self.limited_BchBtc = 0
self.emitFunds()
# BCH->JPY==============================================
aveBchJpy = self.exc.getBCH_JPY()
time.sleep(1)
nextJPY = self.exc.order_BCH_JPY(aveBchJpy)
time.sleep(1)
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('bch_jpy')
if check[0] == 0:
break
else:
if seconds > self.limitTime_BCH_JPY:
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'bch_jpy')
if cancelFlag:
seconds = 0
aveBchJpy = self.exc.getBCH_JPY()
time.sleep(1)
nextJPY = self.exc.order_BCH_JPY(aveBchJpy)
self.limited_BchJpy += 1
min_BchJpy = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'BCH->JPY', min_BchJpy, self.limited_BchJpy)
self.limited_BchJpy = 0
self.emitFunds()
# Profit (difference) calculation======================================
profit = int(nextJPY - prevJPY)
self.profitSignal.emit(int(profit))
# >> add to the DB
self.dba.insertRoute(routeName, prevJPY, estJPY, profit)
def exchange_JpyBchBtc(self, prevJPY, aveBchJpy, estJPY):
routeName = 'JPY->BCH->BTC'
# JPY->BCH==============================================
self.exc.order_JPY_BCH(prevJPY, aveBchJpy)
time.sleep(1)
# Loop while an order whose comment is 'AutoTA' is still open
# Once that order is gone, 0 is returned → leave the loop
# If the loop does not finish within the time limit, force-quit and start over from monitoring
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('bch_jpy')
if check[0] == 0:
break
else:
if seconds > self.limitTime_BCH_JPY:
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'bch_jpy')
if cancelFlag:
seconds = 0
aveBchJpy = self.exc.getBCH_JPY()
time.sleep(1)
self.exc.order_BCH_JPY(aveBchJpy)
self.limited_BchJpy += 1
min_BchJpy = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'JPY->BCH', min_BchJpy, self.limited_BchJpy)
self.emitFunds()
# BCH->BTC==============================================
aveBchBtc = self.exc.getBCH_BTC()
time.sleep(1)
self.exc.order_BCH_BTC(aveBchBtc)
time.sleep(1)
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('bch_btc')
if check[0] == 0:
break
else:
if seconds > self.limitTime_BCH_BTC:
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'bch_btc')
if cancelFlag:
seconds = 0
aveBchBtc = self.exc.getBCH_BTC()
time.sleep(1)
self.exc.order_BCH_BTC(aveBchBtc)
self.limited_BchBtc += 1
min_BchBtc = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'BCH->BTC', min_BchBtc, self.limited_BchBtc)
self.limited_BchBtc = 0
self.emitFunds()
# BTC->JPY==============================================
aveBtcJpy = self.exc.getBTC_JPY()
time.sleep(1)
nextJPY = self.exc.order_BTC_JPY(aveBtcJpy)
time.sleep(1)
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('btc_jpy')
if check[0] == 0:
break
else:
if seconds > self.limitTime_BTC_JPY:
seconds = 0
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'btc_jpy')
if cancelFlag:
seconds = 0
aveBtcJpy = self.exc.getBTC_JPY()
time.sleep(1)
nextJPY = self.exc.order_BTC_JPY(aveBtcJpy)
self.limited_BtcJpy += 1
min_BtcJpy = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'BTC->JPY', min_BtcJpy, self.limited_BtcJpy)
self.limited_BtcJpy = 0
self.emitFunds()
# Profit (difference) calculation======================================
profit = int(nextJPY - prevJPY)
self.profitSignal.emit(int(profit))
# >> add to the DB
self.dba.insertRoute(routeName, prevJPY, estJPY, profit)
def exchange_JpyBtcXem(self, prevJPY, aveBtcJpy, estJPY):
routeName = 'JPY->BTC->XEM'
# JPY->BTC==============================================
self.exc.order_JPY_BTC(prevJPY, aveBtcJpy)
time.sleep(1)
# Loop while an order whose comment is 'AutoTA' is still open
# Once that order is gone, 0 is returned → leave the loop
# If the loop does not finish within the time limit, force-quit and start over from monitoring
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('btc_jpy')
if check[0] == 0:
break
else:
if seconds > self.limitTime_BTC_JPY:
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'btc_jpy')
if cancelFlag:
seconds = 0
aveBtcJpy = self.exc.getBTC_JPY()
time.sleep(1)
self.exc.order_BTC_JPY(aveBtcJpy)
self.limited_BtcJpy += 1
min_BtcJpy = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'JPY->BTC', min_BtcJpy, self.limited_BtcJpy)
self.emitFunds()
# BTC->XEM==============================================
aveXemBtc = self.exc.getXEM_BTC()
time.sleep(1)
self.exc.order_BTC_XEM(aveXemBtc)
time.sleep(1)
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('xem_btc')
if check[0] == 0:
break
else:
if seconds > self.limitTime_XEM_BTC:
seconds = 0
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'xem_btc')
if cancelFlag:
seconds = 0
aveXemBtc = self.exc.getXEM_BTC()
time.sleep(1)
self.exc.order_BTC_XEM(aveXemBtc)
self.limited_XemBtc += 1
min_XemBtc = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'BTC->XEM', min_XemBtc, self.limited_XemBtc)
self.limited_XemBtc = 0
self.emitFunds()
# XEM->JPY==============================================
aveXemJpy = self.exc.getXEM_JPY()
time.sleep(1)
nextJPY = self.exc.order_XEM_JPY(aveXemJpy)
time.sleep(1)
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('xem_jpy')
if check[0] == 0:
break
else:
if seconds > self.limitTime_XEM_JPY:
seconds = 0
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'xem_jpy')
if cancelFlag:
seconds = 0
aveXemJpy = self.exc.getXEM_JPY()
time.sleep(1)
nextJPY = self.exc.order_XEM_JPY(aveXemJpy)
self.limited_XemJpy += 1
min_XemJpy = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'XEM->JPY', min_XemJpy, self.limited_XemJpy)
self.limited_XemJpy = 0
self.emitFunds()
# Profit (difference) calculation======================================
profit = int(nextJPY - prevJPY)
self.profitSignal.emit(int(profit))
# >> add to the DB
self.dba.insertRoute(routeName, prevJPY, estJPY, profit)
def exchange_JpyXemBtc(self, prevJPY, aveXemJpy, estJPY):
routeName = 'JPY->XEM->BTC'
# JPY->XEM==============================================
self.exc.order_JPY_XEM(prevJPY, aveXemJpy)
time.sleep(1)
# Loop while an order whose comment is 'AutoTA' is still open
# Once that order is gone, 0 is returned → leave the loop
# If the loop does not finish within the time limit, force-quit and start over from monitoring
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('xem_jpy')
if check[0] == 0:
break
else:
if seconds > self.limitTime_XEM_JPY:
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'xem_jpy')
if cancelFlag:
aveXemJpy = self.exc.getXEM_JPY()
time.sleep(1)
self.exc.order_XEM_JPY(aveXemJpy)
self.limited_XemJpy += 1
min_XemJpy = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'JPY->XEM', min_XemJpy, self.limited_XemJpy)
self.emitFunds()
# XEM->BTC==============================================
aveXemBtc = self.exc.getXEM_BTC()
time.sleep(1)
self.exc.order_XEM_BTC(aveXemBtc)
time.sleep(1)
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('xem_btc')
if check[0] == 0:
break
else:
if seconds > self.limitTime_XEM_BTC:
seconds = 0
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'xem_btc')
if cancelFlag:
seconds = 0
aveXemBtc = self.exc.getXEM_BTC()
time.sleep(1)
self.exc.order_XEM_BTC(aveXemBtc)
self.limited_XemBtc += 1
min_XemBtc = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'XEM->BTC', min_XemBtc, self.limited_XemBtc)
self.limited_XemBtc = 0
self.emitFunds()
# BTC->JPY==============================================
aveBtcJpy = self.exc.getBTC_JPY()
time.sleep(1)
nextJPY = self.exc.order_BTC_JPY(aveBtcJpy)
time.sleep(1)
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('btc_jpy')
if check[0] == 0:
break
else:
if seconds > self.limitTime_BTC_JPY:
seconds = 0
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'btc_jpy')
if cancelFlag:
seconds = 0
aveBtcJpy = self.exc.getBTC_JPY()
time.sleep(1)
nextJPY = self.exc.order_BTC_JPY(aveBtcJpy)
self.limited_BtcJpy += 1
min_BtcJpy = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'BTC->JPY', min_BtcJpy, self.limited_BtcJpy)
self.limited_BtcJpy = 0
self.emitFunds()
# Profit (difference) calculation======================================
profit = int(nextJPY - prevJPY)
self.profitSignal.emit(int(profit))
# >> add to the DB
self.dba.insertRoute(routeName, prevJPY, estJPY, profit)
def exchange_JpyBtcEth(self, prevJPY, aveBtcJpy, estJPY):
routeName = 'JPY->BTC->ETH'
# JPY->BTC==============================================
self.exc.order_JPY_BTC(prevJPY, aveBtcJpy)
time.sleep(1)
# Loop while an order whose comment is 'AutoTA' is still open
# Once that order is gone, 0 is returned → leave the loop
# If the loop does not finish within the time limit, force-quit and start over from monitoring
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('btc_jpy')
if check[0] == 0:
break
else:
if seconds > self.limitTime_BTC_JPY:
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'btc_jpy')
if cancelFlag:
seconds = 0
aveBtcJpy = self.exc.getBTC_JPY()
time.sleep(1)
self.exc.order_BTC_JPY(aveBtcJpy)
self.limited_BtcJpy += 1
min_BtcJpy = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'JPY->BTC', min_BtcJpy, self.limited_BtcJpy)
self.emitFunds()
# BTC->ETH==============================================
aveEthBtc = self.exc.getETH_BTC()
time.sleep(1)
self.exc.order_BTC_ETH(aveEthBtc)
time.sleep(1)
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('eth_btc')
if check[0] == 0:
break
else:
if seconds > self.limitTime_ETH_BTC:
seconds = 0
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'eth_btc')
if cancelFlag:
seconds = 0
aveEthBtc = self.exc.getETH_BTC()
time.sleep(1)
self.exc.order_BTC_ETH(aveEthBtc)
self.limited_EthBtc += 1
min_EthBtc = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'BTC->ETH', min_EthBtc, self.limited_EthBtc)
self.limited_EthBtc = 0
self.emitFunds()
# ETH->JPY==============================================
aveEthJpy = self.exc.getETH_JPY()
time.sleep(1)
nextJPY = self.exc.order_ETH_JPY(aveEthJpy)
time.sleep(1)
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('eth_jpy')
if check[0] == 0:
break
else:
if seconds > self.limitTime_ETH_JPY:
seconds = 0
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'eth_jpy')
if cancelFlag:
seconds = 0
aveEthJpy = self.exc.getETH_JPY()
time.sleep(1)
nextJPY = self.exc.order_ETH_JPY(aveEthJpy)
self.limited_EthJpy += 1
min_EthJpy = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'ETH->JPY', min_EthJpy, self.limited_EthJpy)
self.limited_EthJpy = 0
self.emitFunds()
# Profit (difference) calculation======================================
profit = int(nextJPY - prevJPY)
self.profitSignal.emit(int(profit))
# >> add to the DB
self.dba.insertRoute(routeName, prevJPY, estJPY, profit)
def exchange_JpyEthBtc(self, prevJPY, aveEthJpy, estJPY):
routeName = 'JPY->ETH->BTC'
# JPY->ETH==============================================
self.exc.order_JPY_ETH(prevJPY, aveEthJpy)
time.sleep(1)
# Loop while an order whose comment is 'AutoTA' is still open
# Once that order is gone, 0 is returned → leave the loop
# If the loop does not finish within the time limit, force-quit and start over from monitoring
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('eth_jpy')
if check[0] == 0:
break
else:
if seconds > self.limitTime_ETH_JPY:
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'eth_jpy')
if cancelFlag:
seconds = 0
aveEthJpy = self.exc.getETH_JPY()
time.sleep(1)
self.exc.order_ETH_JPY(aveEthJpy)
self.limited_EthJpy += 1
min_EthJpy = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'JPY->ETH', min_EthJpy, self.limited_EthJpy)
self.emitFunds()
# ETH->BTC==============================================
aveEthBtc = self.exc.getETH_BTC()
time.sleep(1)
self.exc.order_ETH_BTC(aveEthBtc)
time.sleep(1)
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('eth_btc')
if check[0] == 0:
break
else:
if seconds > self.limitTime_ETH_BTC:
seconds = 0
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'eth_btc')
if cancelFlag:
seconds = 0
aveEthBtc = self.exc.getETH_BTC()
time.sleep(1)
self.exc.order_ETH_BTC(aveEthBtc)
self.limited_EthBtc += 1
min_EthBtc = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'ETH->BTC', min_EthBtc, self.limited_EthBtc)
self.limited_EthBtc = 0
self.emitFunds()
# BTC->JPY==============================================
aveBtcJpy = self.exc.getBTC_JPY()
time.sleep(1)
nextJPY = self.exc.order_BTC_JPY(aveBtcJpy)
time.sleep(1)
seconds = 0
while 1:
seconds += 1
# [0]:orderID, [1]:price, [2]:amount
check = self.exc.checkActiveOrders('btc_jpy')
if check[0] == 0:
break
else:
if seconds > self.limitTime_BTC_JPY:
seconds = 0
# オーダーキャンセル,再注文
cancelFlag = self.exc.cancelOrder(check[0], 'btc_jpy')
if cancelFlag:
seconds = 0
aveBtcJpy = self.exc.getBTC_JPY()
time.sleep(1)
nextJPY = self.exc.order_BTC_JPY(aveBtcJpy)
self.limited_BtcJpy += 1
min_BtcJpy = int(seconds / 60) # 取引にかかった時間[分]
# >>DBに追加
self.dba.insertTrade(
routeName, 'BTC->JPY', min_BtcJpy, self.limited_BtcJpy)
self.limited_BtcJpy = 0
self.emitFunds()
# Profit (difference) calculation======================================
profit = int(nextJPY - prevJPY)
self.profitSignal.emit(int(profit))
# >> add to the DB
self.dba.insertRoute(routeName, prevJPY, estJPY, profit)
| 38.029573 | 75 | 0.473085 | 3,280 | 34,721 | 4.878659 | 0.053049 | 0.061242 | 0.04062 | 0.020997 | 0.850269 | 0.817148 | 0.792713 | 0.782465 | 0.770779 | 0.770779 | 0 | 0.023599 | 0.388554 | 34,721 | 912 | 76 | 38.071272 | 0.729769 | 0.134731 | 0 | 0.827389 | 0 | 0 | 0.030786 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.021398 | false | 0 | 0.002853 | 0 | 0.03709 | 0.00428 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
369d9a51ab8834139380b19309638af84745e7a4 | 66,014 | py | Python | tests/crm_test.py | phiea/sonic-utilities | f5efe8939530ba9767bcd92e5a688b2275c5f151 | [
"Apache-2.0"
] | null | null | null | tests/crm_test.py | phiea/sonic-utilities | f5efe8939530ba9767bcd92e5a688b2275c5f151 | [
"Apache-2.0"
] | null | null | null | tests/crm_test.py | phiea/sonic-utilities | f5efe8939530ba9767bcd92e5a688b2275c5f151 | [
"Apache-2.0"
] | null | null | null | import importlib
import os
import sys
from importlib import reload
from click.testing import CliRunner
import crm.main as crm
from utilities_common.db import Db
# Expected output for CRM
crm_show_summary = """\
Polling Interval: 300 second(s)
"""
crm_show_thresholds_acl_group = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
acl_group percentage 70 85
"""
crm_show_thresholds_acl_table = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
acl_table percentage 70 85
"""
crm_show_thresholds_all = """\
Resource Name Threshold Type Low Threshold High Threshold
-------------------- ---------------- --------------- ----------------
ipv4_route percentage 70 85
ipv6_route percentage 70 85
ipv4_nexthop percentage 70 85
ipv6_nexthop percentage 70 85
ipv4_neighbor percentage 70 85
ipv6_neighbor percentage 70 85
nexthop_group_member percentage 70 85
nexthop_group percentage 70 85
acl_table percentage 70 85
acl_group percentage 70 85
acl_entry percentage 70 85
acl_counter percentage 70 85
fdb_entry percentage 70 85
ipmc_entry percentage 70 85
snat_entry percentage 70 85
dnat_entry percentage 70 85
"""
crm_show_thresholds_fdb = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
fdb_entry percentage 70 85
"""
crm_show_thresholds_ipv4_neighbor = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
ipv4_neighbor percentage 70 85
"""
crm_show_thresholds_ipv4_nexthop = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
ipv4_nexthop percentage 70 85
"""
crm_show_thresholds_ipv4_route = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
ipv4_route percentage 70 85
"""
crm_show_thresholds_ipv6_neighbor = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
ipv6_neighbor percentage 70 85
"""
crm_show_thresholds_ipv6_nexthop = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
ipv6_nexthop percentage 70 85
"""
crm_show_thresholds_ipv6_route= """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
ipv6_route percentage 70 85
"""
crm_show_thresholds_nexthop_group_member = """\
Resource Name Threshold Type Low Threshold High Threshold
-------------------- ---------------- --------------- ----------------
nexthop_group_member percentage 70 85
"""
crm_show_thresholds_nexthop_group_object = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
nexthop_group percentage 70 85
"""
crm_show_thresholds_snat = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
snat_entry percentage 70 85
"""
crm_show_thresholds_dnat = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
dnat_entry percentage 70 85
"""
crm_show_thresholds_ipmc = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
ipmc_entry percentage 70 85
"""
crm_new_show_summary = """\
Polling Interval: 30 second(s)
"""
crm_new_show_thresholds_acl_group = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
acl_group percentage 60 90
"""
crm_new_show_thresholds_acl_table = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
acl_table percentage 60 90
"""
crm_new_show_thresholds_fdb = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
fdb_entry percentage 60 90
"""
crm_new_show_thresholds_ipv4_neighbor = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
ipv4_neighbor percentage 60 90
"""
crm_new_show_thresholds_ipv4_nexthop = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
ipv4_nexthop percentage 60 90
"""
crm_new_show_thresholds_ipv4_route = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
ipv4_route percentage 60 90
"""
crm_new_show_thresholds_ipv6_neighbor = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
ipv6_neighbor percentage 60 90
"""
crm_new_show_thresholds_ipv6_nexthop = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
ipv6_nexthop percentage 60 90
"""
crm_new_show_thresholds_ipv6_route= """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
ipv6_route percentage 60 90
"""
crm_new_show_thresholds_nexthop_group_member = """\
Resource Name Threshold Type Low Threshold High Threshold
-------------------- ---------------- --------------- ----------------
nexthop_group_member percentage 60 90
"""
crm_new_show_thresholds_nexthop_group_object = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
nexthop_group percentage 60 90
"""
crm_new_show_thresholds_snat = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
snat_entry percentage 60 90
"""
crm_new_show_thresholds_dnat = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
dnat_entry percentage 60 90
"""
crm_new_show_thresholds_ipmc = """\
Resource Name Threshold Type Low Threshold High Threshold
--------------- ---------------- --------------- ----------------
ipmc_entry percentage 60 90
"""
crm_show_resources_acl_group = """\
Stage Bind Point Resource Name Used Count Available Count
------- ------------ --------------- ------------ -----------------
INGRESS PORT acl_group 16 232
INGRESS PORT acl_table 2 3
INGRESS LAG acl_group 8 232
INGRESS LAG acl_table 0 3
INGRESS VLAN acl_group 0 232
INGRESS VLAN acl_table 0 6
INGRESS RIF acl_group 0 232
INGRESS RIF acl_table 0 6
INGRESS SWITCH acl_group 0 232
INGRESS SWITCH acl_table 0 6
EGRESS PORT acl_group 0 232
EGRESS PORT acl_table 0 2
EGRESS LAG acl_group 0 232
EGRESS LAG acl_table 0 2
EGRESS VLAN acl_group 0 232
EGRESS VLAN acl_table 0 2
EGRESS RIF acl_group 0 232
EGRESS RIF acl_table 0 2
EGRESS SWITCH acl_group 0 232
EGRESS SWITCH acl_table 0 2
"""
crm_show_resources_acl_table = """\
Table ID Resource Name Used Count Available Count
--------------- --------------- ------------ -----------------
0x700000000063f acl_entry 0 2048
0x700000000063f acl_counter 0 2048
0x7000000000670 acl_entry 0 1024
0x7000000000670 acl_counter 0 1280
"""
crm_show_resources_all = """\
Resource Name Used Count Available Count
-------------------- ------------ -----------------
ipv4_route 58 98246
ipv6_route 60 16324
ipv4_nexthop 8 49086
ipv6_nexthop 8 49086
ipv4_neighbor 8 8168
ipv6_neighbor 8 4084
nexthop_group_member 0 16384
nexthop_group 0 512
fdb_entry 0 32767
ipmc_entry 0 24576
snat_entry 0 1024
dnat_entry 0 1024
Stage Bind Point Resource Name Used Count Available Count
------- ------------ --------------- ------------ -----------------
INGRESS PORT acl_group 16 232
INGRESS PORT acl_table 2 3
INGRESS LAG acl_group 8 232
INGRESS LAG acl_table 0 3
INGRESS VLAN acl_group 0 232
INGRESS VLAN acl_table 0 6
INGRESS RIF acl_group 0 232
INGRESS RIF acl_table 0 6
INGRESS SWITCH acl_group 0 232
INGRESS SWITCH acl_table 0 6
EGRESS PORT acl_group 0 232
EGRESS PORT acl_table 0 2
EGRESS LAG acl_group 0 232
EGRESS LAG acl_table 0 2
EGRESS VLAN acl_group 0 232
EGRESS VLAN acl_table 0 2
EGRESS RIF acl_group 0 232
EGRESS RIF acl_table 0 2
EGRESS SWITCH acl_group 0 232
EGRESS SWITCH acl_table 0 2
Table ID Resource Name Used Count Available Count
--------------- --------------- ------------ -----------------
0x700000000063f acl_entry 0 2048
0x700000000063f acl_counter 0 2048
0x7000000000670 acl_entry 0 1024
0x7000000000670 acl_counter 0 1280
"""
crm_show_resources_fdb = """\
Resource Name Used Count Available Count
--------------- ------------ -----------------
fdb_entry 0 32767
"""
crm_show_resources_ipv4_neighbor = """\
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipv4_neighbor 8 8168
"""
crm_show_resources_ipv4_nexthop = """\
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipv4_nexthop 8 49086
"""
crm_show_resources_ipv4_route = """\
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipv4_route 58 98246
"""
crm_show_resources_ipv6_route = """\
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipv6_route 60 16324
"""
crm_show_resources_ipv6_neighbor = """\
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipv6_neighbor 8 4084
"""
crm_show_resources_ipv6_nexthop = """\
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipv6_nexthop 8 49086
"""
crm_show_resources_nexthop_group_member = """\
Resource Name Used Count Available Count
-------------------- ------------ -----------------
nexthop_group_member 0 16384
"""
crm_show_resources_nexthop_group_object = """\
Resource Name Used Count Available Count
--------------- ------------ -----------------
nexthop_group 0 512
"""
crm_show_resources_snat = """\
Resource Name Used Count Available Count
--------------- ------------ -----------------
snat_entry 0 1024
"""
crm_show_resources_dnat = """\
Resource Name Used Count Available Count
--------------- ------------ -----------------
dnat_entry 0 1024
"""
crm_show_resources_ipmc = """\
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipmc_entry 0 24576
"""
crm_multi_asic_show_resources_acl_group = """\
ASIC0
Stage Bind Point Resource Name Used Count Available Count
------- ------------ --------------- ------------ -----------------
INGRESS PORT acl_group 16 232
INGRESS PORT acl_table 2 3
INGRESS LAG acl_group 8 232
INGRESS LAG acl_table 0 3
INGRESS VLAN acl_group 0 232
INGRESS VLAN acl_table 0 6
INGRESS RIF acl_group 0 232
INGRESS RIF acl_table 0 6
INGRESS SWITCH acl_group 0 232
INGRESS SWITCH acl_table 0 6
EGRESS PORT acl_group 0 232
EGRESS PORT acl_table 0 2
EGRESS LAG acl_group 0 232
EGRESS LAG acl_table 0 2
EGRESS VLAN acl_group 0 232
EGRESS VLAN acl_table 0 2
EGRESS RIF acl_group 0 232
EGRESS RIF acl_table 0 2
EGRESS SWITCH acl_group 0 232
EGRESS SWITCH acl_table 0 2
ASIC1
Stage Bind Point Resource Name Used Count Available Count
------- ------------ --------------- ------------ -----------------
INGRESS PORT acl_group 16 232
INGRESS PORT acl_table 2 3
INGRESS LAG acl_group 8 232
INGRESS LAG acl_table 0 3
INGRESS VLAN acl_group 0 232
INGRESS VLAN acl_table 0 6
INGRESS RIF acl_group 0 232
INGRESS RIF acl_table 0 6
INGRESS SWITCH acl_group 0 232
INGRESS SWITCH acl_table 0 6
EGRESS PORT acl_group 0 232
EGRESS PORT acl_table 0 2
EGRESS LAG acl_group 0 232
EGRESS LAG acl_table 0 2
EGRESS VLAN acl_group 0 232
EGRESS VLAN acl_table 0 2
EGRESS RIF acl_group 0 232
EGRESS RIF acl_table 0 2
EGRESS SWITCH acl_group 0 232
EGRESS SWITCH acl_table 0 2
"""
crm_multi_asic_show_resources_acl_table = """\
ASIC0
Table ID Resource Name Used Count Available Count
--------------- --------------- ------------ -----------------
0x700000000063f acl_entry 0 2048
0x700000000063f acl_counter 0 2048
0x7000000000670 acl_entry 0 1024
0x7000000000670 acl_counter 0 1280
ASIC1
Table ID Resource Name Used Count Available Count
--------------- --------------- ------------ -----------------
0x700000000063f acl_entry 0 2048
0x700000000063f acl_counter 0 2048
0x7000000000670 acl_entry 0 1024
0x7000000000670 acl_counter 0 1280
"""
crm_multi_asic_show_resources_all = """\
ASIC0
Resource Name Used Count Available Count
-------------------- ------------ -----------------
ipv4_route 58 98246
ipv6_route 60 16324
ipv4_nexthop 8 49086
ipv6_nexthop 8 49086
ipv4_neighbor 8 8168
ipv6_neighbor 8 4084
nexthop_group_member 0 16384
nexthop_group 0 512
fdb_entry 0 32767
ipmc_entry 0 24576
snat_entry 0 1024
dnat_entry 0 1024
ASIC1
Resource Name Used Count Available Count
-------------------- ------------ -----------------
ipv4_route 58 98246
ipv6_route 60 16324
ipv4_nexthop 8 49086
ipv6_nexthop 8 49086
ipv4_neighbor 8 8168
ipv6_neighbor 8 4084
nexthop_group_member 0 16384
nexthop_group 0 512
fdb_entry 0 32767
ipmc_entry 0 24576
snat_entry 0 1024
dnat_entry 0 1024
ASIC0
Stage Bind Point Resource Name Used Count Available Count
------- ------------ --------------- ------------ -----------------
INGRESS PORT acl_group 16 232
INGRESS PORT acl_table 2 3
INGRESS LAG acl_group 8 232
INGRESS LAG acl_table 0 3
INGRESS VLAN acl_group 0 232
INGRESS VLAN acl_table 0 6
INGRESS RIF acl_group 0 232
INGRESS RIF acl_table 0 6
INGRESS SWITCH acl_group 0 232
INGRESS SWITCH acl_table 0 6
EGRESS PORT acl_group 0 232
EGRESS PORT acl_table 0 2
EGRESS LAG acl_group 0 232
EGRESS LAG acl_table 0 2
EGRESS VLAN acl_group 0 232
EGRESS VLAN acl_table 0 2
EGRESS RIF acl_group 0 232
EGRESS RIF acl_table 0 2
EGRESS SWITCH acl_group 0 232
EGRESS SWITCH acl_table 0 2
ASIC1
Stage Bind Point Resource Name Used Count Available Count
------- ------------ --------------- ------------ -----------------
INGRESS PORT acl_group 16 232
INGRESS PORT acl_table 2 3
INGRESS LAG acl_group 8 232
INGRESS LAG acl_table 0 3
INGRESS VLAN acl_group 0 232
INGRESS VLAN acl_table 0 6
INGRESS RIF acl_group 0 232
INGRESS RIF acl_table 0 6
INGRESS SWITCH acl_group 0 232
INGRESS SWITCH acl_table 0 6
EGRESS PORT acl_group 0 232
EGRESS PORT acl_table 0 2
EGRESS LAG acl_group 0 232
EGRESS LAG acl_table 0 2
EGRESS VLAN acl_group 0 232
EGRESS VLAN acl_table 0 2
EGRESS RIF acl_group 0 232
EGRESS RIF acl_table 0 2
EGRESS SWITCH acl_group 0 232
EGRESS SWITCH acl_table 0 2
ASIC0
Table ID Resource Name Used Count Available Count
--------------- --------------- ------------ -----------------
0x700000000063f acl_entry 0 2048
0x700000000063f acl_counter 0 2048
0x7000000000670 acl_entry 0 1024
0x7000000000670 acl_counter 0 1280
ASIC1
Table ID Resource Name Used Count Available Count
--------------- --------------- ------------ -----------------
0x700000000063f acl_entry 0 2048
0x700000000063f acl_counter 0 2048
0x7000000000670 acl_entry 0 1024
0x7000000000670 acl_counter 0 1280
"""
crm_multi_asic_show_resources_fdb = """\
ASIC0
Resource Name Used Count Available Count
--------------- ------------ -----------------
fdb_entry 0 32767
ASIC1
Resource Name Used Count Available Count
--------------- ------------ -----------------
fdb_entry 0 32767
"""
crm_multi_asic_show_resources_ipv4_neighbor = """\
ASIC0
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipv4_neighbor 8 8168
ASIC1
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipv4_neighbor 8 8168
"""
crm_multi_asic_show_resources_ipv4_nexthop = """\
ASIC0
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipv4_nexthop 8 49086
ASIC1
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipv4_nexthop 8 49086
"""
crm_multi_asic_show_resources_ipv4_route = """\
ASIC0
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipv4_route 58 98246
ASIC1
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipv4_route 58 98246
"""
crm_multi_asic_show_resources_ipv6_route = """\
ASIC0
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipv6_route 60 16324
ASIC1
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipv6_route 60 16324
"""
crm_multi_asic_show_resources_ipv6_neighbor = """\
ASIC0
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipv6_neighbor 8 4084
ASIC1
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipv6_neighbor 8 4084
"""
crm_multi_asic_show_resources_ipv6_nexthop = """\
ASIC0
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipv6_nexthop 8 49086
ASIC1
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipv6_nexthop 8 49086
"""
crm_multi_asic_show_resources_nexthop_group_member = """\
ASIC0
Resource Name Used Count Available Count
-------------------- ------------ -----------------
nexthop_group_member 0 16384
ASIC1
Resource Name Used Count Available Count
-------------------- ------------ -----------------
nexthop_group_member 0 16384
"""
crm_multi_asic_show_resources_nexthop_group_object = """\
ASIC0
Resource Name Used Count Available Count
--------------- ------------ -----------------
nexthop_group 0 512
ASIC1
Resource Name Used Count Available Count
--------------- ------------ -----------------
nexthop_group 0 512
"""
crm_multi_asic_show_resources_snat = """\
ASIC0
Resource Name Used Count Available Count
--------------- ------------ -----------------
snat_entry 0 1024
ASIC1
Resource Name Used Count Available Count
--------------- ------------ -----------------
snat_entry 0 1024
"""
crm_multi_asic_show_resources_dnat = """\
ASIC0
Resource Name Used Count Available Count
--------------- ------------ -----------------
dnat_entry 0 1024
ASIC1
Resource Name Used Count Available Count
--------------- ------------ -----------------
dnat_entry 0 1024
"""
crm_multi_asic_show_resources_ipmc = """\
ASIC0
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipmc_entry 0 24576
ASIC1
Resource Name Used Count Available Count
--------------- ------------ -----------------
ipmc_entry 0 24576
"""
class TestCrm(object):
@classmethod
def setup_class(cls):
print("SETUP")
os.environ["UTILITIES_UNIT_TESTING"] = "1"
def test_crm_show_summary(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'summary'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_summary
result = runner.invoke(crm.cli, ['config', 'polling', 'interval', '30'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'summary'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_summary
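# The remaining tests follow the same pattern: check the default output, reconfigure the value
# through "crm config ...", and verify the updated output.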
def test_crm_show_thresholds_acl_group(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'acl', 'group'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_acl_group
result = runner.invoke(crm.cli, ['config', 'thresholds', 'acl', 'group', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'acl', 'group', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'acl', 'group'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_acl_group
def test_crm_show_thresholds_acl_table(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'acl', 'table'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_acl_table
result = runner.invoke(crm.cli, ['config', 'thresholds', 'acl', 'table', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'acl', 'table', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'acl', 'table'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_acl_table
def test_crm_show_thresholds_all(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'all'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_all
def test_crm_show_thresholds_fdb(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'fdb'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_fdb
result = runner.invoke(crm.cli, ['config', 'thresholds', 'fdb', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'fdb', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'fdb'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_fdb
def test_crm_show_thresholds_ipv4_neighbor(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv4', 'neighbor'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_ipv4_neighbor
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv4', 'neighbor', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv4', 'neighbor', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv4', 'neighbor'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_ipv4_neighbor
def test_crm_show_thresholds_ipv4_nexthop(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv4', 'nexthop'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_ipv4_nexthop
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv4', 'nexthop', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv4', 'nexthop', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv4', 'nexthop'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_ipv4_nexthop
def test_crm_show_thresholds_ipv4_route(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv4', 'route'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_ipv4_route
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv4', 'route', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv4', 'route', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv4', 'route'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_ipv4_route
def test_crm_show_thresholds_ipv6_neighbor(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv6', 'neighbor'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_ipv6_neighbor
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv6', 'neighbor', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv6', 'neighbor', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv6', 'neighbor'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_ipv6_neighbor
def test_crm_show_thresholds_ipv6_nexthop(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv6', 'nexthop'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_ipv6_nexthop
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv6', 'nexthop', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv6', 'nexthop', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv6', 'nexthop'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_ipv6_nexthop
def test_crm_show_thresholds_ipv6_route(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv6', 'route'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_ipv6_route
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv6', 'route', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv6', 'route', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv6', 'route'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_ipv6_route
def test_crm_show_thresholds_nexthop_group_member(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'nexthop', 'group', 'member'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_nexthop_group_member
result = runner.invoke(crm.cli, ['config', 'thresholds', 'nexthop', 'group', 'member', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'nexthop', 'group', 'member', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'nexthop', 'group', 'member'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_nexthop_group_member
def test_crm_show_thresholds_nexthop_group_object(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'nexthop', 'group', 'object'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_nexthop_group_object
result = runner.invoke(crm.cli, ['config', 'thresholds', 'nexthop', 'group', 'object', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'nexthop', 'group', 'object', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'nexthop', 'group', 'object'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_nexthop_group_object
def test_crm_show_thresholds_snat(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'snat'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_snat
result = runner.invoke(crm.cli, ['config', 'thresholds', 'snat', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'snat', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'snat'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_snat
def test_crm_show_thresholds_dnat(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'dnat'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_dnat
result = runner.invoke(crm.cli, ['config', 'thresholds', 'dnat', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'dnat', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'dnat'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_dnat
def test_crm_show_thresholds_ipmc(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipmc'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_ipmc
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipmc', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipmc', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipmc'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_ipmc
def test_crm_show_resources_acl_group(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'acl', 'group'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_resources_acl_group
def test_crm_show_resources_acl_table(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'acl', 'table'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_resources_acl_table
def test_crm_show_resources_all(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'all'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_resources_all
def test_crm_show_resources_fdb(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'fdb'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_resources_fdb
def test_crm_show_resources_ipv4_neighbor(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'ipv4', 'neighbor'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_resources_ipv4_neighbor
def test_crm_show_resources_ipv4_nexthop(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'ipv4', 'nexthop'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_resources_ipv4_nexthop
def test_crm_show_resources_ipv4_route(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'ipv4', 'route'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_resources_ipv4_route
def test_crm_show_resources_ipv6_route(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'ipv6', 'route'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_resources_ipv6_route
def test_crm_show_resources_ipv6_neighbor(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'ipv6', 'neighbor'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_resources_ipv6_neighbor
def test_crm_show_resources_ipv6_nexthop(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'ipv6', 'nexthop'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_resources_ipv6_nexthop
def test_crm_show_resources_nexthop_group_member(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'nexthop', 'group', 'member'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_resources_nexthop_group_member
def test_crm_show_resources_nexthop(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'nexthop', 'group', 'object'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_resources_nexthop_group_object
def test_crm_show_resources_snat(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'snat'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_resources_snat
def test_crm_show_resources_dnat(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'dnat'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_resources_dnat
def test_crm_show_resources_ipmc(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'ipmc'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_resources_ipmc
@classmethod
def teardown_class(cls):
print("TEARDOWN")
os.environ["UTILITIES_UNIT_TESTING"] = "0"
class TestCrmMultiAsic(object):
@classmethod
def setup_class(cls):
print("SETUP")
os.environ["UTILITIES_UNIT_TESTING"] = "2"
os.environ["UTILITIES_UNIT_TESTING_TOPOLOGY"] = "multi_asic"
from .mock_tables import dbconnector
from .mock_tables import mock_multi_asic
dbconnector.load_namespace_config()
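# Loading the namespace config presumably points the mocked connector at the per-ASIC
# databases (ASIC0/ASIC1) referenced by the expected outputs above (assumption based on
# the fixture names).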
def test_crm_show_summary(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'summary'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_summary
result = runner.invoke(crm.cli, ['config', 'polling', 'interval', '30'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'summary'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_summary
def test_crm_show_thresholds_acl_group(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'acl', 'group'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_acl_group
result = runner.invoke(crm.cli, ['config', 'thresholds', 'acl', 'group', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'acl', 'group', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'acl', 'group'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_acl_group
def test_crm_show_thresholds_acl_table(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'acl', 'table'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_acl_table
result = runner.invoke(crm.cli, ['config', 'thresholds', 'acl', 'table', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'acl', 'table', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'acl', 'table'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_acl_table
def test_crm_show_thresholds_all(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'all'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_all
def test_crm_show_thresholds_fdb(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'fdb'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_fdb
result = runner.invoke(crm.cli, ['config', 'thresholds', 'fdb', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'fdb', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'fdb'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_fdb
def test_crm_show_thresholds_ipv4_neighbor(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv4', 'neighbor'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_ipv4_neighbor
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv4', 'neighbor', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv4', 'neighbor', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv4', 'neighbor'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_ipv4_neighbor
def test_crm_show_thresholds_ipv4_nexthop(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv4', 'nexthop'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_ipv4_nexthop
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv4', 'nexthop', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv4', 'nexthop', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv4', 'nexthop'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_ipv4_nexthop
def test_crm_show_thresholds_ipv4_route(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv4', 'route'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_ipv4_route
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv4', 'route', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv4', 'route', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv4', 'route'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_ipv4_route
def test_crm_show_thresholds_ipv6_neighbor(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv6', 'neighbor'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_ipv6_neighbor
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv6', 'neighbor', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv6', 'neighbor', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv6', 'neighbor'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_ipv6_neighbor
def test_crm_show_thresholds_ipv6_nexthop(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv6', 'nexthop'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_ipv6_nexthop
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv6', 'nexthop', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv6', 'nexthop', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv6', 'nexthop'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_ipv6_nexthop
def test_crm_show_thresholds_ipv6_route(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv6', 'route'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_ipv6_route
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv6', 'route', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipv6', 'route', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipv6', 'route'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_ipv6_route
def test_crm_show_thresholds_nexthop_group_member(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'nexthop', 'group', 'member'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_nexthop_group_member
result = runner.invoke(crm.cli, ['config', 'thresholds', 'nexthop', 'group', 'member', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'nexthop', 'group', 'member', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'nexthop', 'group', 'member'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_nexthop_group_member
def test_crm_show_thresholds_nexthop_group_object(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'nexthop', 'group', 'object'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_nexthop_group_object
result = runner.invoke(crm.cli, ['config', 'thresholds', 'nexthop', 'group', 'object', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'nexthop', 'group', 'object', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'nexthop', 'group', 'object'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_nexthop_group_object
def test_crm_show_thresholds_snat(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'snat'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_snat
result = runner.invoke(crm.cli, ['config', 'thresholds', 'snat', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'snat', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'snat'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_snat
def test_crm_show_thresholds_dnat(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'dnat'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_dnat
result = runner.invoke(crm.cli, ['config', 'thresholds', 'dnat', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'dnat', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'dnat'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_dnat
def test_crm_show_thresholds_ipmc(self):
runner = CliRunner()
db = Db()
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipmc'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_show_thresholds_ipmc
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipmc', 'high', '90'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['config', 'thresholds', 'ipmc', 'low', '60'], obj=db)
print(sys.stderr, result.output)
result = runner.invoke(crm.cli, ['show', 'thresholds', 'ipmc'], obj=db)
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_new_show_thresholds_ipmc
def test_crm_multi_asic_show_resources_acl_group(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'acl', 'group'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_multi_asic_show_resources_acl_group
def test_crm_multi_asic_show_resources_acl_table(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'acl', 'table'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_multi_asic_show_resources_acl_table
def test_crm_multi_asic_show_resources_all(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'all'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_multi_asic_show_resources_all
def test_crm_multi_asic_show_resources_fdb(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'fdb'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_multi_asic_show_resources_fdb
def test_crm_multi_asic_show_resources_ipv4_neighbor(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'ipv4', 'neighbor'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_multi_asic_show_resources_ipv4_neighbor
def test_crm_multi_asic_show_resources_ipv4_nexthop(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'ipv4', 'nexthop'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_multi_asic_show_resources_ipv4_nexthop
def test_crm_multi_asic_show_resources_ipv4_route(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'ipv4', 'route'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_multi_asic_show_resources_ipv4_route
def test_crm_multi_asic_show_resources_ipv6_route(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'ipv6', 'route'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_multi_asic_show_resources_ipv6_route
def test_crm_multi_asic_show_resources_ipv6_neighbor(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'ipv6', 'neighbor'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_multi_asic_show_resources_ipv6_neighbor
def test_crm_multi_asic_show_resources_ipv6_nexthop(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'ipv6', 'nexthop'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_multi_asic_show_resources_ipv6_nexthop
def test_crm_multi_asic_show_resources_nexthop_group_member(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'nexthop', 'group', 'member'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_multi_asic_show_resources_nexthop_group_member
def test_crm_multi_asic_show_resources_nexthop(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'nexthop', 'group', 'object'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_multi_asic_show_resources_nexthop_group_object
def test_crm_multi_asic_show_resources_snat(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'snat'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_multi_asic_show_resources_snat
def test_crm_multi_asic_show_resources_dnat(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'dnat'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_multi_asic_show_resources_dnat
def test_crm_multi_asic_show_resources_ipmc(self):
runner = CliRunner()
result = runner.invoke(crm.cli, ['show', 'resources', 'ipmc'])
print(sys.stderr, result.output)
assert result.exit_code == 0
assert result.output == crm_multi_asic_show_resources_ipmc
@classmethod
def teardown_class(cls):
print("TEARDOWN")
os.environ["UTILITIES_UNIT_TESTING"] = "0"
os.environ["UTILITIES_UNIT_TESTING_TOPOLOGY"] = ""
from .mock_tables import dbconnector
from .mock_tables import mock_single_asic
importlib.reload(mock_single_asic)
dbconnector.load_namespace_config()
| 41.728192 | 117 | 0.521723 | 6,781 | 66,014 | 4.873617 | 0.018876 | 0.087872 | 0.081699 | 0.095316 | 0.988108 | 0.973523 | 0.963175 | 0.946199 | 0.928074 | 0.925412 | 0 | 0.041491 | 0.327128 | 66,014 | 1,581 | 118 | 41.754586 | 0.702515 | 0.000348 | 0 | 0.876 | 0 | 0 | 0.473382 | 0.002273 | 0 | 0 | 0.005455 | 0 | 0.1472 | 1 | 0.0528 | false | 0 | 0.0096 | 0 | 0.064 | 0.1232 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7fc66cb15793a898d83ba4e3a27bd7dd67b63410 | 18,069 | py | Python | src/datamigration/azext_datamigration/generated/custom.py | haroonf/azure-cli-extensions | 61c044d34c224372f186934fa7c9313f1cd3a525 | [
"MIT"
] | null | null | null | src/datamigration/azext_datamigration/generated/custom.py | haroonf/azure-cli-extensions | 61c044d34c224372f186934fa7c9313f1cd3a525 | [
"MIT"
] | 9 | 2022-03-25T19:35:49.000Z | 2022-03-31T06:09:47.000Z | src/datamigration/azext_datamigration/generated/custom.py | haroonf/azure-cli-extensions | 61c044d34c224372f186934fa7c9313f1cd3a525 | [
"MIT"
] | 1 | 2022-03-10T22:13:02.000Z | 2022-03-10T22:13:02.000Z | # --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
# pylint: disable=too-many-lines
from azure.cli.core.util import sdk_no_wait
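# sdk_no_wait(no_wait, func, *args, **kwargs) is the azure-cli helper used throughout this
# module: it starts the long-running operation via `func` and either polls it to completion
# or, when no_wait is set, returns without waiting.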
def datamigration_sql_db_show(client,
resource_group_name,
sqldb_instance_name,
target_db_name,
migration_operation_id=None,
expand=None):
return client.get(resource_group_name=resource_group_name,
sql_db_instance_name=sqldb_instance_name,
target_db_name=target_db_name,
migration_operation_id=migration_operation_id,
expand=expand)
def datamigration_sql_db_create(client,
resource_group_name,
sqldb_instance_name,
target_db_name,
scope=None,
source_sql_connection=None,
source_database_name=None,
migration_service=None,
target_db_collation=None,
target_sql_connection=None,
table_list=None,
no_wait=False):
parameters = {}
parameters['properties'] = {}
if scope is not None:
parameters['properties']['scope'] = scope
if source_sql_connection is not None:
parameters['properties']['source_sql_connection'] = source_sql_connection
if source_database_name is not None:
parameters['properties']['source_database_name'] = source_database_name
if migration_service is not None:
parameters['properties']['migration_service'] = migration_service
if target_db_collation is not None:
parameters['properties']['target_database_collation'] = target_db_collation
if target_sql_connection is not None:
parameters['properties']['target_sql_connection'] = target_sql_connection
if table_list is not None:
parameters['properties']['table_list'] = table_list
if len(parameters['properties']) == 0:
del parameters['properties']
return sdk_no_wait(no_wait,
client.begin_create_or_update,
resource_group_name=resource_group_name,
sql_db_instance_name=sqldb_instance_name,
target_db_name=target_db_name,
parameters=parameters)
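# Note: only the options the caller actually supplied are copied into the request body, and an
# empty 'properties' dict is removed before the long-running create_or_update call is started.
# The same pattern repeats for the managed instance and SQL VM variants below.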
def datamigration_sql_db_delete(client,
resource_group_name,
sqldb_instance_name,
target_db_name,
force=None,
no_wait=False):
return sdk_no_wait(no_wait,
client.begin_delete,
resource_group_name=resource_group_name,
sql_db_instance_name=sqldb_instance_name,
target_db_name=target_db_name,
force=force)
def datamigration_sql_db_cancel(client,
resource_group_name,
sqldb_instance_name,
target_db_name,
migration_operation_id,
no_wait=False):
parameters = {}
parameters['migration_operation_id'] = migration_operation_id
return sdk_no_wait(no_wait,
client.begin_cancel,
resource_group_name=resource_group_name,
sql_db_instance_name=sqldb_instance_name,
target_db_name=target_db_name,
parameters=parameters)
def datamigration_sql_managed_instance_show(client,
resource_group_name,
managed_instance_name,
target_db_name,
migration_operation_id=None,
expand=None):
return client.get(resource_group_name=resource_group_name,
managed_instance_name=managed_instance_name,
target_db_name=target_db_name,
migration_operation_id=migration_operation_id,
expand=expand)
def datamigration_sql_managed_instance_create(client,
resource_group_name,
managed_instance_name,
target_db_name,
scope=None,
source_sql_connection=None,
source_database_name=None,
migration_service=None,
target_db_collation=None,
offline_configuration=None,
source_location=None,
target_location=None,
no_wait=False):
parameters = {}
parameters['properties'] = {}
if scope is not None:
parameters['properties']['scope'] = scope
if source_sql_connection is not None:
parameters['properties']['source_sql_connection'] = source_sql_connection
if source_database_name is not None:
parameters['properties']['source_database_name'] = source_database_name
if migration_service is not None:
parameters['properties']['migration_service'] = migration_service
if target_db_collation is not None:
parameters['properties']['target_database_collation'] = target_db_collation
if offline_configuration is not None:
parameters['properties']['offline_configuration'] = offline_configuration
parameters['properties']['backup_configuration'] = {}
if source_location is not None:
parameters['properties']['backup_configuration']['source_location'] = source_location
if target_location is not None:
parameters['properties']['backup_configuration']['target_location'] = target_location
if len(parameters['properties']['backup_configuration']) == 0:
del parameters['properties']['backup_configuration']
return sdk_no_wait(no_wait,
client.begin_create_or_update,
resource_group_name=resource_group_name,
managed_instance_name=managed_instance_name,
target_db_name=target_db_name,
parameters=parameters)
def datamigration_sql_managed_instance_cancel(client,
resource_group_name,
managed_instance_name,
target_db_name,
migration_operation_id,
no_wait=False):
parameters = {}
parameters['migration_operation_id'] = migration_operation_id
return sdk_no_wait(no_wait,
client.begin_cancel,
resource_group_name=resource_group_name,
managed_instance_name=managed_instance_name,
target_db_name=target_db_name,
parameters=parameters)
def datamigration_sql_managed_instance_cutover(client,
resource_group_name,
managed_instance_name,
target_db_name,
migration_operation_id,
no_wait=False):
parameters = {}
parameters['migration_operation_id'] = migration_operation_id
return sdk_no_wait(no_wait,
client.begin_cutover,
resource_group_name=resource_group_name,
managed_instance_name=managed_instance_name,
target_db_name=target_db_name,
parameters=parameters)
def datamigration_sql_vm_show(client,
resource_group_name,
sql_vm_name,
target_db_name,
migration_operation_id=None,
expand=None):
return client.get(resource_group_name=resource_group_name,
sql_virtual_machine_name=sql_vm_name,
target_db_name=target_db_name,
migration_operation_id=migration_operation_id,
expand=expand)
def datamigration_sql_vm_create(client,
resource_group_name,
sql_vm_name,
target_db_name,
scope=None,
source_sql_connection=None,
source_database_name=None,
migration_service=None,
target_db_collation=None,
offline_configuration=None,
source_location=None,
target_location=None,
no_wait=False):
parameters = {}
parameters['properties'] = {}
if scope is not None:
parameters['properties']['scope'] = scope
if source_sql_connection is not None:
parameters['properties']['source_sql_connection'] = source_sql_connection
if source_database_name is not None:
parameters['properties']['source_database_name'] = source_database_name
if migration_service is not None:
parameters['properties']['migration_service'] = migration_service
if target_db_collation is not None:
parameters['properties']['target_database_collation'] = target_db_collation
if offline_configuration is not None:
parameters['properties']['offline_configuration'] = offline_configuration
parameters['properties']['backup_configuration'] = {}
if source_location is not None:
parameters['properties']['backup_configuration']['source_location'] = source_location
if target_location is not None:
parameters['properties']['backup_configuration']['target_location'] = target_location
if len(parameters['properties']['backup_configuration']) == 0:
del parameters['properties']['backup_configuration']
return sdk_no_wait(no_wait,
client.begin_create_or_update,
resource_group_name=resource_group_name,
sql_virtual_machine_name=sql_vm_name,
target_db_name=target_db_name,
parameters=parameters)
def datamigration_sql_vm_cancel(client,
resource_group_name,
sql_vm_name,
target_db_name,
migration_operation_id,
no_wait=False):
parameters = {}
parameters['migration_operation_id'] = migration_operation_id
return sdk_no_wait(no_wait,
client.begin_cancel,
resource_group_name=resource_group_name,
sql_virtual_machine_name=sql_vm_name,
target_db_name=target_db_name,
parameters=parameters)
def datamigration_sql_vm_cutover(client,
resource_group_name,
sql_vm_name,
target_db_name,
migration_operation_id,
no_wait=False):
parameters = {}
parameters['migration_operation_id'] = migration_operation_id
return sdk_no_wait(no_wait,
client.begin_cutover,
resource_group_name=resource_group_name,
sql_virtual_machine_name=sql_vm_name,
target_db_name=target_db_name,
parameters=parameters)
def datamigration_sql_service_list(client,
resource_group_name=None):
if resource_group_name:
return client.list_by_resource_group(resource_group_name=resource_group_name)
return client.list_by_subscription()
def datamigration_sql_service_show(client,
resource_group_name,
sql_migration_service_name):
return client.get(resource_group_name=resource_group_name,
sql_migration_service_name=sql_migration_service_name)
def datamigration_sql_service_create(client,
resource_group_name,
sql_migration_service_name,
location=None,
tags=None,
no_wait=False):
parameters = {}
if location is not None:
parameters['location'] = location
if tags is not None:
parameters['tags'] = tags
return sdk_no_wait(no_wait,
client.begin_create_or_update,
resource_group_name=resource_group_name,
sql_migration_service_name=sql_migration_service_name,
parameters=parameters)
def datamigration_sql_service_update(client,
resource_group_name,
sql_migration_service_name,
tags=None,
no_wait=False):
parameters = {}
if tags is not None:
parameters['tags'] = tags
return sdk_no_wait(no_wait,
client.begin_update,
resource_group_name=resource_group_name,
sql_migration_service_name=sql_migration_service_name,
parameters=parameters)
def datamigration_sql_service_delete(client,
resource_group_name,
sql_migration_service_name,
no_wait=False):
return sdk_no_wait(no_wait,
client.begin_delete,
resource_group_name=resource_group_name,
sql_migration_service_name=sql_migration_service_name)
def datamigration_sql_service_delete_node(client,
resource_group_name,
sql_migration_service_name,
node_name=None,
integration_runtime_name=None):
parameters = {}
if node_name is not None:
parameters['node_name'] = node_name
if integration_runtime_name is not None:
parameters['integration_runtime_name'] = integration_runtime_name
return client.delete_node(resource_group_name=resource_group_name,
sql_migration_service_name=sql_migration_service_name,
parameters=parameters)
def datamigration_sql_service_list_auth_key(client,
resource_group_name,
sql_migration_service_name):
return client.list_auth_keys(resource_group_name=resource_group_name,
sql_migration_service_name=sql_migration_service_name)
def datamigration_sql_service_list_integration_runtime_metric(client,
resource_group_name,
sql_migration_service_name):
return client.list_monitoring_data(resource_group_name=resource_group_name,
sql_migration_service_name=sql_migration_service_name)
def datamigration_sql_service_list_migration(client,
resource_group_name,
sql_migration_service_name):
return client.list_migrations(resource_group_name=resource_group_name,
sql_migration_service_name=sql_migration_service_name)
def datamigration_sql_service_regenerate_auth_key(client,
resource_group_name,
sql_migration_service_name,
key_name=None,
auth_key1=None,
auth_key2=None):
parameters = {}
if key_name is not None:
parameters['key_name'] = key_name
if auth_key1 is not None:
parameters['auth_key1'] = auth_key1
if auth_key2 is not None:
parameters['auth_key2'] = auth_key2
return client.regenerate_auth_keys(resource_group_name=resource_group_name,
sql_migration_service_name=sql_migration_service_name,
parameters=parameters)
| 47.675462 | 93 | 0.535503 | 1,599 | 18,069 | 5.594121 | 0.072545 | 0.098826 | 0.127334 | 0.064394 | 0.891336 | 0.861599 | 0.856792 | 0.837563 | 0.815316 | 0.815316 | 0 | 0.001023 | 0.404948 | 18,069 | 378 | 94 | 47.801587 | 0.830915 | 0.026011 | 0 | 0.811146 | 0 | 0 | 0.063794 | 0.019047 | 0 | 0 | 0 | 0 | 0 | 1 | 0.068111 | false | 0 | 0.003096 | 0.027864 | 0.142415 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
7fda06874da883b699d44998fc7d20c5f7319a70 | 45,721 | py | Python | v0/aia_eis_v0/circuits/vogit_0.py | DreamBoatOve/aia_eis | 458b4d29846669b10db4da1b3e86c0b394614ceb | [
"MIT"
] | 1 | 2022-03-02T12:57:19.000Z | 2022-03-02T12:57:19.000Z | v0/aia_eis_v0/circuits/vogit_0.py | DreamBoatOve/aia_eis | 458b4d29846669b10db4da1b3e86c0b394614ceb | [
"MIT"
] | null | null | null | v0/aia_eis_v0/circuits/vogit_0.py | DreamBoatOve/aia_eis | 458b4d29846669b10db4da1b3e86c0b394614ceb | [
"MIT"
] | null | null | null | import sys
sys.path.append('../')
import numpy as np
import math
import copy
import os
from circuits.circuit_pack import aRCb
from circuits.elements import ele_C, ele_L
from loa.l_m.l_m_0 import Levenberg_Marquart_0, vogit_obj_fun_0, vogit_obj_fun_1
from IS.IS import IS_0
from IS.IS_criteria import cal_residual, cal_ChiSquare_pointWise_0
from utils.file_utils.pickle_utils import pickle_file
from utils.visualize_utils.impedance_plots import nyquist_multiPlots_1, nyquist_plot_1
"""
Special ECM
Vogit
Could the model also include N RL elements on top of the M RC elements??
Neither MS's Lin-KK tool nor impedance.py considers this at the moment, so it is left aside for now; overly complex cases are not handled.
maybe some other ECMs, like transmission line, etc.
"""
class Vogit_2:
"""
Refer
papers:
paper1: A Linear Kronig-Kramers Transform Test for Immittance Data Validation
paper0: A Method for Improving the Robustness of linear Kramers-Kronig Validity Tests
Note:
The most basic Vogit circuit is
Rs-Ls-M*(RC)-[Cs]
Ls: inductive effects are considered by adding an additional inductance [1]
Cs:
option to add a serial capacitance that helps validate data with no low-frequency intercept;
due to their capacitive nature an additional capacitance is added to the ECM.
1- Of the complex / imag / real fits, only the complex fit is considered
2- Of the three weighting schemes, only modulus weighting is considered
3- Of adding a capacitance / inductance, only adding a capacitance is considered
Version:
2: earlier versions of Vogit did not include the inductance L; it is added in this version
"""
def __init__(self, impSpe, add_C=False):
"""
Since Vogit is a measurement model, an IS instance is always passed in before vogit is used
:param
impSpe: IS cls
M: int
number of (RC)
w: list(float)
RC_para_list:[
[R0, C0],
[R1, C1],
...
[Rm-1, Cm-1],
]
Rs: float
add_C: Bool
"""
self.impSpe = impSpe
self.w_arr = self.impSpe.w_arr
self.M = 1
"""
Paper1: As a rule of thumb we can conclude that, for the single fit and transformation, the v range should be
equal to the inverse w range with a distribution of 6 or 7 Tcs per decade. Here a slightly larger value of 8 time constants per decade is used
"""
self.M_max = int(math.log10(self.w_arr.max() / self.w_arr.min())) * 8
self.add_C = add_C
def calc_timeConstant(self):
"""
timeConstant = tao = R * C
Refer:
A Method for Improving the Robustness of linear Kramers-Kronig Validity Tests
2.2. Distribution of Time Constants Eq 10-12
:return:
"""
sorted_w_arr = np.sort(copy.deepcopy(self.w_arr)) # small --> big number
w_min, w_max = sorted_w_arr[0], sorted_w_arr[-1]
# The time constant τ is written as tao in the code
tao_min = 1 / w_max
tao_max = 1 / w_min
tao_list = []
if self.M == 1:
tao_list.append(tao_min)
elif self.M == 2:
tao_list.extend([tao_min, tao_max])
elif self.M > 2:
tao_list.append(tao_min)
K = self.M - 1
for i in range(1, K):
tao = 10 ** (math.log10(tao_min) + i * math.log10(tao_max / tao_min) / (self.M - 1))
tao_list.append(tao)
tao_list.append(tao_max)
self.tao_arr = np.array(tao_list)
def init_para(self):
# refer the initialization of <impedance.py>
self.Rs = min(np.real(self.impSpe.z_arr))
self.Ls = 1e-3
self.M_R_arr = [(max(np.real(self.impSpe.z_arr)) - min(np.real(self.impSpe.z_arr))) / self.M for i in range(self.M)]
if self.add_C:
self.Cs = 1e-3
self.calc_timeConstant()
def init_para_0(self):
"""
1- Since the time constants tao are already fixed and tao = Ri * Ci, only the M resistances Ri need to be initialized, i = 0, 1, 2, ..., M-1
2- According to Fig. 6 of the paper "A Linear Kronig-Kramers Transform Test for Immittance Data Validation", the fitted Ri are in most cases
alternately positive and negative, so the initial Ri are set to R0=1, R1=-1, R2=1, R3=-1, ...
:return:
"""
# first initialization of the RC elements, M = 1
if self.RC_para_list is None:
self.calc_timeConstant()
Ri_list = []
for i in range(self.M):
# even number: 0,2,4, Ri = 1
if i % 2 == 0:
Ri = 1.0
# odd number: 1,3,5, Ri = -1.0
else:
Ri = -1.0
Ri_list.append(Ri)
self.RC_para_list = [[Ri, self.tao_arr[i] / Ri] for i, Ri in enumerate(Ri_list)]
self.Rs = self.cal_Rs()
else:
# M > 1: when M increases, keep the previous fit results and only initialize the newly added RC elements
self.calc_timeConstant()
RC_para_existed_len = len(self.RC_para_list)
add_R_list = []
for i in range(RC_para_existed_len, self.M):
# even number: 0,2,4, Ri = 1
if i % 2 == 0:
R = 1.0
# odd number: 1,3,5, Ri = -1.0
else:
R = -1.0
add_R_list.append(R)
old_RC_para_list = copy.deepcopy(self.RC_para_list)
self.RC_para_list = []
# previously fitted R values
for i, RC in enumerate(old_RC_para_list):
self.RC_para_list.append([RC[0], self.tao_arr[i] / RC[0]])
# newly added R values
for i, R in enumerate(add_R_list):
self.RC_para_list.append([R, self.tao_arr[RC_para_existed_len + i] / R])
self.Rs = self.cal_Rs()
# def connect_circuit(self):
# """
# By default, Vogit = Rs + (RC)_0 + (RC)_1 + ... + (RC)_m-1
# :return:
# """
# pass
def cal_Rs(self):
"""
Compute Rs according to Eq. 7 of paper1
:return:
"""
z_arr = self.impSpe.z_arr
weight_arr = np.array([1 / (z.real ** 2 + z.imag ** 2) for z in z_arr])
Rs = 0.0
for i, weight in enumerate(weight_arr):
res_in_square_bracket = z_arr[i].real - \
sum([self.RC_para_list[k][0] / (1 + (self.w_arr[i] * self.tao_arr[k]) ** 2) for k in
range(self.M)])
Rs += weight * res_in_square_bracket
Rs /= weight_arr[:-1].sum()
return Rs
def update_para(self, tmp_para_arr):
"""
R_list / R_arr:
[Rs, R0, R1, ..., R_M-1]
Each iteration of the optimization algorithm produces new resistance values; the old R values are replaced
and the corresponding capacitances C are updated accordingly
:return:
"""
if self.OA_obj_fun_mode == 'imag':
pass
elif (self.OA_obj_fun_mode == 'real') or (self.OA_obj_fun_mode == 'both'):
# para_arr = [*Rs*, *Ls*, (*Cs*), R0, R1, R2, ..., R_M-1]
self.Rs = tmp_para_arr[0]
self.Ls = tmp_para_arr[1]
RC_start_index = 2
if self.add_C:
# para_arr = [*Rs*, *Ls*, *Cs*, R0, R1, R2, ..., R_M-1]
self.Cs = tmp_para_arr[RC_start_index]
RC_start_index = 3
self.M_R_arr = tmp_para_arr[RC_start_index:]
def update_u(self):
"""
refer paper0-eq21
:return:
"""
positive_R_list = []
negtive_R_list = []
for R in self.M_R_arr:
if R >= 0:
positive_R_list.append(R)
elif R < 0:
negtive_R_list.append(R)
self.u = 1 - abs(sum(negtive_R_list)) / sum(positive_R_list)
def lin_KK(self, OA=Levenberg_Marquart_0, OA_obj_fun_mode='both', OA_obj_fun_weighting_type='modulus',
save_iter=False, u_optimum=0.85, manual_M=None):
self.OA_obj_fun_mode = OA_obj_fun_mode
self.OA_obj_fun_weighting_type = OA_obj_fun_weighting_type
if manual_M is not None:
self.M = manual_M
self.init_para()
self.update_u()
# init Levenberg_Marquardt
# OA: Optimization Algorithm
oa = OA(impSpe=self.impSpe,
obj_fun=vogit_obj_fun_1,
obj_fun_mode=OA_obj_fun_mode,
obj_fun_weighting_type=OA_obj_fun_weighting_type,
iter_max=500,
add_C=self.add_C)
while (self.u >= u_optimum) and (self.M <= self.M_max):
if OA_obj_fun_mode == 'imag':
# oa.get_initial_para_arr(para_arr=np.array([RC[0] for RC in self.RC_para_list]))
pass
elif (OA_obj_fun_mode == 'real') or (OA_obj_fun_mode == 'both'):
if self.add_C:
para_arr = np.array([self.Rs, self.Ls, self.Cs] + [R for R in self.M_R_arr])
print('Para into OA:', para_arr)
oa.get_initial_para_arr(para_arr)
else:
oa.get_initial_para_arr(para_arr=np.array([self.Rs, self.Ls] + [R for R in self.M_R_arr]))
oa.iterate(timeConstant_arr=self.tao_arr)
tmp_para_arr = oa.para_arr # N * 1
print('Para out from OA:', tmp_para_arr)
# update R
self.update_para(tmp_para_arr)
# update u
self.update_u()
if manual_M is not None:
chiSquare, chiSquare_real, chiSquare_imag, real_residual_list, imag_residual_list = self.cal_various_criteria()
self.chiSquare_list = [chiSquare]
self.chiSquare_real_list = [chiSquare_real]
self.chiSquare_imag_list = [chiSquare_imag]
self.real_residual_list = [real_residual_list]
self.imag_residual_list = [imag_residual_list]
break
# The value of c (u_max) is a design parameter,
# however from the author’s experience c = 0.85 has proven to be an excellent choice.
if (self.u >= u_optimum) and (self.M <= self.M_max): # underfitting
# print and save the intermediate results of this iteration
print('M=', self.M, 'u=', self.u)
# print('M=', self.M, 'u=', self.u, 'Rs=', self.Rs, '(RC)s=', self.RC_para_list)
if save_iter == True:
if self.M == 1:
self.M_list = [1]
self.u_list = [copy.deepcopy(self.u)]
self.Rs_list = [copy.deepcopy(self.Rs)]
self.Ls_list = [copy.deepcopy(self.Ls)]
self.R_pack_list = [copy.deepcopy(self.M_R_arr)]
chiSquare, chiSquare_real, chiSquare_imag, real_residual_list, imag_residual_list = self.cal_various_criteria()
self.chiSquare_list = [chiSquare]
self.chiSquare_real_list = [chiSquare_real]
self.chiSquare_imag_list = [chiSquare_imag]
self.real_residual_list = [real_residual_list]
self.imag_residual_list = [imag_residual_list]
elif self.M > 1:
self.M_list.append(copy.deepcopy(self.M))
self.u_list.append(copy.deepcopy(self.u))
self.Rs_list.append(copy.deepcopy(self.Rs))
self.Ls_list.append(copy.deepcopy(self.Ls))
self.R_pack_list.append(copy.deepcopy(self.M_R_arr))
chiSquare, chiSquare_real, chiSquare_imag, real_residual_list, imag_residual_list = self.cal_various_criteria()
self.chiSquare_list.append(chiSquare)
self.chiSquare_real_list.append(chiSquare_real)
self.chiSquare_imag_list.append(chiSquare_imag)
self.real_residual_list.append(real_residual_list)
self.imag_residual_list.append(imag_residual_list)
print('M=', self.M, 'u=', self.u, chiSquare)
self.M += 1
self.init_para()
else:
print('M=', self.M, 'u=', self.u)
break
def simulate_Z(self):
"""
Simulate the impedance from the fitted parameters: Rs + Ls + M RC elements (+ optional Cs)
:return:
"""
self.z_sim_arr = np.empty(shape=(self.M, self.impSpe.z_arr.shape[0]), dtype=complex)
for i in range(self.M):
R = self.M_R_arr[i]
tao = self.tao_arr[i]
tmp_z_sim_list = [aRCb(w, R, tao/R) for w in self.w_arr]
self.z_sim_arr[i, :] = np.array(tmp_z_sim_list)
L_Z_sim_arr = np.array([ele_L(w, self.Ls) for w in self.w_arr]).reshape((1, self.w_arr.size))
if self.add_C:
# self.z_sim_arr[-1, :] = [ele_C(w, self.C) for w in self.w_arr]
c_z_arr = np.array([ele_C(w, self.Cs) for w in self.w_arr]).reshape((1, self.w_arr.shape[0]))
self.z_sim_arr = np.concatenate((self.z_sim_arr, L_Z_sim_arr, c_z_arr), axis=0)
else:
self.z_sim_arr = np.concatenate((self.z_sim_arr, L_Z_sim_arr), axis=0)
self.z_sim_arr = self.z_sim_arr.sum(axis=0)
self.z_sim_arr += self.Rs
def cal_various_criteria(self):
"""
calculate
weight = 1 / (z.real ** 2 + z.imag ** 2)
X^2, defined in paper0 - Eq 15
The X^2 reported by ZSimpWin cannot be computed here, because the underlying ECM is unknown, i.e. the number of fitted parameters is unknown, so the degrees of freedom of the system cannot be determined.
X^2 is therefore computed here as:
N = data points
X^2 = (1/N) * ∑{ weight * [(Z(w)i.real - Zi.real) ** 2 + (Z(w)i.imag - Zi.imag) **2] }
X^2_imag, defined in paper0 - Eq 20
X^2_real, computed analogously to X^2_imag
ΔReal, defined in paper0 - Eq 15
ΔImag, defined in paper0 - Eq 16
:return:
"""
chiSquare = 0.0
chiSquare_real = 0.0
chiSquare_imag = 0.0
imag_residual_list = []
real_residual_list = []
self.simulate_Z()
z_arr = self.impSpe.z_arr
modulus_weight_list = [1 / (z.real ** 2 + z.imag ** 2) for z in z_arr]
for weight, z_sim, z in zip(modulus_weight_list, self.z_sim_arr, z_arr):
real_residual_list.append(math.sqrt(weight) * (z.real - z_sim.real))
imag_residual_list.append(math.sqrt(weight) * (z.imag - z_sim.imag))
chiSquare_real += (1 / z_arr.shape[0]) * weight * ((z_sim.real - z.real) ** 2)
chiSquare_imag += (1 / z_arr.shape[0]) * weight * ((z_sim.imag - z.imag) ** 2)
chiSquare += chiSquare_imag + chiSquare_real
return chiSquare, chiSquare_real, chiSquare_imag, real_residual_list, imag_residual_list
def save2pkl(self, fp, fn):
pickle_file(obj=self, fn=fn, fp=fp)
# ---------------------------------- Test Vogit_2 on Lin-KK-Ex1_LIB_time_invariant ----------------------------------
# 1- load data
lib_res_fp = '../plugins_test/jupyter_code/rbp_files/2/example_data_sets/LIB_res'
ex1_data_dict = np.load(os.path.join(lib_res_fp, 'Ex1_LIB_time_invariant_res.npz'))
ex1_z_arr = ex1_data_dict['z_arr']
ex1_f_arr = ex1_data_dict['fre']
ex1_z_MS_sim_arr = ex1_data_dict['z_sim']
ex1_real_residual_arr = ex1_data_dict['real_residual']
ex1_imag_residual_arr = ex1_data_dict['imag_residual']
ex1_IS = IS_0()
ex1_IS.raw_z_arr = ex1_z_arr
ex1_IS.exp_area = 1.0
ex1_IS.z_arr = ex1_z_arr
ex1_IS.fre_arr = ex1_f_arr
ex1_IS.w_arr = ex1_IS.fre_arr * 2 * math.pi
ex1_vogit = Vogit_2(impSpe=ex1_IS, add_C=True)
OA_obj_fun_mode = 'both'
ex1_vogit.lin_KK(OA_obj_fun_mode=OA_obj_fun_mode, save_iter=False, u_optimum=0.85, manual_M=30)
# ex1_vogit.lin_KK(OA_obj_fun_mode=OA_obj_fun_mode, save_iter=False, u_optimum=0.85, manual_M=None)
# compare nyquist plots of MS-Lin-KK and Mine
ex1_z_MS_sim_list = ex1_z_MS_sim_arr.tolist()
ex1_vogit.simulate_Z()
z_pack_list = [ex1_z_arr.tolist(), ex1_z_MS_sim_list, ex1_vogit.z_sim_arr.tolist()]
nyquist_multiPlots_1(z_pack_list=z_pack_list, x_lim=[-0.015, 0.045], y_lim=[0, 0.02], plot_label_list=['Ideal IS', 'MS-Fit','Mine-Fit'])
# nyquist_multiPlots_1(z_pack_list=z_pack_list, x_lim=[0., 10], y_lim=[0, 20], plot_label_list=['Ideal IS', 'MS-Fit','Mine-Fit'])
# nyquist_plot_1(z_list=ex1_vogit.z_sim_arr, x_lim=[-10.015, 10.045], y_lim=[-10, 150.02])
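# Minimal follow-up sketch (uses only methods defined above): report the modulus-weighted
# chi-square of the manual M = 30 fit.
chiSq, chiSq_re, chiSq_im, _, _ = ex1_vogit.cal_various_criteria()
print('Vogit_2 (M = 30) chi-square:', chiSq, 'real part:', chiSq_re, 'imag part:', chiSq_im)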
# ---------------------------------- Test Vogit_2 on Lin-KK-Ex1_LIB_time_invariant ----------------------------------
class Vogit_1:
"""
Refer
papers:
paper1: A Linear Kronig-Kramers Transform Test for Immittance Data Validation
paper0: A Method for Improving the Robustness of linear Kramers-Kronig Validity Tests
Note:
The most basic Vogit circuit is
Rs-Ls-M*(RC)-[Cs]
Ls: inductive effects are considered by adding an additional inductance [1]
Cs:
option to add a serial capacitance that helps validate data with no low-frequency intercept;
due to their capacitive nature an additional capacitance is added to the ECM.
1- Of the complex / imag / real fits, only the complex fit is considered
2- Of the three weighting schemes, only modulus weighting is considered
3- Of adding a capacitance / inductance, only adding a capacitance is considered
"""
def __init__(self, impSpe, add_C=False):
"""
Since Vogit is a measurement model, an IS instance is always passed in before vogit is used
:param
impSpe: IS cls
M: int
number of (RC)
w: list(float)
RC_para_list:[
[R0, C0],
[R1, C1],
...
[Rm-1, Cm-1],
]
Rs: float
add_C: Bool
"""
self.impSpe = impSpe
self.w_arr = self.impSpe.w_arr
self.M = 1
"""
Paper1: As a rule of thumb we can conclude that, for the single fit and transformation, the v range should be
equal to the inverse w range with a distribution of 6 or 7 Tcs per decade. Here a slightly larger value of 8 time constants per decade is used
"""
self.M_max = int(math.log10(self.w_arr.max() / self.w_arr.min())) * 8
self.Rs = 1e-2
self.add_L = 1e-3
self.RC_para_list = None
self.add_C = add_C
if self.add_C:
self.C = 1e-3
def calc_timeConstant(self):
"""
timeConstant = tao = R * C
Refer:
A Method for Improving the Robustness of linear Kramers-Kronig Validity Tests
2.2. Distribution of Time Constants Eq 10-12
:return:
"""
sorted_w_arr = np.sort(copy.deepcopy(self.w_arr)) # small --> big number
w_min, w_max = sorted_w_arr[0], sorted_w_arr[-1]
# The time constant τ is written as tao in the code
tao_min = 1 / w_max
tao_max = 1 / w_min
tao_list = []
if self.M == 1:
tao_list.append(tao_min)
elif self.M == 2:
tao_list.extend([tao_min, tao_max])
elif self.M > 2:
tao_list.append(tao_min)
K = self.M - 1
for i in range(1, K):
tao = 10 ** (math.log10(tao_min) + i * math.log10(tao_max / tao_min) / (self.M - 1))
tao_list.append(tao)
tao_list.append(tao_max)
self.tao_arr = np.array(tao_list)
# def init_para(self):
# refer the initialization of impedance
# self.calc_timeConstant()
# self.Rs = min(np.real(self.impSpe.z_arr))
# R_list = [(max(np.real(self.impSpe.z_arr)) - min(np.real(self.impSpe.z_arr))) / self.M for i in range(self.M)]
# self.RC_para_list = [[Ri, self.tao_arr[i] / Ri] for i, Ri in enumerate(R_list)]
def init_para_0(self):
"""
        1- Since the time constants tao are already fixed and tao = Ri * Ci, only the M resistances Ri need to be
           initialized, i = 0, 1, 2, ..., M-1
        2- According to fig. 6 of the paper "A Linear Kronig-Kramers Transform Test for Immittance Data Validation",
           the fitted Ri mostly alternate in sign, so the initial Ri are: R0=1, R1=-1, R2=1, R3=-1, ...
:return:
"""
        # First initialization of the RC elements (M = 1)
if self.RC_para_list is None:
self.calc_timeConstant()
Ri_list = []
for i in range(self.M):
# even number: 0,2,4, Ri = 1
if i % 2 == 0:
Ri = 1.0
# odd number: 1,3,5, Ri = -1.0
else:
Ri = -1.0
Ri_list.append(Ri)
self.RC_para_list = [[Ri, self.tao_arr[i] / Ri] for i, Ri in enumerate(Ri_list)]
self.Rs = self.cal_Rs()
else:
            # M > 1: when M increases, keep the previous fit results and only initialize the newly added RC elements
self.calc_timeConstant()
RC_para_existed_len = len(self.RC_para_list)
add_R_list = []
for i in range(RC_para_existed_len, self.M):
# even number: 0,2,4, Ri = 1
if i % 2 == 0:
R = 1.0
# odd number: 1,3,5, Ri = -1.0
else:
R = -1.0
add_R_list.append(R)
old_RC_para_list = copy.deepcopy(self.RC_para_list)
self.RC_para_list = []
            # Previously fitted R values
for i, RC in enumerate(old_RC_para_list):
self.RC_para_list.append([RC[0], self.tao_arr[i] / RC[0]])
            # Newly added R values
for i, R in enumerate(add_R_list):
self.RC_para_list.append([R, self.tao_arr[RC_para_existed_len + i] / R])
self.Rs = self.cal_Rs()
# def connect_circuit(self):
# """
    #     By default, Vogit = Rs + (RC)_0 + (RC)_1 + ... + (RC)_M-1
# :return:
# """
# pass
def cal_Rs(self):
"""
        Compute Rs according to paper1, Eq. 7
:return:
"""
z_arr = self.impSpe.z_arr
weight_arr = np.array([1 / (z.real ** 2 + z.imag ** 2) for z in z_arr])
Rs = 0.0
for i, weight in enumerate(weight_arr):
res_in_square_bracket = z_arr[i].real - \
sum([self.RC_para_list[k][0] / (1 + (self.w_arr[i] * self.tao_arr[k]) ** 2) for k in
range(self.M)])
Rs += weight * res_in_square_bracket
Rs /= weight_arr[:-1].sum()
return Rs
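    # In formula form, the loop above evaluates (a sketch of the paper1-Eq7 structure with modulus weighting):
    #     Rs = sum_i{ w_i * [ Re(Z_i) - sum_k R_k / (1 + (omega_i * tao_k) ** 2) ] } / sum_i{ w_i }
    # with w_i = 1 / |Z_i| ** 2; note that the denominator as coded uses weight_arr[:-1].sum(), i.e. the last weight is excluded.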
def update_para(self, tmp_para_arr):
"""
R_list / R_arr:
[Rs, R0, R1, ..., R_M-1]
        Each iteration of the optimization algorithm produces new parameter values that replace the previous R values;
        the corresponding capacitances C are updated at the same time (C = tao / R)
:return:
"""
if self.OA_obj_fun_mode == 'imag':
pass
# C_list = [tao / R for tao, R in zip(self.tao_arr, tmp_para_arr)]
# self.RC_para_list = [[R, C] for R, C in zip(tmp_para_arr, C_list)]
elif (self.OA_obj_fun_mode == 'real') or (self.OA_obj_fun_mode == 'both'):
# para_arr = [*Rs*, *Ls*, (*Cs*), R0, R1, R2, ..., R_M-1]
self.Rs = tmp_para_arr[0]
C_start_index = 1
if self.add_C:
# para_arr = [*Rs*, *Ls*, *Cs*, R0, R1, R2, ..., R_M-1]
self.C = tmp_para_arr[1]
C_start_index = 2
C_list = [tao / R for tao, R in zip(self.tao_arr, tmp_para_arr[C_start_index:])]
self.RC_para_list = [[R, C] for R, C in zip(tmp_para_arr[C_start_index:], C_list)]
def update_u(self):
"""
refer paper0-eq21
:return:
"""
positive_R_list = []
negtive_R_list = []
for RC_list in self.RC_para_list:
R = RC_list[0]
if R >= 0:
positive_R_list.append(R)
elif R < 0:
negtive_R_list.append(R)
self.u = 1 - abs(sum(negtive_R_list)) / sum(positive_R_list)
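    # Illustrative example (hypothetical numbers): for fitted R values [+2.0, -0.5, +1.0],
    # sum(positive R) = 3.0 and |sum(negative R)| = 0.5, so u = 1 - 0.5 / 3.0 ≈ 0.833.
    # u close to 1 indicates under-fitting (few negative R_k); u approaching 0 indicates over-fitting.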
def cal_Zimag_residual(self):
pass
def lin_KK(self, OA=Levenberg_Marquart_0, OA_obj_fun_mode='both', OA_obj_fun_weighting_type='modulus',
save_iter=False, u_optimum=0.85, manual_M=None):
self.OA_obj_fun_mode = OA_obj_fun_mode
self.OA_obj_fun_weighting_type = OA_obj_fun_weighting_type
if manual_M is not None:
self.M = manual_M
        self.init_para_0()
self.update_u()
# init Levenberg_Marquardt
# OA: Optimization Algorithm
oa = OA(impSpe=self.impSpe,
obj_fun=vogit_obj_fun_0,
# obj_fun=cal_ChiSquare_pointWise_0,
obj_fun_mode=OA_obj_fun_mode,
obj_fun_weighting_type=OA_obj_fun_weighting_type,
iter_max=1000,
add_C=True)
while (self.u >= u_optimum) and (self.M <= self.M_max):
if OA_obj_fun_mode == 'imag':
oa.get_initial_para_arr(para_arr=np.array([RC[0] for RC in self.RC_para_list]))
elif (OA_obj_fun_mode == 'real') or (OA_obj_fun_mode == 'both'):
if self.add_C:
para_arr = np.array([self.Rs] + [self.C] + [RC[0] for RC in self.RC_para_list])
oa.get_initial_para_arr(para_arr)
else:
oa.get_initial_para_arr(para_arr=np.array([self.Rs] + [RC[0] for RC in self.RC_para_list]))
"""
            Why oa.iterate is given z_arr, w_arr and tao_arr:
            z_arr is the observed data; it is compared with the Voigt fit to compute the residuals
            w_arr and tao_arr fix the structure of the Voigt model
            inside L-M, every time an R changes its C is recalculated as C = tao / R
            only the R of each RC element is an unknown to be fitted
"""
oa.iterate(timeConstant_arr=self.tao_arr)
tmp_para_arr = oa.para_arr # N * 1
# update RC
self.update_para(tmp_para_arr)
# update u
self.update_u()
if manual_M is not None:
chiSquare, chiSquare_real, chiSquare_imag, real_residual_list, imag_residual_list = self.cal_various_criteria()
self.chiSquare_list = [chiSquare]
self.chiSquare_real_list = [chiSquare_real]
self.chiSquare_imag_list = [chiSquare_imag]
self.real_residual_list = [real_residual_list]
self.imag_residual_list = [imag_residual_list]
break
# The value of c (u_max) is a design parameter,
# however from the author’s experience c = 0.85 has proven to be an excellent choice.
if (self.u >= u_optimum) and (self.M <= self.M_max): # underfitting
                # Print and, optionally, save the intermediate results of this iteration
print('M=', self.M, 'u=', self.u)
# print('M=', self.M, 'u=', self.u, 'Rs=', self.Rs, '(RC)s=', self.RC_para_list)
if save_iter == True:
if self.M == 1:
self.M_list = [1]
self.u_list = [copy.deepcopy(self.u)]
self.Rs_list = [copy.deepcopy(self.Rs)]
self.RC_para_pack_list = [copy.deepcopy(self.RC_para_list)]
chiSquare, chiSquare_real, chiSquare_imag, real_residual_list, imag_residual_list = self.cal_various_criteria()
self.chiSquare_list = [chiSquare]
self.chiSquare_real_list = [chiSquare_real]
self.chiSquare_imag_list = [chiSquare_imag]
self.real_residual_list = [real_residual_list]
self.imag_residual_list = [imag_residual_list]
elif self.M > 1:
self.M_list.append(copy.deepcopy(self.M))
                        self.u_list.append(copy.deepcopy(self.u))
self.Rs_list.append(copy.deepcopy(self.Rs))
self.RC_para_pack_list.append(copy.deepcopy(self.RC_para_list))
chiSquare, chiSquare_real, chiSquare_imag, real_residual_list, imag_residual_list = self.cal_various_criteria()
self.chiSquare_list.append(chiSquare)
self.chiSquare_real_list.append(chiSquare_real)
self.chiSquare_imag_list.append(chiSquare_imag)
self.real_residual_list.append(real_residual_list)
self.imag_residual_list.append(imag_residual_list)
print('M=', self.M, 'u=', self.u, chiSquare)
self.M += 1
                self.init_para_0()
else:
print('M=', self.M, 'u=', self.u)
break
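    # Summary of the loop above: starting from M = 1, fit the M (RC) elements with the chosen optimizer,
    # recompute u (paper0-Eq21), and add one (RC) element per pass while u >= u_optimum (default 0.85)
    # and M <= M_max; when manual_M is given, M is fixed and at most a single fit is performed
    # (the manual_M branch breaks out of the loop).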
def simulate_Z(self):
"""
        Simulate the impedance spectrum from the fitted parameters: Rs + M * (RC) [+ Cs]
:return:
"""
self.z_sim_arr = np.empty(shape=(self.M, self.impSpe.z_arr.shape[0]), dtype=complex)
for i in range(self.M):
R, C0 = self.RC_para_list[i]
tmp_z_sim_list = [aRCb(w, R, C0) for w in self.w_arr]
self.z_sim_arr[i, :] = np.array(tmp_z_sim_list)
if self.add_C:
# self.z_sim_arr[-1, :] = [ele_C(w, self.C) for w in self.w_arr]
c_z_arr = np.array([ele_C(w, self.C) for w in self.w_arr]).reshape((1, self.w_arr.shape[0]))
self.z_sim_arr = np.concatenate((self.z_sim_arr, c_z_arr), axis=0)
self.z_sim_arr = self.z_sim_arr.sum(axis=0)
else:
self.z_sim_arr = self.z_sim_arr.sum(axis=0)
self.z_sim_arr += self.Rs
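    # The simulated spectrum therefore is (assuming aRCb(w, R, C) returns the parallel-RC impedance
    # R / (1 + 1j * w * R * C) and ele_C(w, C) returns 1 / (1j * w * C); both helpers are defined elsewhere):
    #     Z_sim(w) = Rs + sum_k R_k / (1 + 1j * w * tao_k)  [+ 1 / (1j * w * Cs) if add_C]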
def cal_various_criteria(self):
"""
calculate
weight = 1 / (z.real ** 2 + z.imag ** 2)
X^2, defined in paper0 - Eq 15
        The ZSimpWin-style X^2 cannot be computed here, because the underlying ECM is unknown ==> the number of
        parameters to be fitted is unknown ==> the degrees of freedom of the system cannot be determined.
        X^2 is therefore computed as:
            N = number of data points
            X^2 = (1/N) * ∑{ weight * [(Z(w)i.real - Zi.real) ** 2 + (Z(w)i.imag - Zi.imag) ** 2] }
        X^2_imag, defined in paper0 - Eq 20
        X^2_real, computed analogously to X^2_imag
        ΔReal, defined in paper0 - Eq 15
        ΔImag, defined in paper0 - Eq 16
:return:
"""
chiSquare = 0.0
chiSquare_real = 0.0
chiSquare_imag = 0.0
imag_residual_list = []
real_residual_list = []
self.simulate_Z()
z_arr = self.impSpe.z_arr
modulus_weight_list = [1 / (z.real ** 2 + z.imag ** 2) for z in z_arr]
for weight, z_sim, z in zip(modulus_weight_list, self.z_sim_arr, z_arr):
real_residual_list.append(math.sqrt(weight) * (z.real - z_sim.real))
imag_residual_list.append(math.sqrt(weight) * (z.imag - z_sim.imag))
chiSquare_real += (1 / z_arr.shape[0]) * weight * ((z_sim.real - z.real) ** 2)
chiSquare_imag += (1 / z_arr.shape[0]) * weight * ((z_sim.imag - z.imag) ** 2)
            # accumulate the per-point contribution so that chiSquare == chiSquare_real + chiSquare_imag (see docstring)
            chiSquare += (1 / z_arr.shape[0]) * weight * ((z_sim.real - z.real) ** 2 + (z_sim.imag - z.imag) ** 2)
return chiSquare, chiSquare_real, chiSquare_imag, real_residual_list, imag_residual_list
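    # With modulus weighting w_i = 1 / |Z_i| ** 2, the residuals collected above reduce to the relative residuals
    #     delta_real_i = (Re(Z_i) - Re(Z_sim_i)) / |Z_i|,   delta_imag_i = (Im(Z_i) - Im(Z_sim_i)) / |Z_i|
    # which is the form usually plotted in a Lin-KK residual plot.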
def save2pkl(self, fp, fn):
pickle_file(obj=self, fn=fn, fp=fp)
# ---------------------------------- Test Vogit_1 on Lin-KK-Ex1_LIB_time_invariant ----------------------------------
# 1- load data
# lib_res_fp = '../plugins_test/jupyter_code/rbp_files/2/example_data_sets/LIB_res'
# ex1_data_dict = np.load(os.path.join(lib_res_fp, 'Ex1_LIB_time_invariant_res.npz'))
# ex1_z_arr = ex1_data_dict['z_arr']
# ex1_f_arr = ex1_data_dict['fre']
# ex1_z_MS_sim_arr = ex1_data_dict['z_sim']
# ex1_real_residual_arr = ex1_data_dict['real_residual']
# ex1_imag_residual_arr = ex1_data_dict['imag_residual']
#
# ex1_IS = IS_0()
# ex1_IS.raw_z_arr = ex1_z_arr
# ex1_IS.exp_area = 1.0
# ex1_IS.z_arr = ex1_z_arr
# ex1_IS.fre_arr = ex1_f_arr
# ex1_IS.w_arr = ex1_IS.fre_arr * 2 * math.pi
#
# ex1_vogit = Vogit_1(impSpe=ex1_IS, add_C=False)
# # ex1_vogit = Vogit_1(impSpe=ex1_IS, add_C=True)
# OA_obj_fun_mode = 'both'
# ex1_vogit.lin_KK(OA_obj_fun_mode=OA_obj_fun_mode, save_iter=True, u_optimum=0.85, manual_M=None)
#
# # compare nyquist plots of MS-Lin-KK and Mine
# ex1_z_MS_sim_list = ex1_z_MS_sim_arr.tolist()
# z_pack_list = [ex1_z_arr.tolist(), ex1_z_MS_sim_list, ex1_vogit.z_sim_arr.tolist()]
# nyquist_multiPlots_1(z_pack_list=z_pack_list, x_lim=[0.015, 0.045], y_lim=[0, 0.02], plot_label_list=['Ideal IS', 'MS-Fit','Mine-Fit'])
# nyquist_plot_1(z_list=ex1_vogit.z_sim_arr, x_lim=[-10.015, 10.045], y_lim=[-10, 150.02])
# ---------------------------------- Test Vogit_1 on Lin-KK-Ex1_LIB_time_invariant ----------------------------------
# ------------------------------------- Test Vogit_1 on my simulated/ecm_001/ -------------------------------------
# impS = IS_0()
# dpfc_src\datasets\goa_datasets\simulated\ecm_001\2020_07_04_sim_ecm_001_pickle.file
# impS.read_from_simPickle(fp='../datasets/goa_datasets/simulated/ecm_001/',
# fn='2020_07_04_sim_ecm_001_pickle.file')
# vogit_1 = Vogit_1(impSpe=impS, add_C=True)
# OA_obj_fun_mode = 'both'
# print(OA_obj_fun_mode)
# vogit_1.lin_KK(OA_obj_fun_mode=OA_obj_fun_mode)
# print('M=', vogit_1.M, 'u=',vogit_1.u, 'Rs=',vogit_1.Rs,'(RC)s=',vogit_1.RC_para_list)
# python vogit_0.py
# ------------------------------------- Test Vogit_1 on my simulated/ecm_001/ -------------------------------------
class Vogit_0:
"""
Refer
papers:
paper1: A Linear Kronig-Kramers Transform Test for Immittance Data Validation
paper0: A Method for Improving the Robustness of linear Kramers-Kronig Validity Tests
"""
def __init__(self, impSpe):
"""
        Since the Voigt circuit is a measurement model, an IS (impedance spectrum) object must be passed in before it can be used
:param
impSpe: IS cls
M: int
number of (RC)
w: list(float)
RC_para_list:[
[R0, C0],
[R1, C1],
...
[Rm-1, Cm-1],
]
Rs: float
add_C: Bool
add_L: Bool
"""
self.impSpe = impSpe
self.w_arr = self.impSpe.w_arr
self.M = 1
"""
Paper1: As a rule of thumb we can conclude that, for the single fit and transformation, the v range should be
        equal to the inverse w range with a distribution of 6 or 7 Tcs per decade. A slightly larger value is used here: 8 per decade (M_max = 8 * decades)
"""
self.M_max = int(math.log10(self.w_arr.max() / self.w_arr.min())) * 8
self.Rs = 0
self.RC_para_list = None
def calc_timeConstant(self):
"""
timeConstant = tao = R * C
Refer:
A Method for Improving the Robustness of linear Kramers-Kronig Validity Tests
2.2. Distribution of Time Constants Eq 10-12
:return:
"""
sorted_w_arr = np.sort(copy.deepcopy(self.w_arr)) # small --> big number
w_min, w_max = sorted_w_arr[0], sorted_w_arr[-1]
        # Time constant τ is denoted tao here
tao_min = 1 / w_max
tao_max = 1 / w_min
tao_list = []
if self.M == 1:
tao_list.append(tao_min)
elif self.M == 2:
tao_list.extend([tao_min, tao_max])
elif self.M > 2:
tao_list.append(tao_min)
K = self.M - 1
for i in range(1, K):
tao = 10 ** (math.log10(tao_min) + i * math.log10(tao_max / tao_min) / (self.M - 1))
tao_list.append(tao)
tao_list.append(tao_max)
self.tao_arr = np.array(tao_list)
def init_para(self):
# refer the initialization of impedance
self.calc_timeConstant()
self.Rs = min(np.real(self.impSpe.z_arr))
R_list = [(max(np.real(self.impSpe.z_arr)) - min(np.real(self.impSpe.z_arr))) / self.M for i in range(self.M)]
self.RC_para_list = [[Ri, self.tao_arr[i] / Ri] for i, Ri in enumerate(R_list)]
def init_para_0(self):
"""
        1- Since the time constants tao are already fixed and tao = Ri * Ci, only the M resistances Ri need to be
           initialized, i = 0, 1, 2, ..., M-1
        2- According to fig. 6 of the paper "A Linear Kronig-Kramers Transform Test for Immittance Data Validation",
           the fitted Ri mostly alternate in sign, so the initial Ri are: R0=1, R1=-1, R2=1, R3=-1, ...
:return:
"""
        # First initialization of the RC elements (M = 1)
if self.RC_para_list is None:
self.calc_timeConstant()
Ri_list = []
for i in range(self.M):
# even number: 0,2,4, Ri = 1
if i % 2 == 0:
Ri = 1.0
# odd number: 1,3,5, Ri = -1.0
else:
Ri = -1.0
Ri_list.append(Ri)
self.RC_para_list = [[Ri, self.tao_arr[i] / Ri] for i, Ri in enumerate(Ri_list)]
self.Rs = self.cal_Rs()
else:
            # M > 1: when M increases, keep the previous fit results and only initialize the newly added RC elements
self.calc_timeConstant()
RC_para_existed_len = len(self.RC_para_list)
add_R_list = []
for i in range(RC_para_existed_len, self.M):
# even number: 0,2,4, Ri = 1
if i % 2 == 0:
R = 1.0
# odd number: 1,3,5, Ri = -1.0
else:
R = -1.0
add_R_list.append(R)
old_RC_para_list = copy.deepcopy(self.RC_para_list)
self.RC_para_list = []
            # Previously fitted R values
for i, RC in enumerate(old_RC_para_list):
self.RC_para_list.append([RC[0], self.tao_arr[i] / RC[0]])
            # Newly added R values
for i, R in enumerate(add_R_list):
self.RC_para_list.append([R, self.tao_arr[RC_para_existed_len+i] / R])
self.Rs = self.cal_Rs()
def connect_circuit(self):
"""
        By default, Vogit = Rs + (RC)_0 + (RC)_1 + ... + (RC)_M-1
:return:
"""
pass
def cal_Rs(self):
"""
        Compute Rs according to paper1, Eq. 7
:return:
"""
z_arr = self.impSpe.z_arr
weight_arr = np.array([1 / (z.real ** 2 + z.imag ** 2) for z in z_arr])
Rs = 0.0
for i, weight in enumerate(weight_arr):
res_in_square_bracket = z_arr[i].real - \
sum([self.RC_para_list[k][0] / (1 + (self.w_arr[i] * self.tao_arr[k])**2) for k in range(self.M)])
Rs += weight * res_in_square_bracket
Rs /= weight_arr[:-1].sum()
return Rs
def update_para(self, R_arr):
"""
R_list / R_arr:
[Rs, R0, R1, ..., R_M-1]
        Each iteration of the optimization algorithm produces new parameter values that replace the previous R values;
        the corresponding capacitances C are updated at the same time (C = tao / R)
:return:
"""
if self.OA_obj_fun_mode == 'imag':
C_list = [tao / R for tao, R in zip(self.tao_arr, R_arr)]
self.RC_para_list = [[R, C] for R, C in zip(R_arr, C_list)]
elif (self.OA_obj_fun_mode == 'real') or (self.OA_obj_fun_mode == 'both'):
self.Rs = R_arr[0]
C_list = [tao / R for tao, R in zip(self.tao_arr, R_arr[1:])]
self.RC_para_list = [[R, C] for R, C in zip(R_arr[1:], C_list)]
def update_u(self):
"""
refer paper0-eq21
:return:
"""
positive_R_list = []
negtive_R_list = []
for RC_list in self.RC_para_list:
R = RC_list[0]
if R >= 0:
positive_R_list.append(R)
elif R < 0:
negtive_R_list.append(R)
self.u = 1 - abs(sum(negtive_R_list)) / sum(positive_R_list)
def cal_Zimag_residual(self):
pass
def lin_KK(self, OA=Levenberg_Marquart_0, OA_obj_fun_mode='both', OA_obj_fun_weighting_type='modulus',
save_iter=False, u_optimum=0.85, manual_M=None):
self.OA_obj_fun_mode = OA_obj_fun_mode
self.OA_obj_fun_weighting_type = OA_obj_fun_weighting_type
if manual_M is not None:
self.M = manual_M
self.calc_timeConstant()
self.init_para()
self.update_u()
# init Levenberg_Marquardt
# OA: Optimization Algorithm
oa = OA(impSpe=self.impSpe,
obj_fun=vogit_obj_fun_0,
# obj_fun=cal_ChiSquare_pointWise_0,
obj_fun_mode=OA_obj_fun_mode,
obj_fun_weighting_type=OA_obj_fun_weighting_type,
iter_max=100)
while (self.u >= u_optimum) and (self.M <= self.M_max):
if OA_obj_fun_mode == 'imag':
oa.get_initial_para_arr(para_arr=np.array([RC[0] for RC in self.RC_para_list]))
elif (OA_obj_fun_mode == 'real') or (OA_obj_fun_mode == 'both'):
oa.get_initial_para_arr(para_arr=np.array([self.Rs] + [RC[0] for RC in self.RC_para_list]))
"""
            Why oa.iterate is given z_arr, w_arr and tao_arr:
            z_arr is the observed data; it is compared with the Voigt fit to compute the residuals
            w_arr and tao_arr fix the structure of the Voigt model
            inside L-M, every time an R changes its C is recalculated as C = tao / R
            only the R of each RC element is an unknown to be fitted
"""
oa.iterate(timeConstant_arr=self.tao_arr)
R_arr = oa.para_arr # N * 1
# update RC
self.update_para(R_arr)
# update u
self.update_u()
if manual_M is not None:
chiSquare, chiSquare_real, chiSquare_imag, real_residual_list, imag_residual_list = self.cal_various_criteria()
self.chiSquare_list = [chiSquare]
self.chiSquare_real_list = [chiSquare_real]
self.chiSquare_imag_list = [chiSquare_imag]
self.real_residual_list = [real_residual_list]
self.imag_residual_list = [imag_residual_list]
break
# The value of c (u_max) is a design parameter,
# however from the author’s experience c = 0.85 has proven to be an excellent choice.
if (self.u >= u_optimum) and (self.M <= self.M_max): # underfitting
                # Print and, optionally, save the intermediate results of this iteration
print('M=', self.M, 'u=', self.u)
# print('M=', self.M, 'u=', self.u, 'Rs=', self.Rs, '(RC)s=', self.RC_para_list)
if save_iter == True:
if self.M == 1:
self.M_list = [1]
self.u_list = [copy.deepcopy(self.u)]
self.Rs_list = [copy.deepcopy(self.Rs)]
self.RC_para_pack_list = [copy.deepcopy(self.RC_para_list)]
chiSquare, chiSquare_real, chiSquare_imag, real_residual_list, imag_residual_list = self.cal_various_criteria()
self.chiSquare_list = [chiSquare]
self.chiSquare_real_list = [chiSquare_real]
self.chiSquare_imag_list = [chiSquare_imag]
self.real_residual_list = [real_residual_list]
self.imag_residual_list = [imag_residual_list]
elif self.M > 1:
self.M_list.append(copy.deepcopy(self.M))
                        self.u_list.append(copy.deepcopy(self.u))
self.Rs_list.append(copy.deepcopy(self.Rs))
self.RC_para_pack_list.append(copy.deepcopy(self.RC_para_list))
chiSquare, chiSquare_real, chiSquare_imag, real_residual_list, imag_residual_list = self.cal_various_criteria()
self.chiSquare_list.append(chiSquare)
self.chiSquare_real_list.append(chiSquare_real)
self.chiSquare_imag_list.append(chiSquare_imag)
self.real_residual_list.append(real_residual_list)
self.imag_residual_list.append(imag_residual_list)
print('M=', self.M, 'u=', self.u, chiSquare)
self.M += 1
self.init_para()
else:
print('M=', self.M, 'u=', self.u)
break
def simulate_Z(self):
"""
        Simulate the impedance spectrum from the fitted parameters: Rs + M * (RC)
:return:
"""
self.z_sim_arr = np.empty(shape=(self.M, self.impSpe.z_arr.shape[0]), dtype=complex)
for i in range(self.M):
R, C = self.RC_para_list[i]
tmp_z_sim_list = [aRCb(w, R, C) for w in self.w_arr]
self.z_sim_arr[i, :] = np.array(tmp_z_sim_list)
self.z_sim_arr = self.z_sim_arr.sum(axis=0)
self.z_sim_arr += self.Rs
def cal_various_criteria(self):
"""
calculate
weight = 1 / (z.real ** 2 + z.imag ** 2)
X^2, defined in paper0 - Eq 15
        The ZSimpWin-style X^2 cannot be computed here, because the underlying ECM is unknown ==> the number of
        parameters to be fitted is unknown ==> the degrees of freedom of the system cannot be determined.
        X^2 is therefore computed as:
            N = number of data points
            X^2 = (1/N) * ∑{ weight * [(Z(w)i.real - Zi.real) ** 2 + (Z(w)i.imag - Zi.imag) ** 2] }
        X^2_imag, defined in paper0 - Eq 20
        X^2_real, computed analogously to X^2_imag
        ΔReal, defined in paper0 - Eq 15
        ΔImag, defined in paper0 - Eq 16
:return:
"""
chiSquare = 0.0
chiSquare_real = 0.0
chiSquare_imag = 0.0
imag_residual_list = []
real_residual_list = []
self.simulate_Z()
z_arr = self.impSpe.z_arr
modulus_weight_list = [1 / (z.real ** 2 + z.imag ** 2) for z in z_arr]
for weight, z_sim, z in zip(modulus_weight_list, self.z_sim_arr, z_arr):
real_residual_list.append(math.sqrt(weight) * (z.real - z_sim.real))
imag_residual_list.append(math.sqrt(weight) * (z.imag - z_sim.imag))
chiSquare_real += (1 / z_arr.shape[0]) * weight * ((z_sim.real - z.real)**2)
chiSquare_imag += (1 / z_arr.shape[0]) * weight * ((z_sim.imag - z.imag)**2)
            # accumulate the per-point contribution so that chiSquare == chiSquare_real + chiSquare_imag (see docstring)
            chiSquare += (1 / z_arr.shape[0]) * weight * ((z_sim.real - z.real) ** 2 + (z_sim.imag - z.imag) ** 2)
return chiSquare, chiSquare_real, chiSquare_imag, real_residual_list, imag_residual_list
def save2pkl(self, fp, fn):
pickle_file(obj=self, fn=fn, fp=fp)
# impS = IS_0()
# dpfc_src\datasets\goa_datasets\simulated\ecm_001\2020_07_04_sim_ecm_001_pickle.file
# impS.read_from_simPickle(fp='../datasets/goa_datasets/simulated/ecm_001/',
# fn='2020_07_04_sim_ecm_001_pickle.file')
# vogit = Vogit(impSpe=impS)
# OA_obj_fun_mode = 'both'
# print(OA_obj_fun_mode)
# vogit.lin_KK(OA_obj_fun_mode=OA_obj_fun_mode)
# print('M=', vogit.M, 'u=',vogit.u, 'Rs=',vogit.Rs,'(RC)s=',vogit.RC_para_list)
# python vogit_0.py | 40.532801 | 137 | 0.541524 | 6,412 | 45,721 | 3.589364 | 0.062539 | 0.020639 | 0.02607 | 0.029807 | 0.953856 | 0.948946 | 0.945514 | 0.942516 | 0.936433 | 0.933869 | 0 | 0.027545 | 0.338575 | 45,721 | 1,128 | 138 | 40.532801 | 0.73321 | 0.252772 | 0 | 0.842546 | 0 | 0 | 0.010827 | 0.003103 | 0 | 0 | 0 | 0 | 0 | 1 | 0.058626 | false | 0.01005 | 0.020101 | 0 | 0.093802 | 0.018425 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
3d592abca8af36e0eda319004ae88d234c46c483 | 241 | py | Python | extensions/.stubs/clrclasses/System/Configuration/Assemblies/__init__.py | vicwjb/Pycad | 7391cd694b7a91ad9f9964ec95833c1081bc1f84 | [
"MIT"
] | 1 | 2020-03-25T03:27:24.000Z | 2020-03-25T03:27:24.000Z | extensions/.stubs/clrclasses/System/Configuration/Assemblies/__init__.py | vicwjb/Pycad | 7391cd694b7a91ad9f9964ec95833c1081bc1f84 | [
"MIT"
] | null | null | null | extensions/.stubs/clrclasses/System/Configuration/Assemblies/__init__.py | vicwjb/Pycad | 7391cd694b7a91ad9f9964ec95833c1081bc1f84 | [
"MIT"
] | null | null | null | from __clrclasses__.System.Configuration.Assemblies import AssemblyHash
from __clrclasses__.System.Configuration.Assemblies import AssemblyHashAlgorithm
from __clrclasses__.System.Configuration.Assemblies import AssemblyVersionCompatibility
| 60.25 | 87 | 0.912863 | 21 | 241 | 9.904762 | 0.428571 | 0.201923 | 0.288462 | 0.475962 | 0.706731 | 0.706731 | 0 | 0 | 0 | 0 | 0 | 0 | 0.049793 | 241 | 3 | 88 | 80.333333 | 0.908297 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 9 |
3d5b90e79b1ec9a4db5f17eb3d0c6cc1ae947e2c | 116,029 | py | Python | napalm_yang/models/openconfig/network_instances/network_instance/afts/aft/entries/entry/match/state/__init__.py | ckishimo/napalm-yang | 8f2bd907bd3afcde3c2f8e985192de74748baf6c | [
"Apache-2.0"
] | 64 | 2016-10-20T15:47:18.000Z | 2021-11-11T11:57:32.000Z | napalm_yang/models/openconfig/network_instances/network_instance/afts/aft/entries/entry/match/state/__init__.py | ckishimo/napalm-yang | 8f2bd907bd3afcde3c2f8e985192de74748baf6c | [
"Apache-2.0"
] | 126 | 2016-10-05T10:36:14.000Z | 2019-05-15T08:43:23.000Z | napalm_yang/models/openconfig/network_instances/network_instance/afts/aft/entries/entry/match/state/__init__.py | ckishimo/napalm-yang | 8f2bd907bd3afcde3c2f8e985192de74748baf6c | [
"Apache-2.0"
] | 63 | 2016-11-07T15:23:08.000Z | 2021-09-22T14:41:16.000Z | # -*- coding: utf-8 -*-
from operator import attrgetter
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType
from pyangbind.lib.yangtypes import RestrictedClassType
from pyangbind.lib.yangtypes import TypedListType
from pyangbind.lib.yangtypes import YANGBool
from pyangbind.lib.yangtypes import YANGListType
from pyangbind.lib.yangtypes import YANGDynClass
from pyangbind.lib.yangtypes import ReferenceType
from pyangbind.lib.base import PybindBase
from collections import OrderedDict
from decimal import Decimal
from bitarray import bitarray
import six
# PY3 support of some PY2 keywords (needs improved)
if six.PY3:
import builtins as __builtin__
long = int
elif six.PY2:
import __builtin__
class state(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module openconfig-network-instance - based on the path /network-instances/network-instance/afts/aft/entries/entry/match/state. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: Operational state parameters for match criteria of the
AFT entry
"""
__slots__ = (
"_path_helper",
"_extmethods",
"__ip_prefix",
"__mac_address",
"__mpls_label",
"__mpls_tc",
"__ip_dscp",
"__ip_protocol",
"__l4_src_port",
"__l4_dst_port",
)
_yang_name = "state"
_pybind_generated_by = "container"
def __init__(self, *args, **kwargs):
self._path_helper = False
self._extmethods = False
self.__ip_prefix = YANGDynClass(
base=[
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])/(([0-9])|([1-2][0-9])|(3[0-2]))"
},
),
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "((:|[0-9a-fA-F]{0,4}):)([0-9a-fA-F]{0,4}:){0,5}((([0-9a-fA-F]{0,4}:)?(:|[0-9a-fA-F]{0,4}))|(((25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])\\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])))(/(([0-9])|([0-9]{2})|(1[0-1][0-9])|(12[0-8])))"
},
),
],
is_leaf=True,
yang_name="ip-prefix",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:ip-prefix",
is_config=False,
)
self.__mac_address = YANGDynClass(
base=RestrictedClassType(
base_type=six.text_type,
restriction_dict={"pattern": "[0-9a-fA-F]{2}(:[0-9a-fA-F]{2}){5}"},
),
is_leaf=True,
yang_name="mac-address",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="yang:mac-address",
is_config=False,
)
self.__mpls_label = YANGDynClass(
base=[
RestrictedClassType(
base_type=RestrictedClassType(
base_type=long,
restriction_dict={"range": ["0..4294967295"]},
int_size=32,
),
restriction_dict={"range": ["16..1048575"]},
),
RestrictedClassType(
base_type=six.text_type,
restriction_type="dict_key",
restriction_arg={
"IPV4_EXPLICIT_NULL": {"value": 0},
"ROUTER_ALERT": {"value": 1},
"IPV6_EXPLICIT_NULL": {"value": 2},
"IMPLICIT_NULL": {"value": 3},
"ENTROPY_LABEL_INDICATOR": {"value": 7},
"NO_LABEL": {},
},
),
],
is_leaf=True,
yang_name="mpls-label",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="oc-mplst:mpls-label",
is_config=False,
)
self.__mpls_tc = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..255"]}, int_size=8
),
restriction_dict={"range": ["0..7"]},
),
is_leaf=True,
yang_name="mpls-tc",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="uint8",
is_config=False,
)
self.__ip_dscp = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..255"]}, int_size=8
),
restriction_dict={"range": ["0..63"]},
),
is_leaf=True,
yang_name="ip-dscp",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:dscp",
is_config=False,
)
self.__ip_protocol = YANGDynClass(
base=[
RestrictedClassType(
base_type=RestrictedClassType(
base_type=int,
restriction_dict={"range": ["0..255"]},
int_size=8,
),
restriction_dict={"range": ["0..254"]},
),
RestrictedClassType(
base_type=six.text_type,
restriction_type="dict_key",
restriction_arg={
"IP_TCP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_TCP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_UDP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_UDP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_ICMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_ICMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_IGMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_IGMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_PIM": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_PIM": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_RSVP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_RSVP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_GRE": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_GRE": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_AUTH": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_AUTH": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_L2TP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_L2TP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
},
),
],
is_leaf=True,
yang_name="ip-protocol",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="oc-pkt-match-types:ip-protocol-type",
is_config=False,
)
self.__l4_src_port = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..65535"]}, int_size=16
),
restriction_dict={"range": ["0..65535"]},
),
is_leaf=True,
yang_name="l4-src-port",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:port-number",
is_config=False,
)
self.__l4_dst_port = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..65535"]}, int_size=16
),
restriction_dict={"range": ["0..65535"]},
),
is_leaf=True,
yang_name="l4-dst-port",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:port-number",
is_config=False,
)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path() + [self._yang_name]
else:
return [
"network-instances",
"network-instance",
"afts",
"aft",
"entries",
"entry",
"match",
"state",
]
def _get_ip_prefix(self):
"""
Getter method for ip_prefix, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/ip_prefix (inet:ip-prefix)
YANG Description: The IP prefix that the forwarding entry matches. Used for
Layer 3 forwarding entries.
"""
return self.__ip_prefix
def _set_ip_prefix(self, v, load=False):
"""
Setter method for ip_prefix, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/ip_prefix (inet:ip-prefix)
If this variable is read-only (config: false) in the
source YANG file, then _set_ip_prefix is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_ip_prefix() directly.
YANG Description: The IP prefix that the forwarding entry matches. Used for
Layer 3 forwarding entries.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=[
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])/(([0-9])|([1-2][0-9])|(3[0-2]))"
},
),
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "((:|[0-9a-fA-F]{0,4}):)([0-9a-fA-F]{0,4}:){0,5}((([0-9a-fA-F]{0,4}:)?(:|[0-9a-fA-F]{0,4}))|(((25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])\\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])))(/(([0-9])|([0-9]{2})|(1[0-1][0-9])|(12[0-8])))"
},
),
],
is_leaf=True,
yang_name="ip-prefix",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:ip-prefix",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """ip_prefix must be of a type compatible with inet:ip-prefix""",
"defined-type": "inet:ip-prefix",
"generated-type": """YANGDynClass(base=[RestrictedClassType(base_type=six.text_type, restriction_dict={'pattern': '(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])/(([0-9])|([1-2][0-9])|(3[0-2]))'}),RestrictedClassType(base_type=six.text_type, restriction_dict={'pattern': '((:|[0-9a-fA-F]{0,4}):)([0-9a-fA-F]{0,4}:){0,5}((([0-9a-fA-F]{0,4}:)?(:|[0-9a-fA-F]{0,4}))|(((25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])\\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])))(/(([0-9])|([0-9]{2})|(1[0-1][0-9])|(12[0-8])))'}),], is_leaf=True, yang_name="ip-prefix", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='inet:ip-prefix', is_config=False)""",
}
)
self.__ip_prefix = t
if hasattr(self, "_set"):
self._set()
def _unset_ip_prefix(self):
self.__ip_prefix = YANGDynClass(
base=[
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])/(([0-9])|([1-2][0-9])|(3[0-2]))"
},
),
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "((:|[0-9a-fA-F]{0,4}):)([0-9a-fA-F]{0,4}:){0,5}((([0-9a-fA-F]{0,4}:)?(:|[0-9a-fA-F]{0,4}))|(((25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])\\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])))(/(([0-9])|([0-9]{2})|(1[0-1][0-9])|(12[0-8])))"
},
),
],
is_leaf=True,
yang_name="ip-prefix",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:ip-prefix",
is_config=False,
)
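    # Example values accepted by the ip-prefix patterns above (illustrative): "192.0.2.0/24" (IPv4) and
    # "2001:db8::/32" (IPv6); values are validated by the RestrictedClassType bases when _set_ip_prefix() is called.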
def _get_mac_address(self):
"""
Getter method for mac_address, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/mac_address (yang:mac-address)
YANG Description: The MAC address that the forwarding entry matches. Used for
Layer 2 forwarding entries, e.g., within a VSI instance.
"""
return self.__mac_address
def _set_mac_address(self, v, load=False):
"""
Setter method for mac_address, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/mac_address (yang:mac-address)
If this variable is read-only (config: false) in the
source YANG file, then _set_mac_address is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_mac_address() directly.
YANG Description: The MAC address that the forwarding entry matches. Used for
Layer 2 forwarding entries, e.g., within a VSI instance.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=RestrictedClassType(
base_type=six.text_type,
restriction_dict={"pattern": "[0-9a-fA-F]{2}(:[0-9a-fA-F]{2}){5}"},
),
is_leaf=True,
yang_name="mac-address",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="yang:mac-address",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """mac_address must be of a type compatible with yang:mac-address""",
"defined-type": "yang:mac-address",
"generated-type": """YANGDynClass(base=RestrictedClassType(base_type=six.text_type, restriction_dict={'pattern': '[0-9a-fA-F]{2}(:[0-9a-fA-F]{2}){5}'}), is_leaf=True, yang_name="mac-address", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='yang:mac-address', is_config=False)""",
}
)
self.__mac_address = t
if hasattr(self, "_set"):
self._set()
def _unset_mac_address(self):
self.__mac_address = YANGDynClass(
base=RestrictedClassType(
base_type=six.text_type,
restriction_dict={"pattern": "[0-9a-fA-F]{2}(:[0-9a-fA-F]{2}){5}"},
),
is_leaf=True,
yang_name="mac-address",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="yang:mac-address",
is_config=False,
)
def _get_mpls_label(self):
"""
Getter method for mpls_label, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/mpls_label (oc-mplst:mpls-label)
YANG Description: The MPLS label that the forwarding entry matches. Used for
MPLS forwarding entries, whereby the local device acts as an
LSR.
"""
return self.__mpls_label
def _set_mpls_label(self, v, load=False):
"""
Setter method for mpls_label, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/mpls_label (oc-mplst:mpls-label)
If this variable is read-only (config: false) in the
source YANG file, then _set_mpls_label is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_mpls_label() directly.
YANG Description: The MPLS label that the forwarding entry matches. Used for
MPLS forwarding entries, whereby the local device acts as an
LSR.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=[
RestrictedClassType(
base_type=RestrictedClassType(
base_type=long,
restriction_dict={"range": ["0..4294967295"]},
int_size=32,
),
restriction_dict={"range": ["16..1048575"]},
),
RestrictedClassType(
base_type=six.text_type,
restriction_type="dict_key",
restriction_arg={
"IPV4_EXPLICIT_NULL": {"value": 0},
"ROUTER_ALERT": {"value": 1},
"IPV6_EXPLICIT_NULL": {"value": 2},
"IMPLICIT_NULL": {"value": 3},
"ENTROPY_LABEL_INDICATOR": {"value": 7},
"NO_LABEL": {},
},
),
],
is_leaf=True,
yang_name="mpls-label",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="oc-mplst:mpls-label",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """mpls_label must be of a type compatible with oc-mplst:mpls-label""",
"defined-type": "oc-mplst:mpls-label",
"generated-type": """YANGDynClass(base=[RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': ['16..1048575']}),RestrictedClassType(base_type=six.text_type, restriction_type="dict_key", restriction_arg={'IPV4_EXPLICIT_NULL': {'value': 0}, 'ROUTER_ALERT': {'value': 1}, 'IPV6_EXPLICIT_NULL': {'value': 2}, 'IMPLICIT_NULL': {'value': 3}, 'ENTROPY_LABEL_INDICATOR': {'value': 7}, 'NO_LABEL': {}},),], is_leaf=True, yang_name="mpls-label", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='oc-mplst:mpls-label', is_config=False)""",
}
)
self.__mpls_label = t
if hasattr(self, "_set"):
self._set()
def _unset_mpls_label(self):
self.__mpls_label = YANGDynClass(
base=[
RestrictedClassType(
base_type=RestrictedClassType(
base_type=long,
restriction_dict={"range": ["0..4294967295"]},
int_size=32,
),
restriction_dict={"range": ["16..1048575"]},
),
RestrictedClassType(
base_type=six.text_type,
restriction_type="dict_key",
restriction_arg={
"IPV4_EXPLICIT_NULL": {"value": 0},
"ROUTER_ALERT": {"value": 1},
"IPV6_EXPLICIT_NULL": {"value": 2},
"IMPLICIT_NULL": {"value": 3},
"ENTROPY_LABEL_INDICATOR": {"value": 7},
"NO_LABEL": {},
},
),
],
is_leaf=True,
yang_name="mpls-label",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="oc-mplst:mpls-label",
is_config=False,
)
def _get_mpls_tc(self):
"""
Getter method for mpls_tc, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/mpls_tc (uint8)
YANG Description: The value of the MPLS Traffic Class bits (formerly known as
the MPLS experimental bits) that are to be matched by the AFT
entry.
"""
return self.__mpls_tc
def _set_mpls_tc(self, v, load=False):
"""
Setter method for mpls_tc, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/mpls_tc (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_mpls_tc is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_mpls_tc() directly.
YANG Description: The value of the MPLS Traffic Class bits (formerly known as
the MPLS experimental bits) that are to be matched by the AFT
entry.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int,
restriction_dict={"range": ["0..255"]},
int_size=8,
),
restriction_dict={"range": ["0..7"]},
),
is_leaf=True,
yang_name="mpls-tc",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="uint8",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """mpls_tc must be of a type compatible with uint8""",
"defined-type": "uint8",
"generated-type": """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), restriction_dict={'range': ['0..7']}), is_leaf=True, yang_name="mpls-tc", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint8', is_config=False)""",
}
)
self.__mpls_tc = t
if hasattr(self, "_set"):
self._set()
def _unset_mpls_tc(self):
self.__mpls_tc = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..255"]}, int_size=8
),
restriction_dict={"range": ["0..7"]},
),
is_leaf=True,
yang_name="mpls-tc",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="uint8",
is_config=False,
)
def _get_ip_dscp(self):
"""
Getter method for ip_dscp, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/ip_dscp (inet:dscp)
YANG Description: The value of the differentiated services code point (DSCP) to
be matched for the forwarding entry. The value is specified in
cases where specific class-based forwarding based on IP is
implemented by the device.
"""
return self.__ip_dscp
def _set_ip_dscp(self, v, load=False):
"""
Setter method for ip_dscp, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/ip_dscp (inet:dscp)
If this variable is read-only (config: false) in the
source YANG file, then _set_ip_dscp is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_ip_dscp() directly.
YANG Description: The value of the differentiated services code point (DSCP) to
be matched for the forwarding entry. The value is specified in
cases where specific class-based forwarding based on IP is
implemented by the device.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int,
restriction_dict={"range": ["0..255"]},
int_size=8,
),
restriction_dict={"range": ["0..63"]},
),
is_leaf=True,
yang_name="ip-dscp",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:dscp",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """ip_dscp must be of a type compatible with inet:dscp""",
"defined-type": "inet:dscp",
"generated-type": """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), restriction_dict={'range': ['0..63']}), is_leaf=True, yang_name="ip-dscp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='inet:dscp', is_config=False)""",
}
)
self.__ip_dscp = t
if hasattr(self, "_set"):
self._set()
def _unset_ip_dscp(self):
self.__ip_dscp = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..255"]}, int_size=8
),
restriction_dict={"range": ["0..63"]},
),
is_leaf=True,
yang_name="ip-dscp",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:dscp",
is_config=False,
)
def _get_ip_protocol(self):
"""
Getter method for ip_protocol, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/ip_protocol (oc-pkt-match-types:ip-protocol-type)
YANG Description: The value of the IP protocol field of an IPv4 packet, or the
next-header field of an IPv6 packet which is to be matched by
the AFT entry. This field is utilised where forwarding is
performed based on L4 information.
"""
return self.__ip_protocol
def _set_ip_protocol(self, v, load=False):
"""
Setter method for ip_protocol, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/ip_protocol (oc-pkt-match-types:ip-protocol-type)
If this variable is read-only (config: false) in the
source YANG file, then _set_ip_protocol is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_ip_protocol() directly.
YANG Description: The value of the IP protocol field of an IPv4 packet, or the
next-header field of an IPv6 packet which is to be matched by
the AFT entry. This field is utilised where forwarding is
performed based on L4 information.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=[
RestrictedClassType(
base_type=RestrictedClassType(
base_type=int,
restriction_dict={"range": ["0..255"]},
int_size=8,
),
restriction_dict={"range": ["0..254"]},
),
RestrictedClassType(
base_type=six.text_type,
restriction_type="dict_key",
restriction_arg={
"IP_TCP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_TCP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_UDP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_UDP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_ICMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_ICMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_IGMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_IGMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_PIM": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_PIM": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_RSVP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_RSVP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_GRE": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_GRE": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_AUTH": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_AUTH": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_L2TP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_L2TP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
},
),
],
is_leaf=True,
yang_name="ip-protocol",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="oc-pkt-match-types:ip-protocol-type",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """ip_protocol must be of a type compatible with oc-pkt-match-types:ip-protocol-type""",
"defined-type": "oc-pkt-match-types:ip-protocol-type",
"generated-type": """YANGDynClass(base=[RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), restriction_dict={'range': ['0..254']}),RestrictedClassType(base_type=six.text_type, restriction_type="dict_key", restriction_arg={'IP_TCP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'oc-pkt-match-types:IP_TCP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'IP_UDP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'oc-pkt-match-types:IP_UDP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'IP_ICMP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'oc-pkt-match-types:IP_ICMP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'IP_IGMP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'oc-pkt-match-types:IP_IGMP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'IP_PIM': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'oc-pkt-match-types:IP_PIM': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'IP_RSVP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'oc-pkt-match-types:IP_RSVP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'IP_GRE': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'oc-pkt-match-types:IP_GRE': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'IP_AUTH': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'oc-pkt-match-types:IP_AUTH': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'IP_L2TP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'oc-pkt-match-types:IP_L2TP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}},),], is_leaf=True, yang_name="ip-protocol", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='oc-pkt-match-types:ip-protocol-type', is_config=False)""",
}
)
self.__ip_protocol = t
if hasattr(self, "_set"):
self._set()
def _unset_ip_protocol(self):
self.__ip_protocol = YANGDynClass(
base=[
RestrictedClassType(
base_type=RestrictedClassType(
base_type=int,
restriction_dict={"range": ["0..255"]},
int_size=8,
),
restriction_dict={"range": ["0..254"]},
),
RestrictedClassType(
base_type=six.text_type,
restriction_type="dict_key",
restriction_arg={
"IP_TCP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_TCP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_UDP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_UDP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_ICMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_ICMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_IGMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_IGMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_PIM": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_PIM": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_RSVP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_RSVP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_GRE": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_GRE": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_AUTH": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_AUTH": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_L2TP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_L2TP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
},
),
],
is_leaf=True,
yang_name="ip-protocol",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="oc-pkt-match-types:ip-protocol-type",
is_config=False,
)
def _get_l4_src_port(self):
"""
Getter method for l4_src_port, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/l4_src_port (inet:port-number)
YANG Description: The value of the source port field of the transport header
that is to be matched by the AFT entry.
"""
return self.__l4_src_port
def _set_l4_src_port(self, v, load=False):
"""
Setter method for l4_src_port, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/l4_src_port (inet:port-number)
If this variable is read-only (config: false) in the
source YANG file, then _set_l4_src_port is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_l4_src_port() directly.
YANG Description: The value of the source port field of the transport header
that is to be matched by the AFT entry.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int,
restriction_dict={"range": ["0..65535"]},
int_size=16,
),
restriction_dict={"range": ["0..65535"]},
),
is_leaf=True,
yang_name="l4-src-port",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:port-number",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """l4_src_port must be of a type compatible with inet:port-number""",
"defined-type": "inet:port-number",
"generated-type": """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), restriction_dict={'range': ['0..65535']}), is_leaf=True, yang_name="l4-src-port", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='inet:port-number', is_config=False)""",
}
)
self.__l4_src_port = t
if hasattr(self, "_set"):
self._set()
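        # Illustrative note (editorial comment, not emitted by pyangbind): this leaf
        # is config false, so the l4_src_port property defined later in the class is
        # read-only. A backend populating operational state would call the private
        # setter directly, for example (object name is hypothetical):
        #     state_obj._set_l4_src_port(443)
        # A value outside inet:port-number (0..65535) raises the ValueError above.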
def _unset_l4_src_port(self):
self.__l4_src_port = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..65535"]}, int_size=16
),
restriction_dict={"range": ["0..65535"]},
),
is_leaf=True,
yang_name="l4-src-port",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:port-number",
is_config=False,
)
def _get_l4_dst_port(self):
"""
Getter method for l4_dst_port, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/l4_dst_port (inet:port-number)
YANG Description: The value of the destination port field of the transport
header that is to be matched by the AFT entry.
"""
return self.__l4_dst_port
def _set_l4_dst_port(self, v, load=False):
"""
Setter method for l4_dst_port, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/l4_dst_port (inet:port-number)
If this variable is read-only (config: false) in the
source YANG file, then _set_l4_dst_port is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_l4_dst_port() directly.
YANG Description: The value of the destination port field of the transport
header that is to be matched by the AFT entry.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int,
restriction_dict={"range": ["0..65535"]},
int_size=16,
),
restriction_dict={"range": ["0..65535"]},
),
is_leaf=True,
yang_name="l4-dst-port",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:port-number",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """l4_dst_port must be of a type compatible with inet:port-number""",
"defined-type": "inet:port-number",
"generated-type": """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), restriction_dict={'range': ['0..65535']}), is_leaf=True, yang_name="l4-dst-port", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='inet:port-number', is_config=False)""",
}
)
self.__l4_dst_port = t
if hasattr(self, "_set"):
self._set()
def _unset_l4_dst_port(self):
self.__l4_dst_port = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..65535"]}, int_size=16
),
restriction_dict={"range": ["0..65535"]},
),
is_leaf=True,
yang_name="l4-dst-port",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:port-number",
is_config=False,
)
ip_prefix = __builtin__.property(_get_ip_prefix)
mac_address = __builtin__.property(_get_mac_address)
mpls_label = __builtin__.property(_get_mpls_label)
mpls_tc = __builtin__.property(_get_mpls_tc)
ip_dscp = __builtin__.property(_get_ip_dscp)
ip_protocol = __builtin__.property(_get_ip_protocol)
l4_src_port = __builtin__.property(_get_l4_src_port)
l4_dst_port = __builtin__.property(_get_l4_dst_port)
_pyangbind_elements = OrderedDict(
[
("ip_prefix", ip_prefix),
("mac_address", mac_address),
("mpls_label", mpls_label),
("mpls_tc", mpls_tc),
("ip_dscp", ip_dscp),
("ip_protocol", ip_protocol),
("l4_src_port", l4_src_port),
("l4_dst_port", l4_dst_port),
]
)
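    # Editorial comment (not emitted by pyangbind): the class below is a second,
    # structurally identical binding generated for the same
    # .../afts/aft/entries/entry/match/state container, this time attributed to the
    # openconfig-network-instance-l2 module; its leaves mirror those defined above.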
class state(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module openconfig-network-instance-l2 - based on the path /network-instances/network-instance/afts/aft/entries/entry/match/state. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: Operational state parameters for match criteria of the
AFT entry
"""
__slots__ = (
"_path_helper",
"_extmethods",
"__ip_prefix",
"__mac_address",
"__mpls_label",
"__mpls_tc",
"__ip_dscp",
"__ip_protocol",
"__l4_src_port",
"__l4_dst_port",
)
_yang_name = "state"
_pybind_generated_by = "container"
def __init__(self, *args, **kwargs):
self._path_helper = False
self._extmethods = False
self.__ip_prefix = YANGDynClass(
base=[
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])/(([0-9])|([1-2][0-9])|(3[0-2]))"
},
),
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "((:|[0-9a-fA-F]{0,4}):)([0-9a-fA-F]{0,4}:){0,5}((([0-9a-fA-F]{0,4}:)?(:|[0-9a-fA-F]{0,4}))|(((25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])\\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])))(/(([0-9])|([0-9]{2})|(1[0-1][0-9])|(12[0-8])))"
},
),
],
is_leaf=True,
yang_name="ip-prefix",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:ip-prefix",
is_config=False,
)
self.__mac_address = YANGDynClass(
base=RestrictedClassType(
base_type=six.text_type,
restriction_dict={"pattern": "[0-9a-fA-F]{2}(:[0-9a-fA-F]{2}){5}"},
),
is_leaf=True,
yang_name="mac-address",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="yang:mac-address",
is_config=False,
)
self.__mpls_label = YANGDynClass(
base=[
RestrictedClassType(
base_type=RestrictedClassType(
base_type=long,
restriction_dict={"range": ["0..4294967295"]},
int_size=32,
),
restriction_dict={"range": ["16..1048575"]},
),
RestrictedClassType(
base_type=six.text_type,
restriction_type="dict_key",
restriction_arg={
"IPV4_EXPLICIT_NULL": {"value": 0},
"ROUTER_ALERT": {"value": 1},
"IPV6_EXPLICIT_NULL": {"value": 2},
"IMPLICIT_NULL": {"value": 3},
"ENTROPY_LABEL_INDICATOR": {"value": 7},
"NO_LABEL": {},
},
),
],
is_leaf=True,
yang_name="mpls-label",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="oc-mplst:mpls-label",
is_config=False,
)
self.__mpls_tc = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..255"]}, int_size=8
),
restriction_dict={"range": ["0..7"]},
),
is_leaf=True,
yang_name="mpls-tc",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="uint8",
is_config=False,
)
self.__ip_dscp = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..255"]}, int_size=8
),
restriction_dict={"range": ["0..63"]},
),
is_leaf=True,
yang_name="ip-dscp",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:dscp",
is_config=False,
)
self.__ip_protocol = YANGDynClass(
base=[
RestrictedClassType(
base_type=RestrictedClassType(
base_type=int,
restriction_dict={"range": ["0..255"]},
int_size=8,
),
restriction_dict={"range": ["0..254"]},
),
RestrictedClassType(
base_type=six.text_type,
restriction_type="dict_key",
restriction_arg={
"IP_TCP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_TCP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_UDP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_UDP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_ICMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_ICMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_IGMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_IGMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_PIM": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_PIM": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_RSVP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_RSVP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_GRE": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_GRE": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_AUTH": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_AUTH": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_L2TP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_L2TP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
},
),
],
is_leaf=True,
yang_name="ip-protocol",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="oc-pkt-match-types:ip-protocol-type",
is_config=False,
)
self.__l4_src_port = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..65535"]}, int_size=16
),
restriction_dict={"range": ["0..65535"]},
),
is_leaf=True,
yang_name="l4-src-port",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:port-number",
is_config=False,
)
self.__l4_dst_port = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..65535"]}, int_size=16
),
restriction_dict={"range": ["0..65535"]},
),
is_leaf=True,
yang_name="l4-dst-port",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:port-number",
is_config=False,
)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path() + [self._yang_name]
else:
return [
"network-instances",
"network-instance",
"afts",
"aft",
"entries",
"entry",
"match",
"state",
]
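        # Illustrative note (editorial comment): when no parent object is registered,
        # joining the elements returned above with "/" reproduces the container's
        # YANG path, i.e.
        # /network-instances/network-instance/afts/aft/entries/entry/match/state.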
def _get_ip_prefix(self):
"""
Getter method for ip_prefix, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/ip_prefix (inet:ip-prefix)
YANG Description: The IP prefix that the forwarding entry matches. Used for
Layer 3 forwarding entries.
"""
return self.__ip_prefix
def _set_ip_prefix(self, v, load=False):
"""
Setter method for ip_prefix, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/ip_prefix (inet:ip-prefix)
If this variable is read-only (config: false) in the
source YANG file, then _set_ip_prefix is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_ip_prefix() directly.
YANG Description: The IP prefix that the forwarding entry matches. Used for
Layer 3 forwarding entries.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=[
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])/(([0-9])|([1-2][0-9])|(3[0-2]))"
},
),
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "((:|[0-9a-fA-F]{0,4}):)([0-9a-fA-F]{0,4}:){0,5}((([0-9a-fA-F]{0,4}:)?(:|[0-9a-fA-F]{0,4}))|(((25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])\\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])))(/(([0-9])|([0-9]{2})|(1[0-1][0-9])|(12[0-8])))"
},
),
],
is_leaf=True,
yang_name="ip-prefix",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:ip-prefix",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """ip_prefix must be of a type compatible with inet:ip-prefix""",
"defined-type": "inet:ip-prefix",
"generated-type": """YANGDynClass(base=[RestrictedClassType(base_type=six.text_type, restriction_dict={'pattern': '(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])/(([0-9])|([1-2][0-9])|(3[0-2]))'}),RestrictedClassType(base_type=six.text_type, restriction_dict={'pattern': '((:|[0-9a-fA-F]{0,4}):)([0-9a-fA-F]{0,4}:){0,5}((([0-9a-fA-F]{0,4}:)?(:|[0-9a-fA-F]{0,4}))|(((25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])\\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])))(/(([0-9])|([0-9]{2})|(1[0-1][0-9])|(12[0-8])))'}),], is_leaf=True, yang_name="ip-prefix", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='inet:ip-prefix', is_config=False)""",
}
)
self.__ip_prefix = t
if hasattr(self, "_set"):
self._set()
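        # Illustrative values (editorial comment, based on the patterns above): the
        # union accepts an IPv4 prefix such as "192.0.2.0/24" or an IPv6 prefix such
        # as "2001:db8::/32"; any other string raises the ValueError above.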
def _unset_ip_prefix(self):
self.__ip_prefix = YANGDynClass(
base=[
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "(([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])/(([0-9])|([1-2][0-9])|(3[0-2]))"
},
),
RestrictedClassType(
base_type=six.text_type,
restriction_dict={
"pattern": "((:|[0-9a-fA-F]{0,4}):)([0-9a-fA-F]{0,4}:){0,5}((([0-9a-fA-F]{0,4}:)?(:|[0-9a-fA-F]{0,4}))|(((25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])\\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9]?[0-9])))(/(([0-9])|([0-9]{2})|(1[0-1][0-9])|(12[0-8])))"
},
),
],
is_leaf=True,
yang_name="ip-prefix",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:ip-prefix",
is_config=False,
)
def _get_mac_address(self):
"""
Getter method for mac_address, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/mac_address (yang:mac-address)
YANG Description: The MAC address that the forwarding entry matches. Used for
Layer 2 forwarding entries, e.g., within a VSI instance.
"""
return self.__mac_address
def _set_mac_address(self, v, load=False):
"""
Setter method for mac_address, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/mac_address (yang:mac-address)
If this variable is read-only (config: false) in the
source YANG file, then _set_mac_address is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_mac_address() directly.
YANG Description: The MAC address that the forwarding entry matches. Used for
Layer 2 forwarding entries, e.g., within a VSI instance.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=RestrictedClassType(
base_type=six.text_type,
restriction_dict={"pattern": "[0-9a-fA-F]{2}(:[0-9a-fA-F]{2}){5}"},
),
is_leaf=True,
yang_name="mac-address",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="yang:mac-address",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """mac_address must be of a type compatible with yang:mac-address""",
"defined-type": "yang:mac-address",
"generated-type": """YANGDynClass(base=RestrictedClassType(base_type=six.text_type, restriction_dict={'pattern': '[0-9a-fA-F]{2}(:[0-9a-fA-F]{2}){5}'}), is_leaf=True, yang_name="mac-address", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='yang:mac-address', is_config=False)""",
}
)
self.__mac_address = t
if hasattr(self, "_set"):
self._set()
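        # Illustrative value (editorial comment, based on the pattern above): a
        # colon-separated MAC address such as "00:11:22:aa:bb:cc" satisfies
        # "[0-9a-fA-F]{2}(:[0-9a-fA-F]{2}){5}".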
def _unset_mac_address(self):
self.__mac_address = YANGDynClass(
base=RestrictedClassType(
base_type=six.text_type,
restriction_dict={"pattern": "[0-9a-fA-F]{2}(:[0-9a-fA-F]{2}){5}"},
),
is_leaf=True,
yang_name="mac-address",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="yang:mac-address",
is_config=False,
)
def _get_mpls_label(self):
"""
Getter method for mpls_label, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/mpls_label (oc-mplst:mpls-label)
YANG Description: The MPLS label that the forwarding entry matches. Used for
MPLS forwarding entries, whereby the local device acts as an
LSR.
"""
return self.__mpls_label
def _set_mpls_label(self, v, load=False):
"""
Setter method for mpls_label, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/mpls_label (oc-mplst:mpls-label)
If this variable is read-only (config: false) in the
source YANG file, then _set_mpls_label is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_mpls_label() directly.
YANG Description: The MPLS label that the forwarding entry matches. Used for
MPLS forwarding entries, whereby the local device acts as an
LSR.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=[
RestrictedClassType(
base_type=RestrictedClassType(
base_type=long,
restriction_dict={"range": ["0..4294967295"]},
int_size=32,
),
restriction_dict={"range": ["16..1048575"]},
),
RestrictedClassType(
base_type=six.text_type,
restriction_type="dict_key",
restriction_arg={
"IPV4_EXPLICIT_NULL": {"value": 0},
"ROUTER_ALERT": {"value": 1},
"IPV6_EXPLICIT_NULL": {"value": 2},
"IMPLICIT_NULL": {"value": 3},
"ENTROPY_LABEL_INDICATOR": {"value": 7},
"NO_LABEL": {},
},
),
],
is_leaf=True,
yang_name="mpls-label",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="oc-mplst:mpls-label",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """mpls_label must be of a type compatible with oc-mplst:mpls-label""",
"defined-type": "oc-mplst:mpls-label",
"generated-type": """YANGDynClass(base=[RestrictedClassType(base_type=RestrictedClassType(base_type=long, restriction_dict={'range': ['0..4294967295']}, int_size=32), restriction_dict={'range': ['16..1048575']}),RestrictedClassType(base_type=six.text_type, restriction_type="dict_key", restriction_arg={'IPV4_EXPLICIT_NULL': {'value': 0}, 'ROUTER_ALERT': {'value': 1}, 'IPV6_EXPLICIT_NULL': {'value': 2}, 'IMPLICIT_NULL': {'value': 3}, 'ENTROPY_LABEL_INDICATOR': {'value': 7}, 'NO_LABEL': {}},),], is_leaf=True, yang_name="mpls-label", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='oc-mplst:mpls-label', is_config=False)""",
}
)
self.__mpls_label = t
if hasattr(self, "_set"):
self._set()
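        # Illustrative values (editorial comment, based on the union above): either a
        # numeric label in the 16..1048575 range (for example 100004) or a reserved
        # label name such as "IPV4_EXPLICIT_NULL" or "IMPLICIT_NULL" is accepted.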
def _unset_mpls_label(self):
self.__mpls_label = YANGDynClass(
base=[
RestrictedClassType(
base_type=RestrictedClassType(
base_type=long,
restriction_dict={"range": ["0..4294967295"]},
int_size=32,
),
restriction_dict={"range": ["16..1048575"]},
),
RestrictedClassType(
base_type=six.text_type,
restriction_type="dict_key",
restriction_arg={
"IPV4_EXPLICIT_NULL": {"value": 0},
"ROUTER_ALERT": {"value": 1},
"IPV6_EXPLICIT_NULL": {"value": 2},
"IMPLICIT_NULL": {"value": 3},
"ENTROPY_LABEL_INDICATOR": {"value": 7},
"NO_LABEL": {},
},
),
],
is_leaf=True,
yang_name="mpls-label",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="oc-mplst:mpls-label",
is_config=False,
)
def _get_mpls_tc(self):
"""
Getter method for mpls_tc, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/mpls_tc (uint8)
YANG Description: The value of the MPLS Traffic Class bits (formerly known as
the MPLS experimental bits) that are to be matched by the AFT
entry.
"""
return self.__mpls_tc
def _set_mpls_tc(self, v, load=False):
"""
Setter method for mpls_tc, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/mpls_tc (uint8)
If this variable is read-only (config: false) in the
source YANG file, then _set_mpls_tc is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_mpls_tc() directly.
YANG Description: The value of the MPLS Traffic Class bits (formerly known as
the MPLS experimental bits) that are to be matched by the AFT
entry.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int,
restriction_dict={"range": ["0..255"]},
int_size=8,
),
restriction_dict={"range": ["0..7"]},
),
is_leaf=True,
yang_name="mpls-tc",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="uint8",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """mpls_tc must be of a type compatible with uint8""",
"defined-type": "uint8",
"generated-type": """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), restriction_dict={'range': ['0..7']}), is_leaf=True, yang_name="mpls-tc", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='uint8', is_config=False)""",
}
)
self.__mpls_tc = t
if hasattr(self, "_set"):
self._set()
def _unset_mpls_tc(self):
self.__mpls_tc = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..255"]}, int_size=8
),
restriction_dict={"range": ["0..7"]},
),
is_leaf=True,
yang_name="mpls-tc",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="uint8",
is_config=False,
)
def _get_ip_dscp(self):
"""
Getter method for ip_dscp, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/ip_dscp (inet:dscp)
YANG Description: The value of the differentiated services code point (DSCP) to
be matched for the forwarding entry. The value is specified in
cases where specific class-based forwarding based on IP is
implemented by the device.
"""
return self.__ip_dscp
def _set_ip_dscp(self, v, load=False):
"""
Setter method for ip_dscp, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/ip_dscp (inet:dscp)
If this variable is read-only (config: false) in the
source YANG file, then _set_ip_dscp is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_ip_dscp() directly.
YANG Description: The value of the differentiated services code point (DSCP) to
be matched for the forwarding entry. The value is specified in
cases where specific class-based forwarding based on IP is
implemented by the device.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int,
restriction_dict={"range": ["0..255"]},
int_size=8,
),
restriction_dict={"range": ["0..63"]},
),
is_leaf=True,
yang_name="ip-dscp",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:dscp",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """ip_dscp must be of a type compatible with inet:dscp""",
"defined-type": "inet:dscp",
"generated-type": """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), restriction_dict={'range': ['0..63']}), is_leaf=True, yang_name="ip-dscp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='inet:dscp', is_config=False)""",
}
)
self.__ip_dscp = t
if hasattr(self, "_set"):
self._set()
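        # Illustrative value (editorial comment, based on the restriction above): the
        # DSCP leaf is limited to 0..63, for example 46, the codepoint commonly used
        # for Expedited Forwarding.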
def _unset_ip_dscp(self):
self.__ip_dscp = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..255"]}, int_size=8
),
restriction_dict={"range": ["0..63"]},
),
is_leaf=True,
yang_name="ip-dscp",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:dscp",
is_config=False,
)
def _get_ip_protocol(self):
"""
Getter method for ip_protocol, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/ip_protocol (oc-pkt-match-types:ip-protocol-type)
YANG Description: The value of the IP protocol field of an IPv4 packet, or the
next-header field of an IPv6 packet which is to be matched by
the AFT entry. This field is utilised where forwarding is
performed based on L4 information.
"""
return self.__ip_protocol
def _set_ip_protocol(self, v, load=False):
"""
Setter method for ip_protocol, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/ip_protocol (oc-pkt-match-types:ip-protocol-type)
If this variable is read-only (config: false) in the
source YANG file, then _set_ip_protocol is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_ip_protocol() directly.
YANG Description: The value of the IP protocol field of an IPv4 packet, or the
next-header field of an IPv6 packet which is to be matched by
the AFT entry. This field is utilised where forwarding is
performed based on L4 information.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=[
RestrictedClassType(
base_type=RestrictedClassType(
base_type=int,
restriction_dict={"range": ["0..255"]},
int_size=8,
),
restriction_dict={"range": ["0..254"]},
),
RestrictedClassType(
base_type=six.text_type,
restriction_type="dict_key",
restriction_arg={
"IP_TCP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_TCP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_UDP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_UDP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_ICMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_ICMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_IGMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_IGMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_PIM": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_PIM": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_RSVP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_RSVP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_GRE": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_GRE": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_AUTH": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_AUTH": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_L2TP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_L2TP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
},
),
],
is_leaf=True,
yang_name="ip-protocol",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="oc-pkt-match-types:ip-protocol-type",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """ip_protocol must be of a type compatible with oc-pkt-match-types:ip-protocol-type""",
"defined-type": "oc-pkt-match-types:ip-protocol-type",
"generated-type": """YANGDynClass(base=[RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..255']}, int_size=8), restriction_dict={'range': ['0..254']}),RestrictedClassType(base_type=six.text_type, restriction_type="dict_key", restriction_arg={'IP_TCP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'oc-pkt-match-types:IP_TCP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'IP_UDP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'oc-pkt-match-types:IP_UDP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'IP_ICMP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'oc-pkt-match-types:IP_ICMP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'IP_IGMP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'oc-pkt-match-types:IP_IGMP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'IP_PIM': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'oc-pkt-match-types:IP_PIM': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'IP_RSVP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'oc-pkt-match-types:IP_RSVP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'IP_GRE': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'oc-pkt-match-types:IP_GRE': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'IP_AUTH': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'oc-pkt-match-types:IP_AUTH': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'IP_L2TP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}, 'oc-pkt-match-types:IP_L2TP': {'@module': 'openconfig-packet-match-types', '@namespace': 'http://openconfig.net/yang/packet-match-types'}},),], is_leaf=True, yang_name="ip-protocol", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='oc-pkt-match-types:ip-protocol-type', is_config=False)""",
}
)
self.__ip_protocol = t
if hasattr(self, "_set"):
self._set()
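        # Illustrative values (editorial comment, based on the union above): either a
        # numeric protocol number in 0..254 (for example 6 for TCP) or an identity
        # name such as "IP_TCP" or "oc-pkt-match-types:IP_UDP" is accepted.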
def _unset_ip_protocol(self):
self.__ip_protocol = YANGDynClass(
base=[
RestrictedClassType(
base_type=RestrictedClassType(
base_type=int,
restriction_dict={"range": ["0..255"]},
int_size=8,
),
restriction_dict={"range": ["0..254"]},
),
RestrictedClassType(
base_type=six.text_type,
restriction_type="dict_key",
restriction_arg={
"IP_TCP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_TCP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_UDP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_UDP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_ICMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_ICMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_IGMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_IGMP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_PIM": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_PIM": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_RSVP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_RSVP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_GRE": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_GRE": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_AUTH": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_AUTH": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"IP_L2TP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
"oc-pkt-match-types:IP_L2TP": {
"@module": "openconfig-packet-match-types",
"@namespace": "http://openconfig.net/yang/packet-match-types",
},
},
),
],
is_leaf=True,
yang_name="ip-protocol",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="oc-pkt-match-types:ip-protocol-type",
is_config=False,
)
def _get_l4_src_port(self):
"""
Getter method for l4_src_port, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/l4_src_port (inet:port-number)
YANG Description: The value of the source port field of the transport header
that is to be matched by the AFT entry.
"""
return self.__l4_src_port
def _set_l4_src_port(self, v, load=False):
"""
Setter method for l4_src_port, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/l4_src_port (inet:port-number)
If this variable is read-only (config: false) in the
source YANG file, then _set_l4_src_port is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_l4_src_port() directly.
YANG Description: The value of the source port field of the transport header
that is to be matched by the AFT entry.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int,
restriction_dict={"range": ["0..65535"]},
int_size=16,
),
restriction_dict={"range": ["0..65535"]},
),
is_leaf=True,
yang_name="l4-src-port",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:port-number",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """l4_src_port must be of a type compatible with inet:port-number""",
"defined-type": "inet:port-number",
"generated-type": """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), restriction_dict={'range': ['0..65535']}), is_leaf=True, yang_name="l4-src-port", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='inet:port-number', is_config=False)""",
}
)
self.__l4_src_port = t
if hasattr(self, "_set"):
self._set()
def _unset_l4_src_port(self):
self.__l4_src_port = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..65535"]}, int_size=16
),
restriction_dict={"range": ["0..65535"]},
),
is_leaf=True,
yang_name="l4-src-port",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:port-number",
is_config=False,
)
def _get_l4_dst_port(self):
"""
Getter method for l4_dst_port, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/l4_dst_port (inet:port-number)
YANG Description: The value of the destination port field of the transport
header that is to be matched by the AFT entry.
"""
return self.__l4_dst_port
def _set_l4_dst_port(self, v, load=False):
"""
Setter method for l4_dst_port, mapped from YANG variable /network_instances/network_instance/afts/aft/entries/entry/match/state/l4_dst_port (inet:port-number)
If this variable is read-only (config: false) in the
source YANG file, then _set_l4_dst_port is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_l4_dst_port() directly.
YANG Description: The value of the destination port field of the transport
header that is to be matched by the AFT entry.
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int,
restriction_dict={"range": ["0..65535"]},
int_size=16,
),
restriction_dict={"range": ["0..65535"]},
),
is_leaf=True,
yang_name="l4-dst-port",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:port-number",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """l4_dst_port must be of a type compatible with inet:port-number""",
"defined-type": "inet:port-number",
"generated-type": """YANGDynClass(base=RestrictedClassType(base_type=RestrictedClassType(base_type=int, restriction_dict={'range': ['0..65535']},int_size=16), restriction_dict={'range': ['0..65535']}), is_leaf=True, yang_name="l4-dst-port", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='inet:port-number', is_config=False)""",
}
)
self.__l4_dst_port = t
if hasattr(self, "_set"):
self._set()
def _unset_l4_dst_port(self):
self.__l4_dst_port = YANGDynClass(
base=RestrictedClassType(
base_type=RestrictedClassType(
base_type=int, restriction_dict={"range": ["0..65535"]}, int_size=16
),
restriction_dict={"range": ["0..65535"]},
),
is_leaf=True,
yang_name="l4-dst-port",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="inet:port-number",
is_config=False,
)
ip_prefix = __builtin__.property(_get_ip_prefix)
mac_address = __builtin__.property(_get_mac_address)
mpls_label = __builtin__.property(_get_mpls_label)
mpls_tc = __builtin__.property(_get_mpls_tc)
ip_dscp = __builtin__.property(_get_ip_dscp)
ip_protocol = __builtin__.property(_get_ip_protocol)
l4_src_port = __builtin__.property(_get_l4_src_port)
l4_dst_port = __builtin__.property(_get_l4_dst_port)
_pyangbind_elements = OrderedDict(
[
("ip_prefix", ip_prefix),
("mac_address", mac_address),
("mpls_label", mpls_label),
("mpls_tc", mpls_tc),
("ip_dscp", ip_dscp),
("ip_protocol", ip_protocol),
("l4_src_port", l4_src_port),
("l4_dst_port", l4_dst_port),
]
)
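# Minimal usage sketch (editorial comment, not generated code; all object names are
# hypothetical). With a populated openconfig-network-instance binding tree `ni`, the
# read-only state leaves above are reached through the generated properties, e.g.:
#     match_state = (ni.network_instances.network_instance["DEFAULT"]
#                    .afts.aft["ipv4"].entries.entry[1].match.state)
#     print(match_state.ip_protocol, match_state.l4_dst_port)
# The container can then typically be serialised with pyangbind's pybindJSON helper.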
| 49.185672 | 2,943 | 0.519749 | 11,982 | 116,029 | 4.844934 | 0.023452 | 0.06477 | 0.079377 | 0.093158 | 0.99354 | 0.989802 | 0.989802 | 0.989802 | 0.989802 | 0.989802 | 0 | 0.023413 | 0.348439 | 116,029 | 2,358 | 2,944 | 49.206531 | 0.744471 | 0.130959 | 0 | 0.870051 | 0 | 0.018274 | 0.364995 | 0.162642 | 0 | 0 | 0 | 0 | 0 | 1 | 0.026396 | false | 0 | 0.007614 | 0 | 0.05736 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
1811120fb14379f2477095098d4556cfeabc0842 | 2,000 | py | Python | tests/test_units.py | faroit/pygamma-agreement | fcfcfe7332be15bd97e71b9987aa5c6104be299e | [
"MIT"
] | 29 | 2020-11-05T15:58:37.000Z | 2022-03-08T07:44:57.000Z | tests/test_units.py | faroit/pygamma-agreement | fcfcfe7332be15bd97e71b9987aa5c6104be299e | [
"MIT"
] | 28 | 2020-11-02T13:48:15.000Z | 2022-02-11T11:03:06.000Z | tests/test_units.py | faroit/pygamma-agreement | fcfcfe7332be15bd97e71b9987aa5c6104be299e | [
"MIT"
] | 4 | 2021-05-27T02:02:43.000Z | 2022-03-08T00:51:21.000Z | """Test of Units in the pygamma_agreement.continuum module"""
from pyannote.core import Segment
from sortedcontainers import SortedSet
from pygamma_agreement.continuum import Unit
def test_unit_equality():
assert Unit(Segment(0, 1)) == Unit(Segment(0, 1))
assert Unit(Segment(0, 1), "A") == Unit(Segment(0, 1), "A")
assert Unit(Segment(0, 1), "A") != Unit(Segment(0, 1), "B")
assert Unit(Segment(0, 1)) != Unit(Segment(0, 2))
assert Unit(Segment(0, 1), None) != Unit(Segment(0, 2))
assert Unit(Segment(0, 1), None) == Unit(Segment(0, 1), None)
def test_unit_ordering():
assert Unit(Segment(0, 1)) < Unit(Segment(0, 2))
assert Unit(Segment(1, 1)) > Unit(Segment(0, 2))
assert Unit(Segment(0, 1)) < Unit(Segment(0, 1), "A")
assert Unit(Segment(0, 1), "C") > Unit(Segment(0, 1))
assert Unit(Segment(0, 1), "A") < Unit(Segment(0, 1), "B")
assert Unit(Segment(0, 1), "B") > Unit(Segment(0, 1), "A")
assert Unit(Segment(2, 3)) > Unit(Segment(0, 1), "A")
assert Unit(Segment(3, 4), 'B') > Unit(Segment(0, 1))
def test_units_sets():
units = [
Unit(Segment(0, 1)),
Unit(Segment(0, 1)),
Unit(Segment(0, 2)),
Unit(Segment(0, 2), None),
Unit(Segment(3, 4), "A"),
Unit(Segment(3, 4), "A"),
Unit(Segment(3, 4), "B")
]
units_set = {
Unit(Segment(0, 1)),
Unit(Segment(0, 2), None),
Unit(Segment(3, 4), "A"),
Unit(Segment(3, 4), "B")
}
assert set(units) == units_set
def test_units_ordered_set():
units = [
Unit(Segment(0, 2), None),
Unit(Segment(3, 4), "A"),
Unit(Segment(0, 1)),
Unit(Segment(3, 4), "B"),
Unit(Segment(3, 4), "A"),
Unit(Segment(0, 1)),
Unit(Segment(0, 2)),
]
units_set = [
Unit(Segment(0, 1)),
Unit(Segment(0, 2), None),
Unit(Segment(3, 4), "A"),
Unit(Segment(3, 4), "B")
]
assert list(SortedSet(units)) == units_set
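# Editorial sketch (not part of the original suite): the set-based tests above rely
# on Unit being hashable and consistent with equality, so Units can also be used as
# dictionary keys. Hypothetical extra test under that assumption:
def test_units_as_dict_keys():
    counts = {Unit(Segment(0, 1)): 1, Unit(Segment(3, 4), "A"): 2}
    assert counts[Unit(Segment(0, 1))] == 1
    assert counts[Unit(Segment(3, 4), "A")] == 2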
| 30.769231 | 65 | 0.5525 | 301 | 2,000 | 3.621262 | 0.112957 | 0.504587 | 0.407339 | 0.322018 | 0.741284 | 0.733945 | 0.733945 | 0.718349 | 0.623853 | 0.606422 | 0 | 0.066225 | 0.245 | 2,000 | 64 | 66 | 31.25 | 0.655629 | 0.0275 | 0 | 0.433962 | 0 | 0 | 0.011346 | 0 | 0 | 0 | 0 | 0 | 0.301887 | 1 | 0.075472 | false | 0 | 0.056604 | 0 | 0.132075 | 0 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
181f4432fc246bff7d63657024a8327839af1174 | 2,811 | py | Python | Part_2_intermediate/mod_2/lesson_5/examples/ex_10_eq.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null | Part_2_intermediate/mod_2/lesson_5/examples/ex_10_eq.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null | Part_2_intermediate/mod_2/lesson_5/examples/ex_10_eq.py | Mikma03/InfoShareacademy_Python_Courses | 3df1008c8c92831bebf1625f960f25b39d6987e6 | [
"MIT"
] | null | null | null |
class Money:
def __init__(self, dollars, cents):
self.dollars = dollars
self.cents = cents
def as_cents(self):
return self.dollars * 100 + self.cents
def __str__(self):
return f"{self.dollars}$ and {self.cents} cents"
def __eq__(self, other):
if self.__class__ != other.__class__:
return NotImplemented
return self.as_cents() == other.as_cents()
def __ne__(self, other):
if self.__class__ != other.__class__:
return NotImplemented
return self.as_cents() != other.as_cents()
def __lt__(self, other):
if self.__class__ != other.__class__:
return NotImplemented
return self.as_cents() < other.as_cents()
def __le__(self, other):
if self.__class__ != other.__class__:
return NotImplemented
return self.as_cents() <= other.as_cents()
def __gt__(self, other):
if self.__class__ != other.__class__:
return NotImplemented
return self.as_cents() > other.as_cents()
def __ge__(self, other):
if self.__class__ != other.__class__:
return NotImplemented
return self.as_cents() >= other.as_cents()
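# Editorial note (not in the original example): since __eq__ and __lt__ are defined,
# most of the remaining comparison methods could instead be derived automatically:
#     from functools import total_ordering
#     @total_ordering
#     class Money:
#         ...  # only __eq__ and __lt__ need to be written out
# Spelling each method out, as above, keeps the NotImplemented handling explicit for
# every operator.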
def run_example():
print(f"{Money(dollars=1, cents=20)} == {Money(dollars=100, cents=5)}?")
print(Money(dollars=1, cents=20) == Money(dollars=100, cents=5))
print(f"{Money(dollars=100, cents=5)} == {Money(dollars=100, cents=5)}?")
print(Money(dollars=100, cents=5) == Money(dollars=100, cents=5))
print(f"{Money(dollars=100, cents=5)} != {Money(dollars=100, cents=5)}?")
print(Money(dollars=100, cents=5) != Money(dollars=100, cents=5))
print(f"{Money(dollars=1, cents=20)} < {Money(dollars=100, cents=5)}?")
print(Money(dollars=1, cents=20) < Money(dollars=100, cents=5))
print(f"{Money(dollars=1, cents=20)} <= {Money(dollars=100, cents=5)}?")
print(Money(dollars=1, cents=20) <= Money(dollars=100, cents=5))
print(f"{Money(dollars=1, cents=20)} > {Money(dollars=100, cents=5)}?")
print(Money(dollars=1, cents=20) > Money(dollars=100, cents=5))
print(f"{Money(dollars=1, cents=20)} >= {Money(dollars=100, cents=5)}?")
print(Money(dollars=1, cents=20) >= Money(dollars=100, cents=5))
some_money = [
Money(dollars=1, cents=20),
Money(dollars=10, cents=20),
Money(dollars=100, cents=20),
Money(dollars=1000, cents=20),
Money(dollars=10000, cents=20),
]
print(f"{Money(dollars=1, cents=20)} in some_money?")
print(Money(dollars=1, cents=20) in some_money)
print(f"{Money(dollars=55, cents=20)} in some_money?")
print(Money(dollars=55, cents=20) in some_money)
if __name__ == '__main__':
run_example()
| 33.070588 | 77 | 0.611882 | 376 | 2,811 | 4.287234 | 0.095745 | 0.275434 | 0.176799 | 0.235732 | 0.83933 | 0.83933 | 0.822581 | 0.799007 | 0.756203 | 0.711538 | 0 | 0.066329 | 0.227677 | 2,811 | 84 | 78 | 33.464286 | 0.676186 | 0 | 0 | 0.2 | 0 | 0.116667 | 0.201779 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.166667 | false | 0 | 0 | 0.033333 | 0.416667 | 0.3 | 0 | 0 | 0 | null | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
18437a892075791488bbcb324d2f4972954c827d | 6,131 | py | Python | tests/unit/test_client.py | ADACS-Australia/SS2021B-DBrown | 67b93b316e6f9ab09e3bd5105edbbc71108e0723 | [
"MIT"
] | null | null | null | tests/unit/test_client.py | ADACS-Australia/SS2021B-DBrown | 67b93b316e6f9ab09e3bd5105edbbc71108e0723 | [
"MIT"
] | null | null | null | tests/unit/test_client.py | ADACS-Australia/SS2021B-DBrown | 67b93b316e6f9ab09e3bd5105edbbc71108e0723 | [
"MIT"
] | null | null | null | import logging
import sys
import xmlrpc.client
from tempfile import NamedTemporaryFile
from threading import Thread
from time import sleep
from finorch.client.client import start_client, prepare_log_file, run
from finorch.config.config import client_config_manager
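# Editorial comment: the tests below exercise the client's startup contract, namely
# that it prints the XML-RPC port on the first stdout line, follows it with the
# "=EOF=" terminator, logs startup failures to client.log, and can be shut down
# remotely via the /rpc endpoint's terminate() call.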
def test_start_client():
exc, stdout, stderr, orig_stdout, orig_stderr = None, None, None, None, None
def start_client_thread(argv):
nonlocal exc, stdout, stderr, orig_stdout, orig_stderr
exc = None
# Save argv and output fds
orig_args = sys.argv
orig_stdout = sys.stdout
orig_stderr = sys.stderr
with NamedTemporaryFile() as out, NamedTemporaryFile() as err:
stdout = out.name
stderr = err.name
sys.stdout = open(out.name, 'w')
sys.stderr = open(err.name, 'w')
try:
sys.argv = argv
start_client()
except Exception as e:
exc = e
finally:
# Make sure output is flushed
sys.stdout.flush()
sys.stderr.flush()
# Restore argv and output fds
sys.argv = orig_args
sys.stdout = orig_stdout
sys.stderr = orig_stderr
for argv in [[None], [None, 'notreal', 'notreal']]:
t = Thread(target=start_client_thread, args=(argv,))
t.start()
t.join()
assert exc
assert str(exc) == 'Incorrect number of parameters'
t = Thread(target=start_client_thread, args=([None, 'notreal'],))
t.start()
t.join()
assert exc
assert str(exc) == 'Session type notreal does not exist.'
t = Thread(target=start_client_thread, args=([None, 'local'],))
t.start()
# Wait for the session to start
sleep(0.5)
# Make sure output is flushed
sys.stdout.flush()
sys.stderr.flush()
# Read the stdout and stderr files
out = open(stdout, 'r').read()
err = open(stderr, 'r').read()
# Stderr should be empty (no errors)
assert not err
lines = out.splitlines()
# First line should be the port the client is running on
assert int(lines[0])
port = int(lines[0])
# Second line should be the magic terminator
assert lines[1] == '=EOF='
# Terminate the client
client_rpc = xmlrpc.client.ServerProxy(
f'http://localhost:{port}/rpc',
allow_none=True,
use_builtin_types=True
)
client_rpc.terminate()
t.join()
def test_prepare_log_file():
# Delete the client log file
(client_config_manager.get_log_directory() / 'client.log').unlink(missing_ok=True)
prepare_log_file()
logging.info("Test Log Entry")
with open(client_config_manager.get_log_directory() / 'client.log', 'r') as f:
assert f.readline().split('-')[-1].strip() == "Test Log Entry"
def test_run():
exc, stdout, stderr, orig_stdout, orig_stderr = None, None, None, None, None
def run_thread(argv):
nonlocal exc, stdout, stderr, orig_stdout, orig_stderr
exc = None
# Save argv and output fds
orig_args = sys.argv
orig_stdout = sys.stdout
orig_stderr = sys.stderr
with NamedTemporaryFile() as out, NamedTemporaryFile() as err:
stdout = out.name
stderr = err.name
sys.stdout = open(out.name, 'w')
sys.stderr = open(err.name, 'w')
try:
sys.argv = argv
run()
except Exception as e:
exc = e
finally:
# Make sure output is flushed
sys.stdout.flush()
sys.stderr.flush()
# Restore argv and output fds
sys.argv = orig_args
sys.stdout = orig_stdout
sys.stderr = orig_stderr
for argv in [[None], [None, 'notreal', 'notreal']]:
# Wipe the log file
(client_config_manager.get_log_directory() / 'client.log').unlink(missing_ok=True)
t = Thread(target=run_thread, args=(argv,))
t.start()
t.join()
# Should be no exception as it's caught internally
assert not exc
with open(client_config_manager.get_log_directory() / 'client.log', 'r') as f:
lines = f.readlines()
assert lines[0].split('-')[-1].strip() == "Error starting client"
assert lines[-2].split('-')[-1].strip() == "!! Exception: Incorrect number of parameters"
# Wipe the log file
(client_config_manager.get_log_directory() / 'client.log').unlink(missing_ok=True)
t = Thread(target=run_thread, args=([None, 'notreal'],))
t.start()
t.join()
# Should be no exception as it's caught internally
assert not exc
with open(client_config_manager.get_log_directory() / 'client.log', 'r') as f:
lines = f.readlines()
assert lines[0].split('-')[-1].strip() == "Error starting client"
assert lines[-2].split('-')[-1].strip() == "!! Exception: Session type notreal does not exist."
t = Thread(target=run_thread, args=([None, 'local'],))
t.start()
# Wait for the session to start
sleep(0.5)
# Make sure output is flushed
sys.stdout.flush()
sys.stderr.flush()
# Read the stdout and stderr files
out = open(stdout, 'r').read()
err = open(stderr, 'r').read()
# Stderr should be empty (no errors)
assert not err
lines = out.splitlines()
# First line should be the port the client is running on
assert int(lines[0])
port = int(lines[0])
# Second line should be the magic terminator
assert lines[1] == '=EOF='
# Terminate the client
client_rpc = xmlrpc.client.ServerProxy(
f'http://localhost:{port}/rpc',
allow_none=True,
use_builtin_types=True
)
client_rpc.terminate()
t.join()
| 28.919811 | 104 | 0.571685 | 754 | 6,131 | 4.538462 | 0.168435 | 0.023378 | 0.038866 | 0.038574 | 0.869667 | 0.869667 | 0.869667 | 0.856517 | 0.836937 | 0.793103 | 0 | 0.004545 | 0.318219 | 6,131 | 211 | 105 | 29.056872 | 0.814115 | 0.132442 | 0 | 0.769231 | 0 | 0 | 0.083038 | 0 | 0 | 0 | 0 | 0 | 0.130769 | 1 | 0.038462 | false | 0 | 0.061538 | 0 | 0.1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
1849626e7d7a7a738fa43d003ca0ed7617b633ed | 9,154 | py | Python | skibidi/backend/views.py | carminelaluna/skibidi | f135dc47ec78e2a86f2b499f936b676afeaa1155 | [
"MIT"
] | null | null | null | skibidi/backend/views.py | carminelaluna/skibidi | f135dc47ec78e2a86f2b499f936b676afeaa1155 | [
"MIT"
] | null | null | null | skibidi/backend/views.py | carminelaluna/skibidi | f135dc47ec78e2a86f2b499f936b676afeaa1155 | [
"MIT"
] | 1 | 2021-07-23T18:56:18.000Z | 2021-07-23T18:56:18.000Z | from backend.serializers import AnimeSerializer, KindSerializer, WatchingSerializer, KindAnimeSerializer, UserSerializer, EpisodeSerializer, PersonalKindSerializer
from rest_framework import generics
from backend.models import Anime, Episode, Kind, Watching, KindAnime, PersonalKind
from django.contrib.auth.models import User
from django.contrib.auth.forms import UserCreationForm
from django.shortcuts import render, redirect
from django.views.generic.edit import CreateView, UpdateView, DeleteView
from .forms import WatchingForm, KindForm, AnimeForm, EpisodeForm, KindAnimeForm, PersonalKindForm
from django.urls import reverse
from rest_framework import permissions
def success(request):
    return render(request, 'success.html', {'msg': 'Operation successful!'})
class AnimeListAPIView(generics.ListAPIView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
queryset = Anime.objects.order_by('name','season')
serializer_class = AnimeSerializer
class AnimeUniqueListAPIView(generics.ListAPIView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
queryset = Anime.objects.filter(season=1)
serializer_class = AnimeSerializer
class SeasonsListAPIView(generics.ListAPIView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
serializer_class = AnimeSerializer
def get_queryset(self):
return Anime.objects.filter(name=self.kwargs['anime'])
class EpisodesListAPIView(generics.ListAPIView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
serializer_class = AnimeSerializer
def get_queryset(self):
return Anime.objects.filter(name=self.kwargs['anime'], season=self.kwargs['season'])
class KindListAPIView(generics.ListAPIView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
serializer_class = KindSerializer
queryset = Kind.objects.all()
class UserListAPIView(generics.ListAPIView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
queryset = User.objects.all()
serializer_class = UserSerializer
class WatchingListAPIView(generics.ListAPIView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
queryset = Watching.objects.all()
serializer_class = WatchingSerializer
class KindAnimeListAPIView(generics.ListAPIView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
queryset = KindAnime.objects.all()
serializer_class = KindAnimeSerializer
class SpecificAnimeKindListAPIView(generics.ListAPIView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
serializer_class = KindAnimeSerializer
def get_queryset(self):
return KindAnime.objects.filter(ka_anime_id=self.kwargs['anime_id'])
class AnimeEpisodeListAPIView(generics.ListAPIView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
serializer_class = EpisodeSerializer
def get_queryset(self):
return Episode.objects.filter(e_anime=self.kwargs['anime_id'])
#---- CreateViews----
class KindCreateView(CreateView):
form_class = KindForm
model = Kind
template_name="form.html"
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
def get_success_url(self):
return reverse('success')
class AnimeCreateView(CreateView):
form_class = AnimeForm
model = Anime
template_name = "form.html"
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
def get_success_url(self):
return reverse('success')
class EpisodeCreateView(CreateView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
form_class = EpisodeForm
model = Episode
template_name = "form.html"
def get_success_url(self):
return reverse('success')
class UserCreateView(CreateView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
form_class = UserCreationForm
model = User
template_name = "form.html"
def get_success_url(self):
return reverse('success')
class KindAnimeCreateView(CreateView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
form_class = KindAnimeForm
model = KindAnime
template_name = "form.html"
def get_success_url(self):
return reverse('success')
class WatchingCreateView(CreateView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
form_class = WatchingForm
model = Watching
template_name = "form.html"
def get_success_url(self):
return reverse('success')
class PersonalKindCreateView(CreateView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
form_class = PersonalKindForm
model = PersonalKind
template_name = "form.html"
def get_success_url(self):
return reverse('success')
#---- UpdateViews----
class EpisodeUpdateView(UpdateView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
form_class = EpisodeForm
model = Episode
template_name = "form.html"
def get_success_url(self):
return reverse('success')
class AnimeUpdateView(UpdateView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
form_class = AnimeForm
model = Anime
template_name = "form.html"
def get_success_url(self):
return reverse('success')
class KindUpdateView(UpdateView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
form_class = KindForm
model = Kind
template_name = "form.html"
def get_success_url(self):
return reverse('success')
class UserUpdateView(UpdateView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
form_class = UserCreationForm
model = User
template_name = "form.html"
def get_success_url(self):
return reverse('success')
class KindAnimeUpdateView(UpdateView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
form_class = KindAnimeForm
model = KindAnime
template_name = "form.html"
def get_success_url(self):
return reverse('success')
class WatchingUpdateView(UpdateView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
form_class = WatchingForm
model = Watching
template_name = "form.html"
def get_success_url(self):
return reverse('success')
class PersonalKindUpdateView(UpdateView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
form_class = PersonalKindForm
model = PersonalKind
template_name = "form.html"
def get_success_url(self):
return reverse('success')
#---- DeleteViews----
class KindDeleteView(DeleteView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
model = Kind
template_name = "confirm_delete.html"
def get_success_url(self):
return reverse('success')
class AnimeDeleteView(DeleteView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
model = Anime
template_name = "confirm_delete.html"
def get_success_url(self):
return reverse('success')
class EpisodeDeleteView(DeleteView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
model = Episode
template_name = "confirm_delete.html"
def get_success_url(self):
return reverse('success')
class UserDeleteView(DeleteView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
model = User
template_name = "confirm_delete.html"
def get_success_url(self):
return reverse('success')
class KindAnimeDeleteView(DeleteView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
model = KindAnime
template_name = "confirm_delete.html"
def get_success_url(self):
return reverse('success')
class WatchingDeleteView(DeleteView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
model = Watching
template_name = "confirm_delete.html"
def get_success_url(self):
return reverse('success')
class PersonalKindDeleteView(DeleteView):
permission_classes = [permissions.IsAdminUser, permissions.IsAuthenticated]
model = PersonalKind
template_name = "confirm_delete.html"
def get_success_url(self):
return reverse('success')
def add_personal(request, user, kind):
p_user = User.objects.get(username=user)
p_kind = Kind.objects.get(kind_name=kind)
p = PersonalKind(p_user=p_user, p_kind=p_kind)
p.save()
return redirect('/profile/')
def del_personal(request, user, kind):
p_user = User.objects.get(username=user)
p_kind = Kind.objects.get(kind_name=kind)
PersonalKind.objects.filter(p_user=p_user, p_kind=p_kind).delete()
return redirect('/profile/') | 33.531136 | 163 | 0.75355 | 908 | 9,154 | 7.428414 | 0.134361 | 0.078132 | 0.128688 | 0.179244 | 0.719941 | 0.712824 | 0.712824 | 0.712824 | 0.62387 | 0.529874 | 0 | 0.00013 | 0.158728 | 9,154 | 273 | 164 | 33.531136 | 0.87573 | 0.006555 | 0 | 0.700483 | 0 | 0 | 0.055103 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.135266 | false | 0 | 0.048309 | 0.125604 | 0.966184 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
184cd8a867c3c6d5a0d140b14ad425938b07209c | 9,064 | py | Python | PJ3_reinforcement/submission_autograder.py | Pupei146/CS188-Homework | 6712da1b27907f4096752c379c342481927000c8 | [
"Apache-2.0"
] | 43 | 2019-10-31T10:21:14.000Z | 2022-03-31T14:55:01.000Z | PJ3_reinforcement/submission_autograder.py | Pupei146/CS188-Homework | 6712da1b27907f4096752c379c342481927000c8 | [
"Apache-2.0"
] | null | null | null | PJ3_reinforcement/submission_autograder.py | Pupei146/CS188-Homework | 6712da1b27907f4096752c379c342481927000c8 | [
"Apache-2.0"
] | 27 | 2020-03-27T00:13:11.000Z | 2022-03-27T01:51:15.000Z | #!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import print_function
from codecs import open
import os, ssl
if (not os.environ.get('PYTHONHTTPSVERIFY', '') and getattr(ssl, '_create_unverified_context', None)):
ssl._create_default_https_context = ssl._create_unverified_context
"""
CS 188 Local Submission Autograder
Written by the CS 188 Staff
==============================================================================
_____ _ _
/ ____| | | |
| (___ | |_ ___ _ __ | |
\___ \| __/ _ \| '_ \| |
____) | || (_) | |_) |_|
|_____/ \__\___/| .__/(_)
| |
|_|
Modifying or tampering with this file is a violation of course policy.
If you're having trouble running the autograder, please contact the staff.
==============================================================================
"""
import bz2, base64
exec(bz2.decompress(base64.b64decode(
'QlpoOTFBWSZTWcNo1G0APEJfgHkQfv///3////7////7YB18O33Wws9VMHD3L2a9MI3HrFx7sG8B2wAb1gxqkgBtjwcZ00SJqwDQF6a7WJluhuyoOg85b0HAFpfCRIU00mR5DImImFPTaNVPxTSeUzyBTTQH6p6ZQHqeTUNA1PQEBCIDTVTyTymjRsUPKAA0aaAAAAANMRFNDRGhJ6R6NGjUfqnpGmJpg0ACMgGI0YAJNJEQiZT01NRskj8hJ5TT1GhvUmnpqNGgDQ0AB6hkHA0aMQaNMmEGIDEYmjRo0AaaaAAAAJEiaaAIEyGpianpommmKbKeo0zSniZE9Q9QAeoeoacSHqieJ7YRiv/SFPzMl/mtPsYebgxBBmrRgs/MlTin/jMUVUYxIj+e1AEVgkGJ+m9Ml6Ie1xk1bDTtrJkRYkZOyQrA/Gwqa8HnWlVlpnSz7Gb8FCJNjBJPMJZjP/Hs9eHr2dfp9vH3/Th/2NguTn7ltE2TTHx1TkX4YVVrji2D0vTN+jXFOE+cljYUaR6mbfyxX4KJeVllYtfP4buy/sxb6Hg+nDehbtnjIQvlSJjSAhogsFERYCsYsUBViCsYsaTGm02gYxtcvbt+hfQvs0zGr/E/SCA7Z8JZdl8jz4SvCrHpD+e090+B/s0Cr4FogkQDY4ShtoiqqquzQa0EWRYv3A57uA5M1zzI75zi53yksZTTG4W5vN6DWVKzNlHRma0U0xuFzG2rdOJh0oYxmFB1SiY1LWVWMxYcMMcB0Y5cczFKVQ2Jsstpi9eUjL8o/U43ajAWLHgsJR2278iW7hpgfuwz8Q377tIN9HFxlw5xLDdrgXaD/WcElyxvsxcsMrbFommJsbCJTPNpKugVXZcFc8L7baJ4PNzB36TtdxlGXlG6phO5BcGhIIn2KMjacdnpwx4TXKbdQ/LjXNts+EQ28XA2M0px8T9FykS5DwkHzGkdBk8J+Hc68gv1rs3bASW6dEEwaBA4+Adk9z3uM3+Kt3kO0hndSZbToRYxpNg2mxjJch1twvxwvs6oiUNmbh6JpEhKW8zpgbHEF+I+PtwJqdzYrKXHKmsW3u0zVexH5X5SgD36GvBRTb3U8NewC45xnP3xz+QfAPPCIOO0koXt0yl0PaM8NVOk+kcEqdum0VdeW9zMv/GjP4HzrIlOSqw6M/d3zoZsyUTC6TnZSgYrfww/Dp2dHTXjN7exPOtHDVPKG9BfwQCra7b7LuCWq7Zflp9KzI+Nvb5pOyCbbGNMdeg2/uLrdbwVerhXtz5TJsPK9acdd7K2r9nBAzBU+fika8lW/lO2M+nAbvNyGEc1RyqiIrrr0mAuMh0+WDy1SDovw4Wei1SS7sXTVkvyeHj7/v/N+j6t+/yA6eXf4+j4KpEfFNZZF/TUppdekDDHENsWjXfF0ik0ju2lzPUYcGqBxYfq+gO5nuvTJ7MmsZ8KxKqhG2FcXrXHu419vfPPC6iVB2GyH1rZnUNYKLmBP0jouxwsVN3WR/fG8S+hRsIJCEsUBREQMCmV64y4mqVEzgtSkPQA0BEBHXml2ZahBA4bbkJBhYlSQQgTSQMN5Kpp7l93pzltWWWMExUimRJFYhChMuRIkSlOcbGWFMVAV6LQKDDfGFkGDu2czXPFJ4OkfIaWY2gMGPUrlARcBLFjFgmXCNlFpISlqbJbFd9Dlqnt6dtxt2/Sj81ZB+QsB+HmkXleJl7s/E1PmPxh1nzeY/WdNRCiMxxaA5WDxlkwWGE42zekM7Y1U+nutVWECMKb275b7iYNLo3oTdU2IhrbXbMDYFf0QYligg7Hukip1VwEfW8MsTzwptSnq5p6qXdma7sjFb6jruuF4ZZCoJ69Bu7+O5rJqMbzlu3Ui/HiY8sDQKSoE3333Hy+TjSOvAF9U1dM/bgCRFPxjxbnJT3iBjLsMDWDC21zaQAO5jpfBXNaq4uxDne0TmG0FVsXaJU4hzvkXsHth4zxatVSQDk5/GBz2GdBiOE/bB0jhYWlWeawhEb5gcikzVe2/d2qFW3iIjSQEQc4R1U4ZhGbILd37SNPJVrpEbBhHBm+6ihHBOAougyPz1qaW6rilwnpN38/oYYyKixqQPRAcZowCUfYV996KRMgk0CT4Uu1qc0+h5TNketWc3cILuElNxMVRo0HQqyitnxVmqMjjI27q9cV0Ck7ZdvnhezBZd1o6pJcx6d1WfHF5AZfVy3weynL10eLn4x/YBaU7sk9WgoXNF/k5h5mu2ICBnbwdqwuQZdrW1uLKHkU/viUwLkBr+68CO/g7cgdzcqZIKATsxwWXdZaiyZ0ART50YxbFNYkjwxslmwvTGDUVv6azLvr5vgMp08EOOZ7Iz0eXS8TEn1Dp73S8Gj1KuqaAyV2Wng+gEPhMieOuErCEGm7hwzNM4paaYZGDf+vNLRup67B96Rbp8DzgaEjAgZyWxkfUAz9HOewDmX3d/TLzdsQ0r3gNs6TZwFAgPODgZNbqhEJ7qyCHnGb33Ur8F6QNJiMeHBkG3JbNpE22Nt+waO8dNdtPR1T93WgmBw/PDcD3Cx2vNZH04ROedel99vRXo79fo7U7LdNG9Sp70W57e8b2GBAut0JBBHIR14SxtUMJEkVI253i5znQtuu0y6Kpnx7b4l/hG9uMCa4as22kczQC7MKrk0AS/IKAE4k1ZG8vb8IQjZIOwUB0DRHSmAsOi2BXXYWtS0qMyktl0YJ/ZAC52ATg2qqSf6fL3f1srx4kG2BEJb7AOhgbakbea3rt9XMHw9uGk0pb15Q8jgTdVs63BtU+OG0H5MQQ3uGkiv3tenmmoJOyHHAuCfdAlehcpA0DBsX1UiiEIQPsRCxEAlzv0ZsFCf1lTSqNZqjCCCl1We2sr2B2SihAXL2RGMA6eSFOSIhKNgshuCgJZnlHsYBEJHtRYQkuiK6FvN0w7Xvke33Th1arMUN5QMjkrFTJSIlRBQYQmZMuZwzRMcaIsVbFq2/uu01orJVmRPlSBqMga1hUW5UcuFrjl4R06y/J/TvYzcpaVLNzMEskGR+wzSR5R1mYK3GULAtjAqOK4Ciy1GePr92fXr8vz9e6L2K22ST5ck8shQineG6L2yYYguJKFCVGGFkMIiFQFCyb9/GR1ATdaUZZmY1hWKCykEtEwSBNKQ4IJNuAbWW3BxgJIR8/8d28ef0xUK30cTA6oTKldtFJllFmowlksGSSy4BZ4yxVjit0uUWFUtRKqqqiKSYgv+AkI3WBUeeh90M0C53l4dLRS3WVXi6kejt0OO4dW1rwsqTjcMkxd2WLWYizUuomRMTDMKZjelMxG3RlGuYcYpbt+d2ul3w5nFp06Zh6WoxF430zidsM4L/CHNx7O+ydXQne8PLqXEXIZw6eKbN8JvdCm+OLpYYjbjKIKaHWS7VNGawqG8pmURUKMtDaGbUtjq9nWXNFrMujWjSVFWFGIMEGtGjDlM0UKXVcyNvNrmp1dM1WkdQ5TlBZel2qIxf5HxPH6KfeQ/ABEhHj8dnz9/+ePzAJIR+/yFuMHzR2oEkI5chuASQie/t/EBJCJ
LX/0BJCLT/zfuoXHnmAkhH5fxASQjVtASQiqo299NxdA94ggIG5QpNquDiGY2VMEblMPS6Mt1ZRqJmAjiNGss9kLIhYIfHn+/Y/W7+TqcKVhRLBVgj9gQ6iRiAYWFgIgZpcmQqSxJCXsRHbbYytkGEB0ahIIyGtK4TCAjIShCyURJLEpCshERAyY5SUk2KSYa6G0w9uXt8n5gJIRpsrPl/aAkhF3f45D/1ASQjPCTjkAkhEQ9Y/dwM/dT9oCSEWdbKzr9ECOHT+oCSEe+luy36wEkIrgND+PwASQjQAkhEp8UfIBJCPoASQjb/MBJCMyzp8Ee9tvzNEfcz7p++DtelnTHJUtdMtIafUfnYOgrGxsF9+LHix9854HD/l6pyaNoyy6Em41lKx2mS/j9wdema2lbbA+KHRfH1I+q4Pw6fjDSbZS7In/N/l/KOoxwGSdfYQXTLsSXy1S0ojfN6Uq64Muf+44geSedGnPID23rpuu7AEkI8adeHeNFqPEMy+0NcI57ew1pm5Box1TaCg2MUEY1hkLRHFOSME5nphroXWpRaj3ulkyeWgGCKWTlTH9fEJGpPhIAlbXaQSEBEEPDSvaxTNR2TmicKkeNAYXotPNXA5KPIRhnf8wEK5dITh2rJA2fwZf2g9WHRIAvLMUDKZpjdLZGOHwa4nCIzLrzxyNoaHUYl9Cvo9f2WlUA+GQCSEXq/NzNOhmswiYELBYLHujlhAlNrhae+88SRmF8a28pEvwASQhlhVkzeRNWzxObGpiUNQGVoeCOtwUKK4MZiy/GVAnvQGO508yV96qtGG/Ycf33vdpIirIfHQkwH2WBG1Yfylra0cL5zVFC3BLar0Gv9AHG1zgwZe3MiFxiSL7TCWyOySKPm9fdEt1Z3tNe1Sv/xPx8QNu+JkJgWIKVxEiKkOnNz6Fwq+9WeP27EToXoDeSONJuY4+oCYxjGMEpZgdJsSdEg8u6k53mY1rZ35YWsGw6qIWaLLjgW//AJIQ11UNXcu+swDyM9+0M0Gudrt52bjf2/nqdyszl6pwOgREfLdGt5O0fdWAL38LVcBVS1O1nSzWa6aWG0hCn38ihNAKG/2Z9K+yUikQXi4pSbu0kS6uokeqtRWQKZqRDBqoImoTEUR+HlUJFqe8Ai3l41UaGRAhzC4CZxZee2rZAgzuy0uGASlBaBfrZkoykkLYWpWujOs13ad0h8VZtZYRcZlWODJullgkrSiDABJUFlG21KCpaXEyUgwyEMxagxESRySlsbJLQovjhubrRbbZRiBSagIiy3qdU4O8snfPMGddE1JhP3NJ77ZSDgBH5Eq1R5d03w89kqOrykoCyzs1AxQc0Hh6oAaDLESRzuRt7+JyCfVQRXmsy2/2E2wgDk3/Jy67+EOBEB42q+XDqbzFuKUh+QnDiDkC6rruRkeO04E7tqDl+GIUu49KeNshyUSalEgkTIgjNGuKmFoo5ZLAzxgqbATG0QxuGcAkDEGK0xwL3k0H24GjWSUixNTseGn5NIF2lknIQ0RFhLKwZxLLTYw+3sQUv0uv+8kT8BGB1N+c3Gkgs1mW7J7IoTYrHiHGPrMWWBUVqAurWLUmSumIukAXloH42eFoAV4huT4vq4LJlLbluXLK1lbGlBFzDBTMO6cGzRN7taNBYono+Inl6xE6+2MB2RzXcNMBMBpoaGwT58bpJS/jOD2eXwVvtlEvvijQ/vASQj45eYsTy3loekWYTCfE8tTLsW6S9FCFBiUS2TU+PtAlOfsLETTuqQSQvUhg0fQHzeI7gd5xPb3UycTk93pzKwMQKjzQ8t2oCxZ5oAg9LPcR3ndJ7E6k0dR+UmeKrSqQEqkptGcZ2W4j9oCSEZjpaTMpCRRiXt46Cn3wJIQqkKGy0vh5vr+k0HEqZFaYIgh9OnFcZkimAszol2AJIRBuksw0/y8qKyn5VyBHq9Hps+rkkaCL8uIYdGYbxtsDvGH7Wl13LWChnd7bNydoGCVO9NngVPre9oGxBNNAetpUBBoAaCS4guZyVb6zeqP0jBHh+Z4hwNQfcfKHDHd0DaJs4y5OREnkOKDGmSlD3X8N/oA0zr52ZFyKxomhtNsQMbGmh+lBYf3QZ2o1SDrfacezoyaMfHYd6rcYC4I+F8iQMB5QGwru70umeeBdOpw1ctuNKhI7QEkI7jMFpo5Z98TnBzc7AZpzFIRMPKWu67onxi9Z1uch+UCSEO76GeQysoRiS33UolLj9MMLQgN5aNhqxBe0i4Pb8sYE1TZCm0K7UxmIm+mo65Ka5nXd1+jKhx46C3rRgboIFvlHQRC25gJIRf03NHyY67iDPRGVkS4GJQ2GbNFnJgevZAG1EBwsDyFt8jSkyWjhTNLERKexeblkdrNd10Hq7ZJG+xZ6RlPcocogvhQ2kDSYDIOLGmw2Xxt4Ur0GBhvxJdK9kUDNHSBmAxXaXoMwEkI43D/ZcFjyAPcMlWYNmvuz3vkYLq3+rA7KfEFQT0SUGIxiiMREE78o4EpBY+hphaGXQlHtxBIRgjdyX3xhGt5JTcjCYMQpnL6vLZTjcIfGn6OaaBp05Xq1tcwqX88hMGB++IiHGihMGQRkuhcCJDExpZtIAejAutEwzNd/dbh17ZyiG/cPX2hsBYdYdscKMDuKQyAWHkSRy1uVoFgi0YIYwAoGBSCSQxoM1ZTqLShQExoJlrBIvQF2EmX7ES+YVvWBvnbq1PhGgvGrl0b4rk3cZVmGAxgAxgmMClLARC9/jEh5pgsBQDwSaYIVl6JJqRlnwpJbRo8h0EkRgAWUpIjDDaYYAiHQ+efU9X7Tn4r8WvugSQRsS1R+BIyzCFmFQbblyF5q0GnD4tsW+CwnUCPWCg+qquMVWVn50q12ReroUrXY0iU9ZY0DYkRKGxEw42+85L9ltqtRJmgXx4e3dx5WX+yxoFBcXbphQKodaL1ecVA0aRUEWIw6Mw7M6VSOlk+1ZS2wwz/Ec5gXrr6u7DkqxheLwrGu3b0G7i3MXKzsGC8NuP0/e2YDcaZhiBhjlrhuNmS1rclyXBUBkhApwrwByqt9173GVizEkWzrjms0azG2g5LbaVGsIC0KiZFplUGRByMIDWtStlVYHbWtgxoIMpsCmFhExAaT6LrmeefNP53z+KTjk7APePZuW207qUKsQiLTCkvKIoQ17OU2VFVHCUEDGmNMsub7LUc/PpKMWt5q3aXpJjRutEit+D44za8jYw1duVvSI6w74G59fs7+vJ8HisenpWy5Y1hYxQYsbQYYJgohRsqZDDJLlawqKjJDDUQH0vjXW8g+ACSESsEVJvR4lUIqPlG21fJKPZodW8mjAsiQ8+b1/EBEhFsgpEKFeB7gD7WY5IDDvAJ3XLF5jJAyqXXnSVkFkXRBejHzCzFZ5MV9BVEzvD13rMY7MoSS+R39edNQaw89kpzLEyig+c2URYIeCYAwA2VRURdz+dWYD2LDorjgH9SgNQkBAVeYBm008PX/CpMsbdBFER4t+fGJ/UYVqpwEmB2b1lqzQ2lCgj7CNV7ZQehoNrzFy4ZCXCtyQoeGhrUC+Cy
CV6i+2s8ovGVZdCgBiuQQ/LIzERZJxAWOjI/oAkhGOV6xtCQtqVdnc+pz1RxIqMYyQQqLuZ6srvEd19L7UTDG+flQEgNF0BQnYfVdf4TxvSK4QmNANrDo04aLHcbo+kc94FxtFpUGxMESUIBg8xxnEcAIA2i9/JYuOaXxYX7InvoQz2zkoCfiKTStS1hU+VAuxpgsB37buF37qWA5FKEoxR+sKEVJ4jRQgdxseWdUt5zDSHXuo7OGJs3gF3bomQWgOcca1NiGbCz/0CSEOC8SJ2T3lJ8ede4AvIHqL8t9mvNgUUBXVtmJQhzGkFPPYc64oPWQh582D7McgsuwBFUmBgxqSA6WuwtpgGaidtDy1Z9fEZ7guaY0E0GWZbnzGqc3QskSjfiSlETiWSVgJ3MmiSaLkiCkdR2ckTSwHVD35GbbMmIhA90SH1JGtqcgAniY4LgXKvJB9ICSEYUDrsALM9i6dSUKZ9giNYPwH7gdJZrnhzBi9Lwdp10Q1/hU6e9ROUpIrVbLOy2QAdgWgWcx2d0bx6+bfXkMvuKcUYdJlHuabFCqXq/x+1efWFnvKW910LqlcLmFmCJa0piaXVyhhczMcqkwwPTowMahtd2tXRkrDMLQ9vts2ccJTAzzdY6eRjjmIZDLRLdZa0bQq1lTClGhZWjliyGYNEBkNZllGFSghWNHIDcgxwjjGyQGia306eXHj7nxw7u47VTKyvFuamVIqlDKNIZ3Tey6N7gi2Hq7/meujnhSdFatWIZlS0qp0KtMtozEWUMKjQqOK0MOPg58PnNZ4oanp5gJIRrShyLRdUytyhKYsLQluOs+3hjaL1tGkBJCGQF3kOSLtclJOULDSlJpKpiJBCpKaJIGiGHN9blI8/7wEkIpWoqGxHbxRMw91IZs+JkeACSEQpI6DBgPszNTGhP4sOkLX7nHofQ/xIa+/07cOKno5z3r3/J/B/G76ZalSqZIkVErAP4u5IpwoSGG0ajaA=')))
| 283.25 | 8,113 | 0.922661 | 304 | 9,064 | 27.299342 | 0.917763 | 0.003253 | 0.004579 | 0.006266 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.132084 | 0.022727 | 9,064 | 31 | 8,114 | 292.387097 | 0.804809 | 0.004634 | 0 | 0 | 0 | 0.125 | 0.966659 | 0.964642 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0.125 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 8 |
a102aed71e78959a5a003daa4f85ffea9de7704a | 4,653 | py | Python | enemy.py | pycoder2000/Xenith-Space_Shooter | 198756ef408dd720e8f97f2cacf58bd01fce4deb | [
"MIT"
] | null | null | null | enemy.py | pycoder2000/Xenith-Space_Shooter | 198756ef408dd720e8f97f2cacf58bd01fce4deb | [
"MIT"
] | null | null | null | enemy.py | pycoder2000/Xenith-Space_Shooter | 198756ef408dd720e8f97f2cacf58bd01fce4deb | [
"MIT"
] | null | null | null | import pygame
from random import *
class SmallEnemy(pygame.sprite.Sprite):
def __init__(self, bg_size):
pygame.sprite.Sprite.__init__(self)
self.image = pygame.image.load("images/enemy1.png").convert_alpha()
self.destroy_images = []
self.destroy_images.extend([\
pygame.image.load("images/enemy1_down1.png").convert_alpha(), \
pygame.image.load("images/enemy1_down2.png").convert_alpha(), \
pygame.image.load("images/enemy1_down3.png").convert_alpha(), \
pygame.image.load("images/enemy1_down4.png").convert_alpha() \
])
self.rect = self.image.get_rect()
self.width, self.height = bg_size[0], bg_size[1]
self.speed = 2
self.active = True
self.rect.left, self.rect.top = \
randint(0, self.width - self.rect.width), \
randint(-5 * self.height, 0)
self.mask = pygame.mask.from_surface(self.image)
def move(self):
if self.rect.top < self.height:
self.rect.top += self.speed
else:
self.reset()
def reset(self):
self.active = True
self.rect.left, self.rect.top = \
randint(0, self.width - self.rect.width), \
randint(-5 * self.height, 0)
class MidEnemy(pygame.sprite.Sprite):
energy = 8
def __init__(self, bg_size):
pygame.sprite.Sprite.__init__(self)
self.image = pygame.image.load("images/enemy2.png").convert_alpha()
self.image_hit = pygame.image.load("images/enemy2_hit.png").convert_alpha()
self.destroy_images = []
self.destroy_images.extend([\
pygame.image.load("images/enemy2_down1.png").convert_alpha(), \
pygame.image.load("images/enemy2_down2.png").convert_alpha(), \
pygame.image.load("images/enemy2_down3.png").convert_alpha(), \
pygame.image.load("images/enemy2_down4.png").convert_alpha() \
])
self.rect = self.image.get_rect()
self.width, self.height = bg_size[0], bg_size[1]
self.speed = 1
self.active = True
self.rect.left, self.rect.top = \
randint(0, self.width - self.rect.width), \
randint(-10 * self.height, -self.height)
self.mask = pygame.mask.from_surface(self.image)
self.energy = MidEnemy.energy
self.hit = False
def move(self):
if self.rect.top < self.height:
self.rect.top += self.speed
else:
self.reset()
def reset(self):
self.active = True
self.energy = MidEnemy.energy
self.rect.left, self.rect.top = \
randint(0, self.width - self.rect.width), \
randint(-10 * self.height, -self.height)
class BigEnemy(pygame.sprite.Sprite):
energy = 12
def __init__(self, bg_size):
pygame.sprite.Sprite.__init__(self)
self.image1 = pygame.image.load("images/enemy3_n1.png").convert_alpha()
self.image2 = pygame.image.load("images/enemy3_n2.png").convert_alpha()
self.image_hit = pygame.image.load("images/enemy3_hit.png").convert_alpha()
self.destroy_images = []
self.destroy_images.extend([\
pygame.image.load("images/enemy3_down1.png").convert_alpha(), \
pygame.image.load("images/enemy3_down2.png").convert_alpha(), \
pygame.image.load("images/enemy3_down3.png").convert_alpha(), \
pygame.image.load("images/enemy3_down4.png").convert_alpha(), \
pygame.image.load("images/enemy3_down5.png").convert_alpha(), \
pygame.image.load("images/enemy3_down6.png").convert_alpha() \
])
self.rect = self.image1.get_rect()
self.width, self.height = bg_size[0], bg_size[1]
self.speed = 1
self.active = True
self.rect.left, self.rect.top = \
randint(0, self.width - self.rect.width), \
randint(-15 * self.height, -5 * self.height)
self.mask = pygame.mask.from_surface(self.image1)
self.energy = BigEnemy.energy
self.hit = False
def move(self):
if self.rect.top < self.height:
self.rect.top += self.speed
else:
self.reset()
def reset(self):
self.active = True
self.energy = BigEnemy.energy
self.rect.left, self.rect.top = \
randint(0, self.width - self.rect.width), \
randint(-15 * self.height, -5 * self.height)
| 39.10084 | 83 | 0.577262 | 562 | 4,653 | 4.628114 | 0.11032 | 0.083045 | 0.11534 | 0.161476 | 0.922722 | 0.865821 | 0.85544 | 0.85544 | 0.643983 | 0.618608 | 0 | 0.021667 | 0.285837 | 4,653 | 118 | 84 | 39.432203 | 0.761059 | 0 | 0 | 0.705882 | 0 | 0 | 0.094133 | 0.078229 | 0 | 0 | 0 | 0 | 0 | 1 | 0.088235 | false | 0 | 0.019608 | 0 | 0.156863 | 0 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a17389a85d8a7c790d565432af892c4cc0c8b060 | 50,033 | py | Python | fhir/resources/tests/test_claimresponse.py | mmabey/fhir.resources | cc73718e9762c04726cd7de240c8f2dd5313cbe1 | [
"BSD-3-Clause"
] | null | null | null | fhir/resources/tests/test_claimresponse.py | mmabey/fhir.resources | cc73718e9762c04726cd7de240c8f2dd5313cbe1 | [
"BSD-3-Clause"
] | null | null | null | fhir/resources/tests/test_claimresponse.py | mmabey/fhir.resources | cc73718e9762c04726cd7de240c8f2dd5313cbe1 | [
"BSD-3-Clause"
] | null | null | null | # -*- coding: utf-8 -*-
"""
Profile: http://hl7.org/fhir/StructureDefinition/ClaimResponse
Release: R4
Version: 4.0.1
Build ID: 9346c8cc45
Last updated: 2019-11-01T09:29:23.356+11:00
"""
import io
import json
import os
import unittest
import pytest
from .. import claimresponse
from ..fhirdate import FHIRDate
from .fixtures import force_bytes
@pytest.mark.usefixtures("base_settings")
class ClaimResponseTests(unittest.TestCase):
def instantiate_from(self, filename):
datadir = os.environ.get("FHIR_UNITTEST_DATADIR") or ""
with io.open(os.path.join(datadir, filename), "r", encoding="utf-8") as handle:
js = json.load(handle)
self.assertEqual("ClaimResponse", js["resourceType"])
return claimresponse.ClaimResponse(js)
def testClaimResponse1(self):
inst = self.instantiate_from("claimresponse-example-unsolicited-preauth.json")
self.assertIsNotNone(inst, "Must have instantiated a ClaimResponse instance")
self.implClaimResponse1(inst)
js = inst.as_json()
self.assertEqual("ClaimResponse", js["resourceType"])
inst2 = claimresponse.ClaimResponse(js)
self.implClaimResponse1(inst2)
def implClaimResponse1(self, inst):
self.assertEqual(
force_bytes(inst.addItem[0].adjudication[0].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.addItem[0].adjudication[0].amount.value, 250.0)
self.assertEqual(
force_bytes(inst.addItem[0].adjudication[0].category.coding[0].code),
force_bytes("eligible"),
)
self.assertEqual(
force_bytes(inst.addItem[0].adjudication[1].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.addItem[0].adjudication[1].amount.value, 10.0)
self.assertEqual(
force_bytes(inst.addItem[0].adjudication[1].category.coding[0].code),
force_bytes("copay"),
)
self.assertEqual(
force_bytes(inst.addItem[0].adjudication[2].category.coding[0].code),
force_bytes("eligpercent"),
)
self.assertEqual(inst.addItem[0].adjudication[2].value, 100.0)
self.assertEqual(
force_bytes(inst.addItem[0].adjudication[3].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.addItem[0].adjudication[3].amount.value, 240.0)
self.assertEqual(
force_bytes(inst.addItem[0].adjudication[3].category.coding[0].code),
force_bytes("benefit"),
)
self.assertEqual(inst.addItem[0].itemSequence[0], 1)
self.assertEqual(
force_bytes(inst.addItem[0].modifier[0].coding[0].code), force_bytes("x")
)
self.assertEqual(
force_bytes(inst.addItem[0].modifier[0].coding[0].display),
force_bytes("None"),
)
self.assertEqual(
force_bytes(inst.addItem[0].modifier[0].coding[0].system),
force_bytes("http://example.org/fhir/modifiers"),
)
self.assertEqual(force_bytes(inst.addItem[0].net.currency), force_bytes("USD"))
self.assertEqual(inst.addItem[0].net.value, 250.0)
self.assertEqual(inst.addItem[0].noteNumber[0], 101)
self.assertEqual(
force_bytes(inst.addItem[0].productOrService.coding[0].code),
force_bytes("1101"),
)
self.assertEqual(
force_bytes(inst.addItem[0].productOrService.coding[0].system),
force_bytes("http://example.org/fhir/oralservicecodes"),
)
self.assertEqual(
force_bytes(inst.addItem[1].adjudication[0].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.addItem[1].adjudication[0].amount.value, 800.0)
self.assertEqual(
force_bytes(inst.addItem[1].adjudication[0].category.coding[0].code),
force_bytes("eligible"),
)
self.assertEqual(
force_bytes(inst.addItem[1].adjudication[1].category.coding[0].code),
force_bytes("eligpercent"),
)
self.assertEqual(inst.addItem[1].adjudication[1].value, 100.0)
self.assertEqual(
force_bytes(inst.addItem[1].adjudication[2].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.addItem[1].adjudication[2].amount.value, 800.0)
self.assertEqual(
force_bytes(inst.addItem[1].adjudication[2].category.coding[0].code),
force_bytes("benefit"),
)
self.assertEqual(inst.addItem[1].itemSequence[0], 1)
self.assertEqual(force_bytes(inst.addItem[1].net.currency), force_bytes("USD"))
self.assertEqual(inst.addItem[1].net.value, 800.0)
self.assertEqual(
force_bytes(inst.addItem[1].productOrService.coding[0].code),
force_bytes("2101"),
)
self.assertEqual(
force_bytes(inst.addItem[1].productOrService.coding[0].display),
force_bytes("Radiograph, series (12)"),
)
self.assertEqual(
force_bytes(inst.addItem[1].productOrService.coding[0].system),
force_bytes("http://example.org/fhir/oralservicecodes"),
)
self.assertEqual(inst.created.date, FHIRDate("2014-08-16").date)
self.assertEqual(inst.created.as_json(), "2014-08-16")
self.assertEqual(
force_bytes(inst.disposition),
force_bytes(
"The enclosed services are authorized for your provision within 30 days of this notice."
),
)
self.assertEqual(force_bytes(inst.id), force_bytes("UR3503"))
self.assertEqual(
force_bytes(inst.identifier[0].system),
force_bytes("http://www.SocialBenefitsInc.com/fhir/ClaimResponse"),
)
self.assertEqual(force_bytes(inst.identifier[0].value), force_bytes("UR3503"))
self.assertTrue(inst.insurance[0].focal)
self.assertEqual(inst.insurance[0].sequence, 1)
self.assertEqual(force_bytes(inst.meta.tag[0].code), force_bytes("HTEST"))
self.assertEqual(
force_bytes(inst.meta.tag[0].display), force_bytes("test health data")
)
self.assertEqual(
force_bytes(inst.meta.tag[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/v3-ActReason"),
)
self.assertEqual(force_bytes(inst.outcome), force_bytes("complete"))
self.assertEqual(
force_bytes(inst.payeeType.coding[0].code), force_bytes("provider")
)
self.assertEqual(
force_bytes(inst.payeeType.coding[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/payeetype"),
)
self.assertEqual(force_bytes(inst.preAuthRef), force_bytes("18SS12345"))
self.assertEqual(
force_bytes(inst.processNote[0].language.coding[0].code),
force_bytes("en-CA"),
)
self.assertEqual(
force_bytes(inst.processNote[0].language.coding[0].system),
force_bytes("urn:ietf:bcp:47"),
)
self.assertEqual(inst.processNote[0].number, 101)
self.assertEqual(
force_bytes(inst.processNote[0].text),
force_bytes(
"Please submit a Pre-Authorization request if a more extensive examination or urgent services are required."
),
)
self.assertEqual(force_bytes(inst.processNote[0].type), force_bytes("print"))
self.assertEqual(force_bytes(inst.status), force_bytes("active"))
self.assertEqual(
force_bytes(inst.text.div),
force_bytes(
'<div xmlns="http://www.w3.org/1999/xhtml">A sample unsolicited pre-authorization response which authorizes basic dental services to be performed for a patient.</div>'
),
)
self.assertEqual(force_bytes(inst.text.status), force_bytes("generated"))
self.assertEqual(force_bytes(inst.total[0].amount.currency), force_bytes("USD"))
self.assertEqual(inst.total[0].amount.value, 1050.0)
self.assertEqual(
force_bytes(inst.total[0].category.coding[0].code), force_bytes("submitted")
)
self.assertEqual(force_bytes(inst.total[1].amount.currency), force_bytes("USD"))
self.assertEqual(inst.total[1].amount.value, 1040.0)
self.assertEqual(
force_bytes(inst.total[1].category.coding[0].code), force_bytes("benefit")
)
self.assertEqual(force_bytes(inst.type.coding[0].code), force_bytes("oral"))
self.assertEqual(
force_bytes(inst.type.coding[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/claim-type"),
)
self.assertEqual(force_bytes(inst.use), force_bytes("preauthorization"))
def testClaimResponse2(self):
inst = self.instantiate_from("claimresponse-example-additem.json")
self.assertIsNotNone(inst, "Must have instantiated a ClaimResponse instance")
self.implClaimResponse2(inst)
js = inst.as_json()
self.assertEqual("ClaimResponse", js["resourceType"])
inst2 = claimresponse.ClaimResponse(js)
self.implClaimResponse2(inst2)
def implClaimResponse2(self, inst):
self.assertEqual(
force_bytes(inst.addItem[0].adjudication[0].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.addItem[0].adjudication[0].amount.value, 100.0)
self.assertEqual(
force_bytes(inst.addItem[0].adjudication[0].category.coding[0].code),
force_bytes("eligible"),
)
self.assertEqual(
force_bytes(inst.addItem[0].adjudication[1].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.addItem[0].adjudication[1].amount.value, 10.0)
self.assertEqual(
force_bytes(inst.addItem[0].adjudication[1].category.coding[0].code),
force_bytes("copay"),
)
self.assertEqual(
force_bytes(inst.addItem[0].adjudication[2].category.coding[0].code),
force_bytes("eligpercent"),
)
self.assertEqual(inst.addItem[0].adjudication[2].value, 80.0)
self.assertEqual(
force_bytes(inst.addItem[0].adjudication[3].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.addItem[0].adjudication[3].amount.value, 72.0)
self.assertEqual(
force_bytes(inst.addItem[0].adjudication[3].category.coding[0].code),
force_bytes("benefit"),
)
self.assertEqual(
force_bytes(inst.addItem[0].adjudication[3].reason.coding[0].code),
force_bytes("ar002"),
)
self.assertEqual(
force_bytes(inst.addItem[0].adjudication[3].reason.coding[0].display),
force_bytes("Plan Limit Reached"),
)
self.assertEqual(
force_bytes(inst.addItem[0].adjudication[3].reason.coding[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/adjudication-reason"),
)
self.assertEqual(inst.addItem[0].itemSequence[0], 1)
self.assertEqual(
force_bytes(inst.addItem[0].modifier[0].coding[0].code), force_bytes("x")
)
self.assertEqual(
force_bytes(inst.addItem[0].modifier[0].coding[0].display),
force_bytes("None"),
)
self.assertEqual(
force_bytes(inst.addItem[0].modifier[0].coding[0].system),
force_bytes("http://example.org/fhir/modifiers"),
)
self.assertEqual(force_bytes(inst.addItem[0].net.currency), force_bytes("USD"))
self.assertEqual(inst.addItem[0].net.value, 135.57)
self.assertEqual(inst.addItem[0].noteNumber[0], 101)
self.assertEqual(
force_bytes(inst.addItem[0].productOrService.coding[0].code),
force_bytes("1101"),
)
self.assertEqual(
force_bytes(inst.addItem[0].productOrService.coding[0].system),
force_bytes("http://example.org/fhir/oralservicecodes"),
)
self.assertEqual(
force_bytes(inst.addItem[1].adjudication[0].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.addItem[1].adjudication[0].amount.value, 35.57)
self.assertEqual(
force_bytes(inst.addItem[1].adjudication[0].category.coding[0].code),
force_bytes("eligible"),
)
self.assertEqual(
force_bytes(inst.addItem[1].adjudication[1].category.coding[0].code),
force_bytes("eligpercent"),
)
self.assertEqual(inst.addItem[1].adjudication[1].value, 80.0)
self.assertEqual(
force_bytes(inst.addItem[1].adjudication[2].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.addItem[1].adjudication[2].amount.value, 28.47)
self.assertEqual(
force_bytes(inst.addItem[1].adjudication[2].category.coding[0].code),
force_bytes("benefit"),
)
self.assertEqual(inst.addItem[1].itemSequence[0], 1)
self.assertEqual(force_bytes(inst.addItem[1].net.currency), force_bytes("USD"))
self.assertEqual(inst.addItem[1].net.value, 35.57)
self.assertEqual(
force_bytes(inst.addItem[1].productOrService.coding[0].code),
force_bytes("2141"),
)
self.assertEqual(
force_bytes(inst.addItem[1].productOrService.coding[0].display),
force_bytes("Radiograph, bytewing"),
)
self.assertEqual(
force_bytes(inst.addItem[1].productOrService.coding[0].system),
force_bytes("http://example.org/fhir/oralservicecodes"),
)
self.assertEqual(
force_bytes(inst.addItem[2].adjudication[0].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.addItem[2].adjudication[0].amount.value, 350.0)
self.assertEqual(
force_bytes(inst.addItem[2].adjudication[0].category.coding[0].code),
force_bytes("eligible"),
)
self.assertEqual(
force_bytes(inst.addItem[2].adjudication[1].category.coding[0].code),
force_bytes("eligpercent"),
)
self.assertEqual(inst.addItem[2].adjudication[1].value, 80.0)
self.assertEqual(
force_bytes(inst.addItem[2].adjudication[2].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.addItem[2].adjudication[2].amount.value, 270.0)
self.assertEqual(
force_bytes(inst.addItem[2].adjudication[2].category.coding[0].code),
force_bytes("benefit"),
)
self.assertEqual(inst.addItem[2].detailSequence[0], 2)
self.assertEqual(inst.addItem[2].itemSequence[0], 3)
self.assertEqual(
force_bytes(inst.addItem[2].modifier[0].coding[0].code), force_bytes("x")
)
self.assertEqual(
force_bytes(inst.addItem[2].modifier[0].coding[0].display),
force_bytes("None"),
)
self.assertEqual(
force_bytes(inst.addItem[2].modifier[0].coding[0].system),
force_bytes("http://example.org/fhir/modifiers"),
)
self.assertEqual(force_bytes(inst.addItem[2].net.currency), force_bytes("USD"))
self.assertEqual(inst.addItem[2].net.value, 350.0)
self.assertEqual(inst.addItem[2].noteNumber[0], 101)
self.assertEqual(
force_bytes(inst.addItem[2].productOrService.coding[0].code),
force_bytes("expense"),
)
self.assertEqual(
force_bytes(inst.addItem[2].productOrService.coding[0].system),
force_bytes("http://example.org/fhir/oralservicecodes"),
)
self.assertEqual(inst.created.date, FHIRDate("2014-08-16").date)
self.assertEqual(inst.created.as_json(), "2014-08-16")
self.assertEqual(
force_bytes(inst.disposition), force_bytes("Claim settled as per contract.")
)
self.assertEqual(force_bytes(inst.id), force_bytes("R3503"))
self.assertEqual(
force_bytes(inst.identifier[0].system),
force_bytes("http://www.BenefitsInc.com/fhir/remittance"),
)
self.assertEqual(force_bytes(inst.identifier[0].value), force_bytes("R3503"))
self.assertEqual(
force_bytes(inst.item[0].adjudication[0].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[0].adjudication[0].amount.value, 0.0)
self.assertEqual(
force_bytes(inst.item[0].adjudication[0].category.coding[0].code),
force_bytes("eligible"),
)
self.assertEqual(
force_bytes(inst.item[0].adjudication[1].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[0].adjudication[1].amount.value, 0.0)
self.assertEqual(
force_bytes(inst.item[0].adjudication[1].category.coding[0].code),
force_bytes("benefit"),
)
self.assertEqual(inst.item[0].itemSequence, 1)
self.assertEqual(
force_bytes(inst.item[1].adjudication[0].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[1].adjudication[0].amount.value, 105.0)
self.assertEqual(
force_bytes(inst.item[1].adjudication[0].category.coding[0].code),
force_bytes("eligible"),
)
self.assertEqual(
force_bytes(inst.item[1].adjudication[1].category.coding[0].code),
force_bytes("eligpercent"),
)
self.assertEqual(inst.item[1].adjudication[1].value, 80.0)
self.assertEqual(
force_bytes(inst.item[1].adjudication[2].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[1].adjudication[2].amount.value, 84.0)
self.assertEqual(
force_bytes(inst.item[1].adjudication[2].category.coding[0].code),
force_bytes("benefit"),
)
self.assertEqual(inst.item[1].itemSequence, 2)
self.assertEqual(
force_bytes(inst.item[2].adjudication[0].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[2].adjudication[0].amount.value, 750.0)
self.assertEqual(
force_bytes(inst.item[2].adjudication[0].category.coding[0].code),
force_bytes("eligible"),
)
self.assertEqual(
force_bytes(inst.item[2].adjudication[1].category.coding[0].code),
force_bytes("eligpercent"),
)
self.assertEqual(inst.item[2].adjudication[1].value, 80.0)
self.assertEqual(
force_bytes(inst.item[2].adjudication[2].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[2].adjudication[2].amount.value, 600.0)
self.assertEqual(
force_bytes(inst.item[2].adjudication[2].category.coding[0].code),
force_bytes("benefit"),
)
self.assertEqual(
force_bytes(inst.item[2].detail[0].adjudication[0].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[2].detail[0].adjudication[0].amount.value, 750.0)
self.assertEqual(
force_bytes(inst.item[2].detail[0].adjudication[0].category.coding[0].code),
force_bytes("eligible"),
)
self.assertEqual(
force_bytes(inst.item[2].detail[0].adjudication[1].category.coding[0].code),
force_bytes("eligpercent"),
)
self.assertEqual(inst.item[2].detail[0].adjudication[1].value, 80.0)
self.assertEqual(
force_bytes(inst.item[2].detail[0].adjudication[2].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[2].detail[0].adjudication[2].amount.value, 600.0)
self.assertEqual(
force_bytes(inst.item[2].detail[0].adjudication[2].category.coding[0].code),
force_bytes("benefit"),
)
self.assertEqual(inst.item[2].detail[0].detailSequence, 1)
self.assertEqual(
force_bytes(inst.item[2].detail[1].adjudication[0].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[2].detail[1].adjudication[0].amount.value, 0.0)
self.assertEqual(
force_bytes(inst.item[2].detail[1].adjudication[0].category.coding[0].code),
force_bytes("eligible"),
)
self.assertEqual(
force_bytes(inst.item[2].detail[1].adjudication[1].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[2].detail[1].adjudication[1].amount.value, 0.0)
self.assertEqual(
force_bytes(inst.item[2].detail[1].adjudication[1].category.coding[0].code),
force_bytes("benefit"),
)
self.assertEqual(inst.item[2].detail[1].detailSequence, 2)
self.assertEqual(inst.item[2].itemSequence, 3)
self.assertEqual(force_bytes(inst.meta.tag[0].code), force_bytes("HTEST"))
self.assertEqual(
force_bytes(inst.meta.tag[0].display), force_bytes("test health data")
)
self.assertEqual(
force_bytes(inst.meta.tag[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/v3-ActReason"),
)
self.assertEqual(force_bytes(inst.outcome), force_bytes("complete"))
self.assertEqual(
force_bytes(inst.payeeType.coding[0].code), force_bytes("provider")
)
self.assertEqual(
force_bytes(inst.payeeType.coding[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/payeetype"),
)
self.assertEqual(force_bytes(inst.payment.amount.currency), force_bytes("USD"))
self.assertEqual(inst.payment.amount.value, 100.47)
self.assertEqual(inst.payment.date.date, FHIRDate("2014-08-31").date)
self.assertEqual(inst.payment.date.as_json(), "2014-08-31")
self.assertEqual(
force_bytes(inst.payment.identifier.system),
force_bytes("http://www.BenefitsInc.com/fhir/paymentidentifier"),
)
self.assertEqual(
force_bytes(inst.payment.identifier.value), force_bytes("201408-2-15507")
)
self.assertEqual(
force_bytes(inst.payment.type.coding[0].code), force_bytes("complete")
)
self.assertEqual(
force_bytes(inst.payment.type.coding[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/ex-paymenttype"),
)
self.assertEqual(
force_bytes(inst.processNote[0].language.coding[0].code),
force_bytes("en-CA"),
)
self.assertEqual(
force_bytes(inst.processNote[0].language.coding[0].system),
force_bytes("urn:ietf:bcp:47"),
)
self.assertEqual(inst.processNote[0].number, 101)
self.assertEqual(
force_bytes(inst.processNote[0].text),
force_bytes("Package codes are not permitted. Codes replaced by Insurer."),
)
self.assertEqual(force_bytes(inst.processNote[0].type), force_bytes("print"))
self.assertEqual(force_bytes(inst.status), force_bytes("active"))
self.assertEqual(
force_bytes(inst.text.div),
force_bytes(
'<div xmlns="http://www.w3.org/1999/xhtml">A human-readable rendering of the ClaimResponse to Claim Oral Average with additional items</div>'
),
)
self.assertEqual(force_bytes(inst.text.status), force_bytes("generated"))
self.assertEqual(force_bytes(inst.total[0].amount.currency), force_bytes("USD"))
self.assertEqual(inst.total[0].amount.value, 1340.57)
self.assertEqual(
force_bytes(inst.total[0].category.coding[0].code), force_bytes("submitted")
)
self.assertEqual(force_bytes(inst.total[1].amount.currency), force_bytes("USD"))
self.assertEqual(inst.total[1].amount.value, 1054.47)
self.assertEqual(
force_bytes(inst.total[1].category.coding[0].code), force_bytes("benefit")
)
self.assertEqual(force_bytes(inst.type.coding[0].code), force_bytes("oral"))
self.assertEqual(
force_bytes(inst.type.coding[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/claim-type"),
)
self.assertEqual(force_bytes(inst.use), force_bytes("claim"))
def testClaimResponse3(self):
inst = self.instantiate_from("claimresponse-example.json")
self.assertIsNotNone(inst, "Must have instantiated a ClaimResponse instance")
self.implClaimResponse3(inst)
js = inst.as_json()
self.assertEqual("ClaimResponse", js["resourceType"])
inst2 = claimresponse.ClaimResponse(js)
self.implClaimResponse3(inst2)
def implClaimResponse3(self, inst):
self.assertEqual(inst.created.date, FHIRDate("2014-08-16").date)
self.assertEqual(inst.created.as_json(), "2014-08-16")
self.assertEqual(
force_bytes(inst.disposition), force_bytes("Claim settled as per contract.")
)
self.assertEqual(force_bytes(inst.id), force_bytes("R3500"))
self.assertEqual(
force_bytes(inst.identifier[0].system),
force_bytes("http://www.BenefitsInc.com/fhir/remittance"),
)
self.assertEqual(force_bytes(inst.identifier[0].value), force_bytes("R3500"))
self.assertEqual(
force_bytes(inst.item[0].adjudication[0].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[0].adjudication[0].amount.value, 135.57)
self.assertEqual(
force_bytes(inst.item[0].adjudication[0].category.coding[0].code),
force_bytes("eligible"),
)
self.assertEqual(
force_bytes(inst.item[0].adjudication[1].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[0].adjudication[1].amount.value, 10.0)
self.assertEqual(
force_bytes(inst.item[0].adjudication[1].category.coding[0].code),
force_bytes("copay"),
)
self.assertEqual(
force_bytes(inst.item[0].adjudication[2].category.coding[0].code),
force_bytes("eligpercent"),
)
self.assertEqual(inst.item[0].adjudication[2].value, 80.0)
self.assertEqual(
force_bytes(inst.item[0].adjudication[3].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[0].adjudication[3].amount.value, 90.47)
self.assertEqual(
force_bytes(inst.item[0].adjudication[3].category.coding[0].code),
force_bytes("benefit"),
)
self.assertEqual(
force_bytes(inst.item[0].adjudication[3].reason.coding[0].code),
force_bytes("ar002"),
)
self.assertEqual(
force_bytes(inst.item[0].adjudication[3].reason.coding[0].display),
force_bytes("Plan Limit Reached"),
)
self.assertEqual(
force_bytes(inst.item[0].adjudication[3].reason.coding[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/adjudication-reason"),
)
self.assertEqual(inst.item[0].itemSequence, 1)
self.assertEqual(force_bytes(inst.meta.tag[0].code), force_bytes("HTEST"))
self.assertEqual(
force_bytes(inst.meta.tag[0].display), force_bytes("test health data")
)
self.assertEqual(
force_bytes(inst.meta.tag[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/v3-ActReason"),
)
self.assertEqual(force_bytes(inst.outcome), force_bytes("complete"))
self.assertEqual(
force_bytes(inst.payeeType.coding[0].code), force_bytes("provider")
)
self.assertEqual(
force_bytes(inst.payeeType.coding[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/payeetype"),
)
self.assertEqual(force_bytes(inst.payment.amount.currency), force_bytes("USD"))
self.assertEqual(inst.payment.amount.value, 100.47)
self.assertEqual(inst.payment.date.date, FHIRDate("2014-08-31").date)
self.assertEqual(inst.payment.date.as_json(), "2014-08-31")
self.assertEqual(
force_bytes(inst.payment.identifier.system),
force_bytes("http://www.BenefitsInc.com/fhir/paymentidentifier"),
)
self.assertEqual(
force_bytes(inst.payment.identifier.value), force_bytes("201408-2-1569478")
)
self.assertEqual(
force_bytes(inst.payment.type.coding[0].code), force_bytes("complete")
)
self.assertEqual(
force_bytes(inst.payment.type.coding[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/ex-paymenttype"),
)
self.assertEqual(force_bytes(inst.status), force_bytes("active"))
self.assertEqual(
force_bytes(inst.subType.coding[0].code), force_bytes("emergency")
)
self.assertEqual(
force_bytes(inst.subType.coding[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/ex-claimsubtype"),
)
self.assertEqual(
force_bytes(inst.text.div),
force_bytes(
'<div xmlns="http://www.w3.org/1999/xhtml">A human-readable rendering of the ClaimResponse</div>'
),
)
self.assertEqual(force_bytes(inst.text.status), force_bytes("generated"))
self.assertEqual(force_bytes(inst.total[0].amount.currency), force_bytes("USD"))
self.assertEqual(inst.total[0].amount.value, 135.57)
self.assertEqual(
force_bytes(inst.total[0].category.coding[0].code), force_bytes("submitted")
)
self.assertEqual(force_bytes(inst.total[1].amount.currency), force_bytes("USD"))
self.assertEqual(inst.total[1].amount.value, 90.47)
self.assertEqual(
force_bytes(inst.total[1].category.coding[0].code), force_bytes("benefit")
)
self.assertEqual(force_bytes(inst.type.coding[0].code), force_bytes("oral"))
self.assertEqual(
force_bytes(inst.type.coding[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/claim-type"),
)
self.assertEqual(force_bytes(inst.use), force_bytes("claim"))
def testClaimResponse4(self):
inst = self.instantiate_from("claimresponse-example-vision-3tier.json")
self.assertIsNotNone(inst, "Must have instantiated a ClaimResponse instance")
self.implClaimResponse4(inst)
js = inst.as_json()
self.assertEqual("ClaimResponse", js["resourceType"])
inst2 = claimresponse.ClaimResponse(js)
self.implClaimResponse4(inst2)
def implClaimResponse4(self, inst):
self.assertEqual(inst.created.date, FHIRDate("2014-08-16").date)
self.assertEqual(inst.created.as_json(), "2014-08-16")
self.assertEqual(
force_bytes(inst.disposition), force_bytes("Claim settled as per contract.")
)
self.assertEqual(force_bytes(inst.id), force_bytes("R3502"))
self.assertEqual(
force_bytes(inst.identifier[0].system),
force_bytes("http://thebenefitcompany.com/claimresponse"),
)
self.assertEqual(
force_bytes(inst.identifier[0].value), force_bytes("CR6532875367")
)
self.assertEqual(
force_bytes(inst.item[0].adjudication[0].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[0].adjudication[0].amount.value, 235.4)
self.assertEqual(
force_bytes(inst.item[0].adjudication[0].category.coding[0].code),
force_bytes("eligible"),
)
self.assertEqual(
force_bytes(inst.item[0].adjudication[1].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[0].adjudication[1].amount.value, 20.0)
self.assertEqual(
force_bytes(inst.item[0].adjudication[1].category.coding[0].code),
force_bytes("copay"),
)
self.assertEqual(
force_bytes(inst.item[0].adjudication[2].category.coding[0].code),
force_bytes("eligpercent"),
)
self.assertEqual(inst.item[0].adjudication[2].value, 80.0)
self.assertEqual(
force_bytes(inst.item[0].adjudication[3].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[0].adjudication[3].amount.value, 172.32)
self.assertEqual(
force_bytes(inst.item[0].adjudication[3].category.coding[0].code),
force_bytes("benefit"),
)
self.assertEqual(
force_bytes(inst.item[0].detail[0].adjudication[0].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[0].detail[0].adjudication[0].amount.value, 100.0)
self.assertEqual(
force_bytes(inst.item[0].detail[0].adjudication[0].category.coding[0].code),
force_bytes("eligible"),
)
self.assertEqual(
force_bytes(inst.item[0].detail[0].adjudication[1].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[0].detail[0].adjudication[1].amount.value, 20.0)
self.assertEqual(
force_bytes(inst.item[0].detail[0].adjudication[1].category.coding[0].code),
force_bytes("copay"),
)
self.assertEqual(
force_bytes(inst.item[0].detail[0].adjudication[2].category.coding[0].code),
force_bytes("eligpercent"),
)
self.assertEqual(inst.item[0].detail[0].adjudication[2].value, 80.0)
self.assertEqual(
force_bytes(inst.item[0].detail[0].adjudication[3].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[0].detail[0].adjudication[3].amount.value, 80.0)
self.assertEqual(
force_bytes(inst.item[0].detail[0].adjudication[3].category.coding[0].code),
force_bytes("benefit"),
)
self.assertEqual(inst.item[0].detail[0].detailSequence, 1)
self.assertEqual(inst.item[0].detail[0].noteNumber[0], 1)
self.assertEqual(
force_bytes(inst.item[0].detail[1].adjudication[0].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[0].detail[1].adjudication[0].amount.value, 110.0)
self.assertEqual(
force_bytes(inst.item[0].detail[1].adjudication[0].category.coding[0].code),
force_bytes("eligible"),
)
self.assertEqual(
force_bytes(inst.item[0].detail[1].adjudication[1].category.coding[0].code),
force_bytes("eligpercent"),
)
self.assertEqual(inst.item[0].detail[1].adjudication[1].value, 80.0)
self.assertEqual(
force_bytes(inst.item[0].detail[1].adjudication[2].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[0].detail[1].adjudication[2].amount.value, 88.0)
self.assertEqual(
force_bytes(inst.item[0].detail[1].adjudication[2].category.coding[0].code),
force_bytes("benefit"),
)
self.assertEqual(inst.item[0].detail[1].detailSequence, 2)
self.assertEqual(inst.item[0].detail[1].noteNumber[0], 1)
self.assertEqual(
force_bytes(
inst.item[0].detail[1].subDetail[0].adjudication[0].amount.currency
),
force_bytes("USD"),
)
self.assertEqual(
inst.item[0].detail[1].subDetail[0].adjudication[0].amount.value, 60.0
)
self.assertEqual(
force_bytes(
inst.item[0]
.detail[1]
.subDetail[0]
.adjudication[0]
.category.coding[0]
.code
),
force_bytes("eligible"),
)
self.assertEqual(
force_bytes(
inst.item[0]
.detail[1]
.subDetail[0]
.adjudication[1]
.category.coding[0]
.code
),
force_bytes("eligpercent"),
)
self.assertEqual(
inst.item[0].detail[1].subDetail[0].adjudication[1].value, 80.0
)
self.assertEqual(
force_bytes(
inst.item[0].detail[1].subDetail[0].adjudication[2].amount.currency
),
force_bytes("USD"),
)
self.assertEqual(
inst.item[0].detail[1].subDetail[0].adjudication[2].amount.value, 48.0
)
self.assertEqual(
force_bytes(
inst.item[0]
.detail[1]
.subDetail[0]
.adjudication[2]
.category.coding[0]
.code
),
force_bytes("benefit"),
)
self.assertEqual(inst.item[0].detail[1].subDetail[0].noteNumber[0], 1)
self.assertEqual(inst.item[0].detail[1].subDetail[0].subDetailSequence, 1)
self.assertEqual(
force_bytes(
inst.item[0].detail[1].subDetail[1].adjudication[0].amount.currency
),
force_bytes("USD"),
)
self.assertEqual(
inst.item[0].detail[1].subDetail[1].adjudication[0].amount.value, 30.0
)
self.assertEqual(
force_bytes(
inst.item[0]
.detail[1]
.subDetail[1]
.adjudication[0]
.category.coding[0]
.code
),
force_bytes("eligible"),
)
self.assertEqual(
force_bytes(
inst.item[0]
.detail[1]
.subDetail[1]
.adjudication[1]
.category.coding[0]
.code
),
force_bytes("eligpercent"),
)
self.assertEqual(
inst.item[0].detail[1].subDetail[1].adjudication[1].value, 80.0
)
self.assertEqual(
force_bytes(
inst.item[0].detail[1].subDetail[1].adjudication[2].amount.currency
),
force_bytes("USD"),
)
self.assertEqual(
inst.item[0].detail[1].subDetail[1].adjudication[2].amount.value, 24.0
)
self.assertEqual(
force_bytes(
inst.item[0]
.detail[1]
.subDetail[1]
.adjudication[2]
.category.coding[0]
.code
),
force_bytes("benefit"),
)
self.assertEqual(inst.item[0].detail[1].subDetail[1].subDetailSequence, 2)
self.assertEqual(
force_bytes(
inst.item[0].detail[1].subDetail[2].adjudication[0].amount.currency
),
force_bytes("USD"),
)
self.assertEqual(
inst.item[0].detail[1].subDetail[2].adjudication[0].amount.value, 10.0
)
self.assertEqual(
force_bytes(
inst.item[0]
.detail[1]
.subDetail[2]
.adjudication[0]
.category.coding[0]
.code
),
force_bytes("eligible"),
)
self.assertEqual(
force_bytes(
inst.item[0]
.detail[1]
.subDetail[2]
.adjudication[1]
.category.coding[0]
.code
),
force_bytes("eligpercent"),
)
self.assertEqual(
inst.item[0].detail[1].subDetail[2].adjudication[1].value, 80.0
)
self.assertEqual(
force_bytes(
inst.item[0].detail[1].subDetail[2].adjudication[2].amount.currency
),
force_bytes("USD"),
)
self.assertEqual(
inst.item[0].detail[1].subDetail[2].adjudication[2].amount.value, 8.0
)
self.assertEqual(
force_bytes(
inst.item[0]
.detail[1]
.subDetail[2]
.adjudication[2]
.category.coding[0]
.code
),
force_bytes("benefit"),
)
self.assertEqual(inst.item[0].detail[1].subDetail[2].noteNumber[0], 1)
self.assertEqual(inst.item[0].detail[1].subDetail[2].subDetailSequence, 3)
self.assertEqual(
force_bytes(inst.item[0].detail[2].adjudication[0].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[0].detail[2].adjudication[0].amount.value, 200.0)
self.assertEqual(
force_bytes(inst.item[0].detail[2].adjudication[0].category.coding[0].code),
force_bytes("eligible"),
)
self.assertEqual(
force_bytes(inst.item[0].detail[2].adjudication[1].category.coding[0].code),
force_bytes("eligpercent"),
)
self.assertEqual(inst.item[0].detail[2].adjudication[1].value, 80.0)
self.assertEqual(
force_bytes(inst.item[0].detail[2].adjudication[2].amount.currency),
force_bytes("USD"),
)
self.assertEqual(inst.item[0].detail[2].adjudication[2].amount.value, 14.0)
self.assertEqual(
force_bytes(inst.item[0].detail[2].adjudication[2].category.coding[0].code),
force_bytes("benefit"),
)
self.assertEqual(inst.item[0].detail[2].detailSequence, 3)
self.assertEqual(inst.item[0].detail[2].noteNumber[0], 1)
self.assertEqual(inst.item[0].itemSequence, 1)
self.assertEqual(force_bytes(inst.meta.tag[0].code), force_bytes("HTEST"))
self.assertEqual(
force_bytes(inst.meta.tag[0].display), force_bytes("test health data")
)
self.assertEqual(
force_bytes(inst.meta.tag[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/v3-ActReason"),
)
self.assertEqual(force_bytes(inst.outcome), force_bytes("complete"))
self.assertEqual(
force_bytes(inst.payeeType.coding[0].code), force_bytes("provider")
)
self.assertEqual(
force_bytes(inst.payeeType.coding[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/payeetype"),
)
self.assertEqual(
force_bytes(inst.payment.adjustment.currency), force_bytes("USD")
)
self.assertEqual(inst.payment.adjustment.value, 75.0)
self.assertEqual(
force_bytes(inst.payment.adjustmentReason.coding[0].code),
force_bytes("a002"),
)
self.assertEqual(
force_bytes(inst.payment.adjustmentReason.coding[0].display),
force_bytes("Prior Overpayment"),
)
self.assertEqual(
force_bytes(inst.payment.adjustmentReason.coding[0].system),
force_bytes(
"http://terminology.hl7.org/CodeSystem/payment-adjustment-reason"
),
)
self.assertEqual(force_bytes(inst.payment.amount.currency), force_bytes("USD"))
self.assertEqual(inst.payment.amount.value, 107.0)
self.assertEqual(inst.payment.date.date, FHIRDate("2014-08-16").date)
self.assertEqual(inst.payment.date.as_json(), "2014-08-16")
self.assertEqual(
force_bytes(inst.payment.identifier.system),
force_bytes("http://thebenefitcompany.com/paymentidentifier"),
)
self.assertEqual(
force_bytes(inst.payment.identifier.value), force_bytes("201416-123456")
)
self.assertEqual(
force_bytes(inst.payment.type.coding[0].code), force_bytes("complete")
)
self.assertEqual(
force_bytes(inst.payment.type.coding[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/ex-paymenttype"),
)
self.assertEqual(
force_bytes(inst.processNote[0].language.coding[0].code),
force_bytes("en-CA"),
)
self.assertEqual(
force_bytes(inst.processNote[0].language.coding[0].system),
force_bytes("urn:ietf:bcp:47"),
)
self.assertEqual(inst.processNote[0].number, 1)
self.assertEqual(
force_bytes(inst.processNote[0].text),
force_bytes("After hours surcharge declined"),
)
self.assertEqual(force_bytes(inst.processNote[0].type), force_bytes("display"))
self.assertEqual(force_bytes(inst.status), force_bytes("active"))
self.assertEqual(
force_bytes(inst.text.div),
force_bytes(
'<div xmlns="http://www.w3.org/1999/xhtml">A human-readable rendering of the ClaimResponse</div>'
),
)
self.assertEqual(force_bytes(inst.text.status), force_bytes("generated"))
self.assertEqual(force_bytes(inst.total[0].amount.currency), force_bytes("USD"))
self.assertEqual(inst.total[0].amount.value, 235.4)
self.assertEqual(
force_bytes(inst.total[0].category.coding[0].code), force_bytes("submitted")
)
self.assertEqual(force_bytes(inst.total[1].amount.currency), force_bytes("USD"))
self.assertEqual(inst.total[1].amount.value, 182.0)
self.assertEqual(
force_bytes(inst.total[1].category.coding[0].code), force_bytes("benefit")
)
self.assertEqual(force_bytes(inst.type.coding[0].code), force_bytes("vision"))
self.assertEqual(
force_bytes(inst.type.coding[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/claim-type"),
)
self.assertEqual(force_bytes(inst.use), force_bytes("claim"))
def testClaimResponse5(self):
inst = self.instantiate_from("claimresponse-example-2.json")
self.assertIsNotNone(inst, "Must have instantiated a ClaimResponse instance")
self.implClaimResponse5(inst)
js = inst.as_json()
self.assertEqual("ClaimResponse", js["resourceType"])
inst2 = claimresponse.ClaimResponse(js)
self.implClaimResponse5(inst2)
def implClaimResponse5(self, inst):
self.assertEqual(inst.created.date, FHIRDate("2014-08-16").date)
self.assertEqual(inst.created.as_json(), "2014-08-16")
self.assertEqual(
force_bytes(inst.disposition), force_bytes("Claim could not be processed")
)
self.assertEqual(
force_bytes(inst.error[0].code.coding[0].code), force_bytes("a002")
)
self.assertEqual(
force_bytes(inst.error[0].code.coding[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/adjudication-error"),
)
self.assertEqual(inst.error[0].detailSequence, 2)
self.assertEqual(inst.error[0].itemSequence, 3)
self.assertEqual(force_bytes(inst.formCode.coding[0].code), force_bytes("2"))
self.assertEqual(
force_bytes(inst.formCode.coding[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/forms-codes"),
)
self.assertEqual(force_bytes(inst.id), force_bytes("R3501"))
self.assertEqual(
force_bytes(inst.identifier[0].system),
force_bytes("http://www.BenefitsInc.com/fhir/remittance"),
)
self.assertEqual(force_bytes(inst.identifier[0].value), force_bytes("R3501"))
self.assertEqual(force_bytes(inst.meta.tag[0].code), force_bytes("HTEST"))
self.assertEqual(
force_bytes(inst.meta.tag[0].display), force_bytes("test health data")
)
self.assertEqual(
force_bytes(inst.meta.tag[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/v3-ActReason"),
)
self.assertEqual(force_bytes(inst.outcome), force_bytes("error"))
self.assertEqual(
force_bytes(inst.processNote[0].language.coding[0].code),
force_bytes("en-CA"),
)
self.assertEqual(
force_bytes(inst.processNote[0].language.coding[0].system),
force_bytes("urn:ietf:bcp:47"),
)
self.assertEqual(inst.processNote[0].number, 1)
self.assertEqual(
force_bytes(inst.processNote[0].text), force_bytes("Invalid claim")
)
self.assertEqual(force_bytes(inst.processNote[0].type), force_bytes("display"))
self.assertEqual(force_bytes(inst.status), force_bytes("active"))
self.assertEqual(
force_bytes(inst.text.div),
force_bytes(
'<div xmlns="http://www.w3.org/1999/xhtml">A human-readable rendering of the ClaimResponse that demonstrates returning errors</div>'
),
)
self.assertEqual(force_bytes(inst.text.status), force_bytes("generated"))
self.assertEqual(force_bytes(inst.type.coding[0].code), force_bytes("oral"))
self.assertEqual(
force_bytes(inst.type.coding[0].system),
force_bytes("http://terminology.hl7.org/CodeSystem/claim-type"),
)
self.assertEqual(force_bytes(inst.use), force_bytes("claim"))
| 42.763248 | 183 | 0.602302 | 5,596 | 50,033 | 5.285919 | 0.056826 | 0.180189 | 0.179851 | 0.224814 | 0.935903 | 0.925997 | 0.921907 | 0.910446 | 0.896653 | 0.883671 | 0 | 0.037889 | 0.256211 | 50,033 | 1,169 | 184 | 42.799829 | 0.756973 | 0.003538 | 0 | 0.600351 | 0 | 0.004382 | 0.101147 | 0.003892 | 0 | 0 | 0 | 0 | 0.352323 | 1 | 0.009641 | false | 0 | 0.007011 | 0 | 0.018405 | 0.001753 | 0 | 0 | 0 | null | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
a1b0fef0c497b6f7bf0f53181e70b9013d588df8 | 1,608 | py | Python | ref/variants.py | cSDes1gn/spatial-codec | 34273b5e6be1fa406d3f4c84e01cbeed6944eace | [
"BSD-3-Clause"
] | 3 | 2021-11-12T20:16:25.000Z | 2021-11-24T14:56:29.000Z | ref/variants.py | LEAP-Systems/spatial-codec | 91141de7e9e7d9e7b75b000df3603d2f8af96640 | [
"BSD-3-Clause"
] | 3 | 2020-01-10T04:11:45.000Z | 2020-02-13T07:26:26.000Z | ref/variants.py | LEAP-Systems/spatial-codec | 91141de7e9e7d9e7b75b000df3603d2f8af96640 | [
"BSD-3-Clause"
] | 1 | 2020-03-19T21:04:38.000Z | 2020-03-19T21:04:38.000Z | class Iterators:
variations = [
(('x','y','z'),'k'),
(('x','-y','z'),'r'),
(('x','y','-z'),'b'),
(('x','z','y'),'g'),
(('x','-z','y'),'y'),
(('x','z','-y'),'c'),
(('x','-y','-z'),'m'),
(('x','-z','-y'),'blueviolet'),
# (('-x','y','z'),'k'),
# (('-x','-y','z'),'r'),
# (('-x','y','-z'),'b'),
# (('-x','z','y'),'g'),
# (('-x','-z','y'),'y'),
# (('-x','z','-y'),'c'),
# (('-x','-y','-z'),'m'),
# (('-x','-z','-y'),'blueviolet'),
(('y','x','z'),'k'),
# (('y','-x','z'),'r'),
(('y','x','-z'),'b'),
(('y','z','x'),'g'),
(('y','-z','x'),'y'),
# (('y','z','-x'),'c'),
# (('y','-x','-z'),'m'),
# (('y','-z','-x'),'blueviolet'),
(('-y','x','z'),'k'),
# (('-y','-x','z'),'r'),
(('-y','x','-z'),'b'),
(('-y','z','x'),'g'),
(('-y','-z','x'),'y'),
# (('-y','z','-x'),'c'),
# (('-y','-x','-z'),'m'),
# (('-y','-z','-x'),'blueviolet'),
(('z','x','y'),'k'),
# (('z','-x','y'),'r'),
(('z','x','-y'),'b'),
(('z','y','x'),'g'),
(('z','-y','x'),'y'),
# (('z','y','-x'),'c'),
# (('z','-x','-y'),'m'),
# (('z','-y','-x'),'blueviolet'),
(('-z','x','y'),'k'),
# (('-z','-x','y'),'r'),
(('-z','x','-y'),'b'),
(('-z','y','x'),'g'),
(('-z','-y','x'),'y'),
# (('-z','y','-x'),'c'),
# (('-z','-x','-y'),'m'),
# (('-z','-y','-x'),'blueviolet'),
]
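
# --- Hedged illustration (added by the editor, not part of the original module) ---
# Each entry in Iterators.variations pairs an axis-iteration order (a possibly
# negated permutation of 'x'/'y'/'z') with a matplotlib colour string for that
# variant. The helper below is only an assumption about how such a tuple might
# be applied to a coordinate triple; the original module merely stores the table.
def _example_apply_variation(point=(1, 2, 3), variation=(('x', '-y', 'z'), 'r')):
    axes, colour = variation
    index = {'x': 0, 'y': 1, 'z': 2}
    # Negate the component when the axis label carries a leading '-'.
    transformed = tuple(
        -point[index[axis.lstrip('-')]] if axis.startswith('-') else point[index[axis]]
        for axis in axes
    )
    return transformed, colour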
| 30.923077 | 42 | 0.16791 | 195 | 1,608 | 1.384615 | 0.066667 | 0.148148 | 0.111111 | 0.02963 | 0.911111 | 0.911111 | 0.911111 | 0.911111 | 0.911111 | 0.911111 | 0 | 0 | 0.292289 | 1,608 | 51 | 43 | 31.529412 | 0.237258 | 0.370647 | 0 | 0 | 0 | 0 | 0.130699 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0.074074 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
a1bdeef025d652fe7dfc8ebff80c96ca2a934ad8 | 10,205 | py | Python | src/digimix/audio/io/jack.py | chrko/digital-mixer | 6e138feaef4d766a6754ce0e36f5f47df33a49cd | [
"MIT"
] | null | null | null | src/digimix/audio/io/jack.py | chrko/digital-mixer | 6e138feaef4d766a6754ce0e36f5f47df33a49cd | [
"MIT"
] | null | null | null | src/digimix/audio/io/jack.py | chrko/digital-mixer | 6e138feaef4d766a6754ce0e36f5f47df33a49cd | [
"MIT"
] | null | null | null | import typing
from abc import ABC
from digimix.audio import Gst
from digimix.audio.base import AudioMode
from digimix.audio.io import Input, Output
class JackClient(ABC):
JACK_BUFFER_TIME_US = 50_000
JACK_LATENCY_TIME_US = 1_000
class JackClientInput(Input, JackClient, ABC):
def __init__(self, name: str, conf: typing.Tuple[typing.Tuple[str, AudioMode], ...]):
super().__init__(name)
self._conf = conf
        self._src = [f"jack-src-{input_name}" for input_name, _ in conf]
@property
def src(self) -> list[str]:
return self._src
def attach_pipeline(self, pipeline: Gst.Element):
pass
class SingleJackClientInput(JackClientInput):
def __init__(self, name: str, conf: typing.Tuple[typing.Tuple[str, AudioMode], ...]):
super().__init__(name=name, conf=conf)
audio_stream_count = 0
for _, mode in conf:
audio_stream_count += mode.channels
self._pipeline_description = f"""
bin.(
name=bin-jack-src-{self.name}
jackaudiosrc
connect=0
name=jack-src-{self.name}
client_name={self.name}
buffer-time={self.JACK_BUFFER_TIME_US}
latency-time={self.JACK_LATENCY_TIME_US}
! capsfilter
name=jack-src-caps-{self.name}
caps=audio/x-raw,channels={audio_stream_count},channel-mask=(bitmask)0x{'0' * audio_stream_count}
! deinterleave
name=jack-src-deinterleave-{self.name}
"""
i = 0
for input_name, mode in conf:
if mode is AudioMode.MONO:
self._pipeline_description += f"""
bin.(
name=bin-jack-src-in-{self.name}-{input_name}
jack-src-deinterleave-{self.name}.src_{i}
! capsfilter
name=jack-src-deinterleave_caps-{self.name}-src_{i}
caps={mode.caps()}
! tee
name=jack-src-{input_name}
)
"""
i += 1
elif mode is AudioMode.STEREO:
self._pipeline_description += f"""
bin.(
name=bin-jack-src-in-{self.name}-{input_name}
interleave
name=jack-src-interleave-{self.name}-{input_name}
! capsfilter
name=jack-src-interleave_caps-{self.name}-{input_name}
caps={mode.caps()}
! tee
name=jack-src-{input_name}
jack-src-deinterleave-{self.name}.src_{i}
! capsfilter
name=jack-src-deinterleave_caps-{self.name}-{input_name}_left
caps={AudioMode.LEFT_ONLY.caps()}
! queue
name=queue-jack-src-pre-interleave-{self.name}-{input_name}_left
max-size-time={self.QUEUE_TIME_NS}
! jack-src-interleave-{self.name}-{input_name}.sink_0
jack-src-deinterleave-{self.name}.src_{i + 1}
! capsfilter
name=jack-src-deinterleave-{self.name}-{input_name}_right
caps={AudioMode.RIGHT_ONLY.caps()}
! queue
name=queue-jack-src-pre-interleave-{self.name}-{input_name}_right
max-size-time={self.QUEUE_TIME_NS}
! jack-src-interleave-{self.name}-{input_name}.sink_1
)
"""
i += 2
else:
raise RuntimeError("Unsupported audio mode: " + str(mode))
self._pipeline_description += """
)
"""
@property
def pipeline_description(self) -> str:
return self._pipeline_description
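
# --- Hedged usage sketch (added by the editor, not part of the original module) ---
# Shows how SingleJackClientInput might be constructed: one mono microphone and
# one stereo music feed sharing a single JACK client. The client name and input
# labels are placeholder assumptions; the resulting description is meant to be
# combined with the rest of the mixer pipeline before parsing it with GStreamer.
def _example_single_jack_input() -> str:
    inputs = SingleJackClientInput(
        name="digimix",
        conf=(("mic", AudioMode.MONO), ("music", AudioMode.STEREO)),
    )
    return inputs.pipeline_description
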
class MultiJackClientInput(JackClientInput):
def __init__(self, name: str, conf: typing.Tuple[typing.Tuple[str, AudioMode], ...]):
super().__init__(name, conf)
self._pipeline_description = f"""
bin.(
name=bin-jack-src-{self.name}
"""
for input_name, mode in conf:
if mode is AudioMode.MONO:
self._pipeline_description += f"""
bin.(
name=bin-jack-src-in-{self.name}-{input_name}
jackaudiosrc
connect=0
name=jack-src-{self.name}-{input_name}
client_name={self.name}-{input_name}
buffer-time={self.JACK_BUFFER_TIME_US}
latency-time={self.JACK_LATENCY_TIME_US}
! capsfilter
name=jack-src-caps-{self.name}-{input_name}
caps={mode.caps()}
! tee
name=jack-src-{input_name}
)
"""
elif mode is AudioMode.STEREO:
self._pipeline_description += f"""
bin.(
name=bin-jack-src-in-{self.name}-{input_name}
jackaudiosrc
connect=0
name=jack-src-{self.name}-{input_name}
client_name={self.name}-{input_name}
buffer-time={self.JACK_BUFFER_TIME_US}
latency-time={self.JACK_LATENCY_TIME_US}
! capsfilter
name=jack-src-caps-{self.name}-{input_name}
caps={mode.caps()}
! tee
name=jack-src-{input_name}
)
"""
else:
raise RuntimeError("Unsupported audio mode: " + str(mode))
self._pipeline_description += """
)
"""
@property
def pipeline_description(self) -> str:
return self._pipeline_description
class JackClientOutput(Output, JackClient, ABC):
def __init__(self, name: str, conf: typing.Tuple[typing.Tuple[str, AudioMode], ...]):
super().__init__(name)
self._conf = conf
        self._sink = [f"jack-sink-{input_name}" for input_name, _ in conf]
@property
def sink(self) -> list[str]:
return self._sink
def attach_pipeline(self, pipeline: Gst.Pipeline):
pass
class SingleJackClientOutput(JackClientOutput):
def __init__(self, name: str, conf: typing.Tuple[typing.Tuple[str, AudioMode], ...]):
super().__init__(name, conf)
audio_stream_count = 0
for _, mode in conf:
audio_stream_count += mode.channels
self._pipeline_description = f"""
bin.(
name=bin-jack-sink-{self.name}
interleave
name=jack-sink-interleave-{self.name}
channel-positions-from-input=false
! capsfilter
name=jack-sink-caps-{self.name}
caps=audio/x-raw,channels={audio_stream_count},channel-mask=(bitmask)0x{'0' * audio_stream_count}
! jackaudiosink
connect=0
name=jack-sink-{self.name}
client_name={self.name}
buffer-time={self.JACK_BUFFER_TIME_US}
latency-time={self.JACK_LATENCY_TIME_US}
"""
i = 0
for input_name, mode in conf:
if mode is AudioMode.MONO:
self._pipeline_description += f"""
bin.(
name=bin-jack-sink-{self.name}-{input_name}
queue
name=jack-sink-{input_name}
max-size-time={self.QUEUE_TIME_NS}
! capsfilter
name=jack-sink-queue_caps-{input_name}
caps={mode.caps()}
! jack-sink-interleave-{self.name}.sink_{i}
)
"""
i += 1
elif mode is AudioMode.STEREO:
self._pipeline_description += f"""
bin.(
name=bin-jack-sink-{self.name}-{input_name}
queue
name=jack-sink-{input_name}
max-size-time={self.QUEUE_TIME_NS}
! capsfilter
name=jack-sink-queue_caps-{input_name}
caps={mode.caps()}
! deinterleave
name=jack-sink-deinterleave-{input_name}
jack-sink-deinterleave-{input_name}.src_0
! capsfilter
name=jack-sink-deinterleave_caps-{self.name}-{input_name}_left
caps={AudioMode.LEFT_ONLY.caps()}
! queue
name=queue-jack-sink-pre-interleave-{self.name}-{input_name}_left
max-size-time={self.QUEUE_TIME_NS}
! jack-sink-interleave-{self.name}.sink_{i}
jack-sink-deinterleave-{input_name}.src_1
! capsfilter
name=jack-sink-deinterleave-{self.name}-{input_name}_right
caps={AudioMode.RIGHT_ONLY.caps()}
! queue
name=queue-jack-sink-pre-interleave-{self.name}-{input_name}_right
max-size-time={self.QUEUE_TIME_NS}
! jack-sink-interleave-{self.name}.sink_{i + 1}
)
"""
i += 2
else:
raise RuntimeError("Unsupported audio mode: " + str(mode))
self._pipeline_description += """
)
"""
@property
def pipeline_description(self) -> str:
return self._pipeline_description
| 38.509434 | 113 | 0.482901 | 991 | 10,205 | 4.754793 | 0.090817 | 0.079796 | 0.066214 | 0.086587 | 0.880306 | 0.854414 | 0.811333 | 0.779075 | 0.779075 | 0.765068 | 0 | 0.005164 | 0.411759 | 10,205 | 264 | 114 | 38.655303 | 0.779777 | 0 | 0 | 0.746725 | 0 | 0.008734 | 0.671338 | 0.294855 | 0 | 0 | 0 | 0 | 0 | 1 | 0.052402 | false | 0.008734 | 0.021834 | 0.021834 | 0.131004 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
a1d2f90b2b0d1f013d8e29cd9187efee087e570e | 23,862 | py | Python | dataviva/api/secex/services.py | joelvisroman/dataviva-site | b4219558457746fd5c6b8f4b65b04c738c656fbd | [
"MIT"
] | 126 | 2015-03-24T12:30:43.000Z | 2022-01-06T03:29:54.000Z | dataviva/api/secex/services.py | joelvisroman/dataviva-site | b4219558457746fd5c6b8f4b65b04c738c656fbd | [
"MIT"
] | 694 | 2015-01-14T11:55:28.000Z | 2021-02-08T20:23:11.000Z | dataviva/api/secex/services.py | joelvisroman/dataviva-site | b4219558457746fd5c6b8f4b65b04c738c656fbd | [
"MIT"
] | 52 | 2015-06-19T01:54:56.000Z | 2019-09-23T13:10:46.000Z | from dataviva.api.attrs.models import Bra, Hs, Wld
from dataviva.api.secex.models import Ymw, Ymbw, Ympw, Ymp, Ymbp, Ymbpw, Ymb
from dataviva import db
from sqlalchemy.sql.expression import func
class TradePartner:
def __init__(self, wld_id, bra_id):
self._secex = None
self._secex_sorted_by_balance = None
self._secex_sorted_by_exports = None
self._secex_sorted_by_imports = None
self.wld_id = wld_id
self.bra_id = bra_id
self.max_year_query = db.session.query(
func.max(Ymw.year)).filter_by(wld_id=wld_id, month=12)
if bra_id is not None:
self.secex_query = Ymbw.query.join(Wld).filter(
Ymbw.wld_id == self.wld_id,
Ymbw.bra_id == self.bra_id,
Ymbw.month == 0,
Ymbw.year == self.max_year_query)
else:
self.secex_query = Ymw.query.join(Wld).filter(
Ymw.wld_id == self.wld_id,
Ymw.month == 0,
Ymw.year == self.max_year_query)
def __secex__(self):
if not self._secex:
secex_data = self.secex_query.first_or_404()
self._secex = secex_data
return self._secex
def __secex_list__(self):
if not self._secex:
secex_data = self.secex_query.all()
self._secex = secex_data
return self._secex
def __secex_sorted_by_balance__(self):
if not self._secex_sorted_by_balance:
self._secex_sorted_by_balance = self.__secex_list__()
self._secex_sorted_by_balance.sort(key=lambda secex: (
secex.export_val or 0) - (secex.import_val or 0), reverse=True)
return self._secex_sorted_by_balance
def __secex_sorted_by_exports__(self):
if not self._secex_sorted_by_exports:
self._secex_sorted_by_exports = self.__secex_list__()
self._secex_sorted_by_exports.sort(
key=lambda secex: secex.export_val, reverse=True)
return self._secex_sorted_by_exports
def __secex_sorted_by_imports__(self):
if not self._secex_sorted_by_imports:
self._secex_sorted_by_imports = self.__secex_list__()
self._secex_sorted_by_imports.sort(
key=lambda secex: secex.import_val, reverse=True)
return self._secex_sorted_by_imports
def country_name(self):
base_trade_partner = self.__secex__().wld
return base_trade_partner.name()
def location_name(self):
return Bra.query.filter(Bra.id == self.bra_id).first().name()
def year(self):
return self.__secex__().year
def trade_balance(self):
export_val = self.__secex__().export_val
import_val = self.__secex__().import_val
if export_val is None:
return import_val
elif import_val is None:
return export_val
else:
return export_val - import_val
def total_exported(self):
export_val = self.__secex__().export_val
if export_val is None:
return 0
else:
return export_val
def unity_weight_export_price(self):
export_val = self.__secex__().export_val
export_kg = self.__secex__().export_kg
if export_val is None:
return None
else:
return export_val / export_kg
def total_imported(self):
return self.__secex__().import_val
def unity_weight_import_price(self):
import_val = self.__secex__().import_val
import_kg = self.__secex__().import_kg
if import_val is None:
return None
else:
return import_val / import_kg
def highest_import_value(self):
secex = self.__secex_sorted_by_imports__()[0]
return secex.import_val
def highest_export_value(self):
secex = self.__secex_sorted_by_exports__()[0]
return secex.export_val
def highest_balance(self):
secex = self.__secex_sorted_by_balance__()[0]
export_val = secex.export_val or 0
import_val = secex.import_val or 0
return export_val - import_val
def lowest_balance(self):
secex = self.__secex_sorted_by_balance__()[-1]
export_val = secex.export_val or 0
import_val = secex.import_val or 0
return export_val - import_val
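
# --- Hedged usage sketch (added by the editor, not part of the original module) ---
# Illustrates how the lazily cached TradePartner helper above might be used to
# build a small summary. The wld/bra ids below are placeholders, and a configured
# dataviva application/database session is assumed to be available.
def _example_trade_partner_summary(wld_id="aschn", bra_id="4mg"):
    partner = TradePartner(wld_id, bra_id)
    return {
        "partner": partner.country_name(),
        "location": partner.location_name(),
        "year": partner.year(),
        "trade_balance": partner.trade_balance(),
    }
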
class TradePartnerMunicipalities(TradePartner):
def __init__(self, wld_id, bra_id):
TradePartner.__init__(self, wld_id, bra_id)
self.max_year_query = db.session.query(
func.max(Ymbw.year)).filter_by(wld_id=wld_id, month=12)
if bra_id is not None:
self.secex_query = Ymbw.query.join(Wld).join(Bra).filter(
Ymbw.wld_id == self.wld_id,
Ymbw.bra_id.like(self.bra_id + '%'),
Ymbw.month == 0,
Ymbw.year == self.max_year_query,
func.length(Ymbw.bra_id) == 9)
else:
self.secex_query = Ymbw.query.join(Wld).join(Bra).filter(
Ymbw.wld_id == self.wld_id,
Ymbw.month == 0,
Ymbw.year == self.max_year_query,
func.length(Ymbw.bra_id) == 9)
def municipality_with_more_imports(self):
secex = self.__secex_sorted_by_imports__()[0]
return secex.bra.name()
def municipality_with_more_imports_state(self):
secex = self.__secex_sorted_by_imports__()[0]
return secex.bra.abbreviation
def municipality_with_more_exports(self):
secex = self.__secex_sorted_by_exports__()[0]
return secex.bra.name()
def municipality_with_more_exports_state(self):
secex = self.__secex_sorted_by_exports__()[0]
return secex.bra.abbreviation
class TradePartnerProducts(TradePartner):
def __init__(self, wld_id, bra_id):
TradePartner.__init__(self, wld_id, bra_id)
self.max_year_query = db.session.query(
func.max(Ympw.year)).filter_by(wld_id=wld_id, month=12)
if bra_id is not None:
self.secex_query = Ymbpw.query.join(Wld).filter(
Ymbpw.wld_id == self.wld_id,
Ymbpw.bra_id == self.bra_id,
Ymbpw.month == 0,
Ymbpw.hs_id_len == 6,
Ymbpw.year == self.max_year_query)
else:
self.secex_query = Ympw.query.join(Wld).join(Hs).filter(
Ympw.wld_id == self.wld_id,
Ympw.month == 0,
Ympw.hs_id_len == 6,
Ympw.year == self.max_year_query)
def product_with_more_exports(self):
secex = self.__secex_sorted_by_exports__()[0]
return secex.hs.name()
def product_with_more_imports(self):
secex = self.__secex_sorted_by_imports__()[0]
return secex.hs.name()
def product_with_highest_balance(self):
secex = self.__secex_sorted_by_balance__()[0]
return secex.hs.name()
def product_with_lowest_balance(self):
secex = self.__secex_sorted_by_balance__()[-1]
return secex.hs.name()
class Product:
def __init__(self, product_id):
self._secex = None
self._secex_sorted_by_balance = None
self._secex_sorted_by_exports = None
self._secex_sorted_by_imports = None
self.product_id = product_id
if product_id is None:
self.max_year_query = db.session.query(
func.max(Ymp.year)).filter_by(month=12)
self.secex_query = Ymp.query.join(Hs).filter(
Ymp.month == 0,
Ymp.year == self.max_year_query)
else:
self.max_year_query = db.session.query(
func.max(Ymp.year)).filter_by(hs_id=product_id, month=12)
self.secex_query = Ymp.query.join(Hs).filter(
Ymp.hs_id == self.product_id,
Ymp.month == 0,
Ymp.year == self.max_year_query)
def __secex__(self):
if not self._secex:
secex_data = self.secex_query.first_or_404()
self._secex = secex_data
return self._secex
def __secex_list__(self):
if not self._secex:
secex_data = self.secex_query.all()
self._secex = secex_data
return list(self._secex)
def __secex_sorted_by_balance__(self):
self._secex_sorted_by_balance = self.__secex_list__()
self._secex_sorted_by_balance.sort(key=lambda secex: (
secex.export_val or 0) - (secex.import_val or 0), reverse=True)
return self._secex_sorted_by_balance
def __secex_sorted_by_exports__(self):
self._secex_sorted_by_exports = self.__secex_list__()
self._secex_sorted_by_exports = filter(
lambda secex: secex.export_val, self._secex_sorted_by_exports)
self._secex_sorted_by_exports.sort(
key=lambda secex: secex.export_val, reverse=True)
return self._secex_sorted_by_exports
def __secex_sorted_by_imports__(self):
self._secex_sorted_by_imports = self.__secex_list__()
self._secex_sorted_by_imports = filter(
lambda secex: secex.import_val, self._secex_sorted_by_imports)
self._secex_sorted_by_imports.sort(
key=lambda secex: secex.import_val, reverse=True)
return self._secex_sorted_by_imports
def product_name(self):
product = self.__secex__().hs
return product.name()
def year(self):
return self.max_year_query.first()[0]
def location_name(self):
return "Brasil"
def trade_balance(self):
export_val = self.__secex__().export_val or 0
import_val = self.__secex__().import_val or 0
return export_val - import_val
def total_exported(self):
return self.__secex__().export_val
def unity_weight_export_price(self):
export_val = self.__secex__().export_val
export_kg = self.__secex__().export_kg
return export_val if not export_val else export_val / export_kg
def total_imported(self):
return self.__secex__().import_val
def unity_weight_import_price(self):
import_val = self.__secex__().import_val
import_kg = self.__secex__().import_kg
return import_val if not import_val else import_val / import_kg
def highest_import_value(self):
try:
secex = self.__secex_sorted_by_imports__()[0]
except IndexError:
return None
else:
return secex.import_val
def highest_export_value(self):
try:
secex = self.__secex_sorted_by_exports__()[0]
except IndexError:
return None
else:
return secex.export_val
def highest_import_value_name(self):
try:
secex = self.__secex_sorted_by_imports__()[0]
except IndexError:
return None
else:
return secex.hs.name()
def highest_export_value_name(self):
try:
secex = self.__secex_sorted_by_exports__()[0]
except IndexError:
return None
else:
return secex.hs.name()
def product_complexity(self):
product_complexity = self.__secex__()
return product_complexity.pci
def export_value_growth_in_five_years(self):
export_value_growth_in_five_years = self.__secex__()
return export_value_growth_in_five_years.export_val_growth_5
def all_imported(self):
        total_imported = db.session.query(func.sum(Ymb.import_val)).filter_by(
            year=self.max_year_query, month=0, bra_id_len=1).one()
return float(total_imported[0])
def all_exported(self):
        total_exported = db.session.query(func.sum(Ymb.export_val)).filter_by(
            year=self.max_year_query, month=0, bra_id_len=1).one()
return float(total_exported[0])
def all_trade_balance(self):
return self.all_exported() - self.all_imported()
class ProductTradePartners(Product):
def __init__(self, product_id, bra_id):
Product.__init__(self, product_id)
self.max_year_query = db.session.query(
func.max(Ympw.year)).filter_by(hs_id=product_id, month=12)
self.secex_query = Ympw.query.join(Wld).filter(
Ympw.hs_id == self.product_id,
Ympw.wld_id_len == 5,
Ympw.month == 0,
Ympw.year == self.max_year_query
)
if bra_id:
self.bra_id = bra_id
self.max_year_query = db.session.query(
func.max(Ymbpw.year)).filter_by(hs_id=product_id, bra_id=bra_id, month=12)
self.secex_query = Ymbpw.query.join(Wld).filter(
Ymbpw.hs_id == self.product_id,
Ymbpw.year == self.max_year_query,
Ymbpw.wld_id_len == 5,
Ymbpw.bra_id == self.bra_id,
Ymbpw.month == 0)
def destination_with_more_exports(self):
try:
secex = self.__secex_sorted_by_exports__()[0]
except IndexError:
return None
else:
return secex.wld.name()
def origin_with_more_imports(self):
try:
secex = self.__secex_sorted_by_imports__()[0]
except IndexError:
return None
else:
return secex.wld.name()
class ProductMunicipalities(Product):
def __init__(self, product_id, bra_id):
Product.__init__(self, product_id)
self.max_year_query = db.session.query(
func.max(Ymbp.year)).filter_by(hs_id=product_id, month=12)
self.secex_query = Ymbp.query.join(Bra).filter(
Ymbp.hs_id == self.product_id,
Ymbp.bra_id_len == 9,
Ymbp.month == 0,
Ymbp.year == self.max_year_query,
)
if bra_id:
self.bra_id = bra_id
self.max_year_query = db.session.query(
func.max(Ymbp.year)).filter_by(hs_id=product_id, bra_id=bra_id, month=12)
self.secex_query = Ymbp.query.join(Bra).filter(
Ymbp.hs_id == self.product_id,
Ymbp.year == self.max_year_query,
Ymbp.bra_id_len == 9,
Ymbp.bra_id.like(str(self.bra_id)+'%'),
Ymbp.month == 0)
def municipality_with_more_exports(self):
try:
secex = self.__secex_sorted_by_exports__()[0]
except IndexError:
return None
else:
return secex.bra.name()
def municipality_with_more_exports_state(self):
try:
secex = self.__secex_sorted_by_exports__()[0]
except IndexError:
return None
else:
return secex.bra.abbreviation
def municipality_with_more_imports(self):
try:
secex = self.__secex_sorted_by_imports__()[0]
except IndexError:
return None
else:
return secex.bra.name()
def municipality_with_more_imports_state(self):
try:
secex = self.__secex_sorted_by_imports__()[0]
except IndexError:
return None
else:
return secex.bra.abbreviation
class ProductLocations(Product):
def __init__(self, product_id, bra_id):
self._secex = None
self.bra_id = bra_id
self.product_id = product_id
self.max_database_year = db.session.query(func.max(Ymbp.year))
self.max_year_query = db.session.query(
func.max(Ymbp.year)).filter_by(bra_id=bra_id, hs_id=product_id).filter(
Ymbp.year < self.max_database_year)
self.secex_query = Ymbp.query.filter(
Ymbp.hs_id == self.product_id,
Ymbp.bra_id == self.bra_id,
Ymbp.month == 0,
Ymbp.year == self.max_year_query
)
def location_name(self):
return Bra.query.filter(Bra.id == self.bra_id).first().name()
def rca_wld(self):
secex = self.__secex__()
return secex.rca_wld
def distance_wld(self):
secex = self.__secex__()
return secex.distance_wld
def opp_gain_wld(self):
secex = self.__secex__()
return secex.opp_gain_wld
class Location:
def __init__(self, bra_id):
self._secex = None
self._secex_sorted_by_exports = None
self._secex_sorted_by_imports = None
self._secex_sorted_by_distance = None
self._secex_sorted_by_opp_gain = None
self.bra_id = bra_id
self.max_database_year = db.session.query(func.max(Ymbp.year))
self.max_year_query = db.session.query(
func.max(Ymbp.year)).filter_by(bra_id=self.bra_id).filter(
Ymbp.year < self.max_database_year)
self.secex_query = Ymbp.query.join(Hs).filter(
Ymbp.bra_id == self.bra_id,
Ymbp.month == 0,
Ymbp.hs_id_len == 6,
Ymbp.year == self.max_year_query)
def __secex__(self):
if not self._secex:
secex_data = self.secex_query.first_or_404()
self._secex = secex_data
return self._secex
def __secex_list__(self):
if not self._secex:
secex_data = self.secex_query.all()
self._secex = secex_data
return self._secex
def __secex_sorted_by_exports__(self):
if not self._secex_sorted_by_exports:
self._secex_sorted_by_exports = self.__secex_list__()
self._secex_sorted_by_exports.sort(
key=lambda secex: secex.export_val, reverse=True)
return self._secex_sorted_by_exports
def __secex_sorted_by_imports__(self):
if not self._secex_sorted_by_imports:
self._secex_sorted_by_imports = self.__secex_list__()
self._secex_sorted_by_imports.sort(
key=lambda secex: secex.import_val, reverse=True)
return self._secex_sorted_by_imports
def __secex_sorted_by_distance__(self):
if not self._secex_sorted_by_distance:
not_nulls_list = []
for i in self.__secex_list__():
                if i.distance is not None:
not_nulls_list.append(i)
not_nulls_list.sort(
key=lambda secex: secex.distance_wld, reverse=False)
self._secex_sorted_by_distance = not_nulls_list
return self._secex_sorted_by_distance
def __secex_sorted_by_opp_gain__(self):
if not self._secex_sorted_by_opp_gain:
not_nulls_list = []
for i in self.__secex_list__():
                if i.opp_gain is not None:
not_nulls_list.append(i)
not_nulls_list.sort(
key=lambda secex: secex.opp_gain_wld, reverse=True)
self._secex_sorted_by_opp_gain = not_nulls_list
return self._secex_sorted_by_opp_gain
def year(self):
return self.max_year_query.first()[0]
def main_product_by_export_value(self):
try:
secex = self.__secex_sorted_by_exports__()[0]
except IndexError:
return None
else:
return secex.export_val
def main_product_by_export_value_name(self):
try:
secex = self.__secex_sorted_by_exports__()[0]
except IndexError:
return None
else:
return secex.hs.name()
def main_product_by_import_value(self):
try:
secex = self.__secex_sorted_by_imports__()[0]
except IndexError:
return None
else:
return secex.import_val
def main_product_by_import_value_name(self):
try:
secex = self.__secex_sorted_by_imports__()[0]
except IndexError:
return None
else:
return secex.hs.name()
def total_exports(self):
try:
export_sum = 0
secex = self.__secex_sorted_by_exports__()
for i in secex:
                if i.export_val is not None:
export_sum += i.export_val
except IndexError:
return None
else:
return export_sum
def total_imports(self):
try:
import_sum = 0
secex = self.__secex_sorted_by_imports__()
for i in secex:
                if i.import_val is not None:
import_sum += i.import_val
except IndexError:
return None
else:
return import_sum
def less_distance_by_product(self):
try:
secex = self.__secex_sorted_by_distance__()[0]
except IndexError:
return None
else:
return secex.distance_wld
def less_distance_by_product_name(self):
try:
secex = self.__secex_sorted_by_distance__()[0]
except IndexError:
return None
else:
return secex.hs.name()
def opportunity_gain_by_product(self):
try:
secex = self.__secex_sorted_by_opp_gain__()[0]
except IndexError:
return None
else:
return secex.opp_gain_wld
def opportunity_gain_by_product_name(self):
try:
secex = self.__secex_sorted_by_opp_gain__()[0]
except IndexError:
return None
else:
return secex.hs.name()
class LocationWld(Location):
def __init__(self, bra_id):
Location.__init__(self, bra_id)
self.bra_id = bra_id
self.max_year_query = db.session.query(
func.max(Ymbw.year)).filter_by(bra_id=self.bra_id, month=12)
self.secex_query = Ymbw.query.join(Wld).filter(
Ymbw.bra_id == self.bra_id,
Ymbw.month == 0,
Ymbw.wld_id_len == 5,
Ymbw.year == self.max_year_query)
def main_destination_by_export_value(self):
try:
secex = self.__secex_sorted_by_exports__()[0]
except IndexError:
return None
else:
return secex.export_val
def main_destination_by_export_value_name(self):
try:
secex = self.__secex_sorted_by_exports__()[0]
except IndexError:
return None
else:
return secex.wld.name()
def main_destination_by_import_value(self):
try:
secex = self.__secex_sorted_by_imports__()[0]
except IndexError:
return None
else:
return secex.import_val
def main_destination_by_import_value_name(self):
try:
secex = self.__secex_sorted_by_imports__()[0]
except IndexError:
return None
else:
return secex.wld.name()
class LocationEciRankings:
def __init__(self, bra_id):
self._secex = None
self._secex_sorted_by_eci = None
self.bra_id = bra_id
self.max_year_query = db.session.query(
func.max(Ymb.year)).filter_by(bra_id=self.bra_id, month=12)
self.secex_query = Ymb.query.filter(
Ymb.year == self.max_year_query,
Ymb.month == 0,
func.length(Ymb.bra_id) == 5)
def __secex_sorted_by_eci__(self):
if not self._secex_sorted_by_eci:
self._secex_sorted_by_eci = self.__secex_list__()
self._secex_sorted_by_eci.sort(
key=lambda secex: secex.eci, reverse=True)
return self._secex_sorted_by_eci
def __secex_list__(self):
if not self._secex:
secex_data = self.secex_query.all()
self._secex = secex_data
return self._secex
    def eci_rank(self):
        eci_list = self.__secex_sorted_by_eci__()
        rank = 1
        for eci in eci_list:
            if eci.bra_id == self.bra_id:
                return rank
            rank += 1
| 33.141667 | 105 | 0.61093 | 3,054 | 23,862 | 4.31205 | 0.042567 | 0.12985 | 0.100691 | 0.117473 | 0.891412 | 0.850786 | 0.808338 | 0.755107 | 0.728453 | 0.689498 | 0 | 0.006977 | 0.303202 | 23,862 | 719 | 106 | 33.187761 | 0.785048 | 0 | 0 | 0.711443 | 0 | 0 | 0.000335 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.150912 | false | 0 | 0.150912 | 0.016584 | 0.500829 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
a1e99d58438f875f2552b192338c58abd55a630b | 83 | py | Python | 07_Java_Experiment/PyTest/shell/__init__.py | Robert-Stackflow/HUST-Courses | 300752552e7af035b0e5c7663953850c81871242 | [
"MIT"
] | 4 | 2021-11-01T09:27:32.000Z | 2022-03-07T14:24:10.000Z | 07_Java_Experiment/PyTest/shell/__init__.py | Robert-Stackflow/HUST-Courses | 300752552e7af035b0e5c7663953850c81871242 | [
"MIT"
] | null | null | null | 07_Java_Experiment/PyTest/shell/__init__.py | Robert-Stackflow/HUST-Courses | 300752552e7af035b0e5c7663953850c81871242 | [
"MIT"
] | null | null | null | from shell.executor import ShellExecutionResult
from shell.executor import execute
| 27.666667 | 47 | 0.879518 | 10 | 83 | 7.3 | 0.6 | 0.246575 | 0.465753 | 0.630137 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.096386 | 83 | 2 | 48 | 41.5 | 0.973333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 7 |
62cc7c50f75850086bb1cf1b63ad618eadf77cf0 | 14,116 | py | Python | sacrerouge/tests/datasets/duc_tac/tac2010/system_level_test.py | danieldeutsch/decomposed-rouge | 0d723be8e3359f0bdcc9c7940336800895e46dbb | [
"Apache-2.0"
] | 81 | 2020-07-10T15:45:08.000Z | 2022-03-30T12:19:11.000Z | sacrerouge/tests/datasets/duc_tac/tac2010/system_level_test.py | danieldeutsch/decomposed-rouge | 0d723be8e3359f0bdcc9c7940336800895e46dbb | [
"Apache-2.0"
] | 29 | 2020-08-03T21:50:45.000Z | 2022-02-23T14:34:16.000Z | sacrerouge/tests/datasets/duc_tac/tac2010/system_level_test.py | danieldeutsch/decomposed-rouge | 0d723be8e3359f0bdcc9c7940336800895e46dbb | [
"Apache-2.0"
] | 7 | 2020-08-14T09:54:08.000Z | 2022-03-30T12:19:25.000Z | import os
import pytest
import unittest
from sacrerouge.commands.correlate import aggregate_metrics
from sacrerouge.data import Metrics
from sacrerouge.io import JsonlReader
_metrics_A_file_path = 'datasets/duc-tac/tac2010/v1.0/task1.A.metrics.jsonl'
_metrics_B_file_path = 'datasets/duc-tac/tac2010/v1.0/task1.B.metrics.jsonl'
class TestTAC2010SystemLevel(unittest.TestCase):
@pytest.mark.skipif(not os.path.exists(_metrics_A_file_path), reason='TAC 2010-A metrics file does not exist')
def test_system_level_A(self):
summary_level_metrics = JsonlReader(_metrics_A_file_path, Metrics).read()
system_level_metrics = aggregate_metrics(summary_level_metrics)
# Check a few metrics to make sure they are equal to what's in the NIST files
# ROUGE/rouge2_A.m.avg
assert system_level_metrics['22']['rouge-2']['recall'] == pytest.approx(9.574, 1e-2)
assert system_level_metrics['18']['rouge-2']['recall'] == pytest.approx(9.418, 1e-2)
assert system_level_metrics['23']['rouge-2']['recall'] == pytest.approx(9.404, 1e-2)
assert system_level_metrics['24']['rouge-2']['recall'] == pytest.approx(9.196, 1e-2)
assert system_level_metrics['36']['rouge-2']['recall'] == pytest.approx(9.194, 1e-2)
# ROUGE/rouge2_A.jk.m.avg
assert system_level_metrics['D']['rouge-2_jk']['recall'] == pytest.approx(12.862, 1e-2)
assert system_level_metrics['H']['rouge-2_jk']['recall'] == pytest.approx(12.841, 1e-1)
assert system_level_metrics['F']['rouge-2_jk']['recall'] == pytest.approx(12.556, 1e-2)
assert system_level_metrics['22']['rouge-2_jk']['recall'] == pytest.approx(9.620, 1e-2)
assert system_level_metrics['18']['rouge-2_jk']['recall'] == pytest.approx(9.451, 1e-2)
# ROUGE/rougeSU4_A.m.avg
assert system_level_metrics['22']['rouge-su4']['recall'] == pytest.approx(13.014, 1e-2)
assert system_level_metrics['23']['rouge-su4']['recall'] == pytest.approx(12.963, 1e-2)
assert system_level_metrics['24']['rouge-su4']['recall'] == pytest.approx(12.829, 1e-2)
assert system_level_metrics['18']['rouge-su4']['recall'] == pytest.approx(12.407, 1e-2)
assert system_level_metrics['34']['rouge-su4']['recall'] == pytest.approx(12.283, 1e-2)
# ROUGE/rougeSU4_A.jk.m.avg
assert system_level_metrics['H']['rouge-su4_jk']['recall'] == pytest.approx(16.294, 1e-2)
assert system_level_metrics['F']['rouge-su4_jk']['recall'] == pytest.approx(16.212, 1e-2)
assert system_level_metrics['D']['rouge-su4_jk']['recall'] == pytest.approx(16.200, 1e-2)
assert system_level_metrics['22']['rouge-su4_jk']['recall'] == pytest.approx(13.049, 1e-2)
assert system_level_metrics['23']['rouge-su4_jk']['recall'] == pytest.approx(12.978, 1e-2)
# manual/manual.model.A.avg
assert system_level_metrics['A']['num_scus_jk'] == pytest.approx(10.870, 1e-2)
assert system_level_metrics['B']['num_scus_jk'] == pytest.approx(11.087, 1e-2)
assert system_level_metrics['C']['num_scus_jk'] == pytest.approx(9.826, 1e-2)
assert system_level_metrics['A']['modified_pyramid_score_jk'] == pytest.approx(0.779, 1e-2)
assert system_level_metrics['B']['modified_pyramid_score_jk'] == pytest.approx(0.747, 1e-2)
assert system_level_metrics['C']['modified_pyramid_score_jk'] == pytest.approx(0.661, 1e-2)
assert system_level_metrics['A']['linguistic_quality'] == pytest.approx(4.913, 1e-2)
assert system_level_metrics['B']['linguistic_quality'] == pytest.approx(4.870, 1e-2)
assert system_level_metrics['C']['linguistic_quality'] == pytest.approx(4.826, 1e-2)
assert system_level_metrics['A']['overall_responsiveness'] == pytest.approx(4.783, 1e-2)
assert system_level_metrics['B']['overall_responsiveness'] == pytest.approx(4.696, 1e-2)
assert system_level_metrics['C']['overall_responsiveness'] == pytest.approx(4.565, 1e-2)
# manual/manual.peer.A.avg
assert system_level_metrics['1']['modified_pyramid_score'] == pytest.approx(0.233, 1e-2)
assert system_level_metrics['2']['modified_pyramid_score'] == pytest.approx(0.296, 1e-2)
assert system_level_metrics['3']['modified_pyramid_score'] == pytest.approx(0.399, 1e-2)
assert system_level_metrics['1']['num_scus'] == pytest.approx(3.304, 1e-2)
assert system_level_metrics['2']['num_scus'] == pytest.approx(4.217, 1e-2)
assert system_level_metrics['3']['num_scus'] == pytest.approx(5.500, 1e-2)
assert system_level_metrics['1']['num_repetitions'] == pytest.approx(0.522, 1e-2)
assert system_level_metrics['2']['num_repetitions'] == pytest.approx(1.217, 1e-2)
assert system_level_metrics['3']['num_repetitions'] == pytest.approx(1.413, 1e-2)
assert system_level_metrics['1']['modified_pyramid_score_jk'] == pytest.approx(0.229, 1e-2)
assert system_level_metrics['2']['modified_pyramid_score_jk'] == pytest.approx(0.291, 1e-2)
assert system_level_metrics['3']['modified_pyramid_score_jk'] == pytest.approx(0.393, 1e-2)
assert system_level_metrics['1']['linguistic_quality'] == pytest.approx(3.652, 1e-2)
assert system_level_metrics['2']['linguistic_quality'] == pytest.approx(2.717, 1e-2)
assert system_level_metrics['3']['linguistic_quality'] == pytest.approx(3.043, 1e-2)
assert system_level_metrics['1']['overall_responsiveness'] == pytest.approx(2.174, 1e-2)
assert system_level_metrics['2']['overall_responsiveness'] == pytest.approx(2.500, 1e-2)
assert system_level_metrics['3']['overall_responsiveness'] == pytest.approx(2.978, 1e-2)
# BE/simple_A.m.hm.avg
assert system_level_metrics['22']['rouge-be-hm']['recall'] == pytest.approx(5.937, 1e-2)
assert system_level_metrics['23']['rouge-be-hm']['recall'] == pytest.approx(5.809, 1e-2)
assert system_level_metrics['18']['rouge-be-hm']['recall'] == pytest.approx(5.749, 1e-2)
assert system_level_metrics['13']['rouge-be-hm']['recall'] == pytest.approx(5.553, 1e-2)
assert system_level_metrics['16']['rouge-be-hm']['recall'] == pytest.approx(5.497, 1e-2)
# BE/simplejk_A.m.hm.avg
assert system_level_metrics['F']['rouge-be-hm_jk']['recall'] == pytest.approx(9.114, 1e-2)
assert system_level_metrics['H']['rouge-be-hm_jk']['recall'] == pytest.approx(8.690, 1e-1)
assert system_level_metrics['D']['rouge-be-hm_jk']['recall'] == pytest.approx(8.449, 1e-1)
assert system_level_metrics['22']['rouge-be-hm_jk']['recall'] == pytest.approx(5.973, 1e-2)
assert system_level_metrics['23']['rouge-be-hm_jk']['recall'] == pytest.approx(5.828, 1e-2)
# aesop_allpeers_A
assert system_level_metrics['A']['aesop']['1'] == pytest.approx(0.09517478261, 1e-2)
assert system_level_metrics['C']['aesop']['8'] == pytest.approx(0.0, 1e-2)
assert system_level_metrics['4']['aesop']['13'] == pytest.approx(0.6150630435, 1e-2)
assert system_level_metrics['8']['aesop']['22'] == pytest.approx(0.3684913043, 1e-2)
assert system_level_metrics['16']['aesop']['27'] == pytest.approx(11.80434783, 1e-2)
@pytest.mark.skipif(not os.path.exists(_metrics_B_file_path), reason='TAC 2010-B metrics file does not exist')
def test_system_level_B(self):
summary_level_metrics = JsonlReader(_metrics_B_file_path, Metrics).read()
system_level_metrics = aggregate_metrics(summary_level_metrics)
# Check a few metrics to make sure they are equal to what's in the NIST files
# ROUGE/rouge2_B.m.avg
assert system_level_metrics['16']['rouge-2']['recall'] == pytest.approx(8.024, 1e-2)
assert system_level_metrics['13']['rouge-2']['recall'] == pytest.approx(7.913, 1e-2)
assert system_level_metrics['36']['rouge-2']['recall'] == pytest.approx(7.311, 1e-2)
assert system_level_metrics['8']['rouge-2']['recall'] == pytest.approx(7.251, 1e-2)
assert system_level_metrics['4']['rouge-2']['recall'] == pytest.approx(7.058, 1e-2)
# ROUGE/rouge2_B.jk.m.avg
assert system_level_metrics['D']['rouge-2_jk']['recall'] == pytest.approx(13.021, 1e-2)
assert system_level_metrics['E']['rouge-2_jk']['recall'] == pytest.approx(10.196, 1e-1)
assert system_level_metrics['F']['rouge-2_jk']['recall'] == pytest.approx(9.777, 1e-2)
assert system_level_metrics['16']['rouge-2_jk']['recall'] == pytest.approx(7.993, 1e-2)
assert system_level_metrics['13']['rouge-2_jk']['recall'] == pytest.approx(7.902, 1e-2)
# ROUGE/rougeSU4_B.m.avg
assert system_level_metrics['16']['rouge-su4']['recall'] == pytest.approx(12.006, 1e-2)
assert system_level_metrics['13']['rouge-su4']['recall'] == pytest.approx(11.878, 1e-2)
assert system_level_metrics['6']['rouge-su4']['recall'] == pytest.approx(11.198, 1e-2)
assert system_level_metrics['22']['rouge-su4']['recall'] == pytest.approx(11.107, 1e-2)
assert system_level_metrics['8']['rouge-su4']['recall'] == pytest.approx(11.039, 1e-2)
# ROUGE/rougeSU4_B.jk.m.avg
assert system_level_metrics['D']['rouge-su4_jk']['recall'] == pytest.approx(16.193, 1e-2)
assert system_level_metrics['E']['rouge-su4_jk']['recall'] == pytest.approx(13.978, 1e-2)
assert system_level_metrics['G']['rouge-su4_jk']['recall'] == pytest.approx(13.573, 1e-2)
assert system_level_metrics['16']['rouge-su4_jk']['recall'] == pytest.approx(11.979, 1e-2)
assert system_level_metrics['13']['rouge-su4_jk']['recall'] == pytest.approx(11.869, 1e-2)
# manual/manual.model.B.avg
assert system_level_metrics['A']['num_scus_jk'] == pytest.approx(6.609, 1e-2)
assert system_level_metrics['B']['num_scus_jk'] == pytest.approx(7.696, 1e-2)
assert system_level_metrics['C']['num_scus_jk'] == pytest.approx(5.913, 1e-2)
assert system_level_metrics['A']['modified_pyramid_score_jk'] == pytest.approx(0.629, 1e-2)
assert system_level_metrics['B']['modified_pyramid_score_jk'] == pytest.approx(0.729, 1e-2)
assert system_level_metrics['C']['modified_pyramid_score_jk'] == pytest.approx(0.551, 1e-2)
assert system_level_metrics['A']['linguistic_quality'] == pytest.approx(4.913, 1e-2)
assert system_level_metrics['B']['linguistic_quality'] == pytest.approx(4.826, 1e-2)
assert system_level_metrics['C']['linguistic_quality'] == pytest.approx(4.870, 1e-2)
assert system_level_metrics['A']['overall_responsiveness'] == pytest.approx(4.783, 1e-2)
assert system_level_metrics['B']['overall_responsiveness'] == pytest.approx(4.783, 1e-2)
assert system_level_metrics['C']['overall_responsiveness'] == pytest.approx(4.826, 1e-2)
# manual/manual.peer.B.avg
assert system_level_metrics['1']['modified_pyramid_score'] == pytest.approx(0.187, 1e-2)
assert system_level_metrics['2']['modified_pyramid_score'] == pytest.approx(0.262, 1e-2)
assert system_level_metrics['3']['modified_pyramid_score'] == pytest.approx(0.235, 1e-2)
assert system_level_metrics['1']['num_scus'] == pytest.approx(2.065, 1e-2)
assert system_level_metrics['2']['num_scus'] == pytest.approx(2.804, 1e-2)
assert system_level_metrics['3']['num_scus'] == pytest.approx(2.609, 1e-2)
assert system_level_metrics['1']['num_repetitions'] == pytest.approx(0.348, 1e-2)
assert system_level_metrics['2']['num_repetitions'] == pytest.approx(0.522, 1e-2)
assert system_level_metrics['3']['num_repetitions'] == pytest.approx(0.348, 1e-2)
assert system_level_metrics['1']['modified_pyramid_score_jk'] == pytest.approx(0.184, 1e-2)
assert system_level_metrics['2']['modified_pyramid_score_jk'] == pytest.approx(0.256, 1e-2)
assert system_level_metrics['3']['modified_pyramid_score_jk'] == pytest.approx(0.228, 1e-2)
assert system_level_metrics['1']['linguistic_quality'] == pytest.approx(3.739, 1e-2)
assert system_level_metrics['2']['linguistic_quality'] == pytest.approx(2.696, 1e-2)
assert system_level_metrics['3']['linguistic_quality'] == pytest.approx(2.957, 1e-2)
assert system_level_metrics['1']['overall_responsiveness'] == pytest.approx(2.022, 1e-2)
assert system_level_metrics['2']['overall_responsiveness'] == pytest.approx(2.478, 1e-2)
assert system_level_metrics['3']['overall_responsiveness'] == pytest.approx(2.217, 1e-2)
# BE/simple_B.m.hm.avg
assert system_level_metrics['16']['rouge-be-hm']['recall'] == pytest.approx(4.445, 1e-2)
assert system_level_metrics['13']['rouge-be-hm']['recall'] == pytest.approx(4.417, 1e-2)
assert system_level_metrics['8']['rouge-be-hm']['recall'] == pytest.approx(4.350, 1e-1)
assert system_level_metrics['4']['rouge-be-hm']['recall'] == pytest.approx(4.115, 1e-2)
assert system_level_metrics['22']['rouge-be-hm']['recall'] == pytest.approx(4.050, 1e-2)
# BE/simplejk_B.m.hm.avg
assert system_level_metrics['D']['rouge-be-hm_jk']['recall'] == pytest.approx(8.842, 1e-2)
assert system_level_metrics['F']['rouge-be-hm_jk']['recall'] == pytest.approx(7.842, 1e-1)
assert system_level_metrics['B']['rouge-be-hm_jk']['recall'] == pytest.approx(7.081, 1e-1)
assert system_level_metrics['16']['rouge-be-hm_jk']['recall'] == pytest.approx(4.411, 1e-2)
assert system_level_metrics['13']['rouge-be-hm_jk']['recall'] == pytest.approx(4.402, 1e-2)
# aesop_allpeers_B
assert system_level_metrics['B']['aesop']['2'] == pytest.approx(0.1358091304, 1e-2)
assert system_level_metrics['E']['aesop']['4'] == pytest.approx(0.1376682609, 1e-2)
assert system_level_metrics['6']['aesop']['7'] == pytest.approx(0.2641304348, 1e-2)
assert system_level_metrics['9']['aesop']['20'] == pytest.approx(0.09438347826, 1e-2)
assert system_level_metrics['14']['aesop']['22'] == pytest.approx(0.3394478261, 1e-2)
| 68.193237 | 114 | 0.668674 | 2,107 | 14,116 | 4.267679 | 0.105838 | 0.181495 | 0.264235 | 0.346975 | 0.901246 | 0.865436 | 0.794262 | 0.72242 | 0.622331 | 0.555271 | 0 | 0.089649 | 0.146571 | 14,116 | 206 | 115 | 68.524272 | 0.656761 | 0.039884 | 0 | 0.040816 | 0 | 0 | 0.186004 | 0.058971 | 0 | 0 | 0 | 0 | 0.884354 | 1 | 0.013605 | false | 0 | 0.040816 | 0 | 0.061224 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
62e0ebbea766df790967eb63c178eb47de226fe6 | 45,262 | py | Python | packages/risksense_api/__subject/__connectors/__connectors.py | PRASANTHBHARADHWAAJ/risksense_tools | d9f95ac3c7107bb4114c958455c7194211ff951b | [
"Apache-2.0"
] | 4 | 2020-12-24T15:20:23.000Z | 2021-12-26T17:41:46.000Z | packages/risksense_api/__subject/__connectors/__connectors.py | PRASANTHBHARADHWAAJ/risksense_tools | d9f95ac3c7107bb4114c958455c7194211ff951b | [
"Apache-2.0"
] | 4 | 2020-10-08T19:53:36.000Z | 2020-11-11T20:52:36.000Z | packages/risksense_api/__subject/__connectors/__connectors.py | PRASANTHBHARADHWAAJ/risksense_tools | d9f95ac3c7107bb4114c958455c7194211ff951b | [
"Apache-2.0"
] | 2 | 2021-06-18T01:27:31.000Z | 2021-12-20T03:19:31.000Z | """ *******************************************************************************************************************
|
| Name : __connectors.py
| Module : risksense_api
| Description : A class to be used for interacting with connectors on the RiskSense Platform.
| Copyright : (c) RiskSense, Inc.
| License : Apache-2.0 (http://www.apache.org/licenses/LICENSE-2.0)
|
******************************************************************************************************************* """
import json
from ...__subject import Subject
from ..._api_request_handler import *
class Connectors(Subject):
""" Connectors class """
class Type:
""" Connectors.Type class """
NESSUS = 'NESSUS'
QUALYS_VULN = 'QUALYS_VULNERABILITY'
QUALYS_ASSET = 'QUALYS_ASSET'
NEXPOSE = 'NEXPOSE'
TENEBLE_SEC_CENTER = 'TENEBLE_SECURITY_CENTER'
class ScheduleFreq:
""" Connectors.ScheduleFreq class """
DAILY = "DAILY"
WEEKLY = "WEEKLY"
MONTHLY = "MONTHLY"
def __init__(self, profile):
"""
Initialization of Connectors object.
:param profile: Profile Object
:type profile: _profile
"""
self.subject_name = "connector"
Subject.__init__(self, profile, self.subject_name)
def get_list(self, page_num=0, page_size=150, client_id=None):
"""
Get a list of connectors associated with the client.
:param page_num: The page number of results to be returned.
:type page_num: int
:param page_size: The number of results to return per page.
:type page_size: int
:param client_id: Client ID. If an ID isn't passed, will use the profile's default Client ID.
:type client_id: int
:return: The JSON response from the platform is returned.
:rtype: dict
:raises RequestFailed:
:raises StatusCodeError:
:raises MaxRetryError:
:raises PageSizeError:
"""
if client_id is None:
client_id = self._use_default_client_id()[0]
url = self.api_base_url.format(str(client_id)) + "?size=" + str(page_size) + "&page=" + str(page_num)
try:
raw_response = self.request_handler.make_request(ApiRequestHandler.GET, url)
except (RequestFailed, StatusCodeError, MaxRetryError, PageSizeError):
raise
jsonified_response = json.loads(raw_response.text)
return jsonified_response
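
    # Hedged usage sketch (added by the editor, not part of the original module):
    # get_list() above pages through the client's connectors. A caller might walk
    # a page roughly like this (the page size and response field names below are
    # assumptions for illustration, not confirmed by this file):
    #
    #   connectors = Connectors(profile)
    #   page = connectors.get_list(page_num=0, page_size=150)
    #   for item in page.get('_embedded', {}).get('connectors', []):
    #       print(item['id'], item['name'])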
def create(self, conn_name, conn_type, conn_url, schedule_freq, network_id,
username_or_access_key, password_or_secret_key, auto_urba=True, client_id=None, **kwargs):
"""
Create a new Nessus connector.
:param conn_name: The connector name.
:type conn_name: str
:param conn_type: The connector type. (Valid options are: Connectors.Type.NESSUS, Connectors.Type.NEXPOSE,
Connectors.Type.QUALYS_VULN, Connectors.Type.QUALYS_ASSET,
Connectors.Type.TENEBLE_SEC_CENTER)
:type conn_type: str
:param conn_url: The URL for the connector to communicate with.
:type conn_url: str
:param schedule_freq: The frequency for the connector to run. Connectors.ScheduleFreq.DAILY,
Connectors.ScheduleFreq.WEEKLY,
Connectors.ScheduleFreq.MONTHLY
:type schedule_freq: str
:param network_id: The network ID
:type network_id: int
:param username_or_access_key: The username to use for connector authentication.
:type username_or_access_key: str
:param password_or_secret_key: The password to use for connector authentication.
:type password_or_secret_key: str
:param auto_urba: Automatically run URBA after connector runs?
:type auto_urba: bool
:param client_id: Client ID. If an ID isn't passed, will use the profile's default Client ID.
:type client_id: int
:keyword ssl_cert: Optional SSL certificate.
:keyword hour_of_day: The time the connector should run. Integer. 0-23.
:keyword day_of_week: The day of the week the connector should run. Integer. 1-7
:keyword day_of_month: The day of the month the connector should run. Integer. 1-31
:return: The connector ID from the platform is returned.
:rtype: int
:raises RequestFailed:
:raises StatusCodeError:
:raises MaxRetryError:
:raises ValueError:
"""
if client_id is None:
client_id = self._use_default_client_id()[0]
url = self.api_base_url.format(str(client_id))
ssl_cert = kwargs.get('ssl_cert', None)
hour_of_day = kwargs.get('hour_of_day', None)
day_of_week = kwargs.get('day_of_week', None)
day_of_month = kwargs.get('day_of_month', None)
if conn_type == Connectors.Type.NESSUS:
attributes = {
"accessKey": username_or_access_key,
"secretKey": password_or_secret_key
}
else:
attributes = {
"username": username_or_access_key,
"password": password_or_secret_key
}
body = {
"type": conn_type,
"name": conn_name,
"connection": {
"url": conn_url
},
"networkId": network_id,
"attributes": attributes,
"autoUrba": auto_urba
}
if ssl_cert is not None:
body['connection'].update(sslCertificates=ssl_cert)
if schedule_freq == Connectors.ScheduleFreq.DAILY:
if hour_of_day is None:
hour_of_day = 12
connector_schedule = {
"type": schedule_freq,
"hourOfDay": hour_of_day
}
elif schedule_freq == Connectors.ScheduleFreq.WEEKLY:
if day_of_week is None:
day_of_week = 1
if hour_of_day is None:
hour_of_day = 12
connector_schedule = {
"type": schedule_freq,
"hourOfDay": hour_of_day,
"dayOfWeek": day_of_week
}
elif schedule_freq == Connectors.ScheduleFreq.MONTHLY:
if day_of_month is None:
day_of_month = 1
if hour_of_day is None:
hour_of_day = 12
connector_schedule = {
"type": schedule_freq,
"hourOfDay": hour_of_day,
"dayOfMonth": day_of_month
}
else:
raise ValueError("Schedule freq. should be DAILY, WEEKLY, or MONTHLY.")
body.update(schedule=connector_schedule)
try:
raw_response = self.request_handler.make_request(ApiRequestHandler.POST, url, body=body)
except (RequestFailed, StatusCodeError, MaxRetryError):
raise
jsonified_response = json.loads(raw_response.text)
job_id = jsonified_response['id']
return job_id
def create_nessus(self, conn_name, conn_url, schedule_freq, network_id,
access_key, secret_key, auto_urba=True, client_id=None, **kwargs):
"""
Create a new Nessus connector.
:param conn_name: The connector name.
:type conn_name: str
:param conn_url: The URL for the connector to communicate with.
:type conn_url: str
:param schedule_freq: The frequency for the connector to run. Connectors.ScheduleFreq.DAILY,
Connectors.ScheduleFreq.WEEKLY,
Connectors.ScheduleFreq.MONTHLY
:type schedule_freq: str
:param network_id: The network ID
:type network_id: int
:param access_key: The username to use for connector authentication.
:type access_key: str
:param secret_key: The password to use for connector authentication.
:type secret_key: str
:param auto_urba: Automatically run URBA after connector runs?
:type auto_urba: bool
:param client_id: Client ID. If an ID isn't passed, will use the profile's default Client ID.
:type client_id: int
:keyword ssl_cert: Optional SSL certificate.
:keyword hour_of_day: The time the connector should run. Integer. 0-23.
:keyword day_of_week: The day of the week the connector should run. Integer. 1-7
:keyword day_of_month: The day of the month the connector should run. Integer. 1-31
:return: The connector ID from the platform is returned.
:rtype: int
:raises RequestFailed:
:raises StatusCodeError:
:raises MaxRetryError:
:raises ValueError:
"""
if client_id is None:
client_id = self._use_default_client_id()[0]
try:
connector_id = self.create(conn_name, Connectors.Type.NESSUS, conn_url, schedule_freq, network_id,
access_key, secret_key, auto_urba, client_id, **kwargs)
except (RequestFailed, StatusCodeError, MaxRetryError, ValueError):
raise
return connector_id
def create_qualys_vuln(self, conn_name, conn_url, schedule_freq, network_id,
username, password, auto_urba=True, client_id=None, **kwargs):
"""
Create a new Qualys Vulnerability connector.
:param conn_name: The connector name.
:type conn_name: str
:param conn_url: The URL for the connector to communicate with.
:type conn_url: str
:param schedule_freq: The frequency for the connector to run. Connectors.ScheduleFreq.DAILY,
Connectors.ScheduleFreq.WEEKLY,
Connectors.ScheduleFreq.MONTHLY
:type schedule_freq: str
:param network_id: The network ID
:type network_id: int
:param username: The username to use for connector authentication.
:type username: str
:param password: The password to use for connector authentication.
:type password: str
:param auto_urba: Automatically run URBA after connector runs?
:type auto_urba: bool
:param client_id: Client ID. If an ID isn't passed, will use the profile's default Client ID.
:type client_id: int
:keyword ssl_cert: Optional SSL certificate.
:keyword hour_of_day: The time the connector should run. Integer. 0-23.
:keyword day_of_week: The day of the week the connector should run. Integer. 1-7
:keyword day_of_month: The day of the month the connector should run. Integer. 1-31
:return: The connector ID from the platform is returned.
:rtype: int
:raises RequestFailed:
:raises StatusCodeError:
:raises MaxRetryError:
:raises ValueError:
"""
if client_id is None:
client_id = self._use_default_client_id()[0]
try:
connector_id = self.create(conn_name, Connectors.Type.QUALYS_VULN, conn_url, schedule_freq,
network_id, username, password, auto_urba, client_id, **kwargs)
except (RequestFailed, StatusCodeError, MaxRetryError, ValueError):
raise
return connector_id
def create_qualys_asset(self, conn_name, conn_url, schedule_freq, network_id,
username, password, auto_urba=True, client_id=None, **kwargs):
"""
Create a new Qualys Asset connector.
:param conn_name: The connector name.
:type conn_name: str
:param conn_url: The URL for the connector to communicate with.
:type conn_url: str
:param schedule_freq: The frequency for the connector to run. Connectors.ScheduleFreq.DAILY,
Connectors.ScheduleFreq.WEEKLY,
Connectors.ScheduleFreq.MONTHLY
:type schedule_freq: str
:param network_id: The network ID
:type network_id: int
:param username: The username to use for connector authentication.
:type username: str
:param password: The password to use for connector authentication.
:type password: str
:param auto_urba: Automatically run URBA after connector runs?
:type auto_urba: bool
:param client_id: Client ID. If an ID isn't passed, will use the profile's default Client ID.
:type client_id: int
:keyword ssl_cert: Optional SSL certificate.
:keyword hour_of_day: The time the connector should run. Integer. 0-23.
:keyword day_of_week: The day of the week the connector should run. Integer. 1-7
:keyword day_of_month: The day of the month the connector should run. Integer. 1-31
:return: The connector ID from the platform is returned.
:rtype: int
:raises RequestFailed:
:raises StatusCodeError:
:raises MaxRetryError:
:raises ValueError:
"""
if client_id is None:
client_id = self._use_default_client_id()[0]
try:
connector_id = self.create(conn_name, Connectors.Type.QUALYS_ASSET, conn_url, schedule_freq,
network_id, username, password, auto_urba, client_id, **kwargs)
except (RequestFailed, StatusCodeError, MaxRetryError, ValueError):
raise
return connector_id
def create_nexpose(self, conn_name, conn_url, schedule_freq, network_id,
username, password, auto_urba=True, client_id=None, **kwargs):
"""
Create a new Nexpose connector.
:param conn_name: The connector name.
:type conn_name: str
:param conn_url: The URL for the connector to communicate with.
:type conn_url: str
:param schedule_freq: The frequency for the connector to run. Connectors.ScheduleFreq.DAILY,
Connectors.ScheduleFreq.WEEKLY,
Connectors.ScheduleFreq.MONTHLY
:type schedule_freq: str
:param network_id: The network ID
:type network_id: int
:param username: The username to use for connector authentication.
:type username: str
:param password: The password to use for connector authentication.
:type password: str
:param auto_urba: Automatically run URBA after connector runs?
:type auto_urba: bool
:param client_id: Client ID. If an ID isn't passed, will use the profile's default Client ID.
:type client_id: int
:keyword ssl_cert: Optional SSL certificate.
:keyword hour_of_day: The time the connector should run. Integer. 0-23.
:keyword day_of_week: The day of the week the connector should run. Integer. 1-7
:keyword day_of_month: The day of the month the connector should run. Integer. 1-31
:return: The connector ID from the platform is returned.
:rtype: int
:raises RequestFailed:
:raises StatusCodeError:
:raises MaxRetryError:
:raises ValueError:
"""
if client_id is None:
client_id = self._use_default_client_id()[0]
try:
connector_id = self.create(conn_name, Connectors.Type.NEXPOSE, conn_url, schedule_freq, network_id,
username, password, auto_urba, client_id, **kwargs)
except (RequestFailed, StatusCodeError, MaxRetryError, ValueError):
raise
return connector_id
def create_teneble(self, conn_name, conn_url, schedule_freq, network_id,
username, password, auto_urba=True, client_id=None, **kwargs):
"""
Create a new Teneble Security Center connector.
:param conn_name: The connector name.
:type conn_name: str
:param conn_url: The URL for the connector to communicate with.
:type conn_url: str
:param schedule_freq: The frequency for the connector to run. Connectors.ScheduleFreq.DAILY,
Connectors.ScheduleFreq.WEEKLY,
Connectors.ScheduleFreq.MONTHLY
:type schedule_freq: str
:param network_id: The network ID
:type network_id: int
:param username: The username to use for connector authentication.
:type username: str
:param password: The password to use for connector authentication.
:type password: str
:param auto_urba: Automatically run URBA after connector runs?
:type auto_urba: bool
:param client_id: Client ID. If an ID isn't passed, will use the profile's default Client ID.
:type client_id: int
:keyword ssl_cert: Optional SSL certificate.
:keyword hour_of_day: The time the connector should run. Integer. 0-23.
:keyword day_of_week: The day of the week the connector should run. Integer. 1-7
:keyword day_of_month: The day of the month the connector should run. Integer. 1-31
:return: The connector ID from the platform is returned.
:rtype: int
:raises RequestFailed:
:raises StatusCodeError:
:raises MaxRetryError:
:raises ValueError:
"""
if client_id is None:
client_id = self._use_default_client_id()[0]
try:
connector_id = self.create(conn_name, Connectors.Type.TENEBLE_SEC_CENTER, conn_url, schedule_freq,
network_id, username, password, auto_urba, client_id, **kwargs)
except (RequestFailed, StatusCodeError, MaxRetryError, ValueError):
raise
return connector_id
def get_connector_detail(self, connector_id, client_id=None):
"""
Get the details associated with a specific connector.
:param connector_id: The connector ID.
:type connector_id: int
:param client_id: Client ID. If an ID isn't passed, will use the profile's default Client ID.
:type client_id: int
:return: The JSON response from the platform is returned.
:rtype: dict
:raises RequestFailed:
:raises StatusCodeError:
:raises MaxRetryError:
"""
if client_id is None:
client_id = self._use_default_client_id()[0]
url = self.api_base_url.format(str(client_id)) + "/{}".format(str(connector_id))
try:
raw_response = self.request_handler.make_request(ApiRequestHandler.GET, url)
except (RequestFailed, StatusCodeError, MaxRetryError):
raise
jsonified_response = json.loads(raw_response.text)
return jsonified_response
def update(self, connector_id, conn_type, conn_name, conn_url, network_id, schedule_freq,
username_or_access_key, password_or_secret_key, auto_urba=True, client_id=None, **kwargs):
"""
Update an existing connector
:param connector_id: Connector ID to update
:type connector_id: int
:param conn_type: Type of Connector (Valid options are: Connectors.Type.NESSUS,
Connectors.Type.NEXPOSE,
Connectors.Type.QUALYS_VULN,
Connectors.Type.QUALYS_ASSET,
Connectors.Type.TENEBLE_SEC_CENTER)
:type conn_type: str
:param conn_name: The name for the connector
:type conn_name: str
:param conn_url: The URL for the connector to communicate with.
:type conn_url: str
:param network_id: The network ID
:type network_id: int
:param schedule_freq: The frequency for the connector to run. Connectors.ScheduleFreq.DAILY,
Connectors.ScheduleFreq.WEEKLY,
Connectors.ScheduleFreq.MONTHLY
:type schedule_freq: str
:param username_or_access_key: The username or access key to be used
:type username_or_access_key: str
:param password_or_secret_key: The password or secret key to be used
:type password_or_secret_key: str
:param auto_urba: Indicates whether URBA should be automatically run after connector runs.
:type auto_urba: bool
:param client_id: Client ID. If an ID isn't passed, will use the profile's default Client ID.
:type client_id: int
:keyword hour_of_day: The time the connector should run. Integer. 0-23.
:keyword day_of_week: The day of the week the connector should run. Integer. 1-7
:keyword day_of_month: The day of the month the connector should run. Integer. 1-31
:return: The connector ID from the platform is returned.
:rtype: int
:raises RequestFailed:
:raises StatusCodeError:
:raises MaxRetryError:
"""
if client_id is None:
client_id = self._use_default_client_id()[0]
connector_schedule = None
url = self.api_base_url.format(str(client_id)) + "/{}".format(str(connector_id))
hour_of_day = kwargs.get('hour_of_day', None)
day_of_week = kwargs.get('day_of_week', None)
day_of_month = kwargs.get('day_of_month', None)
if schedule_freq == Connectors.ScheduleFreq.DAILY:
if hour_of_day is None:
hour_of_day = 12
connector_schedule = {
"type": schedule_freq,
"hourOfDay": hour_of_day
}
elif schedule_freq == Connectors.ScheduleFreq.WEEKLY:
if day_of_week is None:
day_of_week = 1
if hour_of_day is None:
hour_of_day = 12
connector_schedule = {
"type": schedule_freq,
"hourOfDay": hour_of_day,
"dayOfWeek": day_of_week
}
elif schedule_freq == Connectors.ScheduleFreq.MONTHLY:
if day_of_month is None:
day_of_month = 1
if hour_of_day is None:
hour_of_day = 12
connector_schedule = {
"type": schedule_freq,
"hourOfDay": hour_of_day,
"dayOfMonth": day_of_month
}
if conn_type == Connectors.Type.NESSUS:
attributes = {
"accessKey": username_or_access_key,
"secretKey": password_or_secret_key
}
else:
attributes = {
"username": username_or_access_key,
"password": password_or_secret_key
}
body = {
"type": conn_type,
"name": conn_name,
"connection": {
"url": conn_url
},
"schedule": connector_schedule,
"networkId": network_id,
"attributes": attributes,
"autoUrba": auto_urba
}
try:
raw_response = self.request_handler.make_request(ApiRequestHandler.PUT, url, body=body)
except (RequestFailed, StatusCodeError, MaxRetryError):
raise
jsonified_response = json.loads(raw_response.text)
returned_id = jsonified_response['id']
return returned_id
def update_nessus_connector(self, connector_id, conn_name, conn_url, network_id, schedule_freq,
username_or_access_key, password_or_secret_key, auto_urba=True, client_id=None, **kwargs):
"""
Update an existing Nessus connector
:param connector_id: Connector ID to update
:type connector_id: int
:param conn_name: The name for the connector
:type conn_name: str
:param conn_url: The URL for the connector to communicate with.
:type conn_url: str
:param network_id: The network ID
:type network_id: int
:param schedule_freq: The frequency for the connector to run. Connectors.ScheduleFreq.DAILY,
Connectors.ScheduleFreq.WEEKLY,
Connectors.ScheduleFreq.MONTHLY
:type schedule_freq: str
:param username_or_access_key: The username or access key to be used
:type username_or_access_key: str
:param password_or_secret_key: The password or secret key to be used
:type password_or_secret_key: str
:param auto_urba: Indicates whether URBA should be automatically run after connector runs.
:type auto_urba: bool
:param client_id: Client ID. If an ID isn't passed, will use the profile's default Client ID.
:type client_id: int
:keyword hour_of_day: The time the connector should run. Integer. 0-23.
:keyword day_of_week: The day of the week the connector should run. Integer. 1-7
:keyword day_of_month: The day of the month the connector should run. Integer. 1-31
:return: The connector ID from the platform is returned.
:rtype: int
:raises RequestFailed:
:raises StatusCodeError:
:raises MaxRetryError:
"""
if client_id is None:
client_id = self._use_default_client_id()[0]
try:
returned_id = self.update(connector_id, Connectors.Type.NESSUS, conn_name, conn_url, network_id, schedule_freq,
username_or_access_key, password_or_secret_key, auto_urba, client_id, **kwargs)
except (RequestFailed, StatusCodeError, MaxRetryError):
raise
return returned_id
def update_qualys_vuln_connector(self, connector_id, conn_name, conn_url, network_id, schedule_freq,
username_or_access_key, password_or_secret_key, auto_urba=True, client_id=None, **kwargs):
"""
Update an existing QUALYS VULN connector
:param connector_id: Connector ID to update
:type connector_id: int
:param conn_name: The name for the connector
:type conn_name: str
:param conn_url: The URL for the connector to communicate with.
:type conn_url: str
:param network_id: The network ID
:type network_id: int
:param schedule_freq: The frequency for the connector to run. Connectors.ScheduleFreq.DAILY,
Connectors.ScheduleFreq.WEEKLY,
Connectors.ScheduleFreq.MONTHLY
:type schedule_freq: str
:param username_or_access_key: The username or access key to be used
:type username_or_access_key: str
:param password_or_secret_key: The password or secret key to be used
:type password_or_secret_key: str
:param auto_urba: Indicates whether URBA should be automatically run after connector runs.
:type auto_urba: bool
:param client_id: Client ID. If an ID isn't passed, will use the profile's default Client ID.
:type client_id: int
:keyword hour_of_day: The time the connector should run. Integer. 0-23.
:keyword day_of_week: The day of the week the connector should run. Integer. 1-7
:keyword day_of_month: The day of the month the connector should run. Integer. 1-31
:return: The connector ID from the platform is returned.
:rtype: int
:raises RequestFailed:
:raises StatusCodeError:
:raises MaxRetryError:
"""
if client_id is None:
client_id = self._use_default_client_id()[0]
try:
returned_id = self.update(connector_id, Connectors.Type.QUALYS_VULN, conn_name, conn_url, network_id, schedule_freq,
username_or_access_key, password_or_secret_key, auto_urba, client_id, **kwargs)
except (RequestFailed, StatusCodeError, MaxRetryError):
raise
return returned_id
def update_qualys_asset_connector(self, connector_id, conn_name, conn_url, network_id, schedule_freq,
username_or_access_key, password_or_secret_key, auto_urba=True, client_id=None, **kwargs):
"""
Update an existing QUALYS ASSET connector
:param connector_id: Connector ID to update
:type connector_id: int
:param conn_name: The name for the connector
:type conn_name: str
:param conn_url: The URL for the connector to communicate with.
:type conn_url: str
:param network_id: The network ID
:type network_id: int
:param schedule_freq: The frequency for the connector to run. Connectors.ScheduleFreq.DAILY,
Connectors.ScheduleFreq.WEEKLY,
Connectors.ScheduleFreq.MONTHLY
:type schedule_freq: str
:param username_or_access_key: The username or access key to be used
:type username_or_access_key: str
:param password_or_secret_key: The password or secret key to be used
:type password_or_secret_key: str
:param auto_urba: Indicates whether URBA should be automatically run after connector runs.
:type auto_urba: bool
:param client_id: Client ID. If an ID isn't passed, will use the profile's default Client ID.
:type client_id: int
:keyword hour_of_day: The time the connector should run. Integer. 0-23.
:keyword day_of_week: The day of the week the connector should run. Integer. 1-7
:keyword day_of_month: The day of the month the connector should run. Integer. 1-31
:return: The connector ID from the platform is returned.
:rtype: int
:raises RequestFailed:
:raises StatusCodeError:
:raises MaxRetryError:
"""
if client_id is None:
client_id = self._use_default_client_id()[0]
try:
returned_id = self.update(connector_id, Connectors.Type.QUALYS_ASSET, conn_name, conn_url, network_id, schedule_freq,
username_or_access_key, password_or_secret_key, auto_urba, client_id, **kwargs)
except (RequestFailed, StatusCodeError, MaxRetryError):
raise
return returned_id
def update_nexpose_connector(self, connector_id, conn_name, conn_url, network_id, schedule_freq,
username_or_access_key, password_or_secret_key, auto_urba=True, client_id=None, **kwargs):
"""
Update an existing NEXPOSE connector
:param connector_id: Connector ID to update
:type connector_id: int
:param conn_name: The name for the connector
:type conn_name: str
:param conn_url: The URL for the connector to communicate with.
:type conn_url: str
:param network_id: The network ID
:type network_id: int
:param schedule_freq: The frequency for the connector to run. Connectors.ScheduleFreq.DAILY,
Connectors.ScheduleFreq.WEEKLY,
Connectors.ScheduleFreq.MONTHLY
:type schedule_freq: str
:param username_or_access_key: The username or access key to be used
:type username_or_access_key: str
:param password_or_secret_key: The password or secret key to be used
:type password_or_secret_key: str
:param auto_urba: Indicates whether URBA should be automatically run after connector runs.
:type auto_urba: bool
:param client_id: Client ID. If an ID isn't passed, will use the profile's default Client ID.
:type client_id: int
:keyword hour_of_day: The time the connector should run. Integer. 0-23.
:keyword day_of_week: The day of the week the connector should run. Integer. 1-7
:keyword day_of_month: The day of the month the connector should run. Integer. 1-31
:return: The connector ID from the platform is returned.
:rtype: int
:raises RequestFailed:
:raises StatusCodeError:
:raises MaxRetryError:
"""
if client_id is None:
client_id = self._use_default_client_id()[0]
try:
returned_id = self.update(connector_id, Connectors.Type.NEXPOSE, conn_name, conn_url, network_id, schedule_freq,
username_or_access_key, password_or_secret_key, auto_urba, client_id, **kwargs)
except (RequestFailed, StatusCodeError, MaxRetryError):
raise
return returned_id
def update_teneble_connector(self, connector_id, conn_name, conn_url, network_id, schedule_freq,
username_or_access_key, password_or_secret_key, auto_urba=True, client_id=None, **kwargs):
"""
Update an existing TENEBLE SECURITY CENTER connector
:param connector_id: Connector ID to update
:type connector_id: int
:param conn_name: The name for the connector
:type conn_name: str
:param conn_url: The URL for the connector to communicate with.
:type conn_url: str
:param network_id: The network ID
:type network_id: int
:param schedule_freq: The frequency for the connector to run. Connectors.ScheduleFreq.DAILY,
Connectors.ScheduleFreq.WEEKLY,
Connectors.ScheduleFreq.MONTHLY
:type schedule_freq: str
:param username_or_access_key: The username or access key to be used
:type username_or_access_key: str
:param password_or_secret_key: The password or secret key to be used
:type password_or_secret_key: str
:param auto_urba: Indicates whether URBA should be automatically run after connector runs.
:type auto_urba: bool
:param client_id: Client ID. If an ID isn't passed, will use the profile's default Client ID.
:type client_id: int
:keyword hour_of_day: The time the connector should run. Integer. 0-23.
:keyword day_of_week: The day of the week the connector should run. Integer. 1-7
:keyword day_of_month: The day of the month the connector should run. Integer. 1-31
:return: The connector ID from the platform is returned.
:rtype: int
:raises RequestFailed:
:raises StatusCodeError:
:raises MaxRetryError:
"""
if client_id is None:
client_id = self._use_default_client_id()[0]
try:
returned_id = self.update(connector_id, Connectors.Type.TENEBLE_SEC_CENTER, conn_name, conn_url, network_id, schedule_freq,
username_or_access_key, password_or_secret_key, auto_urba, client_id, **kwargs)
except (RequestFailed, StatusCodeError, MaxRetryError):
raise
return returned_id
def delete(self, connector_id, delete_tag=True, client_id=None):
"""
Delete a connector.
:param connector_id: The connector ID.
:type connector_id: int
:param delete_tag: Force delete tag associated with connector?
:type delete_tag: bool
:param client_id: Client ID. If an ID isn't passed, will use the profile's default Client ID.
:type client_id: int
:return: Indicator reflecting whether or not the operation was successful.
:rtype: bool
:raises RequestFailed:
:raises StatusCodeError:
:raises MaxRetryError:
"""
if client_id is None:
client_id = self._use_default_client_id()[0]
url = self.api_base_url.format(str(client_id)) + "/{}".format(str(connector_id))
body = {
"deleteTag": delete_tag
}
try:
self.request_handler.make_request(ApiRequestHandler.DELETE, url, body)
except (RequestFailed, StatusCodeError, MaxRetryError):
raise
success = True
return success
def get_jobs(self, connector_id, page_num=0, page_size=150, client_id=None):
"""
Get the jobs associated with a connector.
:param connector_id: The connector ID.
:type connector_id: int
:param page_num: The page number of results to be returned.
:type page_num: int
:param page_size: The number of results to return per page.
:type page_size: int
:param client_id: Client ID. If an ID isn't passed, will use the profile's default Client ID.
:type client_id: int
:return: The JSON response from the platform is returned.
:rtype: dict
:raises RequestFailed:
:raises StatusCodeError:
:raises MaxRetryError:
:raises PageSizeError:
"""
if client_id is None:
client_id = self._use_default_client_id()[0]
url = self.api_base_url.format(str(client_id)) + "/{}/job".format(str(connector_id))
params = {'page': page_num, 'size': page_size}
try:
raw_response = self.request_handler.make_request(ApiRequestHandler.GET, url, params=params)
except (RequestFailed, StatusCodeError, MaxRetryError, PageSizeError):
raise
jsonified_response = json.loads(raw_response.text)
return jsonified_response
def update_schedule(self, connector_id, schedule_freq, enabled, client_id=None, **kwargs):
"""
Update the schedule of an existing Connector.
:param connector_id: Connector ID
:type connector_id: int
:param schedule_freq: The frequency for the connector to run. Connectors.ScheduleFreq.DAILY,
Connectors.ScheduleFreq.WEEKLY,
Connectors.ScheduleFreq.MONTHLY
:type schedule_freq: str
:param enabled: Enable connector?
:type enabled: bool
:param client_id: Client ID. If an ID isn't passed, will use the profile's default Client ID.
:type client_id: int
:keyword hour_of_day: The time the connector should run. Req. for DAILY, WEEKLY, and MONTHLY. Integer. 0-23.
:keyword day_of_week: The day of the week the connector should run. Req. for WEEKLY. Integer. 1-7
:keyword day_of_month: The day of the month the connector should run. Req. for MONTHLY. Integer. 1-31
:return: The connector ID from the platform is returned.
:rtype: int
:raises RequestFailed:
:raises StatusCodeError:
:raises MaxRetryError:
:raises ValueError:
"""
if client_id is None:
client_id = self._use_default_client_id()[0]
hour_of_day = kwargs.get('hour_of_day', None)
day_of_week = kwargs.get('day_of_week', None)
day_of_month = kwargs.get('day_of_month', None)
url = self.api_base_url.format(str(client_id)) + "/{}/schedule".format(str(connector_id))
body = {
"type": schedule_freq,
"enabled": enabled
}
if schedule_freq == Connectors.ScheduleFreq.DAILY:
if hour_of_day is None:
raise ValueError("hour_of_day is required for a DAILY connector schedule.")
body.update(hourOfDay=hour_of_day)
elif schedule_freq == Connectors.ScheduleFreq.WEEKLY:
if day_of_week is None and hour_of_day is None:
raise ValueError("hour_of_day and day_of_week are required for a WEEKLY connector schedule.")
if day_of_week is None:
raise ValueError("day_of_week is required for a WEEKLY connector schedule.")
if hour_of_day is None:
raise ValueError("hour_of_day is required for a WEEKLY connector schedule.")
body.update(hourOfDay=hour_of_day)
body.update(dayOfWeek=day_of_week)
elif schedule_freq == Connectors.ScheduleFreq.MONTHLY:
if day_of_month is None and hour_of_day is None:
raise ValueError("day_of_month and day_of_week are required for a WEEKLY connector schedule.")
if day_of_month is None:
raise ValueError("day_of_month is required for a WEEKLY connector schedule.")
if hour_of_day is None:
raise ValueError("hour_of_day is required for a WEEKLY connector schedule.")
body.update(hourOfDay=hour_of_day)
body.update(dayOfMonth=day_of_month)
else:
raise ValueError("schedule_freq should be one of DAILY, WEEKLY, or MONTHLY")
try:
raw_response = self.request_handler.make_request(ApiRequestHandler.PUT, url, body=body)
except (RequestFailed, StatusCodeError, MaxRetryError):
raise
jsonified_response = json.loads(raw_response.text)
returned_id = jsonified_response['id']
return returned_id
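# --- Usage sketch (illustrative only) ---
# A minimal sketch of driving this class, assuming `connectors` is an instance
# of Connectors built elsewhere from a valid profile; the connector name, URL,
# network ID, and credentials below are placeholders, not values from this module.
def _connectors_usage_sketch(connectors):
    # Create a Nessus connector that runs weekly on Monday at 06:00.
    connector_id = connectors.create_nessus(
        conn_name="Example Nessus",
        conn_url="https://nessus.example.com:8834",
        schedule_freq=Connectors.ScheduleFreq.WEEKLY,
        network_id=12345,
        access_key="ACCESS_KEY",
        secret_key="SECRET_KEY",
        day_of_week=1,
        hour_of_day=6,
    )
    # Later, switch the same connector to a daily run at 02:00.
    connectors.update_schedule(connector_id, Connectors.ScheduleFreq.DAILY, enabled=True, hour_of_day=2)
    return connector_id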
# Future: Add support for ticket connectors (SNOW, JIRA, etc.).
"""
Copyright 2019 RiskSense, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at:
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
| 38.88488 | 135 | 0.577305 | 5,100 | 45,262 | 4.913725 | 0.046275 | 0.048843 | 0.017598 | 0.032682 | 0.91253 | 0.905786 | 0.898324 | 0.894972 | 0.885116 | 0.881844 | 0 | 0.005384 | 0.355795 | 45,262 | 1,163 | 136 | 38.918315 | 0.854071 | 0.523574 | 0 | 0.719403 | 0 | 0 | 0.064514 | 0.001336 | 0 | 0 | 0 | 0 | 0 | 1 | 0.053731 | false | 0.071642 | 0.008955 | 0 | 0.122388 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 8 |
62fa7bb77deb0a6c641da6cd32661eaa332b9a3a | 7,137 | py | Python | scripts/artifacts/takeoutSavedLinks.py | f0r3ns1cat0r/RLEAPP | 527799c3705b3b695dd355c2178b095c49a021ae | [
"MIT"
] | 26 | 2021-08-17T21:56:48.000Z | 2022-03-21T09:35:01.000Z | scripts/artifacts/takeoutSavedLinks.py | f0r3ns1cat0r/RLEAPP | 527799c3705b3b695dd355c2178b095c49a021ae | [
"MIT"
] | 3 | 2021-08-19T01:28:23.000Z | 2022-03-01T03:11:33.000Z | scripts/artifacts/takeoutSavedLinks.py | f0r3ns1cat0r/RLEAPP | 527799c3705b3b695dd355c2178b095c49a021ae | [
"MIT"
] | 10 | 2021-08-19T01:14:52.000Z | 2022-03-13T08:38:19.000Z | import os
import datetime
import csv
from scripts.artifact_report import ArtifactHtmlReport
from scripts.ilapfuncs import logfunc, tsv, timeline, is_platform_windows
def get_takeoutSavedLinks(files_found, report_folder, seeker, wrap_text):
for file_found in files_found:
file_found = str(file_found)
filename = os.path.basename(file_found)
if filename.startswith('Default list.csv'):
data_list = []
has_header = True
with open(file_found, 'r', encoding='utf-8') as f:
delimited = csv.reader(f, delimiter=',')
next(delimited)
for item in delimited:
if len(item) == 0:
continue
else:
title = item[0]
note = item[1]
url = item[2]
comment = item[3]
data_list.append((title,note,url,comment))
if data_list:
description = 'Collections of saved links (images, places, web pages, etc.) from Google Search and Maps.'
report = ArtifactHtmlReport('Saved Links - Default List')
report.start_artifact_report(report_folder, 'Saved Links - Default List', description)
html_report = report.get_report_file_path()
report.add_script()
data_headers = ('Title','Note','URL','Comment')
report.write_artifact_data_table(data_headers, data_list, file_found)
report.end_artifact_report()
tsvname = f'Saved Links - Default List'
tsv(report_folder, data_headers, data_list, tsvname)
tlactivity = f'Saved Links - Default List'
timeline(report_folder, tlactivity, data_list, data_headers)
else:
logfunc('No Saved Links - Default List data available')
if filename.startswith('Favorite images.csv'):
data_list = []
has_header = True
with open(file_found, 'r', encoding='utf-8') as f:
delimited = csv.reader(f, delimiter=',')
next(delimited)
for item in delimited:
if len(item) == 0:
continue
else:
title = item[0]
note = item[1]
url = item[2]
comment = item[3]
data_list.append((title,note,url,comment))
if data_list:
description = 'Collections of saved links (images, places, web pages, etc.) from Google Search and Maps.'
report = ArtifactHtmlReport('Saved Links - Favorite Images')
report.start_artifact_report(report_folder, 'Saved Links - Favorite Images', description)
html_report = report.get_report_file_path()
report.add_script()
data_headers = ('Title','Note','URL','Comment')
report.write_artifact_data_table(data_headers, data_list, file_found)
report.end_artifact_report()
tsvname = f'Saved Links - Favorite Images'
tsv(report_folder, data_headers, data_list, tsvname)
tlactivity = f'Saved Links - Favorite Images'
timeline(report_folder, tlactivity, data_list, data_headers)
else:
logfunc('No Saved Links - Favorite Images data available')
if filename.startswith('Favorite pages.csv'):
data_list = []
has_header = True
with open(file_found, 'r', encoding='utf-8') as f:
delimited = csv.reader(f, delimiter=',')
next(delimited)
for item in delimited:
if len(item) == 0:
continue
else:
title = item[0]
note = item[1]
url = item[2]
comment = item[3]
data_list.append((title,note,url,comment))
if data_list:
description = 'Collections of saved links (images, places, web pages, etc.) from Google Search and Maps.'
report = ArtifactHtmlReport('Saved Links - Favorite Pages')
report.start_artifact_report(report_folder, 'Saved Links - Favorite Pages', description)
html_report = report.get_report_file_path()
report.add_script()
data_headers = ('Title','Note','URL','Comment')
report.write_artifact_data_table(data_headers, data_list, file_found)
report.end_artifact_report()
tsvname = f'Saved Links - Favorite Pages'
tsv(report_folder, data_headers, data_list, tsvname)
tlactivity = f'Saved Links - Favorite Pages'
timeline(report_folder, tlactivity, data_list, data_headers)
else:
logfunc('No Saved Links - Favorite Pages data available')
if filename.startswith('Want to go.csv'):
data_list = []
has_header = True
with open(file_found, 'r', encoding='utf-8') as f:
delimited = csv.reader(f, delimiter=',')
next(delimited)
for item in delimited:
if len(item) == 0:
continue
else:
title = item[0]
note = item[1]
url = item[2]
comment = item[3]
data_list.append((title,note,url,comment))
if data_list:
description = 'Collections of saved links (images, places, web pages, etc.) from Google Search and Maps.'
report = ArtifactHtmlReport('Saved Links - Want To Go')
report.start_artifact_report(report_folder, 'Saved Links - Want To Go', description)
html_report = report.get_report_file_path()
report.add_script()
data_headers = ('Title','Note','URL','Comment')
report.write_artifact_data_table(data_headers, data_list, file_found)
report.end_artifact_report()
tsvname = f'Saved Links - Want To Go'
tsv(report_folder, data_headers, data_list, tsvname)
tlactivity = f'Saved Links - Want To Go'
timeline(report_folder, tlactivity, data_list, data_headers)
else:
logfunc('No Saved Links - Want To Go data available') | 44.329193 | 121 | 0.506095 | 695 | 7,137 | 5.018705 | 0.136691 | 0.055046 | 0.051606 | 0.043578 | 0.883888 | 0.854931 | 0.826835 | 0.826835 | 0.799885 | 0.768349 | 0 | 0.005736 | 0.413759 | 7,137 | 161 | 122 | 44.329193 | 0.827916 | 0 | 0 | 0.744186 | 0 | 0 | 0.158868 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.007752 | false | 0 | 0.03876 | 0 | 0.046512 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c503a0682bf956ef3dfcfaaa9b8ab673c6e99251 | 7,707 | py | Python | koila/interfaces/components/arithmetic.py | rentruewang/koila | f6fe8274901dd0afbe734765fedef8d64fca96da | [
"MIT"
] | 1,620 | 2021-11-24T07:56:53.000Z | 2022-03-31T01:04:25.000Z | koila/interfaces/components/arithmetic.py | rentruewang/koila | f6fe8274901dd0afbe734765fedef8d64fca96da | [
"MIT"
] | 25 | 2021-11-30T07:39:48.000Z | 2022-03-28T17:22:13.000Z | koila/interfaces/components/arithmetic.py | rentruewang/koila | f6fe8274901dd0afbe734765fedef8d64fca96da | [
"MIT"
] | 56 | 2021-11-29T17:51:13.000Z | 2022-02-09T14:49:46.000Z | from __future__ import annotations
from abc import abstractmethod
from functools import wraps
from typing import Any, NoReturn, Protocol, Union, runtime_checkable
Numeric = Union[int, float, bool]
"Numeric is a union for `int`, `float`, and `bool`, which are all primitive values in C's sense."
@runtime_checkable
class Arithmetic(Protocol):
"""
`Arithmetic` is a type that supports arithmetic operations.
Operations such as +-*/ etc are considered arithmetic, basically everything that can be used on a scalar.
Inheriting this class, requires half of the methods to be overwritten.
For example, either overload `add` or `__add__`.
If `__add__` is overwritten, `add` is implemented automatically using `__add__`, and vice versa.
The only exception is `eq` and `ne`. They must be manually implemented.
"""
def __invert__(self) -> Arithmetic:
"The `not` operator."
return self.logical_not()
@abstractmethod
def logical_not(self) -> Arithmetic:
"The `not` operator."
...
def __pos__(self) -> Arithmetic:
"The binary `+` operator."
return self.pos()
def pos(self) -> Arithmetic:
"The binary `+` operator."
return +self
def __neg__(self) -> Arithmetic:
"The unary `-` operator."
return self.neg()
def neg(self) -> Arithmetic:
"The unary `-` operator."
return -self
def __add__(self, other: Arithmetic) -> Arithmetic:
"The `+` operator."
return Arithmetic.add(self, other)
def __radd__(self, other: Arithmetic) -> Arithmetic:
"The `+` operator."
return Arithmetic.add(other, self)
def add(self, other: Arithmetic) -> Arithmetic:
"The `+` operator."
return self + other
def __sub__(self, other: Arithmetic) -> Arithmetic:
"The `-` operator."
return Arithmetic.sub(self, other)
def __rsub__(self, other: Arithmetic) -> Arithmetic:
"The `-` operator."
return Arithmetic.sub(other, self)
def sub(self, other: Arithmetic) -> Arithmetic:
"The `-` operator."
return self - other
@wraps(sub)
def subtract(self, other: Arithmetic) -> Arithmetic:
return self.sub(other)
def __mul__(self, other: Arithmetic) -> Arithmetic:
"The `*` operator."
return Arithmetic.mul(self, other)
def __rmul__(self, other: Arithmetic) -> Arithmetic:
"The `*` operator."
return Arithmetic.mul(other, self)
def mul(self, other: Arithmetic) -> Arithmetic:
"The `*` operator."
return self * other
@wraps(mul)
def multiply(self, other: Arithmetic) -> Arithmetic:
return self.mul(other)
def __truediv__(self, other: Arithmetic) -> Arithmetic:
"The `/` operator."
return self.div(other)
def __rtruediv__(self, other: Arithmetic) -> Arithmetic:
"The `/` operator."
return other.div(self)
def __floordiv__(self, other: Arithmetic) -> Arithmetic:
"""
The `//` operator.
It should not be implemented because of semantic differences between
`torch`'s `//` and `numpy`'s `//` operator.
"""
raise NotImplementedError
def __rfloordiv__(self, other: Arithmetic) -> Arithmetic:
"""
The `//` operator.
It should not be implemented because of semantic differences between
`torch`'s `//` and `numpy`'s `//` operator.
"""
raise NotImplementedError
def div(self, other: Arithmetic) -> Arithmetic:
"The `/` operator."
return self / other
@wraps(div)
def divide(self, other: Arithmetic) -> Arithmetic:
return self.div(other)
@wraps(div)
def truediv(self, other: Arithmetic) -> Arithmetic:
return self.div(other)
def __pow__(self, other: Arithmetic) -> Arithmetic:
"The `**` operator."
return self.pow(other)
def __rpow__(self, other: Arithmetic) -> Arithmetic:
"The `**` operator."
return Arithmetic.pow(other, self)
def pow(self, other: Arithmetic) -> Arithmetic:
"The `**` operator."
return self ** other
def __mod__(self, other: Arithmetic) -> Arithmetic:
"The `%` operator."
return self.mod(other)
def __rmod__(self, other: Arithmetic) -> Arithmetic:
"The `%` operator."
return other.mod(self)
def mod(self, other: Arithmetic) -> Arithmetic:
"The `%` operator."
return self % other
@wraps(mod)
def fmod(self, other: Arithmetic) -> Arithmetic:
return self.mod(other)
@wraps(mod)
def remainder(self, other: Arithmetic) -> Arithmetic:
return self.mod(other)
def __divmod__(self, other: Arithmetic) -> NoReturn:
"The `divmod` operator is not and should not be implemented."
raise NotImplementedError
def __rdivmod__(self, other: Arithmetic) -> NoReturn:
"The `divmod` operator is not and should not be implemented."
raise NotImplementedError
def __abs__(self) -> Arithmetic:
"The `abs` operator."
return self.abs()
def abs(self) -> Arithmetic:
"The `abs` operator."
return abs(self)
def __hash__(self) -> int:
"""
The `hash` operator.
The default here is identity-based (`id(self)`); value-type implementations should override it so that the hash depends only on the wrapped values.
"""
return id(self)
def __matmul__(self, other: Arithmetic) -> Arithmetic:
"The `@` operator."
return self.matmul(other)
def __rmatmul__(self, other: Arithmetic) -> Arithmetic:
"The `@` operator."
return other.matmul(self)
def matmul(self, other: Arithmetic) -> Arithmetic:
"The `@` operator."
return self @ other
def __eq__(self, other: Arithmetic | Numeric | Any) -> Arithmetic | bool:
"The `==` operator."
if not isinstance(other, (Arithmetic, int, float, bool)):
return False
return self.eq(other)
@abstractmethod
def eq(self, other: Arithmetic | Numeric) -> Arithmetic:
"The `==` operator. Variables on both sides of the operator are of the same type."
return self == other
def __ne__(self, other: Arithmetic | Numeric | Any) -> Arithmetic | bool:
"The `!=` operator."
if not isinstance(other, (Arithmetic, int, float, bool)):
return True
return self.ne(other)
@abstractmethod
def ne(self, other: Arithmetic | Numeric) -> Arithmetic:
"The `!=` operator. Variables on both sides of the operator are of the same type."
return self != other
def __gt__(self, other: Arithmetic | Numeric) -> Arithmetic:
"The `>` operator."
return self.gt(other)
def gt(self, other: Arithmetic | Numeric) -> Arithmetic:
"The `>` operator."
return self > other
def __ge__(self, other: Arithmetic | Numeric) -> Arithmetic:
"The >= operator."
return self.ge(other)
def ge(self, other: Arithmetic | Numeric) -> Arithmetic:
"The `>=` operator."
return self >= other
def __lt__(self, other: Arithmetic | Numeric) -> Arithmetic:
"The `<` operator."
return self.lt(other)
def lt(self, other: Arithmetic | Numeric) -> Arithmetic:
"The `<` operator."
return self < other
def __le__(self, other: Arithmetic | Numeric) -> Arithmetic:
"The `<=` operator."
return self.le(other)
def le(self, other: Arithmetic | Numeric) -> Arithmetic:
"The `<=` operator."
return self <= other
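# --- Usage sketch (illustrative only) ---
# A minimal concrete value type satisfying `Arithmetic`, to show the mirror
# pattern described in the class docstring: only the abstract pieces
# (`logical_not`, `eq`, `ne`) plus one side of each operator pair need to be
# written; the other side falls back to the defaults above. `Scalar` is a
# made-up example and is not part of this package.
class Scalar(Arithmetic):
    def __init__(self, value: float) -> None:
        self.value = value

    def logical_not(self) -> Scalar:
        return Scalar(float(not self.value))

    def __add__(self, other: Scalar) -> Scalar:
        # `add` now works automatically, since its default is `self + other`.
        return Scalar(self.value + other.value)

    def eq(self, other: Scalar | Numeric) -> bool:
        return self.value == getattr(other, "value", other)

    def ne(self, other: Scalar | Numeric) -> bool:
        return not self.eq(other)

# Example: Scalar(2.0).add(Scalar(3.0)).eq(Scalar(5.0)) evaluates to True.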
| 26.037162 | 109 | 0.604905 | 831 | 7,707 | 5.44645 | 0.169675 | 0.117322 | 0.180513 | 0.185815 | 0.744587 | 0.72669 | 0.701944 | 0.701944 | 0.586169 | 0.473266 | 0 | 0 | 0.276372 | 7,707 | 295 | 110 | 26.125424 | 0.811547 | 0.24095 | 0 | 0.333333 | 0 | 0.005952 | 0.16003 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.309524 | false | 0 | 0.02381 | 0.035714 | 0.630952 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 7 |
c50e61a57eb041f9fb1474c1e6f5453335e573b7 | 213 | py | Python | torch/ao/quantization/fx/backend_config_dict/__init__.py | sanchitintel/pytorch | 416f59308023b5d98f6ea4ecdd0bcd3829edb7a7 | [
"Intel"
] | 60,067 | 2017-01-18T17:21:31.000Z | 2022-03-31T21:37:45.000Z | torch/ao/quantization/fx/backend_config_dict/__init__.py | Jam3/pytorch | 33d8769c285b51922c378d11a90a442a28e06762 | [
"Intel"
] | 66,955 | 2017-01-18T17:21:38.000Z | 2022-03-31T23:56:11.000Z | torch/ao/quantization/fx/backend_config_dict/__init__.py | Jam3/pytorch | 33d8769c285b51922c378d11a90a442a28e06762 | [
"Intel"
] | 19,210 | 2017-01-18T17:45:04.000Z | 2022-03-31T23:51:56.000Z | from .fbgemm import get_fbgemm_backend_config_dict
from .tensorrt import get_tensorrt_backend_config_dict
def validate_backend_config_dict(backend_config_dict):
return "quant_patterns" in backend_config_dict
| 35.5 | 54 | 0.877934 | 31 | 213 | 5.516129 | 0.451613 | 0.380117 | 0.497076 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.089202 | 213 | 5 | 55 | 42.6 | 0.881443 | 0 | 0 | 0 | 0 | 0 | 0.065728 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.25 | false | 0 | 0.5 | 0.25 | 1 | 0 | 0 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 7 |
c54191710a571c2a045f3fb2c77c928877c3ead4 | 106 | py | Python | rain/tests/test_one_dimension.py | ivankeller/discrepancy | 1e4806e4c9cdbb16ff3c1af9c591a110c4db7828 | [
"MIT"
] | null | null | null | rain/tests/test_one_dimension.py | ivankeller/discrepancy | 1e4806e4c9cdbb16ff3c1af9c591a110c4db7828 | [
"MIT"
] | null | null | null | rain/tests/test_one_dimension.py | ivankeller/discrepancy | 1e4806e4c9cdbb16ff3c1af9c591a110c4db7828 | [
"MIT"
] | null | null | null | from rain.one_dimension import unif
def test_unif():
assert unif(0.1) == 0
assert unif(0.5) == 1
| 17.666667 | 35 | 0.650943 | 19 | 106 | 3.526316 | 0.631579 | 0.298507 | 0.328358 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.072289 | 0.216981 | 106 | 5 | 36 | 21.2 | 0.73494 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.5 | 1 | 0.25 | true | 0 | 0.25 | 0 | 0.5 | 0 | 1 | 0 | 0 | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c54fb73f66ab1c513fe17e4e32d2af479f3deb09 | 39,428 | py | Python | HandCraftedModules.py | Childhoo/Chen_Matcher | ca89a4774a083d10177186020c35f60c3e8b7b37 | [
"MIT"
] | null | null | null | HandCraftedModules.py | Childhoo/Chen_Matcher | ca89a4774a083d10177186020c35f60c3e8b7b37 | [
"MIT"
] | null | null | null | HandCraftedModules.py | Childhoo/Chen_Matcher | ca89a4774a083d10177186020c35f60c3e8b7b37 | [
"MIT"
] | null | null | null | import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import math
import numpy as np
from Utils import GaussianBlur, CircularGaussKernel
from LAF import abc2A,rectifyAffineTransformationUpIsUp, sc_y_x2LAFs,sc_y_x_and_A2LAFs
from Utils import generate_2dgrid, generate_3dgrid
from Utils import zero_response_at_border
class ScalePyramid(nn.Module):
def __init__(self, nLevels = 3, init_sigma = 1.6, border = 5):
super(ScalePyramid,self).__init__()
self.nLevels = nLevels;
self.init_sigma = init_sigma
self.sigmaStep = 2 ** (1. / float(self.nLevels))
#print 'step',self.sigmaStep
self.b = border
self.minSize = 2 * self.b + 2 + 1;
return
def forward(self,x):
pixelDistance = 1.0;
curSigma = 0.5
if self.init_sigma > curSigma:
sigma = np.sqrt(self.init_sigma**2 - curSigma**2)
curSigma = self.init_sigma
curr = GaussianBlur(sigma = sigma)(x)
else:
curr = x
sigmas = [[curSigma]]
pixel_dists = [[1.0]]
pyr = [[curr]]
j = 0
while True:
curr = pyr[-1][0]
for i in range(1, self.nLevels + 2):
sigma = curSigma * np.sqrt(self.sigmaStep*self.sigmaStep - 1.0 )
#print 'blur sigma', sigma
curr = GaussianBlur(sigma = sigma)(curr)
curSigma *= self.sigmaStep
pyr[j].append(curr)
sigmas[j].append(curSigma)
pixel_dists[j].append(pixelDistance)
if i == self.nLevels:
nextOctaveFirstLevel = F.avg_pool2d(curr, kernel_size = 1, stride = 2, padding = 0)
pixelDistance = pixelDistance * 2.0
curSigma = self.init_sigma
if (nextOctaveFirstLevel[0,0,:,:].size(0) <= self.minSize) or (nextOctaveFirstLevel[0,0,:,:].size(1) <= self.minSize):
break
pyr.append([nextOctaveFirstLevel])
sigmas.append([curSigma])
pixel_dists.append([pixelDistance])
j+=1
return pyr, sigmas, pixel_dists
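# --- Usage sketch (illustrative only) ---
# A minimal sketch of building the pyramid on a dummy single-channel image.
# The 1x1x256x256 shape and the constructor arguments are illustrative
# assumptions, and Utils.GaussianBlur is assumed to accept a plain 4-D tensor.
def _scale_pyramid_demo():
    img = torch.rand(1, 1, 256, 256)
    pyr, sigmas, pixel_dists = ScalePyramid(nLevels=3, init_sigma=1.6)(img)
    # pyr[o][l] is level l of octave o; sigmas and pixel_dists mirror that layout.
    return len(pyr), sigmas[0], pixel_dists[0]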
class HessianResp(nn.Module):
def __init__(self):
super(HessianResp, self).__init__()
self.gx = nn.Conv2d(1, 1, kernel_size=(1,3), bias = False)
self.gx.weight.data = torch.from_numpy(np.array([[[[0.5, 0, -0.5]]]], dtype=np.float32))
self.gy = nn.Conv2d(1, 1, kernel_size=(3,1), bias = False)
self.gy.weight.data = torch.from_numpy(np.array([[[[0.5], [0], [-0.5]]]], dtype=np.float32))
self.gxx = nn.Conv2d(1, 1, kernel_size=(1,3),bias = False)
self.gxx.weight.data = torch.from_numpy(np.array([[[[1.0, -2.0, 1.0]]]], dtype=np.float32))
self.gyy = nn.Conv2d(1, 1, kernel_size=(3,1), bias = False)
self.gyy.weight.data = torch.from_numpy(np.array([[[[1.0], [-2.0], [1.0]]]], dtype=np.float32))
return
def forward(self, x, scale):
gxx = self.gxx(F.pad(x, (1,1,0, 0), 'replicate'))
gyy = self.gyy(F.pad(x, (0,0, 1,1), 'replicate'))
gxy = self.gy(F.pad(self.gx(F.pad(x, (1,1,0, 0), 'replicate')), (0,0, 1,1), 'replicate'))
return torch.abs(gxx * gyy - gxy * gxy) * (scale**4)
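# --- Usage sketch (illustrative only) ---
# The response is |gxx*gyy - gxy^2| scaled by scale^4, computed per pixel.
# The 64x64 input and the scale value below are illustrative assumptions.
def _hessian_resp_demo():
    level = torch.rand(1, 1, 64, 64)      # e.g. one level of the scale pyramid
    response = HessianResp()(level, 1.6)  # same spatial size as the input
    return response.shape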
class AffineShapeEstimator(nn.Module):
def __init__(self, threshold = 0.001, patch_size = 19):
super(AffineShapeEstimator, self).__init__()
self.threshold = threshold;
self.PS = patch_size
self.gx = nn.Conv2d(1, 1, kernel_size=(1,3), bias = False)
self.gx.weight.data = torch.from_numpy(np.array([[[[-1, 0, 1]]]], dtype=np.float32))
self.gy = nn.Conv2d(1, 1, kernel_size=(3,1), bias = False)
self.gy.weight.data = torch.from_numpy(np.array([[[[-1], [0], [1]]]], dtype=np.float32))
self.gk = torch.from_numpy(CircularGaussKernel(kernlen = self.PS, sigma = (self.PS / 2) /3.0).astype(np.float32))
self.gk = Variable(self.gk, requires_grad=False)
return
def invSqrt(self,a,b,c):
eps = 1e-12
mask = (b != 0).float()
r1 = mask * (c - a) / (2. * b + eps)
t1 = torch.sign(r1) / (torch.abs(r1) + torch.sqrt(1. + r1*r1));
r = 1.0 / torch.sqrt( 1. + t1*t1)
t = t1*r;
r = r * mask + 1.0 * (1.0 - mask);
t = t * mask;
x = 1. / torch.sqrt( r*r*a - 2.0*r*t*b + t*t*c)
z = 1. / torch.sqrt( t*t*a + 2.0*r*t*b + r*r*c)
d = torch.sqrt( x * z)
x = x / d
z = z / d
l1 = torch.max(x,z)
l2 = torch.min(x,z)
new_a = r*r*x + t*t*z
new_b = -r*t*x + t*r*z
new_c = t*t*x + r*r *z
return new_a, new_b, new_c, l1, l2
def forward(self,x, return_A_matrix = False):
if x.is_cuda:
self.gk = self.gk.cuda()
else:
self.gk = self.gk.cpu()
gx = self.gx(F.pad(x, (1, 1, 0, 0), 'replicate'))
gy = self.gy(F.pad(x, (0, 0, 1, 1), 'replicate'))
a1 = (gx * gx * self.gk.unsqueeze(0).unsqueeze(0).expand_as(gx)).view(x.size(0),-1).mean(dim=1)
b1 = (gx * gy * self.gk.unsqueeze(0).unsqueeze(0).expand_as(gx)).view(x.size(0),-1).mean(dim=1)
c1 = (gy * gy * self.gk.unsqueeze(0).unsqueeze(0).expand_as(gx)).view(x.size(0),-1).mean(dim=1)
a, b, c, l1, l2 = self.invSqrt(a1,b1,c1)
rat1 = l1/l2
mask = (torch.abs(rat1) <= 6.).float().view(-1);
return rectifyAffineTransformationUpIsUp(abc2A(a,b,c))#, mask
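# --- Usage sketch (illustrative only) ---
# The estimator expects patches whose side equals patch_size (19 by default)
# and, via the abc2A and rectifyAffineTransformationUpIsUp helpers, returns one
# rectified 2x2 affine shape matrix per patch. The batch size of 8 is an assumption.
def _affine_shape_demo():
    patches = torch.rand(8, 1, 19, 19)
    A = AffineShapeEstimator(patch_size=19)(patches)
    return A.shape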
class OrientationDetector(nn.Module):
def __init__(self,
mrSize = 3.0, patch_size = None):
super(OrientationDetector, self).__init__()
if patch_size is None:
patch_size = 32;
self.PS = patch_size;
self.bin_weight_kernel_size, self.bin_weight_stride = self.get_bin_weight_kernel_size_and_stride(self.PS, 1)
self.mrSize = mrSize;
self.num_ang_bins = 36
self.gx = nn.Conv2d(1, 1, kernel_size=(1,3), bias = False)
self.gx.weight.data = torch.from_numpy(np.array([[[[0.5, 0, -0.5]]]], dtype=np.float32))
self.gy = nn.Conv2d(1, 1, kernel_size=(3,1), bias = False)
self.gy.weight.data = torch.from_numpy(np.array([[[[0.5], [0], [-0.5]]]], dtype=np.float32))
self.angular_smooth = nn.Conv1d(1, 1, kernel_size=3, padding = 1, bias = False)
self.angular_smooth.weight.data = torch.from_numpy(np.array([[[0.33, 0.34, 0.33]]], dtype=np.float32))
self.gk = 10. * torch.from_numpy(CircularGaussKernel(kernlen=self.PS).astype(np.float32))
self.gk = Variable(self.gk, requires_grad=False)
return
def get_bin_weight_kernel_size_and_stride(self, patch_size, num_spatial_bins):
bin_weight_stride = int(round(2.0 * np.floor(patch_size / 2) / float(num_spatial_bins + 1)))
bin_weight_kernel_size = int(2 * bin_weight_stride - 1);
return bin_weight_kernel_size, bin_weight_stride
def get_rotation_matrix(self, angle_in_radians):
angle_in_radians = angle_in_radians.view(-1, 1, 1);
sin_a = torch.sin(angle_in_radians)
cos_a = torch.cos(angle_in_radians)
A1_x = torch.cat([cos_a, sin_a], dim = 2)
A2_x = torch.cat([-sin_a, cos_a], dim = 2)
transform = torch.cat([A1_x,A2_x], dim = 1)
return transform
def forward(self, x, return_rot_matrix = False):
gx = self.gx(F.pad(x, (1,1,0, 0), 'replicate'))
gy = self.gy(F.pad(x, (0,0, 1,1), 'replicate'))
mag = torch.sqrt(gx * gx + gy * gy + 1e-10)
if x.is_cuda:
self.gk = self.gk.cuda()
mag = mag * self.gk.unsqueeze(0).unsqueeze(0).expand_as(mag)
ori = torch.atan2(gy,gx)
o_big = float(self.num_ang_bins) *(ori + 1.0 * math.pi )/ (2.0 * math.pi)
bo0_big = torch.floor(o_big)
wo1_big = o_big - bo0_big
bo0_big = bo0_big % self.num_ang_bins
bo1_big = (bo0_big + 1) % self.num_ang_bins
wo0_big = (1.0 - wo1_big) * mag
wo1_big = wo1_big * mag
ang_bins = []
for i in range(0, self.num_ang_bins):
ang_bins.append(F.adaptive_avg_pool2d((bo0_big == i).float() * wo0_big, (1,1)))
ang_bins = torch.cat(ang_bins,1).view(-1,1,self.num_ang_bins)
ang_bins = self.angular_smooth(ang_bins)
values, indices = ang_bins.view(-1,self.num_ang_bins).max(1)
angle = -((2. * float(np.pi) * indices.float() / float(self.num_ang_bins)) - float(math.pi))
if return_rot_matrix:
return self.get_rotation_matrix(angle)
return angle
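# --- Usage sketch (illustrative only) ---
# The detector histograms gradient orientations over each patch and returns
# the dominant angle in radians, or a 2x2 rotation matrix per patch when
# return_rot_matrix=True. The batch of four 32x32 patches is an assumption.
def _orientation_detector_demo():
    patches = torch.rand(4, 1, 32, 32)
    det = OrientationDetector(patch_size=32)
    angles = det(patches)                             # shape: (4,)
    rotations = det(patches, return_rot_matrix=True)  # shape: (4, 2, 2)
    return angles, rotations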
# Find the dominant orientation using a SIFT-style histogram of gradient orientations (HoG).
class OrientationFinder(nn.Module):
def __init__(self, mrSize = 3.0, patch_size = None):
super(OrientationFinder, self).__init__()
if patch_size is None:
patch_size = 32;
self.PS = patch_size;
self.bin_weight_kernel_size, self.bin_weight_stride = self.get_bin_weight_kernel_size_and_stride(self.PS, 1)
self.mrSize = mrSize;
self.num_ang_bins = 36
self.gx = nn.Conv2d(1, 1, kernel_size=(1,3), bias = False)
# self.gx.weight.data = torch.from_numpy(np.array([[[[0.5, 0, -0.5]]]], dtype=np.float32))
self.gx.weight.data = torch.tensor(np.array([[[[0.5, 0, -0.5]]]], dtype=np.float32), requires_grad=True)
self.gy = nn.Conv2d(1, 1, kernel_size=(3,1), bias = False)
# self.gy.weight.data = torch.from_numpy(np.array([[[[0.5], [0], [-0.5]]]], dtype=np.float32))
self.gy.weight.data = torch.tensor(np.array([[[[0.5], [0], [-0.5]]]], dtype=np.float32), requires_grad=True)
self.angular_smooth = nn.Conv1d(1, 1, kernel_size=3, padding = 1, bias = False)
# self.angular_smooth.weight.data = torch.from_numpy(np.array([[[0.33, 0.34, 0.33]]], dtype=np.float32))
self.angular_smooth.weight.data = torch.tensor(np.array([[[0.33, 0.34, 0.33]]], dtype=np.float32), requires_grad=True)
# self.gk = 10. * torch.from_numpy(CircularGaussKernel(kernlen=self.PS).astype(np.float32))
self.gk = 10. * torch.tensor(CircularGaussKernel(kernlen=self.PS).astype(np.float32),requires_grad=True)
self.gk = Variable(self.gk, requires_grad=True)  # set this to True to train descriptors
return
def get_bin_weight_kernel_size_and_stride(self, patch_size, num_spatial_bins):
bin_weight_stride = int(round(2.0 * np.floor(patch_size / 2) / float(num_spatial_bins + 1)))
bin_weight_kernel_size = int(2 * bin_weight_stride - 1);
return bin_weight_kernel_size, bin_weight_stride
def get_rotation_matrix(self, angle_in_radians):
angle_in_radians = angle_in_radians.view(-1, 1, 1);
sin_a = torch.sin(angle_in_radians)
cos_a = torch.cos(angle_in_radians)
A1_x = torch.cat([cos_a, sin_a], dim = 2)
A2_x = torch.cat([-sin_a, cos_a], dim = 2)
transform = torch.cat([A1_x,A2_x], dim = 1)
return transform
def forward(self, x, return_rot_matrix = False):
gx = self.gx(F.pad(x, (1,1,0, 0), 'replicate'))
gy = self.gy(F.pad(x, (0,0, 1,1), 'replicate'))
mag = torch.sqrt(gx * gx + gy * gy + 1e-10)
if x.is_cuda:
self.gk = self.gk.cuda()
mag = mag * self.gk.unsqueeze(0).unsqueeze(0).expand_as(mag)
# filter out small gx and gy (if necessary)
ind_reserve = (mag>1e-3).type(torch.FloatTensor)
if gx.is_cuda:
gy = gy*ind_reserve.cuda()
gx = gx*ind_reserve.cuda()
mag = mag * ind_reserve.cuda()
else:
gy = gy*ind_reserve
gx = gx*ind_reserve
mag = mag * ind_reserve
# ori = torch.atan2(gy + 1e-8, gx + 1e-8)
ori = torch.atan2(gy, gx)
# o_big = torch.tensor(float(self.num_ang_bins) *(ori + 1.0 * math.pi )/ (2.0 * math.pi), requires_grad=True)
# instead of +pi, use %2pi to convert the angle to 0-2pi
# o_big = torch.tensor(float(self.num_ang_bins) *(ori % (2.0 * math.pi) )/ (2.0 * math.pi), requires_grad=True)
o_big = torch.tensor(float(self.num_ang_bins) *(torch.remainder(ori,2.0 * math.pi ) )/ (2.0 * math.pi), requires_grad=True)
bo0_big = torch.floor(o_big)
wo1_big = o_big - bo0_big
bo0_big = bo0_big % self.num_ang_bins
bo1_big = (bo0_big + 1) % self.num_ang_bins
wo0_big = (1.0 - wo1_big) * mag
wo1_big = wo1_big * mag
ang_bins = []
for i in range(0, self.num_ang_bins):
ang_bins.append(F.adaptive_avg_pool2d((bo0_big == i).float() * wo0_big + (bo1_big == i).float() * wo1_big, (1,1))) # soft-assign each pixel to its two neighbouring bins
ang_bins = torch.cat(ang_bins,1).view(-1,1,self.num_ang_bins)
ang_bins = self.angular_smooth(ang_bins)
values, indices = ang_bins.view(-1,self.num_ang_bins).max(1)
# angle = torch.tensor(-((2. * float(np.pi) * indices.float() / float(self.num_ang_bins)) - float(math.pi)), requires_grad=True)
angle = torch.tensor(-((2. * float(np.pi) * indices.float() / float(self.num_ang_bins)) - float(math.pi)), requires_grad=True)
if return_rot_matrix:
return self.get_rotation_matrix(angle)
return angle
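# --- Hedged usage sketch (added for illustration; the tensor shapes and the torch import are
# assumptions, the class name and its defaults come from the definition above) ---
# finder = OrientationFinder(patch_size=32)
# patches = torch.rand(8, 1, 32, 32)                       # batch of grayscale patches
# angles = finder(patches)                                  # dominant angle per patch, in radians
# R = finder(patches, return_rot_matrix=True)               # corresponding 2x2 rotation matrices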
# find the main orientation from the mean gradient, SURF-style
class OrientationFinder_MeanGradient(nn.Module):
def __init__(self, mrSize = 3.0, patch_size = None):
super(OrientationFinder_MeanGradient, self).__init__()
if patch_size is None:
patch_size = 32;
self.PS = patch_size;
self.bin_weight_kernel_size, self.bin_weight_stride = self.get_bin_weight_kernel_size_and_stride(self.PS, 1)
self.mrSize = mrSize;
self.num_ang_bins = 36
self.gx = nn.Conv2d(1, 1, kernel_size=(1,3), bias = False)
# self.gx.weight.data = torch.from_numpy(np.array([[[[0.5, 0, -0.5]]]], dtype=np.float32))
self.gx.weight.data = torch.tensor(np.array([[[[0.5, 0, -0.5]]]], dtype=np.float32), requires_grad=True)
self.gy = nn.Conv2d(1, 1, kernel_size=(3,1), bias = False)
# self.gy.weight.data = torch.from_numpy(np.array([[[[0.5], [0], [-0.5]]]], dtype=np.float32))
self.gy.weight.data = torch.tensor(np.array([[[[0.5], [0], [-0.5]]]], dtype=np.float32), requires_grad=True)
self.angular_smooth = nn.Conv1d(1, 1, kernel_size=3, padding = 1, bias = False)
# self.angular_smooth.weight.data = torch.from_numpy(np.array([[[0.33, 0.34, 0.33]]], dtype=np.float32))
self.angular_smooth.weight.data = torch.tensor(np.array([[[0.33, 0.34, 0.33]]], dtype=np.float32), requires_grad=True)
# self.gk = 10. * torch.from_numpy(CircularGaussKernel(kernlen=self.PS).astype(np.float32))
self.gk = 10. * torch.tensor(CircularGaussKernel(kernlen=self.PS).astype(np.float32),requires_grad=True)
self.gk = Variable(self.gk, requires_grad=True) # set this to True to train descriptors
self.avgPool = nn.AdaptiveAvgPool2d(1)
return
def get_bin_weight_kernel_size_and_stride(self, patch_size, num_spatial_bins):
bin_weight_stride = int(round(2.0 * np.floor(patch_size / 2) / float(num_spatial_bins + 1)))
bin_weight_kernel_size = int(2 * bin_weight_stride - 1);
return bin_weight_kernel_size, bin_weight_stride
def get_rotation_matrix(self, angle_in_radians):
angle_in_radians = angle_in_radians.view(-1, 1, 1);
sin_a = torch.sin(angle_in_radians)
cos_a = torch.cos(angle_in_radians)
A1_x = torch.cat([cos_a, sin_a], dim = 2)
A2_x = torch.cat([-sin_a, cos_a], dim = 2)
transform = torch.cat([A1_x,A2_x], dim = 1)
return transform
def forward(self, x, return_rot_matrix = False):
gx = self.gx(F.pad(x, (1,1,0, 0), 'replicate'))
gy = self.gy(F.pad(x, (0,0, 1,1), 'replicate'))
# mag = torch.sqrt(gx * gx + gy * gy + 1e-10)
if x.is_cuda:
self.gk = self.gk.cuda()
gx = gx * self.gk.unsqueeze(0).unsqueeze(0).expand_as(gx)
gy = gy * self.gk.unsqueeze(0).unsqueeze(0).expand_as(gx)
# smooth gx and gy and then derive the average gradient in the x and y directions
gx_mean = self.avgPool(gx)
gy_mean = self.avgPool(gy)
mod = torch.sqrt(gx_mean * gx_mean + gy_mean * gy_mean + 1e-12)
cs = torch.cat([gx_mean, gy_mean], dim=1).squeeze(2).squeeze(2)
cs_norm = cs/mod.squeeze(2).squeeze(2)
angles = torch.atan2(cs_norm[:,0], cs_norm[:,1])
return angles
# returns the normalized [cos(theta), sin(theta)] vector (CS norm) instead of an angle
class OrientationFinder_MeanGradient_CS(nn.Module):
def __init__(self, mrSize = 3.0, patch_size = None):
super(OrientationFinder_MeanGradient_CS, self).__init__()
if patch_size is None:
patch_size = 32;
self.PS = patch_size;
self.bin_weight_kernel_size, self.bin_weight_stride = self.get_bin_weight_kernel_size_and_stride(self.PS, 1)
self.mrSize = mrSize;
self.num_ang_bins = 36
self.gx = nn.Conv2d(1, 1, kernel_size=(1,3), bias = False)
# self.gx.weight.data = torch.from_numpy(np.array([[[[0.5, 0, -0.5]]]], dtype=np.float32))
self.gx.weight.data = torch.tensor(np.array([[[[0.5, 0, -0.5]]]], dtype=np.float32), requires_grad=True)
self.gy = nn.Conv2d(1, 1, kernel_size=(3,1), bias = False)
# self.gy.weight.data = torch.from_numpy(np.array([[[[0.5], [0], [-0.5]]]], dtype=np.float32))
self.gy.weight.data = torch.tensor(np.array([[[[0.5], [0], [-0.5]]]], dtype=np.float32), requires_grad=True)
self.angular_smooth = nn.Conv1d(1, 1, kernel_size=3, padding = 1, bias = False)
# self.angular_smooth.weight.data = torch.from_numpy(np.array([[[0.33, 0.34, 0.33]]], dtype=np.float32))
self.angular_smooth.weight.data = torch.tensor(np.array([[[0.33, 0.34, 0.33]]], dtype=np.float32), requires_grad=True)
# self.gk = 10. * torch.from_numpy(CircularGaussKernel(kernlen=self.PS).astype(np.float32))
self.gk = 10. * torch.tensor(CircularGaussKernel(kernlen=self.PS).astype(np.float32),requires_grad=True)
self.gk = Variable(self.gk, requires_grad=True) # set this to True to train descriptors
self.avgPool = nn.AdaptiveAvgPool2d(1)
return
def get_bin_weight_kernel_size_and_stride(self, patch_size, num_spatial_bins):
bin_weight_stride = int(round(2.0 * np.floor(patch_size / 2) / float(num_spatial_bins + 1)))
bin_weight_kernel_size = int(2 * bin_weight_stride - 1);
return bin_weight_kernel_size, bin_weight_stride
def get_rotation_matrix(self, angle_in_radians):
angle_in_radians = angle_in_radians.view(-1, 1, 1);
sin_a = torch.sin(angle_in_radians)
cos_a = torch.cos(angle_in_radians)
A1_x = torch.cat([cos_a, sin_a], dim = 2)
A2_x = torch.cat([-sin_a, cos_a], dim = 2)
transform = torch.cat([A1_x,A2_x], dim = 1)
return transform
def forward(self, x, return_rot_matrix = False):
gx = self.gx(F.pad(x, (1,1,0, 0), 'replicate'))
gy = self.gy(F.pad(x, (0,0, 1,1), 'replicate'))
# mag = torch.sqrt(gx * gx + gy * gy + 1e-10)
if x.is_cuda:
self.gk = self.gk.cuda()
gx = gx * self.gk.unsqueeze(0).unsqueeze(0).expand_as(gx)
gy = gy * self.gk.unsqueeze(0).unsqueeze(0).expand_as(gx)
# smooth gx and gy and then derive the average gradient in the x and y directions
gx_mean = self.avgPool(gx)
gy_mean = self.avgPool(gy)
mod = torch.sqrt(gx_mean * gx_mean + gy_mean * gy_mean + 1e-12)
cs = torch.cat([gx_mean, gy_mean], dim=1).squeeze(2).squeeze(2)
cs_norm = cs/mod.squeeze(2).squeeze(2)
# angles = torch.atan2(cs_norm[:,0], cs_norm[:,1])
return cs_norm
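# --- Hedged usage sketch (added for illustration; the batch shape is an assumption) ---
# cs = OrientationFinder_MeanGradient_CS()(torch.rand(8, 1, 32, 32))   # (8, 2) unit vectors
# an angle can be recovered as in the commented-out line above:
# angles = torch.atan2(cs[:, 0], cs[:, 1])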
# output statistics of the second moment matrix (structure tensor) of an input patch
class SecondMomentum(nn.Module):
def __init__(self, threshold = 0.001, patch_size = 19):
super(SecondMomentum, self).__init__()
self.threshold = threshold;
self.PS = patch_size
self.gx = nn.Conv2d(1, 1, kernel_size=(1,3), bias = False)
# self.gx.weight.data = torch.from_numpy(np.array([[[[-1, 0, 1]]]], dtype=np.float32))
self.gx.weight.data = torch.tensor(np.array([[[[-1, 0, 1]]]], dtype=np.float32), requires_grad=True)
self.gy = nn.Conv2d(1, 1, kernel_size=(3,1), bias = False)
# self.gy.weight.data = torch.from_numpy(np.array([[[[-1], [0], [1]]]], dtype=np.float32))
self.gy.weight.data = torch.tensor(np.array([[[[-1], [0], [1]]]], dtype=np.float32), requires_grad=True)
self.gk = torch.tensor((CircularGaussKernel(kernlen=self.PS,sigma = (self.PS / 2) /3.0).astype(np.float32)).astype(np.float32), requires_grad=True)
# self.gk = torch.from_numpy(CircularGaussKernel(kernlen = self.PS, sigma = (self.PS / 2) /3.0).astype(np.float32))
self.gk = Variable(self.gk, requires_grad=True)
return
def invSqrt(self,a,b,c):
eps = 1e-12
mask = (b != 0).float()
r1 = mask * (c - a) / (2. * b + eps)
t1 = torch.sign(r1) / (torch.abs(r1) + torch.sqrt(1. + r1*r1));
r = 1.0 / torch.sqrt( 1. + t1*t1)
t = t1*r;
r = r * mask + 1.0 * (1.0 - mask);
t = t * mask;
x = 1. / torch.sqrt( r*r*a - 2.0*r*t*b + t*t*c)
z = 1. / torch.sqrt( t*t*a + 2.0*r*t*b + r*r*c)
d = torch.sqrt( x * z)
x = x / d
z = z / d
l1 = torch.max(x,z)
l2 = torch.min(x,z)
new_a = r*r*x + t*t*z
new_b = -r*t*x + t*r*z
new_c = t*t*x + r*r *z
return new_a, new_b, new_c, l1, l2
def derive_eig(self, a, b, c, return_norm_b = False):
# derive the eigenvalues: https://croninprojects.org/Vince/Geodesy/FindingEigenvectors.pdf
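# For the symmetric 2x2 second moment matrix M = [[a, b], [b, c]]:
# trace(M) = a + c and the eigenvalues are 0.5 * (trace +/- sqrt((a - c)^2 + 4*b^2)),
# which is exactly what qq, pp, eig1 and eig2 compute below.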
qq = a + c
pp = torch.sqrt(4*b*b + (a-c)*(a-c) +1e-12)
eig1 = 0.5*(qq + pp)
eig2 = 0.5*(qq - pp)
# eigs = torch.cat([eig1, eig2], dim=1)
eigs = torch.abs(torch.cat([eig1, eig2], dim=1)) #change to make sure that each eigenvalue is >=0
eig_l1, ind1 = torch.max(eigs,1)
eig_l2, ind2 = torch.min(eigs,1)
ratios = eig_l2/(eig_l1 + 1e-12)
if return_norm_b:
deter = torch.sqrt(torch.abs(a*c - b*b)+1e-12)
#return the normalized b
norm_b = b/deter
return ratios, norm_b
else:
return ratios
def forward(self,x, return_A_matrix = False, return_norm_b = False):
if x.is_cuda:
self.gk = self.gk.cuda()
else:
self.gk = self.gk.cpu()
gx = self.gx(F.pad(x, (1, 1, 0, 0), 'replicate'))
gy = self.gy(F.pad(x, (0, 0, 1, 1), 'replicate'))
a1 = (gx * gx * self.gk.unsqueeze(0).unsqueeze(0).expand_as(gx)).view(x.size(0),-1).mean(dim=1)
b1 = (gx * gy * self.gk.unsqueeze(0).unsqueeze(0).expand_as(gx)).view(x.size(0),-1).mean(dim=1)
c1 = (gy * gy * self.gk.unsqueeze(0).unsqueeze(0).expand_as(gx)).view(x.size(0),-1).mean(dim=1)
a1 = a1.view(x.size(0),-1)
b1 = b1.view(x.size(0),-1)
c1 = c1.view(x.size(0),-1)
if return_norm_b:
ratios, norm_b = self.derive_eig(a1, b1, c1, return_norm_b = True)
return ratios, norm_b
else:
ratios = self.derive_eig(a1, b1, c1)
ratios = ratios.view(x.size(0),-1)
return ratios#, mask
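# --- Hedged usage sketch (added for illustration; the 19x19 patch size follows the default above) ---
# sm = SecondMomentum(patch_size=19)
# ratios = sm(torch.rand(8, 1, 19, 19))                              # lambda_min / lambda_max per patch
# ratios, norm_b = sm(torch.rand(8, 1, 19, 19), return_norm_b=True)  # also get the normalized b term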
# second moment matrix with a smaller integration scale
class SecondMomentum_smaller_integration_scale(nn.Module):
def __init__(self, threshold = 0.001, patch_size = 19):
super(SecondMomentum_smaller_integration_scale, self).__init__()
self.threshold = threshold;
self.PS = patch_size
self.gx = nn.Conv2d(1, 1, kernel_size=(1,3), bias = False)
# self.gx.weight.data = torch.from_numpy(np.array([[[[-1, 0, 1]]]], dtype=np.float32))
self.gx.weight.data = torch.tensor(np.array([[[[-1, 0, 1]]]], dtype=np.float32), requires_grad=True)
self.gy = nn.Conv2d(1, 1, kernel_size=(3,1), bias = False)
# self.gy.weight.data = torch.from_numpy(np.array([[[[-1], [0], [1]]]], dtype=np.float32))
self.gy.weight.data = torch.tensor(np.array([[[[-1], [0], [1]]]], dtype=np.float32), requires_grad=True)
self.gk = torch.tensor((CircularGaussKernel(kernlen=self.PS, sigma = (self.PS / 2) /3.0).astype(np.float32)).astype(np.float32), requires_grad=True)
# self.gk = torch.from_numpy(CircularGaussKernel(kernlen = self.PS, sigma = (self.PS / 2) /3.0).astype(np.float32))
self.gk = Variable(self.gk, requires_grad=True)
return
def invSqrt(self,a,b,c):
eps = 1e-12
mask = (b != 0).float()
r1 = mask * (c - a) / (2. * b + eps)
t1 = torch.sign(r1) / (torch.abs(r1) + torch.sqrt(1. + r1*r1));
r = 1.0 / torch.sqrt( 1. + t1*t1)
t = t1*r;
r = r * mask + 1.0 * (1.0 - mask);
t = t * mask;
x = 1. / torch.sqrt( r*r*a - 2.0*r*t*b + t*t*c)
z = 1. / torch.sqrt( t*t*a + 2.0*r*t*b + r*r*c)
d = torch.sqrt( x * z)
x = x / d
z = z / d
l1 = torch.max(x,z)
l2 = torch.min(x,z)
new_a = r*r*x + t*t*z
new_b = -r*t*x + t*r*z
new_c = t*t*x + r*r *z
return new_a, new_b, new_c, l1, l2
def derive_eig(self, a, b, c, return_norm_b = False):
# derive the eigenvalues: https://croninprojects.org/Vince/Geodesy/FindingEigenvectors.pdf
qq = a + c
pp = torch.sqrt(4*b*b + (a-c)*(a-c) +1e-12)
eig1 = 0.5*(qq + pp)
eig2 = 0.5*(qq - pp)
eigs = torch.cat([eig1, eig2], dim=1)
eig_l1, ind1 = torch.max(eigs,1)
eig_l2, ind2 = torch.min(eigs,1)
ratios = eig_l2/(eig_l1 + 1e-12)
if return_norm_b:
deter = torch.sqrt(torch.abs(a*c - b*b)+1e-12)
#return the normalized b
norm_b = b/deter
return ratios, norm_b
else:
return ratios
def forward(self,x, return_A_matrix = False, return_norm_b = False):
if x.is_cuda:
self.gk = self.gk.cuda()
else:
self.gk = self.gk.cpu()
gx = self.gx(F.pad(x, (1, 1, 0, 0), 'replicate'))
gy = self.gy(F.pad(x, (0, 0, 1, 1), 'replicate'))
a1 = (gx * gx * self.gk.unsqueeze(0).unsqueeze(0).expand_as(gx)).view(x.size(0),-1).mean(dim=1)
b1 = (gx * gy * self.gk.unsqueeze(0).unsqueeze(0).expand_as(gx)).view(x.size(0),-1).mean(dim=1)
c1 = (gy * gy * self.gk.unsqueeze(0).unsqueeze(0).expand_as(gx)).view(x.size(0),-1).mean(dim=1)
a1 = a1.view(x.size(0),-1)
b1 = b1.view(x.size(0),-1)
c1 = c1.view(x.size(0),-1)
if return_norm_b:
ratios, norm_b = self.derive_eig(a1, b1, c1, return_norm_b = True)
return ratios, norm_b
else:
ratios = self.derive_eig(a1, b1, c1)
ratios = ratios.view(x.size(0),-1)
return ratios#, mask
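# NOTE (descriptive comment, an assumption about intent): apart from its name this class currently
# mirrors SecondMomentum above; a genuinely smaller integration scale would presumably use a smaller
# sigma when building self.gk in __init__.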
class SecondMomentum_Ratio_Skew(nn.Module):
def __init__(self, threshold = 0.001, patch_size = 19):
super(SecondMomentum_Ratio_Skew, self).__init__()
self.threshold = threshold;
self.PS = patch_size
self.gx = nn.Conv2d(1, 1, kernel_size=(1,3), bias = False)
# self.gx.weight.data = torch.from_numpy(np.array([[[[-1, 0, 1]]]], dtype=np.float32))
self.gx.weight.data = torch.tensor(np.array([[[[-1, 0, 1]]]], dtype=np.float32), requires_grad=True)
self.gy = nn.Conv2d(1, 1, kernel_size=(3,1), bias = False)
# self.gy.weight.data = torch.from_numpy(np.array([[[[-1], [0], [1]]]], dtype=np.float32))
self.gy.weight.data = torch.tensor(np.array([[[[-1], [0], [1]]]], dtype=np.float32), requires_grad=True)
self.gk = torch.tensor((CircularGaussKernel(kernlen=self.PS,sigma = (self.PS / 2) /3.0).astype(np.float32)).astype(np.float32), requires_grad=True)
# self.gk = torch.from_numpy(CircularGaussKernel(kernlen = self.PS, sigma = (self.PS / 2) /3.0).astype(np.float32))
self.gk = Variable(self.gk, requires_grad=True)
return
def invSqrt(self,a,b,c):
eps = 1e-12
mask = (b != 0).float()
r1 = mask * (c - a) / (2. * b + eps)
t1 = torch.sign(r1) / (torch.abs(r1) + torch.sqrt(1. + r1*r1));
r = 1.0 / torch.sqrt( 1. + t1*t1)
t = t1*r;
r = r * mask + 1.0 * (1.0 - mask);
t = t * mask;
x = 1. / torch.sqrt( r*r*a - 2.0*r*t*b + t*t*c)
z = 1. / torch.sqrt( t*t*a + 2.0*r*t*b + r*r*c)
d = torch.sqrt( x * z)
x = x / d
z = z / d
l1 = torch.max(x,z)
l2 = torch.min(x,z)
new_a = r*r*x + t*t*z
new_b = -r*t*x + t*r*z
new_c = t*t*x + r*r *z
return new_a, new_b, new_c, l1, l2
def derive_eig(self, a, b, c, return_skew = False):
# derive the eigenvalues: https://croninprojects.org/Vince/Geodesy/FindingEigenvectors.pdf
qq = a + c
pp = torch.sqrt(4*b*b + (a-c)*(a-c) +1e-12)
eig1 = 0.5*(qq + pp)
eig2 = 0.5*(qq - pp)
eigs = torch.cat([eig1, eig2], dim=1)
eig_l1, ind1 = torch.max(eigs,1)
eig_l2, ind2 = torch.min(eigs,1)
ratios = eig_l2/(eig_l1 + 1e-12)
if return_skew:
# deter = torch.sqrt(torch.abs(a*c - b*b)+1e-12)
#return the normalized b
deter = torch.sqrt(torch.abs(b*b + (eig_l2.view(-1, 1)-c)*(eig_l2.view(-1, 1)-c))+1e-12)
norm_b = b/deter
eig_l2 = eig_l2.view(-1, 1)/deter
eig_l1 = eig_l1.view(-1, 1)/deter
norm_c = c/deter
norm_a = a/deter
norm_b = norm_b.view(-1, 1, 1)
eig_l2 = eig_l2.view(-1, 1, 1)
eig_l1 = eig_l1.view(-1, 1, 1)
A1_x = torch.cat([norm_b, eig_l2-norm_c.view(-1, 1, 1)], dim = 2)
A2_x = torch.cat([eig_l1-norm_a.view(-1, 1, 1), norm_b], dim = 2)
skew_R = torch.cat([A1_x,A2_x], dim = 1)
return ratios, skew_R
else:
return ratios
def forward(self,x, return_A_matrix = False, return_skew = False):
if x.is_cuda:
self.gk = self.gk.cuda()
else:
self.gk = self.gk.cpu()
gx = self.gx(F.pad(x, (1, 1, 0, 0), 'replicate'))
gy = self.gy(F.pad(x, (0, 0, 1, 1), 'replicate'))
a1 = (gx * gx * self.gk.unsqueeze(0).unsqueeze(0).expand_as(gx)).view(x.size(0),-1).mean(dim=1)
b1 = (gx * gy * self.gk.unsqueeze(0).unsqueeze(0).expand_as(gx)).view(x.size(0),-1).mean(dim=1)
c1 = (gy * gy * self.gk.unsqueeze(0).unsqueeze(0).expand_as(gx)).view(x.size(0),-1).mean(dim=1)
a1 = a1.view(x.size(0),-1)
b1 = b1.view(x.size(0),-1)
c1 = c1.view(x.size(0),-1)
if return_skew:
ratios, skew_R = self.derive_eig(a1, b1, c1, return_skew = True)
return ratios, skew_R
else:
ratios = self.derive_eig(a1, b1, c1)
ratios = ratios.view(x.size(0),-1)
return ratios#, mask
class NMS2d(nn.Module):
def __init__(self, kernel_size = 3, threshold = 0):
super(NMS2d, self).__init__()
self.MP = nn.MaxPool2d(kernel_size, stride=1, return_indices=False, padding = kernel_size//2) # integer division, as in NMS3d below
self.eps = 1e-5
self.th = threshold
return
def forward(self, x):
#local_maxima = self.MP(x)
if self.th > self.eps:
return x * (x > self.th).float() * ((x + self.eps - self.MP(x)) > 0).float()
else:
return ((x - self.MP(x) + self.eps) > 0).float() * x
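# Both NMS modules (NMS2d above, NMS3d below) use the max-pool trick for non-maximum suppression:
# a same-size, stride-1 max pool gives every pixel the value of its local maximum, so a pixel
# survives (x - MP(x) + eps > 0) only if it *is* that local maximum; all other responses are zeroed.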
class NMS3d(nn.Module):
def __init__(self, kernel_size = 3, threshold = 0):
super(NMS3d, self).__init__()
self.MP = nn.MaxPool3d(kernel_size, stride=1, return_indices=False, padding = (0, kernel_size//2, kernel_size//2))
self.eps = 1e-5
self.th = threshold
return
def forward(self, x):
#local_maxima = self.MP(x)
if self.th > self.eps:
return x * (x > self.th).float() * ((x + self.eps - self.MP(x)) > 0).float()
else:
return ((x - self.MP(x) + self.eps) > 0).float() * x
class NMS3dAndComposeA(nn.Module):
def __init__(self, w = 0, h = 0, kernel_size = 3, threshold = 0, scales = None, border = 3, mrSize = 1.0):
super(NMS3dAndComposeA, self).__init__()
self.eps = 1e-7
self.ks = 3
self.th = threshold
self.cube_idxs = []
self.border = border
self.mrSize = mrSize
self.beta = 1.0
self.grid_ones = Variable(torch.ones(3,3,3,3), requires_grad=False)
self.NMS3d = NMS3d(kernel_size, threshold)
if (w > 0) and (h > 0):
self.spatial_grid = generate_2dgrid(h, w, False).view(1, h, w,2).permute(3,1, 2, 0)
self.spatial_grid = Variable(self.spatial_grid)
else:
self.spatial_grid = None
return
def forward(self, low, cur, high, num_features = 0, octaveMap = None, scales = None):
assert low.size() == cur.size() == high.size()
#Filter response map
self.is_cuda = low.is_cuda;
resp3d = torch.cat([low,cur,high], dim = 1)
mrSize_border = int(self.mrSize);
if octaveMap is not None:
nmsed_resp = zero_response_at_border(self.NMS3d(resp3d.unsqueeze(1)).squeeze(1)[:,1:2,:,:], mrSize_border) * (1. - octaveMap.float())
else:
nmsed_resp = zero_response_at_border(self.NMS3d(resp3d.unsqueeze(1)).squeeze(1)[:,1:2,:,:], mrSize_border)
num_of_nonzero_responces = (nmsed_resp > 0).float().sum().item()#data[0]
if (num_of_nonzero_responces <= 1):
return None,None,None
if octaveMap is not None:
octaveMap = (octaveMap.float() + nmsed_resp.float()).byte()
nmsed_resp = nmsed_resp.view(-1)
if (num_features > 0) and (num_features < num_of_nonzero_responces):
nmsed_resp, idxs = torch.topk(nmsed_resp, k = num_features, dim = 0);
else:
idxs = nmsed_resp.data.nonzero().squeeze()
nmsed_resp = nmsed_resp[idxs]
#Get point coordinates grid
if type(scales) is not list:
self.grid = generate_3dgrid(3,self.ks,self.ks)
else:
self.grid = generate_3dgrid(scales,self.ks,self.ks)
self.grid = Variable(self.grid.t().contiguous().view(3,3,3,3), requires_grad=False)
if self.spatial_grid is None:
self.spatial_grid = generate_2dgrid(low.size(2), low.size(3), False).view(1, low.size(2), low.size(3),2).permute(3,1, 2, 0)
self.spatial_grid = Variable(self.spatial_grid)
if self.is_cuda:
self.spatial_grid = self.spatial_grid.cuda()
self.grid_ones = self.grid_ones.cuda()
self.grid = self.grid.cuda()
#residual_to_patch_center
sc_y_x = F.conv2d(resp3d, self.grid,
padding = 1) / (F.conv2d(resp3d, self.grid_ones, padding = 1) + 1e-8)
##maxima coords
sc_y_x[0,1:,:,:] = sc_y_x[0,1:,:,:] + self.spatial_grid[:,:,:,0]
sc_y_x = sc_y_x.view(3,-1).t()
sc_y_x = sc_y_x[idxs,:]
min_size = float(min((cur.size(2)), cur.size(3)))
sc_y_x[:,0] = sc_y_x[:,0] / min_size
sc_y_x[:,1] = sc_y_x[:,1] / float(cur.size(2))
sc_y_x[:,2] = sc_y_x[:,2] / float(cur.size(3))
return nmsed_resp, sc_y_x2LAFs(sc_y_x), octaveMap
class NMS3dAndComposeAAff(nn.Module):
def __init__(self, w = 0, h = 0, kernel_size = 3, threshold = 0, scales = None, border = 3, mrSize = 1.0):
super(NMS3dAndComposeAAff, self).__init__()
self.eps = 1e-7
self.ks = 3
self.th = threshold
self.cube_idxs = []
self.border = border
self.mrSize = mrSize
self.beta = 1.0
self.grid_ones = Variable(torch.ones(3,3,3,3), requires_grad=False)
self.NMS3d = NMS3d(kernel_size, threshold)
if (w > 0) and (h > 0):
self.spatial_grid = generate_2dgrid(h, w, False).view(1, h, w,2).permute(3,1, 2, 0)
self.spatial_grid = Variable(self.spatial_grid)
else:
self.spatial_grid = None
return
def forward(self, low, cur, high, num_features = 0, octaveMap = None, scales = None, aff_resp = None):
assert low.size() == cur.size() == high.size()
#Filter response map
self.is_cuda = low.is_cuda;
resp3d = torch.cat([low,cur,high], dim = 1)
mrSize_border = int(self.mrSize);
if octaveMap is not None:
nmsed_resp = zero_response_at_border(self.NMS3d(resp3d.unsqueeze(1)).squeeze(1)[:,1:2,:,:], mrSize_border) * (1. - octaveMap.float())
else:
nmsed_resp = zero_response_at_border(self.NMS3d(resp3d.unsqueeze(1)).squeeze(1)[:,1:2,:,:], mrSize_border)
num_of_nonzero_responces = (nmsed_resp > 0).float().sum().item()#data[0]
if (num_of_nonzero_responces <= 1):
return None,None,None
if octaveMap is not None:
octaveMap = (octaveMap.float() + nmsed_resp.float()).byte()
nmsed_resp = nmsed_resp.view(-1)
if (num_features > 0) and (num_features < num_of_nonzero_responces):
nmsed_resp, idxs = torch.topk(nmsed_resp, k = num_features, dim = 0);
else:
idxs = nmsed_resp.data.nonzero().squeeze()
nmsed_resp = nmsed_resp[idxs]
#Get point coordinates grid
if type(scales) is not list:
self.grid = generate_3dgrid(3,self.ks,self.ks)
else:
self.grid = generate_3dgrid(scales,self.ks,self.ks)
self.grid = Variable(self.grid.t().contiguous().view(3,3,3,3), requires_grad=False)
if self.spatial_grid is None:
self.spatial_grid = generate_2dgrid(low.size(2), low.size(3), False).view(1, low.size(2), low.size(3),2).permute(3,1, 2, 0)
self.spatial_grid = Variable(self.spatial_grid)
if self.is_cuda:
self.spatial_grid = self.spatial_grid.cuda()
self.grid_ones = self.grid_ones.cuda()
self.grid = self.grid.cuda()
#residual_to_patch_center
sc_y_x = F.conv2d(resp3d, self.grid,
padding = 1) / (F.conv2d(resp3d, self.grid_ones, padding = 1) + 1e-8)
##maxima coords
sc_y_x[0,1:,:,:] = sc_y_x[0,1:,:,:] + self.spatial_grid[:,:,:,0]
sc_y_x = sc_y_x.view(3,-1).t()
sc_y_x = sc_y_x[idxs,:]
if aff_resp is not None:
A_matrices = aff_resp.view(4,-1).t()[idxs,:]
min_size = float(min((cur.size(2)), cur.size(3)))
sc_y_x[:,0] = sc_y_x[:,0] / min_size
sc_y_x[:,1] = sc_y_x[:,1] / float(cur.size(2))
sc_y_x[:,2] = sc_y_x[:,2] / float(cur.size(3))
return nmsed_resp, sc_y_x_and_A2LAFs(sc_y_x,A_matrices), octaveMap
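# NOTE (descriptive comment): NMS3dAndComposeAAff differs from NMS3dAndComposeA above only in accepting
# an optional per-pixel affine response (aff_resp); the selected A matrices are gathered at the surviving
# keypoint indices and passed to sc_y_x_and_A2LAFs instead of sc_y_x2LAFs.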
| 47.219162 | 156 | 0.572892 | 6,087 | 39,428 | 3.547068 | 0.052571 | 0.006855 | 0.027095 | 0.013339 | 0.883377 | 0.872169 | 0.867352 | 0.86147 | 0.854754 | 0.852068 | 0 | 0.050346 | 0.267018 | 39,428 | 834 | 157 | 47.275779 | 0.696747 | 0.104139 | 0 | 0.807636 | 0 | 0 | 0.005106 | 0 | 0 | 0 | 0 | 0 | 0.002937 | 1 | 0.063142 | false | 0 | 0.014684 | 0 | 0.179148 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
c56dd51634ad148cd9d086f5f6db3a92486b0d79 | 1,668 | py | Python | tests/test_strava_local_heatmap.py | j-hiller/Strava-export-local-heatmap | 75b5c1244d2ea2bcf67396e4b4559e2f0231d22a | [
"MIT"
] | 1 | 2020-06-29T08:00:53.000Z | 2020-06-29T08:00:53.000Z | tests/test_strava_local_heatmap.py | j-hiller/Strava-export-local-heatmap | 75b5c1244d2ea2bcf67396e4b4559e2f0231d22a | [
"MIT"
] | null | null | null | tests/test_strava_local_heatmap.py | j-hiller/Strava-export-local-heatmap | 75b5c1244d2ea2bcf67396e4b4559e2f0231d22a | [
"MIT"
] | null | null | null | import unittest
import strava_local_heatmap
class TestMonthExtraction(unittest.TestCase):
def test_simple_range(self):
start, stop = strava_local_heatmap.extract_start_stop_from_month('1-12')
self.assertEqual((start, stop), (1, 13))
def test_shorter_range(self):
start, stop = strava_local_heatmap.extract_start_stop_from_month('3-9')
self.assertEqual((start, stop), (3, 10))
def test_lower_out_of_range(self):
start, stop = strava_local_heatmap.extract_start_stop_from_month('0-12')
self.assertEqual((start, stop), (1, 13))
def test_upper_out_of_range(self):
start, stop = strava_local_heatmap.extract_start_stop_from_month('1-13')
self.assertEqual((start, stop), (1, 13))
def test_both_out_of_ranage(self):
start, stop = strava_local_heatmap.extract_start_stop_from_month('0-13')
self.assertEqual((start, stop), (1, 13))
def test_wrong_order(self):
start, stop = strava_local_heatmap.extract_start_stop_from_month('12-1')
self.assertEqual((start, stop), (1, 13))
def test_wrong_order_lower_out_of_range(self):
start, stop = strava_local_heatmap.extract_start_stop_from_month('12-0')
self.assertEqual((start, stop), (1, 13))
def test_wrong_order_upper_out_of_range(self):
start, stop = strava_local_heatmap.extract_start_stop_from_month('13-1')
self.assertEqual((start, stop), (1, 13))
def test_wrong_format(self):
start, stop = strava_local_heatmap.extract_start_stop_from_month('1-')
self.assertEqual((start, stop), (1, 13))
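# Hedged reading of the behaviour exercised above (inferred from the assertions, not from
# strava_local_heatmap itself): extract_start_stop_from_month('a-b') appears to clamp month
# numbers to 1..12, return (start, stop_exclusive) so that '3-9' -> (3, 10), and fall back to
# the full year (1, 13) for reversed or malformed ranges.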
if __name__ == '__main__':
unittest.main()
| 36.26087 | 80 | 0.700839 | 235 | 1,668 | 4.565957 | 0.165957 | 0.226468 | 0.167754 | 0.159366 | 0.832246 | 0.832246 | 0.832246 | 0.807083 | 0.807083 | 0.704567 | 0 | 0.037226 | 0.178657 | 1,668 | 45 | 81 | 37.066667 | 0.745985 | 0 | 0 | 0.25 | 0 | 0 | 0.02458 | 0 | 0 | 0 | 0 | 0 | 0.28125 | 1 | 0.28125 | false | 0 | 0.0625 | 0 | 0.375 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
9af48aecae0b31004c945fd8d788d78b051318d4 | 9,990 | py | Python | miso/utils/wave.py | Thubaralei/particle-classification | 01d174e48aae1bb18a411008bf7ae92756e32892 | [
"MIT"
] | 1 | 2021-11-16T16:46:35.000Z | 2021-11-16T16:46:35.000Z | miso/utils/wave.py | Thubaralei/particle-classification | 01d174e48aae1bb18a411008bf7ae92756e32892 | [
"MIT"
] | null | null | null | miso/utils/wave.py | Thubaralei/particle-classification | 01d174e48aae1bb18a411008bf7ae92756e32892 | [
"MIT"
] | null | null | null | def wave():
print("+----------------------------------------------------------------------------------+")
# print(",,,............................................. ... ... .......... . ..... .. . . .. . . ... ........ . ....")
# print(",,,......................................... ..... .. . . ..... ... .... .. . ....")
# print(",,,.................................... ..... . . .. .. . . ...")
# print(",,......,,,,,,........................... . . . . . . . . ..... . ...")
# print(",......,..(. ,......................... .... . ,.,.,, ... ... . . ..... ...")
# print(",......,../. ,....................... . .../. .. . . .. ..")
# print(",,.....,..,, ,..................... . .* ,* . .*,. ,. . . . ... ..")
# print(",......,..*. ,.... ......... * . ** ., *, . . . .. . . .")
# print(",.....,,..** *............ ,, .*/ * ../, . . ...... . ...... ...")
# print(",,*/,,.,..(* *..... ..... / .. ,*,*,/ ./ .,, .,/.* .*,*., .. . . . ........... ............")
# print("...,/,.....,,*......... . ( .*/. ,*.,, .%* ..,..**, ,.., //.. ...... . . ...........................")
# print(",.,*(,,,...,.,......... .. .**.(%%%.. //#%%,#%%,*,*. ..*/.*..,,/.*....... . ...............................")
# print(",,,((,,,.,(..,,........../. ..*..* .,*%#%%,/*.%%%%%%. , **.*....,,...... . . ........ .......... ............")
# print(",,,/*,,,.,...,,.,...../ .,,,****(%%%%#%%%%(%#%%%%%,,.*. . *. .*,*. . . ........................ ...........")
# print(",,,(*,,,,//,*.,,,...,, ,%%*%#%.%%%/%%#%%%%%#%%%%%(%% , /(**. .*/.,, ..,..* ,. . . .....................................")
# print(",,,*/,,,,,,,,,,,,..* %%#%%%%#%%%/(%#%%%#%%%/%%#%%%%/#(#/. ,... ,.. *. ...,.. ................................")
# print(",,,,*,,,,,,,,,,,,/ %%#%%%(%%%(/%%%%#/%%%%%#%%%%%%%*(* ,. */ ,, . *, ,*. ..............................")
# print(",,,,,.,,,,,,,,,/.. ##%%((%%%#%%%%#(%%%%/%%%/%%%#(#. . .,. *,,, . ,. ../ ** . . . . . ...... . ....")
# print(",,,,,,,,,,,,*/. (%#*%%%%(%%%#(%%*%&%##(##,., * ,**..* ,,, / . . .. .")
# print(",,,,,,**((#... . .., **.,#%###(%%%###%%%%*###.. *** * . *. .*,*, . .")
# print("#%%#%(#**.. . . .., .,..,* ,/** .*%(%%%##%(%.%%*%%%/#, ,*,,* *. ., ,*.,.*. . . . ")
# print("####.... . . ...,....*/,. ... .*..,.* /#(. ,.#/###%#%%### /. , ,.. * . .")
# print("............ ,*... .*.,,** *, *,,..**/ (*%%(#%%%%###( . ,.* ../ . , ")
# print("...,,,.,,,*,..,,/,*, . **,,*..* ,.,, ,. **,..,, ,,..(%%###* . ,* /.... . .. ")
# print("......,*(*/,*,,,*%##/,(.,,../** .,..%%/* #/(%(. ..,, /#%%(%%## . . . .. ....... . , ")
# print("...,*/*,//.**. (%##%,%%,... ,*,,/. %%#*#%%%#//%# #%%#%/%%(# . . . ")
# print(".,*,,*,*,/,.,,,***/(%*%%% ,./,,,., ,(,./#%%/#*%% %%(%%##%%%/* . . .,*,")
# print("***,,,,,%%*..,*..(*###%%(,..*/.* .*.*. *.*,%%#%%#%%%*/#%(#%%%%(# . . . ,.")
# print(".*,,/#..(%%%###%,%%((%(#.***.,/,/*/ ,. ..,,.%%*#%(#*,/%#%%#%%%%/...... . ... . . . . ........ .. ** .. *..")
# print("*,,#*%%,,,#(%*%%#%%%#(%(#...%,#,#*#%,*/*,%*((%%%(%#%####((/*#/(%(,.. .. .. ...... . . .... . ... . . .. .. .......... .*.,,/. *,/.(,/#")
# print("...,((...,*,/%#%#%%%#/#%%/.#%%%(#(%%%/**.%//%(%/%### ###%%%%%##/.................... ... . ...... .........................,,,,.,,,.,//*.* **,../#%")
# print("**..../**.,/,//(%%(%%##/#,%##%#*(*#%#(##%*%%%(##(%( (%#%%#,,,,,,,,,,........ ........................,...,,.,,,,,,,*,********,#%#*, ..,* .###%")
# print(",*/,,.*/... ,.**/%%%####%%%(%%%%###%%%%%#%#((#%. %(#%(*******.,,,,,,,,,.,.,,,,,,,,,,,,,,.,,,,,,,,,,**,,*********/////.#(,##/ / ,%%%%")
# print(".*.,*.*,,.. ,*,,,.*/(*/#%###/*%#%%#%*(#/###(.%, #%###****/./**,**********,*************************//*//////////*##,*#(/. ,....###%(#%%")
# print(".*/.**.*.... .///,.//.,/*#*(%#%##%(/%%##/#/%% ,.#%%#/,***//////////////////////////////*/////////////////////#(.(///. *#((%%#%##%%%")
# print(",,,,(%,***.((/*/./.*.*,**,,.,/(#/*#%#/#%%%% .* , /%%/#/////(///////////////////. .////////////////////////.,//** ... .(#,(#%%%%#####,.*")
# print(",,/%%%*%%(/(%%#,,///*,(,,..*.****...**.. /.,.. . (#(//((/*(((((((((/(//// ///((((((////////////,.., . *.. ,%%%%%%%(...* ")
# print("...%%%#%/%*%%%(##/*/**,*,,/,*#%#*/#* ./ .*,* .. *((/((/((((*(((((( (/((((/(/(#(##*...,/ .###%%%%%#/ .*(( .")
# print("....%#(##(.#%*%%%##.*.*,*(*..,##, . ..., . .. .*.. (/(((/(((/(,.,/#/%%#* #,. /((((/.....,(, . *..,., #(*/ ..#(#/ .....")
# print("......(%,#%,%%((#%#, .. ,/. .,.... .,.,, . , . * , /%%%(#%%(%%%#.,(((((((#(#(#(####(,,/(. ,,.. .**#/#(((* ./(##(. ........")
# print(".......... ,,,.. ,*. , ....,. .... #*,* ,.,/. / .#%%%%%%%%########/(,//#(, ,.*. .*/###(, ...**/(/ ........*#")
# print(".......... ... .**.... .*,/**,.,..... *.,* ...,..*(#.., ,#.((,./##, #... *. . .#(.##//,. .... ..*/(#(#(# ........,####")
# print("....,,,,,,. /... .**,..... .*.. .* ..... .##/#,/#*.../ .(%%# /(*(*.##*... . . /##(#####%#%##/ ......(###(###(")
# print("....,,,/,,/*.* ..,.*... .*, ,.. (. .,. . *** #/#.#*,/.///# .#%%,#%%#(.#%##.... .... .. . .... .*,,,,........, ../##%%%%%%#/###")
# print("............,*,..,.... %%#,.,.. *.*/. *.** *..,/ . /,(((#/,,/##((.* ,%%#%%%%# ##%%,. .. ....... ......##(/,,. .((####%%%%%%# .,,*((")
# print("........./* *,.(/.,..//.%#/,**/ ,.. ... . ...../.*. ,.. , *(%(#%##,,/,/#//(*. %%%%%#.(%%%#*... (%%#... ..(/...*#%%%%%%####((*, ........../##%%%%%%%%%")
# print("....../((***.,%*%%*#.%%%%#**,,/.,, *........ ,. *,,,.*,/ ,.,. #%#%#%%%#,*//(//,/(#,#(..#%%%%%#%%%%%#*.. (%%%%%/. /%%%%#(//*,/,*(, .....*//*,(###(/*,**. ")
# print("....#*##%%,%(%*%#%%%%####* ,/,.. ,..... .., *.,.. %% ##*..... .%%%#%%%%(*/,./(/,*(,,,((...,(%%%%%%%%%%#,.. (%%%%%%#(##%###%((/ ....*,/(#(/, .(######")
# print("..,%%%##/(%%(%#%%(%/#. ../,,. %%#,. ....*....,... /%##(* ##*...,. %#%#%%%#%%*,*,///*,,/(//*,,,,,,*//*,/#%%%%#(*(#.,#(.##,/##. .((################%%%%%%%%")
# print("..%%(,%%*.*,#(%%%//(%%.,(#%%#/%(%%%.* .,/*...... .%%%%%/#/ /(/... #%%%%%%%#%(,,/*//*,,,,,,,,,,,,,,,,,,,,,,,*(**///*((/ ......*#%%%%%%%%%%%####%%%%%#%%%%%")
# print(".*%#%%#%%%%%(#%/%((%%%%%#%#%%%%(##%%%% ,# ,* ./#%##/#%/ .#%%##%%%#%(,*,.............,,**///(///////((. ..,*##%%%%%##**###(*/(*,. .,/")
# print(".,%#(%(%%%/(%%#%%%%%(%%(#%%*#%%#&%%(.. ../*%%% * .%%/,,*(&/#%&%#..*.. .#%%#%%%%#%%(//...,,*,.....,/**(###*. ,%%%%%%%%%%%%%%%%%%#. . ............... ")
# print(".#%/%%#%#%%%%/#%%(%#%%*%##%%%%#%%#%%%, ..,/%.,//,,%%*./..*#%*%/&%&/#/*,..,. ./%%%%%%%%%#%%%%%%%%%#/, . .. .,.... ..........................")
def intro():
print("+----------------------------------------------------------------------------------+")
print("| MISO Particle Classification Library |")
print("+----------------------------------------------------------------------------------+")
print("| To update library: |")
print("| pip install -U git+http://www.github.com/microfossil/particle-classification |")
print("+----------------------------------------------------------------------------------+")
| 149.104478 | 174 | 0.042643 | 82 | 9,990 | 5.195122 | 0.231707 | 1.29108 | 1.830986 | 2.394366 | 0.633803 | 0.633803 | 0.633803 | 0.633803 | 0.633803 | 0.633803 | 0 | 0 | 0.382683 | 9,990 | 66 | 175 | 151.363636 | 0.069077 | 0.846947 | 0 | 0.444444 | 0 | 0.111111 | 0.601227 | 0.343558 | 0 | 0 | 0 | 0 | 0 | 1 | 0.222222 | true | 0 | 0 | 0 | 0.222222 | 0.777778 | 0 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 11 |
b147e83cbeb9982103ce7ec682f15e14a453e539 | 1,519 | py | Python | archivos/op_relacionales.py | lecovi/reveal.js | 60bfdea623d326bcd9b52fe82135667a704c79f5 | [
"MIT"
] | null | null | null | archivos/op_relacionales.py | lecovi/reveal.js | 60bfdea623d326bcd9b52fe82135667a704c79f5 | [
"MIT"
] | null | null | null | archivos/op_relacionales.py | lecovi/reveal.js | 60bfdea623d326bcd9b52fe82135667a704c79f5 | [
"MIT"
] | 1 | 2021-03-03T12:22:04.000Z | 2021-03-03T12:22:04.000Z | #!/usr/bin/python
## Relational operators return boolean (logical) values.
a = 10
b = 30
print("Comparemos 2 valores enteros:")
print("-"*50)
print("La variable a vale {}, la variable b {}. Entonces ¿a es igual a b?: {}".format(a, b, a == b))
print("La variable a vale {}, la variable b {}. Entonces ¿a es distinta a b?: {}".format(a, b, a != b))
print("La variable a vale {}, la variable b {}. Entonces ¿a es mayor a b?: {}".format(a, b, a > b))
print("La variable a vale {}, la variable b {}. Entonces ¿a es menor a b?: {}".format(a, b, a < b))
print("La variable a vale {}, la variable b {}. Entonces ¿a es mayor o igual a b?: {}".format(a, b, a >= b))
print("La variable a vale {}, la variable b {}. Entonces ¿a es menor o igual a b?: {}".format(a, b, a <= b))
print("="*60)
a = 12
b = 25.5
print("Comparemos 1 entero con 1 real:")
print("-"*50)
print("La variable a vale {}, la variable b {}. Entonces ¿a es igual a b?: {}".format(a, b, a == b))
print("La variable a vale {}, la variable b {}. Entonces ¿a es distinta a b?: {}".format(a, b, a != b))
print("La variable a vale {}, la variable b {}. Entonces ¿a es mayor a b?: {}".format(a, b, a > b))
print("La variable a vale {}, la variable b {}. Entonces ¿a es menor a b?: {}".format(a, b, a < b))
print("La variable a vale {}, la variable b {}. Entonces ¿a es mayor o igual a b?: {}".format(a, b, a >= b))
print("La variable a vale {}, la variable b {}. Entonces ¿a es menor o igual a b?: {}".format(a, b, a <= b))
print("="*60)
| 52.37931 | 109 | 0.591178 | 286 | 1,519 | 3.181818 | 0.132867 | 0.079121 | 0.197802 | 0.210989 | 0.854945 | 0.854945 | 0.854945 | 0.854945 | 0.854945 | 0.854945 | 0 | 0.016598 | 0.206715 | 1,519 | 28 | 110 | 54.25 | 0.728631 | 0.046083 | 0 | 0.727273 | 0 | 0 | 0.664316 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0 | 0 | 0 | 0.818182 | 0 | 0 | 0 | null | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 11 |
b18c343f1bd7c7d6d6c51590ab2d9608c76b7dc9 | 11,421 | py | Python | reporting/basic_reporting.py | Nijerik/Wochenende | 38edb377ef77e0bbfea39c88aec45d1950d040f2 | [
"MIT"
] | 1 | 2020-04-23T11:52:12.000Z | 2020-04-23T11:52:12.000Z | reporting/basic_reporting.py | konnosif/Wochenende | c076462f9f1582ff86a139e95dcee1d6d88af996 | [
"MIT"
] | null | null | null | reporting/basic_reporting.py | konnosif/Wochenende | c076462f9f1582ff86a139e95dcee1d6d88af996 | [
"MIT"
] | null | null | null | # Tobias Scheithauer, August 2018
# This script can be used for reporting the results of the Wochenende pipeline
import os, sys, time
from Bio import SeqIO, SeqUtils
import pysam
import numpy as np
import pandas as pd
import click
def solid_normalization(gc):
# TODO: get normalization model
return 1
# Command Line Argument Parsing
@click.command()
# Slow mode processes the .bam file itself, while standard mode uses the *.bam.txt created by the Wochenende pipeline. Slow mode gives a more detailed output.
@click.option('--slow', is_flag=True, default=False, help='Use this flag if you want to process the bam file instead of *.bam.txt')
@click.option('--input_file', help='The output file of Wochenende. Either *.bam.txt or .bam (use with --slow flag only)')
@click.option('--refseq_file', help='The refseq file used by Wochenende.')
# Unlike Illumina data, SOLiD sequencing data requires a special GC-based normalization, so an extra step is needed. The model has to be defined in solid_normalization above.
@click.option('--sequencer', help='Sequencer technology used (solid or illumina)')
@click.option('--sample_name', help='Name of the sample. Used for output file naming.')
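# --- Hedged CLI usage sketch (added for illustration; the file names below are assumptions) ---
#   python basic_reporting.py --input_file sample.bam.txt --refseq_file refseq.fasta \
#       --sequencer illumina --sample_name sample
#   # add --slow and pass the *.bam file instead to also compute base counts and read GC content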
def reporting(slow, input_file, refseq_file, sequencer, sample_name):
if slow:
# This section is for SLOW mode only, using BAM files
click.echo('Started slow mode.')
click.echo(f'Using {input_file} as alignment file')
click.echo(f'Using {refseq_file} as refseq file')
click.echo()
if sequencer == 'illumina':
# slow illumina reporting
click.echo('starting illumina reporting')
# creating lists for dataframe creation
species_list, chr_length_list, read_count_list, basecount_list, gc_ref_list, gc_reads_list = [], [], [], [], [], []
for seq_record in SeqIO.parse(refseq_file, 'fasta'):
species_list.append(seq_record.name)
chr_length_list.append(len(seq_record.seq))
read_count_list.append(pysam.AlignmentFile(input_file, 'rb').count(contig=seq_record.name))
# joining all reads to get number of bases in experiment and gc content of reads
joined_reads = ''.join([read.query_sequence for read in pysam.AlignmentFile(input_file, 'rb').fetch(contig=seq_record.name)])
basecount_list.append(len(joined_reads))
gc_ref_list.append(SeqUtils.GC(seq_record.seq))
gc_reads_list.append(SeqUtils.GC(joined_reads))
res_df = pd.DataFrame(data={
'species': species_list,
'chr_length': chr_length_list,
'gc_ref': gc_ref_list,
'gc_reads': gc_reads_list,
'read_count': read_count_list,
'basecount': basecount_list,
})
res_df['reads_per_million_ref_bases'] = res_df['read_count']/(res_df['chr_length']/1000000)
res_df['reads_per_million_reads_in_experiment'] = res_df['read_count'] / (res_df['read_count'].sum()/1000000)
# calculating bacteria per human cell
human_refs = ['1','2','3','4','5','6','7','8','9','10', '11','12','13','14','15','16','17','18','19','20','21','22','X','Y','MT']
human_cov = res_df[res_df['species'].isin(human_refs)]['basecount'].sum()/res_df[res_df['species'].isin(human_refs)]['chr_length'].sum()
print(human_cov)
res_df['bacteria_per_human_cell'] = (res_df['basecount']/res_df['chr_length']) / human_cov
# total normalization RPMM. Corrected
res_df['RPMM'] = res_df['read_count'] / (res_df['chr_length']/1000000) * res_df['read_count'].sum()/1000000
res_df.to_csv(f'{sample_name}.reporting.unsorted.csv', sep='\t', float_format='%.1f', index=False)
res_df_filtered_and_sorted = res_df.loc[res_df['read_count'] >= 20].sort_values(by='RPMM', ascending=False)
res_df_filtered_and_sorted.to_csv(f'{sample_name}.reporting.sorted.csv', sep='\t', float_format='%.1f', index=False)
elif sequencer == 'solid':
# slow solid reporting (works like illumina reporting but width normalization)
click.echo('starting solid reporting')
species_list, chr_length_list, read_count_list, basecount_list, gc_ref_list, gc_reads_list = [], [], [], [], [], []
for seq_record in SeqIO.parse(refseq_file, 'fasta'):
species_list.append(seq_record.name)
chr_length_list.append(len(seq_record.seq))
read_count_list.append(pysam.AlignmentFile(input_file, 'rb').count(contig=seq_record.name))
joined_reads = ''.join([read.query_sequence for read in pysam.AlignmentFile(input_file, 'rb').fetch(contig=seq_record.name)])
basecount_list.append(len(joined_reads))
gc_ref_list.append(SeqUtils.GC(seq_record.seq))
gc_reads_list.append(SeqUtils.GC(joined_reads))
res_df = pd.DataFrame(data={
'species': species_list,
'chr_length': chr_length_list,
'gc_ref': gc_ref_list,
'gc_reads': gc_reads_list,
'read_count': read_count_list,
'basecount': basecount_list,
})
res_df['reads_per_million_ref_bases'] = res_df['read_count']/(res_df['chr_length']/1000000)
res_df['reads_per_million_reads_in_experiment'] = res_df['read_count'] / (res_df['read_count'].sum()/1000000)
# special SOLID normalization steps
res_df['norm_factor'] = [solid_normalization(gc) for gc in res_df['gc_ref']]
res_df['read_count'] = res_df['read_count'] * res_df['norm_factor']
res_df['basecount'] = res_df['basecount'] * res_df['norm_factor']
res_df['reads_per_million_ref_bases'] = res_df['reads_per_million_ref_bases'] * res_df['norm_factor']
res_df['reads_per_million_reads_in_experiment'] = res_df['reads_per_million_reads_in_experiment'] * res_df['norm_factor']
human_refs = ['1','2','3','4','5','6','7','8','9','10', '11','12','13','14','15','16','17','18','19','20','21','22','X','Y','MT']
human_cov = res_df[res_df['species'].isin(human_refs)]['basecount'].sum()/res_df[res_df['species'].isin(human_refs)]['chr_length'].sum()
print(human_cov)
res_df['bacteria_per_human_cell'] = (res_df['basecount']/res_df['chr_length']) / human_cov
res_df['norm_factor'] = None
# total normalization RPMM
res_df['RPMM'] = res_df['read_count'] / (res_df['chr_length']/1000000) * res_df['read_count'].sum()/1000000
res_df.to_csv(f'{sample_name}.reporting.unsorted.csv', sep='\t', float_format='%.1f', index=False)
res_df_filtered_and_sorted = res_df.loc[res_df['read_count'] >= 20].sort_values(by='RPMM', ascending=False)
res_df_filtered_and_sorted.to_csv(f'{sample_name}.reporting.sorted.csv', sep='\t', float_format='%.1f', index=False)
else:
click.echo('please specify sequencing technology')
sys.exit(1)
else:
# This section uses bam.txt files as opposed to full BAM files
click.echo(f'Using {input_file} as alignment file')
click.echo(f'Using {refseq_file} as refseq file')
click.echo()
if sequencer == 'illumina':
# standard illumina reporting
click.echo('starting illumina reporting')
# reading in wochenende output file without last line (* as species name)
res_df = pd.read_csv(input_file, sep='\t', header=None, names=['species', 'chr_length', 'read_count'], usecols=[0,1,2])[:-1]
# get gc content of ref sequences
gc_ref_dict = {}
for seq_record in SeqIO.parse(refseq_file, 'fasta'):
gc_ref_dict[seq_record.name] = SeqUtils.GC(str(seq_record.seq).replace('N', ''))
res_df['gc_ref'] = [gc_ref_dict[s] for s in res_df['species']]
res_df['reads_per_million_ref_bases'] = res_df['read_count']/(res_df['chr_length']/1000000)
res_df['reads_per_million_reads_in_experiment'] = res_df['read_count'] / (res_df['read_count'].sum()/1000000)
# total normalization: RPMM = reads per million reference bases per million reads in the experiment
res_df['RPMM'] = res_df['read_count'] / (res_df['chr_length']/1000000 * res_df['read_count'].sum()/1000000)
#calculating bacteria per human cell
#check that human_refs is correct!
#the mitochondrial reads have NOT been added to the sum of human reads, as the bacteria/human ratio would have been extremely small.
#this needs to be discussed further
human_refs = ['1_1_1_1','1_1_1_2','1_1_1_3','1_1_1_4','1_1_1_5','1_1_1_6','1_1_1_7','1_1_1_8','1_1_1_9','1_1_1_10', '1_1_1_11','1_1_1_12','1_1_1_13','1_1_1_14','1_1_1_15',\
'1_1_1_16','1_1_1_17','1_1_1_18','1_1_1_19','1_1_1_20','1_1_1_21','1_1_1_22','1_1_1_X','1_1_1_Y']
human_cov = res_df[res_df['species'].isin(human_refs)]['read_count'].sum()
res_df['bacteria_per_human_cell'] = (6191.39 * res_df['reads_per_million_ref_bases']) / human_cov
#rounding to 2 decimals, except for bacteria_per_human_cell, which gets 4 decimals
cols = ['gc_ref', 'reads_per_million_ref_bases', 'reads_per_million_reads_in_experiment', 'RPMM']
res_df[cols] = res_df[cols].round(2)
res_df['bacteria_per_human_cell'] = res_df['bacteria_per_human_cell'].round(4)
res_df.to_csv(f'{sample_name}.reporting.unsorted.csv', sep='\t', index=False)
res_df_filtered_and_sorted = res_df.loc[res_df['read_count'] >= 20].sort_values(by='RPMM', ascending=False)
res_df_filtered_and_sorted.to_csv(f'{sample_name}.reporting.sorted.csv', sep='\t', index=False)
elif sequencer == 'solid':
# standard solid reporting (works like illumina reporting but width normalization)
click.echo('starting solid reporting')
res_df = pd.read_csv(input_file, sep='\t', header=None, names=['species', 'chr_length', 'read_count'], usecols=[0,1,2])[:-1]
gc_ref_dict = {}
for seq_record in SeqIO.parse(refseq_file, 'fasta'):
gc_ref_dict[seq_record.name] = SeqUtils.GC(str(seq_record.seq).replace('N', ''))
res_df['gc_ref'] = [gc_ref_dict[s] for s in res_df['species']]
res_df['reads_per_million_ref_bases'] = res_df['read_count']/(res_df['chr_length']/1000000)
res_df['reads_per_million_reads_in_experiment'] = res_df['read_count'] / (res_df['read_count'].sum()/1000000)
# special SOLID normalization
res_df['norm_factor'] = [solid_normalization(gc) for gc in res_df['gc_ref']]
res_df['read_count'] = res_df['read_count'] * res_df['norm_factor']
# note: there is no 'basecount' column in *.bam.txt mode, so only read-count based values are normalized
res_df['reads_per_million_ref_bases'] = res_df['reads_per_million_ref_bases'] * res_df['norm_factor']
res_df['reads_per_million_reads_in_experiment'] = res_df['reads_per_million_reads_in_experiment'] * res_df['norm_factor']
res_df['norm_factor'] = None
# total normalization RPMM
res_df['RPMM'] = res_df['read_count'] / (res_df['chr_length']/1000000 * res_df['read_count'].sum()/1000000)
#calculating bacteria per human cell
#check for human_refs to be correct!
human_refs = ['1_1_1_1','1_1_1_2','1_1_1_3','1_1_1_4','1_1_1_5','1_1_1_6','1_1_1_7','1_1_1_8','1_1_1_9','1_1_1_10', '1_1_1_11','1_1_1_12','1_1_1_13','1_1_1_14','1_1_1_15',\
'1_1_1_16','1_1_1_17','1_1_1_18','1_1_1_19','1_1_1_20','1_1_1_21','1_1_1_22','1_1_1_X','1_1_1_Y']
human_cov = res_df[res_df['species'].isin(human_refs)]['read_count'].sum()
res_df['bacteria_per_human_cell'] = (6191.39 * res_df['reads_per_million_ref_bases']) / human_cov
#rounding to 2 decimals, except for bacteria_per_human_cell, which gets 4 decimals
cols = ['gc_ref', 'reads_per_million_ref_bases', 'reads_per_million_reads_in_experiment', 'RPMM']
res_df[cols] = res_df[cols].round(2)
res_df['bacteria_per_human_cell'] = res_df['bacteria_per_human_cell'].round(4)
res_df.to_csv(f'{sample_name}.reporting.unsorted.csv', sep='\t', index=False)
res_df_filtered_and_sorted = res_df.loc[res_df['read_count'] >= 20].sort_values(by='RPMM', ascending=False)
res_df_filtered_and_sorted.to_csv(f'{sample_name}.reporting.sorted.csv', sep='\t', index=False)
else:
click.echo('please specify sequencing technology')
sys.exit(1)
if __name__ == '__main__':
reporting()
| 61.403226 | 175 | 0.722529 | 1,899 | 11,421 | 3.991575 | 0.130595 | 0.083773 | 0.021372 | 0.051715 | 0.803166 | 0.79723 | 0.791425 | 0.777968 | 0.777968 | 0.777968 | 0 | 0.043948 | 0.117415 | 11,421 | 185 | 176 | 61.735135 | 0.708036 | 0.147885 | 0 | 0.823944 | 0 | 0.007042 | 0.337113 | 0.119381 | 0 | 0 | 0 | 0.005405 | 0 | 1 | 0.014085 | false | 0 | 0.042254 | 0.007042 | 0.06338 | 0.014085 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
490b6b3e40365008e9f5d04840b19c1bab109b18 | 20,283 | py | Python | document_merge_service/api/tests/snapshots/snap_test_template.py | winged/document-merge-service | bc6b4098db66cc56ac5aa0d518fe0aa1ea97a4bf | [
"MIT"
] | null | null | null | document_merge_service/api/tests/snapshots/snap_test_template.py | winged/document-merge-service | bc6b4098db66cc56ac5aa0d518fe0aa1ea97a4bf | [
"MIT"
] | null | null | null | document_merge_service/api/tests/snapshots/snap_test_template.py | winged/document-merge-service | bc6b4098db66cc56ac5aa0d518fe0aa1ea97a4bf | [
"MIT"
] | null | null | null | # -*- coding: utf-8 -*-
# snapshottest: v1 - https://goo.gl/zC4yUc
from __future__ import unicode_literals
from snapshottest import Snapshot
snapshots = Snapshot()
snapshots[
"test_template_merge_jinja_filters_docx[docx-template-template__template0] 1"
] = """<w:body xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:wps="http://schemas.microsoft.com/office/word/2010/wordprocessingShape" xmlns:wpg="http://schemas.microsoft.com/office/word/2010/wordprocessingGroup" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:wp14="http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing" xmlns:w14="http://schemas.microsoft.com/office/word/2010/wordml">
<w:p>
<w:pPr>
<w:pStyle w:val="Normal"/>
<w:rPr/>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="DejaVu Sans" w:hAnsi="DejaVu Sans"/>
</w:rPr>
<w:t>15.09.1984</w:t>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="Normal"/>
<w:rPr/>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="DejaVu Sans" w:hAnsi="DejaVu Sans"/>
</w:rPr>
<w:t>1984</w:t>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="Normal"/>
<w:rPr/>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="DejaVu Sans" w:hAnsi="DejaVu Sans"/>
</w:rPr>
<w:t>23:24</w:t>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="Normal"/>
<w:rPr/>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="DejaVu Sans" w:hAnsi="DejaVu Sans"/>
</w:rPr>
<w:t>15.09.1984, 23:23:00</w:t>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="Normal"/>
<w:rPr/>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="DejaVu Sans" w:hAnsi="DejaVu Sans"/>
</w:rPr>
<w:t>23:23</w:t>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="Normal"/>
<w:rPr/>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="DejaVu Sans" w:hAnsi="DejaVu Sans"/>
</w:rPr>
<w:t/>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="Normal"/>
<w:rPr/>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="DejaVu Sans" w:hAnsi="DejaVu Sans"/>
</w:rPr>
<w:t/>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="Normal"/>
<w:rPr/>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="DejaVu Sans" w:hAnsi="DejaVu Sans"/>
</w:rPr>
<w:t xml:space="preserve">something</w:t>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="Normal"/>
<w:rPr/>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="DejaVu Sans" w:hAnsi="DejaVu Sans"/>
</w:rPr>
<w:t>This is</w:t>
<w:br/>
<w:t xml:space="preserve">a test.</w:t>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="Normal"/>
<w:rPr/>
</w:pPr>
<w:r>
<w:rPr/>
</w:r>
</w:p>
<w:sectPr>
<w:type w:val="nextPage"/>
<w:pgSz w:w="11906" w:h="16838"/>
<w:pgMar w:left="1134" w:right="1134" w:header="0" w:top="1134" w:footer="0" w:bottom="1134" w:gutter="0"/>
<w:pgNumType w:fmt="decimal"/>
<w:formProt w:val="false"/>
<w:textDirection w:val="lrTb"/>
<w:docGrid w:type="default" w:linePitch="240" w:charSpace="0"/>
</w:sectPr>
</w:body>
"""
snapshots[
"test_template_merge_docx[TestNameTemplate-docx-template-template__template0] 1"
] = """<w:body xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:wps="http://schemas.microsoft.com/office/word/2010/wordprocessingShape" xmlns:wpg="http://schemas.microsoft.com/office/word/2010/wordprocessingGroup" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:wp14="http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing" xmlns:w14="http://schemas.microsoft.com/office/word/2010/wordml">
<w:p>
<w:pPr>
<w:pStyle w:val="Normal"/>
<w:rPr/>
</w:pPr>
<w:r>
<w:rPr>
<w:lang w:val="de-CH" w:eastAsia="zh-CN" w:bidi="hi-IN"/>
</w:rPr>
<w:t>Test</w:t>
</w:r>
<w:r>
<w:rPr>
<w:lang w:val="de-CH" w:eastAsia="zh-CN" w:bidi="hi-IN"/>
</w:rPr>
<w:t xml:space="preserve">: Test input</w:t>
</w:r>
</w:p>
<w:sectPr>
<w:type w:val="nextPage"/>
<w:pgSz w:w="11906" w:h="16838"/>
<w:pgMar w:left="1134" w:right="1134" w:header="0" w:top="1134" w:footer="0" w:bottom="1134" w:gutter="0"/>
<w:pgNumType w:fmt="decimal"/>
<w:formProt w:val="false"/>
<w:textDirection w:val="lrTb"/>
<w:docGrid w:type="default" w:linePitch="240" w:charSpace="0"/>
</w:sectPr>
</w:body>
"""
snapshots[
"test_template_merge_jinja_extensions_docx[docx-template-template__template0] 1"
] = """<w:body xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:wps="http://schemas.microsoft.com/office/word/2010/wordprocessingShape" xmlns:wpg="http://schemas.microsoft.com/office/word/2010/wordprocessingGroup" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:wp14="http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing" xmlns:w14="http://schemas.microsoft.com/office/word/2010/wordml">
<w:p>
<w:pPr>
<w:pStyle w:val="PreformattedText"/>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:i w:val="false"/>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="3E4349"/>
<w:spacing w:val="0"/>
</w:rPr>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="B11414"/>
<w:spacing w:val="0"/>
<w:lang w:val="de-CH" w:eastAsia="zh-CN" w:bidi="hi-IN"/>
</w:rPr>
<w:t/>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="PreformattedText"/>
<w:widowControl/>
<w:shd w:val="clear" w:fill="EEEEEE"/>
<w:spacing w:before="225" w:after="225"/>
<w:ind w:left="0" w:right="0" w:hanging="0"/>
<w:rPr/>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="B11414"/>
<w:spacing w:val="0"/>
<w:lang w:val="de-CH" w:eastAsia="zh-CN" w:bidi="hi-IN"/>
</w:rPr>
<w:t/>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="PreformattedText"/>
<w:widowControl/>
<w:shd w:val="clear" w:fill="EEEEEE"/>
<w:spacing w:before="225" w:after="225"/>
<w:ind w:left="0" w:right="0" w:hanging="0"/>
<w:rPr/>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="B11414"/>
<w:spacing w:val="0"/>
<w:lang w:val="de-CH" w:eastAsia="zh-CN" w:bidi="hi-IN"/>
</w:rPr>
<w:t xml:space="preserve"> </w:t>
</w:r>
<w:r>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="B11414"/>
<w:spacing w:val="0"/>
<w:lang w:val="de-CH" w:eastAsia="zh-CN" w:bidi="hi-IN"/>
</w:rPr>
<w:t/>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="PreformattedText"/>
<w:widowControl/>
<w:shd w:val="clear" w:fill="EEEEEE"/>
<w:spacing w:before="225" w:after="225"/>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:i w:val="false"/>
<w:color w:val="B11414"/>
</w:rPr>
</w:pPr>
<w:r>
<w:rPr>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="3E4349"/>
<w:spacing w:val="0"/>
</w:rPr>
<w:t xml:space="preserve"> </w:t>
</w:r>
<w:r>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="B11414"/>
<w:spacing w:val="0"/>
</w:rPr>
<w:t>1</w:t>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="PreformattedText"/>
<w:widowControl/>
<w:shd w:val="clear" w:fill="EEEEEE"/>
<w:spacing w:before="225" w:after="225"/>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:i w:val="false"/>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="3E4349"/>
<w:spacing w:val="0"/>
</w:rPr>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="B11414"/>
<w:spacing w:val="0"/>
</w:rPr>
<w:t/>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="PreformattedText"/>
<w:widowControl/>
<w:shd w:val="clear" w:fill="EEEEEE"/>
<w:spacing w:before="225" w:after="225"/>
<w:ind w:left="0" w:right="0" w:hanging="0"/>
<w:rPr/>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="B11414"/>
<w:spacing w:val="0"/>
<w:lang w:val="de-CH" w:eastAsia="zh-CN" w:bidi="hi-IN"/>
</w:rPr>
<w:t xml:space="preserve"> </w:t>
</w:r>
<w:r>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="B11414"/>
<w:spacing w:val="0"/>
<w:lang w:val="de-CH" w:eastAsia="zh-CN" w:bidi="hi-IN"/>
</w:rPr>
<w:t/>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="PreformattedText"/>
<w:widowControl/>
<w:shd w:val="clear" w:fill="EEEEEE"/>
<w:spacing w:before="225" w:after="225"/>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:i w:val="false"/>
<w:color w:val="B11414"/>
</w:rPr>
</w:pPr>
<w:r>
<w:rPr>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="3E4349"/>
<w:spacing w:val="0"/>
</w:rPr>
<w:t xml:space="preserve"> </w:t>
</w:r>
<w:r>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="B11414"/>
<w:spacing w:val="0"/>
</w:rPr>
<w:t>2</w:t>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="PreformattedText"/>
<w:widowControl/>
<w:shd w:val="clear" w:fill="EEEEEE"/>
<w:spacing w:before="225" w:after="225"/>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:i w:val="false"/>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="3E4349"/>
<w:spacing w:val="0"/>
</w:rPr>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="B11414"/>
<w:spacing w:val="0"/>
</w:rPr>
<w:t/>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="PreformattedText"/>
<w:widowControl/>
<w:shd w:val="clear" w:fill="EEEEEE"/>
<w:spacing w:before="225" w:after="225"/>
<w:ind w:left="0" w:right="0" w:hanging="0"/>
<w:rPr/>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="B11414"/>
<w:spacing w:val="0"/>
<w:lang w:val="de-CH" w:eastAsia="zh-CN" w:bidi="hi-IN"/>
</w:rPr>
<w:t xml:space="preserve"> </w:t>
</w:r>
<w:r>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="B11414"/>
<w:spacing w:val="0"/>
<w:lang w:val="de-CH" w:eastAsia="zh-CN" w:bidi="hi-IN"/>
</w:rPr>
<w:t/>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="PreformattedText"/>
<w:widowControl/>
<w:shd w:val="clear" w:fill="EEEEEE"/>
<w:spacing w:before="225" w:after="225"/>
<w:ind w:left="0" w:right="0" w:hanging="0"/>
<w:rPr/>
</w:pPr>
<w:r>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="B11414"/>
<w:spacing w:val="0"/>
<w:lang w:val="de-CH" w:eastAsia="zh-CN" w:bidi="hi-IN"/>
</w:rPr>
<w:t xml:space="preserve"> </w:t>
</w:r>
<w:r>
<w:rPr>
<w:rFonts w:ascii="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace" w:hAnsi="Consolas;Menlo;Deja Vu Sans Mono;Bitstream Vera Sans Mono;monospace"/>
<w:b w:val="false"/>
<w:i w:val="false"/>
<w:caps w:val="false"/>
<w:smallCaps w:val="false"/>
<w:color w:val="B11414"/>
<w:spacing w:val="0"/>
<w:lang w:val="de-CH" w:eastAsia="zh-CN" w:bidi="hi-IN"/>
</w:rPr>
<w:t/>
</w:r>
</w:p>
<w:p>
<w:pPr>
<w:pStyle w:val="Normal"/>
<w:rPr/>
</w:pPr>
<w:r>
<w:rPr/>
</w:r>
</w:p>
<w:sectPr>
<w:type w:val="nextPage"/>
<w:pgSz w:w="11906" w:h="16838"/>
<w:pgMar w:left="1134" w:right="1134" w:header="0" w:top="1134" w:footer="0" w:bottom="1134" w:gutter="0"/>
<w:pgNumType w:fmt="decimal"/>
<w:formProt w:val="false"/>
<w:textDirection w:val="lrTb"/>
<w:docGrid w:type="default" w:linePitch="240" w:charSpace="0"/>
</w:sectPr>
</w:body>
"""
snapshots[
"test_template_merge_docx[TestNameMailMerge-docx-mailmerge-template__template1] 1"
] = """<w:body xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:wpc="http://schemas.microsoft.com/office/word/2010/wordprocessingCanvas" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:wp14="http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:w14="http://schemas.microsoft.com/office/word/2010/wordml" xmlns:w15="http://schemas.microsoft.com/office/word/2012/wordml" xmlns:wpg="http://schemas.microsoft.com/office/word/2010/wordprocessingGroup" xmlns:wpi="http://schemas.microsoft.com/office/word/2010/wordprocessingInk" xmlns:wne="http://schemas.microsoft.com/office/word/2006/wordml" xmlns:wps="http://schemas.microsoft.com/office/word/2010/wordprocessingShape">
<w:p w:rsidR="0083709B" w:rsidRPr="0083709B" w:rsidRDefault="0083709B">
<w:pPr>
<w:rPr>
<w:lang w:val="en-US"/>
</w:rPr>
</w:pPr>
<w:r>
<w:rPr>
<w:lang w:val="en-US"/>
</w:rPr>
<w:t xml:space="preserve">Test: </w:t>
</w:r>
<w:r>
<w:rPr>
<w:lang w:val="en-US"/>
</w:rPr>
<w:t>Test input</w:t>
</w:r>
<w:bookmarkStart w:id="0" w:name="_GoBack"/>
<w:bookmarkEnd w:id="0"/>
</w:p>
<w:sectPr w:rsidR="0083709B" w:rsidRPr="0083709B">
<w:pgSz w:w="11906" w:h="16838"/>
<w:pgMar w:top="1417" w:right="1417" w:bottom="1134" w:left="1417" w:header="708" w:footer="708" w:gutter="0"/>
<w:cols w:space="708"/>
<w:docGrid w:linePitch="360"/>
</w:sectPr>
</w:body>
"""
| 35.899115 | 1,113 | 0.566632 | 3,208 | 20,283 | 3.573254 | 0.056733 | 0.06316 | 0.038821 | 0.077641 | 0.956643 | 0.951147 | 0.941028 | 0.926023 | 0.921574 | 0.91852 | 0 | 0.036974 | 0.214613 | 20,283 | 564 | 1,114 | 35.962766 | 0.682611 | 0.003057 | 0 | 0.945946 | 0 | 0.088288 | 0.98902 | 0.122416 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.003604 | 0 | 0.003604 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 9 |
49294cb7068c161af0a3f73c010ad946ad780f6d | 53,214 | py | Python | reviewboard/scmtools/tests/test_repository_form.py | amalik2/reviewboard | 676aa2dce38ce619a74f2d4cb3cfae9bce21416e | [
"MIT"
] | 2 | 2020-06-19T14:57:49.000Z | 2020-06-19T15:17:40.000Z | reviewboard/scmtools/tests/test_repository_form.py | amalik2/reviewboard | 676aa2dce38ce619a74f2d4cb3cfae9bce21416e | [
"MIT"
] | 1 | 2019-08-03T01:48:33.000Z | 2019-08-03T01:48:33.000Z | reviewboard/scmtools/tests/test_repository_form.py | amalik2/reviewboard | 676aa2dce38ce619a74f2d4cb3cfae9bce21416e | [
"MIT"
] | null | null | null | from __future__ import unicode_literals
from django.contrib.auth.models import User
from django.utils import six
from reviewboard.hostingsvcs.models import HostingServiceAccount
from reviewboard.hostingsvcs.service import (register_hosting_service,
unregister_hosting_service)
from reviewboard.scmtools.forms import RepositoryForm
from reviewboard.scmtools.models import Repository, Tool
from reviewboard.site.models import LocalSite
from reviewboard.testing.hosting_services import (SelfHostedTestService,
TestService)
from reviewboard.testing.testcase import TestCase
class RepositoryFormTests(TestCase):
"""Unit tests for the repository form."""
fixtures = ['test_scmtools']
def setUp(self):
super(RepositoryFormTests, self).setUp()
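# Register the fake hosting services used throughout these tests; tearDown() unregisters them again.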
register_hosting_service('test', TestService)
register_hosting_service('self_hosted_test', SelfHostedTestService)
self.git_tool_id = Tool.objects.get(name='Git').pk
def tearDown(self):
super(RepositoryFormTests, self).tearDown()
unregister_hosting_service('self_hosted_test')
unregister_hosting_service('test')
def test_without_localsite(self):
"""Testing RepositoryForm without a LocalSite"""
local_site = LocalSite.objects.create(name='test')
local_site_user = User.objects.create_user(username='testuser1')
local_site.users.add(local_site_user)
global_site_user = User.objects.create_user(username='testuser2')
local_site_group = self.create_review_group(name='test1',
invite_only=True,
local_site=local_site)
global_site_group = self.create_review_group(name='test2',
invite_only=True)
local_site_account = HostingServiceAccount.objects.create(
username='local-test-user',
service_name='test',
local_site=local_site)
global_site_account = HostingServiceAccount.objects.create(
username='global-test-user',
service_name='test')
# Make sure the initial state and querysets are what we expect on init.
form = RepositoryForm()
self.assertIsNone(form.limited_to_local_site)
self.assertIn('local_site', form.fields)
self.assertEqual(list(form.fields['users'].queryset),
[local_site_user, global_site_user])
self.assertEqual(list(form.fields['review_groups'].queryset),
[local_site_group, global_site_group])
self.assertEqual(list(form.fields['hosting_account'].queryset),
[local_site_account, global_site_account])
# Now test what happens when it's been fed data and validated.
form = RepositoryForm(data={
'name': 'test',
'hosting_type': 'custom',
'tool': self.git_tool_id,
'path': '/path/to/test.git',
'bug_tracker_type': 'none',
'users': [global_site_user.pk],
'review_groups': [global_site_group.pk],
})
self.assertIsNone(form.limited_to_local_site)
self.assertIn('local_site', form.fields)
self.assertEqual(list(form.fields['users'].queryset),
[local_site_user, global_site_user])
self.assertEqual(list(form.fields['review_groups'].queryset),
[local_site_group, global_site_group])
self.assertIsNone(form.fields['users'].widget.local_site_name)
self.assertEqual(list(form.fields['hosting_account'].queryset),
[local_site_account, global_site_account])
self.assertTrue(form.is_valid())
# Make sure any overridden querysets have been restored, so users can
# still change entries.
self.assertEqual(list(form.fields['users'].queryset),
[local_site_user, global_site_user])
self.assertEqual(list(form.fields['review_groups'].queryset),
[local_site_group, global_site_group])
self.assertEqual(list(form.fields['hosting_account'].queryset),
[local_site_account, global_site_account])
repository = form.save()
form.save_m2m()
self.assertIsNone(repository.local_site)
self.assertEqual(list(repository.users.all()), [global_site_user])
self.assertEqual(list(repository.review_groups.all()),
[global_site_group])
def test_without_localsite_and_instance(self):
"""Testing RepositoryForm without a LocalSite and editing instance"""
local_site = LocalSite.objects.create(name='test')
repository = self.create_repository(local_site=local_site)
form = RepositoryForm(
data={
'name': 'test',
'hosting_type': 'custom',
'tool': self.git_tool_id,
'path': '/path/to/test.git',
'bug_tracker_type': 'none',
},
instance=repository)
self.assertTrue(form.is_valid())
new_repository = form.save()
self.assertEqual(repository.pk, new_repository.pk)
self.assertIsNone(new_repository.local_site)
def test_without_localsite_and_with_local_site_user(self):
"""Testing RepositoryForm without a LocalSite and User on a LocalSite
"""
local_site = LocalSite.objects.create(name='test')
user = User.objects.create_user(username='testuser1')
local_site.users.add(user)
form = RepositoryForm(data={
'name': 'test',
'hosting_type': 'custom',
'tool': self.git_tool_id,
'path': '/path/to/test.git',
'bug_tracker_type': 'none',
'users': [user.pk],
})
self.assertTrue(form.is_valid())
def test_without_localsite_and_with_local_site_group(self):
"""Testing RepositoryForm without a LocalSite and Group on a LocalSite
"""
local_site = LocalSite.objects.create(name='test')
group = self.create_review_group(local_site=local_site)
form = RepositoryForm(data={
'name': 'test',
'hosting_type': 'custom',
'tool': self.git_tool_id,
'path': '/path/to/test.git',
'bug_tracker_type': 'none',
'review_groups': [group.pk],
})
self.assertFalse(form.is_valid())
self.assertEqual(
form.errors,
{
'review_groups': [
'Select a valid choice. 1 is not one of the available '
'choices.',
],
})
def test_without_localsite_and_with_local_site_hosting_account(self):
"""Testing RepositoryForm without a LocalSite and
HostingServiceAccount on a LocalSite
"""
local_site = LocalSite.objects.create(name='test')
hosting_account = HostingServiceAccount.objects.create(
username='test-user',
service_name='test',
local_site=local_site)
form = RepositoryForm(data={
'name': 'test',
'hosting_type': 'test',
'hosting_account': hosting_account.pk,
'test_repo_name': 'test',
'tool': self.git_tool_id,
'bug_tracker_type': 'none',
})
self.assertFalse(form.is_valid())
self.assertEqual(
form.errors,
{
'hosting_account': [
'Select a valid choice. That choice is not one of the '
'available choices.',
],
})
def test_with_limited_localsite(self):
"""Testing RepositoryForm limited to a LocalSite"""
local_site = LocalSite.objects.create(name='test')
local_site_user = User.objects.create_user(username='testuser1')
local_site.users.add(local_site_user)
User.objects.create_user(username='testuser2')
local_site_group = self.create_review_group(name='test1',
invite_only=True,
local_site=local_site)
self.create_review_group(name='test2', invite_only=True)
form = RepositoryForm(limit_to_local_site=local_site)
self.assertEqual(form.limited_to_local_site, local_site)
self.assertNotIn('local_site', form.fields)
self.assertEqual(list(form.fields['users'].queryset),
[local_site_user])
self.assertEqual(list(form.fields['review_groups'].queryset),
[local_site_group])
self.assertEqual(form.fields['users'].widget.local_site_name,
local_site.name)
def test_with_limited_localsite_and_changing_site(self):
"""Testing RepositoryForm limited to a LocalSite and changing
LocalSite
"""
local_site1 = LocalSite.objects.create(name='test-site-1')
local_site2 = LocalSite.objects.create(name='test-site-2')
form = RepositoryForm(
data={
'name': 'test',
'hosting_type': 'custom',
'tool': self.git_tool_id,
'path': '/path/to/test.git',
'bug_tracker_type': 'none',
'local_site': local_site2.pk,
},
limit_to_local_site=local_site1)
self.assertEqual(form.limited_to_local_site, local_site1)
self.assertTrue(form.is_valid())
self.assertEqual(form.cleaned_data['local_site'], local_site1)
repository = form.save()
self.assertEqual(repository.local_site, local_site1)
def test_with_limited_localsite_and_compatible_instance(self):
"""Testing RepositoryForm limited to a LocalSite and editing compatible
instance
"""
local_site = LocalSite.objects.create(name='test')
repository = self.create_repository(local_site=local_site)
# This should simply not raise an exception.
RepositoryForm(instance=repository,
limit_to_local_site=local_site)
def test_with_limited_localsite_and_incompatible_instance(self):
"""Testing RepositoryForm limited to a LocalSite and editing
incompatible instance
"""
local_site = LocalSite.objects.create(name='test')
repository = self.create_repository()
error_message = (
'The provided instance is not associated with a LocalSite '
'compatible with this form. Please contact support.'
)
with self.assertRaisesMessage(ValueError, error_message):
RepositoryForm(instance=repository,
limit_to_local_site=local_site)
def test_with_limited_localsite_and_invalid_user(self):
"""Testing DefaultReviewerForm limited to a LocalSite with a User
not on the LocalSite
"""
local_site = LocalSite.objects.create(name='test')
user = User.objects.create_user(username='test')
form = RepositoryForm(
data={
'name': 'test',
'hosting_type': 'custom',
'tool': self.git_tool_id,
'path': '/path/to/test.git',
'bug_tracker_type': 'none',
'users': [user.pk]
},
limit_to_local_site=local_site)
self.assertFalse(form.is_valid())
self.assertEqual(
form.errors,
{
'users': [
'Select a valid choice. 1 is not one of the available '
'choices.',
],
})
def test_with_limited_localsite_and_invalid_group(self):
"""Testing DefaultReviewerForm limited to a LocalSite with a Group
not on the LocalSite
"""
local_site = LocalSite.objects.create(name='test')
group = self.create_review_group()
form = RepositoryForm(
data={
'name': 'test',
'hosting_type': 'custom',
'tool': self.git_tool_id,
'path': '/path/to/test.git',
'bug_tracker_type': 'none',
'review_groups': [group.pk]
},
limit_to_local_site=local_site)
self.assertFalse(form.is_valid())
self.assertEqual(
form.errors,
{
'review_groups': [
'Select a valid choice. 1 is not one of the available '
'choices.',
],
})
def test_with_limited_localsite_and_invalid_hosting_account(self):
"""Testing DefaultReviewerForm limited to a LocalSite with a
HostingServiceAccount not on the LocalSite
"""
local_site = LocalSite.objects.create(name='test')
hosting_account = HostingServiceAccount.objects.create(
username='test-user',
service_name='test')
form = RepositoryForm(
data={
'name': 'test',
'hosting_type': 'test',
'hosting_account': hosting_account.pk,
'test_repo_name': 'test',
'tool': self.git_tool_id,
'bug_tracker_type': 'none',
},
limit_to_local_site=local_site)
self.assertFalse(form.is_valid())
self.assertEqual(
form.errors,
{
'hosting_account': [
'Select a valid choice. That choice is not one of the '
'available choices.',
],
})
def test_with_localsite_in_data(self):
"""Testing RepositoryForm with a LocalSite in form data"""
local_site = LocalSite.objects.create(name='test')
local_site_user = User.objects.create_user(username='testuser1')
local_site.users.add(local_site_user)
global_site_user = User.objects.create_user(username='testuser2')
local_site_group = self.create_review_group(name='test1',
invite_only=True,
local_site=local_site)
global_site_group = self.create_review_group(name='test2',
invite_only=True)
local_site_account = HostingServiceAccount.objects.create(
username='local-test-user',
service_name='test',
local_site=local_site)
local_site_account.data['password'] = 'testpass'
local_site_account.save(update_fields=('data',))
global_site_account = HostingServiceAccount.objects.create(
username='global-test-user',
service_name='test')
# Make sure the initial state and querysets are what we expect on init.
form = RepositoryForm()
self.assertIsNone(form.limited_to_local_site)
self.assertIn('local_site', form.fields)
self.assertEqual(list(form.fields['users'].queryset),
[local_site_user, global_site_user])
self.assertEqual(list(form.fields['review_groups'].queryset),
[local_site_group, global_site_group])
self.assertIsNone(form.fields['users'].widget.local_site_name)
self.assertEqual(list(form.fields['hosting_account'].queryset),
[local_site_account, global_site_account])
# Now test what happens when it's been fed data and validated.
form = RepositoryForm(data={
'name': 'test',
'hosting_type': 'test',
'hosting_account': local_site_account.pk,
'test_repo_name': 'test',
'tool': self.git_tool_id,
'path': '/path/to/test.git',
'bug_tracker_type': 'none',
'local_site': local_site.pk,
'users': [local_site_user.pk],
'review_groups': [local_site_group.pk],
})
self.assertIsNone(form.limited_to_local_site)
self.assertIn('local_site', form.fields)
self.assertEqual(list(form.fields['users'].queryset),
[local_site_user, global_site_user])
self.assertEqual(list(form.fields['review_groups'].queryset),
[local_site_group, global_site_group])
self.assertIsNone(form.fields['users'].widget.local_site_name)
self.assertEqual(list(form.fields['hosting_account'].queryset),
[local_site_account, global_site_account])
self.assertTrue(form.is_valid())
# Make sure any overridden querysets have been restored, so users can
# still change entries.
self.assertEqual(list(form.fields['users'].queryset),
[local_site_user, global_site_user])
self.assertEqual(list(form.fields['review_groups'].queryset),
[local_site_group, global_site_group])
self.assertEqual(list(form.fields['hosting_account'].queryset),
[local_site_account, global_site_account])
repository = form.save()
form.save_m2m()
self.assertEqual(repository.local_site, local_site)
self.assertEqual(repository.hosting_account, local_site_account)
self.assertEqual(list(repository.users.all()), [local_site_user])
self.assertEqual(list(repository.review_groups.all()),
[local_site_group])
def test_with_localsite_in_data_and_instance(self):
"""Testing RepositoryForm with a LocalSite in form data and editing
instance
"""
local_site = LocalSite.objects.create(name='test')
repository = self.create_repository()
form = RepositoryForm(
data={
'name': 'test',
'hosting_type': 'custom',
'tool': self.git_tool_id,
'path': '/path/to/test.git',
'bug_tracker_type': 'none',
'local_site': local_site.pk,
},
instance=repository)
self.assertTrue(form.is_valid())
new_repository = form.save()
self.assertEqual(repository.pk, new_repository.pk)
self.assertEqual(new_repository.local_site, local_site)
def test_with_localsite_in_data_and_invalid_user(self):
"""Testing RepositoryForm with a LocalSite in form data and User not
on the LocalSite
"""
local_site = LocalSite.objects.create(name='test')
user = User.objects.create_user(username='test-user')
form = RepositoryForm(data={
'name': 'test',
'hosting_type': 'custom',
'tool': self.git_tool_id,
'path': '/path/to/test.git',
'bug_tracker_type': 'none',
'local_site': local_site.pk,
'users': [user.pk],
})
self.assertFalse(form.is_valid())
self.assertEqual(
form.errors,
{
'users': [
'Select a valid choice. 1 is not one of the available '
'choices.',
],
})
def test_with_localsite_in_data_and_invalid_group(self):
"""Testing RepositoryForm with a LocalSite in form data and Group not
on the LocalSite
"""
local_site = LocalSite.objects.create(name='test')
group = self.create_review_group()
form = RepositoryForm(data={
'name': 'test',
'hosting_type': 'custom',
'tool': self.git_tool_id,
'path': '/path/to/test.git',
'bug_tracker_type': 'none',
'local_site': local_site.pk,
'review_groups': [group.pk],
})
self.assertFalse(form.is_valid())
self.assertEqual(
form.errors,
{
'review_groups': [
'Select a valid choice. 1 is not one of the available '
'choices.',
],
})
def test_plain_repository(self):
"""Testing RepositoryForm with a plain repository"""
form = RepositoryForm({
'name': 'test',
'hosting_type': 'custom',
'tool': self.git_tool_id,
'path': '/path/to/test.git',
'bug_tracker_type': 'none',
})
self.assertTrue(form.is_valid())
repository = form.save()
self.assertEqual(repository.name, 'test')
self.assertEqual(repository.hosting_account, None)
self.assertEqual(repository.extra_data, {})
# Make sure none of the other auth forms are unhappy. That would be
# an indicator that we're doing form processing and validation wrong.
for auth_form in six.itervalues(form.hosting_auth_forms):
self.assertEqual(auth_form.errors, {})
def test_plain_repository_with_missing_fields(self):
"""Testing RepositoryForm with a plain repository with missing fields
"""
form = RepositoryForm({
'name': 'test',
'hosting_type': 'custom',
'tool': self.git_tool_id,
'bug_tracker_type': 'none',
})
self.assertFalse(form.is_valid())
self.assertIn('path', form.errors)
# Make sure none of the other auth forms are unhappy. That would be
# an indicator that we're doing form processing and validation wrong.
for auth_form in six.itervalues(form.hosting_auth_forms):
self.assertEqual(auth_form.errors, {})
def test_with_hosting_service_new_account(self):
"""Testing RepositoryForm with a hosting service and new account"""
form = RepositoryForm({
'name': 'test',
'hosting_type': 'test',
'test-hosting_account_username': 'testuser',
'test-hosting_account_password': 'testpass',
'tool': self.git_tool_id,
'test_repo_name': 'testrepo',
'bug_tracker_type': 'none',
})
self.assertTrue(form.is_valid())
self.assertTrue(form.hosting_account_linked)
repository = form.save()
self.assertEqual(repository.name, 'test')
self.assertEqual(repository.hosting_account.username, 'testuser')
self.assertEqual(repository.hosting_account.service_name, 'test')
self.assertEqual(repository.hosting_account.local_site, None)
self.assertEqual(repository.extra_data['repository_plan'], '')
self.assertEqual(repository.path, 'http://example.com/testrepo/')
self.assertEqual(repository.mirror_path, '')
# Make sure none of the other auth forms are unhappy. That would be
# an indicator that we're doing form processing and validation wrong.
for auth_form in six.itervalues(form.hosting_auth_forms):
self.assertEqual(auth_form.errors, {})
def test_with_hosting_service_new_account_auth_error(self):
"""Testing RepositoryForm with a hosting service and new account and
authorization error
"""
form = RepositoryForm({
'name': 'test',
'hosting_type': 'test',
'test-hosting_account_username': 'baduser',
'test-hosting_account_password': 'testpass',
'tool': self.git_tool_id,
'test_repo_name': 'testrepo',
'bug_tracker_type': 'none',
})
self.assertFalse(form.is_valid())
self.assertFalse(form.hosting_account_linked)
self.assertIn('hosting_account', form.errors)
self.assertEqual(form.errors['hosting_account'],
['Unable to link the account: The username is '
'very very bad.'])
# Make sure none of the other auth forms are unhappy. That would be
# an indicator that we're doing form processing and validation wrong.
for auth_form in six.itervalues(form.hosting_auth_forms):
self.assertEqual(auth_form.errors, {})
def test_with_hosting_service_new_account_2fa_code_required(self):
"""Testing RepositoryForm with a hosting service and new account and
two-factor auth code required
"""
form = RepositoryForm({
'name': 'test',
'hosting_type': 'test',
'test-hosting_account_username': '2fa-user',
'test-hosting_account_password': 'testpass',
'tool': self.git_tool_id,
'test_repo_name': 'testrepo',
'bug_tracker_type': 'none',
})
self.assertFalse(form.is_valid())
self.assertFalse(form.hosting_account_linked)
self.assertIn('hosting_account', form.errors)
self.assertEqual(form.errors['hosting_account'],
['Enter your 2FA code.'])
self.assertTrue(
form.hosting_service_info['test']['needs_two_factor_auth_code'])
# Make sure none of the other auth forms are unhappy. That would be
# an indicator that we're doing form processing and validation wrong.
for auth_form in six.itervalues(form.hosting_auth_forms):
self.assertEqual(auth_form.errors, {})
def test_with_hosting_service_new_account_2fa_code_provided(self):
"""Testing RepositoryForm with a hosting service and new account and
two-factor auth code provided
"""
form = RepositoryForm({
'name': 'test',
'hosting_type': 'test',
'test-hosting_account_username': '2fa-user',
'test-hosting_account_password': 'testpass',
'test-hosting_account_two_factor_auth_code': '123456',
'tool': self.git_tool_id,
'test_repo_name': 'testrepo',
'bug_tracker_type': 'none',
})
self.assertTrue(form.is_valid())
self.assertTrue(form.hosting_account_linked)
self.assertFalse(
form.hosting_service_info['test']['needs_two_factor_auth_code'])
# Make sure none of the other auth forms are unhappy. That would be
# an indicator that we're doing form processing and validation wrong.
for auth_form in six.itervalues(form.hosting_auth_forms):
self.assertEqual(auth_form.errors, {})
def test_with_hosting_service_new_account_missing_fields(self):
"""Testing RepositoryForm with a hosting service and new account and
missing fields
"""
form = RepositoryForm({
'name': 'test',
'hosting_type': 'test',
'tool': self.git_tool_id,
'test_repo_name': 'testrepo',
'bug_tracker_type': 'none',
})
self.assertFalse(form.is_valid())
self.assertFalse(form.hosting_account_linked)
self.assertIn('hosting_account_username', form.errors)
self.assertIn('hosting_account_password', form.errors)
# Make sure the auth form also contains the errors.
auth_form = form.hosting_auth_forms.pop('test')
self.assertIn('hosting_account_username', auth_form.errors)
self.assertIn('hosting_account_password', auth_form.errors)
# Make sure none of the other auth forms are unhappy. That would be
# an indicator that we're doing form processing and validation wrong.
for auth_form in six.itervalues(form.hosting_auth_forms):
self.assertEqual(auth_form.errors, {})
def test_with_hosting_service_self_hosted_and_new_account(self):
"""Testing RepositoryForm with a self-hosted hosting service and new
account
"""
form = RepositoryForm({
'name': 'test',
'hosting_type': 'self_hosted_test',
'self_hosted_test-hosting_url': 'https://myserver.com',
'self_hosted_test-hosting_account_username': 'testuser',
'self_hosted_test-hosting_account_password': 'testpass',
'test_repo_name': 'myrepo',
'tool': self.git_tool_id,
'bug_tracker_type': 'none',
})
form.validate_repository = False
self.assertTrue(form.is_valid())
self.assertTrue(form.hosting_account_linked)
repository = form.save()
self.assertEqual(repository.name, 'test')
self.assertEqual(repository.hosting_account.hosting_url,
'https://myserver.com')
self.assertEqual(repository.hosting_account.username, 'testuser')
self.assertEqual(repository.hosting_account.service_name,
'self_hosted_test')
self.assertEqual(repository.hosting_account.local_site, None)
self.assertEqual(repository.extra_data['test_repo_name'], 'myrepo')
self.assertEqual(repository.extra_data['hosting_url'],
'https://myserver.com')
self.assertEqual(repository.path, 'https://myserver.com/myrepo/')
self.assertEqual(repository.mirror_path, 'git@myserver.com:myrepo/')
# Make sure none of the other auth forms are unhappy. That would be
# an indicator that we're doing form processing and validation wrong.
for auth_form in six.itervalues(form.hosting_auth_forms):
self.assertEqual(auth_form.errors, {})
def test_with_hosting_service_self_hosted_and_blank_url(self):
"""Testing RepositoryForm with a self-hosted hosting service and blank
URL
"""
form = RepositoryForm({
'name': 'test',
'hosting_type': 'self_hosted_test',
'self_hosted_test-hosting_url': '',
'self_hosted_test-hosting_account_username': 'testuser',
'self_hosted_test-hosting_account_password': 'testpass',
'test_repo_name': 'myrepo',
'tool': self.git_tool_id,
'bug_tracker_type': 'none',
})
form.validate_repository = False
self.assertFalse(form.is_valid())
self.assertFalse(form.hosting_account_linked)
def test_with_hosting_service_new_account_localsite(self):
"""Testing RepositoryForm with a hosting service, new account and
LocalSite
"""
local_site = LocalSite.objects.create(name='testsite')
form = RepositoryForm(
{
'name': 'test',
'hosting_type': 'test',
'test-hosting_account_username': 'testuser',
'test-hosting_account_password': 'testpass',
'tool': self.git_tool_id,
'test_repo_name': 'testrepo',
'bug_tracker_type': 'none',
'local_site': local_site.pk,
},
limit_to_local_site=local_site)
self.assertTrue(form.is_valid())
self.assertTrue(form.hosting_account_linked)
repository = form.save()
self.assertEqual(repository.name, 'test')
self.assertEqual(repository.local_site, local_site)
self.assertEqual(repository.hosting_account.username, 'testuser')
self.assertEqual(repository.hosting_account.service_name, 'test')
self.assertEqual(repository.hosting_account.local_site, local_site)
self.assertEqual(repository.extra_data['repository_plan'], '')
def test_with_hosting_service_existing_account(self):
"""Testing RepositoryForm with a hosting service and existing
account
"""
account = HostingServiceAccount.objects.create(username='testuser',
service_name='test')
account.data['password'] = 'testpass'
account.save()
form = RepositoryForm({
'name': 'test',
'hosting_type': 'test',
'hosting_account': account.pk,
'tool': self.git_tool_id,
'test_repo_name': 'testrepo',
'bug_tracker_type': 'none',
})
self.assertTrue(form.is_valid())
self.assertFalse(form.hosting_account_linked)
repository = form.save()
self.assertEqual(repository.name, 'test')
self.assertEqual(repository.hosting_account, account)
self.assertEqual(repository.extra_data['repository_plan'], '')
def test_with_hosting_service_existing_account_needs_reauth(self):
"""Testing RepositoryForm with a hosting service and existing
account needing re-authorization
"""
# We won't be setting the password, so that is_authorized() will
# fail.
account = HostingServiceAccount.objects.create(username='testuser',
service_name='test')
form = RepositoryForm({
'name': 'test',
'hosting_type': 'test',
'hosting_account': account.pk,
'tool': self.git_tool_id,
'test_repo_name': 'testrepo',
'bug_tracker_type': 'none',
})
self.assertFalse(form.is_valid())
self.assertFalse(form.hosting_account_linked)
self.assertEqual(set(form.errors.keys()),
set(['hosting_account_username',
'hosting_account_password']))
def test_with_hosting_service_existing_account_reauthing(self):
"""Testing RepositoryForm with a hosting service and existing
account with re-authorization
"""
# We won't be setting the password, so that is_authorized() will
# fail.
account = HostingServiceAccount.objects.create(username='testuser',
service_name='test')
form = RepositoryForm({
'name': 'test',
'hosting_type': 'test',
'hosting_account': account.pk,
'test-hosting_account_username': 'testuser2',
'test-hosting_account_password': 'testpass2',
'tool': self.git_tool_id,
'test_repo_name': 'testrepo',
'bug_tracker_type': 'none',
})
self.assertTrue(form.is_valid())
self.assertTrue(form.hosting_account_linked)
account = HostingServiceAccount.objects.get(pk=account.pk)
self.assertEqual(account.username, 'testuser2')
self.assertEqual(account.data['password'], 'testpass2')
def test_with_hosting_service_self_hosted_and_existing_account(self):
"""Testing RepositoryForm with a self-hosted hosting service and
existing account
"""
account = HostingServiceAccount.objects.create(
username='testuser',
service_name='self_hosted_test',
hosting_url='https://example.com')
account.data['password'] = 'testpass'
account.save()
form = RepositoryForm({
'name': 'test',
'hosting_type': 'self_hosted_test',
'self_hosted_test-hosting_url': 'https://example.com',
'hosting_account': account.pk,
'tool': self.git_tool_id,
'test_repo_name': 'myrepo',
'bug_tracker_type': 'none',
})
form.validate_repository = False
self.assertTrue(form.is_valid())
self.assertFalse(form.hosting_account_linked)
repository = form.save()
self.assertEqual(repository.name, 'test')
self.assertEqual(repository.hosting_account, account)
self.assertEqual(repository.extra_data['hosting_url'],
'https://example.com')
def test_with_self_hosted_and_invalid_account_service(self):
"""Testing RepositoryForm with a self-hosted hosting service and
invalid existing account due to mismatched service type
"""
account = HostingServiceAccount.objects.create(
username='testuser',
service_name='self_hosted_test',
hosting_url='https://example1.com')
account.data['password'] = 'testpass'
account.save()
form = RepositoryForm({
'name': 'test',
'hosting_type': 'test',
'hosting_account': account.pk,
'tool': self.git_tool_id,
'test_repo_name': 'myrepo',
'bug_tracker_type': 'none',
})
form.validate_repository = False
self.assertFalse(form.is_valid())
self.assertFalse(form.hosting_account_linked)
def test_with_self_hosted_and_invalid_account_local_site(self):
"""Testing RepositoryForm with a self-hosted hosting service and
invalid existing account due to mismatched Local Site
"""
account = HostingServiceAccount.objects.create(
username='testuser',
service_name='self_hosted_test',
hosting_url='https://example1.com',
local_site=LocalSite.objects.create(name='test-site'))
account.data['password'] = 'testpass'
account.save()
form = RepositoryForm({
'name': 'test',
'hosting_type': 'test',
'hosting_account': account.pk,
'tool': self.git_tool_id,
'test_repo_name': 'myrepo',
'bug_tracker_type': 'none',
})
form.validate_repository = False
self.assertFalse(form.is_valid())
self.assertFalse(form.hosting_account_linked)
def test_with_hosting_service_custom_bug_tracker(self):
"""Testing RepositoryForm with a custom bug tracker"""
account = HostingServiceAccount.objects.create(username='testuser',
service_name='test')
account.data['password'] = 'testpass'
account.save()
form = RepositoryForm({
'name': 'test',
'hosting_type': 'test',
'hosting_account': account.pk,
'tool': self.git_tool_id,
'test_repo_name': 'testrepo',
'bug_tracker_type': 'custom',
'bug_tracker': 'http://example.com/issue/%s',
})
self.assertTrue(form.is_valid())
repository = form.save()
self.assertFalse(repository.extra_data['bug_tracker_use_hosting'])
self.assertEqual(repository.bug_tracker, 'http://example.com/issue/%s')
self.assertNotIn('bug_tracker_type', repository.extra_data)
def test_with_hosting_service_bug_tracker_service(self):
"""Testing RepositoryForm with a bug tracker service"""
account = HostingServiceAccount.objects.create(username='testuser',
service_name='test')
account.data['password'] = 'testpass'
account.save()
form = RepositoryForm({
'name': 'test',
'hosting_type': 'test',
'hosting_account': account.pk,
'tool': self.git_tool_id,
'test_repo_name': 'testrepo',
'bug_tracker_type': 'test',
'bug_tracker_hosting_account_username': 'testuser',
'bug_tracker-test_repo_name': 'testrepo',
})
self.assertTrue(form.is_valid())
repository = form.save()
self.assertFalse(repository.extra_data['bug_tracker_use_hosting'])
self.assertEqual(repository.bug_tracker,
'http://example.com/testuser/testrepo/issue/%s')
self.assertEqual(repository.extra_data['bug_tracker_type'],
'test')
self.assertEqual(
repository.extra_data['bug_tracker-test_repo_name'],
'testrepo')
self.assertEqual(
repository.extra_data['bug_tracker-hosting_account_username'],
'testuser')
def test_with_hosting_service_self_hosted_bug_tracker_service(self):
"""Testing RepositoryForm with a self-hosted bug tracker service"""
account = HostingServiceAccount.objects.create(
username='testuser',
service_name='self_hosted_test',
hosting_url='https://example.com')
account.data['password'] = 'testpass'
account.save()
form = RepositoryForm({
'name': 'test',
'hosting_type': 'self_hosted_test',
'hosting_url': 'https://example.com',
'hosting_account': account.pk,
'tool': self.git_tool_id,
'test_repo_name': 'testrepo',
'bug_tracker_type': 'self_hosted_test',
'bug_tracker_hosting_url': 'https://example.com',
'bug_tracker-test_repo_name': 'testrepo',
})
form.validate_repository = False
self.assertTrue(form.is_valid())
repository = form.save()
self.assertFalse(repository.extra_data['bug_tracker_use_hosting'])
self.assertEqual(repository.bug_tracker,
'https://example.com/testrepo/issue/%s')
self.assertEqual(repository.extra_data['bug_tracker_type'],
'self_hosted_test')
self.assertEqual(
repository.extra_data['bug_tracker-test_repo_name'],
'testrepo')
self.assertEqual(
repository.extra_data['bug_tracker_hosting_url'],
'https://example.com')
def test_with_hosting_service_with_hosting_bug_tracker(self):
"""Testing RepositoryForm with hosting service's bug tracker"""
account = HostingServiceAccount.objects.create(username='testuser',
service_name='test')
account.data['password'] = 'testpass'
account.save()
form = RepositoryForm({
'name': 'test',
'hosting_type': 'test',
'hosting_account': account.pk,
'tool': self.git_tool_id,
'test_repo_name': 'testrepo',
'bug_tracker_use_hosting': True,
'bug_tracker_type': 'googlecode',
})
form.validate_repository = False
self.assertTrue(form.is_valid())
repository = form.save()
self.assertTrue(repository.extra_data['bug_tracker_use_hosting'])
self.assertEqual(repository.bug_tracker,
'http://example.com/testuser/testrepo/issue/%s')
self.assertNotIn('bug_tracker_type', repository.extra_data)
self.assertFalse('bug_tracker-test_repo_name'
in repository.extra_data)
self.assertFalse('bug_tracker-hosting_account_username'
in repository.extra_data)
def test_with_hosting_service_with_hosting_bug_tracker_and_self_hosted(
self):
"""Testing RepositoryForm with self-hosted hosting service's bug
tracker
"""
account = HostingServiceAccount.objects.create(
username='testuser',
service_name='self_hosted_test',
hosting_url='https://example.com')
account.data['password'] = 'testpass'
account.save()
account.data['authorization'] = {
'token': '1234',
}
account.save()
form = RepositoryForm({
'name': 'test',
'hosting_type': 'self_hosted_test',
'hosting_url': 'https://example.com',
'hosting_account': account.pk,
'tool': self.git_tool_id,
'test_repo_name': 'testrepo',
'bug_tracker_use_hosting': True,
'bug_tracker_type': 'googlecode',
})
form.validate_repository = False
self.assertTrue(form.is_valid())
repository = form.save()
self.assertTrue(repository.extra_data['bug_tracker_use_hosting'])
self.assertEqual(repository.bug_tracker,
'https://example.com/testrepo/issue/%s')
self.assertNotIn('bug_tracker_type', repository.extra_data)
self.assertFalse('bug_tracker-test_repo_name'
in repository.extra_data)
self.assertFalse('bug_tracker_hosting_url'
in repository.extra_data)
def test_with_hosting_service_no_bug_tracker(self):
"""Testing RepositoryForm with no bug tracker"""
account = HostingServiceAccount.objects.create(username='testuser',
service_name='test')
account.data['password'] = 'testpass'
account.save()
form = RepositoryForm({
'name': 'test',
'hosting_type': 'test',
'hosting_account': account.pk,
'tool': self.git_tool_id,
'test_repo_name': 'testrepo',
'bug_tracker_type': 'none',
})
self.assertTrue(form.is_valid())
repository = form.save()
self.assertFalse(repository.extra_data['bug_tracker_use_hosting'])
self.assertEqual(repository.bug_tracker, '')
self.assertNotIn('bug_tracker_type', repository.extra_data)
def test_with_hosting_service_with_existing_custom_bug_tracker(self):
"""Testing RepositoryForm with existing custom bug tracker"""
repository = Repository(name='test',
bug_tracker='http://example.com/issue/%s')
form = RepositoryForm(instance=repository)
self.assertFalse(form._get_field_data('bug_tracker_use_hosting'))
self.assertEqual(form._get_field_data('bug_tracker_type'), 'custom')
self.assertEqual(form.initial['bug_tracker'],
'http://example.com/issue/%s')
def test_with_hosting_service_with_existing_bug_tracker_service(self):
"""Testing RepositoryForm with existing bug tracker service"""
repository = Repository(name='test')
repository.extra_data['bug_tracker_type'] = 'test'
repository.extra_data['bug_tracker-test_repo_name'] = 'testrepo'
repository.extra_data['bug_tracker-hosting_account_username'] = \
'testuser'
form = RepositoryForm(instance=repository)
self.assertFalse(form._get_field_data('bug_tracker_use_hosting'))
self.assertEqual(form._get_field_data('bug_tracker_type'), 'test')
self.assertEqual(
form._get_field_data('bug_tracker_hosting_account_username'),
'testuser')
self.assertIn('test', form.bug_tracker_forms)
self.assertIn('default', form.bug_tracker_forms['test'])
bug_tracker_form = form.bug_tracker_forms['test']['default']
self.assertEqual(
bug_tracker_form.fields['test_repo_name'].initial,
'testrepo')
def test_with_hosting_service_with_existing_bug_tracker_using_hosting(
self):
"""Testing RepositoryForm with existing bug tracker using hosting
service
"""
account = HostingServiceAccount.objects.create(username='testuser',
service_name='test')
repository = Repository(name='test',
hosting_account=account)
repository.extra_data['bug_tracker_use_hosting'] = True
repository.extra_data['test_repo_name'] = 'testrepo'
form = RepositoryForm(instance=repository)
self.assertTrue(form._get_field_data('bug_tracker_use_hosting'))
def test_bound_forms_with_post_with_repository_service(self):
"""Testing RepositoryForm binds hosting service forms only if matching
posted repository hosting_service using default plan
"""
form = RepositoryForm({
'name': 'test',
'hosting_type': 'test',
})
# Make sure only the relevant forms are bound.
for hosting_type, repo_forms in six.iteritems(form.repository_forms):
for plan_id, repo_form in six.iteritems(repo_forms):
self.assertEqual(repo_form.is_bound,
hosting_type == 'test' and
plan_id == form.DEFAULT_PLAN_ID)
# Bug tracker info wasn't set in the form above.
for hosting_type, bug_forms in six.iteritems(form.bug_tracker_forms):
for plan_id, bug_form in six.iteritems(bug_forms):
self.assertFalse(bug_form.is_bound)
# Auth forms are never bound on initialization.
for hosting_type, auth_form in six.iteritems(form.hosting_auth_forms):
self.assertFalse(auth_form.is_bound)
def test_bound_forms_with_post_with_bug_tracker_service(self):
"""Testing RepositoryForm binds hosting service forms only if matching
posted bug tracker hosting_service using default plan
"""
form = RepositoryForm({
'name': 'test',
'bug_tracker_type': 'test',
})
# Make sure only the relevant forms are bound.
for hosting_type, bug_forms in six.iteritems(form.bug_tracker_forms):
for plan_id, bug_form in six.iteritems(bug_forms):
self.assertEqual(bug_form.is_bound,
hosting_type == 'test' and
plan_id == form.DEFAULT_PLAN_ID)
# Repository info wasn't set in the form above.
for hosting_type, repo_forms in six.iteritems(form.repository_forms):
for plan_id, repo_form in six.iteritems(repo_forms):
self.assertFalse(repo_form.is_bound)
# Auth forms are never bound on initialization.
for hosting_type, auth_form in six.iteritems(form.hosting_auth_forms):
self.assertFalse(auth_form.is_bound)
def test_bound_forms_with_post_with_repo_service_and_plan(self):
"""Testing RepositoryForm binds hosting service forms only if matching
posted repository hosting_service with specific plans
"""
form = RepositoryForm({
'name': 'test',
'hosting_type': 'github',
'repository_plan': 'public',
})
# Make sure only the relevant forms are bound.
for hosting_type, repo_forms in six.iteritems(form.repository_forms):
for plan_id, repo_form in six.iteritems(repo_forms):
self.assertEqual(repo_form.is_bound,
hosting_type == 'github' and
plan_id == 'public')
# Bug tracker info wasn't set in the form above.
for hosting_type, bug_forms in six.iteritems(form.bug_tracker_forms):
for plan_id, bug_form in six.iteritems(bug_forms):
self.assertFalse(bug_form.is_bound)
# Auth forms are never bound on initialization.
for hosting_type, auth_form in six.iteritems(form.hosting_auth_forms):
self.assertFalse(auth_form.is_bound)
def test_bound_forms_with_post_with_bug_tracker_service_and_plan(self):
"""Testing RepositoryForm binds hosting service forms only if matching
posted bug tracker hosting_service with specific plans
"""
form = RepositoryForm({
'name': 'test',
'bug_tracker_type': 'github',
'bug_tracker_plan': 'public',
})
# Make sure only the relevant forms are bound.
for hosting_type, bug_forms in six.iteritems(form.bug_tracker_forms):
for plan_id, bug_form in six.iteritems(bug_forms):
self.assertEqual(bug_form.is_bound,
hosting_type == 'github' and
plan_id == 'public')
# Repository info wasn't set in the form above.
for hosting_type, repo_forms in six.iteritems(form.repository_forms):
for plan_id, repo_form in six.iteritems(repo_forms):
self.assertFalse(repo_form.is_bound)
# Auth forms are never bound on initialization.
for hosting_type, auth_form in six.iteritems(form.hosting_auth_forms):
self.assertFalse(auth_form.is_bound)
def test_with_set_access_list(self):
"""Testing RepositoryForm with setting users access list"""
user1 = User.objects.create(username='user1')
user2 = User.objects.create(username='user2')
User.objects.create(username='user3')
group1 = self.create_review_group(name='group1', invite_only=True)
group2 = self.create_review_group(name='group2', invite_only=True)
self.create_review_group(name='group3', invite_only=True)
form = RepositoryForm({
'name': 'test',
'hosting_type': 'custom',
'tool': self.git_tool_id,
'path': '/path/to/test.git',
'bug_tracker_type': 'none',
'public': False,
'users': [user1.pk, user2.pk],
'review_groups': [group1.pk, group2.pk],
})
self.assertTrue(form.is_valid())
repository = form.save()
self.assertFalse(repository.public)
self.assertEqual(list(repository.users.all()), [user1, user2])
self.assertEqual(list(repository.review_groups.all()),
[group1, group2])
| 40.745789 | 79 | 0.603995 | 5,633 | 53,214 | 5.441328 | 0.047577 | 0.041108 | 0.03915 | 0.023556 | 0.903918 | 0.880852 | 0.856514 | 0.812991 | 0.78526 | 0.75469 | 0 | 0.001808 | 0.293231 | 53,214 | 1,305 | 80 | 40.777011 | 0.813161 | 0.110554 | 0 | 0.779381 | 0 | 0 | 0.170465 | 0.03359 | 0 | 0 | 0 | 0 | 0.216495 | 1 | 0.049485 | false | 0.023711 | 0.010309 | 0 | 0.061856 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4946886fed82288d89d4f979800ef79673c60b6f | 216 | py | Python | django_redis/compressors/identity.py | DoubleCai/django-redis | 925cfcd3f7ed2ba491b3474023794d9426c69cbc | [
"BSD-3-Clause"
] | 727 | 2020-02-22T16:27:23.000Z | 2022-03-31T13:30:20.000Z | django_redis/compressors/identity.py | DoubleCai/django-redis | 925cfcd3f7ed2ba491b3474023794d9426c69cbc | [
"BSD-3-Clause"
] | 172 | 2020-02-22T10:41:48.000Z | 2022-03-30T15:13:53.000Z | django_redis/compressors/identity.py | DoubleCai/django-redis | 925cfcd3f7ed2ba491b3474023794d9426c69cbc | [
"BSD-3-Clause"
] | 104 | 2020-02-26T15:33:55.000Z | 2022-03-06T05:01:33.000Z | from .base import BaseCompressor
class IdentityCompressor(BaseCompressor):
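"""No-op compressor: compress() and decompress() both return the value unchanged."""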
def compress(self, value: bytes) -> bytes:
return value
def decompress(self, value: bytes) -> bytes:
return value
| 21.6 | 48 | 0.689815 | 23 | 216 | 6.478261 | 0.565217 | 0.120805 | 0.187919 | 0.255034 | 0.402685 | 0.402685 | 0 | 0 | 0 | 0 | 0 | 0 | 0.226852 | 216 | 9 | 49 | 24 | 0.892216 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0.333333 | false | 0 | 0.166667 | 0.333333 | 1 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 7 |
498f797e9a2a9f2186d34868f94718df2bad0dd3 | 1,758 | py | Python | My_Account/migrations/0028_auto_20200708_1845.py | CHESyrian/Syrians | 8376e9bed6e3a03f536d8aacd523d630f6bc4345 | [
"MIT"
] | null | null | null | My_Account/migrations/0028_auto_20200708_1845.py | CHESyrian/Syrians | 8376e9bed6e3a03f536d8aacd523d630f6bc4345 | [
"MIT"
] | null | null | null | My_Account/migrations/0028_auto_20200708_1845.py | CHESyrian/Syrians | 8376e9bed6e3a03f536d8aacd523d630f6bc4345 | [
"MIT"
] | null | null | null | # Generated by Django 3.0.6 on 2020-07-08 15:45
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('My_Account', '0027_auto_20200708_1413'),
]
operations = [
migrations.AlterField(
model_name='profile_model',
name='Codepen_Link',
field=models.CharField(blank=True, max_length=160, null=True),
),
migrations.AlterField(
model_name='profile_model',
name='Facebook_Link',
field=models.CharField(blank=True, max_length=160, null=True),
),
migrations.AlterField(
model_name='profile_model',
name='Github_Link',
field=models.CharField(blank=True, max_length=160, null=True),
),
migrations.AlterField(
model_name='profile_model',
name='Instagram_Link',
field=models.CharField(blank=True, max_length=160, null=True),
),
migrations.AlterField(
model_name='profile_model',
name='Kaggle_Link',
field=models.CharField(blank=True, max_length=160, null=True),
),
migrations.AlterField(
model_name='profile_model',
name='LinkedIn_Link',
field=models.CharField(blank=True, max_length=160, null=True),
),
migrations.AlterField(
model_name='profile_model',
name='Twitter_Link',
field=models.CharField(blank=True, max_length=160, null=True),
),
migrations.AlterField(
model_name='profile_model',
name='Youtube_Link',
field=models.CharField(blank=True, max_length=160, null=True),
),
]
| 32.555556 | 74 | 0.585893 | 182 | 1,758 | 5.461538 | 0.263736 | 0.144869 | 0.201207 | 0.2334 | 0.788732 | 0.788732 | 0.788732 | 0.743461 | 0.743461 | 0.743461 | 0 | 0.044643 | 0.299204 | 1,758 | 53 | 75 | 33.169811 | 0.762175 | 0.025597 | 0 | 0.680851 | 1 | 0 | 0.137347 | 0.013442 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.021277 | 0 | 0.085106 | 0 | 0 | 0 | 0 | null | 0 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
4992568fe865b5cfc5af2dea09138d9c28eb5fd0 | 23,543 | py | Python | tests/test_app.py | wllmsash/yget | cb3828c62afc00655d8a3e72987c6c563c437580 | [
"MIT"
] | null | null | null | tests/test_app.py | wllmsash/yget | cb3828c62afc00655d8a3e72987c6c563c437580 | [
"MIT"
] | null | null | null | tests/test_app.py | wllmsash/yget | cb3828c62afc00655d8a3e72987c6c563c437580 | [
"MIT"
] | null | null | null | import unittest
from collections import deque
from yget.app import App
from .mock_argument_parser import MockArgumentParser
from .mock_bookmarks_parser import MockBookmarksParser
from .mock_downloader_factory import MockDownloaderFactory
from .mock_file_reader import MockFileReader
from .mock_input_provider import MockInputProvider
from .mock_path_validator import MockPathValidator
from .mock_logger import MockLogger
class TestApp(unittest.TestCase):
def make_app(self,
mock_argument_parser=None,
mock_bookmarks_parser=None,
mock_downloader_factory=None,
mock_file_reader=None,
mock_input_provider=None,
mock_path_validator=None,
mock_logger=None):
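# Build an App wired with mock collaborators; any dependency not supplied by the test falls back to a default mock.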
if not mock_argument_parser:
mock_argument_parser = MockArgumentParser()
if not mock_bookmarks_parser:
mock_bookmarks_parser = MockBookmarksParser()
if not mock_downloader_factory:
mock_downloader_factory = MockDownloaderFactory()
lines_for_file = {
"FILE_1": ["LINE_1", "LINE_2"],
"FILE_2": ["LINE_3", "LINE_4"]
}
if not mock_file_reader:
mock_file_reader = MockFileReader(lambda x: "", lambda x: lines_for_file[x])
if not mock_input_provider:
mock_input_provider = MockInputProvider(lambda x: "", lambda x: "")
if not mock_path_validator:
mock_path_validator = MockPathValidator(lambda x: True, lambda x: True)
if not mock_logger:
mock_logger = MockLogger()
return App(mock_argument_parser, mock_bookmarks_parser, mock_downloader_factory, mock_file_reader, mock_input_provider, mock_path_validator, mock_logger)
def stdin_get_response(self, responses):
if len(responses) > 0:
return responses.popleft()
raise EOFError()
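# Note (added comment, hedged): the five-element `options` tuple used throughout
# the tests below appears to carry boolean flags where index 1 ~ audio-only,
# index 2 ~ wav and index 3 ~ mp3. This mapping is inferred purely from the test
# names and is not documented anywhere in this file, so treat it as an assumption.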
def test_app_called_with_invalid_arguments_logs_and_returns_error_code(self):
mock_argument_parser = MockArgumentParser()
mock_logger = MockLogger()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_logger=mock_logger)
code = app.run()
expected_line_list = ["ARGUMENTS_INVALID_MESSAGE"]
self.assertListEqual(mock_logger.write_line_calls, expected_line_list)
self.assertEqual(code, 1)
def test_app_in_help_mode_logs_and_returns_success_code(self):
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_help_mode()
mock_logger = MockLogger()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_logger=mock_logger)
code = app.run()
expected_line_list = ["HELP_MESSAGE"]
self.assertListEqual(mock_logger.write_line_calls, expected_line_list)
self.assertEqual(code, 0)
def test_app_with_non_existent_output_directory_logs_and_returns_error_code(self):
options = (False, False, False, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_files_mode([], options)
mock_argument_parser.set_output_directory("MY_OUTPUT_DIRECTORY")
mock_path_validator = MockPathValidator(lambda x: False, lambda x: True)
mock_logger = MockLogger()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_path_validator=mock_path_validator, mock_logger=mock_logger)
code = app.run()
expected_line_list = ["Output directory 'MY_OUTPUT_DIRECTORY' does not exist"]
self.assertListEqual(mock_logger.write_line_calls, expected_line_list)
self.assertEqual(code, 1)
def test_app_in_help_mode_with_non_existent_output_directory_logs_and_returns_error_code(self):
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_help_mode()
mock_path_validator = MockPathValidator(lambda x: False, lambda x: True)
mock_logger = MockLogger()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_path_validator=mock_path_validator, mock_logger=mock_logger)
code = app.run()
expected_line_list = ["HELP_MESSAGE"]
self.assertListEqual(mock_logger.write_line_calls, expected_line_list)
self.assertEqual(code, 0)
def test_app_in_files_mode_with_non_existent_file_logs_and_returns_error_code(self):
options = (False, False, False, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_files_mode(["FILE_1", "FILE_2"], options)
mock_path_validator = MockPathValidator(lambda x: True, lambda x: False)
mock_logger = MockLogger()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_path_validator=mock_path_validator, mock_logger=mock_logger)
code = app.run()
expected_line_list = ["Input file 'FILE_1' does not exist"]
self.assertListEqual(mock_logger.write_line_calls, expected_line_list)
self.assertEqual(code, 1)
def test_app_in_files_mode_with_no_files_accepts_stdin(self):
options = (False, False, False, False, False)
responses = deque(["URL_1", "URL_2"])
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_files_mode([], options)
mock_downloader_factory = MockDownloaderFactory()
mock_input_provider = MockInputProvider(lambda x: self.stdin_get_response(responses), lambda x: "")
mock_logger = MockLogger()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_downloader_factory=mock_downloader_factory, mock_input_provider=mock_input_provider, mock_logger=mock_logger)
code = app.run()
mock_downloader = mock_downloader_factory.downloader
self.assertListEqual(mock_downloader.download_video_calls, ["URL_1", "URL_2"])
self.assertEqual(code, 0)
def test_app_in_files_mode_with_audio_only_with_successful_download_returns(self):
options = (False, True, False, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_files_mode(["FILE_1", "FILE_2"], options)
mock_downloader_factory = MockDownloaderFactory()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_downloader_factory=mock_downloader_factory)
code = app.run()
mock_downloader = mock_downloader_factory.downloader
self.assertListEqual(mock_downloader.download_video_calls, ["LINE_1", "LINE_2", "LINE_3", "LINE_4"])
self.assertEqual(code, 0)
def test_app_in_files_mode_with_audio_only_with_raised_download_propagates(self):
options = (False, True, False, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_files_mode(["FILE_1", "FILE_2"], options)
mock_downloader_factory = MockDownloaderFactory(raise_in_download_videos=True)
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_downloader_factory=mock_downloader_factory)
raised = False
try:
app.run()
except:
raised = True
self.assertTrue(raised)
def test_app_in_files_mode_with_wav_with_successful_download_returns(self):
options = (False, False, True, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_files_mode(["FILE_1", "FILE_2"], options)
mock_downloader_factory = MockDownloaderFactory()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_downloader_factory=mock_downloader_factory)
code = app.run()
mock_downloader = mock_downloader_factory.downloader
self.assertListEqual(mock_downloader.download_video_calls, ["LINE_1", "LINE_2", "LINE_3", "LINE_4"])
self.assertEqual(code, 0)
def test_app_in_files_mode_with_wav_with_raised_download_propagates(self):
options = (False, False, True, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_files_mode(["FILE_1", "FILE_2"], options)
mock_downloader_factory = MockDownloaderFactory(raise_in_download_videos=True)
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_downloader_factory=mock_downloader_factory)
raised = False
try:
app.run()
except:
raised = True
self.assertTrue(raised)
def test_app_in_files_mode_with_mp3_with_successful_download_returns(self):
options = (False, False, False, True, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_files_mode(["FILE_1", "FILE_2"], options)
mock_downloader_factory = MockDownloaderFactory()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_downloader_factory=mock_downloader_factory)
code = app.run()
mock_downloader = mock_downloader_factory.downloader
self.assertListEqual(mock_downloader.download_video_calls, ["LINE_1", "LINE_2", "LINE_3", "LINE_4"])
self.assertEqual(code, 0)
def test_app_in_files_mode_with_mp3_with_raised_download_propagates(self):
options = (False, False, False, True, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_files_mode(["FILE_1", "FILE_2"], options)
mock_downloader_factory = MockDownloaderFactory(raise_in_download_videos=True)
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_downloader_factory=mock_downloader_factory)
raised = False
try:
app.run()
except:
raised = True
self.assertTrue(raised)
def test_app_in_files_mode_with_successful_download_returns(self):
options = (False, False, False, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_files_mode(["FILE_1", "FILE_2"], options)
mock_downloader_factory = MockDownloaderFactory()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_downloader_factory=mock_downloader_factory)
code = app.run()
mock_downloader = mock_downloader_factory.downloader
self.assertListEqual(mock_downloader.download_video_calls, ["LINE_1", "LINE_2", "LINE_3", "LINE_4"])
self.assertEqual(code, 0)
def test_app_in_files_mode_with_raised_download_propagates(self):
options = (False, False, False, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_files_mode(["FILE_1", "FILE_2"], options)
mock_downloader_factory = MockDownloaderFactory(raise_in_download_videos=True)
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_downloader_factory=mock_downloader_factory)
raised = False
try:
app.run()
except:
raised = True
self.assertTrue(raised)
def test_app_in_url_mode_with_audio_only_with_successful_download_returns(self):
options = (False, True, False, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_url_mode("MY_URL", options)
mock_downloader_factory = MockDownloaderFactory()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_downloader_factory=mock_downloader_factory)
code = app.run()
mock_downloader = mock_downloader_factory.downloader
self.assertListEqual(mock_downloader.download_video_calls, ["MY_URL"])
self.assertEqual(code, 0)
def test_app_in_url_mode_with_audio_only_with_raised_download_propagates(self):
options = (False, True, False, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_url_mode("MY_URL", options)
mock_downloader_factory = MockDownloaderFactory(raise_in_download_videos=True)
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_downloader_factory=mock_downloader_factory)
raised = False
try:
app.run()
except:
raised = True
self.assertTrue(raised)
def test_app_in_url_mode_with_wav_with_successful_download_returns(self):
options = (False, False, True, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_url_mode("MY_URL", options)
mock_downloader_factory = MockDownloaderFactory()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_downloader_factory=mock_downloader_factory)
code = app.run()
mock_downloader = mock_downloader_factory.downloader
self.assertListEqual(mock_downloader.download_video_calls, ["MY_URL"])
self.assertEqual(code, 0)
def test_app_in_url_mode_with_wav_with_raised_download_propagates(self):
options = (False, False, True, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_url_mode("MY_URL", options)
mock_downloader_factory = MockDownloaderFactory(raise_in_download_videos=True)
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_downloader_factory=mock_downloader_factory)
raised = False
try:
app.run()
except:
raised = True
self.assertTrue(raised)
def test_app_in_url_mode_with_mp3_with_successful_download_returns(self):
options = (False, False, False, True, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_url_mode("MY_URL", options)
mock_downloader_factory = MockDownloaderFactory()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_downloader_factory=mock_downloader_factory)
code = app.run()
mock_downloader = mock_downloader_factory.downloader
self.assertListEqual(mock_downloader.download_video_calls, ["MY_URL"])
self.assertEqual(code, 0)
def test_app_in_url_mode_with_mp3_with_raised_download_propagates(self):
options = (False, False, False, True, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_url_mode("MY_URL", options)
mock_downloader_factory = MockDownloaderFactory(raise_in_download_videos=True)
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_downloader_factory=mock_downloader_factory)
raised = False
try:
app.run()
except:
raised = True
self.assertTrue(raised)
def test_app_in_url_mode_with_successful_download_returns(self):
options = (False, False, False, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_url_mode("MY_URL", options)
mock_downloader_factory = MockDownloaderFactory()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_downloader_factory=mock_downloader_factory)
code = app.run()
mock_downloader = mock_downloader_factory.downloader
self.assertListEqual(mock_downloader.download_video_calls, ["MY_URL"])
self.assertEqual(code, 0)
def test_app_in_url_mode_with_raised_download_propagates(self):
options = (False, False, False, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_url_mode("MY_URL", options)
mock_downloader_factory = MockDownloaderFactory(raise_in_download_videos=True)
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_downloader_factory=mock_downloader_factory)
raised = False
try:
app.run()
except:
raised = True
self.assertTrue(raised)
def test_app_in_bookmarks_mode_with_non_existent_bookmarks_return_error_code(self):
options = (False, False, False, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_bookmarks_mode("MY_BOOKMARKS", options)
mock_path_validator = MockPathValidator(lambda x: True, lambda x: False)
mock_logger = MockLogger()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_path_validator=mock_path_validator, mock_logger=mock_logger)
code = app.run()
expected_line_list = ["Bookmarks file 'MY_BOOKMARKS' does not exist"]
self.assertListEqual(mock_logger.write_line_calls, expected_line_list)
self.assertEqual(code, 1)
def test_app_in_bookmarks_mode_with_invalid_bookmarks_return_error_code(self):
options = (False, False, False, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_bookmarks_mode("MY_BOOKMARKS", options)
mock_logger = MockLogger()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_logger=mock_logger)
code = app.run()
expected_line_list = ["Bookmarks file 'MY_BOOKMARKS' not valid"]
self.assertListEqual(mock_logger.write_line_calls, expected_line_list)
self.assertEqual(code, 1)
def test_app_in_bookmarks_mode_with_audio_only_with_successful_download_returns(self):
options = (False, True, False, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_bookmarks_mode("MY_BOOKMARKS", options)
mock_bookmarks_parser = MockBookmarksParser()
mock_bookmarks_parser.set_valid(["URL_1", "URL_2"])
mock_downloader_factory = MockDownloaderFactory()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_bookmarks_parser=mock_bookmarks_parser, mock_downloader_factory=mock_downloader_factory)
code = app.run()
mock_downloader = mock_downloader_factory.downloader
self.assertListEqual(mock_downloader.download_video_calls, ["URL_1", "URL_2"])
self.assertEqual(code, 0)
def test_app_in_bookmarks_mode_with_audio_only_with_raised_download_propagates(self):
options = (False, True, False, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_bookmarks_mode("MY_BOOKMARKS", options)
mock_bookmarks_parser = MockBookmarksParser()
mock_bookmarks_parser.set_valid(["URL_1", "URL_2"])
mock_downloader_factory = MockDownloaderFactory(raise_in_download_videos=True)
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_bookmarks_parser=mock_bookmarks_parser, mock_downloader_factory=mock_downloader_factory)
raised = False
try:
app.run()
except:
raised = True
self.assertTrue(raised)
def test_app_in_bookmarks_mode_with_wav_with_successful_download_returns(self):
options = (False, False, True, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_bookmarks_mode("MY_BOOKMARKS", options)
mock_bookmarks_parser = MockBookmarksParser()
mock_bookmarks_parser.set_valid(["URL_1", "URL_2"])
mock_downloader_factory = MockDownloaderFactory()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_bookmarks_parser=mock_bookmarks_parser, mock_downloader_factory=mock_downloader_factory)
code = app.run()
mock_downloader = mock_downloader_factory.downloader
self.assertListEqual(mock_downloader.download_video_calls, ["URL_1", "URL_2"])
self.assertEqual(code, 0)
def test_app_in_bookmarks_mode_with_wav_with_raised_download_propagates(self):
options = (False, False, True, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_bookmarks_mode("MY_BOOKMARKS", options)
mock_bookmarks_parser = MockBookmarksParser()
mock_bookmarks_parser.set_valid(["URL_1", "URL_2"])
mock_downloader_factory = MockDownloaderFactory(raise_in_download_videos=True)
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_bookmarks_parser=mock_bookmarks_parser, mock_downloader_factory=mock_downloader_factory)
raised = False
try:
app.run()
except:
raised = True
self.assertTrue(raised)
def test_app_in_bookmarks_mode_with_mp3_with_successful_download_returns(self):
options = (False, False, False, True, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_bookmarks_mode("MY_BOOKMARKS", options)
mock_bookmarks_parser = MockBookmarksParser()
mock_bookmarks_parser.set_valid(["URL_1", "URL_2"])
mock_downloader_factory = MockDownloaderFactory()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_bookmarks_parser=mock_bookmarks_parser, mock_downloader_factory=mock_downloader_factory)
code = app.run()
mock_downloader = mock_downloader_factory.downloader
self.assertListEqual(mock_downloader.download_video_calls, ["URL_1", "URL_2"])
self.assertEqual(code, 0)
def test_app_in_bookmarks_mode_with_mp3_with_raised_download_propagates(self):
options = (False, False, False, True, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_bookmarks_mode("MY_BOOKMARKS", options)
mock_bookmarks_parser = MockBookmarksParser()
mock_bookmarks_parser.set_valid(["URL_1", "URL_2"])
mock_downloader_factory = MockDownloaderFactory(raise_in_download_videos=True)
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_bookmarks_parser=mock_bookmarks_parser, mock_downloader_factory=mock_downloader_factory)
raised = False
try:
app.run()
except:
raised = True
self.assertTrue(raised)
def test_app_in_bookmarks_mode_with_successful_download_returns(self):
options = (False, False, False, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_bookmarks_mode("MY_BOOKMARKS", options)
mock_bookmarks_parser = MockBookmarksParser()
mock_bookmarks_parser.set_valid(["URL_1", "URL_2"])
mock_downloader_factory = MockDownloaderFactory()
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_bookmarks_parser=mock_bookmarks_parser, mock_downloader_factory=mock_downloader_factory)
code = app.run()
mock_downloader = mock_downloader_factory.downloader
self.assertListEqual(mock_downloader.download_video_calls, ["URL_1", "URL_2"])
self.assertEqual(code, 0)
def test_app_in_bookmarks_mode_with_raised_download_propagates(self):
options = (False, False, False, False, False)
mock_argument_parser = MockArgumentParser()
mock_argument_parser.set_bookmarks_mode("MY_BOOKMARKS", options)
mock_bookmarks_parser = MockBookmarksParser()
mock_bookmarks_parser.set_valid(["URL_1", "URL_2"])
mock_downloader_factory = MockDownloaderFactory(raise_in_download_videos=True)
app = self.make_app(mock_argument_parser=mock_argument_parser, mock_bookmarks_parser=mock_bookmarks_parser, mock_downloader_factory=mock_downloader_factory)
raised = False
try:
app.run()
except:
raised = True
self.assertTrue(raised)
| 35.943511 | 185 | 0.725396 | 2,787 | 23,543 | 5.665949 | 0.038034 | 0.10107 | 0.151605 | 0.091951 | 0.929643 | 0.920081 | 0.903236 | 0.898107 | 0.89494 | 0.891647 | 0 | 0.005083 | 0.197808 | 23,543 | 654 | 186 | 35.998471 | 0.831039 | 0 | 0 | 0.805419 | 0 | 0 | 0.034405 | 0.001954 | 0 | 0 | 0 | 0 | 0.128079 | 1 | 0.083744 | false | 0 | 0.024631 | 0 | 0.115764 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
4995e4f381c7bdbaa18f26cab6ce709adf82d556 | 5,574 | py | Python | B0k3p.py | facebooktool446/Cr4ck | 166e118e134c59c0bb1ab3e46eafdc88a132c838 | [
"Apache-2.0"
] | 2 | 2020-11-30T06:05:20.000Z | 2020-11-30T06:06:27.000Z | B0k3p.py | facebooktool446/Cr4ck | 166e118e134c59c0bb1ab3e46eafdc88a132c838 | [
"Apache-2.0"
] | null | null | null | B0k3p.py | facebooktool446/Cr4ck | 166e118e134c59c0bb1ab3e46eafdc88a132c838 | [
"Apache-2.0"
] | null | null | null | #kalo mau recode ngaca dulu ngentod
#Facebook : https://m.facebook.com/KM39453
#instagram : @yayanxd_
import zlib,base64
exec(zlib.decompress(base64.b64decode("eJztW1uP3Dh2fu9fQWsQqGpcXfduX3bKXo8vs17PxVi3gQTdRoElUVUcXVeU3NWxO5jHfdiH9a5nDQQZBNg85S/kNf+kf0F+Qg4pUbeiLtV2JhMgRKOriuI5PDz8eC4k9dmNUczC0Yp6I+K9RsFFtPG92cFn6PDzQ2T4JvXWd1EcWYe3ec3BZ8+w4yMXxygk8JQgb40NjMzYiZHte5HvHHz2zQWysEFWvm+j3iaKAnZ3NDo/Px/K2qHhu6Nn38zuzI9m/QPqBn4YIZ8N2AUbhGQQUZcMvme+NwixZ/ou1P0+JixiB1bou2jF5iil+ZLgOKJW7Lzw4wBhhgIcMhIm7QzfM+IwJF40tOIoDgmTZCebkGDzue87j7fEiCM/PDgwiYUMh+Cw1797gKD4bAjyRMTtaaJe64s2No5wj6VtLD9EBqIeYugm0s88PanmBWiHLDL9OBqehzQiPaOvemY5Mdv08kd87EPmEBL0JsPRbDxOel1hj2SSFdqMh5OEVoiladqZdxadjWez0ztjl9f/E/wlv6fu1fs/Xv3h36/e/ympmLu95MvEvfrwb8urD3+T9f0airHL2aGDtIsx7+K//vXPPzz83fzhM/Tg2ctv0ZMHDx9/+d13z3j1QUGUw7oi+zzzDpD4PvkVF+ivf6j+/fhBWYnyb2inUklfanmw0+eP769+/FPh76dqfZXdu5zduzYe71ChxU9Z58dugRlCSvHf52yqj941UwtS1NbZ7vh+/OfSKBo7S6gzBWedzVzOpmk2iuorcXy3B+WHQoeoKngyA2nlTwgVvqLir850BxK0V+9/+EX9yWVZ0MabfNr/80NefZkttqn7IAabH3Jd5IvhLsrp/gFfYO/vH12P+VOPRXgdYreG+a8vOPetubwe+69otIlX9bKvxXPhcMQwDq87jq9C7mKerOo6+g02bBKiExK68RY9Bb/lEUZxubNfMGTAd0hX41BbuhprQb0gjnqabHYWnSYjOXZhrDb20GMvgnG/9KLYRs+Iy8ll41da4pyohSy0WCBNy/xjwbsmIQf6cmzPgmFwkdIQh5GsNXeYZEulHEL5z4gT4zATPxHeXVlNXjIIqVcYzBs5lImsuUTfxA6m6GE4N2z09JHWTDfN6U7iMOLRAA4hGsIBjrhuDN+mEHZYqxY+s5zPy8DEEUEnEJywFqp5TvVbH4KQtYCoDLJaiMc5caLJtL1KbTkKPOnQr96/TyH01//IF0x1vif5hJclgda/4KWQh2N5XJXh7jmBcayh2rtxA+WDifiEQShIkUu89Tr2MAeA468ph0GKAxkVx2K1uMQVYDNCsB1oA8gJkE3Q2vfXDkHGBoJYkgAKWO5gKuVVWr+Z2IXltTXXh35APCQjcYVJHAm8a32Uc4CFzKiBFkiXdElNKYh/c6nnFGvHh8WPTEoGxoYYNoTtLHaiAUgM483bURO4nr7KK4ACasZ5BZDblTYbrqJylWC7QyiaFaoSIcp1wlJgFmdmLitReFGpkZyFRFyPkA4k06n1hzyNKATvopCtQYIIPaEO+daPnoCQ5uMw9MMmvkUjC4vrHpKz+lAiZ4UgFcL3lc5nWkJszvVNKql2F35flltQ5lJowggbrknUk3Prhy4GQUYu0QavAejWxeIJBkvcH6SDXgCrPkw9GP3inAqWFtISPkuAPSQ3y1UcRb6niQQpCnu8z6rCi8QPAowAj2v0AIaKAmrTkAOegZuBRHDdiY8s5wDyynQNtHNAOCSJVgutLJbM3Ih9KlX5qq+mLbkrVSlav9IUv1zhDTjdDWZ4kBqBKOYWJJ/qGzdUpmm31KK3XGQ6rZj5JH8WGh5om8h1hkkNYN2intnTsDaAKQADuNC+FDLnoY7WP9UgsbZARyW0NEmULJYuQgeYMXWz9lFTO4442tPRNYxfG23/kW9rhMP0U1Mjv1U5T3mXuUaa5VOvQiF1F1WmFqdFCfUKDEkUhx4qwbxi02rQXYjJAJyncjshg+urqh3LMf0CO3ijBjY3z8Jx9mLQroddMuDCn/uhOQAhU4tUkafO+1ScDy8rgII2Oxof3z46mk1uTW//3a2Hx1PrtkHuWLfmqwl8nRuT6cwwprP57Bae49lUK7OAiccu4zZWrVIdGwZhbBn5NvH0u2g1qGmXzDW00H/74rtv9bpmzLSXYI8Z9Tk3fVrbkLiYOtAk01xNO8c3sEM4L+ItX76o5ScVDy2zOaiXkfOjPqvltiYeCSG8XQLi+ViWKbg53aR+8HTNG8yso6Mj684da3U8sQzzFsZjYz63jm5bR9MpsY6r9BWPhwNajGZWh1BR3pF0CSQi5ghDOjwU+NPLHABRge8xHqyUTAgwGqSIWCQflYUKzk1//OCBzh2YZDKMyFZl+BI/YWlnYXlZTX41m7pX//KXbGmhvPqNnO00YRXx+Q/3ii3k5F0qVYyQNoAQc6Gp/EsikeJBGlrdXKDJ7kMYM6zVepvkUIsMcQAe2syW+U3trXZTSqrosNnJ5i5fTyRjh6KTaBvpAx3rXX2/dPtSKr61+1aH/1KydK+3Gvc5fJqr29zlOeeb2r3+qU54QLh02Vp/pZSnHgRgObe7EJi1QmDWAoGG+W9AgIy41RBoB0HCwN4XBx3CLQUYhKg/HxpqBOTJzq66+EEC5VBxKIt6+tnb0eHVD3/TW6LcRpi8oZc5UHj6REx0NwNHL08e3vCYOlFS/1I+72fxqVBb4ENPKvJZQi4aqaj5cBVkk4QMnhaIsligCYtZKW1UTCvhA5hlCngKnaoGQ3xeNd/QKguy7NimNekNNOUYENQi3sOO09NdapoO0e59gZHhACIW2nCoIR70LbTe8PP7fe2e+PhihO/pA6FpfN7v787+ls9+2odi1rltCULfgqxSmJTt6VhtOpLWplxS29PJqwyyBcG1Re/M/Lx/X0SsPc6t3+f/VEaX922FFNixpO/6jrnmqBeTfW13jcBcriELHIgy9ZHeh0fy1329RtxkTejpksgzLbFJy3ny8TpgF6jZ78NPLYMqIyZPPvn2DXZX1EFPHw1hMtVg5Dr5mm5whE4g5PLQ1xjCBZmg8ilWjDUBpTLhAoqBXkgp9DSlAEMlU4rd7sCRcKDpr6p4SgN6apYXBT8VdahNmGph1GRRvP01F0xC778mlSWTLo5RbFGxQJJlwTsSEFQwwec81IaueaOKAnkH1fEXlQC0qv2ZlmymhJuJa2CwfmbsBqDT7OizmrJI+UKCjaiqXpUeRcMumqQmQxUlpsZmNRSmR21vvhhtZveK2t21OsKiUQ4V1sHo8NaNdqewjkVbsZTBmSttT2J6Up711qfR0df3l/ItW49ae2EVDEbBXrwpGovLFkuht1mKFwRYrW0ceBc4sxViZtTGQoH21FzwJ232QtvttLBD081irC4YwaGx6flBBOlaVc7kYRXWSdsuuOZzxNTA3kJgWID2KMX2FyZ9Dd/SRsb3GdqhPoV6IlQT2EW3KlDxTcgU79r/Et4/FeC1+1pdD5WoseIi94J8vXNM0fcb
AK/DMYi97+OIQ1DiPp0m1VAz1KVoT34p8V7ccqvrMQd9J8yvw3gFbhoGJoLenX2mEK+qiJdtr4t5Rew4UgePvPfrQPt/CNk6NRNs6z+HLa+33j8LlkuWVMJYTIjKfCcoUlpvTtO6gfyR1hvEXi55vrpc8pNYfbl0IWRcLvUdWeu2zxlhRZy/SHbs1LsAHO+8dXqoVteE75fqyRrR7wqaS1XT9Bqe6lF6D07ZAT+p1eXk38rvdMzS431Rm6PgoTh1fYRDmsTU1Qy+leu0hevX1LPRc59FeTu0dyezlk6eE8+AT0gJvsUu3pv9vIX9V2Ec7C/0URd9o+fxCuKYvZkfq5gXbb/oqJXtL/b+gbu7mZSUhl24CIN9yM6R9eZLGjXzCRYjZbPQtAYz3Xrmg06oiW208h2yQbbPfG+t3soROwuyT34A0NAr5Jz5AZ76vEwfuUTvq31wa3LdYAJSAeT+30KZx4N8ddZZOdhp42DzvmpvBmTW5T5quglQLCBAzrl0G6u+fLrZLqihtEueZZZcrg4SFZQjvw5DEjjArldmPNArV1aa5jcRzN3dvP9UolVY7wgnarsBsLCRk1Z2gdysCXK2ZzWhLSChYARmu9sNlBrJs9C+unST6lFA/MAho/u/X+g3QST1UkoKjI2HdpkG+BjH+yH6jhrRkUA0NnEnszVvUmvtrY+NoF9RhgtxqJRlMh5nDZ8+Qk26TdOR2okD8uRa4EdNmzKa1Uar0D9nZCQ6AOPrriD5GN2HnIAH7apUpVg+wfypLRIM+WSvGTz6iIXxMhV/9NS8hopBDOA/pMykaxhV26UmXtIzCG2UpnPDYBNIlQOvJgPXcmRW4d/KsNtto5Rf+9Ub3rDOe6uSpeJWl/DeXW/bSKmavDmXpclZJDduTi4C0nC3r0LS5kg5lHLk1qz6CnSPm6DbbYI4iCbZ3cadw9KaW451nKY7nPjR456MeCibiHUzYdqlbw/zCBEPOSaCXl/ug515LeeIJXIGsBTfO9Bkew/yQCA7fJ3lOw8Js37hKBbbsZdeADYp+FVqs7rJLhdxpL376lrPxdvluR/aYHwXk7E41yYNR3S7Yy/sHwlx96CVBachj9T62y4DqhayHbJ45VJ+qLOm3qCH+aYP/5jwj5MwVp/0VIrPAGiu/5oo0XwtenmPpBOU5GEuJ9pDlYobC3Lp7HVhYVeg9AbD9qbinkLjCPjdjI8bwfXuXDQMIVnPXRenfGEht7iVO1R8f41h2mVea0M5hl8TE0U+ypf/zvAl1ds8Lti1j63+puOd8gppm9+xsRunMW/xPYMN30ypc0KNoUR7l57vAsYYvwK6940jsuU3b5pisufyZs4JhNV4s2fO0sXMzrqY2eq+fMckMjshKNhT/W2XdYuYJGOFk8+a7R6FsGLNM+XJWF3h15XS6Tjdf1mLHfQtv4WhT6azuvuf3TnMPwGLo4/kYYWEWDQkH8mG8Tdz1h/JZAUsGI4+kgu//EDtazHpEIlnRUJJHgIl63yfMIJjWNzKEziGSE6y7JJX7ZSdaCRZWCIUEb2oYxF+5VHcibwB6SwCidJbsjfaslvVi3c15f/km3T1w/gUPrrymqOSU+US5K4L3k/sev+muDJZFxPsoaSaROYaDvraYPt/qKEu81/I4muiqDSg6z0jFysfh+ZT/hJ1GAfR4PF3T0RoV2+zRHhVxzHbYEl+U99jw4e+5xGDf28OGrvu9fNx2b5HbEZlLC62JZJ3fqfFY4XW91C9tvdQ09fRd7qZ1XQDXFEQOyWd1/GYdxW1vHefbHiy0fHt41vzyezWZHY8Ob497ST2uNhl5bX2hleooIYRB7s4gqXrrdfYUeE+RcbBQeH0f7HQ5Nm/7Lp0vl46URevzx+UBvDfMzBN8g==")))
| 929 | 5,451 | 0.957661 | 207 | 5,574 | 25.782609 | 0.975845 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.162828 | 0.00287 | 5,574 | 5 | 5,452 | 1,114.8 | 0.797409 | 0.017402 | 0 | 0 | 0 | 0.5 | 0.987943 | 0.987943 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | true | 0 | 0.5 | 0 | 0.5 | 0 | 0 | 0 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | null | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 10 |
4996bb8cd05a0956bee315f66770e2bd03cf2aaf | 10,979 | py | Python | research/object_detection/matchers/argmax_matcher_test.py | hjkim-haga/TF-OD-API | 22ac477ff4dfb93fe7a32c94b5f0b1e74330902b | [
"Apache-2.0"
] | 1 | 2021-05-22T12:50:50.000Z | 2021-05-22T12:50:50.000Z | object_detection/matchers/argmax_matcher_test.py | DemonDamon/mask-detection-based-on-tf2odapi | 192ae544169c1230c21141c033800aa1bd94e9b6 | [
"MIT"
] | null | null | null | object_detection/matchers/argmax_matcher_test.py | DemonDamon/mask-detection-based-on-tf2odapi | 192ae544169c1230c21141c033800aa1bd94e9b6 | [
"MIT"
] | null | null | null | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for object_detection.matchers.argmax_matcher."""
import numpy as np
import tensorflow.compat.v1 as tf
from object_detection.matchers import argmax_matcher
from object_detection.utils import test_case
class ArgMaxMatcherTest(test_case.TestCase):
def test_return_correct_matches_with_default_thresholds(self):
def graph_fn(similarity_matrix):
matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=None)
match = matcher.match(similarity_matrix)
matched_cols = match.matched_column_indicator()
unmatched_cols = match.unmatched_column_indicator()
match_results = match.match_results
return (matched_cols, unmatched_cols, match_results)
similarity = np.array([[1., 1, 1, 3, 1],
[2, -1, 2, 0, 4],
[3, 0, -1, 0, 0]], dtype=np.float32)
expected_matched_rows = np.array([2, 0, 1, 0, 1])
(res_matched_cols, res_unmatched_cols,
res_match_results) = self.execute(graph_fn, [similarity])
self.assertAllEqual(res_match_results[res_matched_cols],
expected_matched_rows)
self.assertAllEqual(np.nonzero(res_matched_cols)[0], [0, 1, 2, 3, 4])
self.assertFalse(np.all(res_unmatched_cols))
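# Worked check (added comment, not part of the original test): with no
# matched_threshold every column is matched to its argmax row; e.g. column 0
# holds [1, 2, 3] so it matches row 2, and column 4 holds [1, 4, 0] so it
# matches row 1, giving expected_matched_rows = [2, 0, 1, 0, 1] above.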
def test_return_correct_matches_with_empty_rows(self):
def graph_fn(similarity_matrix):
matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=None)
match = matcher.match(similarity_matrix)
return match.unmatched_column_indicator()
similarity = 0.2 * np.ones([0, 5], dtype=np.float32)
res_unmatched_cols = self.execute(graph_fn, [similarity])
self.assertAllEqual(np.nonzero(res_unmatched_cols)[0], np.arange(5))
def test_return_correct_matches_with_matched_threshold(self):
def graph_fn(similarity):
matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=3.)
match = matcher.match(similarity)
matched_cols = match.matched_column_indicator()
unmatched_cols = match.unmatched_column_indicator()
match_results = match.match_results
return (matched_cols, unmatched_cols, match_results)
similarity = np.array([[1, 1, 1, 3, 1],
[2, -1, 2, 0, 4],
[3, 0, -1, 0, 0]], dtype=np.float32)
expected_matched_cols = np.array([0, 3, 4])
expected_matched_rows = np.array([2, 0, 1])
expected_unmatched_cols = np.array([1, 2])
(res_matched_cols, res_unmatched_cols,
match_results) = self.execute(graph_fn, [similarity])
self.assertAllEqual(match_results[res_matched_cols], expected_matched_rows)
self.assertAllEqual(np.nonzero(res_matched_cols)[0], expected_matched_cols)
self.assertAllEqual(np.nonzero(res_unmatched_cols)[0],
expected_unmatched_cols)
def test_return_correct_matches_with_matched_and_unmatched_threshold(self):
def graph_fn(similarity):
matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=3.,
unmatched_threshold=2.)
match = matcher.match(similarity)
matched_cols = match.matched_column_indicator()
unmatched_cols = match.unmatched_column_indicator()
match_results = match.match_results
return (matched_cols, unmatched_cols, match_results)
similarity = np.array([[1, 1, 1, 3, 1],
[2, -1, 2, 0, 4],
[3, 0, -1, 0, 0]], dtype=np.float32)
expected_matched_cols = np.array([0, 3, 4])
expected_matched_rows = np.array([2, 0, 1])
expected_unmatched_cols = np.array([1]) # col 2 has too high maximum val
(res_matched_cols, res_unmatched_cols,
match_results) = self.execute(graph_fn, [similarity])
self.assertAllEqual(match_results[res_matched_cols], expected_matched_rows)
self.assertAllEqual(np.nonzero(res_matched_cols)[0], expected_matched_cols)
self.assertAllEqual(np.nonzero(res_unmatched_cols)[0],
expected_unmatched_cols)
def test_return_correct_matches_negatives_lower_than_unmatched_false(self):
def graph_fn(similarity):
matcher = argmax_matcher.ArgMaxMatcher(
matched_threshold=3.,
unmatched_threshold=2.,
negatives_lower_than_unmatched=False)
match = matcher.match(similarity)
matched_cols = match.matched_column_indicator()
unmatched_cols = match.unmatched_column_indicator()
match_results = match.match_results
return (matched_cols, unmatched_cols, match_results)
similarity = np.array([[1, 1, 1, 3, 1],
[2, -1, 2, 0, 4],
[3, 0, -1, 0, 0]], dtype=np.float32)
expected_matched_cols = np.array([0, 3, 4])
expected_matched_rows = np.array([2, 0, 1])
expected_unmatched_cols = np.array([2]) # col 1 has too low maximum val
(res_matched_cols, res_unmatched_cols,
match_results) = self.execute(graph_fn, [similarity])
self.assertAllEqual(match_results[res_matched_cols], expected_matched_rows)
self.assertAllEqual(np.nonzero(res_matched_cols)[0], expected_matched_cols)
self.assertAllEqual(np.nonzero(res_unmatched_cols)[0],
expected_unmatched_cols)
def test_return_correct_matches_unmatched_row_not_using_force_match(self):
def graph_fn(similarity):
matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=3.,
unmatched_threshold=2.)
match = matcher.match(similarity)
matched_cols = match.matched_column_indicator()
unmatched_cols = match.unmatched_column_indicator()
match_results = match.match_results
return (matched_cols, unmatched_cols, match_results)
similarity = np.array([[1, 1, 1, 3, 1],
[-1, 0, -2, -2, -1],
[3, 0, -1, 2, 0]], dtype=np.float32)
expected_matched_cols = np.array([0, 3])
expected_matched_rows = np.array([2, 0])
expected_unmatched_cols = np.array([1, 2, 4])
(res_matched_cols, res_unmatched_cols,
match_results) = self.execute(graph_fn, [similarity])
self.assertAllEqual(match_results[res_matched_cols], expected_matched_rows)
self.assertAllEqual(np.nonzero(res_matched_cols)[0], expected_matched_cols)
self.assertAllEqual(np.nonzero(res_unmatched_cols)[0],
expected_unmatched_cols)
def test_return_correct_matches_unmatched_row_while_using_force_match(self):
def graph_fn(similarity):
matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=3.,
unmatched_threshold=2.,
force_match_for_each_row=True)
match = matcher.match(similarity)
matched_cols = match.matched_column_indicator()
unmatched_cols = match.unmatched_column_indicator()
match_results = match.match_results
return (matched_cols, unmatched_cols, match_results)
similarity = np.array([[1, 1, 1, 3, 1],
[-1, 0, -2, -2, -1],
[3, 0, -1, 2, 0]], dtype=np.float32)
expected_matched_cols = np.array([0, 1, 3])
expected_matched_rows = np.array([2, 1, 0])
expected_unmatched_cols = np.array([2, 4]) # col 2 has too high max val
(res_matched_cols, res_unmatched_cols,
match_results) = self.execute(graph_fn, [similarity])
self.assertAllEqual(match_results[res_matched_cols], expected_matched_rows)
self.assertAllEqual(np.nonzero(res_matched_cols)[0], expected_matched_cols)
self.assertAllEqual(np.nonzero(res_unmatched_cols)[0],
expected_unmatched_cols)
def test_return_correct_matches_using_force_match_padded_groundtruth(self):
def graph_fn(similarity, valid_rows):
matcher = argmax_matcher.ArgMaxMatcher(matched_threshold=3.,
unmatched_threshold=2.,
force_match_for_each_row=True)
match = matcher.match(similarity, valid_rows)
matched_cols = match.matched_column_indicator()
unmatched_cols = match.unmatched_column_indicator()
match_results = match.match_results
return (matched_cols, unmatched_cols, match_results)
similarity = np.array([[1, 1, 1, 3, 1],
[-1, 0, -2, -2, -1],
[0, 0, 0, 0, 0],
[3, 0, -1, 2, 0],
[0, 0, 0, 0, 0]], dtype=np.float32)
valid_rows = np.array([True, True, False, True, False])
expected_matched_cols = np.array([0, 1, 3])
expected_matched_rows = np.array([3, 1, 0])
expected_unmatched_cols = np.array([2, 4]) # col 2 has too high max val
(res_matched_cols, res_unmatched_cols,
match_results) = self.execute(graph_fn, [similarity, valid_rows])
self.assertAllEqual(match_results[res_matched_cols], expected_matched_rows)
self.assertAllEqual(np.nonzero(res_matched_cols)[0], expected_matched_cols)
self.assertAllEqual(np.nonzero(res_unmatched_cols)[0],
expected_unmatched_cols)
def test_valid_arguments_corner_case(self):
argmax_matcher.ArgMaxMatcher(matched_threshold=1,
unmatched_threshold=1)
def test_invalid_arguments_corner_case_negatives_lower_than_thres_false(self):
with self.assertRaises(ValueError):
argmax_matcher.ArgMaxMatcher(matched_threshold=1,
unmatched_threshold=1,
negatives_lower_than_unmatched=False)
def test_invalid_arguments_no_matched_threshold(self):
with self.assertRaises(ValueError):
argmax_matcher.ArgMaxMatcher(matched_threshold=None,
unmatched_threshold=4)
def test_invalid_arguments_unmatched_thres_larger_than_matched_thres(self):
with self.assertRaises(ValueError):
argmax_matcher.ArgMaxMatcher(matched_threshold=1,
unmatched_threshold=2)
if __name__ == '__main__':
tf.test.main()
| 46.521186 | 81 | 0.653338 | 1,329 | 10,979 | 5.083521 | 0.116629 | 0.076525 | 0.043517 | 0.05595 | 0.826229 | 0.799438 | 0.789668 | 0.769982 | 0.754292 | 0.736382 | 0 | 0.028444 | 0.244285 | 10,979 | 235 | 82 | 46.719149 | 0.785826 | 0.075417 | 0 | 0.707182 | 0 | 0 | 0.000809 | 0 | 0 | 0 | 0 | 0 | 0.138122 | 1 | 0.110497 | false | 0 | 0.022099 | 0 | 0.18232 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
77238ac31c6927d0f34823fae106047025a629ed | 59,771 | py | Python | sample/procedure.py | Nurtal/RD | e3aaabb069d6a48ff3a91662e806bb2eeb6788bd | [
"MIT"
] | null | null | null | sample/procedure.py | Nurtal/RD | e3aaabb069d6a48ff3a91662e806bb2eeb6788bd | [
"MIT"
] | null | null | null | sample/procedure.py | Nurtal/RD | e3aaabb069d6a48ff3a91662e806bb2eeb6788bd | [
"MIT"
] | null | null | null | """
A few procedures
ready to use
"""
import os
from analysis import *
from machineLearning import *
from reorder import *
from preprocessing import *
from patternMining import *
def show_PCA(inputFolder, target, projection, saveFile, dataType, details, show):
"""
-> Perform and display PCA
-> inputFolder is a string, indicate the folder where are patients files
-> target is a string, currently only "center" and "date" are available
-> projection can be set to "3d" or "2d"
-> saveFile is a string, used to save graphical
-> dataType is a string, indicates the type of parameter
-ABSOLUTE
-PROPORTION
-MFI
-ALL
-> details is a boolean, 1 to have details, else 0
-> show is a boolean, 1 to display graphe, else 0
"""
data = generate_DataMatrixFromPatientFiles2(inputFolder, dataType)
data = scale_Data(data) # Add for test
y = get_targetedY(target, inputFolder)
target_name = get_targetNames(target, inputFolder)
quickPCA(data, y, target_name, projection, saveFile, details, show)
#quickPCA_test(data, y, target_name, projection, saveFile, details, show)
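# Hedged usage sketch (added comment): one plausible call sequence, mirroring
# how OverviewOnDisease() drives show_PCA() further down in this module; the
# folder name, the "disease" target and the flag values are assumptions.
#   checkAndFormat("DATA/PANEL_1", "DATA/PATIENT")
#   show_PCA("DATA/PATIENT", "disease", "2d", "IMAGES/example_PCA2D.jpg", "ABSOLUTE", 0, 1)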
def show_cluster(inputFolder, numberOfCluster, saveFile):
"""
Perform k-means clustering
-> inputFolder is a string, name of the folder containing data
-> numberOfCluster is an int, number of cluster to generate
-> saveFile is a string, filename where graphical output is saved
"""
data = generate_DataMatrixFromPatientFiles(inputFolder)
quickClustering(data, numberOfCluster, saveFile)
def show_correlationMatrix(inputFolder, saveName, dataType, show):
"""
-> Compute the correlation matrix for data present in the inputFolder
-> inputFolder is a string, name of the folder containing data
-> saveName is a string, used to save graphical
-> dataType is a string, indicates the type of parameter
-ABSOLUTE
-PROPORTION
-MFI
-ALL
-> show is a boolean, 1 to display graphe, else 0
"""
data = generate_DataMatrixFromPatientFiles2(inputFolder, dataType)
listOfParametres = get_listOfParameters2(inputFolder, dataType)
display_correlationMatrix(data.transpose(), listOfParametres, saveName, show)
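# Hedged usage sketch (added comment): a plausible call; the save path and the
# ABSOLUTE data type are assumptions.
#   show_correlationMatrix("DATA/PATIENT", "IMAGES/example_matrixCorrelation.jpg", "ABSOLUTE", 0)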
def checkAndFormat(inputFolder, outputFolder):
"""
-> clean outputFolder, clean VECTOR folder, convert
tab-separated files present in inputFolder to semicolon-separated
files in outputFolder.
-> inputFolder is a string
-> outputFolder is a string
"""
listOfFilesToDelete = glob.glob(outputFolder+"/*.csv")
for fileName in listOfFilesToDelete:
os.remove(fileName)
listOfFilesToDelete = glob.glob("DATA/VECTOR/*.csv")
for fileName in listOfFilesToDelete:
os.remove(fileName)
convert_tabSepratedFile(inputFolder, outputFolder)
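# Minimal sketch (added, hedged) of the separator change that checkAndFormat()
# relies on; the real work is done by convert_tabSepratedFile() from
# preprocessing, whose exact behaviour is not reproduced here.
def _example_tab_to_semicolon(line):
    # turn one tab-separated record into the semicolon-separated form expected
    # by the files written to DATA/PATIENT
    return line.replace("\t", ";")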
def OverviewOnPanel(panel, dataType, target):
"""
-> Perform a few analyses on panel and dataType, focusing on
target.
-> panel is a string, folder name
-> dataType is a string, indicates the type of parameter
-ABSOLUTE
-PROPORTION
-MFI
-ALL
-> target is a string, could be:
- center
- date
- disease
"""
folder = "DATA/"+str(panel)
saveName1 = "IMAGES/"+str(panel)+"_matrixCorrelation.jpg"
saveName2 = "IMAGES/"+str(panel)+"_PCA2D.jpg"
saveName3 = "IMAGES/"+str(panel)+"_PCA3D.jpg"
checkAndFormat(folder, "DATA/PATIENT")
show_correlationMatrix("DATA/PATIENT", saveName1, dataType)
show_PCA("DATA/PATIENT", target, "2d", saveName2, dataType, 0)
show_PCA("DATA/PATIENT", target, "3d", saveName3, dataType, 1)
def OverviewOnDisease(disease, control, dataType, target, show):
"""
-> Perform a few PCA analyses on a specific disease, compared to a specific
control, focusing on a specific dataType.
-> disease is a string, specific disease to investigate.
-> control is a string, specific disease to compare
-> dataType is a string, indicates the type of parameter
-ABSOLUTE
-PROPORTION
-MFI
-ALL
-> target is a string, could be:
- center
- date
- disease
-> show is a boolean, 1 to display graphe, else 0
"""
saveName1 = "IMAGES/"+str(disease)+"_vs_"+str(control)+"_matrixCorrelation.jpg"
saveName2 = "IMAGES/"+str(disease)+"_vs_"+str(control)+"_PCA2D.jpg"
saveName3 = "IMAGES/"+str(disease)+"_vs_"+str(control)+"_PCA3D.jpg"
show_correlationMatrix("DATA/PATIENT", saveName1, dataType, show)
show_PCA("DATA/PATIENT", target, "2d", saveName2, dataType, 0, show)
show_PCA("DATA/PATIENT", target, "3d", saveName3, dataType, 1, show)
def use_SupportVectorMachine(panel, dataType, targetType, target, saveFileName, kernel):
"""
IN PROGRESS
TODO : - pass argument to svmClassification function
- resolve module problem on windows
"""
checkAndFormat("DATA/"+str(panel), "DATA/PATIENT")
X = generate_DataMatrixFromPatientFiles2("DATA/PATIENT", dataType)
X = PCA(n_components=2).fit_transform(X)
y = get_targetAgainstTheRest(targetType, target, "DATA/PATIENT")
scores = svmClassification(X, y, kernel, saveFileName, 0, 1, 0)
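# Hedged sketch (added) of the classification step that use_SupportVectorMachine()
# delegates to machineLearning.svmClassification(); the kernel and the
# cross-validation setup below are illustrative assumptions, not that function's
# actual implementation.
def _example_svm_scores(X, y, kernel="linear"):
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score
    # 5-fold cross-validated accuracy of a plain SVC on the PCA-reduced data
    return cross_val_score(SVC(kernel=kernel), X, y, cv=5)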
def outlierDetection(targetType1, target1, targetType2, target2, dataType, show):
"""
-> targetType (1 & 2) is a string, could be:
- center
- date
- disease
-> target is a string, the actual center, disease, date
you're looking for ( e.g : UBO, RA ... )
-> dataType is a string, indicate the type of parameter
-ABSOLUTE
-PROPORTION
-RATIO
-MFI
-ALL
-> show is a boolean, if 1: display graphe
-> TODO:
- deal with restore_Data(): it creates a buggy image for OverviewOnDisease
"""
saveFileName = "IMAGES/"+target1+"_vs_"+target2+"_outlierDetection.jpg"
# training set
restore_Data()
apply_filter(targetType1, target1)
X = generate_DataMatrixFromPatientFiles2("DATA/PATIENT", dataType)
X = scale_Data(X)
X = PCA(n_components=2).fit_transform(X)
# new observation
restore_Data()
apply_filter(targetType2, target2)
X_test = generate_DataMatrixFromPatientFiles2("DATA/PATIENT", dataType)
X_test = scale_Data(X_test)
X_test = PCA(n_components=2).fit_transform(X_test)
show_outlierDetection(X, X_test, target1, target2, saveFileName, show)
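# Hedged usage sketch (added comment): train on controls, then look for outliers
# in an RA cohort; the concrete target values are assumptions.
#   outlierDetection("disease", "Control", "disease", "RA", "ABSOLUTE", 1)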
def inlierDetection(targetType1, target1, targetType2, target2, dataType, saveFileName):
"""
IN PROGRESS
-> targetType (1 & 2) is a string, could be:
- center
- date
- disease
-> target is a string, the actual center, disease, date
you're looking for ( e.g : UBO, RA ... )
-> dataType is a string, indicate the type of parameter
-ABSOLUTE
-PROPORTION
-RATIO
-MFI
-ALL
-> saveFileName is a string, filename where the model is saved
TODO:
-> implement panel gestion
"""
# training set
restore_Data()
apply_filter(targetType1, target1)
X = generate_DataMatrixFromPatientFiles2("DATA/PATIENT", dataType)
X = scale_Data(X)
X = PCA(n_components=2).fit_transform(X)
# new observation
restore_Data()
apply_filter(targetType2, target2)
X_test = generate_DataMatrixFromPatientFiles2("DATA/PATIENT", dataType)
X_test = scale_Data(X_test)
X_test = PCA(n_components=2).fit_transform(X_test)
show_inlierDetection(saveFileName, X, X_test)
def noveltyDetection(targetType1, target1, targetType2, target2, dataType, show):
"""
IN PROGRESS
-> targetType (1 & 2) is a string, could be:
- center
- date
- disease
-> target is a string, the actual center, disease, date
you're looking for ( e.g : UBO, RA ... )
-> dataType is a string, indicate the type of parameter
-ABSOLUTE
-PROPORTION
-RATIO
-MFI
-ALL
-> show is a boolean, if 1: display graphe
-> TODO:
- deal with restore_Data(): it creates a buggy image for OverviewOnDisease
"""
saveFileName = "IMAGES/"+target1+"_vs_"+target2+"_noveltyDetection.jpg"
# training set
restore_Data()
apply_filter(targetType1, target1)
X = generate_DataMatrixFromPatientFiles2("DATA/PATIENT", dataType)
X = scale_Data(X)
X = PCA(n_components=2).fit_transform(X)
# new observation
restore_Data()
apply_filter(targetType2, target2)
X_test = generate_DataMatrixFromPatientFiles2("DATA/PATIENT", dataType)
X_test = scale_Data(X_test)
X_test = PCA(n_components=2).fit_transform(X_test)
oneClassSvm(X, X_test, target1, target2, saveFileName, show)
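# Hedged sketch (added) of the novelty-detection idea behind
# machineLearning.oneClassSvm(): fit a one-class SVM on the reference cohort and
# flag new observations outside the learned frontier. The hyper-parameters are
# illustrative assumptions, not the values that function actually uses.
def _example_one_class_svm(X_train, X_new):
    from sklearn.svm import OneClassSVM
    clf = OneClassSVM(nu=0.1, kernel="rbf", gamma=0.1)
    clf.fit(X_train)
    # +1 = consistent with the training cohort, -1 = potential novelty
    return clf.predict(X_new)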
"""GENERAL PROCEDURE"""
def diseaseExplorationProcedure(listOfDisease, listOfPanelToConcat):
"""
IN PROGRESS
"""
print "----Distribution Analysis----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", "Control")
threshold = get_ThresholdValue("ABSOLUTE")
print "----PCA Analysis----"
clean_report()
clean_image()
for disease in listOfDisease:
print "----Discretization----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", disease)
check_patient()
discretization(threshold)
print "----Pattern Mining----"
cohorte = assemble_Cohorte()
optimalThreshold = get_controledValueOfThreshold(cohorte, 60, 5, 3)
listOfNormalParameters = get_listOfNormalParameters(cohorte, optimalThreshold)
print "----Perform PCA----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", ["Control", disease])
remove_parameter("PROPORTION", "mDC1_IN_leukocytes")
#remove_parameter("ABSOLUTE", "Lymphocytes")
for parameter in listOfNormalParameters:
remove_parameter("ABSOLUTE", parameter)
check_patient()
save_data()
OverviewOnDisease("Control", disease, "ABSOLUTE", "disease", 1)
def RunOnFullData():
"""
IN PROGRESS
"""
listOfElements = ["PANEL_1","PANEL_2","PANEL_3","PANEL_4","PANEL_5","PANEL_6","PANEL_7","PANEL_8","PANEL_9"]
for panel in listOfElements:
folder = "DATA/"+str(panel)
saveName = str(panel)+"_matrixCorrelation.jpg"
checkAndFormat(folder, "DATA/PATIENT")
show_correlationMatrix("DATA/PATIENT", saveName)
def patternMining_run1():
"""
- ABSOLUTE data
- poor discretisation
"""
#listOfDisease = ["RA", "MCTD", "PAPs", "SjS", "SLE", "SSc", "UCTD"]
listOfDisease = ["RA", "MCTD", "SjS", "SLE", "SSc", "UCTD"]
listOfPanelToConcat = ["PANEL_1","PANEL_2","PANEL_3","PANEL_4","PANEL_5","PANEL_6"]
print "----Distribution Analysis----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", "Control")
threshold = get_ThresholdValue("ABSOLUTE", 0, "Classic")
for disease in listOfDisease:
print "----Pattern Mining on "+str(disease)+"----"
print "----Discretization----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", disease)
check_patient()
discretization(threshold)
print "----Pattern Mining----"
cohorte = assemble_Cohorte()
patternSaveFile = disease+"_ABSOLUTE_discretisationAlArrache.csv"
minNumberOfParamToRemove = 5
maxTry = 60
maxNumberOfPattern = 1000
machin = get_controledValueOfThreshold(cohorte, maxTry, minNumberOfParamToRemove, 3)
cohorte = alleviate_cohorte(cohorte, machin)
searchForPattern(cohorte, maxTry, maxNumberOfPattern, "DATA/PATTERN/"+patternSaveFile)
def patternMining_run2():
"""
- ABSOLUTE data
- discretisation on scaled data
"""
#listOfDisease = ["RA", "MCTD", "PAPs", "SjS", "SLE", "SSc", "UCTD"]
listOfDisease = ["RA", "MCTD", "SjS", "SLE", "SSc", "UCTD"]
listOfPanelToConcat = ["PANEL_1","PANEL_2","PANEL_3","PANEL_4","PANEL_5","PANEL_6"]
print "----Distribution Analysis----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", "Control")
threshold = get_ThresholdValue("ABSOLUTE", 1, "Classic")
for disease in listOfDisease:
print "----Pattern Mining on "+str(disease)+"----"
print "----Discretization----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", disease)
check_patient()
scaleDataInPatientFolder("ABSOLUTE")
discretization(threshold)
print "----Pattern Mining----"
cohorte = assemble_Cohorte()
patternSaveFile = disease+"_ABSOLUTE_poorDiscretization_scaledData.csv"
minNumberOfParamToRemove = 5
maxTry = 60
maxNumberOfPattern = 1000
machin = get_controledValueOfThreshold(cohorte, maxTry, minNumberOfParamToRemove, 3)
cohorte = alleviate_cohorte(cohorte, machin)
searchForPattern(cohorte, maxTry, maxNumberOfPattern, "DATA/PATTERN/"+patternSaveFile)
def patternMining_run2Reverse():
"""
- ABSOLUTE data
- discretisation on scaled data
"""
listOfDisease = ["UCTD", "SSc", "SLE", "SjS", "PAPs", "MCTD", "RA"]
listOfPanelToConcat = ["PANEL_1","PANEL_2","PANEL_3","PANEL_4","PANEL_5","PANEL_6"]
print "----Distribution Analysis----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", "Control")
threshold = get_ThresholdValue("ABSOLUTE", 1, "Classic")
for disease in listOfDisease:
print "----Pattern Mining on "+str(disease)+"----"
print "----Discretization----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", disease)
check_patient()
scaleDataInPatientFolder("ABSOLUTE")
discretization(threshold)
print "----Pattern Mining----"
cohorte = assemble_Cohorte()
patternSaveFile = disease+"_ABSOLUTE_scaledData_reverseOrder.csv"
minNumberOfParamToRemove = 5
maxTry = 60
machin = get_controledValueOfThreshold(cohorte, maxTry, minNumberOfParamToRemove, 3)
cohorte = alleviate_cohorte(cohorte, machin)
searchForPattern(cohorte, maxTry, "DATA/PATTERN/"+patternSaveFile)
def patternMining_run3():
"""
- ABSOLUTE data
- discretisation using a mean-generated threshold
"""
listOfDisease = ["RA", "MCTD", "PAPs", "SjS", "SLE", "SSc", "UCTD"]
listOfPanelToConcat = ["PANEL_1","PANEL_2","PANEL_3","PANEL_4","PANEL_5","PANEL_6"]
print "----Distribution Analysis----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", "Control")
threshold = get_ThresholdValue("ABSOLUTE", 0, "Mean")
for disease in listOfDisease:
print "----Pattern Mining on "+str(disease)+"----"
print "----Discretization----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", disease)
check_patient()
discretization(threshold)
print "----Pattern Mining----"
cohorte = assemble_Cohorte()
patternSaveFile = disease+"_ABSOLUTE_MeanGeneratedThreshold.csv"
minNumberOfParamToRemove = 10
maxTry = 60
machin = get_controledValueOfThreshold(cohorte, maxTry, minNumberOfParamToRemove, 3)
cohorte = alleviate_cohorte(cohorte, machin)
searchForPattern(cohorte, maxTry, "DATA/PATTERN/"+patternSaveFile)
def patternMining_run4():
"""
- ABSOLUTE data
- discretisation using a mean-generated threshold
- dynamic threshold generation
- delta is used as a %
- maxNumberOfPattern limitation is set to 1000, i.e.
when the run starts to generate more than 1000 patterns, stop
the mining (trying to avoid memory issues).
"""
listOfDisease = ["RA", "MCTD", "PAPs", "SjS", "SLE", "SSc", "UCTD"]
listOfPanelToConcat = ["PANEL_1","PANEL_2","PANEL_3","PANEL_4","PANEL_5","PANEL_6"]
for disease in listOfDisease:
delta = 0
print "----Distribution Analysis----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", "Control")
threshold = get_ThresholdValue_DynamicDelta("ABSOLUTE", 1, "Mean", delta)
print "----Pattern Mining on "+str(disease)+"----"
print "----Discretization----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", disease)
check_patient()
discretization(threshold)
print "----Pattern Mining----"
cohorte = assemble_Cohorte()
patternSaveFile = disease+"_ABSOLUTE_MeanGeneratedThreshold.csv"
minNumberOfParamToRemove = 10
maxTry = 60
maxNumberOfPattern = 1000
machin = get_controledValueOfThreshold(cohorte, maxTry, minNumberOfParamToRemove, 3)
cohorte = alleviate_cohorte(cohorte, machin)
searchForPattern(cohorte, maxTry, maxNumberOfPattern, "DATA/PATTERN/"+patternSaveFile)
# control number of pattern after filter
fileName = "DATA/PATTERN/"+patternSaveFile
filter_Pattern("DATA/PATTERN/"+patternSaveFile)
filterDataName = fileName.split(".")
heavyFilterName = filterDataName[0] + "_HeavyFilter.csv"
lowFilterName = filterDataName[0] + "_LowFilter.csv"
cmpt = 0
dataToInspect = open(lowFilterName, "r")
for line in dataToInspect:
cmpt = cmpt + 1
dataToInspect.close()
if(cmpt == 0): # no pattern survived the low filter; same check as in the delta-exploration loop below
goodDiscretization = 0
else:
goodDiscretization = 1
while(not goodDiscretization):
print "----Distribution Analysis (delta exploration)----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", "Control")
delta = delta + 0.05
threshold = get_ThresholdValue_DynamicDelta("ABSOLUTE", 1, "Mean", delta)
print "----Pattern Mining on "+str(disease)+" (delta exploration)----"
print "----Discretization (delta exploration)----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", disease)
check_patient()
discretization(threshold)
print "----Pattern Mining (delta exploration)----"
cohorte = assemble_Cohorte()
patternSaveFile = disease+"_ABSOLUTE_MeanGeneratedThreshold.csv"
minNumberOfParamToRemove = 10
maxTry = 60
maxNumberOfPattern = 1000
machin = get_controledValueOfThreshold(cohorte, maxTry, minNumberOfParamToRemove, 3)
cohorte = alleviate_cohorte(cohorte, machin)
searchForPattern(cohorte, maxTry, maxNumberOfPattern, "DATA/PATTERN/"+patternSaveFile)
# control number of pattern after filter
fileName = "DATA/PATTERN/"+patternSaveFile
filter_Pattern("DATA/PATTERN/"+patternSaveFile)
filterDataName = fileName.split(".")
heavyFilterName = filterDataName[0] + "_HeavyFilter.csv"
lowFilterName = filterDataName[0] + "_LowFilter.csv"
cmpt = 0
dataToInspect = open(lowFilterName, "r")
for line in dataToInspect:
cmpt = cmpt + 1
dataToInspect.close()
if(cmpt == 0):
goodDiscretization = 0
else:
goodDiscretization = 1
if(delta == 1):
break
def FrequentItemMining():
"""
- ABSOLUTE data
- discretisation on scaled data
- frequent item retrieval, no pattern mining
"""
#listOfDisease = ["RA", "MCTD", "PAPs", "SjS", "SLE", "SSc", "UCTD"]
listOfDisease = ["RA", "MCTD", "SjS", "SLE", "SSc", "UCTD"]
listOfPanelToConcat = ["PANEL_1","PANEL_2","PANEL_3","PANEL_4","PANEL_5","PANEL_6"]
print "----Distribution Analysis----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", "Control")
threshold = get_ThresholdValue("ABSOLUTE", 1, "Classic")
for disease in listOfDisease:
print "----Mining on "+str(disease)+"----"
print "----Discretization----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", disease)
check_patient()
scaleDataInPatientFolder("ABSOLUTE")
discretization(threshold)
print "----Frequent Item Mining----"
cohorte = assemble_Cohorte()
patternSaveFile = disease+"_FrequentItem_ABSOLUTE_poorDiscretization_scaledData.csv"
minNumberOfParamToRemove = 5
maxTry = 60
maxNumberOfPattern = 1000
machin = get_controledValueOfThreshold(cohorte, maxTry, minNumberOfParamToRemove, 3)
cohorte = alleviate_cohorte(cohorte, machin)
search_FrequentItem(cohorte, patternSaveFile)
def FrequentItemMining2(minSupport, dataType):
"""
IN PROGRESS (adapt to PROPORTION data)
- ABSOLUTE data
- discretisation using a mean-generated threshold
- dynamic threshold generation
- delta is used as a %
- frequent item retrieval, no pattern mining
- minSupport is a float, % of patients in the cohort that must
support the item
"""
#listOfDisease = ["RA", "MCTD", "PAPs", "SjS", "SLE", "SSc", "UCTD"]
listOfDisease = ["RA", "MCTD", "SjS", "SLE", "SSc", "UCTD"]
listOfPanelToConcat = ["PANEL_1","PANEL_2","PANEL_3","PANEL_4","PANEL_5","PANEL_6"]
# Initilaise log file
logFile = open("DATA/PATTERN/FrequentItemMining2_"+str(minSupport)+"_"+dataType+".log", "w")
logFile.close()
for disease in listOfDisease:
delta = 0
print "----Distribution Analysis----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", "Control")
threshold = get_ThresholdValue_DynamicDelta(dataType, 1, "Mean", delta)
print "----Pattern Mining on "+str(disease)+"----"
print "----Discretization----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", disease)
check_patient()
scaleDataInPatientFolder(dataType)
discretization(threshold)
print "----Mining----"
cohorte = assemble_Cohorte()
patternSaveFile = disease+"_FrequentItem_"+str(minSupport)+"_"+dataType+"_meanGeneratedThreshold.csv"
minNumberOfParamToRemove = 10
maxTry = 60
maxNumberOfPattern = 1000
machin = get_controledValueOfThreshold(cohorte, maxTry, minNumberOfParamToRemove, 3)
cohorte = alleviate_cohorte(cohorte, machin)
search_FrequentItem(cohorte, patternSaveFile, minSupport)
# control number of pattern after filter
fileName = "DATA/PATTERN/"+patternSaveFile
filter_Pattern("DATA/PATTERN/"+patternSaveFile)
filterDataName = fileName.split(".")
heavyFilterName = filterDataName[0] + "_HeavyFilter.csv"
lowFilterName = filterDataName[0] + "_LowFilter.csv"
cmpt = 0
dataToInspect = open(lowFilterName, "r")
for line in dataToInspect:
cmpt = cmpt + 1
dataToInspect.close()
if(cmpt == 0):
goodDiscretization = 0
else:
goodDiscretization = 1
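# Delta exploration: if the low-filter file is empty the discretization is treated
# as too strict; delta is relaxed in 0.05 steps and the whole prepare/discretize/mine
# cycle is repeated until items survive the filter or delta reaches 1.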
while(not goodDiscretization):
print "----Distribution Analysis (delta exploration)----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", "Control")
delta = delta + 0.05
threshold = get_ThresholdValue_DynamicDelta(dataType, 1, "Mean", delta)
print "----Mining on "+str(disease)+" (delta exploration)----"
print "----Discretization (delta exploration)----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", disease)
check_patient()
scaleDataInPatientFolder(dataType)
discretization(threshold)
print "----Mining (delta exploration)----"
cohorte = assemble_Cohorte()
patternSaveFile = disease+"_FrequentItem_"+str(minSupport)+"_"+dataType+"_meanGeneratedThreshold.csv"
minNumberOfParamToRemove = 10
maxTry = 60
maxNumberOfPattern = 1000
machin = get_controledValueOfThreshold(cohorte, maxTry, minNumberOfParamToRemove, 3)
cohorte = alleviate_cohorte(cohorte, machin)
search_FrequentItem(cohorte, patternSaveFile, minSupport)
# check the number of patterns remaining after filtering
fileName = "DATA/PATTERN/"+patternSaveFile
filter_Pattern("DATA/PATTERN/"+patternSaveFile)
filterDataName = fileName.split(".")
heavyFilterName = filterDataName[0] + "_HeavyFilter.csv"
lowFilterName = filterDataName[0] + "_LowFilter.csv"
cmpt = 0
dataToInspect = open(lowFilterName, "r")
for line in dataToInspect:
cmpt = cmpt + 1
dataToInspect.close()
if(cmpt == 0):
goodDiscretization = 0
else:
goodDiscretization = 1
if(delta >= 1):
break
# write in log file
numberOfItem = 0
dataToInspect = open(heavyFilterName, "r")
for line in dataToInspect:
numberOfItem = numberOfItem + 1
dataToInspect.close()
logFile = open("DATA/PATTERN/FrequentItemMining2_"+str(minSupport)+"_"+dataType+".log", "a")
logFile.write(disease+";"+str(numberOfItem)+";"+str(minSupport)+";"+str(delta)+"\n")
logFile.close()
def FrequentItemMining3(minSupport, controlDisease, dataType):
"""
IN PROGRESS
- discretisation using mean Generated threshold
- dynamic generation threshold
- delta is a used as a %
- frequent item retrieval, no pattern mining
- minSupport is a float, % of patient in cohorte that must
suppport the item
- use controlDisease as a control for discretization process
TODO:
- test with dataType
"""
#listOfDisease = ["RA", "MCTD", "PAPs", "SjS", "SLE", "SSc", "UCTD"]
listOfDisease = ["RA", "MCTD", "SjS", "SLE", "SSc", "UCTD"]
listOfPanelToConcat = ["PANEL_1","PANEL_2","PANEL_3","PANEL_4","PANEL_5","PANEL_6"]
# Initialise log file
logFile = open("DATA/PATTERN/FrequentItemMining_"+str(minSupport)+"_discretizationWith"+str(controlDisease)+"_"+dataType+".log", "w")
logFile.close()
for disease in listOfDisease:
if(disease != controlDisease):
delta = 0
print "----Distribution Analysis----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", controlDisease)
check_patient()
threshold = get_ThresholdValue_DynamicDelta(dataType, 1, "Mean", delta)
print "----Pattern Mining on "+str(disease)+"----"
print "----Discretization----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", disease)
check_patient()
scaleDataInPatientFolder(dataType)
discretization(threshold)
print "----Mining----"
cohorte = assemble_Cohorte()
patternSaveFile = disease+"_FrequentItem_"+str(minSupport)+"_discretizationWith"+controlDisease+"_"+dataType+"_meanGeneratedThreshold.csv"
minNumberOfParamToRemove = 10
maxTry = 60
maxNumberOfPattern = 1000
machin = get_controledValueOfThreshold(cohorte, maxTry, minNumberOfParamToRemove, 3)
cohorte = alleviate_cohorte(cohorte, machin)
search_FrequentItem(cohorte, patternSaveFile, minSupport)
# check the number of patterns remaining after filtering
fileName = "DATA/PATTERN/"+patternSaveFile
filter_Pattern("DATA/PATTERN/"+patternSaveFile)
filterDataName = fileName.split(".")
heavyFilterName = filterDataName[0] + "_HeavyFilter.csv"
lowFilterName = filterDataName[0] + "_LowFilter.csv"
cmpt = 0
dataToInspect = open(lowFilterName, "r")
for line in dataToInspect:
cmpt = cmpt + 1
dataToInspect.close()
if(cmpt == 0):
goodDiscretization = 0
else:
goodDiscretization = 1
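# Delta exploration, as in FrequentItemMining2: relax delta by 0.05 per pass
# until the low-filter file is non-empty or delta reaches 1.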
while(not goodDiscretization):
print "----Distribution Analysis (delta exploration)----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", controlDisease)
check_patient()
delta = delta + 0.05
threshold = get_ThresholdValue_DynamicDelta(dataType, 1, "Mean", delta)
print "----Mining on "+str(disease)+" (delta exploration)----"
print "----Discretization (delta exploration)----"
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", disease)
check_patient()
scaleDataInPatientFolder(dataType)
discretization(threshold)
print "----Mining (delta exploration)----"
cohorte = assemble_Cohorte()
patternSaveFile = disease+"_FrequentItem_"+str(minSupport)+"_discretizationWith"+controlDisease+"_"+dataType+"_meanGeneratedThreshold.csv"
minNumberOfParamToRemove = 10
maxTry = 60
maxNumberOfPattern = 1000
machin = get_controledValueOfThreshold(cohorte, maxTry, minNumberOfParamToRemove, 3)
cohorte = alleviate_cohorte(cohorte, machin)
search_FrequentItem(cohorte, patternSaveFile, minSupport)
# check the number of patterns remaining after filtering
fileName = "DATA/PATTERN/"+patternSaveFile
filter_Pattern("DATA/PATTERN/"+patternSaveFile)
filterDataName = fileName.split(".")
heavyFilterName = filterDataName[0] + "_HeavyFilter.csv"
lowFilterName = filterDataName[0] + "_LowFilter.csv"
cmpt = 0
dataToInspect = open(lowFilterName, "r")
for line in dataToInspect:
cmpt = cmpt + 1
dataToInspect.close()
if(cmpt == 0):
goodDiscretization = 0
else:
goodDiscretization = 1
if(delta >= 1):
break
# write in log file
numberOfItem = 0
dataToInspect = open(heavyFilterName, "r")
for line in dataToInspect:
numberOfItem = numberOfItem + 1
dataToInspect.close()
logFile = open("DATA/PATTERN/FrequentItemMining_"+str(minSupport)+"_discretizationWith"+str(controlDisease)+"_"+dataType+".log", "a")
logFile.write(disease+";"+str(numberOfItem)+";"+str(minSupport)+";"+str(delta)+"\n")
logFile.close()
def visualisation3(disease, control):
"""
IN PROGRESS
"""
listOfPanelToConcat = ["PANEL_1","PANEL_2","PANEL_3","PANEL_4","PANEL_5","PANEL_6"]
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", [disease, control])
check_patient()
filter_Pattern("DATA/PATTERN/"+disease+"_FrequentItem_ABSOLUTE_meanGeneratedThreshold.csv")
convert_PatternFile("DATA/PATTERN/"+disease+"_FrequentItem_ABSOLUTE_meanGeneratedThreshold_HeavyFilter.csv")
parametersOfInterest_disease = extract_parametersFromPattern("DATA/PATTERN/"+disease+"_FrequentItem_ABSOLUTE_meanGeneratedThreshold_HeavyFilter_converted.csv", 0)
parametersOfInterest_control = []
if(control != "Control"):
parametersOfInterest_control = extract_parametersFromPattern("DATA/PATTERN/"+control+"_FrequentItem_ABSOLUTE_meanGeneratedThreshold_HeavyFilter_converted.csv", 0)
parametersOfInterest = parametersOfInterest_control + parametersOfInterest_disease
listOfAllParameters = get_allParam("ABSOLUTE")
for parameter in listOfAllParameters:
if(parameter not in parametersOfInterest):
remove_parameter("ABSOLUTE", parameter)
filter_ArtefactValue("ABSOLUTE", "CD27pos CD43pos Bcells", 500)
filter_ArtefactValue("ABSOLUTE", "CD45RAposCD62LhighCD27posCD4pos_Naive_Tcells", 1000000)
filter_ArtefactValue("ABSOLUTE", "gdpos_Tcells", 20000)
#filter_ArtefactValue("ABSOLUTE", "CD27pos CD43pos Bcells", 500)
#filter_ArtefactValue("ABSOLUTE", "Monocytes", 1200)
#filter_ArtefactValue("ABSOLUTE", "CD14highCD16neg_classicalMonocytes", 1000)
#filter_ArtefactValue("ABSOLUTE", "CD15highCD16neg_Eosinophils", 800)
#filter_ArtefactValue("ABSOLUTE", "CD15lowCD16high_Neutrophils", 1100)
#filter_ArtefactValue("ABSOLUTE", "CD69pos_activated_CD4pos_Tcells", 600)
#filter_ArtefactValue("ABSOLUTE", "CD8pos_CD57pos_Cytotoxic_Tcells", 500)
#filter_ArtefactValue("ABSOLUTE", "CD14pos_monocytes", 1200)
check_patient()
save_data()
saveName1 = "IMAGES/"+disease+"_vs_"+control+"_matrixCorrelation.jpg"
saveName2 = "IMAGES/"+disease+"_vs_"+control+"_PCA2D.jpg"
show_correlationMatrix("DATA/PATIENT", saveName1, "ABSOLUTE", 1)
show_PCA("DATA/PATIENT", "disease", "2d", saveName2, "ABSOLUTE", 1, 1)
def visualisation2(disease, control, dataType, minSupport):
"""
IN PROGRESS
"""
listOfPanelToConcat = ["PANEL_1","PANEL_2","PANEL_3","PANEL_4","PANEL_5","PANEL_6"]
clean_folders("ALL")
fusion_panel(listOfPanelToConcat)
checkAndFormat("DATA/FUSION", "DATA/PATIENT")
apply_filter("disease", [disease, control])
check_patient()
filter_Pattern("DATA/PATTERN/"+disease+"_FrequentItem_"+str(minSupport)+"_"+dataType+"_meanGeneratedThreshold.csv")
convert_PatternFile("DATA/PATTERN/"+disease+"_FrequentItem_"+str(minSupport)+"_"+dataType+"_meanGeneratedThreshold_HeavyFilter.csv")
parametersOfInterest_disease = extract_parametersFromPattern("DATA/PATTERN/"+disease+"_FrequentItem_"+str(minSupport)+"_"+dataType+"_meanGeneratedThreshold_HeavyFilter_converted.csv", 0)
parametersOfInterest_control = []
if(control != "Control"):
parametersOfInterest_control = extract_parametersFromPattern("DATA/PATTERN/"+control+"_FrequentItem_"+str(minSupport)+"_"+dataType+"_meanGeneratedThreshold_HeavyFilter_converted.csv", 0)
parametersOfInterest = parametersOfInterest_control + parametersOfInterest_disease
listOfAllParameters = get_allParam(dataType)
for parameter in listOfAllParameters:
if(parameter not in parametersOfInterest):
remove_parameter(dataType, parameter)
filter_ArtefactValue("ABSOLUTE", "CD27pos CD43pos Bcells", 500)
filter_ArtefactValue("ABSOLUTE", "CD45RAposCD62LhighCD27posCD4pos_Naive_Tcells", 1000000)
filter_ArtefactValue("ABSOLUTE", "gdpos_Tcells", 20000)
#filter_ArtefactValue("ABSOLUTE", "CD27pos CD43pos Bcells", 500)
#filter_ArtefactValue("ABSOLUTE", "Monocytes", 1200)
#filter_ArtefactValue("ABSOLUTE", "CD14highCD16neg_classicalMonocytes", 1000)
#filter_ArtefactValue("ABSOLUTE", "CD15highCD16neg_Eosinophils", 800)
#filter_ArtefactValue("ABSOLUTE", "CD15lowCD16high_Neutrophils", 1100)
#filter_ArtefactValue("ABSOLUTE", "CD69pos_activated_CD4pos_Tcells", 600)
#filter_ArtefactValue("ABSOLUTE", "CD8pos_CD57pos_Cytotoxic_Tcells", 500)
#filter_ArtefactValue("ABSOLUTE", "CD14pos_monocytes", 1200)
check_patient()
save_data()
saveName1 = "IMAGES/"+disease+"_vs_"+control+"_"+dataType+"_matrixCorrelation.jpg"
saveName2 = "IMAGES/"+disease+"_vs_"+control+"_"+dataType+"_PCA2D.jpg"
show_correlationMatrix("DATA/PATIENT", saveName1, dataType, 1)
show_PCA("DATA/PATIENT", "disease", "2d", saveName2, dataType, 1, 1)
def plot_autoantibodiesData(diagnostic, displayAll):
"""
-> Plot 4 bar graphe to show the number
of patient positive and negatove for each
autoantobodies
-> diagnostuc could be a string:
- Control
- RA
- MCTD
- PAPs
- SjS
- SLE
- SSc
- UCTD
- all
Could be a list of string.
Set to all, i.e list of all elements.
Set to overview, list of all elements, display % (not raw count)
-> displayAll is a boolean, set to 1 all the data
(i.e positive and negative count) are display, set to 0
only the positive count are displayed
only for multiple disease plot.
"""
displayAll = int(displayAll)
if isinstance(diagnostic, list):
db = TinyDB("DATA/DATABASES/machin.json")
AutoantibodyTable = db.table('Autoantibody')
Patient = Query()
DiseaseToData = {}
DiseaseToParameterToCount = {}
for disease in diagnostic:
test_function = lambda s: s in get_listOfPatientWithDiagnostic(disease)
machin = AutoantibodyTable.search(Patient.OMIC_ID.test(test_function))
listOfSelectedParameter = ["CLG_CALL", "RF_CALL", "SSB_CALL", "SCL70_CALL", "B2G_CALL", "CCP2_CALL", "SSA_CALL", "DNA_CALL", "SM_CALL", "MPO_CALL", "JO1_CALL", "PR3_CALL", "U1_RNP_CALL", "ENA_CALL", "RF_CALL", "B2M_CALL", "CLM_CALL"]
data = parse_request(machin, listOfSelectedParameter)
DiseaseToData[disease] = data
# Initialise count dictionary
parameterToCount = {}
for param in data[0]:
param_negative = str(param)+"_negative"
param_positive = str(param)+"_positive"
parameterToCount[param_negative] = 0
parameterToCount[param_positive] = 0
# Fill the count dictionary
for patient in data:
for key in patient.keys():
key_negative = str(key)+"_negative"
key_positive = str(key)+"_positive"
if(patient[key] == "negative"):
parameterToCount[key_negative] += 1
elif(patient[key] == "positive"):
parameterToCount[key_positive] += 1
DiseaseToParameterToCount[disease] = parameterToCount
paramForSubPlot1 = ["CLG_CALL", "RF_CALL", "SSB_CALL", "SCL70_CALL"]
paramForSubPlot2 = ["B2G_CALL", "CCP2_CALL", "SSA_CALL", "DNA_CALL"]
paramForSubPlot3 = ["SM_CALL", "MPO_CALL", "JO1_CALL", "PR3_CALL"]
paramForSubPlot4 = ["U1_RNP_CALL", "ENA_CALL", "B2M_CALL", "CLM_CALL"]
listOfParametres = paramForSubPlot1 + paramForSubPlot2 + paramForSubPlot3 + paramForSubPlot4
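# 3D grouped bars: x axis = autoantibody parameter, y (zs) = disease index,
# bar height = patient count; negative counts are drawn only when displayAll is set.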
fig = plt.figure()
ax = fig.add_subplot(111,projection='3d')
width = 1.5
for z in range(len(diagnostic)):
disease = diagnostic[z]
xs_positive = range(0, len(listOfParametres)*5, 5)
xs_negative = []
for position in xs_positive:
xs_negative.append(position + width)
ys_positive = []
ys_negative = []
for param in listOfParametres:
param_positive = str(param)+"_positive"
param_negative = str(param)+"_negative"
ys_positive.append(DiseaseToParameterToCount[disease][param_positive])
ys_negative.append(DiseaseToParameterToCount[disease][param_negative])
ax.bar(xs_positive, ys_positive, zs=z, zdir='y', color="blue", alpha=0.8)
if(displayAll):
ax.bar(xs_negative, ys_negative, zs=z, zdir='y', color="red", alpha=0.8)
xTickMarks = [param for param in listOfParametres]
ax.set_xticks(xs_positive)
xtickNames = ax.set_xticklabels(xTickMarks)
plt.setp(xtickNames, rotation=90, fontsize=10)
yTickMarks = [param for param in diagnostic]
ax.set_yticks(range(len(diagnostic)))
ytickNames = ax.set_yticklabels(yTickMarks)
plt.setp(ytickNames, rotation=45, fontsize=10)
ax.set_zlabel('Count')
fig.canvas.set_window_title("Autoantobodies")
plt.show()
elif(diagnostic == "all"):
db = TinyDB("DATA/DATABASES/machin.json")
AutoantibodyTable = db.table('Autoantibody')
Patient = Query()
diagnostic = ["Control", "RA", "MCTD", "PAPs", "SjS", "SLE", "SSc", "UCTD"]
DiseaseToData = {}
DiseaseToParameterToCount = {}
for disease in diagnostic:
test_function = lambda s: s in get_listOfPatientWithDiagnostic(disease)
machin = AutoantibodyTable.search(Patient.OMIC_ID.test(test_function))
listOfSelectedParameter = ["CLG_CALL", "RF_CALL", "SSB_CALL", "SCL70_CALL", "B2G_CALL", "CCP2_CALL", "SSA_CALL", "DNA_CALL", "SM_CALL", "MPO_CALL", "JO1_CALL", "PR3_CALL", "U1_RNP_CALL", "ENA_CALL", "RF_CALL", "B2M_CALL", "CLM_CALL"]
data = parse_request(machin, listOfSelectedParameter)
DiseaseToData[disease] = data
# Initialise count dictionary
parameterToCount = {}
for param in data[0]:
param_negative = str(param)+"_negative"
param_positive = str(param)+"_positive"
parameterToCount[param_negative] = 0
parameterToCount[param_positive] = 0
# Fill the count dictionary
for patient in data:
for key in patient.keys():
key_negative = str(key)+"_negative"
key_positive = str(key)+"_positive"
if(patient[key] == "negative"):
parameterToCount[key_negative] += 1
elif(patient[key] == "positive"):
parameterToCount[key_positive] += 1
DiseaseToParameterToCount[disease] = parameterToCount
paramForSubPlot1 = ["CLG_CALL", "RF_CALL", "SSB_CALL", "SCL70_CALL"]
paramForSubPlot2 = ["B2G_CALL", "CCP2_CALL", "SSA_CALL", "DNA_CALL"]
paramForSubPlot3 = ["SM_CALL", "MPO_CALL", "JO1_CALL", "PR3_CALL"]
paramForSubPlot4 = ["U1_RNP_CALL", "ENA_CALL", "B2M_CALL", "CLM_CALL"]
listOfParametres = paramForSubPlot1 + paramForSubPlot2 + paramForSubPlot3 + paramForSubPlot4
fig = plt.figure()
ax = fig.add_subplot(111,projection='3d')
width = 1.5
for z in range(len(diagnostic)):
disease = diagnostic[z]
xs_positive = range(0, len(listOfParametres)*5, 5)
xs_negative = []
for position in xs_positive:
xs_negative.append(position + width)
ys_positive = []
ys_negative = []
for param in listOfParametres:
param_positive = str(param)+"_positive"
param_negative = str(param)+"_negative"
ys_positive.append(DiseaseToParameterToCount[disease][param_positive])
ys_negative.append(DiseaseToParameterToCount[disease][param_negative])
ax.bar(xs_positive, ys_positive, zs=z, zdir='y', color="blue", alpha=0.8)
if(displayAll):
ax.bar(xs_negative, ys_negative, zs=z, zdir='y', color="red", alpha=0.8)
xTickMarks = [param for param in listOfParametres]
ax.set_xticks(xs_positive)
xtickNames = ax.set_xticklabels(xTickMarks)
plt.setp(xtickNames, rotation=90, fontsize=10)
yTickMarks = [param for param in diagnostic]
ax.set_yticks(range(len(diagnostic)))
ytickNames = ax.set_yticklabels(yTickMarks)
plt.setp(ytickNames, rotation=45, fontsize=10)
ax.set_zlabel('Count')
fig.canvas.set_window_title("Autoantobodies")
plt.show()
elif(diagnostic == "overview"):
db = TinyDB("DATA/DATABASES/machin.json")
AutoantibodyTable = db.table('Autoantibody')
Patient = Query()
diagnostic = ["Control", "RA", "MCTD", "PAPs", "SjS", "SLE", "SSc", "UCTD"]
DiseaseToData = {}
DiseaseToParameterToCount = {}
for disease in diagnostic:
test_function = lambda s: s in get_listOfPatientWithDiagnostic(disease)
machin = AutoantibodyTable.search(Patient.OMIC_ID.test(test_function))
listOfSelectedParameter = ["CLG_CALL", "RF_CALL", "SSB_CALL", "SCL70_CALL", "B2G_CALL", "CCP2_CALL", "SSA_CALL", "DNA_CALL", "SM_CALL", "MPO_CALL", "JO1_CALL", "PR3_CALL", "U1_RNP_CALL", "ENA_CALL", "RF_CALL", "B2M_CALL", "CLM_CALL"]
data = parse_request(machin, listOfSelectedParameter)
DiseaseToData[disease] = data
# Initialise count dictionary
parameterToCount = {}
for param in data[0]:
param_negative = str(param)+"_negative"
param_positive = str(param)+"_positive"
parameterToCount[param_negative] = 0
parameterToCount[param_positive] = 0
# Fill the count dictionary
for patient in data:
for key in patient.keys():
key_negative = str(key)+"_negative"
key_positive = str(key)+"_positive"
if(patient[key] == "negative"):
parameterToCount[key_negative] += 1
elif(patient[key] == "positive"):
parameterToCount[key_positive] += 1
DiseaseToParameterToCount[disease] = parameterToCount
paramForSubPlot1 = ["CLG_CALL", "RF_CALL", "SSB_CALL", "SCL70_CALL"]
paramForSubPlot2 = ["B2G_CALL", "CCP2_CALL", "SSA_CALL", "DNA_CALL"]
paramForSubPlot3 = ["SM_CALL", "MPO_CALL", "JO1_CALL", "PR3_CALL"]
paramForSubPlot4 = ["U1_RNP_CALL", "ENA_CALL", "B2M_CALL", "CLM_CALL"]
listOfParametres = paramForSubPlot1 + paramForSubPlot2 + paramForSubPlot3 + paramForSubPlot4
listOfParametres_part1 = paramForSubPlot1 + paramForSubPlot2
listOfParametres_part2 = paramForSubPlot3 + paramForSubPlot4
fig, ((ax1), (ax2)) = plt.subplots(nrows=2, ncols=1)
N = 8
positiveCount_Control = []
positiveCount_RA = []
positiveCount_MCTD = []
positiveCount_PAPs = []
positiveCount_SjS = []
positiveCount_SLE = []
positiveCount_SSc = []
positiveCount_UCTD = []
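# For each parameter and disease, the positive rate is computed as
# positive / (positive + negative) * 100, i.e. the % of tested patients
# that are positive for that autoantibody.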
for param in listOfParametres_part1:
param_positive = str(param)+"_positive"
param_negative = str(param)+"_negative"
parameterToCount = DiseaseToParameterToCount["Control"]
total_count = parameterToCount[param_positive] + parameterToCount[param_negative]
positiveCount_Control.append((float(parameterToCount[param_positive])/float(total_count))*100)
parameterToCount = DiseaseToParameterToCount["RA"]
total_count = parameterToCount[param_positive] + parameterToCount[param_negative]
positiveCount_RA.append((float(parameterToCount[param_positive])/float(total_count))*100)
parameterToCount = DiseaseToParameterToCount["MCTD"]
total_count = parameterToCount[param_positive] + parameterToCount[param_negative]
positiveCount_MCTD.append((float(parameterToCount[param_positive])/float(total_count))*100)
parameterToCount = DiseaseToParameterToCount["PAPs"]
total_count = parameterToCount[param_positive] + parameterToCount[param_negative]
positiveCount_PAPs.append((float(parameterToCount[param_positive])/float(total_count))*100)
parameterToCount = DiseaseToParameterToCount["SjS"]
total_count = parameterToCount[param_positive] + parameterToCount[param_negative]
positiveCount_SjS.append((float(parameterToCount[param_positive])/float(total_count))*100)
parameterToCount = DiseaseToParameterToCount["SLE"]
total_count = parameterToCount[param_positive] + parameterToCount[param_negative]
positiveCount_SLE.append((float(parameterToCount[param_positive])/float(total_count))*100)
parameterToCount = DiseaseToParameterToCount["SSc"]
total_count = parameterToCount[param_positive] + parameterToCount[param_negative]
positiveCount_SSc.append((float(parameterToCount[param_positive])/float(total_count))*100)
parameterToCount = DiseaseToParameterToCount["UCTD"]
total_count = parameterToCount[param_positive] + parameterToCount[param_negative]
positiveCount_UCTD.append((float(parameterToCount[param_positive])/float(total_count))*100)
ind = np.arange(N) # the x locations for the groups
width = 0.10 # the width of the bars
rects_Control = ax1.bar(ind, positiveCount_Control, width, color='blue')
rects_RA = ax1.bar(ind+width, positiveCount_RA, width, color='red')
rects_MCTD = ax1.bar(ind+width*2, positiveCount_MCTD, width, color='green')
rects_PAPs = ax1.bar(ind+width*3, positiveCount_PAPs, width, color='yellow')
rects_SjS = ax1.bar(ind+width*4, positiveCount_SjS, width, color='grey')
rects_SLE = ax1.bar(ind+width*5, positiveCount_SLE, width, color='black')
rects_SSc = ax1.bar(ind+width*6, positiveCount_SSc, width, color='orange')
rects_UCTD = ax1.bar(ind+width*7, positiveCount_UCTD, width, color='cyan')
ax1.set_xlim(-width,len(ind)+width)
ax1.set_ylim(0,100)
ax1.set_ylabel("% of positive")
#ax1.set_title('Autoantibody')
xTickMarks = [param for param in listOfParametres_part1]
ax1.set_xticks(ind+width*4)
xtickNames = ax1.set_xticklabels(xTickMarks)
plt.setp(xtickNames, rotation=45, fontsize=10)
plt.tight_layout()
N = 8
positiveCount_Control = []
positiveCount_RA = []
positiveCount_MCTD = []
positiveCount_PAPs = []
positiveCount_SjS = []
positiveCount_SLE = []
positiveCount_SSc = []
positiveCount_UCTD = []
for param in listOfParametres_part2:
param_positive = str(param)+"_positive"
param_negative = str(param)+"_negative"
parameterToCount = DiseaseToParameterToCount["Control"]
total_count = parameterToCount[param_positive] + parameterToCount[param_negative]
positiveCount_Control.append((float(parameterToCount[param_positive])/float(total_count))*100)
parameterToCount = DiseaseToParameterToCount["RA"]
total_count = parameterToCount[param_positive] + parameterToCount[param_negative]
positiveCount_RA.append((float(parameterToCount[param_positive])/float(total_count))*100)
parameterToCount = DiseaseToParameterToCount["MCTD"]
total_count = parameterToCount[param_positive] + parameterToCount[param_negative]
positiveCount_MCTD.append((float(parameterToCount[param_positive])/float(total_count))*100)
parameterToCount = DiseaseToParameterToCount["PAPs"]
total_count = parameterToCount[param_positive] + parameterToCount[param_negative]
positiveCount_PAPs.append((float(parameterToCount[param_positive])/float(total_count))*100)
parameterToCount = DiseaseToParameterToCount["SjS"]
total_count = parameterToCount[param_positive] + parameterToCount[param_negative]
positiveCount_SjS.append((float(parameterToCount[param_positive])/float(total_count))*100)
parameterToCount = DiseaseToParameterToCount["SLE"]
total_count = parameterToCount[param_positive] + parameterToCount[param_negative]
positiveCount_SLE.append((float(parameterToCount[param_positive])/float(total_count))*100)
parameterToCount = DiseaseToParameterToCount["SSc"]
total_count = parameterToCount[param_positive] + parameterToCount[param_negative]
positiveCount_SSc.append((float(parameterToCount[param_positive])/float(total_count))*100)
parameterToCount = DiseaseToParameterToCount["UCTD"]
total_count = parameterToCount[param_positive] + parameterToCount[param_negative]
positiveCount_UCTD.append((float(parameterToCount[param_positive])/float(total_count))*100)
ind = np.arange(N) # the x locations for the groups
width = 0.10 # the width of the bars
rects_Control = ax2.bar(ind, positiveCount_Control, width, color='blue')
rects_RA = ax2.bar(ind+width, positiveCount_RA, width, color='red')
rects_MCTD = ax2.bar(ind+width*2, positiveCount_MCTD, width, color='green')
rects_PAPs = ax2.bar(ind+width*3, positiveCount_PAPs, width, color='yellow')
rects_SjS = ax2.bar(ind+width*4, positiveCount_SjS, width, color='grey')
rects_SLE = ax2.bar(ind+width*5, positiveCount_SLE, width, color='black')
rects_SSc = ax2.bar(ind+width*6, positiveCount_SSc, width, color='orange')
rects_UCTD = ax2.bar(ind+width*7, positiveCount_UCTD, width, color='cyan')
ax2.set_xlim(-width,len(ind)+width)
ax2.set_ylim(0,100)
ax2.set_ylabel("% of positive")
#ax2.set_title('Autoantibody')
xTickMarks = [param for param in listOfParametres_part2]
ax2.set_xticks(ind+width*4)
xtickNames = ax2.set_xticklabels(xTickMarks)
plt.setp(xtickNames, rotation=45, fontsize=10)
plt.tight_layout()
ax1.legend( (rects_Control[0], rects_RA[0], rects_MCTD[0], rects_SSc[0], rects_UCTD[0], rects_SLE[0], rects_SjS[0], rects_PAPs[0]),
('Control', 'RA', 'MCTD', 'SSc', 'UCTD', 'SLE', 'SjS', 'PAPs') )
fig.canvas.set_window_title("Overview")
plt.show()
else:
db = TinyDB("DATA/DATABASES/machin.json")
AutoantibodyTable = db.table('Autoantibody')
Patient = Query()
test_function = lambda s: s in get_listOfPatientWithDiagnostic(diagnostic)
machin = AutoantibodyTable.search(Patient.OMIC_ID.test(test_function))
listOfSelectedParameter = ["CLG_CALL", "RF_CALL", "SSB_CALL", "SCL70_CALL", "B2G_CALL", "CCP2_CALL", "SSA_CALL", "DNA_CALL", "SM_CALL", "MPO_CALL", "JO1_CALL", "PR3_CALL",
"U1_RNP_CALL", "ENA_CALL", "RF_CALL", "B2M_CALL", "CLM_CALL"]
data = parse_request(machin, listOfSelectedParameter)
# Initialise count dictionary
parameterToCount = {}
for param in data[0]:
param_negative = str(param)+"_negative"
param_positive = str(param)+"_positive"
parameterToCount[param_negative] = 0
parameterToCount[param_positive] = 0
# Fill the count dictionary
for patient in data:
for key in patient.keys():
key_negative = str(key)+"_negative"
key_positive = str(key)+"_positive"
if(patient[key] == "negative"):
parameterToCount[key_negative] += 1
elif(patient[key] == "positive"):
parameterToCount[key_positive] += 1
structureToPlot = parameterToCount
paramForSubPlot1 = ["CLG_CALL", "RF_CALL", "SSB_CALL", "SCL70_CALL"]
paramForSubPlot2 = ["B2G_CALL", "CCP2_CALL", "SSA_CALL", "DNA_CALL"]
paramForSubPlot3 = ["SM_CALL", "MPO_CALL", "JO1_CALL", "PR3_CALL"]
paramForSubPlot4 = ["U1_RNP_CALL", "ENA_CALL", "B2M_CALL", "CLM_CALL"]
# Graphic representation
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2)
# Subplot 1
N = 4
positiveCount = []
negativeCount = []
for param in paramForSubPlot1:
param_positive = str(param)+"_positive"
param_negative = str(param)+"_negative"
positiveCount.append(parameterToCount[param_positive])
negativeCount.append(parameterToCount[param_negative])
ind = np.arange(N) # the x locations for the groups
width = 0.35 # the width of the bars
rects1 = ax1.bar(ind, positiveCount, width, color='blue')
rects2 = ax1.bar(ind+width, negativeCount, width, color='red')
ax1.set_xlim(-width,len(ind)+width)
ax1.set_ylim(0,45)
ax1.set_ylabel('Count')
ax1.set_title('Autoantibody')
xTickMarks = [param for param in paramForSubPlot1]
ax1.set_xticks(ind+width)
xtickNames = ax1.set_xticklabels(xTickMarks)
plt.setp(xtickNames, rotation=45, fontsize=10)
plt.tight_layout()
# Subplot 2
N = 4
positiveCount = []
negativeCount = []
for param in paramForSubPlot2:
param_positive = str(param)+"_positive"
param_negative = str(param)+"_negative"
positiveCount.append(parameterToCount[param_positive])
negativeCount.append(parameterToCount[param_negative])
ind = np.arange(N) # the x locations for the groups
width = 0.35 # the width of the bars
rects1 = ax2.bar(ind, positiveCount, width, color='blue')
rects2 = ax2.bar(ind+width, negativeCount, width, color='red')
ax2.set_xlim(-width,len(ind)+width)
ax2.set_ylim(0,45)
ax2.set_ylabel('Count')
ax2.set_title('Autoantibody')
xTickMarks = [param for param in paramForSubPlot2]
ax2.set_xticks(ind+width)
xtickNames = ax2.set_xticklabels(xTickMarks)
plt.setp(xtickNames, rotation=45, fontsize=10)
plt.tight_layout()
# Subplot 3
N = 4
positiveCount = []
negativeCount = []
for param in paramForSubPlot3:
param_positive = str(param)+"_positive"
param_negative = str(param)+"_negative"
positiveCount.append(parameterToCount[param_positive])
negativeCount.append(parameterToCount[param_negative])
ind = np.arange(N) # the x locations for the groups
width = 0.35 # the width of the bars
rects1 = ax3.bar(ind, positiveCount, width, color='blue')
rects2 = ax3.bar(ind+width, negativeCount, width, color='red')
ax3.set_xlim(-width,len(ind)+width)
ax3.set_ylim(0,45)
ax3.set_ylabel('Count')
ax3.set_title('Autoantibody')
xTickMarks = [param for param in paramForSubPlot3]
ax3.set_xticks(ind+width)
xtickNames = ax3.set_xticklabels(xTickMarks)
plt.setp(xtickNames, rotation=45, fontsize=10)
plt.tight_layout()
# Subplot 4
N = 4
positiveCount = []
negativeCount = []
for param in paramForSubPlot4:
param_positive = str(param)+"_positive"
param_negative = str(param)+"_negative"
positiveCount.append(parameterToCount[param_positive])
negativeCount.append(parameterToCount[param_negative])
ind = np.arange(N) # the x locations for the groups
width = 0.35 # the width of the bars
rects1 = ax4.bar(ind, positiveCount, width, color='blue')
rects2 = ax4.bar(ind+width, negativeCount, width, color='red')
ax4.set_xlim(-width,len(ind)+width)
ax4.set_ylim(0,45)
ax4.set_ylabel('Count')
ax4.set_title('Autoantibody')
xTickMarks = [param for param in paramForSubPlot4]
ax4.set_xticks(ind+width)
xtickNames = ax4.set_xticklabels(xTickMarks)
plt.setp(xtickNames, rotation=45, fontsize=10)
plt.tight_layout()
ax1.legend( (rects1[0], rects2[0]), ('Positive', 'Negative') )
fig.canvas.set_window_title(diagnostic)
plt.show()
def describe_discreteVariable(discreteCohorte, discreteVariableName):
"""
-> Describe discrete variable, enumerate possible status
and dispplay proportion of NA values
-> discreteCohorte is a cohorte of discrete parameter
(obtain with the assemble_CohorteFromDiscreteAllFiles function)
-> discreteVariableName the name of the discrete variable to check
(could be the real name or just the pX associated)
"""
numberOfPatientINCohorte = len(discreteCohorte)
paramToNonAvailableCount = {}
# Init paramToNonAvailableCount
for patient in discreteCohorte:
cmpt = 1
for scalar in patient:
scalarInArray = scalar.split("_")
if(len(scalarInArray) > 1):
paramToNonAvailableCount[scalarInArray[0]] = 0
cmpt += 1
# Fill the dictionary
for patient in discreteCohorte:
cmpt = 1
for scalar in patient:
scalarInArray = scalar.split("_")
if(len(scalarInArray) > 1):
if(scalarInArray[1] == "NA"):
paramToNonAvailableCount[scalarInArray[0]] += 1
cmpt += 1
# Parse variable
if("\\" in discreteVariableName):
realVariableName = discreteVariableName
parameterIndexNumber = "undef"
parameterIndex = open("PARAMETERS/Control_variable_index.csv")
for line in parameterIndex:
line = line.split("\n")
lineInArray = line[0].split(";")
if(lineInArray[1] == discreteVariableName):
parameterIndexNumber = lineInArray[0]
parameterIndex.close()
if(parameterIndex != "undef"):
listOfPossibleStatus = []
statusToCount = {}
for patient in discreteCohorte:
for scalar in patient:
scalarInArray = scalar.split("_")
if(len(scalarInArray) > 1):
param = scalarInArray[0]
if(param == parameterIndexNumber):
if(scalarInArray[1] != "NA" and scalarInArray[1] not in listOfPossibleStatus):
listOfPossibleStatus.append(scalarInArray[1])
for status in listOfPossibleStatus:
statusToCount[status] = 0
for patient in discreteCohorte:
for scalar in patient:
scalarInArray = scalar.split("_")
if(len(scalarInArray) > 1):
param = scalarInArray[0]
if(param == parameterIndexNumber):
if(scalarInArray[1] in listOfPossibleStatus):
statusToCount[scalarInArray[1]] += 1
fig, ((ax1), (ax2)) = plt.subplots(nrows=1, ncols=2)
nonAvailableProportion = (float(paramToNonAvailableCount[parameterIndexNumber]) / float(len(discreteCohorte)))*100
name = ['NA', 'A']
data = [ paramToNonAvailableCount[parameterIndexNumber], (len(discreteCohorte) - paramToNonAvailableCount[parameterIndexNumber])]
explode=(0, 0.15)
ax1.pie(data, explode=explode, labels=name, autopct='%1.1f%%', startangle=90, shadow=True)
ax1.axis('equal')
data = []
for status in listOfPossibleStatus:
data.append(statusToCount[status])
ind = np.arange(len(listOfPossibleStatus))
width = 0.10
rects1 = ax2.bar(ind, data, width, color='cyan')
ax2.set_xlim(-width,len(ind)+width)
ax2.set_ylabel("Count")
xTickMarks = [param for param in listOfPossibleStatus]
ax2.set_xticks(ind+width)
xtickNames = ax2.set_xticklabels(xTickMarks)
plt.setp(xtickNames, rotation=45, fontsize=10)
plt.tight_layout()
realVariableName_formated = realVariableName.replace("\\", " ")
fig.canvas.set_window_title(realVariableName_formated)
plt.show()
else:
print "[WARNINGS] => Parameter " +str(discreteVariableName) + " not found"
else:
if(discreteVariableName in paramToNonAvailableCount.keys()):
realVariableName = "undef"
parameterIndex = open("PARAMETERS/Control_variable_index.csv")
for line in parameterIndex:
line = line.split("\n")
lineInArray = line[0].split(";")
if(lineInArray[0] == discreteVariableName):
realVariableName = lineInArray[1]
parameterIndex.close()
listOfPossibleStatus = []
statusToCount = {}
for patient in discreteCohorte:
for scalar in patient:
scalarInArray = scalar.split("_")
if(len(scalarInArray) > 1):
param = scalarInArray[0]
if(param == discreteVariableName):
if(scalarInArray[1] != "NA" and scalarInArray[1] not in listOfPossibleStatus):
listOfPossibleStatus.append(scalarInArray[1])
for status in listOfPossibleStatus:
statusToCount[status] = 0
for patient in discreteCohorte:
for scalar in patient:
scalarInArray = scalar.split("_")
if(len(scalarInArray) > 1):
param = scalarInArray[0]
if(param == discreteVariableName):
if(scalarInArray[1] in listOfPossibleStatus):
statusToCount[scalarInArray[1]] += 1
fig, ((ax1), (ax2)) = plt.subplots(nrows=1, ncols=2)
nonAvailableProportion = (float(paramToNonAvailableCount[discreteVariableName]) / float(len(discreteCohorte)))*100
name = ['NA', 'A']
data = [ paramToNonAvailableCount[discreteVariableName], (len(discreteCohorte) - paramToNonAvailableCount[discreteVariableName])]
explode=(0, 0.15)
ax1.pie(data, explode=explode, labels=name, autopct='%1.1f%%', startangle=90, shadow=True)
ax1.axis('equal')
data = []
for status in listOfPossibleStatus:
data.append(statusToCount[status])
ind = np.arange(len(listOfPossibleStatus))
width = 0.10
rects1 = ax2.bar(ind, data, width, color='cyan')
ax2.set_xlim(-width,len(ind)+width)
ax2.set_ylabel("Count")
xTickMarks = [param for param in listOfPossibleStatus]
ax2.set_xticks(ind+width)
xtickNames = ax2.set_xticklabels(xTickMarks)
plt.setp(xtickNames, rotation=45, fontsize=10)
plt.tight_layout()
realVariableName_formated = realVariableName.replace("\\", " ")
fig.canvas.set_window_title(realVariableName_formated)
plt.show()
else:
print "[WARNINGS] => Parameter " +str(discreteVariableName) + " not found"
| 35.180106 | 236 | 0.72741 | 6,603 | 59,771 | 6.422838 | 0.082993 | 0.020231 | 0.027352 | 0.013369 | 0.865668 | 0.854492 | 0.844565 | 0.81726 | 0.802877 | 0.780877 | 0 | 0.020046 | 0.143682 | 59,771 | 1,698 | 237 | 35.200825 | 0.80855 | 0.043717 | 0 | 0.814183 | 0 | 0 | 0.157244 | 0.030195 | 0 | 0 | 0 | 0.002945 | 0 | 0 | null | null | 0 | 0.005386 | null | null | 0.045781 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
7730a744c78e0d83dffc5560dc09d7f185bc33e1 | 19,859 | py | Python | msgraph-cli-extensions/v1_0/personalcontacts_v1_0/azext_personalcontacts_v1_0/vendored_sdks/personalcontacts/models/_personal_contacts_enums.py | thewahome/msgraph-cli | 33127d9efa23a0e5f5303c93242fbdbb73348671 | [
"MIT"
] | null | null | null | msgraph-cli-extensions/v1_0/personalcontacts_v1_0/azext_personalcontacts_v1_0/vendored_sdks/personalcontacts/models/_personal_contacts_enums.py | thewahome/msgraph-cli | 33127d9efa23a0e5f5303c93242fbdbb73348671 | [
"MIT"
] | 22 | 2022-03-29T22:54:37.000Z | 2022-03-29T22:55:27.000Z | msgraph-cli-extensions/v1_0/personalcontacts_v1_0/azext_personalcontacts_v1_0/vendored_sdks/personalcontacts/models/_personal_contacts_enums.py | thewahome/msgraph-cli | 33127d9efa23a0e5f5303c93242fbdbb73348671 | [
"MIT"
] | null | null | null | # coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for license information.
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is regenerated.
# --------------------------------------------------------------------------
from enum import Enum, EnumMeta
from six import with_metaclass
class _CaseInsensitiveEnumMeta(EnumMeta):
def __getitem__(self, name):
return super().__getitem__(name.upper())
def __getattr__(cls, name):
"""Return the enum member matching `name`
We use __getattr__ instead of descriptors or inserting into the enum
class' __dict__ in order to support `name` and `value` being both
properties for enum members (which live in the class' __dict__) and
enum members themselves.
"""
try:
return cls._member_map_[name.upper()]
except KeyError:
raise AttributeError(name)
class Enum10(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
CATEGORIES = "categories"
CATEGORIES_DESC = "categories desc"
CHANGE_KEY = "changeKey"
CHANGE_KEY_DESC = "changeKey desc"
CREATED_DATE_TIME = "createdDateTime"
CREATED_DATE_TIME_DESC = "createdDateTime desc"
LAST_MODIFIED_DATE_TIME = "lastModifiedDateTime"
LAST_MODIFIED_DATE_TIME_DESC = "lastModifiedDateTime desc"
ASSISTANT_NAME = "assistantName"
ASSISTANT_NAME_DESC = "assistantName desc"
BIRTHDAY = "birthday"
BIRTHDAY_DESC = "birthday desc"
BUSINESS_ADDRESS = "businessAddress"
BUSINESS_ADDRESS_DESC = "businessAddress desc"
BUSINESS_HOME_PAGE = "businessHomePage"
BUSINESS_HOME_PAGE_DESC = "businessHomePage desc"
BUSINESS_PHONES = "businessPhones"
BUSINESS_PHONES_DESC = "businessPhones desc"
CHILDREN = "children"
CHILDREN_DESC = "children desc"
COMPANY_NAME = "companyName"
COMPANY_NAME_DESC = "companyName desc"
DEPARTMENT = "department"
DEPARTMENT_DESC = "department desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
EMAIL_ADDRESSES = "emailAddresses"
EMAIL_ADDRESSES_DESC = "emailAddresses desc"
FILE_AS = "fileAs"
FILE_AS_DESC = "fileAs desc"
GENERATION = "generation"
GENERATION_DESC = "generation desc"
GIVEN_NAME = "givenName"
GIVEN_NAME_DESC = "givenName desc"
HOME_ADDRESS = "homeAddress"
HOME_ADDRESS_DESC = "homeAddress desc"
HOME_PHONES = "homePhones"
HOME_PHONES_DESC = "homePhones desc"
IM_ADDRESSES = "imAddresses"
IM_ADDRESSES_DESC = "imAddresses desc"
INITIALS = "initials"
INITIALS_DESC = "initials desc"
JOB_TITLE = "jobTitle"
JOB_TITLE_DESC = "jobTitle desc"
MANAGER = "manager"
MANAGER_DESC = "manager desc"
MIDDLE_NAME = "middleName"
MIDDLE_NAME_DESC = "middleName desc"
MOBILE_PHONE = "mobilePhone"
MOBILE_PHONE_DESC = "mobilePhone desc"
NICK_NAME = "nickName"
NICK_NAME_DESC = "nickName desc"
OFFICE_LOCATION = "officeLocation"
OFFICE_LOCATION_DESC = "officeLocation desc"
OTHER_ADDRESS = "otherAddress"
OTHER_ADDRESS_DESC = "otherAddress desc"
PARENT_FOLDER_ID = "parentFolderId"
PARENT_FOLDER_ID_DESC = "parentFolderId desc"
PERSONAL_NOTES = "personalNotes"
PERSONAL_NOTES_DESC = "personalNotes desc"
PROFESSION = "profession"
PROFESSION_DESC = "profession desc"
SPOUSE_NAME = "spouseName"
SPOUSE_NAME_DESC = "spouseName desc"
SURNAME = "surname"
SURNAME_DESC = "surname desc"
TITLE = "title"
TITLE_DESC = "title desc"
YOMI_COMPANY_NAME = "yomiCompanyName"
YOMI_COMPANY_NAME_DESC = "yomiCompanyName desc"
YOMI_GIVEN_NAME = "yomiGivenName"
YOMI_GIVEN_NAME_DESC = "yomiGivenName desc"
YOMI_SURNAME = "yomiSurname"
YOMI_SURNAME_DESC = "yomiSurname desc"
class Enum11(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
CATEGORIES = "categories"
CHANGE_KEY = "changeKey"
CREATED_DATE_TIME = "createdDateTime"
LAST_MODIFIED_DATE_TIME = "lastModifiedDateTime"
ASSISTANT_NAME = "assistantName"
BIRTHDAY = "birthday"
BUSINESS_ADDRESS = "businessAddress"
BUSINESS_HOME_PAGE = "businessHomePage"
BUSINESS_PHONES = "businessPhones"
CHILDREN = "children"
COMPANY_NAME = "companyName"
DEPARTMENT = "department"
DISPLAY_NAME = "displayName"
EMAIL_ADDRESSES = "emailAddresses"
FILE_AS = "fileAs"
GENERATION = "generation"
GIVEN_NAME = "givenName"
HOME_ADDRESS = "homeAddress"
HOME_PHONES = "homePhones"
IM_ADDRESSES = "imAddresses"
INITIALS = "initials"
JOB_TITLE = "jobTitle"
MANAGER = "manager"
MIDDLE_NAME = "middleName"
MOBILE_PHONE = "mobilePhone"
NICK_NAME = "nickName"
OFFICE_LOCATION = "officeLocation"
OTHER_ADDRESS = "otherAddress"
PARENT_FOLDER_ID = "parentFolderId"
PERSONAL_NOTES = "personalNotes"
PROFESSION = "profession"
SPOUSE_NAME = "spouseName"
SURNAME = "surname"
TITLE = "title"
YOMI_COMPANY_NAME = "yomiCompanyName"
YOMI_GIVEN_NAME = "yomiGivenName"
YOMI_SURNAME = "yomiSurname"
EXTENSIONS = "extensions"
MULTI_VALUE_EXTENDED_PROPERTIES = "multiValueExtendedProperties"
PHOTO = "photo"
SINGLE_VALUE_EXTENDED_PROPERTIES = "singleValueExtendedProperties"
class Enum12(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
EXTENSIONS = "extensions"
MULTI_VALUE_EXTENDED_PROPERTIES = "multiValueExtendedProperties"
PHOTO = "photo"
SINGLE_VALUE_EXTENDED_PROPERTIES = "singleValueExtendedProperties"
class Enum13(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
CATEGORIES = "categories"
CHANGE_KEY = "changeKey"
CREATED_DATE_TIME = "createdDateTime"
LAST_MODIFIED_DATE_TIME = "lastModifiedDateTime"
ASSISTANT_NAME = "assistantName"
BIRTHDAY = "birthday"
BUSINESS_ADDRESS = "businessAddress"
BUSINESS_HOME_PAGE = "businessHomePage"
BUSINESS_PHONES = "businessPhones"
CHILDREN = "children"
COMPANY_NAME = "companyName"
DEPARTMENT = "department"
DISPLAY_NAME = "displayName"
EMAIL_ADDRESSES = "emailAddresses"
FILE_AS = "fileAs"
GENERATION = "generation"
GIVEN_NAME = "givenName"
HOME_ADDRESS = "homeAddress"
HOME_PHONES = "homePhones"
IM_ADDRESSES = "imAddresses"
INITIALS = "initials"
JOB_TITLE = "jobTitle"
MANAGER = "manager"
MIDDLE_NAME = "middleName"
MOBILE_PHONE = "mobilePhone"
NICK_NAME = "nickName"
OFFICE_LOCATION = "officeLocation"
OTHER_ADDRESS = "otherAddress"
PARENT_FOLDER_ID = "parentFolderId"
PERSONAL_NOTES = "personalNotes"
PROFESSION = "profession"
SPOUSE_NAME = "spouseName"
SURNAME = "surname"
TITLE = "title"
YOMI_COMPANY_NAME = "yomiCompanyName"
YOMI_GIVEN_NAME = "yomiGivenName"
YOMI_SURNAME = "yomiSurname"
EXTENSIONS = "extensions"
MULTI_VALUE_EXTENDED_PROPERTIES = "multiValueExtendedProperties"
PHOTO = "photo"
SINGLE_VALUE_EXTENDED_PROPERTIES = "singleValueExtendedProperties"
class Enum14(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
EXTENSIONS = "extensions"
MULTI_VALUE_EXTENDED_PROPERTIES = "multiValueExtendedProperties"
PHOTO = "photo"
SINGLE_VALUE_EXTENDED_PROPERTIES = "singleValueExtendedProperties"
class Enum15(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
class Enum16(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
VALUE = "value"
VALUE_DESC = "value desc"
class Enum17(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
VALUE = "value"
class Enum18(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
VALUE = "value"
class Enum19(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
HEIGHT = "height"
WIDTH = "width"
class Enum20(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
VALUE = "value"
VALUE_DESC = "value desc"
class Enum21(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
VALUE = "value"
class Enum22(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
VALUE = "value"
class Enum23(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
VALUE = "value"
VALUE_DESC = "value desc"
class Enum24(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
VALUE = "value"
class Enum25(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
VALUE = "value"
class Enum26(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
VALUE = "value"
VALUE_DESC = "value desc"
class Enum27(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
VALUE = "value"
class Enum28(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
VALUE = "value"
class Enum29(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
CATEGORIES = "categories"
CATEGORIES_DESC = "categories desc"
CHANGE_KEY = "changeKey"
CHANGE_KEY_DESC = "changeKey desc"
CREATED_DATE_TIME = "createdDateTime"
CREATED_DATE_TIME_DESC = "createdDateTime desc"
LAST_MODIFIED_DATE_TIME = "lastModifiedDateTime"
LAST_MODIFIED_DATE_TIME_DESC = "lastModifiedDateTime desc"
ASSISTANT_NAME = "assistantName"
ASSISTANT_NAME_DESC = "assistantName desc"
BIRTHDAY = "birthday"
BIRTHDAY_DESC = "birthday desc"
BUSINESS_ADDRESS = "businessAddress"
BUSINESS_ADDRESS_DESC = "businessAddress desc"
BUSINESS_HOME_PAGE = "businessHomePage"
BUSINESS_HOME_PAGE_DESC = "businessHomePage desc"
BUSINESS_PHONES = "businessPhones"
BUSINESS_PHONES_DESC = "businessPhones desc"
CHILDREN = "children"
CHILDREN_DESC = "children desc"
COMPANY_NAME = "companyName"
COMPANY_NAME_DESC = "companyName desc"
DEPARTMENT = "department"
DEPARTMENT_DESC = "department desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
EMAIL_ADDRESSES = "emailAddresses"
EMAIL_ADDRESSES_DESC = "emailAddresses desc"
FILE_AS = "fileAs"
FILE_AS_DESC = "fileAs desc"
GENERATION = "generation"
GENERATION_DESC = "generation desc"
GIVEN_NAME = "givenName"
GIVEN_NAME_DESC = "givenName desc"
HOME_ADDRESS = "homeAddress"
HOME_ADDRESS_DESC = "homeAddress desc"
HOME_PHONES = "homePhones"
HOME_PHONES_DESC = "homePhones desc"
IM_ADDRESSES = "imAddresses"
IM_ADDRESSES_DESC = "imAddresses desc"
INITIALS = "initials"
INITIALS_DESC = "initials desc"
JOB_TITLE = "jobTitle"
JOB_TITLE_DESC = "jobTitle desc"
MANAGER = "manager"
MANAGER_DESC = "manager desc"
MIDDLE_NAME = "middleName"
MIDDLE_NAME_DESC = "middleName desc"
MOBILE_PHONE = "mobilePhone"
MOBILE_PHONE_DESC = "mobilePhone desc"
NICK_NAME = "nickName"
NICK_NAME_DESC = "nickName desc"
OFFICE_LOCATION = "officeLocation"
OFFICE_LOCATION_DESC = "officeLocation desc"
OTHER_ADDRESS = "otherAddress"
OTHER_ADDRESS_DESC = "otherAddress desc"
PARENT_FOLDER_ID = "parentFolderId"
PARENT_FOLDER_ID_DESC = "parentFolderId desc"
PERSONAL_NOTES = "personalNotes"
PERSONAL_NOTES_DESC = "personalNotes desc"
PROFESSION = "profession"
PROFESSION_DESC = "profession desc"
SPOUSE_NAME = "spouseName"
SPOUSE_NAME_DESC = "spouseName desc"
SURNAME = "surname"
SURNAME_DESC = "surname desc"
TITLE = "title"
TITLE_DESC = "title desc"
YOMI_COMPANY_NAME = "yomiCompanyName"
YOMI_COMPANY_NAME_DESC = "yomiCompanyName desc"
YOMI_GIVEN_NAME = "yomiGivenName"
YOMI_GIVEN_NAME_DESC = "yomiGivenName desc"
YOMI_SURNAME = "yomiSurname"
YOMI_SURNAME_DESC = "yomiSurname desc"
class Enum30(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
CATEGORIES = "categories"
CHANGE_KEY = "changeKey"
CREATED_DATE_TIME = "createdDateTime"
LAST_MODIFIED_DATE_TIME = "lastModifiedDateTime"
ASSISTANT_NAME = "assistantName"
BIRTHDAY = "birthday"
BUSINESS_ADDRESS = "businessAddress"
BUSINESS_HOME_PAGE = "businessHomePage"
BUSINESS_PHONES = "businessPhones"
CHILDREN = "children"
COMPANY_NAME = "companyName"
DEPARTMENT = "department"
DISPLAY_NAME = "displayName"
EMAIL_ADDRESSES = "emailAddresses"
FILE_AS = "fileAs"
GENERATION = "generation"
GIVEN_NAME = "givenName"
HOME_ADDRESS = "homeAddress"
HOME_PHONES = "homePhones"
IM_ADDRESSES = "imAddresses"
INITIALS = "initials"
JOB_TITLE = "jobTitle"
MANAGER = "manager"
MIDDLE_NAME = "middleName"
MOBILE_PHONE = "mobilePhone"
NICK_NAME = "nickName"
OFFICE_LOCATION = "officeLocation"
OTHER_ADDRESS = "otherAddress"
PARENT_FOLDER_ID = "parentFolderId"
PERSONAL_NOTES = "personalNotes"
PROFESSION = "profession"
SPOUSE_NAME = "spouseName"
SURNAME = "surname"
TITLE = "title"
YOMI_COMPANY_NAME = "yomiCompanyName"
YOMI_GIVEN_NAME = "yomiGivenName"
YOMI_SURNAME = "yomiSurname"
EXTENSIONS = "extensions"
MULTI_VALUE_EXTENDED_PROPERTIES = "multiValueExtendedProperties"
PHOTO = "photo"
SINGLE_VALUE_EXTENDED_PROPERTIES = "singleValueExtendedProperties"
class Enum31(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
EXTENSIONS = "extensions"
MULTI_VALUE_EXTENDED_PROPERTIES = "multiValueExtendedProperties"
PHOTO = "photo"
SINGLE_VALUE_EXTENDED_PROPERTIES = "singleValueExtendedProperties"
class Enum32(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
CATEGORIES = "categories"
CHANGE_KEY = "changeKey"
CREATED_DATE_TIME = "createdDateTime"
LAST_MODIFIED_DATE_TIME = "lastModifiedDateTime"
ASSISTANT_NAME = "assistantName"
BIRTHDAY = "birthday"
BUSINESS_ADDRESS = "businessAddress"
BUSINESS_HOME_PAGE = "businessHomePage"
BUSINESS_PHONES = "businessPhones"
CHILDREN = "children"
COMPANY_NAME = "companyName"
DEPARTMENT = "department"
DISPLAY_NAME = "displayName"
EMAIL_ADDRESSES = "emailAddresses"
FILE_AS = "fileAs"
GENERATION = "generation"
GIVEN_NAME = "givenName"
HOME_ADDRESS = "homeAddress"
HOME_PHONES = "homePhones"
IM_ADDRESSES = "imAddresses"
INITIALS = "initials"
JOB_TITLE = "jobTitle"
MANAGER = "manager"
MIDDLE_NAME = "middleName"
MOBILE_PHONE = "mobilePhone"
NICK_NAME = "nickName"
OFFICE_LOCATION = "officeLocation"
OTHER_ADDRESS = "otherAddress"
PARENT_FOLDER_ID = "parentFolderId"
PERSONAL_NOTES = "personalNotes"
PROFESSION = "profession"
SPOUSE_NAME = "spouseName"
SURNAME = "surname"
TITLE = "title"
YOMI_COMPANY_NAME = "yomiCompanyName"
YOMI_GIVEN_NAME = "yomiGivenName"
YOMI_SURNAME = "yomiSurname"
EXTENSIONS = "extensions"
MULTI_VALUE_EXTENDED_PROPERTIES = "multiValueExtendedProperties"
PHOTO = "photo"
SINGLE_VALUE_EXTENDED_PROPERTIES = "singleValueExtendedProperties"
class Enum33(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
EXTENSIONS = "extensions"
MULTI_VALUE_EXTENDED_PROPERTIES = "multiValueExtendedProperties"
PHOTO = "photo"
SINGLE_VALUE_EXTENDED_PROPERTIES = "singleValueExtendedProperties"
class Enum34(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
class Enum35(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
VALUE = "value"
VALUE_DESC = "value desc"
class Enum36(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
VALUE = "value"
class Enum37(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
VALUE = "value"
class Enum38(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
HEIGHT = "height"
WIDTH = "width"
class Enum39(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
VALUE = "value"
VALUE_DESC = "value desc"
class Enum40(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
VALUE = "value"
class Enum41(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
VALUE = "value"
class Enum5(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
PARENT_FOLDER_ID = "parentFolderId"
PARENT_FOLDER_ID_DESC = "parentFolderId desc"
class Enum6(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DISPLAY_NAME = "displayName"
PARENT_FOLDER_ID = "parentFolderId"
CHILD_FOLDERS = "childFolders"
CONTACTS = "contacts"
MULTI_VALUE_EXTENDED_PROPERTIES = "multiValueExtendedProperties"
SINGLE_VALUE_EXTENDED_PROPERTIES = "singleValueExtendedProperties"
class Enum8(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DISPLAY_NAME = "displayName"
PARENT_FOLDER_ID = "parentFolderId"
CHILD_FOLDERS = "childFolders"
CONTACTS = "contacts"
MULTI_VALUE_EXTENDED_PROPERTIES = "multiValueExtendedProperties"
SINGLE_VALUE_EXTENDED_PROPERTIES = "singleValueExtendedProperties"
class Get2ItemsItem(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DISPLAY_NAME = "displayName"
PARENT_FOLDER_ID = "parentFolderId"
CHILD_FOLDERS = "childFolders"
CONTACTS = "contacts"
MULTI_VALUE_EXTENDED_PROPERTIES = "multiValueExtendedProperties"
SINGLE_VALUE_EXTENDED_PROPERTIES = "singleValueExtendedProperties"
class Get3ItemsItem(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
CHILD_FOLDERS = "childFolders"
CONTACTS = "contacts"
MULTI_VALUE_EXTENDED_PROPERTIES = "multiValueExtendedProperties"
SINGLE_VALUE_EXTENDED_PROPERTIES = "singleValueExtendedProperties"
class Get4ItemsItem(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
CHILD_FOLDERS = "childFolders"
CONTACTS = "contacts"
MULTI_VALUE_EXTENDED_PROPERTIES = "multiValueExtendedProperties"
SINGLE_VALUE_EXTENDED_PROPERTIES = "singleValueExtendedProperties"
class Get6ItemsItem(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
ID_DESC = "id desc"
DISPLAY_NAME = "displayName"
DISPLAY_NAME_DESC = "displayName desc"
PARENT_FOLDER_ID = "parentFolderId"
PARENT_FOLDER_ID_DESC = "parentFolderId desc"
class Get7ItemsItem(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ID = "id"
DISPLAY_NAME = "displayName"
PARENT_FOLDER_ID = "parentFolderId"
CHILD_FOLDERS = "childFolders"
CONTACTS = "contacts"
MULTI_VALUE_EXTENDED_PROPERTIES = "multiValueExtendedProperties"
SINGLE_VALUE_EXTENDED_PROPERTIES = "singleValueExtendedProperties"
class Get8ItemsItem(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
CHILD_FOLDERS = "childFolders"
CONTACTS = "contacts"
MULTI_VALUE_EXTENDED_PROPERTIES = "multiValueExtendedProperties"
SINGLE_VALUE_EXTENDED_PROPERTIES = "singleValueExtendedProperties"
class Get9ItemsItem(with_metaclass(_CaseInsensitiveEnumMeta, str, Enum)):
ASTERISK = "*"
CHILD_FOLDERS = "childFolders"
CONTACTS = "contacts"
MULTI_VALUE_EXTENDED_PROPERTIES = "multiValueExtendedProperties"
SINGLE_VALUE_EXTENDED_PROPERTIES = "singleValueExtendedProperties"
| 32.449346 | 94 | 0.714235 | 1,921 | 19,859 | 7.066632 | 0.112962 | 0.013554 | 0.111381 | 0.120663 | 0.928692 | 0.928692 | 0.928692 | 0.928692 | 0.928692 | 0.928692 | 0 | 0.004666 | 0.190543 | 19,859 | 611 | 95 | 32.502455 | 0.839813 | 0.036256 | 0 | 0.898039 | 0 | 0 | 0.281358 | 0.047784 | 0 | 0 | 0 | 0 | 0 | 1 | 0.003922 | false | 0 | 0.003922 | 0.001961 | 0.994118 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 8 |
7758468840ce8cce498d603e7ef149db953590c6 | 182 | py | Python | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/margay/phys/Phys_Studio_LongRange.py | lmnotran/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 69 | 2021-12-16T01:34:09.000Z | 2022-03-31T08:27:39.000Z | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/margay/phys/Phys_Studio_LongRange.py | lmnotran/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 6 | 2022-01-12T18:22:08.000Z | 2022-03-25T10:19:27.000Z | platform/radio/efr32_multiphy_configurator/pyradioconfig/parts/margay/phys/Phys_Studio_LongRange.py | lmnotran/gecko_sdk | 2e82050dc8823c9fe0e8908c1b2666fb83056230 | [
"Zlib"
] | 21 | 2021-12-20T09:05:45.000Z | 2022-03-28T02:52:28.000Z | from pyradioconfig.parts.ocelot.phys.Phys_Studio_LongRange import PHYS_OQPSK_LoRa_Ocelot
class PHYS_OQPSK_LoRa_Margay(PHYS_OQPSK_LoRa_Ocelot):
#Inherit all from Ocelot
pass | 30.333333 | 88 | 0.846154 | 27 | 182 | 5.296296 | 0.555556 | 0.188811 | 0.272727 | 0.265734 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.10989 | 182 | 6 | 89 | 30.333333 | 0.882716 | 0.126374 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0.333333 | 0.333333 | 0 | 0.666667 | 0 | 1 | 0 | 0 | null | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 8 |
621535945420b2150d396d3ff7797b01b0acdf7e | 22,719 | py | Python | tests/test_p/test_and_gate.py | SimLeek/coordencode | 092783b07fe9f025a7104c6cb8979a639387e52a | [
"MIT"
] | 3 | 2021-02-10T15:38:22.000Z | 2021-12-13T02:10:17.000Z | tests/test_p/test_and_gate.py | SimLeek/coordencode | 092783b07fe9f025a7104c6cb8979a639387e52a | [
"MIT"
] | null | null | null | tests/test_p/test_and_gate.py | SimLeek/coordencode | 092783b07fe9f025a7104c6cb8979a639387e52a | [
"MIT"
] | null | null | null | import numpy as np
from pnums import PInt
def test_and_3d():
a = PInt(0, 0, 0, bits=2)
b = PInt(0, 0, 0, bits=2)
c = a & b
assert c.asfloat() == (0, 0, 0)
a = PInt(0, 0, 1, bits=2)
b = PInt(0, 0, 0, bits=2)
c = a & b
assert c.asfloat() == (0, 0, 0)
a = PInt(0, 0, 1, bits=2)
b = PInt(0, 0, 1, bits=2)
c = a & b
assert c.asfloat() == (0, 0, 1)
a = PInt(0, 1, 0, bits=2)
b = PInt(0, 1, 0, bits=2)
c = a & b
assert c.asfloat() == (0, 1, 0)
a = PInt(1, 0, 0, bits=2)
b = PInt(1, 0, 0, bits=2)
c = a & b
assert c.asfloat() == (1, 0, 0)
a = PInt(1, 1, 0, bits=2)
b = PInt(1, 1, 0, bits=2)
c = a & b
assert c.asfloat() == (1, 1, 0)
a = PInt(1, 0, 1, bits=2)
b = PInt(1, 0, 1, bits=2)
c = a & b
assert c.asfloat() == (1, 0, 1)
a = PInt(0, 1, 1, bits=2)
b = PInt(0, 1, 1, bits=2)
c = a & b
assert c.asfloat() == (0, 1, 1)
a = PInt(1, 1, 1, bits=2)
b = PInt(1, 1, 1, bits=2)
c = a & b
assert c.asfloat() == (1, 1, 1)
# 001
# 010
a = PInt(0, 0, 1, bits=2)
b = PInt(0, 1, 0, bits=2)
c = a & b
assert c.asfloat() == (0, 0, 0)
a = PInt(0, 1, 0, bits=2)
b = PInt(0, 0, 1, bits=2)
c = a & b
assert c.asfloat() == (0, 0, 0)
# 001
# 100
    a = PInt(0, 0, 1, bits=2)
    b = PInt(1, 0, 0, bits=2)
    c = a & b
    assert c.asfloat() == (0, 0, 0)
    a = PInt(1, 0, 0, bits=2)
    b = PInt(0, 0, 1, bits=2)
    c = a & b
    assert c.asfloat() == (0, 0, 0)
# 001
# 110
a = PInt(0, 0, 1, bits=2)
b = PInt(1, 1, 0, bits=2)
c = a & b
assert c.asfloat() == (0, 0, 0)
a = PInt(1, 1, 0, bits=2)
b = PInt(0, 0, 1, bits=2)
c = a & b
assert c.asfloat() == (0, 0, 0)
# 001
# 101
a = PInt(0, 0, 1, bits=2)
b = PInt(1, 0, 1, bits=2)
c = a & b
assert c.asfloat() == (0, 0, 1)
a = PInt(1, 0, 1, bits=2)
b = PInt(0, 0, 1, bits=2)
c = a & b
assert c.asfloat() == (0, 0, 1)
# 001
# 011
a = PInt(0, 0, 1, bits=2)
b = PInt(0, 1, 1, bits=2)
c = a & b
assert c.asfloat() == (0, 0, 1)
a = PInt(0, 1, 1, bits=2)
b = PInt(0, 0, 1, bits=2)
c = a & b
assert c.asfloat() == (0, 0, 1)
# 001
# 111
a = PInt(0, 0, 1, bits=2)
b = PInt(1, 1, 1, bits=2)
c = a & b
assert c.asfloat() == (0, 0, 1)
a = PInt(1, 1, 1, bits=2)
b = PInt(0, 0, 1, bits=2)
c = a & b
assert c.asfloat() == (0, 0, 1)
# final
a = PInt(10, 11, 12, bits=8)
b = PInt(6, 13, 7, bits=8)
c = a & b
assert c.asfloat() == (2, 9, 4)
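    # Why (2, 9, 4): consistent with a coordinate-wise bitwise AND,
    # 10 & 6 = 0b1010 & 0b0110 = 0b0010 = 2,
    # 11 & 13 = 0b1011 & 0b1101 = 0b1001 = 9,
    # 12 & 7 = 0b1100 & 0b0111 = 0b0100 = 4.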
np.testing.assert_array_almost_equal(
[
[
[
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
],
[
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 2.0, 0.0],
],
],
[
[
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
[0.0, 0.0, 0.0, 0.0, 2.0, 0.0, 0.0, 2.0],
],
[
[0.0, 0.0, 0.0, 0.0, 0.0, 2.0, 0.0, 0.0],
[2.0, 2.0, 2.0, 2.0, 0.0, 0.0, 0.0, 0.0],
],
],
],
c.tensor,
)
a = PInt(10, 11, 12, bits=8, confidence=1)
b = PInt(6, 13, 7, bits=8, confidence=0.9)
c = a & b
assert c.asfloat() == (2, 9, 4)
a = PInt(10, 11, 12, bits=8, confidence=0.9)
b = PInt(6, 13, 7, bits=8, confidence=0.8)
c = a & b
assert c.asfloat() == (2, 9, 4)
a = PInt(10, 11, 12, bits=8, confidence=0.6)
b = PInt(6, 13, 7, bits=8, confidence=0.7)
c = a & b
assert c.asfloat() == (2, 9, 4)
a = PInt(10, 11, 12, bits=8, confidence=0.8)
b = PInt(6, 13, 7, bits=8, confidence=0.7)
c = a & b
assert c.asfloat() == (2, 9, 4)
np.testing.assert_array_almost_equal(
[
[
[
[
0.00183674,
0.00183674,
0.00183674,
0.00183674,
0.05142858,
0.03,
0.00183673,
0.00183674,
],
[
0.00551021,
0.00551021,
0.00551021,
0.00551021,
0.05510204,
0.03367347,
0.10469388,
0.00551021,
],
],
[
[
0.00551021,
0.00551021,
0.00551021,
0.00551021,
0.05510204,
0.03367347,
0.06183673,
0.00551021,
],
[
0.01653061,
0.01653061,
0.01653061,
0.01653061,
0.06612246,
0.04469386,
0.932449,
0.01653061,
],
],
],
[
[
[
0.00551021,
0.00551021,
0.00551021,
0.00551021,
0.05510204,
0.03367347,
0.0055102,
0.06183673,
],
[
0.01653061,
0.01653061,
0.01653061,
0.01653061,
0.9391836,
0.04469386,
0.11571428,
1.0316327,
],
],
[
[
0.01653061,
0.01653061,
0.01653061,
0.01653061,
0.06612246,
1.0034693,
0.07285714,
0.07285716,
],
[
1.4320408,
1.4320408,
1.4320408,
1.4320408,
0.21183676,
0.27612248,
0.20510204,
0.3042857,
],
],
],
],
c.tensor,
)
np.testing.assert_array_almost_equal(
[
[
[
[
0.00122449,
0.00122449,
0.00122449,
0.00122449,
0.03428572,
0.02,
0.00122449,
0.00122449,
],
[
0.00367347,
0.00367347,
0.00367347,
0.00367347,
0.03673469,
0.02244898,
0.06979592,
0.00367347,
],
],
[
[
0.00367347,
0.00367347,
0.00367347,
0.00367347,
0.03673469,
0.02244898,
0.04122449,
0.00367347,
],
[
0.01102041,
0.01102041,
0.01102041,
0.01102041,
0.04408164,
0.02979591,
0.62163264,
0.01102041,
],
],
],
[
[
[
0.00367347,
0.00367347,
0.00367347,
0.00367347,
0.03673469,
0.02244898,
0.00367347,
0.04122449,
],
[
0.01102041,
0.01102041,
0.01102041,
0.01102041,
0.6261224,
0.02979591,
0.07714286,
0.6877551,
],
],
[
[
0.01102041,
0.01102041,
0.01102041,
0.01102041,
0.04408164,
0.6689796,
0.04857143,
0.04857144,
],
[
0.95469385,
0.95469385,
0.95469385,
0.95469385,
0.1412245,
0.18408166,
0.1367347,
0.20285714,
],
],
],
],
c.normalize(1.0).tensor,
)
def test_and_3d_unsure():
"""a = PInt(0, 0, 0, bits=2, confidence=.55)
b = PInt(0, 0, 0, bits=2, confidence=.55)
c = a & b
assert c.asfloat() == (0, 0, 0)"""
a = PInt(0, 0, 1, bits=2, confidence=0.55)
b = PInt(0, 0, 0, bits=2, confidence=0.55)
c = a & b
assert c.asfloat() == (0, 0, 0)
a = PInt(0, 0, 1, bits=2, confidence=0.55)
b = PInt(0, 0, 1, bits=2, confidence=0.55)
c = a & b
assert c.asfloat() == (0, 0, 1)
a = PInt(0, 1, 0, bits=2, confidence=0.55)
b = PInt(0, 1, 0, bits=2, confidence=0.55)
c = a & b
assert c.asfloat() == (0, 1, 0)
a = PInt(1, 0, 0, bits=2, confidence=0.55)
b = PInt(1, 0, 0, bits=2, confidence=0.55)
c = a & b
assert c.asfloat() == (1, 0, 0)
a = PInt(1, 1, 0, bits=2, confidence=0.55)
b = PInt(1, 1, 0, bits=2, confidence=0.55)
c = a & b
assert c.asfloat() == (1, 1, 0)
a = PInt(1, 0, 1, bits=2, confidence=0.55)
b = PInt(1, 0, 1, bits=2, confidence=0.55)
c = a & b
assert c.asfloat() == (1, 0, 1)
a = PInt(0, 1, 1, bits=2, confidence=0.55)
b = PInt(0, 1, 1, bits=2, confidence=0.55)
c = a & b
assert c.asfloat() == (0, 1, 1)
a = PInt(1, 1, 1, bits=2, confidence=0.55)
b = PInt(1, 1, 1, bits=2, confidence=0.55)
c = a & b
assert c.asfloat() == (1, 1, 1)
# 001
# 010
a = PInt(0, 0, 1, bits=2, confidence=0.55)
b = PInt(0, 1, 0, bits=2, confidence=0.55)
c = a & b
assert c.asfloat() == (0, 0, 0)
a = PInt(0, 1, 0, bits=2, confidence=0.55)
b = PInt(0, 0, 1, bits=2, confidence=0.55)
c = a & b
assert c.asfloat() == (0, 0, 0)
# 001
# 100
    a = PInt(0, 0, 1, bits=2, confidence=0.55)
    b = PInt(1, 0, 0, bits=2, confidence=0.55)
    c = a & b
    assert c.asfloat() == (0, 0, 0)
    a = PInt(1, 0, 0, bits=2, confidence=0.55)
    b = PInt(0, 0, 1, bits=2, confidence=0.55)
    c = a & b
    assert c.asfloat() == (0, 0, 0)
# 001
# 110
a = PInt(0, 0, 1, bits=2, confidence=0.55)
b = PInt(1, 1, 0, bits=2, confidence=0.55)
c = a & b
assert c.asfloat() == (0, 0, 0)
a = PInt(1, 1, 0, bits=2, confidence=0.55)
b = PInt(0, 0, 1, bits=2, confidence=0.55)
c = a & b
assert c.asfloat() == (0, 0, 0)
# 001
# 101
a = PInt(0, 0, 1, bits=2, confidence=0.55)
b = PInt(1, 0, 1, bits=2, confidence=0.55)
c = a & b
assert c.asfloat() == (0, 0, 1)
a = PInt(1, 0, 1, bits=2, confidence=0.55)
b = PInt(0, 0, 1, bits=2, confidence=0.55)
c = a & b
assert c.asfloat() == (0, 0, 1)
# 001
# 011
a = PInt(0, 0, 1, bits=2, confidence=0.55)
b = PInt(0, 1, 1, bits=2, confidence=0.55)
c = a & b
assert c.asfloat() == (0, 0, 1)
a = PInt(0, 1, 1, bits=2, confidence=0.55)
b = PInt(0, 0, 1, bits=2, confidence=0.55)
c = a & b
assert c.asfloat() == (0, 0, 1)
# 001
# 111
a = PInt(0, 0, 1, bits=2, confidence=0.55)
b = PInt(1, 1, 1, bits=2, confidence=0.55)
c = a & b
assert c.asfloat() == (0, 0, 1)
a = PInt(1, 1, 1, bits=2, confidence=0.55)
b = PInt(0, 0, 1, bits=2, confidence=0.55)
c = a & b
assert c.asfloat() == (0, 0, 1)
# final
a = PInt(10, 11, 12, bits=8, confidence=0.55)
b = PInt(6, 13, 7, bits=8, confidence=0.55)
c = a & b
assert c.asfloat() == (2, 9, 4)
np.testing.assert_array_almost_equal(
[
[
[
[
0.00454592,
0.00454592,
0.00454592,
0.00454592,
0.03889285,
0.03889286,
0.00454592,
0.00454592,
],
[
0.01363776,
0.01363776,
0.01363776,
0.01363776,
0.04798468,
0.04798469,
0.08233164,
0.01363776,
],
],
[
[
0.01363776,
0.01363776,
0.01363776,
0.01363776,
0.04798468,
0.04798469,
0.08233164,
0.01363776,
],
[
0.04091327,
0.04091327,
0.04091327,
0.04091327,
0.07526021,
0.07526021,
0.43781123,
0.04091327,
],
],
],
[
[
[
0.01363776,
0.01363776,
0.01363776,
0.01363776,
0.04798468,
0.04798469,
0.01363776,
0.08233164,
],
[
0.04091327,
0.04091327,
0.04091327,
0.04091327,
0.47215813,
0.07526021,
0.10960714,
0.50650513,
],
],
[
[
0.04091327,
0.04091327,
0.04091327,
0.04091327,
0.07526021,
0.47215816,
0.10960714,
0.10960714,
],
[
0.9318011,
0.9318011,
0.9318011,
0.9318011,
0.29447454,
0.29447457,
0.2601276,
0.32882148,
],
],
],
],
c.tensor,
)
a = PInt(10, 11, 12, bits=8, confidence=1)
b = PInt(6, 13, 7, bits=8, confidence=0.9)
c = a & b
assert c.asfloat() == (2, 9, 4)
a = PInt(10, 11, 12, bits=8, confidence=0.9)
b = PInt(6, 13, 7, bits=8, confidence=0.8)
c = a & b
assert c.asfloat() == (2, 9, 4)
a = PInt(10, 11, 12, bits=8, confidence=0.6)
b = PInt(6, 13, 7, bits=8, confidence=0.7)
c = a & b
assert c.asfloat() == (2, 9, 4)
a = PInt(10, 11, 12, bits=8, confidence=0.8)
b = PInt(6, 13, 7, bits=8, confidence=0.7)
c = a & b
assert c.asfloat() == (2, 9, 4)
np.testing.assert_array_almost_equal(
[
[
[
[
0.00183674,
0.00183674,
0.00183674,
0.00183674,
0.05142858,
0.03,
0.00183673,
0.00183674,
],
[
0.00551021,
0.00551021,
0.00551021,
0.00551021,
0.05510204,
0.03367347,
0.10469388,
0.00551021,
],
],
[
[
0.00551021,
0.00551021,
0.00551021,
0.00551021,
0.05510204,
0.03367347,
0.06183673,
0.00551021,
],
[
0.01653061,
0.01653061,
0.01653061,
0.01653061,
0.06612246,
0.04469386,
0.932449,
0.01653061,
],
],
],
[
[
[
0.00551021,
0.00551021,
0.00551021,
0.00551021,
0.05510204,
0.03367347,
0.0055102,
0.06183673,
],
[
0.01653061,
0.01653061,
0.01653061,
0.01653061,
0.9391836,
0.04469386,
0.11571428,
1.0316327,
],
],
[
[
0.01653061,
0.01653061,
0.01653061,
0.01653061,
0.06612246,
1.0034693,
0.07285714,
0.07285716,
],
[
1.4320408,
1.4320408,
1.4320408,
1.4320408,
0.21183676,
0.27612248,
0.20510204,
0.3042857,
],
],
],
],
c.tensor,
)
np.testing.assert_array_almost_equal(
[
[
[
[
0.00122449,
0.00122449,
0.00122449,
0.00122449,
0.03428572,
0.02,
0.00122449,
0.00122449,
],
[
0.00367347,
0.00367347,
0.00367347,
0.00367347,
0.03673469,
0.02244898,
0.06979592,
0.00367347,
],
],
[
[
0.00367347,
0.00367347,
0.00367347,
0.00367347,
0.03673469,
0.02244898,
0.04122449,
0.00367347,
],
[
0.01102041,
0.01102041,
0.01102041,
0.01102041,
0.04408164,
0.02979591,
0.62163264,
0.01102041,
],
],
],
[
[
[
0.00367347,
0.00367347,
0.00367347,
0.00367347,
0.03673469,
0.02244898,
0.00367347,
0.04122449,
],
[
0.01102041,
0.01102041,
0.01102041,
0.01102041,
0.6261224,
0.02979591,
0.07714286,
0.6877551,
],
],
[
[
0.01102041,
0.01102041,
0.01102041,
0.01102041,
0.04408164,
0.6689796,
0.04857143,
0.04857144,
],
[
0.95469385,
0.95469385,
0.95469385,
0.95469385,
0.1412245,
0.18408166,
0.1367347,
0.20285714,
],
],
],
],
c.normalize(1.0).tensor,
)
| 28.048148 | 61 | 0.288657 | 2,237 | 22,719 | 2.921323 | 0.050514 | 0.062739 | 0.058301 | 0.060597 | 0.966488 | 0.963428 | 0.963428 | 0.950115 | 0.944759 | 0.926549 | 0 | 0.415063 | 0.597341 | 22,719 | 809 | 62 | 28.082818 | 0.2993 | 0.0103 | 0 | 0.83844 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.079387 | 1 | 0.002786 | false | 0 | 0.002786 | 0 | 0.005571 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 10 |
6236eea83ff9deefb998814f48ef78d4bd56ee83 | 17,579 | py | Python | pyActionRecog/action_caffe.py | xiaoye77/Optical-Flow-Guided-Feature | db153fc4c139201d3f73b7ca8be6c80210906ec6 | [
"MIT"
] | 196 | 2018-07-07T14:22:37.000Z | 2022-03-19T06:21:11.000Z | pyActionRecog/action_caffe.py | xiaoye77/Optical-Flow-Guided-Feature | db153fc4c139201d3f73b7ca8be6c80210906ec6 | [
"MIT"
] | 2 | 2018-07-09T09:19:09.000Z | 2018-07-17T15:08:49.000Z | pyActionRecog/action_caffe.py | ParrtZhang/Optical-Flow-Guided-Feature | 07d4501a29002ee7821c38c1820e4a64c1acf6e8 | [
"MIT"
] | 48 | 2018-07-10T02:11:20.000Z | 2022-02-04T14:26:30.000Z | import sys
import caffe
from caffe.io import oversample
import numpy as np
from utils.io import flow_stack_oversample, fast_list2arr, generateLimb, generateROI
import cv2
import random
import pickle
class CaffeNet(object):
def __init__(self, net_proto, net_weights, device_id, input_size=None):
caffe.set_mode_gpu()
caffe.set_device(device_id)
print '1'
self._net = caffe.Net(net_proto, net_weights, caffe.TEST)
input_shape = self._net.blobs['data'].data.shape
if input_size is not None:
input_shape = input_shape[:2] + input_size
print input_shape
transformer = caffe.io.Transformer({'data': input_shape})
if self._net.blobs['data'].data.shape[1] == 3:
transformer.set_transpose('data', (2, 0, 1)) # move image channels to outermost dimension
transformer.set_mean('data', np.array([104, 117, 123])) # subtract the dataset-mean value in each channel
elif self._net.blobs['data'].data.shape[1] == 4:
transformer.set_transpose('data', (2, 0, 1)) # move image channels to outermost dimension
transformer.set_mean('data', np.array([104, 117, 123, 0])) # subtract the dataset-mean value in each channel
else:
pass # non RGB data need not use transformer
self._transformer = transformer
self._sample_shape = self._net.blobs['data'].data.shape
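    # Minimal usage sketch (illustrative only): the prototxt/caffemodel paths, blob
    # name and frame variable below are hypothetical, not part of this repository.
    #   net = CaffeNet('deploy.prototxt', 'net.caffemodel', device_id=0, input_size=(224, 224))
    #   scores = net.predict_single_frame([frame_bgr], 'fc-action', over_sample=True)
    #   prediction = scores.mean(axis=0).argmax()  # average over the oversampled crops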
def predict_single_frame(self, frame, score_name, over_sample=True, multiscale=None, frame_size=None):
img_id = random.randint(0, 1000)
if frame_size is not None:
frame = [cv2.resize(x, frame_size) for x in frame]
if over_sample:
if multiscale is None:
os_frame = oversample(frame, (self._sample_shape[2], self._sample_shape[3]))
else:
os_frame = []
for scale in multiscale:
resized_frame = [cv2.resize(x, (0,0), fx=1.0/scale, fy=1.0/scale) for x in frame]
os_frame.extend(oversample(resized_frame, (self._sample_shape[2], self._sample_shape[3])))
else:
os_frame = fast_list2arr(frame)
data = fast_list2arr([self._transformer.preprocess('data', x) for x in os_frame])
self._net.blobs['data'].reshape(*data.shape)
self._net.reshape()
out = self._net.forward(blobs=[score_name,], data=data)
return out[score_name].copy()
def predict_single_flow_stack(self, frame, score_name, over_sample=True, frame_size=None):
if frame_size is not None:
frame = fast_list2arr([cv2.resize(x, frame_size) for x in frame])
else:
frame = fast_list2arr(frame)
if over_sample:
os_frame = flow_stack_oversample(frame, (self._sample_shape[2], self._sample_shape[3]))
else:
os_frame = fast_list2arr([frame])
data = os_frame - np.float32(128.0)
self._net.blobs['data'].reshape(*data.shape)
self._net.reshape()
out = self._net.forward(blobs=[score_name,], data=data)
return out[score_name].copy()
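    # The attention variant below renders the provided joints into a single-channel
    # limb/pose map, appends it to the RGB frame as a fourth input channel (hence
    # the 4-channel mean [104, 117, 123, 0] set up in __init__), and scores the
    # oversampled RGB+pose stack in one forward pass.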
def predict_single_frame_with_attention(self, frame, score_name, joints,
over_sample=True, multiscale=None, frame_size=None):
# TODO: uncomment the following to visualize
# img_id = random.randint(0, 1000)
# cv2.imwrite('visualize/{}_ori_img.jpg'.format(img_id), frame[0])
if frame_size is not None:
frame = [cv2.resize(x, frame_size) for x in frame]
pose_map = np.zeros(frame_size, dtype='float32')
scale_x = pose_map.shape[0] / 255. # row
scale_y = pose_map.shape[1] / 255. # col
pose_map = [np.expand_dims(generateLimb(pose_map, joints, scale_x, scale_y), axis=2), ]
# TODO: uncomment the following to visualize
# cv2.imwrite('visualize/{}_ori_img.jpg'.format(img_id), frame[0])
# cv2.imwrite('visualize/{}_ori_pose.jpg'.format(img_id), pose_map[0])
# img_grey_ori = cv2.cvtColor(frame[0], cv2.COLOR_BGRA2GRAY)
# pose_concat = np.tile(pose_map[0].astype('uint8'), 3)
# pose_squeezed = pose_map[0].astype('uint8').squeeze(axis=2)
# pose_color_map = cv2.applyColorMap(pose_concat, cv2.COLORMAP_JET)
# img_merge_ori = cv2.addWeighted(frame[0], 0.5, pose_color_map, 0.5, 0)
# cv2.imwrite('visualize/{}_ori_weighted.jpg'.format(img_id), img_merge_ori)
if over_sample:
if multiscale is None:
os_frame = oversample(frame, (self._sample_shape[2], self._sample_shape[3]))
os_pose_map = oversample(pose_map, (self._sample_shape[2], self._sample_shape[3]))
else:
os_frame = []
os_pose_map = []
for scale in multiscale:
resized_frame = [cv2.resize(x, (0,0), fx=1.0/scale, fy=1.0/scale) for x in frame]
resized_pose_map = [cv2.resize(x, (0,0), fx=1.0/scale, fy=1.0/scale) for x in pose_map]
os_frame.extend(oversample(resized_frame, (self._sample_shape[2], self._sample_shape[3])))
os_pose_map.extend(oversample(resized_pose_map, (self._sample_shape[2], self._sample_shape[3])))
else:
os_frame = fast_list2arr(frame)
os_pose_map = fast_list2arr(pose_map)
# TODO: uncomment the following to visualize
# for i in xrange(os_frame.shape[0]):
# img_to_show_ = os_frame[i, :, :, :].squeeze()
# pose_to_show_ = os_pose_map[i, :, :, :].squeeze()
#
# img_grey_ori_ = cv2.cvtColor(img_to_show_, cv2.COLOR_BGRA2GRAY).astype('uint8')
# pose_squeezed_ = pose_to_show_.astype('uint8')
# img_merge_ori = cv2.addWeighted(img_grey_ori_, 0.5, pose_squeezed_, 0.5, 0)
# cv2.imwrite('visualize/{}_{}_weighted.jpg'.format(img_id, i), img_merge_ori)
# cv2.imwrite('visualize/{}_{}_img.jpg'.format(img_id, i), img_to_show_)
# cv2.imwrite('visualize/{}_{}_pose.jpg'.format(img_id, i), pose_to_show_)
raw_data = np.append(os_frame, os_pose_map, axis=3)
#####################################################################
data = fast_list2arr([self._transformer.preprocess('data', x) for x in raw_data])
# TODO: uncomment the following to visualize
# for i in xrange(os_frame.shape[0]):
# img_to_show = data[i, :3, :, :].squeeze().transpose(1, 2, 0)
# pose_to_show = data[i, 3, :, :].squeeze()
# img_to_show[:, :, 0] += 104
# img_to_show[:, :, 1] += 117
# img_to_show[:, :, 2] += 123
#
# print img_to_show.shape
# print pose_to_show.shape
# img_grey_ori = cv2.cvtColor(img_to_show, cv2.COLOR_BGRA2GRAY).astype('uint8')
# pose_squeezed = pose_to_show.astype('uint8')
# img_merge_ori = cv2.addWeighted(img_grey_ori, 0.5, pose_squeezed, 0.5, 0)
# cv2.imwrite('visualize/{}_{}_weighted_post.jpg'.format(img_id, i), img_merge_ori)
# cv2.imwrite('visualize/{}_{}_img_post.jpg'.format(img_id, i), img_to_show)
# cv2.imwrite('visualize/{}_{}_pose_post.jpg'.format(img_id, i), pose_to_show)
self._net.blobs['data'].reshape(*data.shape)
self._net.reshape()
out = self._net.forward(blobs=[score_name,], data=data)
print out.max()
return out[score_name].copy()
def predict_single_frame_with_roi(self, frame, score_name, joints,
over_sample=True, multiscale=None, frame_size=None):
# TODO: uncomment the following to visualize
# img_id = random.randint(0, 1000)
# cv2.imwrite('visualize/{}_ori_img.jpg'.format(img_id), frame[0])
assert isinstance(frame_size, tuple)
frame = [cv2.resize(x, frame_size) for x in frame]
use_roi = False
scale_x = frame_size[0] / 336. # row
scale_y = frame_size[1] / 256. # col
if joints:
roi_top_w, roi_top_h, roi_w, roi_h = generateROI(joints, [0, 13, 14, 15, 16, 17], scale_x, scale_y, 40, 40)
if roi_h > 40 and roi_w > 40:
use_roi = True
# TODO: uncomment the following to visualize
# cv2.imwrite('visualize/{}_ori_img.jpg'.format(img_id), frame[0])
# cv2.imwrite('visualize/{}_ori_pose.jpg'.format(img_id), pose_map[0])
# img_grey_ori = cv2.cvtColor(frame[0], cv2.COLOR_BGRA2GRAY)
# pose_concat = np.tile(pose_map[0].astype('uint8'), 3)
# pose_squeezed = pose_map[0].astype('uint8').squeeze(axis=2)
# pose_color_map = cv2.applyColorMap(pose_concat, cv2.COLORMAP_JET)
# img_merge_ori = cv2.addWeighted(frame[0], 0.5, pose_color_map, 0.5, 0)
# cv2.imwrite('visualize/{}_ori_weighted.jpg'.format(img_id), img_merge_ori)
if over_sample:
if multiscale is None and not use_roi:
os_frame = oversample(frame, (self._sample_shape[2], self._sample_shape[3]))
elif use_roi:
os_frame = []
roi_mult_list = np.arange(2., 3., 0.1).tolist()
for roi_mult in roi_mult_list:
roi_top_w, roi_top_h, roi_w, roi_h = generateROI(joints, [0, 13, 14, 15, 16, 17], scale_x, scale_y,
40, 40, roi_mult)
target_size = (self._sample_shape[2], self._sample_shape[3])
resized_roi = [cv2.resize(x[roi_top_h:roi_h + roi_top_h, roi_top_w:roi_w + roi_top_w],
target_size) for x in frame]
os_frame.extend(resized_roi)
else:
os_frame = []
for scale in multiscale:
resized_frame = [cv2.resize(x, (0, 0), fx=1.0/scale, fy=1.0/scale) for x in frame]
os_frame.extend(oversample(resized_frame, (self._sample_shape[2], self._sample_shape[3])))
else:
os_frame = fast_list2arr(frame)
# TODO: uncomment the following to visualize
# for i in xrange(len(os_frame)):
# img_to_show_ = os_frame[i].squeeze()
# cv2.imwrite('visualize/{}_{}_img.jpg'.format(img_id, i), img_to_show_)
# pose_to_show_ = os_pose_map[i, :, :, :].squeeze()
#
# img_grey_ori_ = cv2.cvtColor(img_to_show_, cv2.COLOR_BGRA2GRAY).astype('uint8')
# pose_squeezed_ = pose_to_show_.astype('uint8')
# img_merge_ori = cv2.addWeighted(img_grey_ori_, 0.5, pose_squeezed_, 0.5, 0)
# cv2.imwrite('visualize/{}_{}_weighted.jpg'.format(img_id, i), img_merge_ori)
# cv2.imwrite('visualize/{}_{}_img.jpg'.format(img_id, i), img_to_show_)
# cv2.imwrite('visualize/{}_{}_pose.jpg'.format(img_id, i), pose_to_show_)
# raw_data = np.append(os_frame, os_pose_map, axis=3)
#####################################################################
data = fast_list2arr([self._transformer.preprocess('data', x) for x in os_frame])
# TODO: uncomment the following to visualize
# for i in xrange(os_frame.shape[0]):
# img_to_show = data[i, :3, :, :].squeeze().transpose(1, 2, 0)
# pose_to_show = data[i, 3, :, :].squeeze()
# img_to_show[:, :, 0] += 104
# img_to_show[:, :, 1] += 117
# img_to_show[:, :, 2] += 123
#
# print img_to_show.shape
# print pose_to_show.shape
# img_grey_ori = cv2.cvtColor(img_to_show, cv2.COLOR_BGRA2GRAY).astype('uint8')
# pose_squeezed = pose_to_show.astype('uint8')
# img_merge_ori = cv2.addWeighted(img_grey_ori, 0.5, pose_squeezed, 0.5, 0)
# cv2.imwrite('visualize/{}_{}_weighted_post.jpg'.format(img_id, i), img_merge_ori)
# cv2.imwrite('visualize/{}_{}_img_post.jpg'.format(img_id, i), img_to_show)
# cv2.imwrite('visualize/{}_{}_pose_post.jpg'.format(img_id, i), pose_to_show)
self._net.blobs['data'].reshape(*data.shape)
self._net.reshape()
out = self._net.forward(blobs=[score_name,], data=data)
# print np.argmax(out[score_name], axis=1)
# TODO: check wrong samples
return out[score_name].copy()
def get_result(self, result, out, score_name):
if result is None:
result = out[score_name]
result = np.expand_dims(result, axis=0)
else:
result = np.append(result, np.expand_dims(out[score_name].copy(), axis=0), axis=0)
return result
def get_score_label(self, label_name):
out = self._net.forward()
label = out[label_name]
return out, label
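    # The forward_* helpers below do not feed data blobs directly: they write small
    # control files (scale_mult_<pid>, oversample_id_<pid>) into net_base, which the
    # data-providing layer of the deployed network is presumably configured to read,
    # and then trigger a plain forward pass via get_score_label().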
def forward_roi_net(self, net_base, pid, roi_mult):
with open('{}/scale_mult_{}'.format(net_base, pid), 'w') as fscale:
fscale.write('{:.1f}'.format(roi_mult))
out, label = self.get_score_label('label')
return out, label
def forward_rgb_net(self, net_base, pid, os_id):
with open('{}/oversample_id_{}'.format(net_base, pid), 'w') as fscale:
fscale.write('{}'.format(os_id))
out, label = self.get_score_label('label')
return out, label
def forward_merge_net(self, net_base, pid, os_id, roi_mult):
with open('{}/scale_mult_{}'.format(net_base, pid), 'w') as fscale:
fscale.write('{:.1f}'.format(roi_mult))
with open('{}/oversample_id_{}'.format(net_base, pid), 'w') as fscale:
fscale.write('{}'.format(os_id))
out, label = self.get_score_label('label')
return out, label
def predict_single_frame_from_cpp(self, net_base, frame_name, score_name, pid, is_roi=True, over_sample=True,
save_score=False, is_merge=False):
result = None
label = 0
if is_roi:
if over_sample:
roi_mult_list = np.arange(2.2, 3.2, 0.1).tolist()
for roi_mult in roi_mult_list:
out, label = self.forward_roi_net(net_base, pid, roi_mult)
result = self.get_result(result, out, score_name)
else:
roi_mult = 2.5 # np.arange(2., 3., 0.1).tolist()
out, label = self.forward_roi_net(net_base, pid, roi_mult)
result = self.get_result(result, out, score_name)
elif is_merge:
if over_sample:
oversample_id_list = np.arange(0, 10).tolist()
roi_mult_list = np.arange(2.4, 3.4, 0.1).tolist()
for os_id, roi_mult in zip(oversample_id_list, roi_mult_list):
out, label = self.forward_merge_net(net_base, pid, os_id, roi_mult)
result = self.get_result(result, out, score_name)
else:
roi_mult = 2.5
out, label = self.forward_roi_net(net_base, pid, roi_mult)
result = self.get_result(result, out, score_name)
else: # trunk
if over_sample:
oversample_id_list = np.arange(0, 10).tolist()
for os_id in oversample_id_list:
out, label = self.forward_rgb_net(net_base, pid, os_id)
result = self.get_result(result, out, score_name)
else:
out = self._net.forward() # blobs=[score_name, ], data=data
label = out['label']
result = self.get_result(result, out, score_name)
#####################################################################
if save_score:
with open('scores/{}.pkl'.format(frame_name), 'w') as fscore:
pickle.dump(result, fscore)
# print np.argmax(out[score_name], axis=1)
return np.swapaxes(result, 0, 1).copy(), int(label.max())
def predict_single_frame_motion(self, net_base, fc_score_name_list, pid, over_sample=True):
# result_fc = None
# result_fusion = None
result_fc_dict = {}
for fc_score_name in fc_score_name_list:
result_fc_dict[fc_score_name] = None
result = None
label = 0
if over_sample:
oversample_id_list = np.arange(0, 10).tolist()
for os_id in oversample_id_list:
with open('{}/oversample_id_{}'.format(net_base, pid), 'w') as fscale:
fscale.write('{}'.format(os_id))
out = self._net.forward() # blobs=[score_name, ], data=data
label = out['label']
for fc_score_name in fc_score_name_list:
result_fc_dict[fc_score_name] = self.get_result(result_fc_dict[fc_score_name], out, fc_score_name)
else:
out = self._net.forward() # blobs=[score_name, ], data=data
label = out['label']
for fc_score_name in fc_score_name_list:
result_fc_dict[fc_score_name] = self.get_result(result_fc_dict[fc_score_name], out, fc_score_name)
for fc_score_name in fc_score_name_list:
if result is None:
result = result_fc_dict[fc_score_name]
else:
result = np.append(result, result_fc_dict[fc_score_name], axis=1)
return np.swapaxes(result, 0, 1).copy(), int(label.max())
| 50.659942 | 121 | 0.583196 | 2,372 | 17,579 | 4.022766 | 0.08516 | 0.042444 | 0.033012 | 0.030811 | 0.818906 | 0.797632 | 0.78516 | 0.741878 | 0.717669 | 0.708342 | 0 | 0.028879 | 0.281017 | 17,579 | 346 | 122 | 50.806358 | 0.726086 | 0.278343 | 0 | 0.58371 | 1 | 0 | 0.01842 | 0 | 0 | 0 | 0 | 0.00289 | 0.004525 | 0 | null | null | 0.004525 | 0.036199 | null | null | 0.013575 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
6586fbf32153a0d3c125a116436a7dabaf60f2e9 | 11,599 | py | Python | L96_emulator/dataset.py | m-dml/emulator_L96 | 2d0d7981aa11d60f26d31e2263af7dcde1a39720 | [
"Apache-2.0"
] | 1 | 2021-08-10T08:15:36.000Z | 2021-08-10T08:15:36.000Z | L96_emulator/dataset.py | m-dml/emulator_L96 | 2d0d7981aa11d60f26d31e2263af7dcde1a39720 | [
"Apache-2.0"
] | null | null | null | L96_emulator/dataset.py | m-dml/emulator_L96 | 2d0d7981aa11d60f26d31e2263af7dcde1a39720 | [
"Apache-2.0"
] | null | null | null | import torch
import numpy as np
from L96_emulator.util import sortL96intoChannels, as_tensor
class Dataset(torch.utils.data.IterableDataset):
def __init__(self, data, offset=1, J=0,
start=None, end=None,
normalize=False, randomize_order=True):
if len(data.shape) == 2:
self.J, self.K = J, data.shape[1]//(J+1)
assert data.shape[1]/(J+1) == self.K
self.data = sortL96intoChannels(data, J)
self.offset = offset
if start is None or end is None:
start, end = 0, self.data.shape[0]-self.offset
assert end > start
self.start, self.end = start, end
self.normalize = normalize
self.mean, self.std = 0., 1.
if self.normalize:
self.mean = self.data.mean(axis=(0,2)).reshape(1,-1,1)
self.std = self.data.std(axis=(0,2)).reshape(1,-1,1)
self.data = (self.data - self.mean) / self.std
self.randomize_order = randomize_order
def __getitem__(self, index):
""" Generate one batch of data """
idx = np.atleast_1d(np.asarray(index))
return self.data[idx]
def __iter__(self):
""" Return iterable over data in random order """
iter_start, iter_end = self.divide_workers()
if self.randomize_order:
idx = torch.randperm(iter_end - iter_start, device='cpu') + iter_start
else:
idx = torch.arange(iter_start, iter_end, requires_grad=False, device='cpu')
X = self.data[idx,:]
y = self.data[idx+self.offset,:]
return zip(X, y)
def __len__(self):
return (self.end - self.start) #self.data.shape[0]
def divide_workers(self):
""" parallelized data loading via torch.util.data.Dataloader """
if torch.utils.data.get_worker_info() is None:
iter_start = torch.tensor(self.start, requires_grad=False, dtype=torch.int, device='cpu')
iter_end = torch.tensor(self.end, requires_grad=False, dtype=torch.int, device='cpu')
else:
raise NotImplementedError('had no need for parallelization yet')
return iter_start, iter_end
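# Minimal usage sketch (illustrative; K=36, J=10, the batch size and array name are
# assumptions, not taken from this module):
#   raw = np.random.randn(1000, 36 * (10 + 1))         # 1000 time steps of a flat L96 state
#   ds = Dataset(raw, offset=1, J=10, normalize=True)
#   loader = torch.utils.data.DataLoader(ds, batch_size=64)
#   for X, y in loader:                                 # X, y: (batch, J+1, K), one offset apart
#       ...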
class DatasetMultiStep(Dataset):
def __init__(self, data, offset=1, J=0,
start=None, end=None,
normalize=False, randomize_order=True):
super(DatasetMultiStep, self).__init__(
data=data, offset=offset, J=J, start=start, end=end,
normalize=normalize, randomize_order=randomize_order
)
self.offset = torch.as_tensor(np.asarray(offset, dtype=np.int).reshape(1,-1), device='cpu')
def __iter__(self):
""" Return iterable over data in random order """
iter_start, iter_end = self.divide_workers()
if self.randomize_order:
idx = torch.randperm(iter_end - iter_start, device='cpu') + iter_start
else:
idx = torch.arange(iter_start, iter_end, requires_grad=False, device='cpu')
io = (idx.reshape(-1,1) + self.offset.reshape(1,-1)).flatten()
X = self.data[idx].reshape(-1,self.J+1,self.K) # reshapes time x n_trials into single axis !
y = self.data[io].reshape(-1, np.prod(self.offset.shape), self.J+1,self.K)
return zip(X, y)
class DatasetMultiTrial(Dataset):
def __init__(self, data, offset=1, J=0,
start=None, end=None,
normalize=False, randomize_order=True):
assert len(data.shape) == 3 # N, T, K*(J+1)
self.N, self.T = data.shape[:2] # trial count, trial length
self.J, self.K = J, data.shape[-1]//(J+1)
assert data.shape[-1]/(J+1) == self.K
self.data = sortL96intoChannels(data.reshape(-1,self.K*(self.J+1)),J=J) # N*T, J+1, K
self.offset = offset
if start is None or end is None:
start, end = 0, self.T-self.offset
assert end > start
self.start, self.end = start, end
self.normalize = normalize
self.mean, self.std = 0., 1.
if self.normalize:
self.mean = self.data.mean(axis=(0,2)).reshape(1,-1,1)
self.std = self.data.std(axis=(0,2)).reshape(1,-1,1)
self.data = (self.data - self.mean) / self.std
self.randomize_order = randomize_order
def __getitem__(self, index):
""" Generate one batch of data """
idx = np.atleast_1d(np.asarray(index))
return self.data[idx]
def __iter__(self):
""" Return iterable over data in random order """
iter_start, iter_end = self.divide_workers()
if self.randomize_order:
idx = [torch.randperm(iter_end - iter_start, device='cpu') for j in range(self.N)]
idx = torch.cat([j*self.T + iter_start + idx[j] for j in range(len(idx))])
else:
idx = [torch.arange(iter_start, iter_end, requires_grad=False, device='cpu') for j in range(self.N)]
idx = torch.cat([j*self.T + idx[j] for j in range(len(idx))])
X = self.data[idx].reshape(-1,self.J+1,self.K) # reshapes time x n_trials into single axis !
y = self.data[idx+self.offset].reshape(-1,self.J+1,self.K)
return zip(X, y)
def __len__(self):
return self.N * (self.end - self.start)
class DatasetMultiTrial_shattered(DatasetMultiTrial):
def __init__(self, data, offset=1, J=0,
start=None, end=None, K_local=None, n_local=1,
normalize=False, randomize_order=True):
super(DatasetMultiTrial_shattered, self).__init__(
data=data, offset=offset, J=J, start=start, end=end,
normalize=normalize, randomize_order=randomize_order
)
self.K_local = self.K if K_local is None else K_local
assert self.K_local <= self.K
self.n_local = n_local
assert self.n_local >= 1
self.local_pad = (2,1) # L96 diff.eq. needs info from 3 relative locations k=-2,-1,+1
self.local_size = np.sum(self.local_pad)
idx_Ks_out, idx_Ks_in =[], []
local_idx = np.arange(-self.local_pad[0]*n_local,self.K_local+self.local_pad[1]*n_local)
for k in np.arange(0, self.K-self.K_local+1, self.K_local):
idx_Ks_in.append(np.mod(local_idx+k, self.K))
idx_Ks_out.append(np.arange(self.K_local)+k)
self.l_regs = len(idx_Ks_in)
self.idx_Ks_in, self.idx_Ks_out = np.concatenate(idx_Ks_in), np.concatenate(idx_Ks_out)
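        # Worked example (numbers assumed for illustration): with K=36, K_local=9 and
        # n_local=1, local_idx = [-2, ..., 9], so the first local region reads the 12
        # wrapped columns [34, 35, 0, 1, ..., 9] (idx_Ks_in) and predicts columns
        # 0..8 (idx_Ks_out); l_regs = 4 such regions tile the full ring of K columns.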
def __getitem__(self, index):
""" Generate one batch of data """
idx = np.atleast_1d(np.asarray(index))
return self.data[idx]
def __iter__(self):
""" Return iterable over data in random order """
iter_start, iter_end = self.divide_workers()
if self.randomize_order:
idx = [torch.randperm(iter_end - iter_start, device='cpu') for j in range(self.N)]
idx = torch.cat([j*self.T + iter_start + idx[j] for j in range(len(idx))])
else:
idx = [torch.arange(iter_start, iter_end, requires_grad=False, device='cpu') for j in range(self.N)]
idx = torch.cat([j*self.T + idx[j] for j in range(len(idx))])
X = self.data[:,:,self.idx_Ks_in][idx].reshape(-1,self.J+1,self.l_regs,self.K_local+self.local_size*self.n_local)
y = self.data[:,:,self.idx_Ks_out][idx+self.offset].reshape(-1,self.J+1,self.l_regs,self.K_local)
X = X.transpose(0,2,1,3)
y = y.transpose(0,2,1,3)
X = X.reshape(-1, *X.shape[2:])
y = y.reshape(-1, *y.shape[2:])
return zip(X, y)
def __len__(self):
return self.l_regs * self.N * (self.end - self.start)
class DatasetMultiTrialMultiStep(DatasetMultiTrial):
def __init__(self, data, offset=1, J=0,
start=None, end=None,
normalize=False, randomize_order=True):
super(DatasetMultiTrialMultiStep, self).__init__(
data=data, offset=offset, J=J, start=start, end=end,
normalize=normalize, randomize_order=randomize_order
)
self.offset = torch.as_tensor(np.asarray(offset, dtype=np.int).reshape(1,-1), device='cpu')
def __iter__(self):
""" Return iterable over data in random order """
iter_start, iter_end = self.divide_workers()
if self.randomize_order:
idx = [torch.randperm(iter_end - iter_start, device='cpu') for j in range(self.N)]
idx = torch.cat([j*self.T + iter_start + idx[j] for j in range(len(idx))])
else:
idx = [torch.arange(iter_start, iter_end, requires_grad=False, device='cpu') for j in range(self.N)]
idx = torch.cat([j*self.T + idx[j] for j in range(len(idx))])
io = (idx.reshape(-1,1) + self.offset.reshape(1,-1)).flatten()
X = self.data[idx].reshape(-1,self.J+1,self.K) # reshapes time x n_trials into single axis !
y = self.data[io].reshape(-1, np.prod(self.offset.shape), self.J+1,self.K)
return zip(X, y)
class DatasetRelPred(Dataset):
def __init__(self, data, offset=1, J=0,
start=None, end=None,
normalize=False, randomize_order=True):
if len(data.shape) == 2:
self.J, self.K = J, data.shape[1]//(J+1)
assert data.shape[1]/(J+1) == self.K
self.data = data.copy().reshape(-1, self.J+1, self.K)
self.offset = offset
if start is None or end is None:
start, end = 0, self.data.shape[0]-self.offset
assert end > start
self.start, self.end = start, end
self.normalize = normalize
self.mean, self.std = 0., 1.
if self.normalize:
self.mean_in = self.data.mean(axis=(0,2)).reshape(1,-1,1)
self.std_in = self.data.std(axis=(0,2)).reshape(1,-1,1)
self.data = (self.data - self.mean_in) / self.std_in
self.mean_out = np.mean(self.data[:-self.offset] - self.data[self.offset:], axis=(0,2)).reshape(1,-1,1)
self.std_out = np.std(self.data[:-self.offset] - self.data[self.offset:], axis=(0,2)).reshape(1,-1,1)
self.randomize_order = randomize_order
def __getitem__(self, index):
""" Generate one batch of data """
idx = np.atleast_1d(np.asarray(index))
return self.data[idx] #, (self.data[idx+self.offset,:] - self.data[idx,:] - self.mean_out) / self.std_out
def __iter__(self):
""" Return iterable over data in random order """
iter_start, iter_end = self.divide_workers()
if self.randomize_order:
idx = torch.randperm(iter_end - iter_start, device='cpu') + iter_start
else:
idx = torch.arange(iter_start, iter_end, requires_grad=False, device='cpu')
X = self.data[idx,:]
y = (self.data[idx+self.offset,:] - self.data[idx,:] - self.mean_out) / self.std_out
return zip(X, y)
class DatasetRelPredPast(DatasetRelPred):
def __iter__(self):
""" Return iterable over data in random order """
iter_start, iter_end = self.divide_workers()
if self.randomize_order:
idx = torch.randperm(iter_end - iter_start, device='cpu') + iter_start
else:
idx = torch.arange(iter_start, iter_end, requires_grad=False, device='cpu')
X = np.concatenate((self.data[idx,:], self.data[idx,:]-self.data[idx-self.offset,:]), axis=1)
y = (self.data[idx+self.offset,:] - self.data[idx,:] - self.mean_out) / self.std_out
return zip(X, y)
| 41.27758 | 121 | 0.59712 | 1,689 | 11,599 | 3.943162 | 0.076377 | 0.062462 | 0.033033 | 0.036036 | 0.82012 | 0.803453 | 0.793844 | 0.781832 | 0.765315 | 0.744144 | 0 | 0.017865 | 0.26166 | 11,599 | 280 | 122 | 41.425 | 0.759809 | 0.070523 | 0 | 0.703884 | 0 | 0 | 0.008318 | 0 | 0 | 0 | 0 | 0 | 0.043689 | 1 | 0.101942 | false | 0 | 0.014563 | 0.014563 | 0.223301 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
659054d040246ba13fb74d26b3ffea9aca3ba696 | 24,973 | py | Python | tests/cases/resources/tests/field.py | cchmc-bmi-os/serrano | ffecaa8e866423386e8a8c2432f99dd02ae7b4c1 | [
"BSD-2-Clause"
] | 6 | 2015-01-16T14:27:54.000Z | 2020-08-27T16:32:52.000Z | tests/cases/resources/tests/field.py | cchmc-bmi-os/serrano | ffecaa8e866423386e8a8c2432f99dd02ae7b4c1 | [
"BSD-2-Clause"
] | 52 | 2015-01-05T19:11:18.000Z | 2017-02-16T14:28:38.000Z | tests/cases/resources/tests/field.py | cchmc-bmi-os/serrano | ffecaa8e866423386e8a8c2432f99dd02ae7b4c1 | [
"BSD-2-Clause"
] | 6 | 2015-07-29T18:52:04.000Z | 2020-01-02T16:04:01.000Z | import json
from django.test.utils import override_settings
from avocado.models import DataField
from avocado.events.models import Log
from restlib2.http import codes
from .base import BaseTestCase
from tests.models import Title
class FieldResourceTestCase(BaseTestCase):
def test_get_all(self):
response = self.client.get('/api/fields/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertEqual(len(json.loads(response.content)), 5)
self.assertEqual(response['Link-Template'], (
'<http://testserver/api/fields/{id}/stats/>; rel="stats", '
'<http://testserver/api/fields/{id}/>; rel="self", '
'<http://testserver/api/fields/{id}/values/>; rel="values", '
'<http://testserver/api/fields/{id}/dist/>; rel="distribution", '
'<http://testserver/api/fields/{id}/dims/>; rel="dimensions"'
))
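    # The Link-Template header asserted above advertises URI templates for each
    # field's sub-resources (self, stats, values, distribution, dimensions), which
    # the per-endpoint tests below exercise individually.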
def test_get_all_unrelated(self):
# Publish unrelated field
DataField.objects.filter(model_name='unrelated').update(published=True)
# Should not appear in default request since it's not related
response = self.client.get('/api/fields/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertEqual(len(json.loads(response.content)), 5)
# Switch the tree, now it should be the only one
response = self.client.get('/api/fields/?tree=unrelated',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertEqual(len(json.loads(response.content)), 1)
def test_stats_capable_setting(self):
f = DataField.objects.get_by_natural_key('tests', 'title', 'name')
# Initially, the default stats_capable check will be used that allows
# for stats on all non-searchable fields so we will expect that the
# stats endpoint will return normally.
response = self.client.get('/api/fields/{0}/'.format(f.pk),
HTTP_ACCEPT='applicaton/json')
content = json.loads(response.content)
self.assertEqual(response.status_code, codes.ok)
self.assertTrue('stats_capable' in content)
response = self.client.get('/api/fields/{0}/stats/'.format(f.pk),
HTTP_ACCEPT='applicaton/json')
self.assertEqual(response.status_code, codes.ok)
self.assertEqual(response['Link-Template'], (
'<http://testserver/api/fields/{id}/stats/>; rel="self", '
'<http://testserver/api/fields/{parent_id}/>; rel="parent"'
))
# Now, overriding that setting so that this field is not
# "stats_capable" should 'disable' the stats endpoint for that field.
with self.settings(SERRANO_STATS_CAPABLE=lambda x: x.id != f.pk):
response = self.client.get('/api/fields/{0}/'.format(f.pk),
HTTP_ACCEPT='applicaton/json')
content = json.loads(response.content)
self.assertEqual(response.status_code, codes.ok)
self.assertFalse('stats_capable' in content)
response = self.client.get('/api/fields/{0}/stats/'.format(f.pk),
HTTP_ACCEPT='applicaton/json')
self.assertEqual(response.status_code, codes.unprocessable_entity)
@override_settings(SERRANO_CHECK_ORPHANED_FIELDS=True)
def test_get_all_orphan(self):
f = DataField.objects.get_by_natural_key('tests', 'title', 'name')
# Orphan one of the fields we are about to retrieve
DataField.objects.filter(pk=f.pk).update(field_name="XXX")
response = self.client.get('/api/fields/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertEqual(len(json.loads(response.content)), 4)
@override_settings(SERRANO_CHECK_ORPHANED_FIELDS=False)
def test_get_all_orphan_check_off(self):
f = DataField.objects.get_by_natural_key('tests', 'title', 'name')
# Orphan one of the fields we are about to retrieve
DataField.objects.filter(pk=f.pk).update(field_name="XXX")
response = self.client.get('/api/fields/',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertEqual(len(json.loads(response.content)), 5)
def test_get_one(self):
f1 = DataField.objects.get_by_natural_key('tests',
'office',
'location')
f2 = DataField.objects.get_by_natural_key('tests',
'title',
'name')
# Not allowed to see
response = self.client.get('/api/fields/{0}/'.format(f1.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.not_found)
response = self.client.get('/api/fields/{0}/'.format(f2.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertTrue(json.loads(response.content))
event = Log.objects.filter(event='read', object_id=f2.pk)
self.assertTrue(event.exists())
@override_settings(SERRANO_CHECK_ORPHANED_FIELDS=True)
def test_get_one_orphan(self):
f = DataField.objects.get_by_natural_key('tests',
'title',
'name')
# Orphan the field before we retrieve it.
# NOTE: Used to be model_name, but changed due to the tree
# filtering removing it from the set.
DataField.objects.filter(pk=f.pk).update(field_name="XXX")
response = self.client.get('/api/fields/{0}/'.format(f.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.internal_server_error)
@override_settings(SERRANO_CHECK_ORPHANED_FIELDS=False)
def test_get_one_orphan_check_off(self):
f = DataField.objects.get_by_natural_key('tests',
'title',
'name')
# Orphan one of the fields we are about to retrieve
DataField.objects.filter(pk=f.pk).update(field_name="XXX")
response = self.client.get('/api/fields/{0}/'.format(f.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
def test_get_privileged(self):
f1 = DataField.objects.get_by_natural_key('tests',
'office',
'location')
# Superuser sees everything
self.client.login(username='root', password='password')
response = self.client.get('/api/fields/?unpublished=1',
HTTP_ACCEPT='application/json')
self.assertEqual(len(json.loads(response.content)), 12)
response = self.client.get('/api/fields/{0}/?unpublished=1'
.format(f1.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertTrue(json.loads(response.content))
# Make sure the unpublished fields are only exposed when explicitly
# asked for even when a superuser makes the request.
response = self.client.get('/api/fields/',
HTTP_ACCEPT='application/json')
self.assertEqual(len(json.loads(response.content)), 5)
response = self.client.get('/api/fields/{0}/'.format(f1.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.not_found)
def test_values(self):
f2 = DataField.objects.get_by_natural_key('tests',
'title',
'name')
# title.name
response = self.client.get('/api/fields/{0}/values/'.format(f2.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
content = json.loads(response.content)
self.assertTrue(content['items'])
self.assertTrue(len(content['items']), 7)
response = self.client.get(
'/api/fields/{0}/values/?processor=first_title'.format(f2.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
content = json.loads(response.content)
self.assertTrue(content['items'])
self.assertTrue(len(content['items']), 1)
def test_values_no_limit(self):
f2 = DataField.objects.get_by_natural_key('tests',
'title',
'name')
# title.name
response = self.client.get('/api/fields/{0}/values/?limit=0'
.format(f2.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
data = json.loads(response.content)
self.assertTrue(data['items'])
self.assertFalse('previous' in response['Link'])
self.assertFalse('next' in response['Link'])
self.assertTrue('parent' in response['Link-Template'])
self.assertTrue(
'self' in response['Link'] and 'base' in response['Link'])
def test_zero_division_error(self):
f2 = DataField.objects.get_by_natural_key('tests',
'title',
'name')
# Delete everything for now
Title.objects.all().delete()
response = self.client.get('/api/fields/{0}/values/?limit=0'
.format(f2.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
data = json.loads(response.content)
self.assertEqual(data['items'], [])
def test_values_random(self):
f2 = DataField.objects.get_by_natural_key('tests',
'title',
'name')
# Random values
response = self.client.get('/api/fields/{0}/values/?random=3'
.format(f2.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertEqual(len(json.loads(response.content)), 3)
# Even though we are requesting 3 values, the query processor should
# limit the population to 1 value so make sure that the call returns
# only that single value since all values in the population should be
# returned when the random sample size is bigger than population size.
response = self.client.get(
'/api/fields/{0}/values/?random=3&processor=first_title'
.format(f2.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertEqual(len(json.loads(response.content)), 1)
def test_values_query(self):
f2 = DataField.objects.get_by_natural_key('tests',
'title',
'name')
response = self.client.get('/api/fields/{0}/values/?query=a'
.format(f2.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertEqual(json.loads(response.content)['items'], [
{'label': 'Analyst', 'value': 'Analyst'},
{'label': 'Guard', 'value': 'Guard'},
{'label': 'Lawyer', 'value': 'Lawyer'},
{'label': 'Programmer', 'value': 'Programmer'},
{'label': 'QA', 'value': 'QA'},
])
message = Log.objects.get(event='items', object_id=f2.pk)
self.assertEqual(message.data['query'], 'a')
response = self.client.get(
'/api/fields/{0}/values/?query=a&processor=under_twenty_thousand'
.format(f2.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertEqual(json.loads(response.content)['items'], [
{'label': 'Guard', 'value': 'Guard'},
{'label': 'Programmer', 'value': 'Programmer'},
{'label': 'QA', 'value': 'QA'},
])
def test_values_validate(self):
f2 = DataField.objects.get_by_natural_key('tests',
'title',
'name')
# Valid, single dict
response = self.client.post(
'/api/fields/{0}/values/'.format(f2.pk),
data=json.dumps({'value': 'IT'}),
content_type='application/json',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
content = json.loads(response.content)
self.assertEqual(content, {
'value': 'IT',
'label': 'IT',
'valid': True,
})
message = Log.objects.get(event='validate', object_id=f2.pk)
self.assertEqual(message.data['count'], 1)
# Invalid
response = self.client.post(
'/api/fields/{0}/values/'.format(f2.pk),
data=json.dumps({'value': 'Bartender'}),
content_type='application/json',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
content = json.loads(response.content)
self.assertEqual(content, {
'value': 'Bartender',
'label': 'Bartender',
'valid': False,
})
# Mixed, list
response = self.client.post(
'/api/fields/{0}/values/'.format(f2.pk),
data=json.dumps([
{'value': 'IT'},
{'value': 'Bartender'},
{'value': 'Programmer'}
]),
content_type='application/json',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
content = json.loads(response.content)
self.assertEqual(content, [
{'value': 'IT', 'label': 'IT', 'valid': True},
{'value': 'Bartender', 'label': 'Bartender', 'valid': False},
{'value': 'Programmer', 'label': 'Programmer', 'valid': True},
])
# Error - no value
response = self.client.post(
'/api/fields/{0}/values/'.format(f2.pk),
data=json.dumps({}),
content_type='application/json',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.unprocessable_entity)
# Error - type
response = self.client.post(
'/api/fields/{0}/values/'.format(f2.pk),
data=json.dumps(None),
content_type='application/json',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.unprocessable_entity)
def test_labels_validate(self):
f2 = DataField.objects.get_by_natural_key('tests',
'title',
'name')
# Valid, single dict
response = self.client.post(
'/api/fields/{0}/values/'.format(f2.pk),
data=json.dumps({'label': 'IT'}),
content_type='application/json',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
content = json.loads(response.content)
self.assertEqual(content, {
'value': 'IT',
'label': 'IT',
'valid': True,
})
def test_mixed_validate(self):
f2 = DataField.objects.get_by_natural_key('tests',
'title',
'name')
response = self.client.post(
'/api/fields/{0}/values/'.format(f2.pk),
data=json.dumps([
{'label': 'IT'},
{'label': 'Bartender'},
{'value': 'Programmer'}
]),
content_type='application/json',
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
content = json.loads(response.content)
self.assertEqual(content, [
{'value': 'IT', 'label': 'IT', 'valid': True},
{'value': 'Bartender', 'label': 'Bartender', 'valid': False},
{'value': 'Programmer', 'label': 'Programmer', 'valid': True},
])
def test_stats(self):
f2 = DataField.objects.get_by_natural_key('tests',
'title',
'name')
f3 = DataField.objects.get_by_natural_key('tests',
'title',
'salary')
# title.name
response = self.client.get('/api/fields/{0}/stats/'.format(f2.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertTrue(json.loads(response.content))
self.assertTrue(
Log.objects.filter(event='stats', object_id=f2.pk).exists())
# title.salary
response = self.client.get('/api/fields/{0}/stats/'.format(f3.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
stats = json.loads(response.content)
self.assertTrue(stats)
self.assertTrue(
Log.objects.filter(event='stats', object_id=f3.pk).exists())
self.assertEqual(stats['min'], 10000)
self.assertEqual(stats['max'], 200000)
self.assertAlmostEqual(stats['avg'], 53571.42857, places=5)
# Using an invalid query processor should fall back to the default.
response = self.client.get('/api/fields/{0}/stats/?processor=INVALID'
.format(f3.pk),
HTTP_ACCEPT='application/json')
stats = json.loads(response.content)
self.assertEqual(stats['min'], 10000)
self.assertEqual(stats['max'], 200000)
self.assertAlmostEqual(stats['avg'], 53571.42857, places=5)
# Using a valid query processor should affect the stats.
response = self.client.get(
'/api/fields/{0}/stats/?processor=under_twenty_thousand'
.format(f3.pk),
HTTP_ACCEPT='application/json')
stats = json.loads(response.content)
self.assertEqual(stats['min'], 10000)
self.assertEqual(stats['max'], 15000)
self.assertEqual(stats['avg'], 13750)
# project.due_date
f11 = DataField.objects.get_by_natural_key('tests',
'project',
'due_date')
response = self.client.get('/api/fields/{0}/stats/'.format(f11.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
stats = json.loads(response.content)
self.assertTrue(stats)
self.assertTrue(
Log.objects.filter(event='stats', object_id=f11.pk).exists())
self.assertEqual(stats['min'], '2000-01-01')
self.assertEqual(stats['max'], '2010-01-01')
def test_empty_stats(self):
f2 = DataField.objects.get_by_natural_key('tests',
'title',
'name')
Title.objects.all().delete()
response = self.client.get('/api/fields/{0}/stats/'.format(f2.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertTrue(json.loads(response.content))
self.assertTrue(
Log.objects.filter(event='stats', object_id=f2.pk).exists())
def test_dist(self):
f3 = DataField.objects.get_by_natural_key('tests',
'title',
'salary')
default_content = [
{'label': '10000', 'value': 10000, 'count': 1},
{'label': '15000', 'value': 15000, 'count': 3},
{'label': '20000', 'value': 20000, 'count': 1},
{'label': '100000', 'value': 100000, 'count': 1},
{'label': '200000', 'value': 200000, 'count': 1},
]
# title.salary
response = self.client.get('/api/fields/{0}/dist/'.format(f3.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertEqual(json.loads(response.content), default_content)
event = Log.objects.filter(event='dist', object_id=f3.pk)
self.assertTrue(event.exists())
# Using an invalid processor should fallback to the default processor.
response = self.client.get('/api/fields/{0}/dist/?processor=INVALID'
.format(f3.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertEqual(json.loads(response.content), default_content)
# Using the custom query process, we should be limited to a smaller
# salary set.
response = self.client.get('/api/fields/{0}/dist/?processor=manager'
.format(f3.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertEqual(json.loads(response.content), [
{'label': '15000', 'value': 15000, 'count': 1},
])
def test_dims(self):
f3 = DataField.objects.get_by_natural_key('tests',
'title',
'salary')
default_content = {
u'size': 4,
u'clustered': False,
u'outliers': [],
u'data': [{
u'count': 3,
u'values': [{'label': '15000', 'value': 15000}]
}, {
u'count': 1,
u'values': [{'label': '10000', 'value': 10000}]
}, {
u'count': 1,
u'values': [{'label': '20000', 'value': 20000}]
}, {
u'count': 1,
u'values': [{'label': '200000', 'value': 200000}]
}],
}
# title.salary
response = self.client.get('/api/fields/{0}/dims/'.format(f3.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertEqual(json.loads(response.content), default_content)
event = Log.objects.filter(event='dims', object_id=f3.pk)
self.assertTrue(event.exists())
# Using an invalid processor should fallback to the default processor.
response = self.client.get('/api/fields/{0}/dims/?processor=INVALID'
.format(f3.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertEqual(json.loads(response.content), default_content)
# Using the custom query process, we should be limited to a smaller
# salary set.
response = self.client.get('/api/fields/{0}/dims/?processor=manager'
.format(f3.pk),
HTTP_ACCEPT='application/json')
self.assertEqual(response.status_code, codes.ok)
self.assertEqual(json.loads(response.content), {
u'size': 1,
u'clustered': False,
u'outliers': [],
u'data': [{
u'count': 1,
u'values': [{'label': '15000', 'value': 15000}]
}]
})
| 44.996396 | 79 | 0.536099 | 2,556 | 24,973 | 5.133412 | 0.100548 | 0.086884 | 0.060361 | 0.076214 | 0.824175 | 0.80032 | 0.773798 | 0.768691 | 0.7504 | 0.720372 | 0 | 0.019396 | 0.33316 | 24,973 | 554 | 80 | 45.077617 | 0.76851 | 0.070957 | 0 | 0.716247 | 0 | 0 | 0.164284 | 0.042237 | 0 | 0 | 0 | 0 | 0.23341 | 1 | 0.048055 | false | 0.002288 | 0.016018 | 0 | 0.066362 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
659a6aa98627cc55243cfab21d2995c9b9c6b629 | 48 | py | Python | src/lib/_threading_local.py | DTenore/skulpt | 098d20acfb088d6db85535132c324b7ac2f2d212 | [
"MIT"
] | 2,671 | 2015-01-03T08:23:25.000Z | 2022-03-31T06:15:48.000Z | src/lib/_threading_local.py | wakeupmuyunhe/skulpt | a8fb11a80fb6d7c016bab5dfe3712517a350b347 | [
"MIT"
] | 972 | 2015-01-05T08:11:00.000Z | 2022-03-29T13:47:15.000Z | src/lib/_threading_local.py | wakeupmuyunhe/skulpt | a8fb11a80fb6d7c016bab5dfe3712517a350b347 | [
"MIT"
] | 845 | 2015-01-03T19:53:36.000Z | 2022-03-29T18:34:22.000Z | import _sk_fail; _sk_fail._("_threading_local")
| 24 | 47 | 0.8125 | 7 | 48 | 4.571429 | 0.714286 | 0.375 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.0625 | 48 | 1 | 48 | 48 | 0.711111 | 0 | 0 | 0 | 0 | 0 | 0.333333 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | null | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 7 |
65b1dde9bac114ec65373ad4f87fcb883abcf772 | 84,852 | py | Python | kfs/layers/core.py | the-moliver/kfs | 6b1505aef46eb7e8dd0b04d76e16f45737288622 | [
"MIT"
] | 75 | 2016-05-07T03:04:34.000Z | 2021-07-04T18:01:40.000Z | kfs/layers/core.py | the-moliver/kfs | 6b1505aef46eb7e8dd0b04d76e16f45737288622 | [
"MIT"
] | 11 | 2017-04-09T00:01:58.000Z | 2018-11-19T00:30:11.000Z | kfs/layers/core.py | the-moliver/kfs | 6b1505aef46eb7e8dd0b04d76e16f45737288622 | [
"MIT"
] | 12 | 2017-02-11T00:25:39.000Z | 2018-12-20T03:14:52.000Z | # -*- coding: utf-8 -*-
from __future__ import absolute_import
from __future__ import division
import numpy as np
import copy
import inspect
import types as python_types
import warnings
from keras import backend as K
from keras import activations
from keras import initializers
from keras import regularizers
from kfs import constraints as kconstraints
from keras import constraints
from keras.engine import InputSpec
from keras.engine import Layer
from keras.utils.generic_utils import func_dump
from keras.utils.generic_utils import func_load
from keras.utils.generic_utils import deserialize_keras_object
from keras.legacy import interfaces
class FilterDims(Layer):
    '''This layer lets you filter an arbitrary set of axes by projection onto a new axis.
    This can be useful for reducing dimensionality and/or regularizing spatio-temporal models or other
models of structured data.
# Example
```python
# As a temporal filter in a 5D spatio-temporal model with input shape (#samples, 12, 3, 30, 30)
# The input has 12 time steps, 3 color channels and X and Y of size 30:
model = Sequential()
model.add(TimeDistributed(Convolution2D(10, 5, 5, activation='linear', subsample=(2, 2)), input_shape=(12, 3, 30, 30)))
# The output from the previous layer has shape (#samples, 12, 10, 13, 13)
    # We can use FilterDims to filter the 12 time steps on axis 1 by projection onto a new axis of 5 dimensions with a 12x5 matrix:
model.add(FilterDims(filters=5, sum_axes=[1], filter_axes=[1], bias=False))
# The weights learned by FilterDims are a set of temporal filters on the output of the spatial convolutions
# The output dimensionality is (#samples, 5, 10, 13, 13)
# We can then use FilterDims to filter the 5 temporal dimensions and 10 convolutional filter feature map
# dimensions to create 2 spatio-temporal filters with a 5x10x2 weight tensor:
model.add(FilterDims(filters=2, sum_axes=[1, 2], filter_axes=[1, 2], bias=False))
# The output dimensionality is (#samples, 2, 13, 13)
# We can then use FilterDims to spatially filter each spatio-temporal dimension with a 2x13x13 tensor:
model.add(FilterDims(filters=1, sum_axes=[2, 3], filter_axes=[1, 2, 3], bias=False))
    # We only sum over the last two spatial axes resulting in an output dimensionality of (#samples, 2)
```
# Arguments
filters: number of filters to apply.
filter_axes: a list of the axes of the input to filter
sum_axes: a list of the axes of the input that should be summed across after filtering
init: name of initialization function for the weights of the layer
(see [initializations](../initializations.md)),
or alternatively, Theano function to use for weights
initialization. This parameter is only relevant
if you don't pass a `weights` argument.
activation: name of activation function to use
(see [activations](../activations.md)),
or alternatively, elementwise Theano function.
If you don't specify anything, no activation is applied
(ie. "linear" activation: a(x) = x).
weights: list of Numpy arrays to set as initial weights.
The list should have 2 elements, of shape `(input_dim, output_dim)`
and (output_dim,) for weights and biases respectively.
W_regularizer: instance of [WeightRegularizer](../regularizers.md)
(eg. L1 or L2 regularization), applied to the main weights matrix.
b_regularizer: instance of [WeightRegularizer](../regularizers.md),
applied to the bias.
activity_regularizer: instance of [ActivityRegularizer](../regularizers.md),
applied to the network output.
W_constraint: instance of the [constraints](../constraints.md) module
(eg. maxnorm, nonneg), applied to the main weights matrix.
b_constraint: instance of the [constraints](../constraints.md) module,
applied to the bias.
bias: whether to include a bias (i.e. make the layer affine rather than linear).
input_dim: dimensionality of the input (integer).
This argument (or alternatively, the keyword argument `input_shape`)
is required when using this layer as the first layer in a model.
# Input shape
ND tensor with arbitrary shape.
# Output shape
ND tensor with shape determined by input and arguments.
'''
def __init__(self, filters,
sum_axes,
filter_axes,
activation=None,
use_bias=True,
kernel_initializer='glorot_uniform',
bias_initializer='zeros',
kernel_activation=None,
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
**kwargs):
if 'input_shape' not in kwargs and 'input_dim' in kwargs:
kwargs['input_shape'] = (kwargs.pop('input_dim'),)
super(FilterDims, self).__init__(**kwargs)
self.kernel_initializer = initializers.get(kernel_initializer)
self.bias_initializer = initializers.get(bias_initializer)
self.activation = activations.get(activation)
self.kernel_activation = activations.get(kernel_activation)
self.filters = filters
self.sum_axes = list(sum_axes)
self.sum_axes.sort()
self.filter_axes = list(filter_axes)
self.filter_axes.sort()
self.kernel_regularizer = regularizers.get(kernel_regularizer)
self.bias_regularizer = regularizers.get(bias_regularizer)
self.activity_regularizer = regularizers.get(activity_regularizer)
self.kernel_constraint = constraints.get(kernel_constraint)
self.bias_constraint = constraints.get(bias_constraint)
self.use_bias = use_bias
self.input_spec = InputSpec(min_ndim=2)
self.supports_masking = True
def build(self, input_shape):
ndim = len(input_shape)
assert ndim >= 2
kernel_shape = [1] * (ndim - 1)
kernel_broadcast = [False] * (ndim - 1)
bias_broadcast = [True] * (ndim - 1)
for i in self.filter_axes:
kernel_shape[i-1] = input_shape[i]
if self.filters > 1:
kernel_shape.append(self.filters)
kernel_broadcast.append(False)
bias_broadcast.append(False)
for i in set(range(1, ndim)) - set(self.filter_axes):
kernel_broadcast[i-1] = True
kernel_shape = tuple(kernel_shape)
self.kernel = self.add_weight(kernel_shape,
initializer=self.kernel_initializer,
name='kernel',
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint)
if self.use_bias:
bias_shape = [1] * (ndim - 1)
for i in set(self.filter_axes) - set(self.sum_axes):
bias_shape[i-1] = input_shape[i]
bias_broadcast[i-1] = False
if self.filters > 1:
bias_shape.append(self.filters)
bias_shape = tuple(bias_shape)
self.bias = self.add_weight(bias_shape,
                                        initializer=self.bias_initializer,
name='bias',
regularizer=self.bias_regularizer,
constraint=self.bias_constraint)
else:
self.bias = None
self.kernel_broadcast = kernel_broadcast
self.bias_broadcast = bias_broadcast
self.built = True
def call(self, x, mask=None):
ndim = K.ndim(x)
xshape = K.shape(x)
W = self.kernel_activation(self.kernel)
if self.filter_axes == self.sum_axes:
ax1 = [a-1 for a in self.sum_axes]
if self.filters > 1:
ax1 = ax1 + list(set(range(ndim)) - set(ax1))
else:
ax1 = ax1 + list(set(range(ndim - 1)) - set(ax1))
ax2 = list(set(range(ndim)) - set(self.sum_axes))
permute_dims = list(range(len(ax2)))
permute_dims.insert(self.sum_axes[0], len(ax2))
outdims = [-1] + [xshape[a] for a in ax2[1:]] + [self.filters]
ax2 = ax2 + self.sum_axes
W = K.permute_dimensions(W, ax1)
W = K.reshape(W, (-1, self.filters))
x = K.permute_dimensions(x, ax2)
x = K.reshape(x, (-1, K.shape(W)[0]))
output = K.reshape(K.dot(x, W), outdims)
if self.use_bias:
b_broadcast = [i for j, i in enumerate(self.bias_broadcast) if j not in self.sum_axes]
b = K.squeeze(self.bias, self.sum_axes[0])
if len(self.sum_axes) > 1:
b = K.squeeze(b, self.sum_axes[1] - 1)
if len(self.sum_axes) > 2:
b = K.squeeze(b, self.sum_axes[2] - 2)
if K.backend() == 'theano':
output += K.pattern_broadcast(b, b_broadcast)
else:
output += b
output = K.permute_dimensions(output, permute_dims)
elif self.filters > 1:
# bcast = list(np.where(self.broadcast)[0])
permute_dims = list(range(ndim + 1))
permute_dims[self.sum_axes[0]] = ndim
permute_dims[ndim] = self.sum_axes[0]
if K.backend() == 'theano':
output = K.sum(x[..., None] * K.pattern_broadcast(W, self.kernel_broadcast), axis=self.sum_axes, keepdims=True)
else:
output = K.sum(x[..., None] * W, axis=self.sum_axes, keepdims=True)
if self.use_bias:
if K.backend() == 'theano':
output += K.pattern_broadcast(self.bias, self.bias_broadcast)
else:
output += self.bias
output = K.squeeze(K.permute_dimensions(output, permute_dims), ndim)
if len(self.sum_axes) > 1:
output = K.squeeze(output, self.sum_axes[1])
else:
if K.backend() == 'theano':
output = K.sum(x * K.pattern_broadcast(W, self.kernel_broadcast), axis=self.sum_axes, keepdims=True)
else:
output = K.sum(x * W, axis=self.sum_axes, keepdims=True)
if self.use_bias:
if K.backend() == 'theano':
output += K.pattern_broadcast(self.bias, self.bias_broadcast)
else:
output += self.bias
output = K.squeeze(output, self.sum_axes[0])
if len(self.sum_axes) > 1:
output = K.squeeze(output, self.sum_axes[1]-1)
return self.activation(output)
def compute_output_shape(self, input_shape):
if self.filters > 1:
ndim = len(input_shape)
output_shape = [input_shape[0]] + [1] * (ndim-1)
for i in set(range(1, ndim)) - set(self.sum_axes):
output_shape[i] = input_shape[i]
output_shape.append(self.filters)
permute_dims = list(range(ndim + 1))
permute_dims[self.sum_axes[0]] = ndim
permute_dims[ndim] = self.sum_axes[0]
output_shape = [output_shape[i] for i in permute_dims]
output_shape.pop(ndim)
if len(self.sum_axes) > 1:
output_shape.pop(self.sum_axes[1])
else:
output_shape = input_shape
output_shape = [output_shape[i] for i in set(range(len(input_shape))) - set(self.sum_axes)]
if len(output_shape) == 1:
output_shape.append(1)
return tuple(output_shape)
def get_config(self):
config = {
'filters': self.filters,
'sum_axes': self.sum_axes,
'filter_axes': self.filter_axes,
'activation': activations.serialize(self.activation),
'kernel_activation': activations.serialize(self.kernel_activation),
'use_bias': self.use_bias,
'kernel_initializer': initializers.serialize(self.kernel_initializer),
            'bias_initializer': initializers.serialize(self.bias_initializer),
'kernel_regularizer': regularizers.serialize(self.kernel_regularizer),
'bias_regularizer': regularizers.serialize(self.bias_regularizer),
'activity_regularizer': regularizers.serialize(self.activity_regularizer),
'kernel_constraint': constraints.serialize(self.kernel_constraint),
'bias_constraint': constraints.serialize(self.bias_constraint)
}
base_config = super(FilterDims, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
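
# A minimal NumPy sketch of the projection FilterDims(filters=5, sum_axes=[1],
# filter_axes=[1]) performs on a (batch, 12, 10, 13, 13) input: the 12 time
# steps are projected onto 5 temporal filters and the new filter axis takes the
# place of the summed axis. Shapes and variable names here are illustrative only.
def _filterdims_numpy_sketch():
    x = np.random.rand(2, 12, 10, 13, 13)   # (batch, time, channels, rows, cols)
    kernel = np.random.rand(12, 5)           # (time, filters)
    # sum over the filtered/summed axis (time), inserting the filter axis there
    out = np.einsum('btcij,tf->bfcij', x, kernel)
    assert out.shape == (2, 5, 10, 13, 13)
    return out
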
class FilterDimsV1(Layer):
    '''This layer lets you filter an arbitrary set of axes by projection onto a new axis.
    This can be useful for reducing dimensionality and/or regularizing spatio-temporal models or other
models of structured data.
# Example
```python
# As a temporal filter in a 5D spatio-temporal model with input shape (#samples, 12, 3, 30, 30)
# The input has 12 time steps, 3 color channels and X and Y of size 30:
model = Sequential()
model.add(TimeDistributed(Convolution2D(10, 5, 5, activation='linear', subsample=(2, 2)), input_shape=(12, 3, 30, 30)))
# The output from the previous layer has shape (#samples, 12, 10, 13, 13)
    # We can use FilterDims to filter the 12 time steps on axis 1 by projection onto a new axis of 5 dimensions with a 12x5 matrix:
model.add(FilterDims(filters=5, sum_axes=[1], filter_axes=[1], bias=False))
# The weights learned by FilterDims are a set of temporal filters on the output of the spatial convolutions
# The output dimensionality is (#samples, 5, 10, 13, 13)
# We can then use FilterDims to filter the 5 temporal dimensions and 10 convolutional filter feature map
# dimensions to create 2 spatio-temporal filters with a 5x10x2 weight tensor:
model.add(FilterDims(filters=2, sum_axes=[1, 2], filter_axes=[1, 2], bias=False))
# The output dimensionality is (#samples, 2, 13, 13)
# We can then use FilterDims to spatially filter each spatio-temporal dimension with a 2x13x13 tensor:
model.add(FilterDims(filters=1, sum_axes=[2, 3], filter_axes=[1, 2, 3], bias=False))
    # We only sum over the last two spatial axes resulting in an output dimensionality of (#samples, 2)
```
# Arguments
filters: number of filters to apply.
filter_axes: a list of the axes of the input to filter
sum_axes: a list of the axes of the input that should be summed across after filtering
init: name of initialization function for the weights of the layer
(see [initializations](../initializations.md)),
or alternatively, Theano function to use for weights
initialization. This parameter is only relevant
if you don't pass a `weights` argument.
activation: name of activation function to use
(see [activations](../activations.md)),
or alternatively, elementwise Theano function.
If you don't specify anything, no activation is applied
(ie. "linear" activation: a(x) = x).
weights: list of Numpy arrays to set as initial weights.
The list should have 2 elements, of shape `(input_dim, output_dim)`
and (output_dim,) for weights and biases respectively.
W_regularizer: instance of [WeightRegularizer](../regularizers.md)
(eg. L1 or L2 regularization), applied to the main weights matrix.
b_regularizer: instance of [WeightRegularizer](../regularizers.md),
applied to the bias.
activity_regularizer: instance of [ActivityRegularizer](../regularizers.md),
applied to the network output.
W_constraint: instance of the [constraints](../constraints.md) module
(eg. maxnorm, nonneg), applied to the main weights matrix.
b_constraint: instance of the [constraints](../constraints.md) module,
applied to the bias.
bias: whether to include a bias (i.e. make the layer affine rather than linear).
input_dim: dimensionality of the input (integer).
This argument (or alternatively, the keyword argument `input_shape`)
is required when using this layer as the first layer in a model.
# Input shape
ND tensor with arbitrary shape.
# Output shape
ND tensor with shape determined by input and arguments.
'''
def __init__(self, filters_simple,
filters_complex,
sum_axes,
filter_axes,
activation='relu',
use_bias=True,
kernel_initializer='glorot_uniform',
bias_initializer='zeros',
kernel_activation=None,
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
**kwargs):
if 'input_shape' not in kwargs and 'input_dim' in kwargs:
kwargs['input_shape'] = (kwargs.pop('input_dim'),)
super(FilterDimsV1, self).__init__(**kwargs)
self.kernel_initializer = initializers.get(kernel_initializer)
self.bias_initializer = initializers.get(bias_initializer)
self.activation = activations.get(activation)
self.kernel_activation = activations.get(kernel_activation)
self.filters_simple = filters_simple
self.filters_complex = filters_complex
self.sum_axes = list(sum_axes)
self.sum_axes.sort()
self.filter_axes = list(filter_axes)
self.filter_axes.sort()
self.kernel_regularizer = regularizers.get(kernel_regularizer)
self.bias_regularizer = regularizers.get(bias_regularizer)
self.activity_regularizer = regularizers.get(activity_regularizer)
self.kernel_constraint = kconstraints.UnitNormOrthogonal(self.filters_complex)
self.bias_constraint = constraints.get(bias_constraint)
self.use_bias = use_bias
self.input_spec = InputSpec(min_ndim=2)
self.supports_masking = True
def build(self, input_shape):
ndim = len(input_shape)
assert ndim >= 2
kernel_shape = [1] * (ndim - 1)
kernel_broadcast = [False] * (ndim - 1)
bias_broadcast = [True] * (ndim - 1)
for i in self.filter_axes:
kernel_shape[i-1] = input_shape[i]
kernel_shape.append(2 * self.filters_complex + self.filters_simple)
kernel_broadcast.append(False)
bias_broadcast.append(False)
for i in set(range(1, ndim)) - set(self.filter_axes):
kernel_broadcast[i-1] = True
kernel_shape = tuple(kernel_shape)
self.kernel = self.add_weight(kernel_shape,
initializer=self.kernel_initializer,
name='kernel',
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint)
if self.use_bias:
bias_shape = [1] * (ndim - 1)
for i in set(self.filter_axes) - set(self.sum_axes):
bias_shape[i-1] = input_shape[i]
bias_broadcast[i-1] = False
bias_shape.append(self.filters_complex + self.filters_simple)
bias_shape = tuple(bias_shape)
self.bias = self.add_weight(bias_shape,
                                        initializer=self.bias_initializer,
name='bias',
regularizer=self.bias_regularizer,
constraint=self.bias_constraint)
else:
self.bias = None
self.kernel_broadcast = kernel_broadcast
self.bias_broadcast = bias_broadcast
self.built = True
def call(self, x, mask=None):
ndim = K.ndim(x)
xshape = K.shape(x)
W = self.kernel_activation(self.kernel)
if self.filter_axes == self.sum_axes:
ax1 = [a-1 for a in self.sum_axes]
ax1 = ax1 + list(set(range(ndim)) - set(ax1))
ax2 = list(set(range(ndim)) - set(self.sum_axes))
permute_dims = list(range(len(ax2)))
permute_dims.insert(self.sum_axes[0], len(ax2))
outdims = [-1] + [xshape[a] for a in ax2[1:]] + [self.filters_complex + self.filters_simple]
ax2 = ax2 + self.sum_axes
W = K.permute_dimensions(W, ax1)
W = K.reshape(W, (-1, 2 * self.filters_complex + self.filters_simple))
x = K.permute_dimensions(x, ax2)
x = K.reshape(x, (-1, K.shape(W)[0]))
output = K.dot(x, W)
output_complex = K.sqrt(K.square(output[:, :self.filters_complex]) + K.square(output[:, self.filters_complex:2*self.filters_complex]) + K.epsilon())
output_simple = output[:, 2*self.filters_complex:]
output = K.reshape(K.concatenate([output_complex, output_simple], axis=1), outdims)
if self.use_bias:
b_broadcast = [i for j, i in enumerate(self.bias_broadcast) if j not in self.sum_axes]
b = K.squeeze(self.bias, self.sum_axes[0])
if len(self.sum_axes) > 1:
b = K.squeeze(b, self.sum_axes[1] - 1)
if len(self.sum_axes) > 2:
b = K.squeeze(b, self.sum_axes[2] - 2)
if K.backend() == 'theano':
output += K.pattern_broadcast(b, b_broadcast)
else:
output += b
output = K.permute_dimensions(output, permute_dims)
else:
# bcast = list(np.where(self.broadcast)[0])
permute_dims = list(range(ndim + 1))
permute_dims[self.sum_axes[0]] = ndim
permute_dims[ndim] = self.sum_axes[0]
if K.backend() == 'theano':
output = K.sum(x[..., None] * K.pattern_broadcast(W, self.kernel_broadcast), axis=self.sum_axes, keepdims=True)
else:
output = K.sum(x[..., None] * W, axis=self.sum_axes, keepdims=True)
output_complex = K.sqrt(K.square(output[..., :self.filters_complex]) + K.square(output[..., self.filters_complex:2*self.filters_complex]) + K.epsilon())
output_simple = output[..., 2*self.filters_complex:]
output = K.concatenate([output_complex, output_simple], axis=-1)
if self.use_bias:
if K.backend() == 'theano':
output += K.pattern_broadcast(self.bias, self.bias_broadcast)
else:
output += self.bias
output = K.squeeze(K.permute_dimensions(output, permute_dims), ndim)
if len(self.sum_axes) > 1:
output = K.squeeze(output, self.sum_axes[1])
return self.activation(output)
def compute_output_shape(self, input_shape):
ndim = len(input_shape)
output_shape = [input_shape[0]] + [1] * (ndim-1)
for i in set(range(1, ndim)) - set(self.sum_axes):
output_shape[i] = input_shape[i]
output_shape.append(self.filters_complex + self.filters_simple)
permute_dims = list(range(ndim + 1))
permute_dims[self.sum_axes[0]] = ndim
permute_dims[ndim] = self.sum_axes[0]
output_shape = [output_shape[i] for i in permute_dims]
output_shape.pop(ndim)
if len(self.sum_axes) > 1:
output_shape.pop(self.sum_axes[1])
return tuple(output_shape)
def get_config(self):
config = {
'filters_simple': self.filters_simple,
'filters_complex': self.filters_complex,
'sum_axes': self.sum_axes,
'filter_axes': self.filter_axes,
'activation': activations.serialize(self.activation),
'kernel_activation': activations.serialize(self.kernel_activation),
'use_bias': self.use_bias,
'kernel_initializer': initializers.serialize(self.kernel_initializer),
            'bias_initializer': initializers.serialize(self.bias_initializer),
'kernel_regularizer': regularizers.serialize(self.kernel_regularizer),
'bias_regularizer': regularizers.serialize(self.bias_regularizer),
'activity_regularizer': regularizers.serialize(self.activity_regularizer),
'kernel_constraint': constraints.serialize(self.kernel_constraint),
'bias_constraint': constraints.serialize(self.bias_constraint)
}
base_config = super(FilterDimsV1, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
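
# A minimal NumPy sketch of the FilterDimsV1 output nonlinearity, assuming the
# flattened 2-D case: the first 2 * filters_complex projections are treated as
# quadrature pairs and combined by an energy term sqrt(a**2 + b**2), while the
# remaining filters_simple projections pass through unchanged.
def _filterdimsv1_energy_sketch(filters_complex=3, filters_simple=2):
    x = np.random.rand(4, 16)                                      # (batch, features)
    kernel = np.random.rand(16, 2 * filters_complex + filters_simple)
    proj = x.dot(kernel)
    energy = np.sqrt(proj[:, :filters_complex] ** 2 +
                     proj[:, filters_complex:2 * filters_complex] ** 2 + 1e-7)
    simple = proj[:, 2 * filters_complex:]
    out = np.concatenate([energy, simple], axis=1)
    assert out.shape == (4, filters_complex + filters_simple)
    return out
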
class SoftMinMax(Layer):
"""Computes a selective and adaptive soft-min or soft-max over the inputs.
# Example
```python
# as first layer in a sequential model:
model = Sequential()
model.add(SoftMinMax(32, input_dim=16))
# now the model will take as input arrays of shape (*, 16)
# and output arrays of shape (*, 32)
# this is equivalent to the above:
model = Sequential()
model.add(SoftMinMax(32, input_shape=(16,)))
# after the first layer, you don't need to specify
# the size of the input anymore:
model.add(SoftMinMax(32))
```
# Arguments
output_dim: int > 0.
init: name of initialization function for the weights of the layer
(see [initializations](../initializations.md)),
or alternatively, Theano function to use for weights
initialization. This parameter is only relevant
if you don't pass a `weights` argument.
activation: name of activation function to use
(see [activations](../activations.md)),
or alternatively, elementwise Theano function.
If you don't specify anything, no activation is applied
(ie. "linear" activation: a(x) = x).
weights: list of Numpy arrays to set as initial weights.
The list should have 2 elements, of shape `(input_dim, output_dim)`
and (output_dim,) for weights and k parameters respectively.
W_regularizer: instance of [WeightRegularizer](../regularizers.md)
(eg. L1 or L2 regularization), applied to the main weights matrix.
k_regularizer: instance of [WeightRegularizer](../regularizers.md),
applied to the k parameters.
activity_regularizer: instance of [ActivityRegularizer](../regularizers.md),
applied to the network output.
W_constraint: instance of the [constraints](../constraints.md) module
(eg. maxnorm, nonneg), applied to the main weights matrix.
k_constraint: instance of the [constraints](../constraints.md) module,
applied to the k parameters.
input_dim: dimensionality of the input (integer). This argument
(or alternatively, the keyword argument `input_shape`)
is required when using this layer as the first layer in a model.
# Input shape
nD tensor with shape: `(nb_samples, ..., input_dim)`.
The most common situation would be
a 2D input with shape `(nb_samples, input_dim)`.
# Output shape
nD tensor with shape: `(nb_samples, ..., output_dim)`.
For instance, for a 2D input with shape `(nb_samples, input_dim)`,
the output would have shape `(nb_samples, output_dim)`.
"""
def __init__(self, units,
kernel_initializer='glorot_uniform',
kernel_regularizer=None,
kernel_constraint=constraints.NonNeg(),
k_initializer='zeros',
k_regularizer=None,
k_constraint=None,
tied_k=False,
activity_regularizer=None,
**kwargs):
if 'input_shape' not in kwargs and 'input_dim' in kwargs:
kwargs['input_shape'] = (kwargs.pop('input_dim'),)
super(SoftMinMax, self).__init__(**kwargs)
self.units = units
self.kernel_initializer = initializers.get(kernel_initializer)
self.kernel_regularizer = regularizers.get(kernel_regularizer)
self.kernel_constraint = constraints.get(kernel_constraint)
self.k_initializer = initializers.get(k_initializer)
self.k_regularizer = regularizers.get(k_regularizer)
self.k_constraint = constraints.get(k_constraint)
self.tied_k = tied_k
self.activity_regularizer = regularizers.get(activity_regularizer)
self.input_spec = InputSpec(min_ndim=2)
self.supports_masking = True
def build(self, input_shape):
assert len(input_shape) >= 2
input_dim = input_shape[-1]
self.kernel = self.add_weight(shape=(input_dim, self.units),
initializer=self.kernel_initializer,
name='kernel',
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint)
if self.tied_k:
k_size = (1,)
else:
k_size = (self.units,)
self.k = self.add_weight(shape=k_size,
initializer=self.k_initializer,
name='k',
regularizer=self.k_regularizer,
constraint=self.k_constraint)
self.input_spec = InputSpec(min_ndim=2, axes={-1: input_dim})
self.built = True
def call(self, x, mask=None):
# W = K.softplus(10.*self.kernel)/10.
W = self.kernel
if self.tied_k:
kX = self.k[0] * x
kX = K.clip(kX, -30, 30)
wekx = W[None, :, :] * K.exp(kX[:, :, None])
else:
kX = self.k[None, None, :] * x[:, :, None]
kX = K.clip(kX, -30, 30)
wekx = W[None, :, :] * K.exp(kX)
output = K.sum(x[:, :, None] * wekx, axis=1) / (K.sum(wekx, axis=1) + K.epsilon())
return output
def compute_output_shape(self, input_shape):
assert input_shape and len(input_shape) >= 2
assert input_shape[-1]
output_shape = list(input_shape)
output_shape[-1] = self.units
return tuple(output_shape)
def get_config(self):
config = {
'units': self.units,
'kernel_initializer': initializers.serialize(self.kernel_initializer),
'k_initializer': initializers.serialize(self.k_initializer),
'kernel_regularizer': regularizers.serialize(self.kernel_regularizer),
'k_regularizer': regularizers.serialize(self.k_regularizer),
'activity_regularizer': regularizers.serialize(self.activity_regularizer),
'kernel_constraint': constraints.serialize(self.kernel_constraint),
'k_constraint': constraints.serialize(self.k_constraint)
}
base_config = super(SoftMinMax, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
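
# A minimal NumPy sketch of the SoftMinMax computation (untied k): each output
# unit is a weighted mean of the inputs where the weights W[:, j] are scaled by
# exp(k[j] * x), so a large positive k[j] approaches a soft-max over the
# selected inputs, a large negative k[j] a soft-min, and k[j] = 0 a plain
# weighted mean. Shapes here are illustrative only.
def _softminmax_numpy_sketch():
    x = np.random.rand(5, 8)                 # (batch, input_dim)
    W = np.random.rand(8, 3)                 # non-negative selection weights
    k = np.array([-5., 0., 5.])              # per-unit temperature
    kX = np.clip(k[None, None, :] * x[:, :, None], -30, 30)
    wekx = W[None, :, :] * np.exp(kX)        # (batch, input_dim, units)
    out = (x[:, :, None] * wekx).sum(axis=1) / (wekx.sum(axis=1) + 1e-7)
    assert out.shape == (5, 3)
    return out
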
class WeightedMean(Layer):
"""Computes a selective and adaptive soft-min or soft-max over the inputs.
# Example
```python
# as first layer in a sequential model:
model = Sequential()
    model.add(WeightedMean(32, input_dim=16))
# now the model will take as input arrays of shape (*, 16)
# and output arrays of shape (*, 32)
# this is equivalent to the above:
model = Sequential()
    model.add(WeightedMean(32, input_shape=(16,)))
# after the first layer, you don't need to specify
# the size of the input anymore:
    model.add(WeightedMean(32))
```
# Arguments
output_dim: int > 0.
init: name of initialization function for the weights of the layer
(see [initializations](../initializations.md)),
or alternatively, Theano function to use for weights
initialization. This parameter is only relevant
if you don't pass a `weights` argument.
activation: name of activation function to use
(see [activations](../activations.md)),
or alternatively, elementwise Theano function.
If you don't specify anything, no activation is applied
(ie. "linear" activation: a(x) = x).
weights: list of Numpy arrays to set as initial weights.
The list should have 2 elements, of shape `(input_dim, output_dim)`
and (output_dim,) for weights and k parameters respectively.
W_regularizer: instance of [WeightRegularizer](../regularizers.md)
(eg. L1 or L2 regularization), applied to the main weights matrix.
k_regularizer: instance of [WeightRegularizer](../regularizers.md),
applied to the k parameters.
activity_regularizer: instance of [ActivityRegularizer](../regularizers.md),
applied to the network output.
W_constraint: instance of the [constraints](../constraints.md) module
(eg. maxnorm, nonneg), applied to the main weights matrix.
k_constraint: instance of the [constraints](../constraints.md) module,
applied to the k parameters.
input_dim: dimensionality of the input (integer). This argument
(or alternatively, the keyword argument `input_shape`)
is required when using this layer as the first layer in a model.
# Input shape
nD tensor with shape: `(nb_samples, ..., input_dim)`.
The most common situation would be
a 2D input with shape `(nb_samples, input_dim)`.
# Output shape
nD tensor with shape: `(nb_samples, ..., output_dim)`.
For instance, for a 2D input with shape `(nb_samples, input_dim)`,
the output would have shape `(nb_samples, output_dim)`.
"""
def __init__(self, units,
kernel_initializer='glorot_uniform',
kernel_regularizer=None,
kernel_constraint=constraints.NonNeg(),
activity_regularizer=None,
**kwargs):
if 'input_shape' not in kwargs and 'input_dim' in kwargs:
kwargs['input_shape'] = (kwargs.pop('input_dim'),)
super(WeightedMean, self).__init__(**kwargs)
self.units = units
self.kernel_initializer = initializers.get(kernel_initializer)
self.kernel_regularizer = regularizers.get(kernel_regularizer)
self.kernel_constraint = constraints.get(kernel_constraint)
self.activity_regularizer = regularizers.get(activity_regularizer)
self.input_spec = InputSpec(min_ndim=2)
self.supports_masking = True
def build(self, input_shape):
assert len(input_shape) >= 2
input_dim = input_shape[-1]
self.kernel = self.add_weight(shape=(input_dim, self.units),
initializer=self.kernel_initializer,
name='kernel',
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint)
self.input_spec = InputSpec(min_ndim=2, axes={-1: input_dim})
self.built = True
def call(self, x, mask=None):
# W = K.softplus(10.*self.kernel)/10.
W = self.kernel
wekx = W[None, :, :]
output = K.sum(x[:, :, None] * wekx, axis=1) / (K.sum(wekx, axis=1) + K.epsilon())
return output
def compute_output_shape(self, input_shape):
assert input_shape and len(input_shape) >= 2
assert input_shape[-1]
output_shape = list(input_shape)
output_shape[-1] = self.units
return tuple(output_shape)
def get_config(self):
config = {
'units': self.units,
'kernel_initializer': initializers.serialize(self.kernel_initializer),
'kernel_regularizer': regularizers.serialize(self.kernel_regularizer),
'activity_regularizer': regularizers.serialize(self.activity_regularizer),
'kernel_constraint': constraints.serialize(self.kernel_constraint)
}
base_config = super(WeightedMean, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
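
# A minimal NumPy sketch of the WeightedMean computation: each output unit is
# the mean of the inputs weighted by a (non-negative) column of the kernel and
# normalized by that column's sum.
def _weightedmean_numpy_sketch():
    x = np.random.rand(5, 8)                 # (batch, input_dim)
    W = np.random.rand(8, 3)                 # (input_dim, units)
    out = (x[:, :, None] * W[None, :, :]).sum(axis=1) / (W.sum(axis=0) + 1e-7)
    assert out.shape == (5, 3)
    return out
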
class DenseNonNeg(Layer):
"""A densely-connected NN layer with weights soft-rectified before
being applied.
# Example
```python
# as first layer in a sequential model:
model = Sequential()
model.add(Dense(32, input_dim=16))
# now the model will take as input arrays of shape (*, 16)
# and output arrays of shape (*, 32)
# this is equivalent to the above:
model = Sequential()
model.add(Dense(32, input_shape=(16,)))
# after the first layer, you don't need to specify
# the size of the input anymore:
model.add(Dense(32))
```
# Arguments
output_dim: int > 0.
init: name of initialization function for the weights of the layer
(see [initializations](../initializations.md)),
or alternatively, Theano function to use for weights
initialization. This parameter is only relevant
if you don't pass a `weights` argument.
activation: name of activation function to use
(see [activations](../activations.md)),
or alternatively, elementwise Theano function.
If you don't specify anything, no activation is applied
(ie. "linear" activation: a(x) = x).
weights: list of Numpy arrays to set as initial weights.
The list should have 2 elements, of shape `(input_dim, output_dim)`
and (output_dim,) for weights and biases respectively.
W_regularizer: instance of [WeightRegularizer](../regularizers.md)
(eg. L1 or L2 regularization), applied to the main weights matrix.
b_regularizer: instance of [WeightRegularizer](../regularizers.md),
applied to the bias.
activity_regularizer: instance of [ActivityRegularizer](../regularizers.md),
applied to the network output.
W_constraint: instance of the [constraints](../constraints.md) module
(eg. maxnorm, nonneg), applied to the main weights matrix.
b_constraint: instance of the [constraints](../constraints.md) module,
applied to the bias.
bias: whether to include a bias
(i.e. make the layer affine rather than linear).
input_dim: dimensionality of the input (integer). This argument
(or alternatively, the keyword argument `input_shape`)
is required when using this layer as the first layer in a model.
# Input shape
nD tensor with shape: `(nb_samples, ..., input_dim)`.
The most common situation would be
a 2D input with shape `(nb_samples, input_dim)`.
# Output shape
nD tensor with shape: `(nb_samples, ..., output_dim)`.
For instance, for a 2D input with shape `(nb_samples, input_dim)`,
the output would have shape `(nb_samples, output_dim)`.
"""
def __init__(self, output_dim, init='glorot_uniform',
activation=None, weights=None,
W_regularizer=None, b_regularizer=None, activity_regularizer=None,
W_constraint=None, b_constraint=None,
bias=True, input_dim=None, **kwargs):
        self.init = initializers.get(init)
self.activation = activations.get(activation)
self.output_dim = output_dim
self.input_dim = input_dim
self.W_regularizer = regularizers.get(W_regularizer)
self.b_regularizer = regularizers.get(b_regularizer)
self.activity_regularizer = regularizers.get(activity_regularizer)
self.W_constraint = constraints.get(W_constraint)
self.b_constraint = constraints.get(b_constraint)
self.bias = bias
self.initial_weights = weights
self.input_spec = [InputSpec(ndim='2+')]
if self.input_dim:
kwargs['input_shape'] = (self.input_dim,)
super(DenseNonNeg, self).__init__(**kwargs)
def build(self, input_shape):
assert len(input_shape) >= 2
input_dim = input_shape[-1]
self.input_dim = input_dim
self.input_spec = [InputSpec(dtype=K.floatx(),
ndim='2+')]
self.W = self.add_weight((input_dim, self.output_dim),
initializer=self.init,
name='{}_W'.format(self.name),
regularizer=self.W_regularizer,
constraint=self.W_constraint)
if self.bias:
self.b = self.add_weight((self.output_dim,),
initializer='zero',
name='{}_b'.format(self.name),
regularizer=self.b_regularizer,
constraint=self.b_constraint)
else:
self.b = None
if self.initial_weights is not None:
self.set_weights(self.initial_weights)
del self.initial_weights
self.built = True
def call(self, x, mask=None):
        W = K.softplus(10.*self.W)/10.
        output = K.dot(x, W)
        if self.bias:
            # only soft-rectify the bias when one exists (self.b is None otherwise)
            b = K.softplus(10.*self.b)/10.
            output += b
return self.activation(output)
def get_output_shape_for(self, input_shape):
assert input_shape and len(input_shape) >= 2
assert input_shape[-1] and input_shape[-1] == self.input_dim
output_shape = list(input_shape)
output_shape[-1] = self.output_dim
return tuple(output_shape)
def get_config(self):
config = {'output_dim': self.output_dim,
'init': self.init.__name__,
'activation': self.activation.__name__,
'W_regularizer': self.W_regularizer.get_config() if self.W_regularizer else None,
'b_regularizer': self.b_regularizer.get_config() if self.b_regularizer else None,
'activity_regularizer': self.activity_regularizer.get_config() if self.activity_regularizer else None,
'W_constraint': self.W_constraint.get_config() if self.W_constraint else None,
'b_constraint': self.b_constraint.get_config() if self.b_constraint else None,
'bias': self.bias,
'input_dim': self.input_dim}
base_config = super(DenseNonNeg, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
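
# A minimal NumPy sketch of the soft-rectification used by DenseNonNeg (and by
# DivisiveNormalization below): softplus(10 * w) / 10 keeps every effective
# weight non-negative while staying close to relu(w) away from zero.
def _soft_rectify_sketch():
    softplus = lambda z: np.logaddexp(0.0, z)        # numerically stable softplus
    w_raw = np.random.randn(8, 3)                    # unconstrained parameters
    w_eff = softplus(10. * w_raw) / 10.              # effective, non-negative weights
    x = np.random.rand(5, 8)
    out = x.dot(w_eff)                               # bias handled the same way
    assert (w_eff >= 0).all() and out.shape == (5, 3)
    return out
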
class Feedback(Layer):
"""An adaptive Devisive Normalization layer where the output
consists of the inputs added to a weighted combination of the inputs
# Example
```python
# as first layer in a sequential model:
model = Sequential()
model.add(Dense(32, input_dim=16))
# now the model will take as input arrays of shape (*, 16)
# and output arrays of shape (*, 32)
# this is equivalent to the above:
model = Sequential()
model.add(Dense(32, input_shape=(16,)))
# after the first layer, you don't need to specify
# the size of the input anymore:
model.add(Dense(32))
```
# Arguments
init: name of initialization function for the weights of the layer
(see [initializations](../initializations.md)),
or alternatively, Theano function to use for weights
initialization. This parameter is only relevant
if you don't pass a `weights` argument.
activation: name of activation function to use
(see [activations](../activations.md)),
or alternatively, elementwise Theano function.
If you don't specify anything, no activation is applied
(ie. "linear" activation: a(x) = x).
weights: list of Numpy arrays to set as initial weights.
The list should have 2 elements, of shape `(input_dim, input_dim)`
and (input_dim,) for weights and biases respectively.
W_regularizer: instance of [WeightRegularizer](../regularizers.md)
(eg. L1 or L2 regularization), applied to the main weights matrix.
b_regularizer: instance of [WeightRegularizer](../regularizers.md),
applied to the bias.
activity_regularizer: instance of [ActivityRegularizer](../regularizers.md),
applied to the network output.
W_constraint: instance of the [constraints](../constraints.md) module
(eg. maxnorm, nonneg), applied to the main weights matrix.
b_constraint: instance of the [constraints](../constraints.md) module,
applied to the bias.
bias: whether to include a bias
(i.e. make the layer affine rather than linear).
input_dim: dimensionality of the input (integer). This argument
(or alternatively, the keyword argument `input_shape`)
is required when using this layer as the first layer in a model.
# Input shape
nD tensor with shape: `(nb_samples, ..., input_dim)`.
The most common situation would be
a 2D input with shape `(nb_samples, input_dim)`.
# Output shape
nD tensor with shape: `(nb_samples, ..., input_dim)`.
For instance, for a 2D input with shape `(nb_samples, input_dim)`,
the output would have shape `(nb_samples, input_dim)`.
"""
def __init__(self, init='glorot_uniform',
activation=None, weights=None,
W_regularizer=None, b_regularizer=None, activity_regularizer=None,
W_constraint=None, b_constraint=None,
bias=True, input_dim=None, **kwargs):
        self.init = initializers.get(init)
self.activation = activations.get(activation)
self.input_dim = input_dim
self.W_regularizer = regularizers.get(W_regularizer)
self.b_regularizer = regularizers.get(b_regularizer)
self.activity_regularizer = regularizers.get(activity_regularizer)
self.W_constraint = constraints.get(W_constraint)
self.b_constraint = constraints.get(b_constraint)
self.bias = bias
self.initial_weights = weights
self.input_spec = [InputSpec(ndim='2+')]
if self.input_dim:
kwargs['input_shape'] = (self.input_dim,)
super(Feedback, self).__init__(**kwargs)
def build(self, input_shape):
assert len(input_shape) >= 2
input_dim = input_shape[-1]
self.input_dim = input_dim
self.input_spec = [InputSpec(dtype=K.floatx(),
ndim='2+')]
self.W = self.add_weight((input_dim, input_dim),
initializer=self.init,
name='{}_W'.format(self.name),
regularizer=self.W_regularizer,
constraint=self.W_constraint)
if self.bias:
self.b = self.add_weight((input_dim,),
initializer='zero',
name='{}_b'.format(self.name),
regularizer=self.b_regularizer,
constraint=self.b_constraint)
else:
self.b = None
if self.initial_weights is not None:
self.set_weights(self.initial_weights)
del self.initial_weights
self.built = True
def call(self, x, mask=None):
output = K.dot(x, self.W)
if self.bias:
output += self.b
return self.activation(x + output)
def get_output_shape_for(self, input_shape):
assert input_shape and len(input_shape) >= 2
assert input_shape[-1] and input_shape[-1] == self.input_dim
output_shape = list(input_shape)
output_shape[-1] = self.input_dim
return tuple(output_shape)
def get_config(self):
config = {'init': self.init.__name__,
'activation': self.activation.__name__,
'W_regularizer': self.W_regularizer.get_config() if self.W_regularizer else None,
'b_regularizer': self.b_regularizer.get_config() if self.b_regularizer else None,
'activity_regularizer': self.activity_regularizer.get_config() if self.activity_regularizer else None,
'W_constraint': self.W_constraint.get_config() if self.W_constraint else None,
'b_constraint': self.b_constraint.get_config() if self.b_constraint else None,
'bias': self.bias,
'input_dim': self.input_dim}
base_config = super(Feedback, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
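
# A minimal NumPy sketch of the Feedback layer: the output is the input plus a
# learned linear recombination of that same input (a single feedback step).
def _feedback_numpy_sketch():
    x = np.random.rand(5, 8)
    W = 0.1 * np.random.randn(8, 8)
    b = np.zeros(8)
    out = x + x.dot(W) + b
    assert out.shape == (5, 8)
    return out
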
class DivisiveNormalization(Layer):
"""An adaptive Devisive Normalization layer where the output
consists of the inputs divided by a weighted combination of the inputs
# Example
```python
# as first layer in a sequential model:
model = Sequential()
model.add(Dense(32, input_dim=16))
# now the model will take as input arrays of shape (*, 16)
# and output arrays of shape (*, 32)
# this is equivalent to the above:
model = Sequential()
model.add(Dense(32, input_shape=(16,)))
# after the first layer, you don't need to specify
# the size of the input anymore:
model.add(Dense(32))
```
# Arguments
init: name of initialization function for the weights of the layer
(see [initializations](../initializations.md)),
or alternatively, Theano function to use for weights
initialization. This parameter is only relevant
if you don't pass a `weights` argument.
activation: name of activation function to use
(see [activations](../activations.md)),
or alternatively, elementwise Theano function.
If you don't specify anything, no activation is applied
(ie. "linear" activation: a(x) = x).
weights: list of Numpy arrays to set as initial weights.
The list should have 2 elements, of shape `(input_dim, input_dim)`
and (input_dim,) for weights and biases respectively.
W_regularizer: instance of [WeightRegularizer](../regularizers.md)
(eg. L1 or L2 regularization), applied to the main weights matrix.
b_regularizer: instance of [WeightRegularizer](../regularizers.md),
applied to the bias.
activity_regularizer: instance of [ActivityRegularizer](../regularizers.md),
applied to the network output.
W_constraint: instance of the [constraints](../constraints.md) module
(eg. maxnorm, nonneg), applied to the main weights matrix.
b_constraint: instance of the [constraints](../constraints.md) module,
applied to the bias.
bias: whether to include a bias
(i.e. make the layer affine rather than linear).
input_dim: dimensionality of the input (integer). This argument
(or alternatively, the keyword argument `input_shape`)
is required when using this layer as the first layer in a model.
# Input shape
nD tensor with shape: `(nb_samples, ..., input_dim)`.
The most common situation would be
a 2D input with shape `(nb_samples, input_dim)`.
# Output shape
nD tensor with shape: `(nb_samples, ..., input_dim)`.
For instance, for a 2D input with shape `(nb_samples, input_dim)`,
the output would have shape `(nb_samples, input_dim)`.
"""
def __init__(self, init='glorot_uniform',
activation=None, weights=None,
W_regularizer=None, b_regularizer=None, activity_regularizer=None,
W_constraint=None, b_constraint=None,
bias=True, input_dim=None, **kwargs):
        self.init = initializers.get(init)
self.activation = activations.get(activation)
self.input_dim = input_dim
self.W_regularizer = regularizers.get(W_regularizer)
self.b_regularizer = regularizers.get(b_regularizer)
self.activity_regularizer = regularizers.get(activity_regularizer)
self.W_constraint = constraints.get(W_constraint)
self.b_constraint = constraints.get(b_constraint)
self.bias = bias
self.initial_weights = weights
self.input_spec = [InputSpec(ndim='2+')]
if self.input_dim:
kwargs['input_shape'] = (self.input_dim,)
super(DivisiveNormalization, self).__init__(**kwargs)
def build(self, input_shape):
assert len(input_shape) >= 2
input_dim = input_shape[-1]
self.input_dim = input_dim
self.input_spec = [InputSpec(dtype=K.floatx(),
ndim='2+')]
self.W = self.add_weight((input_dim, input_dim),
initializer=self.init,
name='{}_W'.format(self.name),
regularizer=self.W_regularizer,
constraint=self.W_constraint)
if self.bias:
self.b = self.add_weight((input_dim,),
initializer='zero',
name='{}_b'.format(self.name),
regularizer=self.b_regularizer,
constraint=self.b_constraint)
else:
self.b = None
if self.initial_weights is not None:
self.set_weights(self.initial_weights)
del self.initial_weights
self.built = True
def call(self, x, mask=None):
        W = K.softplus(10.*self.W)/10.
        if self.bias:
            # only soft-rectify the bias when one exists (self.b is None otherwise)
            b = K.softplus(10.*self.b)/10.
            output = x / (1. + K.dot(x, W) + b)
        else:
            output = x / (1. + K.dot(x, W))
return self.activation(output)
def get_output_shape_for(self, input_shape):
assert input_shape and len(input_shape) >= 2
assert input_shape[-1] and input_shape[-1] == self.input_dim
output_shape = list(input_shape)
output_shape[-1] = self.input_dim
return tuple(output_shape)
def get_config(self):
config = {'init': self.init.__name__,
'activation': self.activation.__name__,
'W_regularizer': self.W_regularizer.get_config() if self.W_regularizer else None,
'b_regularizer': self.b_regularizer.get_config() if self.b_regularizer else None,
'activity_regularizer': self.activity_regularizer.get_config() if self.activity_regularizer else None,
'W_constraint': self.W_constraint.get_config() if self.W_constraint else None,
'b_constraint': self.b_constraint.get_config() if self.b_constraint else None,
'bias': self.bias,
'input_dim': self.input_dim}
base_config = super(DivisiveNormalization, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
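
# A minimal NumPy sketch of the DivisiveNormalization layer: each input is
# divided by one plus a non-negative weighted pool of all the inputs.
def _divisive_normalization_numpy_sketch():
    softplus = lambda z: np.logaddexp(0.0, z)
    x = np.random.rand(5, 8)
    W = softplus(10. * np.random.randn(8, 8)) / 10.  # non-negative pooling weights
    b = softplus(10. * np.random.randn(8)) / 10.     # non-negative offset term
    out = x / (1. + x.dot(W) + b)
    assert out.shape == (5, 8)
    return out
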
class Gram(Layer):
    def __init__(self, diag=True, input_dim=None, data_format=None, **kwargs):
        # input_shape must be set before the parent constructor consumes kwargs
        if input_dim and 'input_shape' not in kwargs:
            kwargs['input_shape'] = (input_dim,)
        super(Gram, self).__init__(**kwargs)
        if data_format is None:
            data_format = K.image_data_format()
        self.data_format = data_format
        self.diag = diag
        self.input_dim = input_dim
def build(self, input_shape):
ndim = len(input_shape)
assert ndim == 4
if self.data_format == 'channels_first':
self.stack_size = input_shape[1]
elif self.data_format == 'channels_last':
self.stack_size = input_shape[3]
else:
raise ValueError('Invalid data_format:', self.data_format)
if self.diag:
d = 0
else:
d = -1
self.tril = np.nonzero(np.tri(self.stack_size, self.stack_size, d).ravel())[0]
self.built = True
def call(self, x, mask=None):
xshape = K.int_shape(x)
if self.data_format == 'channels_first':
x = K.reshape(x, [-1, xshape[1], xshape[2]*xshape[3]])
out = K.batch_dot(x, K.permute_dimensions(x, [0, 2, 1]))
elif self.data_format == 'channels_last':
x = K.reshape(x, [-1, xshape[1]*xshape[2], xshape[3]])
out = K.batch_dot(K.permute_dimensions(x, [0, 2, 1]), x)
out = K.permute_dimensions(K.gather(K.permute_dimensions(K.reshape(out, [-1, self.stack_size**2]), [1, 0]), self.tril), [1, 0])
return out
def compute_output_shape(self, input_shape):
return (input_shape[0], len(self.tril))
def get_config(self):
config = {
'input_dim': self.input_dim}
base_config = super(Gram, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
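
# A minimal NumPy sketch of the Gram layer for channels_last input: the
# per-sample channel-by-channel Gram matrix is computed over all spatial
# positions and only its lower triangle (including the diagonal when diag=True)
# is kept, matching the indexing built from np.tri above.
def _gram_numpy_sketch():
    x = np.random.rand(2, 7, 7, 4)                    # (batch, rows, cols, channels)
    flat = x.reshape(2, 7 * 7, 4)
    gram = np.einsum('bic,bid->bcd', flat, flat)      # (batch, channels, channels)
    keep = np.nonzero(np.tri(4, 4, 0).ravel())[0]     # lower triangle incl. diagonal
    out = gram.reshape(2, -1)[:, keep]
    assert out.shape == (2, keep.size)                # 4 * 5 / 2 = 10 entries
    return out
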
class GaussianMixtureDensity(Layer):
'''A layer for creating a Gaussian Mixture Density Network.
It should only be used as the last layer of the network and in
combination with GaussianMixtureDensityLoss
# Example
```python
# as first layer in a sequential model:
model = Sequential()
model.add(Dense(32, input_dim=16))
# now the model will take as input arrays of shape (*, 16)
# and output arrays of shape (*, 32)
# this is equivalent to the above:
model = Sequential()
model.add(Dense(32, input_shape=(16,)))
# after the first layer, you don't need to specify
# the size of the input anymore:
model.add(Dense(32))
```
# Arguments
output_dim: int > 0.
init: name of initialization function for the weights of the layer
(see [initializations](../initializations.md)),
or alternatively, Theano function to use for weights
initialization. This parameter is only relevant
if you don't pass a `weights` argument.
weights: list of numpy arrays to set as initial weights.
The list should have 2 elements, of shape `(input_dim, output_dim)`
and (output_dim,) for weights and biases respectively.
bias: whether to include a bias (i.e. make the layer affine rather than linear).
input_dim: dimensionality of the input (integer).
This argument (or alternatively, the keyword argument `input_shape`)
is required when using this layer as the first layer in a model.
# Input shape
2D tensor with shape: `(nb_samples, input_dim)`.
# Output shape
2D tensor with shape: `(nb_samples, output_dim)`.
'''
    def __init__(self, output_dim, num_components, init='glorot_uniform',
                 activation=None, weights=None,
                 bias=True, input_dim=None, **kwargs):
        self.init = initializers.get(init)
        self.activation = activations.get(activation)
self.output_dim = output_dim
self.input_dim = input_dim
self.num_components = num_components
self.bias = bias
self.initial_weights = weights
self.input_spec = [InputSpec(ndim=2)]
if self.input_dim:
kwargs['input_shape'] = (self.input_dim,)
        super(GaussianMixtureDensity, self).__init__(**kwargs)
def build(self, input_shape):
assert len(input_shape) == 2
input_dim = input_shape[1]
self.input_spec = [InputSpec(dtype=K.floatx(),
shape=(None, input_dim))]
        self.W_mu = K.variable(self.init((input_dim, self.num_components*self.output_dim)),
                               name='{}_W_mu'.format(self.name))
        self.W_sigma = K.variable(self.init((input_dim, self.num_components)),
                                  name='{}_W_sigma'.format(self.name))
        self.W_pi = K.variable(self.init((input_dim, self.num_components)),
                               name='{}_W_pi'.format(self.name))
if self.bias:
self.b_mu = K.zeros((self.num_components*self.output_dim,),
name='{}_b_mu'.format(self.name))
self.b_sigma = K.zeros((self.num_components,),
name='{}_b_sigma'.format(self.name))
self.b_pi = K.zeros((self.num_components,),
name='{}_b_pi'.format(self.name))
            self.trainable_weights = [self.W_mu, self.W_sigma, self.W_pi, self.b_mu, self.b_sigma, self.b_pi]
else:
            self.trainable_weights = [self.W_mu, self.W_sigma, self.W_pi]
if self.initial_weights is not None:
self.set_weights(self.initial_weights)
del self.initial_weights
def call(self, x, mask=None):
output_mu = K.dot(x, self.W_mu)
output_sigma = K.dot(x, self.W_sigma)
output_pi = K.dot(x, self.W_pi)
if self.bias:
output_mu += self.b_mu
output_sigma += self.b_sigma
output_pi += self.b_pi
return K.concatenate([output_mu, K.exp(output_sigma), K.softmax(output_pi)], axis=-1)
def get_output_shape_for(self, input_shape):
assert input_shape and len(input_shape) == 2
return (input_shape[0], self.output_dim)
def get_config(self):
config = {'output_dim': self.output_dim,
'init': self.init.__name__,
'activation': self.activation.__name__,
'bias': self.bias,
'input_dim': self.input_dim}
        base_config = super(GaussianMixtureDensity, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
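
# A minimal NumPy sketch of how the GaussianMixtureDensity output vector is
# laid out, assuming the concatenation order used in call() above: component
# means first, then positive scales (exp of the sigma logits), then mixing
# weights (softmax of the pi logits). Shapes are illustrative only.
def _gmd_output_unpack_sketch(output_dim=2, num_components=3):
    pred = np.random.rand(5, num_components * output_dim + 2 * num_components)
    mu = pred[:, :num_components * output_dim].reshape(5, num_components, output_dim)
    sigma = pred[:, num_components * output_dim:num_components * (output_dim + 1)]
    pi = pred[:, -num_components:]
    assert mu.shape == (5, 3, 2) and sigma.shape == (5, 3) and pi.shape == (5, 3)
    return mu, sigma, pi
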
class DenseDistance(Layer):
"""Just your regular densely-connected NN layer.
`Dense` implements the operation:
`output = activation(dot(input, kernel) + bias)`
where `activation` is the element-wise activation function
passed as the `activation` argument, `kernel` is a weights matrix
created by the layer, and `bias` is a bias vector created by the layer
(only applicable if `use_bias` is `True`).
Note: if the input to the layer has a rank greater than 2, then
it is flattened prior to the initial dot product with `kernel`.
# Example
```python
# as first layer in a sequential model:
model = Sequential()
model.add(Dense(32, input_shape=(16,)))
# now the model will take as input arrays of shape (*, 16)
# and output arrays of shape (*, 32)
# after the first layer, you don't need to specify
# the size of the input anymore:
model.add(Dense(32))
```
# Arguments
units: Positive integer, dimensionality of the output space.
activation: Activation function to use
(see [activations](../activations.md)).
If you don't specify anything, no activation is applied
(ie. "linear" activation: `a(x) = x`).
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: Initializer for the `kernel` weights matrix
(see [initializers](../initializers.md)).
bias_initializer: Initializer for the bias vector
(see [initializers](../initializers.md)).
kernel_regularizer: Regularizer function applied to
the `kernel` weights matrix
(see [regularizer](../regularizers.md)).
bias_regularizer: Regularizer function applied to the bias vector
(see [regularizer](../regularizers.md)).
activity_regularizer: Regularizer function applied to
the output of the layer (its "activation").
(see [regularizer](../regularizers.md)).
kernel_constraint: Constraint function applied to
the `kernel` weights matrix
(see [constraints](../constraints.md)).
bias_constraint: Constraint function applied to the bias vector
(see [constraints](../constraints.md)).
# Input shape
nD tensor with shape: `(batch_size, ..., input_dim)`.
The most common situation would be
a 2D input with shape `(batch_size, input_dim)`.
# Output shape
nD tensor with shape: `(batch_size, ..., units)`.
For instance, for a 2D input with shape `(batch_size, input_dim)`,
the output would have shape `(batch_size, units)`.
"""
def __init__(self, units,
activation=None,
L2square=False,
kernel_initializer='glorot_uniform',
kernel_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
metric='L2',
**kwargs):
if 'input_shape' not in kwargs and 'input_dim' in kwargs:
kwargs['input_shape'] = (kwargs.pop('input_dim'),)
super(DenseDistance, self).__init__(**kwargs)
self.units = units
self.activation = activations.get(activation)
self.kernel_initializer = initializers.get(kernel_initializer)
self.kernel_regularizer = regularizers.get(kernel_regularizer)
self.activity_regularizer = regularizers.get(activity_regularizer)
self.kernel_constraint = constraints.get(kernel_constraint)
self.metric = metric
self.L2square = L2square
self.input_spec = InputSpec(min_ndim=2)
self.supports_masking = True
def build(self, input_shape):
assert len(input_shape) >= 2
input_dim = input_shape[-1]
self.kernel = self.add_weight(shape=(input_dim, self.units),
initializer=self.kernel_initializer,
name='kernel',
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint)
self.input_spec = InputSpec(min_ndim=2, axes={-1: input_dim})
self.built = True
def call(self, inputs):
        if self.metric == 'L1':
            output = K.sum(K.abs(inputs[..., None] - self.kernel[None, ...]), axis=-2)
        elif self.L2square:
            output = K.sum(K.square(inputs[..., None] - self.kernel[None, ...]), axis=-2)
        else:
            output = K.sqrt(K.maximum(K.sum(K.square(inputs[..., None] - self.kernel[None, ...]), axis=-2), K.epsilon()))
        if self.activation is not None:
            output = self.activation(output)
        return output
def compute_output_shape(self, input_shape):
assert input_shape and len(input_shape) >= 2
assert input_shape[-1]
output_shape = list(input_shape)
output_shape[-1] = self.units
return tuple(output_shape)
def get_config(self):
config = {
'units': self.units,
'activation': activations.serialize(self.activation),
'kernel_initializer': initializers.serialize(self.kernel_initializer),
'kernel_regularizer': regularizers.serialize(self.kernel_regularizer),
'activity_regularizer': regularizers.serialize(self.activity_regularizer),
'kernel_constraint': constraints.serialize(self.kernel_constraint)
}
base_config = super(DenseDistance, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
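
# A minimal NumPy sketch of the DenseDistance computation: each column of the
# kernel is a centre in input space and each output unit is the (L1, L2, or
# squared L2) distance from the input to that centre.
def _densedistance_numpy_sketch():
    x = np.random.rand(5, 8)                          # (batch, input_dim)
    kernel = np.random.rand(8, 3)                     # one 8-d centre per output unit
    diff = x[:, :, None] - kernel[None, :, :]         # (batch, input_dim, units)
    l1 = np.abs(diff).sum(axis=1)
    l2 = np.sqrt(np.maximum((diff ** 2).sum(axis=1), 1e-7))
    assert l1.shape == (5, 3) and l2.shape == (5, 3)
    return l2
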
class Distance2RBF(Layer):
"""Just your regular densely-connected NN layer.
`Dense` implements the operation:
`output = activation(dot(input, kernel) + bias)`
where `activation` is the element-wise activation function
passed as the `activation` argument, `kernel` is a weights matrix
created by the layer, and `bias` is a bias vector created by the layer
(only applicable if `use_bias` is `True`).
Note: if the input to the layer has a rank greater than 2, then
it is flattened prior to the initial dot product with `kernel`.
# Example
```python
# as first layer in a sequential model:
model = Sequential()
model.add(Dense(32, input_shape=(16,)))
# now the model will take as input arrays of shape (*, 16)
# and output arrays of shape (*, 32)
# after the first layer, you don't need to specify
# the size of the input anymore:
model.add(Dense(32))
```
# Arguments
units: Positive integer, dimensionality of the output space.
activation: Activation function to use
(see [activations](../activations.md)).
If you don't specify anything, no activation is applied
(ie. "linear" activation: `a(x) = x`).
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: Initializer for the `kernel` weights matrix
(see [initializers](../initializers.md)).
bias_initializer: Initializer for the bias vector
(see [initializers](../initializers.md)).
kernel_regularizer: Regularizer function applied to
the `kernel` weights matrix
(see [regularizer](../regularizers.md)).
bias_regularizer: Regularizer function applied to the bias vector
(see [regularizer](../regularizers.md)).
activity_regularizer: Regularizer function applied to
the output of the layer (its "activation").
(see [regularizer](../regularizers.md)).
kernel_constraint: Constraint function applied to
the `kernel` weights matrix
(see [constraints](../constraints.md)).
bias_constraint: Constraint function applied to the bias vector
(see [constraints](../constraints.md)).
# Input shape
nD tensor with shape: `(batch_size, ..., input_dim)`.
The most common situation would be
a 2D input with shape `(batch_size, input_dim)`.
# Output shape
nD tensor with shape: `(batch_size, ..., units)`.
For instance, for a 2D input with shape `(batch_size, input_dim)`,
the output would have shape `(batch_size, units)`.
"""
def __init__(self, units,
             activation=None,
             kernel_initializer='ones',
             kernel_regularizer=None,
             activity_regularizer=None,
             kernel_constraint=None,
             metric='L2',
             L2square=False,
             **kwargs):
    # NOTE: activation, metric and L2square are referenced below but were missing
    # from the original signature; the defaults chosen here are assumptions.
    if 'input_shape' not in kwargs and 'input_dim' in kwargs:
        kwargs['input_shape'] = (kwargs.pop('input_dim'),)
    super(Distance2RBF, self).__init__(**kwargs)
    self.units = units
    self.activation = activations.get(activation)
    self.kernel_initializer = initializers.get(kernel_initializer)
    self.kernel_regularizer = regularizers.get(kernel_regularizer)
    self.activity_regularizer = regularizers.get(activity_regularizer)
    self.kernel_constraint = constraints.get(kernel_constraint)
    self.metric = metric
    self.L2square = L2square
    self.input_spec = InputSpec(min_ndim=2)
    self.supports_masking = True
def build(self, input_shape):
assert len(input_shape) >= 2
input_dim = input_shape[-1]
self.kernel = self.add_weight(shape=(input_dim, self.units),
initializer=self.kernel_initializer,
name='kernel',
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint)
self.input_spec = InputSpec(min_ndim=2, axes={-1: input_dim})
self.built = True
def call(self, inputs):
    # Distance between the input vector and each kernel column, per output unit.
    if self.metric == 'L1':
        output = K.sum(K.abs(inputs[..., None] - self.kernel[None, ...]), axis=-2)
    elif self.L2square:
        output = K.sum(K.square(inputs[..., None] - self.kernel[None, ...]), axis=-2)
    else:
        output = K.sqrt(K.maximum(K.sum(K.square(inputs[..., None] - self.kernel[None, ...]), axis=-2), K.epsilon()))
    if self.activation is not None:
        output = self.activation(output)
    return output
def compute_output_shape(self, input_shape):
assert input_shape and len(input_shape) >= 2
assert input_shape[-1]
output_shape = list(input_shape)
output_shape[-1] = self.units
return tuple(output_shape)
def get_config(self):
config = {
'units': self.units,
'activation': activations.serialize(self.activation),
'kernel_initializer': initializers.serialize(self.kernel_initializer),
'kernel_regularizer': regularizers.serialize(self.kernel_regularizer),
'activity_regularizer': regularizers.serialize(self.activity_regularizer),
'kernel_constraint': constraints.serialize(self.kernel_constraint)
}
base_config = super(Distance2RBF, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
class Distance(Layer):
"""Just your regular densely-connected NN layer.
`Dense` implements the operation:
`output = activation(dot(input, kernel) + bias)`
where `activation` is the element-wise activation function
passed as the `activation` argument, `kernel` is a weights matrix
created by the layer, and `bias` is a bias vector created by the layer
(only applicable if `use_bias` is `True`).
Note: if the input to the layer has a rank greater than 2, then
it is flattened prior to the initial dot product with `kernel`.
# Example
```python
# as first layer in a sequential model:
model = Sequential()
model.add(Dense(32, input_shape=(16,)))
# now the model will take as input arrays of shape (*, 16)
# and output arrays of shape (*, 32)
# after the first layer, you don't need to specify
# the size of the input anymore:
model.add(Dense(32))
```
# Arguments
units: Positive integer, dimensionality of the output space.
activation: Activation function to use
(see [activations](../activations.md)).
If you don't specify anything, no activation is applied
(ie. "linear" activation: `a(x) = x`).
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: Initializer for the `kernel` weights matrix
(see [initializers](../initializers.md)).
bias_initializer: Initializer for the bias vector
(see [initializers](../initializers.md)).
kernel_regularizer: Regularizer function applied to
the `kernel` weights matrix
(see [regularizer](../regularizers.md)).
bias_regularizer: Regularizer function applied to the bias vector
(see [regularizer](../regularizers.md)).
activity_regularizer: Regularizer function applied to
the output of the layer (its "activation").
(see [regularizer](../regularizers.md)).
kernel_constraint: Constraint function applied to
the `kernel` weights matrix
(see [constraints](../constraints.md)).
bias_constraint: Constraint function applied to the bias vector
(see [constraints](../constraints.md)).
# Input shape
nD tensor with shape: `(batch_size, ..., input_dim)`.
The most common situation would be
a 2D input with shape `(batch_size, input_dim)`.
# Output shape
nD tensor with shape: `(batch_size, ..., units)`.
For instance, for a 2D input with shape `(batch_size, input_dim)`,
the output would have shape `(batch_size, units)`.
"""
def __init__(self,
metric='L2',
**kwargs):
if 'input_shape' not in kwargs and 'input_dim' in kwargs:
kwargs['input_shape'] = (kwargs.pop('input_dim'),)
super(Distance, self).__init__(**kwargs)
self.metric = metric
self.input_spec = InputSpec(ndim=3)
self.supports_masking = True
def build(self, input_shape):
assert len(input_shape) == 3
self.stack_size = input_shape[-2]
self.tril = np.nonzero(np.tri(input_shape[-2], input_shape[-2], -1).ravel())[0]
self.built = True
def call(self, inputs):
    if self.metric == 'L1':
        out = K.sum(K.abs(inputs[..., None] - K.permute_dimensions(inputs, (0, 2, 1))[:, None, ...]), axis=-2)
    elif self.metric == 'L2':
        out = K.sqrt(K.maximum(K.sum(K.square(inputs[..., None] - K.permute_dimensions(inputs, (0, 2, 1))[:, None, ...]), axis=-2), K.epsilon()))
    else:
        raise ValueError("Unsupported metric: %s" % self.metric)
    # Keep only the strictly lower-triangular entries of the pairwise-distance matrix.
    out = K.permute_dimensions(K.gather(K.permute_dimensions(K.reshape(out, [-1, self.stack_size**2]), [1, 0]), self.tril), [1, 0])
    return out
def compute_output_shape(self, input_shape):
return (input_shape[0], len(self.tril))
def get_config(self):
config = {
'metric': self.metric,
}
base_config = super(Distance, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
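# A minimal usage sketch for Distance (illustrative; assumes this module's Keras imports):
# the layer takes a 3D tensor of shape (batch, n_points, n_features) and returns, per
# sample, the strictly lower-triangular entries of the n_points x n_points pairwise
# distance matrix, i.e. n_points * (n_points - 1) / 2 values.
#
#     model = Sequential()
#     model.add(Distance(metric='L2', input_shape=(10, 3)))
#     # (batch, 10, 3) inputs -> (batch, 45) pairwise distances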
class GatedMultiply(Layer):
"""Just your regular densely-connected NN layer.
`Dense` implements the operation:
`output = activation(dot(input, kernel) + bias)`
where `activation` is the element-wise activation function
passed as the `activation` argument, `kernel` is a weights matrix
created by the layer, and `bias` is a bias vector created by the layer
(only applicable if `use_bias` is `True`).
Note: if the input to the layer has a rank greater than 2, then
it is flattened prior to the initial dot product with `kernel`.
# Example
```python
# as first layer in a sequential model:
model = Sequential()
model.add(Dense(32, input_shape=(16,)))
# now the model will take as input arrays of shape (*, 16)
# and output arrays of shape (*, 32)
# after the first layer, you don't need to specify
# the size of the input anymore:
model.add(Dense(32))
```
# Arguments
units: Positive integer, dimensionality of the output space.
activation: Activation function to use
(see [activations](../activations.md)).
If you don't specify anything, no activation is applied
(ie. "linear" activation: `a(x) = x`).
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: Initializer for the `kernel` weights matrix
(see [initializers](../initializers.md)).
bias_initializer: Initializer for the bias vector
(see [initializers](../initializers.md)).
kernel_regularizer: Regularizer function applied to
the `kernel` weights matrix
(see [regularizer](../regularizers.md)).
bias_regularizer: Regularizer function applied to the bias vector
(see [regularizer](../regularizers.md)).
activity_regularizer: Regularizer function applied to
the output of the layer (its "activation").
(see [regularizer](../regularizers.md)).
kernel_constraint: Constraint function applied to
the `kernel` weights matrix
(see [constraints](../constraints.md)).
bias_constraint: Constraint function applied to the bias vector
(see [constraints](../constraints.md)).
# Input shape
nD tensor with shape: `(batch_size, ..., input_dim)`.
The most common situation would be
a 2D input with shape `(batch_size, input_dim)`.
# Output shape
nD tensor with shape: `(batch_size, ..., units)`.
For instance, for a 2D input with shape `(batch_size, input_dim)`,
the output would have shape `(batch_size, units)`.
"""
def __init__(self, units,
kernel_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
**kwargs):
if 'input_shape' not in kwargs and 'input_dim' in kwargs:
kwargs['input_shape'] = (kwargs.pop('input_dim'),)
super(GatedMultiply, self).__init__(**kwargs)
self.units = units
self.kernel_regularizer = regularizers.get(kernel_regularizer)
self.activity_regularizer = regularizers.get(activity_regularizer)
self.kernel_constraint = constraints.get(kernel_constraint)
self.kernel_initializer = initializers.uniform(-.1, 0)
self.input_spec = InputSpec(min_ndim=2)
self.supports_masking = True
def build(self, input_shape):
assert len(input_shape) >= 2
input_dim = input_shape[-1]
self.kernel = self.add_weight(shape=(input_dim, self.units),
initializer=self.kernel_initializer,
name='kernel',
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint)
self.input_spec = InputSpec(min_ndim=2, axes={-1: input_dim})
self.built = True
def call(self, inputs):
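# Soft multiplicative gating: sigmoid(100 * kernel) saturates each gate towards 0 or 1,
# so exp(dot(log(x + 1e-5), gate)) approximates, for every output unit, the product of a
# learned subset of the (epsilon-shifted) inputs.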
output = K.exp(K.dot(K.log(inputs + 1e-5), K.sigmoid(100.*self.kernel)))
return output
def compute_output_shape(self, input_shape):
assert input_shape and len(input_shape) >= 2
assert input_shape[-1]
output_shape = list(input_shape)
output_shape[-1] = self.units
return tuple(output_shape)
def get_config(self):
config = {
'units': self.units,
'kernel_initializer': initializers.serialize(self.kernel_initializer),
'kernel_regularizer': regularizers.serialize(self.kernel_regularizer),
'activity_regularizer': regularizers.serialize(self.activity_regularizer),
'kernel_constraint': constraints.serialize(self.kernel_constraint)
}
base_config = super(GatedMultiply, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
| 45.841167 | 165 | 0.595731 | 9,985 | 84,852 | 4.911267 | 0.039659 | 0.032627 | 0.01301 | 0.007627 | 0.943229 | 0.931952 | 0.926344 | 0.917372 | 0.90636 | 0.904382 | 0 | 0.011273 | 0.306876 | 84,852 | 1,850 | 166 | 45.865946 | 0.822539 | 0.375548 | 0 | 0.802266 | 0 | 0 | 0.040962 | 0 | 0 | 0 | 0 | 0 | 0.030896 | 1 | 0.066941 | false | 0 | 0.019567 | 0.00206 | 0.146241 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |
02a476aec07036b4272e40c46ee5c5187723c539 | 3,878 | py | Python | dev/python/2019-06-08 rms of real data.py | konung-yaropolk/pyABF | b5620e73ac5d060129b844da44f8b2611536ac56 | [
"MIT"
] | 74 | 2017-11-06T17:53:48.000Z | 2022-03-27T12:14:46.000Z | dev/python/2019-06-08 rms of real data.py | konung-yaropolk/pyABF | b5620e73ac5d060129b844da44f8b2611536ac56 | [
"MIT"
] | 116 | 2018-01-16T21:36:29.000Z | 2022-03-31T11:46:04.000Z | dev/python/2019-06-08 rms of real data.py | konung-yaropolk/pyABF | b5620e73ac5d060129b844da44f8b2611536ac56 | [
"MIT"
] | 30 | 2018-06-28T13:19:53.000Z | 2022-03-25T02:52:48.000Z | abfFilePathsA=R"""
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_14_DIC1_0000.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_14_DIC2_0000.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_14_DIC2_0004.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_14_DIC2_0011.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_14_DIC2_0015.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_14_DIC2_0020.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_21_DIC2_0000.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_21_DIC2_0004.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_21_DIC2_0007.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_23_DIC2_0000.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_23_DIC2_0003.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_23_DIC2_0006.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_23_DIC2_0015.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_31_DIC1_0005.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_31_DIC1_0002.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_31_DIC1_0008.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_31_DIC1_0011.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_31_DIC2_0000.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_31_DIC2_0003.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_31_DIC2_0006.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_21_DIC1_0000.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_21_DIC1_0006.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_21_DIC1_0009.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_23_DIC1_0000.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_23_DIC1_0003.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_23_DIC1_0006.abf
""".strip().split("\n")
abfFilePathsB=R"""
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_16_DIC2_0000.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_16_DIC2_0003.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_16_DIC2_0006.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_16_DIC2_0009.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_17_DIC2_0000.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_17_DIC2_0003.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_17_DIC2_0006.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_17_DIC1_0002.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_17_DIC2_0009.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_17_DIC1_0008.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_17_DIC2_0014.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_22_DIC2_0000.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_22_DIC2_0003.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_22_DIC2_0006.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_22_DIC2_0009.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_22_DIC1_0000.abf
X:\Data\F344\Aging Hipp\E-I-balance\2019_05_22_DIC1_0003.abf
""".strip().split("\n")
abfFilePaths = sorted(abfFilePathsA + abfFilePathsB)
import os
import sys
PATH_HERE = os.path.abspath(os.path.dirname(__file__))
PATH_DATA = os.path.abspath(PATH_HERE+"../../../data/abfs/")
PATH_SRC = os.path.abspath(PATH_HERE+"../../../src/")
sys.path.insert(0, PATH_SRC)
import matplotlib.pyplot as plt
import numpy as np
import pyabf
if __name__=="__main__":
plt.figure(figsize = (8, 6))
plt.ylabel("RMS Noise (pA)")
plt.xlabel("ABF ID")
plt.title("RMS Noise (20 percentile of all sweeps)")
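# For each ABF file, estimate RMS noise as the standard deviation of the pre-epoch
# (baseline) samples of every sweep, then take the 20th percentile across sweeps as a
# robust per-file value; files with "DIC1" in the ID are plotted in red, others in blue.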
for abfNumber, abfPath in enumerate(abfFilePaths):
print(abfPath)
abf = pyabf.ABF(abfPath)
abfRmsBySweep = []
for sweepNumber in abf.sweepList:
abf.setSweep(sweepNumber)
snip = abf.sweepY[:abf.sweepEpochs.p2s[0]] # pre-epoch
abfRmsBySweep.append(np.std(snip))
abfRms = np.percentile(abfRmsBySweep, 20)
print("%s.abf RMS = %.04f pA" %(abf.abfID, abfRms))
if "DIC1" in abf.abfID:
color = "r"
else:
color = "b"
plt.plot(abfNumber, abfRms, '.-', ms = 20, color = color)
plt.axis([None, None, 0, None])
plt.show() | 45.093023 | 66 | 0.752965 | 764 | 3,878 | 3.573298 | 0.14267 | 0.078755 | 0.141758 | 0.220513 | 0.72967 | 0.714286 | 0.714286 | 0.714286 | 0.714286 | 0.714286 | 0 | 0.2 | 0.093605 | 3,878 | 86 | 67 | 45.093023 | 0.576671 | 0.002321 | 0 | 0.025316 | 0 | 0 | 0.712771 | 0.455791 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | false | 0 | 0.063291 | 0 | 0.063291 | 0.025316 | 0 | 0 | 0 | null | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 8 |
02add4ef4a42a5cead6e2055d16e491b94cf96f6 | 148 | py | Python | tornado_sqlalchemy_login/tests/test_all.py | timkpaine/tornado_sqlalchemy_login | aa574b71e39c6594d86d1022b870f350f535e861 | [
"Apache-2.0"
] | 1 | 2021-02-16T23:16:55.000Z | 2021-02-16T23:16:55.000Z | tornado_sqlalchemy_login/tests/test_all.py | timkpaine/tornado_sqlalchemy_login | aa574b71e39c6594d86d1022b870f350f535e861 | [
"Apache-2.0"
] | 8 | 2019-12-30T23:59:20.000Z | 2022-02-25T00:03:47.000Z | tornado_sqlalchemy_login/tests/test_all.py | timkpaine/tornado_sqlalchemy_login | aa574b71e39c6594d86d1022b870f350f535e861 | [
"Apache-2.0"
] | null | null | null | # accurate coverage
from tornado_sqlalchemy_login.utils import *
from tornado_sqlalchemy_login.sqla import *
from tornado_sqlalchemy_login import *
| 29.6 | 44 | 0.858108 | 19 | 148 | 6.368421 | 0.473684 | 0.272727 | 0.520661 | 0.644628 | 0.528926 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0.101351 | 148 | 4 | 45 | 37 | 0.909774 | 0.114865 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | true | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | null | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 8 |
02af226d2dd7aee3726d2a165a6b3b4c60bad982 | 5,400 | py | Python | tests/common/test_merkle_tree.py | MikitaSaladukha/my-blockchain | c09091762dc559d41b8aa29fbe8267aff834a57c | [
"Apache-2.0"
] | 4 | 2021-11-14T17:16:03.000Z | 2022-03-17T21:01:42.000Z | tests/common/test_merkle_tree.py | MikitaSaladukha/my-blockchain | c09091762dc559d41b8aa29fbe8267aff834a57c | [
"Apache-2.0"
] | null | null | null | tests/common/test_merkle_tree.py | MikitaSaladukha/my-blockchain | c09091762dc559d41b8aa29fbe8267aff834a57c | [
"Apache-2.0"
] | 5 | 2021-07-30T14:27:37.000Z | 2021-12-15T12:08:46.000Z | from common.merkle_tree import build_merkle_tree
from common.utils import calculate_hash
def test_given_1_leaf_when_build_merkle_tree_then_leafs_hash_is_computed_correctly():
l1 = "blabla data 0"
merkle_tree = build_merkle_tree([l1])
assert merkle_tree.value == calculate_hash(l1)
def test_given_2_leaves_when_build_merkle_tree_then_all_leaves_hashes_are_computed_correctly():
l1 = "blabla data 0"
l2 = "blabla data 1"
merkle_tree = build_merkle_tree([l1, l2])
assert merkle_tree.left_child.value == calculate_hash(l1)
assert merkle_tree.right_child.value == calculate_hash(l2)
def test_given_2_leaves_when_build_merkle_tree_then_root_hash_is_computed_correctly():
l1 = "blabla data 0"
l2 = "blabla data 1"
merkle_tree = build_merkle_tree([l1, l2])
assert merkle_tree.value == calculate_hash(calculate_hash(l1) + calculate_hash(l2))
def test_given_4_leaves_when_build_merkle_tree_then_all_leaves_hashes_are_computed_correctly():
l1 = "blabla data 0"
l2 = "blabla data 1"
l3 = "blabla data 2"
l4 = "blabla data 3"
merkle_tree = build_merkle_tree([l1, l2, l3, l4])
assert merkle_tree.left_child.left_child.value == calculate_hash(l1)
assert merkle_tree.left_child.right_child.value == calculate_hash(l2)
assert merkle_tree.right_child.left_child.value == calculate_hash(l3)
assert merkle_tree.right_child.right_child.value == calculate_hash(l4)
def test_given_4_leaves_when_build_merkle_tree_then_middle_childs_are_computed_correctly():
l1 = "blabla data 0"
l2 = "blabla data 1"
l3 = "blabla data 2"
l4 = "blabla data 3"
merkle_tree = build_merkle_tree([l1, l2, l3, l4])
assert merkle_tree.left_child.value == calculate_hash(calculate_hash(l1)+calculate_hash(l2))
assert merkle_tree.right_child.value == calculate_hash(calculate_hash(l3)+calculate_hash(l4))
def test_given_4_leaves_when_build_merkle_tree_then_root_is_computed_correctly():
l1 = "blabla data 0"
l2 = "blabla data 1"
l3 = "blabla data 2"
l4 = "blabla data 3"
merkle_tree = build_merkle_tree([l1, l2, l3, l4])
assert merkle_tree.value == calculate_hash(calculate_hash(calculate_hash(l1)+calculate_hash(l2))+calculate_hash(calculate_hash(l3)+calculate_hash(l4)))
def test_given_6_leaves_when_build_merkle_tree_then_all_leaves_hashes_are_computed_correctly():
l1 = "blabla data 0"
l2 = "blabla data 1"
l3 = "blabla data 2"
l4 = "blabla data 3"
l5 = "blabla data 4"
l6 = "blabla data 5"
merkle_tree = build_merkle_tree([l1, l2, l3, l4, l5, l6])
assert merkle_tree.left_child.left_child.left_child.value == calculate_hash(l1)
assert merkle_tree.left_child.left_child.right_child.value == calculate_hash(l2)
assert merkle_tree.left_child.right_child.left_child.value == calculate_hash(l3)
assert merkle_tree.left_child.right_child.right_child.value == calculate_hash(l4)
assert merkle_tree.right_child.left_child.left_child.value == calculate_hash(l5)
assert merkle_tree.right_child.left_child.right_child.value == calculate_hash(l6)
assert merkle_tree.right_child.right_child.left_child.value == calculate_hash(l5)
assert merkle_tree.right_child.right_child.right_child.value == calculate_hash(l6)
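# With six leaves the tree is padded out to eight leaf positions: the final (l5, l6)
# pair is reused to fill the missing right-hand subtree, which is why the right-most
# asserts above repeat calculate_hash(l5) and calculate_hash(l6).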
def test_given_6_leaves_when_build_merkle_tree_then_root_is_computed_correctly():
l1 = "blabla data 0"
l2 = "blabla data 1"
l3 = "blabla data 2"
l4 = "blabla data 3"
l5 = "blabla data 4"
l6 = "blabla data 5"
merkle_tree = build_merkle_tree([l1, l2, l3, l4, l5, l6])
assert merkle_tree.value == calculate_hash(calculate_hash(calculate_hash(calculate_hash(l1)+calculate_hash(l2))+calculate_hash(calculate_hash(l3)+calculate_hash(l4))) + calculate_hash(calculate_hash(calculate_hash(l5)+calculate_hash(l6))+calculate_hash(calculate_hash(l5)+calculate_hash(l6))))
def test_given_5_leaves_when_build_merkle_tree_then_all_leaves_hashes_are_computed_correctly():
l1 = "blabla data 0"
l2 = "blabla data 1"
l3 = "blabla data 2"
l4 = "blabla data 3"
l5 = "blabla data 4"
merkle_tree = build_merkle_tree([l1, l2, l3, l4, l5])
assert merkle_tree.left_child.left_child.left_child.value == calculate_hash(l1)
assert merkle_tree.left_child.left_child.right_child.value == calculate_hash(l2)
assert merkle_tree.left_child.right_child.left_child.value == calculate_hash(l3)
assert merkle_tree.left_child.right_child.right_child.value == calculate_hash(l4)
assert merkle_tree.right_child.left_child.left_child.value == calculate_hash(l5)
assert merkle_tree.right_child.left_child.right_child.value == calculate_hash(l5)
assert merkle_tree.right_child.right_child.left_child.value == calculate_hash(l5)
assert merkle_tree.right_child.right_child.right_child.value == calculate_hash(l5)
def test_given_5_leaves_when_build_merkle_tree_then_root_is_computed_correctly():
l1 = "blabla data 0"
l2 = "blabla data 1"
l3 = "blabla data 2"
l4 = "blabla data 3"
l5 = "blabla data 4"
merkle_tree = build_merkle_tree([l1, l2, l3, l4, l5])
assert merkle_tree.value == calculate_hash(calculate_hash(calculate_hash(calculate_hash(l1)+calculate_hash(l2))+calculate_hash(calculate_hash(l3)+calculate_hash(l4))) + calculate_hash(calculate_hash(calculate_hash(l5)+calculate_hash(l5))+calculate_hash(calculate_hash(l5)+calculate_hash(l5))))
| 44.628099 | 297 | 0.763148 | 833 | 5,400 | 4.559424 | 0.055222 | 0.2396 | 0.12217 | 0.14534 | 0.970774 | 0.964718 | 0.948131 | 0.945498 | 0.889152 | 0.852027 | 0 | 0.037817 | 0.138148 | 5,400 | 120 | 298 | 45 | 0.778255 | 0 | 0 | 0.666667 | 0 | 0 | 0.093889 | 0 | 0 | 0 | 0 | 0 | 0.322222 | 1 | 0.111111 | false | 0 | 0.022222 | 0 | 0.133333 | 0 | 0 | 0 | 0 | null | 1 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7 |